Search before asking
I have searched the Inference issues and found no similar feature requests.
Question
I found that only models from Roboflow can be used. Is it possible to load and use a model that was trained locally by the user? If so, how can it be done?
Lastly, is it also possible to use a model that has been converted to ONNX format?
Thank you.
Additional
No response
You have two options:

1. Upload your locally trained model to your Roboflow account and then reference it when running inference. With this option you can use the model in the Workflows builder, and it is available on all of your devices. (A sketch follows this list.)
2. Create a private custom block that loads your local model. The block can be made available in your local Workflows builder, but you then have to distribute the model files to each of your devices yourself. (A second sketch follows.)
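For the first option, here is a minimal sketch using the `inference` Python package. The model id `my-project/1` is a placeholder; use the id Roboflow assigns to your uploaded model:

```python
# Minimal sketch for option 1: run a model hosted on your Roboflow account.
# "my-project/1" is a hypothetical model id; replace it with the id shown
# for your uploaded model in the Roboflow dashboard.
import os

from inference import get_model

model = get_model(
    model_id="my-project/1",  # placeholder model id
    api_key=os.environ["ROBOFLOW_API_KEY"],
)

# infer() accepts an image path, URL, numpy array, or PIL image.
results = model.infer("example.jpg")
print(results)
```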
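For the second option, which also covers your ONNX question: inside a custom block you would typically wrap an `onnxruntime` session around your exported model. The block interface itself is documented in the Inference docs; the sketch below only shows the model-loading core, assuming a local `model.onnx` with a single NCHW float32 image input (file name and input shape are assumptions):

```python
# Minimal sketch for option 2: load a locally trained model that was
# converted to ONNX. This is just the inference core you would wrap
# inside a custom Workflows block.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")  # path to your local model

input_name = session.get_inputs()[0].name

# Dummy input; replace with your preprocessed image tensor
# (assumed here to be 1x3x640x640 float32).
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)

outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```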