How to run inference on a TorchScript model
#80
Comments
Hi @nobody-cheng, are you interested in the Python side or the C++ side? If you want to use Python, you can check this notebook; if you are interested in the C++ backend, you can check the unit test. We also provide a C++ example of how to run inference with the exported TorchScript model, but that example is a little outdated. BTW, upstream PyTorch has recently refactored its TorchScript interfaces; I have only updated the unit test to PyTorch 1.8. The example in deployment only works with PyTorch 1.7.x. I will update the deployment example to PyTorch 1.8 soon.
```python
# TorchScript export
print(f'Starting TorchScript export with torch {torch.__version__}...')
export_script_name = './checkpoints/yolov5/yolov5s.torchscript.pt'  # <- this variable is never used
model_script = torch.jit.script(model)
model_script.eval()
model_script = model_script.to(device)
x = img[None]
out = model(x)
out_script = model_script(x)
```

The `export_script_name` variable is never used.
Hi @nobody-cheng, that variable is reserved for the path of the TorchScript model to be generated, so feel free to delete this line! If you want to save the generated model, you can do something like `model_script.save(export_script_name)`.
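To make the round trip concrete, here is a minimal, self-contained sketch of scripting a model, saving it with `save`, reloading it with `torch.jit.load`, and running inference. It uses a tiny toy module and a hypothetical file path instead of the YOLOv5 model from this thread:

```python
import torch
import torch.nn as nn

# Toy module standing in for the YOLOv5 model discussed above.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x).relu()

model = TinyModel().eval()
model_script = torch.jit.script(model)

# Save the scripted model to disk, then load it back for inference.
export_script_name = 'tiny.torchscript.pt'  # hypothetical path, not from the repo
model_script.save(export_script_name)

loaded = torch.jit.load(export_script_name)
loaded.eval()

x = torch.rand(1, 3, 32, 32)
with torch.no_grad():
    out = loaded(x)
print(out.shape)  # torch.Size([1, 8, 32, 32])
```

The same `torch.jit.load` call works on any `.torchscript.pt` file produced by `torch.jit.script` or `torch.jit.trace`, which is what the export snippet above generates.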
Thanks, setting `model.model[-1].export = False` works.