
how to inference on torchscript #80

Closed
nobody-cheng opened this issue Mar 16, 2021 · 4 comments

Labels
question  Further information is requested

Comments

@nobody-cheng

How do I run inference with the exported TorchScript model?

nobody-cheng added the question label on Mar 16, 2021
@zhiqwang (Owner) commented Mar 16, 2021

Hi @nobody-cheng ,

Are you interested in the Python side or the C++ side?

If you want to use Python, you can check this notebook; if you are interested in the C++ backend, you can check the unit test. We also provide a C++ example of how to run inference with the exported TorchScript model, but that example is a little outdated.

BTW, upstream PyTorch has recently refactored its TorchScript interfaces, and I've only updated the unit test to PyTorch 1.8. The example in deployment only works with PyTorch 1.7.x; I will update it to PyTorch 1.8 soon.
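For quick reference, running an exported TorchScript model on the Python side is just torch.jit.load plus a forward pass. A minimal sketch (the model path and the 640x640 input size are illustrative, not the notebook's exact code):

import torch

# Load the serialized TorchScript model; the path here is an example.
model = torch.jit.load('./checkpoints/yolov5/yolov5s.torchscript.pt')
model.eval()

# Dummy NCHW input; replace with a real preprocessed image tensor.
x = torch.rand(1, 3, 640, 640)
with torch.no_grad():
    out = model(x)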

@nobody-cheng (Author) commented Mar 17, 2021


# TorchScript export
print(f'Starting TorchScript export with torch {torch.__version__}...')
export_script_name = './checkpoints/yolov5/yolov5s.torchscript.pt'  # <-- this variable is never used

model_script = torch.jit.script(model)
model_script.eval()
model_script = model_script.to(device)

x = img[None]  # add a batch dimension
out = model(x)
out_script = model_script(x)

The export_script_name variable is assigned but never used. What is it for?

@zhiqwang (Owner) commented Mar 17, 2021

Hi @nobody-cheng ,

This is reserved for the path of the TorchScript model to be generated. Feel free to delete this line! If you want to save the generated model, you can do something like the following:

model_script.save(export_script_name)
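Put together, a minimal save-and-reload round trip, assuming model_script, export_script_name, and x from the snippet above (a sketch, not the repo's exact workflow):

# Serialize the scripted model to disk.
model_script.save(export_script_name)

# Later, e.g. in a fresh process, reload it for inference.
reloaded = torch.jit.load(export_script_name)
reloaded.eval()
out = reloaded(x)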

@nobody-cheng (Author) commented Mar 17, 2021


Thanks! With the ultralytics/yolov5 code:

model.model[-1].export = False  # set
python models/export.py --weights ./runs/train/exp5/weights/best.pt --img 640 --batch 1

It works.

zhiqwang changed the title from "how inference torchscript" to "how to inference on torchscript" on Aug 25, 2021