Getting error while converting to TensorRT #13109
👋 Hello @Amiya-Lahiri-AI, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install: pip install the ultralytics package with pip install ultralytics.

Environments: YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

Status: if this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
When I get errors like that from TensorRT, it is typically because some layer is not quantizable. YOLOv9 might have added a layer that is not quantizable. What TensorRT version are you on?
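To answer that question without guessing, the installed TensorRT wheel version can be read from package metadata. This is a sketch that checks the two pip distribution names TensorRT has commonly shipped under; the exact name in any given environment is an assumption:

```python
from importlib.metadata import version, PackageNotFoundError

def trt_version():
    """Return the installed TensorRT wheel version string, or None if absent."""
    # TensorRT wheels have shipped under both of these pip names at various times.
    for name in ("tensorrt", "nvidia-tensorrt"):
        try:
            return version(name)
        except PackageNotFoundError:
            continue
    return None

print(trt_version())
```

In the environment described in this issue this would print 8.4.3.1.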
It looks like you're encountering a compatibility issue between TensorRT and the model's architecture. Could you confirm the GPU model you're using? Also, updating to the latest TensorRT version might help if your GPU is relatively new. This can often resolve issues with unsupported layers or features in newer models like YOLOv9.
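A small helper can make the "is my TensorRT too old?" check concrete by comparing dotted version strings. The 8.6 cutoff below is an assumption chosen for illustration, not a documented requirement:

```python
def version_tuple(v):
    """Parse a dotted version string like '8.4.3.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

installed = "8.4.3.1"   # version reported in this issue
assumed_min = "8.6.0"   # assumed cutoff for newer-op support, not official

needs_upgrade = version_tuple(installed) < version_tuple(assumed_min)
print(needs_upgrade)  # True: 8.4.3.1 predates the assumed 8.6 minimum
```

Tuples compare element-wise, so (8, 4, 3, 1) < (8, 6, 0) evaluates the way a human would read the versions.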
@glenn-jocher it turns out the problem was with the instance I was using for inference.
Glad to hear you resolved the issue by switching instances! If you have any more questions or run into other issues, feel free to reach out. Happy coding! 🚀
Search before asking
Question
I am getting an error while trying to convert the YOLOv9e model to TensorRT.

Python version: 3.10.13
torch: 2.2.0
nvidia-tensorrt: 8.4.3.1
CUDA: 12.4
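For context, a conversion like the one described is typically invoked through the ultralytics export API. This is a guarded sketch, not the poster's exact script: the yolov9e.pt checkpoint name and the availability of a CUDA GPU are assumptions, and the function simply returns None when the environment cannot run the export:

```python
def export_to_tensorrt(weights="yolov9e.pt"):
    """Attempt a TensorRT export; return the engine path, or None if the
    environment lacks ultralytics, torch, or a CUDA device."""
    try:
        from ultralytics import YOLO
        import torch
    except ImportError:
        return None  # required packages not installed in this environment
    if not torch.cuda.is_available():
        return None  # TensorRT export requires a CUDA GPU
    model = YOLO(weights)  # assumed checkpoint; downloaded/loaded by ultralytics
    # format="engine" asks ultralytics to build a TensorRT .engine file
    return model.export(format="engine", device=0)

print(export_to_tensorrt())
```

If the export fails inside TensorRT (as in this issue), the error surfaces from the model.export call, which is why the TensorRT version and GPU model matter for debugging.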
Additional
No response