
Onnx failure of TensorRTv8502.Y when running trtexec --onnx=yolo.onn --saveEngine=yolo.engine on Agx orin 64G #3070

Closed
powerdoudou opened this issue Jun 16, 2023 · 9 comments
Assignees
Labels
triaged Issue has been triaged by maintainers

Comments

@powerdoudou

Unknown embeded device detected, Using 59655Mib as the allocation cap for memory on embedded devices
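(Editorial note: one common way to sidestep the auto-detected allocation cap is to set the builder workspace pool explicitly. A hedged sketch, assuming the `--memPoolSize` flag available in trtexec from TensorRT 8.4 onward; the filename and the 4096 MiB value are illustrative, not from this issue:)

```shell
# Cap the builder workspace explicitly instead of relying on the
# auto-detected embedded-device allocation cap.
# (yolo.onnx and the 4096 MiB size are placeholder values.)
trtexec --onnx=yolo.onnx \
        --saveEngine=yolo.engine \
        --memPoolSize=workspace:4096MiB
```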

@zerollzeng
Collaborator

Looks like a new device config that is not supported by TensorRT. Will check this internally.

@powerdoudou
Author

> Looks like a new device config that is not supported by TensorRT. Will check this internally.

Thank you zerozero, have a nice day~

@zerollzeng
Collaborator

This should be fixed in TRT 8.6, but I don't know when the corresponding JetPack will be released.

@powerdoudou
Author

> This should be fixed in TRT 8.6, but I don't know when the corresponding JetPack will be released.

This issue just seems a bit annoying but does not affect usage. Thank you zerozero ~

@shreejalt

Hi @powerdoudou, were you able to successfully convert the model to TRT even after this warning?
I am not able to; it keeps printing this warning and never finishes converting the model.
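(Editorial note: when a conversion appears stuck like this, capturing a verbose build log helps distinguish a slow-but-progressing build from a genuine loop. A sketch, assuming the standard trtexec `--verbose` flag; the filename and log path are placeholders:)

```shell
# Re-run the conversion with verbose logging and save the output,
# so it is possible to tell whether the build is progressing slowly
# (new tactic-timing messages keep appearing) or actually looping.
trtexec --onnx=yolo.onnx \
        --saveEngine=yolo.engine \
        --verbose 2>&1 | tee trtexec_build.log
```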

@kneatco

kneatco commented Apr 18, 2024

@powerdoudou @zerollzeng Hi all, is there any confirmation that TRT 8.6 actually resolved this error? I would also like to emphasize @shreejalt's question if possible. Right now, on my Jetson Orin AGX 64GB, I am also seeing this error repeat in a loop and the build does not complete. I am not sure whether the build will just take a very long time because of the memory cap, or whether it is stuck in this warning loop forever.

@zerollzeng
Collaborator

Does this happen on the latest JetPack release?

@shreejalt

shreejalt commented Apr 27, 2024

@kneatco I know that the warning is annoying, but I was able to convert models like RT-DETR, YOLOv6, and all the models in mmdetection; it just takes a somewhat long time. But whenever I try to convert YOLOv8 using ultralytics, it goes into an infinite loop. I am using JetPack 5.1.3 (@zerollzeng). I tried JetPack 6, but it does not support (or at least it didn't back then) the paddle2onnx-gpu version, so I have to use JetPack 5.1.3.

@zerollzeng
Collaborator

> I am using JetPack 5.1.3 (@zerollzeng). I tried JetPack 6, but it does not support (or at least it didn't back then) the paddle2onnx-gpu version, so I have to use JetPack 5.1.3.

You can export the ONNX in JP 5.1 and do the TRT conversion in JP 6. If it fails, it would be great if you could share the ONNX model here so we can take a further look.
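(Editorial note: the two-stage workflow suggested above can be sketched as follows. This is a hedged sketch: `yolo export` is the ultralytics CLI for YOLOv8 ONNX export, while the model name, hostname, and paths are all illustrative placeholders:)

```shell
# On the JetPack 5.1 machine: export YOLOv8 to ONNX.
# ('yolov8n.pt' and opset 12 are placeholder choices.)
yolo export model=yolov8n.pt format=onnx opset=12

# Copy the resulting ONNX model to the JetPack 6 device.
# (Hostname and destination path are placeholders.)
scp yolov8n.onnx user@jetson-jp6:/tmp/

# On the JetPack 6 device: build the TensorRT engine with the newer TRT.
trtexec --onnx=/tmp/yolov8n.onnx --saveEngine=yolov8n.engine
```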
