Slow inference on Jetson TX2 #23
Comments
Which demo are you referring to? Is it the SSD one? In addition, please also specify which version of JetPack and TensorFlow you are using.
I am referring to the SSD one. I am using TensorFlow 1.14 and JetPack 4.2 with TensorRT 5. I have tried trt_ssd_async.py for inference and managed to get 25 FPS, but I think this is slow for an optimized model on a Jetson TX2.
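For reference, frame rate claims like this are easiest to compare when measured the same way. Below is a minimal sketch of per-frame throughput measurement; `run_inference` is a hypothetical stand-in for the TensorRT detector call, not the actual API of trt_ssd_async.py:

```python
import time

def measure_fps(run_inference, frames, warmup=5):
    """Return average FPS over `frames`, excluding warm-up iterations.

    run_inference: callable taking one frame (hypothetical stand-in for
    a TensorRT detector's per-frame inference call).
    """
    # Warm-up iterations so one-time CUDA/TensorRT setup cost
    # does not skew the measurement.
    for frame in frames[:warmup]:
        run_inference(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        run_inference(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed
```

Measuring this way (steady state, many frames, warm-up excluded) avoids counting engine deserialization or first-inference overhead in the FPS number.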
It indeed seems too slow. Unfortunately, I don't have a TX2 to verify that currently. Did you notice any suspicious warnings or errors when you built the TensorRT engine and ran inference?
It seems the conversion went smoothly, and it does improve performance by almost 50%. I am really disappointed with the MobileNetV2 model; I thought it would do better than YOLOv3-tiny. Do you have any idea why the MobileNetV2 model would not perform in line with the numbers published by TensorFlow? I have also tested MobileNetV3, and it performs similarly to MobileNetV2 (14 FPS).
Please check out this discussion on StackOverflow: https://stackoverflow.com/questions/50385735/why-the-mobilenetv2-is-faster-than-mobilenetv1-only-at-mobile-device |
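The gist of that discussion, in back-of-the-envelope terms: a depthwise-separable block needs far fewer multiply-accumulates than a standard convolution, but the depthwise stage has low arithmetic intensity and tends to be memory-bound on a GPU, so the theoretical saving often does not translate into proportionally higher FPS. A rough FLOP comparison (illustrative layer sizes, not taken from any particular model):

```python
def conv_flops(h, w, cin, cout, k):
    # Multiply-accumulates for a standard k x k convolution
    # over an h x w feature map.
    return h * w * cin * cout * k * k

def depthwise_separable_flops(h, w, cin, cout, k):
    # Depthwise k x k conv plus 1x1 pointwise conv
    # (the MobileNet building block).
    return h * w * cin * k * k + h * w * cin * cout

# Example layer: 56x56 feature map, 128 -> 128 channels, 3x3 kernel.
std = conv_flops(56, 56, 128, 128, 3)
dws = depthwise_separable_flops(56, 56, 128, 128, 3)
ratio = dws / std  # equals 1/k^2 + 1/cout = 1/9 + 1/128, about 0.12
```

So on paper the block does roughly 8x less work, which is why the mobile-CPU numbers look so good; on a GPU the kernel launch overhead and memory traffic of the depthwise stage eat into that advantage.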
Thank you, this is very informative. Then maybe in my case I am better off using the SSD Inception model. What do you think?
I think it's worth digging into why the FPS on your TX2 is not better than my test result on the Nano.
I am off until Monday; I'll get back to you then.
Any update? Otherwise, I'll close this issue due to inactivity. |
I have tested this demo on a Jetson TX2 and the inference speed is 22 FPS. I expected better performance on a TX2 than on a Jetson Nano. Do you have any insights for achieving better results? And what are the expected speeds on a TX2?
Has anyone tried it on a TX2, and what were the results?
Thanks
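One thing worth checking before comparing TX2 numbers against the Nano: Jetson FPS depends heavily on the active power mode, so for benchmarking the board should be locked to its maximum clocks. The standard JetPack tools for this are sketched below; mode numbers vary between JetPack releases, so query the available modes first:

```shell
# List the available power modes and show the active one.
sudo nvpmodel -q

# Switch to MAXN (all CPU cores on, maximum GPU clocks);
# on TX2 with JetPack 4.x this is typically mode 0, but verify
# against the output of `nvpmodel -q` on your release.
sudo nvpmodel -m 0

# Pin CPU/GPU/EMC clocks to their maximums for the benchmark run.
sudo jetson_clocks
```

If the TX2 was benchmarked in a lower power mode (e.g. a 4-core or low-clock profile) while the Nano numbers were taken at max clocks, that alone can explain a TX2 result that looks no better than a Nano.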