detect_ros.py inference time slower than detect.py #11
Hi @terrytao19, can you post the image size right before the image is fed into the network, in both detect.py and detect_ros.py?
Hi @lukazso, I am using the default 640 image size. I have stopped running in a VM, so my speeds are faster now, but there is still a difference:
Please check the image size in the code, e.g. print the image size in both scripts right before model inference.
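A minimal way to follow that suggestion is to wrap the forward pass in a small helper that prints the input size and the elapsed time. This is a hypothetical sketch (the helper name and the `(1, 3, H, W)` tensor layout are assumptions), not code from the repo:

```python
import time

def timed_inference(model, img):
    # img is assumed to be a batched tensor of shape (1, 3, H, W);
    # print its size right before inference so the two scripts
    # (detect.py and detect_ros.py) can be compared directly
    print("input size:", tuple(img.shape))
    t0 = time.perf_counter()
    out = model(img)
    dt_ms = (time.perf_counter() - t0) * 1000
    print(f"inference: {dt_ms:.1f} ms")
    return out
```

Dropping the same two lines into both scripts, directly around `model(img)`, makes any size or timing discrepancy obvious in the logs.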
Hello @lukazso, I have the same problem and cannot fix it. My input images are all 640 x 480, and after rescaling I obtain 640 x 640.
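If the ROS script really pads a 640 x 480 input to a full 640 x 640 square (as the comment above suggests) while the other script only letterboxes to the nearest stride multiple, that alone adds about a third more pixels per forward pass. A rough back-of-the-envelope sketch, assuming the usual YOLO-style stride of 32 (hypothetical helper, not the repo's actual preprocessing code):

```python
def scaled_size(w, h, new_size=640):
    # scale so the longer side equals new_size, keeping aspect ratio
    r = new_size / max(w, h)
    return round(w * r), round(h * r)

w, h = scaled_size(640, 480)          # aspect ratio kept: 640 x 480
square = 640 * 640                    # pixels if padded to a full square
minimal = w * ((h + 31) // 32 * 32)   # pixels with stride-32 letterboxing
print(square / minimal)               # ~1.33, i.e. ~33% more pixels
```

A 33% pixel increase cannot explain a 3x slowdown by itself, so the remaining gap likely comes from elsewhere (e.g. image conversion or callback overhead), but checking the actual input size, as suggested above, rules this factor in or out.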
Hi, I'm running a custom v7-tiny model. Inference time with detect_ros.py is around 500 ms, while with detect.py it is around 150 ms. I added this to track the times:
![image](https://user-images.githubusercontent.com/32805688/214906186-995c8fae-d559-4dbc-8389-d9962aa39218.png)