The inference time costs too much? #21
Comments
@longzeyilang Can you provide the script you used to test the speed?
I use the default config, and the GPU is an NVIDIA 1080 Ti.
@longzeyilang I mean how you measured the speed. It looks like you tested through the demo script, which includes additional initialization and processing steps such as model loading and visualization.
@longzeyilang The inference speed should be measured with test_ins.py, not inference_demo.py, for the reasons stated above.
What is the difference?
@longzeyilang inference_demo.py includes additional initialization and processing steps such as model loading and visualization.
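To illustrate the point above, here is a minimal, framework-agnostic sketch of how inference latency is usually measured so that one-off costs (model loading, the first-call CUDA/cuDNN warm-up) and visualization are excluded. The helper name `benchmark` and its parameters are hypothetical, not part of SOLO's code:

```python
import time

def benchmark(fn, warmup=3, iters=10):
    """Average the wall-clock time of fn() over `iters` calls,
    after `warmup` untimed calls to absorb one-off setup costs."""
    for _ in range(warmup):
        fn()  # warm-up: first calls pay one-time costs (caching, autotuning)
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Example with a stand-in workload; a real test would wrap the
# model's forward pass only, not loading or visualization.
avg = benchmark(lambda: sum(range(100000)))
print(f"avg inference time: {avg:.6f} s")
```

Note that for a GPU model you would also need to synchronize the device (e.g. `torch.cuda.synchronize()` in PyTorch) before reading the clock, because CUDA kernels are launched asynchronously; otherwise the measured time is misleadingly small.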
I run detection on a single image with the Decoupled_SOLO_Light_R50_3x model and the default config. The inference takes too much time, about 0.6 s. Why?