
The inference time costs too much? #21

Closed
longzeyilang opened this issue Apr 8, 2020 · 7 comments

Comments

@longzeyilang

I ran detection on a single image with the Decoupled_SOLO_Light_R50_3x model and the default config. The inference takes too much time, about 0.6 s. Why?

@WXinlong
Owner

WXinlong commented Apr 8, 2020

@longzeyilang Can you provide the script you used to test the speed?

@longzeyilang
Author

I use the default config, and the GPU is an NVIDIA 1080 Ti.

@WXinlong
Owner

WXinlong commented Apr 9, 2020

@longzeyilang I mean: how do you test the speed? It looks like you are testing through the demo script, which includes additional initialization and processing steps such as model loading and visualization.

@longzeyilang
Author

@WXinlong
Owner

@longzeyilang The inference speed should be tested with test_ins.py, not inference_demo.py, for the reasons stated above.

@longzeyilang
Author

What is the difference?

@WXinlong
Owner

@longzeyilang inference_demo.py includes additional initialization and processing steps such as model loading and visualization, so timing it overstates the per-image inference cost.
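To illustrate the point above: a fair per-image benchmark loads the model once outside the timed loop, runs a few warm-up iterations to absorb one-time costs, and averages the forward pass alone over many runs. This is a minimal sketch, not the repo's actual test_ins.py; `run_inference` is a hypothetical stand-in for a single model forward pass (with a real CUDA model you would also call `torch.cuda.synchronize()` before reading the clock).

```python
import time

# Hypothetical stand-in for one forward pass; in the real setup the model
# is loaded once, outside the timed loop, and this would call the model.
def run_inference(image):
    return [px * 2 for px in image]  # dummy per-pixel work

def benchmark(fn, image, warmup=5, runs=50):
    # Warm-up iterations absorb one-time costs (allocator caches,
    # CUDA context creation, cuDNN autotuning) so they are not counted.
    for _ in range(warmup):
        fn(image)
    start = time.perf_counter()
    for _ in range(runs):
        fn(image)
    # Average seconds per image over the timed runs only.
    return (time.perf_counter() - start) / runs

image = list(range(1000))
avg = benchmark(run_inference, image)
print(f"average inference time: {avg * 1000:.3f} ms")
```

Timing a demo script end to end instead would fold model loading and visualization into every measurement, which is consistent with the ~0.6 s figure reported above.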
