
Is the inference time of MobileNetv2 smaller than v1??? #4

Open
yuanze-lin opened this issue Jun 18, 2018 · 4 comments

Comments

@yuanze-lin

yuanze-lin commented Jun 18, 2018

Hello, in this project, is MobileNetV2 faster? If so, what is the FPS of V2?

@yiran-THU

I have tested it.
The inference time of MobileNetV2 is about 19 ms, and MobileNetV1 is about 11 ms.

@yuanze-lin
Author

yuanze-lin commented Jun 19, 2018

@yiran-THU Thank you for your response, but I read the paper, and it says that the inference time of MobileNetV2 should be less than MobileNetV1's. Are there any other versions of MobileNetV2?

@eric612
Owner

eric612 commented Jun 19, 2018

The original implementation came from here:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
There, the inference time of V2 was 27 ms and V1 was 31 ms, but those numbers are based on the TensorFlow framework.

In this project, the deploy model of MobileNet-V1 was made by merge_bn.py, which folds the batch norm and scale layers into the convolution layers, so it will be faster:
https://github.com/chuanqi305/MobileNet-SSD/blob/master/merge_bn.py
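For reference, here is a minimal NumPy sketch of the batch-norm folding idea that a script like merge_bn.py applies (the function name, argument layout, and eps default below are illustrative assumptions, not the script's actual API). Because Caffe splits normalization into a BatchNorm layer (mean/variance) and a Scale layer (gamma/beta), both can be absorbed into the preceding convolution's weights and bias, removing two layers from the deploy network:

```python
import numpy as np

def fold_bn_into_conv(W, b, mean, var, gamma, beta, eps=1e-5):
    """Fold a BatchNorm + Scale pair into the preceding convolution.

    W:           conv weights, shape (out_channels, in_channels, kH, kW)
    b:           conv bias, shape (out_channels,)
    mean, var:   BN running statistics, shape (out_channels,)
    gamma, beta: Scale layer parameters, shape (out_channels,)
    """
    s = gamma / np.sqrt(var + eps)            # per-channel multiplier
    W_folded = W * s[:, None, None, None]     # scale each output filter
    b_folded = (b - mean) * s + beta          # fold normalization into bias
    return W_folded, b_folded
```

The folded convolution produces the same output as conv → BatchNorm → Scale at inference time, which is why the merged deploy model runs faster.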

@yuanze-lin
Author

yuanze-lin commented Jun 19, 2018

@eric612 Hi, so if you use merge_bn.py to deploy the MobileNet-V2 model, could it become faster too?
