Description
Prerequisites
Please answer the following question for yourself before submitting an issue.
- [x] I checked to make sure that this issue has not been filed already.
1. The entire URL of the documentation with the issue
https://github.com/tensorflow/models/blob/v2.3.0/research/object_detection/g3doc/detection_model_zoo.md
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md
2. Describe the issue
> Our frozen inference graphs are generated using the v1.12.0 release version of Tensorflow and we do not guarantee that these will work with other versions; this being said, each frozen inference graph can be regenerated using your current version of Tensorflow by re-running the exporter, pointing it at the model directory as well as the corresponding config file in samples/configs.
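For reference, re-running the TF1 exporter as the docs suggest looks roughly like this (the config file and checkpoint paths below are placeholders for whichever model is being re-exported, not the exact paths from my setup):

```shell
# Sketch only -- paths and the config file are placeholders.
# Run from the research/ directory of the tensorflow/models repo.
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path object_detection/samples/configs/ssd_mobilenet_v2_coco.config \
    --trained_checkpoint_prefix /path/to/model_dir/model.ckpt \
    --output_directory /path/to/exported_graph
```

Under TF2 the equivalent entry point is `exporter_main_v2.py` (which takes `--trained_checkpoint_dir` instead of a checkpoint prefix), and that is where I hit the error below.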
When I try this suggestion with TensorFlow v2.3.0, it seems that some models are not supported by `model_builder.py`:
```
ValueError: ssd_mobilenet_v2 is not supported. See `model_builder.py` for features extractors compatible with different versions of Tensorflow
```
Similarly,

```
ValueError: ssd_mobilenet_v1 is not supported. See `model_builder.py` for features extractors compatible with different versions of Tensorflow
```
The reason I tried to re-export for TF2 is that I ran the frozen graphs of the available mobile models as-is on an RPi 2 (4 cores), with these results:

- `coco_ssd_mobilenet_v1_1.0_quant_2018_06_29`: inference time is ~300 ms, CPU usage is ~300% (available 400%)
- `ssd_mobilenet_v3_small_coco_2020_01_14`: inference time is ~500 ms, CPU usage is ~90% (available 400%)
What puzzles me is that v3 small takes much longer to run inference on a frame, yet its CPU usage is far lower, which suggests that a large portion of the ~500 ms is spent on memory I/O?
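For what it's worth, the latency and CPU-usage numbers above come from timing the inference loop; a minimal sketch of how to measure both at once with only the standard library is below. `run_once` is a hypothetical stand-in for one inference call (on the RPi it would wrap e.g. `interpreter.invoke()` or `sess.run(...)`); utilization is process CPU time divided by wall time, so 4.0 would mean all four cores busy.

```python
import time

def profile_inference(run_once, frames=50):
    """Estimate per-frame latency and CPU utilization for a callable.

    Returns (avg_latency_seconds, cpu_utilization), where utilization is
    total process CPU time (summed across threads) over wall-clock time.
    """
    t0_wall = time.perf_counter()
    t0_cpu = time.process_time()
    for _ in range(frames):
        run_once()
    wall = time.perf_counter() - t0_wall
    cpu = time.process_time() - t0_cpu
    return wall / frames, cpu / wall

# Dummy compute-bound stand-in for a model, just to exercise the harness:
def dummy_infer():
    sum(i * i for i in range(10000))

latency, util = profile_inference(dummy_infer, frames=20)
print(f"latency: {latency * 1000:.1f} ms/frame, utilization: {util:.2f} cores")
```

If utilization stays far below the core count while latency is high (as with v3 small here), the loop is stalled on something other than arithmetic, e.g. memory traffic or single-threaded ops.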
I don't know whether that is caused by the model not being exported from TF 2.0, so I tried to re-export it and ran into the error stated above.
Any suggestions?