Does this work for SSD Mobilenet v2? #1
Hi there, of the models from the model zoo I have tried to train and convert so far, only "ssd_mobilenet_v2_quantized_coco" can be fine-tuned and converted to TFLite successfully.
I will post the training method and conversion commands later.
So, if I try to convert ssd_mobilenet_v2 to tflite, should it not work out of the box without retraining?
If I just pass inference_input_type=QUANTIZED_UINT8 and leave the output type as float, it works fine, but then the model size is no longer reduced. I just need to convert the model to TFLite with quantized weights and run inference. How do you suggest I deal with this?
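The size reduction being asked about comes from storing each weight as one byte instead of four. As a rough illustration (my own sketch, not code from this thread), affine 8-bit quantization maps a float32 weight array onto uint8 plus a scale and zero point, shrinking storage about 4x at the cost of a small rounding error:

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Affine-quantize a float32 weight array to uint8.

    Returns the quantized array plus the (scale, zero_point) needed to
    recover approximate float values: w ~= scale * (q - zero_point).
    """
    qmax = 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / qmax
    zero_point = int(round(-w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 weights from the uint8 encoding."""
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(1000).astype(np.float32)
q, scale, zp = quantize_weights(w)

# uint8 storage is 1 byte per weight vs 4 bytes for float32: 4x smaller.
ratio = w.nbytes // q.nbytes
err = np.abs(dequantize(q, scale, zp) - w).max()
```

This is only the storage side of the story; what the TFLite converter actually emits also depends on whether the graph was trained with fake-quantization nodes, which is why the quantized checkpoint behaves differently from the float one here.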
You can try the command that I provided. The method has already been verified.
Thanks for your time. I tried this, but the detections always point to the class label 'person' at some fixed positions.
Have you tested your tflite model with Python?
No, I have not tested it. Please find the tflite model attached. There is a fundamental difference in the output section between the attached model and the model you have in the repo; I checked it with Netron.
I doubt whether I am generating the tflite model properly. I use a command similar to the one you mentioned above.
I will try your tflite model. I had already implemented this model to detect heavy machinery on an ARM board, so there is nothing to doubt as long as you follow the method I recommended.
Besides, you can use object_detector_detection_api_lite.py to verify your model's correctness after training, before implementing it in C++.
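A quick Python sanity check like the one suggested mostly comes down to reading the four TFLite_Detection_PostProcess outputs (boxes, classes, scores, count) and filtering by score. A minimal sketch of that parsing step, with fake arrays standing in for the interpreter's `get_tensor` results and a made-up `LABELS` map (the real model uses the COCO label file):

```python
import numpy as np

# Hypothetical label map for illustration; the real model uses COCO labels.
LABELS = {0: "person", 1: "bicycle", 2: "car"}

def parse_detections(boxes, classes, scores, count, score_threshold=0.5):
    """Filter the four TFLite_Detection_PostProcess outputs by score.

    boxes:   [1, N, 4] float, (ymin, xmin, ymax, xmax) in [0, 1]
    classes: [1, N] float class indices
    scores:  [1, N] float confidences
    count:   [1] float number of valid detections
    """
    results = []
    for i in range(int(count[0])):
        if scores[0, i] < score_threshold:
            continue
        label = LABELS.get(int(classes[0, i]), "unknown")
        results.append((label, float(scores[0, i]), boxes[0, i].tolist()))
    return results

# Fake outputs standing in for interpreter.get_tensor(...) calls.
boxes = np.array([[[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9]]], np.float32)
classes = np.array([[0.0, 2.0]], np.float32)
scores = np.array([[0.9, 0.3]], np.float32)
count = np.array([2.0], np.float32)

dets = parse_detections(boxes, classes, scores, count)
# Only the 0.9-confidence "person" passes the 0.5 threshold.
```

If the parsed labels and boxes are already wrong at this stage, the problem is in the conversion rather than in the C++ integration.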
OK, thanks. :-)
Hi,
The results of the non-quantized models I tested were the same as yours; only the quantized MobileNet SSD v2 works for me and can be retrained.
So, only one object being detected in the non-quantized case is expected? That means nothing is wrong with my implementation?
I think the TFLite ops don't support non-quantized models correctly.
You are right. Can you close this issue? Thanks for your time, buddy :-)
You’re welcome |
Hi,
I was working with TFLite. I tried the inference code for the tflite model in your repo, and it works well.
But when I run the code with the SSD MobileNet v2 tflite model, I get wrong classes and the boxes make no sense. Is this something you noticed?
Can you please help me?
I convert the model using the following commands.
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=$CONFIG_FILE \
    --trained_checkpoint_prefix=$CHKPT_DIR \
    --output_directory=$MODEL_DIR \
    --add_postprocessing_op=true

tflite_convert \
    --graph_def_file=$MODEL_DIR/tflite_graph.pb \
    --output_file=$MODEL_DIR/detect.tflite \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --input_shapes=1,300,300,3 \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 --std_dev_values=128 \
    --change_concat_input_ranges=false \
    --allow_custom_ops \
    --default_ranges_min=0 --default_ranges_max=255
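One detail worth checking when feeding this converted model: with --inference_type=QUANTIZED_UINT8, the converter interprets uint8 input pixels as real values via real = (quantized - mean) / std_dev, so mean_values=128 and std_dev_values=128 map [0, 255] to roughly [-1, 1). A small numpy sketch of that mapping (my own illustration, not from this thread):

```python
import numpy as np

MEAN, STD = 128.0, 128.0  # matches --mean_values=128 --std_dev_values=128

def dequantize_input(q):
    """Map uint8 pixels to the float range the SSD graph expects."""
    return (q.astype(np.float32) - MEAN) / STD

def quantize_input(x):
    """Inverse mapping: float values in [-1, 1) back to uint8 pixels."""
    return np.clip(np.round(x * STD + MEAN), 0, 255).astype(np.uint8)

img = np.array([0, 128, 255], np.uint8)
real = dequantize_input(img)  # -> [-1.0, 0.0, 0.9921875]
back = quantize_input(real)   # round-trips to the original pixels
```

Wrong mean/std values at conversion time are a common cause of the "wrong classes, nonsense boxes" symptom described above, since the network then sees inputs shifted out of its expected range.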