
Does this work for SSD Mobilenet v2? #1

Closed
ullasbharadwaj opened this issue May 27, 2020 · 18 comments

Comments

@ullasbharadwaj commented May 27, 2020

Hi,
I was working with TFLite. I tried the inference code for the tflite model in your repo, and it works well.
But when I run the code with the SSD MobileNet v2 tflite model, I get wrong classes and the boxes make no sense... Is this something you noticed?

Can you please help me?

I convert the model using the following commands:

python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=$CONFIG_FILE \
    --trained_checkpoint_prefix=$CHKPT_DIR \
    --output_directory=$MODEL_DIR \
    --add_postprocessing_op=true

tflite_convert \
    --graph_def_file=$MODEL_DIR/tflite_graph.pb \
    --output_file=$MODEL_DIR/detect.tflite \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --input_shapes=1,300,300,3 \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 --std_dev_values=128 \
    --change_concat_input_ranges=false \
    --allow_custom_ops \
    --default_ranges_min=0 --default_ranges_max=255
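
One quick way to see what a conversion like this actually produced is to inspect the model's tensors from Python; a minimal sketch, assuming tf.lite is available and "detect.tflite" is a placeholder path, printing the same input/output details Netron visualizes, including each tensor's quantization parameters:

# Sketch: dump a converted model's I/O tensor details
# (assumes a TF build with tf.lite; "detect.tflite" is a placeholder path).
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()

for d in interpreter.get_input_details():
    # 'quantization' is (scale, zero_point); (0.0, 0) means the tensor is float.
    print("input :", d["name"], d["shape"], d["dtype"], d["quantization"])
for d in interpreter.get_output_details():
    print("output:", d["name"], d["shape"], d["dtype"], d["quantization"])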

@finnickniu (Owner) commented May 28, 2020

Hi there. Of the models from the model zoo I have tried to train and convert so far, only "ssd_mobilenet_v2_quantized_coco" can be fine-tuned and converted to tflite successfully.

@finnickniu (Owner)

I will update the training method and conversion commands later.

@ullasbharadwaj (Author)

So, if I try to convert ssd_mobilenet_v2 to tflite, should it not work out of the box without re-training?
I am actually confused :-(

@ullasbharadwaj (Author)

If I just pass inference_input_type=QUANTIZED_UINT8 and leave the output type as float, it works fine. But then the model size is no longer reduced... I just need to convert the model to tflite with quantized weights and run inference. How do you suggest I deal with this?
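
If the goal is just smaller weights while keeping float inputs and outputs, post-training dynamic-range quantization is one alternative worth noting here; a minimal sketch, assuming TF 1.x (1.14+) and the tflite_graph.pb produced by export_tflite_ssd_graph.py, with placeholder file paths:

# Sketch: post-training (dynamic-range) weight quantization, assuming TF 1.x.
# Keeps float inputs/outputs but stores weights as 8-bit, shrinking the file.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="tflite_graph.pb",  # placeholder path
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=[
        "TFLite_Detection_PostProcess",
        "TFLite_Detection_PostProcess:1",
        "TFLite_Detection_PostProcess:2",
        "TFLite_Detection_PostProcess:3",
    ],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]},
)
converter.allow_custom_ops = True  # TFLite_Detection_PostProcess is a custom op
converter.optimizations = [tf.lite.Optimize.DEFAULT]
open("detect_dynamic_range.tflite", "wb").write(converter.convert())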

@finnickniu (Owner)

You can try the command that I provided; the method has already been verified:

tflite_convert \
    --output_file=path to/models/research/object_detection/mobilenet_ssd_v2_train/tflite/model.tflite \
    --graph_def_file=path to/models/research/object_detection/mobilenet_ssd_v2_train/tflite/tflite_graph.pb \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --input_shapes=1,300,300,3 \
    --allow_custom_ops \
    --output_format=TFLITE \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 --std_dev_values=127
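
For reference, with --inference_type=QUANTIZED_UINT8 the mean_values/std_dev_values flags define the input dequantization real_value = (quantized_value - mean_values) / std_dev_values, so 128/127 maps the uint8 range [0, 255] to roughly [-1.0, 1.0], the input range SSD MobileNet expects.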

@ullasbharadwaj (Author)

Thanks for your time. I tried this, but then the detections always point to the class label 'person' at some fixed positions.

@finnickniu (Owner)

Have you tested your tflite model with Python?
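
A quick Python smoke test along those lines might look like the following; a minimal sketch, assuming a uint8 [1, 300, 300, 3] input and the four standard TFLite_Detection_PostProcess outputs in graph order (boxes, classes, scores, count):

# Sketch: smoke-test detect.tflite in Python before moving to C++.
# Assumes a uint8 [1, 300, 300, 3] input and the four standard
# TFLite_Detection_PostProcess outputs: boxes, classes, scores, count.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# A random frame stands in for a real image here.
frame = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

boxes, classes, scores, count = [
    interpreter.get_tensor(d["index"]) for d in interpreter.get_output_details()
]
print("num detections:", count, "top score:", scores[0][0])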

@ullasbharadwaj (Author)

No, I have not tested it. Please find the tflite model attached. There is a fundamental difference in the output section between the attached model and the model you have in the repo. I checked it with Netron.

detect.zip

@ullasbharadwaj (Author)

I doubt whether I am generating the tflite model properly. I use a command similar to the one you mentioned above.

@finnickniu (Owner) commented May 28, 2020

I will try your tflite model. I have already deployed this model to detect heavy machinery on an ARM board, so there is nothing to doubt as long as you follow the method I recommended.

@finnickniu (Owner)

Besides, you can use object_detector_detection_api_lite.py to test your model's correctness after training, before implementing it with C++.

@ullasbharadwaj (Author)

Ok. Thanks. :-)

@ullasbharadwaj (Author)

Hi,
A very quick question: with the pre-trained non-quantized SSD MobileNet v2, only one object in the frame is ever detected. However, your detect.tflite detects multiple objects, which is cool. What is the reason for this? Because you retrained it?

@finnickniu (Owner)

The results of the non-quantized models I tested were the same as yours; only the quantized MobileNet SSD v2 works for me and can be retrained.

@ullasbharadwaj (Author)

So, only one object getting detected in the non-quantized case is expected? That means nothing is wrong with my implementation?
Your model works perfectly.

@finnickniu (Owner)

I think the tflite ops don't support non-quantized models correctly.

@ullasbharadwaj (Author) commented May 29, 2020

You are right. Can you close this issue? Thanks for your time, buddy :-)

@finnickniu (Owner)

You're welcome.
