
TF2.3 Converting SSD Mobilenet v2 to tflite (tflite file size about 0.5kbytes) #9394

Closed
mpa74 opened this issue Oct 19, 2020 · 6 comments
Labels: models:research, stat:awaiting model gardener, type:bug


mpa74 commented Oct 19, 2020

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am using the latest TensorFlow Model Garden release and TensorFlow 2.
  • I am reporting the issue to the correct repository. (Model Garden official or research directory)
  • I checked to make sure that this issue has not already been filed.

1. The entire URL of the file you are using

https://github.com/tensorflow/models/tree/master/research/

2. Describe the bug

After successfully training an SSD MobileNet V2 FPNLite 320 model on my own data (inference with the last checkpoint works fine), I used export_tflite_graph_tf2.py and the Python API to get a TFLite model. The .tflite file is generated, but it is only 544 bytes and cannot run inference.

3. Steps to reproduce

  • use a labeled dataset with 1 class (more classes not tested)

  • download the MobileNet V2 FPNLite 320 config from link

  • in the config, change: num_classes = 1, fine_tune_checkpoint: "/content/models/research/deploy/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/checkpoint/ckpt-0", and the label_map_path and input_path for train_input_reader and eval_input_reader

  • train the model:
    !python /content/models/research/object_detection/model_main_tf2.py \
      --pipeline_config_path={pipeline_file} \
      --model_dir={model_dir} \
      --alsologtostderr \
      --num_train_steps={12000} \
      --sample_1_of_n_eval_examples=1 \
      --num_eval_steps={500}

  • convert to a SavedModel using the last checkpoint, as described in link:
    python /content/models/research/object_detection/export_tflite_graph_tf2.py \
      --pipeline_config_path /content/models/research/deploy/pipeline_file.config \
      --trained_checkpoint_dir /content/training \
      --output_directory tflite


  • get a basic TFLite model:
    converter = tf.lite.TFLiteConverter.from_saved_model('tflite/saved_model/')
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    with open('model_optimize2.tflite', 'wb') as f:
        f.write(tflite_model)

  • the TFLite model is generated without any error, but it is only 544 bytes (see the check below)
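
For reference, here is a minimal check (a sketch, reusing the output filename from the snippet above) that loads the converted file with tf.lite.Interpreter; a usable SSD export should report a real image input and detection outputs, while a broken 544-byte file will show none or fail outright:

    import tensorflow as tf

    # Load the converted model (filename taken from the conversion snippet above).
    interpreter = tf.lite.Interpreter(model_path='model_optimize2.tflite')
    interpreter.allocate_tensors()

    # A healthy export should show e.g. a [1, 320, 320, 3] image input
    # and the detection output tensors.
    print(interpreter.get_input_details())
    print(interpreter.get_output_details())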

4. Expected behavior

A much larger .tflite file that can actually run inference.

5. Additional context

I get many warnings while converting the checkpoint to a SavedModel:
[screenshots of conversion warnings]

6. System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab
  • TensorFlow installed from (source or binary): 2.3
  • TensorFlow version (use command below): 2.3
  • Python version: 3.7
  • Bazel version (if compiling from source): not used
  • GCC/Compiler version (if compiling from source): not used
  • CUDA/cuDNN version: 10.1
  • GPU model and memory: Tesla T4
mpa74 added the models:research and type:bug labels on Oct 19, 2020
saikumarchalla commented:

@mpa74 Could you please go through this link and let us know if it helps. Thanks!

saikumarchalla added the stat:awaiting response label on Oct 20, 2020
saikumarchalla self-assigned this on Oct 20, 2020

mpa74 commented Oct 20, 2020

@mpa74 Could you please go through this link and let us know if it helps. Thanks!

Thanks for the response. I tried to get the TFLite model the way described in that thread (conversion only, without quantization) and got the same unusable ~500-byte file.
The command I used:
tflite_convert --saved_model_dir=train_model/export2/saved_model --output_file=detect_local.tflite
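
As a quick diagnostic (a sketch, assuming the saved_model_dir from the command above), it is worth checking that the exported SavedModel actually exposes a serving signature; a near-empty .tflite can mean the converter found no concrete function to convert:

    import tensorflow as tf

    # Inspect the SavedModel the converter was pointed at.
    loaded = tf.saved_model.load('train_model/export2/saved_model')
    print(list(loaded.signatures.keys()))  # expect something like ['serving_default']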

saikumarchalla added the stat:awaiting model gardener label and removed the stat:awaiting response label on Oct 20, 2020
srjoglekar246 (Contributor) commented:

@mpa74 Are you using the latest TF nightly for TFLite conversion? These features were added recently, so I don't think they have made it to the stable version yet.
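
For anyone following along, a quick way to confirm which TensorFlow build the conversion actually runs on (a sketch):

    import tensorflow as tf

    # If this prints 2.3.x, the new detection-export conversion path
    # is not in your build yet; upgrade with: pip install tf-nightly
    print(tf.__version__)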


mpa74 commented Oct 20, 2020

@mpa74 Are you using the latest TF nightly for TFLite conversion? These features were added recently, so I don't think they have made it to the stable version yet.

Tested with the nightly versions of TensorFlow and the TF Object Detection API. Everything is OK now. Thanks!

I did the following:

  • created a new virtualenv
    - pip install tf-nightly (tf_nightly-2.4.0.dev20201020-cp37-cp37m-win_amd64 installed), tf-slim, scipy
  • git clone https://github.com/tensorflow/models.git
  • navigate to models\research
  • Get-ChildItem object_detection/protos/*.proto | foreach {./protoc "object_detection/protos/$($_.Name)" --python_out=.} (from PowerShell)
  • python setup.py build
  • python setup.py install
  • pip install tf-models-nightly (because I got the error "No module named 'official'")
  • test the installation: python object_detection/builders/model_builder_tf2_test.py (everything passes)
  • then I retrained the model, exported a SavedModel, and converted it to TFLite using the command above (smoke test below)
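
For completeness, a short inference smoke test (a sketch; the filename comes from the tflite_convert command earlier in the thread, and the boxes/classes/scores/count output layout is the usual export_tflite_graph_tf2.py format, so the exact tensor order may vary):

    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path='detect_local.tflite')
    interpreter.allocate_tensors()

    # Feed a dummy frame matching the model's declared input shape.
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))
    interpreter.invoke()

    # SSD exports typically emit detection boxes, classes, scores, and count.
    for out in interpreter.get_output_details():
        print(out['name'], interpreter.get_tensor(out['index']).shape)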

srjoglekar246 (Contributor) commented:

Awesome. Thanks for confirming!

