
tensorflow lite: error when convert frozen model to lite format #14761

Closed
mrbrantofgithub opened this issue Nov 21, 2017 · 10 comments
Labels
comp:lite (TF Lite related issues) · stat:contribution welcome (Status - Contributions welcome) · type:feature (Feature requests)

Comments

@mrbrantofgithub

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 14.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 1.3.0
  • Python version: 2.7
  • Bazel version (if compiling from source): 0.7.0
  • GCC/Compiler version (if compiling from source): gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
  • CUDA/cuDNN version: cuda8.0/cudnn6.0

I tried to convert a SqueezeNet frozen model to the lite format with the following command:

bazel run --config=opt tensorflow/contrib/lite/toco:toco -- \
  --input_file=/home/xxx/caffe-tensorflow/npy2ckpt/squeezenet/frozen_model.pb \
  --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
  --output_file=/home/xxx/caffe-tensorflow/npy2ckpt/squeezenet/squeezenet.lite \
  --inference_type=FLOAT --input_type=FLOAT \
  --input_arrays=input --output_arrays=prob --input_shapes=1,227,227,3

The output is shown below:
2017-11-21 18:35:29.977505: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 170 operators, 231 arrays (0 quantized)
2017-11-21 18:35:29.981856: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 40 operators, 93 arrays (0 quantized)
2017-11-21 18:35:29.982061: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 40 operators, 93 arrays (0 quantized)
2017-11-21 18:35:29.982201: I tensorflow/contrib/lite/toco/allocate_transient_arrays.cc:312] Total transient array allocated size: 4071680 bytes, theoretical optimal value: 4071680 bytes.
2017-11-21 18:35:29.982317: I tensorflow/contrib/lite/toco/toco_tooling.cc:255] Estimated count of arithmetic ops: 0.781679 billion (note that a multiply-add is counted as 2 ops).
2017-11-21 18:35:29.982482: F tensorflow/contrib/lite/toco/tflite/export.cc:192] Unsupported operator: Squeeze

Then I tried to convert mobilenet_v1_1.0_224.pb to the lite format and hit the same error:

bazel run --config=opt tensorflow/contrib/lite/toco:toco -- \
  --input_file=/home/xxx/Downloads/freeze_mobilenet/MobileNet/img224/mobilenet_v1_1.0_224.pb \
  --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE \
  --output_file=/home/xxx/Downloads/freeze_mobilenet/MobileNet/img224/mobilenet.lite \
  --inference_type=FLOAT --input_type=FLOAT \
  --input_arrays=input --output_arrays=output --input_shapes=1,224,224,3

Output:
2017-11-21 22:07:39.747095: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 418 operators, 584 arrays (0 quantized)
2017-11-21 22:07:39.766175: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 31 operators, 88 arrays (0 quantized)
2017-11-21 22:07:39.766390: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 31 operators, 88 arrays (0 quantized)
2017-11-21 22:07:39.766592: I tensorflow/contrib/lite/toco/allocate_transient_arrays.cc:312] Total transient array allocated size: 6422528 bytes, theoretical optimal value: 4816896 bytes.
2017-11-21 22:07:39.766751: I tensorflow/contrib/lite/toco/toco_tooling.cc:255] Estimated count of arithmetic ops: 1.14264 billion (note that a multiply-add is counted as 2 ops).
2017-11-21 22:07:39.766952: F tensorflow/contrib/lite/toco/tflite/export.cc:192] Unsupported operator: Squeeze
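[Editor's note] Both runs end with the same fatal line from export.cc. When scanning longer toco logs, a small stdlib-only helper (hypothetical, not part of toco) can pull out every operator the exporter rejected, since each failure is reported as "Unsupported operator: <OpName>":

```python
import re

# Toco reports each exporter failure as a fatal log line of the form:
#   "... F tensorflow/contrib/lite/toco/tflite/export.cc:192] Unsupported operator: <OpName>"
UNSUPPORTED_RE = re.compile(r"Unsupported operator: (\w+)")

def find_unsupported_ops(log_text):
    """Return the operator names toco reported as unsupported, in order."""
    return UNSUPPORTED_RE.findall(log_text)

log = (
    "2017-11-21 22:07:39.766952: F tensorflow/contrib/lite/toco/tflite/"
    "export.cc:192] Unsupported operator: Squeeze"
)
print(find_unsupported_ops(log))  # ['Squeeze']
```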

Although I installed TensorFlow with "pip install tensorflow-gpu", in order to convert models to the lite format I cloned the TensorFlow sources, ran configure, and built toco with bazel. I don't know whether this affects the model conversion, but the error is really strange!

@anitha-v anitha-v added the comp:lite TF Lite related issues label Nov 21, 2017
@miaout17
Contributor

For now, Squeeze is supported only when it can be fused into other ops.

Please refer to this doc for op compatibility. The team is working on supporting more ops in toco, and hopefully this will be supported in the near future.
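[Editor's note] On why fusion is possible at all: toco requires fixed input shapes (--input_shapes=...), so a Squeeze node's output shape is statically known at conversion time, and the op can in principle be folded into a neighbor or replaced by a Reshape to a fixed shape. An illustrative, stdlib-only sketch of that shape computation (not toco's actual code):

```python
def squeeze_shape(shape, axes=None):
    """Output shape of Squeeze: drop size-1 dims (all of them, or just `axes`).

    Once the input shape is fixed, the result is a constant list, which is
    why a converter can replace Squeeze with a Reshape to exactly this shape.
    """
    if axes is None:
        return [d for d in shape if d != 1]
    for i in axes:
        if shape[i] != 1:
            raise ValueError("cannot squeeze dim %d of size %d" % (i, shape[i]))
    return [d for i, d in enumerate(shape) if i not in axes]

# Typical classifier head: [1, 1, 1, 1001] logits squeezed down to [1001].
print(squeeze_shape([1, 1, 1, 1001]))            # [1001]
print(squeeze_shape([1, 224, 224, 3], axes=[0]))  # [224, 224, 3]
```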

@mpeniak

mpeniak commented Nov 22, 2017

Thanks for the info @miaout17. Is ssd-mobilenet supported? I cannot tell based on that document.

@gargn

gargn commented Nov 22, 2017

The model ssd-mobilenet is currently not supported. The team is working on supporting more ops and models.

Regarding mobilenet_v1_1.0_224, feel free to reference this doc for an example on converting an available mobilenet_v1_1.0_224 model using toco.

A pretrained frozen mobilenet_v1_1.0_224 can be found here: https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.25_128_frozen.tgz. Additionally, the list of available tflite models is found in this doc.

@mrbrantofgithub
Author

Given that my attempt to convert the mobilenet model to lite format failed, how did TensorFlow produce the converted mobilenet lite model offered on GitHub? @miaout17

@aselle
Contributor

aselle commented Nov 28, 2017

As @gargn mentioned, we haven't yet provided an example of Mobilenet SSD, only Mobilenet classification. We will likely provide support for that in the future. If you are adventurous, we are happy to help you along.

@aselle aselle added stat:awaiting response Status - Awaiting response from author type:feature Feature requests labels Nov 28, 2017
@tensorflowbutler
Member

It has been 14 days with no activity and the awaiting response label was assigned. Is this still an issue? Please update the label and/or status accordingly.

@aselle aselle added stat:contribution welcome Status - Contributions welcome and removed stat:awaiting response Status - Awaiting response from author labels Dec 20, 2017
@offbye

offbye commented Dec 27, 2017

When will ssd-mobilenet be supported by tflite?

@smitshilu
Contributor

@aselle any update on mobilenet SSD support?

@mpeniak

mpeniak commented Mar 5, 2018

Or is there any anticipated release date for this feature? I see you added the "contribution welcome" tag; does that mean you are not planning to develop any object detection support in-house?

@dsfour

dsfour commented Mar 11, 2018

How can Squeeze be fused into other ops? @miaout17


10 participants