tensorflow lite: error when convert frozen model to lite format #14761
Comments
For now, please refer to this doc to see the ops compatibility. The team is working on supporting more ops in TOCO, and hopefully this will be supported in the near future.
Thanks for the info @miaout17. Is ssd-mobilenet supported? I cannot tell based on that document.
Regarding the model: a pretrained frozen MobileNet can be found here: https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_0.25_128_frozen.tgz. Additionally, the list of available tflite models is found in this doc.
Given that my own conversion of the MobileNet model to lite format failed, how did TensorFlow produce the converted MobileNet lite model offered on GitHub? @miaout17
As @gargn mentioned, we haven't yet provided an example of MobileNet SSD, only MobileNet classification. We will likely provide support for that in the future. If you are adventurous, we are happy to help you along.
It has been 14 days with no activity.
When will ssd-mobilenet be supported by tflite?
@aselle any update on mobilenet SSD support? |
Or is there any anticipated release date for this feature? I see you added the "contribution welcome" tag; does that mean you are not planning to develop any object detection support in-house?
How can the Squeeze op be fused into other ops? @miaout17
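One way to see why Squeeze can often be folded away: when the input shape is fully known (as in a frozen graph converted with fixed `--input_shapes`), the squeezed shape is a compile-time constant, so the Squeeze can be replaced by a Reshape to that precomputed shape. A minimal pure-Python sketch of the shape computation (illustrative only, not TOCO's actual fusion logic):

```python
# Squeeze removes dimensions of size 1 from a tensor's shape.
# With a fully known input shape, the result is a fixed shape, so the op
# is equivalent to a Reshape to that shape -- one reason a converter can
# fuse or rewrite it away.

def squeezed_shape(shape, axes=None):
    """Return the shape produced by applying Squeeze(axes) to `shape`."""
    if axes is None:
        # No axes given: drop every dimension of size 1.
        return [d for d in shape if d != 1]
    # Normalize negative axis indices, then drop only those dimensions.
    axes = {a % len(shape) for a in axes}
    return [d for i, d in enumerate(shape) if i not in axes]

# MobileNet's final Squeeze turns the [1, 1, 1, 1001] logits into [1, 1001]:
print(squeezed_shape([1, 1, 1, 1001], axes=[1, 2]))  # [1, 1001]
```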
System information
- Have I written custom code: No
- OS platform and distribution: Ubuntu 14.04
- TensorFlow installed from: source
- TensorFlow version: 1.3.0
- Python version: 2.7
- Bazel version: 0.7.0
- GCC/compiler version: gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
- CUDA/cuDNN version: cuda8.0/cudnn6.0
I tried to convert a SqueezeNet frozen model to lite format with the following command:
"bazel run --config=opt tensorflow/contrib/lite/toco:toco -- --input_file=/home/xxx/caffe-tensorflow/npy2ckpt/squeezenet/frozen_model.pb --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --output_file=/home/xxx/caffe-tensorflow/npy2ckpt/squeezenet/squeezenet.lite --inference_type=FLOAT --input_type=FLOAT --input_arrays=input --output_arrays=prob --input_shapes=1,227,227,3"
The output is shown below:
2017-11-21 18:35:29.977505: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 170 operators, 231 arrays (0 quantized)
2017-11-21 18:35:29.981856: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 40 operators, 93 arrays (0 quantized)
2017-11-21 18:35:29.982061: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 40 operators, 93 arrays (0 quantized)
2017-11-21 18:35:29.982201: I tensorflow/contrib/lite/toco/allocate_transient_arrays.cc:312] Total transient array allocated size: 4071680 bytes, theoretical optimal value: 4071680 bytes.
2017-11-21 18:35:29.982317: I tensorflow/contrib/lite/toco/toco_tooling.cc:255] Estimated count of arithmetic ops: 0.781679 billion (note that a multiply-add is counted as 2 ops).
2017-11-21 18:35:29.982482: F tensorflow/contrib/lite/toco/tflite/export.cc:192] Unsupported operator: Squeeze
Then I tried to convert mobilenet_v1_1.0_224.pb to lite format and hit the same error as above:
"bazel run --config=opt tensorflow/contrib/lite/toco:toco -- --input_file=/home/xxx/Downloads/freeze_mobilenet/MobileNet/img224/mobilenet_v1_1.0_224.pb --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --output_file=/home/xxx/Downloads/freeze_mobilenet/MobileNet/img224/mobilenet.lite --inference_type=FLOAT --input_type=FLOAT --input_arrays=input --output_arrays=output --input_shapes=1,224,224,3"
output:
2017-11-21 22:07:39.747095: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 418 operators, 584 arrays (0 quantized)
2017-11-21 22:07:39.766175: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 31 operators, 88 arrays (0 quantized)
2017-11-21 22:07:39.766390: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 31 operators, 88 arrays (0 quantized)
2017-11-21 22:07:39.766592: I tensorflow/contrib/lite/toco/allocate_transient_arrays.cc:312] Total transient array allocated size: 6422528 bytes, theoretical optimal value: 4816896 bytes.
2017-11-21 22:07:39.766751: I tensorflow/contrib/lite/toco/toco_tooling.cc:255] Estimated count of arithmetic ops: 1.14264 billion (note that a multiply-add is counted as 2 ops).
2017-11-21 22:07:39.766952: F tensorflow/contrib/lite/toco/tflite/export.cc:192] Unsupported operator: Squeeze
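Since only the trailing Squeeze is rejected in both runs, one common workaround is to cut the graph just before the unsupported op by pointing `--output_arrays` at an earlier node, then perform the removed squeeze/reshape in application code. A hedged sketch: the node name `MobilenetV1/Logits/Conv2d_1c_1x1/BiasAdd` is an assumption about this particular frozen graph, so inspect your own graph to find the actual name of the node feeding the Squeeze before using it:

```shell
# Hypothetical workaround: stop the converted graph at the node feeding the
# unsupported Squeeze op (the node name below is an assumption -- verify it
# against your frozen graph first).
bazel run --config=opt tensorflow/contrib/lite/toco:toco -- \
  --input_file=mobilenet_v1_1.0_224.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=mobilenet.lite \
  --inference_type=FLOAT \
  --input_type=FLOAT \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Logits/Conv2d_1c_1x1/BiasAdd \
  --input_shapes=1,224,224,3
```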
Although I installed TensorFlow with "pip install tensorflow-gpu", in order to convert the models to lite format I cloned the TensorFlow repository, ran configure, and built the tools with Bazel. I don't know whether this affects the conversion, but the error is really strange!
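Before running TOCO, it can help to know in advance which op types in the frozen graph the TFLite exporter cannot serialize, rather than discovering them one fatal error at a time. A minimal sketch: collect the op types from the GraphDef and diff them against the supported-ops list from the compatibility doc. The `SUPPORTED` set below is a small illustrative subset I chose for the example, not the real full list:

```python
# Diff the op types present in a frozen graph against a supported-ops list
# to surface everything the TFLite exporter would reject.

def unsupported_ops(graph_op_types, supported):
    """Return the op types present in the graph but absent from `supported`."""
    return sorted(set(graph_op_types) - set(supported))

# With TensorFlow installed, the op types come from the frozen GraphDef, e.g.:
#   import tensorflow as tf
#   graph_def = tf.GraphDef()
#   with open("frozen_model.pb", "rb") as f:
#       graph_def.ParseFromString(f.read())
#   op_types = [node.op for node in graph_def.node]

# Illustrative subset of supported ops -- consult the compatibility doc
# for the authoritative list.
SUPPORTED = {"Conv2D", "DepthwiseConv2dNative", "Relu6", "AvgPool",
             "Softmax", "Reshape", "BiasAdd"}
op_types = ["Conv2D", "Relu6", "AvgPool", "Squeeze", "Softmax"]
print(unsupported_ops(op_types, SUPPORTED))  # ['Squeeze']
```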