
Error converting the model to TF Lite #17684

Closed
Neargye opened this issue Mar 13, 2018 · 23 comments
Assignees
Labels
comp:lite TF Lite related issues

Comments

@Neargye
Contributor

Neargye commented Mar 13, 2018

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04.4
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): r1.6 commit: cbc6580
  • Python version: 3.5.2
  • Bazel version (if compiling from source): 0.11.1
  • GCC/Compiler version (if compiling from source): gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9)
  • CUDA/cuDNN version: N/A (built without CUDA support)
  • GPU model and memory: N/A (built without CUDA support)
  • Exact command to reproduce:

Describe the problem

I trained a model and successfully froze it; it works in TensorFlow on Android via TensorFlowInferenceInterface.
When I try to convert it to the TF Lite format, I get an error.

Source code / logs

bazel-bin/tensorflow/contrib/lite/toco/toco \
    --input_file=./test_model/frozen_graph.pb \
    --input_format=TENSORFLOW_GRAPHDEF \
    --output_file=./test_model/unet.tflite \
    --output_format=TFLITE \
    --input_array='input' \
    --input_data_type=FLOAT \
    --input_shape=2,192,320,1 \
    --inference_type=FLOAT \
    --inference_input_type=FLOAT \
    --output_array='final/Sigmoid' \
    --v=1
2018-03-13 21:07:12.711948: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 282 operators, 479 arrays (0 quantized)
2018-03-13 21:07:12.716274: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 282 operators, 479 arrays (0 quantized)
2018-03-13 21:07:12.716893: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:86] Check failed: mean_shape.dims() == multiplier_shape.dims()
Aborted (core dumped)
@Neargye
Contributor Author

Neargye commented Mar 14, 2018

For comparison, these commands run successfully:

bazel-bin/tensorflow/python/tools/freeze_graph \
    --input_graph=./test_model/model/unet.pb \
    --input_checkpoint=./test_model/model/unet \
    --input_binary=true \
    --output_graph=./test_model/frozen_graph.pb \
    --output_node_names='final/Sigmoid'
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
    --in_graph=./test_model/frozen_graph.pb \
    --out_graph=./test_model/optimize_for_deployment.pb \
    --inputs='input' \
    --outputs='final/Sigmoid' \
    --transforms='
        strip_unused_nodes(type=float, shape="2,192,320,1")
        remove_nodes(op=Identity, op=CheckNumerics)
        fold_constants(ignore_errors=true)
        fold_batch_norms
        fold_old_batch_norms'

And all models work on android, using TensorFlowInferenceInterface.
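For context, the fold_batch_norms transform works because an inference-time batch norm is just a per-channel multiply and add, which can be absorbed into the preceding layer's weights and bias. A toy numeric sketch of the idea (illustrative only, not the actual graph_transforms implementation):

```python
import math

def fold_batch_norm(weights, gamma, beta, mean, variance, eps=0.001):
    """Fold a per-channel batch norm into the preceding layer's
    weights and bias (toy 1x1 "conv": one weight scalar per channel).

    Returns (folded_weights, folded_bias) per channel, so that
    folded_w * x + folded_b == gamma * (w * x - mean) / sqrt(var + eps) + beta.
    """
    folded_w, folded_b = [], []
    for w, g, b, m, v in zip(weights, gamma, beta, mean, variance):
        scale = g / math.sqrt(v + eps)   # the per-channel multiplier BN applies
        folded_w.append(w * scale)       # absorbed into the weights
        folded_b.append(b - m * scale)   # absorbed into the bias
    return folded_w, folded_b

# One toy channel: conv output followed by batch norm must equal
# the folded layer's output for every input x.
w, g, b, m, v = [2.0], [0.5], [0.1], [1.0], [4.0]
fw, fb = fold_batch_norm(w, g, b, m, v, eps=0.0)
x = 3.0
bn_out = g[0] * (w[0] * x - m[0]) / math.sqrt(v[0]) + b[0]
folded_out = fw[0] * x + fb[0]
assert abs(bn_out - folded_out) < 1e-9
```

When the fold cannot line up per-channel shapes (for example, a training-mode batch norm whose moments are computed dynamically rather than stored as constants), converters hit shape checks like the mean_shape.dims() == multiplier_shape.dims() failure reported in this issue.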

@poxvoculi poxvoculi assigned andrehentz and unassigned poxvoculi Mar 14, 2018
@poxvoculi
Contributor

@andrehentz Do you have some insight into this?

@Neargye
Contributor Author

Neargye commented Mar 26, 2018

frozen_graph.zip
Here is an example of a frozen graph

@GarryLau

GarryLau commented Apr 4, 2018

@Neargye I get the same problem when I try to convert mobilenet_v1 to a .tflite model. The mobilenet_v1 I used is from https://github.com/tensorflow/models/tree/master/research/slim/nets.
I tried your solution above, but it doesn't solve my problem. Are there ops that aren't supported by tflite? @poxvoculi
Many thanks.

@wmafx

wmafx commented Apr 13, 2018

I have a very similar issue here. Graph froze successfully but getting the same error

I0413 09:45:28.332509 34490 graph_transformations.cc:39] Before Removing unused ops: 282 operators, 471 arrays (0 quantized)
I0413 09:45:28.360260 34490 graph_transformations.cc:39] After Removing unused ops pass 1: 277 operators, 464 arrays (0 quantized)
I0413 09:45:28.395511 34490 graph_transformations.cc:39] Before general graph transformations: 277 operators, 464 arrays (0 quantized)
F0413 09:45:28.397012 34490 resolve_batch_normalization.cc:86] Check failed: mean_shape.dims() == multiplier_shape.dims()

@GarryLau

You should not use the graph.pbtxt produced during training to freeze the graph. Use an eval.pbtxt instead, like the zipped files in https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md (each one contains an eval.pbtxt).

@ablenesi
Contributor

I think this might be the cause of the issue. Unfortunately, the issue template was never filled out, so that issue was closed. I just started working with TensorFlow this week, so I might be wrong.

The issue describes an error related to the ResolveBatchNormalization input dimensions.

I'm getting the same error as above:
2018-04-15 20:01:41.180669: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:86] Check failed: mean_shape.dims() == multiplier_shape.dims()
Abort trap: 6

@Neargye
Contributor Author

Neargye commented Apr 15, 2018

I removed the training nodes and got the following error:

2018-04-16 00:48:55.759638: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1171] Converting unsupported operation: Prod
2018-04-16 00:48:55.760237: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1171] Converting unsupported operation: SquaredDifference
2018-04-16 00:48:55.760341: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1171] Converting unsupported operation: Reciprocal
[the Prod / SquaredDifference / Reciprocal triplet above repeats 17 more times]
2018-04-16 00:48:55.769814: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 560 operators, 829 arrays (0 quantized)
2018-04-16 00:48:55.781042: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 560 operators, 829 arrays (0 quantized)
2018-04-16 00:48:55.790922: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 483 operators, 748 arrays (0 quantized)
2018-04-16 00:48:55.799834: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 483 operators, 748 arrays (0 quantized)
2018-04-16 00:48:55.805886: I tensorflow/contrib/lite/toco/allocate_transient_arrays.cc:311] Total transient array allocated size: 7864448 bytes, theoretical optimal value: 7864384 bytes.
2018-04-16 00:48:55.808726: F tensorflow/contrib/lite/toco/tflite/export.cc:304] Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If you have a custom implementation for them you can disable this error with --allow_custom_ops. Here is a list of operators for which you will need custom implementations: CAST, Prod, RSQRT, Reciprocal, SquaredDifference, Stack, TensorFlowShape, TensorFlowSquare, TensorFlowSum, TransposeConv.
Aborted (core dumped)

@GarryLau So I can't use this version of TF Lite, because the necessary operations haven't been implemented yet?

@smitshilu
Contributor

@GarryLau how can I generate eval.pbtxt? I tried running mobilenet_v1_eval with the latest checkpoint, but it isn't generating anything.

@andrehentz andrehentz added the comp:lite TF Lite related issues label Apr 25, 2018
@GarryLau

@smitshilu @Neargye The method to generate eval.pbtxt is the following:

import tensorflow as tf
import tensorflow.contrib.slim as slim
from nets import mobilenet_v1  # from the tensorflow/models research/slim directory

NUM_CLASSES = 1001  # replace with your number of classes

def export_eval_pbtxt():
  """Export eval.pbtxt."""
  with tf.Graph().as_default() as g:
    images = tf.placeholder(dtype=tf.float32, shape=[None, 224, 224, 3])
    # Use one of the following ways to build the inference graph, depending on your setup:
    # _, _ = mobilenet_v1.mobilenet_v1(inputs=images, num_classes=NUM_CLASSES, is_training=False)
    with slim.arg_scope(mobilenet_v1.mobilenet_v1_arg_scope(is_training=False, regularize_depthwise=True)):
      _, _ = mobilenet_v1.mobilenet_v1(inputs=images, is_training=False, depth_multiplier=1.0, num_classes=NUM_CLASSES)
    eval_graph_file = '/home/garylau/Desktop/mobilenet_v1/mobilenet_v1_eval.pbtxt'
    with open(eval_graph_file, 'w') as f:
      f.write(str(g.as_graph_def()))

Then call the function to generate eval.pbtxt.
Hope this helps.

@tensorflowbutler
Member

Nagging Assignee @andrehentz: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

@phildue

phildue commented May 10, 2018

I'm facing the same problem. Can batch normalization currently not be used?

F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:86] Check failed: mean_shape.dims() == multiplier_shape.dims()
Aborted (core dumped)

@phildue

phildue commented May 16, 2018

Maybe this holds only for me, but I got it solved. I defined the graph using Keras. The problem was that I called K.set_learning_phase(0) after defining the graph, which led to the error above. Calling K.set_learning_phase(0) before defining the model makes it work :)

@tensorflowbutler
Member

Nagging Assignee @andrehentz: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

@andrehentz
Contributor

I tried the conversion with the provided frozen_graph.zip and it seems to work now. Please reopen if you still encounter issues.

@ychen404

ychen404 commented Aug 6, 2018

@GarryLau
Hi Garry,

I am able to use the code you provided to create the eval.pbtxt.
But I still don't understand how to use eval pbtxt for freezing.
Is the graph retrieved from the checkpoint file? If I need to use the eval pbtxt, should I pass it directly to the freeze script?
The following is the code I used to freeze my model.

import tensorflow as tf
from tensorflow.python.framework import graph_util

output_node_names = "MobilenetV1/Predictions/Reshape"
saver = tf.train.import_meta_graph('/home/users/saman/yitao/tensorflow_android/models/research/slim/batch_32/model.ckpt-156300.meta', clear_devices=True)

graph = tf.get_default_graph()
input_graph_def = graph.as_graph_def()
sess = tf.Session()
saver.restore(sess, "/home/users/saman/yitao/tensorflow_android/models/research/slim/batch_32/model.ckpt-156300")
output_graph_def = graph_util.convert_variables_to_constants(
    sess,                          # the session is used to retrieve the weights
    input_graph_def,               # the graph_def is used to retrieve the nodes
    output_node_names.split(",")   # the output node names select the useful nodes
)
output_graph = "frozen-model-conv6-bat-32.pb"
with tf.gfile.GFile(output_graph, "wb") as f:
    f.write(output_graph_def.SerializeToString())

sess.close()

@GarryLau

GarryLau commented Aug 11, 2018

@ychen404
freeze_graph:

bazel-bin/tensorflow/python/tools/freeze_graph  \
--input_graph=/home/lg/Desktop/inception_v3/inception_v3_eval.pbtxt \
--input_checkpoint=/home/lg/Desktop/inception_v3/checkpoint/model.ckpt-20000 \
--input_binary=false \
--output_graph=/home/lg/Desktop/inception_v3/frozen_inception_v3_299.pb  \
--output_node_names=InceptionV3/Predictions/Reshape_1  \
--checkpoint_version=2

toco(float):

bazel-bin/tensorflow/contrib/lite/toco/toco \
--input_file=/home/lg/Desktop/inception_v3/frozen_inception_v3_299.pb \
--input_format=TENSORFLOW_GRAPHDEF  \
--output_format=TFLITE  \
--output_file=/home/lg/Desktop/inception_v3/frozen_graph_inception_v3.tflite \
--inference_type=FLOAT  \
--input_type=FLOAT \
--input_arrays=Placeholder  \
--output_arrays=InceptionV3/Predictions/Reshape_1  \
--input_shapes=1,299,299,3

toco(QUANTIZED_UINT8):

bazel-bin/tensorflow/contrib/lite/toco/toco \
--input_file=/home/lg/Desktop/inception_v3/frozen_inception_v3_299.pb \
--input_format=TENSORFLOW_GRAPHDEF  \
--output_format=TFLITE  \
--output_file=/home/lg/Desktop/inception_v3/frozen_graph_inception_v3.tflite \
--inference_type=QUANTIZED_UINT8  \
--input_type=QUANTIZED_UINT8 \
--input_arrays=Placeholder  \
--output_arrays=InceptionV3/Predictions/Reshape_1  \
--input_shapes=1,299,299,3 \
--default_ranges_min=0.0 \
--default_ranges_max=255.0
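For reference, --default_ranges_min/--default_ranges_max supply a dummy (min, max) range for tensors that carry no recorded quantization statistics; the converter turns such a range into a scale and zero-point roughly as sketched below (my own sketch of the standard 8-bit affine scheme, not TOCO's exact code). With 0.0/255.0 the mapping is the identity, which is convenient for smoke-testing but usually hurts accuracy:

```python
def range_to_quant_params(range_min, range_max, num_bits=8):
    """Derive the uint8 scale/zero-point implied by a (min, max) range:
    real_value == scale * (quantized_value - zero_point)."""
    qmax = (1 << num_bits) - 1                 # 255 for 8 bits
    scale = (range_max - range_min) / qmax
    zero_point = round(-range_min / scale)     # the uint8 code that represents 0.0
    return scale, zero_point

# The thread's default ranges 0.0..255.0 give the identity mapping:
scale, zp = range_to_quant_params(0.0, 255.0)
assert (scale, zp) == (1.0, 0)
assert scale * (200 - zp) == 200.0
```

A tighter range such as (-1.0, 1.0) would give a much smaller scale, which is why default ranges should roughly match the real activation ranges of the model.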

@ychen404

Hi GarryLau,

It's working now! Thank you very much!

@arun-kumark

arun-kumark commented Nov 14, 2019

Hello GarryLau,
Thanks for sharing the commands,
I am trying to convert my Keras model to a tflite model (8-bit quantized).
I hit the issue when I change --inference_type from FLOAT to QUANTIZED_UINT8.

When I use:

toco \
  --graph_def_file=./my_model.pb \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --output_file=./my_model.tflite \
  --inference_type=FLOAT \
  --input_shapes="1,1500,3" \
  --input_arrays=input_1 \
  --output_arrays='bottleneck/Elu' \
  --std_dev_values=128.0 --mean_values=0 \
  --allow_custom_ops \
  --default_ranges_min=0 \
  --default_ranges_max=255.0
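As an aside, --mean_values/--std_dev_values tell the converter how quantized input bytes map back to the real numbers the model was trained on (real = (quantized - mean) / std, per the TF Lite converter docs); they only take effect for quantized inference. A small sketch, with my own helper name:

```python
def dequantize_input(q, mean_value=0.0, std_dev_value=128.0):
    """Map a quantized input value back to a real number, the way the
    TF Lite converter interprets --mean_values/--std_dev_values
    (sketch only; helper name is mine)."""
    return (q - mean_value) / std_dev_value

# With mean 0 and std 128 (as in the command above), uint8 inputs
# 0..255 are interpreted as floats in roughly [0, 2).
assert dequantize_input(0) == 0.0
assert dequantize_input(128) == 1.0
```

If a Keras model expects a different input range (e.g. [-1, 1] or [0, 1]), these two values need to match the preprocessing used during training, or accuracy will suffer.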

It generates the tflite file, but without weight quantization.

Once I change --inference_type to QUANTIZED_UINT8, I get an abort; some logs are truncated below:

2019-11-14 16:34:35.800516: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After default min-max range propagation graph transformations pass 1: 131 operators, 326 arrays (1 quantized)
2019-11-14 16:34:35.803427: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before quantization graph transformations: 131 operators, 326 arrays (1 quantized)
2019-11-14 16:34:35.803555: F ./tensorflow/lite/toco/toco_tooling.h:38] Check failed: s.ok() Unimplemented: this graph contains an operator of type (Unsupported TensorFlow op: QuantizeV2) for which the quantized form is not yet implemented. Sorry, and patches welcome (that's a relatively fun patch to write, mostly providing the actual quantized arithmetic code for this op).
Fatal Python error: Aborted

Current thread 0x00007f7450bf5b80 (most recent call first):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 52 in execute
  File "/home/superuser/.local/lib/python3.6/site-packages/absl/app.py", line 250 in _run_main
  File "/home/superuser/.local/lib/python3.6/site-packages/absl/app.py", line 299 in run
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/platform/app.py", line 40 in run
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89 in main
  File "/usr/local/bin/toco_from_protos", line 8 in <module>
Aborted (core dumped)

I want to use post-training quantization for my model, but it fails at every step. Could you please help me?

My model architecture is below:
[architecture image attached]

Thanks in advance.

Kind Regards
Arun

@Naveen-Dodda

Hey Arun,

I feel the issue is with the TensorFlow version you are using; there might be a problem with toco_protos if you install TensorFlow from alternate sources. I have tried it myself using 1.15. The simplest way to try it yourself is to use Google Colab and generate the 8-bit quantized tflite file there.

Hope this answers your question; feel free to reach out to me.

Best,
Naveen Dodda

@arun-kumark

Dear Naveen,
I too am able to generate the tflite file (post-quantization), but the catch is that the file is not 8-bit quantized. Could you share the Google Colab code to do it? This is my first time using the Google Colab APIs.
How is it different from a system installation and environment?

In parallel, I am starting over with quantization-aware training, to avoid the post-training quantization problems.

Kind Regards
Arun

@Naveen-Dodda

Hello Arun,

Thanks for checking back. Could you share the details of your input and output, or what you have been using for converter.representative_dataset? I can try to perform 8-bit quantization on my end and will be happy to share my Colab code with you.

It's not very different from a system-installed version of TensorFlow, but Colab is a common environment, so we can be sure we are using the official version, and it is easier for a third person like me to troubleshoot without having to understand your local environment.

Trying out quantization-aware training could be a great option, but for some models post-training quantization can be easier (e.g. transfer learning or using a pre-trained model).

Happy to answer more questions if you have them.

Best,
Naveen Dodda
