TFLiteConverter.from_saved_model - batchNorm is not supported? #23627

Closed
milinddeore opened this issue Nov 9, 2018 · 8 comments
Labels: comp:lite TF Lite related issues

milinddeore commented Nov 9, 2018


System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No, I tried a TensorFlow example code snippet.

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Google Colab (Linux bedbba52137a 4.14.65+ #1 SMP Sun Sep 9 02:18:33 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux)

  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: None

  • TensorFlow installed from (source or binary): pip on Google Colab, using the following command:

pip3 install --upgrade tf-nightly

  • TensorFlow version (use command below): version: 1.13.0-dev20181109
import tensorflow as tf 
print(tf.__version__)

  • Python version: Python 3.6.6

  • Bazel version (if compiling from source): None

  • GCC/Compiler version (if compiling from source): None

  • CUDA/cuDNN version: using command 'nvcc --version'
    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2018 NVIDIA Corporation
    Built on Tue_Jun_12_23:07:04_CDT_2018
    Cuda compilation tools, release 9.2, V9.2.148

  • GPU model and memory:


Describe the current behavior
I have a model here, which is exported as a SavedModel using the following code:

# SavedModel using simple_save()

ins = {"phase_train_placeholder":phase_train_placeholder}
outs = {"embeddings":embeddings}
tf.saved_model.simple_save(sess, '/content/generated/', ins, outs)

When I convert this SavedModel to TFLite it gives me an error; the code snippet is:

import tensorflow as tf

saved_model_dir = '/content/generated/'

converter = tf.contrib.lite.TFLiteConverter.from_saved_model(saved_model_dir, input_arrays=['phase_train'], input_shapes=(1,160,160,3), 
                                                             output_arrays=['embeddings'])
tflite_model = converter.convert()
open("converted_model_savedModel.tflite", "wb").write(tflite_model)

Following are error logs:

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert_saved_model.py:61: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/queue_runner_impl.py:391: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the `tf.data` module.
INFO:tensorflow:Restoring parameters from /content/generated/variables/variables
INFO:tensorflow:The given SavedModel MetaGraphDef contains SignatureDefs with the following keys: {'serving_default'}
INFO:tensorflow:input tensors info: 
INFO:tensorflow:Tensor's key in saved_model's tensor_map: phase_train_placeholder
INFO:tensorflow: tensor name: phase_train:0, shape: unknown_rank, type: DT_BOOL
INFO:tensorflow:output tensors info: 
INFO:tensorflow:Tensor's key in saved_model's tensor_map: embeddings
INFO:tensorflow: tensor name: embeddings:0, shape: (-1, 512), type: DT_FLOAT
INFO:tensorflow:Restoring parameters from /content/generated/variables/variables
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-4-63a92824a047> in <module>()
      6 
      7 converter = tf.contrib.lite.TFLiteConverter.from_saved_model(saved_model_dir, input_arrays=['phase_train'], input_shapes=(1,160,160,3), 
----> 8                                                              output_arrays=['embeddings'])
      9 tflite_model = converter.convert()
     10 open("converted_model_savedModel.tflite", "wb").write(tflite_model)

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in from_saved_model(cls, saved_model_dir, input_arrays, input_shapes, output_arrays, tag_set, signature_key)
    342 
    343     result = _freeze_saved_model(saved_model_dir, input_arrays, input_shapes,
--> 344                                  output_arrays, tag_set, signature_key)
    345     return cls(
    346         graph_def=result[0], input_tensors=result[1], output_tensors=result[2])

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert_saved_model.py in freeze_saved_model(saved_model_dir, input_arrays, input_shapes, output_arrays, tag_set, signature_key)
    254     in_tensors = _get_tensors(graph, inputs, input_arrays)
    255     out_tensors = _get_tensors(graph, outputs, output_arrays)
--> 256     set_tensor_shapes(in_tensors, input_shapes)
    257 
    258     output_names = [node.split(":")[0] for node in outputs]

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert_saved_model.py in set_tensor_shapes(tensors, shapes)
    201   if shapes:
    202     for tensor in tensors:
--> 203       shape = shapes.get(tensor_name(tensor))
    204       if shape is not None:
    205         tensor.set_shape(shape)

AttributeError: 'tuple' object has no attribute 'get'
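For context, this AttributeError happens because the converter's shape-setting helper looks up each input name with `shapes.get(...)`, which only dict-like objects support. A minimal pure-Python sketch of that lookup (the helper name here is hypothetical, not the actual TF source):

```python
# Hypothetical stand-in for the converter's set_tensor_shapes logic:
# input_shapes must map input names to shapes, so a bare tuple fails.
def set_tensor_shapes_sketch(tensor_names, shapes):
    resolved = {}
    if shapes:
        for name in tensor_names:
            shape = shapes.get(name)  # AttributeError if shapes is a tuple
            if shape is not None:
                resolved[name] = shape
    return resolved

# Works: a dict keyed by input name
set_tensor_shapes_sketch(["phase_train"], {"phase_train": [1, 160, 160, 3]})
# Fails: a bare tuple raises AttributeError: 'tuple' object has no attribute 'get'
```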

Update on 10-Nov-2018

I had to give input_shapes as a dictionary, in the following way:

import tensorflow as tf

saved_model_dir = '/content/generated/'

converter = tf.contrib.lite.TFLiteConverter.from_saved_model(saved_model_dir, input_arrays=['phase_train'], input_shapes={"phase_train":[1,160,160,3]}, output_arrays=['embeddings'])

tflite_model = converter.convert()
open("converted_model_savedModel.tflite", "wb").write(tflite_model) 

This fixed the earlier error, but now I see a different error; the logs are below:

INFO:tensorflow:Restoring parameters from /content/generated/variables/variables
INFO:tensorflow:The given SavedModel MetaGraphDef contains SignatureDefs with the following keys: {'serving_default'}
INFO:tensorflow:input tensors info: 
INFO:tensorflow:Tensor's key in saved_model's tensor_map: phase_train_placeholder
INFO:tensorflow: tensor name: phase_train:0, shape: unknown_rank, type: DT_BOOL
INFO:tensorflow:output tensors info: 
INFO:tensorflow:Tensor's key in saved_model's tensor_map: embeddings
INFO:tensorflow: tensor name: embeddings:0, shape: (-1, 512), type: DT_FLOAT
INFO:tensorflow:Restoring parameters from /content/generated/variables/variables
INFO:tensorflow:Froze 490 variables.
INFO:tensorflow:Converted 490 variables to const ops.
---------------------------------------------------------------------------
ConverterError                            Traceback (most recent call last)
<ipython-input-53-91d1899f3204> in <module>()
      8 converter = tf.contrib.lite.TocoConverter.from_saved_model(saved_model_dir, input_arrays=['phase_train'], input_shapes={"phase_train":[1,160,160,3]}, 
      9                                                    output_arrays=['embeddings'])
---> 10 tflite_model = converter.convert()

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in convert(self)
    454           input_tensors=self._input_tensors,
    455           output_tensors=self._output_tensors,
--> 456           **converter_kwargs)
    457     else:
    458       result = _toco_convert_graph_def(

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, *args, **kwargs)
    395   data = toco_convert_protos(model_flags.SerializeToString(),
    396                              toco_flags.SerializeToString(),
--> 397                              input_data.SerializeToString())
    398   return data
    399 

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str)
    170       stderr = _try_convert_to_unicode(stderr)
    171       raise ConverterError(
--> 172           "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
    173   finally:
    174     # Must manually cleanup files.

ConverterError: TOCO failed. See console for info.
2018-11-11 08:46:00.208147: I tensorflow/lite/toco/import_tensorflow.cc:1280] Converting unsupported operation: FIFOQueueV2
2018-11-11 08:46:00.216527: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 20
2018-11-11 08:46:00.216572: I tensorflow/lite/toco/import_tensorflow.cc:1280] Converting unsupported operation: QueueDequeueUpToV2
2018-11-11 08:46:00.216749: I tensorflow/lite/toco/import_tensorflow.cc:1280] Converting unsupported operation: RefSwitch
2018-11-11 08:46:00.216793: I tensorflow/lite/toco/import_tensorflow.cc:1280] Converting unsupported operation: AssignSub
2018-11-11 08:46:00.216846: I tensorflow/lite/toco/import_tensorflow.cc:1280] Converting unsupported operation: RefSwitch

....... logs dropped here 

2018-11-11 08:46:00.291969: I tensorflow/lite/toco/import_tensorflow.cc:1280] Converting unsupported operation: RefSwitch
2018-11-11 08:46:00.292018: I tensorflow/lite/toco/import_tensorflow.cc:1280] Converting unsupported operation: AssignSub
2018-11-11 08:46:00.292076: I tensorflow/lite/toco/import_tensorflow.cc:1280] Converting unsupported operation: RefSwitch
2018-11-11 08:46:00.292113: I tensorflow/lite/toco/import_tensorflow.cc:1280] Converting unsupported operation: AssignSub
2018-11-11 08:46:00.937387: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 5600 operators, 9398 arrays (0 quantized)
2018-11-11 08:46:01.526448: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 3582 operators, 6259 arrays (0 quantized)
2018-11-11 08:46:01.979950: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 3582 operators, 6259 arrays (0 quantized)
2018-11-11 08:46:01.982607: F tensorflow/lite/toco/graph_transformations/resolve_batch_normalization.cc:45] Check failed: IsConstantParameterArray(*model, bn_op->inputs[1]) && IsConstantParameterArray(*model, bn_op->inputs[2]) && IsConstantParameterArray(*model, bn_op->inputs[3]) Batch normalization resolution requires that mean, multiplier and offset arrays be constant.
Aborted (core dumped)
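For context on the failed check above: TOCO's batch-norm resolution folds y = (x - mean) * multiplier + offset into a single per-channel scale/shift, which is only possible when mean, multiplier, and offset are constant arrays rather than variables still being updated by training ops. A rough pure-Python sketch of that algebra (hypothetical helper names, my own simplification):

```python
# Sketch of the algebra behind batch-norm folding (hypothetical helpers):
# y = (x - mean) * multiplier + offset  ==  x * scale + shift,
# where scale/shift can be precomputed only from constant arrays.
def fold_batch_norm(mean, multiplier, offset):
    scale = list(multiplier)
    shift = [o - mu * m for mu, m, o in zip(mean, multiplier, offset)]
    return scale, shift

def apply_folded(x, scale, shift):
    return [xi * s + b for xi, s, b in zip(x, scale, shift)]

# Per channel: (3 - 1) * 2 + 0.5 == 3 * 2 + (-1.5) == 4.5
scale, shift = fold_batch_norm([1.0], [2.0], [0.5])
```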

Describe the expected behavior
It should create a *.tflite file instead.

Code to reproduce the issue

I have provided the SavedModel and the code snippet above to reproduce it.

Other info / logs

@ymodak ymodak added the comp:lite TF Lite related issues label Nov 9, 2018
@milinddeore milinddeore changed the title TFLiteConverter.from_saved_model - 'tuple' object has no attribute 'get' TFLiteConverter.from_saved_model - batchNorm is not supported? Nov 11, 2018
@deercoder

I have the same error. When I use the Transform Graph tool to get a quantized .pb file and then use the following command to convert it to TF Lite, it fails with the error "Batch normalization resolution requires that mean, multiplier and offset arrays be constant."

tflite_convert --output_file=${DATASET_DIR}/quantized_mobilenetv2.tflite \
        --graph_def_file=${DATASET_DIR}/quantized_graph.pb \
        --input_shapes=1,224,224,3 \
        --allow_custom_ops=true \
        --input_arrays=input \
        --output_arrays=MobilenetV2/Predictions/Reshape_1 \
        --mean_values=128 \
        --std_dev_values=127
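As a side note on the `--mean_values`/`--std_dev_values` flags in the command above: to my understanding, TOCO uses them to map quantized uint8 input values back to real values as real = (quantized - mean_value) / std_dev_value. A quick sketch:

```python
# Sketch of how TOCO interprets --mean_values/--std_dev_values
# (hedged: my reading of the flags): real = (q - mean) / std_dev.
def dequantize(q, mean_value=128, std_dev_value=127):
    return (q - mean_value) / std_dev_value

# With mean=128, std_dev=127 a uint8 input maps roughly onto [-1, 1]:
# dequantize(1) == -1.0, dequantize(128) == 0.0, dequantize(255) == 1.0
```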

Any ideas or suggestions would be greatly appreciated. Thanks.

@srjoglekar246
Contributor

@milinddeore Are you trying to convert a graph that performs model training? We can typically only convert graphs that perform eval. Can you post your code that builds the graph?

@srjoglekar246 srjoglekar246 assigned srjoglekar246 and unassigned gargn Feb 8, 2019
@milinddeore
Author

@srjoglekar246 This is the FaceNet model, the source code is here, and I modified train_softmax.py for the .tflite conversion.

I tried converting to .tflite from the SavedModel and from a frozen graph, but no luck. But today I saw that a lot has changed on the page.

Here is the other thread, where a similar issue is seen.

@milinddeore
Author

I was able to solve it; please check #19431

@srjoglekar246
Contributor

@milinddeore I think this makes sense. Sorry I didn't get to your issue in time, but it seems like the error occurred because you were earlier trying to convert the part of the graph that was doing the training?

In this attempt, you essentially add an input/output interface to the saved graph and use just that with the converter. Am I correct?

Thanks for resolving this :-). Can we close the bug?

@milinddeore
Author

milinddeore commented Feb 26, 2019

@srjoglekar246 That's correct!
It strips off phase_train, the training input tensor, and keeps only the inference inputs.
That essentially means that all the BatchNorm ops run with is_training=False, and many other training-specific ops are removed.
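The effect described above can be pictured with a toy graph-pruning sketch (pure Python, hypothetical node names): freezing for inference keeps only the nodes the inference output transitively depends on, so training-only ops (AssignSub, queue ops, the phase_train switch branches) simply fall away.

```python
# Toy sketch: graph as {node: [input nodes]}; keep only what the
# inference output transitively depends on (hypothetical node names).
def prune_for_inference(graph, output):
    keep, stack = set(), [output]
    while stack:
        node = stack.pop()
        if node in keep:
            continue
        keep.add(node)
        stack.extend(graph.get(node, []))
    return keep

graph = {
    "embeddings": ["batchnorm"],
    "batchnorm": ["conv"],
    "conv": ["image_input"],
    "assign_sub": ["batchnorm"],   # training-only update op
    "phase_train": [],             # training flag placeholder
}
# prune_for_inference(graph, "embeddings") drops assign_sub and phase_train
```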

I have a question here: is there a document where we can see all the supported ops on mobile (.tflite) per release?

@srjoglekar246
Contributor

Thanks for confirming! Closing this one...

@srjoglekar246
Contributor

@milinddeore The Compatibility Guide should be a good resource for most ops. However, there is a slight chance it might be outdated; in that case you can look into our kernels directory.
