
TFLite: model with Conv2DTranspose fails to convert with full int8 quantization #39720

Closed
wwwind opened this issue May 20, 2020 · 4 comments
Assignees: MeghnaNatraj
Labels: stat:awaiting tensorflower · TF 2.2 · TFLiteConverter · type:bug

Comments

wwwind (Contributor) commented May 20, 2020

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux
  • TensorFlow installed from (source or binary): tf-nightly
  • TensorFlow version (or github SHA if from source): tf-nightly

Command used to run the converter or code if you’re using the Python API
If possible, please share a link to Colab/Jupyter/any notebook.

This issue is very similar to a previously reported one, but here the problematic layer is Conv2DTranspose, so the model is different. I tested models with other layers and all of them convert fine, except this one and the separately logged issue mentioned above.

https://colab.research.google.com/drive/1g8wjs5D3N9blNpWYMIQ8R_AipZASUKH8?usp=sharing

import numpy as np
import tensorflow as tf

input_size = [5, 5, 2]
kernel_size = [3, 3, 6]
stride = [2, 2]

# Build a single-layer model around the problematic Conv2DTranspose op.
input_0 = tf.keras.layers.Input(shape=input_size)
layer_0 = tf.keras.layers.Conv2DTranspose(
    filters=kernel_size[-1],
    kernel_size=kernel_size[0:2],
    strides=stride,
    activation=None,
    use_bias=False,
    name="transpose_conv",
)(input_0)
model = tf.keras.models.Model(inputs=[input_0], outputs=[layer_0])
model.summary()

# Set random float32 weights on the transposed-conv layer.
keras_layer = [
    layer for layer in model.layers if layer.name == "transpose_conv"
][0]
keras_layer.set_weights(
    [
        np.random.rand(
            kernel_size[0],
            kernel_size[1],
            kernel_size[2],
            input_size[2],
        ).astype(np.float32)
    ]
)

# Representative dataset for post-training calibration.
num_calib = 1000

def _get_calib_data_func():
    def representative_data_gen():
        for _ in range(num_calib):
            yield [
                np.random.rand(
                    1, input_size[0], input_size[1], input_size[2],
                ).astype(np.float32)
            ]

    return representative_data_gen

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.representative_dataset = _get_calib_data_func()
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model_INT8 = converter.convert()
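For reference (not part of the original report): with Conv2DTranspose's default "valid" padding, the output spatial size is (in − 1) × stride + kernel, so the 5×5 input above produces an 11×11 output. A minimal sketch of that arithmetic:

```python
def conv2d_transpose_out_size(in_size: int, kernel: int, stride: int) -> int:
    """Output spatial size of a transposed convolution with 'valid' padding."""
    return (in_size - 1) * stride + kernel

# Values from the repro above: 5x5 input, 3x3 kernel, stride 2.
print(conv2d_transpose_out_size(5, 3, 2))  # -> 11
```

This matches the `(None, 11, 11, 6)` output shape that `model.summary()` reports for the layer.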

The output from the converter invocation

RuntimeError                              Traceback (most recent call last)

<ipython-input-18-717edec90ae0> in <module>()
      3 
      4 converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
----> 5 tflite_model_INT8 = converter.convert()

3 frames

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/optimize/calibrator.py in calibrate_and_quantize(self, dataset_gen, input_type, output_type, allow_float, resize_input)
     91     return self._calibrator.QuantizeModel(
     92         np.dtype(input_type.as_numpy_dtype()).num,
---> 93         np.dtype(output_type.as_numpy_dtype()).num, allow_float)
     94 
     95   def calibrate_and_quantize_single(self,

RuntimeError: Max and min for dynamic tensors should be recorded during calibration: Failed for tensor functional_5/transpose_conv/Shape
Empty min/max for tensor functional_5/transpose_conv/Shape
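The error says the calibrator never recorded min/max statistics for the `Shape` tensor that Conv2DTranspose uses to compute its output size, and without a min/max range the converter cannot derive quantization parameters for that tensor. As a rough illustration of why the range is essential (a simplified sketch, not TFLite's actual implementation, which additionally nudges the range and handles per-channel cases):

```python
def int8_params_from_minmax(t_min: float, t_max: float):
    """Derive an affine int8 (scale, zero_point) from calibrated min/max.

    Simplified sketch: with no recorded min/max there is nothing to
    divide by, which is exactly what the calibrator complains about.
    """
    qmin, qmax = -128, 127
    # The representable range must include zero.
    t_min, t_max = min(t_min, 0.0), max(t_max, 0.0)
    scale = (t_max - t_min) / (qmax - qmin)
    zero_point = int(round(qmin - t_min / scale))
    return scale, zero_point

# e.g. activations produced by np.random.rand() lie in [0, 1):
scale, zp = int8_params_from_minmax(0.0, 1.0)
```

A `Shape` op outputs integer shape data rather than calibrated float activations, so no min/max is ever recorded for it, hence the failure above.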


@wwwind wwwind added the TFLiteConverter For issues related to TFLite converter label May 20, 2020
Saduf2019 (Contributor) commented May 20, 2020

I am able to replicate this issue; please find the gist here. Thanks!

@Saduf2019 Saduf2019 added TF 2.2 Issues related to TF 2.2 type:bug Bug labels May 20, 2020
@Saduf2019 Saduf2019 assigned ymodak and unassigned Saduf2019 May 20, 2020
@ymodak ymodak assigned MeghnaNatraj and unassigned ymodak May 21, 2020
alxhoff (Contributor) commented May 26, 2020

I am also experiencing a similar problem with a similar model.

@lvenugopalan lvenugopalan added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jun 4, 2020
MeghnaNatraj (Member) commented

The issue seems to be resolved with the latest tf-nightly ('2.3.0-dev20200610') and TF 2.2.0. Feel free to re-open it if you still face this issue.
