Does TFLite support input shape=[1,32,None,3]? #29590

Closed
gds101054108 opened this issue Jun 10, 2019 · 34 comments
Labels: comp:lite, stale, stat:awaiting response, type:support

Comments

@gds101054108

System information
Linux Ubuntu 16.04
tf-cpu-1.13.1

I used TensorFlow to train a CRNN+CTC OCR model. The width of the text line is variable, but when I convert the .pb to .tflite I get: ValueError: None is only supported in the 1st dimension. Tensor 'input_images' has invalid shape [1, 32, None, 3].

@gadagashwini-zz gadagashwini-zz self-assigned this Jun 11, 2019
@gadagashwini-zz gadagashwini-zz added comp:lite TF Lite related issues type:support Support issues labels Jun 11, 2019
@ymodak ymodak assigned haozha111 and unassigned ymodak Jun 14, 2019
@ymodak ymodak added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jun 14, 2019
@haozha111
Contributor

I believe the converter currently can't handle unknown dimensions other than the batch dimension. You could try using a fixed length (such as the maximum length) instead.

@tensorflowbutler tensorflowbutler removed the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jun 20, 2019
@gds101054108
Author

If I use a fixed length (the maximum width), the runtime will be too long for small widths.

@haozha111
Contributor

I see.

Currently, dynamic input shapes are not supported in TFLite. However, a workaround could be:

  1. Set the unknown dimension to a fixed value during conversion.
  2. Then try the interpreter.resize_tensor_input() method to resize the input tensor at inference time.

This path isn't guaranteed to always work as expected, but there's no harm in giving it a try. Thanks!
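
A minimal sketch of that two-step workaround (the frozen-graph path, the output tensor name, the fixed width of 256, and the inference width of 320 are illustrative placeholders, not values from this issue):

import numpy as np
import tensorflow as tf  # 1.x-style API, matching the reporter's tf 1.13

# 1. Pin the unknown width to a fixed value during conversion.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'model.pb',
    input_arrays=['input_images'],
    output_arrays=['output'],                        # placeholder name
    input_shapes={'input_images': [1, 32, 256, 3]})  # None -> 256
tflite_model = converter.convert()

# 2. At inference time, try resizing the input tensor to the real width.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]['index'], [1, 32, 320, 3])
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'],
                       np.zeros((1, 32, 320, 3), dtype=np.float32))
interpreter.invoke()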

@melody-rain

@haozha111 It seems your method does not work.
After calling

interpreter.resize_tensor_input(...)

and

interpreter.set_tensor(input_details[0]['index'], im)

the program crashes with:

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

@haozha111
Contributor

Hi Melody, sorry that it doesn't work for you. We are actively working on dynamic shapes in TF Lite, and hope to make the user experience better. Adding Nupur who is working in this area.

@mcanerim

Any news on dynamic shapes in TF Lite?

@magneter

magneter commented Oct 5, 2019

I tried resizing the input tensor; it would crash within seconds.
I also tried the 3 post_training_quantize methods: interpreter.invoke takes about 3 times as long as before I used the optimization method, even though the model is actually smaller.

@hamlatzis

Or at least, can we have a variable batch size? I resize the input/output tensors, but when I call allocate_tensors (from either Python or Java) I get an error inside the reshape operator saying the input and output tensors don't have the same number of elements.

@haozha111 haozha111 removed their assignment Oct 17, 2019
@gargn

gargn commented Jan 31, 2020

We added support for unknown dimensions in TensorFlow Lite today (5591208).

Can you try converting your model again with tonight's (1/31) tf-nightly once it's released (pip install tf-nightly)? Convert the model with experimental_new_converter = True.

When you load the model it should have an additional field shape_signature that contains the shape with any unknown dimensions marked with -1. shape will have those dimensions marked with 1.

You can then call ResizeInputTensor with the desired shape when running the interpreter. The generated model will only work on the latest TensorFlow version (i.e. the interpreter on the tf-nightly version you are running).

If it does not work, can you provide a detailed error and repro instructions?
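
A minimal sketch of that flow (the SavedModel directory name and the resize shape [1, 32, 320, 3] are illustrative placeholders):

import numpy as np
import tensorflow as tf  # tf-nightly, per the comment above

# Convert with the new (MLIR-based) converter so unknown dimensions survive.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
converter.experimental_new_converter = True
tflite_model = converter.convert()

# Unknown dimensions are reported as 1 in 'shape' and as -1 in 'shape_signature'.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_details = interpreter.get_input_details()
print(input_details[0]['shape'])
print(input_details[0]['shape_signature'])

# Resize to the concrete shape you want to run, then allocate and invoke.
interpreter.resize_tensor_input(input_details[0]['index'], [1, 32, 320, 3])
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'],
                       np.zeros((1, 32, 320, 3), dtype=np.float32))
interpreter.invoke()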

@nekapoor

@gargn I exported a .tflite file generated from AutoML. The default input is 224x224. Any ideas on how to change the input to 200x80 and then save and download a new .tflite file? I can't seem to find documentation on this. I know I can use resize_tensor_input, but then I have to do the inference at that time. I'd love to get a new .tflite file that has a new input tensor altogether.

Thoughts?

@a-rich

a-rich commented Mar 17, 2020

(quoting @gargn's comment above on unknown-dimension support, experimental_new_converter, shape_signature, and ResizeInputTensor)

I tried this with
tf.__version__ = '2.2.0-dev20200317'

and it's not working -- I believe it's because the uint8 dtype (i.e. Coral USB compilation?) isn't supported yet. Can this be?

converter = lite.TFLiteConverter.from_saved_model('saved_yolo/')  
converter.experimental_new_converter = True

My data needs to be uint8 so my tflite model can be converted to an Edge TPU model...
but it seems that the converter won't work with this data:

In [27]: converter._is_unknown_shapes_allowed(fp32_execution=next(
                 representative_dataset_gen())[0][0].dtype == np.float32)                                                                 
Out[27]: False

In [28]: converter._is_unknown_shapes_allowed(fp32_execution=next(
                  representative_dataset_gen())[0][0].dtype == np.uint8)                                                                   
Out[28]: True
converter.representative_dataset = representative_dataset_gen 
converter.target_spec.supported_ops = [lite.OpsSet.TFLITE_BUILTINS_INT8] 
converter.optimizations = [lite.Optimize.DEFAULT] 
converter.inference_input_type = tf.uint8 
converter.inference_output_type = tf.uint8 
tflite_model = converter.convert()                                                                                                                                                             
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-18-c548bab089a8> in <module>
----> 1 tflite_model = converter.convert()

~/.pyenv/versions/3.7.6/envs/experimental_converter/lib/python3.7/site-packages/tensorflow/lite/python/lite.py in convert(self)
   556               "None is only supported in the 1st dimension. Tensor '{0}' has "
   557               "invalid shape '{1}'.".format(
--> 558                   _get_tensor_name(tensor), shape_list))
   559         elif shape_list and shape_list[0] is None:
   560           # Set the batch size to 1 if undefined.

ValueError: None is only supported in the 1st dimension. Tensor 'input_1' has invalid shape '[None, None, None, 3]'.

As an aside, if any of you know how I can make the TF 1.x YOLO model I'm trying to convert have fixed tensor shapes so I can compile it, that would be amazing!

I've only started learning TF now that the nice 2.x APIs are there for me to use, and I don't know anything about TF 1.x idioms.

@adriancaruana

@gargn

(quoting @gargn's comment above)

I've tried this in 2.2.0-rc2 with a Keras model and I'm having issues as well. Minimal repro:

import tensorflow as tf
from tensorflow.python.keras import Model, Input
from tensorflow.python.keras.layers import Conv2D

print('version:', tf.__version__)

i = Input(shape=(None, None, 3))
x = Conv2D(32, (3, 3))(i)

m = Model(i, x)
converter = tf.lite.TFLiteConverter.from_keras_model(m)
# MLIR enabled by default in 2.2.0, but check anyway:
print('MLIR enabled?', converter.experimental_new_converter)
tflite_model = converter.convert()

Output:

version: 2.2.0-rc2
MLIR enabled? True
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-1-f6a6483ac38d> in <module>
     12 # MLIR enabled by default in 2.2.0, but check anyway:
     13 print('MLIR enabled?', converter.experimental_new_converter)
---> 14 tflite_model = converter.convert()

~/.local/lib/python3.7/site-packages/tensorflow/lite/python/lite.py in convert(self)
    481               "None is only supported in the 1st dimension. Tensor '{0}' has "
    482               "invalid shape '{1}'.".format(
--> 483                   _get_tensor_name(tensor), shape_list))
    484         elif shape_list and shape_list[0] is None:
    485           # Set the batch size to 1 if undefined.

ValueError: None is only supported in the 1st dimension. Tensor 'input_1' has invalid shape '[None, None, None, 3]'.

@mikkelam

mikkelam commented Apr 4, 2020

Having the same issue as @adriancaruana.

@digital-nomad-cheng

@gargn

(quoting @gargn's instructions and @adriancaruana's minimal repro above)

Having the same issue here.

@james34602

We can specify an arbitrary non-batch dimension for the dynamic axis using the tensorflow-2.2.0-rc3 tflite_convert.

However, the tflite-runtime cannot recalculate the appropriate output tensor dimensions when the model needs it to.

import tflite_runtime.interpreter as tflite  # tflite_runtime package, as in the traceback in my later comment

interpreter = tflite.Interpreter(model_path="densenet.tflite")
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
width = 277  # Any number that is valid for the model
interpreter.resize_tensor_input(input_details[0]['index'], (1, 32, width, 1))
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], X)  # X: input array of shape (1, 32, width, 1)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]['index'])

Something like this will occur:

RuntimeError: tensorflow/lite/kernels/reshape.cc:66 num_input_elements != num_output_elements (3072 != 13698)

It looks like tflite is not yet able to update all internal operator input/output dimensions accordingly.

@jvishnuvardhan
Contributor

@adriancaruana I think your issue was resolved already in recent tf-nightly. Please check the gist here. Thanks!

@jvishnuvardhan jvishnuvardhan added the stat:awaiting response Status - Awaiting response from author label May 10, 2020
@jvishnuvardhan
Contributor

@james34602 Can you print input_details and output_details? Can you share the tflite file or model file? Thanks!

@james34602

@jvishnuvardhan
Absolutely!
https://drive.google.com/file/d/1ymi9BP6QQB1J0am3kkZLKdl6qQkFj_-f/view?usp=sharing

I removed most of the parameters in the model; however, the I/O of the model remains identical to the original, un-shrunk model file.

---Python print start---
input_details:
[{'name': 'the_input', 'index': 44, 'shape': array([1, 32, 2112, 1]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}]

output_details:
[{'name': 'out/truediv', 'index': 36, 'shape': array([], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}}]

Traceback (most recent call last):
File "runDensenet.py", line 15, in
text = keras_densenet(image)
File "C:\Users\XXX\densenetCustomTrial\densenet\modelSing.py", line 52, in predict
interpreter.allocate_tensors()
File "C:\Users\XXX\Anaconda3\envs\ocrtest\lib\site-packages\tflite_runtime\interpreter.py", line 243, in allocate_tensors
return self._interpreter.AllocateTensors()
File "C:\Users\XXX\Anaconda3\envs\ocrtest\lib\site-packages\tflite_runtime\tensorflow_wrap_interpreter_wrapper.py", line 110, in AllocateTensors
return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: tensorflow/lite/kernels/reshape.cc:66 num_input_elements != num_output_elements (192 != 0) Node number 11 (RESHAPE) failed to prepare.
---Python print end---
Notice that:

  1. The 2112 in [1, 32, 2112, 1] should have been None in the first place; tricks were needed to get tflite_convert working.
  2. The tflite model runs completely correctly when the input width is 2112, but not when the input width != 2112.

We hope the TensorFlow team fixes the problem... even MATLAB's MatConvNet can handle dynamic I/O shapes...

@lynx97

lynx97 commented May 15, 2020

I used a concrete_function to set the input shape when converting a saved_model to the tflite format.
The conversion is successful, but when I try to run inference on Android, the output I get from the converted model has shape 0.
In saved_model_cli, the output_shape is (-1, -1, -1, 2).

@jvishnuvardhan
Contributor

jvishnuvardhan commented May 15, 2020

@james34602 I printed the input_details and found that None (-1) is not present in the signature. Please check the input_details.

Input_details: [{'name': 'the_input', 'index': 44, 'shape': array([ 1, 32, 2112, 1], dtype=int32), 'shape_signature': array([ 1, 32, 2112, 1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
If you had defined the 3rd dimension as None, then the signature in the input_details should look like 'shape_signature': array([ 1, 32, -1, 1], dtype=int32). Please check @gargn's comment on the None dimension here.

Can you update the model with None and convert again? Here is a gist for your reference. Hope it helps. Thanks!
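
A minimal sketch of that suggestion (the tiny Keras model is illustrative, not the shared DenseNet; the point is only that the width is left as None before conversion):

import tensorflow as tf  # recent tf-nightly / 2.x with the new converter

# Leave the width dimension as None so it survives as -1 in shape_signature.
inputs = tf.keras.Input(shape=(32, None, 1), batch_size=1)
outputs = tf.keras.layers.Conv2D(8, 3, padding='same')(inputs)
model = tf.keras.Model(inputs, outputs)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
print(interpreter.get_input_details()[0]['shape_signature'])
# Should report the unknown width as -1, e.g. [ 1 32 -1  1]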

@jvishnuvardhan
Contributor

@lynx97 Can you please create a new issue and provide standalone code to reproduce it? A new issue is easier to follow for other users facing a similar problem. Thanks!

@james34602

james34602 commented May 16, 2020

@jvishnuvardhan Thanks for your attention.
The version of TensorFlow I was using was tensorflow-2.2.0-rc3. Sorry about that; I thought RC versions contained some of the experimental features, but converting a model that contains a non-batch None axis is not supported there.

@james34602

Hello guys, @jvishnuvardhan:
tf-nightly can convert the None axis in all of my test models, and it works perfectly, thanks.

import tflite_runtime.interpreter as tflite

# img: a grayscale text-line image; width: its (variable) pixel width
interpreter = tflite.Interpreter(model_path="densenet/densenet.tflite")
X = img.reshape([1, 32, width, 1])
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.resize_tensor_input(input_details[0]['index'], (1, 32, width, 1))
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], X)
interpreter.invoke()
Y = interpreter.get_tensor(output_details[0]['index'])

@jvishnuvardhan
Contributor

@gds101054108 Can you please verify and close the issue if this has been resolved for you? A couple of other users above who had a similar issue confirmed that None in a non-batch dimension works as expected. Thanks!

@google-ml-butler

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.

@google-ml-butler google-ml-butler bot added the stale This label marks the issue/pr stale - to be closed automatically if no activity label May 23, 2020
@google-ml-butler

Closing as stale. Please reopen if you'd like to work on this further.

@zheyangshi

(quoting @gargn's comment above)

Hi! I am very curious about this trick, but I don't know how to achieve it. Could you give me an example?
[screenshot]
Actually, I followed this guy's guide, and it seems to work. So is the last step necessary?
[screenshot]

@purva98

purva98 commented Jun 7, 2020

Hey @james34602 @jvishnuvardhan @gargn, did it work for a None type?
I see that your input shape is [1, 32, width, 1].
Is it going to work for something like [None, None, None, 4]?
I tried it for [None, None, None, 4] and it did not work; any suggestions?

Also, [1, 32, width, 1] is not the same as [1, 32, None, 1], right?

@james34602

@purva98
tf-nightly solves many of these problems; give it a shot.
[1, 32, width, 1] is the same as [1, 32, None, 1]: width is not predetermined.

@marko-radojcic

marko-radojcic commented Jun 7, 2020 via email

@tjdevWorks

tjdevWorks commented Jul 14, 2020

Hey @james34602 @jvishnuvardhan @gargn

I have been facing similar issues to those listed in this thread.
I have a model trained with TensorFlow 1.15 and wanted to convert it to a tflite model, and I have been using the tf-nightly package for it (version: 2.4.0-dev20200714).

Since the model requires dynamic input sizes, I set the width and height to None; after converting it to tflite, these were the input tensors:
[{'dtype': numpy.float32,
'index': 0,
'name': 'input_image:0',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([1, 1, 1, 3], dtype=int32),
'shape_signature': array([ 1, -1, -1, 3], dtype=int32),
'sparsity_parameters': {}}]

For resizing and setting the input data I followed this snippet, where batch_image has shape [1, height, width, 3]:

interpreter.resize_tensor_input(input_details[0]['index'], (1, batch_image.shape[1], batch_image.shape[2], 3), strict=True)
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], input_data)

You can see that the input details' shape has changed after executing the above snippet:

interpreter.get_input_details() 

[{'dtype': numpy.float32,
'index': 0,
'name': 'input_image:0',
'quantization': (0.0, 0),
'quantization_parameters': {'quantized_dimension': 0,
'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32)},
'shape': array([ 1, 286, 300, 3], dtype=int32),
'shape_signature': array([ 1, -1, -1, 3], dtype=int32),
'sparsity_parameters': {}}]

But on executing interpreter.invoke(), I am facing 2 issues:

  1. A runtime error on some images; these same images work perfectly fine without tflite:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-59-7d35ed1dfe14> in <module>()
----> 1 interpreter.invoke()

/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/interpreter.py in invoke(self)
    522     """
    523     self._ensure_safe()
--> 524     self._interpreter.Invoke()
    525 
    526   def reset_all_variables(self):

RuntimeError: tensorflow/lite/kernels/kernel_util.cc:249 d1 == d2 || d1 == 1 || d2 == 1 was not true.Node number 49 (ADD) failed to prepare.
tensorflow/lite/kernels/kernel_util.cc:249 d1 == d2 || d1 == 1 || d2 == 1 was not true.Node number 37 (ADD) failed to prepare.
  2. It works on the other images, but with a very high execution time (about a 10x increase) compared to executing the model without tflite directly from the SavedModel format.

I have put together a Colab notebook to reproduce both situations.
https://colab.research.google.com/drive/1OqzaBvzOCKOr6G-R7mQVPsnkhQm51jfY?usp=sharing

Model Files and Test Images (for both cases):
https://drive.google.com/file/d/1JErOmr9kyQYJFfTW9QwrZMJ6gM_3e9ZH/view?usp=sharing

  • Can you please help me identify the problem and a possible solution?
  • Can you also please share the recommended way to use tflite with dynamic-input-size models while still getting the performance gains tflite says we can achieve?

@LinJM

LinJM commented Aug 27, 2020

  • Mac OS
  • TF 2.3.0

It still has the problem.

RuntimeError: tensorflow/lite/kernels/reshape.cc:66 num_input_elements != num_output_elements (416 != 0)Node number 11 (RESHAPE) failed to prepare.

@CaptainDario

I have the same problem as @tjdevWorks.
When using:

import tensorflow as tf

# path: the converted .tflite model file; input_data: an array matching the resized shape (1, 640, 427, 3)
interpreter = tf.lite.Interpreter(model_path=path)
input_details = interpreter.get_input_details()
interpreter.resize_tensor_input(input_details[0]['index'], (1, 640, 427, 3))
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

I get:

---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-70-8ab2d11515fd> in <module>()
     12 interpreter.set_tensor(input_details[0]['index'], input_data)
     13 
---> 14 interpreter.invoke()

/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/interpreter.py in invoke(self)
    538     """
    539     self._ensure_safe()
--> 540     self._interpreter.Invoke()
    541 
    542   def reset_all_variables(self):

RuntimeError: tensorflow/lite/kernels/kernel_util.cc:404 d1 == d2 || d1 == 1 || d2 == 1 was not true.Node number 85 (ADD) failed to prepare.

@jvishnuvardhan Is there any news on this, or can somebody provide some help?

@nguyenducanson

Any news on dynamic shapes in TF Lite?
