
Issue converting input/output from float32 to int8 (convert using float fallback quantization) #47024

@Paryavi

Description


Hi there, I have an issue converting a model with float32 inputs to int8.
My code: https://colab.research.google.com/drive/1EGMqQlos_NovF3qakNVo0PLgvvZukLtB#scrollTo=TqOt6Sv7AsMi

Details:
I used the standard transfer-learning code:
https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/transfer_learning.ipynb

The only change is that instead of downloading the dataset, I loaded data folders from my computer (instead of cats, I want to detect an invasive beetle!). I then tried to convert the model's input from float32 to int8 following this guide:
https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant.ipynb#scrollTo=FiwiWU3gHdkW
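For context, the guide's full-integer path also pins the converter's inference input/output types to int8 (the float-fallback default keeps float32 I/O). A minimal self-contained sketch of that configuration, using a tiny stand-in model in place of the real transfer-learning Keras model:

```python
import tensorflow as tf

# Tiny stand-in model; the real one is the transfer-learning Keras model.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)

def representative_data_gen():
    # Calibration samples matching the model's input shape.
    for _ in range(10):
        yield [tf.random.uniform([1, 4])]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Require int8-only ops and pin the input/output tensors to int8 as well:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model_quant = converter.convert()
```

Without the last three converter settings, the converted model is quantized internally but still exposes float32 input and output tensors.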

Specifically, this cell from the above quantization guide produces the error:
def representative_data_gen():
  for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
    # Model has only one input so each data point has one element.
    yield [input_value]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen

tflite_model_quant = converter.convert()

Running this raises: ValueError: Unbatching a dataset is only supported for rank >= 1. Full traceback:


ValueError Traceback (most recent call last)
in
8 converter.representative_dataset = representative_data_gen
9
---> 10 tflite_model_quant = converter.convert()

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\lite\python\lite.py in convert(self)
871 graph=frozen_func.graph)
872
--> 873 return super(TFLiteKerasModelConverterV2,
874 self).convert(graph_def, input_tensors, output_tensors)
875

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\lite\python\lite.py in convert(self, graph_def, input_tensors, output_tensors)
630 calibrate_and_quantize, flags = quant_mode.quantizer_flags()
631 if calibrate_and_quantize:
--> 632 result = self._calibrate_quantize_model(result, **flags)
633
634 flags_modify_model_io_type = quant_mode.flags_modify_model_io_type(

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\lite\python\lite.py in _calibrate_quantize_model(self, result, inference_input_type, inference_output_type, activations_type, allow_float)
457 return _mlir_quantize(calibrated)
458 else:
--> 459 return calibrate_quantize.calibrate_and_quantize(
460 self.representative_dataset.input_gen, inference_input_type,
461 inference_output_type, allow_float, activations_type)

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py in calibrate_and_quantize(self, dataset_gen, input_type, output_type, allow_float, activations_type, resize_input)
91 """
92 initialized = False
---> 93 for sample in dataset_gen():
94 if not initialized:
95 initialized = True

in representative_data_gen()
1 def representative_data_gen():
----> 2 for input_value in tf.data.Dataset.from_tensor_slices(train_dataset).batch(1).take(20):
3 # Model has only one input so each data point has one element.
4 yield [input_value]
5

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py in from_tensor_slices(tensors)
689 Dataset: A Dataset.
690 """
--> 691 return TensorSliceDataset(tensors)
692
693 class _GeneratorState(object):

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py in init(self, element)
3155 element = structure.normalize_element(element)
3156 batched_spec = structure.type_spec_from_value(element)
-> 3157 self._tensors = structure.to_batched_tensor_list(batched_spec, element)
3158 self._structure = nest.map_structure(
3159 lambda component_spec: component_spec._unbatch(), batched_spec) # pylint: disable=protected-access

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\python\data\util\structure.py in to_batched_tensor_list(element_spec, element)
362 # pylint: disable=protected-access
363 # pylint: disable=g-long-lambda
--> 364 return _to_tensor_list_helper(
365 lambda state, spec, component: state + spec._to_batched_tensor_list(
366 component), element_spec, element)

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\python\data\util\structure.py in _to_tensor_list_helper(encode_fn, element_spec, element)
337 return encode_fn(state, spec, component)
338
--> 339 return functools.reduce(
340 reduce_fn, zip(nest.flatten(element_spec), nest.flatten(element)), [])
341

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\python\data\util\structure.py in reduce_fn(state, value)
335 def reduce_fn(state, value):
336 spec, component = value
--> 337 return encode_fn(state, spec, component)
338
339 return functools.reduce(

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\python\data\util\structure.py in (state, spec, component)
363 # pylint: disable=g-long-lambda
364 return _to_tensor_list_helper(
--> 365 lambda state, spec, component: state + spec._to_batched_tensor_list(
366 component), element_spec, element)
367

C:\ProgramData\Anaconda3\envs\TF2\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py in _to_batched_tensor_list(self, value)
3322 def _to_batched_tensor_list(self, value):
3323 if self._dataset_shape.ndims == 0:
-> 3324 raise ValueError("Unbatching a dataset is only supported for rank >= 1")
3325 return self._to_tensor_list(value)
3326

ValueError: Unbatching a dataset is only supported for rank >= 1
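The traceback shows the generator was actually built with tf.data.Dataset.from_tensor_slices(train_dataset), but train_dataset is already a tf.data.Dataset; slicing a Dataset object produces rank-0 dataset elements, which is what triggers this error. A sketch of a workaround that iterates the existing dataset directly (the shapes and the stand-in dataset below are assumptions mirroring the transfer-learning tutorial, not the actual code):

```python
import tensorflow as tf

# Hypothetical stand-in for the tutorial's `train_dataset`: batches of
# 160x160 RGB images with integer labels, as image_dataset_from_directory
# would produce in the transfer-learning notebook.
train_dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform([8, 160, 160, 3]), tf.zeros([8], tf.int64))
).batch(4)

def representative_data_gen():
    # Iterate the existing Dataset directly -- do NOT wrap it in
    # from_tensor_slices() again. unbatch()/batch(1) yields one image per
    # calibration sample; the labels are discarded.
    for images, _ in train_dataset.unbatch().batch(1).take(100):
        yield [tf.cast(images, tf.float32)]
```

With a generator shaped like this, each yielded sample has rank 4 (batch of one image), which is what the calibrator expects.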
