Problems with converted 8-bit TFLite models of CycleGAN and running inference (especially allocating tensors) #59922
Labels: comp:lite, stat:awaiting tensorflower, TF 2.11, TFLiteConverter, type:bug
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No, the problem occurs with standard TF code.
Description of task
Description of problems
Quick note: Problems 2 and 3 occur for the generative TFLite models of CycleGAN, and Problem 4 occurs for the discriminative TFLite models of CycleGAN.
1. When trying to simply allocate tensors for the converted TFLite models in the Python interpreter, I get the following error:
Aborted (core dumped)
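For context, this is roughly how I am loading the model and allocating tensors in Python (the model filename and the dummy input are placeholders, not the exact files and data I use):

```python
import numpy as np
import tensorflow as tf

# Placeholder path for the converted int8 generator model.
MODEL_PATH = "cyclegan_generator_int8.tflite"

# Load the converted model and try to allocate tensors.
# allocate_tensors() is the call that dies with "Aborted (core dumped)".
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()

# If allocation succeeded, a dummy inference would sanity-check the model.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)
```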
2. Since the error message was not useful, I wanted to run the model from C++ to understand the problem better. So I built the "benchmark_model" tool from https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark using Bazel and tried to run the same TFLite model with it.
Running this gave me a better idea of the problem: the failure points to tensorflow/tensorflow/lite/kernels/internal/quantization_util.cc, line 117 (at commit be3ea70), where the "double multiplier" is required to be below 1.
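Since the check appears to be about the requantization multiplier, a quick way to look for a suspicious layer from Python (without calling allocate_tensors(), which aborts) is to dump the per-tensor quantization scales; an op whose effective multiplier input_scale * weight_scale / output_scale is >= 1.0 would explain hitting that check. This is only a diagnostic sketch, and I am assuming get_tensor_details() works before allocation:

```python
import tensorflow as tf

# Placeholder path for the converted int8 generator model.
MODEL_PATH = "cyclegan_generator_int8.tflite"

# Load the model but do NOT call allocate_tensors(), since allocation aborts.
interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)

# Print the quantization scales of every tensor so the scales feeding the
# failing node can be compared by hand.
for detail in interpreter.get_tensor_details():
    scales = detail["quantization_parameters"]["scales"]
    if scales.size:
        print(detail["index"], detail["name"], scales.min(), scales.max())
```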
3. Since I was able to access and rebuild the code, I made a small adjustment to artificially reduce the "double multiplier" below 1 for this node, to see whether the model is then able to run completely without any errors.
4. When I tried to run inference using the benchmark tool for the discriminative models, I got the following error:
Overall there seems to be a problem converting some trivial operations from TF to TFLite. I am not sure if it's because of the way the CycleGAN models are defined in TF initially or if I am performing the conversion steps wrong; the steps I used are sketched below. Any help in this matter would be great, since I want to convert CycleGAN to an int8 TFLite model and run it.
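For completeness, the conversion followed the standard full-integer post-training quantization recipe, roughly like the sketch below (the model loading, input shape, and representative data are placeholders rather than the exact code I use):

```python
import numpy as np
import tensorflow as tf

# Placeholder: load the trained generator (or discriminator) Keras model.
model = tf.keras.models.load_model("cyclegan_generator")

# Placeholder representative dataset: a few input-shaped samples from the
# training distribution are expected here.
def representative_dataset():
    for _ in range(100):
        yield [np.random.uniform(-1, 1, size=(1, 256, 256, 3)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full int8 quantization of ops, inputs, and outputs.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("cyclegan_generator_int8.tflite", "wb") as f:
    f.write(tflite_model)
```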