TFLite: model with Conv2DTranspose fails to convert with full integer quantization (int8) #39720
Labels: stat:awaiting tensorflower · TF 2.2 · TFLiteConverter · type:bug
This issue is very similar to a previously reported one, but here the problematic layer is Conv2DTranspose, so the model is different. I tested models with other layers and they all convert fine; only this model and the one in the issue referenced above fail, each in its own way.
Colab notebook that reproduces the failure: https://colab.research.google.com/drive/1g8wjs5D3N9blNpWYMIQ8R_AipZASUKH8?usp=sharing
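The actual model and the converter output are in the Colab above. For context, here is a minimal sketch of the standard post-training full-integer quantization recipe applied to a Conv2DTranspose model; the layer sizes and the random calibration data are illustrative placeholders, not the real model:

```python
import numpy as np
import tensorflow as tf

# Toy model whose only nontrivial layer is Conv2DTranspose.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2DTranspose(
        filters=4, kernel_size=3, strides=2, padding="same",
        input_shape=(8, 8, 3)),
])

def representative_dataset():
    # Calibration samples; random data stands in for real inputs here.
    for _ in range(100):
        yield [np.random.rand(1, 8, 8, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to int8 builtins so the converter must fully quantize every op.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()  # this is the step that fails
```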