TFLite, 2.2.0, accuracy drops significantly when tf.lite.Optimize.DEFAULT option is used #40000
Comments
Hi @wwwind, quantizing the weights can cause a significant drop in accuracy, because it limits the dynamic range of the weights. You might want to explore the dynamic range of your original network; it looks like your model's accuracy suffers badly under quantization. You can also try to quantize the weights into …
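For reference, a minimal sketch of how one might inspect the per-layer dynamic range of the weights, as suggested above. The model path is an assumption, not taken from the issue:

```python
import tensorflow as tf

# Assumed path; substitute the actual Keras model from the issue.
model = tf.keras.models.load_model("model.h5")

for layer in model.layers:
    for weights in layer.get_weights():
        # Weights spanning a very wide range lose precision when
        # squeezed onto the 256 levels of an int8 grid.
        print(f"{layer.name}: min={weights.min():.4f}, max={weights.max():.4f}")
```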
@amahendrakar No, …
Was able to reproduce the issue with TF v2.2 and TF-nightly. Please find the attached gist. Thanks!
@wwwind Are you looking for an int8 TFLite model or a float TFLite model? Let me check the converted model and respond to you. Thanks!
Hi @jvishnuvardhan The problem is that accuracy is much worse with only the weights quantized to int8 than when the model is fully int8 quantized.
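For context, the two conversion modes being compared look roughly like this. The saved-model path and the calibration iterable are placeholders, not taken from the issue:

```python
import tensorflow as tf

saved_model_dir = "saved_model"  # assumed path

# Dynamic range quantization: only the weights are stored as int8;
# activations are computed in float at inference time.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
weights_only_model = converter.convert()

# Full integer quantization: a representative dataset lets the converter
# calibrate activation ranges, so weights *and* activations are int8.
def representative_dataset():
    for sample in calibration_samples:  # placeholder iterable of inputs
        yield [sample]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
full_int8_model = converter.convert()
```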
@liufengdb could you take a look at this?
Thanks for fixing!
System information
Command used to run the converter or code if you’re using the Python API
If possible, please share a link to Colab/Jupyter/any notebook.
Colab:
https://colab.research.google.com/drive/1Z2Xvh2dufYR8y9U-9735KgBOGYYd9NtN#scrollTo=X-vMKEjgTIp0
The output from the converter invocation
Failure details
If I remove the line
converter.optimizations = [tf.lite.Optimize.DEFAULT]
from the script, then the accuracy is 0.9264305177111717.
The converted model with these settings is wrong.
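For completeness, the accuracy of a converted model can be measured with the TFLite interpreter along these lines. The `tflite_model` bytes and the `eval_pairs` dataset are assumed placeholders, not from the issue:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

correct = 0
for image, label in eval_pairs:  # assumed list of (input, label) pairs
    batch = np.expand_dims(image, 0).astype(input_detail["dtype"])
    interpreter.set_tensor(input_detail["index"], batch)
    interpreter.invoke()
    prediction = interpreter.get_tensor(output_detail["index"])
    correct += int(np.argmax(prediction) == label)

print("accuracy:", correct / len(eval_pairs))
```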