```python
output_details = interpreter.get_output_details()
output_scale, output_zero_point = output_details[0]['quantization']

# output_scale != 1, so I had to rescale to recover the real output:
out_true = out * output_scale
```
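For context, TFLite's affine quantization maps a quantized value back to a real value via `real = scale * (q - zero_point)`, so the zero point should also be subtracted when it is nonzero. A minimal NumPy sketch of that dequantization step (the scale and zero-point values here are hypothetical, standing in for `output_details[0]['quantization']`):

```python
import numpy as np

def dequantize(q_output, scale, zero_point):
    # TFLite affine dequantization: real = scale * (q - zero_point)
    return scale * (q_output.astype(np.float32) - zero_point)

# Hypothetical quantization parameters and int8 output tensor
scale, zero_point = 0.05, -128
q = np.array([-128, -28, 72], dtype=np.int8)
print(dequantize(q, scale, zero_point))  # [ 0.  5. 10.]
```

Multiplying by the scale alone is only equivalent when the zero point is 0.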
The output scale of the TFLite model I exported is not equal to 1. Does this have an impact at the final stage?
If you can avoid this additional rescaling of the output, that would be better. In any case, it will not work well if the output itself is quantized to 8 bits.