I followed the instructions provided by @fsx950223 to create an int8 quantized tflite model, with both the weights and the layer outputs quantized. The tflite model obtained from an efficientdet-d2 checkpoint was not able to run inference: the script ran forever without producing any output and appeared to be stuck somewhere. Inference with a tflite model converted from an efficientdet-lite0 checkpoint went just fine.
Note: for efficientdet-d2-int8.tflite, inference (using keras.eval_tflite) did not print any error message and kept running indefinitely.
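For reference, the conversion described above (int8 quantization of weights and layer outputs via a representative dataset) can be sketched as below. This is a minimal, hedged example: it uses a tiny stand-in Keras model and random calibration data so it is self-contained, whereas the real workflow would load the exported efficientdet-d2 SavedModel with `tf.lite.TFLiteConverter.from_saved_model` and feed real images.

```python
import numpy as np
import tensorflow as tf

# Stand-in model; in the actual workflow this would be the exported
# efficientdet-d2 SavedModel loaded via tf.lite.TFLiteConverter.from_saved_model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4),
])

def representative_dataset():
    # Calibration samples matching the model's input shape; real images
    # would be used here to calibrate activation ranges.
    for _ in range(8):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optimize.DEFAULT plus a representative dataset quantizes both weights
# and activations (layer outputs) to int8, with float fallback for
# unsupported ops.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

with open("model-int8.tflite", "wb") as f:
    f.write(tflite_model)
print("model size: %d bytes" % len(tflite_model))
```

Whether the resulting model hangs at inference time is a separate question from whether this conversion succeeds; in the report above, conversion completed but `invoke` never returned for the d2 model.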
Speed is not the problem. With d0 the inference runs, just slowly; with d2 it does not get through even a single sample, no matter how many hours I wait.
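A quick way to distinguish "slow" from "stuck" is to time a single `invoke()` on the interpreter directly. The sketch below is self-contained (it converts a tiny float stand-in model in-process); with the real file you would instead construct the interpreter with `tf.lite.Interpreter(model_path="efficientdet-d2-int8.tflite")` and an input sized for d2.

```python
import time
import numpy as np
import tensorflow as tf

# Stand-in: a tiny float model converted in-process, so the example runs
# on its own. Replace with model_path="efficientdet-d2-int8.tflite" to
# reproduce the reported hang.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(4),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.random.rand(1, 16).astype(np.float32))
start = time.time()
interpreter.invoke()  # in the reported d2 case, this call never returns
print("single invoke took %.3f s" % (time.time() - start))
print("output shape:", interpreter.get_tensor(out["index"]).shape)
```

If this single call never returns for the d2 model while it completes (however slowly) for d0, that supports the conclusion that the d2 case is hung rather than merely slow.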