I used the provided model files (model_1.tflite, model_2.tflite, model_quant_1.tflite, and model_quant_2.tflite) and the script "real_time_processing_tf_lite.py" to compare inference times.
My setup: Ubuntu 18.04, TF 2.0.
The measured processing times are:
TF-Lite: 0.383403 ms; TF-Lite quantized: 0.4470351 ms
It seems abnormal that the quantized TF-Lite model is slower than the float TF-Lite model during inference. I also found that the script requires TF 2.3.0 when running the TFLite models. Does that mean TF 2.0 has some limitation with your script? Looking forward to your reply.
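For reference, here is a minimal timing harness sketch of the kind used for such comparisons. It times an arbitrary callable, so single-run jitter and warm-up effects don't skew the numbers; the `run_once` callable is a placeholder you would replace with a bound `interpreter.invoke` from `tf.lite.Interpreter` (that substitution is an assumption, not taken from the script itself):

```python
import time

def time_inference(run_once, n_warmup=10, n_runs=100):
    """Return the average wall-clock time of run_once() in milliseconds.

    Warm-up iterations are discarded so one-time costs (memory
    allocation, kernel selection) don't inflate the measurement.
    """
    for _ in range(n_warmup):
        run_once()
    start = time.perf_counter()
    for _ in range(n_runs):
        run_once()
    return (time.perf_counter() - start) / n_runs * 1000.0

# Placeholder workload; with TF-Lite you would instead pass e.g.
#   interpreter = tf.lite.Interpreter(model_path="model_1.tflite")
#   interpreter.allocate_tensors()
#   time_inference(interpreter.invoke)
avg_ms = time_inference(lambda: sum(range(1000)))
print(f"average inference time: {avg_ms:.4f} ms")
```

Averaging over many runs matters here because the reported gap (0.38 ms vs 0.45 ms) is small enough to be within single-run noise.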