Hey,
I am quantizing the mobilenetv2_1.4_224 Keras model with the post-training quantization tool (full integer). However, the runtime differs from the runtime of your provided quantized mobilenetv2_1.4_224. Analyzing this issue, I found that you converted it with TOCO, while the tflite converter uses MLIR by default. Explicitly setting converter.experimental_new_converter = False (to use TOCO) is not possible in newer (>=2.6) TF versions, because TOCO has been removed. Using an earlier TF version (<2.6) raises the error TypeError: ('Keyword argument not understood:', 'keepdims').
The difference between full-integer post-training quantization following the TF guide and your provided quantized model is severe: your model runs in 4 ms, while the model produced via the TF guide and MLIR needs 10 ms on the same hardware.
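For context, single-digit-millisecond latency gaps like this are only meaningful if measured consistently; a common approach is to warm up first and then report the median over many invocations. A minimal, generic timing harness is sketched below (the `benchmark` helper and the lambda workload are illustrative stand-ins, not part of the issue; on real hardware the callable would wrap `interpreter.invoke()`):

```python
import time
import statistics

def benchmark(fn, warmup=10, runs=100):
    """Time a zero-argument callable; return the median latency in ms."""
    for _ in range(warmup):        # warm up caches / lazy initialization
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)

# Stand-in workload (assumption): replace with a lambda that calls
# interpreter.invoke() on the actual TFLite model to reproduce the
# 4 ms vs 10 ms comparison on the same hardware.
median_ms = benchmark(lambda: sum(range(10_000)))
print(f"median latency: {median_ms:.3f} ms")
```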
Question: Could you please give me any information on how to reproduce your quantization of mobilenetv2?
oconnor127 changed the title from "Post-Quantization of mobilenetv2-keras model slower your given quantized model" to "Post-Quantization of mobilenetv2-keras model slower than your given quantized model" on Nov 5, 2021
The OD API team may not use keras.applications, which is not released by the original MobileNet team, so there might be something they missed. It would be good to ask the Keras team. @tombstone for the MobileNet used by the OD API.
My quantization code is basically:
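(The code itself was not included in the issue. A minimal sketch of full-integer post-training quantization as described in the TF guide follows; `weights=None` and the random calibration data are assumptions made here so the snippet is self-contained — for a real comparison you would load pretrained weights and feed representative preprocessed images.)

```python
import numpy as np
import tensorflow as tf

# MobileNetV2 with width multiplier 1.4 at 224x224. weights=None skips the
# pretrained-weight download (assumption for a self-contained sketch).
model = tf.keras.applications.MobileNetV2(
    alpha=1.4, input_shape=(224, 224, 3), weights=None)

def representative_dataset():
    # Placeholder calibration data -- substitute real preprocessed images.
    for _ in range(8):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization: int8-only ops, int8 input/output.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("mobilenetv2_1.4_224_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Note that `converter.experimental_new_converter = False` is deliberately absent: as described above, the TOCO path is gone in TF >= 2.6, so this sketch necessarily goes through MLIR.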