Code (just a demo of how I do the quantization; it can't reproduce the error on its own)
```python
def representative_dataset_gen():
    for x in validation_fingerprints:
        x = x[np.newaxis, :]
        yield [x]

converter = tf.lite.TFLiteConverter.from_saved_model(flags.train_dir + '/last_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.allow_custom_ops = True
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = representative_dataset_gen
last_quant_model = converter.convert()

with open(flags.train_dir + '/quant_last_model.tflite', 'wb') as w:
    w.write(last_quant_model)
```
Some config:

```
type(validation_fingerprints): <class 'numpy.ndarray'>
shape(validation_fingerprints): (3093, 16384)
type(x): <class 'numpy.ndarray'>
shape(x): (1, 16384)
```

The model summary: model_summary.txt
`validation_fingerprints` is np.float32. I don't know whether that would cause a problem in full-integer quantization. (I found that "4-2-6-7. Full Integer Quantization from saved_model (All 8-bit integer quantization)" also uses np.float32, though.) I've also found this issue, but setting `fused=False` in batch norm doesn't help. Is there any advice? I've been stuck on this for several days 😢
Did you fix it?