QBatchNormalization with scale=False and model_save_quantized_weights #83
Comments
Hello. Thank you for reporting this. Could you please provide a code example with which I can reproduce the problem?
This is an example:

```python
#!/usr/bin/env python3
import qkeras
import tensorflow.keras as keras

input_layer = keras.layers.Input([19, 28, 16])
intermediate = qkeras.QBatchNormalization(
    scale=False  # no gamma weight is created
)(input_layer)
model = keras.Model(input_layer, intermediate)

print('Number of weights: ' + str(len(model.layers[1].get_weights())))
print('Number of quantizers: ' + str(len(model.layers[1].get_quantizers())))

# Fails: quantizers and weights are misaligned when scale=False.
qkeras.utils.model_save_quantized_weights(model)
```
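To make the failure mode concrete, here is a minimal, self-contained sketch (not QKeras code; all names are illustrative stand-ins) of what goes wrong when a quantizer list that still includes `gamma_quantizer` is zipped against a weight list that has no gamma:

```python
# Hypothetical sketch of the index-based pairing done when saving
# quantized weights: each weight is matched with the quantizer at
# the same position in the list.
def pair_quantizers(weights, quantizers):
    return [(q, w) for q, w in zip(quantizers, weights)]

# With scale=False the layer has no gamma weight...
weights = ["beta", "moving_mean", "moving_variance"]
# ...but the quantizer list still begins with gamma_quantizer.
quantizers = ["gamma_quantizer", "beta_quantizer",
              "mean_quantizer", "variance_quantizer"]

pairs = pair_quantizers(weights, quantizers)
for q, w in pairs:
    # Every weight ends up with the quantizer meant for the
    # preceding parameter, e.g. beta <- gamma_quantizer.
    print(w, "<-", q)
```

The off-by-one pairing is the "misalignment" described in the issue: each remaining weight is quantized with the wrong quantizer.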
So if I understand correctly, the issue is caused by this line: qkeras/utils.py, line 159 at commit 1f2134b.
There may be several solutions to this, but I will need some time to investigate.
Yes, the issue is there.
We have determined that fixing this issue will take quite a bit of refactoring work, so it will take more time. In the meantime, the workaround is to use folded batch normalization. This method lets you fold your batch normalization operations into your convolution layers, thereby avoiding batch norm layers altogether. This is the method we currently use in our team. We have good support for this in QKeras if you switch to using the QConv2DBatchnorm layer. Let me know if this helps.
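For readers unfamiliar with folding, here is a small sketch of the arithmetic behind it (plain Python with made-up scalar values, not the QConv2DBatchnorm implementation): the batch-norm affine transform is absorbed into the convolution's weights and bias, so no separate batch-norm layer, and no gamma/beta quantizer bookkeeping, is needed at inference time.

```python
import math

# Batch norm after a linear op computes
#   y = gamma * (w*x + b - mean) / sqrt(var + eps) + beta
# which can be rewritten as a single linear op with folded parameters.
def fold(w, b, gamma, beta, mean, var, eps=1e-3):
    scale = gamma / math.sqrt(var + eps)
    return w * scale, beta + (b - mean) * scale

# Made-up values for illustration only.
w, b = 2.0, 0.5
gamma, beta, mean, var = 1.5, 0.1, 0.4, 4.0
wf, bf = fold(w, b, gamma, beta, mean, var)

x = 3.0
y_separate = gamma * ((w * x + b) - mean) / math.sqrt(var + 1e-3) + beta
y_folded = wf * x + bf
# The two forms agree to floating-point precision.
print(abs(y_separate - y_folded) < 1e-9)
```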
When model_save_quantized_weights is called on a model containing a QBatchNormalization layer with scale=False, it seems that the wrong quantizers are used.
QBatchNormalization.get_quantizers() returns a list with gamma_quantizer as its first element even when there is no gamma weight, resulting in a misalignment between quantizers and weights at this point:
qkeras/qkeras/utils.py
Line 159 in 1f2134b
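One possible direction for a fix (a hypothetical sketch only, not the actual QKeras code or the refactoring the maintainers have in mind) is to make the quantizer list mirror the weight list, omitting gamma_quantizer when scale=False and beta_quantizer when center=False:

```python
# Hypothetical get_quantizers() that stays aligned with get_weights():
# Keras orders batch-norm weights [gamma, beta, moving_mean,
# moving_variance], dropping gamma when scale=False and beta when
# center=False, so the quantizer list should do the same.
def get_quantizers(scale, center):
    quantizers = []
    if scale:
        quantizers.append("gamma_quantizer")
    if center:
        quantizers.append("beta_quantizer")
    quantizers += ["mean_quantizer", "variance_quantizer"]
    return quantizers

print(get_quantizers(scale=False, center=True))
# ['beta_quantizer', 'mean_quantizer', 'variance_quantizer']
```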