After building out the simple BNN from the following guide: https://docs.larq.dev/larq/tutorials/binarynet_cifar10/, I tried retrieving the binary weights to examine them via https://docs.larq.dev/larq/guides/bnn-optimization/#retrieving-the-binary-weights.

I notice that despite the kernel quantization, the kernel values I get back are not entirely +1 and -1. For example, values such as 8.16999555e-01 and 3.77580225e-02 appear within the weight kernel.

Is there an intuitive explanation for this? Thank you!
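Roughly, the model follows the guide; here is a minimal sketch of the kind of layer in question (trimmed down from the tutorial's configuration, so treat the exact arguments as an approximation):

```python
import tensorflow as tf
import larq as lq

# Minimal stand-in for the guide's model: one binarized convolution
# followed by batch normalization.
model = tf.keras.models.Sequential([
    lq.layers.QuantConv2D(
        128, (3, 3),
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
        use_bias=False,
        input_shape=(32, 32, 3),
    ),
    tf.keras.layers.BatchNormalization(momentum=0.999, scale=False),
])

# Reading the weights directly returns float values like the ones above.
print(model.get_weights()[0].ravel()[:5])
```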
Outside of `quantized_scope` this is expected when training models with latent weights, as explained in the docs you linked above: during training the weights are only binarized in the forward pass and are stored as floating-point values.
However, the following should return binarized weights for the binary convolutions:
```python
import larq

with larq.context.quantized_scope(True):
    weights = model.get_weights()  # get binary weights
```
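A quick way to check this (a sketch; it assumes the first weight tensor belongs to a binarized convolution):

```python
import numpy as np
import larq

# Outside the scope: latent float values; inside the scope: only -1 and +1.
with larq.context.quantized_scope(True):
    kernel = model.get_weights()[0]
print(np.unique(kernel))  # expected: [-1.  1.]
```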
Keep in mind, though, that the model may include some full-precision layers, such as batch normalization, that won't appear quantized in the Keras model but can be fused when deploying with Larq Compute Engine.
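To see which parameters are binarized and which stay in full precision, `larq.models.summary` prints a per-layer overview including parameter bit widths (a sketch, assuming the `model` from above):

```python
import larq as lq

# The summary lists the 1-bit convolution kernels next to the 32-bit
# batch norm parameters, which makes the full-precision layers easy to spot.
lq.models.summary(model)
```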