Hello, I would like to ask whether larq supports 8-bit quantization for the first and last layers, and binary quantization for the middle layers. I have also learned that TensorFlow's QAT toolkit supports 8-bit quantization-aware training. Can larq be used in combination with this toolkit?

Yes, that is supported. In fact, it is even recommended in many cases to include some int8 layers; typically, your first layer can remain int8 to get better accuracy. Here's an example where only the weights are quantized in the first layer, not the activations: https://docs.larq.dev/larq/tutorials/mnist/#create-the-model. In that example, you can replace the kernel_quantizer argument in the first layer with any quantizer adhering to the lq.quantizers.Quantizer abstract class (or just leave the argument out). You can also mix normal tf.keras layers into your network; you are not restricted to using only lq. layers.
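For illustration, here is a minimal sketch of such a mixed-precision model, loosely following the layout of the linked MNIST tutorial. The layer sizes, pooling/batch-norm placement, and training settings are placeholder assumptions, not taken verbatim from the tutorial:

```python
import tensorflow as tf
import larq as lq

# Binarize both weights and activations in the hidden (middle) layers.
binary_kwargs = dict(
    input_quantizer="ste_sign",
    kernel_quantizer="ste_sign",
    kernel_constraint="weight_clip",
)

model = tf.keras.models.Sequential([
    # First layer: only the weights are quantized (no input_quantizer),
    # so the incoming activations keep their higher precision.
    lq.layers.QuantConv2D(
        32, (3, 3),
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
        use_bias=False,
        input_shape=(28, 28, 1),
    ),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.BatchNormalization(scale=False),

    # Middle layer: fully binarized (weights and activations).
    lq.layers.QuantConv2D(64, (3, 3), use_bias=False, **binary_kwargs),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.BatchNormalization(scale=False),
    tf.keras.layers.Flatten(),

    # Last layer: a plain tf.keras layer, left at higher precision.
    tf.keras.layers.Dense(10),
    tf.keras.layers.Activation("softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Print a per-layer precision breakdown.
lq.models.summary(model)
```

Dropping kernel_quantizer/kernel_constraint from the first layer (or swapping in a different lq.quantizers.Quantizer) changes only that layer's weight precision; the binarized middle layers are unaffected.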