Quantize/de-quantize inputs/outputs & scaling loss? #5
Comments
Not necessary. brevitas.nn layers always return values in the dequantized range.
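As a quick numerical illustration of what "dequantized range" means (a plain-Python sketch, not the Brevitas implementation; the scale value is an arbitrary choice for the example): quantize-then-dequantize returns a floating-point value snapped onto the quantization grid, so downstream FP32 operations can consume it directly.

```python
def fake_quantize(x, scale):
    """Quantize-then-dequantize: the returned value is back in
    floating point, just snapped to the nearest multiple of scale."""
    return round(x / scale) * scale

# 0.37 is snapped to the nearest grid point for scale = 0.125
print(fake_quantize(0.37, scale=0.125))  # 0.375
```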
Ok. Also, would you advise using PyTorch's functional version of sigmoid/log_sigmoid, or that of Brevitas? My understanding is that PyTorch's functional sigmoid operates on FP32, which can give better resolution than Brevitas's. Hence, it might be better to use PyTorch's sigmoid if we are not too concerned about the cost of that particular sigmoid function.
It really depends on your specific use case. What QuantSigmoid does is take an FP32 input, apply a sigmoid activation function, and then quantize the output according to your specification.
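A minimal sketch of that behavior in plain Python (not the Brevitas implementation; the 4-bit unsigned grid on [0, 1] is an assumption for illustration), showing why the quantized output has coarser resolution than a plain F.sigmoid:

```python
import math

def quant_sigmoid(x, bit_width=4):
    """Apply sigmoid in floating point, then quantize the output
    to a uniform unsigned grid with 2**bit_width levels on [0, 1]."""
    y = 1.0 / (1.0 + math.exp(-x))       # FP32-style sigmoid
    scale = 1.0 / (2 ** bit_width - 1)   # step between adjacent levels
    q = round(y / scale)                 # integer representation
    return q * scale                     # dequantized output

# The output is restricted to the quantization grid, e.g. sigmoid(0.0) = 0.5
# gets snapped to the nearest of the 16 grid points.
print(quant_sigmoid(0.0))
```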
Ok. I am particularly talking about the use case of training for classification. In that case, I believe F.sigmoid will be a better choice, given that I'd want to use F.nll_loss downstream for training and it'll give me better resolution. Edit: Additionally, I couldn't see in the codebase where dequantization really happens at the layer output. Does it explicitly require us to set the following properties to true for a layer, as shown below:
The reason why I am digging deeper into this is that I am getting pretty good results even with very low values of
If your accuracy is unreasonably high, it might be that you have quantization disabled. The default behavior is to have quantization disabled, i.e. a QuantConv2d behaves by default as a Conv2d. To enable integer quantization you need to set weight_quant_type = QuantType.INT. I understand this might be confusing, so I'll force the user to specify the QuantType in a later update.

Regarding dequantization, it is not exposed to the user as a separate layer. It's performed here and is called as part of every quantized layer. What those flags do is explicitly compute the scale factor of the output accumulator, as well as its maximum bit width, and return them as a quantized tensor, which is a named tuple composed of (output_tensor, output_scale_factor, output_bit_width). The relationship between those values is that output_tensor/output_scale_factor is an integer value (when you have bias disabled) that can be represented with output_bit_width bits. Enabling them is not required in general. This sort of information will go in the documentation as soon as the API stabilizes enough.
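The invariant stated above can be sketched in plain Python (the namedtuple below is a hypothetical stand-in whose field names follow the comment's (output_tensor, output_scale_factor, output_bit_width) description, not the actual Brevitas API):

```python
from collections import namedtuple

# Hypothetical stand-in for the named tuple described above.
QuantTensor = namedtuple("QuantTensor", ["tensor", "scale", "bit_width"])

def check_quant_tensor(qt):
    """Verify the stated invariant: tensor/scale is an integer
    that fits in bit_width bits (unsigned case, bias disabled)."""
    int_val = qt.tensor / qt.scale
    assert abs(int_val - round(int_val)) < 1e-9, "not on the integer grid"
    assert 0 <= round(int_val) < 2 ** qt.bit_width, "exceeds bit_width bits"
    return True

# 0.75 / 0.25 = 3, an integer representable in 4 bits
qt = QuantTensor(tensor=0.75, scale=0.25, bit_width=4)
print(check_quant_tensor(qt))  # True
```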
Hi,
Is it required to quantize inputs or dequantize outputs before passing them to the loss function? Also, does any kind of scaling need to be performed for the loss function?
Meet