
Quantize/de-quantize inputs/outputs & scaling loss? #5

Closed
meetvadera opened this issue Oct 15, 2019 · 5 comments

Comments


meetvadera commented Oct 15, 2019

Hi,

Is it required to quantize the inputs or dequantize the outputs before passing them to the loss function? Also, does any kind of scaling need to be performed for the loss function?

Meet

@meetvadera meetvadera changed the title Quantize/de-quantize inputs/outputs/loss? Quantize/de-quantize inputs/outputs & scaling loss? Oct 15, 2019
@volcacius
Contributor

Not necessary. brevitas.nn layers always return values in the dequantized range.
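
As a minimal sketch of what that means in practice (the QuantLinear constructor arguments here are assumptions based on this thread, not a verified API reference): the layer output is already FP32, so it can go straight into a standard PyTorch loss.

```python
import torch
import torch.nn.functional as F
import brevitas.nn as qnn
from brevitas.core.quant import QuantType  # import path assumed for this version of Brevitas

# Quantized linear layer; only weight_quant_type/weight_bit_width come from this thread,
# the remaining arguments are illustrative assumptions.
fc = qnn.QuantLinear(784, 10, bias=True,
                     weight_quant_type=QuantType.INT,
                     weight_bit_width=4)

x = torch.randn(32, 784)
targets = torch.randint(0, 10, (32,))

logits = fc(x)                           # values come back in the dequantized (FP32) range
loss = F.cross_entropy(logits, targets)  # no manual quantize/de-quantize or loss scaling needed
loss.backward()
```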

@meetvadera
Author

Ok. Also, would you advise using PyTorch's functional version of sigmoid/log_sigmoid, or Brevitas's? My understanding is that PyTorch's functional sigmoid operates in FP32, which can give better resolution than Brevitas's. Hence, it might be better to use PyTorch's sigmoid if we are not too concerned about the cost of that particular sigmoid function.

@volcacius
Contributor

It really depends on your specific use case. What QuantSigmoid does is take an FP32 input, apply a sigmoid activation function, and then quantize the output according to your specification.
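
For example (sketch only; the QuantSigmoid keyword arguments are assumptions, not verified against the current API):

```python
import torch
import brevitas.nn as qnn
from brevitas.core.quant import QuantType  # import path assumed

x = torch.randn(8, 16)

# PyTorch: output keeps full FP32 resolution
y_fp32 = torch.sigmoid(x)

# Brevitas: sigmoid is applied on the FP32 input, then the output is quantized
# to the requested bit width
quant_sigmoid = qnn.QuantSigmoid(bit_width=4, quant_type=QuantType.INT)
y_quant = quant_sigmoid(x)
```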


meetvadera commented Oct 16, 2019

Ok. I am talking specifically about the use case of training for classification. In that case, I believe F.sigmoid will be the better choice, given that I'd want to use F.nll_loss downstream for training and it'll give me better resolution.

Edit: Additionally, I couldn't see in the codebase where de-quantization actually happens at the layer output. Does it explicitly require us to set the following properties to True for a layer, as shown below:

self.fc1 = qnn.QuantLinear(..., compute_output_scale=True, compute_output_bit_width=True, return_quant_tensor=True)?

The reason I am digging deeper into this is that I am getting pretty good results even with very low bit_width values such as 2 or 4, while leaving all other layer properties at their defaults and only tweaking bit_width.

@volcacius
Contributor

If your accuracy is unreasonably high, it might be that you have quantization disabled. The default behavior is to have quantization disabled, i.e. a QuantConv2d behaves by default like a Conv2d. To enable integer quantization you need to set weight_quant_type = QuantType.INT. I understand this might be confusing, so I'll force the user to specify the QuantType in a later update.
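
In other words (illustrative sketch; arguments beyond weight_quant_type and weight_bit_width are assumptions):

```python
import brevitas.nn as qnn
from brevitas.core.quant import QuantType  # import path assumed

# Default: quantization disabled, behaves like a plain nn.Conv2d
conv_default = qnn.QuantConv2d(3, 16, kernel_size=3)

# Integer quantization enabled explicitly
conv_int = qnn.QuantConv2d(3, 16, kernel_size=3,
                           weight_quant_type=QuantType.INT,
                           weight_bit_width=4)
```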

Regarding dequantization, it is not exposed to the user as a separate layer. It's performed internally and is called as part of every quantized layer.

What those flags do is explicitly compute the scale factor of the output accumulator, as well as its maximum bit width, and return them as a quantized tensor, which is a named tuple composed of (output_tensor, output_scale_factor, output_bit_width). The relationship between those values is that output_tensor / output_scale_factor is integer-valued (when bias is disabled) and can be represented with output_bit_width bits. Enabling them is not required in general.
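
A hypothetical sketch of how that looks from the user side (field names and constructor arguments follow the description above; they are assumptions, not a verified API reference):

```python
import torch
import brevitas.nn as qnn
from brevitas.core.quant import QuantType  # import path assumed

fc = qnn.QuantLinear(784, 10, bias=False,  # bias disabled, so the integer relationship below holds
                     weight_quant_type=QuantType.INT,
                     weight_bit_width=4,
                     compute_output_scale=True,
                     compute_output_bit_width=True,
                     return_quant_tensor=True)

# The returned named tuple unpacks into the three fields described above
output_tensor, output_scale_factor, output_bit_width = fc(torch.randn(32, 784))

# output_tensor / output_scale_factor should be integer values representable
# with output_bit_width bits
ints = output_tensor / output_scale_factor
print(output_bit_width, torch.allclose(ints, ints.round()))
```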

This sort of information is going to go into the documentation as soon as the API stabilizes enough.
