This repository has been archived by the owner on Nov 8, 2022. It is now read-only.

question: [Quantization] Which files to change to make inference faster for Q8BERT? #221

Open
sarthaklangde opened this issue May 18, 2021 · 1 comment
Labels
question Further information is requested

Comments


sarthaklangde commented May 18, 2021

I know from previous issues that Q8BERT was just an experiment to measure the accuracy of a quantized BERT model. But, given that the accuracy is good, what changes would need to be made to the torch.nn.quantization file to replace the FP32 operations with INT8 operations?

Replacing the FP32 Linear layers with torch.nn.quantized.Linear should theoretically work, since it uses optimized INT8 kernels, but in practice it doesn't give a speedup. The same goes for the other layers. (A rough sketch of the kind of swap I mean is below.)
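For context, this is the kind of module swap I have in mind, using PyTorch's built-in dynamic quantization on a plain FP32 BERT as a point of comparison (the Hugging Face model here is just illustrative; the quantization-aware-trained Q8BERT weights are what I actually want to convert):

```python
import torch
from transformers import BertModel  # illustrative stand-in for the Q8BERT model

# Load a plain FP32 BERT (for illustration only; a Q8BERT checkpoint would go here)
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

# Swap every nn.Linear for a dynamically quantized INT8 version.
# This is PyTorch's stock path and does speed up vanilla BERT on CPU;
# the question is how to do the equivalent for Q8BERT's trained INT8 weights.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Dummy input: batch of 1, sequence length 128, token ids from the BERT vocab
input_ids = torch.randint(0, 30522, (1, 128))
with torch.no_grad():
    outputs = quantized_model(input_ids)
```

With this, the Linear matmuls run on INT8 kernels (FBGEMM on x86), but it throws away the quantization-aware training done in Q8BERT, which is why I'd like to know which files in the repo to change instead.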

If someone could point me toward how to improve the inference speed (hints, tips, directions, code, anything), it would be very helpful: the model's accuracy is really good and I would like to use it for downstream tasks. I would be happy to open a PR once the changes are done so they can be merged into the main repo.

Thank you!

sarthaklangde added the question label on May 18, 2021

Ayyub29 commented Aug 25, 2022

Did you find an answer to this?
