
Can a quantized model trained with PyTorch QAT be converted to an ONNX model? #76583

Closed
chenxinhua opened this issue on Apr 29, 2022 · 2 comments
Labels: oncall: quantization (Quantization support in PyTorch)

chenxinhua commented Apr 29, 2022

🚀 The feature, motivation and pitch

Can a quantized model trained with PyTorch QAT (quantization-aware training) be converted to an ONNX model?

Alternatives

No response

Additional context

No response

cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo

mruberry added the oncall: quantization (Quantization support in PyTorch) label on May 3, 2022
andrewor14 reopened this on May 6, 2022
andrewor14 (Contributor) commented on May 6, 2022

Hi Xinhua, I believe #42835 adds support for this. See the PR description for an example of how to do it. You may also find this thread useful: https://discuss.pytorch.org/t/onnx-export-of-quantized-model/76884
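
For anyone landing here, below is a minimal sketch of the eager-mode QAT-to-ONNX flow that comment points at. The toy `TinyNet` module, input shape, output file name, and opset choice are illustrative assumptions rather than anything taken from #42835; module fusion is omitted for brevity, and which quantized ops export cleanly depends on your PyTorch and opset versions (see the PR description and the linked thread).

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Toy model with QuantStub/DeQuantStub marking the quantized region."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fake-quantizes inputs during QAT
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # returns float outputs

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyNet()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
model_prepared = torch.quantization.prepare_qat(model.train())

# ... run the usual QAT fine-tuning loop on model_prepared here ...

# Swap fake-quant observers for real quantized modules, then export.
model_int8 = torch.quantization.convert(model_prepared.eval())
dummy = torch.randn(1, 3, 32, 32)  # assumed input shape
torch.onnx.export(model_int8, dummy, "qat_model.onnx", opset_version=13)
```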

HDCharles (Contributor) commented

Closing due to inactivity.
