
Could torch with quantization-aware training support ONNX export? #52838

Closed
tilaba opened this issue Feb 25, 2021 · 1 comment

Labels
module: onnx (Related to torch.onnx) · oncall: quantization (Quantization support in PyTorch)

Comments


tilaba commented Feb 25, 2021

I want to export the quantization-aware-trained model in ONNX format, but the export fails with the following error:

    File "/dockerdata/leobhliu/tools/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 345, in
      ret_inputs.append(tuple(x.clone(memory_format=torch.preserve_format) for x in args))
    RuntimeError: self.qscheme() == at::kPerTensorAffine INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1591914880026/work/aten/src/ATen/native/quantized/QTensor.cpp:158, please report a bug to PyTorch. clone for quantized Tensor only works for PerTensorAffine scheme right now
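
(Editorial aside, not part of the original report: the assert can be reproduced in isolation, since clone() on affected builds only handled per-tensor-affine quantized tensors. A minimal sketch:)

    import torch

    # Per-channel quantized tensor, the same scheme the fbgemm default
    # qconfig produces for conv weights.
    w = torch.quantize_per_channel(
        torch.randn(2, 3),
        scales=torch.tensor([0.1, 0.1]),
        zero_points=torch.tensor([0, 0]),
        axis=0,
        dtype=torch.qint8,
    )

    # On affected builds this raises the same INTERNAL ASSERT as above;
    # per-tensor tensors from torch.quantize_per_tensor clone fine.
    w.clone()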

The following is my code:

    import torch
    import torch.nn as nn
    import numpy as np

    class M(torch.nn.Module):
        def __init__(self):
            super(M, self).__init__()
            self.quant = torch.quantization.QuantStub()
            self.conv = torch.nn.Conv2d(1, 1, 1)
            self.relu = torch.nn.ReLU()
            self.dequant = torch.quantization.DeQuantStub()

        def forward(self, x):
            x = self.quant(x)
            x = self.conv(x)
            x = self.relu(x)
            x = self.dequant(x)
            return x

    model_fp32 = M()
    model_fp32.eval()
    model_fp32.qconfig = torch.quantization.get_default_qconfig('fbgemm')

    model_fp32_prepared = torch.quantization.prepare(model_fp32, inplace=True)

    # calibrate with representative data
    input_fp32 = torch.randn(4, 1, 4, 4)
    model_fp32_prepared(input_fp32)

    model_int8 = torch.quantization.convert(model_fp32_prepared.eval(), inplace=False)

    res = model_int8(input_fp32)
    torch.onnx.export(model_int8, input_fp32, "model_int8.onnx", verbose=False)
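
(Editorial aside, not from the thread: the fbgemm default qconfig observes conv weights per-channel, which is exactly the scheme the assert rejects. The sketch below checks the weight's qscheme and shows an assumed workaround, forcing a per-tensor weight observer; the specific observer choice is illustrative.)

    # Inspect the converted conv weight's quantization scheme; with the
    # fbgemm default qconfig it is per-channel, matching the failed assert.
    w = model_int8.conv.weight()
    print(w.qscheme())  # e.g. torch.per_channel_affine

    # Assumed workaround: per-tensor weight observer before prepare().
    # (get_default_qconfig('qnnpack') is also per-tensor and may work too.)
    model_fp32.qconfig = torch.quantization.QConfig(
        activation=torch.quantization.MinMaxObserver.with_args(
            dtype=torch.quint8),
        weight=torch.quantization.MinMaxObserver.with_args(
            dtype=torch.qint8, qscheme=torch.per_tensor_symmetric),
    )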

cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @SplitInfinity @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @vkuzo

@anjali411 added labels module: onnx (Related to torch.onnx) and oncall: quantization (Quantization support in PyTorch) on Feb 25, 2021
github-actions bot added this to Need Triage in Quantization Triage on Feb 25, 2021
vkuzo (Contributor) commented Feb 26, 2021

hi @tilaba, thanks for the report. #42835 was recently landed to fix this. You can try the nightlies to get it, and it will also go out with v1.8. Hope this helps!

Closing as this should be fixed, but please feel free to reopen if there are issues.
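
(Editorial aside: nightlies of that era could typically be installed with pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html; check pytorch.org for the current command. Note also that quantized-model ONNX export at the time generally targeted the Caffe2 backend through the ATen fallback path; a hedged sketch reusing model_int8 and input_fp32 from the report:)

    # Assumed export call for quantized models in this era (Caffe2-oriented);
    # plain ONNX opset export of quantized ops was not generally supported.
    torch.onnx.export(
        model_int8,
        input_fp32,
        "model_int8.onnx",
        operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
    )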

@vkuzo vkuzo closed this as completed Feb 26, 2021
Quantization Triage automation moved this from Need Triage to Done Feb 26, 2021