I want to export the quantization-trained model in ONNX format, but the export fails with the following error:

```
File "/dockerdata/leobhliu/tools/anaconda3/lib/python3.7/site-packages/torch/jit/__init__.py", line 345, in
    ret_inputs.append(tuple(x.clone(memory_format=torch.preserve_format) for x in args))
RuntimeError: self.qscheme() == at::kPerTensorAffine INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1591914880026/work/aten/src/ATen/native/quantized/QTensor.cpp:158, please report a bug to PyTorch. clone for quantized Tensor only works for PerTensorAffine scheme right now
```

The following is my code:
```python
import torch
import torch.nn as nn
import numpy as np

class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(1, 1, 1)
        self.relu = torch.nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.relu(x)
        x = self.dequant(x)
        return x
```
hi @tilaba, thanks for the report. #42835 was recently landed to fix this. You can try the nightlies to get it, and it will also be going out with v1.8. Hope this helps!
Closing as this should be fixed, but please feel free to reopen if there are issues.
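Until a build with the fix is available, one possible workaround is to avoid the per-channel weight scheme altogether. This is a minimal sketch, not the maintainers' fix: the `'fbgemm'` default qconfig quantizes conv weights per output channel, and the traceback says `clone` only handles the per-tensor affine scheme, so a `QConfig` built with the per-tensor `default_weight_observer` should keep every quantized tensor in the supported scheme:

```python
import torch
import torch.nn as nn

# Sketch of a workaround (assumes eager-mode quantization APIs): use a
# per-tensor weight observer instead of the per-channel one that the
# 'fbgemm' default qconfig selects for conv weights.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(1, 1, 1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = M().eval()
# default_weight_observer observes the whole weight tensor at once
# (per-tensor), so the converted weight uses the per-tensor affine scheme.
model.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.default_observer,
    weight=torch.quantization.default_weight_observer,
)
prepared = torch.quantization.prepare(model)
prepared(torch.randn(4, 1, 4, 4))  # calibration pass
quantized = torch.quantization.convert(prepared)

# the converted conv weight is per-tensor affine, so clone() works
print(quantized.conv.weight().qscheme())
```

Whether the rest of the ONNX export path accepts the quantized model still depends on the PyTorch version, but this at least avoids the per-channel `clone` assertion.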
```python
# Eager-mode static quantization of the model above, then ONNX export
model_fp32 = M()
model_fp32.eval()
model_fp32.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model_fp32_prepared = torch.quantization.prepare(model_fp32, inplace=True)

# calibration pass with a representative input
input_fp32 = torch.randn(4, 1, 4, 4)
model_fp32_prepared(input_fp32)

model_int8 = torch.quantization.convert(model_fp32_prepared.eval(), inplace=False)
res = model_int8(input_fp32)

# this export call triggers the clone() assertion
torch.onnx.export(model_int8, input_fp32, "model_int8.onnx", verbose=False)
```
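For context (a sketch, not from the issue itself): the `'fbgemm'` default qconfig observes conv weights per output channel, so the converted weight is a per-channel quantized tensor, which is exactly the scheme the failing `clone` call rejected. A per-channel quantized tensor can be built directly to see the scheme involved:

```python
import torch

# Build a per-channel quantized tensor directly -- the same scheme the
# fbgemm default qconfig produces for conv weights. On affected PyTorch
# versions, .clone() on such a tensor raised the INTERNAL ASSERT from the
# traceback; after the fix referenced above (#42835) it succeeds.
w_fp32 = torch.randn(2, 1, 1, 1)
w_q = torch.quantize_per_channel(
    w_fp32,
    scales=torch.tensor([0.1, 0.2]),
    zero_points=torch.tensor([0, 0]),
    axis=0,            # one (scale, zero_point) pair per output channel
    dtype=torch.qint8,
)
print(w_q.qscheme())   # a per-channel scheme, not per-tensor affine
```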
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @SplitInfinity @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a @vkuzo