
Model output problem after quantization #5

Closed
Cliveliew opened this issue Nov 25, 2021 · 6 comments

Comments

@Cliveliew

Hello! After applying PTQ quantization to the best model you provided, I tested it on my own images. The output image values are all either 0 or 255. What could be causing this? Thanks for your help!

Cliveliew reopened this Nov 25, 2021
@Cliveliew
Author

After a small change, the output is no longer just 0/255, but there is a large quality gap between the quantized model and the float model. Is a gap this large normal?
[Attached images: img_48 (quantized output), img_48_ (original model output)]

@cxzhou95
Owner

cxzhou95 commented Nov 25, 2021 via email

@Cliveliew
Author

This is from quantizing your best.pt model and testing on a random image (the test image was not processed with the degradation model). The top image is the quantized output, and the bottom is the original model's output.
I also trained your original network on my own dataset (without loading your best.pt model, not QAT training, just plain float training). The final model works very well, but after quantization it also has problems.

@cxzhou95
Owner

cxzhou95 commented Nov 25, 2021 via email

@Cliveliew
Author

Thanks for your reply! I switched my torch version to match yours and tried your QTSQ and QAT models; they run correctly and the results are good. I suspect something is missing from my quantization conversion code, since I've gone through your code and various blog posts without finding the problem. Could I trouble you to check whether the conversion code has an issue?
Here is my conversion code:
import torch
from model import XLSR_quantization  # model class from this repo (import path may differ)

torch.manual_seed(191009)
device = 'cpu'

# Load the float checkpoint into the quantization-ready model
state_dict = torch.load('exp/OneCyclicLR/best.pt', map_location=device)
model = XLSR_quantization(3)
model.load_state_dict(state_dict)
model.to(device)
model.fuse_model()

# Attach the qconfig, insert observers, then convert to int8
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
quantized_model = torch.quantization.convert(model.eval(), inplace=False)
quantized_model.eval()

@Cliveliew
Author

I think I've found the problem: it seems that after prepare() you need to run some data through the model (so the observers can calibrate).
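
For anyone hitting the same issue, a minimal sketch of the missing calibration step (the loop count, input shape, and random inputs are placeholders; in practice you would feed real low-resolution images from your dataset):

import torch

# After torch.quantization.prepare(), the inserted observers must see
# representative data so they can record activation ranges. Skipping this
# leaves convert() with meaningless scale/zero-point values, which matches
# the saturated 0/255 outputs described above.
model.eval()
with torch.no_grad():
    for _ in range(32):
        # Placeholder calibration input: replace with real LR images
        dummy = torch.rand(1, 3, 64, 64)
        model(dummy)

# Only after calibration does convert() produce a usable int8 model
quantized_model = torch.quantization.convert(model.eval(), inplace=False)
quantized_model.eval()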
