

How do I use quantization-aware training (QAT)? #215

Closed
WZMIAOMIAO opened this issue Aug 24, 2022 · 4 comments

Comments

@WZMIAOMIAO

Hello, and thank you very much for this excellent open-source project. I've recently wanted to learn it, and after going through the documentation I couldn't find anything on quantization-aware training (I may simply have missed it). If I did miss it, I'd appreciate a pointer to where it is in the docs; if it really isn't there, I'd like to hear your thoughts on the topic. Thanks.

@hshen14

hshen14 commented Aug 24, 2022

You may also be interested in this one: https://github.com/intel/neural-compressor/tree/master/examples. Just FYI.

@Jzz24
Collaborator

Jzz24 commented Aug 24, 2022

Currently, PPQ only supports offline, post-training quantization (PTQ). However, it integrates several "quantization training" algorithms such as LSQ, BRECQ, and advanced optimize. Unlike QAT, these algorithms only perform blockwise finetuning on the calibration set, without backpropagating gradients through the entire model. Their optimization results are quite good, and they have helped us recover accuracy on many models that would otherwise degrade.
Reasons QAT is not supported yet:

  1. Data privacy: the full dataset is often unavailable for QAT training. For rapid, large-scale deployment needs, PTQ is also more efficient.
  2. Most current hardware inference libraries only support 8-bit quantization, and PTQ satisfies the vast majority of such needs.
  3. For "hard to quantize" models that lose accuracy, the more important task is locating the source of the quantization error (backend graph fusion? joint operator quantization? outliers in activations or weights?). Otherwise, blindly applying QAT may yield disappointing results. PPQ has invested heavily in simulating graph fusion and joint quantization; we hope it helps you.

@WZMIAOMIAO
Author


thanks...

@WZMIAOMIAO
Author


Thank you very much for your reply; I'll study the project carefully.
