
Export problem after PPL_DSP_INT8 quantization #86

Closed
Menace-Dragon opened this issue May 4, 2022 · 8 comments

Comments

@Menace-Dragon

Menace-Dragon commented May 4, 2022

In GPU mode, running RetinaFace (with a ResNet50 backbone), the quantization process completes successfully, but the export fails with TypeError: Cannot convert Resize_133 to caffe op. Debugging shows the failure is caused by the check at line 439 of ppq/parser/caffe/caffe_export_utils.py not being satisfied.
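
For reference, a minimal sketch of the flow that triggers this, assuming the standard ppq.api entry points; the input shape, file paths, and calibration data below are placeholders rather than the exact script:

```python
# Minimal sketch only; argument names follow the ppq.api entry points as I
# understand them -- the input shape, paths and dummy calibration data are
# placeholders.
import torch
from ppq import TargetPlatform
from ppq.api import quantize_onnx_model, export_ppq_graph

INPUT_SHAPE = [1, 3, 720, 720]                                   # placeholder input size
calib_dataloader = [torch.rand(INPUT_SHAPE) for _ in range(32)]  # dummy calibration batches

quantized = quantize_onnx_model(
    onnx_import_file='retinaface_r50.onnx',    # placeholder path
    calib_dataloader=calib_dataloader,
    calib_steps=32,
    input_shape=INPUT_SHAPE,
    platform=TargetPlatform.PPL_DSP_INT8,
    device='cuda')

# Export is where it fails: TypeError: Cannot convert Resize_133 to caffe op
export_ppq_graph(
    graph=quantized,
    platform=TargetPlatform.PPL_DSP_INT8,
    graph_save_to='retinaface_r50_int8',       # placeholder output prefix
    config_save_to='retinaface_r50_int8.json')
```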

@Menace-Dragon
Author

[Screenshot: Capture]
As shown in the screenshot. (And as you may have noticed, this Resize operator has an input size of 23x23 and an output size of 45x45, which is not a 2x relationship; that puzzles me as well.)

@Menace-Dragon
Author

But I took another look: the model was exported from PyTorch with torch.onnx.export at opset 11, so this should be a fairly ordinary operator. The Resize node omits some inputs, but it still looks valid (ONNXRuntime can run the model).
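
A quick way to see how Resize_133 is actually specified is to inspect its inputs in the exported graph. In opset 11 a Resize node takes (X, roi, scales, sizes); in my understanding, when PyTorch exports F.interpolate with an explicit output size, scales is left empty and sizes carries the target shape, so the output does not have to be an integer multiple of the input. A small sketch (the model path is a placeholder):

```python
# Inspect the Resize nodes of the exported ONNX model; the path is a placeholder.
import onnx
from onnx import numpy_helper

model = onnx.load('retinaface_r50.onnx')
inits = {t.name: numpy_helper.to_array(t) for t in model.graph.initializer}

for node in model.graph.node:
    if node.op_type == 'Resize':
        print(node.name, 'inputs:', list(node.input))
        # Opset 11 Resize inputs are (X, roi, scales, sizes); print the ones
        # that are stored as constant initializers.
        for name in node.input[2:]:
            if name in inits:
                print(' ', name, '=', inits[name])
```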

@ZhangZhiPku
Collaborator

As far as I know, converting an ONNX model to Caffe with ppq runs into plenty of problems, and converting from Caffe to ONNX has just as many, mainly because some of their operator definitions don't quite match... If you really want to go this route, you can try removing the check at line 439, though other problems will likely show up afterwards.

@Menace-Dragon
Author

Fair enough, hmm... I'll put this model aside for now then.

@ZhangZhiPku
Collaborator

It seems only Caffe in, Caffe out works. Are you feeding in an ONNX model and trying to export it as Caffe?

@Menace-Dragon
Author

Right, I just realized that PPL_DSP_INT8 can only export a CaffeModel. One more question: does the DSP in PPL_DSP_INT8 mean that the quantized model runs on the CPU?

@ZhangZhiPku
Collaborator

Not quite. PPL_DSP_INT8 means the model is quantized according to the PPL_DSP quantization rules, which determine the concrete quantization details for every operator. These rules apply to both the PPL_DSP and SNPE platforms.
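
In other words (my reading of the API, not an authoritative statement), the platform enum only selects the per-operator quantization rule set; where the model eventually runs is decided by the target inference engine. The enum member names below should be checked against the installed ppq version:

```python
# The platform passed to quantize_onnx_model selects a quantization rule set,
# not an execution device. Member names are assumptions to verify against
# your ppq version.
from ppq import TargetPlatform

platform = TargetPlatform.PPL_DSP_INT8    # DSP rule set, shared by PPL_DSP and SNPE
# platform = TargetPlatform.PPL_CUDA_INT8 # different rule set for the PPL CUDA backend
```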

@Menace-Dragon
Author

Ohh, got it, thanks!
