Export problem after PPL_DSP_INT8 quantization #86
But I took another look: this model was exported from PyTorch via torch.onnx.export at opset 11, so the operators should be fairly standard. The Resize op, although written with some inputs omitted, should be fine (ONNXRuntime can run this model).
As far as I know, converting an ONNX model processed by PPQ to Caffe runs into plenty of problems, and converting from Caffe to ONNX has just as many, mainly because some of their operator definitions are not quite the same. If you really want to go this route, you can try removing the check at line 439, though other problems will likely show up after that.
That makes sense. Hmm, I'll set this model aside for now then.
As far as I can tell, only Caffe-in, Caffe-out works. Are you feeding in an ONNX model and then trying to export Caffe?
Exactly; I just realized that PPL_DSP_INT8 can only export a CaffeModel. One more question, if you don't mind: does the DSP in PPL_DSP_INT8 mean that the quantized model runs on the CPU?
Not quite. PPL_DSP_INT8 means the model is quantized according to the PPL_DSP quantization rule, which determines the exact quantization details of every operator. This rule applies to both the PPL_DSP and SNPE platforms.
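The relationship described above (one quantization rule shared by several target platforms, each with a fixed export format) can be sketched in plain Python. The table below encodes only what this thread states; the names and structure are illustrative, not PPQ's real internals.

```python
# Illustrative mapping of quantization platforms to the rule they follow
# and the model format they export. Entries reflect only what is stated
# in this thread; names are hypothetical, not PPQ's actual API.
PLATFORM_INFO = {
    'PPL_DSP_INT8': {'quant_rule': 'PPL_DSP', 'export_format': 'caffemodel'},
    # Assumption from the thread: SNPE follows the same PPL_DSP rule.
    'SNPE_INT8':    {'quant_rule': 'PPL_DSP', 'export_format': 'caffemodel'},
}

def quant_rule_of(platform: str) -> str:
    """Look up which quantization rule a platform follows."""
    return PLATFORM_INFO[platform]['quant_rule']

print(quant_rule_of('PPL_DSP_IN8'.replace('IN8', 'INT8')))  # PPL_DSP
```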
Ohh, got it. Thanks!
In GPU mode, when running RetinaFace (with a ResNet50 backbone), quantization completes successfully, but export fails with TypeError: Cannot convert Resize_133 to caffe op. Debugging shows the cause is a failed check at line 439 of ppq/parser/caffe/caffe_export_utils.py.
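The exact condition at line 439 is not quoted in this thread, but the failure mode is recognizable: an opset-11 Resize node has four inputs (X, roi, scales, sizes), and torch.onnx.export commonly leaves the unused ones as empty-string placeholders, which a strict exporter guard can reject. A minimal sketch of such a guard (all names and conditions hypothetical, not PPQ's actual code):

```python
# Hypothetical sketch of the kind of guard a Caffe exporter might apply
# to an ONNX opset-11 Resize node before converting it to a Caffe layer.
# Names and conditions are illustrative only, not PPQ's real check.

SUPPORTED_MODES = {'nearest', 'linear'}  # Caffe-style resize layers cover few modes

def can_convert_resize(attributes: dict, input_names: list) -> bool:
    """Return True if this Resize node maps onto a supported Caffe layer."""
    mode = attributes.get('mode', 'nearest')
    if mode not in SUPPORTED_MODES:
        return False
    # opset-11 Resize carries inputs (X, roi, scales, sizes). A strict
    # check that expects a non-empty 'scales' input rejects nodes where
    # the exporter supplied 'sizes' instead and left 'scales' empty.
    return len(input_names) == 4 and input_names[2] != ''

# A Resize exported by torch.onnx.export with target sizes typically
# leaves 'roi' and 'scales' as empty placeholders, tripping the guard:
node_inputs = ['input_tensor', '', '', 'target_sizes']
print(can_convert_resize({'mode': 'nearest'}, node_inputs))  # False
```

Removing such a guard (as suggested above for line 439) lets the node through, but the converted layer may still be wrong if the downstream Caffe definition cannot express the node's actual scaling semantics.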