
Yolov8 model quantization completes without errors, but inference with the quantized model fails; the MNN Workbench cannot open the model and Netron shows the weights are empty #2763

Closed
LossNAN opened this issue Feb 20, 2024 · 10 comments
Labels
bug Something isn't working

Comments

LossNAN commented Feb 20, 2024

Platform (include the target platform as well if cross-compiling):

mac (M1)

GitHub version:

If you downloaded the source as a ZIP, provide the download date and the git revision from the archive's comment section (obtainable by running `7z l PATH/TO/ZIP` and searching for `Comment` in the output, e.g. `Comment = bc80b11110cd440aacdabbf59658d630527a7f2b`). If you used `git clone`, provide the commit id from the first line of `git log`.

Compiling method:

cmake -DMNN_BUILD_CONVERTER=ON -DMNN_BUILD_TOOL=ON -DMNN_BUILD_BENCHMARK=ON -DMNN_BUILD_QUANTOOLS=ON -DMNN_BUILD_OPENCV=ON -DMNN_IMGCODECS=ON ..
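For context, a sketch of the out-of-source build these flags imply (the `mkdir`/`make` steps and the job count are assumptions, not taken from the report):

```shell
# Out-of-source CMake build of MNN with the converter, tools, benchmark,
# quantization tools, and OpenCV/imgcodecs modules enabled.
mkdir -p build && cd build
cmake -DMNN_BUILD_CONVERTER=ON \
      -DMNN_BUILD_TOOL=ON \
      -DMNN_BUILD_BENCHMARK=ON \
      -DMNN_BUILD_QUANTOOLS=ON \
      -DMNN_BUILD_OPENCV=ON \
      -DMNN_IMGCODECS=ON ..
make -j4   # produces quantized.out, MNNConvert, MNNV2Basic.out, etc.
```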

Build log:

[ 99%] Linking CXX executable quantized.out
[ 99%] Built target quantized.out
[ 99%] Linking CXX executable ../../runTrainDemo.out
[ 99%] Built target runTrainDemo.out
1 warning generated.
[ 99%] Linking CXX shared library libMNNConvertDeps.dylib
[ 99%] Built target MNNConvertDeps
[100%] Building CXX object tools/converter/CMakeFiles/TestPassManager.dir/source/TestPassManager.cpp.o
[100%] Building CXX object tools/converter/CMakeFiles/MNNRevert2Buffer.dir/source/MNNRevert2Buffer.cpp.o
[100%] Building CXX object tools/converter/CMakeFiles/MNNDump2Json.dir/source/MNNDump2Json.cpp.o
[100%] Building CXX object tools/converter/CMakeFiles/MNNConvert.dir/source/MNNConverter.cpp.o
[100%] Building CXX object tools/converter/CMakeFiles/TestConvertResult.dir/source/TestConvertResult.cpp.o
[100%] Linking CXX executable ../../MNNDump2Json
[100%] Linking CXX executable ../../MNNRevert2Buffer
[100%] Linking CXX executable ../../TestConvertResult
[100%] Linking CXX executable ../../MNNConvert
[100%] Built target MNNDump2Json
[100%] Built target MNNRevert2Buffer
[100%] Built target TestConvertResult
[100%] Built target MNNConvert
[100%] Linking CXX executable ../../TestPassManager
[100%] Built target TestPassManager

Quantization config file:

{
    "format": "RGB",
    "mean": [
        0.0,
        0.0,
        0.0
    ],
    "normal": [
        255.0,
        255.0,
        255.0
    ],
    "width": 640,
    "height": 640,
    "path": "../images_data/",
    "used_image_num": 200,
    "feature_quantize_method": "KL",
    "weight_quantize_method": "MAX_ABS",
    "model": "../models/yolov8l_640.mnn"
}
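As an aside, a minimal sketch of the per-channel pre-treatment the `mean`/`normal` fields describe, assuming MNN's usual convention of `dst = (src - mean) * normal` (the convention and the sample pixel values are assumptions, not taken from this report):

```python
# Pre-treatment implied by the config above, assuming MNN applies
# dst = (src - mean) * normal to each RGB channel independently.
mean = [0.0, 0.0, 0.0]
normal = [255.0, 255.0, 255.0]

def pretreat(pixel):
    """Apply the assumed (src - mean) * normal pre-treatment to one RGB pixel."""
    return [(p - m) * n for p, m, n in zip(pixel, mean, normal)]

print(pretreat([1.0, 0.5, 0.0]))  # [255.0, 127.5, 0.0]
```

Under this convention a `normal` of 255.0 multiplies channel values by 255, while scaling 8-bit RGB into [0, 1] would instead use a `normal` of 1/255 ≈ 0.00392, so it may be worth double-checking which convention the quantization tool expects.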

Quantization log:

(base) zhikunlin@ZhikundeMacBook-Pro build % ./quantized.out ../models/yolov8l_640.mnn ../models/yolov8l_640_quant.mnn yolov8l_quant.json
[16:52:57] /Users/zhikunlin/Desktop/MNN/tools/quantization/calibration.cpp:1268: >>> modelFile: ../models/yolov8l_640.mnn
[16:52:57] /Users/zhikunlin/Desktop/MNN/tools/quantization/calibration.cpp:1269: >>> preTreatConfig: yolov8l_quant.json
[16:52:57] /Users/zhikunlin/Desktop/MNN/tools/quantization/calibration.cpp:1270: >>> dstFile: ../models/yolov8l_640_quant.mnn
[16:52:57] /Users/zhikunlin/Desktop/MNN/tools/quantization/calibration.cpp:1298: Calibrate the feature and quantize model...
[16:52:57] /Users/zhikunlin/Desktop/MNN/tools/quantization/calibration.cpp:159: Use feature quantization method: KL
[16:52:57] /Users/zhikunlin/Desktop/MNN/tools/quantization/calibration.cpp:160: Use weight quantization method: MAX_ABS
[16:52:57] /Users/zhikunlin/Desktop/MNN/tools/quantization/calibration.cpp:180: feature_clamp_value: 127
[16:52:57] /Users/zhikunlin/Desktop/MNN/tools/quantization/calibration.cpp:181: weight_clamp_value: 127
hw.cpufamily: 458787763 , size = 4
The device support i8sdot:1, support fp16:1, support i8mm: 0
[16:52:57] /Users/zhikunlin/Desktop/MNN/tools/quantization/Helper.cpp:111: used image num: 200
[16:52:59] /Users/zhikunlin/Desktop/MNN/tools/quantization/calibration.cpp:666: fake quant weights done.
ComputeFeatureRange: 100.00 %
CollectFeatureDistribution: 100.00 %
[16:59:33] /Users/zhikunlin/Desktop/MNN/tools/quantization/calibration.cpp:1306: Quantize model done!

Inference with the float model:

(base) zhikunlin@ZhikundeMacBook-Pro build % ./yolov8_demo yolov8l_640.mnn 1.jpg 
hw.cpufamily: 458787763 , size = 4
The device support i8sdot:1, support fp16:1, support i8mm: 0
Load Cache file error.
### box: {198.200134, 559.483887, 846.837708, 893.445435}, class_idx: 1, score: 0.957126
### box: {487.433807, 443.627838, 856.631470, 687.042358}, class_idx: 1, score: 0.941436
### box: {12.310919, 392.199829, 186.548264, 530.937195}, class_idx: 1, score: 0.940869
### box: {224.537567, 370.929993, 423.085419, 551.752930}, class_idx: 1, score: 0.939305
### box: {971.428528, 469.405518, 1032.574707, 638.214478}, class_idx: 0, score: 0.878514
### box: {294.351196, 302.742859, 377.328491, 372.653992}, class_idx: 5, score: 0.810854
### box: {1854.038086, 449.519897, 1898.463867, 585.375183}, class_idx: 0, score: 0.731948
### box: {1617.999756, 450.079376, 1665.884766, 594.068970}, class_idx: 0, score: 0.717773
### box: {175.525665, 285.320923, 202.085388, 326.254883}, class_idx: 5, score: 0.695722
### box: {247.576767, 264.756287, 263.744812, 306.575226}, class_idx: 0, score: 0.604375
### box: {208.743011, 268.913757, 236.528748, 307.761871}, class_idx: 5, score: 0.540824
### box: {-0.082666, 337.742737, 68.722839, 436.501007}, class_idx: 1, score: 0.511465
### box: {5.401082, 305.923645, 94.517921, 388.969910}, class_idx: 1, score: 0.431312
### box: {411.817444, 390.059052, 433.946106, 486.991608}, class_idx: 0, score: 0.369734
### box: {8.893124, 267.331665, 111.605499, 305.032623}, class_idx: 1, score: 0.364433
### box: {287.154480, 285.582336, 304.579041, 334.789917}, class_idx: 0, score: 0.339148
result image write to `res.jpg`.

Inference with the quantized model:

(base) zhikunlin@ZhikundeMacBook-Pro build % ./yolov8_demo yolov8l_640_quant.mnn 1.jpg 
hw.cpufamily: 458787763 , size = 4
The device support i8sdot:1, support fp16:1, support i8mm: 0
Load Cache file error.
Create execution error : 101
code=2 in onForward, 372 
zsh: segmentation fault  ./yolov8_demo yolov8l_640_quant.mnn 1.jpg

The MNN Workbench cannot open the quantized model and shows the following error:

(screenshot)

Netron can open it, but the weights are empty:

(screenshot)

At this point it looks like something goes wrong during model quantization, even though the quantization tool reports success. Hoping for a prompt reply, thanks 🙏

jxt1234 (Collaborator) commented Feb 20, 2024

Try testing with MNNV2Basic.

It is normal for the weights to be empty in Netron; they are replaced by a buffer.

jxt1234 (Collaborator) commented Feb 20, 2024

Also, check whether the MNN library used by yolov8_demo was built from the latest code.

jxt1234 added the User label Feb 20, 2024
LossNAN (Author) commented Feb 21, 2024

> Try testing with MNNV2Basic.
>
> It is normal for the weights to be empty in Netron; they are replaced by a buffer.

The same error occurs:

(screenshot)

LossNAN (Author) commented Feb 21, 2024

> Also, check whether the MNN library used by yolov8_demo was built from the latest code.

@jxt1234 I copied over the library files built from the latest code; still the same error.

jxt1234 (Collaborator) commented Feb 21, 2024

Which version of MNN is this? Please send that yolov8_quant.mnn over for a look.

LossNAN (Author) commented Feb 21, 2024

> Which version of MNN is this? Please send that yolov8_quant.mnn over for a look.

Version:
commit 784017d

The mnn file has been sent to your email:
120543985@qq.com

jxt1234 (Collaborator) commented Feb 21, 2024

It looks like the unary-op quantization went wrong; tracking it down.

jxt1234 added the bug label and removed the User label Feb 21, 2024
LossNAN (Author) commented Feb 22, 2024

> It looks like the unary-op quantization went wrong; tracking it down.

The quantized yolov5 model also fails. The model before quantization runs normally, but after quantization the following error occurs:

(base) zhikunlin@192 build % ./MNNV2Basic.out ../models/yolov5s_416_quant.mnn 10 0 0 4 1x3x416x416
Use extra forward type: 0

Open Model ../models/yolov5s_416_quant.mnn
Load Cache file error.
hw.cpufamily: 458787763 , size = 4
The device support i8sdot:1, support fp16:1, support i8mm: 0
Create execution error : 101
test_main, 282, cost time: 1.435000 ms
Resize error, can't execute MNN

@jxt1234
The error is the same as with the quantized yolov8 model. The yolov5 mnn files from before and after quantization have been sent to your email; hoping for a prompt reply.

v0jiuqi (Collaborator) commented Feb 27, 2024

Fixed; waiting for the next release.

jxt1234 (Collaborator) commented Mar 1, 2024

Fixed in 2.8.2; the model needs to be re-quantized.
