squeezenet segment fault #328
Comments
squeezenet is not the only one: other networks and mobilenetTest.out / backendTest.out segfault as well, but benchmark.out works fine. Toolchain: aarch64-linux gcc/g++ 5.4, board: RK3399.

gdb --args ./mobilenetTest.out ../benchmark/models/MobileNetV2_224.mnn 224x224.jpg
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1
Thread 1 "mobilenetTest.o" received signal SIGSEGV, Segmentation fault.
The models under benchmark/ are only meant for performance testing, i.e. for running benchmark.out; they cannot be used for anything else.
./quantized.out models/SqueezeNetV1.0.mnn models/squeeneti8.out models/squeezei8.json
modelFile=s models/SqueezeNetV1.0.mnn in main, 21
preTreatConfig=s models/squeezei8.json in main, 22
dstFile=s models/squeeneti8.out in main, 23
[09:31:22] /home/firefly/git/MNN/tools/quantization/quantized.cpp:50: calibrate the feature according to KL-divergence and quantize model...
[09:31:22] /home/firefly/git/MNN/tools/quantization/calibration.cpp:102: use feature quantization method: KL
[09:31:22] /home/firefly/git/MNN/tools/quantization/calibration.cpp:103: use weight quantization method: MAX_ABS
[09:31:22] /home/firefly/git/MNN/tools/quantization/Helper.cpp:61: used image num: 2
Segmentation fault (core dumped)
GDB info:
Starting program: /home/firefly/git/MNN/buildq/quantized.out models/SqueezeNetV1.0.mnn models/squeeneti8.out models/squeezei8.json
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".
[09:14:06] /home/firefly/git/MNN/tools/quantization/quantized.cpp:21: >>> modelFile: models/SqueezeNetV1.0.mnn
[09:14:06] /home/firefly/git/MNN/tools/quantization/quantized.cpp:22: >>> preTreatConfig: models/squeezei8.json
[09:14:06] /home/firefly/git/MNN/tools/quantization/quantized.cpp:23: >>> dstFile: models/squeeneti8.out
[09:14:06] /home/firefly/git/MNN/tools/quantization/quantized.cpp:50: Calibrate the feature and quantize model...
[09:14:06] /home/firefly/git/MNN/tools/quantization/calibration.cpp:107: Use feature quantization method: KL
[09:14:06] /home/firefly/git/MNN/tools/quantization/calibration.cpp:108: Use weight quantization method: MAX_ABS
[09:14:06] /home/firefly/git/MNN/tools/quantization/Helper.cpp:100: used image num: 1
[New Thread 0x7fb78a01e0 (LWP 11516)]
[New Thread 0x7fb70a01e0 (LWP 11517)]
[New Thread 0x7fb68a01e0 (LWP 11518)]
Thread 1 "quantized.out" received signal SIGSEGV, Segmentation fault.
0x0000000000512f24 in flatbuffers::Vector::size (this=0x0) at /home/firefly/git/MNN/tools/quantization/../../tools/converter/source/IR/flatbuffers/flatbuffers.h:211
211 uoffset_t size() const { return EndianScalar(length_); }
(gdb) bt
#0 0x0000000000512f24 in flatbuffers::Vector::size (this=0x0) at /home/firefly/git/MNN/tools/quantization/../../tools/converter/source/IR/flatbuffers/flatbuffers.h:211
#1 0x0000007fb7f3c8c0 in MNN::ConvolutionFloatFactory::create (inputs=std::vector of length 1, capacity 1 = {...}, outputs=std::vector of length 1, capacity 1 = {...}, op=0x568e80,
backend=0x55c250) at /home/firefly/git/MNN/source/backend/cpu/compute/ConvolutionFloatFactory.cpp:74
#2 0x0000007fb7f2b374 in MNN::ConvolutionFactory::onCreate (this=0x554c40, inputs=std::vector of length 1, capacity 1 = {...}, outputs=std::vector of length 1, capacity 1 = {...},
op=0x568e80, backend=0x55c250) at /home/firefly/git/MNN/source/backend/cpu/CPUConvolution.cpp:74
#3 0x0000007fb7f22df8 in MNN::CPUBackend::onCreate (this=0x55c250, inputs=std::vector of length 1, capacity 1 = {...}, outputs=std::vector of length 1, capacity 1 = {...}, op=0x568e80)
at /home/firefly/git/MNN/source/backend/cpu/CPUBackend.cpp:178
#4 0x0000007fb7e97a8c in MNN::Pipeline::Unit::_createExecution (this=0x586770, bn=0x55c250, cpuBn=0x55c250) at /home/firefly/git/MNN/source/core/Pipeline.cpp:96
#5 0x0000007fb7e982a0 in MNN::Pipeline::Unit::prepare (this=0x586770, bn=0x55c250, cpuBn=0x55c250) at /home/firefly/git/MNN/source/core/Pipeline.cpp:236
#6 0x0000007fb7e98934 in MNN::Pipeline::prepare (this=0x585530) at /home/firefly/git/MNN/source/core/Pipeline.cpp:298
#7 0x0000007fb7e8a388 in MNN::Session::resize (this=0x588c50) at /home/firefly/git/MNN/source/core/Session.cpp:128
#8 0x0000007fb7e7fef8 in MNN::Interpreter::createSession (this=0x55c130, config=...) at /home/firefly/git/MNN/source/core/Interpreter.cpp:121
#9 0x00000000004bb048 in Calibration::_initMNNSession (this=0x567500, modelBuffer=0x55db00 "\030", bufferSize=7788, channels=3)
at /home/firefly/git/MNN/tools/quantization/calibration.cpp:123
#10 0x00000000004bae70 in Calibration::Calibration (this=0x567500, model=0x5631e0, modelBuffer=0x55db00 "\030", bufferSize=7788, configPath="models/squeezei8.json")
at /home/firefly/git/MNN/tools/quantization/calibration.cpp:116
#11 0x0000000000511d78 in main (argc=4, argv=0x7ffffff308) at /home/firefly/git/MNN/tools/quantization/quantized.cpp:52
(gdb) vim /home/firefly/git/MNN/tools/quantization/../../tools/converter/source/IR/flatbuffers/flatbuffers.h