No output when running the program after deployment on Jetson Nano #383

Open
zxm97 opened this issue Oct 23, 2023 · 5 comments

Comments

zxm97 commented Oct 23, 2023

OpenCV version: 4.5.2
MNN version: 2.4.0, built with the CMake options:
cmake -D CMAKE_BUILD_TYPE=Release \
      -D MNN_CUDA=ON ..

I changed HyperLPR-master\cpp\src\nn_module\mnn_adapter.h (line 21) to is_use_cuda = true.
After building the shared library and the demo, running
./PlateRecDemo ../hyperlpr3/resource/models/r2_mobile ../hyperlpr3/resource/images/test_img.jpg
prints only a single line: The device support dot:0, support fp16:0, support i8mm: 0.
I tested with several different images; results.plate_size is always 0.
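
For reference, here is a minimal sketch (hypothetical, not the project's actual adapter code) of how a flag like is_use_cuda is typically mapped onto MNN's ScheduleConfig. Note that if libMNN was built without MNN_CUDA, or the CUDA runtime cannot be loaded, createSession() silently falls back to backupType, so setting the flag to true does not by itself prove that the CUDA backend is actually in use.

```cpp
// Hypothetical sketch, not HyperLPR's real mnn_adapter.h code.
// Shows the usual way a boolean "use CUDA" switch is wired into MNN.
#include <MNN/Interpreter.hpp>

MNN::Session* createSessionMaybeCuda(MNN::Interpreter* net, bool use_cuda) {
    MNN::ScheduleConfig config;
    config.numThread  = 1;
    config.type       = use_cuda ? MNN_FORWARD_CUDA : MNN_FORWARD_CPU;
    config.backupType = MNN_FORWARD_CPU;  // MNN falls back here if the requested backend is unavailable
    return net->createSession(config);
}
```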


qianxiaoer commented Oct 23, 2023 via email

zxm97 (Author) commented Oct 24, 2023

After building MNN 2.7.1 (with MNN_CUDA=ON), I rebuilt this project. The changes I made to CMakeLists.txt:

option( LINUX_FETCH_MNN "Fetch and build MNN from git" OFF )
option( LINUX_USE_3RDPARTY_OPENCV "Linux platform using pre-compiled OpenCV library from 3rdparty_hyper_inspire_op" OFF)
option( BUILD_SHARE "Build shared libs" ON )
option( BUILD_SAMPLES "Build samples demo" OFF )
option( BUILD_TEST "Build unit-test exec" OFF )
...
if (LINUX_FETCH_MNN)
...
else()
# MNN Third party dependence
set(MNN_INCLUDE_DIRS ${PATH_3RDPARTY}/MNN-2.7.1/${PLAT}/include)
set(MNN_LIBS ${PATH_3RDPARTY}/MNN-2.7.1/${PLAT}/lib)
endif()

Whether is_use_cuda in HyperLPR-master\cpp\src\nn_module\mnn_adapter.h (line 21) is set to false or to true, nothing is detected after rebuilding.

Using the MNN-2.2.0 from the downloaded third-party package, nothing is detected either.
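
To separate a backend or linking problem from a detection-threshold or preprocessing problem, it can help to run one of the .mnn models directly through the MNN C++ API on the CPU backend and confirm that a session can be created and an output tensor is produced at all. Below is a minimal stand-alone sketch, assuming a float input and using only MNN API calls; the file name and the zero-filled input are placeholders, not the project's real pipeline.

```cpp
// check_mnn_model.cpp -- hypothetical sanity check, not part of HyperLPR.
// Loads a single .mnn model on the CPU backend, feeds a zero-filled input,
// runs one inference and prints the input/output tensor shapes.
#include <MNN/Interpreter.hpp>
#include <MNN/Tensor.hpp>
#include <algorithm>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::printf("usage: %s <model.mnn>\n", argv[0]);
        return 1;
    }
    auto* net = MNN::Interpreter::createFromFile(argv[1]);
    if (net == nullptr) {
        std::printf("failed to load %s\n", argv[1]);
        return 1;
    }
    MNN::ScheduleConfig config;
    config.type      = MNN_FORWARD_CPU;  // force CPU so CUDA build issues are excluded
    config.numThread = 4;
    auto* session = net->createSession(config);
    auto* input   = net->getSessionInput(session, nullptr);

    // Zero-fill the input through a host-side copy; the content is irrelevant,
    // we only want to see that inference runs and yields a sane output shape.
    MNN::Tensor hostInput(input, input->getDimensionType());
    std::fill(hostInput.host<float>(), hostInput.host<float>() + hostInput.elementSize(), 0.0f);
    input->copyFromHostTensor(&hostInput);

    net->runSession(session);

    auto* output = net->getSessionOutput(session, nullptr);
    MNN::Tensor hostOutput(output, output->getDimensionType());
    output->copyToHostTensor(&hostOutput);

    std::printf("input shape: ");
    for (int d : input->shape()) std::printf("%d ", d);
    std::printf("\noutput shape: ");
    for (int d : hostOutput.shape()) std::printf("%d ", d);
    std::printf("\n");

    MNN::Interpreter::destroy(net);
    return 0;
}
```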

zxm97 (Author) commented Oct 24, 2023

nano@nano-desktop:~/HyperLPR-master/build_test$ ./UnitTest
[test_classification.cpp][C_A_T_C_H_T_E_S_T_0][17]: [UnitTest]->Classification Model
The device support i8sdot:0, support fp16:0, support i8mm: 0

UnitTest is a Catch v2.13.9 host application.
Run with -? for options

-------------------------------------------------------------------------------
test_Classification
  test_ClassificationModelPredict
-------------------------------------------------------------------------------
/home/nano/HyperLPR-master/cpp/test/nn_module/test_classification.cpp:40
...............................................................................

/home/nano/HyperLPR-master/cpp/test/nn_module/test_classification.cpp:49: FAILED:
  CHECK( clsEngine.getMOutputMaxConfidence() == Approx(predict_results_confidence[i]).epsilon(0.001) )
with expansion:
  0.87586f == Approx( 0.9999293089 )

/home/nano/HyperLPR-master/cpp/test/nn_module/test_classification.cpp:48: FAILED:
  CHECK( PlateColor(clsEngine.getMOutputColor()) == predict_results_cls[i] )
with expansion:
  0 == 2

/home/nano/HyperLPR-master/cpp/test/nn_module/test_classification.cpp:49: FAILED:
  CHECK( clsEngine.getMOutputMaxConfidence() == Approx(predict_results_confidence[i]).epsilon(0.001) )
with expansion:
  0.99349f == Approx( 0.8975974917 )

/home/nano/HyperLPR-master/cpp/test/nn_module/test_classification.cpp:48: FAILED:
  CHECK( PlateColor(clsEngine.getMOutputColor()) == predict_results_cls[i] )
with expansion:
  0 == 1

/home/nano/HyperLPR-master/cpp/test/nn_module/test_classification.cpp:49: FAILED:
  CHECK( clsEngine.getMOutputMaxConfidence() == Approx(predict_results_confidence[i]).epsilon(0.001) )
with expansion:
  0.9302f == Approx( 0.9997951984 )

===============================================================================
[test_detection.cpp][C_A_T_C_H_T_E_S_T_0][15]: [UnitTest]->Detect Model
[test_detection.cpp][C_A_T_C_H_T_E_S_T_0][25]: Detect Model SplitModel
-------------------------------------------------------------------------------
test_Detection
  test_SplitDetectionSplitModel
-------------------------------------------------------------------------------
/home/nano/HyperLPR-master/cpp/test/nn_module/test_detection.cpp:24
...............................................................................

/home/nano/HyperLPR-master/cpp/test/nn_module/test_detection.cpp:32: FAILED:
  CHECK( result.size() == 1 )
with expansion:
  0 == 1

/home/nano/HyperLPR-master/cpp/test/nn_module/test_detection.cpp:32: FAILED:
  {Unknown expression after the reported line}
due to a fatal error condition:
  SIGSEGV - Segmentation violation signal

===============================================================================
test cases:  2 |  0 passed | 2 failed
assertions: 26 | 19 passed | 7 failed

Segmentation fault (core dumped)

zxm97 (Author) commented Oct 24, 2023

Results of the MNN 2.7.1 unit tests:

nano@nano-desktop:~/MNN-2.7.1/build$ ./run_test.out
running core/auto_storage.
running engine/backend/copy_buffer_float.
The device support i8sdot:0, support fp16:0, support i8mm: 0
Test 0 Backend for 0
res=1 in run, 672

========= check nchwTonhwc result ! =========
res=1 in run, 674

========= check nhwc_2_NC4HW4_2_nhwc_float result ! =========
NC4HW4 -> nhwc !
res=1 in run, 676
res=1 in run, 678
res=1 in run, 680
res=1 in run, 682
Test 0 Backend for 1
res=1 in run, 672

========= check nchwTonhwc result ! =========
res=1 in run, 674

========= check nhwc_2_NC4HW4_2_nhwc_float result ! =========
NC4HW4 -> nhwc !
res=1 in run, 676
res=1 in run, 678
res=1 in run, 680
res=1 in run, 682
Test 0 Backend for 2
res=1 in run, 672

========= check nchwTonhwc result ! =========
res=1 in run, 674

========= check nhwc_2_NC4HW4_2_nhwc_float result ! =========
NC4HW4 -> nhwc !
res=1 in run, 676
res=1 in run, 678
res=1 in run, 680
res=1 in run, 682
Test 2 Backend for 0
res=1 in run, 672

========= check nchwTonhwc result ! =========
res=1 in run, 674

========= check nhwc_2_NC4HW4_2_nhwc_float result ! =========
NC4HW4 -> nhwc !
res=1 in run, 676
res=1 in run, 678
res=1 in run, 680
res=1 in run, 682
Test 2 Backend for 1
res=1 in run, 672

========= check nchwTonhwc result ! =========
res=1 in run, 674

========= check nhwc_2_NC4HW4_2_nhwc_float result ! =========
NC4HW4 -> nhwc !
res=1 in run, 676
res=1 in run, 678
res=1 in run, 680
res=1 in run, 682
Test 2 Backend for 2
res=1 in run, 672

========= check nchwTonhwc result ! =========
res=1 in run, 674

========= check nhwc_2_NC4HW4_2_nhwc_float result ! =========
NC4HW4 -> nhwc !
res=1 in run, 676
res=1 in run, 678
res=1 in run, 680
res=1 in run, 682
running engine/backend/copy_buffer_cpu.
Test 0 Backend for 0

========= check NC4HW4_2_NC4HW4_IntType result ! =========

========= check NC4HW4_2_NC4HW4_IntType result ! =========

========= check NC4HW4_2_NC4HW4_IntType result ! =========

========= check nhwc_2_NC4HW4_2_nhwc_inttype result ! =========

========= check nhwc_2_NC4HW4_2_nhwc_inttype result ! =========

========= check nhwc_2_NC4HW4_2_nhwc_inttype result ! =========
Test 0 Backend for 1

========= check NC4HW4_2_NC4HW4_IntType result ! =========

========= check NC4HW4_2_NC4HW4_IntType result ! =========

========= check NC4HW4_2_NC4HW4_IntType result ! =========

========= check nhwc_2_NC4HW4_2_nhwc_inttype result ! =========

========= check nhwc_2_NC4HW4_2_nhwc_inttype result ! =========

========= check nhwc_2_NC4HW4_2_nhwc_inttype result ! =========
Test 0 Backend for 2

========= check NC4HW4_2_NC4HW4_IntType result ! =========

========= check NC4HW4_2_NC4HW4_IntType result ! =========

========= check NC4HW4_2_NC4HW4_IntType result ! =========

========= check nhwc_2_NC4HW4_2_nhwc_inttype result ! =========

========= check nhwc_2_NC4HW4_2_nhwc_inttype result ! =========

========= check nhwc_2_NC4HW4_2_nhwc_inttype result ! =========
running core/buffer_allocator.
BufferAllocator total size : 27 B, 0.000026 M
StaticAllocator total size : 0 B, 0.000000 M
BufferAllocator total size : 1189085472 B, 1134.000244 M
StaticAllocator total size : 0 B, 0.000000 M
BufferAllocator total size : 2776896 B, 2.648254 M
StaticAllocator total size : 0 B, 0.000000 M
running core/callback.
running core/idst.
running core/memory_utils.
running core/regionfuse.
running core/tensor.
running core/tensor_utils.
running core/threadpool.
workIndex=0 in operator(), 26
workIndex=1 in operator(), 26
workIndex=-1 in operator(), 26
index=0 in operator(), 29
index=0 in operator(), 29
index=0 in operator(), 29
index=4 in operator(), 29
index=4 in operator(), 29
index=8 in operator(), 29
index=8 in operator(), 29
workIndex=-1 in operator(), 26
index=0 in operator(), 29
index=1 in operator(), 29
index=2 in operator(), 29
index=3 in operator(), 29
index=4 in operator(), 29
index=5 in operator(), 29
index=6 in operator(), 29
index=7 in operator(), 29
index=8 in operator(), 29
index=9 in operator(), 29
index=1 in operator(), 29
workIndex=-1 in operator(), 26
index=0 in operator(), 29
index=2 in operator(), 29
index=1 in operator(), 29
index=3 in operator(), 29
index=2 in operator(), 29
index=4 in operator(), 29
index=3 in operator(), 29
index=5 in operator(), 29
index=4 in operator(), 29
index=6 in operator(), 29
index=5 in operator(), 29
index=7 in operator(), 29
index=1 in operator(), 29
index=2 in operator(), 29
index=6 in operator(), 29
index=2 in operator(), 29
index=6 in operator(), 29
index=5 in operator(), 29
index=9 in operator(), 29
index=1 in operator(), 29
index=5 in operator(), 29
index=9 in operator(), 29
workIndex=-1 in operator(), 26
index=0 in operator(), 29
workIndex=-1 in operator(), 26
index=0 in operator(), 29
workIndex=-1 in operator(), 26
index=0 in operator(), 29
index=1 in operator(), 29
index=2 in operator(), 29
index=1 in operator(), 29
index=1 in operator(), 29
index=3 in operator(), 29
index=2 in operator(), 29
index=2 in operator(), 29
index=4 in operator(), 29
index=3 in operator(), 29
index=3 in operator(), 29
index=5 in operator(), 29
index=4 in operator(), 29
index=4 in operator(), 29
index=6 in operator(), 29
index=5 in operator(), 29
index=5 in operator(), 29
index=7 in operator(), 29
index=6 in operator(), 29
index=8 in operator(), 29
index=6 in operator(), 29
index=7 in operator(), 29
index=9 in operator(), 29
index=7 in operator(), 29
index=8 in operator(), 29
index=8 in operator(), 29
index=9 in operator(), 29
index=9 in operator(), 29
index=6 in operator(), 29
workIndex=-1 in operator(), 26
workIndex=-1 in operator(), 26
index=0 in operator(), 29
index=8 in operator(), 29
index=3 in operator(), 29
index=7 in operator(), 29
index=3 in operator(), 29
index=7 in operator(), 29
index=0 in operator(), 29
index=9 in operator(), 29
index=1 in operator(), 29
index=1 in operator(), 29
index=2 in operator(), 29
index=3 in operator(), 29
index=4 in operator(), 29
index=5 in operator(), 29
index=6 in operator(), 29
index=7 in operator(), 29
index=8 in operator(), 29
index=2 in operator(), 29
index=3 in operator(), 29
index=4 in operator(), 29
index=5 in operator(), 29
index=6 in operator(), 29
index=7 in operator(), 29
index=8 in operator(), 29
index=9 in operator(), 29
index=9 in operator(), 29
index=7 in operator(), 29
index=8 in operator(), 29
index=9 in operator(), 29
running cv/image_process/gray_to_gray.
running cv/image_process/gray_to_gray_bilinear_transorm.
running cv/image_process/gray_to_gray_nearest_transorm.
running cv/image_process/gray_to_rgba.
running cv/image_process/bgr_to_gray.
running cv/image_process/rgb_to_bgr.
running cv/image_process/rgba_to_bgra.
running cv/image_process/bgr_to_bgr.
running cv/image_process/rgb_to_gray.
running cv/image_process/rgba_to_gray.
running cv/image_process/rgba_to_gray_bilinear_transorm.
running cv/image_process/rgba_to_gray_nearest_transorm.
running cv/image_process/rgba_to_bgr.
running cv/image_process/bgr_to_bgr_blitter.
running cv/image_process/gray_to_gray_blitter.
running cv/image_process/yuv_blitter.
running cv/matrix/scale.
running expr/AllAny.
running expr/ExecutorReset.
running expr/ExecutorConfigTest.
running expr/ExecutorScopeMultiThread.
running expr/ExprResizeCompute.
running expr/ExprResize.
running expr/Extra.
running expr/Gather.
running expr/MatMul.
running expr/MatrixBand.
running expr/MemoryIncrease/mobilenetv1.
From init 20.778854 mb to 20.778854 mb
running expr/MemoryIncrease/interp.
From init 1.469353 mb to 1.469353 mb
running expr/MidOutputTest.
running expr/ModuleTest.
Increase: 33.429974 in rt
Increase: 0.000000 in rt
running expr/RefTest.
running expr/LoopTest.
running expr/ModuleClone.
running expr/ModuleReleaseTest.
memory=f 0.000000 in run, 424
memory=f 66.859947 in run, 437
memory=f 0.000000 in run, 440
running expr/ModuleTestSpeed.
Thread 1 avg cost: 195.231812 ms
Thread 4 avg cost: 136.004807 ms
running expr/SpecialSessionTest.
running expr/SessionCircleTest.
loop: 1, 0.500000 -> 19.221680, 5.000000 -> 1922.167969
loop: 0, 0.500000 -> 0.500000, 5.000000 -> 50.000000
running expr/SessionTest.
_run, 705, cost time: 907.140015 ms
_run, 714, cost time: 710.296021 ms
operator(), 731, cost time: 1659.537109 ms
operator(), 731, cost time: 1978.366089 ms
operator(), 731, cost time: 2011.487061 ms
operator(), 731, cost time: 1856.212036 ms
_run, 758, cost time: 851.844055 ms
_run, 758, cost time: 644.807007 ms
_run, 758, cost time: 755.100037 ms
_run, 705, cost time: 911.361023 ms
_run, 714, cost time: 693.511047 ms
operator(), 731, cost time: 1612.624023 ms
operator(), 731, cost time: 1722.580078 ms
operator(), 731, cost time: 1733.859131 ms
operator(), 731, cost time: 1401.192017 ms
_run, 758, cost time: 788.224060 ms
_run, 758, cost time: 893.152039 ms
_run, 758, cost time: 708.292053 ms
running expr/MultiThreadOneSessionTest.
running expr/MemeoryUsageTest.
memory=f 56.751022 in operator(), 851
memory=f 24.000000 in operator(), 851
memory=f 7.667923 in operator(), 851
memory=f 11.444092 in operator(), 851
memory=f 15.258789 in operator(), 851
running expr/ConstMemoryReplaceTest.
running expr/MutlThreadConstReplaceTest.
Summer: 9.901236, 9.901236, 9.901236, 9.901236,
running expr/MultiThreadLoad.
running expr/Padding.
running expr/RasterOutput.
running expr/Replace.
running expr/Precompute.
running expr/PrecomputeDynamic.
running expr/ReverseSequence.
running expr/zeroshape.
running expr/zeroshape2.
running expr/zeroshape3.
running expr/zeroshape4.
running op/argmax.
running op/argmin.
running op/BatchMatMul.
running op/batch_to_space_nd.
running op/binary/broadcastShapeTest.
running op/binary/add.
running op/binary/subtract.
running op/binary/multiply.
running op/binary/divide.
running op/binary/pow.
running op/binary/minimum.
running op/binary/maximum.
running op/binary/biasadd.
running op/binary/greater.
running op/binary/greaterequal.
running op/binary/less.
running op/binary/floordiv.
running op/binary/squareddifference.
running op/binary/equal.
running op/binary/lessequal.
running op/binary/floormod.
running op/binary/mod_float.
running op/binary/mod_int.
running op/binary/atan2.
running op/binary/logicalor.
running op/binary/notqual.
running op/binary/subtractBroastTest.
running op/binary/bitwise_and.
running op/binary/bitwise_or.
running op/binary/bitwise_xor.
running op/binary/fuse_relu.
running op/binary/addInt8.
AddInt8 test zeropoint is zero
AddInt8 test zeropoint is not zero
running op/binary/subtractInt8.
SubtractInt8 test zeropoint is zero
SubtractInt8 test zeropoint is not zero
running op/binary/multiplyInt8.
MultiplyInt8 test zeropoint is zero
MultiplyInt8 test zeropoint is not zero
running op/binary/divideInt8.
DivedeInt8 test zero point is zero
DivedeInt8 test zero point is not zero
running op/binary/powInt8.
running op/binary/minimumInt8.
running op/binary/maximumInt8.
MaximumInt8 test zeropoint is zero
MaximumInt8 test zeropoint is not zero
running op/binary/floordivInt8.
running op/binary/floormodInt8.
running op/binary/atan2Int8.
running op/binary/sqdInt8.
SqdInt8 test zeropoint is zero
SqdInt8 test zeropoint is not zero
running op/binary/addC4.
running op/BroadcastToTest.
running op/BinaryBroadcastTest.
running op/cast.
running op/channel_shuffle.
running op/concat.
running op/Conv2DBackPropFilter.
running op/Conv2DBackPropFilterDW.
running op/Conv2DBackPropTest.
running op/bias_grad.
running op/ConvInt8/im2col_gemm.
running op/ConvInt8/im2col_spmm.
running op/ConvInt8/winograd.
running op/ConvInt8/depthwise.
Test strides=1
strides=2
running op/convert.
running op/convolution/conv3d.
running op/convolution/conv2d.
running op/convolution/sparse_conv2d.
running op/convolution/depthwise_conv.
running op/convolution/conv_group.
running op/CosineSimilarity.
running op/CropAndResize.
running op/crop.
running op/cumprod.
running op/cumsum.
running op/Deconvolution.
beigin testcase 0
beigin testcase 1
beigin testcase 2
running op/DeconvolutionInt8.
begin testcase 0
begin testcase 1
begin testcase 2
running op/depthtospace.
running op/Dilation2D/cpu.
running op/elu.
running op/expand_dims.
running op/fill.
running op/GatherElements.
running op/gather_nd.
running op/gather.
running op/gatherv2.
running op/GridSample.
running op/histogram.
running op/im2col.
running op/col2im.
running op/InnerProduct.
running op/layernorm.
running op/linspace.
running op/matmul.
running op/matmulBConst.
running op/matrixbandpart.
running op/moments.
running op/MultiConv.
running op/MultiDeconv.
running op/normalize.
running op/OneHotTest.
running op/prelu.
running op/pad.
running op/PermuteTest.
running op/MaxPool3d.
running op/AvePool3d.
running op/PoolGrad.
running op/ROIAlign.
running op/ROIPooling.
running op/randomuniform.
running op/range.
running op/rank.
running op/raster.
running op/relu6.
running op/clamp.
running op/relu.
running op/reduction/reduce_sum.
running op/reduction/reduce_sum_multi.
running op/reduction/reduce_mean.
running op/reduction/reduce_max.
running op/reduction/reduce_min.
running op/reduction/reduce_prod.
running op/reshape/nchw.
running op/reshape/nhwc.
running op/reshape/nc4hw4.
running op/resize.
running op/Interp.
running op/InterpInt8.
InterpInt8 test: Type=1
InterpInt8 test: Type=2
0 error, right: -1, compute: -0.032
InterpInt8 ResizeType=2 test failed!
running op/reverse.
running op/scale.
running op/scaleInt8.
running op/ScatterElementsTest.
running op/ScatterNdTest.
running op/selu.
running op/select.
running op/rnn/SequenceGRU.
running op/setdiff1d.
running op/shape.
running op/size.
running op/softmax.
running op/softmaxInt8.
running op/softplus.
running op/softsign.
running op/sort.
running op/space_to_batch_nd.
running op/spacetodepth.
running op/split.
running op/squeeze.
Cannot Squeeze dim[1], 1 is expected, 2 is got. input shape: Tensor shape: 2, 2,
running op/stack.
running op/stridedslice.
running op/splitc4.
running op/svd.
run, 37, cost time: 0.068000 ms
run, 136, cost time: 0.080000 ms
running op/tanh.
running op/threshold.
running op/tile.
running op/TopKV2.
0 s 74 ms
running op/transpose.
running op/unary/abs.
running op/unary/negative.
running op/unary/floor.
running op/unary/ceil.
running op/unary/square.
running op/unary/sqrt.
running op/unary/rsqrt.
running op/unary/exp.
running op/unary/log.
running op/unary/sin.
running op/unary/cos.
running op/unary/tan.
running op/unary/asin.
running op/unary/acos.
running op/unary/atan.
running op/unary/reciprocal.
running op/unary/log1p.
running op/unary/tanh.
running op/unary/sigmoid.
running op/unary/acosh.
running op/unary/asinh.
running op/unary/atanh.
running op/unary/round.
running op/unary/sign.
running op/unary/cosh.
running op/unary/erf.
running op/unary/erfc.
running op/unary/erfinv.
running op/unary/expm1.
running op/unary/sinh.
running op/unary/gelu.
running op/unique.
running op/UnravelIndexTest.
running op/unstack.
running op/where.
running op/zeroslike.
Error: op/InterpInt8
TEST_NAME_UNIT: 单元测试 (unit test)
TEST_CASE_AMOUNT_UNIT: {"blocked":0,"failed":1,"passed":289,"skipped":0}

tunmx (Collaborator) commented Oct 31, 2023

Thanks for the feedback. I will get hold of a Jetson Nano and try to reproduce the problem.
