
C++ Inference Throws Error #4174

Closed
scopedog opened this issue Sep 24, 2021 · 7 comments

@scopedog

I am using PaddleOCR 2.3.0.1 with PaddlePaddle 2.1.2 and got the following error during C++ inference. Note that inference succeeded several times before terminating with this error. Any ideas?


The predicted text is :
terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
  what():  

--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0   paddle::AnalysisPredictor::ZeroCopyRun()
1   paddle::framework::NaiveExecutor::Run()
2   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
3   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
4   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
5   paddle::operators::ElementwiseOp::InferShape(paddle::framework::InferShapeContext*) const
6   paddle::operators::GetBroadcastDimsArrays(paddle::framework::DDim const&, paddle::framework::DDim const&, int*, int*, int*, int, int)
7   paddle::platform::EnforceNotMet::EnforceNotMet(paddle::platform::ErrorSummary const&, char const*, int)
8   paddle::platform::GetCurrentTraceBackString[abi:cxx11]()

----------------------
Error Message Summary:
----------------------
InvalidArgumentError: Broadcast dimension mismatch. Operands could not be broadcast together with the shape of X = [1, 128, 8, 91] and the shape of Y = [256]. Received [128] in X is not equal to [256] in Y at i:1.
  [Hint: Expected x_dims_array[i] == y_dims_array[i] || x_dims_array[i] <= 1 || y_dims_array[i] <= 1 == true, but received x_dims_array[i] == y_dims_array[i] || x_dims_array[i] <= 1 || y_dims_array[i] <= 1:0 != true:1.] (at /paddle/paddle/fluid/operators/elementwise/elementwise_op_function.h:160)
  [operator < elementwise_add > error]
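For context, the broadcast check that fails here follows NumPy-style rules: shapes are aligned at their trailing dimensions, and each compared pair must be equal or contain a 1. Below is a minimal standalone sketch of that rule, not Paddle's actual implementation (Paddle's elementwise ops additionally take an axis argument that shifts where Y aligns, which is why this log compares Y = [256] against 128 at i:1):

```cpp
#include <vector>

// NumPy/Paddle-style broadcast compatibility: align the shapes at their
// trailing dimensions; every compared pair must be equal or contain a 1.
// Missing leading dimensions are treated as 1 and always match.
bool BroadcastCompatible(const std::vector<int>& x, const std::vector<int>& y) {
  auto xi = x.rbegin();
  auto yi = y.rbegin();
  for (; xi != x.rend() && yi != y.rend(); ++xi, ++yi) {
    if (*xi != *yi && *xi != 1 && *yi != 1) return false;
  }
  return true;
}
```

Under these rules a bias of shape [256] cannot broadcast over [1, 128, 8, 91] at any alignment, which is what the InvalidArgumentError reports.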

@LDOUBLEV
Collaborator

Please check the shape of the input image, and try converting the image to a different size before predicting.
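One way to apply this suggestion: the PP-OCR detection backbone downsamples by powers of two, so input sides are commonly snapped to multiples of 32 before inference. A minimal sketch of that size computation follows; the helper name and the floor of 32 are my own choices, not part of PaddleOCR's API, and the actual resize would then be done with e.g. cv::resize:

```cpp
#include <algorithm>

struct Size2D {
  int w;
  int h;
};

// Snap an image size down to the nearest multiple of 32, with a floor of
// 32 so tiny inputs still produce a valid shape for the detector.
Size2D SnapToMultipleOf32(int w, int h) {
  return {std::max(32, w / 32 * 32), std::max(32, h / 32 * 32)};
}
```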

@scopedog
Author

Thanks.
It will be very difficult for us to reproduce this error, since it was triggered by an image uploaded by somebody else.
However, we'll keep an eye on it.

@MistEO

MistEO commented Dec 26, 2021

I have the same error; asking for help.

OS: Ubuntu 20.04.3 LTS
PaddleOCR: release/2.4
Paddle: ubuntu14.04_cpu_avx_openblas_gcc82 or ubuntu14.04_cpu_noavx_openblas_gcc82, same error

$ ./ppocr system --det_model_dir=/home/mreo/repo/MeoAssistantArknights/3rdparty/resource/PaddleOCR/det --rec_model_dir=/home/mreo/repo/MeoAssistantArknights/3rdparty/resource/PaddleOCR/rec --image_dir=/home/mreo/cpp_infer_pred_12.png
mode: system
total images num: 1
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1227 00:43:03.032341 103269 analysis_predictor.cc:155] Profiler is deactivated, and no profiling report will be generated.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [seqconv_eltadd_relu_fuse_pass]
--- Running IR pass [seqpool_cvm_concat_fuse_pass]
--- Running IR pass [mul_lstm_fuse_pass]
--- Running IR pass [fc_gru_fuse_pass]
--- Running IR pass [mul_gru_fuse_pass]
--- Running IR pass [seq_concat_fc_fuse_pass]
--- Running IR pass [squeeze2_matmul_fuse_pass]
--- Running IR pass [reshape2_matmul_fuse_pass]
--- Running IR pass [flatten2_matmul_fuse_pass]
--- Running IR pass [map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [repeated_fc_relu_fuse_pass]
--- Running IR pass [squared_mat_sub_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I1227 00:43:03.142690 103269 graph_pattern_detector.cc:101] ---  detected 34 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [conv_transpose_bn_fuse_pass]
--- Running IR pass [conv_transpose_eltwiseadd_bn_fuse_pass]
I1227 00:43:03.150655 103269 graph_pattern_detector.cc:101] ---  detected 2 subgraphs
--- Running IR pass [is_test_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I1227 00:43:03.166231 103269 memory_optimize_pass.cc:201] Cluster name : x  size: 12
I1227 00:43:03.166283 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_58.tmp_0  size: 24
I1227 00:43:03.166316 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_61.tmp_0  size: 52
I1227 00:43:03.166323 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_96.tmp_0  size: 76
I1227 00:43:03.166326 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_65.tmp_0  size: 76
I1227 00:43:03.166329 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_60.tmp_0  size: 104
I1227 00:43:03.166332 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_57.tmp_0  size: 24
I1227 00:43:03.166349 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_69.tmp_0  size: 76
I1227 00:43:03.166354 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_78.tmp_0  size: 960
I1227 00:43:03.166370 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_82.tmp_0  size: 1076
I1227 00:43:03.166378 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_85.tmp_0  size: 256
I1227 00:43:03.166380 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_80.tmp_0  size: 1344
I1227 00:43:03.166383 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_86.tmp_0  size: 1536
I1227 00:43:03.166386 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_88.tmp_0  size: 1920
I1227 00:43:03.166390 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_87.tmp_0  size: 256
I1227 00:43:03.166393 103269 memory_optimize_pass.cc:201] Cluster name : batch_norm_44.tmp_3  size: 1536
I1227 00:43:03.166396 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_59.tmp_0  size: 24
I1227 00:43:03.166399 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_84.tmp_0  size: 1536
I1227 00:43:03.166404 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_75.tmp_0  size: 128
I1227 00:43:03.166406 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_76.tmp_0  size: 308
I1227 00:43:03.166409 103269 memory_optimize_pass.cc:201] Cluster name : hardswish_19.tmp_0  size: 1920
I1227 00:43:03.166411 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_70.tmp_0  size: 480
I1227 00:43:03.166414 103269 memory_optimize_pass.cc:201] Cluster name : batch_norm_46.tmp_3  size: 1920
I1227 00:43:03.166419 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_66.tmp_0  size: 204
I1227 00:43:03.166422 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_74.tmp_0  size: 308
I1227 00:43:03.166424 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_92.tmp_0  size: 308
I1227 00:43:03.166429 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_77.tmp_0  size: 128
I1227 00:43:03.166431 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_63.tmp_0  size: 52
I1227 00:43:03.166433 103269 memory_optimize_pass.cc:201] Cluster name : tmp_0  size: 308
I1227 00:43:03.166436 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_81.tmp_0  size: 180
I1227 00:43:03.166440 103269 memory_optimize_pass.cc:201] Cluster name : nearest_interp_v2_1220.tmp_0  size: 308
I1227 00:43:03.166442 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_67.tmp_0  size: 76
I1227 00:43:03.166445 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_72.tmp_0  size: 332
I1227 00:43:03.166447 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_83.tmp_0  size: 256
I1227 00:43:03.166465 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_68.tmp_0  size: 256
I1227 00:43:03.166481 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_79.tmp_0  size: 180
I1227 00:43:03.166486 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_64.tmp_0  size: 160
I1227 00:43:03.166501 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_71.tmp_0  size: 128
I1227 00:43:03.166505 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_73.tmp_0  size: 128
I1227 00:43:03.166508 103269 memory_optimize_pass.cc:201] Cluster name : conv2d_62.tmp_0  size: 128
--- Running analysis [ir_graph_to_program_pass]
I1227 00:43:03.196521 103269 analysis_predictor.cc:598] ======= optimize end =======
I1227 00:43:03.196640 103269 naive_executor.cc:107] ---  skip [feed], feed -> x
I1227 00:43:03.197862 103269 naive_executor.cc:107] ---  skip [tmp_0], fetch -> fetch
label file: ./ppocr_keys_v1.txt
I1227 00:43:03.198449 103269 analysis_predictor.cc:155] Profiler is deactivated, and no profiling report will be generated.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [seqconv_eltadd_relu_fuse_pass]
--- Running IR pass [seqpool_cvm_concat_fuse_pass]
--- Running IR pass [mul_lstm_fuse_pass]
--- Running IR pass [fc_gru_fuse_pass]
--- Running IR pass [mul_gru_fuse_pass]
--- Running IR pass [seq_concat_fc_fuse_pass]
--- Running IR pass [squeeze2_matmul_fuse_pass]
--- Running IR pass [reshape2_matmul_fuse_pass]
--- Running IR pass [flatten2_matmul_fuse_pass]
--- Running IR pass [map_matmul_to_mul_pass]
I1227 00:43:03.244259 103269 graph_pattern_detector.cc:101] ---  detected 1 subgraphs
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [repeated_fc_relu_fuse_pass]
--- Running IR pass [squared_mat_sub_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [conv_transpose_bn_fuse_pass]
--- Running IR pass [conv_transpose_eltwiseadd_bn_fuse_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I1227 00:43:03.281141 103269 memory_optimize_pass.cc:201] Cluster name : lstm_0._generated_var_0  size: 1
I1227 00:43:03.281186 103269 memory_optimize_pass.cc:201] Cluster name : fill_constant_batch_size_like_149.tmp_0  size: 768
I1227 00:43:03.281190 103269 memory_optimize_pass.cc:201] Cluster name : elementwise_add_5  size: 19200
I1227 00:43:03.281191 103269 memory_optimize_pass.cc:201] Cluster name : batch_norm_6.tmp_2  size: 25600
I1227 00:43:03.281193 103269 memory_optimize_pass.cc:201] Cluster name : x  size: 38400
I1227 00:43:03.281194 103269 memory_optimize_pass.cc:201] Cluster name : ctc_fc.tmp_1  size: 662500
I1227 00:43:03.281195 103269 memory_optimize_pass.cc:201] Cluster name : lstm_0.tmp_3  size: 1
I1227 00:43:03.281198 103269 memory_optimize_pass.cc:201] Cluster name : ctc_fc.tmp_0  size: 662500
I1227 00:43:03.281198 103269 memory_optimize_pass.cc:201] Cluster name : ctc_fc_w_attr.quantized.dequantized  size: 2544000
--- Running analysis [ir_graph_to_program_pass]
I1227 00:43:03.332463 103269 analysis_predictor.cc:598] ======= optimize end =======
I1227 00:43:03.332615 103269 naive_executor.cc:107] ---  skip [feed], feed -> x
I1227 00:43:03.336149 103269 naive_executor.cc:107] ---  skip [ctc_fc.tmp_0], fetch -> fetch
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1227 00:43:03.336280 103269 main.cpp:215] The predict img: /home/mreo/cpp_infer_pred_12.png
terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
  what():

  Compile Traceback (most recent call last):
    File "deploy/slim/prune/export_prune_model.py", line 149, in <module>
      main(config, device, logger, vdl_writer)
    File "deploy/slim/prune/export_prune_model.py", line 143, in main
      paddle.jit.save(model, save_path)
    File "<decorator-gen-59>", line 2, in save

    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
      return wrapped_func(*args, **kwargs)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py", line 40, in __impl__
      return func(*args, **kwargs)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/jit.py", line 681, in save
      inner_input_spec)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 488, in concrete_program_specify_input_spec
      *desired_input_spec)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 402, in get_concrete_program
      concrete_program, partial_program_layer = self._program_cache[cache_key]
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 711, in __getitem__
      self._caches[item] = self._build_once(item)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 702, in _build_once
      class_instance=cache_key.class_instance)
    File "<decorator-gen-57>", line 2, in from_func_spec

    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
      return wrapped_func(*args, **kwargs)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py", line 40, in __impl__
      return func(*args, **kwargs)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 652, in from_func_spec
      outputs = static_func(*inputs)
    File "deploy/slim/prune/../../../ppocr/modeling/architectures/base_model.py", line 74, in forward
      x = self.backbone(x)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 912, in __call__
      outputs = self.forward(*inputs, **kwargs)
    File "deploy/slim/prune/../../../ppocr/modeling/backbones/det_mobilenet_v3.py", line 148, in forward
      x = self.conv(x)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 912, in __call__
      outputs = self.forward(*inputs, **kwargs)
    File "/tmp/tmp8arpb725.py", line 35, in forward
      false_fn_4, (self, x), (x,), (x,))
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 210, in convert_ifelse
      return _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 235, in _run_py_ifelse
      return true_fn(*true_args) if pred else false_fn(*false_args)
    File "/tmp/tmp8arpb725.py", line 29, in true_fn_4
      true_fn_3, false_fn_3, (x,), (self, x), (x,))
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 210, in convert_ifelse
      return _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 235, in _run_py_ifelse
      return true_fn(*true_args) if pred else false_fn(*false_args)
    File "/tmp/tmp8arpb725.py", line 26, in false_fn_3
      true_fn_2, false_fn_2, (x,), (x,), (x,))
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 210, in convert_ifelse
      return _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 235, in _run_py_ifelse
      return true_fn(*true_args) if pred else false_fn(*false_args)
    File "deploy/slim/prune/../../../ppocr/modeling/backbones/det_mobilenet_v3.py", line 195, in forward
      x = F.activation.hardswish(x)
    File "/usr/local/lib/python3.7/site-packages/paddle/nn/functional/activation.py", line 375, in hardswish
      helper.append_op(type='hard_swish', inputs={'X': x}, outputs={'Out': out})
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
      return self.main_program.current_block().append_op(*args, **kwargs)
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/framework.py", line 3225, in append_op
      attrs=kwargs.get("attrs", None))
    File "/usr/local/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2305, in __init__
      for frame in traceback.extract_stack():

--------------------------------------
C++ Traceback (most recent call last):
--------------------------------------
0   paddle::AnalysisPredictor::ZeroCopyRun()
1   paddle::framework::NaiveExecutor::Run()
2   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
3   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
4   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
5   paddle::framework::OperatorWithKernel::ChooseKernel(paddle::framework::RuntimeContext const&, paddle::framework::Scope const&, paddle::platform::Place const&) const
6   paddle::operators::ActivationOp::GetExpectedKernelType(paddle::framework::ExecutionContext const&) const
7   paddle::operators::GetKernelType(paddle::framework::ExecutionContext const&, paddle::framework::OperatorWithKernel const&, std::string const&)
8   paddle::framework::OperatorWithKernel::IndicateVarDataType(paddle::framework::ExecutionContext const&, std::string const&) const
9   paddle::framework::OperatorWithKernel::ParseInputDataType(paddle::framework::ExecutionContext const&, std::string const&, paddle::framework::proto::VarType_Type*) const
10  paddle::platform::EnforceNotMet::EnforceNotMet(paddle::platform::ErrorSummary const&, char const*, int)
11  paddle::platform::GetCurrentTraceBackString[abi:cxx11]()

----------------------
Error Message Summary:
----------------------
InvalidArgumentError: The Tensor in the hard_swish Op's Input Variable X(conv2d_88.tmp_0) is not initialized.
  [Hint: Expected t->IsInitialized() == true, but received t->IsInitialized():0 != true:1.] (at /paddle/paddle/fluid/framework/operator.cc:1511)
  [operator < hard_swish > error]
[1]    103269 abort (core dumped)  ./ppocr system   --image_dir=/home/mreo/cpp_infer_pred_12.png

@scopedog
Author

In my case, calling rec->Run without cls (i.e., running recognition only), by passing NULL for that argument, solved the problem.
There seems to be something wrong with cls.
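The shape of that workaround can be sketched as follows. The types here (Image, Classifier, Recognizer) are hypothetical stand-ins for the cpp_infer classes, not the actual PaddleOCR API; the point is that the recognizer guards on a null classifier pointer instead of always running orientation classification:

```cpp
struct Image {};

// Stand-in for the angle classifier; counts invocations for illustration.
struct Classifier {
  int calls = 0;
  Image Rotate(const Image& img) {
    ++calls;
    return img;
  }
};

struct Recognizer {
  // Passing cls == nullptr skips orientation classification entirely,
  // which is what avoided the crash in this report.
  Image Run(const Image& img, Classifier* cls) {
    Image input = (cls != nullptr) ? cls->Rotate(img) : img;
    // ... text recognition on `input` would happen here ...
    return input;
  }
};
```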

@MistEO

MistEO commented Dec 26, 2021

In my case, calling rec->Run (or running recognition) without cls by specifying NULL for the argument solved this problem. There seems to be something wrong with cls.

Thanks for your reply.

I tried it, but it didn't work; the error did not change 😟 I will keep looking for ways to solve it.

@AriouatI

AriouatI commented Mar 10, 2022

Hello,
I got the same error as mentioned in the original message:
terminate called after throwing an instance of 'phi::enforce::EnforceNotMet'
while trying to use Paddle Detection. The error comes from paddle/phi/core/dense_tensor.cc. I tried to print the types (I don't know exactly which variables they refer to); the function is called many times without problems until a type mismatch occurs, which in my case was uint8 vs. float32.
Any ideas on what may be causing this?
Thanks.
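One common source of a uint8-vs-float32 mismatch is feeding raw image bytes to a model that expects normalized float input. Whether that applies here is only a guess; a minimal sketch of the usual conversion (the helper name and the 1/255 scale are my own choices):

```cpp
#include <cstdint>
#include <vector>

// Convert raw uint8 pixel values to float32, scaling into [0, 1].
std::vector<float> ToFloat32(const std::vector<uint8_t>& pixels,
                             float scale = 1.0f / 255.0f) {
  std::vector<float> out;
  out.reserve(pixels.size());
  for (uint8_t p : pixels) out.push_back(p * scale);
  return out;
}
```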

@paddle-bot-old

Since you haven't replied for more than three months, we have closed this issue/PR.
If the problem is not solved or there is a follow-up issue, please reopen it at any time and we will continue to follow up.
We recommend pulling and trying the latest code first.

an1018 pushed a commit to an1018/PaddleOCR that referenced this issue Aug 17, 2022
* [ce tests] add trt_mode in ppyolo

* [ce tests] set amp in tests.sh