dnn(conv1d): invalid memory access (2022-12-27) #23046

Closed
alalek opened this issue Dec 28, 2022 · 2 comments · Fixed by #23050
Comments

alalek commented Dec 28, 2022

Valgrind report: http://pullrequest.opencv.org/buildbot/builders/master_valgrind-lin64-debug/builds/100008

Sporadic test crashes:

Relates to #22905

[ RUN      ] Conv3D.conv3d/1, where GetParam() = (GFLOPS=0.000, K=[3 x 3 x 3], IN={1, 2, 19, 19, 19}, OCN=2, G=2, S=[2 x 2 x 2], P=(1, 1) x (1, 1) x (1, 1), BIAS, OCV/CPU)
==18788== Invalid read of size 4
==18788==    at 0x50246F5: packData8 (fast_convolution.cpp:302)
==18788==    by 0x50246F5: operator() (fast_convolution.cpp:750)
==18788==    by 0x50246F5: std::_Function_handler<void (cv::Range const&), cv::dnn::runFastConv(cv::_InputArray const&, cv::_OutputArray const&, cv::Ptr<cv::dnn::FastConv> const&, int, cv::Ptr<cv::dnn::dnn4_v20221220::ActivationLayer> const&, std::vector<float, std::allocator<float> > const&, bool)::{lambda(cv::Range const&)#1}>::_M_invoke(std::_Any_data const&, cv::Range const&) (std_function.h:316)
==18788==    by 0x501E6F6: operator() (std_function.h:706)
==18788==    by 0x501E6F6: cv::ParallelLoopBodyLambdaWrapper::operator()(cv::Range const&) const (utility.hpp:604)
==18788==    by 0x662F91A: cv::(anonymous namespace)::ParallelLoopBodyWrapper::operator()(cv::Range const&) const (parallel.cpp:352)
==18788==    by 0x6646062: execute (parallel_impl.cpp:332)
==18788==    by 0x6646062: cv::ThreadPool::run(cv::Range const&, cv::ParallelLoopBody const&, double) (parallel_impl.cpp:647)
==18788==    by 0x66463EB: cv::parallel_for_pthreads(cv::Range const&, cv::ParallelLoopBody const&, double) (parallel_impl.cpp:750)
==18788==    by 0x662FF07: parallel_for_impl (parallel.cpp:609)
==18788==    by 0x662FF07: cv::parallel_for_(cv::Range const&, cv::ParallelLoopBody const&, double) (parallel.cpp:520)
==18788==    by 0x5022CF9: parallel_for_ (utility.hpp:612)
==18788==    by 0x5022CF9: cv::dnn::runFastConv(cv::_InputArray const&, cv::_OutputArray const&, cv::Ptr<cv::dnn::FastConv> const&, int, cv::Ptr<cv::dnn::dnn4_v20221220::ActivationLayer> const&, std::vector<float, std::allocator<float> > const&, bool) (fast_convolution.cpp:531)
==18788==    by 0x4FDC95F: cv::dnn::ConvolutionLayerImpl::forward(cv::_InputArray const&, cv::_OutputArray const&, cv::_OutputArray const&) (convolution_layer.cpp:1391)
==18788==    by 0x50F6FE1: cv::dnn::dnn4_v20221220::Net::Impl::forwardLayer(cv::dnn::dnn4_v20221220::detail::LayerData&) (net_impl.cpp:727)
==18788==    by 0x50ED137: cv::dnn::dnn4_v20221220::Net::Impl::forwardToLayer(cv::dnn::dnn4_v20221220::detail::LayerData&, bool) (net_impl.cpp:881)
==18788==    by 0x51071AD: cv::dnn::dnn4_v20221220::Net::Impl::forward(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (net_impl.cpp:906)
==18788==    by 0x50E8D6B: cv::dnn::dnn4_v20221220::Net::forward(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (net.cpp:80)
==18788==  Address 0x1ba761ac is 28 bytes after a block of size 80 in arena "client"
==18788== 
alalek commented Dec 28, 2022

@zihaomu @vpisarev Please take a look at this.

A gdb debugger is available with Valgrind: https://valgrind.org/docs/manual/manual-core-adv.html
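
For anyone reproducing this, a minimal sketch of the Valgrind + gdb workflow described on that manual page (the test binary name and gtest filter below are assumptions for illustration, not taken from the report):

# Terminal 1: run the suspect test under Valgrind with its embedded gdbserver
# enabled, stopping at the first reported error.
valgrind --vgdb=yes --vgdb-error=0 ./bin/opencv_perf_dnn --gtest_filter='*Conv3D*'

# Terminal 2: attach gdb to the same binary and connect through the vgdb relay.
gdb ./bin/opencv_perf_dnn
(gdb) target remote | vgdb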

zihaomu commented Dec 28, 2022

I'm working on it and will try to fix it today.

zihaomu linked a pull request (#23050) on Dec 28, 2022 that will close this issue.