UMat memory leak on Intel GPUs when using multiple threads #10101

Open
leugenea opened this issue Nov 16, 2017 · 2 comments

@leugenea

System information (version)
  • OpenCV => 3.3.1
  • Operating System / Platform => Windows 10 64 Bit
  • Compiler => Visual Studio 2013
Detailed description

Using UMat with OpenCL on Intel GPUs from multiple host threads causes a GPU memory leak.

Steps to reproduce
  1. Create a function that loads an image into a cv::Mat, copies it to a cv::UMat via the copyTo method, runs an OpenCL-backed function such as CascadeClassifier's detectMultiScale or cv::equalizeHist, and returns (see the sketch after this list).
  2. In a loop: create a thread, run the function from step 1 in that thread, then join the thread.
  3. Watch GPU memory usage grow without bound.
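A minimal sketch of these steps (the image path, the grayscale load, and the choice of cv::equalizeHist as the OpenCL-backed call are placeholder details, not from the original report):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
#include <thread>

// One iteration of step 1; "image.jpg" is a placeholder path.
void processOnce()
{
    cv::Mat img = cv::imread("image.jpg", cv::IMREAD_GRAYSCALE);
    cv::UMat uimg, result;
    img.copyTo(uimg);                // upload to the OpenCL device
    cv::equalizeHist(uimg, result);  // any OpenCL-backed call will do
    uimg.release();                  // note 3 below: explicit release() does not help
    result.release();
}

int main()
{
    cv::ocl::setUseOpenCL(true);
    for (;;)                         // step 2: one short-lived thread per iteration
    {
        std::thread t(processOnce);
        t.join();                    // step 3: GPU memory keeps growing
    }
}
```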

[Screenshot intel_gpu_leak: GPU memory usage growing while the loop runs]

Additional notes
  1. On non-Intel GPUs (tested on various NVIDIA GPUs) there is no memory leak.
  2. If OpenCL is used from a fixed set of threads (for example, a fixed-size thread pool), there is no memory leak.
  3. The memory leaks even if the UMat's memory is explicitly freed with the release() method.
@alalek
Member

alalek commented Nov 16, 2017

Consider using thread pools instead of spawning new threads.
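For illustration, one way to apply that advice, in line with note 2 above, is to funnel all UMat work through a single long-lived worker thread. The Worker class below is generic scaffolding, not an OpenCV API; processOnce() refers to the reproduction sketch above:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

void processOnce();  // from the reproduction sketch above

// A one-thread "pool": every job runs on the same long-lived thread,
// so per-thread OpenCL state is created once instead of once per spawned thread.
class Worker
{
public:
    Worker() : stop_(false), thread_([this] { run(); }) {}
    ~Worker()
    {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cond_.notify_one();
        thread_.join();              // drains any remaining jobs, then exits
    }
    void post(std::function<void()> job)
    {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cond_.notify_one();
    }
private:
    void run()
    {
        for (;;)
        {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cond_.wait(lk, [this] { return stop_ || !jobs_.empty(); });
                if (jobs_.empty()) return;   // stop_ set and queue drained
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();                   // UMat work reuses this thread's OpenCL state
        }
    }
    std::mutex m_;
    std::condition_variable cond_;
    std::queue<std::function<void()>> jobs_;
    bool stop_;
    std::thread thread_;
};

int main()
{
    Worker worker;                   // lives for the whole program
    for (int i = 0; i < 1000; ++i)
        worker.post([] { processOnce(); });
}
```

This matches note 2 above: memory stays bounded, apparently because the per-thread OpenCL resources belong to one thread that never exits.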

@leugenea
Author

Yes, of course. But isn't it a bit strange that the OpenCL memory isn't deallocated until the program terminates?
