
Choose backend (GPU/CPU) #1852

Open
xd009642 opened this issue Jul 31, 2019 · 12 comments

@xd009642

I couldn't find a way to do this in the documentation, but is there a way to force dlib to use the CPU when it's built with CUDA enabled? I ask because I'm writing a multithreaded application with dlib, and at most two threads at a time can use CUDA (a limitation of the GPU itself). So within a given thread I'd like dlib to use the GPU if fewer than two threads are currently using it, and otherwise fall back to the CPU.

Also, a slightly related question: how would dlib cope with a machine that has multiple GPUs installed, all for compute purposes?

@davisking
Owner

davisking commented Aug 1, 2019 via email

@xd009642
Author

xd009642 commented Aug 1, 2019

Would there be any interest in adding the ability to run on the CPU even when dlib is built with GPU support? Being able to handle this in a single binary would be preferable to building two binaries linked against two different builds of dlib and coordinating between them.

CUDA's set-device call is super useful for the second question though, cheers! I just saw in the docs that it sets the device for the calling host thread 😊

@davisking
Owner

davisking commented Aug 2, 2019 via email

@pliablepixels

Is there any chance of reopening this? Maybe @xd009642 made some progress?
I'm in a situation where I use dlib in GPU mode for inference, but depending on the training set I need to switch to the CPU because GPU memory runs out.

@pliablepixels

Continuing on a closed thread, @davisking, unless you think this warrants a new issue.

I took a quick look at the code, and it looks like the CPU code is compiled out when DLIB_USE_CUDA is defined, so it's not a simple matter of setting a runtime flag. Supporting a switch to the CPU when the GPU is enabled would be a non-trivial effort that would involve keeping both code paths compiled in. Is that right, or am I going down the wrong path?

@xd009642
Author

I didn't make any progress beyond looking at the code; unfortunately, real life interfered and I no longer need this functionality at work. My approach would have been to keep both code paths compiled in, as well as to provide an option to keep the old behaviour (in case any users depend on it for binary size, etc.).

I could potentially take another look at this over the Christmas holidays, but realistically I won't get a chance before December 20th, so feel free to give it a shot if you wish.

@davisking
Owner

I don't think it's a big deal to support this. The change is basically just to replace a bunch of:

#ifdef DLIB_USE_CUDA
do_this();
#else
do_that();
#endif

statements with something like:

if (dlib::use_cuda())
    do_this();
else
    do_that();

Where dlib::use_cuda() simply returns a thread-local bool. The bool's initial state would be set from the DLIB_USE_CUDA macro, and a few variable declarations that are conditionally created based on DLIB_USE_CUDA would need updating. Other than that, there doesn't seem to be anything else to do.

I don't think there needs to be any option to keep the old behavior where one path is compiled out.

@reunanen
Contributor

reunanen commented Jan 4, 2020

I don't think there needs to be any option to keep the old behavior where one path is compiled out.

In my opinion, it would be really nice if we could still build CPU-only versions that have no CUDA dependencies whatsoever (at compile time, link time, or runtime).

@davisking
Owner

Oh yeah, I didn't mean CUDA would become required. The options would be CPU-only, or CPU and GPU; there would be no GPU-only option.
