Choose backend (GPU/CPU) #1852
Comments
There is no way to switch at runtime; however you build it determines the mode in which it will run. To assign work to different GPUs you use the normal cudaSetDevice() method provided by the CUDA runtime.
Would there be any interest in adding the ability to run something on CPU if built with GPU support? Being able to handle this in a single binary would be preferable to building two different binaries linked against two different builds of dlib and coordinating between them. cudaSetDevice is super useful for the second question though, cheers! Just saw in the docs that it sets the device for the host thread 😊
Sure, such an option would be cool, so a PR that set that up would be great. It would probably be best accomplished via an API similar to cudaSetDevice(): that is, calling some global function to set CPU or GPU for the current thread.
Is there a chance to reopen this? Maybe @xd009642 made some progress?
Continuing on a closed thread @davisking, unless you think this warrants a new issue. I took a quick look at the code; it looks like the CPU code is compiled out when `DLIB_USE_CUDA` is defined.
I didn't make any progress aside from looking at the code; unfortunately real life interfered and I lost the need for this functionality at work. My approach would have been to keep both code paths compiled in, while providing an option to keep the old behaviour (just in case any users depend on the old behaviour for binary size etc.). I could potentially have another look at this over the Christmas holidays, but I realistically won't get a chance before December 20th, so you can always give it a shot if you wish.
I don't think it's a big deal to support this. The change is basically just to replace a bunch of `#ifdef DLIB_USE_CUDA` / `#else` / `#endif` blocks with runtime `if` statements, and I don't think there needs to be any option to keep the old behavior where one path is compiled out.
In my opinion, it would be really nice if we could still build CPU-only versions that have no CUDA dependencies whatsoever (during compiling, linking, or runtime execution).
Oh yeah, I didn't mean CUDA would become required. The options would be CPU-only, or CPU and GPU; no GPU-only option.
So I couldn't find a way to do this in the documentation, but I was wondering: is there a way to force dlib to use the CPU if it's built with CUDA enabled? I ask because I'm making a multithreaded application with dlib, and only a maximum of 2 threads at a time can use CUDA (because of limitations of the GPU itself). So, within a thread, I'd like dlib to use the GPU if fewer than 2 threads are currently using it, and otherwise fall back to the CPU.
Also, a slightly related question: how would dlib cope with a machine with multiple GPUs installed, all for compute purposes?