How to specify CPU or GPU? #464
I'd like to know how to move the model between CPU and GPU. Alternatively, I'd like to know if it's possible to specify the device before instantiating a model.

Comments
Hello @romanovzky, the default behavior is that the model gets put on GPU if available and runs on CPU if there is no GPU. This is done in flair's `__init__.py`:

```python
device = None
if torch.cuda.is_available():
    device = torch.device('cuda:0')
else:
    device = torch.device('cpu')
```

So if you would like to explicitly change this behavior, for instance to direct it to run on CPU even if you have a GPU available, you need to run this code before instantiating your model:

```python
import flair, torch
flair.device = torch.device('cpu')
```

This overwrites the default device. Hope this clarifies!
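For instance (a minimal sketch; the `SequenceTagger.load('ner')` call is just an illustrative model load, not part of the original answer), forcing CPU before loading a tagger looks like this:

```python
import flair, torch

# Override the auto-detected device BEFORE any model is instantiated,
# since flair.device is read when the model is constructed/loaded.
flair.device = torch.device('cpu')

from flair.models import SequenceTagger

# Loads and runs on CPU even if a GPU is present.
tagger = SequenceTagger.load('ner')
```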
Closing since question is answered (hopefully) - feel free to reopen if you have other questions.
Hi Alan, I'm running the language model trainer on a server with 2 GPUs. Both GPUs are available. It works on GPU 0; however, when I try to change the device to GPU 1, I face the following error:
I use the following code:
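Neither the traceback nor the snippet survives in this excerpt; judging from the answer above, the setup was presumably along these lines (a hypothetical reconstruction):

```python
import flair, torch

# Hypothetical reconstruction: point flair at the second GPU
# before building the model and starting language model training.
flair.device = torch.device('cuda:1')
```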
Best,
Hello @TDaudert - thanks for spotting and reporting this. The error is likely in the device setup shown above: we only check if CUDA is available and, if so, put everything on `cuda:0`. But in the rest of the code, we use the newer `.to(device)` method. I will put in a PR which should fix this error!
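In pattern form (an illustrative sketch with a made-up tensor name, not the actual diff), the mismatch is:

```python
import torch
import flair

hidden = torch.zeros(8, 16)  # stand-in for some internal tensor

# Problematic pattern: .cuda() without an index always targets the
# default GPU, i.e. cuda:0, regardless of what flair.device is set to.
if torch.cuda.is_available():
    hidden = hidden.cuda()

# Pattern used elsewhere in the code base: honors the configured
# device, so cuda:1 (or cpu) works as expected.
hidden = hidden.to(flair.device)
```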
GH-464: fix text generation on cuda:1
@TDaudert should now be fixed in master branch!
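A quick way to verify the fix (assuming any parameterized flair model; the `SequenceTagger` here is just an illustrative choice) is to check where the model's weights actually landed:

```python
import flair, torch
from flair.models import SequenceTagger

flair.device = torch.device('cuda:1')
tagger = SequenceTagger.load('ner')

# With the fix, this should print "cuda:1" rather than "cuda:0".
print(next(tagger.parameters()).device)
```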