Refactor CUDA device assignment #26

Closed
sacdallago opened this issue Jul 9, 2020 · 1 comment
Labels: enhancement (New feature or request), prio:medium

sacdallago (Owner) commented on Jul 9, 2020

Currently, in one form or another, various parts of the pipeline use:

"cuda:0" if torch.cuda.is_available() and not self._use_cpu else "cpu"

The problem here is that cuda:0 will always refer to card 0. On systems hosting multiple cards, this will be painful. Workarounds are:


Examples where it's used:
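
One possible shape for the refactor, as a minimal sketch only: the device string is taken from an explicit parameter instead of being hardcoded. The `get_device` helper and its `device`/`use_cpu` parameters are illustrative names, not the pipeline's existing API.

```python
# Minimal sketch (illustrative, not the pipeline's actual API): resolve the
# torch device from an explicit argument instead of hardcoding "cuda:0".
from typing import Optional

import torch


def get_device(device: Optional[str] = None, use_cpu: bool = False) -> torch.device:
    """Pick the device to run on.

    If `device` is given (e.g. "cuda:1"), use it as-is; otherwise fall back
    to CUDA when available, else the CPU.
    """
    if use_cpu or not torch.cuda.is_available():
        return torch.device("cpu")
    if device is not None:
        return torch.device(device)
    # Plain "cuda" means the current CUDA device, so the physical card can
    # still be chosen externally via CUDA_VISIBLE_DEVICES.
    return torch.device("cuda")


# Example: model.to(get_device("cuda:1")) instead of model.to("cuda:0").
```

Passing an explicit device, or leaving it unset and restricting visibility with CUDA_VISIBLE_DEVICES, avoids pinning everything to card 0 on multi-GPU hosts.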

sacdallago added the enhancement (New feature or request) label on Jul 9, 2020
sacdallago added this to the Version v0.1.4 milestone on Jul 15, 2020
sacdallago (Owner, Author) commented:

Only relevant once we can test on a GPU server.
