move torch.cuda.set_device() to enable collective calls earlier in setup #8312
Conversation
Codecov Report
@@            Coverage Diff            @@
##           master    #8312    +/-   ##
========================================
- Coverage      92%      92%     -0%
========================================
  Files         213      213
  Lines       13798    13793     -5
========================================
- Hits        12744    12726    -18
- Misses       1054     1067    +13
Force-pushed from d2564f6 to 519d01c.
tchaton left a comment:
LGTM!
Do we also want to move these?
pytorch_lightning/plugins/training_type/horovod.py:136: torch.cuda.set_device(self.root_device)
pytorch_lightning/plugins/training_type/ddp_spawn.py:333: torch.cuda.set_device(self.root_device)
@carmocca the one in ddp_spawn cannot be moved; it needs to set the device in the spawned subprocesses too.
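A minimal sketch of why that call has to stay where it is in ddp_spawn (the worker function, rendezvous address, and port here are illustrative, not Lightning's actual code): the CUDA device binding is per-process state, so it must be established inside each spawned child; setting it in the parent before spawning would not affect the workers.

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int) -> None:
    # Illustrative rendezvous settings, not from this PR.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    # The device binding has to happen here, inside the child process
    # that mp.spawn() created; it cannot be hoisted into the parent.
    torch.cuda.set_device(rank)
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    # A collective call now targets the correct GPU on every rank.
    dist.broadcast(torch.zeros(1, device="cuda"), src=0)
    dist.destroy_process_group()


if __name__ == "__main__":
    n = torch.cuda.device_count()
    mp.spawn(worker, args=(n,), nprocs=n)
```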
What does this PR do?
Unblocks #8017, which calls trainer.log_dir as early as Callback.setup(). Internally this leads to a broadcast, but the call hangs because the device is not yet set.
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:
Did you have fun?
I made sure I had fun coding 🙃