Distribute GPUs in round robin mode for distributed_test #46389
Closed
Flamefire wants to merge 1 commit into pytorch:master from Flamefire:fix_test_DistributedDataParallel
Hey @Flamefire
Would it be correct if I assume world_size=2 and 4 GPUs in total, so that we have the following? Before this change, we have:
rank 0 -> gpu [0, 1]
rank 1 -> gpu [2, 3]
After this change, we have:
rank 0 -> gpu [0, 2]
rank 1 -> gpu [1, 3]
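For illustration, here is a small Python sketch (not the actual test code; the function names and parameters are hypothetical) that reproduces both mappings:

```python
# Hypothetical sketch of the two GPU-to-rank mappings discussed above.

def block_assignment(rank, world_size, n_gpus):
    # Old behavior: each rank takes a contiguous block of GPU ids.
    per_rank = n_gpus // world_size
    return list(range(rank * per_rank, (rank + 1) * per_rank))

def round_robin_assignment(rank, world_size, n_gpus):
    # New behavior: GPU ids are dealt out round-robin, so every rank's
    # first GPU id equals its own rank.
    return list(range(rank, n_gpus, world_size))

for rank in range(2):
    print(rank, block_assignment(rank, 2, 4), round_robin_assignment(rank, 2, 4))
# 0 [0, 1] [0, 2]
# 1 [2, 3] [1, 3]
```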
Could you please help me understand why the above change fixes the problem described below? Thx.
Yes, your assumption is correct. For an in-depth analysis see the issue, where I posted many details; here is only the summary:
During the barrier that happens very early (on creation of the process group), each process creates a communicator with GPU idx equal to its rank:
pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp
Line 1389 in 2e2fe8c
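As a rough illustration of that device choice (the referenced line is C++; this Python sketch only models the idea that rank r's barrier communicator ends up on GPU r):

```python
# Illustration only, not the actual ProcessGroupNCCL.cpp code: before any
# collective has touched a GPU, each rank guesses a device from its own rank,
# so the barrier communicator of rank r lives on GPU r (modulo GPU count).
def barrier_device_guess(rank, num_gpus):
    return rank % num_gpus

assert barrier_device_guess(0, 4) == 0  # rank 0 -> GPU 0
assert barrier_device_guess(1, 4) == 1  # rank 1 -> GPU 1
```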
The problem with the old distribution is that rank 1 (in your example) wants to use GPU 2 afterwards and hence needs a new communicator, while rank 0 wants to (continue to) use GPU 0 and hence does not need one. Because creating a communicator is a collective operation, this fails: rank 1 waits for rank 0, which never shows up.
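A minimal sketch of that hang, assuming a per-process communicator cache keyed by the devices a collective runs on (the names are illustrative, not the real ProcessGroupNCCL internals):

```python
def needs_new_comm(comm_cache, devices):
    # Creating a communicator is a collective step that every rank must join.
    return tuple(devices) not in comm_cache

# After the initial barrier, each rank holds a communicator on GPU == rank.
cache_rank0 = {(0,): "barrier comm"}
cache_rank1 = {(1,): "barrier comm"}

# Old blockwise mapping: rank 0 next runs on GPU 0, rank 1 on GPU 2.
print(needs_new_comm(cache_rank0, [0]))  # False -> rank 0 proceeds alone
print(needs_new_comm(cache_rank1, [2]))  # True  -> rank 1 blocks, waiting
                                         #          for rank 0 to join

# Round-robin mapping: rank 0 runs on GPU 0, rank 1 on GPU 1 -> both reuse
# their barrier communicator and nobody waits.
```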
Later, rank 0 might want to create a communicator for GPUs 0 and 1 and joins the still-waiting rank 1 in creating one, but now there is a mismatch: rank 0 is already further along and expects 4 total GPU ranks (2 per process), while rank 1 only expects 2. This leads to a (correct) system error in the NCCL code, but the real problem happens earlier.
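Sketch of that size mismatch (again purely illustrative; it is just the "processes x devices per process" arithmetic described above):

```python
world_size = 2
nranks_rank0 = world_size * 2  # rank 0 now uses GPUs [0, 1] -> expects 4 NCCL ranks
nranks_rank1 = world_size * 1  # rank 1 is still stuck on its single-GPU
                               # collective on GPU 2 -> expects 2 NCCL ranks
assert nranks_rank0 != nranks_rank1  # inconsistent init -> NCCL system error
```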