Add an API to DDP for dynamically updating the underlying process group. #113580

Conversation

pritamdamania87 (Contributor) commented Nov 13, 2023

Motivation

Currently, if we would like to reinitialize DDP with a different process group (PG) under torch.compile, we need to do the following:

del old_ddp
del old_pg
pg = init_pg(...)
ddp = DDP(model, process_group=pg)  # re-wrap the module with the new PG
model = torch.compile(ddp)          # recompiles the entire model

This results in recompilation of the entire model and is very expensive. Since the only thing we need to update is the PG, we should be able to do this without having to compile the model again.

Proposal

To address this, this PR introduces an `_update_process_group` API which can dynamically update the underlying ProcessGroup used by DDP without needing to reinitialize DDP again.
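Here is a minimal usage sketch of the new API. The module `net` and the group `old_pg` are hypothetical placeholders, and this assumes `_update_process_group` takes the replacement group as its only argument:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Wrap and compile the model once.
ddp = DDP(net, process_group=old_pg)
model = torch.compile(ddp)

# Later: build a replacement group and swap it in, without
# re-wrapping the module or recompiling the model.
new_pg = dist.new_group(ranks=list(range(dist.get_world_size())))
ddp._update_process_group(new_pg)
```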

cc @mrshenli @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225 @kiukchung @d4l3k @LucasLLC

pytorch-bot bot commented Nov 13, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/113580

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 3c754c4 with merge base a2c32b8:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot bot added the `release notes: distributed (c10d)` label on Nov 13, 2023.
XilunWu (Contributor) commented Nov 13, 2023

This PR has a lot of re-formatting, which I suggest we exclude.

pritamdamania87 (Contributor, Author) commented:

> This PR has a lot of re-formatting, which I suggest we exclude.

@XilunWu This came from `lintrunner -a` automatically. I'm not sure if lint will fail if I don't apply those changes.

wconstab (Contributor) commented:

> This PR has a lot of re-formatting, which I suggest we exclude.
>
> @XilunWu This came from `lintrunner -a` automatically. I'm not sure if lint will fail if I don't apply those changes.

Actually, while I hate to be pedantic, I think in this case, since there are so many lines of code affected, it'd be worth splitting the PR into a stack where the first PR just applies lintrunner and the second PR just has your 'real' changes.

wconstab (Contributor) commented:

@rohan-varma @fegin, any issue with this PR? (I can do a more detailed review if you don't want to, but any objection to the addition of these APIs in principle?)

pritamdamania87 (Contributor, Author) commented:

> Actually, while I hate to be pedantic, I think in this case, since there are so many lines of code affected, it'd be worth splitting the PR into a stack where the first PR just applies lintrunner and the second PR just has your 'real' changes.

So I had to run `lintrunner -a` since `make lint` was failing locally for me. If I remove the formatting changes, won't lint fail on this PR and, as a result, block the merge?

wconstab (Contributor) commented:

> Actually, while I hate to be pedantic, I think in this case, since there are so many lines of code affected, it'd be worth splitting the PR into a stack where the first PR just applies lintrunner and the second PR just has your 'real' changes.
>
> So I had to run `lintrunner -a` since `make lint` was failing locally for me. If I remove the formatting changes, won't lint fail on this PR and, as a result, block the merge?

My suggestion was to create a 'stack' where your first PR does the lint changes and your second PR, which is 'on top', does the real changes. (Are you familiar with ghstack? `pip install ghstack`; instructions are on its GitHub page.)

pritamdamania87 (Contributor, Author) commented:

> My suggestion was to create a 'stack' where your first PR does the lint changes and your second PR, which is 'on top', does the real changes. (Are you familiar with ghstack? `pip install ghstack`; instructions are on its GitHub page.)

Looks like there was something broken in my local setup that caused the linter to change files. Removing the formatting still passes CI, though.

janeyx99 added the `triaged` label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) on Nov 14, 2023.
fduwjj (Contributor) left a comment:


Overall, this looks good to me. Unblocking for now, but one thing I'm not sure about: are the changes in the reducer the only places we need to change? TBH, I haven't read the reducer's code enough. Maybe @rohan-varma or @fegin can also take a look? I also added more CI labels.

fduwjj added the `ciflow/binaries` label (trigger all binary build and upload jobs on the PR) and the `ciflow/periodic` label (trigger jobs run periodically on master via periodic.yml) on Nov 15, 2023.
pritamdamania87 (Contributor, Author) commented:

@pytorchbot merge

pytorch-bot bot added the `ciflow/trunk` label (trigger trunk jobs on your pull request) on Nov 15, 2023.
pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

fegin (Contributor) commented Nov 15, 2023

Forgot to leave comments earlier. The PR looks good to me.

pytorchmergebot pushed a commit that referenced this pull request on Nov 27, 2023:

#113580 introduced the `DDP._update_process_group` API. However, the implementation did not correctly reset all of the necessary state in the reducer. In particular, if an error occurred during backward, DDP would end up in an incorrect state.

This PR enhances the unit test to cover this case and fixes the resetting of Reducer state accordingly.

Pull Request resolved: #114194
Approved by: https://github.com/rohan-varma
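To make the failure mode concrete, here is a rough sketch of the scenario the follow-up guards against. The module `net`, the inputs `bad_input`/`good_input`, and the group `old_pg` are hypothetical placeholders; the point is that an exception during backward must not leave stale reducer state once the PG is swapped:

```python
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

ddp = DDP(net, process_group=old_pg)

try:
    out = ddp(bad_input)
    out.sum().backward()  # suppose this iteration raises mid-backward
except RuntimeError:
    pass  # the reducer may now hold partially-reduced bucket state

# Swapping the PG must fully reset the reducer (the fix in #114194),
# so subsequent iterations behave as if freshly initialized.
ddp._update_process_group(dist.new_group())

out = ddp(good_input)
out.sum().backward()  # training proceeds normally on the new PG
```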
vfdev-5 pushed a commit to vfdev-5/pytorch that referenced this pull request on Nov 29, 2023 (same commit message as above; resolves pytorch#114194, approved by https://github.com/rohan-varma).
albanD added the `oncall: distributed` label (add this issue/PR to distributed oncall triage queue) and removed the `module: distributed` label on Dec 8, 2023.