[docs][1.5] update RPC docs to reflect correct use of dist_autograd backwards and dist_optim step() #34670
Conversation
LGTM!
@rohan-varma has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
💊 CircleCI build failures summary and remediations
As of commit 9f9b75d (more details on the Dr. CI page):
✅ None of the build failures appear to be your fault 💚
🚧 1 upstream failure: these were probably caused by upstream breakages.
This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt out of these comments for your Pull Requests. Please report bugs/suggestions on the GitHub issue tracker. This comment has been revised 4 times.
@rohan-varma has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@rohan-varma merged this pull request in f933fa3.
torch.distributed.autograd.backward()
does not use the current thread-local autograd context; instead, it looks the context up based on the context_id passed in. torch.distributed.optim.DistributedOptimizer.step() likewise takes the context_id explicitly. The sketch below illustrates this usage.
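A minimal sketch of the corrected API usage, not taken from the PR itself: the single-process RPC group, the worker name "worker0", and the toy parameter are illustrative assumptions so the example can run on one machine. The relevant point is only that both backward() and step() receive the context_id explicitly rather than relying on thread-local state.

```python
# Illustrative sketch (not from the PR): a single-process RPC group stands in
# for a real multi-worker setup so the example is runnable on one machine.
import os

import torch
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
from torch import optim
from torch.distributed.optim import DistributedOptimizer

os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker0", rank=0, world_size=1)

# An RRef to a parameter; in a real job this would live on a remote worker.
param_rref = rpc.remote("worker0", torch.nn.Parameter, args=(torch.ones(2),))

# DistributedOptimizer creates a local optimizer on each worker owning params.
dist_optim = DistributedOptimizer(optim.SGD, [param_rref], lr=0.05)

with dist_autograd.context() as context_id:
    loss = (param_rref.to_here() * 2).sum()

    # backward() does not consult a thread-local autograd context; the
    # context_id from the enclosing `with` block is passed explicitly.
    dist_autograd.backward(context_id, [loss])

    # step() also takes the context_id so it can find the gradients that
    # were accumulated in that distributed autograd context.
    dist_optim.step(context_id)

rpc.shutdown()
```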