doc string fixed in torch.distributed.reduce_scatter #84983
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/84983
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures, 1 Pending
As of commit c898da4.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Overall LGTM. I think we may have to update the docs to include the enum values for torch.distributed.ReduceOp as well. I am starting CI tests and waiting for the docs to get built to confirm the changes before I approve.
Co-authored-by: Howard Huang <howardhuang96@gmail.com>
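(As a side note on the suggestion above: the `ReduceOp` members that such a docs update would need to list can be inspected directly. The sketch below is illustrative only, not part of this PR; the exact set of values varies across PyTorch versions, and some are backend-specific.)

```
import torch.distributed as dist

# Common torch.distributed.ReduceOp members. Some (e.g. the bitwise ops
# BAND/BOR/BXOR) are only usable with certain backends, and newer
# releases add more, so treat this list as a sample.
for name in ("SUM", "PRODUCT", "MIN", "MAX", "BAND", "BOR", "BXOR"):
    print(name, getattr(dist.ReduceOp, name))
```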
ok
@H-Huang Can you merge it now?
LGTM, thanks for adding!
@pytorchbot merge
@pytorchbot successfully started a merge job. Check the current status here and land check progress here.
Fixes #84865

Previous `torch.distributed.reduce_scatter`:

```
def reduce_scatter(output, input_list, op=ReduceOp.SUM, group=None, async_op=False):
    """
    Reduces, then scatters a list of tensors to all processes in a group.

    Args:
        output (Tensor): Output tensor.
        input_list (list[Tensor]): List of tensors to reduce and scatter.
        group (ProcessGroup, optional): The process group to work on. If None,
            the default process group will be used.
        async_op (bool, optional): Whether this op should be an async op.
```

Fixed:

```
def reduce_scatter(output, input_list, op=ReduceOp.SUM, group=None, async_op=False):
    """
    Reduces, then scatters a list of tensors to all processes in a group.

    Args:
        output (Tensor): Output tensor.
        input_list (list[Tensor]): List of tensors to reduce and scatter.
        op (optional): One of the values from ``torch.distributed.ReduceOp``
            enum. Specifies an operation used for element-wise reductions.
        group (ProcessGroup, optional): The process group to work on. If None,
            the default process group will be used.
        async_op (bool, optional): Whether this op should be an async op.
```

Pull Request resolved: #84983
Approved by: https://github.com/H-Huang
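For context, a minimal sketch of how the newly documented `op` argument is used in practice (not part of this PR). It assumes a single-node setup with one GPU per rank and the NCCL backend, since `reduce_scatter` needs a backend that supports it; the address, port, world size, and tensor values are illustrative.

```
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def run(rank, world_size):
    # Illustrative single-node rendezvous; reduce_scatter needs a backend
    # that supports it (e.g. NCCL, which also requires one GPU per rank).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # One input tensor per rank: rank r ends up with the reduction of
    # every rank's r-th tensor in `output`.
    input_list = [torch.full((2,), float(rank + i), device="cuda")
                  for i in range(world_size)]
    output = torch.empty(2, device="cuda")

    # `op` selects the element-wise reduction (a torch.distributed.ReduceOp
    # value); ReduceOp.SUM is the default spelled out by the fixed docstring.
    dist.reduce_scatter(output, input_list, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: {output}")

    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2)
```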
Merge failed
Reason: Failed to merge; some land checks failed: trunk, trunk / android-emulator-build-test / build-and-test
If you believe this is an error, you can use the old behavior with
Please reach out to the PyTorch DevX Team with feedback or questions!
Details for Dev Infra team
Raised by workflow job
@pytorchbot merge -g
@pytorchbot successfully started a merge job. Check the current status here.
Hey @ShisuiUzumaki.