
Conversation

Aidyn-A
Collaborator

@Aidyn-A Aidyn-A commented Mar 1, 2024

This PR is necessary because the autograd engine + DDP call `all_reduce` from C++, so the changes must be made in C++.

```
[rank0]: Traceback (most recent call last):
[rank0]:   File "~/complex_ddp.py", line 72, in <module>
[rank0]:     main()
[rank0]:   File "~/complex_ddp.py", line 64, in main
[rank0]:     loss.backward()
[rank0]:   File "/home/usr/pytorch/torch/_tensor.py", line 525, in backward
[rank0]:     torch.autograd.backward(
[rank0]:   File "/home/usr/pytorch/torch/autograd/__init__.py", line 267, in backward
[rank0]:     _engine_run_backward(
[rank0]:   File "/home/usr/pytorch/torch/autograd/graph.py", line 744, in _engine_run_backward
[rank0]:     return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
[rank0]: TypeError: Input tensor data type is not supported for NCCL process group: ComplexFloat
```

I believe the same could be done for the rest of the ops to minimize the Python overhead; what do you think, @kwen2501?
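The equivalence this relies on can be sketched in plain Python (illustrative only, not torch code; the helpers mimic what `torch.view_as_real`/`torch.view_as_complex` do, on lists of Python complex numbers):

```python
# Sketch: a SUM all-reduce over complex tensors is equivalent to an
# elementwise SUM over the same buffers "viewed as real", i.e. flattened
# into interleaved (real, imag) pairs.

def view_as_real(zs):
    """Flatten a list of complex numbers into [re0, im0, re1, im1, ...]."""
    out = []
    for z in zs:
        out.extend((z.real, z.imag))
    return out

def view_as_complex(xs):
    """Inverse of view_as_real."""
    return [complex(r, i) for r, i in zip(xs[0::2], xs[1::2])]

# Two "ranks" contributing complex gradients (values are arbitrary).
rank0 = [1 + 2j, 3 - 1j]
rank1 = [0.5 + 0.5j, -3 + 1j]

# Elementwise SUM over the real views...
reduced = [a + b for a, b in zip(view_as_real(rank0), view_as_real(rank1))]

# ...round-trips to exactly the complex elementwise sum.
assert view_as_complex(reduced) == [a + b for a, b in zip(rank0, rank1)]
```

Because the real and imaginary parts never interact under SUM, the backend can reduce the flat real buffer without knowing it holds complex data.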

cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225 @chauhang @eqy @ptrblck

@pytorch-bot pytorch-bot bot added the release notes: distributed (c10d) release notes category label Mar 1, 2024

pytorch-bot bot commented Mar 1, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/121045

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 9a304f7 with merge base 8861507:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions github-actions bot added the oncall: distributed Add this issue/PR to distributed oncall triage queue label Mar 1, 2024
@Aidyn-A Aidyn-A requested a review from kwen2501 March 1, 2024 21:25
@eqy
Collaborator

eqy commented Mar 1, 2024

Naive question, what happens when we view complex float as real and apply premul sum? Is the "elementwise" computation valid there?

@Aidyn-A
Collaborator Author

Aidyn-A commented Mar 2, 2024

> Naive question, what happens when we view complex float as real and apply premul sum? Is the "elementwise" computation valid there?

Yes, the "elementwise" computation is still valid there, as long as the pre-mul factor is of a real type.
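To make this concrete, here is a minimal plain-Python sketch (not torch code; the flattening helper just mimics `view_as_real`) of why a real pre-mul factor is safe while a complex one would not be:

```python
# A real pre-mul factor commutes with the view-as-real trick; a complex
# factor does not, because complex multiplication mixes the real and
# imaginary components of each element.

def view_as_real(zs):
    """Flatten a list of complex numbers into [re0, im0, re1, im1, ...]."""
    out = []
    for z in zs:
        out.extend((z.real, z.imag))
    return out

z = [1 + 2j, -0.5 + 4j]  # arbitrary complex "tensor"
alpha = 0.25             # real pre-mul factor

# Scaling the real view elementwise equals the real view of the scaled tensor.
assert [alpha * x for x in view_as_real(z)] == view_as_real([alpha * w for w in z])

# With a complex factor the two sides disagree: elementwise scaling of the
# flat buffer cannot reproduce (a + bi)(c + di) cross terms.
beta = 1j
assert [beta * x for x in view_as_real(z)] != view_as_real([beta * w for w in z])
```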

Collaborator

@eqy eqy left a comment


Would it make sense to add a minimal test, e.g., in test_c10d_nccl.py?

Contributor

@kwen2501 kwen2501 left a comment


LGTM.

Re: moving complex support down from Python to C++:
we might need to give it more thought, as today all C++ backends rely on the view_as_real conversion at the Python level. If we move it down, we would need to add the conversion back in every backend (like you did here).

Another way to do it is at the dispatcher:
https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/Ops.cpp
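The Python-level pattern described above can be sketched with a toy stand-in (hypothetical names, plain Python, not actual c10d code): a wrapper performs the view-as-real round trip around a backend that only understands real dtypes, which is the conversion this PR moves into C++:

```python
def view_as_real(zs):
    """Flatten a list of complex numbers into [re0, im0, re1, im1, ...]."""
    out = []
    for z in zs:
        out.extend((z.real, z.imag))
    return out

def view_as_complex(xs):
    """Inverse of view_as_real."""
    return [complex(r, i) for r, i in zip(xs[0::2], xs[1::2])]

def backend_all_reduce_sum(per_rank):
    """Hypothetical backend: SUM all-reduce over real-valued buffers only."""
    if any(isinstance(x, complex) for buf in per_rank for x in buf):
        raise TypeError("backend does not support complex dtypes")
    total = [sum(col) for col in zip(*per_rank)]
    return [list(total) for _ in per_rank]

def all_reduce(per_rank):
    """Python-level wrapper: view complex inputs as real before calling
    the backend, then view the reduced result back as complex."""
    is_complex = isinstance(per_rank[0][0], complex)
    views = [view_as_real(buf) if is_complex else buf for buf in per_rank]
    out = backend_all_reduce_sum(views)
    return [view_as_complex(o) if is_complex else o for o in out]
```

For example, `all_reduce([[1 + 1j], [2 - 1j]])` returns `[[(3+0j)], [(3+0j)]]` even though the backend itself rejects complex inputs. Keeping this wrapper only at the Python level is exactly why the C++ call path (autograd engine + DDP) failed before this PR.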

@kwen2501
Contributor

kwen2501 commented Mar 8, 2024

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Mar 8, 2024
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pianpwk pushed a commit that referenced this pull request Mar 11, 2024
Pull Request resolved: #121045
Approved by: https://github.com/eqy, https://github.com/kwen2501