
Conversation

Contributor

@SherlockNoMad SherlockNoMad commented Sep 23, 2025

An explicit redistribute_local_tensor API call can also result in communication, so record it!

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @ezyang @msaroufim @dcci
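
For context, a minimal sketch of how the recorded redistribution could show up under DebugMode. The module paths, the init_device_mesh setup, and the debug_string() accessor are assumptions for illustration, not guaranteed stable API:

    import torch
    from torch.distributed.device_mesh import init_device_mesh
    from torch.distributed.tensor import Replicate, Shard, distribute_tensor
    from torch.utils._debug_mode import DebugMode  # assumed module path

    # Assumes an initialized process group (e.g. gloo) with world_size 2.
    mesh = init_device_mesh("cpu", (2,))
    dt = distribute_tensor(torch.randn(8, 8), mesh, [Shard(0)])

    with DebugMode() as debug_mode:
        # redistribute() routes through redistribute_local_tensor; with this
        # change, the communication it triggers is recorded as well.
        dt.redistribute(mesh, [Replicate()])

    # Expected to show a redistribute_input(...) entry plus the underlying
    # collective ops.
    print(debug_mode.debug_string())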


pytorch-bot bot commented Sep 23, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/163704

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 8b1ee97 with merge base 8d81564:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the ciflow/inductor and oncall: distributed labels Sep 23, 2025
Member

@zpcore zpcore left a comment

Nice, I was going to implement capturing the transform info, but you already did it! Is it possible to add a flag in debug_mode to control this?

Overall LGTM, may need to update the test file.
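
A hypothetical shape for such a flag (the keyword name record_redistribute is invented here for illustration; it is not an existing DebugMode argument):

    # Hypothetical: gate the redistribute recording behind a constructor flag,
    # so the extra transform-info output is opt-in.
    with DebugMode(record_redistribute=True) as debug_mode:
        out = dt.redistribute(mesh, [Replicate()])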

@zpcore
Member

zpcore commented Sep 23, 2025

By the way, I hope DebugMode can also expose the list of _TransformInfos triggered for each DTensor redistribution, besides the detailed collective ops carried out under the hood. Is it possible? Thanks!

@SherlockNoMad SherlockNoMad added the topic: not user facing label Sep 24, 2025
@SherlockNoMad
Contributor Author

@zpcore

I tried

  redistribute_input(t: f32[3, 8], [S(0)] -> [R], ([_TransformInfo(mesh_dim=0, src_dst_placements=(Shard(dim=0), Replicate()), logical_shape=[9, 8])],))

Looks a bit too verbose? Any way to simplify?

    for mode in _get_current_dispatch_mode_stack():
        if isinstance(mode, DebugMode):
            debug_mode = mode
            break
Contributor

Put this in a helper plz

Contributor Author

Done: get_active_debug_mode
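
For reference, a sketch of what the extracted helper plausibly looks like, assuming it simply wraps the inlined loop above (the actual definition in the PR may differ):

    from torch.utils._debug_mode import DebugMode  # assumed module path
    from torch.utils._python_dispatch import _get_current_dispatch_mode_stack

    def get_active_debug_mode():
        # Return the first DebugMode on the dispatch-mode stack, or None.
        for mode in _get_current_dispatch_mode_stack():
            if isinstance(mode, DebugMode):
                return mode
        return None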

@SherlockNoMad
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk label Sep 24, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) failed. The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team (raised by workflow job)

Failing merge rule: Core Maintainers

@SherlockNoMad
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 jobs have failed, first few of them are: trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (distributed, 3, 3, linux.g4dn.12xlarge.nvidia.gpu)

Details for Dev Infra team (raised by workflow job)

@SherlockNoMad
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@zpcore
Member

zpcore commented Sep 24, 2025

@zpcore

I tried

  redistribute_input(t: f32[3, 8], [S(0)] -> [R], ([_TransformInfo(mesh_dim=0, src_dst_placements=(Shard(dim=0), Replicate()), logical_shape=[9, 8])],))

Looks a bit too verbose? Any way to simplify?

That's good enough. I will follow up to update the string representation of _TransformInfo. Thanks!
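
As a hypothetical illustration of that follow-up (not the actual change), a compact rendering could drop the dataclass field names:

    # Hypothetical compact string form for _TransformInfo, e.g.
    # "Shard(dim=0) -> Replicate() @ mesh_dim=0" instead of the full repr.
    def _transform_info_str(info) -> str:
        src, dst = info.src_dst_placements
        return f"{src} -> {dst} @ mesh_dim={info.mesh_dim}"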

@SherlockNoMad SherlockNoMad deleted the bahuang/redis branch September 24, 2025 16:58
jainapurva pushed a commit that referenced this pull request Sep 29, 2025
An explicit redistribute_local_tensor API call can also result in communication, so record it!

Pull Request resolved: #163704
Approved by: https://github.com/ezyang