
[dtensor] make replicate -> partial do division instead #110898

Closed
wants to merge 2 commits

Conversation

@wanchaol wanchaol commented Oct 9, 2023

Stack from ghstack (oldest at bottom):

This PR switches the replicate -> partial conversion to do division instead of
zeroing out the other ranks. It preserves the same numerics, but avoids the
per-rank behavior difference and is friendlier to torch.compile.

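To illustrate the equivalence, here is a minimal standalone sketch (not the DTensor implementation itself; `world_size` and the helper function names below are made up for the example) showing that both the old zero-out conversion and the new division conversion produce local shards that sum back to the same replicated value, while the division variant avoids any rank-dependent branching:

```python
# Illustrative sketch only, not the DTensor code path. Compares the two ways
# to turn a Replicate placement into a Partial(sum) placement.
import torch

world_size = 4
replicated = torch.full((2, 2), 8.0)  # every rank holds the same full tensor

# Old approach: keep the value on rank 0 and zero it out on every other rank.
def replicate_to_partial_zero(local_tensor, rank):
    return local_tensor if rank == 0 else torch.zeros_like(local_tensor)

# New approach: divide by the number of ranks on every rank, so there is no
# rank-dependent branch and every rank runs the identical computation.
def replicate_to_partial_divide(local_tensor):
    return local_tensor / world_size

# Partial(sum) means the logical tensor is the sum of the per-rank shards,
# so both conversions recover the original replicated value.
summed_zero = sum(replicate_to_partial_zero(replicated, r) for r in range(world_size))
summed_div = sum(replicate_to_partial_divide(replicated) for _ in range(world_size))
assert torch.allclose(summed_zero, replicated)
assert torch.allclose(summed_div, replicated)
```

Because every rank executes the same arithmetic, there is no per-rank behavior difference for torch.compile to specialize on.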

pytorch-bot bot commented Oct 9, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/110898

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 75a68e2 with merge base 201d02e:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

wanchaol added a commit that referenced this pull request Oct 9, 2023

ghstack-source-id: d2ae8a10843e79a3cbebf1c1b34aba7a7a3027b3
Pull Request resolved: #110898
@wanchaol wanchaol requested a review from bdhirsh October 9, 2023 21:58
@wanchaol wanchaol added the release notes: distributed (dtensor) release notes category label Oct 9, 2023

@fduwjj fduwjj left a comment


Thanks for doing this; it definitely makes TP (the bias of the row-wise linear) less complicated for users to understand.
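For context on the row-wise linear bias case, here is a hedged sketch (illustrative names only, with the all-reduce modeled as a plain Python sum rather than the actual Tensor Parallel API) of how a replicated bias divided by the number of ranks composes with the partial matmul output:

```python
# Sketch of a row-wise-parallel linear layer: the matmul output on each rank
# is a Partial(sum) shard, and the replicated bias is converted to Partial by
# dividing by world_size, so the final reduction gives x @ w + b.
import torch

world_size = 2
torch.manual_seed(0)
x = torch.randn(3, 4)
w = torch.randn(4, 5)
b = torch.randn(5)

# Row-wise sharding: each rank holds a slice of the input features and the
# matching rows of the weight.
x_shards = x.chunk(world_size, dim=1)
w_shards = w.chunk(world_size, dim=0)

# Each rank adds bias / world_size to its partial matmul result, instead of
# only one rank adding the full bias.
partial_outputs = [
    x_shards[r] @ w_shards[r] + b / world_size for r in range(world_size)
]

# The all-reduce (modeled here as a plain sum) recovers the full result.
assert torch.allclose(sum(partial_outputs), x @ w + b, atol=1e-5)
```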

pytorchmergebot pushed a commit that referenced this pull request Oct 11, 2023
make random ops be a set instead of list
Pull Request resolved: #110900
Approved by: https://github.com/fduwjj
ghstack dependencies: #110898
pytorchmergebot pushed a commit that referenced this pull request Oct 12, 2023
isdanni pushed a commit to isdanni/pytorch that referenced this pull request Oct 13, 2023
@facebook-github-bot facebook-github-bot deleted the gh/wanchaol/366/head branch October 15, 2023 14:25