[tp] improve documentation #115880
Conversation
Improve the TP documentation in terms of format and descriptions [ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/115880
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures
As of commit 1fb2ba9 with merge base 0692240: This comment was automatically generated by Dr. CI and updates every 15 minutes.
lgtm.
@@ -32,7 +32,7 @@ Tensor Parallelism supports the following parallel styles:
 To simply configure the nn.Module's inputs and outputs with DTensor layouts
 and perform necessary layout redistributions, without distribute the module
 parameters to DTensors, the following classes can be used in
-the ``parallelize_plan``of ``parallelize_module``:
+the `parallelize_plan` of `parallelize_module`:
I thought we need double backticks for code formatting and that a single backtick renders as italics, but I may be wrong.
It's kind of weird to me too, but the latest main docs look wrongly formatted, so I switched back to single backticks; see the lines before this API: https://pytorch.org/docs/main/distributed.tensor.parallel.html#torch.distributed.tensor.parallel.PrepareModuleInput
I'll play around with the code formatting a bit more and check the refreshed docs before landing.
Maybe it is missing the space between the last backtick of ``parallelize_plan`` and "of", causing it to render improperly?
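The rendering behavior suggested here can be sketched with a toy matcher. This is a simplified, hypothetical stand-in for docutils' inline-markup recognizer, not the real parser: in reStructuredText, a closing ``` `` ``` must be followed by whitespace or punctuation, so a missing space after the closing backticks keeps that span from terminating there and the match runs on to the next valid closing position.

```python
import re

# Toy recognizer (illustrative only, simplified from the reST spec):
# an inline literal opens with "``" followed by a non-space character and
# closes with "``" only when followed by whitespace, punctuation, or
# end-of-string. The punctuation class below is a subset of the real rules.
LITERAL = re.compile(r"``(\S(?:.*?\S)?)``(?=[\s\.,:;!?)\]}]|$)")

def literals(text):
    """Return the inline-literal spans this toy matcher recognizes."""
    return LITERAL.findall(text)

# With the space, both spans are recognized as separate literals:
print(literals("the ``parallelize_plan`` of ``parallelize_module``:"))
# → ['parallelize_plan', 'parallelize_module']

# Without the space, "``of" is not a valid closing position, so the
# match swallows everything up to the next valid "``":
print(literals("the ``parallelize_plan``of ``parallelize_module``:"))
# → ['parallelize_plan``of ``parallelize_module']
```

This mirrors the garbled output seen on the rendered page: the two identifiers and the literal backticks between them collapse into one code span.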
good point, let me try that instead
Adjusted multiple places to double backticks, but the built doc is still rendering the old version; gonna land this first and check the doc.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Improve the TP documentation in terms of format and descriptions Pull Request resolved: #115880 Approved by: https://github.com/XilunWu
Stack from ghstack (oldest at bottom):
Improve the TP documentation in terms of format and descriptions
cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @fduwjj @wz337 @tianyu-l @wconstab @yf225