Avoid GPU syncs by reusing Pre-allocated Zero Tensor #128069
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/128069
Note: links to docs will display an error until the doc builds have completed.
❌ 1 New Failure as of commit 204485d with merge base 92151c8: one new job failure was reported.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
# implemented using post-save and pre-load hooks
_init_state_dict_state(self)
_register_all_state_dict_hooks(self)
# pre-allocate a zero scalar on the compute device so it can be reused
# instead of re-created on each call (avoiding GPU syncs)
self.zero = torch.tensor(0.0, device=self.compute_device)
Should we initialize this lazily (upon calling clip_grad_norm_ and actually needing to use it)? Maybe we can put it in a private attribute like self._zero_scalar?
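For reference, a minimal sketch of the lazy-initialization idea the reviewer is suggesting (the _FSDPStateSketch class and _get_zero_scalar helper are illustrative, not the merged FSDP API):

```python
from typing import Optional

import torch


class _FSDPStateSketch:
    """Illustrative stand-in for the FSDP state object, not the real class."""

    def __init__(self, compute_device: torch.device) -> None:
        self.compute_device = compute_device
        self._zero_scalar: Optional[torch.Tensor] = None  # allocated on first use

    def _get_zero_scalar(self) -> torch.Tensor:
        # Allocate the zero tensor only when a caller (e.g. clip_grad_norm_)
        # actually needs it, then cache it for reuse on later calls.
        if self._zero_scalar is None:
            self._zero_scalar = torch.tensor(0.0, device=self.compute_device)
        return self._zero_scalar
```

Lazy allocation keeps construction cheap for users who never call clip_grad_norm_, while still paying the allocation cost only once.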
Thanks @quanta42 for the contribution! Could you sign the CLA?
I did sign it, but I work for Adobe, so I need authorization from the administrators. Thanks for the prompt review!
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM! Could you fix lint?
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as Stale.
@awgu I was wondering why this hasn't been merged now that the lint issues are fixed?
Sorry, this fell through the cracks.
@pytorchbot rebase -s
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased and force-pushed from 67949db to 204485d.
Thank you
@pytorchbot merge -i
Merge started. Your change will be merged while ignoring the following 1 check: pull / linux-focal-py3.8-clang10-onnx / test (default, 2, 2, amz2023.linux.2xlarge). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
This PR improves the FullyShardedDataParallel (FSDP) class in PyTorch by reusing a pre-allocated zero tensor to avoid unnecessary GPU synchronizations.
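For readers skimming the thread, here is a standalone sketch of the pattern this PR applies (the grad_norm_or_zero helper is hypothetical, not FSDP's API): building a device tensor from a Python scalar on every call incurs a fresh host-to-device transfer, while a single pre-allocated zero tensor pays that cost once and is reused thereafter.

```python
import torch


def grad_norm_or_zero(grads, zero: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: return the total norm of `grads`, or the cached
    # zero tensor when this rank holds no gradients. Returning `zero` avoids
    # building a new device tensor from a Python scalar on every call.
    if len(grads) == 0:
        return zero
    return torch.linalg.vector_norm(torch.stack([g.norm() for g in grads]))


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
zero = torch.tensor(0.0, device=device)  # pre-allocated once, reused thereafter

print(grad_norm_or_zero([], zero))  # no new allocation or transfer
print(grad_norm_or_zero([torch.ones(4, device=device)], zero))
```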
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @penguinwu @tianyu-l @yf225 @chauhang