Add an option to zero out the gradient before the forward #4905
Closed
Conversation
This pull request was exported from Phabricator. Differential Revision: D44264848
sf-wind added a commit to sf-wind/detectron2 that referenced this pull request on Apr 10, 2023
sf-wind force-pushed from 7d26448 to 8889f4c
This pull request was exported from Phabricator. Differential Revision: D44264848
sf-wind force-pushed from 8889f4c to f151ab8
This pull request has been merged in 88217ca.
danielm322 pushed a commit to danielm322/detectron2 that referenced this pull request on Jun 9, 2023
Labels
CLA Signed
This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
fb-exported
Merged
Summary:
Currently the optimizer zeros the gradients after the forward pass and before the backward pass. A recent PyTorch change sets all gradients to None by default, which reduces memory consumption (since the gradient tensors are freed).
However, doing this after the forward provides no memory saving, because memory consumption peaks at the end of the forward pass.
Whether the gradients are set to None before or after the forward makes no difference to correctness, so we should set them before the forward to realize the memory saving.
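As a rough sketch (not code from this PR), a plain PyTorch training loop with the zeroing moved before the forward could look like the following. `optimizer.zero_grad(set_to_none=True)` is the standard `torch.optim` API; the model, loss, and `loader` here are placeholders:

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()
# `loader` stands in for any iterable of (input, target) batches.
loader = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(8)]

for data, target in loader:
    # Free the gradient tensors *before* the forward pass. Peak memory occurs
    # at the end of the forward, so releasing gradients here lowers the peak;
    # releasing them after the forward is too late to help.
    optimizer.zero_grad(set_to_none=True)
    loss = loss_fn(model(data), target)
    loss.backward()
    optimizer.step()
```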
For now we add a flag to enable this behavior instead of making it the default. Since users can override the zero_grad function (as the comment indicates), we do not know exactly what is done inside it, and the flag keeps existing flows from breaking.
Once we have gone through the existing flows, we should enable the flag by default.
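A minimal sketch of how such a flag could be wired into a trainer's step, under the assumption that the model's forward returns something a loss can be computed from; the class and the `zero_grad_before_forward` name are illustrative here, not detectron2's actual implementation:

```python
import torch

class TrainerSketch:
    """Illustrative trainer step; detectron2's real trainer differs in detail."""

    def __init__(self, model, optimizer, zero_grad_before_forward=False):
        self.model = model
        self.optimizer = optimizer
        self.zero_grad_before_forward = zero_grad_before_forward

    def run_step(self, data, target, loss_fn):
        if self.zero_grad_before_forward:
            # New behavior: release gradients before the forward to cut peak memory.
            self.optimizer.zero_grad(set_to_none=True)
        loss = loss_fn(self.model(data), target)
        if not self.zero_grad_before_forward:
            # Old behavior: zero after the forward. Correct, but saves no memory,
            # since the memory peak has already passed.
            self.optimizer.zero_grad(set_to_none=True)
        loss.backward()
        self.optimizer.step()
```

Gating both call sites on one flag keeps the old flow byte-for-byte intact when the flag is off, which is the safety property the paragraph above asks for.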
Reviewed By: tglik
Differential Revision: D44264848