
Only make a shallow copy when loading optimizer state_dict #106082

Closed
janeyx99 wants to merge 6 commits

Conversation

@janeyx99 (Contributor) commented Jul 26, 2023

The one thing we still deep-copy is param_groups, which is much lighter weight. This should also save memory when loading from a checkpoint.

The deepcopy was introduced in ecfcf39, but module.py only did a shallow copy at that point, so it did not actually bring parity.

This also incorporates an XLA fix, which is why I'm updating the pin to pytorch/xla@ca5eab8
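
For context, a minimal sketch of the copying behavior this change is about. It is illustrative only, not the actual `Optimizer.load_state_dict` implementation (the real one also handles casting state to the params' dtype/device), and the `checkpoint.pt` path at the end is hypothetical:

```python
import copy

import torch

# Toy setup: an optimizer with per-parameter state (Adam keeps exp_avg / exp_avg_sq).
model = torch.nn.Linear(1024, 1024)
opt = torch.optim.Adam(model.parameters())
model(torch.randn(8, 1024)).sum().backward()
opt.step()

sd = opt.state_dict()

# Previously, load_state_dict deep-copied the entire state_dict, duplicating
# every per-parameter state tensor in memory:
full_copy = copy.deepcopy(sd)  # clones all state tensors

# The idea here: only param_groups (small dicts of hyperparameters) needs the
# deep copy; the per-parameter state tensors can be reused rather than cloned.
light_copy = {
    "param_groups": copy.deepcopy(sd["param_groups"]),
    "state": sd["state"],  # shallow: tensors are shared, not duplicated
}

# Checkpoint loading looks the same from the user's side, just with lower peak
# memory (path is hypothetical):
# opt.load_state_dict(torch.load("checkpoint.pt"))
```

The point is that param_groups holds only small hyperparameter dicts, so deep-copying it stays cheap, whereas deep-copying the state would clone every momentum/variance buffer.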

Stack from ghstack (oldest at bottom):

janeyx99 requested a review from albanD as a code owner July 26, 2023 22:38
pytorch-bot bot added the release notes: optimizer label Jul 26, 2023
@pytorch-bot bot commented Jul 26, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/106082

Note: Links to docs will display an error until the docs builds have been completed.

✅ 2 Unrelated Failures

As of commit 8aea828:

UNSTABLE - The following jobs failed but were likely due to flakiness present on trunk and have been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

janeyx99 added the topic: performance label Jul 26, 2023
@albanD (Collaborator) left a comment

SGTM! Very good catch!

.git-blame-ignore-revs (review thread, resolved)
janeyx99 added a commit that referenced this pull request Jul 28, 2023
ghstack-source-id: cee33dcf6d531cb942fb9cde6947943dd7d709b2
Pull Request resolved: #106082
@albanD (Collaborator) left a comment

Ok!

janeyx99 added a commit that referenced this pull request Jul 31, 2023
ghstack-source-id: 8028f48dbef35e145614fe14b0557c8f5fc0e736
Pull Request resolved: #106082
@janeyx99 (Contributor, Author) commented:

@pytorchbot merge

pytorch-bot bot added the ciflow/trunk label Jul 31, 2023
@pytorchmergebot (Collaborator) commented
Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@pytorchmergebot (Collaborator) commented
Merge failed

Reason: Command git -C /home/runner/work/pytorch/pytorch cherry-pick -x 46dd0f0a2d97b43b6e730a525d4b178b82f36a6f returned non-zero exit code 1

Auto-merging .git-blame-ignore-revs
CONFLICT (content): Merge conflict in .git-blame-ignore-revs
Auto-merging test/optim/test_optim.py
error: could not apply 46dd0f0a2d9... Only make a shallow copy when loading optimizer state_dict
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git cherry-pick --continue".
hint: You can instead skip this commit with "git cherry-pick --skip".
hint: To abort and get back to the state before "git cherry-pick",
hint: run "git cherry-pick --abort".
Details for Dev Infra team: raised by workflow job.

janeyx99 added a commit that referenced this pull request Aug 1, 2023
ghstack-source-id: 12751888e2adc1663544cd94cdd145721b1ec926
Pull Request resolved: #106082
@janeyx99 (Contributor, Author) commented Aug 1, 2023

@pytorchbot merge

@pytorchmergebot (Collaborator) commented
Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

Labels
ciflow/inductor, ciflow/trunk, Merged, release notes: optimizer, topic: performance

4 participants