
Conversation

@minhua-chen
Contributor

Summary:
Make `iter` persistent for AdagradW optimizer state saving.
This avoids potentially losing the `iter` information when training is restarted.

Differential Revision: D74717848
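
To illustrate what "persistent" buys here, below is a minimal sketch, assuming the step counter lives in a PyTorch module buffer the way module-side optimizer state is commonly held; the class and names are hypothetical stand-ins, not FBGEMM's actual implementation. A buffer registered with `persistent=True` is included in `state_dict()`, so the counter is written to checkpoints and restored after a restart instead of silently resetting to zero.

```python
import torch

class OptimizerStateSketch(torch.nn.Module):
    """Hypothetical stand-in for an AdagradW state holder (not FBGEMM code)."""

    def __init__(self) -> None:
        super().__init__()
        # persistent=True includes the buffer in state_dict(), so `iter`
        # is saved with checkpoints and restored on load. With
        # persistent=False it would be omitted and reset to 0 on restart.
        self.register_buffer(
            "iter", torch.zeros(1, dtype=torch.int64), persistent=True
        )

    def step(self) -> None:
        self.iter += 1

state = OptimizerStateSketch()
for _ in range(100):
    state.step()

ckpt = state.state_dict()          # includes {"iter": tensor([100])}
restored = OptimizerStateSketch()  # simulate a training restart
restored.load_state_dict(ckpt)     # counter survives the restart
assert int(restored.iter) == 100
```

With `persistent=False`, `state_dict()` would omit `iter`, and a restarted job would resume with the counter at zero, skewing any iteration-dependent logic in AdagradW.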

@netlify

netlify bot commented May 17, 2025

Deploy Preview for pytorch-fbgemm-docs ready!

🔨 Latest commit: f3e56fe
🔍 Latest deploy log: https://app.netlify.com/projects/pytorch-fbgemm-docs/deploys/686ec616b82890000829ed24
😎 Deploy Preview: https://deploy-preview-4147--pytorch-fbgemm-docs.netlify.app

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D74717848

minhua-chen added a commit to minhua-chen/FBGEMM that referenced this pull request May 20, 2025
Summary:
Pull Request resolved: pytorch#4147

X-link: facebookresearch/FBGEMM#1228

Make iter persistent for AdagradW optimizer state saving.
This is to avoid potential loss of the iter information when training is restarted.

Reviewed By: q10

Differential Revision: D74717848
minhua-chen force-pushed the export-D74717848 branch 2 times, most recently from bb2de73 to 3f2b034 on May 20, 2025 at 07:21
minhua-chen added a commit to minhua-chen/FBGEMM that referenced this pull request Jul 9, 2025
Summary:
Pull Request resolved: pytorch#4147

X-link: facebookresearch/FBGEMM#1228

Make iter persistent for AdagradW optimizer state saving.
This is to avoid potential loss of the iter information when training is restarted.

Reviewed By: q10

Differential Revision: D74717848
