

Don't make parameters have symbolic shapes #85809

Closed
wants to merge 1 commit

Conversation

Contributor

@ezyang commented Sep 28, 2022

Stack from ghstack (oldest at bottom):

Parameters won't change size across iterations of the
training loop, so this is a cost-free optimization that
avoids us having to do symbolic math over parameters.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
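The idea above can be sketched outside of PyTorch internals. This is a hypothetical, simplified model (not the actual implementation): a tracer that allocates a fresh symbol per dimension for ordinary inputs, but keeps concrete integer sizes for tensors flagged as parameters, since those sizes are fixed across training iterations.

```python
# Hypothetical sketch of the policy in this PR, not the real PyTorch code:
# parameters get static (concrete int) shapes, other inputs get symbolic dims.

class SymInt:
    """Stand-in for a symbolic dimension created during tracing."""
    _counter = 0

    def __init__(self):
        SymInt._counter += 1
        self.name = f"s{SymInt._counter}"

    def __repr__(self):
        return self.name


def trace_shape(size, is_parameter):
    """Return the shape the tracer will reason about for this tensor.

    Parameters never change size across iterations of the training
    loop, so keeping their sizes as plain ints avoids doing symbolic
    math over them; only non-parameter inputs get fresh symbols.
    """
    if is_parameter:
        return list(size)                 # static: concrete ints
    return [SymInt() for _ in size]       # dynamic: one symbol per dim


weight_shape = trace_shape((128, 64), is_parameter=True)
input_shape = trace_shape((32, 64), is_parameter=False)
print(weight_shape)                       # plain ints, cheap to reason about
print(all(isinstance(d, SymInt) for d in input_shape))
```

Under this sketch, any shape arithmetic involving only the parameter's dimensions stays plain integer math, which is the "cost-free optimization" the description refers to.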


pytorch-bot bot commented Sep 28, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85809

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 7664e89:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

ezyang added a commit that referenced this pull request Sep 28, 2022
Parameters won't change size across iterations of the
training loop, so this is a cost-free optimization that
avoids us having to do symbolic math over parameters.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

ghstack-source-id: d7462b36ca5779fed8f6633633ee986ffa6df949
Pull Request resolved: #85809
Collaborator

@albanD left a comment


Sounds good!

Contributor Author

ezyang commented Sep 28, 2022

@pytorchbot merge

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered without a flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@github-actions

Hey @ezyang.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

drisspg pushed a commit to drisspg/pytorch that referenced this pull request Sep 29, 2022
Pull Request resolved: pytorch#85809
Approved by: https://github.com/albanD
@facebook-github-bot facebook-github-bot deleted the gh/ezyang/1417/head branch October 2, 2022 14:19
mehtanirav pushed a commit that referenced this pull request Oct 4, 2022
Pull Request resolved: #85809
Approved by: https://github.com/albanD
alvgaona pushed a commit to alvgaona/pytorch that referenced this pull request Oct 11, 2022
Pull Request resolved: pytorch#85809
Approved by: https://github.com/albanD
4 participants