Re-introduce torch.Tensor.to_padded_tensor
#85293
Conversation
[ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/85293

Note: Links to docs will display an error until the docs builds have been completed. ❌ 1 failure as of commit 8cca6ae. This comment was automatically generated by Dr. CI and updates every 15 minutes.
torch.Tensor.to_padded_tensor
@mikaylagawarecki has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Differential Revision: [D39629004](https://our.internmc.facebook.com/intern/diff/D39629004) [ghstack-poisoned]
@pytorchbot rebase -s

@pytorchbot successfully started a rebase job. Check the current status here.
Successfully rebased
```yaml
dispatch:
  CompositeExplicitAutograd: alias_copy_out

- func: to_padded_tensor(Tensor self, float padding, int[]? output_size=None) -> Tensor
```
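The schema above takes a padding value and an optional `output_size`. Conceptually, the op flattens a ragged (nested) batch into one rectangular tensor, filling the gaps with the padding value. A minimal pure-Python sketch of that semantics for the 2-D case (the function name and structure here are illustrative, not PyTorch API):

```python
def to_padded(ragged, padding, output_size=None):
    """Pad a list of variable-length rows into a rectangular 2-D list.

    Mirrors the 2-D semantics of to_padded_tensor: output_size
    defaults to (num_rows, longest_row_length).
    """
    rows = len(ragged)
    cols = max(len(r) for r in ragged)
    if output_size is not None:
        # Caller-requested shape; assumed at least as large as the data.
        rows, cols = output_size
    padded = [
        [row[j] if j < len(row) else padding for j in range(cols)]
        for row in ragged[:rows]
    ]
    # Extra all-padding rows if output_size asks for more rows than we have.
    padded += [[padding] * cols for _ in range(rows - len(ragged))]
    return padded

# Ragged batch [[1, 2], [3]] padded with 0 -> [[1, 2], [3, 0]]
```

The real op generalizes this to arbitrary nested-tensor dimensionality, but the fill rule is the same: data values where they exist, `padding` everywhere else.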
@albanD removing the method seemed to cause a decent amount of headaches for internal users. I don't like that we can't break BC, though. Would adding this as a method on the Tensor class in Python make a significant difference?
I think it might not be scriptable with TorchScript if we add it on the Tensor class at the Python level. @cpuhrsch, is this what you were getting at yesterday?
Additionally, I noticed that to_dense, which seems like the analogous function to to_padded_tensor for sparse tensors, is a method registered as a native function.
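For context, a short usage sketch of the re-introduced method form on a nested tensor (assuming the `torch.nested` factory API; the values are illustrative):

```python
import torch

# Build a ragged batch as a nested tensor (two rows, lengths 2 and 1).
nt = torch.nested.nested_tensor([torch.tensor([1.0, 2.0]),
                                 torch.tensor([3.0])])

# The Tensor method this PR re-introduces: pads to the smallest
# rectangular shape, filling the gaps with 0.0.
padded = nt.to_padded_tensor(0.0)
# padded is [[1., 2.], [3., 0.]]
```

Keeping this as a native function (rather than a Python-level method) means it stays visible to TorchScript and to the dispatcher, which is the trade-off discussed above.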
If that's painful, you can keep it here for sure yes.
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)

@pytorchbot successfully started a merge job. Check the current status here.

Hey @mikaylagawarecki.
Differential Revision: [D39629004](https://our.internmc.facebook.com/intern/diff/D39629004)
Pull Request resolved: #85293
Approved by: https://github.com/cpuhrsch
Stack from ghstack (oldest at bottom):

- torch.Tensor.to_padded_tensor #85293

Differential Revision: D39629004