[pytorch]Migrate _th_ger to Aten and kill resize_scalar in codegen #33792
[ghstack-poisoned]
💊 CircleCI build failures summary and remediations
As of commit 3229b44 (more details on the Dr. CI page): ✅ None of the build failures appear to be your fault 💚
❄️ 2 tentatively flaky failures
2 failures were tentatively classified as flaky but have not launched reruns to confirm.
    if isinstance(resize, str):
        return "{}.resize_({}.sizes());".format(arg['name'], resize)
    else:
        resize_scalar = arg.get('resize_scalar', False)
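The branch above can be exercised with a small stand-alone sketch. This is a hypothetical mock of the quoted codegen logic (the function name `emit_resize` and the plain-dict argument shape are illustrative, not the actual pytorch codegen module):

```python
# Hypothetical, self-contained sketch of the codegen branch quoted above:
# `resize` is either the name of another argument (resize the output to that
# tensor's sizes) or None, in which case the legacy `resize_scalar` flag was
# consulted -- the flag this PR removes.
def emit_resize(arg, resize):
    if isinstance(resize, str):
        # Emit a resize to the named argument's sizes.
        return "{}.resize_({}.sizes());".format(arg['name'], resize)
    else:
        resize_scalar = arg.get('resize_scalar', False)
        # With resize_scalar killed, this branch never emits a scalar resize.
        return "" if not resize_scalar else "// legacy scalar resize"

print(emit_resize({'name': 'result'}, 'self'))
# → result.resize_(self.sizes());
```

With `resize` set to an argument name, the generated C++ resizes the output to that argument's sizes; with the flag removed, the else-branch emits nothing.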
The existence of resize_scalar makes it seem like ger, at some point in the past, supported accepting a 0D tensor. It doesn't accept 0D tensors on master and no one has complained about it, so this change seems fine to me.
If we want to be really safe, we can try to figure out whether torch.ger ever accepted a 0D tensor in the past and, if so, when it stopped.
Checked a little bit. I think the people who made the _th_ger change wanted to make this resize safe.
In the legacy code, the size check is inside the addr function, but this resize happens before addr is called. _th_ger calls addr underneath, and addr doesn't allow a 0D vec, so ger doesn't support a 0D vec anyway.
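The ordering described above can be mimicked in a short stand-alone sketch. `FakeTensor`, `addr_check`, and `legacy_ger` are hypothetical stand-ins, not the TH code: the point is only that the resize runs first, while addr's internal 1-D check still rejects a 0D vec, so ger never actually worked on 0D inputs.

```python
class FakeTensor:
    """Hypothetical stand-in for a tensor: tracks only its shape."""
    def __init__(self, sizes):
        self.sizes = list(sizes)
    def dim(self):
        return len(self.sizes)
    def resize_(self, sizes):
        self.sizes = list(sizes)
        return self

def addr_check(vec1, vec2):
    # In the legacy code, this 1-D check lives inside addr itself.
    if vec1.dim() != 1 or vec2.dim() != 1:
        raise ValueError("addr: expected 1-D vectors")

def legacy_ger(result, self_t, vec2):
    # Legacy ordering: resize the output *before* calling addr, which
    # then performs the size check that rejects 0D inputs.
    result.resize_([s.sizes[0] if s.dim() > 0 else 1 for s in (self_t, vec2)])
    addr_check(self_t, vec2)  # a 0D vec fails here, after the resize ran
    return result

out = FakeTensor([])
legacy_ger(out, FakeTensor([3]), FakeTensor([4]))
assert out.sizes == [3, 4]
```

Passing a 0D `FakeTensor([])` as a vec raises from `addr_check`, but only after the output has already been resized, which is the unsafe ordering the comment refers to.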
The new resize semantics introduced in this PR are BC-breaking; we should either:
- follow the old behavior, or
- follow the old behavior and issue a deprecation warning.
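The second option could look something like this hedged sketch (the helper name `resize_output_legacy` and the warning text are hypothetical, not the actual ATen implementation):

```python
import warnings

def resize_output_legacy(result_sizes, expected_sizes):
    """Keep the old TH resize behavior, but warn that it is deprecated."""
    if list(result_sizes) != list(expected_sizes):
        warnings.warn(
            "resizing a mismatched output tensor is deprecated and may "
            "become an error in a future release",
            DeprecationWarning,
        )
        # Old behavior preserved: silently give the output the expected shape.
        return list(expected_sizes)
    return list(result_sizes)
```

A matching output passes through untouched; a mismatched one is still resized as before, but the caller now sees a DeprecationWarning.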
… codegen" Differential Revision: [D20107158](https://our.internmc.facebook.com/intern/diff/D20107158) [ghstack-poisoned]
Discussed with @zou3519: we will keep the old TH resize behavior in this PR. We will open a new PR if we need to deprecate the legacy resize behavior.
    Tensor& ger_out(Tensor &result, const Tensor& self, const Tensor& vec2) {
      check_1d(self, "self", "ger");
      check_1d(vec2, "vec2", "ger");
      if (result.dim() != 2 || result.size(0) != self.size(0) || result.size(1) != vec2.size(0)) {
nit: I think it's clearer if you do something like

    if (result.sizes() != {self.size(0), vec2.size(0)}) {
      result.resize_({...});
    }
Will do if there are more dimensions to check. For 2 dimensions, I think this is still fine, and it saves a memory allocation, so it's faster.
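The trade-off discussed in this exchange can be contrasted in a small sketch (both helpers are hypothetical illustrations, not the ATen code): the explicit per-extent check avoids building a temporary shape object, while the whole-shape comparison is shorter and scales better to more dimensions.

```python
def needs_resize_explicit(result_sizes, n, m):
    # Style used in the PR: check rank and each extent individually;
    # no temporary shape object is constructed.
    return (len(result_sizes) != 2
            or result_sizes[0] != n
            or result_sizes[1] != m)

def needs_resize_shape(result_sizes, expected_sizes):
    # Reviewer's suggestion: compare against the expected shape directly;
    # in C++ this would build a small temporary from an initializer list.
    return tuple(result_sizes) != tuple(expected_sizes)

# Both styles agree on whether the output needs a resize.
for sizes, n, m in [((3, 4), 3, 4), ((3,), 3, 4), ((4, 3), 3, 4)]:
    assert needs_resize_explicit(sizes, n, m) == needs_resize_shape(sizes, (n, m))
```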
This breaks the XLA CI test; added Ailing to give an update here once the XLA side is ready.
@pytorchbot rebase this please
ehhh what happened to our pytorchbot? :P
Thanks!
@glaringlee merged this pull request in 57c1b80.
…ytorch#33792)
Summary: Pull Request resolved: pytorch#33792
Test Plan: Imported from OSS
Differential Revision: D20107158
Pulled By: glaringlee
fbshipit-source-id: bceddb2d39d3abf36f277daba537677312449c9c
Stack from ghstack:
Differential Revision: D20107158