Make operators with optional Tensor? arguments c10-full #41610
Conversation
Previously, operators that have a `Tensor?` (i.e. optional tensor) in their schema implemented it using `Tensor` in C++ and filled in an undefined tensor for the None case. The c10 operator library, however, expects `Tensor?` to be represented as `optional<Tensor>`, so those operators couldn't be c10-full yet and still had to use codegenerated unboxing instead of templated unboxing.

This PR changes that. It extends the `hacky_wrapper_for_legacy_signatures` to not only take care of TensorOptions, but also map between signatures taking `Tensor` and `optional<Tensor>`. For this, it takes an additional template parameter, the expected signature, and uses it to go argument by argument and unwrap any optionals it finds.

Differential Revision: [D22607879](https://our.internmc.facebook.com/intern/diff/D22607879/)
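The wrapper mechanism can be sketched roughly like this. This is a hypothetical, heavily simplified stand-in, not the actual ATen code: the `Tensor`, `unwrap`, and `hacky_wrapper` names below are illustrative. The idea is that the wrapper is templated on the *expected* signature and adapts each `optional<Tensor>` argument down to the plain `Tensor` (undefined == None) that the legacy kernel was written against.

```cpp
// Hypothetical, heavily simplified sketch of the wrapper idea; these are
// stand-in types and names, not the actual ATen implementation.
#include <cassert>
#include <optional>

// Stand-in for at::Tensor: only tracks whether it is defined.
struct Tensor {
    bool defined_ = true;
    bool defined() const { return defined_; }
};

// Per-argument mapping: optional<Tensor> unwraps to Tensor (undefined tensor
// for nullopt); every other argument type passes through unchanged.
inline Tensor unwrap(std::optional<Tensor> arg) {
    return arg.has_value() ? *arg : Tensor{false};
}
template <class T>
T unwrap(T arg) {
    return arg;
}

// The wrapper is templated on the *expected* signature and the legacy kernel;
// it goes argument by argument, unwrapping before calling the kernel.
template <class ExpectedSignature, auto LegacyKernel>
struct hacky_wrapper;

template <auto LegacyKernel, class Ret, class... Args>
struct hacky_wrapper<Ret(Args...), LegacyKernel> {
    static Ret call(Args... args) {
        return LegacyKernel(unwrap(args)...);
    }
};

// A legacy kernel written against plain `Tensor`, treating undefined as None.
Tensor legacy_clamp_min(Tensor self, Tensor min) {
    if (!min.defined()) {
        return self;  // None case: nothing to clamp against
    }
    return Tensor{true};  // pretend we clamped
}
```

A registration would then expose `hacky_wrapper<Tensor(Tensor, std::optional<Tensor>), &legacy_clamp_min>::call`, which has the `optional<Tensor>` signature c10 expects while the kernel body stays unchanged.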
💊 CI failures summary and remediations

As of commit a4cebc4 (more details on the Dr. CI page):

🕵️ 5 new failures recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
Previously, if an op took an optional `Tensor?` argument, the C++ frontend (i.e. `at::op()` and `Tensor::op()`) was generated to take `Tensor`. A previous PR (#41610) changed the kernels to be written with `c10::optional<Tensor>` instead of `Tensor`, but did not touch the C++ frontend yet. This PR changes the C++ frontend API to take `c10::optional<Tensor>` instead of `Tensor` as well.

This should be mostly backwards-compatibility preserving. Since `Tensor` implicitly converts to `c10::optional<Tensor>`, any old code calling an op with a `Tensor` still works. There are likely corner cases that break, though. For example, C++ only ever performs *one* implicit conversion. So if you call an op with a non-tensor object that gets implicitly converted to a `Tensor`, that previously worked because the API took a `Tensor` and C++ allows one implicit conversion. Now it fails because it would require two implicit conversions (to `Tensor` and then to `c10::optional<Tensor>`), which C++ doesn't do.

The main reasons for doing this are:

- Make the C++ API saner. Those arguments are optional, and that should be visible from the signature.
- Allow easier integration for XLA and Autocast. Those backends generate code to wrap operators and forward operator arguments to calls to `at::op()`. After #41610, there was a mismatch: they had to implement operators with `optional<Tensor>` but call `at::op()` with `Tensor`, so they had to manually convert between the two. After this PR, they can just forward the `optional<Tensor>` in their call to `at::op()`.

Differential Revision: [D22704832](https://our.internmc.facebook.com/intern/diff/D22704832/)
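The one-implicit-conversion corner case can be illustrated with hypothetical stand-in types. `Scalar`, `Tensor`, and `SimpleOptional` below are illustrative, not the real ATen/c10 classes; `SimpleOptional` mimics an optional whose converting constructor takes `T` directly (note that `std::optional`'s perfect-forwarding constructor behaves differently in this corner, which is why a hand-rolled stand-in is used here).

```cpp
// Hypothetical stand-in types illustrating C++'s "at most one user-defined
// conversion" rule; NOT the real ATen/c10 classes.
#include <cassert>

struct Scalar {
    double v;
    Scalar(double v) : v(v) {}  // implicit double -> Scalar
};

struct Tensor {
    double v;
    Tensor(Scalar s) : v(s.v) {}  // implicit Scalar -> Tensor
};

// Minimal optional-like wrapper with a plain converting constructor from T.
// No empty state is modeled; only the conversion behavior matters here.
template <class T>
struct SimpleOptional {
    T value_;
    SimpleOptional(T v) : value_(v) {}  // implicit T -> SimpleOptional<T>
    const T& operator*() const { return value_; }
};

// Old-style frontend: takes Tensor.
double old_op(Tensor t) { return t.v; }

// New-style frontend: takes an optional Tensor.
double new_op(SimpleOptional<Tensor> t) { return (*t).v; }

// old_op(Scalar{1.0});          // OK: one conversion (Scalar -> Tensor)
// new_op(Tensor{Scalar{2.0}});  // OK: one conversion (Tensor -> SimpleOptional)
// new_op(Scalar{3.0});          // does NOT compile: would need two user-defined
//                               // conversions (Scalar -> Tensor -> SimpleOptional)
```

The commented-out last call is exactly the breakage described above: code that previously relied on an implicit conversion to `Tensor` now needs a second conversion into the optional, which C++ refuses to perform.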
Approving to unblock.
I'm finding the metaprogramming kind of hard to follow... some comment begging inline :)
Previously, operators that have a `Tensor?` (i.e. optional tensor) in their schema implemented it using `Tensor` in C++ and filled in an undefined tensor for the None case. The c10 operator library, however, expects `Tensor?` to be represented as `optional<Tensor>`, so those operators couldn't be c10-full yet and still had to use codegenerated unboxing instead of templated unboxing. This PR changes that. It extends the `hacky_wrapper_for_legacy_signatures` to not only take care of TensorOptions, but now also map between signatures taking `Tensor` and `optional<Tensor>`. For this, it requires an additional template parameter, the expected signature, and it uses that to go argument-by-argument and unwrap any optionals it finds. Differential Revision: [D22607879](https://our.internmc.facebook.com/intern/diff/D22607879/) [ghstack-poisoned]
Previously, if an op took an optional `Tensor?` argument, the C++ frontend (i.e. `at::op()` and `Tensor::op()`) were generated to take `Tensor`. A previous PR (#41610) changed the kernels to be written with `c10::optional<Tensor>` instead of `Tensor`, but that did not touch the C++ frontend yet. This PR changes the C++ frontend API to take `c10::optional<Tensor>` instead of `Tensor` as well. This should be mostly bc conserving. Since `Tensor` implicitly converts to `c10::optional<Tensor>`, any old code calling an op with a `Tensor` would still work. There are likely corner cases that get broken though. For example, C++ only ever does *one* implicit conversion. So if you call an op with a non-tensor object that gets implicitly converted to a `Tensor`, then that previously worked since the API took a `Tensor` and C++ allows one implicit conversion. Now it wouldn't work anymore because it would require two implicit conversions (to `Tensor` and then to `c10::optional<Tensor>`) and C++ doesn't do that. The main reasons for doing this are - Make the C++ API more sane. Those arguments are optional and that should be visible from the signature. - Allow easier integration for XLA and Autocast. Those backends generate code to wrap operators and forward operator arguments to calls to at::op(). After #41610, there was a mismatch because they had to implement operators with `optional<Tensor>` but call `at::op()` with `Tensor`, so they had to manually convert between those. After this PR, they can just forward the `optional<Tensor>` in their call to `at::op()`. Differential Revision: [D22704832](https://our.internmc.facebook.com/intern/diff/D22704832/) [ghstack-poisoned]
Previously, if an op took an optional `Tensor?` argument, the C++ frontend (i.e. `at::op()` and `Tensor::op()`) were generated to take `Tensor`. A previous PR (#41610) changed the kernels to be written with `c10::optional<Tensor>` instead of `Tensor`, but that did not touch the C++ frontend yet. This PR changes the C++ frontend API to take `c10::optional<Tensor>` instead of `Tensor` as well. This should be mostly bc conserving. Since `Tensor` implicitly converts to `c10::optional<Tensor>`, any old code calling an op with a `Tensor` would still work. There are likely corner cases that get broken though. For example, C++ only ever does *one* implicit conversion. So if you call an op with a non-tensor object that gets implicitly converted to a `Tensor`, then that previously worked since the API took a `Tensor` and C++ allows one implicit conversion. Now it wouldn't work anymore because it would require two implicit conversions (to `Tensor` and then to `c10::optional<Tensor>`) and C++ doesn't do that. The main reasons for doing this are - Make the C++ API more sane. Those arguments are optional and that should be visible from the signature. - Allow easier integration for XLA and Autocast. Those backends generate code to wrap operators and forward operator arguments to calls to at::op(). After #41610, there was a mismatch because they had to implement operators with `optional<Tensor>` but call `at::op()` with `Tensor`, so they had to manually convert between those. After this PR, they can just forward the `optional<Tensor>` in their call to `at::op()`. Differential Revision: [D22704832](https://our.internmc.facebook.com/intern/diff/D22704832/) [ghstack-poisoned]
Previously, if an op took an optional `Tensor?` argument, the C++ frontend (i.e. `at::op()` and `Tensor::op()`) were generated to take `Tensor`. A previous PR (#41610) changed the kernels to be written with `c10::optional<Tensor>` instead of `Tensor`, but that did not touch the C++ frontend yet. This PR changes the C++ frontend API to take `c10::optional<Tensor>` instead of `Tensor` as well. This should be mostly bc conserving. Since `Tensor` implicitly converts to `c10::optional<Tensor>`, any old code calling an op with a `Tensor` would still work. There are likely corner cases that get broken though. For example, C++ only ever does *one* implicit conversion. So if you call an op with a non-tensor object that gets implicitly converted to a `Tensor`, then that previously worked since the API took a `Tensor` and C++ allows one implicit conversion. Now it wouldn't work anymore because it would require two implicit conversions (to `Tensor` and then to `c10::optional<Tensor>`) and C++ doesn't do that. The main reasons for doing this are - Make the C++ API more sane. Those arguments are optional and that should be visible from the signature. - Allow easier integration for XLA and Autocast. Those backends generate code to wrap operators and forward operator arguments to calls to at::op(). After #41610, there was a mismatch because they had to implement operators with `optional<Tensor>` but call `at::op()` with `Tensor`, so they had to manually convert between those. After this PR, they can just forward the `optional<Tensor>` in their call to `at::op()`. Differential Revision: [D22704832](https://our.internmc.facebook.com/intern/diff/D22704832/) [ghstack-poisoned]
Previously, operators that have a `Tensor?` (i.e. optional tensor) in their schema implemented it using `Tensor` in C++ and filled in an undefined tensor for the None case. The c10 operator library, however, expects `Tensor?` to be represented as `optional<Tensor>`, so those operators couldn't be c10-full yet and still had to use codegenerated unboxing instead of templated unboxing. This PR changes that. It extends the `hacky_wrapper_for_legacy_signatures` to not only take care of TensorOptions, but now also map between signatures taking `Tensor` and `optional<Tensor>`. For this, it requires an additional template parameter, the expected signature, and it uses that to go argument-by-argument and unwrap any optionals it finds. Differential Revision: [D22607879](https://our.internmc.facebook.com/intern/diff/D22607879/) [ghstack-poisoned]
Previously, if an op took an optional `Tensor?` argument, the C++ frontend (i.e. `at::op()` and `Tensor::op()`) were generated to take `Tensor`. A previous PR (#41610) changed the kernels to be written with `c10::optional<Tensor>` instead of `Tensor`, but that did not touch the C++ frontend yet. This PR changes the C++ frontend API to take `c10::optional<Tensor>` instead of `Tensor` as well. This should be mostly bc conserving. Since `Tensor` implicitly converts to `c10::optional<Tensor>`, any old code calling an op with a `Tensor` would still work. There are likely corner cases that get broken though. For example, C++ only ever does *one* implicit conversion. So if you call an op with a non-tensor object that gets implicitly converted to a `Tensor`, then that previously worked since the API took a `Tensor` and C++ allows one implicit conversion. Now it wouldn't work anymore because it would require two implicit conversions (to `Tensor` and then to `c10::optional<Tensor>`) and C++ doesn't do that. The main reasons for doing this are - Make the C++ API more sane. Those arguments are optional and that should be visible from the signature. - Allow easier integration for XLA and Autocast. Those backends generate code to wrap operators and forward operator arguments to calls to at::op(). After #41610, there was a mismatch because they had to implement operators with `optional<Tensor>` but call `at::op()` with `Tensor`, so they had to manually convert between those. After this PR, they can just forward the `optional<Tensor>` in their call to `at::op()`. Differential Revision: [D22704832](https://our.internmc.facebook.com/intern/diff/D22704832/) [ghstack-poisoned]
Previously, operators that have a `Tensor?` (i.e. optional tensor) in their schema implemented it using `Tensor` in C++ and filled in an undefined tensor for the None case. The c10 operator library, however, expects `Tensor?` to be represented as `optional<Tensor>`, so those operators couldn't be c10-full yet and still had to use codegenerated unboxing instead of templated unboxing. This PR changes that. It extends the `hacky_wrapper_for_legacy_signatures` to not only take care of TensorOptions, but now also map between signatures taking `Tensor` and `optional<Tensor>`. For this, it requires an additional template parameter, the expected signature, and it uses that to go argument-by-argument and unwrap any optionals it finds. Differential Revision: [D22607879](https://our.internmc.facebook.com/intern/diff/D22607879/) [ghstack-poisoned]
Previously, if an op took an optional `Tensor?` argument, the C++ frontend (i.e. `at::op()` and `Tensor::op()`) were generated to take `Tensor`. A previous PR (#41610) changed the kernels to be written with `c10::optional<Tensor>` instead of `Tensor`, but that did not touch the C++ frontend yet. This PR changes the C++ frontend API to take `c10::optional<Tensor>` instead of `Tensor` as well. This should be mostly bc conserving. Since `Tensor` implicitly converts to `c10::optional<Tensor>`, any old code calling an op with a `Tensor` would still work. There are likely corner cases that get broken though. For example, C++ only ever does *one* implicit conversion. So if you call an op with a non-tensor object that gets implicitly converted to a `Tensor`, then that previously worked since the API took a `Tensor` and C++ allows one implicit conversion. Now it wouldn't work anymore because it would require two implicit conversions (to `Tensor` and then to `c10::optional<Tensor>`) and C++ doesn't do that. The main reasons for doing this are - Make the C++ API more sane. Those arguments are optional and that should be visible from the signature. - Allow easier integration for XLA and Autocast. Those backends generate code to wrap operators and forward operator arguments to calls to at::op(). After #41610, there was a mismatch because they had to implement operators with `optional<Tensor>` but call `at::op()` with `Tensor`, so they had to manually convert between those. After this PR, they can just forward the `optional<Tensor>` in their call to `at::op()`. Differential Revision: [D22704832](https://our.internmc.facebook.com/intern/diff/D22704832/) [ghstack-poisoned]
Summary:
Pull Request resolved: #41947

Previously, if an op took an optional `Tensor?` argument, the C++ frontend (i.e. `at::op()` and `Tensor::op()`) was generated to take `Tensor`. A previous PR (#41610) changed the kernels to be written with `c10::optional<Tensor>` instead of `Tensor`, but that did not touch the C++ frontend yet. This PR changes the C++ frontend API to take `c10::optional<Tensor>` instead of `Tensor` as well.

This should be mostly bc-conserving. Since `Tensor` implicitly converts to `c10::optional<Tensor>`, any old code calling an op with a `Tensor` would still work. There are likely corner cases that get broken, though. For example, C++ only ever does *one* implicit conversion. So if you call an op with a non-tensor object that gets implicitly converted to a `Tensor`, that previously worked since the API took a `Tensor` and C++ allows one implicit conversion. Now it wouldn't work anymore, because it would require two implicit conversions (to `Tensor` and then to `c10::optional<Tensor>`) and C++ doesn't do that.

The main reasons for doing this are:
- Make the C++ API more sane. Those arguments are optional, and that should be visible from the signature.
- Allow easier integration for XLA and Autocast. Those backends generate code to wrap operators and forward operator arguments to calls to `at::op()`. After #41610, there was a mismatch because they had to implement operators with `optional<Tensor>` but call `at::op()` with `Tensor`, so they had to manually convert between those. After this PR, they can just forward the `optional<Tensor>` in their call to `at::op()`.

ghstack-source-id: 108873705

Test Plan: unit tests

Reviewed By: bhosmer

Differential Revision: D22704832

fbshipit-source-id: f4c00d457b178fbc124be9e884a538a3653aae1f
This pull request has been merged in 3a19af2.
Stack from ghstack:
Previously, operators that have a `Tensor?` (i.e. optional tensor) in their schema implemented it using `Tensor` in C++ and filled in an undefined tensor for the None case.

The c10 operator library, however, expects `Tensor?` to be represented as `optional<Tensor>`, so those operators couldn't be c10-full yet and still had to use codegenerated unboxing instead of templated unboxing.

This PR changes that. It extends the `hacky_wrapper_for_legacy_signatures` to not only take care of TensorOptions, but now also map between signatures taking `Tensor` and `optional<Tensor>`.

For this, it requires an additional template parameter, the expected signature, and it uses that to go argument-by-argument and unwrap any optionals it finds.
Differential Revision: D22607879
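The argument-by-argument unwrapping driven by an expected-signature template parameter can be sketched as follows. This is a simplified illustration with a mock `Tensor` and a hypothetical `HackyWrapper`, not the actual `hacky_wrapper_for_legacy_signatures` implementation:

```cpp
#include <optional>
#include <utility>

// Mock stand-in for at::Tensor; default-constructed means "undefined".
struct Tensor {
  bool defined_ = false;
  bool defined() const { return defined_; }
};

// Lower one argument from the c10 convention to the legacy convention:
// optional<Tensor> becomes a plain Tensor (nullopt -> undefined tensor),
// every other argument type passes through unchanged.
inline Tensor unwrap_arg(std::optional<Tensor> t) {
  return t.has_value() ? *t : Tensor{};
}
template <class T>
T unwrap_arg(T arg) {
  return arg;
}

// Specialized on the expected (c10-style) signature Ret(Params...), this
// wrapper exposes that signature and forwards to a legacy kernel,
// unwrapping each argument in turn.
template <class ExpectedSignature>
struct HackyWrapper;

template <class Ret, class... Params>
struct HackyWrapper<Ret(Params...)> {
  template <class LegacyKernel>
  static Ret call(LegacyKernel&& legacy, Params... args) {
    return std::forward<LegacyKernel>(legacy)(unwrap_arg(std::move(args))...);
  }
};

// A legacy kernel written against plain Tensors: counts defined inputs.
int legacy_count_defined(Tensor a, Tensor b) {
  return (a.defined() ? 1 : 0) + (b.defined() ? 1 : 0);
}
```

With the expected signature `int(Tensor, std::optional<Tensor>)` as the template parameter, a caller can pass `std::nullopt` for the second argument and the wrapper hands the legacy kernel an undefined `Tensor` in its place.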