Add torch.library.custom_op #122344
Conversation
This is the entrypoint for defining a blackbox custom op (i.e., one PyTorch will never peek into). In this PR, you can specify backend impls and the abstract impl for this op.

NB: most of this PR is docstrings; please don't be intimidated by the line count.

There are a number of interesting features:
- We infer the schema from type hints. A followup adds the ability to manually specify a schema.
- We infer the namespace for the operator from the module. A followup adds the ability to manually specify a namespace.
- def_blackbox registrations can override each other. This makes them more pleasant to work with in environments like Colab.
- We require that the outputs of the blackbox op do not alias any inputs or each other. We enforce this via a runtime check.

Test Plan:
- new tests
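The schema-inference feature mentioned above can be illustrated with a small, torch-free sketch. This is a hypothetical helper, not the PR's actual implementation: it reads a function's annotations and renders a dispatcher-style schema string.

```python
import inspect

# Hypothetical illustration of "infer the schema from type hints";
# not the PR's actual code. Maps Python annotations to schema types.
_TYPE_TO_SCHEMA = {"Tensor": "Tensor", "int": "int", "float": "float"}

def infer_schema(fn, op_name):
    sig = inspect.signature(fn)
    params = []
    for name, p in sig.parameters.items():
        ann = getattr(p.annotation, "__name__", str(p.annotation))
        params.append(f"{_TYPE_TO_SCHEMA.get(ann, ann)} {name}")
    ret = sig.return_annotation
    ret_name = getattr(ret, "__name__", str(ret))
    return f"{op_name}({', '.join(params)}) -> {_TYPE_TO_SCHEMA.get(ret_name, ret_name)}"

class Tensor:  # stand-in for torch.Tensor in this sketch
    pass

def numpy_sin(x: Tensor) -> Tensor:
    ...

print(infer_schema(numpy_sin, "mylib::numpy_sin"))
# prints: mylib::numpy_sin(Tensor x) -> Tensor
```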
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/122344
Note: links to docs will display an error until the docs builds have completed.
✅ No failures as of commit ab8669d with merge base 31aff29.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Design doc: https://docs.google.com/document/d/13CS7nOEtTEsJ_0tiVGPhFDeYRbVb7LXIhw8atS-0eow/edit
Prototype PR: #120345
This should be ready for another review. TL;DR of changes:
There are a number of TODOs in the code that I'd prefer to do as followups (this PR is getting long), but please lmk if you'd prefer any to be handled now.
torch/_library/custom_ops.py
Outdated
) -> Callable:
    """Register an implementation for a device type for this operator.

    Some valid types are: "cpu", "cuda", "xla", "mps", "ipu", "xpu".
Do we have a list of valid device strings anywhere?
Nope :P
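To make the device-type registration discussed above concrete, here is a torch-free sketch of the `register_impl` decorator pattern. The class and behavior are illustrative assumptions, not the PR's actual code:

```python
# Hypothetical sketch of per-device-type impl registration, mirroring
# the register_impl API quoted above; not the PR's actual code.
class OpDef:
    def __init__(self, name):
        self._name = name
        self._backend_fns = {}

    def register_impl(self, device_type):
        def decorator(fn):
            # Later registrations for the same device type overwrite
            # earlier ones, which is convenient in notebook environments.
            self._backend_fns[device_type] = fn
            return fn
        return decorator

numpy_sin = OpDef("mylib::numpy_sin")

@numpy_sin.register_impl("cpu")
def _(x):
    return f"cpu impl({x})"

@numpy_sin.register_impl("cuda")
def _(x):
    return f"cuda impl({x})"
```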
    """
    self._abstract_fn = fn
    return fn
Should this trigger a rebuild?
There are no rebuilds in the custom ops design?
Oh, I thought that's what the self._build() call in __init__ is?
That only gets called once, when the user calls @custom_op. I'll rename it to something less confusing.
OPDEF_TO_LIB: Dict[str, "library.Library"] = {}


def get_library_allowing_overwrite(namespace: str, name: str) -> "library.Library":
This will delete C++ or Python manual registrations on rebuild?
This will delete the old library, which will delete all registrations associated with that library.
Not sure how this will play with C++ implemented kernels. But sounds ok.
We don't need to worry about that for now.
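The overwrite behavior being discussed can be sketched without torch. This is an illustrative stand-in for the real `library.Library`, showing how re-registering an op destroys the old library (and every registration it owned) before creating a fresh one:

```python
# Torch-free sketch of get_library_allowing_overwrite; the Library
# class here is a stand-in, not torch.library.Library.
class Library:
    def __init__(self, namespace):
        self.namespace = namespace
        self.destroyed = False

    def _destroy(self):
        # In the real implementation, this removes all registrations
        # that were made through this library.
        self.destroyed = True

OPDEF_TO_LIB = {}

def get_library_allowing_overwrite(namespace, name):
    qualname = f"{namespace}::{name}"
    if qualname in OPDEF_TO_LIB:
        # Destroy the previous library so the new definition can
        # cleanly take its place (handy in Colab-style workflows).
        OPDEF_TO_LIB[qualname]._destroy()
        del OPDEF_TO_LIB[qualname]
    lib = Library(namespace)
    OPDEF_TO_LIB[qualname] = lib
    return lib
```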
@@ -178,6 +185,8 @@ def _destroy(self):
    for handle in self._registration_handles:
        handle.destroy()
    self._registration_handles.clear()
    global _impls
    _impls -= self._op_impls
That is a current bug?
Yes
>>>
>>> # Add implementations for the cuda device
>>> @numpy_sin.register_impl("cuda")
>>> def _(x):
tbh I think this will lead to more questions than anything else. I think we should use a name that makes sense here, or numpy_sin, like is done with Python's property.
or numpy_sin like is done with python property.
What do you mean?
The default example for property in Python is to name all of these the same: https://docs.python.org/3/library/functions.html#property.deleter
I need to go think about this a bit more
We've got four options:
@custom_op("mylib::sin", mutated_args=())
def sin(x):
return x.clone()
# Option 1: recommend underscore name (The python singledispatch way)
# https://docs.python.org/3/library/functools.html#functools.singledispatch
@sin.register_fake
def _(x):
return torch.empty_like(x)
# Option 2: recommend a named name
@sin.register_fake
def sin_fake(x):
return torch.empty_like(x)
# Option 3: recommend the original name (The python property way)
@sin.register_fake
def sin(x):
return torch.empty_like(x)
# Option 4: recommend a random name with convention
@sin.register_fake
def impl(x):
return torch.empty_like(x)
I don't like Option 2 and Option 3 because those have bitten me in the past:
- Option 2 leads to unnecessarily verbose names and copy-paste confusion
- Option 3 means there is no way to call any of the impls directly.
I don't really have a bone in this fight but Python singledispatch following the _ convention seems pretty compelling to me
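For reference, the functools.singledispatch precedent mentioned here registers every overload under the throwaway name `_`, since only the dispatcher is ever called directly:

```python
from functools import singledispatch

@singledispatch
def describe(arg):
    # Fallback implementation for unregistered types.
    return "object"

@describe.register
def _(arg: int):
    return "int"

@describe.register
def _(arg: list):
    return "list"
```

Dispatch happens through `describe(...)`; the `_` functions are never referenced again, which is the same ergonomics Option 1 proposes for `register_fake`.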
This is the entrypoint for defining an opaque/blackbox (i.e., PyTorch will never peek into it) custom op. In this PR, you can specify backend impls and the abstract impl for this op.

NB: most of this PR is docstrings; please don't be intimidated by the line count.

There are a number of interesting features:
- We infer the schema from type hints. A followup adds the ability to manually specify a schema.
- Name inference: the user needs to manually specify an op name for now. A followup adds the ability to automatically infer a name (this is a little tricky).
- custom_op registrations can override each other. This makes them more pleasant to work with in environments like Colab.
- We require that the outputs of the custom_op do not alias any inputs or each other. We enforce this via a runtime check, but can relax this into an opcheck test if it really matters in the future.

Test Plan:
- new tests
High-level updates:
def custom_op(
    name: str,
    /,
aw yisss
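The `/` being cheered here is Python's positional-only marker: `name` cannot be passed as a keyword, while everything after `*` must be. A minimal illustration with the signature abridged (the real function does much more than return its arguments):

```python
def custom_op(name, /, *, mutated_args):
    # name is positional-only; mutated_args is keyword-only.
    return (name, mutated_args)

# Valid call:
custom_op("mylib::sin", mutated_args=())

# Invalid: custom_op(name="mylib::sin", mutated_args=()) raises TypeError,
# because name cannot be supplied by keyword.
```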
    return lib


def iter_tensors(
Intended to be private?
Sounds good! Just a few nits/questions that can be done in follow ups
@@ -16,6 +18,7 @@
    'fallthrough_kernel',
    'impl_abstract',
    'get_ctx',
    'custom_op',
I'm surprised this is not tripping the public API checks. The __module__ being private shouldn't be ok...
It should definitely fail:
pytorch/test/test_public_bindings.py, line 477 in 72662bf: '._' not in elem_module
Also because of this, we miss the fact that this is not properly documented on the website.
Lol we need to update the public bindings test to catch these cases
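The check quoted from test_public_bindings.py can be sketched in a few lines. The helper name and the module path below are illustrative; the heuristic itself (`'._' not in elem_module`) is the one quoted above:

```python
# Sketch of the public-bindings heuristic: an API whose __module__
# contains '._' lives in a private module and should be re-exported.
def is_public_module(elem_module):
    return elem_module is not None and "._" not in elem_module

def custom_op():
    ...

# Hypothetical private home, standing in for where the function
# is actually defined; '._library' makes it fail the check.
custom_op.__module__ = "torch._library.custom_ops"
```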
def check(arg):
    if isinstance(arg, Tensor):
        yield arg
    elif allowed_nesting > 0 and isinstance(arg, (tuple, list)):
nit: typing says Tuple and this says tuple/list?
Should this just be Sequence?
The type hint says that args is a Tuple[Any]. arg is Any (and it may be a tuple or a list).
def iter_tensors(
    args: Tuple[Any], kwargs: Dict[str, Any], allowed_nesting: int = 1
Should we remove that allowed_nesting arg since it is never used?
That's a bug, thanks for the catch
Actually, I see what's going on. This function is recursive. Inside iter_tensors, I call iter_tensors with a different allowed_nesting.
    self._abstract_fn = fn
    return fn

def _register_to_dispatcher(self) -> None:
nit: "_define_in_dispatcher" ?
Well, we technically define + impl in this function
torch/_library/custom_ops.py
Outdated
# Fields used to interface with the PyTorch dispatcher
self._namespace = namespace
self._name = name
self._qualname = f"{self._namespace}::{self._name}"
nit: making this a property computed dynamically removes the two sources of truth and avoids potential divergence
yessir
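The suggested change amounts to computing the qualified name on the fly. A minimal sketch (class name abridged from the real one):

```python
# _qualname derived from namespace and name on every access:
# a single source of truth, so the fields can never diverge.
class CustomOpDef:
    def __init__(self, namespace: str, name: str):
        self._namespace = namespace
        self._name = name

    @property
    def _qualname(self) -> str:
        return f"{self._namespace}::{self._name}"
```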
Pull Request resolved: #123107 Approved by: https://github.com/soulitzer ghstack dependencies: #122344
Pull Request resolved: pytorch#122344 Approved by: https://github.com/ezyang, https://github.com/albanD