[POC] Dispatcher registration from Python #62660
Conversation
Modeled off of `__torch_dispatch__`, this PR demonstrates how to register Python functions directly to the dispatcher. Unlike `__torch_dispatch__`, this makes it possible to directly override preexisting implementations of PyTorch operators with alternate implementations, functionality that may be of interest to IPEX or Triton. This is not a complete PR because it doesn't provide a coherent user-facing API for the functionality:
- We need to design an appropriate Python-side object hierarchy and API surface for interacting with the dispatcher. Similarly, we need to battle-harden the preexisting Python dispatcher API, which was previously used solely for testing. Some things that need to be improved include file/line number reporting of registrations.
- For overriding use cases, we need to introduce new alias keys to allow for overriding without triggering a warning message that overriding is happening.
- When an override happens, we need a convenient way to call into the underlying implementation, in case we have decided we aren't interested in overriding the behavior after all.
- This PR is not compatible with torchdeploy; to handle torchdeploy, we must introduce a level of indirection whereby there is a single registered kernel for all Python interpreters, which then determines whether to fall through to the base implementation based on whether the registration happened from a matching interpreter.

Signed-off-by: Edward Z. Yang <ezyang@fb.com> [ghstack-poisoned]
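The two key behaviors described above — a newer Python registration overriding a preexisting kernel, and the override being able to call back into the implementation it replaced — can be illustrated with a small pure-Python sketch. This is not the PR's actual API; the `Dispatcher` class, `register`, `call`, and the `prev` handle are all invented here for illustration.

```python
class Dispatcher:
    """Toy dispatch table: per-op kernel stack, newest registration wins."""

    def __init__(self):
        self._kernels = {}  # op name -> list of kernels, newest last

    def register(self, op, fn):
        # fn takes (prev, *args); prev calls the implementation it overrode
        self._kernels.setdefault(op, []).append(fn)

    def call(self, op, *args):
        return self._call_at(op, len(self._kernels[op]) - 1, *args)

    def _call_at(self, op, idx, *args):
        if idx < 0:
            raise RuntimeError(f"no kernel registered for {op}")
        # Hand the kernel a way to fall through to the underlying impl.
        prev = lambda *a: self._call_at(op, idx - 1, *a)
        return self._kernels[op][idx](prev, *args)

dispatcher = Dispatcher()
dispatcher.register("aten::add", lambda prev, x, y: x + y)  # "base" kernel

def my_add(prev, x, y):
    if isinstance(x, int):
        return prev(x, y)          # not interested: fall through to base
    return f"custom({x}, {y})"     # otherwise, override the behavior

dispatcher.register("aten::add", my_add)

print(dispatcher.call("aten::add", 1, 2))      # -> 3 (fell through)
print(dispatcher.call("aten::add", "a", "b"))  # -> custom(a, b)
```

The `prev` handle corresponds to the "convenient way to call into the underlying implementation" that the description lists as an open design question.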
💊 CI failures summary and remediations — as of commit dbd9fd5 (more details on the Dr. CI page):
🕵️ 15 new failures recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as stale.
WIP, but works for reverse-mode autograd right now. H/T pytorch/pytorch#62660 for half of the solution.
I'll probably write up a doc on this, but another use case for this is a mechanism for "custom subclass function" - given a Tensor subclass (or a functorch transform), register a custom rule for it from Python. I hacked together a POC over at pytorch/functorch@0f98ab9 that reuses a lot of the code from this PR.
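The "custom subclass function" idea above can be sketched in plain Python: a registry mapping an (op, subclass) pair to a rule, consulted before the stock implementation. The names here (`RULES`, `register_rule`, `dispatch`, `LoggingTensor`) are invented for illustration and are not the functorch POC's actual API.

```python
# (op name, subclass) -> custom rule registered from Python
RULES = {}

def register_rule(op, cls):
    """Decorator: register fn as the rule for op when cls appears in args."""
    def deco(fn):
        RULES[(op, cls)] = fn
        return fn
    return deco

def dispatch(op, default, *args):
    # If any argument's type has a registered rule, that rule wins;
    # otherwise fall back to the stock implementation.
    for a in args:
        rule = RULES.get((op, type(a)))
        if rule is not None:
            return rule(*args)
    return default(*args)

class LoggingTensor(list):  # stand-in for a Tensor subclass
    pass

@register_rule("add", LoggingTensor)
def add_logging(x, y):
    return LoggingTensor([a + b for a, b in zip(x, y)])

print(dispatch("add", lambda x, y: x + y, [1], [2]))                 # [1, 2]
print(dispatch("add", lambda x, y: x + y, LoggingTensor([1]), [2]))  # [3]
```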
I am also interested in this feature, mainly for backwards compatibility with a lot of Python "ops" that are not compatible with TorchScript. The interface of the functions is compatible, but the implementation itself is not.
@fps7806 Thanks for the interest. I had a few clarifying questions:
The main ask is for exportable TorchScript functions implemented in Python.
I got around to testing and found the exact snippet/test for what I want. I made a custom library that I can use for now, but it would be great if we can merge this PR at some point and support this natively.
Stack from ghstack:
Modeled off of `__torch_dispatch__`, this PR demonstrates how to register
Python functions directly to the dispatcher. Unlike `__torch_dispatch__`,
this makes it possible to directly override preexisting implementations
of PyTorch operators with alternate implementations, functionality that
may be of interest to IPEX or Triton.
This is not a complete PR because it doesn't provide a coherent
user-facing API for the functionality:
- We need to design an appropriate Python-side object hierarchy and
API surface for interacting with the dispatcher. Similarly, we need
to battle-harden the preexisting Python dispatcher API, which was
previously used solely for testing. Some things that need to be
improved include file/line number reporting of registrations.
- For overriding use cases, we need to introduce new alias keys to
allow for overriding without triggering a warning message that
overriding is happening.
- When an override happens, we need a convenient way to call into the
underlying implementation, in case we have decided we aren't
interested in overriding the behavior after all.
- This PR is not compatible with torchdeploy; to handle torchdeploy,
we must introduce a level of indirection whereby there is a single
registered kernel for all Python interpreters, which then determines
whether to fall through to the base implementation based on
whether the registration happened from a matching interpreter.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D30074924
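The torchdeploy indirection described in the last bullet can be sketched in plain Python: a single kernel is registered once, and at call time it runs a Python override only if the current interpreter is the one that registered it, falling through to the base implementation otherwise. `SharedKernel` and the string interpreter ids are invented here for illustration, not the PR's actual mechanism.

```python
class SharedKernel:
    """One globally registered kernel shared by all Python interpreters."""

    def __init__(self, base_impl):
        self.base_impl = base_impl
        self.registrations = {}  # interpreter id -> Python override

    def register(self, interp_id, fn):
        # Each interpreter records its own override under its id.
        self.registrations[interp_id] = fn

    def __call__(self, current_interp, *args):
        fn = self.registrations.get(current_interp)
        if fn is None:
            # Registration came from a different interpreter:
            # fall through to the base implementation.
            return self.base_impl(*args)
        return fn(*args)

kernel = SharedKernel(base_impl=lambda x: x * 2)
kernel.register("interp-A", lambda x: x * 100)

print(kernel("interp-A", 3))  # 300: this interpreter registered an override
print(kernel("interp-B", 3))  # 6: falls through to the base implementation
```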