torchgen/gen_backend_stubs.py compatibility with DispatchStubs #103370
Labels
module: codegen — Issues related to the codegen for ATen and Autograd
module: dispatch — DispatchStub, Type, void pointer table, c10 dispatch
module: structured kernels — Related to the new structured kernels functionality
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🚀 The feature, motivation and pitch
For our out-of-tree backend, I would like to support many structured kernels in the same way CUDA does, i.e. by registering a `DispatchStub` per operation, similar to how it is done here. For `PrivateUse1`, support for registering stubs was added in this PR; that PR, however, does not address how to implement structured kernels in the same manner. It would be nice if `gen_backend_stubs.py` could support implementation paths of this form instead.

I'm also not certain whether `torchgen` could support reusing these kernels with overridden headers, which is how we have successfully redirected CUDA kernel launches to our backend for various unstructured kernels.

Alternatives
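To make the request concrete, here is a minimal, self-contained sketch of the per-operation function-pointer-table pattern that `DispatchStub` implements. This is not PyTorch's actual `DispatchStub` (which lives in `aten/src/ATen/native/DispatchStub.h` and uses registration macros); all names below are illustrative, and the point is only the mechanism: each device type, including an out-of-tree `PrivateUse1` backend, registers its kernel into a slot, and call sites dispatch through the table at runtime.

```cpp
#include <cassert>

// Illustrative device slots, mirroring a subset of c10::DeviceType.
enum class DeviceType { CPU, CUDA, PrivateUse1, COUNT };

// A stub holds one function pointer per device type. Kernels register
// themselves into the slot for their device; callers dispatch through
// the table at runtime. (Sketch only, not PyTorch's DispatchStub.)
template <typename FnPtr>
struct DispatchStubSketch {
  FnPtr table[static_cast<int>(DeviceType::COUNT)] = {};

  void register_kernel(DeviceType dev, FnPtr fn) {
    table[static_cast<int>(dev)] = fn;
  }

  template <typename... Args>
  auto operator()(DeviceType dev, Args... args) {
    FnPtr fn = table[static_cast<int>(dev)];
    assert(fn && "no kernel registered for this device");
    return fn(args...);
  }
};

// A hypothetical "add" stub with a CPU kernel and an out-of-tree
// PrivateUse1 kernel (which, in a real backend, would launch a
// device kernel instead of computing on the host).
using add_fn = int (*)(int, int);
DispatchStubSketch<add_fn> add_stub;

int add_cpu(int a, int b) { return a + b; }
int add_privateuse1(int a, int b) { return a + b; }

// Registration at static-initialization time, analogous in spirit to
// PyTorch's REGISTER_DISPATCH-style macros.
struct RegisterKernels {
  RegisterKernels() {
    add_stub.register_kernel(DeviceType::CPU, add_cpu);
    add_stub.register_kernel(DeviceType::PrivateUse1, add_privateuse1);
  }
} register_kernels_once;

// Call site: dispatch by device type through the stub.
int run_add(DeviceType dev, int a, int b) {
  return add_stub(dev, a, b);
}
```

The ask in this issue is essentially that the structured-kernel codegen emit calls through such stubs for our backend, so that a single generated structured wrapper can dispatch to backend-registered kernels the way CUDA's do.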
No response
Additional context
No response
cc @ezyang @bhosmer @bdhirsh