Closed
Labels
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Description
🐛 Describe the bug
Right now we have this logic:

```python
if (
    not disable_meta
    # TorchScript dumps a bunch of extra nonsense overloads
    # which don't have corresponding dispatcher entries; we need
    # to filter those out.
    and torch._C._dispatch_has_kernel(name)
    # Don't register a meta kernel to any operator that has
    # a CompositeImplicitAutograd kernel in core. Otherwise we
    # won't be able to run autograd for that operator with the
    # meta backend.
    and "CompositeImplicitAutograd" not in torch._C._dispatch_dump(name)
    and not torch._C._dispatch_has_kernel_for_dispatch_key(name, "Meta")
):
    meta_lib.impl(op_overload, fn)
```
But there are other alias dispatch keys that can register into Meta, so testing for only one alias key is not right. Also, it is not great to be groveling through the textual output of `_dispatch_dump` with a substring match.
What we probably need to do is either fix `_dispatch_has_kernel_for_dispatch_key`, or add a variant of it, so that it checks not whether a kernel is explicitly registered to the Meta key, but whether there is any non-error kernel in the computed table entry for Meta after all alias keys have been processed.
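The intent of that check can be sketched with a minimal pure-Python model of a per-operator dispatch table. This is only an illustration of the proposed semantics, not the real dispatcher: the alias-key list, the resolution order, and all function names here (`resolved_meta_kernel`, `should_register_meta`, `ALIASES_COVERING_META`) are assumptions made up for the example.

```python
# Illustrative model: a per-operator "table" maps dispatch-key names to
# kernels. The proposed check asks whether Meta resolves to ANY usable
# kernel once alias keys are taken into account, rather than testing a
# single alias key by string-matching a text dump.

# Alias keys whose registrations can serve the Meta backend (assumed
# list for illustration only).
ALIASES_COVERING_META = (
    "CompositeImplicitAutograd",
    "CompositeExplicitAutograd",
)

def resolved_meta_kernel(table):
    """Return the kernel Meta would dispatch to under this model, or None.

    A direct Meta registration wins; otherwise any alias key covering
    Meta provides the kernel.
    """
    if "Meta" in table:
        return table["Meta"]
    for alias in ALIASES_COVERING_META:
        if alias in table:
            return table[alias]
    return None

def should_register_meta(table):
    # Only register our own meta kernel if the computed table entry for
    # Meta would otherwise be empty (i.e. an error entry).
    return resolved_meta_kernel(table) is None

# An op with only a CompositeExplicitAutograd kernel already covers Meta,
# so a check that only looks for CompositeImplicitAutograd would wrongly
# conclude it needs a meta kernel; this model gets it right.
print(should_register_meta({"CPU": "cpu_kernel",
                            "CompositeExplicitAutograd": "composite"}))  # False
print(should_register_meta({"CPU": "cpu_kernel"}))                       # True
```

The point of the model is that the decision is a property of the computed table (any non-error entry for Meta), not of any one alias key's presence.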
cc @bdhirsh
Versions
master