
Conversation

@zou3519 (Contributor) commented Apr 15, 2024

pytorch-bot bot commented Apr 15, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/124066

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 384727b with merge base c4878ab:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@zou3519 changed the title from [WIP] torch.library.register_fake now accepts more to torch.library.register_fake now accepts more on Apr 15, 2024
@zou3519 added the suppress-bc-linter label (Suppresses the failures of the API backward-compatibility linter, Lint/bc_linter) on Apr 15, 2024
We allow it to accept (a usage sketch follows below):
- a string with the op name
- an OpOverload
- a new-style custom op

If any of these refers to a new-style custom op (created with the
custom_op decorator), then we dispatch to CustomOpDef.register_fake.
Otherwise, we do what we did previously.

Test Plan:
- new tests

[ghstack-poisoned]
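For illustration, here is a hedged sketch of the three accepted forms. The op name mylib::linear, its signature, and the fake kernel are hypothetical; in practice you would register exactly one fake kernel per op, via whichever form is convenient.

```python
import torch

# Hypothetical new-style custom op, created with the custom_op decorator.
@torch.library.custom_op("mylib::linear", mutates_args=())
def linear(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    return x @ w.t()

def fake_linear(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # A fake kernel only computes output metadata (shape/dtype/device),
    # never touching real data.
    return x.new_empty(x.shape[0], w.shape[0])

# 1) a string with the op name
torch.library.register_fake("mylib::linear", fake_linear)

# 2) an OpOverload
torch.library.register_fake(torch.ops.mylib.linear.default, fake_linear)

# 3) the new-style custom op object itself; per this PR, this dispatches
#    to CustomOpDef.register_fake under the hood.
torch.library.register_fake(linear, fake_linear)
```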
@zou3519 changed the title from torch.library.register_fake now accepts more to torch.library.register_fake now accepts more types on Apr 15, 2024
zou3519 added 3 commits April 16, 2024 09:55
yield from check(kwarg)


def _maybe_get_opdef(qualname):

Collaborator:

qualname sounds like it should be a str object, which is not true given the below...

Collaborator:

Also it's used only in one place, should we just move it to that other file?

Contributor Author (@zou3519):

> qualname sounds like it should be a str object, which is not true given the below...

Ya

> Also it's used only in one place, should we just move it to that other file?

I'm trying to not pollute library.py too much
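For context, here is a rough sketch of what a helper like this could look like, based on the discussion above. The CustomOpDef import path and the OPDEFS registry name are assumptions about PyTorch internals, not taken from the diff:

```python
from typing import Optional, Union

import torch
from torch._library.custom_ops import CustomOpDef  # assumed internal path

# Assumed module-level registry populated by @torch.library.custom_op,
# mapping qualified names ("namespace::opname") to CustomOpDef objects.
OPDEFS: dict = {}

def _maybe_get_opdef(
    op: Union[str, "torch._ops.OpOverload", CustomOpDef],
) -> Optional[CustomOpDef]:
    # Already a new-style custom op: return it directly.
    if isinstance(op, CustomOpDef):
        return op
    # Normalize an OpOverload to its qualified name string.
    if isinstance(op, torch._ops.OpOverload):
        op = op._name
    assert isinstance(op, str)
    # None means "not a new-style custom op"; callers then fall back
    # to the old torch.library behavior.
    return OPDEFS.get(op)
```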

@albanD (Collaborator) left a comment:

Thanks for the update!

@zou3519 added the ciflow/trunk (Trigger trunk jobs on your pull request) and release notes: composability labels on Apr 17, 2024
pytorchmergebot pushed a commit that referenced this pull request Apr 18, 2024
Allows registering autograd for all custom op entry points (a sketch follows below):
- the new-style custom op API (custom_op)
- the old-style torch.library APIs
- C++ operator registration

Test Plan:
- tests
Pull Request resolved: #124071
Approved by: https://github.com/albanD
ghstack dependencies: #123937, #124064, #124065, #124066
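As a hedged sketch of the first entry point (the op name and kernels are hypothetical; the backward/setup_context split follows the public torch.library.register_autograd API):

```python
import torch

# Hypothetical new-style custom op.
@torch.library.custom_op("mylib::mysin", mutates_args=())
def mysin(x: torch.Tensor) -> torch.Tensor:
    return torch.sin(x)

def setup_context(ctx, inputs, output):
    # Stash whatever the backward pass will need.
    (x,) = inputs
    ctx.save_for_backward(x)

def backward(ctx, grad_output):
    (x,) = ctx.saved_tensors
    return grad_output * x.cos()

# The same registration path is described as working for ops created via
# custom_op, the old torch.library APIs, or C++ registration, addressed
# by their qualified name.
torch.library.register_autograd(
    "mylib::mysin", backward, setup_context=setup_context
)
```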
pytorchmergebot pushed a commit that referenced this pull request Apr 18, 2024
A kernel has "dispatcher convention" if there is an additional keyset
arg at the beginning of the argument list. This PR:
- adds a way to register kernels with dispatcher_convention using
  Library.impl (pass dispatcher_convention = True)
- adds OpOverload.redispatch

We use both of the above in the new custom ops API: we register the
autograd kernel in dispatcher convention so that we can actually call
redispatch like how pytorch built-in ops do it.

Test Plan:
- existing tests
Pull Request resolved: #124089
Approved by: https://github.com/albanD
ghstack dependencies: #123937, #124064, #124065, #124066, #124071
pytorchmergebot pushed a commit that referenced this pull request Apr 18, 2024
I forgot to delete this in an earlier PR.
Pull Request resolved: #124092
Approved by: https://github.com/albanD
ghstack dependencies: #123937, #124064, #124065, #124066, #124071, #124089
sanketpurandare pushed a commit to sanketpurandare/pytorch that referenced this pull request Apr 22, 2024
Pull Request resolved: pytorch#124066
Approved by: https://github.com/albanD
ghstack dependencies: pytorch#123937, pytorch#124064, pytorch#124065
@github-actions bot deleted the gh/zou3519/960/head branch on June 1, 2024 02:00