
Modify DispatchKeyExtractor to also work for optional Tensors #58283


Closed
wants to merge 2 commits

Conversation

Chillee
Collaborator

@Chillee Chillee commented May 14, 2021

No description provided.

@facebook-github-bot
Contributor

facebook-github-bot commented May 14, 2021

💊 CI failures summary and remediations

As of commit 91bf4eb (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@facebook-github-bot
Contributor

@Chillee has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@bhosmer bhosmer self-requested a review May 14, 2021 07:03
@codecov

codecov bot commented May 14, 2021

Codecov Report

Merging #58283 (91bf4eb) into master (c911c30) will decrease coverage by 0.04%.
The diff coverage is 71.78%.

@@            Coverage Diff             @@
##           master   #58283      +/-   ##
==========================================
- Coverage   76.84%   76.80%   -0.04%     
==========================================
  Files        1986     1986              
  Lines      197902   198388     +486     
==========================================
+ Hits       152079   152378     +299     
- Misses      45823    46010     +187     

@facebook-github-bot
Contributor

@Chillee merged this pull request in ccd7141.

bhosmer pushed a commit that referenced this pull request May 14, 2021

Correctly extract dispatch keys from optional Tensors in unboxed calls. Companion to #58283, which does the same for boxed calls.

E.g., this fixes the following repro case (from @zou3519 and @Chillee):

```python
import torch

# Make a quantized tensor.
r = torch.rand(3, 2, dtype=torch.float) * 4 - 2
scale = 0.02
zero_point = 2
quantized = torch.quantize_per_tensor(r, scale, zero_point, torch.quint8)

# The following invokes the quantized key's version of clamp.
# The quantized key doesn't actually have a kernel registered for clamp,
# so it throws an error to us. (Wrapped in try/except here so the repro
# can continue to the second call.)
try:
    torch.clamp(quantized, quantized, quantized)
except RuntimeError as e:
    print(e)

# However, the following calls straight into native::clamp (i.e. the CPU
# kernel) instead of dispatching on the quantized key and giving us an error:
cpu_tensor = torch.randn(3, 2)
torch.clamp(cpu_tensor, quantized, quantized)
```
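Conceptually, the bug is that dispatch-key extraction skipped `Optional[Tensor]` arguments, so a key carried only by an optional argument never influenced dispatch. A minimal pure-Python sketch of the idea — all names here are illustrative, the real logic lives in the C++ `DispatchKeyExtractor`:

```python
# Illustrative sketch only: FakeTensor and extract_dispatch_keys are
# hypothetical names for this example, not PyTorch APIs.

class FakeTensor:
    def __init__(self, dispatch_keys):
        self.dispatch_keys = set(dispatch_keys)

def extract_dispatch_keys(args):
    """Union the dispatch keys of every Tensor argument, including
    Optional[Tensor] arguments that are present (i.e. not None)."""
    keys = set()
    for arg in args:
        if arg is None:  # absent Optional[Tensor]: nothing to extract
            continue
        if isinstance(arg, FakeTensor):
            keys |= arg.dispatch_keys
    return keys

cpu = FakeTensor({"CPU"})
quantized = FakeTensor({"CPU", "QuantizedCPU"})

# With optional arguments considered, the QuantizedCPU key participates
# in dispatch for clamp(cpu, quantized, quantized):
print(sorted(extract_dispatch_keys([cpu, quantized, quantized])))
# → ['CPU', 'QuantizedCPU']
```

The real dispatcher then picks the highest-priority key out of this set, which is why the quantized kernel (or its missing-kernel error) is reached after the fix.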


Differential Revision: [D28436822](https://our.internmc.facebook.com/intern/diff/D28436822)

[ghstack-poisoned]
krshrimali pushed a commit to krshrimali/pytorch that referenced this pull request May 19, 2021
…h#58283)

Summary: Pull Request resolved: pytorch#58283

Reviewed By: bhosmer

Differential Revision: D28436443

Pulled By: Chillee

fbshipit-source-id: ba6aae74e8ec3c5a6157cc4517c29b36bdd4a30d
zou3519 added a commit that referenced this pull request Jun 25, 2021
Fixes #60461.

Previously, when one called `self.index(indices)` with a regular `self`
Tensor and `BatchedTensor` indices, the dispatcher would not dispatch
to the Batched key. This is because the dispatcher did not extract
dispatch keys from `indices`.

Similar to #58283 and #58296, this PR modifies the dispatcher to extract
dispatch keys from List[Optional[Tensor]] arguments. We do this for both
boxed and unboxed kernels.

Test Plan:
- run the test case in
https://gist.github.com/zou3519/4421df7c5271376a0ef53ca857b18740
(requires functorch). After this PR, it raises `RuntimeError: Batching
rule not implemented for aten::index.Tensor. We could not generate a
fallback.`, which shows that dispatch happened on the Batched key.
- Taking suggestions for how to write a test for this in core

[ghstack-poisoned]
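The `index` failure mode described above can be sketched in plain Python — `FakeTensor` and `keys_for_index` are hypothetical names for illustration; the actual extraction happens in the C++ dispatcher:

```python
# Illustrative sketch only: FakeTensor and keys_for_index are hypothetical
# names, not PyTorch APIs.

class FakeTensor:
    def __init__(self, dispatch_keys):
        self.dispatch_keys = set(dispatch_keys)

def keys_for_index(self_tensor, indices):
    """Collect dispatch keys for aten::index(Tensor self,
    List[Optional[Tensor]] indices), skipping None entries
    (a None index means "take the whole dimension")."""
    keys = set(self_tensor.dispatch_keys)
    for idx in indices:
        if idx is not None:
            keys |= idx.dispatch_keys
    return keys

regular = FakeTensor({"CPU"})
batched = FakeTensor({"CPU", "Batched"})

# Previously the Batched key carried by `indices` was missed, so dispatch
# fell through to the CPU kernel; extracting it lets Batched win:
print("Batched" in keys_for_index(regular, [batched, None]))  # → True
```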
zou3519 added a commit that referenced this pull request Jun 25, 2021
ghstack-source-id: 1dc29ae
Pull Request resolved: #60787
zou3519 added a commit that referenced this pull request Jul 7, 2021
ghstack-source-id: 2c2a432
Pull Request resolved: #60787
zou3519 added a commit that referenced this pull request Jul 7, 2021
…nal[Tensor]] for dispatch"


Differential Revision: [D29438611](https://our.internmc.facebook.com/intern/diff/D29438611)

[ghstack-poisoned]
zou3519 added a commit that referenced this pull request Jul 7, 2021
… dispatch"

Fixes #60461.

Previously, when one calls `self.index(indices)` using a regular `self`
Tensor and a `BatchedTensor` indices the dispatcher would not dispatch
to the Batched key. This is because the dispatcher did not extract
dispatch keys from `indices`.

Similar #58283 and #58296, this PR modifies the dispatcher to extract
dispatch keys from List[Optional[Tensor]] arguments. We do this for both
boxed and unboxed kernels.

Test Plan:
- run the test case in
https://gist.github.com/zou3519/4421df7c5271376a0ef53ca857b18740
(requires functorch). After this PR, it raises `RuntimeError: Batching
rule not implemented for aten::index.Tensor. We could not generate a
fallback.`, which shows that dispatch happened on the Batched key.
- Taking suggestions for how to write a test for this in core

Differential Revision: [D29438611](https://our.internmc.facebook.com/intern/diff/D29438611)

[ghstack-poisoned]
facebook-github-bot pushed a commit that referenced this pull request Jul 8, 2021
…60787)

Summary:
Pull Request resolved: #60787


Reviewed By: jbschlosser

Differential Revision: D29438611

Pulled By: zou3519

fbshipit-source-id: 77e182f763e18aa3fa857eebafa8b7f83384db71
@github-actions github-actions bot deleted the fix_dispatch branch February 11, 2024 01:58