
Conversation

@ezyang (Contributor) commented Jul 27, 2022

Stack from ghstack (oldest at bottom):

It turns out that for factory function prims (prims with no Tensor
arguments), we were always dispatching to the ATen implementation of
the operator.

Prior to the next PR in this stack, the change is a bit hard to test
directly, but you can observe its impact by running arange with
dispatch tracing enabled (you also need
#82277 patched in):

```
$ TORCH_SHOW_DISPATCH_TRACE=1 python -c "import torch._refs; torch._refs.arange(4, device='meta')"
[callBoxed] op=[prims::arange], key=[BackendSelect]
[call] op=[aten::empty_strided], key=[BackendSelect]
[redispatch] op=[aten::empty_strided], key=[Meta]
```

Previously, the `prims::arange` call was dispatching to Undefined.

For maximum fidelity, we are technically supposed to redispatch to a
specific dispatch key, but the Python bindings to do this don't exist,
and it was easy to route directly to the implementations we already
intended to reach. We would have to fix this if we wanted external
backends to register custom implementations to *other* dispatch keys
via Python op registration.
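
To make the mechanism concrete, here is a minimal sketch of registering a Python BackendSelect kernel for a no-Tensor-argument (factory) op via torch.library; the `mylib` namespace and `my_arange` op are hypothetical stand-ins for illustration, not the actual prims registration code:

```python
import torch
from torch.library import Library

# Hypothetical namespace for illustration; the real prims are registered
# under the "prims" namespace in torch/_prims.
lib = Library("mylib", "DEF")
lib.define("my_arange(int n, Device? device=None) -> Tensor")

def my_arange_backend_select(n, device=None):
    # A factory op has no Tensor arguments, so the backend must be chosen
    # here from non-Tensor arguments such as `device`. For full fidelity we
    # would compute a dispatch key and redispatch to it, but since those
    # Python bindings don't exist, route straight to the implementation we
    # intend to reach.
    return torch.arange(n, device=device)

lib.impl("my_arange", my_arange_backend_select, "BackendSelect")

# With the kernel above registered, a call like
#   torch.ops.mylib.my_arange(4, device="meta")
# lands on BackendSelect instead of erroring out on Undefined.
```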

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

@facebook-github-bot (Contributor) commented Jul 27, 2022

✅ No Failures (0 Pending)

As of commit 59d859c (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚

This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.


```
from torch._subclasses.fake_tensor import contains_tensor_types

if not any(contains_tensor_types(a.type) for a in _prim._schema.arguments):
```
Contributor commented on the diff above:

Should this map 1-to-1 with the logic that codegen uses to determine when to generate backend select kernels?

```
def needs_backend_select(f: NativeFunction, selector: SelectiveBuilder) -> bool:
```

@ezyang (Author) replied:

Hypothetically yes, but prims are simpler than native_functions.yaml so I guessed a simplified version would work.
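
For reference, here is a minimal sketch of that simplified check, built from the diff hunk above; `prim_needs_backend_select` is a hypothetical helper name, and unlike codegen's `needs_backend_select` it ignores the selector and the special cases in native_functions.yaml:

```python
from torch._subclasses.fake_tensor import contains_tensor_types

def prim_needs_backend_select(prim) -> bool:
    # A prim gets a BackendSelect kernel iff none of its schema arguments
    # contain a Tensor type, i.e. it is a factory function whose backend
    # must be picked from non-Tensor arguments such as `device`.
    return not any(
        contains_tensor_types(a.type) for a in prim._schema.arguments
    )
```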

@ezyang commented Jul 27, 2022

@pytorchbot merge

@pytorchmergebot (Collaborator) commented:
@pytorchbot successfully started a merge job. Check the current status here

@github-actions commented:

Hey @ezyang.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request on Jul 28, 2022:

Summary: same as the PR description above.

Pull Request resolved: #82311
Approved by: https://github.com/ngimel, https://github.com/bdhirsh

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/3b6b27e9d7ad3951b23364f31c19b1d3ebbeaf7c

Reviewed By: osalpekar

Differential Revision: D38234285

Pulled By: ezyang

fbshipit-source-id: e73b770529d3e221dd85f8c7b1607da975d1f211
facebook-github-bot deleted the gh/ezyang/1289/head branch on July 31, 2022 14:18