mxtensor: switch to AOBaseTensor dispatch #3080
Conversation
Stack from ghstack (oldest at bottom):
Summary:

Deletes the custom op dispatch logic in `MXTensor` and switches to the general one in `AOBaseTensor`. To enable this, we move the mx ops into the same file as `MXTensor`, which avoids the need for callsites to separately import `mx_ops.py`. Note that there are a couple of custom mx ops which could instead use the general implementation; that is left for future PRs to keep this one small.

Test Plan:

```
pytest test/prototype/mx_formats -s -x
```

Pull Request resolved: #3080
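For context, the pattern the PR switches to looks roughly like the sketch below. This is a minimal illustration, assuming torchao's `TorchAOBaseTensor.implements` classmethod and its `(func, types, args, kwargs)` handler signature; `MyTensor` and its single `qdata` field are illustrative stand-ins, not the actual `MXTensor` internals:

```python
import torch
from torch.utils._python_dispatch import return_and_correct_aliasing
from torchao.utils import TorchAOBaseTensor

aten = torch.ops.aten


class MyTensor(TorchAOBaseTensor):
    """Illustrative wrapper subclass; the real MXTensor carries
    mx-specific state (scales, element dtype, block size, etc.)."""

    def __new__(cls, qdata):
        return torch.Tensor._make_wrapper_subclass(
            cls, qdata.shape, dtype=qdata.dtype, device=qdata.device
        )

    def __init__(self, qdata):
        self.qdata = qdata


# TorchAOBaseTensor provides the generic __torch_dispatch__; the subclass
# only registers per-op handlers via the `implements` decorator.
implements = MyTensor.implements


@implements([aten.detach.default, aten.alias.default])
def _(func, types, args, kwargs):
    x = args[0]
    # apply the op to the inner data, rewrap, and fix up aliasing metadata
    return return_and_correct_aliasing(func, args, kwargs, MyTensor(func(x.qdata)))
```

The upshot of the switch is that `MXTensor` no longer needs its own dispatch table and routing logic; it reuses the generic one from the base class and just registers handlers for the ops it supports.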
Review comment on the following hunk:

```python
implements = MXTensor.implements

@implements([aten.detach.default, aten.alias.default])
```
**Review comment:** these are defined in `TorchAOBaseTensor` as well, I think
**Reply:** yes, we can move those in future PRs if needed; I wanted to minimize changes in this PR
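If that follow-up lands, the explicit `detach`/`alias` registrations above could be deleted and those ops would fall through to the base-class handlers. A hypothetical smoke test for that change, assuming `MXTensor.to_mx(tensor, elem_dtype, block_size)` keeps its current signature (verify against the repo before relying on it):

```python
# Hypothetical check for the suggested follow-up: with the subclass
# registrations for detach/alias removed, these ops should still route
# through TorchAOBaseTensor's default handlers.
import torch
from torchao.prototype.mx_formats.mx_tensor import MXTensor

x = torch.randn(32, 32, dtype=torch.bfloat16)
mx = MXTensor.to_mx(x, torch.float8_e4m3fn, block_size=32)
y = mx.detach()
assert isinstance(y, MXTensor)
```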