Remove non-per-tensor quantized add and replace with per-tensor variant #14093
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14093
Note: Links to docs will display an error until the docs builds have been completed.
⏳ No failures, 92 pending as of commit a526b61 with merge base 0b4fe31.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D81950579
Remove non-per-tensor quantized add and replace with per-tensor variant (pytorch#14093)

Summary: As discussed offline, we don't need a non-per-tensor variant of quantized_add, so it is being removed from the reference implementations.

Differential Revision: D81950579

The author force-pushed the branch with this same commit message after each Phabricator export, and the bot re-posted "This pull request was exported from Phabricator. Differential Revision: D81950579" each time:

d8516e8 to f45dc38
f45dc38 to 54950c0
54950c0 to 7c3e336
7c3e336 to 0fd1c87
0fd1c87 to 710e60d
710e60d to e405754
cf81c51 to 5311b19
5311b19 to fa689ac
57b1705 to be6c7e8
679d685 to d9c1ad4
d9c1ad4 to a526b61

Starting with the cf81c51 to 5311b19 push, the commit message also carried "Reviewed By: hsharma35".
Summary: As discussed offline, we don't need a non-per-tensor variant of quantized_add, so it is being removed from the reference implementations.

Pull Request resolved: pytorch#14093
Differential Revision: D81950579
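For readers outside the offline discussion: per-tensor quantization uses a single scalar scale and zero_point for an entire tensor, so the reference op only needs scalar parameters, which is why a separate non-per-tensor variant is redundant here. Below is a minimal sketch of what a per-tensor quantized add reference implementation can look like; the function name, signature, and int8 output range are illustrative assumptions, not the actual code this PR removes or keeps.

```python
import torch

def quantized_add_per_tensor(
    x: torch.Tensor, x_scale: float, x_zero_point: int,
    y: torch.Tensor, y_scale: float, y_zero_point: int,
    out_scale: float, out_zero_point: int,
) -> torch.Tensor:
    # Hypothetical sketch; not the signature from this PR's diff.
    # Dequantize each input with its scalar (per-tensor) scale/zero_point.
    x_fp = (x.to(torch.float32) - x_zero_point) * x_scale
    y_fp = (y.to(torch.float32) - y_zero_point) * y_scale
    # Add in float, then requantize to the output's scalar parameters.
    out = torch.round((x_fp + y_fp) / out_scale) + out_zero_point
    # Assume an int8 output; clamp to the representable range.
    return out.clamp(-128, 127).to(torch.int8)

# Illustrative usage with int8 inputs.
x = torch.tensor([10, 20, 30], dtype=torch.int8)
y = torch.tensor([1, 2, 3], dtype=torch.int8)
print(quantized_add_per_tensor(x, 0.1, 0, y, 0.1, 0, 0.1, 0))
```

A non-per-tensor variant would accept scale/zero_point as tensors instead of scalars; per the summary above, that generality was judged unnecessary for these reference implementations.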