Quantized Softmax Kernel #14096
Conversation
Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14096
Note: links to docs will display an error until the docs builds have completed.
✅ No failures as of commit e55524e with merge base 0439fde. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
This pull request was exported from Phabricator. Differential Revision: D78716203
Summary: Generic implementation of quantized softmax; dummy DLA_V130 implementation for now.
NOTE: The mask parameter is a no-op.
Reviewed By: mcremon-meta
Differential Revision: D78716203
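For readers unfamiliar with the pattern, a generic quantized softmax typically dequantizes the input, runs a numerically stable float softmax, and requantizes the result. The sketch below is a minimal illustration of that generic path, assuming uint8 affine quantization with per-tensor scale and zero point; the names, signature, and quantization scheme here are assumptions for illustration and do not reflect the actual kernel or the (no-op) mask handling in this PR.

```python
import math

def quantized_softmax(q_in, in_scale, in_zp, out_scale, out_zp):
    """Illustrative generic path: dequantize -> float softmax -> requantize.

    q_in: list of uint8 values; (scale, zero_point) pairs follow the usual
    affine quantization convention real = (q - zero_point) * scale.
    """
    # Dequantize to float.
    x = [(q - in_zp) * in_scale for q in q_in]
    # Subtract the max before exponentiating for numerical stability.
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Requantize the probabilities back to uint8, clamping to [0, 255].
    return [max(0, min(255, round(p / out_scale) + out_zp)) for p in probs]
```

With a uniform input of length four and an output scale of 1/256 (zero point 0), each probability is 0.25 and requantizes to 64.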
Force-push history:
dc16e07 to 7b58326
7b58326 to d5d1c59
d5d1c59 to 07e1a4a
07e1a4a to 7253fd6
7253fd6 to e10ad97
31e04e8 to 559b044
86e6c0a to 846f12b
5927554 to 7d9b4db
0c2e9f2 to 9f16c91
9f16c91 to 02e86d1
02e86d1 to 20bf837
20bf837 to e55524e

@skrtskrtfb has exported this pull request. If you are a Meta employee, you can view the originating diff in D78716203.
Differential Revision: D78716203
Pull Request resolved: pytorch#14096
Summary:
Generic implementation of quantized softmax; dummy DLA_V130 implementation for now.
NOTE: The mask parameter is a no-op.
Reviewed By: mcremon-meta
Differential Revision: D78716203