Add support for strongly typed softmax #13750
Conversation
CI status (Dr. CI): as of commit 9bd2a33 with merge base 0d0769a, there is 1 new failure and 1 unrelated (broken-trunk) failure; rebasing onto the `viable/strict` branch avoids the broken-trunk failure. There is 1 currently active SEV. Artifacts and rendered test results are at hud.pytorch.org/pr/pytorch/executorch/13750.
This pull request was exported from Phabricator. Differential Revision: D81172654
Summary: Pull Request resolved: #13750. As a bonus, this lets us fully remove the fallback, since all fp32 cases are now optimized (unless a tensor has more than 16 dims, which should be fine for a few years). Differential Revision: D81172654
Reviewed By: JakeStevens. Differential Revision: D81172654
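To illustrate the idea behind a "strongly typed" softmax, here is a minimal, hypothetical C++ sketch: the kernel is templated on the element type so each dtype gets its own fully specialized code path and no generic fallback is needed. This is an assumption-laden illustration, not the actual ExecuTorch implementation; the function name `softmax_1d` and its signature are invented for this example.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: a dtype-specialized (strongly typed) softmax over a
// contiguous 1-D slice. Each instantiation (float, double, ...) is a
// standalone optimized path, so no type-erased fallback kernel is required.
template <typename T>
void softmax_1d(const T* in, T* out, std::size_t n) {
  // Numerically stable softmax: subtract the max before exponentiating.
  T max_val = in[0];
  for (std::size_t i = 1; i < n; ++i) {
    if (in[i] > max_val) max_val = in[i];
  }
  T sum = T(0);
  for (std::size_t i = 0; i < n; ++i) {
    out[i] = std::exp(in[i] - max_val);
    sum += out[i];
  }
  for (std::size_t i = 0; i < n; ++i) {
    out[i] /= sum;
  }
}
```

In a real kernel, a dtype-dispatch switch would select the instantiation at runtime from the tensor's scalar type; once every supported dtype maps to a specialization like this, the generic fallback path can be deleted, which is the cleanup the summary describes.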