export static llama with masked softmax #13832
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13832
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: There is 1 currently active SEV. If your PR is affected, please view it below.
❌ 2 New Failures: As of commit 28fd542 with merge base b660c2e, the following jobs have failed.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D81248691
This PR needs a
Summary: export the model with soft attn max support
Reviewed By: limintang, sxu
Differential Revision: D81248691
Branch updated from 3c8050a to 4b47a05.
Summary: Pull Request resolved: #13832
export the model with soft attn max support
Reviewed By: limintang, sxu
Differential Revision: D81248691
Branch updated from 4b47a05 to 28fd542.
@pytorchbot label "topic: not user facing"
Summary: export the model with soft attn max support
Reviewed By: sxu
Differential Revision: D81248691
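For context on what "masked softmax" means here: in attention, positions that should not be attended to (e.g. future tokens in a causal mask, or padding) are pushed to negative infinity before normalization, so they receive exactly zero weight. The PR itself does not include the export code in this thread, so the following is only a minimal illustrative sketch of the masked-softmax idea in plain Python; the function name and the boolean-mask convention (`True` = keep) are assumptions, not ExecuTorch's actual API.

```python
import math

def masked_softmax(scores, keep_mask):
    # Hypothetical sketch: set masked-out scores to -inf so that
    # exp(-inf) = 0 and those positions get zero attention weight.
    masked = [s if keep else float("-inf")
              for s, keep in zip(scores, keep_mask)]
    # Subtract the max for numerical stability before exponentiating.
    m = max(masked)
    exps = [math.exp(s - m) for s in masked]
    total = sum(exps)
    return [e / total for e in exps]

# Last position is masked out, so its weight is exactly 0
# and the remaining weights still sum to 1.
weights = masked_softmax([1.0, 2.0, 3.0], [True, True, False])
```

The stability trick (subtracting the running max) is the same reason a dedicated masked-softmax op can help a static export: the mask is folded into one kernel instead of a separate add of a large negative bias tensor.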