Update disabling fast-path for strict-export inside MultiheadAttention #164544
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/164544
Note: Links to docs will display an error until the docs builds have been completed.
⏳ No Failures, 1 Pending as of commit 802e9de with merge base 2a7c486.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@tugsbayasgalan has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
torch/nn/modules/activation.py (review comment on an outdated revision)

```python
    why_not_fast_path = "some Tensor argument has_torch_function"
elif _is_make_fx_tracing():
    why_not_fast_path = "we are running make_fx tracing"
elif torch.compiler.is_exporting():
```
Maybe you need another elif branch for export.
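For context, here is a minimal sketch (not from this PR) of how a `torch.compiler.is_exporting()` guard like the one added above behaves: it returns False in eager mode and True while `torch.export` traces the module, so export bakes in the guarded branch. The `Guarded` module is a hypothetical stand-in for the fast-path check in `MultiheadAttention`.

```python
import torch

class Guarded(torch.nn.Module):
    def forward(self, x):
        # True only while torch.export is tracing; False in eager mode.
        if torch.compiler.is_exporting():
            return x + 1  # stand-in for the slow (export-friendly) path
        return x - 1      # stand-in for the eager fast path

x = torch.randn(3)
print(torch.equal(Guarded()(x), x - 1))  # eager takes the fast branch

ep = torch.export.export(Guarded(), (x,))
print(torch.equal(ep.module()(x), x + 1))  # export baked in the slow branch
```

Because the guard is a plain Python bool at trace time, export constant-folds the branch: the exported graph contains only the slow path, which is exactly the behavior this PR wants for `MultiheadAttention`.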
For some reason, executorch needs the slow path. But the original flag doesn't work for the new export because we inline torch modules even before getting into make_fx.

Differential Revision: [D83810733](https://our.internmc.facebook.com/intern/diff/D83810733)
@tugsbayasgalan has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
…eadAttention" For some reason, executorch needs the slow path. But the original flag doesn't work for new export because we inline torch modules even before getting into make_fx. We still have to keep the old flag because lot of code assumes this exist.... grr Differential Revision: [D83810733](https://our.internmc.facebook.com/intern/diff/D83810733) [ghstack-poisoned]
@tugsbayasgalan has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
…eadAttention" For some reason, executorch needs the slow path. But the original flag doesn't work for new export because we inline torch modules even before getting into make_fx. We still have to keep the old flag because lot of code assumes this exist.... grr Differential Revision: [D83810733](https://our.internmc.facebook.com/intern/diff/D83810733) [ghstack-poisoned]
@tugsbayasgalan has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Update disabling fast-path for strict-export inside MultiheadAttention (pytorch#164544)

For some reason, executorch needs the slow path. But the original flag doesn't work for the new export because we inline torch modules even before getting into make_fx. We still have to keep the old flag because a lot of code assumes it exists.

Differential Revision: [D83810733](https://our.internmc.facebook.com/intern/diff/D83810733)
Pull Request resolved: pytorch#164544
Approved by: https://github.com/anijain2305, https://github.com/mikaylagawarecki
Stack from ghstack (oldest at bottom):
For some reason, executorch needs the slow path. But the original flag doesn't work for the new export: strict export inlines torch modules before ever getting into make_fx, so the existing make_fx-tracing check is evaluated too early and never disables the fast path. We still have to keep the old flag because a lot of code assumes it exists.
Differential Revision: D83810733
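To illustrate the end-to-end effect, here is a short sketch (under assumed shapes and hyperparameters, not part of the PR): after this change, exporting a module that contains `nn.MultiheadAttention` should take the slow, fully decomposed path, which is what executorch-style backends consume.

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.mha = nn.MultiheadAttention(embed_dim=16, num_heads=4, batch_first=True)

    def forward(self, x):
        out, _ = self.mha(x, x, x)  # self-attention
        return out

x = torch.randn(2, 8, 16)  # (batch, seq, embed)
ep = torch.export.export(Model().eval(), (x,))
# With torch.compiler.is_exporting() guarding the fast path, the graph below
# should contain decomposed attention ops rather than the fused fast-path op.
print(ep.graph)
```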