Fix inductor fallback_random for dropout/rand_like #89515
Conversation
- Avoid fx graph rewrite that replaces certain ops with ones using triton random
- Keep track of replacement ops using triton random, so it is possible to not disable all replacements when using fallback_random
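For context, here is a minimal sketch of the approach described above, assuming illustrative names (`replacements`, `random_replacements`, `replace_fx_sketch`) rather than the actual torch/_inductor internals: the graph rewrite simply skips any replacement known to use triton random when `fallback_random` is set.

```python
import torch
import torch.fx

# Illustrative tables (assumed, not the real inductor mappings):
# `replacements` maps an op to its inductor-friendly replacement, and
# `random_replacements` marks the subset whose replacements use triton random.
replacements: dict = {}
random_replacements: set = set()

def replace_fx_sketch(gm: torch.fx.GraphModule, fallback_random: bool) -> torch.fx.GraphModule:
    for node in gm.graph.nodes:
        if node.op != "call_function" or node.target not in replacements:
            continue
        if fallback_random and node.target in random_replacements:
            # Keep the eager ATen op so random numbers match eager mode.
            continue
        node.target = replacements[node.target]
    gm.graph.lint()
    gm.recompile()
    return gm
```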
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/89515
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 18cee96.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
```python
def replace_fx(gm: torch.fx.GraphModule):
    # Sometimes patch_functions() misses things already in the graph
```
What is AutogradMonkeypatch used for? Do I also need to update it? It's not obvious what it means that `replacements` "is" the function being called on `torch_function`.
(see above, L27, not in this diff)
That looks like a typo to me. I think it should be: `func in replacements`.
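For illustration, a hedged reconstruction of the pattern under discussion (the class and dict names mirror the conversation, but this is a sketch, not the actual source):

```python
import torch
from torch.overrides import TorchFunctionMode

replacements = {}  # assumed: maps a torch function to its replacement

class AutogradMonkeypatch(TorchFunctionMode):
    def __torch_function__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # The suspected typo: `if func is replacements:` compares `func` to
        # the dict object itself and can never be True. The membership test
        # below is the intended check.
        if func in replacements:
            return replacements[func](*args, **kwargs)
        return func(*args, **kwargs)
```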
OK, makes sense. I'll PR that too, but separately in case it breaks something :)
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: pytorch#89515
Approved by: https://github.com/ngimel
cc @mlazos @soumith @voznesenskym @yanboliang @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @chunyuan-w @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire
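As a closing usage note, a minimal sketch of what the fix enables, assuming the `torch._inductor.config.fallback_random` flag described in this PR (exact reproducibility may vary by build):

```python
import torch
import torch._inductor.config as inductor_config

# With fallback_random enabled, inductor falls back to eager ATen random
# ops instead of triton random for ops like dropout and rand_like.
inductor_config.fallback_random = True

def f(x):
    return torch.nn.functional.dropout(x, p=0.5)

torch.manual_seed(0)
eager_out = f(torch.ones(8))

torch.manual_seed(0)
compiled_out = torch.compile(f)(torch.ones(8))

# Expected to match once the fallback takes effect.
print(torch.equal(eager_out, compiled_out))
```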