Make TORCH_COMPILE_DEBUG=1 work again #112917
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/112917
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (2 unrelated failures) As of commit 2e8bff9 with merge base 132cb57: FLAKY - the following jobs failed but were likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
topic: not user facing
thanks Ying!
@ipiszy has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Surprised there are no new type errors. Thanks for the fix!
```diff
@@ -3111,6 +3111,7 @@ def forward(self, l_input_: torch.Tensor):
         o2 = torch.compile(mod)(inp)
         self.assertEqual(o1, o2)
 
+    @patch.object(config.trace, "enabled", True)
```
why this change?
To add a test with `TORCH_COMPILE_DEBUG=1`. It seems we don't currently have any test coverage for `TORCH_COMPILE_DEBUG=1`.
Should probably just make a separate sanity test.
@Chillee Where do you suggest putting this sanity test?
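For reference, a minimal sketch of the kind of standalone sanity test being discussed, assuming it sits alongside the other inductor tests; the test name and body are illustrative, not what the PR ultimately landed:

```python
# Hypothetical sanity test: patching torch._inductor.config.trace.enabled
# mirrors part of what TORCH_COMPILE_DEBUG=1 turns on, so the test exercises
# the debug-trace path without setting the env var in a subprocess.
from unittest.mock import patch

import torch
from torch._inductor import config


@patch.object(config.trace, "enabled", True)
def test_compile_debug_sanity():
    def fn(x):
        return torch.sin(x) + 1

    x = torch.randn(8)
    # Compilation should succeed with tracing enabled, and the compiled
    # output should still match eager mode.
    out = torch.compile(fn)(x)
    torch.testing.assert_close(out, fn(x))
```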
Yeah, I'm surprised, too...
There is actually a mypy error; for some reason my local run doesn't show it:
So I need to fix it properly.
I think,
It will introduce other mypy errors, lol. I'm just going to add a `type: ignore` here.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
ATT. After the fix, `self.node` is `Optional[ir.Buffer]` in `FusedSchedulerNode` and `ForeachKernelSchedulerNode`, but `ir.Buffer` in `BaseSchedulerNode`. Using `ir.Buffer` for `BaseSchedulerNode.node` avoids all mypy complaints about Optionals.

Pull Request resolved: pytorch#112917
Approved by: https://github.com/davidberard98, https://github.com/int3, https://github.com/leslie-fang-intel, https://github.com/aakhundov
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler
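To make the typing trade-off described above concrete, here is a self-contained sketch of the hierarchy; the real classes live in torch/_inductor/scheduler.py, and the stand-in `Buffer` class and comments are illustrative only:

```python
# Minimal sketch, assuming ir.Buffer behaves like a plain class. mypy rejects
# re-declaring an inherited attribute with an incompatible (Optional) type,
# which is why a bare "type: ignore" comes up in the thread above.
from typing import Optional


class Buffer:  # stand-in for ir.Buffer
    pass


class BaseSchedulerNode:
    # Non-Optional here, so the many call sites doing self.node.<something>
    # need no None checks to satisfy mypy.
    node: Buffer


class FusedSchedulerNode(BaseSchedulerNode):
    # A fused node has no single underlying buffer, so its node really is
    # Optional; the override conflicts with the base class annotation.
    node: Optional[Buffer] = None  # type: ignore
```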