[NNC] Build aggregate stmt for kernel before LoopNest. #53024
Conversation
💊 CI failures summary and remediations: as of commit b11e376 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚 (This comment was automatically generated by Dr. CI.)
Overall this looks good; it's definitely going in the right direction! I left some comments; please feel free to ignore them if you've planned to make similar changes in further PRs.
torch/csrc/jit/tensorexpr/kernel.cpp (review on an outdated diff)
Instead of having a vector of Stmts and cloning tensor stmts into it, why can't we have a Block and append tensor stmts to it immediately, without cloning anything?
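For illustration, a minimal sketch of what this suggestion would look like, assuming the tensorexpr `Block::append_stmt` and `Tensor::stmt()` APIs of this era; the function and the `tensors` parameter are stand-ins, not actual TensorExprKernel code:

```cpp
#include <torch/csrc/jit/tensorexpr/stmt.h>
#include <torch/csrc/jit/tensorexpr/tensor.h>

using namespace torch::jit::tensorexpr;

// Build the kernel body by appending each tensor's stmt directly into
// one Block: no intermediate std::vector<Stmt*> and no cloning.
Block* buildBody(const std::vector<Tensor*>& tensors) {
  Block* body = new Block({});
  for (Tensor* t : tensors) {
    body->append_stmt(t->stmt());
  }
  return body;
}
```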
I ran into some issues when I tried it without cloning the stmts, but I wasn't adding them to a Block then. Let me try this cleanup again in a follow-up PR.
torch/csrc/jit/tensorexpr/kernel.cpp (review on an outdated diff)
Could we replace tensorOutputs_ with bufOutputs_ in TensorExprKernel and collect Bufs instead of Tensors in the first place? That would get rid of this loop.
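A hedged sketch of that cleanup; the field name `bufOutputs_`, its container type, and the call sites are assumptions about the eventual shape of TensorExprKernel, not code from this PR:

```cpp
// In TensorExprKernel (hypothetical cleaned-up form):
std::unordered_set<const Buf*> bufOutputs_;  // instead of std::vector<Tensor*> tensorOutputs_

// Wherever an output tensor `t` is produced, record its Buf directly:
bufOutputs_.insert(t->buf());

// The Tensor -> Buf conversion loop is then gone, and the set can be
// passed straight to the LoopNest constructor that takes output bufs:
LoopNest l(stmt, bufOutputs_);
```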
Sure. I will do this cleanup as a follow-up.
@navahgar has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Codecov Report
```diff
@@            Coverage Diff            @@
##           master   #53024    +/-   ##
=========================================
  Coverage    78.00%   78.00%
  Files         1848     1848
  Lines       179724   179739     +15
=========================================
+ Hits        140200   140214     +14
- Misses       39524    39525      +1
```
Summary: This PR builds an aggregate stmt for all the tensors in the kernel before constructing LoopNest. This migrates to using the LoopNest constructor that takes in a stmt and output buffers. This is one more step closer to eliminating the dependency of LoopNest on Tensor. Pull Request resolved: pytorch#53024 Reviewed By: H-Huang Differential Revision: D26729221 Pulled By: navahgar fbshipit-source-id: 43e972585351f6902c14b383b137aaaee3aaa3e1
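For readers following along, a rough sketch of the flow the summary describes, assuming a `LoopNest(Stmt*, std::unordered_set<const Buf*>)` constructor as referenced above; variable names are illustrative:

```cpp
#include <torch/csrc/jit/tensorexpr/loopnest.h>

using namespace torch::jit::tensorexpr;

// Clone each tensor's statement into one aggregate Block...
std::vector<Stmt*> stmts;
for (Tensor* t : tensors_) {
  stmts.push_back(Stmt::clone(t->stmt()));
}
Stmt* aggregate = new Block(stmts);

// ...collect the output Bufs (this is the Tensor -> Buf conversion loop
// the review above suggests removing by collecting Bufs up front)...
std::unordered_set<const Buf*> outputBufs;
for (Tensor* t : tensorOutputs_) {
  outputBufs.insert(t->buf());
}

// ...and construct the LoopNest from the pre-built statement,
// rather than from the Tensors themselves.
LoopNest l(aggregate, outputBufs);
```

This pre-built aggregate statement is also where the two review comments apply: the cloning loop could become direct appends into a `Block`, and the conversion loop disappears once Bufs are collected in the first place.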
I'm just curious. https://github.com/pytorch/pytorch/runs/2008165384 states that Facebook Internal tests have been running for 33 days. I wonder what that means (e.g., did the tests end abruptly and fail to post an update to GitHub? Or was the update lost because GitHub rate-limited FB's usage?). I've seen this in a lot of closed PRs.
@malfet have you seen this before?