
Conversation


@navahgar navahgar commented Mar 1, 2021

This PR builds an aggregate stmt for all the tensors in the kernel before constructing the LoopNest, migrating to the LoopNest constructor that takes a stmt and output buffers. This is one step closer to eliminating the dependency of LoopNest on Tensor.

@navahgar navahgar requested a review from ZolotukhinM March 1, 2021 17:00
@facebook-github-bot added the `cla signed` and `oncall: jit` (Add this issue/PR to JIT oncall triage queue) labels Mar 1, 2021

facebook-github-bot commented Mar 1, 2021

💊 CI failures summary and remediations

As of commit b11e376 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚




@ZolotukhinM left a comment


Overall looks good, that's definitely going in the right direction! I left some comments - please feel free to ignore them if you've planned to make similar changes in further PRs.


Instead of having a vector of Stmts and cloning tensor stmts into it, why can't we have a Block and append tensor stmts to it immediately, without cloning anything?

@navahgar (PR author) replied:


I ran into some issues when I tried skipping the cloning, but I wasn't adding the stmts to a Block then. Let me try this cleanup again in a follow-up PR.


Could we replace tensorOutputs_ with bufOutputs_ in TensorExprKernel and collect Bufs instead of Tensors in the first place? That would get rid of this loop.

@navahgar (PR author) replied:


Sure. I will do this cleanup as a follow-up.


@facebook-github-bot facebook-github-bot left a comment


@navahgar has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.



codecov bot commented Mar 2, 2021

Codecov Report

Merging #53024 (b11e376) into master (e00e42d) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master   #53024   +/-   ##
=======================================
  Coverage   78.00%   78.00%           
=======================================
  Files        1848     1848           
  Lines      179724   179739   +15     
=======================================
+ Hits       140200   140214   +14     
- Misses      39524    39525    +1     


@navahgar merged this pull request in d382693.

aocsa pushed a commit to Quansight/pytorch that referenced this pull request Mar 15, 2021
Summary:
This PR builds an aggregate stmt for all the tensors in the kernel before constructing the LoopNest, migrating to the LoopNest constructor that takes a stmt and output buffers. This is one step closer to eliminating the dependency of LoopNest on Tensor.

Pull Request resolved: pytorch#53024

Reviewed By: H-Huang

Differential Revision: D26729221

Pulled By: navahgar

fbshipit-source-id: 43e972585351f6902c14b383b137aaaee3aaa3e1
xsacha pushed a commit to xsacha/pytorch that referenced this pull request Mar 31, 2021
@imaginary-person

I'm just curious: https://github.com/pytorch/pytorch/runs/2008165384 states that Facebook Internal tests have been running for 33 days. I wonder what that means (e.g., did the tests end abruptly and fail to post an update to GitHub, or was the update lost because GitHub rate-limited FB's usage?). I've seen this in a lot of closed PRs.

@ZolotukhinM

> I'm just curious. https://github.com/pytorch/pytorch/runs/2008165384 states that Facebook Internal tests have been running for 33 days. I wonder what that means. I've seen this in a lot of closed PRs.

@malfet have you seen this before?


Labels

cla signed, Merged, oncall: jit (Add this issue/PR to JIT oncall triage queue)


4 participants