vmap support for torch.tril and torch.triu #94287
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/94287
Note: Links to docs will display an error until the docs builds have been completed.
⏳ No Failures, 4 Pending as of commit e962dc9.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D43016624
Copying both of these tensors involves an atomic lock and a ref-count increment / decrement that can be avoided with std::move().
```cpp
    int64_t diagonal = 0) {
  TORCH_CHECK(self.dim() >= 2, "tril: The input tensor must have at least 2 dimensions.");
  auto result = at::tril(self, diagonal);
  return std::make_tuple(result, 0);
```
Suggested change:
```diff
- return std::make_tuple(result, 0);
+ return std::make_tuple(std::move(result), 0);
```
This optimisation is almost always negligible in the grand scheme of things. If we really want to enforce it across the board, we should add it to the linter.
@lezcano Adding it to a linter is a bit more difficult than it sounds; there are a lot of false positives / negatives. It is further complicated by template forwarding, etc.
Then I don't think it's worth making this point when reviewing PRs. As discussed, these optimisations are not useful and not worth the CI run unless they are in a very hot path.
@lezcano there is work on a clang-tidy check that I've been experimenting with, but it's not quite ready yet: https://reviews.llvm.org/D137205
```cpp
    int64_t diagonal = 0) {
  TORCH_CHECK(self.dim() >= 2, "triu: The input tensor must have at least 2 dimensions.");
  auto result = at::triu(self, diagonal);
  return std::make_tuple(result, 0);
```
Suggested change:
```diff
- return std::make_tuple(result, 0);
+ return std::make_tuple(std::move(result), 0);
```
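As background (not part of this PR's diff): the batch rule can apply `at::tril` / `at::triu` to the batched tensor because both ops already treat all leading dimensions as batch dimensions. A minimal Python sketch of that equivalence:

```python
import torch

# triu treats every leading dimension as a batch dimension, so applying it to a
# (B, n, m) tensor matches applying it to each (n, m) slice separately.
x = torch.randn(4, 3, 3)
per_example = torch.stack([torch.triu(xi) for xi in x])
assert torch.equal(torch.triu(x), per_example)
```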
This pull request was exported from Phabricator. Differential Revision: D43016624
@pytorchbot rebase
@pytorchbot successfully started a rebase job. Check the current status here
Successfully rebased |
This pull request was exported from Phabricator. Differential Revision: D43016624
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary: Add vmap support for torch.tril and torch.triu.

Fix: #91403

Test Plan: GitHub pipeline

Differential Revision: D43016624

### Expected behavior

Same as using a for-loop:

```python
import torch
x = torch.randn(32, 3)
results = []
for xi in x:
    y = torch.triu(xi)
    results.append(y)
"""
triu: input tensor must have at least 2 dimensions
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-7-d726203efb0e> in <module>
      4 results = []
      5 for xi in x:
----> 6     y = torch.triu(xi)
      7     results.append(y)

RuntimeError: triu: input tensor must have at least 2 dimensions
"""
```

Pull Request resolved: pytorch/pytorch#94287

Approved by: https://github.com/Skylion007, https://github.com/zou3519
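For context, a minimal sketch of the behavior this PR enables (illustrative usage, assuming `torch.vmap` / `torch.func.vmap` is available; each vmapped example must itself be at least 2-D for `triu`):

```python
import torch

# With a batch rule for triu, vmap matches the per-example for-loop above.
x = torch.randn(32, 3, 3)

batched = torch.vmap(torch.triu)(x)                   # single vmapped call
looped = torch.stack([torch.triu(xi) for xi in x])    # equivalent for-loop

assert torch.allclose(batched, looped)
```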