
vmap support for torch.tril and torch.triu #94287

Closed · wants to merge 1 commit into from
Conversation

@isdanni (Contributor) commented Feb 7, 2023

Summary:
Add vmap support for torch.tril and torch.triu.

Fix: #91403

Test Plan: GitHub pipeline

Differential Revision: D43016624

Expected behavior

Same as using a for-loop:

```python
import torch

x = torch.randn(32, 3)
results = []
for xi in x:
  y = torch.triu(xi)
  results.append(y)
"""
triu: input tensor must have at least 2 dimensions
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-7-d726203efb0e> in <module>
      4 results = []
      5 for xi in x:
----> 6   y = torch.triu(xi)
      7   results.append(y)
RuntimeError: triu: input tensor must have at least 2 dimensions
"""
```

@pytorch-bot (bot) commented Feb 7, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/94287

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 4 Pending

As of commit e962dc9:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D43016624

@Skylion007 (Collaborator) left a comment

Copying both of these tensors incurs an atomic ref-count increment/decrement that can be avoided with std::move().

```cpp
int64_t diagonal = 0) {
  TORCH_CHECK(self.dim() >= 2, "tril: The input tensor must have at least 2 dimensions.");
  auto result = at::tril(self, diagonal);
  return std::make_tuple(result, 0);
```
Suggested change (Collaborator):

```diff
-return std::make_tuple(result, 0);
+return std::make_tuple(std::move(result), 0);
```

Collaborator (@lezcano):

This optimisation is almost always negligible in the grand scheme of things. If we really want to enforce it across the board, we should add it to the linter.

@Skylion007 (Collaborator):

@lezcano Adding it to a linter is a bit more difficult than it sounds: there are a lot of false positives/negatives, further complicated by template forwarding, etc.

Collaborator (@lezcano):

Then I don't think it's worth raising this point when reviewing PRs. As discussed, these optimisations are not useful and not worth the CI run unless they are on a very hot path.

@Skylion007 (Collaborator):

@lezcano There is work on a clang-tidy check that I've been experimenting with, but it's not quite ready yet: https://reviews.llvm.org/D137205

```cpp
int64_t diagonal = 0) {
  TORCH_CHECK(self.dim() >= 2, "triu: The input tensor must have at least 2 dimensions.");
  auto result = at::triu(self, diagonal);
  return std::make_tuple(result, 0);
```
Suggested change (Collaborator):

```diff
-return std::make_tuple(result, 0);
+return std::make_tuple(std::move(result), 0);
```
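
To make the std::move() discussion above concrete, here is a minimal, self-contained C++ sketch. It is an illustration only, not PyTorch's actual Tensor (which wraps a c10::intrusive_ptr with an atomic refcount): copying a ref-counted handle costs an atomic increment plus a matching decrement when the copy dies, while moving it merely transfers the pointer.

```cpp
#include <atomic>
#include <tuple>
#include <utility>

// Toy ref-counted handle standing in for a Tensor-like type (illustrative only).
struct Handle {
  std::atomic<long>* refcount;
  Handle() : refcount(new std::atomic<long>(1)) {}
  Handle(const Handle& other) : refcount(other.refcount) {
    refcount->fetch_add(1, std::memory_order_relaxed);  // atomic inc on every copy
  }
  Handle(Handle&& other) noexcept : refcount(other.refcount) {
    other.refcount = nullptr;  // a move just steals the pointer: no atomics
  }
  ~Handle() {
    if (refcount && refcount->fetch_sub(1, std::memory_order_acq_rel) == 1) {
      delete refcount;
    }
  }
};

// As originally written: `result` is copied into the tuple (atomic inc + later dec).
std::tuple<Handle, int> returns_copy(Handle result) {
  return std::make_tuple(result, 0);
}

// As suggested in review: the move constructor runs instead, with no ref-count traffic.
std::tuple<Handle, int> returns_move(Handle result) {
  return std::make_tuple(std::move(result), 0);
}

int main() {
  Handle h;
  auto a = returns_copy(h);             // h stays valid; refcount bounced up and down
  auto b = returns_move(std::move(h));  // h is left empty afterwards
  (void)a;
  (void)b;
  return 0;
}
```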

@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D43016624

4 similar comments from @facebook-github-bot

@isdanni isdanni requested review from lezcano and Skylion007 and removed request for zou3519, Chillee and lezcano February 8, 2023 04:26
@zou3519 (Contributor) commented Mar 8, 2023

@pytorchbot rebase

@zou3519 zou3519 self-requested a review March 8, 2023 20:57
@pytorchmergebot (Collaborator) commented

@pytorchbot successfully started a rebase job.

@pytorchmergebot (Collaborator) commented
Successfully rebased export-D43016624 onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout export-D43016624 && git pull --rebase)

@facebook-github-bot (Contributor) commented

This pull request was exported from Phabricator. Differential Revision: D43016624

2 similar comments from @facebook-github-bot

Summary:
Pull Request resolved: #94287

Add vmap support for torch.tril and torch.triu.

Issue: #91403

Test Plan: GitHub pipeline

Reviewed By: zou3519

Differential Revision: D43016624

fbshipit-source-id: 6e73ce6f1c83be8bafc70a039c06404b60180a87
@facebook-github-bot (Contributor) commented
This pull request was exported from Phabricator. Differential Revision: D43016624

@facebook-github-bot (Contributor) commented
@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorch-bot added the ciflow/trunk label (trigger trunk jobs on your pull request) Mar 16, 2023
@pytorchmergebot (Collaborator) commented

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status

cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 23, 2023
Pull Request resolved: pytorch/pytorch#94287
Approved by: https://github.com/Skylion007, https://github.com/zou3519
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 27, 2023
Labels: ciflow/trunk (trigger trunk jobs on your pull request), fb-exported, Merged
Projects: none yet
Development: successfully merging this pull request may close issue #91403 (vmap support for torch.tril, torch.triu)
6 participants