Make randperm work properly on non-contiguous tensors. #23043
Conversation
The code should be equivalent when the output tensor is contiguous. I wonder whether there is a proper way to add a test for non-contiguous tensors...
Looks good for the most part. Can you add a test? One way to test non-contiguity is to create a contiguous tensor, then create a non-contiguous view of it (using view, transpose, or anything else).
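The reviewer's suggestion can be sketched as follows. This is a hypothetical illustration, using NumPy as a stand-in since its C-contiguity flag mirrors PyTorch's `.is_contiguous()`: transposing a 2-D array keeps the same storage but swaps the strides, which reliably yields a non-contiguous view.

```python
import numpy as np

# Build a contiguous buffer, then take a transposed view of it.
# (NumPy stands in for torch here; the contiguity semantics match.)
base = np.arange(6, dtype=np.int64).reshape(2, 3)  # contiguous storage
view = base.T                                      # same storage, swapped strides

print(base.flags['C_CONTIGUOUS'])  # True
print(view.flags['C_CONTIGUOUS'])  # False -> guaranteed non-contiguous view
```

In PyTorch the equivalent check would be `view.is_contiguous()`; the point is that the view shares storage with `base`, so any op writing into it must respect the strides rather than assume a flat layout.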
Force-pushed from b98dc92 to fa9ab89 (compare).
OK, I used transpose and added an assertion to guarantee that the tensor is non-contiguous (it seems there is no way to generate a tensor that is guaranteed to be non-contiguous without relying on implementation details). Another note: the CPU test and the CUDA test look largely the same; I'll probably simplify them in a separate PR.
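The test pattern described above can be sketched like this. Again a hedged illustration with NumPy standing in for torch: take a non-contiguous view via transpose, assert non-contiguity so the test cannot silently degrade into the contiguous path, then check that a permutation written through the strided view lands in the correct storage locations, which is the property the fix guarantees for `randperm(out=...)`.

```python
import numpy as np

# Contiguous base tensor, then a transposed (non-contiguous) view of it.
res = np.zeros((3, 3), dtype=np.int64)
view = res.T
assert not view.flags['C_CONTIGUOUS'], "view must be non-contiguous for this test"

# Write a permutation element-wise through the strided view; every value
# must end up exactly once in the underlying storage.
perm = np.random.permutation(9)
view.flat[:] = perm
assert sorted(view.flatten().tolist()) == list(range(9))
```

The assertion on the flags is the key piece the author mentions: without it, a future refactor could hand the test a contiguous view and the regression would go untested.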
Force-pushed from b4333b8 to 6d4d370 (compare).
@pytorchbot rebase this please
Looks good except one tiny thing.
@VitalyFedyunin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
thanks!
```diff
@@ -2807,6 +2807,15 @@ def test_randperm_cuda(self):
     torch.randperm(small_n, out=res)  # No exception expected
     self.assertRaises(RuntimeError, lambda: torch.randperm(large_n, out=res))

     # Test non-contiguous tensors
```
@VitalyFedyunin merged this pull request in 236149e.
Summary: Close pytorch/pytorch#22710
Pull Request resolved: pytorch/pytorch#23043
Differential Revision: D16446340
Pulled By: VitalyFedyunin
fbshipit-source-id: 1760af310fee71b369e1aaaf96546277058611c9