
Make randperm work properly on non-contiguous tensors. #23043

Closed
xuhdev wants to merge 2 commits

Conversation


@xuhdev xuhdev commented Jul 18, 2019

Close #22710

@pytorchbot pytorchbot added module: cuda Related to torch.cuda, and CUDA support in general module: operators labels Jul 18, 2019

xuhdev commented Jul 18, 2019

The code should be equivalent if the output tensor is contiguous. I wonder if there is a proper way to add a test for non-contiguous tensors...
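To illustrate what the change affects (a hypothetical sketch, not code from this PR): writing `randperm` into a non-contiguous `out` tensor is exactly the case where the contiguous code path cannot be assumed.

```python
import torch

n = 5
base = torch.zeros(n, 2, dtype=torch.long)
view = base[:, 0]                # stride 2, so this view is non-contiguous
assert not view.is_contiguous()

# With this fix, randperm fills the view with a permutation of 0..n-1
# even though its elements are not adjacent in memory.
torch.randperm(n, out=view)
assert sorted(view.tolist()) == list(range(n))
```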

@xuhdev xuhdev changed the title Make randperm works properly on non-contiguous tensors. [WIP] Make randperm works properly on non-contiguous tensors. Jul 18, 2019
@xuhdev xuhdev changed the title [WIP] Make randperm works properly on non-contiguous tensors. Make randperm works properly on non-contiguous tensors. Jul 18, 2019
@xuhdev xuhdev assigned gchanan and unassigned gchanan Jul 18, 2019
@xuhdev xuhdev requested a review from gchanan July 18, 2019 23:17
@xuhdev xuhdev requested a review from li-roy July 19, 2019 17:59
@li-roy li-roy left a comment


Looks good for the most part. Can you add a test? One way to test noncontiguous is by creating a contiguous tensor, creating a view that's non contiguous (using view, transpose, or anything else).
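The reviewer's suggestion can be sketched as follows (illustrative only, not code from the PR): `transpose` produces a non-contiguous view over the same storage as a contiguous tensor.

```python
import torch

x = torch.arange(6).reshape(2, 3)    # contiguous
t = x.t()                            # transpose is a view with swapped strides
assert x.is_contiguous()
assert not t.is_contiguous()
assert t.data_ptr() == x.data_ptr()  # same underlying storage
```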

aten/src/ATen/native/cuda/TensorFactories.cu (review comment, outdated and resolved)
@xuhdev xuhdev force-pushed the op/randperm-cuda branch 2 times, most recently from b98dc92 to fa9ab89 Compare July 19, 2019 19:25
@xuhdev xuhdev requested a review from li-roy July 19, 2019 19:26

xuhdev commented Jul 19, 2019

Looks good for the most part. Can you add a test? One way to test noncontiguous is by creating a contiguous tensor, creating a view that's non contiguous (using view, transpose, or anything else).

OK, I used transpose and added an assertion to guarantee that the tensor is non-contiguous (it seems there is no way to generate a tensor that is guaranteed to be non-contiguous without relying on implementation details).

Another note: the CPU and CUDA tests look largely the same. I'll probably simplify them in a separate PR.
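A minimal sketch of such a test (hypothetical; the merged test may differ) that could cover both devices with one helper:

```python
import torch

def check_randperm_noncontiguous(device):
    n = 4
    # Column 0 of a transposed n x n tensor has stride n, hence is
    # non-contiguous; asserting this avoids depending on implementation
    # details of how the view was produced.
    out = torch.empty(n, n, dtype=torch.long, device=device).t()[0]
    assert not out.is_contiguous()
    torch.randperm(n, out=out)
    assert sorted(out.tolist()) == list(range(n))

check_randperm_noncontiguous("cpu")
# Running the same helper with device="cuda" (when available) is one way
# to share the otherwise-duplicated CPU and CUDA tests.
```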

@cpuhrsch cpuhrsch added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jul 19, 2019
@xuhdev xuhdev requested a review from gchanan July 22, 2019 17:56
@xuhdev xuhdev force-pushed the op/randperm-cuda branch 2 times, most recently from b4333b8 to 6d4d370 Compare July 22, 2019 23:30

xuhdev commented Jul 23, 2019

@pytorchbot rebase this please

@VitalyFedyunin VitalyFedyunin left a comment


Looks good except one tiny thing.

@facebook-github-bot facebook-github-bot left a comment


@VitalyFedyunin has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@xuhdev xuhdev changed the title Make randperm works properly on non-contiguous tensors. Make randperm work properly on non-contiguous tensors. Jul 23, 2019
@gchanan gchanan left a comment


thanks!

@@ -2807,6 +2807,15 @@ def test_randperm_cuda(self):
torch.randperm(small_n, out=res) # No exception expected
self.assertRaises(RuntimeError, lambda: torch.randperm(large_n, out=res))

# Test non-contiguous tensors

thanks!

@facebook-github-bot

@VitalyFedyunin merged this pull request in 236149e.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jul 29, 2019
Summary:
Close pytorch/pytorch#22710
Pull Request resolved: pytorch/pytorch#23043

Differential Revision: D16446340

Pulled By: VitalyFedyunin

fbshipit-source-id: 1760af310fee71b369e1aaaf96546277058611c9
Labels
Merged; module: cuda (Related to torch.cuda, and CUDA support in general); open source; triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Development

Successfully merging this pull request may close these issues.

randperm on cuda looks buggy
9 participants