
Added CUDA support for complex input for torch.solve #47045

Closed
wants to merge 18 commits

Conversation

IvanYashchuk
Collaborator

torch.solve now works for complex inputs on GPU.
I moved the existing tests to test_linalg.py and modified them to test complex and float32 dtypes.
Differentiation also works correctly with complex inputs.

Fixes #41084
Ref. #33152

@anjali411 I hope you don't mind that I took over #42737
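For illustration, here is a minimal usage sketch of what this change enables (not code from the PR itself): torch.solve on complex CUDA tensors plus a gradcheck of the complex backward. torch.solve was the relevant API at the time of this PR; the CPU fallback and the gradcheck call assume a build with CUDA and complex-gradcheck support.

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

A = torch.randn(3, 3, dtype=torch.complex128, device=device)
b = torch.randn(3, 1, dtype=torch.complex128, device=device)

# torch.solve solves A @ x = b and also returns the LU factorization of A.
x, lu = torch.solve(b, A)
print(torch.allclose(A @ x, b))  # True up to numerical tolerance

# Differentiation with complex inputs: gradcheck on fresh leaf tensors.
A.requires_grad_(True)
b.requires_grad_(True)
torch.autograd.gradcheck(lambda b, A: torch.solve(b, A)[0], (b, A))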

@IvanYashchuk IvanYashchuk added the module: complex, module: linear algebra, and complex_autograd labels Oct 29, 2020
@albanD albanD added the triaged label Oct 29, 2020
@skipCPUIfNoLapack
@dtypes(torch.float64, torch.complex128)
@dtypesIfCUDA(torch.float64)
def test_solve_autograd(self, device, dtype):
Contributor

I don't think we should add autograd tests in test_linalg.py. The only autograd tests in this file are in test_autograd_and_jit, which are tagged with a note that they should be moved to method_tests in common_methods_invocations.py. cc @mruberry

Contributor

For now we should just add tests in method_tests for solve by adding an entry here: https://github.com/pytorch/pytorch/blob/master/test/test_autograd.py#L4928-L4929 and adding complex-specific tests similar to this (make sure to include 'complex' in the name of each of the new tests so that they only run for complex dtypes).

Contributor

It might also be worth adding a LinalgFuncInfo to migrate the linalg autograd tests as we extend linalg support, but it's OK to leave that for a follow-up PR.

Collaborator Author

I recall @mruberry mentioned in other PRs that all linalg-related tests should move to test_linalg.py, so I think that also includes autograd tests that are not part of method_tests, not only tests from test_torch.py.
Since the plan is to eventually replace method_tests with LinalgFuncInfo, I suggest we leave this test here and cover both the CPU and CUDA variants without adding an entry to method_tests (which is a bit troublesome because we need to create an input with the right dtype and device using random_fullrank_matrix_distinct_singular_value(..., dtype, device)).
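A rough sketch of the kind of device/dtype-aware autograd check being discussed (not the PR's exact code; the dtype/device keyword arguments to the helper follow the comment above and may differ slightly in the real test utilities):

import torch
from torch.autograd import gradcheck
from torch.testing._internal.common_utils import \
    random_fullrank_matrix_distinct_singular_value

def check_solve_autograd(device, dtype):
    # A full-rank A keeps the solve well conditioned for gradcheck.
    A = random_fullrank_matrix_distinct_singular_value(3, dtype=dtype, device=device)
    b = torch.randn(3, 1, dtype=dtype, device=device)
    A.requires_grad_(True)
    b.requires_grad_(True)
    gradcheck(lambda b, A: torch.solve(b, A)[0], (b, A))

check_solve_autograd('cpu', torch.complex128)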

Collaborator

We often debate amongst ourselves where tests related to autograd should live. Do they live with the operator, or do they live with test_autograd.py? And we haven't come to a consistent resolution yet, unfortunately, so it's reviewers' discretion. In the (near) future I expect we'll publicize a definitive plan for where these tests go. Sorry there's not a clear solution at the moment.

@@ -7614,87 +7614,6 @@ def run_test(matsize, batchdims, mat_chars):
run_test(matsize, batchdims, mat_chars=['sym', 'sym_pd', 'sym_psd'])
run_test(matsize, batchdims, mat_chars=['sing', 'non_sing'])

def solve_test_helper(self, A_dims, b_dims, device, dtype):
Contributor

Thanks! Good to see tests being migrated from test_torch.py to dedicated test files like test_linalg.py.

Contributor

@anjali411 anjali411 left a comment

lgtm once the autograd tests are removed from test_linalg.py and added to method_tests.

@facebook-github-bot
Contributor

Hi @IvanYashchuk!

Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but we do not have a signature on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@codecov

codecov bot commented Nov 2, 2020

Codecov Report

Merging #47045 (2d37d7f) into master (2df5600) will increase coverage by 39.16%.
The diff coverage is 100.00%.

@@             Coverage Diff             @@
##           master   #47045       +/-   ##
===========================================
+ Coverage   42.06%   81.23%   +39.16%     
===========================================
  Files         472     1837     +1365     
  Lines       64759   198089   +133330     
===========================================
+ Hits        27240   160909   +133669     
+ Misses      37519    37180      -339     

@dr-ci

dr-ci bot commented Nov 2, 2020

💊 CI failures summary and remediations

As of commit 2d37d7f (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.

See how this bot performed.

This comment has been revised 28 times.

@IvanYashchuk
Collaborator Author

@anjali411 I've removed the autograd tests from test_linalg.py and added them to method_tests.

def test_tensorsolve_empty(self, device, dtype):
    # Check for empty inputs. NumPy does not work for these cases.
    a = torch.empty(0, 0, 1, 2, 3, 0, dtype=dtype, device=device)
    b = torch.empty(a.shape[:2], dtype=dtype, device=device)
    x = torch.linalg.tensorsolve(a, b)
    self.assertEqual(torch.tensordot(a, x, dims=len(x.shape)), b)

# TODO: once "solve_cuda" supports complex dtypes, they shall be added to above tests
Collaborator Author

torch.linalg.tensorsolve now works for complex dtypes.
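For reference, a small illustrative example (not taken from the PR) of torch.linalg.tensorsolve with a complex dtype; the shapes follow the documented requirement that the trailing dimensions of a multiply out to the number of elements of b, and the CPU fallback is an assumption for machines without CUDA.

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
# a has shape (2, 3, 6): the leading dims match b, and the trailing dims (6,)
# have as many elements as b (2 * 3 = 6), as tensorsolve requires.
a = torch.randn(2, 3, 6, dtype=torch.complex128, device=device)
b = torch.randn(2, 3, dtype=torch.complex128, device=device)
x = torch.linalg.tensorsolve(a, b)
# tensorsolve returns x such that tensordot(a, x, dims=x.ndim) reproduces b.
print(torch.allclose(torch.tensordot(a, x, dims=x.ndim), b))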

@mruberry
Collaborator

mruberry commented Nov 6, 2020

Ping @anjali411 but @IvanYashchuk this needs a rebase.

@IvanYashchuk
Collaborator Author

IvanYashchuk commented Nov 6, 2020

@mruberry, Anjali and I have decided to merge these complex-valued CUDA PRs one at a time, in this order:

  • torch.inverse
  • torch.triangular_solve
  • torch.solve
  • torch.lu_solve
  • torch.cholesky_solve

Contributor

@anjali411 anjali411 left a comment

lgtm (but just needs a rebase) thank you @IvanYashchuk!

Contributor

@facebook-github-bot facebook-github-bot left a comment

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@anjali411 merged this pull request in 149190c.

tugsbayasgalan pushed a commit to tugsbayasgalan/pytorch that referenced this pull request Nov 16, 2020
Summary:
`torch.solve` now works for complex inputs on GPU.
I moved the existing tests to `test_linalg.py` and modified them to test complex and float32 dtypes.
Differentiation also works correctly with complex inputs.

Fixes pytorch#41084
Ref. pytorch#33152

anjali411 I hope you don't mind that I took over pytorch#42737

Pull Request resolved: pytorch#47045

Reviewed By: nikithamalgifb

Differential Revision: D24921503

Pulled By: anjali411

fbshipit-source-id: 4c3fc4f193a84b6e28c43c08672d480715000923
Labels
cla signed, complex_autograd, Merged, module: complex (Related to complex number support in PyTorch), module: linear algebra (Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply matmul), open source, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Support complex torch.solve on the GPU
6 participants