Added CUDA support for complex input for torch.solve #47045
Conversation
test/test_linalg.py (Outdated)

```python
@skipCPUIfNoLapack
@dtypes(torch.float64, torch.complex128)
@dtypesIfCUDA(torch.float64)
def test_solve_autograd(self, device, dtype):
```
I don't think we should add autograd tests to `test_linalg.py`. The only autograd tests in this file are in `test_autograd_and_jit`, which are tagged with a note that they should be moved to `method_tests` in `common_methods_invocations.py`.
cc. @mruberry
For now we should just add tests in `method_tests` for `solve` by adding an entry here: https://github.com/pytorch/pytorch/blob/master/test/test_autograd.py#L4928-L4929 and adding complex-specific tests similar to this (make sure to include 'complex' in the substring for each of the new tests so that they only run for complex dtypes).
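For illustration, a hedged sketch of what such additions might look like; the tuple layout, variable names, and shapes below are assumptions based on the conventions described above, not the actual code behind that link:

```python
# Hypothetical sketch only; the field layout of method_tests entries is
# assumed here as (op name, self shape, positional-arg shapes, test-name suffix).
S = 5  # small matrix size placeholder

method_tests_additions = [
    # The 'complex' substring in the test-name suffix is what restricts
    # these entries to complex dtypes, per the convention described above.
    ('solve', (S, S), ((S, S),), 'complex'),
    ('solve', (S, S, S), ((S, S, S),), 'batched_complex'),
]
```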
It might also be worth adding a `LinalgFuncInfo` to migrate the linalg autograd tests as we are extending the linalg support, but it's OK to leave that for a follow-up PR.
I recall @mruberry mentioned in other PRs to move all linalg-related tests to `test_linalg.py`, so I think that includes autograd tests that are not part of `method_tests`, and not only tests from `test_torch.py`.

Since the plan for the future is to have `LinalgFuncInfo` instead of `method_tests`, I suggest we leave this test here and test both the CPU and CUDA variants without adding an entry to `method_tests` (which is a bit troublesome because we need to create an input with the right dtype and device using `random_fullrank_matrix_distinct_singular_value(..., dtype, device)`).
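For context, a minimal sketch of the kind of in-file autograd test being discussed — assuming the helper lives in `torch.testing._internal.common_utils` and accepts `dtype` and `device` keyword arguments, as the comment above suggests; the function name and shapes are illustrative, not the PR's actual test:

```python
import torch
from torch.autograd import gradcheck
# Assumed import location for the helper mentioned above.
from torch.testing._internal.common_utils import (
    random_fullrank_matrix_distinct_singular_value,
)


def check_solve_autograd(device, dtype, n=3):
    # Build a well-conditioned n x n matrix A and right-hand side b with the
    # requested dtype/device, then compare analytic and numeric gradients.
    A = random_fullrank_matrix_distinct_singular_value(n, dtype=dtype, device=device)
    b = torch.randn(n, 1, dtype=dtype, device=device)
    A.requires_grad_(True)
    b.requires_grad_(True)
    # torch.solve returns (solution, LU); only the solution is differentiated here.
    gradcheck(lambda b, A: torch.solve(b, A)[0], (b, A))


check_solve_autograd('cpu', torch.complex128)
```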
We often debate amongst ourselves where tests related to autograd should live. Do they live with the operator, or do they live with test_autograd.py? And we haven't come to a consistent resolution yet, unfortunately, so it's reviewers' discretion. In the (near) future I expect we'll publicize a definitive plan for where these tests go. Sorry there's not a clear solution at the moment.
```
@@ -7614,87 +7614,6 @@ def run_test(matsize, batchdims, mat_chars):
        run_test(matsize, batchdims, mat_chars=['sym', 'sym_pd', 'sym_psd'])
        run_test(matsize, batchdims, mat_chars=['sing', 'non_sing'])

    def solve_test_helper(self, A_dims, b_dims, device, dtype):
```
Thanks! Good to see tests being migrated from `test_torch.py` to specific dedicated test files like `test_linalg.py`.
lgtm once the autograd tests are removed from `test_linalg.py` and added to `method_tests`.
Hi @IvanYashchuk! Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention. You currently have a record in our system, but we do not have a signature on file. In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!
Codecov Report

```
@@             Coverage Diff             @@
##           master    #47045       +/-  ##
===========================================
+ Coverage    42.06%    81.23%    +39.16%
===========================================
  Files          472      1837      +1365
  Lines        64759    198089    +133330
===========================================
+ Hits         27240    160909    +133669
+ Misses       37519     37180       -339
```
💊 CI failures summary and remediations

As of commit 2d37d7f (more details on the Dr. CI page):

💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests. Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.
@anjali411 I've removed the autograd tests from `test_linalg.py`.
```python
def test_tensorsolve_empty(self, device, dtype):
    # Check for empty inputs. NumPy does not work for these cases.
    a = torch.empty(0, 0, 1, 2, 3, 0, dtype=dtype, device=device)
    b = torch.empty(a.shape[:2], dtype=dtype, device=device)
    x = torch.linalg.tensorsolve(a, b)
    self.assertEqual(torch.tensordot(a, x, dims=len(x.shape)), b)

# TODO: once "solve_cuda" supports complex dtypes, they shall be added to above tests
```
`torch.linalg.tensorsolve` now works for complex dtypes.
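As a quick illustration, a minimal sketch of a non-empty complex-dtype call (shapes are illustrative; this is not code from the PR's test suite):

```python
import torch

# a has shape (2, 3, 6): its leading dims match b's shape, and its trailing
# dims (6,) give the shape of the solution x, with 6 == 2 * 3 as required.
a = torch.randn(2, 3, 6, dtype=torch.complex128)
b = torch.randn(2, 3, dtype=torch.complex128)
x = torch.linalg.tensorsolve(a, b)
# x satisfies tensordot(a, x) == b, mirroring the assertion in the test above.
assert torch.allclose(torch.tensordot(a, x, dims=len(x.shape)), b)
```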
Ping @anjali411, but @IvanYashchuk this needs a rebase.
@mruberry, we've decided with Anjali that we'll merge these complex-valued CUDA PRs in this order:
lgtm (but just needs a rebase) thank you @IvanYashchuk!
Removed unused imports
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@anjali411 merged this pull request in 149190c.
Summary: `torch.solve` now works for complex inputs on GPU. I moved the existing tests to `test_linalg.py` and modified them to test complex and float32 dtypes. Differentiation also works correctly with complex inputs.

Fixes pytorch#41084
Ref. pytorch#33152
@anjali411 I hope you don't mind that I took over pytorch#42737

Pull Request resolved: pytorch#47045
Reviewed By: nikithamalgifb
Differential Revision: D24921503
Pulled By: anjali411
fbshipit-source-id: 4c3fc4f193a84b6e28c43c08672d480715000923
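For reference, a minimal sketch of what this enables (assumes a CUDA build of PyTorch with a GPU available; not code from the PR):

```python
import torch

if torch.cuda.is_available():
    device = torch.device('cuda')
    A = torch.randn(3, 3, dtype=torch.complex128, device=device)
    b = torch.randn(3, 1, dtype=torch.complex128, device=device)
    # torch.solve(b, A) solves A @ x = b and also returns A's LU factorization.
    x, lu = torch.solve(b, A)
    assert torch.allclose(A @ x, b)
```

Note that in later PyTorch releases `torch.solve` was deprecated in favor of `torch.linalg.solve`, which takes the arguments in the opposite order (the matrix first, then the right-hand side).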