
Added linalg.cholesky #46083

Closed · wants to merge 62 commits

Conversation

@IvanYashchuk (Collaborator):

This PR adds a torch.linalg.cholesky function that matches numpy.linalg.cholesky.

Fixed the lda argument of the lapackCholesky calls.
Added a random_hermitian_pd_matrix helper function for tests.

Ref #42666.
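For context, numpy.linalg.cholesky (which the new op mirrors) returns the lower-triangular factor L of a Hermitian positive-definite matrix A, with A = L L^H. A minimal numpy sketch of the property the tests check — the random Hermitian PD construction below is an illustrative stand-in for the random_hermitian_pd_matrix helper, not its actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
# a @ a^H + c*I is Hermitian positive-definite for c > 0
h = a @ a.conj().T + 4 * np.eye(4)

l = np.linalg.cholesky(h)
assert np.allclose(np.tril(l), l)      # the factor is lower triangular
assert np.allclose(l @ l.conj().T, h)  # and reconstructs the input
```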

@IvanYashchuk IvanYashchuk added module: numpy Related to numpy support, and also numpy compatibility of our operators module: linear algebra Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply matmul labels Oct 9, 2020
@IvanYashchuk (Collaborator, Author):

I'm not sure whether what I did in torch/csrc/api/include/torch/linalg.h is correct.
The existing functions are named linalg_det and linalg_norm, and they are in the linalg namespace. Are they accessed as torch::linalg::linalg_det?

dr-ci bot commented Oct 9, 2020

💊 CI failures summary and remediations

As of commit eb507e5 (more details on the Dr. CI page):


  • 1/3 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)
  • 2/3 broken upstream at merge base 1aeac97 since Nov 13

🚧 2 ongoing upstream failures:

These were probably caused by upstream breakages that are not fixed yet:


Extra GitHub checks: 1 failed



codecov bot commented Oct 11, 2020

Codecov Report

Merging #46083 into master will increase coverage by 34.70%.
The diff coverage is 100.00%.

@@             Coverage Diff             @@
##           master   #46083       +/-   ##
===========================================
+ Coverage   26.11%   60.81%   +34.70%     
===========================================
  Files         437     2748     +2311     
  Lines       55231   254054   +198823     
===========================================
+ Hits        14424   154514   +140090     
- Misses      40807    99540    +58733     

# Quoted from the PR's test code (shape, batch, dtype, device come from the test parametrization):
from torch.testing._internal.common_utils import random_hermitian_pd_matrix

A = random_hermitian_pd_matrix(shape, *batch, dtype=dtype, device=device)
expected_L = np.linalg.cholesky(A.cpu().numpy())
actual_L = torch.linalg.cholesky(A)
Collaborator:

Does this produce the aforementioned RuntimeError? If so, then I think it's probably better to use a with self.assertRaisesRegex(...):, rather than expectedFailure.

Collaborator (Author):

The idea is to use an xfail test (unittest.expectedFailure) because this is a bug we can't fix right now.
I would use self.assertRaisesRegex(...) (and consequently have the test pass) for intended errors that are not bugs.
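A minimal stdlib-only sketch of the distinction (hypothetical test names, not the actual PR tests):

```python
import unittest

class Demo(unittest.TestCase):
    # Known bug we can't fix yet: xfail keeps CI green while still
    # surfacing an "unexpected success" once the bug gets fixed.
    @unittest.expectedFailure
    def test_known_bug(self):
        self.assertEqual(1 + 1, 3)

    # Intended error: assert the exact exception type and message; the test passes.
    def test_intended_error(self):
        with self.assertRaisesRegex(ValueError, "must be positive"):
            raise ValueError("input must be positive definite")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
assert len(result.expectedFailures) == 1
```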

Resolved review threads on test/test_linalg.py (outdated).
@kurtamohler (Collaborator) left a comment:

Looks good overall to me.

Does linalg.cholesky support JIT and autograd? If so, maybe we should have a test case for it in test/test_linalg.py:test_autograd_and_jit.
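A sketch of what an autograd smoke test for the new op could look like, assuming a torch build that includes torch.linalg.cholesky (this is not the actual test_autograd_and_jit entry):

```python
import torch

# Build a well-conditioned symmetric positive-definite input in double precision.
a = torch.randn(3, 3, dtype=torch.float64)
spd = (a @ a.T + 3 * torch.eye(3, dtype=torch.float64)).requires_grad_(True)

l = torch.linalg.cholesky(spd)
l.sum().backward()  # gradients flow back through the factorization
assert spd.grad is not None
assert spd.grad.shape == spd.shape
```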

@IvanYashchuk (Collaborator, Author):

Hey, @mruberry, this is now ready to be merged.

@facebook-github-bot (Contributor) left a comment:

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@IvanYashchuk (Collaborator, Author):

@mruberry, I'm sorry; there was an error that I didn't notice when updating the branch. I think this PR needs to be imported again.

@IvanYashchuk (Collaborator, Author):

Resolved merge conflicts.

@mruberry (Collaborator):

@IvanYashchuk More merge conflicts, unfortunately. You may want to add the tests at a different place in test_linalg.py to try to avoid these conflicts.

@IvanYashchuk (Collaborator, Author):

@mruberry, I updated the branch.

@facebook-github-bot (Contributor) left a comment:

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mruberry (Collaborator):

Test failure is real:

test_cholesky_autograd_xfailed_cuda_complex128 - TestLinalgCUDA

@IvanYashchuk (Collaborator, Author):

So batched matmul for complex-valued inputs on CUDA now works. I fixed the tests.

@mruberry (Collaborator):

This part of the lint failure is real:

  {
    path: 'test/test_linalg.py',
    start_line: 214,
    end_line: 214,
    start_column: 57,
    end_column: 57,
    annotation_level: 'failure',
    message: '[E271] multiple spaces after keyword'
  },

Sorry for the noise, the other failure is in the base. The ROCm failure appears unrelated.
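For reference, pycodestyle's E271 fires on extra whitespace after a keyword; a minimal illustration with the offending form shown alongside the fix:

```python
# Would be flagged as E271 (two spaces after the keyword `and`):
#     ok = True and  False
# Fixed, with a single space after the keyword:
ok = True and False
assert ok is False
```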

@facebook-github-bot (Contributor) left a comment:

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor):

@mruberry merged this pull request in 260daf0.

Labels: cla signed · Merged · module: linear algebra · module: numpy · open source

7 participants