Added linalg.slogdet #49194
Conversation
💊 CI failures summary and remediations

As of commit 8fbdf60 (more details on the Dr. CI page):

💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI. This comment has been revised 93 times.
This test failure looks real:
squareCheckInputs(self);
TORCH_CHECK((at::isFloatingType(self.scalar_type()) || at::isComplexType(self.scalar_type())),
            "Expected a floating point tensor as input");
TORCH_CHECK((at::isFloatingType(self.scalar_type()) || at::isComplexType(self.scalar_type())) && self.dim() >= 2,
Does this handle bfloat16 and float16 inputs, too?
No, this was copied from the old slogdet code, and I didn't check what isFloatingType does.
I'll change it to accept only float, double, cfloat, and cdouble.
Codecov Report
@@            Coverage Diff            @@
##           master   #49194    +/-   ##
========================================
  Coverage   80.67%   80.68%
========================================
  Files        1910     1910
  Lines      207864   207890      +26
========================================
+ Hits       167697   167727      +30
+ Misses      40167    40163       -4
@mruberry, holidays are over and I've finally updated this pull request. It's ready for another round of review.
Looks like this just needs a rebase. Also, I thought we had to register complex gradients somewhere? Did I miss that or is it not needed in this PR? Sorry for the delay in reviewing this, by the way. It's been an incredibly busy week. Things should get back to normal next week.
Awesome.
Sorry for the delay in reviewing this, @IvanYashchuk. It's been an incredibly busy week, and I appreciate your help reviewing cuSOLVER svd and lstsq. Things should be back to normal next week, but if you're enjoying helping with reviews it'd be great to have you review more often. Let me know.
On this PR, I thought supporting complex autograd required adding the function's name to a list somewhere - did I miss that in this PR or does linalg.slogdet not need to do that for some reason?
Finally, this just needs a rebase (edit: and let's remove some method_test entries).
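For context, complex autograd support in PyTorch at this time was gated by an allowlist consumed by the autograd code generator; the registration being asked about worked roughly along these lines. The set name and the codegen location (`tools/autograd/gen_variable_type.py`) are stated from memory and should be treated as assumptions, and the sketch below is a stand-in, not the actual codegen:

```python
# Assumed structure: ops whose backward formulas have been verified for
# complex inputs are collected in an allowlist; adding an op's name here
# is what "registering complex gradients" refers to in this thread.
GRADIENT_IMPLEMENTED_FOR_COMPLEX = {
    "slogdet",   # the op touched by this PR would be listed here
    # ... other verified op names ...
}

def complex_autograd_enabled(op_name: str) -> bool:
    # The code generator consults the allowlist when emitting backward
    # code, raising an error for complex inputs to unlisted ops.
    return op_name in GRADIENT_IMPLEMENTED_FOR_COMPLEX
```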
skips=(
    # These tests do not work with output_func=itemgetter(1)
    # TODO: remove this once https://github.com/pytorch/pytorch/issues/49326 is resolved
    SkipInfo('TestCommon', 'test_variant_consistency_jit'),)),
While we're here let's remove the slogdet entries in method_tests:
('slogdet', lambda dtype, device: make_nonzero_det(torch.randn(1, 1), 1), NO_ARGS,
slogdet is just a deprecated alias for linalg.slogdet at this point.
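As a reminder of what the aliased function computes: slogdet returns the sign of the determinant together with the log of its absolute value. A minimal pure-Python sketch for a real 2×2 matrix (a stand-in for `torch.linalg.slogdet`, which of course operates on tensors and batches):

```python
import math

def slogdet_2x2(m):
    """Return (sign, logabsdet) for a real 2x2 matrix, mirroring the
    pair that linalg.slogdet returns."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0.0:
        # Singular matrix convention: sign 0, logabsdet -inf.
        return 0.0, float("-inf")
    return math.copysign(1.0, det), math.log(abs(det))
```

For example, `slogdet_2x2([[2.0, 0.0], [0.0, 3.0]])` yields sign `1.0` and logabsdet `log(6)`.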
@mruberry I removed method_test entries.
Sure! Assign me as a reviewer wherever you think I can help.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Import is needed:
@mruberry I am sorry, I overlooked that.
No worries. Happens to everyone.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Fixes #51652. In particular:

- the main implementation is in `torch.linalg.det` now; `torch.det` is just a deprecated alias to it
- add a new `OpInfo` for `torch.linalg.det`
- remove the old-style tests for `torch.det` (this is similar to what we did for `torch.linalg.slogdet`, see #49194)
- added an `out=` argument to `torch.linalg.det`, but **not** to `torch.det`

It is worth noting that I had to skip a few tests:

- `TestGradientsCuda::test_fn_gradgrad_linalg_det_cuda_float64`. This is not a regression: the functionality is broken also on master, but the test is not executed properly due to #53361.

And the following tests, which fail only on ROCm:

- `test_variant_consistency_jit_cuda_{float64,float32}`
- `test_fn_grad_cuda_float64`

I think the ROCm tests fail because the current linalg.det backward is unstable if the matrix has repeated singular values, see #53364. (At the moment of writing some CI jobs are still running, but I believe the build will be green, since the only difference wrt the last push is the skip of the ROCm tests.)

Pull Request resolved: #53119
Reviewed By: H-Huang
Differential Revision: D27441999
Pulled By: mruberry
fbshipit-source-id: 5eab14c4f0a165e0cf9ec626c3f4bb23359f2a9e
This PR adds `torch.linalg.slogdet`.

Changes compared to the original `torch.slogdet`:

- changed `slogdet_backward` to work with complex input

Ref. #42666