
Added linalg.slogdet #49194

Closed · wants to merge 46 commits into from

Conversation

@IvanYashchuk (Collaborator) commented Dec 10, 2020

This PR adds torch.linalg.slogdet.

Changes compared to the original torch.slogdet:

  • Complex input now works, matching NumPy's behavior
  • Added an out= variant (for now it allocates a temporary and copies the result)
  • Updated slogdet_backward to work with complex input

Ref. #42666
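
For illustration, a minimal usage sketch of the new behavior (the tensors and shapes here are examples, not taken from this PR's tests):

import torch

a = torch.randn(3, 3, dtype=torch.complex128)

# For complex input, sign is complex and logabsdet is real, matching numpy.linalg.slogdet
sign, logabsdet = torch.linalg.slogdet(a)

# New out= variant; per the note above it currently allocates a temporary and copies the result
sign_out = torch.empty_like(sign)
logabsdet_out = torch.empty_like(logabsdet)
torch.linalg.slogdet(a, out=(sign_out, logabsdet_out))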

@IvanYashchuk added the "module: numpy" and "module: linear algebra" labels on Dec 10, 2020
@facebook-github-bot (Contributor) commented Dec 10, 2020

💊 CI failures summary and remediations

As of commit 8fbdf60 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@IvanYashchuk IvanYashchuk marked this pull request as ready for review December 14, 2020 11:53
@IvanYashchuk IvanYashchuk requested review from mruberry and removed request for albanD and glaringlee December 14, 2020 11:53
@mruberry (Collaborator):

This test failure looks real:

test_slogdet_cpu_float32 - TestLinalgCPU
test_linalg.py

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 273, in instantiated_test
    result = test_fn(self, *args)
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 545, in dep_fn
    return fn(slf, device, *args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_device_type.py", line 545, in dep_fn
    return fn(slf, device, *args, **kwargs)
  File "test_linalg.py", line 4726, in test_slogdet
    run_test(matsize, batchdims, mat_chars=['singular'])
  File "test_linalg.py", line 4712, in run_test
    self.assertEqual(expected_value[0], actual_value[0], atol=self.precision, rtol=self.precision)
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1187, in assertEqual
    exact_dtype=exact_dtype, exact_device=exact_device)
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1068, in assertEqual
    exact_dtype=exact_dtype, exact_device=exact_device)
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1198, in assertEqual
    super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
AssertionError: False is not true : Scalars failed to compare as equal! Comparing 1.0 and 0.0 gives a difference of 1.0, but the allowed difference with rtol=0.001 and atol=0.001 is only 0.001!
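
For context, index [0] here is the sign output, and the failing case uses singular matrices; a quick sketch of the expected convention (assuming NumPy-compatible behavior as described in this PR):

import torch

s = torch.zeros(3, 3)  # singular matrix
sign, logabsdet = torch.linalg.slogdet(s)
print(sign, logabsdet)  # expected: tensor(0.) and tensor(-inf)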

squareCheckInputs(self);
TORCH_CHECK((at::isFloatingType(self.scalar_type()) || at::isComplexType(self.scalar_type())),
"Expected a floating point tensor as input");
TORCH_CHECK((at::isFloatingType(self.scalar_type()) || at::isComplexType(self.scalar_type())) && self.dim() >= 2,
Collaborator:

Does this handle bfloat16 and float16 inputs, too?

@IvanYashchuk (Collaborator, Author):

No, this was copied from the old slogdet code and I didn't check what isFloatingType does.
I'll change it to accept only float, double, cfloat, and cdouble.
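
A sketch of the intended dtype behavior after that change (the exact error type and message are assumptions, not shown in this thread):

import torch

for dt in (torch.float32, torch.float64, torch.complex64, torch.complex128):
    torch.linalg.slogdet(torch.eye(2, dtype=dt))  # expected to be accepted

try:
    torch.linalg.slogdet(torch.eye(2, dtype=torch.float16))  # expected to be rejected
except RuntimeError as err:
    print("half-precision input rejected:", err)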

@codecov bot commented Jan 7, 2021

Codecov Report

Merging #49194 (5440f7a) into master (30e45bb) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master   #49194   +/-   ##
=======================================
  Coverage   80.67%   80.68%           
=======================================
  Files        1910     1910           
  Lines      207864   207890   +26     
=======================================
+ Hits       167697   167727   +30     
+ Misses      40167    40163    -4     

@IvanYashchuk (Collaborator, Author):

@mruberry, holidays are over and I've finally updated this pull request. It's ready for another round of review.

@mruberry (Collaborator):

Looks like this just needs a rebase. Also, I thought we had to register complex gradients somewhere? Did I miss that or is it not needed in this PR?

Sorry for the delay in reviewing this, by the way. It's been an incredibly busy week. Things should get back to normal next week.

@mruberry mruberry self-requested a review January 13, 2021 09:55
@mruberry (Collaborator) left a comment:

Awesome.

Sorry for the delay in reviewing this, @IvanYashchuk. It's been an incredibly busy week, and I appreciate your help reviewing cuSOLVER svd and lstsq. Things should be back to normal next week, but if you're enjoying helping with reviews it'd be great to have you review more often. Let me know.

On this PR, I thought supporting complex autograd required adding the function's name to a list somewhere - did I miss that in this PR or does linalg.slogdet not need to do that for some reason?

Finally, this just needs a rebase (edit: and let's remove some method_test entries).

skips=(
# These tests do not work with output_func=itemgetter(1)
# TODO: remove this once https://github.com/pytorch/pytorch/issues/49326 is resolved
SkipInfo('TestCommon', 'test_variant_consistency_jit'),)),
Collaborator:

While we're here let's remove the slogdet entries in method_tests:

('slogdet', lambda dtype, device: make_nonzero_det(torch.randn(1, 1), 1), NO_ARGS,

slogdet is just a deprecated alias for linalg.slogdet at this point.

@IvanYashchuk (Collaborator, Author) commented Jan 14, 2021

@mruberry I removed method_test entries.
linalg.slogdet does not need to be registered for complex autograd because this function maps complex-valued input to real-valued output and it's not differentiable wrt sign (which is complex). See https://github.com/pytorch/pytorch/blob/master/tools/autograd/gen_variable_type.py#L69-L70
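
For illustration, a small sketch of that point (the lambda-based gradcheck call is illustrative, not the PR's test code):

import torch

a = torch.randn(3, 3, dtype=torch.complex128, requires_grad=True)
sign, logabsdet = torch.linalg.slogdet(a)
print(sign.dtype, logabsdet.dtype)  # complex128, float64

# Gradients flow through the real-valued logabsdet output only
torch.autograd.gradcheck(lambda x: torch.linalg.slogdet(x)[1], (a,))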

if you're enjoying helping with reviews it'd be great to have you review more often. Let me know.

Sure! Assign me as a reviewer wherever you think I can help.

@facebook-github-bot (Contributor) left a comment:

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mruberry (Collaborator):

Import is needed:

Jan 15 01:13:12   File "test_ops.py", line 60, in <module>
Jan 15 01:13:12     _gradcheck_ops = partial(ops, dtypes=OpDTypes.supported,
Jan 15 01:13:12 NameError: name 'partial' is not defined
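
The NameError suggests test_ops.py is missing an import; a one-line fix along these lines (exact placement in the file may differ):

from functools import partial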

@IvanYashchuk (Collaborator, Author):

@mruberry I am sorry I overlooked that.

@mruberry (Collaborator):

@mruberry I am sorry I overlooked that.

No worries. Happens to everyone.

@facebook-github-bot (Contributor) left a comment:

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor):

@mruberry merged this pull request in f9a5ba7.

@antocuni antocuni mentioned this pull request Mar 5, 2021
facebook-github-bot pushed a commit that referenced this pull request Apr 5, 2021
Summary:
Fixes #51652.
In particular:
- the main implementation is in `torch.linalg.det` now; `torch.det` is just a deprecated alias for it
- add a new `OpInfo` for `torch.linalg.det`
- remove the old-style tests for `torch.det` (this is similar to what we did for `torch.linalg.slogdet`, see #49194)
- added an `out=` argument to `torch.linalg.det`, but **not** to `torch.det`.

It is worth noting that I had to skip a few tests:
- `TestGradientsCuda::test_fn_gradgrad_linalg_det_cuda_float64`. This is not a regression: the functionality is broken also on master, but the test is not executed properly due to #53361.

And the following tests which fails only on ROCm:
- `test_variant_consistency_jit_cuda_{float64,float32}`
- `test_fn_grad_cuda_float64`

I think that the ROCm tests fail because the current linalg.det backward is unstable if the matrix has repeated singular values, see #53364 .

(At the moment of writing some CI jobs are still running but I believe the build will be green, since the only difference wrt the last push is the skip of the ROCm tests)

Pull Request resolved: #53119

Reviewed By: H-Huang

Differential Revision: D27441999

Pulled By: mruberry

fbshipit-source-id: 5eab14c4f0a165e0cf9ec626c3f4bb23359f2a9e
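
For illustration, a hedged sketch of the behavior described in that commit summary (values are examples only):

import torch

a = torch.randn(2, 3, 3)
out = torch.empty(2)
torch.linalg.det(a, out=out)  # out= is supported on the linalg entry point
d = torch.det(a)              # deprecated alias; no out= argument
assert torch.allclose(out, d)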