
Make torch.svd return V, not V.conj() for complex inputs #51012

Closed · wants to merge 10 commits

Conversation

@IvanYashchuk (Collaborator) commented Jan 24, 2021

BC-breaking note:

torch.svd() added support for complex inputs in PyTorch 1.7, but was not documented as doing so. The complex "V" tensor returned was actually the complex conjugate of what's expected. This PR fixes the discrepancy.

Note that this means this PR silently breaks all current users of torch.svd() with complex inputs.
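
To make the break concrete, here is a minimal sketch of the before/after behavior (the input is illustrative, not taken from the PR itself):

```python
import torch

a = torch.randn(3, 3, dtype=torch.complex128)
u, s, v = torch.svd(a)

# Before this PR, v held V.conj(), so a plain transpose reconstructed a:
#     a ≈ (u * s) @ v.t()
# After this PR, v holds V, so the conjugate transpose is needed:
assert torch.allclose((u * s) @ v.conj().t(), a)
```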

Original PR Summary:

This PR resolves #45821.

The problem was that, when support for complex inputs was added to torch.svd, it was overlooked that LAPACK/MAGMA returns the conjugate transpose of the V matrix, not just the transpose of V. As a result, torch.svd was silently returning U, S, V.conj() instead of U, S, V.

The behavior of torch.linalg.pinv, torch.pinverse, and torch.linalg.svd (which depend on torch.svd) is not changed in this PR.
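
For reference, a short sketch of the two return conventions under the post-fix semantics (an illustrative example; since the factors from two separate calls can differ by a per-column phase, the reconstructions are compared rather than the factors themselves):

```python
import torch

a = torch.randn(3, 3, dtype=torch.complex128)

u, s, v = torch.svd(a)                                 # returns V
u2, s2, vh = torch.linalg.svd(a, full_matrices=False)  # returns V^H

# torch.svd's V enters the reconstruction through its conjugate transpose;
# torch.linalg.svd's V^H is used as-is:
assert torch.allclose((u * s) @ v.conj().t(), a)
assert torch.allclose((u2 * s2) @ vh, a)
```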

@IvanYashchuk added the module: docs, module: complex, and module: linear algebra labels on Jan 24, 2021
@IvanYashchuk (Collaborator, Author) commented:

This link points to the current documentation, and here is the updated rendered documentation. The inconsistent use of italics and code highlighting is fixed (in favor of italics), and inline math sections are replaced with italics as well.

[screenshots of the rendered documentation]

@facebook-github-bot (Contributor) commented Jan 24, 2021

💊 CI failures summary and remediations

As of commit 7fc28c5 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@IvanYashchuk (Collaborator, Author) commented Jan 25, 2021

XLA's torch.pinverse test fails. This test compares the output of regular PyTorch with XLA's PyTorch: https://github.com/pytorch/xla/blob/master/test/cpp/test_aten_xla_tensor.cpp#L3341-L3353
XLA's SVD is implemented here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/client/lib/svd.cc#L808-L878
A possible reason for the failure is that this PR changed at::pinverse to use at::linalg_svd instead of at::svd.
A quick fix for this PR would be to keep using at::svd for now, until the relevant bindings are implemented in pytorch/xla.
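
For context, a pseudoinverse built on top of an SVD looks roughly like the following sketch. It uses the post-fix torch.svd convention and is not the actual at::pinverse implementation; the function name and the rcond cutoff are illustrative:

```python
import torch

def pinv_via_svd(a: torch.Tensor, rcond: float = 1e-15) -> torch.Tensor:
    # pinv(A) = V @ diag(1/S) @ U^H, zeroing singular values below a cutoff
    u, s, v = torch.svd(a)  # v is V after this PR (it was V.conj() before)
    cutoff = rcond * s.max()
    s_inv = torch.where(s > cutoff, s.reciprocal(), torch.zeros_like(s))
    return (v * s_inv) @ u.conj().t()
```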

@IvanYashchuk marked this pull request as ready for review on January 25, 2021, 12:16
@mruberry (Collaborator) commented Jan 25, 2021

> XLA's torch.pinverse test fails. This test compares the output of regular PyTorch with XLA's PyTorch: https://github.com/pytorch/xla/blob/master/test/cpp/test_aten_xla_tensor.cpp#L3341-L3353
> XLA's SVD is implemented here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/client/lib/svd.cc#L808-L878
> A possible reason for the failure is that this PR changed at::pinverse to use at::linalg_svd instead of at::svd.
> A quick fix for this PR would be to keep using at::svd for now, until the relevant bindings are implemented in pytorch/xla.

cc @ailzhang @JackCaoG

Continuing to use at::svd to implement pinverse (for now) sounds good.

@mruberry (Collaborator) commented Jan 25, 2021

> This PR resolves #45821.
>
> The problem was that, when support for complex inputs was added to torch.svd, it was overlooked that LAPACK/MAGMA returns the conjugate transpose of the V matrix, not just the transpose of V. As a result, torch.svd was silently returning U, S, V.conj() instead of U, S, V.
>
> The behavior of torch.linalg.pinv, torch.pinverse, and torch.linalg.svd (which depend on torch.svd) is not changed in this PR.

Just so I'm confident I understand what's going on here:

  • torch.svd has this weird thing where it returns V^T for real-valued inputs, not V
  • the generalization of that for complex inputs is that it would return the conjugate transpose of V, V^H
  • when complex support was added to torch.svd, it was mistakenly added so that V was returned, not V^H
  • this PR addresses that inconsistency

Is that correct?

Follow-up question: how long has torch.svd been returning V and not V^H for complex inputs?

@IvanYashchuk (Collaborator, Author) commented Jan 25, 2021

LAPACK and MAGMA return V.transpose() for real inputs and V.conj().transpose() for complex inputs.
PyTorch then transposes the LAPACK/MAGMA output, so on the current master branch torch.svd returns V.transpose().transpose() -> V for real inputs and V.conj().transpose().transpose() -> V.conj() for complex inputs.
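
To make the conjugation concrete, a small sketch (assuming current torch.linalg.svd, which returns the LAPACK-style V^H directly):

```python
import torch

a = torch.randn(3, 3, dtype=torch.complex128)
_, _, vh = torch.linalg.svd(a)  # vh is V.conj().transpose(), as LAPACK returns it

v_pre_fix = vh.transpose(-2, -1)       # plain transpose yields V.conj() (the old bug)
v_fixed = vh.transpose(-2, -1).conj()  # conjugate transpose recovers V
```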

> torch.svd has this weird thing where it returns V^T for real-valued inputs, not V
> the generalization of that for complex inputs is that it would return the conjugate transpose of V, V^H
> when complex support was added to torch.svd, it was mistakenly added so that V was returned, not V^H

  • No, torch.svd returns V for real-valued inputs
  • The generalization to complex inputs is to return V as well. But it was mistakenly added so that V.conj() is returned, not V
  • This PR addresses that inconsistency

torch.svd has been returning V.conj(), not V, since October 3rd, 2020; this behavior was introduced in PR #45795.

@mruberry (Collaborator) commented Jan 25, 2021

Got it, thank you for clarifying. This all makes sense now. Looks like this support is part of PyTorch 1.7.1, so this will be a BC-breaking change. Worse, it's a silent correctness issue. I expanded the PR summary with a BC-breaking note reflecting the current state of this PR (please correct me if I'm still missing something).

It would be safer if we could disallow complex inputs to torch.svd, even if internally we rely on at::svd with complex inputs. I can't think of any reasonable way to do that, however. Luckily torch.svd is being deprecated and this functionality was never documented, so it may be OK to change this.

@mruberry added the module: bc-breaking label on Jan 25, 2021
@albanD (Collaborator) left a comment

Thanks for the cleanup @IvanYashchuk!

The PR looks good to me.

@facebook-github-bot (Contributor) left a comment

@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@codecov bot commented Jan 25, 2021

Codecov Report

Merging #51012 (7fc28c5) into master (627a331) will increase coverage by 0.21%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master   #51012      +/-   ##
==========================================
+ Coverage   80.70%   80.91%   +0.21%     
==========================================
  Files        1926     1926              
  Lines      210012   210020       +8     
==========================================
+ Hits       169485   169933     +448     
+ Misses      40527    40087     -440     

@albanD merged this pull request in ddf2681.

Labels: cla signed, Merged, module: bc-breaking, module: complex, module: docs, module: linear algebra, open source
Successfully merging this pull request may close these issues.

Singular value decomposition of complex valued matrices does not match mathematical definition
5 participants