
Added Kronecker product of tensors (torch.kron) #45358

Closed
wants to merge 32 commits

Conversation

IvanYashchuk
Collaborator

@IvanYashchuk IvanYashchuk commented Sep 25, 2020

This PR adds a function for calculating the Kronecker product of tensors.
The implementation is based on at::tensordot with permutations and reshape.
Tests pass.

TODO:

  • Add more test cases
  • Write documentation
  • Add entry to common_methods_invocations.py

Ref. #42666
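The tensordot + permute + reshape idea can be sketched in NumPy (a hedged illustration of the approach, not the PR's C++ code; `kron_via_tensordot` is a hypothetical name, and both inputs are assumed to already have the same number of dimensions):

```python
import numpy as np

def kron_via_tensordot(a, b):
    # Sketch of the approach described above: an outer tensordot
    # followed by an axis permutation and a reshape.
    # Assumes a.ndim == b.ndim.
    ndim = a.ndim
    # Outer product: result has shape a.shape + b.shape.
    c = np.tensordot(a, b, axes=0)
    # Interleave the axes of a and b: (a0, b0, a1, b1, ...).
    perm = [ax for i in range(ndim) for ax in (i, ndim + i)]
    c = c.transpose(perm)
    # Merge each (ai, bi) pair into a single axis of size ai * bi.
    return c.reshape([a.shape[i] * b.shape[i] for i in range(ndim)])

a = np.array([[1, 2], [3, 4]])
b = np.eye(2)
print(np.array_equal(kron_via_tensordot(a, b), np.kron(a, b)))  # True
```

In the matrix case the outer product has shape (m, n, p, q), the permutation reorders it to (m, p, n, q), and the reshape produces the (m*p, n*q) block matrix.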

@dr-ci

dr-ci bot commented Sep 25, 2020

💊 CI failures summary and remediations

As of commit a1b3255 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@IvanYashchuk IvanYashchuk marked this pull request as ready for review September 28, 2020 17:12
@IvanYashchuk IvanYashchuk added the module: numpy Related to numpy support, and also numpy compatibility of our operators label Sep 29, 2020
@codecov

codecov bot commented Sep 29, 2020

Codecov Report

Merging #45358 into master will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master   #45358   +/-   ##
=======================================
  Coverage   60.81%   60.81%           
=======================================
  Files        2748     2748           
  Lines      254027   254070   +43     
=======================================
+ Hits       154488   154522   +34     
- Misses      99539    99548    +9     

@zhangguanheng66 zhangguanheng66 added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Sep 29, 2020
Contributor

@vishwakftw vishwakftw left a comment

Overall, looks good to me. Can we get some benchmarks too, e.g. a comparison to np.kron?

@IvanYashchuk
Collaborator Author

IvanYashchuk commented Oct 5, 2020

Here is the code for the benchmark.
The first column shows the input shapes.

Using pytorch 1.7.0a0+18e2767
Using cupy 8.0.0
Using numpy 1.19.2
[-------------------------------------------- kron torch.float64 -------------------------------------------]
                                  |  torch.kron CUDA  |  cupy.kron CUDA  |  torch.kron CPU  |  numpy.kron CPU
8 threads: --------------------------------------------------------------------------------------------------
      ((32,), (2, 16, 32))        |        54.5       |      224.8       |       47.4       |       128.2    
      ((32,), (32, 32))           |        39.1       |      169.7       |       44.8       |       114.8    
      ((32,), (32,))              |        25.3       |      123.9       |       15.7       |        43.6    
      ((32, 32), (2, 16, 32))     |       115.4       |      306.7       |      271.0       |      3968.2    
      ((32, 32), (32, 32))        |       113.6       |      260.8       |      207.0       |      3207.8    
      ((32, 32), (32,))           |        24.8       |      263.8       |       21.9       |       167.4    
      ((2, 16, 32), (2, 16, 32))  |       114.9       |      374.2       |      267.5       |      4214.7    
      ((2, 16, 32), (32, 32))     |       113.7       |      339.4       |      210.6       |      4026.5    
      ((2, 16, 32), (32,))        |        25.2       |      299.9       |       21.6       |       181.2 

[-------------------------------------------- kron torch.float32 -------------------------------------------]
                                  |  torch.kron CUDA  |  cupy.kron CUDA  |  torch.kron CPU  |  numpy.kron CPU
8 threads: --------------------------------------------------------------------------------------------------
      ((32,), (2, 16, 32))        |        52.3       |      234.5       |       37.9       |        90.9    
      ((32,), (32, 32))           |        38.0       |      173.7       |       36.5       |        80.0    
      ((32,), (32,))              |        23.2       |      126.0       |       14.6       |        41.3    
      ((32, 32), (2, 16, 32))     |        71.2       |      318.2       |      212.8       |      2125.3    
      ((32, 32), (32, 32))        |        59.7       |      259.0       |      116.1       |      1686.4    
      ((32, 32), (32,))           |        23.9       |      256.9       |       19.1       |       120.7    
      ((2, 16, 32), (2, 16, 32))  |        61.7       |      298.4       |      207.3       |      2127.6    
      ((2, 16, 32), (32, 32))     |        57.5       |      312.1       |      115.8       |      2107.0    
      ((2, 16, 32), (32,))        |        25.0       |      300.2       |       19.8       |       132.2    

Times are in microseconds (us).
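The linked benchmark script isn't reproduced here; as a rough standard-library sketch of how one column of such a table (numpy.kron on CPU) can be measured, with shapes taken from the first column above:

```python
import timeit

import numpy as np

# Hedged sketch, not the script linked above: time numpy.kron on a few
# of the shape pairs from the benchmark tables and print microseconds.
shape_pairs = [((32,), (32,)), ((32, 32), (32, 32)), ((2, 16, 32), (32, 32))]
for sa, sb in shape_pairs:
    a, b = np.random.rand(*sa), np.random.rand(*sb)
    per_call_s = timeit.timeit(lambda: np.kron(a, b), number=100) / 100
    print(f"{sa} x {sb}: {per_call_s * 1e6:.1f} us")
```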

@vishwakftw
Contributor

Any idea why some cases are not as fast?

@IvanYashchuk
Collaborator Author

Well, the implementation is different. NumPy's implementation is based on outer (link to code), while this PR uses tensordot. torch.outer is for vectors only and would probably require more manipulation to get it right.

@IvanYashchuk
Collaborator Author

Alright, I've realized that the previous timings were in debug mode 😄
I've updated the previous post. Now we see that the current implementation is faster than NumPy and CuPy.

@ethanhs

ethanhs commented Oct 23, 2020

Hi, this is really exciting to see. I was hoping to use the Kronecker product with complex tensors, but I couldn't discern whether this would support that. I look forward to using it!

@IvanYashchuk
Collaborator Author

Updated the _out implementation to use at::native::resize_output.
Explicit dispatch: Math is now used in native_functions.yml.
Added non-contiguous test cases.
Documentation now includes the mathematical definition of the Kronecker product as on the Wikipedia page (tested locally that it now renders correctly).
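For reference, the matrix-case definition as it appears on the Wikipedia page (standard notation, not copied from the PR's docs source):

```latex
\mathbf{A} \otimes \mathbf{B} =
\begin{bmatrix}
a_{11}\mathbf{B} & \cdots & a_{1n}\mathbf{B} \\
\vdots           & \ddots & \vdots           \\
a_{m1}\mathbf{B} & \cdots & a_{mn}\mathbf{B}
\end{bmatrix}
```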


Computes the Kronecker product, denoted by :math:`\otimes`, of :attr:`input` and :attr:`other`.

If :attr:`input` is a :math:`(m \times n)` tensor and :attr:`other` is a
Collaborator

@mruberry mruberry Oct 26, 2020

We didn't discuss this previously, but is the Kronecker product defined if either A or B isn't a matrix? Should we add a check for that?

Collaborator Author

I think the Kronecker product is defined mathematically only for matrices. We can think of a vector as an m×1 matrix and a scalar as a 1×1 matrix; then everything works.
Vectors are tested here (as shape (4,)).
As for n-dimensional arrays with n>2, NumPy extends the definition as described in the notes section to "blocks of the second tensor scaled by the first tensor".
So kron does not support batching.

Sometimes it's said that for matrices the Kronecker product == the tensor outer product, but this is not true for tensors in general. For the example from the Wiki article about the tensor product, kron would give a tensor with dimensions (3*1, 5*10, 7*100).
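That shape arithmetic can be checked against NumPy; the shapes below are assumed for illustration (three axes of sizes 3, 5, 7 and 1, 10, 100, so each output axis is the product of the corresponding input sizes):

```python
import numpy as np

# Assumed shapes for illustration: with equal numbers of dimensions,
# np.kron multiplies the sizes axis by axis.
a = np.ones((3, 5, 7))
b = np.ones((1, 10, 100))
print(np.kron(a, b).shape)  # (3, 50, 700)
```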

Collaborator

Being like NumPy seems OK (our goal is to be compatible, after all). Would you add a note about how input and other are treated if they're not matrices? The current docs deal with them as if they must be matrices. Something like:

  • NOTE
  • The Kronecker product is typically defined only for two matrices
  • When either is a scalar or vector it's unsqueezed as...
  • When either input is a tensor with 3+ dimensions then…

ALTERNATIVELY you could expand the description of the function to describe the matrix case, say that scalars and vectors are unsqueezed to be matrices, and THEN define the "general" case. That seems like a more challenging but better approach.

What are your thoughts, @IvanYashchuk?

Collaborator Author

What about presenting the general definition first in the main description and then mentioning what it does for matrices?

Computes the Kronecker product of input and other.
For general n-dimensional tensors this function computes:
... math expression

The number of dimensions of input and other is assumed to be the same and if necessary the smaller tensor is unsqueezed as the larger one.

If input is a (m \times n) matrix and other is a (p \times q) matrix, the result will be a (p*m \times q*n) block matrix:
... kron definition for matrices from wiki
Scalar and vector inputs are unsqueezed to be matrices.

Collaborator

That approach sounds good. Instead of

"The number of dimensions of input and other is assumed to be the same and if necessary the smaller tensor is unsqueezed as the larger one."

I think it can say "If one tensor has fewer dimensions than the other it is unsqueezed until it has the same number of dimensions."

That change would make the explicit reference to scalar and vector inputs at the end redundant, so it can be removed.

The last caveat is that this needs to define the dot operator in both equations above.
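The unsqueeze rule can be illustrated with NumPy, whose kron torch.kron is meant to match (a small hedged example, values chosen arbitrarily):

```python
import numpy as np

# The 1-D input is left-unsqueezed to a 1x2 matrix, so the result
# has shape (1*2, 2*2) = (2, 4).
a = np.array([1, 10])           # shape (2,)
b = np.array([[1, 2], [3, 4]])  # shape (2, 2)
print(np.kron(a, b))
# [[ 1  2 10 20]
#  [ 3  4 30 40]]
```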

Collaborator Author

Okay, dot was for the normal multiplication of scalars. Is asterisk (*) preferred?

Collaborator

Dot's OK if the documentation defines it (the dot operator is used for so many mathematical operations it's highly ambiguous), an asterisk would also be fine and probably doesn't require definition (it's typically used for elementwise multiplication and scalar multiplication).

Collaborator Author

I've updated the documentation so that the main text now describes the computation for general n-dimensional tensors. I left the matrix-case description as a note section.
Here is an image of the rendered docs:
[image: rendered documentation]

torch/_torch_docs.py (review thread resolved)
@mruberry mruberry self-requested a review October 30, 2020 16:50
Collaborator

@mruberry mruberry left a comment

Awesome work, @IvanYashchuk. Made one last small docs comment for your review.

It'll be great to have torch.kron available. I was just talking to a PyTorch user the other day about how he's using matrices with a "Kronecker structure" as a form of structured sparsity.

Do me a favor, though, and when the updates are ready for review be sure to re-request review or say "This is ready for another review." With the number of PRs I'm tracking it's hard to understand when updates are being made vs. something is ready for review again.

Just ping me when you'd like this merged.

@facebook-github-bot
Contributor

Hi @IvanYashchuk!

Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but we do not have a signature on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@IvanYashchuk
Collaborator Author

Hi @mruberry, I think now this PR should be ready for merging.

Contributor

@facebook-github-bot facebook-github-bot left a comment

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@IvanYashchuk
Collaborator Author

Tests seem to fail. Is it related to common_methods_invocations.py's method_tests?

@mruberry
Collaborator

mruberry commented Nov 2, 2020

Tests seem to fail. Is it related to common_methods_invocations.py's method_tests?

I would ignore the "Facebook internal" build signal. It's complete nonsense.

@facebook-github-bot
Contributor

@mruberry merged this pull request in f276ab5.

Labels: cla signed, Merged, module: numpy, open source, triaged

7 participants