large sparse matrix causing crash during decomposition #17

Closed
jtliso opened this issue Oct 27, 2017 · 6 comments

Comments

@jtliso

jtliso commented Oct 27, 2017

When I try to run a Tucker decomposition of a large sparse tensor, TensorLy crashes. I have tried both the MXNet and NumPy backends, and both crash due to memory issues.

The dimensions of my sparse tensor are (358, 556, 2), and I was hoping to use TensorLy for even larger sparse tensors. Do you intend to release support for sparse tensors, or could something I am doing be incorrect?
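
For reference, a minimal sketch of this kind of call with the NumPy backend; the density and target ranks below are illustrative placeholders, and on some TensorLy versions the Tucker keyword may be ranks rather than rank:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

tl.set_backend('numpy')  # the issue also involves the MXNet backend

# Mostly-zero array with the shape from the report; the 1% density and
# the target ranks below are illustrative placeholders.
rng = np.random.default_rng(0)
values = rng.random((358, 556, 2))
tensor = tl.tensor(values * (rng.random((358, 556, 2)) < 0.01))

# TensorLy has no sparse-tensor support yet (see the reply below), so the
# data is stored and decomposed as a dense array.
core, factors = tucker(tensor, rank=[10, 10, 2])
print(core.shape, [f.shape for f in factors])
```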

@JeanKossaifi
Member

We don't yet have support for sparse tensors, but it is on the roadmap!

@MaxMcGlinnPoole

@JeanKossaifi Great to hear it is on the roadmap. I also need this functionality for the project I am working on. Keep up the great work!

@JeanKossaifi
Member

JeanKossaifi commented Nov 17, 2017

Thanks! We welcome contributions if you are interested in taking a crack at it! :)

@JeanKossaifi
Member

As a side note, you should have no problem factorizing (dense) tensors of that size. I recently ran CP and Tucker factorizations (on an AWS instance with one Tesla V100 GPU) on a tensor of size 1000x1000x1000 in about 40 seconds using PyTorch, MXNet, or TensorFlow.
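
A rough sketch of that kind of dense run with the PyTorch backend; the Tucker rank here is an illustrative assumption, and timings will vary with hardware:

```python
import torch
import tensorly as tl
from tensorly.decomposition import tucker

tl.set_backend('pytorch')

# A 1000 x 1000 x 1000 float32 tensor is about 4 GB, which fits on a V100.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tensor = tl.tensor(torch.rand(1000, 1000, 1000, device=device))

# The rank [50, 50, 50] is an illustrative choice, not from the comment above.
core, factors = tucker(tensor, rank=[50, 50, 50])
print(core.shape)
```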

@santiago-py-chen

santiago-py-chen commented Sep 9, 2018

Hi, I wonder if it is normal that calling partial_tucker on a given tensor returns a core tensor and factors with the same magnitudes but different signs from run to run? Even when I fix the random_state, the signs of the returned core tensor and factors still vary. (Not sure if this should be raised here; I will move it elsewhere if that is more appropriate.)

@JeanKossaifi
Member

JeanKossaifi commented Sep 10, 2018

It is the result of the sign indeterminacy of the singular value decomposition.
See for instance this article about the problem and a possible solution to it. We can add a SignFlip function and an additional parameter to the decomposition algorithms controlling whether to use it.

I opened #74 -- feel free to take a crack at it!
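
For illustration, a minimal NumPy sketch of one simple sign convention (flip each singular vector so that its largest-magnitude entry is positive). This is only a toy convention, not the data-driven method from the article above nor the eventual TensorLy implementation:

```python
import numpy as np

def sign_flip(U, V=None):
    # Flip each column of U so its largest-magnitude entry is positive;
    # flip the matching column of V too, so U @ diag(s) @ V.T is unchanged.
    signs = np.sign(U[np.argmax(np.abs(U), axis=0), np.arange(U.shape[1])])
    signs[signs == 0] = 1
    return (U * signs, V * signs) if V is not None else U * signs

# Different SVD implementations (or randomized solvers) may disagree only in
# the signs of singular-vector pairs; the product is unchanged after flipping.
A = np.random.rand(6, 4)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_fixed, V_fixed = sign_flip(U, Vt.T)
print(np.allclose(U @ np.diag(s) @ Vt, U_fixed @ np.diag(s) @ V_fixed.T))
```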
