large sparse matrix causing crash during decomposition #17
Comments
We don't yet have support for sparse tensors, but it is on the roadmap!
@JeanKossaifi Great to hear it is on the roadmap. I also need this functionality for the project I am working on. Keep up the great work!
Thanks! We welcome contributions if you are interested in taking a crack at it! :)
As a side note, you should have no problem factorizing (dense) tensors of that size. I recently ran CP and Tucker factorizations (on an AWS instance with one Tesla V100 GPU) on a tensor of size 1000x1000x1000 in about 40 seconds using the PyTorch, MXNet, or TensorFlow backends.
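For readers unfamiliar with what a dense Tucker factorization actually computes, here is a minimal NumPy-only sketch using truncated HOSVD (a one-shot variant; TensorLy's own `tucker` refines the factors iteratively, so this is an illustration, not the library's implementation):

```python
import numpy as np

def mode_dot(tensor, matrix, mode):
    """Multiply `tensor` along axis `mode` by `matrix` (matrix @ mode-n unfolding)."""
    out = np.tensordot(matrix, tensor, axes=(1, mode))
    # tensordot puts the new axis first; move it back into position.
    return np.moveaxis(out, 0, mode)

def hosvd(tensor, ranks):
    """Truncated HOSVD: a simple, non-iterative Tucker factorization."""
    factors = []
    for mode, rank in enumerate(ranks):
        # Unfold along `mode` and keep the leading left singular vectors.
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :rank])
    # Project the tensor onto each factor to obtain the core.
    core = tensor
    for mode, U in enumerate(factors):
        core = mode_dot(core, U.T, mode)
    return core, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((50, 40, 30))
core, factors = hosvd(T, (10, 10, 10))
print(core.shape)  # (10, 10, 10)
```

With full ranks the reconstruction `mode_dot(... mode_dot(core, U_0, 0) ..., U_N, N)` recovers the original tensor exactly; truncating the ranks gives the low-rank compression that Tucker is used for.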
Hi, I wonder if it is normal that when I call partial_tucker on a given tensor, it returns a core tensor and factors with the same magnitudes but different signs? Even with random_state fixed, the signs of the returned core tensor and factors still differ from run to run. (Not sure if this is the right place to raise this; I can move it elsewhere if inappropriate.)
It is the result of the sign indeterminacy of the singular value decomposition. I opened #74 -- feel free to take a crack at it!
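The sign indeterminacy is easy to see directly in NumPy: flipping the sign of a matched left/right singular-vector pair leaves the reconstruction unchanged, so any SVD-based routine is free to return either sign. A minimal illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Flip the sign of the first left singular vector together with the
# matching right singular vector: the product U @ diag(s) @ Vt is unchanged.
U2, Vt2 = U.copy(), Vt.copy()
U2[:, 0] *= -1
Vt2[0, :] *= -1

assert np.allclose(U2 @ np.diag(s) @ Vt2, A)  # same reconstruction
assert not np.allclose(U2, U)                 # different factors
```

Both `(U, Vt)` and `(U2, Vt2)` are equally valid SVDs of `A`, which is why the core tensor and factors from partial_tucker can come back with flipped signs between runs.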
When I try to use a Tucker decomposition on a large sparse matrix, TensorLy crashes. I have tried both the MXNet and NumPy backends, and both crash due to memory issues.
The dimensions of my sparse matrix are (358, 556, 2). I was hoping to use TensorLy for even larger sparse matrices. I don't know whether you intend to release support for sparse matrices, or whether something I am doing could be incorrect.
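One likely culprit without sparse support: the tensor gets stored (and operated on) densely, so memory grows with the product of the dimensions regardless of how many entries are actually nonzero. A quick sketch of the dense footprint for some illustrative shapes (only the first shape is from this issue; the others are hypothetical larger sparse tensors):

```python
import numpy as np

def dense_footprint_gb(shape, dtype=np.float64):
    """Memory needed to hold `shape` as a dense array of `dtype`, in GB."""
    n_elems = np.prod(shape, dtype=np.int64)
    return n_elems * np.dtype(dtype).itemsize / 1e9

# First shape is from this issue; the larger ones are hypothetical.
for shape in [(358, 556, 2), (10_000, 10_000, 100), (100_000, 100_000, 1_000)]:
    print(shape, f"{dense_footprint_gb(shape):.4f} GB dense")
```

The (358, 556, 2) tensor itself is only a few megabytes dense, but intermediate unfoldings and products during the decomposition can be substantially larger, and genuinely large sparse tensors become infeasible to densify at all.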