
torch.sspaddmm should broadcast the input tensor #69348

Open
ilovepytorch opened this issue Dec 3, 2021 · 4 comments
Labels
module: sparse Related to torch.sparse Stale triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@ilovepytorch

ilovepytorch commented Dec 3, 2021

🐛 Bug

torch.sspaddmm(input, mat1, mat2) is equivalent to torch.addmm(input, mat1, mat2) and torch.sparse.addmm(input, mat1, mat2), except that input and mat1 are sparse. So it should broadcast the input tensor like the other two.

To Reproduce

import torch

input = torch.rand([1, 1])
mat1 = torch.rand([2, 3])
mat2 = torch.rand([3, 3])

# The dense variant broadcasts the [1, 1] input to the [2, 3] result shape.
res1 = torch.addmm(input, mat1, mat2)
print("addmm pass")

# The sparse variant raises instead of broadcasting.
input = input.to_sparse()
mat1 = mat1.to_sparse()
res2 = torch.sspaddmm(input, mat1, mat2)

Result:

addmm pass
Traceback (most recent call last):
    res2 = torch.sspaddmm(input, mat1, mat2)
RuntimeError: sspaddmm: Argument #1: Expected dim 0 size 2, got 1

Expected behavior

It should broadcast the input tensor and not raise this kind of error.
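As a stopgap (my own workaround sketch, not an official fix), the dense input can be expanded to the result shape before it is converted to sparse, so that sspaddmm sees matching dimensions:

```python
import torch

input = torch.rand([1, 1])
mat1 = torch.rand([2, 3])
mat2 = torch.rand([3, 3])

# Workaround sketch: do the broadcast on the dense side first, then
# convert to sparse. The result of mat1 @ mat2 has shape
# (mat1.shape[0], mat2.shape[1]), so expand input to that shape.
expanded = input.expand(mat1.shape[0], mat2.shape[1]).to_sparse()
res = torch.sspaddmm(expanded, mat1.to_sparse(), mat2)
print(res.shape)  # torch.Size([2, 3])
```

Note this materializes every repeated value of input as a stored sparse element, which is exactly the cost that built-in broadcasting would also have to pay.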

Environment

  • PyTorch Version (e.g., 1.0): 1.10.0, 1.9.0
  • OS (e.g., Linux): Linux
  • How you installed PyTorch (conda, pip, source): conda
  • Python version: 3.9.5
  • CUDA/cuDNN version: 11.1
  • GPU models and configuration: RTX 3090

cc @nikitaved @pearu @cpuhrsch @IvanYashchuk

@gchanan gchanan added module: sparse Related to torch.sparse triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Dec 3, 2021
@github-actions github-actions bot added the Stale label Feb 1, 2022
@ilovepytorch
Author

Is there any update?

@IvanYashchuk
Collaborator

IvanYashchuk commented Feb 14, 2022

Broadcasting is not implemented for sparse tensors because in the sparse case the tensor would need to be materialized, repeating all of its values and making it much larger. As far as I know, no operation currently supports sparse tensors and implements broadcasting behavior.
In the future, broadcasting behavior could be implemented for sparse tensors, but I don't think anybody is actively working on this at the moment.
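To illustrate the cost described above (a sketch of mine, not from the thread): emulating a broadcast via a dense round-trip replicates every stored value, so the number of stored elements grows with the broadcast dimension:

```python
import torch

# A sparse 1x3 row with a single stored value.
row = torch.tensor([[0.0, 5.0, 0.0]]).to_sparse()
print(row._nnz())  # 1

# Emulating a broadcast to 1000 rows: each repeated copy of the value
# becomes a separate stored element in the result.
big = row.to_dense().expand(1000, 3).to_sparse()
print(big._nnz())  # 1000
```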

@cpuhrsch
Contributor

@IvanYashchuk - Pearu did some work on sparse-specific behavior for broadcast_to, but the underlying operation really is expand.

@ilovepytorch - Given all the other operations that are still missing for tensors with sparse layouts, this might show up a little later.

@ilovepytorch
Author

Thanks for your answers!
