This issue summarizes the sparse tensor functions and autograd support requested in previous PRs. Please feel free to comment on functions that should also be added.
## Functions

- `sum()` with autograd ([sparse] torch.sparse.sum() #12430)
- `max()` with autograd
- `log1p()` (Add log1p for sparse tensor #8969)
- `S.copy_(S)` with autograd (copy_(Sparse, Sparse) for sparse tensor #9005)
- indexing (`gather()`, `index_select()`)
- `mul_(S, D) -> S`, `mul(S, D) -> S` with autograd
- `cuda()`
- `nn.Linear` with autograd (SxS, SxD; relies on `addmm` and `matmul`)
- `softmax()` with autograd (same as in TF: (1) applies softmax() to a region of a densified tensor submatrix; (2) masks out the zero locations; (3) renormalizes the remaining elements. The SparseTensor result has exactly the same non-zero indices and shape; see the sketch after this list)
- `to_sparse()` (dense.to_sparse(), re: #8853; #12171)
- `narrow_copy()` (add narrow() support for sparse tensors, re: #8853; #11342)
- `sparse_mm(S, D) -> D` with autograd
- `cat()` (implement concatenation of sparse tensors #13577)
- `unsqueeze()`, `stack()` (Implement `unsqueeze` for sparse vectors, which also makes `stack` work out of the box; #13760)
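To make the intended `softmax()` semantics concrete, here is a minimal sketch (not a proposed implementation) for a 2-D sparse tensor, assuming the row-wise interpretation of "region"; `sparse_softmax` is a hypothetical helper built only from existing ops:

```python
import torch

def sparse_softmax(S):
    # Sketch of the TF-style semantics: softmax row-wise over the stored
    # non-zeros only; implicit zeros stay masked out, so the result has
    # exactly the same non-zero indices and shape as the input.
    S = S.coalesce()  # sorts indices, so each row's values are contiguous
    idx, v = S.indices(), S.values()
    rows = idx[0]
    pieces = [torch.softmax(v[rows == r], dim=0) for r in rows.unique()]
    return torch.sparse_coo_tensor(idx, torch.cat(pieces), S.shape)

i = torch.tensor([[0, 0, 1], [0, 2, 1]])
v = torch.tensor([1.0, 2.0, 3.0])
out = sparse_softmax(torch.sparse_coo_tensor(i, v, (2, 3)))
# row 0 renormalizes over [1., 2.]; row 1 over [3.] alone (-> 1.0)
```

Since the softmax runs on `values()`, gradients would flow through it the same way as for the other element-wise ops listed under Existing.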
## Wish list

- `bmm(S, D)` (add an extra sparse dim at `indices` of the SparseTensor as the batch dim? see the workaround sketch after this list)
- broadcasting `mul(S, D) -> S`
- `Dataset`, `Dataloader`
- `save`, `load` for sparse tensors
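Until a native `bmm(S, D)` lands, a per-batch loop over the existing `mm(Sparse, Dense)` can serve as a stopgap; a minimal sketch, where `bmm_sparse_dense` is a hypothetical helper and the batch is assumed to arrive as a list of per-batch sparse matrices rather than a batched SparseTensor:

```python
import torch

def bmm_sparse_dense(S_list, D):
    # S_list: B sparse (n, m) matrices; D: dense (B, m, p).
    # Multiply each sparse matrix with its dense counterpart via the
    # existing mm(Sparse, Dense), then stack the dense (n, p) results.
    return torch.stack([torch.mm(S, D[b]) for b, S in enumerate(S_list)])

i = torch.tensor([[0, 1], [1, 0]])
v = torch.tensor([3.0, 4.0])
S = torch.sparse_coo_tensor(i, v, (2, 2))
D = torch.randn(2, 2, 5)
out = bmm_sparse_dense([S, S], D)  # dense, shape (2, 2, 5)
```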
## Existing

- autograd supported for `values()` via [sparse] Autograd indices/values and sparse_coo ctor #13001 (thanks to @ssnl!), which means all element-wise ops are now supported in sparse (see the example after this list)
- norm (cannot take `dim` args)
- pow
- clone
- zero_
- t_ / t
- add_ / add(Sparse, Sparse, Scalar) -> Sparse
- add_ / add(Dense, Sparse, Scalar) -> Dense
- sub_ / sub(Sparse, Sparse, Scalar) -> Sparse
- mul_ / mul(Sparse, Sparse) -> Sparse
- mul_ / mul(Sparse, Scalar) -> Sparse
- div_ / div(Sparse, Scalar) -> Sparse
- addmm(Dense, Sparse, Dense, Scalar, Scalar) -> Dense
- sspaddmm(Sparse, Sparse, Dense, Scalar, Scalar) -> Sparse
- mm(Sparse, Dense) -> Dense
- smm(Sparse, Dense) -> Sparse
- hspmm(Sparse, Dense) -> HybridSparse
- spmm(Sparse, Dense) -> Dense
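As an illustration of the `values()`-based element-wise support, a usage sketch that squares the non-zeros and backpropagates through `torch.sparse.sum()` (#12430):

```python
import torch

i = torch.tensor([[0, 1, 1], [2, 0, 2]])
v = torch.tensor([3.0, 4.0, 5.0], requires_grad=True)
S = torch.sparse_coo_tensor(i, v, (2, 3)).coalesce()

# Apply any element-wise op to the values, then rebuild a sparse tensor
# with the same indices; gradients flow back to v through the ctor.
S2 = torch.sparse_coo_tensor(S.indices(), S.values().pow(2), S.shape)

torch.sparse.sum(S2).backward()
print(v.grad)  # 2 * v, i.e. tensor([ 6.,  8., 10.])
```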