Todo functions and autograd supports for Sparse Tensor #8853
Comments
Also: Wojciej has been posting about these recently. He'd like some variant of this script to work:
|
See also #8856 |
Requests added:
|
I think norm is already implemented. |
@li-roy oh, you're right! Will remove it from todo |
Summary:
- fixes log1p at #8853
- added log1p of sparse tensor in ATen
- made log1p of sparse tensor non-differentiable and raise an error, because the local derivative of log1p for a zero element is 1 / (0 + 1) = 1, which would make the tensor dense
Closes #8969
Reviewed By: ezyang
Differential Revision: D8677491
fbshipit-source-id: 8363a613519de4bc75eda087ccd20a3eb2d18126
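For context, the densification issue can be seen with a tiny dense example (a hedged sketch using the ordinary dense torch.log1p, not the sparse kernel added in that PR):

    import torch

    # The gradient of log1p(x) is 1 / (1 + x), which equals 1 at x == 0, so the
    # backward pass produces a nonzero gradient at every unstored position --
    # i.e. the gradient of a sparse input would have to be dense.
    x = torch.zeros(3, requires_grad=True)
    torch.log1p(x).sum().backward()
    print(x.grad)  # tensor([1., 1., 1.])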
also added TODO requests from @adefazio: ops and autograd support of
|
@weiyangfb I took a pass at implementing narrow from your list. Posted a couple questions on the PR if you have a chance to look. Thanks! |
@realdoug Thanks for working on this feature! I will take a look! |
@weiyangfb is there a priority list at all for these ops? I think I can knock out a few more. |
@realdoug Thanks a lot for the great work! Let me take a look, stay tuned! |
I'd like to add |
Is it possible to add matmul(S, D) where S and D have rank > 2? |
Is this topic related to converting PyTorch models to ONNX? |
Hey, I'm trying to implement cosine similarity of two CSR matrices in PyTorch:

    import numpy as np
    from scipy.sparse import csr_matrix
    # `ct` is assumed to be the Cython helper from the sparse_dot_topn package
    # (e.g. import sparse_dot_topn.sparse_dot_topn as ct)

    def awesome_cossim_top(A, B, ntop, lower_bound=0):
        # force A and B as a CSR matrix.
        # If they have already been CSR, there is no overhead
        A = A.tocsr()
        B = B.tocsr()
        M, _ = A.shape
        _, N = B.shape
        idx_dtype = np.int32
        nnz_max = M * ntop
        # preallocate the output CSR buffers
        indptr = np.zeros(M + 1, dtype=idx_dtype)
        indices = np.zeros(nnz_max, dtype=idx_dtype)
        data = np.zeros(nnz_max, dtype=A.dtype)
        ct.sparse_dot_topn(
            M, N, np.asarray(A.indptr, dtype=idx_dtype),
            np.asarray(A.indices, dtype=idx_dtype),
            A.data,
            np.asarray(B.indptr, dtype=idx_dtype),
            np.asarray(B.indices, dtype=idx_dtype),
            B.data,
            ntop,
            lower_bound,
            indptr, indices, data)
        return csr_matrix((data, indices, indptr), shape=(M, N))

So far I've reached:

    import torch

    def csr_to_coo(X):
        # convert a scipy CSR matrix to a torch sparse COO tensor
        X = X.tocoo()
        return torch.sparse.LongTensor(
            torch.LongTensor([X.row.tolist(), X.col.tolist()]),
            torch.LongTensor(X.data.astype(np.int32)))

    def cosine_distance(x1, x2=None, eps=1e-8):
        return 1 - torch.sparse.mm(x1, x2)

But if I run the code below

    from scipy.sparse import csr_matrix

    Acsr = csr_matrix([[1, 2, 0], [0, 0, 3], [4, 0, 5]])
    print(cosine_distance(csr_to_coo(Acsr), csr_to_coo(Acsr)))

it throws the following error.
From the ToDo I can see that (sparse, sparse) multiplication is supported, am I doing something wrong or is this not supported yet? |
@rohanrajpal, sparse-sparse matmul is not supported in the current master branch. Current |
Alright, thanks! |
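For reference, sparse-dense multiplication does work through torch.sparse.mm. A minimal sketch, assuming a PyTorch build where torch.sparse_coo_tensor and torch.sparse.mm are available (the tensor values here are made up for illustration):

    import torch

    # sparse @ dense is supported and returns a dense result;
    # passing two sparse tensors to torch.sparse.mm errors on the
    # versions discussed in this thread.
    i = torch.tensor([[0, 1, 2], [2, 0, 1]])      # 2 x nnz COO indices
    v = torch.tensor([3.0, 4.0, 5.0])             # nnz values
    S = torch.sparse_coo_tensor(i, v, (3, 3))     # 3 x 3 sparse matrix
    D = torch.randn(3, 2)                         # dense matrix
    print(torch.sparse.mm(S, D))                  # dense (3, 2) output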
@weiyangfb, I am working on graph convolutional neural networks, and need to perform |
Also, it would be good to support torch.min / torch.amin. These could then be used in absence of #22378 |
Here is a summary of requested Sparse Tensor functions and autograd supports from previous PRs. Please feel free to comment on functions that should also be added.

Functions
- sum() with autograd ([sparse] torch.sparse.sum() #12430)
- max() with autograd
- log1p() (Add log1p for sparse tensor #8969)
- S.copy_(S) with autograd (copy_(Sparse, Sparse) for sparse tensor #9005)
- gather(), index_select()
- mul_(S, D) -> S, mul(S, D) -> S with autograd
- cuda()
- nn.Linear with autograd (SxS, SxD; relies on addmm and matmul)
- softmax() with autograd (same as in TF: (1) applies softmax() to a region of a densified tensor submatrix; (2) masks out the zero locations; (3) renormalizes the remaining elements; the SparseTensor result has exactly the same non-zero indices and shape. A sketch of these semantics follows after the lists below.)
- to_sparse() (dense.to_sparse() re: #8853 #12171)
- narrow_copy() (add narrow() support for sparse tensors re: #8853 #11342)
- sparse_mm(S, D) -> D with autograd
- cat() (implement concatenation of sparse tensors #13577)
- unsqueeze(), stack() (Implement unsqueeze for sparse vectors (this also makes stack work out of the box) #13760)

Wish list
- bmm(S, D) (add an extra sparse dim at indices of SparseTensor as batch dim?)
- mul(S, D) -> S
- Dataset, Dataloader
- save, load for sparse tensors

Existing
- values() via [sparse] Autograd indices/values and sparse_coo ctor #13001 (Thanks to @ssnl!); that means all element-wise ops are supported in sparse now
- … (dim args)
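The softmax() item above describes masked-softmax semantics borrowed from TF. A rough sketch of those semantics for a 2-D COO tensor over the last dim, using a hypothetical helper name sparse_softmax_sketch (an illustration only, not the actual PyTorch kernel):

    import torch

    def sparse_softmax_sketch(s):
        # densify, mask out unstored locations, renormalize over stored entries only
        s = s.coalesce()
        d = s.to_dense()
        mask = torch.zeros_like(d, dtype=torch.bool)
        mask[tuple(s.indices())] = True              # stored positions
        d = d.masked_fill(~mask, float("-inf"))      # unstored entries get zero weight
        out = torch.softmax(d, dim=-1)
        # rows with no stored entries are all -inf above (softmax -> NaN);
        # the mask below also zeroes those rows out
        out = out.masked_fill(~mask, 0.0)
        return out.to_sparse()                       # same non-zero pattern as the input

The key property, matching the list item, is that the output keeps exactly the non-zero indices and shape of the input.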