
dense.to_sparse() re: #8853 #12171

Closed · wants to merge 1 commit from the dense2sparse branch

Conversation

realdoug (Contributor)

Here is my stab at `dense.to_sparse`.

to_sparse() -> Tensor

Returns a sparse copy of the tensor. PyTorch supports sparse tensors
in coordinate (COO) format. See https://pytorch.org/docs/stable/sparse.html.
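For illustration, a minimal round-trip sketch of the new method (a hedged example, assuming a build that includes this PR):

```python
import torch

d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])

s = d.to_sparse()              # full COO conversion: indices span all dims
assert s.is_sparse
assert s.to_dense().equal(d)   # round-trip recovers the original dense tensor
```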

@@ -2384,6 +2384,14 @@ def callable(a, b) -> number
0.012766935862600803
""")

add_docstr_all('to_sparse',

LongTensor indices = self.nonzero().transpose(0, 1);
std::vector<int64_t> maskShape;
if(sparseDims != dims){
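For context, the COO indices are just the transposed coordinates of the nonzero entries; a rough Python analogue of what the kernel computes (a sketch, not the PR's actual code path; `torch.sparse_coo_tensor` is assumed available):

```python
import torch

d = torch.tensor([[0, 0, 3], [4, 0, 5]])
idx = d.nonzero().t()          # shape (ndim, nnz), as in the C++ above
vals = d[tuple(idx)]           # gather the values at those coordinates
s = torch.sparse_coo_tensor(idx, vals, d.shape)
assert s.to_dense().equal(d)
```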

realdoug force-pushed the dense2sparse branch 2 times, most recently from facfac6 to 4b8027e on October 5, 2018 04:46
weiyangfb (Contributor)

The CI failures look related.

realdoug force-pushed the dense2sparse branch 5 times, most recently from 0823819 to 840624c on October 11, 2018 18:40
realdoug (Contributor, Author)

@weiyangfb OK, ready for review again.

expected, _, _ = self._gen_sparse(sparse_dims, nnz, shape)
d = expected.to_dense()
result = d.to_sparse(sparse_dims)
self.assertEqual(expected.to_dense(), result.to_dense()) # == not implemented for sparse tensors yet
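The comparison goes through to_dense() because equality is not implemented for sparse tensors at this point; a minimal illustration of the workaround (a sketch, assuming `torch.sparse_coo_tensor`):

```python
import torch

a = torch.sparse_coo_tensor([[0], [2]], [5.0], (3, 3))
b = a.clone()
# Compare sparse tensors by materializing them densely first:
assert a.to_dense().equal(b.to_dense())
```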

shape = [10, 5, 19, 8]
for dim, dim_sz in enumerate(shape):
sparse_dims = dim + 1
for nnz in range(1, dim_sz):

d = expected.to_dense()
result = d.to_sparse(sparse_dims)
self.assertEqual(expected.to_dense(), result.to_dense()) # == not implemented for sparse tensors yet
self.assertEqual(sparse_dims, result._sparseDims())

self.assertEqual(sparse_dims, result._sparseDims())

sp, _, _ = self._gen_sparse(2, 10, [3, 3, 3])
self.assertRaises(RuntimeError, lambda: sp.to_sparse())
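A quick sketch of the error case the last two lines exercise (at the time of this PR, calling to_sparse() on an already-sparse tensor raises; `torch.sparse_coo_tensor` is an assumption here):

```python
import torch

sp = torch.sparse_coo_tensor([[0], [1]], [1.0], (3, 3))
try:
    sp.to_sparse()
except RuntimeError as e:
    print("expected failure:", e)
```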

}else{
Tensor i = nz.narrow(0, 0, sparseDims);
std::tie(indices, std::ignore) = _unique_dim(i, 1);
indices = indices.clone();
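When sparse_dim < self.dim(), several nonzero coordinates can share the same leading sparse_dim entries, so this branch deduplicates them via _unique_dim. A rough Python sketch of the same idea (torch.unique with a dim argument is assumed; the PR itself uses the internal _unique_dim):

```python
import torch

d = torch.tensor([[0, 7, 0],
                  [0, 0, 0],
                  [1, 0, 2]])
nz = d.nonzero().t()              # (ndim, nnz) coordinates: [[0, 2, 2], [1, 0, 2]]
lead = nz[:1]                     # keep only the sparse_dim = 1 leading rows
uniq = torch.unique(lead, dim=1)  # drop duplicate columns -> one index per row
print(uniq)                       # tensor([[0, 2]])
```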

Returns a sparse copy of the tensor. PyTorch supports sparse tensors in
:ref:`coordinate format <sparse-docs>`.
""")

weiyangfb (Contributor) left a review comment

LGTM, there are some minor nits.

realdoug (Contributor, Author)

@weiyangfb sounds good, done & done!

realdoug force-pushed the dense2sparse branch 2 times, most recently from 6ff7c27 to dbff197 on October 23, 2018 16:22
realdoug (Contributor, Author)

The build failures were due to #12984; retriggered the build via rebase.

weiyangfb self-assigned this on Oct 23, 2018
weiyangfb (Contributor)

@realdoug please rebase to fix the CI failures.

@@ -2238,6 +2238,18 @@
variants: function, method
device_guard: False

- func: to_sparse(Tensor self, int64_t sparseDims) -> Tensor

realdoug force-pushed the dense2sparse branch 2 times, most recently from 3cc749f to cd62fcd on October 25, 2018 02:07
int64_t dims = self.dim();
AT_CHECK(sparse_dim > 0, "sparse_dim must be >0");
AT_CHECK(sparse_dim <= dims,
"sparseDims must be less than or equal to self.dim()");


Tensor nz = self.nonzero().transpose(0, 1);
if(nz.numel() == 0){
return new_with_dims_sparse(sparse_dim, dims-sparse_dim, sizes, sparse_options);
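The early return above handles the all-zero input; a quick check of that edge case (assuming a build with this PR; _nnz is the internal nnz accessor):

```python
import torch

z = torch.zeros(3, 3)
s = z.to_sparse()
print(s._nnz())   # 0: an all-zero dense tensor becomes an empty sparse tensor
```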

>>> d.to_sparse()
tensor(indices=tensor([[1, 1],
                       [0, 2]]),
       values=tensor([ 9, 10]),
       size=(3, 3), nnz=2, layout=torch.sparse_coo)
>>> d.to_sparse(1)
tensor(indices=tensor([[1]]),
       values=tensor([[ 9, 0, 10]]),
       size=(3, 3), nnz=1, layout=torch.sparse_coo)
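For context, with to_sparse(1) only the first dimension is sparse and each value is a dense row. A hypothetical helper (not part of the PR) showing how such a hybrid tensor maps back to dense:

```python
import torch

def rebuild(indices, values, size):
    # Scatter the dense value slices back into a dense tensor along dim 0.
    out = values.new_zeros(size)
    out[indices[0]] = values
    return out

idx = torch.tensor([[1]])
vals = torch.tensor([[9, 0, 10]])
assert rebuild(idx, vals, (3, 3)).equal(
    torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]]))
```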

LongTensor indices;
if(sparse_dim == dims){
indices = nz.clone();
}else{

std::vector<int64_t> sizes = self.sizes().vec();

Tensor nz = self.nonzero().transpose(0, 1);
if(nz.numel() == 0){

facebook-github-bot (Contributor) left a comment

weiyangfb has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

weiyangfb (Contributor)

Needs a rebase :(

facebook-github-bot (Contributor) left a comment

weiyangfb has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request on Oct 27, 2018:
Summary:
Here is my stab at ```dense.to_sparse```
Pull Request resolved: pytorch/pytorch#12171

Differential Revision: D10859078

Pulled By: weiyangfb

fbshipit-source-id: 5df72f72ba4f8f10e283402ff7731fd535682664
weiyangfb (Contributor)

Landed, thanks @realdoug!
