
Add sparse.kron #194

Merged
merged 2 commits into from Sep 20, 2018

Conversation

@jcrist (Collaborator) commented Sep 20, 2018

Added sparse.kron implementation.

Add sparse kron implementation.
@@ -212,6 +212,65 @@ def _dot(a, b):
return aa.dot(b)


def kron(a, b):
jcrist (Collaborator, Author):

What is the approach in sparse when coercing inputs? If neither input is a sparse array, do you return a dense result, or coerce to sparse?

hameerabbasi (Collaborator):

We generally raise a ValueError if the result is guaranteed to be dense.

@jcrist (Collaborator, Author), Sep 20, 2018:

I can understand that if the inputs are sparse but the output would be dense. In this case, though, the inputs are already dense. Should I forbid non-SparseArray inputs? sparse.concatenate does this; sparse.dot claims to accept non-sparse inputs, but the code errors out internally because it does not expect them (it raises an AttributeError).

hameerabbasi (Collaborator):

The dot behaviour is weird; it should work on mixed or sparse-only inputs, but not on dense-dense. To answer your question: in general, we only accept dense inputs if a mixed sparse-dense operation is more efficient than the dense-only one. If the dense-only operation is the best one available, we raise an error.

For dot, there is an efficient implementation of sparse-dense and dense-sparse multiplication, as in this case. For element-wise operations, this is sometimes the case (think operator.mul), but not always (think operator.add).

This case is similar: A mixed operation could benefit from the sparse structure, but not a dense-only one.
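A quick illustration of that asymmetry, again with scipy.sparse standing in for this library: element-wise multiplication by a dense array can exploit sparsity (zeros stay zero), while addition cannot (every stored zero changes).

```python
import numpy as np
import scipy.sparse as sp

A = sp.random(100, 100, density=0.01, format="csr")  # mostly zeros
d = np.ones((100, 100))

# Multiplication: unstored zeros stay zero, so the result stays sparse.
prod = A.multiply(d)

# Addition: every zero of A becomes 1, so the result is dense.
total = A + d
```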

@codecov codecov bot commented Sep 20, 2018

Codecov Report

Merging #194 into master will increase coverage by 0.04%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master     #194      +/-   ##
==========================================
+ Coverage   97.52%   97.57%   +0.04%     
==========================================
  Files          11       11              
  Lines        1417     1444      +27     
==========================================
+ Hits         1382     1409      +27     
  Misses         35       35
Impacted Files Coverage Δ
sparse/coo/__init__.py 100% <ø> (ø) ⬆️
sparse/coo/common.py 97.73% <100%> (+0.21%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@hameerabbasi (Collaborator) left a review comment

If this is really urgent, go ahead and merge it after these changes. I'm working on a version of kron that will work on arbitrary dimensionality, and that one will supersede this one.

If you can wait a day or two, I'll put my PR in.

[0, 0, 0, 1, 2, 3, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 2, 3]], dtype=int64)
"""
from .core import COO
hameerabbasi (Collaborator):
You might want to add a check here that the fill value is zero, along with an additional note in the documentation. See, for example, dot.

if a_ndim == 0 or b_ndim == 0:
return a * b
elif a_ndim > 2 or b_ndim > 2:
raise NotImplementedError("kron with ndim > 2 for either argument")
hameerabbasi (Collaborator):
An additional “is not supported” in the error message would be nice here.


@jcrist (Collaborator, Author) commented Sep 20, 2018

If this is really urgent, go ahead and merge it after these changes. I'm working on a version of kron that will work on arbitrary dimensionality, and that one will supersede this one.

Glad to hear it! I figured there'd be a more efficient/elegant way to express this in n dimensions, but my immediate need was for two dimensions only, and wrapping the scipy sparse implementation was easy. I look forward to seeing your implementation.

I believe all concerns have been addressed. Thanks for the review, merging.
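For reference, a minimal 2-D-only sketch of the approach described above, delegating to scipy.sparse (the function name and coercion details are illustrative, not the PR's actual code):

```python
import numpy as np
import scipy.sparse as sp


def kron2d(a, b):
    # Coerce both inputs to 2-D scipy COO matrices and delegate to
    # scipy.sparse.kron; only ndim <= 2 is handled by this sketch.
    a = sp.coo_matrix(np.atleast_2d(a))
    b = sp.coo_matrix(np.atleast_2d(b))
    return sp.kron(a, b).tocoo()


# Reproduces the docstring example above: kron(eye(3), [1, 2, 3]) gives
# rows [1, 2, 3, 0, ...], [0, 0, 0, 1, 2, 3, 0, 0, 0], [..., 1, 2, 3].
result = kron2d(np.eye(3, dtype=np.int64), [1, 2, 3])
```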

@jcrist jcrist merged commit 5f865a9 into pydata:master Sep 20, 2018
@jcrist jcrist deleted the kron branch September 20, 2018 14:21