[WIP]Added cases for sparse arrays in tensorproduct #17000

Merged
merged 4 commits into sympy:master on Jun 13, 2019

Conversation

@kangzhiq
Contributor

commented Jun 8, 2019

References to other Issues or PRs

#16941

Brief description of what is fixed or changed

Sparse arrays are cast into lists while performing tensorproduct. A brief overview is in #16941.
Before:

>>> a = MutableSparseNDimArray.zeros(10000, 20000)
>>> res = tensorproduct(a, 3)
# too slow: the whole sparse array is cast into a dense list

Thanks to lazy evaluation this does not raise a MemoryError immediately, but as the evaluation proceeds it will eventually exceed the available memory, so it needs to be fixed.
Now:

>>> a = MutableSparseNDimArray({1:1}, (3, ))
>>> b = MutableSparseNDimArray({2:1}, (3, ))

>>> tensorproduct(a, b) == ImmutableSparseNDimArray({5: 1}, (3, 3))
True
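The key idea behind the fix is that multiplying a scalar into a sparse array only needs to touch the stored non-zero entries, so the 10000×20000 array above is never densified. A minimal sketch of that idea (illustrative only, with a hypothetical helper name; not necessarily the exact code merged in this PR):

```python
from sympy import ImmutableSparseNDimArray, MutableSparseNDimArray, symbols

x = symbols('x')

def scalar_times_sparse(scalar, sp):
    # Hypothetical helper: scale only the stored (non-zero) entries of the
    # sparse array; the implicit zeros are never materialized as a list.
    return ImmutableSparseNDimArray(
        {k: scalar*v for k, v in sp._sparse_array.items()}, sp.shape)

a = MutableSparseNDimArray({1: 1}, (3,))
assert scalar_times_sparse(x, a) == ImmutableSparseNDimArray({1: x}, (3,))
```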

Other comments

Release Notes

  • tensor
    • Added specific cases for sparse arrays in tensorproduct and tests
Added some cases and tests
- Added case for tensorproduct of a scalar and a sparse array
- Added tests for sparse arrays in the tensorproduct tests
@sympy-bot


commented Jun 8, 2019

Hi, I am the SymPy bot (v147). I'm here to help you write a release notes entry. Please read the guide on how to write release notes.

Your release notes are in good order.

Here is what the release notes will look like:

  • tensor
    • Added specific cases for sparse arrays in tensorproduct and tests (#17000 by @kangzhiq)

This will be added to https://github.com/sympy/sympy/wiki/Release-Notes-for-1.5.

Note: This comment will be updated with the latest check if you edit the pull request. You need to reload the page to see it.


Update

The release notes on the wiki have been updated.

@@ -43,6 +43,11 @@ def test_tensorproduct():
assert tensorproduct(A, B, a) == Array([[a*x, 2*a*x, 3*a*x], [a*y, 2*a*y, 3*a*y]])
assert tensorproduct(B, a, A) == Array([[a*x, a*y], [2*a*x, 2*a*y], [3*a*x, 3*a*y]])

# tests for large scale sparse array
a = MutableSparseNDimArray.zeros(10000, 20000)
assert tensorproduct(a, x) == MutableSparseNDimArray.zeros(10000, 20000)

@kangzhiq

kangzhiq Jun 8, 2019

Author Contributor

The test will not pass yet because the equality test for large-scale sparse arrays is not fixed (see #16937).
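(The slow equality check, presumably iterating over every element of the 10000×20000 array, is what #16937 tracks; the fix there amounts to comparing sparse arrays through their stored dictionaries instead. A minimal sketch of that idea, not the actual SymPy implementation:)

```python
def sparse_arrays_equal(a, b):
    # Compare shapes and only the stored non-zero entries, instead of
    # looping over all 10000*20000 positions of both arrays.
    return a.shape == b.shape and dict(a._sparse_array) == dict(b._sparse_array)
```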

@oscarbenjamin oscarbenjamin added the tensor label Jun 9, 2019

if isinstance(b, SparseNDimArray) and not isinstance(a, NDimArray):
    return type(b)({k: a*v for (k, v) in b._sparse_array.items()}, b.shape)

# TODO: product for two sparse arrays

@Upabjojr

Upabjojr Jun 10, 2019

Contributor

don't we only need the product of two sparse arrays?

@kangzhiq

kangzhiq Jun 10, 2019

Author Contributor

You are right. I will work on the case of two sparse arrays. Thank you for reviewing!
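For reference, the sparse–sparse case can be handled by combining the flat indices of the stored entries. The sketch below reproduces the example from the PR description; it assumes both operands keep their non-zero entries in a flat-index dictionary `_sparse_array` and is not necessarily the exact code that was merged:

```python
from sympy import ImmutableSparseNDimArray, MutableSparseNDimArray

def sparse_tensorproduct(a, b):
    # In the tensor product, the flat index of the pair (i, j) is i*len(b) + j
    # and its value is a[i]*b[j]; only pairs of stored entries can be non-zero.
    lb = len(b)
    entries = {k1*lb + k2: v1*v2
               for k1, v1 in a._sparse_array.items()
               for k2, v2 in b._sparse_array.items()}
    return ImmutableSparseNDimArray(entries, a.shape + b.shape)

a = MutableSparseNDimArray({1: 1}, (3,))
b = MutableSparseNDimArray({2: 1}, (3,))
assert sparse_tensorproduct(a, b) == ImmutableSparseNDimArray({5: 1}, (3, 3))
```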

kangzhiq added some commits Jun 11, 2019

Added cases for sp_arr for tensorproduct
- Removed unnecessary code
- Added case for tensorproduct between two sp_arr
- Added tests
@Upabjojr

Contributor

commented Jun 12, 2019

Can you add a for loop testing both dense and sparse classes?

e.g. instead of

X = Array(...)
assert tensorproduct( ... ) == ...

write:

for ArrayType in [ImmutableDenseNDimArray, ImmutableSparseNDimArray]:
    X = ArrayType( ... )
    assert tensorproduct( ... ) == ...

In this way we are sure all tests are repeated for both dense and sparse arrays.
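A concrete instance of that pattern for the tensorproduct tests could look like the following (a sketch using the actual class names ImmutableDenseNDimArray and ImmutableSparseNDimArray; the exact assertions added in the PR may differ):

```python
from sympy import ImmutableDenseNDimArray, ImmutableSparseNDimArray
from sympy import symbols, tensorproduct

x, y = symbols('x y')

# Run the same assertions against both representations so a regression in
# either the dense or the sparse code path is caught by the same test.
for ArrayType in [ImmutableDenseNDimArray, ImmutableSparseNDimArray]:
    A = ArrayType([x, y])
    B = ArrayType([1, 2, 3])
    assert tensorproduct(A, B) == ArrayType([[x, 2*x, 3*x], [y, 2*y, 3*y]])
```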

Modified tests in test_arrayop
- Added a loop to make sure the tests pass for sparse and dense arrays
- Improved tests in test_derive_by_array
@codecov


commented Jun 12, 2019

Codecov Report

Merging #17000 into master will increase coverage by 0.104%.
The diff coverage is 100%.

@@             Coverage Diff              @@
##            master   #17000       +/-   ##
============================================
+ Coverage   74.126%   74.23%   +0.104%     
============================================
  Files          620      620               
  Lines       160572   160638       +66     
  Branches     37677    37696       +19     
============================================
+ Hits        119026   119242      +216     
+ Misses       36140    36013      -127     
+ Partials      5406     5383       -23

@Upabjojr Upabjojr merged commit 16d98d9 into sympy:master Jun 13, 2019

3 checks passed

codecov/project 74.23% (target 0%)
continuous-integration/travis-ci/pr The Travis CI build passed
sympy-bot/release-notes The release notes look OK