Map for sparse matrices #60
Defining |
Maybe one could imagine something like |
Fully agree with @mlubin.
If we had a matrix type with a non-zero default, this would be straightforward. But we don't, and no one seems that interested in it. As it stands, we could make it an error to map a function over a sparse array when the function doesn't map zero to zero. In that case, we could require specifying a dense result array type.
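[Editorial note: a minimal sketch of the error-on-non-zero-preserving-functions idea described above, not an actual Base implementation; the name checked_map and the exact error message are made up for illustration.]

using SparseArrays

# Map f over the stored values only, but refuse when f would change the
# implicit zeros, directing the caller to densify explicitly instead.
function checked_map(f, A::SparseMatrixCSC)
    iszero(f(zero(eltype(A)))) ||
        throw(ArgumentError("f does not map zero to zero; use map(f, Matrix(A)) instead"))
    SparseMatrixCSC(A.m, A.n, copy(A.colptr), copy(A.rowval), map(f, A.nzval))
end

(In current Julia, broadcasting f.(A) over a sparse matrix takes the other branch automatically: it evaluates f at zero and stores every entry when the result is nonzero.)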
I agree with @mlubin's solution too. Otherwise, it is too magical for anyone else reading the code.
@StefanKarpinski my vague sense is that zero is special, in that common operations like matrix multiply and scaling generally (though not always) preserve it. The next generalization is to different semirings, which I believe we support via the sparse code's use of |
For sure there is no place for magical transformations and behaviour. @StefanKarpinski might be right in throwing an error if one maps zeros to something else on |
Ah, I see we also use |
Raising an error would be painful as it would mean functions working with any |
Maybe redefining the behaviour of the default |
I'm not sure if a map function applied to only stored elements is even meaningful in general given that zero elements themselves could be stored. The result could change depending on stored zeros, which isn't supposed to happen. I think something like this should first live in a package before we consider adopting it in |
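[Editorial note: a tiny illustration of the stored-zero concern above, not from the thread; it assumes current SparseArrays behavior, where the sparse constructor keeps explicitly supplied zeros.]

using SparseArrays

A = sparse([1, 2], [1, 2], [1.0, 0.0])  # entry (2, 2) is an explicitly stored zero
B = dropzeros(A)                        # same matrix with the stored zero removed
A == B                                  # true: equal as arrays

f(x) = x + 1                            # does not preserve zero

# A map over stored entries only would produce f(0.0) == 1.0 at (2, 2) for A
# but leave (2, 2) implicit (and thus 0.0) for B, so two equal inputs would
# yield unequal outputs.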
But storing zeros in sparse matrix format is yet another issue.
I've outlined before how one can pretty easily have non-zero default sparse matrices decomposed as |
Ah yes, I remember that now.
@StefanKarpinski only time I've ever wanted sparse with a non-zero default is for sparse vectors of bounds, with default |
Ok I think we've decided not to have |
There are a few things to do though. Perhaps we should make |
Reopening this to make sure that some of the above-mentioned things get done.
@tkelman: The real benefit of sparse arrays with non-zero defaults is so that things like map can be made to work generically. Sparse matrices aren't closed under almost any operations unless you can have non-zero defaults. Anyway, I'm clearly failing to convince anyone that sparse arrays with non-zero defaults are a good idea so I'm just going to shut up about it.
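[Editorial note: a quick illustration of the closure point, assuming current SparseArrays broadcasting rather than anything stated in this thread: applying a function that does not preserve zero turns a sparse matrix into an effectively dense one.]

using SparseArrays

A = sprand(1_000, 1_000, 0.01)  # roughly 1% of entries stored
B = exp.(A)                     # exp(0) == 1, so no entry of the result is zero
nnz(A), nnz(B)                  # approximately (10_000, 1_000_000)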
I think @johnmyleswhite had a good point that such a |
Putting more generality into the way sparse matrices are structured is a good goal. These features don't exist in any classical sparse matrix tools due to their many limitations, so quantifying how much demand exists for that kind of functionality is hard. Better extensibility in terms of more coordinate formats, being able to specify block structure (say, a sparse matrix where each "nonzero element" is itself a dense matrix), etc., while still having linear algebra operations work on them in a generic way, is a little higher up on my wish list than non-zero defaults. But it is also considerably harder to accomplish.
I for one agree that sparse matrices with a nonzero default are a neat idea, and it might also be a nice way to generalize them to non-number element types (I seem to remember we had a discussion about that at some point, but can't remember where). I'm not sure when you would want to map a non-pure function anyway, but maybe others have cases for this? I don't know much about actual use cases either, though. One thing that would become trickier is concatenation of matrices with different defaults.
Concerning the original topic ( |
We could add a special-purpose |
I guess that it should be |
Hello all, I came across this issue after experiencing unexpectedly slow map on sparse matrices.
Also, currently
Also, I have one possible application for "sparse" matrices with non-zero defaults. In very large-scale optimization with binary variables, sometimes it is convenient to store a tensor of 1's. There are certainly other ways around that, so some may argue that it's not the best way of doing what it achieves, but I thought I'd mention it as one application that I can think of for my own use.
The next evolution of
julia> foo = sprand(4, 4, 0.5)
4×4 sparse matrix with 8 Float64 nonzero entries:
[3, 1] = 0.433251
[4, 1] = 0.269651
[1, 2] = 0.783236
[2, 2] = 0.693917
[1, 3] = 0.272095
[3, 3] = 0.270313
[1, 4] = 0.755662
[2, 4] = 0.303817
julia> map!(cos, foo.nzval); # or map!(cos, nonzeros(foo)) if you prefer
julia> foo
4×4 sparse matrix with 8 Float64 nonzero entries:
[3, 1] = 0.907606
[4, 1] = 0.963864
[1, 2] = 0.708634
[2, 2] = 0.768747
[1, 3] = 0.96321
[3, 3] = 0.963687
[1, 4] = 0.727818
[2, 4] = 0.954202
?
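[Editorial note: for contrast with the map!(cos, foo.nzval) snippet above, a side-by-side of the two possible semantics; this is a sketch assuming current Julia, where the in-place form is written map!(f, dest, src).]

using SparseArrays

foo = sprand(4, 4, 0.5)

# Stored-entries-only semantics: implicit zeros stay zero even though cos(0) == 1.
bar = copy(foo)
map!(cos, nonzeros(bar), nonzeros(bar))

# Whole-array semantics: cos(0) == 1, so every entry of the result is stored.
baz = cos.(foo)

nnz(bar), nnz(baz)  # (nnz(foo), 16)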
Could you point to an issue describing/illustrating the behavior you mention? Thanks and best!
Ah, so am I to understand that
This issue came up in JuliaOpt/JuMP.jl#889. We wanted a function which performed an operation only on the non-zero elements of a sparse tensor and returned a tensor of identical shape, and weren't sure what the "proper" way of doing that was. As it is, the best available solution doesn't seem particularly elegant.
No, |
Oh, I see... it was doing exactly what it does for
It appears that this topic is quite a minefield. Maybe this isn't the place to bring it up, but one other thing that would probably be extremely helpful for most users, because it would have (as far as I can see) completely unambiguous behavior, is to have something like

for (i, j) ∈ spiterate(X)
    f(X[i, j])
end

iterating first through columns and then rows, skipping all zero elements. Of course, it is already possible to achieve this behavior using |
Please have a look at the definitions and documentation I linked above, you might find them helpful :). To be clear,
Iterating efficiently over sparse matrices is tricky. For example, the suggestion above would be lamentably inefficient (as accessing a sparse matrix/vector via
Potential higher-level interfaces for iteration over sparse matrices/vectors have received extensive discussion elsewhere, in case you feel like diving down a deep rabbit hole :). (I don't have the issue numbers immediately available, unfortunately.) Best!
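[Editorial note: for reference, the efficient access pattern alluded to above — walking the stored entries column by column through the CSC structure instead of indexing X[i, j], which costs a search within the column per access — looks roughly like this. foreach_stored is a made-up name; rowvals, nonzeros, and nzrange are the public SparseArrays accessors.]

using SparseArrays

# Visit every stored entry of A in column-major order without any searching.
function foreach_stored(f, A::SparseMatrixCSC)
    rows = rowvals(A)
    vals = nonzeros(A)
    for j in 1:size(A, 2)       # columns
        for k in nzrange(A, j)  # positions of stored entries in column j
            f(rows[k], j, vals[k])
        end
    end
end

# Example: sum of all stored values.
# s = Ref(0.0); foreach_stored((i, j, v) -> (s[] += v), A); s[]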
Yeah, this issue is a rabbit hole indeed. Just to be clear, I realized that
I'm beginning to see why Julia is the only language I know of with native sparse matrices (I guess Matlab has them, but it doesn't count because it isn't really a general-purpose programming language). Still, it's well worth all the effort everyone has put into it! Thanks for all the help.
Despite the fact that the semantics of this are "a rabbit hole", as @ExpandingMan pointed out, wouldn't it still be useful to include some simple function such as the following for convenience, maybe under a better-thought-out name?

function sparsemap(f, M::SparseMatrixCSC)
    SparseMatrixCSC(M.m, M.n, M.colptr, M.rowval, map(f, M.nzval))
end
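[Editorial note on the sketch above: as written, the result shares M's colptr and rowval buffers, so later structural mutation of either matrix would affect the other, and f is silently assumed to preserve zero. A slightly more defensive variant, under the same made-up name, might copy the structural arrays.]

using SparseArrays

# Like the sketch above, but with copied structure so the result does not
# alias the input's colptr/rowval vectors. Implicit zeros are left untouched,
# so this only matches map's array semantics when f(0) == 0.
function sparsemap(f, M::SparseMatrixCSC)
    SparseMatrixCSC(M.m, M.n, copy(M.colptr), copy(M.rowval), map(f, M.nzval))
end

# sparsemap(x -> 2x, sprand(10, 10, 0.2))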
Dup of #4
Please consider the code below, where julia is converting the sparse matrix into a dense one and doing map on top. Should we have map defined only for the non-zero elements of a sparse matrix instead? Otherwise the slowdown is an issue and one cannot effectively apply λ-calculus in this case!