mat: What about sparse matrices? #745
We understand the importance of sparse matrices, but there is a significant coding burden in creating them. For one, there is a large diversity in sparse kinds. We are close to stabilizing mat64 and so are maybe in a position to think about adding them, but there are no explicit plans as far as I know.
I have spent quite some time thinking about a plan to introduce sparse matrix handling. It is not trivial, both in the implementation of the matrix types and in threading it into the mat64 operation model we use. I do want to do it though. What operations are you particularly interested in?
I need basic operations:
I am going to use only sparse matrices (sparse x sparse) and growing by two dimensions.
Mmm. That's not likely to be handled for a long time. Sparse x sparse is both difficult and unlikely to pay off well. The normal approach to sparse operations is to use a scatter, which is easy to apply efficiently to a dense matrix, but not to a sparse one. In addition, sparsity tends to reduce, so sparse x sparse is likely to result in a dense matrix anyway.
What is the domain you are working in, what number of non-zero elements do you expect (and in what arrangement), and how fast do you need this to be? In the first instance I would suggest that you implement a sparse type on top of a map[struct{int, int}]float64. This will be slow, but it will work.
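The map-based type suggested above can be sketched roughly as follows. Note that `map[struct{int, int}]float64` is not literal Go syntax, so this sketch uses a named key struct (`index`, a hypothetical name); the type `DOK` and its methods are illustrative, not part of mat64.

```go
package main

import "fmt"

// index is a hypothetical key type standing in for the
// map[struct{int, int}]float64 suggested above.
type index struct{ r, c int }

// DOK is a dictionary-of-keys sparse matrix: slow, but simple and correct.
type DOK struct {
	rows, cols int
	data       map[index]float64
}

func NewDOK(r, c int) *DOK {
	return &DOK{rows: r, cols: c, data: make(map[index]float64)}
}

// At returns the element at (i, j); absent keys read as zero.
func (m *DOK) At(i, j int) float64 { return m.data[index{i, j}] }

// Set stores v at (i, j), deleting the entry when v is zero so
// that only non-zeros are kept.
func (m *DOK) Set(i, j int, v float64) {
	if v == 0 {
		delete(m.data, index{i, j})
		return
	}
	m.data[index{i, j}] = v
}

func main() {
	m := NewDOK(1000, 1000)
	m.Set(3, 7, 2.5)
	fmt.Println(m.At(3, 7), m.At(0, 0)) // 2.5 0
}
```

Lookup and assignment are O(1) on average, but iteration order is random, which is why this is only a stopgap before a proper CSR/CSC type.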
I will interject a bit. I would also like to have sparse matrices in gonum; I think that eventually we will have to have them, but I also agree that it is not trivial. I started to implement my own tiny sparse package, but due to lack of free time I did not get far. To give a sample of different (much more basic) sparse operations: at first I would be happy with just being able to build a fixed sparse matrix and compute a (transposed) sparse matrix times dense vector product. Then I could write simple CG and GMRES solvers with which I could solve my toy PDE models. As a next step, I would like to be able to modify values in the sparse matrix without changing its sparsity structure. That would keep me happy for a long time.
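The sparse-matrix-times-dense-vector product asked for above is the core kernel of CG and GMRES. A minimal sketch using a compressed sparse row (CSR) layout might look like this; the type and method names are hypothetical, not mat64 API.

```go
package main

import "fmt"

// CSR is a compressed sparse row matrix: RowPtr has length Rows+1,
// and Col/Val hold the column index and value of each non-zero,
// row by row.
type CSR struct {
	Rows, Cols int
	RowPtr     []int
	Col        []int
	Val        []float64
}

// MulVec computes y = A*x for a dense vector x. Each row costs
// O(nnz in that row), so the whole product is O(nnz).
func (a *CSR) MulVec(x []float64) []float64 {
	y := make([]float64, a.Rows)
	for i := 0; i < a.Rows; i++ {
		var sum float64
		for k := a.RowPtr[i]; k < a.RowPtr[i+1]; k++ {
			sum += a.Val[k] * x[a.Col[k]]
		}
		y[i] = sum
	}
	return y
}

func main() {
	// The 2x2 matrix [[2 0], [1 3]].
	a := &CSR{
		Rows: 2, Cols: 2,
		RowPtr: []int{0, 1, 3},
		Col:    []int{0, 0, 1},
		Val:    []float64{2, 1, 3},
	}
	fmt.Println(a.MulVec([]float64{1, 1})) // [2 4]
}
```

The transposed product A^T x follows the same structure with the roles of row index and `Col[k]` swapped in the accumulation.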
The trick though is that there are so many kinds of sparse matrix. At least in my world of PDEs, it's almost always block-structured rather than random access. GMRES is then coded to operate knowing the block structure.
This is trivial. You can see how this works in the sparse PageRank function in graph/network.
This is also relatively straightforward. Depending on your use model, it is either an iteration over a row/column (depending on your storage) to find the correct index, or a binary search on a sorted row/column. For reference, there are CSparse and Sparse BLAS as competing models to use. Sparse BLAS is really where I would like to go. Neither is trivial.
Yes, both are straightforward and I have them together with a CG solver. I was trying to think of a nice interface that fits with mat64 but it was not so clear to me. That's what I mean by non-trivial. I read the Sparse BLAS document and reference code in detail and tried to use it for inspiration, but its usage of matrix handles as integers would have to be replaced with something more Go-like.
A pointer to a sparse BLAS type (types, presumably, satisfying a sparse BLAS interface, rather than our blas64 method-less types) would be appropriate. The GC takes the role of the sparse BLAS end function, and a NewT constructor handles each of the begins. I think that is all fairly simple (though tedious in the implementation).
The complexity arises in the combinatoric interaction between types both in the sparse BLAS and in mat64.
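The "pointer instead of integer handle" idea above can be sketched like this. The interface and type names (`SparseMatrix`, `COO`, `NewCOO`) are hypothetical stand-ins, not gonum API: the constructor replaces the Sparse BLAS begin/insert/end handle lifecycle, and the garbage collector replaces the explicit destroy call.

```go
package main

import "fmt"

// SparseMatrix is a hypothetical interface standing in for the
// integer handles of the reference Sparse BLAS. Any concrete
// sparse type can satisfy it.
type SparseMatrix interface {
	Dims() (r, c int)
	At(i, j int) float64
}

// COO is one concrete type satisfying the interface, holding
// coordinate-format (i, j, v) triplets.
type COO struct {
	rows, cols int
	I, J       []int
	V          []float64
}

// NewCOO plays the role of the Sparse BLAS begin/end construction
// pair: the matrix is fully built when the constructor returns.
func NewCOO(r, c int, i, j []int, v []float64) *COO {
	return &COO{rows: r, cols: c, I: i, J: j, V: v}
}

func (m *COO) Dims() (int, int) { return m.rows, m.cols }

// At scans the triplets linearly; fine for a sketch, too slow for
// real use without sorting or an index.
func (m *COO) At(i, j int) float64 {
	for k := range m.V {
		if m.I[k] == i && m.J[k] == j {
			return m.V[k]
		}
	}
	return 0
}

func main() {
	var s SparseMatrix = NewCOO(2, 2, []int{0, 1}, []int{1, 0}, []float64{5, 7})
	fmt.Println(s.At(0, 1)) // 5
}
```

The combinatoric problem mentioned above then shows up as type switches (or double dispatch) inside operations like Mul, one case per concrete pair of operand types.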
Any progress here, guys?
I'm a beginner gopher, but I would like to avoid adding a layer of Python to the existing Go backend just to have sparse matrices. If this is still something you would like to see implemented, and you are willing to give me a few pointers, I might give it a go (!).
This repository is deprecated and frozen, so I'll move this to #367, but the overall plan would be to:
I transferred this issue from the old repository here in case there are some useful comments or ideas, although there is already at least one issue about sparse matrices.
I'd like to offer my two cents on the matter:
The fastest way to "assign" to a sparse matrix is to compute these vectors first and then use them to build the matrix. I know MATLAB uses LAPACK behind the scenes for dense matrices, but I'm not sure what it uses for its sparse representation.
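The "compute the vectors first" construction described above (the pattern behind MATLAB's sparse(i, j, v)) can be sketched in Go; `fromTriplets` and `entry` are hypothetical names for illustration.

```go
package main

import "fmt"

// entry is a (row, column) coordinate key.
type entry struct{ i, j int }

// fromTriplets builds a sparse matrix from pre-computed index and
// value vectors in one pass. Duplicate (i, j) coordinates are
// summed, as MATLAB's sparse(i, j, v) does.
func fromTriplets(i, j []int, v []float64) map[entry]float64 {
	m := make(map[entry]float64, len(v))
	for k := range v {
		m[entry{i[k], j[k]}] += v[k]
	}
	return m
}

func main() {
	m := fromTriplets(
		[]int{0, 0, 1},
		[]int{0, 0, 1},
		[]float64{1, 2, 3},
	)
	fmt.Println(m[entry{0, 0}], m[entry{1, 1}]) // 3 3
}
```

Building from complete vectors is fast because no per-assignment reshuffling of a compressed layout is needed; a real implementation would convert the accumulated triplets to CSR/CSC afterwards.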
What would be needed for a first PR on this? I've given Sparse BLAS a shot below (Level 1 routines):

```go
package blas

// SpVector is a sparse vector representation.
// The number of non-zero entries is len(Val), which should equal
// len(Idx) at any given time.
type SpVector struct {
	// N is the length (capacity) of the sparse vector.
	N int
	// Val[i] is the value at index Stride*Idx[i] of the vector.
	Val    []float64
	Idx    []int
	Stride int
}

type Implementation struct{}

// Dusdot (Sparse BLAS Level 1) returns the dot product of sparse x
// and dense y:
//  w = x . y
func (Implementation) Dusdot(nz int, x []float64, indx []int, y []float64, incy int) (w float64) {
	for i := 0; i < nz; i++ {
		w += x[i] * y[indx[i]*incy]
	}
	return w
}

// Dusaxpy (Sparse BLAS Level 1) adds scaled sparse x values to
// dense y:
//  y = alpha*x + y
func (Implementation) Dusaxpy(nz int, alpha float64, x []float64, indx []int, y []float64, incy int) {
	for i := 0; i < nz; i++ {
		y[indx[i]*incy] += alpha * x[i]
	}
}

// Dusga (Sparse BLAS Level 1) gathers dense y values into sparse x:
//  x = y(indx)
func (Implementation) Dusga(nz int, y []float64, incy int, x []float64, indx []int) {
	for i := 0; i < nz; i++ {
		x[i] = y[indx[i]*incy]
	}
}

// Dusgz (Sparse BLAS Level 1) gathers dense y values into sparse x
// and zeroes them in y:
//  x = y(indx)
//  y(indx) = 0
func (Implementation) Dusgz(nz int, y []float64, incy int, x []float64, indx []int) {
	for i := 0; i < nz; i++ {
		x[i] = y[indx[i]*incy]
		y[indx[i]*incy] = 0
	}
}

// Dussc (Sparse BLAS Level 1) scatters sparse x values into dense y:
//  y(indx) = x
func (Implementation) Dussc(nz int, x []float64, y []float64, incy int, indx []int) {
	for i := 0; i < nz; i++ {
		y[indx[i]*incy] = x[i]
	}
}
```
I have large matrices, but they are too sparse. I do not have enough memory, and I do not see other libraries that can do this.