draft: symmetric sparse matrix support #22200

Closed · wants to merge 4 commits

Changes from 1 commit
1 change: 1 addition & 0 deletions base/sparse/sparse.jl
@@ -33,6 +33,7 @@ export AbstractSparseArray, AbstractSparseMatrix, AbstractSparseVector,

include("abstractsparse.jl")
include("sparsematrix.jl")
include("symmetric.jl")
include("sparsevector.jl")
include("higherorderfns.jl")

76 changes: 76 additions & 0 deletions base/sparse/symmetric.jl
@@ -0,0 +1,76 @@
# This file is a part of Julia. License is MIT: https://julialang.org/license

function Symmetric(A::SparseMatrixCSC, uplo::Symbol=:U)
Member:
This is not needed since it is equivalent to the default constructor:

Symmetric(A::AbstractMatrix, uplo::Symbol=:U) = (checksquare(A); Symmetric{eltype(A),typeof(A)}(A, char_uplo(uplo)))

Contributor Author:
Right. Fixed.

    checksquare(A)
    Symmetric{eltype(A), typeof(A)}(A, Base.LinAlg.char_uplo(uplo)) # preserve A
end
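
For context, a small usage sketch of the wrapper; values here are illustrative and not part of the diff:

a = sparse([2.0 1.0; 0.0 3.0])
S = Symmetric(a, :U)        # wraps a without copying; only the upper triangle is read
S[2, 1] == S[1, 2] == 1.0   # the lower triangle is inferred by symmetry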

(*)(A::Symmetric{TA,SparseMatrixCSC{TA,S}}, x::StridedVecOrMat{Tx}) where {TA,S,Tx} = A_mul_B(A, x)

function A_mul_B!(α::Number, A::Symmetric{TA,SparseMatrixCSC{TA,S}}, B::StridedVecOrMat, β::Number, C::StridedVecOrMat) where {TA,S}

    A.data.n == size(B, 1) || throw(DimensionMismatch())
    A.data.m == size(C, 1) || throw(DimensionMismatch())

    A.uplo == 'U' ? A_mul_B_U_kernel!(α, A, B, β, C) : A_mul_B_L_kernel!(α, A, B, β, C)
end

function A_mul_B(A::Symmetric{TA,SparseMatrixCSC{TA,S}}, x::StridedVector{Tx}) where {TA,S,Tx}
    T = promote_type(TA, Tx)
    A_mul_B!(one(T), A, x, zero(T), similar(x, T, A.data.n))
end

function A_mul_B(A::Symmetric{TA,SparseMatrixCSC{TA,S}}, B::StridedMatrix{Tx}) where {TA,S,Tx}
    T = promote_type(TA, Tx)
    A_mul_B!(one(T), A, B, zero(T), similar(B, T, (A.data.n, size(B, 2))))
end
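
For orientation, the five-argument form follows the usual BLAS gemm convention, C ← α·A·B + β·C. A minimal usage sketch, with illustrative data:

a = sprand(10, 10, 0.25); a = a + a'        # symmetric so both triangles agree
b = rand(10)
c = zeros(10)
A_mul_B!(1.0, Symmetric(a, :U), b, 0.0, c)  # c ← A*b
c ≈ a * b                                   # matches ordinary sparse matvec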

function A_mul_B_U_kernel!(α::Number, A::Symmetric{TA,SparseMatrixCSC{TA,S}}, B::StridedVecOrMat, β::Number, C::StridedVecOrMat) where {TA,S}
Member:
There are just 2 lines that are different between the U/L kernels. I think we can merge them, and add a Val{:U}/Val{:L} as last argument to the kernel? Something like:

function A_mul_B_UL_kernel!(α::Number, A::Symmetric{TA,SparseMatrixCSC{TA,S}},
                            B::StridedVecOrMat, β::Number, C::StridedVecOrMat,
                            ::Type{Val{uplo}}) where {TA,S,uplo}
    colptr = A.data.colptr
    rowval = A.data.rowval
    nzval = A.data.nzval
    if β != 1
        β != 0 ? scale!(C, β) : fill!(C, zero(eltype(C)))
    end
    @inbounds for k = 1 : size(C, 2)
        for col = 1 : A.data.n
            αxj = α * B[col, k]
            tmp = TA(0)
            for j = (uplo == :U ? (colptr[col] : (colptr[col + 1] - 1)) :
                                  ((colptr[col + 1] - 1) : -1 : colptr[col]))
                row = rowval[j]
                (uplo == :U ? row > col : row < col) && break  # assume indices are sorted
                a = nzval[j]
                C[row, k] += a * αxj
                row == col || (tmp += a * B[row, k])
            end
            C[col, k] += α * tmp
        end
    end
    C
end

Contributor Author:
There's a performance impact for that, which is why I separated them.

Contributor Author:
In fact, there's a performance impact for allowing C to be general, which makes me think that a separate version where C is just a vector is desirable.

Member:
> There's a performance impact for that, which is why I separated them.

That should be compiled away.

> In fact, there's a performance impact for allowing C to be general, which makes me think that a separate version where C is just a vector is desirable.

If the impact is noticeable it makes sense to have a separate kernel for matvec.
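
For illustration, a vector-only variant could drop the loop over the columns of C. A hedged sketch under the PR's conventions; the name A_mul_B_U_vec_kernel! is hypothetical and not part of the diff:

function A_mul_B_U_vec_kernel!(α::Number, A::Symmetric{TA,SparseMatrixCSC{TA,S}},
                               b::StridedVector, β::Number, c::StridedVector) where {TA,S}
    colptr = A.data.colptr
    rowval = A.data.rowval
    nzval = A.data.nzval
    if β != 1
        β != 0 ? scale!(c, β) : fill!(c, zero(eltype(c)))
    end
    @inbounds for col = 1 : A.data.n
        αxj = α * b[col]
        tmp = TA(0)
        for j = colptr[col] : (colptr[col + 1] - 1)
            row = rowval[j]
            row > col && break          # assume sorted row indices
            a = nzval[j]
            c[row] += a * αxj
            row == col || (tmp += a * b[row])
        end
        c[col] += α * tmp               # α applied once, after the inner loop
    end
    c
end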

Contributor Author (@dpo, Jun 4, 2017):
Using the matrix in my gist, here's the benchmark of A * b:

BenchmarkTools.Trial: 
  memory estimate:  3.85 MiB
  allocs estimate:  2
  --------------
  minimum time:     21.475 ms (0.00% GC)
  median time:      22.218 ms (0.00% GC)
  mean time:        22.425 ms (1.17% GC)
  maximum time:     27.263 ms (5.92% GC)
  --------------
  samples:          223
  evals/sample:     1

Now U * b and L * b using separate kernels:

BenchmarkTools.Trial: 
  memory estimate:  3.85 MiB
  allocs estimate:  2
  --------------
  minimum time:     28.259 ms (0.00% GC)
  median time:      29.112 ms (0.00% GC)
  mean time:        29.481 ms (0.88% GC)
  maximum time:     35.077 ms (0.00% GC)
  --------------
  samples:          170
  evals/sample:     1
BenchmarkTools.Trial: 
  memory estimate:  3.85 MiB
  allocs estimate:  2
  --------------
  minimum time:     33.965 ms (0.00% GC)
  median time:      35.219 ms (0.00% GC)
  mean time:        35.797 ms (0.92% GC)
  maximum time:     40.802 ms (0.00% GC)
  --------------
  samples:          140
  evals/sample:     1

and U * b and L * b using the joint kernel (after obvious corrections):

BenchmarkTools.Trial: 
  memory estimate:  3.85 MiB
  allocs estimate:  4
  --------------
  minimum time:     33.787 ms (0.00% GC)
  median time:      34.744 ms (0.00% GC)
  mean time:        35.347 ms (0.88% GC)
  maximum time:     40.835 ms (0.00% GC)
  --------------
  samples:          142
  evals/sample:     1
BenchmarkTools.Trial: 
  memory estimate:  3.85 MiB
  allocs estimate:  4
  --------------
  minimum time:     33.890 ms (0.00% GC)
  median time:      34.753 ms (0.00% GC)
  mean time:        35.240 ms (0.89% GC)
  maximum time:     41.668 ms (0.00% GC)
  --------------
  samples:          142
  evals/sample:     1

I didn't know about Val. It improves somewhat over my initial joint version, but there will still be a noticeable difference across large numbers of matrix-vector products.
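
For readers unfamiliar with the pattern: Val{:U} and Val{:L} are distinct types, so passing one as a final argument makes uplo a type parameter and lets the compiler specialize each branch away. A minimal sketch; the helper name nzrange_for is hypothetical:

nzrange_for(colptr, col, ::Type{Val{:U}}) = colptr[col] : (colptr[col + 1] - 1)       # forward: rows ≤ col first
nzrange_for(colptr, col, ::Type{Val{:L}}) = (colptr[col + 1] - 1) : -1 : colptr[col]  # backward: rows ≥ col first
# A call like nzrange_for(colptr, col, Val{:U}) dispatches at compile time,
# so no runtime uplo check remains in the generated loop body.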


    colptr = A.data.colptr
    rowval = A.data.rowval
    nzval = A.data.nzval
    if β != 1
        β != 0 ? scale!(C, β) : fill!(C, zero(eltype(C)))
    end
    @inbounds for k = 1 : size(C, 2)
        @inbounds for col = 1 : A.data.n
            αxj = α * B[col, k]
            tmp = TA(0)
            @inbounds for j = colptr[col] : (colptr[col + 1] - 1)
                row = rowval[j]
                row > col && break # assume indices are sorted
                a = nzval[j]
                C[row, k] += a * αxj                 # stored upper-triangle entry
                row == col || (tmp += a * B[row, k]) # mirrored lower-triangle contribution
Contributor:
I think this should have an alpha

Contributor Author:
Of course, thanks. I chose to multiply tmp by alpha after the loop to save a few cycles.

Contributor Author:
ps: should the kernels be exported?

Contributor:
I don't think so.

            end
            C[col, k] += tmp
        end
    end
    C
end
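
To see why one stored triangle suffices: an entry a = A[row, col] with row < col contributes a·B[col, k] to C[row, k] directly and, by symmetry, a·B[row, k] to the accumulator for C[col, k]; the diagonal is visited exactly once. A quick check of that reasoning, with illustrative data:

a = sprand(8, 8, 0.3); a = a + a'    # symmetric, both triangles stored
b = rand(8)
u = Symmetric(triu(a), :U)           # keep only the upper triangle
u * b ≈ a * b                        # the upper triangle alone reproduces the full product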

function A_mul_B_L_kernel!(α::Number, A::Symmetric{TA,SparseMatrixCSC{TA,S}}, B::StridedVecOrMat, β::Number, C::StridedVecOrMat) where {TA,S}

    colptr = A.data.colptr
Contributor:
4 space indent, and shouldn't start a function body with a blank line

Contributor Author:
Should be fixed. Besides doing the same job for Hermitian, what other methods not yet covered would users find crucial?

    rowval = A.data.rowval
    nzval = A.data.nzval
    if β != 1
        β != 0 ? scale!(C, β) : fill!(C, zero(eltype(C)))
    end
    @inbounds for k = 1 : size(C, 2)
        @inbounds for col = 1 : A.data.n
            αxj = α * B[col, k]
            tmp = TA(0)
            @inbounds for j = (colptr[col + 1] - 1) : -1 : colptr[col]
                row = rowval[j]
                row < col && break
                a = nzval[j]
                C[row, k] += a * αxj
                row == col || (tmp += a * B[row, k])
            end
            C[col, k] += tmp
        end
    end
    C
end
13 changes: 13 additions & 0 deletions test/sparse/sparse.jl
@@ -137,6 +137,19 @@ end
end
end

@testset "symmetric sparse matrix-vector multiplication" begin
    for i = 1:5
        a = sprand(10, 10, 0.25)
        a = a + a'
        b = rand(10)
        ab = a*b
        u = Symmetric(a, :U)
        @test maximum(abs.(u*b - ab)) < 100*eps() * maximum(abs.(ab))
        l = Symmetric(a, :L)
        @test maximum(abs.(l*b - ab)) < 100*eps() * maximum(abs.(ab))
    end
end

@testset "sparse matrix * BitArray" begin
A = sprand(5,5,0.2)
B = trues(5)