Slow sparse matrix-vector multiplication with symmetric matrices #32689

Closed
mancolric opened this issue Jul 26, 2019 · 6 comments
Labels
linear algebra · performance (Must go faster) · sparse (Sparse arrays)

Comments

mancolric commented Jul 26, 2019

Hello,

I have performance problems when multiplying a symmetric sparse matrix by a vector: the multiplication is much slower than when the matrix symmetry is not taken into account. Other people have reported the same problem:
https://discourse.julialang.org/t/slow-sparse-matrix-vector-product-with-symmetric-matrices/3968

Please consider the following MWE:

using SparseArrays
using LinearAlgebra

function test1()
    N = Int(3e4)
    x = ones(N)              # dense vector of ones
    A = sparse(1:N, 1:N, x)  # sparse diagonal matrix
    B = Symmetric(A)         # symmetric wrapper around A

    # Repeat several times so the computer has time to warm up:
    for ii = 1:10
        @time y = A * x
        @time y = B * x
    end

    return
end

The last two output lines are

0.000134 seconds (2 allocations: 234.453 KiB)
4.990873 seconds (2 allocations: 234.453 KiB)

I also have the same problem with non-diagonal sparse matrices and with matrix-matrix multiplications if one of the matrices is symmetric.

Thanks,
Manu

andreasnoack added the linear algebra, performance, and sparse labels on Jul 26, 2019
andreasnoack (Member) commented:

cc @KlausC

KlausC (Contributor) commented Jul 27, 2019

I just want to point again to PR #31563, which I think will enable many kinds of specializations on wrapped sparse matrices; unfortunately there has been no progress there. Once it is merged, I will also be able to resolve this issue. On reflection, that PR targets other issues in the wrapped-sparse area; this particular issue has already been treated in a PR merged into v1.2.

KlausC (Contributor) commented Jul 28, 2019

There seems to be a mismatch of Julia versions; I saw the same effect on an older version of Julia.
@mancolric, could you please report your Julia VERSION? I think the improvement #30018 was merged into release 1.2 but not backported to 1.1.

First, the older version that I could test:

julia> VERSION
v"1.0.4"
julia> using BenchmarkTools
julia> using SparseArrays
julia> using LinearAlgebra
julia> N = 30000
30000
julia> x = ones(N);
julia> A = sparse(1:N, 1:N, x);
julia> B = Symmetric(A);
julia> @btime $A * $x;
  76.404 μs (2 allocations: 234.45 KiB)
julia> @btime $B * $x;
  8.954 s (2 allocations: 234.45 KiB)

Now the current development version:

julia> VERSION
v"1.3.0-alpha.13"
julia> using BenchmarkTools
julia> using SparseArrays
julia> using LinearAlgebra
julia> N = 30000
30000
julia> x = ones(N);
julia> A = sparse(1:N, 1:N, x);
julia> B = Symmetric(A);
julia> @btime $A * x;
  85.020 μs (2 allocations: 234.45 KiB)

julia> @btime $B * x;
  157.999 μs (2 allocations: 234.45 KiB)

mancolric (Author) commented Jul 28, 2019

Yes, I am using the 1.0.4 LTS version. I will update to 1.3 and try again.

Do you think B*x will outperform A*x in future releases?

KlausC (Contributor) commented Jul 28, 2019

@mancolric, not in general. But the ratio drops below 2, and even below 1, for more densely populated matrices. See the example below, which has ~100 entries per column. With 1000 entries per column the wrapped matrix is even faster than the unwrapped one; break-even occurs at roughly 250 entries per column.

The current implementation is optimized to access only the upper or lower triangle of the stored matrix. That requires identifying the position of the diagonal in each sparse column, which costs some effort per column (a rough sketch of the idea follows). Further improvement is of course possible, but probably not worth the effort.
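Roughly, the idea looks like the following sketch (just an illustration of the approach, not the actual SparseArrays code), computing y = Symmetric(A, :U) * x from the stored upper triangle only; the per-column scan up to the diagonal is the bookkeeping mentioned above:

using SparseArrays, LinearAlgebra

# Sketch: y = Symmetric(A, :U) * x using only entries stored on or above the
# diagonal of the CSC matrix A. Each stored off-diagonal entry A[i,j] (i < j)
# is applied twice, once directly and once as its mirror A[j,i]; the loop has
# to find where the diagonal sits in every column in order to stop there.
function symv_upper_sketch(A::SparseMatrixCSC, x::AbstractVector)
    n = size(A, 2)
    y = zeros(promote_type(eltype(A), eltype(x)), n)
    rows, vals = rowvals(A), nonzeros(A)
    for j in 1:n
        for k in nzrange(A, j)
            i = rows[k]
            i > j && break              # below the diagonal: skip the rest of the column
            if i == j
                y[j] += vals[k] * x[j]  # diagonal entry counts once
            else
                y[i] += vals[k] * x[j]  # stored entry A[i,j]
                y[j] += vals[k] * x[i]  # its mirror A[j,i]
            end
        end
    end
    return y
end

# For a square sparse A: symv_upper_sketch(A, x) ≈ Symmetric(A, :U) * x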

julia> N = 30000; A = sprand(N, N, 100/N); B = Symmetric(A); A = sparse(B);
julia> @btime $A * $x;
  7.787 ms (2 allocations: 234.45 KiB)
julia> @btime $B * $x;
  8.688 ms (2 allocations: 234.45 KiB)

julia> N = 30000; A = sprand(N, N, 1000/N); B = Symmetric(A); A = sparse(B);
julia> @btime $A * $x;
  77.542 ms (2 allocations: 234.45 KiB)
julia> @btime $B * $x;
  65.791 ms (2 allocations: 234.45 KiB)

julia> N = 30000; A = sprand(N, N, 250/N); B = Symmetric(A); A = sparse(B);
julia> @btime $A * $x;
  19.711 ms (2 allocations: 234.45 KiB)
julia> @btime $B * $x;
  19.750 ms (2 allocations: 234.45 KiB)
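
As a rough, machine-dependent rule of thumb from the timings above, you can check whether the wrapper is likely to pay off by looking at the average number of stored entries per column, for example:

using SparseArrays

# Average number of stored entries per column; compare against the ~250
# break-even observed above (only a rough, machine-dependent guide).
avg_entries_per_col(A::SparseMatrixCSC) = nnz(A) / size(A, 2)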

mancolric (Author) commented:

OK, thank you for your answer! I will upgrade to version 1.3 and try.


3 participants