SparseAlgebra #12

Open

wants to merge 3 commits into base: master

Conversation

FelixBangerter

…nclude MatrixModule.jl

@coveralls

coveralls commented Jul 19, 2018

Coverage Status

Coverage decreased (-16.1%) to 69.308% when pulling e6a0d41 on PicricAcid96:pull-request/e6a0d41a into 69bde0d on vonDonnerstein:master.

Owner

@vonDonnerstein vonDonnerstein left a comment

Hi Felix, thank you for your pull request. I've just had a quick read through your changes (see my comments). Looks very promising! I've mostly commented on more general aspects. Since you know best how to write your code, I hope I wasn't too intrusive about any implementation details. If you like, I could have a go at those as well. Please add a few unit or integration tests, so that I or anyone else working with your code doesn't unintentionally break your work. Thanks!
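A minimal sketch of the kind of unit test being requested, using Julia's Test standard library. The function under test here is a stand-in placeholder, not actual QuantumLab or PR code:

```julia
using Test

# Stand-in for a dense -> sparse -> dense round-trip; a real test would
# construct a SparseMatrixBCSR and convert it back to a dense matrix.
roundtrip(M) = copy(M)

@testset "round-trip (stand-in)" begin
    M = [1.0 0.0; 0.0 2.0]
    @test roundtrip(M) == M
    @test roundtrip(M) !== M   # a fresh array, not the same object
end
```

A round-trip test like this is a cheap way to catch accidental breakage in both the constructor and the conversion back to dense.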

@@ -60,7 +66,7 @@ function writePixmap(filename::String,matrix::Matrix)
',' => "#ffeebb",
'.' => "#ffeeee",
' ' => "#ffffff",
'0' => "#bbbbbb", # zero by matrix format (blocking)
'0' => "#bbbbbb", # zero by matrix format (blocking)
Owner

intentional?

Owner

I've added a comment to http://schurkus.com/2015/12/05/quantumlab-design-guidelines/ to clarify indentation:
I'm assuming one tab stop corresponds to 8 spaces (the GitHub default).

Author

It wasn't intentional; I changed it back.

rowptr::Array{Int,1}
rowpattern::Array{Tuple{Int64,Int64},1}
colpattern::Array{Tuple{Int64,Int64},1}
BCSRSpM(val,col,rowptr,rowpattern,colpattern) = new(val,col,rowptr,rowpattern,colpattern)
Owner

Isn't this just the default constructor? In that case you could leave it out.

Author

Done.



"""
BCSRSpM(val,col,rowptr,rowpattern,colpattern)
Owner

The CSC matrix format is called SparseMatrixCSC in julia, so I would think SparseMatrixBCSR would be a consistent name for your type. What do you think?

Author

I agree, I changed it.

end

"""
symBCSRSpM(val,col,rowptr,pattern)
Owner

This I would probably call SparseMatrixBCSRSymmetric or something in that vein. Similar to the one above. As I understand it, this is a type on its own and not a Symmetric-view on the BCSR type, is it?

Author

I agree; I changed it.
Yes, SparseMatrixBCSRSymmetric is a new type for symmetric matrices. It exploits the fact that for a symmetric matrix only the lower-triangular blocks have to be stored; the corresponding upper-triangular part can be constructed by mirroring along the diagonal.
I'm not sure it's worth storing the diagonal blocks as LowerTriangulars, as they need more memory than plain arrays:

julia> Base.summarysize(randn(100,100))
80000

julia> Base.summarysize(LowerTriangular(randn(100,100)))
80008

Your thoughts on that?

Owner

As discussed here, the point of LowerTriangular seems not to be memory reduction but instead runtime improvements in BLAS. Since BLAS requires a certain memory layout, this makes it necessary to save the full matrix (as you noticed).

I have just done a few timings on my machine. It seems that the use of LowerTriangular saves one third to half of the matrix multiplication time (depending on the size of the matrix blocks, 100x100 - 1000x1000)*. But since your matrix is really symmetric and not merely triangular, this can't be sufficient: you have to do two multiplications if you choose LowerTriangulars.

Another approach would be to use Symmetric views on the diagonal matrix blocks. But since Symmetric(matrix) seems not to be exploited and the underlying data is still the full matrix, these timings don't really improve:
julia> r = Matrix(Symmetric(rand(1000,1000)));
julia> s = Symmetric(r);
julia> rr = zeros(r); @time for i in 1:100 rr += r*r end
24.662503 seconds (400 allocations: 1.490 GiB, 0.25% gc time)
julia> rs = zeros(r); @time for i in 1:100 rs += r*s end
24.902586 seconds (1.89 k allocations: 2.235 GiB, 0.34% gc time)
julia> sr = zeros(r); @time for i in 1:100 sr += s*r end
27.563856 seconds (7.70 k allocations: 1.491 GiB, 0.24% gc time)
julia> ss = zeros(r); @time for i in 1:100 ss += s*s end
26.048422 seconds (1.58 k allocations: 2.980 GiB, 0.58% gc time)

* results for LowerTriangular
julia> l = LowerTriangular(r);
julia> ll = zeros(r); @time for i in 1:100 ll += l*l end
13.332010 seconds (700 allocations: 2.235 GiB, 0.50% gc time)
julia> rl = zeros(r); @time for i in 1:100 rl += r*l end
12.494506 seconds (2.68 k allocations: 1.490 GiB, 0.46% gc time)
julia> lr = zeros(r); @time for i in 1:100 lr += l*r end
12.227783 seconds (2.02 k allocations: 1.490 GiB, 0.45% gc time)

Author

Thank you for the explanation; I'll leave that feature in for now. While matrix multiplications with LowerTriangulars are faster, I am not convinced that this is much of a benefit, as only the blocks on the main diagonal get stored as LowerTriangular matrices.
I'll think about a way of exploiting the fact that the corresponding dense matrix is symmetric in the future.
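The mirroring idea discussed above can be sketched as follows. This is a hypothetical illustration, not the PR's actual SparseMatrixBCSRSymmetric layout: only the diagonal and strictly-lower blocks are stored, and the dense matrix is reconstructed by transposing each off-diagonal block across the diagonal.

```julia
# Store only diagonal and strictly-lower blocks of a symmetric matrix
# (block indices and contents are illustrative):
blocks = Dict((1, 1) => [1.0 2.0; 2.0 5.0],   # diagonal block (itself symmetric)
              (2, 1) => [3.0 4.0; 6.0 7.0])   # strictly lower off-diagonal block
n, bs = 4, 2                                  # matrix size, block size
M = zeros(n, n)
for ((bi, bj), B) in blocks
    rows = (bi - 1) * bs + 1 : bi * bs
    cols = (bj - 1) * bs + 1 : bj * bs
    M[rows, cols] = B
    bi != bj && (M[cols, rows] = B')          # mirror along the diagonal
end
M == M'   # true: the reconstructed matrix is symmetric
```

This stores roughly half the off-diagonal blocks at the cost of one transpose per block on reconstruction.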

type BCSRSpM
val::Array{Array{Float64,2},1}
col::Array{Int,1}
rowptr::Array{Int,1}
Owner

As this is a very specific nomenclature, I would suggest adding a reference to the documentation, where the details of the implementation (with these names) can be found, or - if no such publication exists - explain the basic idea with a few minimal examples in the documentation. You could have a look, e.g., at the documentation for Symmetric to get an idea.

Author

@FelixBangerter FelixBangerter Jul 20, 2018

Added a small example in the documentation, also did that for other BCSRSpM constructors (Note that BCSRSpM is now called SparseMatrixBCSR).

computeDifferenceRowptr(Integer,rowptr)
returns the running indices to run over all blocks in a row; b-a = # of blocks in a row
"""
function computeDifferenceRowptr(i::Int,rowptr)
Owner

For this and many other functions in this file, I'm still unclear about how "internal" they are meant to be. While I don't get the idea of this function at all, maybe that's not a problem if it isn't ever to be called by any user of the library. If that is your intent, maybe you could group these "helper" functions at the beginning or end of the file (e.g., putting "# HELPERS" before them)?

Author

I reordered the code and made sections SPARSE MATRIX CONSTRUCTORS, HELPERS, BASE EXTENSIONS and MATRIX OPERATIONS for better readability. computeDifferenceRowptr is not supposed to be called by the average user; it's just a helper function.
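For readers unfamiliar with the nomenclature, the docstring's "b-a = # of blocks in a row" matches the standard (B)CSR rowptr convention, which can be sketched as follows (what computeDifferenceRowptr actually returns is an assumption based on its docstring, not the PR's code):

```julia
# rowptr[i] is the index of the first stored block of block-row i, so
# rowptr[i+1] - rowptr[i] gives the number of blocks in that row.
rowptr = [1, 3, 4, 7]                          # 3 block-rows, 6 stored blocks

blocksinrow(i, rowptr) = rowptr[i + 1] - rowptr[i]
[blocksinrow(i, rowptr) for i in 1:3]          # [2, 1, 3]

# Running indices over the blocks of row i (the presumed purpose of the
# helper, per its docstring):
blockrange(i, rowptr) = rowptr[i]:rowptr[i + 1] - 1
blockrange(3, rowptr)                          # 4:6
```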

multiplySpMV(SpM,Vector)
returns the product Vector2 = SpM*Vector
"""
function multiplySpMV(SpM::BCSRSpM,vec::Array{Float64,1})
Owner

Isn't this an extension of Base.*?

Author

Yes, I added the extension in the code as *(SpM::SparseMatrixBCSR,vec::Array{Float64,1}) = multiplySpMV(SpM,vec).

Owner

Why not just rename this function?

function Base.*(SpM::SparseMatrixBCSR, vec::Vector{Float64})
  if SpM.colpattern[...
end

Maybe I'm just missing the advantage of having it as a separate function. I've noticed that this style of naming a separate function appears in Base in some places, but I never understood what it's good for, so I would be eager to learn.

Author

Honestly, I didn't know this was an option. I see no point in having it as a separate function; I just thought that was the way to do it. I changed it. You don't actually need Base.* as the function name; just * is enough if Base.* is imported.
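The import mechanism mentioned here can be sketched with a toy type (not the PR's SparseMatrixBCSR): after `import Base: *`, a plain method definition on `*` extends the built-in operator.

```julia
import Base: *

# Toy type for illustration only.
struct Scaled
    factor::Float64
end

# With Base.* imported, a bare * method definition extends the operator.
*(s::Scaled, vec::Vector{Float64}) = s.factor .* vec

Scaled(2.0) * [1.0, 2.0, 3.0]   # [2.0, 4.0, 6.0]
```

Without the import, the method would have to be written as `Base.:*(s::Scaled, vec::Vector{Float64}) = ...` to attach it to the same generic function.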

return M
end

#TODO: does something similar to computeBlock -> merge the two
Owner

I'll wait with my comments on those functions for which you still have TODOs until you are ready.

return SpMT
end

#@Henry: can't be called from MatrixModule like this because it uses ShellModule
Owner

I'm not sure benchmarking should be part of QuantumLab itself. Maybe, if you want to, we could have a separate benchmark.jl script in QL/test to separate it from the code functionality. Or you could keep that script separate (not committing it to QuantumLab) and just run it when you are ready and report the results in your lab report.

Author

I'll just leave it out; it's only meant for my report anyway.

computePattern(basis)
returns the matrix blocking pattern for a given basis
"""
function computePattern(basis::Array{ShellModule.Shell,1})
Owner

basis is a bit misleading for the argument name, because it isn't a Basis (e.g. GaussianBasis). How about shells::Vector{Shell}?

Author

Good point; I changed all instances of basis to shells.

returns a matrix M in BCSR format with the given blocking row- and colpattern
"""
function BCSRSpM(M::Array{Float64,2},rowpattern::Array{Tuple{Int64,Int64},1},colpattern::Array{Tuple{Int64,Int64},1})
(checkPattern(M,rowpattern,1) && checkPattern(M,colpattern,2)) || error("Hey idiot your pattern doesn't match the matrix ᕦ(ò_óˇ)ᕤ")
Owner

Oh, I must have missed this on my first read. While I, too, enjoy humor in error messages from time to time, I very much prefer if they stay polite and clear. When the user uses the library in an incorrect way, the fact that it doesn't work as expected is already quite frustrating. Being insulted by the code doesn't really help with that ;).
I've added a corresponding bullet point to the coding guidelines.

Author

Good point; I wouldn't want to receive such an error message either, to be honest. I changed the error messages to be more informative and polite, but I left the ASCII smileys in.
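An informative error message of the kind agreed on above can be sketched with string interpolation; the check and message text here are illustrative, not the PR's actual checkPattern code:

```julia
# Report what was expected and what was received, politely and concretely.
function checksamelength(a, b)
    length(a) == length(b) ||
        error("dimension mismatch: got lengths $(length(a)) and $(length(b))")
    return true
end

checksamelength([1, 2], [3, 4])   # true
```

Stating the expected and actual values in the message lets the user fix the call without reading the library source.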

@vonDonnerstein
Owner

Felix, after you have made your changes, could you update your pull request? See the last point of Detailed Description for how to do that. No hurry, just take the time that you need. Thanks.
