Construct BlockSparseArray when slicing with graded unit ranges #36
Conversation
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main      #36      +/-   ##
==========================================
- Coverage   74.02%   73.83%   -0.20%
==========================================
  Files          29       30       +1
  Lines        1001     1051      +50
==========================================
+ Hits          741      776      +35
- Misses        260      275      +15
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
I believe the only issues remaining are due to a change to DerivableInterfaces.jl's …
I've added tests. There is one final test failure, caused by the fact that the changes to the broadcast/map functionality lead to this new behavior:

```julia
julia> using BlockSparseArrays: BlockSparseMatrix

julia> using BlockArrays: Block

julia> a = BlockSparseMatrix{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseMatrix{Float64, Matrix{Float64}, SparseArraysBase.SparseMatrixDOK{Matrix{Float64}, BlockSparseArrays.GetUnstoredBlock{Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}:
 .  .  │  .  .
 .  .  │  .  .
 ──────┼──────
 .  .  │  .  .
 .  .  │  .  .

julia> a[Block(1, 1)] = randn(2, 2)
2×2 Matrix{Float64}:
 -0.528387  -0.36891
  0.451625   1.32933

julia> a .= 0
2×2-blocked 4×4 BlockSparseMatrix{Float64, Matrix{Float64}, SparseArraysBase.SparseMatrixDOK{Matrix{Float64}, BlockSparseArrays.GetUnstoredBlock{Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}:
 0.0  0.0  │  .  .
 0.0  0.0  │  .  .
 ──────────┼──────
  .    .   │  .  .
  .    .   │  .  .
```

while ideally it would drop the block, like:

```julia
julia> fill!(a, 0)
2×2-blocked 4×4 BlockSparseMatrix{Float64, Matrix{Float64}, SparseArraysBase.SparseMatrixDOK{Matrix{Float64}, BlockSparseArrays.GetUnstoredBlock{Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}}, Tuple{BlockedOneTo{Int64, Vector{Int64}}, BlockedOneTo{Int64, Vector{Int64}}}}:
 .  .  │  .  .
 .  .  │  .  .
 ──────┼──────
 .  .  │  .  .
 .  .  │  .  .
```

I'll fix that and then merge.
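To make the intended behavior concrete, a test along the following lines could cover it once that is fixed. This is just a sketch, not code from this PR; it assumes `blockstoredlength` is the accessor for the number of stored blocks (the exact name may differ).

```julia
using Test
using BlockArrays: Block
using BlockSparseArrays: BlockSparseMatrix, blockstoredlength  # accessor name assumed

a = BlockSparseMatrix{Float64}([2, 2], [2, 2])
a[Block(1, 1)] = randn(2, 2)
@test blockstoredlength(a) == 1

# Broadcasting zeros in place should behave like `fill!(a, 0)`:
# the array is zero and the explicitly stored zero block is dropped.
a .= 0
@test iszero(a)
@test blockstoredlength(a) == 0
```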
This requires ITensor/DerivableInterfaces.jl#14 to be fixed, which is in progress in ITensor/DerivableInterfaces.jl#15.
With that fixed, this enables slicing a dense array with graded unit ranges: the result is a block sparse array, and zero blocks are additionally dropped. Note that it also preserves dual information, which is enabled by ITensor/GradedUnitRanges.jl#10.
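For concreteness, here is a rough sketch of the kind of usage this enables. It is only illustrative: the `gradedrange`, `dual`, and `U1` names are assumed to come from GradedUnitRanges.jl and SymmetrySectors.jl, and are not part of this PR.

```julia
using GradedUnitRanges: gradedrange, dual
using SymmetrySectors: U1

# A graded unit range with two sectors of length 2 each.
r = gradedrange([U1(0) => 2, U1(1) => 2])

# A dense matrix that is nonzero only in the symmetry-allowed (diagonal) blocks.
a = zeros(4, 4)
a[1:2, 1:2] = randn(2, 2)
a[3:4, 3:4] = randn(2, 2)

# Slicing with graded unit ranges now constructs a block sparse array,
# drops the blocks that are exactly zero, and keeps the dual information
# carried by the axes.
b = a[dual(r), r]
```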
This fixes ITensor/GradedUnitRanges.jl#9, and as discussed, it will be helpful for constructing Abelian symmetric operators and states in https://github.com/ITensor/QuantumOperatorDefinitions.jl.