
Simplify similarmatrix, remove optimize! #106

Merged
merged 3 commits on Sep 30, 2020
Conversation

pablosanjose
Owner

Closes #105

This PR is an alternative to #105

PR #105 has some serious problems. Internalizing the Bloch matrix into Hamiltonian reduces flexibility, since the matrix type often depends on the context (e.g. on the method used to diagonalize the Hamiltonian). Needing to mutate or regenerate a Hamiltonian just to change its Bloch matrix type is inelegant and inefficient. Moreover, a chain of transformations of a Hamiltonian almost always forced a new Bloch matrix to be allocated at each step. The decision to take the Julian bloch!(matrix, h, ...) approach was ultimately the right one. The only problem with requiring an externally preallocated Bloch matrix was that the similarmatrix used to construct it was sometimes not smart enough.
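For context, a minimal sketch of the preallocate-then-fill pattern this PR keeps (names taken from the PR text; the lattice/model constructors and exact signatures are assumptions about the Quantica.jl API of that time and may differ):

```julia
using Quantica

# Assumed constructors for illustration only
lat = LatticePresets.honeycomb()
h   = hamiltonian(lat, hopping(1.0))

# Preallocate a work matrix whose type and sparsity pattern suit h...
matrix = similarmatrix(h)

# ...then fill it in place with the Bloch matrix at the given Bloch phases,
# reusing the same allocation across calls
bloch!(matrix, h, (0.2, 0.3))
```

The point of the pattern is that the caller, not the Hamiltonian, owns the work matrix, so the same h can feed solvers that want different matrix types without mutating h itself.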

This PR takes the good bits of #105 without internalizing the Bloch matrix. In particular, we can now do similarmatrix(h, flatten) to keep the matrix type of h but with a scalar eltype (this is still inferable, since flatten is a function with a unique type). Since flatten is already an exported function with the same meaning of flattening eltypes, this seems appropriate and avoids a new exported function. We also get rid of optimize!, and simply force the output of similarmatrix(h, ...) to already be optimized for h, thus avoiding sparse element splicing when calling bloch!/bloch.
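A hedged sketch of the new similarmatrix(h, flatten) call described above (h is a multiorbital Hamiltonian built as an assumption for illustration; the concrete eltypes shown in comments are indicative, not guaranteed):

```julia
using Quantica

# Assumed two-orbital Hamiltonian for illustration
lat = LatticePresets.honeycomb()
h   = hamiltonian(lat, hopping(1.0); orbitals = Val(2))

smat_block = similarmatrix(h)           # same eltype as h (e.g. a small static block)
smat_flat  = similarmatrix(h, flatten)  # same matrix type, but scalar eltype (e.g. ComplexF64)

# No optimize! needed: smat_flat is already structured for h,
# so bloch! fills it without splicing new sparse elements
bloch!(smat_flat, h, (0.2, 0.3))
```

Passing the function flatten itself (a singleton type) is what keeps the call inferable, as the paragraph above notes.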

We also sneak in a change that was part of #105: renaming the type kwarg of hamiltonian to orbtype, which is more descriptive. The equivalent kwarg for lattice is still type.
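The kwarg rename, side by side (the sublat/lattice constructors here are assumptions for illustration; only the orbtype/type kwarg distinction is taken from the PR text):

```julia
using Quantica

# lattice keeps the `type` kwarg for its number type
lat = lattice(sublat((0.0, 0.0)); type = Float32)

# hamiltonian now uses `orbtype` for the orbital eltype
# (previously this kwarg was also called `type`)
h = hamiltonian(lat, hopping(1.0f0); orbtype = Float32)
```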

@codecov-commenter

codecov-commenter commented Sep 30, 2020

Codecov Report

Merging #106 into master will increase coverage by 0.03%.
The diff coverage is 87.71%.


@@            Coverage Diff             @@
##           master     #106      +/-   ##
==========================================
+ Coverage   62.37%   62.40%   +0.03%     
==========================================
  Files          16       16              
  Lines        2708     2708              
==========================================
+ Hits         1689     1690       +1     
+ Misses       1019     1018       -1     
Impacted Files         Coverage Δ
src/Quantica.jl        100.00% <ø> (ø)
src/greens.jl            0.00% <ø> (ø)
src/lattice.jl          81.59% <ø> (ø)
src/diagonalizer.jl     53.78% <83.33%> (-0.35%) ⬇️
src/hamiltonian.jl      75.29% <87.50%> (+0.33%) ⬆️
src/KPM.jl              64.80% <100.00%> (ø)
src/bandstructure.jl    91.15% <100.00%> (-0.06%) ⬇️
src/parametric.jl       81.60% <100.00%> (-1.35%) ⬇️
... and 1 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 743b928...8e18faf. Read the comment docs.
