Add MPO version of VUMPS/TDVP #39

Open
mtfishman opened this issue Jan 28, 2022 · 14 comments

@mtfishman
Member

mtfishman commented Jan 28, 2022

Currently, the implementation of VUMPS/TDVP only supports Hamiltonians represented explicitly as sums of local terms (with extensions to long-range interactions in #31).

Ideally, we would have a version that works for an MPO. I attempted to implement one initially but there was a bug in it that I couldn't track down.

EDIT: The old, broken implementation is here: https://github.com/ITensor/ITensorInfiniteMPS.jl/blob/main/src/broken/vumps_mpo.jl. There may be useful functionality in there that could be reused for a new version, such as transforming an MPO into a set of operator/ITensor-valued matrices.

@LHerviou expressed interest in possibly looking into this. Additionally, an alternative that was discussed was to extend #31 (EDIT: and #32) to make use of sums of local MPOs with finite support, which would be an intermediate solution (ultimately, we would still want a single MPO version).

Please comment on this issue if you are interested in looking into implementing this.

@LHerviou
Contributor

LHerviou commented Feb 2, 2022

I have started having a look at it, but if anyone wants to contribute, let me know.

As of now, I am actually struggling with something simple. It should be possible to automatically generate an IMPO from the InfiniteTensorSum we have in models. If the 2-local Hamiltonian is of the form T1 - T2 (T1 acting on one site and T2 on the next), then we necessarily have
W_inf = [  1   0   0 ]
        [ T1   0   0 ]
        [  0  T2   1 ]
It is not necessarily the optimal form, but it should not be too bad (and can be improved).
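To be concrete, the operator-valued matrix I have in mind would look something like this in ITensors.jl (with T1 = T2 = Sz on a spin-1/2 site chosen purely as an illustration, not tied to a particular model):

```julia
using ITensors

# Sketch of the operator-valued-matrix form of W_inf above,
# using T1 = T2 = "Sz" on a spin-1/2 site purely as an illustration.
s = siteind("S=1/2")
Id = op("Id", s)
T1 = op("Sz", s)
T2 = op("Sz", s)

# 3×3 operator-valued matrix; empty ITensors stand in for the zero blocks.
W_inf = fill(ITensor(), 3, 3)
W_inf[1, 1] = Id
W_inf[2, 1] = T1
W_inf[3, 2] = T2
W_inf[3, 3] = Id
```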
It turns out I am struggling to implement the required Index fusion, in particular if the tensors are sparse (which they pretty much always are/should be). Am I missing something? The combiner function does tensor products, which is not what I am looking for.

Another alternative would be to directly work with the InfiniteTensorSum (it is just a question of relabeling the index then).

@mtfishman
Member Author

Just so I understand the issue fully, you are talking about implementing VUMPS in terms of a single MPO, not a sum of local MPOs, correct?

Which kind of Index fusion are you trying to do? You form the MPO using the construction you describe above, but then want to fuse the quantum number sectors of the link indices of the MPO to form larger QN blocks? If that is the case, combiners work on single indices as well, in which case they fuse quantum number blocks automatically.

However, I'll note that we've actually found that for DMRG it can in fact be better for performance not to fuse the quantum number blocks of the link indices of the MPO, since the MPO often becomes more sparse if you keep the blocks split. Because of this, I introduced a function splitblocks that takes an MPO, splits the link indices into smaller QN blocks, and drops the zero blocks that get introduced, to make it more sparse:

https://github.com/ITensor/ITensors.jl/blob/921e2116beb74d21028d2dde05a26170d6d5f622/examples/dmrg/1d_heisenberg_conserve_spin.jl#L30

(Credit goes to Johannes Hauschild, lead developer of TenPy, for pointing this out to us!)
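To illustrate the single-index case, something like this should fuse the repeated QN sectors automatically (the index here is made up just for illustration):

```julia
using ITensors

# A link index with two QN("Sz", 0) sectors and one QN("Sz", 2) sector.
l = Index([QN("Sz", 0) => 2, QN("Sz", 0) => 1, QN("Sz", 2) => 1], "Link")

# Combining a single index fuses blocks with the same QN automatically.
C = combiner(l)
cl = combinedind(C)  # the two QN("Sz", 0) sectors get merged into one larger sector
```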

Also, fusing the QN blocks may destroy the upper/lower block diagonal form that is useful for implementing the algorithm for obtaining the quasi-fixed point environment of the MPO transfer matrix, so it may be best to just leave the MPO as-is.

@LHerviou
Contributor

LHerviou commented Feb 2, 2022

Yes, as a single translation-invariant MPO.

I am really (for now) just trying to create the object W_inf. I am trying to write it automatically for arbitrary (potentially large) MPOs, and I am just struggling a bit with making sure the indices work out correctly. I guess I can just keep the correct T1, T2 without actually building an explicit MPO.

I agree it is probably best not to fuse the QN blocks.

@LHerviou
Contributor

LHerviou commented Feb 2, 2022

Never mind, I think I fixed that problem: would taking InfiniteMPO.data to be CelledVector{Matrix{ITensor}} be a satisfactory solution?

@mtfishman
Member Author

Oh I see, you are actually trying to write general code for converting some sort of operator representation into a uniform MPO?

Getting the indices consistent with the correct quantum numbers can be tricky. Indeed, if it is easier for now, we could just use the operator-valued matrix representation, like what is output by:

https://github.com/ITensor/ITensorInfiniteMPS.jl/blob/main/src/broken/vumps_mpo.jl#L51-L79

I think I found it was easier to write the VUMPS code in terms of that form anyway, and we could always just convert to that form once we have code that actually makes an infinite MPO explicitly.

I don't think we should define the InfiniteMPO type in that way directly since ultimately we want that to store order-4 ITensors, but we could just make a new type for the operator-valued matrix representation, like InfiniteOpMatrix (I'm sure there is a better name).
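Something like this is what I have in mind (the name and interface are just placeholders, not an existing part of the package):

```julia
using ITensors
using ITensorInfiniteMPS  # for CelledVector, the unit-cell-periodic vector type
                          # (qualify as ITensorInfiniteMPS.CelledVector if it is not exported)

# Hypothetical wrapper for the operator-valued matrix representation;
# the name InfiniteOpMatrix is a placeholder.
struct InfiniteOpMatrix
  data::CelledVector{Matrix{ITensor}}
end

# M[n] is the operator-valued matrix on site n of the unit cell,
# with translations handled by CelledVector.
Base.getindex(M::InfiniteOpMatrix, n::Integer) = M.data[n]
Base.length(M::InfiniteOpMatrix) = length(M.data)
```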

To give you some context, in finite DMRG in ITensor this is handled by the OpSum/AutoMPO system. The OpSum is a representation of the Hamiltonian in terms of string representations of operators and the sites they act on, and the AutoMPO system converts that representation into an MPO:

https://github.com/ITensor/ITensors.jl/blob/main/src/physics/autompo.jl
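For reference, the finite workflow looks like this (standard ITensors.jl usage, with a Heisenberg chain chosen just as an example):

```julia
using ITensors

# Finite OpSum -> MPO conversion for a small Heisenberg chain.
N = 10
sites = siteinds("S=1/2", N)

os = OpSum()
for j in 1:(N - 1)
  os += 0.5, "S+", j, "S-", j + 1
  os += 0.5, "S-", j, "S+", j + 1
  os += "Sz", j, "Sz", j + 1
end

H = MPO(os, sites)  # the AutoMPO backend builds the (finite) MPO from the OpSum
```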

One goal was to extend that code to form infinite MPOs, also starting from an OpSum representation.

There was a PR for that to the C++ version of ITensor: ITensor/ITensor#161, but we haven't had time to properly review it and port it to the Julia code. The Julia code for finite OpSum-to-MPO conversion was refactored and simplified a lot from the C++ version by Miles, so likely the extension of converting an OpSum to an InfiniteMPO could be simplified as well.

@LHerviou
Contributor

LHerviou commented Feb 2, 2022

I will start like that then; it should be easy to get the basic infrastructure working.

Thanks for the additional context.

@LHerviou
Contributor

LHerviou commented Feb 8, 2022

For info, I managed to write a full MPO treatment of VUMPS. The subspace expansion could be improved, but I would like to talk to several people in Benasque before looking into it.

I am not completely sure it fits my (selfish) requirements.

@mtfishman
Member Author

Amazing, thanks!

My understanding is that the subspace expansion could be implemented by applying the 2-site effective Hamiltonian (the same one you would use in a 2-site VUMPS update) and then projecting into the nullspace. So I think it would be very similar to the current implementation. Or do you mean that there is a particular issue with the FQH Hamiltonian?
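Roughly, in dense-matrix language, the idea is something like this (just a schematic with made-up names, not what is actually implemented in the package):

```julia
using LinearAlgebra

# Schematic dense-matrix version of the expansion step described above.
# A    : left-orthogonal MPS tensor reshaped to a (d*χ) × χ isometry.
# HB2  : the 2-site effective Hamiltonian applied to the current 2-site
#        wavefunction, reshaped to (d*χ) × (χ*d).
# χ_add: number of extra basis vectors to add.
function expand_left_basis(A::Matrix, HB2::Matrix, χ_add::Int)
  # Orthogonal complement of the current left basis (the "nullspace" part).
  N = nullspace(Matrix(A'))
  # Project the 2-site update onto that complement and keep the dominant
  # directions via an SVD.
  F = svd(N' * HB2)
  k = min(χ_add, size(F.U, 2))
  return hcat(A, N * F.U[:, 1:k])  # expanded (d*χ) × (χ + k) isometry
end
```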

@LHerviou
Contributor

LHerviou commented Feb 8, 2022

Well, the original proposal for the subspace expansion is indeed just to look at the two-site effective Hamiltonian. That is what I am currently using in the code.

On the other hand, there are more recent papers discussing a better way to do it:
https://scipost.org/SciPostPhysCore.4.1.004/pdf
https://scipost.org/SciPostPhysLectNotes.7/pdf (Section 3.3)

I discussed this with some coworkers and we are not sure

  1. how important this improvement is
  2. whether it helps in the FQHE case, where symmetries are blocking (as you discussed with Mike Zaletel) and we therefore really need some mixing in order to explore the symmetry-resolved Hilbert space.

@mtfishman
Member Author

My understanding is that https://scipost.org/SciPostPhysCore.4.1.004/pdf is focused on the problem of truncating, not expanding, the bond dimension of an MPS. From what I can tell, the algorithm assumes you already have an MPS manifold to project into. Perhaps some variant could be used for bond dimension expansion, but it also has the disadvantage that it is an iterative algorithm, as opposed to the direct method currently used.

I guess I would hope that the current method or an extension of it would work (such as extending to a 3-site variation, if needed), though improved bond dimension expansion techniques are definitely a worthy area of exploration in general.

@LHerviou
Contributor

LHerviou commented Feb 9, 2022

The method can in principle also be used to find the best way to extend the system. In fact, the Ghent team uses a variation of that proposal to do the subspace expansion in their VUMPS code for 2D classical systems (some of my coworkers work with them).

I do agree it is not completely obvious from the papers themselves; I had to discuss it with my coworkers to clarify how it works.
Additionally, they do it without symmetries in their code.

@mtfishman
Member Author

Yes, it is much clearer how it could work without symmetries, since you can just select a bond dimension larger than the one you have. You could just apply the Hamiltonian or exponential of the Hamiltonian to your MPS and then compress back to an MPS with the bond dimension you request. Symmetries are always the subtle part with these subspace expansion procedures.
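For a finite chain, the analogous apply-and-compress step in ITensors.jl would look roughly like this (just an illustration with a made-up model; the infinite case is of course the harder part):

```julia
using ITensors

# Finite-MPS illustration of "apply the Hamiltonian, then compress back".
N = 10
sites = siteinds("S=1/2", N)

os = OpSum()
for j in 1:(N - 1)
  os += "Sz", j, "Sz", j + 1
end
H = MPO(os, sites)

psi = randomMPS(sites; linkdims=10)

# Apply H to psi and truncate back to a requested bond dimension.
Hpsi = apply(H, psi; maxdim=20, cutoff=1e-10)
```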

@mtfishman
Member Author

mtfishman commented Feb 9, 2022

I guess you could mix the two strategies, by first expanding the bond dimension using the subspace expansion we are using, and then variationally optimize within the expanded tensor space using the compression algorithm in https://scipost.org/SciPostPhysCore.4.1.004/pdf.

Essentially that is like alternating steps of VUMPS and iTEBD, where iTEBD is implemented with the algorithm in https://scipost.org/SciPostPhysCore.4.1.004/pdf.

@LHerviou
Contributor

LHerviou commented Feb 9, 2022

Yes, pretty much what I had in mind.
But then I am not sure it is significantly better than "just applying VUMPS" after the subspace expansion.
Hence I need to talk with them in 10 days.
