Wish list for belief_propagation #112

mtfishman opened this issue Oct 23, 2023

Here is a wish list for the belief_propagation function:

  • Customizable update sequences/schedules, including combinations of parallel and serial/sequential schedules (see the schedule sketch after this list). Ideally this would be designed to make it easy to do real-space parallel DMRG/TDVP/TEBD.
  • A unified picture of using different gauges, such as arbitrary gauges (standard BP), the Vidal gauge, and orthogonal gauges using square root BP. This would make it so that many aspects of other tensor network algorithms, such as DMRG/TDVP on a tree, can be encompassed within the BP function.
  • Specialized functionality when run on a tree (or a subregion of a graph that is a tree) to ensure it is done as efficiently as possible, i.e. in one iteration using a tree traversal.
  • Ability to do minimal message tensor updates from one region to another (i.e. update message tensors around some path between regions), with applications to DMRG/TDVP; the path-schedule helper in the sketch below illustrates the idea. How do we elegantly handle that for both trees and non-trees? The goal would be to require only a minimal amount of self-consistency, and also to allow dropping parts of the cache for memory efficiency.
  • Writing the BP cache to disk (this would be used for the write-to-disk feature that is important in DMRG for avoiding running out of memory at large bond dimensions); see the serialization sketch after this list.
  • A better format for partitioned tensor networks, as well as a simpler format for the BP cache.
  • Make sure all of this functionality efficiently handles simpler cases like periodic MPS. That should be handled automatically if all of the issues above are addressed.
  • Replace the specialized functionality we have for TTNs, like computing inner products or orthogonalizing, with BP (generalized to partitioned networks) and square root BP. In principle, with the correct abstractions in the belief_propagation function that analyze the graph structure and handle tree structures efficiently at runtime, we could remove the TreeTensorNetwork type entirely.
  • Try out different BP schedules/update sequences. This is a good reference proposing a new schedule and comparing to existing ones: https://arxiv.org/abs/1206.6837
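
To make the scheduling items concrete, here is a minimal sketch (in plain Julia with Graphs.jl, not the package API) of schedule-driven message updates: a one-pass tree schedule, a minimal path schedule between two regions, and a generic loop that applies any schedule. `tree_schedule`, `path_schedule`, `run_schedule!`, and the `update_message` callback are all hypothetical names; the only assumption is that messages live in a cache keyed by directed edges.

```julia
using Graphs

# One-pass BP schedule on a tree: update messages from the leaves toward
# `root`, then from `root` back out to the leaves. Directed edges are
# represented as `from => to` pairs of vertices.
function tree_schedule(g::SimpleGraph, root::Int)
    d = gdistances(g, root)                       # depth of each vertex
    parent = bfs_parents(g, root)                 # parent in the BFS tree
    up = [v => parent[v] for v in vertices(g) if v != root]
    sort!(up; by = e -> d[first(e)], rev = true)  # deepest edges first
    down = reverse!([last(e) => first(e) for e in up])
    return vcat(up, down)
end

# Minimal update between two regions: refresh only the messages along a
# path between vertices `a` and `b` (cf. the path-update item above).
path_schedule(g::SimpleGraph, a::Int, b::Int) =
    [src(e) => dst(e) for e in a_star(g, a, b)]

# Generic schedule-driven loop: `update_message` is a hypothetical callback
# that recomputes the message on the directed edge `e` from the current
# message cache.
function run_schedule!(messages, schedule, update_message)
    for e in schedule
        messages[e] = update_message(messages, e)
    end
    return messages
end

# Example: on a path graph (a tree), one pass over this schedule converges BP.
g = path_graph(5)
sched = tree_schedule(g, 1)  # [5=>4, 4=>3, 3=>2, 2=>1, 1=>2, 2=>3, 3=>4, 4=>5]
```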

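For the write-to-disk item, here is a minimal sketch of checkpointing a message cache with the Serialization standard library. The `save_messages`/`load_messages` names and the Dict-based cache are assumptions; an actual implementation might prefer an HDF5-based format instead.

```julia
using Serialization

# Checkpoint the message cache (assumed here to be a Dict of directed edge
# => message tensor) so parts of it can be evicted from memory and reloaded.
function save_messages(path::AbstractString, messages::Dict)
    open(io -> serialize(io, messages), path, "w")
end

load_messages(path::AbstractString) = open(deserialize, path, "r")

# After checkpointing, a message can be dropped from memory (cf. the
# cache-dropping point in the wish list) with, e.g., `delete!(messages, e)`.
```
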
Some of this is addressed by #111 and ITensor/NamedGraphs.jl#39.

In https://github.com/mtfishman/ITensorNetworks.jl/tree/generalize_alternating_update I am working on generalizing alternating_update to non-tree graphs using BP as a contraction backend; addressing these issues will be useful for that PR. Issues around the gauge will be important, since picking the right gauge can allow us to use regular eigensolvers instead of generalized eigensolvers for DMRG on loopy networks (see the sketch below). Designing it correctly will also let us avoid maintaining two separate codes for trees and non-trees; ideally those details can be handled inside the belief_propagation function.
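
To spell out the gauge point with a toy example: in a generic BP gauge the local DMRG eigenproblem is generalized, H v = λ N v, with N built from the non-orthogonal environment, while in the Vidal/orthogonal gauge N reduces to the identity and a standard eigensolver suffices. Here is a dense stand-in using KrylovKit; the matrices are random placeholders, not actual BP environments.

```julia
using LinearAlgebra, KrylovKit

n = 20
H = Symmetric(randn(n, n))   # stand-in for the local effective Hamiltonian
A = randn(n, n)
N = Symmetric(A * A' + I)    # stand-in for a positive-definite norm matrix

v0 = randn(n)

# Generic gauge: the environment is nontrivial, so solve H v = λ N v.
vals_gen, vecs_gen, _ =
    geneigsolve((H, N), v0, 1, :SR; issymmetric = true, isposdef = true)

# Vidal/orthogonal gauge: N ≡ I, so a regular eigensolver suffices.
vals_std, vecs_std, _ = eigsolve(H, v0, 1, :SR; issymmetric = true)
```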

@JoeyT1994 @b-kloss
