
Add weights(::AbstractITensorNetwork), new graph partitioning interface #30

Merged 7 commits on Dec 23, 2022

Conversation

@mtfishman (Member) commented Dec 21, 2022

Also add examples for Graphs.jl functionality implemented in ITensor/NamedGraphs.jl#20 and ITensor/DataGraphs.jl#10.

@mtfishman (Member, Author) commented Dec 21, 2022

weights(tn::AbstractITensorNetwork) is defined as log2(dim(commoninds(tn, edge))) for each edge in edges(tn), with the results stored in a Dictionary. This is the same definition used in https://arxiv.org/abs/2209.02895. It makes sense in the limit dim(commoninds(tn, edge)) == 1: the tensors are then not connected, and by this definition the weight is zero. Additionally, for tree networks it should correspond to the maximum entropy allowed by the index dimensions. Users can of course compute weights with their own function for particular use cases and pass them into graph functions that use them. For example, we could consider more sophisticated weight functions that depend on entanglement entropies, mutual information, or the singular value spectrum.
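To make the definition concrete, here is a standalone sketch in plain Julia (no ITensors dependency; the edge names and bond dimensions are hypothetical): each edge gets weight log2 of the total dimension shared across it.

```julia
# Hypothetical shared bond dimension per edge of a small network.
shared_dims = Dict(
    "v1↔v2" => 4,  # bond dimension 4 between vertices 1 and 2
    "v2↔v3" => 1,  # dimension 1: the tensors are effectively disconnected
)

# Weight of each edge: log2 of the shared dimension across it.
edge_weights = Dict(e => log2(d) for (e, d) in shared_dims)

edge_weights["v1↔v2"]  # 2.0
edge_weights["v2↔v3"]  # 0.0, consistent with "disconnected means weight zero"
```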

@LinjianMa I think this is also what you are using?

@LinjianMa (Contributor) commented Dec 21, 2022 via email

@mtfishman mtfishman changed the title Add weights(::AbstractITensorNetwork) Add weights(::AbstractITensorNetwork), new graph partitioning interface Dec 22, 2022
@mtfishman (Member, Author) commented Dec 23, 2022

In the latest commits I made some changes to the partition interface. The new syntax is summarized below:

julia> using ITensorNetworks

julia> tn = ITensorNetwork(named_grid((4, 2)); link_space=3);

subgraph_vertices returns a list of subgraph vertices according to the specified partitioning:


julia> tn_sv = subgraph_vertices(tn; npartitions=2) # Same as `subgraph_vertices(tn; nvertices_per_partition=4)`
2-element Vector{Vector{Tuple{Int64, Int64}}}:
 [(1, 1), (2, 1), (1, 2), (2, 2)]
 [(3, 1), (4, 1), (3, 2), (4, 2)]

partition_vertices returns a graph with subgraph vertices as the vertex data and edges defining which subgraphs are connected according to the specified partitioning:


julia> tn_pv = partition_vertices(tn; npartitions=2);

julia> typeof(tn_pv)
DataGraph{Int64, Vector{Tuple{Int64, Int64}}, Any, NamedGraphs.NamedGraph{Int64}, NamedGraphs.NamedEdge{Int64}}

julia> tn_pv[1]
4-element Vector{Tuple{Int64, Int64}}:
 (1, 1)
 (2, 1)
 (1, 2)
 (2, 2)

julia> edges(tn_pv)
1-element Vector{NamedGraphs.NamedEdge{Int64}}:
 1 => 2

julia> tn_pv[1 => 2]
2-element Vector{NamedGraphs.NamedEdge{Tuple{Int64, Int64}}}:
 (2, 1) => (3, 1)
 (2, 2) => (3, 2)

subgraphs returns a list of subgraphs according to the specified partitioning:


julia> tn_sg = subgraphs(tn; npartitions=2);

julia> typeof(tn_sg)
Vector{ITensorNetwork{Tuple{Int64, Int64}}} (alias for Array{ITensorNetwork{Tuple{Int64, Int64}}, 1})

julia> tn_sg[1]
ITensorNetwork{Tuple{Int64, Int64}} with 4 vertices:
4-element Vector{Tuple{Int64, Int64}}:
 (1, 1)
 (2, 1)
 (1, 2)
 (2, 2)

and 4 edge(s):
(1, 1) => (2, 1)
(1, 1) => (1, 2)
(2, 1) => (2, 2)
(1, 2) => (2, 2)

with vertex data:
4-element Dictionaries.Dictionary{Tuple{Int64, Int64}, Any}
 (1, 1) │ ((dim=3|id=169|"1×1↔2×1"), (dim=3|id=124|"1×1↔1×2"))
 (2, 1) │ ((dim=3|id=169|"1×1↔2×1"), (dim=3|id=105|"2×1↔3×1"), (dim=3|id=352|"2×1↔2×2"))
 (1, 2) │ ((dim=3|id=124|"1×1↔1×2"), (dim=3|id=789|"1×2↔2×2"))
 (2, 2) │ ((dim=3|id=352|"2×1↔2×2"), (dim=3|id=789|"1×2↔2×2"), (dim=3|id=254|"2×2↔3×2"))

partition returns a graph with subgraphs as the vertex data and edges defining which subgraphs are connected according to the specified partitioning:


julia> tn_pg = partition(tn; npartitions=2);

julia> typeof(tn_pg)
DataGraph{Int64, ITensorNetwork{Tuple{Int64, Int64}}, Any, NamedGraphs.NamedGraph{Int64}, NamedGraphs.NamedEdge{Int64}}

julia> tn_pg[1]
ITensorNetwork{Tuple{Int64, Int64}} with 4 vertices:
4-element Vector{Tuple{Int64, Int64}}:
 (1, 1)
 (2, 1)
 (1, 2)
 (2, 2)

and 4 edge(s):
(1, 1) => (2, 1)
(1, 1) => (1, 2)
(2, 1) => (2, 2)
(1, 2) => (2, 2)

with vertex data:
4-element Dictionaries.Dictionary{Tuple{Int64, Int64}, Any}
 (1, 1) │ ((dim=3|id=169|"1×1↔2×1"), (dim=3|id=124|"1×1↔1×2"))
 (2, 1) │ ((dim=3|id=169|"1×1↔2×1"), (dim=3|id=105|"2×1↔3×1"), (dim=3|id=352|"2×1↔2×2"))
 (1, 2) │ ((dim=3|id=124|"1×1↔1×2"), (dim=3|id=789|"1×2↔2×2"))
 (2, 2) │ ((dim=3|id=352|"2×1↔2×2"), (dim=3|id=789|"1×2↔2×2"), (dim=3|id=254|"2×2↔3×2"))

julia> tn_pg[1 => 2][:edges]
2-element Vector{NamedGraphs.NamedEdge{Tuple{Int64, Int64}}}:
 (2, 1) => (3, 1)
 (2, 2) => (3, 2)

@JoeyT1994 let me know what you think. Should supersede #34 and close #28.

Also, the example of partitioning a partitioned graph discussed in #28 can now be done with:

using ITensors
using Graphs
using NamedGraphs
using ITensorNetworks
using SplitApplyCombine
using Metis

s = siteinds("S=1/2", named_grid(8))
tn = ITensorNetwork(s; link_space=2)
Z = prime(tn; sites=[]) ⊗ tn  # norm network
vertex_groups = group(v -> v[1], vertices(Z))
# Create two layers of partitioning
Z_p = partition(partition(Z, vertex_groups); nvertices_per_partition=2)
# Flatten the partitioned partitions
Z_verts = [reduce(vcat, (vertices(Z_p[vp][v]) for v in vertices(Z_p[vp]))) for vp in vertices(Z_p)]

which can be found in examples/group_partition.jl.

A feature that is still missing is accounting for the weights of the graph (for example, weights derived from the index dimensions) when partitioning, which is especially important when partitioning an already-partitioned graph, where partitions can be connected by many indices. That can be left for future work.
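The idea behind that missing feature can be sketched in plain Julia (hypothetical names, no ITensors dependency): the cost of an edge between two partitions would be the sum of log2(bond dimension) over all tensor-network edges crossing that boundary, which a weighted partitioner could then try to minimize.

```julia
# Hypothetical bond dimensions of the network edges crossing each
# partition-level edge (here, two dimension-3 bonds between partitions 1 and 2).
crossing_dims = Dict(
    (1 => 2) => [3, 3],
)

# Weight of each partition-level edge: total log2 bond dimension crossing it.
partition_weights = Dict(pe => sum(log2, dims) for (pe, dims) in crossing_dims)

partition_weights[1 => 2]  # 2 * log2(3) ≈ 3.17
```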

@LinjianMa maybe some of these new features are relevant for #11 as well, it seems relevant to make it easier to work with partitioned graphs/tensor networks.

@JoeyT1994 (Contributor) commented

Hi Matt,

This looks great to me. Being able to stack partitionings like that is really helpful for memory-efficient generalized belief propagation.

I will close PR #34, as I believe this supersedes it.

In early January I will create a PR for belief propagation whose example and tests take advantage of this interface. I will also improve the mechanism for calculating expectation values.
