Derivatives and environments #82
Comments
By the way, I suspect (although I have not checked) that using the proposed …
I think …
Adam is working on it, but he is a 20%er. I've asked him to add a branch of his work so far, so hopefully that'll be added soon.
Remove node should be easy enough to add. Though I think we'll use …
@Thenerdstation The only trouble I see with replacing the connected edges is that it makes it tricky to keep track of them across the …
Could you give a tiny example of what your ideal workflow would look like? I still don't quite see how modifying the edges in place is more beneficial than just replacing them.
Something like:

```python
net = TensorNetwork()
n1 = net.add_node(t1, axis_names=['a', 'b', 'c'])
n2 = net.add_node(t2, ...)
...
# (connect edges so that there are no dangling edges)
...
output_edges = [n1['a'], n1['b'], n1['c']]
net.remove_node(n1)
net.contract_all_naively()  # or whatever :)
env = net.get_final_node()
env.reorder_edges(output_edges)  # want my output_edges to still be part of the network

# I might also want to contract the environment with another node.
# The following replaces the removed node to reproduce the result
# of contracting the original network.
n1 = net.add_node(t1, axis_names=['a', 'b', 'c'])
net.connect(n1['a'], output_edges[0])
net.connect(n1['b'], output_edges[1])
net.connect(n1['c'], output_edges[2])
net.contract_between(n1, env)
```

If the edges are replaced, one could alternatively have something like …
Oh, I see what you mean now. You will need the broken edges for reshaping and to possibly connect them to a different node. Let me think about this some more. I wanna keep the API as clean and intuitive as possible.
Alright this is my compromise. |
@amilsted Please comment on the PR if you have any concerns about the design choice. |
@mganahl You too. |
If we define a tensor network for e.g. a scalar value (no dangling edges) and we want to compute the derivative of that number with respect to a tensor `T` in the network, one could use autodiff (depending on the backend). This is also known as computing the "environment" of `T`.

However, the first derivative is also given by the contraction of the same network with the `T` tensor removed. For doing this, it would be nice to have a `remove_node()` method that deletes a node from the network. Any dangling edges attached to that node are also removed. Any connected edges become dangling edges (modified in place, so that the edge objects persist).

This would be useful for a number of algorithms: one can define the network for, say, the energy of a quantum state once, then compute all required environments/derivatives using `remove_node()`. These are then used to minimize the energy.

Going further, if multiple environments of the same network are desired, it is possible to take the optimal contraction order for one environment and derive optimal contraction orders for all the others: https://arxiv.org/abs/1310.8023
Intermediate results can also often be reused for multiple environments.

One might imagine having a method `environments([n1, n2, n3])` (where `n1`, `n2`, `n3` are nodes) that efficiently computes environments of multiple tensors this way.

Thoughts?
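For readers who want to see the environment-equals-derivative claim concretely, here is a minimal sketch using plain NumPy rather than the TensorNetwork API (the toy network, tensor shapes, and variable names are all made up for illustration): a closed three-tensor loop contracted to a scalar, where removing `T` and contracting the rest yields a tensor `env` that is exactly `ds/dT`, because the scalar is linear in `T`.

```python
import numpy as np

# Toy closed "network" with no dangling edges:
#   s = sum_{i,j,k} A[i,j] * T[j,k] * B[k,i]
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))
T = rng.standard_normal((5, 6))
B = rng.standard_normal((6, 4))

s = np.einsum('ij,jk,ki->', A, T, B)

# Environment of T: the same network contracted with T removed.
# Its free indices are exactly T's indices (j, k).
env = np.einsum('ij,ki->jk', A, B)

# s is linear in T, so ds/dT == env, and reconnecting T to its
# environment reproduces the original scalar.
assert np.allclose(s, np.sum(T * env))
```

Reconnecting the removed tensor to its environment (the last assertion) is the same round trip as re-adding `n1` and contracting it with `env` in the workflow example above.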