
[Metric tensor tape] Covariance matrix support #1012

Merged: josh146 merged 21 commits into master from tape-metric-tensor on Jan 21, 2021

Conversation

@josh146 (Member) commented Jan 20, 2021

Context:

The metric tensor is the last major feature not yet supported in tape mode. This is the first of two PRs adding metric tensor support to the tape. In this PR, we add the ability to compute the covariance matrix of a circuit ansatz with respect to a list of commuting observables, in a differentiable manner.

Description of the Change:

  • Adds the new function qml.math.cov_matrix(). This function accepts a list of commuting observables, and the probability distribution in the shared observable eigenbasis after the application of an ansatz. It uses these to construct the covariance matrix in a framework-independent manner, such that the output covariance matrix is autodifferentiable.

    For example, consider the following ansatz and observable list:

    obs_list = [qml.PauliX(0) @ qml.PauliZ(1), qml.PauliY(2)]
    ansatz = qml.templates.StronglyEntanglingLayers

    We can construct a QNode to output the probability distribution in the shared eigenbasis of the observables:

    dev = qml.device("default.qubit", wires=3)
    
    @qml.qnode(dev, interface="autograd")
    def circuit(weights):
        ansatz(weights, wires=[0, 1, 2])
        # rotate into the basis of the observables
        for o in obs_list:
            o.diagonalizing_gates()
        return qml.probs(wires=[0, 1, 2])

    We can now compute the covariance matrix:

    >>> weights = qml.init.strong_ent_layers_normal(n_layers=2, n_wires=3)
    >>> cov = qml.math.cov_matrix(circuit(weights), obs_list)
    >>> cov
    array([[0.98707611, 0.03665537],
           [0.03665537, 0.99998377]])

    Autodifferentiation is fully supported using all interfaces:

    >>> cost_fn = lambda weights: qml.math.cov_matrix(circuit(weights), obs_list)[0, 1]
    >>> qml.grad(cost_fn)(weights)[0]
    array([[[ 4.94240914e-17, -2.33786398e-01, -1.54193959e-01],
            [-3.05414996e-17,  8.40072236e-04,  5.57884080e-04],
            [ 3.01859411e-17,  8.60411436e-03,  6.15745204e-04]],
    
           [[ 6.80309533e-04, -1.23162742e-03,  1.08729813e-03],
            [-1.53863193e-01, -1.38700657e-02, -1.36243323e-01],
            [-1.54665054e-01, -1.89018172e-02, -1.56415558e-01]]])
  • In order to support this new function, the following low-level tensor functions were added to qml.math (a sketch showing how they compose into cov_matrix follows this list):

    • diag (analogous to np.diag)
    • flatten
    • marginal_prob (marginalize a probability distribution over given axes)
    • reshape
    • scatter_element_add (a functional equivalent of tensor[idx] += value)
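
    As an illustration of how these primitives fit together, here is a NumPy-only sketch of the idea behind cov_matrix — not the actual implementation, and with a hypothetical argument layout (explicit eigenvalue vectors and wire-index lists per observable). It assumes the observables act on disjoint wire sets, ordered so that concatenating them gives an ascending list:

    import numpy as np

    def cov_matrix_sketch(prob, eigvals_list, wires_list):
        num_wires = int(np.log2(len(prob)))
        joint = np.reshape(prob, [2] * num_wires)

        def marginal(wires):
            # sum out every wire not in `wires`; kept axes stay in ascending order
            summed = tuple(w for w in range(num_wires) if w not in wires)
            return joint.sum(axis=summed).flatten()

        n = len(eigvals_list)
        cov = np.zeros((n, n))
        for i in range(n):
            mu_i = marginal(wires_list[i]) @ eigvals_list[i]
            # diagonal entries: Var(A) = <A**2> - <A>**2
            cov[i, i] = marginal(wires_list[i]) @ eigvals_list[i] ** 2 - mu_i ** 2
            for j in range(i + 1, n):
                mu_j = marginal(wires_list[j]) @ eigvals_list[j]
                # eigenvalues of the tensor product A (x) B; valid because the wire
                # sets are disjoint and ascending, matching the marginal's axis order
                joint_eigs = np.kron(eigvals_list[i], eigvals_list[j])
                exp_ij = marginal(wires_list[i] + wires_list[j]) @ joint_eigs
                cov[i, j] = cov[j, i] = exp_ij - mu_i * mu_j
        return cov

    For the example above, eigvals_list = [np.array([1, -1, -1, 1]), np.array([1, -1])] and wires_list = [[0, 1], [2]]. The real implementation dispatches through the qml.math primitives listed here (marginal_prob, scatter_element_add, ...), so the same logic stays differentiable in every framework.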

Benefits:

  • The ability to compute the covariance matrix is crucial to computing the metric tensor on hardware. The block-diagonal approximation to the metric tensor is simply a sequence of covariance matrix computations, one per layer of the original QNode, executed in batch (see the note after this list).

  • Since the implementation of the covariance matrix here is autodifferentiable, the metric tensor computation in the follow up PR will also be differentiable.

  • I have added cov_matrix to qml.math because it is a low-level function of the form tensor -> tensor. It is not a tape transform.
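
    For reference (up to the constant-factor conventions of the existing qml.metric_tensor): for a layer whose gates are generated as exp(-i θ_i K_i), the block-diagonal approximation of the metric tensor for that layer is

        g_ij = <ψ| K_i K_j |ψ> - <ψ| K_i |ψ> <ψ| K_j |ψ> = Cov(K_i, K_j),

    where |ψ> is the state entering the layer — exactly the covariance matrix of the layer's generators, as computed by cov_matrix.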

Possible Drawbacks: n/a

Related GitHub Issues: n/a

@josh146 josh146 added the review-ready 👌 PRs which are ready for review by someone from the core team. label Jan 20, 2021
@github-actions (Contributor) commented:

Hello. You may have forgotten to update the changelog!
Please edit .github/CHANGELOG.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.
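
For example, a hypothetical entry for this PR might look like the following (the repository path in the link is assumed):

    * Added the `qml.math.cov_matrix()` function, which computes the covariance
      matrix of a list of commuting observables in a differentiable,
      framework-agnostic manner.
      [(#1012)](https://github.com/PennyLaneAI/pennylane/pull/1012)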

Comment on lines +83 to +90
@wrap_output
def scatter_element_add(self, index, value):
    # functional equivalent of ``tensor[index] += value``: build an array
    # that is zero everywhere except ``value`` at ``index``, and add it
    size = self.data.size
    flat_index = np.ravel_multi_index(index, self.shape)
    t = [0] * size
    t[flat_index] = value
    self.data = self.data + np.array(t).reshape(self.shape)
    return self.data
josh146 (Member Author):

The convoluted approach here is because autograd does not support array assignment, nor does numpy provide a functional approach to scattering :(

If anyone has any insights on a better way of doing this (while preserving differentiability in autograd), I would appreciate it!
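
One possible alternative (an untested sketch, keeping the same method layout): build the one-hot array from a row of a constant np.eye, so that autograd only needs to differentiate through the multiplication by value and the addition:

    @wrap_output
    def scatter_element_add(self, index, value):
        # np.eye(...) is a constant, so no array assignment is needed; only
        # ``value * onehot`` and the addition enter the autograd graph
        flat_index = np.ravel_multi_index(index, self.shape)
        onehot = np.eye(self.data.size)[flat_index].reshape(self.shape)
        self.data = self.data + value * onehot
        return self.data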

@codecov bot commented Jan 20, 2021

Codecov Report

Merging #1012 (6b3fd43) into master (ca35ffe) will increase coverage by 0.01%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master    #1012      +/-   ##
==========================================
+ Coverage   97.90%   97.91%   +0.01%     
==========================================
  Files         151      151              
  Lines       11103    11201      +98     
==========================================
+ Hits        10870    10968      +98     
  Misses        233      233              
Impacted Files Coverage Δ
pennylane/math/__init__.py 100.00% <ø> (ø)
pennylane/math/autograd_box.py 100.00% <100.00%> (ø)
pennylane/math/fn.py 100.00% <100.00%> (ø)
pennylane/math/jax_box.py 100.00% <100.00%> (ø)
pennylane/math/numpy_box.py 100.00% <100.00%> (ø)
pennylane/math/tensorbox.py 95.83% <100.00%> (+0.21%) ⬆️
pennylane/math/tf_box.py 100.00% <100.00%> (ø)
pennylane/math/torch_box.py 100.00% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Comment on lines +976 to +993
@staticmethod
def expected_cov(weights):
    """Analytic covariance matrix for ansatz and obs_list"""
    a, b, c = weights
    return np.array([
        [np.sin(b) ** 2, -np.cos(a) * np.sin(b) ** 2 * np.sin(c)],
        [-np.cos(a) * np.sin(b) ** 2 * np.sin(c), 1 - np.cos(a) ** 2 * np.cos(b) ** 2 * np.sin(c) ** 2]
    ])

@staticmethod
def expected_grad(weights):
    """Analytic covariance matrix gradient for ansatz and obs_list"""
    a, b, c = weights
    return np.array([
        np.sin(a) * np.sin(b) ** 2 * np.sin(c),
        -2 * np.cos(a) * np.cos(b) * np.sin(b) * np.sin(c),
        -np.cos(a) * np.cos(c) * np.sin(b) ** 2
    ])
josh146 (Member Author):

I computed the covariance matrix of this ansatz+obs_list by hand, and differentiated it by hand, to ensure that all interfaces were producing the correct result.

Contributor:

That's commitment!

@josh146 josh146 changed the title Tape metric tensor [Metric tensor tape] Covariance matrix support Jan 20, 2021
@mariaschuld (Contributor) left a comment:

Awesome @josh146 !

I approve in the interest of time, but check out the comments; there may be one or two minor fixes needed.

pennylane/math/fn.py (outdated, resolved)
        cov = scatter_element_add(cov, [i, j], res)
        cov = scatter_element_add(cov, [j, i], res)

    return cov
Contributor:

Cool how this is all diffable...

josh146 (Member Author):

With qml.math, we could really try and push this. Any classical pre- or post-processing function we add around the quantum execution, we should now always try to add in a differentiable manner :D

pennylane/math/fn.py (outdated, resolved)
<tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.70710678, 0.70710678])>
"""
prob = flatten(prob)
num_wires = int(np.log2(len(prob)))
Contributor:

Change to num_vars?

josh146 (Member Author):

What does vars refer to here?

pennylane/math/fn.py (resolved)
pennylane/math/fn.py (resolved)
def scatter_element_add(self, index, value):
    # clone before the in-place update, since PyTorch forbids in-place
    # modification of leaf tensors that require grad
    self.data = self.data.clone()
    self.data[tuple(index)] += value
    return self.data
Contributor:

So this function is not implemented in-place?

josh146 (Member Author):

It's a limitation of PyTorch :(

PyTorch allows in-place operations on tensors, unless the tensor is a leaf node (that is, a tensor created at the beginning of the computational graph by the user, with requires_grad explicitly set).

But I just realised we can check for this using self.data.is_leaf, so we only need to clone if that is true.
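
A sketch of that suggestion (untested, same method layout as above):

    def scatter_element_add(self, index, value):
        # clone only when required: PyTorch forbids in-place modification
        # of leaf tensors that have requires_grad set
        data = self.data.clone() if self.data.is_leaf else self.data
        data[tuple(index)] += value
        self.data = data
        return self.data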

tests/math/test_functions.py (outdated, resolved)

    probs = circuit(weights)
    return fn.cov_matrix(probs, self.obs_list)

weights = np.array([0.1, 0.2, 0.3])
Contributor:

Don't you want to make this test data, maybe using some edge cases?

pennylane/math/torch_box.py (outdated, resolved)
Co-authored-by: Maria Schuld <mariaschuld@gmail.com>
Base automatically changed from circuit-graph-tape to master January 21, 2021 06:45
pennylane/math/fn.py (outdated, resolved)
@@ -1044,6 +1048,37 @@ def expected_grad(weights):
        -np.cos(a) * np.cos(c) * np.sin(b) ** 2
    ])

def test_weird_wires(self, in_tape_mode, tol):
Contributor:

lol. custom?

josh146 (Member Author):

hahaha I believe @johannesjmeyer has a test called test_weird_wires in the circuit drawer, so I stole the name from there

josh146 (Member Author):

or maybe I'm imagining it and made it up

Contributor:

Sounds plausible though

@mariaschuld (Contributor) left a comment:

Looks good!

@josh146 josh146 merged commit dde4f15 into master Jan 21, 2021
@josh146 josh146 deleted the tape-metric-tensor branch January 21, 2021 15:41
Labels
review-ready 👌 PRs which are ready for review by someone from the core team.

5 participants