[Metric tensor tape] Covariance matrix support #1012
Conversation
Hello. You may have forgotten to update the changelog!
@wrap_output
def scatter_element_add(self, index, value):
    size = self.data.size
    flat_index = np.ravel_multi_index(index, self.shape)
    t = [0] * size
    t[flat_index] = value
    self.data = self.data + np.array(t).reshape(self.shape)
    return self.data
The convoluted approach here is because autograd does not support array assignment, nor does numpy provide a functional approach to scattering :(
If anyone has any insights on a better way of doing this (while preserving differentiability in autograd), I would appreciate it!
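For reference, the pattern can be exercised standalone. A minimal plain-NumPy sketch of the same one-hot construction (the free-function form and names here are illustrative, not the PR's API):

```python
import numpy as np

def scatter_element_add(tensor, index, value):
    # Functional scatter-add: instead of assigning in place (which
    # autograd cannot differentiate through), build a one-hot array
    # carrying `value` and add it to the original tensor.
    size = tensor.size
    flat_index = np.ravel_multi_index(index, tensor.shape)
    t = [0] * size
    t[flat_index] = value
    return tensor + np.array(t).reshape(tensor.shape)

result = scatter_element_add(np.zeros((2, 2)), [0, 1], 5.0)
```

Because the result is produced by addition rather than item assignment, swapping `numpy` for `autograd.numpy` keeps the whole operation differentiable.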
Codecov Report
@@ Coverage Diff @@
## master #1012 +/- ##
==========================================
+ Coverage 97.90% 97.91% +0.01%
==========================================
Files 151 151
Lines 11103 11201 +98
==========================================
+ Hits 10870 10968 +98
Misses 233 233
Continue to review full report at Codecov.
@staticmethod
def expected_cov(weights):
    """Analytic covariance matrix for ansatz and obs_list"""
    a, b, c = weights
    return np.array([
        [np.sin(b) ** 2, -np.cos(a) * np.sin(b) ** 2 * np.sin(c)],
        [-np.cos(a) * np.sin(b) ** 2 * np.sin(c),
         1 - np.cos(a) ** 2 * np.cos(b) ** 2 * np.sin(c) ** 2],
    ])

@staticmethod
def expected_grad(weights):
    """Analytic covariance matrix gradient for ansatz and obs_list"""
    a, b, c = weights
    return np.array([
        np.sin(a) * np.sin(b) ** 2 * np.sin(c),
        -2 * np.cos(a) * np.cos(b) * np.sin(b) * np.sin(c),
        -np.cos(a) * np.cos(c) * np.sin(b) ** 2,
    ])
I computed the covariance matrix of this ansatz+obs_list by hand, and differentiated it by hand, to ensure that all interfaces were producing the correct result.
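That hand derivation can also be spot-checked numerically. A small sketch (not part of the PR): expected_grad appears to be the gradient of the (0, 1) covariance entry, so it should match central finite differences of that entry:

```python
import numpy as np

# The (0, 1) entry of the analytic covariance matrix from the tests.
def cov_01(weights):
    a, b, c = weights
    return -np.cos(a) * np.sin(b) ** 2 * np.sin(c)

# The hand-derived gradient from the tests.
def expected_grad(weights):
    a, b, c = weights
    return np.array([
        np.sin(a) * np.sin(b) ** 2 * np.sin(c),
        -2 * np.cos(a) * np.cos(b) * np.sin(b) * np.sin(c),
        -np.cos(a) * np.cos(c) * np.sin(b) ** 2,
    ])

weights = np.array([0.1, 0.2, 0.3])
eps = 1e-7
# Central finite differences along each coordinate direction.
fd = np.array([
    (cov_01(weights + eps * e) - cov_01(weights - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
```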
That's commitment!
Awesome @josh146 !
I approve in the interest of time, but check out the comments, there may be one or two minor fixes needed.
cov = scatter_element_add(cov, [i, j], res)
cov = scatter_element_add(cov, [j, i], res)

return cov
Cool how this is all diffable...
With qml.math, we could really try and push this. Any pre- or post-processing quantum function we add should now always be added in a differentiable manner :D
<tf.Tensor: shape=(2,), dtype=float64, numpy=array([0.70710678, 0.70710678])>
"""
prob = flatten(prob)
num_wires = int(np.log2(len(prob)))
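The num_wires line assumes the flat probability vector has length 2**n for n wires. A hypothetical sketch of the marginalization this enables (mirroring, but not copying, the marginal_prob helper this PR adds):

```python
import numpy as np

def marginal_prob(prob, axes):
    """Marginalize a flat probability vector over the given wire axes (sketch)."""
    # Recover the wire count from the vector length, as in the snippet above.
    num_wires = int(np.log2(len(prob)))
    prob = np.reshape(prob, [2] * num_wires)
    # Sum out every wire that is not being kept.
    inactive = tuple(i for i in range(num_wires) if i not in axes)
    return prob.sum(axis=inactive).flatten()

# Bell-state-like distribution on 2 wires: P(00) = P(11) = 0.5
prob = np.array([0.5, 0.0, 0.0, 0.5])
```

Marginalizing over either wire of this distribution yields the uniform single-wire distribution [0.5, 0.5].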
Change to num_vars?
What does vars refer to here?
def scatter_element_add(self, index, value):
    self.data = self.data.clone()
    self.data[tuple(index)] += value
    return self.data
So this function is not implemented in-place?
It's a limitation of PyTorch :(
PyTorch allows in-place operations on tensors, unless the tensor is a leaf node (that is, a tensor created at the beginning of the computational graph by a user, with requires_grad explicitly set).
But I just realised we can check for this using self.data.is_leaf, so we only need to clone if that is true.
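A sketch of that optimization (written as a free function for illustration, not the PR's method):

```python
import torch

def scatter_element_add(tensor, index, value):
    # Clone only when the tensor is a leaf: PyTorch raises on in-place
    # modification of leaf tensors that require grad, while in-place
    # ops on intermediate (non-leaf) tensors are fine.
    target = tensor.clone() if tensor.is_leaf else tensor
    target[tuple(index)] += value
    return target
```

Cloning a leaf leaves the user's original tensor untouched while the returned tensor stays connected to the graph, so gradients still flow back to the leaf.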
probs = circuit(weights)
return fn.cov_matrix(probs, self.obs_list)

weights = np.array([0.1, 0.2, 0.3])
Don't you want to make this test data, maybe using some edge cases?
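One way to act on that suggestion (a sketch reusing the analytic expected_cov from the tests; the specific edge-case values are my own, not from the PR):

```python
import numpy as np

# Analytic covariance matrix from the test class above.
def expected_cov(weights):
    a, b, c = weights
    return np.array([
        [np.sin(b) ** 2, -np.cos(a) * np.sin(b) ** 2 * np.sin(c)],
        [-np.cos(a) * np.sin(b) ** 2 * np.sin(c),
         1 - np.cos(a) ** 2 * np.cos(b) ** 2 * np.sin(c) ** 2],
    ])

# Loop over edge cases instead of a single weight vector.
edge_cases = [
    np.array([0.1, 0.2, 0.3]),
    np.zeros(3),                         # all-zero weights
    np.array([np.pi, np.pi / 2, 0.0]),   # boundary angles
]
for weights in edge_cases:
    cov = expected_cov(weights)
    assert np.allclose(cov, cov.T)            # covariance is symmetric
    assert (cov.diagonal() >= -1e-12).all()   # variances are non-negative
```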
Co-authored-by: Maria Schuld <mariaschuld@gmail.com>
… into tape-metric-tensor
@@ -1044,6 +1048,37 @@ def expected_grad(weights):
        -np.cos(a) * np.cos(c) * np.sin(b) ** 2
    ])

def test_weird_wires(self, in_tape_mode, tol):
lol. custom?
hahaha I believe @johannesjmeyer has a test called test_weird_wires in the circuit drawer, so I stole the name from there
or maybe I'm imagining it and made it up
Sounds plausible though
Looks good!
Context:
The metric tensor is the last major feature not supported in tape mode. This is the first of two PRs focused on adding metric tensor support to the tape. In this PR, we add the ability to compute the covariance matrix of a circuit ansatz with respect to a list of commuting observables, in a differentiable manner.
Description of the Change:
Adds the new function qml.math.cov_matrix(). This function accepts a list of commuting observables and the probability distribution in the shared observable eigenbasis after the application of an ansatz. It uses these to construct the covariance matrix in a framework-independent manner, such that the output covariance matrix is autodifferentiable.

For example, consider the following ansatz and observable list:
We can construct a QNode to output the probability distribution in the shared eigenbasis of the observables:
We can now compute the covariance matrix:
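(The original inline example does not appear here; as an illustrative stand-in, a plain-NumPy sketch of the computation itself, Cov(A, B) = <AB> - <A><B> over the shared eigenbasis. The eigenvalue-list signature is a simplification I've assumed; the real qml.math.cov_matrix takes observables and dispatches across frameworks.)

```python
import numpy as np

def cov_matrix(prob, eigvals_list):
    # prob: probability vector in the observables' shared eigenbasis.
    # eigvals_list[i]: eigenvalue of observable i for each basis state.
    n = len(eigvals_list)
    means = [np.dot(e, prob) for e in eigvals_list]
    cov = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            # Cov(A_i, A_j) = <A_i A_j> - <A_i><A_j>
            res = np.dot(eigvals_list[i] * eigvals_list[j], prob) - means[i] * means[j]
            cov[i, j] = res
            cov[j, i] = res
    return cov

# Z0 and Z1 on a Bell-state distribution: perfectly correlated outcomes.
bell = np.array([0.5, 0.0, 0.0, 0.5])
z0 = np.array([1, 1, -1, -1])
z1 = np.array([1, -1, 1, -1])
cov = cov_matrix(bell, [z0, z1])
```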
Autodifferentiation is fully supported using all interfaces:
In order to support this new function, the following low-level tensor functions were added to qml.math:

- diag (analogous to np.diag)
- flatten
- marginal_prob (marginalize a probability distribution over given axes)
- reshape
- scatter_element_add (a functional equivalent of tensor[idx] += value)

Benefits:
The ability to compute the covariance matrix is crucial to computing the metric tensor on hardware. The block-diagonal approximation to the metric tensor is simply a sequence of covariance matrix computations, one per layer of the original QNode, executed in batch.
Since the implementation of the covariance matrix here is autodifferentiable, the metric tensor computation in the follow-up PR will also be differentiable.
I have added cov_matrix to qml.math because it is a low-level function of the form tensor -> tensor; it is not a tape transform.

Possible Drawbacks: n/a
Related GitHub Issues: n/a