Add support for batch execution to qml.metric_tensor (#1638)
Conversation
Hello. You may have forgotten to update the changelog!
Codecov Report
```
@@           Coverage Diff           @@
##           master    #1638   +/- ##
=======================================
  Coverage   99.15%   99.15%
=======================================
  Files         196      196
  Lines       14294    14318     +24
=======================================
+ Hits        14173    14197     +24
  Misses        121      121
```
Continue to review full report at Codecov.
@josh146 thanks this is a great update 🎉 I've left a number of questions within
```
[0. , 0.28750832]])
```
To revert to the previous behaviour of returning the metric tensor with respect to gate
This is great 💯
Totally optional suggestion: is there a more descriptive keyword than `hybrid` we could use here? It's not immediately clear just from looking at the arguments what it means. Maybe `use_gate_args`, or something like this?
I agree that this example is looking great and there could be a more descriptive keyword here
I admit, I used `hybrid` only because the `qml.gradients` module uses it 🙈
I have a slight preference for keeping it consistent (for now), while opening up an issue to replace the keyword name in both places?
Options I can think of are:
- `quantum_only`
- `include_classical`
- `circuit_only`
- ?
I'm a fan of `include_classical` :)
```python
from .batch_transform import batch_transform

SUPPORTED_OPS = ["RX", "RY", "RZ", "PhaseShift"]
```
Is it necessary to support `PhaseShift`, since its decomposition is in terms of `RZ`?
For the metric tensor, we can support any single-parameter operation that has a generator, so this limits us to these four (I believe?)
As to why `PhaseShift`: on the off-chance someone does have a circuit with `PhaseShift`, I think this will lead to a slight reduction in overhead, since the expansion can be avoided.
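As a quick numerical sanity check of the relationship discussed here (plain NumPy, independent of PennyLane): `PhaseShift` and `RZ` agree up to a global phase, which is why the former can be expanded in terms of the latter.

```python
import numpy as np

# PhaseShift(phi) = diag(1, e^{i phi}); RZ(phi) = diag(e^{-i phi/2}, e^{i phi/2}).
# They differ only by the global phase e^{i phi/2}.
def phase_shift(phi):
    return np.diag([1.0, np.exp(1j * phi)])

def rz(phi):
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

phi = 0.7
assert np.allclose(phase_shift(phi), np.exp(1j * phi / 2) * rz(phi))
```

Since the global phase is unobservable, supporting `PhaseShift` natively only avoids the extra expansion step, exactly as noted above.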
```python
new_tape = tape.expand(depth=2, stop_at=_stopping_critera)
params = new_tape.get_parameters(trainable_only=False)
```
Why depth 2?
No reason except the existing metric tensor code uses depth 2!
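For intuition on why a depth greater than 1 can matter: a template may decompose into parametrized gates that themselves need one more round of decomposition. A toy sketch of depth-limited expansion (hypothetical rules and names, not PennyLane's `tape.expand` implementation):

```python
# RULES is a hypothetical stand-in for Operation.decomposition().
RULES = {
    "Template": ["Rot", "CNOT"],  # a template decomposing into gates
    "Rot": ["RZ", "RY", "RZ"],    # Rot decomposing into supported rotations
}
SUPPORTED = {"RX", "RY", "RZ", "PhaseShift", "CNOT"}

def expand(ops, depth):
    """Decompose each op up to `depth` levels, stopping at supported ops."""
    if depth == 0:
        return list(ops)
    out = []
    for op in ops:
        if op in SUPPORTED or op not in RULES:
            out.append(op)  # stopping criterion met (or nothing to decompose)
        else:
            out.extend(expand(RULES[op], depth - 1))
    return out
```

With these toy rules, `expand(["Template"], 1)` still contains an unsupported `Rot`, while `expand(["Template"], 2)` reaches only supported gates.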
```diff
@@ -152,7 +187,7 @@ def metric_tensor_tape(tape, diag_approx=False, wrt=None):
     # to measure in the basis of the parametrized layer generators.
     with tape.__class__() as layer_tape:
         for op in queue:
-            op.queue()
+            qml.apply(op)
```
😎
Couldn't help myself!
```python
if is_square and qml.math.allclose(cjac, qml.numpy.eye(cjac.shape[0])):
    # Classical Jacobian is the identity. No classical processing
    # is present inside the QNode.
    return mt
```
So this covers the case where even if `hybrid=True`, no actual classical processing happened, so we just return the `mt` w.r.t. the gate arguments?
yep :)
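A plain-NumPy sketch of why this shortcut is safe (hypothetical 3×3 shapes, not PennyLane code): under the chain rule, the metric tensor with respect to the QNode arguments is `J.T @ M @ J` for classical Jacobian `J`, which reduces to `M` itself when `J` is the identity.

```python
import numpy as np

# M: metric tensor w.r.t. gate arguments; J: classical Jacobian of
# gate arguments w.r.t. QNode arguments.
M = np.array([[0.25, 0.0, 0.1],
              [0.0, 0.25, 0.0],
              [0.1, 0.0, 0.25]])
J = np.eye(3)

# Chain rule: metric tensor w.r.t. QNode arguments.
M_qnode = J.T @ M @ J
assert np.allclose(M_qnode, M)  # identity Jacobian: nothing to transform
```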
```diff
@@ -58,9 +57,6 @@ def circuit(weights):
     assert tapes[2].operations[0].data == [1]
     assert tapes[2].operations[1].data == [2]

-    result = qml.metric_tensor(circuit)(params)
```
Why isn't the `result` still being tested?
I think it's superfluous --- this test is testing that the rotation gate is correctly decomposed, and the execution of the compiled circuit feels a bit out of scope
```diff
     # Currently, in the Autograd interface, we assume
     # that all objects are differentiable by default.
-    return getattr(tensor, "requires_grad", True)
+    return getattr(tensor, "requires_grad", False)
```
Is this setting the default value of `requires_grad` everywhere to `False`? If so, that seems like a major change; it should maybe be added separately to the CHANGELOG and docs.
Luckily not, it turns out that this function is not used anywhere important yet 😆 I was super nervous about making this change, expecting tonnes of tests to fail.
But good idea to mention this in the changelog
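A minimal sketch of the behaviour change under discussion (a plain NumPy array stands in for a tensor with no `requires_grad` attribute; this mirrors the diff above, not the full `qml.math` implementation):

```python
import numpy as np

def requires_grad_old(tensor):
    # old default: objects without the attribute were assumed differentiable
    return getattr(tensor, "requires_grad", True)

def requires_grad_new(tensor):
    # new default: only objects explicitly marked trainable count
    return getattr(tensor, "requires_grad", False)

t = np.array([0.1, 0.2])  # vanilla ndarray: carries no requires_grad attribute
```

So a vanilla NumPy array now counts as non-trainable unless it is wrapped in a tensor type that explicitly sets `requires_grad=True`.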
Have added this to the changelog!
```python
qml.RX(a[1], wires=0)
qml.RY(a[0], wires=0)
qml.CNOT(wires=[0, 1])
qml.PhaseShift(b, wires=1)
```
There is no classical processing of the arguments in this QNode
There is, but it's subtle. The classical processing function is
```
f: ([a0, a1], b) -> (a1, a0, b)
```
So the classical Jacobians will be a permutation matrix and an identity matrix:
```
classical_jacobian(circuit)(a, b) == ([[0, 1], [1, 0]], [[1]])
```
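To see where the permutation matrix comes from, here is a plain-NumPy finite-difference sketch (hypothetical helper, not PennyLane's `classical_jacobian`) of the Jacobian of that classical processing:

```python
import numpy as np

def fd_jacobian(f, x, eps=1e-7):
    """Finite-difference Jacobian of f with respect to a flat array x."""
    y0 = np.atleast_1d(f(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - y0) / eps
    return J

# a -> (a[1], a[0]) swaps the first two gate arguments; b -> b is untouched.
perm = fd_jacobian(lambda a: np.array([a[1], a[0]]), np.array([0.1, 0.2]))
ident = fd_jacobian(lambda b: b, np.array([0.3]))
```

`perm` comes out as the permutation matrix `[[0, 1], [1, 0]]` and `ident` as `[[1]]`, matching the shapes quoted above.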
Have added a comment here :)
Wow I totally missed that! Thanks, the comment is helpful 💯
Co-authored-by: Olivia Di Matteo <2068515+glassnotes@users.noreply.github.com>
```
>>> grad_fn = qml.grad(lambda x: met_fn(x)[3, 2])
>>> grad_fn(weights)
array([[ 0.04867729, -0.00049502,  0.        ],
```
Might be worth mentioning that we can find the Jacobian but agreed that the gradient example is more digestible here!
```python
qnode.construct(args, kwargs)
for c in cjac:
    if c is not None:
```
How can `None` values appear in the classical Jacobian?
Depending on the autodiff framework, `None` appears if the corresponding QNode argument is non-differentiable:
```python
@qml.qnode(dev)
def circuit(x, y):
    qml.RX(2 * x, wires=0)
    qml.RY(y ** 3, wires=0)
    return qml.expval(qml.PauliZ(0))
```
```
>>> x = np.array(0.4, requires_grad=True)
>>> y = np.array(0.4, requires_grad=False)
```
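A hypothetical plain-Python sketch of the pattern this loop guards against: a framework may return `None` (rather than a zero matrix) for arguments marked non-differentiable, so those entries are simply skipped.

```python
# Hypothetical per-argument classical Jacobians for the circuit above:
# x is trainable (d(2x)/dx = 2); y has requires_grad=False, so its entry is None.
cjac = ([[2.0]], None)

# Skip the None entries, mirroring the `if c is not None` check in the diff.
nontrivial = [c for c in cjac if c is not None]
assert nontrivial == [[[2.0]]]
```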
Another fine addition @josh146! Just a couple of minor suggestions and questions
Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>
Thanks @glassnotes and @anthayes92! Your feedback was extremely helpful - I've incorporated it in 9fdf474..9f01b05 (you can view the combined diff here). Note that the behaviour of
Thanks for the clarifications @josh146 , looks good to go ⭐
```
Please use the `qml.metric_tensor` transform instead.
[(#1638)](https://github.com/PennyLaneAI/pennylane/pull/1638)

- The utility function `qml.math.requires_grad` now returns `True` when using Autograd
```
This is not going in 0.18 right? I'm just thinking now whether we will have to update some of the demos / other docs to incorporate this.
Nope, this is being merged into master only (v0.19-dev).
**Context:** With the low-level differentiable batch-execution pipeline now available in PennyLane, and the availability of `@qml.batch_transform`, we can now start porting our existing batch-tape transforms to take advantage of this framework.

**Description of the Change:**
- When using `qml.math.get_trainable_indices()`, only NumPy arrays with `requires_grad=True` are taken as trainable when using Autograd. This is a requirement for batch transformations like the metric tensor to work correctly.
- When using the QNode in backpropagation, trainable parameter indices are computed and stored. Previously, a backpropagation QNode would simply mark all parameters as trainable. This extra information is needed for the metric tensor.
- The metric tensor has been converted into a batch transformation that accepts both tapes and QNodes as input. This makes the old `metric_tensor_tape` function irrelevant, and it has been removed. The tests have been likewise updated.
- Previously, `qml.metric_tensor(qnode)(*args, **kwargs)` would only return the metric tensor with respect to gate arguments, and ignore any classical processing inside the QNode, even very trivial classical processing such as parameter permutation. This led to many reported user bugs, such as "QNGOptimizer returns TypeError when step method called" (#1154). In the new framework, the metric tensor now takes classical processing into account, and returns the metric tensor with respect to QNode arguments, not simply gate arguments.
- To revert to the previous behaviour of returning the metric tensor with respect to gate arguments, `qml.metric_tensor(qnode, hybrid=False)` can be passed.

**Benefits:**
- The `qml.metric_tensor()` function now makes use of batch execution for submission of the circuits required for the metric tensor computation.
- The `qml.metric_tensor()` function now takes into account classical computation inside the QNode, for example, if gate arguments are repeated or permuted.
- QNodes in backpropagation mode correctly track trainable parameters, fixing an issue that had long been part of our test suite.

**Possible Drawbacks:** n/a

**Related GitHub Issues:** Closes #1154
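For readers unfamiliar with the batch-transform pattern the Context paragraph refers to, here is a framework-agnostic sketch (hypothetical names and toy "circuits", not the `@qml.batch_transform` API): a transform expands one circuit into a batch plus a classical post-processing function, so all generated circuits can be submitted for execution together.

```python
# Toy stand-ins: a "circuit" is a list of parameters, and "executing" it
# just sums them. A real transform would generate quantum tapes instead.
def execute_batch(tapes):
    return [sum(t) for t in tapes]

def shift_transform(params, shift=0.1):
    # Expand one circuit into a batch of shifted copies...
    tapes = [[p + shift for p in params], [p - shift for p in params]]

    # ...plus a processing function that combines the batch of results.
    def processing_fn(results):
        return (results[0] - results[1]) / (2 * shift)

    return tapes, processing_fn

tapes, fn = shift_transform([0.5, 0.25])
result = fn(execute_batch(tapes))  # central difference of the toy "circuit"
```

The key benefit mirrored here is that `tapes` can be executed as one batch before `processing_fn` is applied, rather than submitting circuits one at a time.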