
Add support for batch execution to qml.metric_tensor #1638

Merged
merged 29 commits into from
Sep 15, 2021

Conversation

josh146
Member

@josh146 josh146 commented Sep 10, 2021

Context: With the low-level differentiable batch-execution pipeline now available in PennyLane, and the availability of @qml.batch_transform, we can now start porting our existing batch-tape transforms to take advantage of this framework.

Description of the Change:

  • When using Autograd, qml.math.get_trainable_indices() now treats only NumPy arrays with requires_grad=True as trainable. This is required for batch transforms such as the metric tensor to work correctly.

  • When a QNode is executed in backpropagation mode, trainable parameter indices are now computed and stored. Previously, a backpropagation QNode simply marked all parameters as trainable. The metric tensor requires this extra information.

  • The metric tensor has been converted into a batch transform that accepts both tapes and QNodes as input. This makes the old metric_tensor_tape function redundant, so it has been removed and the tests updated accordingly.

  • Previously, qml.metric_tensor(qnode)(*args, **kwargs) returned the metric tensor only with respect to gate arguments, ignoring any classical processing inside the QNode, even processing as trivial as a parameter permutation. This led to many user-reported bugs, such as #1154 (QNGOptimizer returns TypeError when step method called). In the new framework, the metric tensor takes classical processing into account and is returned with respect to the QNode arguments, not simply the gate arguments.

    To revert to the previous behaviour of returning the metric tensor with respect to gate arguments, qml.metric_tensor(qnode, hybrid=False) can be passed.
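The hybrid behaviour described above can be sketched with plain NumPy as a chain rule: the metric tensor with respect to QNode arguments is the gate-argument metric tensor contracted on both sides with the classical Jacobian of the argument-to-gate-parameter map. This is only an illustration of the idea, not PennyLane's actual implementation, and all values below are invented for the sketch.

```python
import numpy as np

# Hypothetical example: a QNode whose two arguments (a, b) feed three gate
# parameters as (b, a, a). The classical Jacobian d(gate params)/d(args)
# is then a fixed 3x2 matrix.
cjac = np.array([[0.0, 1.0],
                 [1.0, 0.0],
                 [1.0, 0.0]])

# Metric tensor with respect to the three *gate* arguments
# (values invented for the sketch; any symmetric PSD matrix works).
mt_gate = np.diag([0.25, 0.25, 0.125])

# Chain rule: metric tensor with respect to the *QNode* arguments.
mt_hybrid = cjac.T @ mt_gate @ cjac
print(mt_hybrid)
# The [0, 0] entry sums the contributions of both gates fed by argument `a`.
```

With hybrid=False, only the middle factor (the gate-argument tensor) would be returned.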

Benefits:

  • The qml.metric_tensor() function now uses batch execution to submit the circuits required for the metric tensor computation.

  • The qml.metric_tensor() function now takes into account classical processing inside the QNode, for example, when gate arguments are repeated or permuted.

  • QNodes in backpropagation mode correctly track trainable parameters, fixing an issue that had long lingered in our test suite.

Possible Drawbacks: n/a

Related GitHub Issues: Closes #1154

@github-actions
Contributor

Hello. You may have forgotten to update the changelog!
Please edit .github/CHANGELOG.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@josh146 josh146 changed the title [WIP] Batch metric tensor Add support for batch execution to qml.metric_tensor Sep 10, 2021
@codecov

codecov bot commented Sep 10, 2021

Codecov Report

Merging #1638 (44ec65b) into master (1f89139) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master    #1638   +/-   ##
=======================================
  Coverage   99.15%   99.15%           
=======================================
  Files         196      196           
  Lines       14294    14318   +24     
=======================================
+ Hits        14173    14197   +24     
  Misses        121      121           
Impacted Files Coverage Δ
pennylane/math/utils.py 100.00% <100.00%> (ø)
pennylane/qnode.py 98.73% <100.00%> (+0.03%) ⬆️
pennylane/transforms/__init__.py 100.00% <100.00%> (ø)
pennylane/transforms/classical_jacobian.py 100.00% <100.00%> (ø)
pennylane/transforms/metric_tensor.py 100.00% <100.00%> (ø)

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@josh146 josh146 added the review-ready 👌 PRs which are ready for review by someone from the core team. label Sep 13, 2021
@josh146
Member Author

josh146 commented Sep 13, 2021

[ch8957]

Base automatically changed from batch-bug-fixes to v0.18.0-rc0 September 13, 2021 14:25
@josh146 josh146 changed the base branch from v0.18.0-rc0 to master September 13, 2021 14:28
Contributor

@glassnotes glassnotes left a comment


@josh146 thanks this is a great update 🎉 I've left a number of questions within

.github/CHANGELOG.md (outdated, resolved)
[0. , 0.28750832]])
```

To revert to the previous behaviour of returning the metric tensor with respect to gate
Contributor

This is great 💯

Totally optional suggestion: Is there a more descriptive keyword than hybrid we could use here? It's not immediately clear just from looking at the arguments what it means. Maybe... use_gate_args, or something like this?

Contributor

I agree that this example is looking great and there could be a more descriptive keyword here

Member Author

I admit, I used hybrid only because the qml.gradients module uses it 🙈

I have a slight preference for keeping it consistent (for now), while opening up an issue to replace the keyword name in both places?

Options I can think of are:

  • quantum_only
  • include_classical
  • circuit_only
  • ?

Contributor

I'm a fan of include_classical :)

from .batch_transform import batch_transform


SUPPORTED_OPS = ["RX", "RY", "RZ", "PhaseShift"]
Contributor

Is it necessary to support PhaseShift, since its decomposition is in terms of RZ?

Member Author

For the metric tensor, we can support any single-parameter operation that has a generator, so this limits us to these four (I believe?)

As to why PhaseShift; on the off-chance someone does have a circuit with PhaseShift, I think this will lead to a slight reduction in overhead, since the expansion can be avoided.

Comment on lines +39 to +40
new_tape = tape.expand(depth=2, stop_at=_stopping_critera)
params = new_tape.get_parameters(trainable_only=False)
Contributor

Why depth 2?

Member Author

No reason except the existing metric tensor code uses depth 2!

pennylane/transforms/metric_tensor.py (outdated, resolved)
@@ -152,7 +187,7 @@ def metric_tensor_tape(tape, diag_approx=False, wrt=None):
# to measure in the basis of the parametrized layer generators.
with tape.__class__() as layer_tape:
for op in queue:
op.queue()
qml.apply(op)
Contributor

😎

Member Author

Couldn't help myself!

if is_square and qml.math.allclose(cjac, qml.numpy.eye(cjac.shape[0])):
# Classical Jacobian is the identity. No classical processing
# is present inside the QNode.
return mt
Contributor

So this covers the case where even if hybrid=True, no actual classical processing happened so we just return the mt w.r.t. the gate arguments?

Member Author

yep :)
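A minimal sketch of that shortcut (a hypothetical helper illustrating the assumed shape of the check, not the exact PennyLane code): when the classical Jacobian is square and equal to the identity, contracting with it would be a no-op, so the gate-argument tensor can be returned directly.

```python
import numpy as np

def postprocess_metric_tensor(mt, cjac):
    """Hypothetical helper: contract a gate-argument metric tensor `mt`
    with a classical Jacobian `cjac`, skipping the contraction when the
    Jacobian is the identity (i.e. no classical processing occurred)."""
    is_square = cjac.ndim == 2 and cjac.shape[0] == cjac.shape[1]
    if is_square and np.allclose(cjac, np.eye(cjac.shape[0])):
        # Identity Jacobian: the chain rule would change nothing.
        return mt
    return cjac.T @ mt @ cjac

mt = np.diag([0.25, 0.5])
assert postprocess_metric_tensor(mt, np.eye(2)) is mt  # shortcut taken
```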

@@ -58,9 +57,6 @@ def circuit(weights):
assert tapes[2].operations[0].data == [1]
assert tapes[2].operations[1].data == [2]

result = qml.metric_tensor(circuit)(params)
Contributor

Why isn't the result still being tested?

Member Author

I think it's superfluous --- this test is testing that the rotation gate is correctly decomposed, and the execution of the compiled circuit feels a bit out of scope

# Currently, in the Autograd interface, we assume
# that all objects are differentiable by default.
return getattr(tensor, "requires_grad", True)
return getattr(tensor, "requires_grad", False)
Contributor

Is this setting the default value of requires_grad everywhere to False? If so that seems like a major change, it should maybe be added separately to the CHANGELOG and docs

Member Author

Luckily not, it turns out that this function is not used anywhere important yet 😆 I was super nervous about making this change, expecting tonnes of tests to fail.

But good idea to mention this in the changelog

Member Author

Have added this to the changelog!
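The diff under discussion flips the fallback used when an object carries no requires_grad attribute. A toy before/after sketch, using a stand-in class rather than PennyLane's tensor type:

```python
class PlainArray:
    """Stand-in for an object that has no requires_grad attribute."""

def requires_grad_old(tensor):
    # Old behaviour: objects are assumed differentiable by default.
    return getattr(tensor, "requires_grad", True)

def requires_grad_new(tensor):
    # New behaviour: only objects explicitly marked trainable count.
    return getattr(tensor, "requires_grad", False)

t = PlainArray()
print(requires_grad_old(t))  # True
print(requires_grad_new(t))  # False
```

An object that does set requires_grad=True is reported as trainable under both versions; only the default for unmarked objects changes.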

qml.RX(a[1], wires=0)
qml.RY(a[0], wires=0)
qml.CNOT(wires=[0, 1])
qml.PhaseShift(b, wires=1)
Contributor

There is no classical processing of the arguments in this QNode

Member Author

There is, but it's subtle. The classical processing function is

f: ([a0, a1], b) -> (a1, a0, b)

So the classical Jacobians will be a permutation matrix and an identity matrix:

classical_jacobian(circuit)(a, b) == ([[0, 1], [1, 0]], [[1]])
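That permutation Jacobian can be checked numerically without any quantum machinery. A plain-NumPy finite-difference sketch of the map f: ([a0, a1], b) -> (a1, a0, b), with a hypothetical helper that differentiates with respect to `a` only:

```python
import numpy as np

def f(a, b):
    """Classical processing map from QNode arguments to gate arguments."""
    return np.array([a[1], a[0], b])

def jacobian_wrt_a(a, b, eps=1e-6):
    """Central finite-difference Jacobian of f with respect to `a`."""
    cols = []
    for i in range(len(a)):
        da = np.zeros_like(a)
        da[i] = eps
        cols.append((f(a + da, b) - f(a - da, b)) / (2 * eps))
    return np.stack(cols, axis=1)

a, b = np.array([0.1, 0.2]), 0.3
J = jacobian_wrt_a(a, b)
print(np.round(J))
# The first two rows form the permutation matrix [[0, 1], [1, 0]];
# the last row is zero since the gate argument b does not depend on a.
```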

Member Author

Have added a comment here :)

Contributor

Wow I totally missed that! Thanks, the comment is helpful 💯

josh146 and others added 2 commits September 15, 2021 00:04
Co-authored-by: Olivia Di Matteo <2068515+glassnotes@users.noreply.github.com>


>>> grad_fn = qml.grad(lambda x: met_fn(x)[3, 2])
>>> grad_fn(weights)
array([[ 0.04867729, -0.00049502, 0. ],
Contributor

Might be worth mentioning that we can find the Jacobian but agreed that the gradient example is more digestible here!

pennylane/transforms/metric_tensor.py (outdated, resolved)

qnode.construct(args, kwargs)
for c in cjac:
if c is not None:
Contributor

how can None values appear in the classical jacobian?

Member Author

depending on the autodiff framework, None appears if the corresponding QNode argument is non-differentiable:

@qml.qnode(dev)
def circuit(x, y):
    qml.RX(2*x, wires=0)
    qml.RY(y**3, wires=0)
    return qml.expval(qml.PauliZ(0))

>>> x = np.array(0.4, requires_grad=True)
>>> y = np.array(0.4, requires_grad=False)
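The filtering that the quoted diff performs can be sketched in isolation: given a per-argument classical Jacobian tuple in which non-differentiable arguments appear as None, only the non-None entries are processed further (plain Python, with hypothetical data standing in for the circuit above).

```python
import numpy as np

# Hypothetical classical Jacobian for a QNode with two arguments:
# the trainable x contributes an array, the non-differentiable y is None.
cjac = (np.array(2.0), None)

# Keep only the entries belonging to trainable arguments.
trainable_cjacs = [c for c in cjac if c is not None]
print(len(trainable_cjacs))  # 1
```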

@anthayes92
Contributor

Another fine addition @josh146! Just a couple of minor suggestions and questions

Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>
@josh146
Member Author

josh146 commented Sep 15, 2021

Thanks @glassnotes and @anthayes92! Your feedback was extremely helpful - I've incorporated it in 9fdf474..9f01b05 (you can view the combined diff here).

Note that the behaviour of qml.transforms.classical_jacobian changed on master since the code review, so the classical_jacobian logic and tests needed some adjustments to fix bugs that were causing test failures.

Contributor

@glassnotes glassnotes left a comment

Thanks for the clarifications @josh146 , looks good to go ⭐


Please use the `qml.metric_tensor` transform instead.
[(#1638)](https://github.com/PennyLaneAI/pennylane/pull/1638)

- The utility function `qml.math.requires_grad` now returns `True` when using Autograd
Contributor

This is not going in 0.18 right? I'm just thinking now whether we will have to update some of the demos / other docs to incorporate this.

Member Author

Nope, this is being merged into master only/v0.19-dev


@josh146 josh146 merged commit 17b1551 into master Sep 15, 2021
@josh146 josh146 deleted the batch-metric-tensor branch September 15, 2021 13:30
Labels
review-ready 👌 PRs which are ready for review by someone from the core team.
Development

Successfully merging this pull request may close these issues.

QNGOptimizer returns TypeError when step method called
3 participants