Add support for qml.specs to the beta QNode #1739

Merged
merged 18 commits into master on Oct 14, 2021

Conversation

@josh146 (Member) commented Oct 12, 2021

Context: Previously, qml.specs would query QNode.specs. This property does not exist on the new QNode.

Description of the Change:

  • The qml.specs transform is modified to compute the specifications of qml.beta.QNode objects directly.
  • An if-else block used to temporarily set the QNode expansion depth is changed to a try-finally block, so that the original expansion depth is restored even if an exception is raised (a sketch of the pattern follows this list).
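
A minimal sketch of the try-finally pattern described in the second bullet, assuming the public QNode attributes qnode.max_expansion, qnode.construct, and qnode.qtape (the exact variable names in specs.py may differ):

original_expansion = qnode.max_expansion

try:
    # Temporarily override the expansion depth and rebuild the tape.
    qnode.max_expansion = max_expansion
    qnode.construct(args, kwargs)
    info = qnode.qtape.specs.copy()
finally:
    # Always restore the original depth, even if tape construction raises.
    qnode.max_expansion = original_expansion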

Benefits:

  • The beta QNode correctly tracks trainable parameters in backpropagation mode.
  • Computing the number of gradient executions is now more reliable, since the gradient transform is simply called directly.

Possible Drawbacks: n/a

Related GitHub Issues: n/a

@github-actions (Contributor) commented:

Hello. You may have forgotten to update the changelog!
Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@josh146 josh146 requested a review from albi3ro October 12, 2021 09:55
@josh146 josh146 added the review-ready 👌 label (PRs which are ready for review by someone from the core team) on Oct 12, 2021
@josh146 (Member Author) commented Oct 12, 2021

[sc-9747]

@codecov (bot) commented Oct 12, 2021

Codecov Report

Merging #1739 (655e1c1) into master (65bdcfc) will increase coverage by 2.66%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master    #1739      +/-   ##
==========================================
+ Coverage   96.55%   99.22%   +2.66%     
==========================================
  Files         207      207              
  Lines       15552    15573      +21     
==========================================
+ Hits        15017    15452     +435     
+ Misses        535      121     -414     
Impacted Files                                Coverage Δ
pennylane/gradients/parameter_shift.py        100.00% <100.00%> (ø)
pennylane/transforms/specs.py                 100.00% <100.00%> (ø)
pennylane/interfaces/batch/__init__.py        100.00% <0.00%> (+0.96%) ⬆️
pennylane/devices/default_qubit.py            100.00% <0.00%> (+1.22%) ⬆️
pennylane/beta/devices/default_tensor.py      96.93% <0.00%> (+1.70%) ⬆️
pennylane/interfaces/batch/tensorflow.py      100.00% <0.00%> (+2.22%) ⬆️
pennylane/devices/default_qubit_tf.py         92.00% <0.00%> (+2.66%) ⬆️
pennylane/beta/devices/default_tensor_tf.py   90.62% <0.00%> (+3.12%) ⬆️
pennylane/interfaces/batch/torch.py           100.00% <0.00%> (+3.27%) ⬆️
pennylane/interfaces/torch.py                 100.00% <0.00%> (+3.29%) ⬆️
... and 16 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@josh146 (Member Author) commented Oct 12, 2021

@albi3ro I've updated the test file to retain both the original tests and the new ones, ensuring qml.specs continues to work well with both QNodes 🙂

@albi3ro (Contributor) left a comment

Just some thoughts and questions for now.

info["execution_options"] = qnode.execute_kwargs
info["interface"] = qnode.interface

if callable(qnode.gradient_fn):
Contributor:

What else would it be? A string? None?

Member Author:

Yep, it can currently take one of three types (a small dispatch sketch follows this list):

  • Callable: one of qml.gradients.gradient_transform (could be changed to isinstance(qnode.gradient_fn, gradient_transform) to be stricter)
  • None: no gradients
  • str: to indicate "device"
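
As a rough sketch (not the exact specs.py code), the three cases could be dispatched on like this, reusing the info dictionary quoted above:

if qnode.gradient_fn is None:
    info["gradient_fn"] = None  # no gradient method available
elif isinstance(qnode.gradient_fn, str):
    info["gradient_fn"] = qnode.gradient_fn  # e.g. "device"
elif callable(qnode.gradient_fn):
    # a gradient transform, e.g. qml.gradients.param_shift
    info["gradient_fn"] = qnode.gradient_fn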

"The QNode specifications can only be calculated after its quantum tape has been constructed."
)

info = qnode.qtape.specs.copy()
Contributor:

Will we be getting rid of the qnode.specs property?

Member Author:

Yep; the idea is to remove all properties/methods that depend on the state of construction, and replace them with functions that act on the QNode. This also includes qnode.draw() and qnode.metric_tensor().

Comment on lines 134 to 136
# In the case of a broad exception, we don't want the `qml.specs` transform
# to fail. Instead, we simply indicate that the number of gradient executions
# is not supported.
Contributor:

While this might be helpful while the new QNode is still under development, in the long run, we should know exactly the types of situations that would cause this to fail and check for those instead. If something's going wrong internally, we want to know.

Member Author:

Actually, this was inserted to solve an error I was getting while running the tests! In particular, this one:

@qml.qnode(dev, diff_method=diff_method)
def circuit():
    return qml.state()

Of the three diff methods tested, only backprop supports state differentiation; both adjoint and parameter-shift will raise an error.

  • Previously, qml.specs would use the Operator.grad_recipe to determine the number of shifts required, independent of the circuit measurement, resulting in an incorrect value for num_gradient_executions.

  • In this PR, qml.specs queries the gradient transform directly to determine the number of shifts. This is more accurate; however, it means that calling qml.specs(circuit) on the QNode above would raise an error, since qml.gradients.param_shift and adjoint raise an exception when the input circuit returns qml.state.

Overall, I think this is more informative; checking the number of parameter-shift executions on this QNode will show NotSupported, rather than a numeric value.
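
For concreteness, here is a hedged sketch of counting gradient executions by applying the gradient transform to the constructed tape, using the same qml.beta.qnode and qml.gradients.param_shift APIs referenced elsewhere in this thread (not necessarily the literal specs.py code):

import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.beta.qnode(dev, diff_method=qml.gradients.param_shift)
def circuit(x):
    qml.RX(x, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

circuit(0.56)  # constructs circuit.qtape
# A gradient transform maps the tape to a batch of shifted tapes plus a
# processing function; the batch size is the number of gradient executions.
grad_tapes, _ = qml.gradients.param_shift(circuit.qtape)
print(len(grad_tapes))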

Member Author:

What we could do, though, is somehow 'catch' the exception message, and store this in the dictionary?

@josh146 (Member Author) commented Oct 13, 2021

I've pushed an update that does the following 🙂

dev = qml.device("default.qubit", wires=2)

@qml.beta.qnode(dev, diff_method=qml.gradients.param_shift)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.state()
>>> specs = qml.specs(circuit)(0.56)
>>> specs["num_gradient_executions"]
NotSupported: Computing the gradient of circuits that return the state is not supported.
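
Roughly, the guard behind this behaviour amounts to something like the following sketch (the dictionary key mirrors the output above; the exact wording in specs.py may differ):

try:
    # gradient_fn is assumed to be a gradient transform here (the callable case)
    grad_tapes, _ = qnode.gradient_fn(qnode.qtape)
    info["num_gradient_executions"] = len(grad_tapes)
except Exception as exc:
    # e.g. param-shift or adjoint rejecting circuits that return qml.state()
    info["num_gradient_executions"] = f"NotSupported: {exc}"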

@josh146 josh146 requested a review from albi3ro October 13, 2021 07:02
info["gradient_fn"] = inspect.getmodule(qnode.gradient_fn).__name__

try:
    if isinstance(qnode.gradient_fn, qml.gradients.gradient_transform):
Contributor:

This seems to already be inside an if callable(qnode.gradient_fn) block. Is there a reason we make the stricter check again? Or can we defer this check to the outer if statement?

Member Author:

Good point, let me remove the inner one.

@albi3ro (Contributor) left a comment

A few questions about when:
A) someone passes in a gradient transform that's built into pennylane, like qml.gradients.param_shift
B) someone passes in their own custom gradient transform

I think those cases need to be improved and tested.

@josh146 (Member Author) commented Oct 14, 2021

Thanks Christina, both valid points! I've pushed a commit that takes this into account; qml.specs now reports the absolute import path for gradient transforms, even when a custom transform is passed.
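
As a sketch of the idea (the helper name below is hypothetical, and it assumes the transform exposes __module__ and __name__ the way a functools.wraps-decorated function does):

import inspect

def _gradient_fn_path(gradient_fn):
    """Return the absolute import path '<module>.<name>' of a gradient
    transform, whether built into PennyLane or user-defined."""
    module = inspect.getmodule(gradient_fn).__name__
    return f"{module}.{gradient_fn.__name__}"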

@josh146 josh146 requested a review from albi3ro October 14, 2021 15:18
@albi3ro (Contributor) left a comment

Thanks for the improvements!

@josh146 josh146 merged commit 7425c5d into master Oct 14, 2021
@josh146 josh146 deleted the beta-qnode-specs branch October 14, 2021 16:03