
Allow interface functions to convert QNodes with pre-existing interfaces #707

Merged: josh146 merged 14 commits into master from fix-interface-conversion on Jul 8, 2020

Conversation

Member

@josh146 josh146 commented Jul 6, 2020

Context:

The PennyLane interface functions to_torch(qnode), to_tf(qnode), and to_autograd(qnode) currently all make the assumption that the input QNode either has no attached interface (i.e., it is a 'bare' JacobianQNode) or that it has an autograd interface.

This causes issues if the QNode to be converted already has an interface attached, whether it is the same interface as the one being applied (see PL-451, where to_tf(to_tf(qnode)) causes the gradient to be zeroed) or a different pre-existing one.

Description of the Change:

  • All interfaces now keep a reference to the original bare QNode under the _qnode attribute.

  • The interface functions return the input QNode, without modification, if its interface already matches the one being applied.

  • If the input QNode has a different pre-existing interface, the interface functions instead return to_interface(qnode._qnode), as sketched below.
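
A minimal sketch of this dispatch logic, using to_torch as an example (only the _qnode attribute comes from the description above; the remaining names and the exact branch structure are simplifications, not the verbatim PennyLane implementation):

def to_torch(qnode):
    """Sketch only: convert a QNode to the Torch interface, handling pre-existing interfaces."""
    qnode_interface = getattr(qnode, "interface", None)

    if qnode_interface == "torch":
        # Same interface already attached: return the input QNode unchanged.
        return qnode

    if qnode_interface is not None:
        # A different interface is attached: unwrap the bare QNode kept under
        # the _qnode attribute and convert that instead.
        return to_torch(qnode._qnode)

    # No interface attached: fall through to the existing wrapping logic
    # (omitted from this sketch).
    ...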

Benefits:

The interface functions now contain the logic for handling QNodes both with and without a pre-existing interface. Higher-level abstractions that depend on these functions (such as the qml.qnn module) can use them without worrying about the interface of the input QNode.

Possible Drawbacks:

While writing tests for to_autograd(), I noticed that it has a side effect: it mutates the input QNode in addition to returning a new QNode. I attempted to modify to_autograd() to perform a deep copy so that this no longer happens. This worked for simple QNodes, but QNodes with tensor observables would not copy correctly --- the output would always be the ground state.

Related GitHub Issues: PL-451

@josh146 josh146 added bug 🐛 Something isn't working interface 🔌 Classical machine-learning interfaces labels Jul 6, 2020
@josh146 josh146 requested a review from trbromley July 6, 2020 11:33

codecov bot commented Jul 6, 2020

Codecov Report

Merging #707 into master will increase coverage by 0.01%.
The diff coverage is 97.80%.

@@            Coverage Diff             @@
##           master     #707      +/-   ##
==========================================
+ Coverage   98.65%   98.66%   +0.01%     
==========================================
  Files          99      103       +4     
  Lines        6077     6357     +280     
==========================================
+ Hits         5995     6272     +277     
- Misses         82       85       +3     
Impacted Files Coverage Δ
pennylane/_queuing_context.py 100.00% <ø> (ø)
...ennylane/circuit_drawer/representation_resolver.py 99.28% <ø> (ø)
pennylane/templates/layers/strongly_entangling.py 100.00% <ø> (ø)
pennylane/vqe/vqe.py 100.00% <ø> (ø)
pennylane/qnodes/cv.py 99.14% <50.00%> (-0.86%) ⬇️
pennylane/qnodes/base.py 99.30% <75.00%> (-0.70%) ⬇️
pennylane/beta/plugins/default_tensor.py 95.25% <95.18%> (-0.70%) ⬇️
pennylane/_device.py 99.35% <100.00%> (ø)
pennylane/_qubit_device.py 99.29% <100.00%> (+<0.01%) ⬆️
pennylane/beta/plugins/default_tensor_tf.py 92.50% <100.00%> (+0.39%) ⬆️
... and 30 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 0b0a702...ac08be8.

Member Author

josh146 commented Jul 6, 2020

@trbromley all tests seem to be passing, except for the Keras tests 🤔

Member Author

josh146 commented Jul 6, 2020

> @trbromley all tests seem to be passing, except for the Keras tests 🤔

Fixed 💪. With to_tf(), care has to be taken to ensure that a new interface is provided for an existing TF QNode if the dtype argument changes (see the sketch below).
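
For illustration only, a rough sketch of how that dtype check could slot into the early-return branch of to_tf (the dtype attribute and the branch layout here are assumptions, not the actual implementation):

def to_tf(qnode, dtype=None):
    """Sketch only: reuse an existing TF QNode only if its dtype also matches."""
    if getattr(qnode, "interface", None) == "tf":
        if dtype is None or getattr(qnode, "dtype", None) == dtype:
            # Same interface and dtype: safe to return the QNode unchanged.
            return qnode
        # dtype differs: rebuild the TF interface around the bare QNode.
        return to_tf(qnode._qnode, dtype=dtype)
    # Remaining branches (other interfaces, no interface) omitted here.
    ...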

Member

co9olguy commented Jul 6, 2020

@josh146 Is there any way to keep QNodes "interface-agnostic", with the interface just being a container/context that holds a "bare" QNode?

Member Author

josh146 commented Jul 6, 2020

> @josh146 Is there any way to keep QNodes "interface-agnostic", with the interface just being a container/context that holds a "bare" QNode?

Maybe I'm not fully following, but this is (to some extent) how it is done currently with both the TF and Torch interfaces.

  • The TF interface contains a 'bare' QNode internally, which is wrapped by TF's custom_gradient function. The bare QNode is stored as a private attribute.

  • The Torch interface contains a 'bare' QNode internally. The 'container' in this case is the torch.autograd.Function class. The bare QNode is stored as a private attribute.

It is difficult to standardize the interface containers much further, since TF requires that the container be tf.custom_gradient, and Torch requires that the container be torch.autograd.Function.apply.

The one exception to this is the Autograd interface, where we actually monkeypatch the bare QNode, adding the AutogradQNode mixin class, which registers the .jacobian() method as an autograd primitive bound to __call__(). This is peculiar to Autograd --- we originally attempted to apply the same approach as for TF and Torch (a container class holding the custom gradient logic), but couldn't get it to work. Autograd seems to disallow splitting the autograd primitive and the function/method it is bound to across different objects.
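
To illustrate the container pattern described above for the Torch case, here is a heavily stripped-down sketch (not PennyLane's actual implementation; it assumes the bare QNode is callable with a NumPy array, returns a 1-D array, and exposes a .jacobian() method; the class name is illustrative):

import torch


class _TorchQNode(torch.autograd.Function):
    """Sketch of a Torch 'container' holding a bare QNode."""

    @staticmethod
    def forward(ctx, qnode, params):
        # Evaluate the bare QNode outside Torch's autodiff machinery.
        ctx.qnode = qnode
        ctx.save_for_backward(params)
        res = qnode(params.detach().numpy())
        return torch.as_tensor(res, dtype=params.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        # Vector-Jacobian product using the QNode's own Jacobian
        # (assumes a 1-D output, so jac has shape (outputs, params)).
        (params,) = ctx.saved_tensors
        jac = torch.as_tensor(
            ctx.qnode.jacobian(params.detach().numpy()), dtype=grad_output.dtype
        )
        # No gradient is returned for the qnode argument itself.
        return None, grad_output @ jac

The converted QNode would then be evaluated via _TorchQNode.apply(bare_qnode, params), which is why the container has to be a torch.autograd.Function, as noted above.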

Contributor

@trbromley trbromley left a comment


Thanks @josh146, looks good to me! Just had some quick questions before approval.

.github/CHANGELOG.md (resolved)
@@ -99,5 +106,6 @@ def gradient_product(g):

# define the vector-Jacobian product function for AutogradQNode.evaluate
autograd.extend.defvjp(AutogradQNode.evaluate, AutogradQNode.QNode_vjp, argnums=[1])
qnode._qnode = qnode # pylint: disable=protected-access
Contributor


Could you explain this line? Is this for the case where qnode_interface is None?

Contributor


Ah, is this part linked to the "possible drawbacks":

> While writing tests for to_autograd(), I noticed that it has a side effect: it mutates the input QNode in addition to returning a new QNode. I attempted to modify to_autograd() to perform a deep copy so that this no longer happens. This worked for simple QNodes, but QNodes with tensor observables would not copy correctly --- the output would always be the ground state.

Member Author


Ah, this is more because I want to be consistent with the other interfaces -- ensuring that a reference to the 'bare' QNode exists at self._qnode.

pennylane/interfaces/tf.py (outdated, resolved)
tests/interfaces/test_tf.py (outdated, resolved)
josh146 and others added 5 commits July 7, 2020 23:26
Co-authored-by: Tom Bromley <49409390+trbromley@users.noreply.github.com>
@josh146 josh146 requested a review from trbromley July 7, 2020 14:32
Contributor

@trbromley trbromley left a comment


Thanks @josh146 !

tests/interfaces/test_torch.py (outdated, resolved)
Co-authored-by: Tom Bromley <49409390+trbromley@users.noreply.github.com>
@josh146 josh146 merged commit f9736de into master Jul 8, 2020
@josh146 josh146 deleted the fix-interface-conversion branch July 8, 2020 07:01