Jacobian Slice Bug StrawberryFields #2485

Merged · 12 commits · Apr 22, 2022
doc/releases/changelog-0.23.0.md (9 changes: 6 additions & 3 deletions)

@@ -614,12 +614,15 @@ the `decimals` and `show_matrices` keywords are added. `qml.drawer.tape_text(tap
the qubit operator that is prepared for tapering the HF state.
[(#2441)](https://github.com/PennyLaneAI/pennylane/pull/2441)

+* Fixed a bug where custom device-defined Jacobians were not being returned properly.
+  [(#108)](https://github.com/PennyLaneAI/pennylane-sf/pull/108/files)

<h3>Documentation</h3>

<h3>Contributors</h3>

This release contains contributions from (in alphabetical order):

-Karim Alaa El-Din, Guillermo Alonso-Linaje, Juan Miguel Arrazola, Ali Asadi, Utkarsh Azad, Thomas Bromley, Alain Delgado,
-Olivia Di Matteo, Anthony Hayes, David Ittah, Josh Izaac, Soran Jahangiri, Christina Lee, Romain Moyard, Zeyue Niu,
-Matthew Silverman, Lee James O'Riordan, Jay Soni, Antal Száva, Maurice Weber, David Wierichs.
+Karim Alaa El-Din, Guillermo Alonso-Linaje, Juan Miguel Arrazola, Ali Asadi, Utkarsh Azad, Samuel Banning,
+Thomas Bromley, Alain Delgado, Olivia Di Matteo, Anthony Hayes, David Ittah, Josh Izaac, Soran Jahangiri, Christina Lee,
+Romain Moyard, Zeyue Niu, Matthew Silverman, Lee James O'Riordan, Jay Soni, Antal Száva, Maurice Weber, David Wierichs.
pennylane/interfaces/autograd.py (10 changes: 9 additions & 1 deletion)

@@ -229,7 +229,15 @@ def grad_fn(dy):

         vjps = [qml.gradients.compute_vjp(d, jac) for d, jac in zip(dy, jacs)]

-        return [qml.math.to_numpy(v, max_depth=_n) if isinstance(v, ArrayBox) else v for v in vjps]
+        return_vjps = [
+            qml.math.to_numpy(v, max_depth=_n) if isinstance(v, ArrayBox) else v for v in vjps
+        ]
+        if device.capabilities().get("provides_jacobian", False):
+            # in the case where the device provides the jacobian,
+            # the output of grad_fn must be wrapped in a tuple in
+            # order to match the input parameters to _execute.
+            return (return_vjps,)
+        return return_vjps

    return grad_fn
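For readers wondering why the tuple wrapping is needed at all: autograd's `defvjp` matches the cotangent returned by a VJP against the structure of the primitive's differentiable argument. Below is a minimal, self-contained sketch using plain autograd — the names `execute` and `execute_vjp` are illustrative stand-ins, not PennyLane's actual `_execute` internals.

```python
import autograd
import autograd.numpy as anp
from autograd.extend import primitive, defvjp

@primitive
def execute(parameters):
    # Stand-in for a device execution: the primitive receives a tuple of
    # parameter arrays and produces a single scalar "result".
    return sum(anp.sum(anp.sin(p)) for p in parameters)

def execute_vjp(ans, parameters):
    def grad_fn(dy):
        # The cotangent must mirror the structure of `parameters`:
        # one entry per parameter array, wrapped in a tuple.
        return tuple(dy * anp.cos(p) for p in parameters)
    return grad_fn

defvjp(execute, execute_vjp)

params = (anp.array([0.1, 0.2]), anp.array([0.3]))
grads = autograd.grad(execute)(params)  # -> tuple of two gradient arrays
```

Returning a cotangent whose structure does not match the primitive's input is the kind of mismatch the `provides_jacobian` branch above guards against.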

tests/interfaces/test_autograd.py (28 changes: 28 additions & 0 deletions)

@@ -21,6 +21,7 @@
from pennylane import numpy as np

import pennylane as qml
+from pennylane.devices import DefaultQubit
from pennylane.gradients import finite_diff, param_shift
from pennylane.interfaces import execute

@@ -1111,3 +1112,30 @@ def test_multiple_hamiltonians_trainable(self, cost_fn, execute_kwargs, tol):
        res = np.hstack(qml.jacobian(cost_fn)(weights, coeffs1, coeffs2, dev=dev))
        expected = self.cost_fn_jacobian(weights, coeffs1, coeffs2)
        assert np.allclose(res, expected, atol=tol, rtol=0)


+class TestCustomJacobian:
+    def test_custom_jacobians(self):
+        class CustomJacobianDevice(DefaultQubit):
+            @classmethod
+            def capabilities(cls):
+                capabilities = super().capabilities()
+                capabilities["provides_jacobian"] = True
+                return capabilities
+
+            def jacobian(self, tape):
+                return np.array([1.0, 2.0, 3.0, 4.0])
+
+        dev = CustomJacobianDevice(wires=2)
+
+        @qml.qnode(dev, diff_method="device")
+        def circuit(v):
+            qml.RX(v, wires=0)
+            return qml.probs(wires=[0, 1])
+
+        d_circuit = qml.jacobian(circuit, argnum=0)
+
+        params = np.array(1.0, requires_grad=True)
+
+        d_out = d_circuit(params)
+        assert np.allclose(d_out, np.array([1.0, 2.0, 3.0, 4.0]))
Contributor:
@antalszava I am trying to understand why this bug didn't cause a problem for us. What was d_out before this fix? Thanks very much!

Member:

This is a very good question! Another good question is whether this change would affect your custom device Jacobians 🤔

Contributor:
Now I see: we had actually worked around this by returning `np.array([gradient])` from our device's `jacobian` method, where `gradient` is itself a NumPy array. I guess we just need to stop doing this starting with PL v0.23.0 then. Thanks for the heads up!
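To make the workaround described above concrete, here is a hedged sketch — hypothetical device code, not taken from this PR — of what such a `jacobian` method would look like before and after the fix:

```python
import numpy as np
from pennylane.devices import DefaultQubit

class WorkaroundDevice(DefaultQubit):
    """Hypothetical device illustrating the workaround described above."""

    @classmethod
    def capabilities(cls):
        capabilities = super().capabilities()
        capabilities["provides_jacobian"] = True
        return capabilities

    def jacobian(self, tape):
        gradient = np.array([1.0, 2.0, 3.0, 4.0])  # the actual Jacobian values
        # Pre-0.23.0 workaround: wrap the Jacobian in an extra leading axis,
        # so the old slicing path recovered `gradient` anyway:
        #     return np.array([gradient])
        # From PL v0.23.0 on, return the Jacobian directly:
        return gradient
```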