
[BUG] Operator matrices should be defined using holomorphic functions #1819

Open

josh146 opened this issue Oct 28, 2021 · 4 comments

Labels: bug 🐛 Something isn't working
josh146 (Member) commented Oct 28, 2021

In #1749, the Operator.matrix method was updated to use differentiable logic when constructing the matrices of parametric operations, allowing them to be differentiated in all frameworks. This change worked well with existing workflows, that is, differentiating a real-valued variational quantum algorithm cost function.

However, when differentiating the operation matrix itself, we are dealing with a complex-valued cost function, and unlike the case above, we now need to ensure that all tensor operations are holomorphic.
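For context, a function $f(z)$ is holomorphic precisely when its Wirtinger derivative with respect to $\bar{z}$ vanishes,

$$\frac{\partial f}{\partial \bar{z}} = 0,$$

whereas conjugation satisfies $\partial \bar{z} / \partial \bar{z} = 1$, so any expression built from conjugation fails this condition.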

Unfortunately, qml.math.conj() is a non-holomorphic function. Operations that use this function will output incorrect complex matrix gradients:

>>> x = torch.tensor(0.2 + 0j, requires_grad=True)
>>> cost = lambda x: qml.RZ(x, wires=0).matrix
>>> torch.autograd.functional.jacobian(cost, x)
tensor([[-0.0499+0.4975j, -0.0000+0.0000j],
        [-0.0000+0.0000j, -0.0499+0.4975j]])

Implementing the RZ gate without conj(), we can compare against the expected gradient:

>>> cost = lambda x: torch.diag(torch.stack([torch.exp(-0.5j * x), torch.exp(0.5j * x)]))
>>> torch.autograd.functional.jacobian(cost, x)
tensor([[-0.0499+0.4975j,  0.0000+0.0000j],
        [ 0.0000+0.0000j, -0.0499-0.4975j]])
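As a sanity check, the analytic Jacobian of $\mathrm{diag}(e^{-ix/2}, e^{ix/2})$ at $x = 0.2$ is

$$\frac{d}{dx}\,\mathrm{diag}\left(e^{-ix/2},\, e^{ix/2}\right) = \mathrm{diag}\left(-\tfrac{i}{2}e^{-ix/2},\, \tfrac{i}{2}e^{ix/2}\right) \approx \mathrm{diag}\left(-0.0499 - 0.4975i,\; -0.0499 + 0.4975i\right).$$

Its entrywise conjugate (PyTorch's complex autograd reports conjugated Wirtinger derivatives) matches the second result above, while the conj()-based version flips the sign of the imaginary part of the second diagonal entry.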

Note that this bug only affects the gradients of complex-valued cost functions in PennyLane, but it will arise wherever non-holomorphic functions are used. This includes:

  • The operator matrices, as described above
  • Attempting to differentiate a QNode that returns qml.state() on one of the backprop devices, default.qubit.(torch|tf|autograd|jax)

To solve this, all operator matrices and backprop simulators should be rewritten to avoid non-holomorphic functions.
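As an illustration of the kind of rewrite involved (a minimal sketch, not PennyLane's actual implementation; the function names are hypothetical):

```python
import torch

# Non-holomorphic: builds the e^{-ix/2} phase by conjugating the other,
# which breaks complex differentiation.
def rz_matrix_conj(x):
    p = torch.exp(0.5j * x)
    return torch.diag(torch.stack([torch.conj(p), p]))

# Holomorphic: computes both phases directly, avoiding conj().
def rz_matrix_holomorphic(x):
    return torch.diag(torch.stack([torch.exp(-0.5j * x), torch.exp(0.5j * x)]))
```

Both return the same RZ matrix for real x, but only the second yields correct complex Jacobians.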

josh146 added the bug 🐛 Something isn't working label Oct 28, 2021
licedric changed the title from "[BUG] Operator matrices should be defined using non-holomorphic functions" to "[BUG] Operator matrices should be defined using holomorphic functions" Nov 2, 2021
puzzleshark linked a pull request Nov 10, 2021 that will close this issue
puzzleshark self-assigned this Nov 10, 2021
puzzleshark (Contributor) commented

@josh146 what do you mean when you say backprop simulators?

josh146 (Member, Author) commented Nov 11, 2021

@puzzleshark I mean the simulators which natively support backpropagation 🙂 This includes:

  • default.qubit.autograd
  • default.qubit.jax
  • default.qubit.tf
  • default.qubit.torch

Note that when you create a QNode using

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, diff_method="best")
def circuit(...):

then a backprop device will be chosen automatically whenever shots=None (i.e., results are not being approximated with finite shots, which would introduce stochasticity).
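You can also request backpropagation explicitly rather than relying on diff_method="best" (a hedged sketch; the circuit body is illustrative):

```python
import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=2)

# With diff_method="backprop" and the torch interface, the device is
# swapped internally for its backprop-capable counterpart
# (default.qubit.torch).
@qml.qnode(dev, interface="torch", diff_method="backprop")
def circuit(x):
    qml.RZ(x, wires=0)
    return qml.state()

x = torch.tensor(0.2, requires_grad=True)
state = circuit(x)
```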

For more info, check out this tutorial: https://pennylane.ai/qml/demos/tutorial_backprop.html

puzzleshark (Contributor) commented

@josh146 gotcha. It seems like all the simulators use the same code, but just swap out the tensor operations, i.e., tf.math.conj vs torch.conj. Looking through, I don't see any particular issue with it w.r.t. complex differentiability (all instances of conj are applied to the state vector), but I'm not sure how I could test that it's all in working order. Maybe create some tests with simple circuits and compare with the same calculation via one of the autodiff frameworks?

josh146 (Member, Author) commented Nov 16, 2021

> Maybe create some tests with simple circuits and compare with the same calculation via one of the autodiff frameworks?

Yep! I think that is the way to go. Perhaps create a simple circuit, such as

def circuit(x):
    qml.Hadamard(wires=0)
    qml.RZ(x, wires=0)
    return qml.state()

and at the same time, analytically compute what the state looks like, by applying the analytic Hadamard and RZ matrices to the state [1, 0].

You can then differentiate the output state in Torch/TF/Autograd/JAX, and double-check that it agrees with the analytic result :)
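A minimal sketch of such a test (assuming the QNode can be passed directly to torch.autograd.functional.jacobian; names are illustrative):

```python
import pennylane as qml
import torch

dev = qml.device("default.qubit.torch", wires=1)

@qml.qnode(dev, interface="torch", diff_method="backprop")
def circuit(x):
    qml.Hadamard(wires=0)
    qml.RZ(x, wires=0)
    return qml.state()

# Analytic state: RZ(x) H |0> = [exp(-ix/2), exp(ix/2)] / sqrt(2)
def analytic_state(x):
    return torch.stack([torch.exp(-0.5j * x), torch.exp(0.5j * x)]) / 2**0.5

x = torch.tensor(0.2 + 0j, requires_grad=True)
jac = torch.autograd.functional.jacobian(circuit, x)
expected = torch.autograd.functional.jacobian(analytic_state, x)
assert torch.allclose(jac, expected)
```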
