Add autodiff integration tests for the new QNode pipeline #1646
Conversation
Hello. You may have forgotten to update the changelog!
Codecov Report

@@           Coverage Diff           @@
##           master    #1646   +/-  ##
=======================================
  Coverage   99.19%   99.19%
=======================================
  Files         199      199
  Lines       14824    14835    +11
=======================================
+ Hits        14704    14715    +11
  Misses        120      120

Continue to review full report at Codecov.
@josh146 looks great! I've left a lot of questions, but these are more for my own understanding and don't affect the immediate mergeability 👍
@@ -331,8 +331,22 @@ def var_param_shift(tape, argnum, shift=np.pi / 2, gradient_recipes=None, f0=None
    def processing_fn(results):
        # We need to expand the dimensions of the variance mask,
        # and convert it to be the same type as the results.
        mask = qml.math.convert_like(qml.math.reshape(var_mask, [-1, 1]), results[0])
        f0 = qml.math.expand_dims(results[0], -1)
        res = results[0]
What's going on here, and why is only the parameter shift gradient affected?
ohhhhh.
This is a bug uncovered by the new integration tests!
Basically, the old approach didn't work for QNodes of the form return qml.probs(), qml.expval()
:(
I think it was the test test_ragged_variance, or something like that, that was failing until I made this change.
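To illustrate the failure mode described above, here is a minimal sketch (the shapes and the mask are hypothetical, reconstructed from the diff rather than taken from PennyLane): a QNode returning both probabilities and an expectation value produces ragged per-measurement results, which a rectangular reshape-and-broadcast of the variance mask cannot handle.

```python
import numpy as np

# Hypothetical ragged QNode result: one measurement returns a
# probability vector, the other a scalar.
probs = np.array([0.7, 0.3])   # e.g. qml.probs(wires=0) -> shape (2,)
expval = np.array(0.4)         # e.g. qml.expval(...)    -> shape ()

try:
    # A rectangular stack (which broadcasting a reshape(var_mask, [-1, 1])
    # mask against the results implicitly assumes) is impossible here.
    np.stack([probs, expval])
    ragged = False
except ValueError:
    ragged = True  # the shapes (2,) and () cannot be stacked
```

This is why the fix in the diff keeps `results[0]` as-is instead of assuming a uniformly shaped results array.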
if diff_method != "parameter-shift":
    pytest.skip("Test only supports parameter-shift")

dev = qml.device("default.qubit", wires=1)
Will dev_name eventually be something other than default.qubit? I notice in the parameter array it is always the same, and furthermore default.qubit is being set manually here.
Maybe! Now that lightning.qubit is part of PL, we could potentially add lightning qubit to our tests
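A sketch of what that could look like (adding lightning.qubit is a suggestion from the comment above, not something this PR does): the tests could be parametrized over device names instead of hard-coding default.qubit.

```python
import pytest

# Hypothetical device parametrization; the second entry assumes
# lightning.qubit is installed as part of PL.
DEVICE_NAMES = ["default.qubit", "lightning.qubit"]

@pytest.mark.parametrize("dev_name", DEVICE_NAMES)
def test_qnode_runs(dev_name):
    # body elided: construct qml.device(dev_name, wires=1) here
    assert dev_name in DEVICE_NAMES
```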
0.2,
0.3,
0.4,
0.5 - np.pi / 2,
Just to make sure I understand what's happened here: instead of executing anything, the mocked apply just appends to the parameter list, including when we call grad? So here what's been appended are the two sets of parameters for the parameter shift, where only the y parameter is shifted since x has requires_grad=False?
yep exactly!! Blame past Josh for this, I wrote this test last year 😬
I probably wouldn't write it like this now; I feel like my mocking ability has declined lol
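For readers unfamiliar with the pattern being discussed, here is a minimal standalone sketch (all names hypothetical, not PennyLane code): the device's apply is replaced by a mock that only records the parameters it was called with, so a parameter-shift gradient leaves behind exactly the shifted parameter sets.

```python
import numpy as np
from unittest.mock import MagicMock

class FakeDevice:
    def apply(self, params):
        raise RuntimeError("real execution should never run in this test")

dev = FakeDevice()
dev.apply = MagicMock()  # apply now just records what it was called with

# A parameter-shift-style pair of executions where only y is shifted,
# because x is treated as non-trainable (requires_grad=False):
x, y = 0.1, 0.5
dev.apply([x, y + np.pi / 2])
dev.apply([x, y - np.pi / 2])

# The mock's call list plays the role of the "parameter list" above
recorded = [call.args[0] for call in dev.apply.call_args_list]
```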
qml.RX(x, wires=[0])
qml.RY(y, wires=[1])
qml.CNOT(wires=[0, 1])
return qml.state()
Out of curiosity what are the barriers to supporting differentiation of the state?
Depends on the differentiation method :)
- Adjoint: not sure if it is possible to support it; bit of a research question
- Backprop: already supported in PL!
- Parameter-shift: we need to add a shift rule for the state. Actually, for RX, RY, RZ, and Rot there is a shift rule that works --- I believe it has coefficients half those of the standard shift rule?
- Finite-diff: should work, but I'm not sure 🤔
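A rough numerical check of the parameter-shift point, for RX acting on |0⟩ only (the specific shift and coefficient below are my own working, not an implemented PL recipe): the state itself obeys an exact shift rule with shifts of ±π and coefficient 1/4, versus ±π/2 and 1/2 for the standard expectation-value rule.

```python
import numpy as np

def rx_state(theta):
    # RX(theta)|0> = [cos(theta/2), -i sin(theta/2)]
    return np.array([np.cos(theta / 2), -1j * np.sin(theta / 2)])

theta = 0.37

# Finite-difference derivative of the state itself
eps = 1e-6
fd = (rx_state(theta + eps) - rx_state(theta - eps)) / (2 * eps)

# Candidate state-level shift rule: coefficient 1/4, shifts of +-pi
shifted = (rx_state(theta + np.pi) - rx_state(theta - np.pi)) / 4
```

The two derivatives agree to numerical precision, consistent with the "smaller coefficients" intuition above, at least for this gate.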
assert np.allclose(res1, res2, atol=tol, rtol=0)
assert np.allclose(grad1, grad2, atol=tol, rtol=0)

def test_drawing(self, dev_name, diff_method, mode):
Why is there a drawing test in here?
because there was a bug in the circuit drawer a long time ago that didn't allow it to draw operations that had TF tensors 😆 It must be a relic from before we had qml.draw()
assert np.allclose(res, expected, atol=tol, rtol=0)


def test_adjoint_reuse_device_state(mocker):
In the TF tests there is a separate class for this with multiple tests rather than an isolated test; I don't know much about the adjoint method, but is the behaviour different between interfaces?
yep!
With Autograd/JAX, the UI enforces that the forward and backward passes happen consecutively. E.g., if you look at the source code for qml.grad, it looks as follows:

def grad(fun, x):
    # Forward pass. Generates the answer
    # and the computational graph.
    vjp, ans = _make_vjp(fun, x)
    # Backward pass to compute the gradient
    grad_value = vjp(vspace(ans).ones())
    return grad_value, ans

The nice thing about this is that if (a) the interface is Autograd, and (b) the backward pass has been initiated, we can assume that the current device state corresponds to the forward-pass call.
However, in TF/Torch, because of the OOP UI, it is possible to separate the forward and backward passes:
loss1 = circuit(weights)
loss2 = circuit(2 * weights)
loss1.backward()
so, during the backward pass, we can no longer assume that the current device state is from the forward pass of the circuit with the same parameters!
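The hazard can be illustrated with a toy cache (this is a hypothetical sketch, not PL's adjoint implementation): a device that keeps only its latest forward-pass state goes stale as soon as a second forward pass runs before the first backward pass.

```python
import numpy as np

class ToyDevice:
    """Caches only the parameters of the most recent forward pass."""
    def __init__(self):
        self.latest_params = None

    def forward(self, params):
        self.latest_params = params  # state left over from this execution
        return params ** 2

    def backward(self, params):
        # An adjoint-style shortcut would reuse self.latest_params here;
        # that is only safe if no other forward pass ran in between.
        return self.latest_params is params

dev = ToyDevice()
w1 = np.array(0.5)
loss1 = dev.forward(w1)
loss2 = dev.forward(2 * w1)   # second forward pass overwrites the cache
stale = not dev.backward(w1)  # backward for loss1 now sees loss2's state
```

With Autograd's consecutive forward/backward calls the cache check always passes; with the Torch-style interleaving above, it does not.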
Co-authored-by: Olivia Di Matteo <2068515+glassnotes@users.noreply.github.com>
Context: #1642 introduces a beta QNode that uses the batch-execution pipeline. This PR adds integration tests for the Autograd, TF, and Torch interfaces.
Description of the Change: As above.
Benefits: integration tests!
Possible Drawbacks: n/a
Related GitHub Issues: n/a