
Add autodiff integration tests for the new QNode pipeline #1646

Merged: 44 commits merged into master from batch-qnode-interfaces on Sep 21, 2021

Conversation

@josh146 (Member) commented Sep 13, 2021

Context: #1642 introduces a beta QNode that uses the batch-execution pipeline. This PR adds integration tests for the Autograd, TF, and Torch interfaces.

Description of the Change: As above.

Benefits: integration tests!

Possible Drawbacks: n/a

Related GitHub Issues: n/a
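
For context, an integration test of this kind might look roughly like the following. This is a minimal sketch using the stable qml.qnode API; the beta QNode introduced in #1642 may be constructed differently, and the device and circuit here are illustrative only.

    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=1)

    @qml.qnode(dev, interface="autograd", diff_method="parameter-shift")
    def circuit(x):
        qml.RX(x, wires=0)
        return qml.expval(qml.PauliZ(0))

    x = np.array(0.543, requires_grad=True)

    # The Autograd interface should reproduce the analytic gradient d<Z>/dx = -sin(x)
    assert np.allclose(qml.grad(circuit)(x), -np.sin(x))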

@josh146 josh146 changed the base branch from master to batch-qnode September 13, 2021 07:05
@github-actions (bot) commented:
Hello. You may have forgotten to update the changelog!
Please edit .github/CHANGELOG.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@codecov (bot) commented Sep 13, 2021

Codecov Report

Merging #1646 (dc32116) into master (5475488) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master    #1646   +/-   ##
=======================================
  Coverage   99.19%   99.19%           
=======================================
  Files         199      199           
  Lines       14824    14835   +11     
=======================================
+ Hits        14704    14715   +11     
  Misses        120      120           
Impacted Files                              Coverage Δ
pennylane/gradients/parameter_shift.py      100.00% <100.00%> (ø)

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5475488...dc32116.

@josh146 josh146 changed the title [WIP] Add autodiff integration tests for the new QNode pipeline Add autodiff integration tests for the new QNode pipeline Sep 13, 2021
@josh146 josh146 added the review-ready 👌 PRs which are ready for review by someone from the core team. label Sep 13, 2021
@josh146 (Member, Author) commented Sep 13, 2021

[ch8959]

Base automatically changed from batch-qnode to master September 20, 2021 14:12
@glassnotes (Contributor) left a review:

@josh146 looks great! I've left a lot of questions, but these are more for my own understanding and don't affect the immediate mergeability 👍

@@ -331,8 +331,22 @@ def var_param_shift(tape, argnum, shift=np.pi / 2, gradient_recipes=None, f0=None):
    def processing_fn(results):
        # We need to expand the dimensions of the variance mask,
        # and convert it to be the same type as the results.
        mask = qml.math.convert_like(qml.math.reshape(var_mask, [-1, 1]), results[0])
        f0 = qml.math.expand_dims(results[0], -1)
        res = results[0]
@glassnotes (Contributor) commented:

What's going on here, and why is only the parameter shift gradient affected?

@josh146 (Member, Author) replied:

ohhhhh.

This is a bug uncovered by the new integration tests!

Basically, the old approach didn't work for QNodes of the form `return qml.probs(), qml.expval()` :(

@josh146 (Member, Author) replied:

I think it was test_ragged_variance (or something like that) that was failing until I made this change.
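
For reference, the kind of "ragged" QNode at issue looks roughly like this. A hedged sketch, not copied from the test suite: the measurements mix a scalar variance with a probability vector, so the results have different shapes.

    import pennylane as qml

    dev = qml.device("default.qubit", wires=2)

    @qml.qnode(dev, diff_method="parameter-shift")
    def ragged(x):
        qml.RX(x, wires=0)
        qml.CNOT(wires=[0, 1])
        # Ragged return: a scalar variance alongside a length-2 probability vector
        return qml.var(qml.PauliZ(0)), qml.probs(wires=[1])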

if diff_method != "parameter-shift":
pytest.skip("Test only supports parameter-shift")

dev = qml.device("default.qubit", wires=1)
@glassnotes (Contributor) commented:

Will dev_name eventually be something other than default.qubit? I notice in the parameter array it is always the same, and furthermore default.qubit is being set manually here.

@josh146 (Member, Author) replied:

Maybe! Now that lightning.qubit is part of PL, we could potentially add lightning qubit to our tests

(resolved comment thread on tests/interfaces/test_batch_autograd_qnode.py)
0.2,
0.3,
0.4,
0.5 - np.pi / 2,
@glassnotes (Contributor) commented:

Just to make sure I understand what's happened here, instead of executing anything, mock apply just appends to the parameter list, including when we call grad? So here what's been appended are the two sets of parameters for parameter shift, where only the y parameter is shifted since x has requires_grad=False?

@josh146 (Member, Author) replied:

yep exactly!! Blame past Josh for this, I wrote this test last year 😬

I probably wouldn't have written it like this today; I feel like my mocking ability has declined lol
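
For readers following along, the trainability split described above comes from the PennyLane NumPy wrapper: only parameters marked requires_grad=True are shifted by the parameter-shift rule. A minimal sketch with illustrative values, not the mocked test above:

    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=2)

    @qml.qnode(dev, diff_method="parameter-shift")
    def circuit(x, y):
        qml.RX(x, wires=0)
        qml.RY(y, wires=1)
        return qml.expval(qml.PauliZ(1))

    x = np.array(0.1, requires_grad=False)  # not trainable: never shifted
    y = np.array(0.5, requires_grad=True)   # trainable: evaluated at y + pi/2 and y - pi/2

    # Only the derivative with respect to the trainable y is computed
    grad_y = qml.grad(circuit)(x, y)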

(resolved comment thread on tests/interfaces/test_batch_autograd_qnode.py)
qml.RX(x, wires=[0])
qml.RY(y, wires=[1])
qml.CNOT(wires=[0, 1])
return qml.state()
@glassnotes (Contributor) commented:

Out of curiosity what are the barriers to supporting differentiation of the state?

@josh146 (Member, Author) replied:

Depends on the differentiation method :)

  • Adjoint: not sure if it is possible to support it, bit of a research question
  • Backprop: already supported in PL! (a short sketch follows this list)
  • Parameter-shift: we need to add a shift rule for the state. Actually, for RX, RY, RZ, and Rot there is a shift rule that works; I believe it has coefficients half those of the standard shift rule?
  • Finite-diff: should work, but I'm not sure 🤔
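
To illustrate the backprop point from the list above, here is a minimal sketch of differentiating a real-valued function of the state with default.qubit in backprop mode (an illustration, not a test from this PR):

    import pennylane as qml
    from pennylane import numpy as np

    dev = qml.device("default.qubit", wires=2)

    @qml.qnode(dev, interface="autograd", diff_method="backprop")
    def circuit(x, y):
        qml.RX(x, wires=[0])
        qml.RY(y, wires=[1])
        qml.CNOT(wires=[0, 1])
        return qml.state()

    def cost(x, y):
        # Differentiate a real scalar derived from the complex state vector
        state = circuit(x, y)
        return np.abs(state[0]) ** 2  # probability of measuring |00>

    x = np.array(0.3, requires_grad=True)
    y = np.array(0.7, requires_grad=True)
    grads = qml.grad(cost)(x, y)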

assert np.allclose(res1, res2, atol=tol, rtol=0)
assert np.allclose(grad1, grad2, atol=tol, rtol=0)

def test_drawing(self, dev_name, diff_method, mode):
@glassnotes (Contributor) commented:

Why is there a drawing test in here?

@josh146 (Member, Author) replied:

because there was a bug in the circuit drawer a long time ago that meant it couldn't draw operations whose parameters were TF tensors 😆 It must be a relic from before we had qml.draw()

assert np.allclose(res, expected, atol=tol, rtol=0)


def test_adjoint_reuse_device_state(mocker):
@glassnotes (Contributor) commented:

In the TF tests there is a separate class for this with multiple tests rather than an isolated test; I don't know much about the adjoint method, but is the behaviour different between interfaces?

@josh146 (Member, Author) replied Sep 21, 2021:

yep!

With Autograd/JAX, the UI enforces that the forward and backward passes happen consecutively. E.g., if you look at the source code for qml.grad, it looks roughly as follows:

def grad(fun, x):
    # Forward pass. Generates the answer
    # and the computational graph.
    vjp, ans = _make_vjp(fun, x)
    # Backwards pass to compute the gradient
    grad_value = vjp(vspace(ans).ones())
    return grad_value, ans

The nice thing about this is that if (a) the interface is autograd, and (b) the backward pass has been initiated, we can assume that the current device state corresponds to the forward-pass call.

However, in TF/Torch, because of the OOP UI, it is possible to separate the forward and backward passes:

loss1 = circuit(weights)
loss2 = circuit(2 * weights)
loss1.backward()

so, during the backward pass, we can no longer assume that the current device state is from the forward pass of the circuit with the same parameters!
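
For concreteness, a runnable Torch version of that separated forward/backward pattern (a sketch assuming the adjoint method on default.qubit; the circuit and values are illustrative, not the test from this PR):

    import torch
    import pennylane as qml

    dev = qml.device("default.qubit", wires=1)

    @qml.qnode(dev, interface="torch", diff_method="adjoint")
    def circuit(w):
        qml.RX(w, wires=0)
        return qml.expval(qml.PauliZ(0))

    weights = torch.tensor(0.4, requires_grad=True)

    loss1 = circuit(weights)      # forward pass 1
    loss2 = circuit(2 * weights)  # forward pass 2 overwrites the device state
    loss1.backward()              # backward pass for loss1: the device state no longer matches its forward pass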

(resolved comment threads on tests/interfaces/test_batch_torch_qnode.py)
Co-authored-by: Olivia Di Matteo <2068515+glassnotes@users.noreply.github.com>
@josh146 josh146 removed the request for review from anthayes92 September 21, 2021 13:19
@josh146 josh146 merged commit cb501a8 into master Sep 21, 2021
@josh146 josh146 deleted the batch-qnode-interfaces branch September 21, 2021 14:59