
Make the contraction of quantum and classical Jacobians consistent in gradient_transform #4945

Merged
dwierichs merged 21 commits into master from fix-grad_transform-qnode_wrapper on Apr 25, 2024

Conversation

@dwierichs (Contributor) commented on Dec 13, 2023

Context:
When applying a gradient_transform to a QNode, the QNode wrapper makes use of the function _contract_qjac_with_cjac to contract the quantum and classical Jacobians with each other.
Here, the classical Jacobian is the Jacobian of classical processing that happens within the QNode between the QNode input arguments and the tape-level operation arguments.

For the special case where the classical Jacobian is the identity, _contract_qjac_with_cjac contains separate logic that skips the contraction.
This is a problem because the contraction, even when it does not affect the numerical values of the output, also reformats the quantum Jacobian via transposition, casting to tuples, and stacking.
All of this reformatting is skipped when the classical Jacobian is the identity, which happens, for example, for 1D QNode arguments that are not processed any further but simply passed to operations within the QNode. Unfortunately, that is a very common scenario, which makes this PR a highly visible breaking change.
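To make this concrete, here is a minimal sketch (not part of this PR; the printed values are expectations, not verified output) that contrasts an identity classical Jacobian with a non-trivial one via `qml.gradients.classical_jacobian`:

import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def passthrough(x):
    # x reaches the gates unmodified -> the classical Jacobian is the identity
    qml.RX(x[0], 0)
    qml.RY(x[1], 0)
    return qml.expval(qml.PauliZ(0))

@qml.qnode(dev)
def scaled(x):
    # 2 * x is classical processing -> the classical Jacobian is 2 * identity
    qml.RX(2 * x[0], 0)
    qml.RY(2 * x[1], 0)
    return qml.expval(qml.PauliZ(0))

x = pnp.array([0.4, 0.2], requires_grad=True)
print(qml.gradients.classical_jacobian(passthrough)(x))  # expect [[1. 0.] [0. 1.]]
print(qml.gradients.classical_jacobian(scaled)(x))       # expect [[2. 0.] [0. 2.]]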

In addition, certain return-type setups never seem to be tested in combination with classical processing, so the contraction is always skipped in those tests and the relevant code paths are not covered.

Description of the Change:
This PR removes the special logic for the case where the classical Jacobian is the identity.
There is an alternative version that still skips the contraction but performs all the reformatting steps. Given the amount of code duplication, I do not think we want to go for that version.

In addition, this PR adds the contraction logic for previously uncovered cases, which are now hit because we do not skip the contraction any longer.
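As a hypothetical illustration of such a previously uncovered case (a sketch, not a test from this PR), combining classical processing with a multi-measurement return exercises the contraction and its reformatting steps:

import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(x):
    qml.RX(2 * x[0] + x[1], 0)  # non-trivial classical processing
    return qml.expval(qml.PauliZ(0)), qml.var(qml.PauliZ(0))

x = pnp.array([0.4, 0.2], requires_grad=True)
# Non-identity classical Jacobian plus a tuple return: the contraction
# cannot be skipped, so the newly covered code paths are hit.
print(qml.gradients.param_shift(circuit)(x))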

Benefits:
The output of gradient transforms applied to QNodes is self-consistent.
The output of gradient transforms applied to QNodes without classical processing is consistent with the results obtained through interface entrypoints (see the sketch below).
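A hedged sketch of that second point, assuming the standard autograd entrypoint `qml.jacobian` (this snippet is illustrative, not taken from the PR):

import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x[0], 0)  # inputs reach the gates unmodified (identity classical Jacobian)
    qml.RY(x[1], 0)
    return qml.expval(qml.PauliZ(0))

x = pnp.array([0.4, 0.2], requires_grad=True)
grad_transform = qml.gradients.param_shift(circuit)(x)  # gradient transform applied to the QNode
grad_interface = qml.jacobian(circuit)(x)               # autograd interface entrypoint
print(pnp.allclose(grad_transform, grad_interface))     # expected: True after this PR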

Possible Drawbacks:
Breaking change

Related GitHub Issues:
Fixes #4940

[sc-51888]

codecov bot commented on Dec 13, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.68%. Comparing base (9151477) to head (88025f5).
Report is 5 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #4945      +/-   ##
==========================================
- Coverage   99.69%   99.68%   -0.01%     
==========================================
  Files         410      411       +1     
  Lines       38278    38061     -217     
==========================================
- Hits        38161    37943     -218     
- Misses        117      118       +1     


Five review threads on pennylane/gradients/gradient_transform.py (outdated, resolved).
@rmoyard rmoyard self-requested a review December 13, 2023 14:11
@rmoyard (Contributor) left a comment

Thanks @dwierichs, could you remove the commented parts of the code? Otherwise it looks good to me, nice testing. Do you know whether the performance hit is large when the classical Jacobian is the identity?

One more review thread on pennylane/gradients/gradient_transform.py (outdated, resolved).
@dwierichs (Contributor, Author) commented

Thanks @rmoyard. I left the commented code in for now so that we can decide whether to take the small performance hit or keep the code simpler.

I timed the following:

import pennylane as qml
from pennylane import numpy as pnp

dev = qml.device("default.qubit")

@qml.qnode(dev)
def f(x):
    # One RX rotation per entry of x; the inputs reach the gates
    # unmodified, so the classical Jacobian is the identity.
    for _x in x:
        qml.RX(_x, 0)
    return qml.expval(qml.PauliZ(0))

n = 100
x = pnp.random.random(n, requires_grad=True)
%timeit qml.gradients.param_shift(f)(x)  # IPython magic
2.66 s ± 78.2 ms # with "skip_cjac"
2.78 s ± 15.5 ms # without "skip_cjac"
2.77 s ± 64.9 ms # master

So there is some slowdown; I am not sure how to weigh it against the added code complexity required to skip the contraction.

@dwierichs dwierichs requested a review from rmoyard April 8, 2024 13:04
@rmoyard (Contributor) left a comment

Thanks @dwierichs, could you remove the commented parts of the code? On the other hand, it seems fine to me to trade some performance for more correct results; could you ask someone from @PennyLaneAI/performance to approve it as well?

@dwierichs dwierichs requested a review from a team April 8, 2024 14:32
@vincentmr (Contributor) left a comment

Looks good to me. I have a couple of questions and suggestions, but I'm ready to approve. Kudos for commenting on your own PR as you did; it really helps the review process.

@rmoyard rmoyard self-requested a review April 8, 2024 19:45
@rmoyard (Contributor) left a comment

💯

@dwierichs dwierichs enabled auto-merge (squash) April 9, 2024 07:56
@dwierichs dwierichs merged commit b0d7327 into master Apr 25, 2024
38 checks passed
@dwierichs dwierichs deleted the fix-grad_transform-qnode_wrapper branch April 25, 2024 17:47
@dwierichs dwierichs mentioned this pull request Apr 29, 2024
dwierichs added a commit that referenced this pull request Apr 30, 2024
- [x] Fix last open bug or open issue


**Context:**
Docs get out of sync over time.
In particular, the output shape of gradient transforms applied to QNodes
was changed in #4945 but not updated in the docs.

**Description of the Change:**
Update examples in docs of `qml.gradients` module.

Also updates the `qml.kernels` docs, with a few very small changes.

**Benefits:**

**Possible Drawbacks:**

**Related GitHub Issues:**
Development

Successfully merging this pull request may close the following issue:

[BUG] output shape and type of gradient_transform applied to QNodes depend on classical processing