
[MLIR] Enzyme-driven gradients #244

Merged: 71 commits into main on Aug 31, 2023

Conversation

@pengmai (Contributor) commented on Aug 15, 2023

Context: The current gradient architecture does not support exact gradient computation for hybrid programs with classical postprocessing or for programs with multiple QNodes.

Description of the Change: A reworking of Catalyst's gradient architecture so that it is driven by Enzyme. Quantum functions are still split into their purely classical preprocessing and quantum parts, but differentiation of the end-to-end program is now performed by Enzyme, with the quantum gradients registered as custom gradients.

Overview of changes

  • Implement a splitting transformation that updates the hybrid computation graph such that QNodes are split into two functions: a .preprocess function that contains the classical preprocessing (just like the argmap function) and a .quantum function that contains the actual quantum computation. The .preprocess function ends in a call to the .quantum function, meaning it can replace QNodes and leave the hybrid graph connected (a conceptual sketch follows this list).
    • This assumes that, like PennyLane, a QNode must end in one or more measurements and cannot contain postprocessing.
  • GradOps are lowered to one or more BackpropOps over the entire hybrid computation (one per result entry).
  • Custom gradients are registered with Enzyme for the split-out .quantum functions.
  • Modify EinsumLinalgGeneric to support memref arguments (in addition to tensor arguments) as well as dynamic shapes. This is used in the custom gradients of quantum functions.
  • The differential lowerings for adjoint and parameter shift differentiation now operate on individual QNodes instead of GradOps.
    • There are also attributes used to connect the quantum gradients (.qgrad, .adjoint) to the custom gradients across both the lower-gradients and convert-gradient-to-llvm passes.
  • The frontend verification is modified to allow non-QNode callees of catalyst.grad while preserving the same checks as before (e.g. the return value of a QNode with diff_method="adjoint" must be an expval).
  • A bugfix in the runtime to prevent an error caused by the circuit being run twice with the adjoint method.
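
As a rough orientation for the splitting transformation above, here is a minimal Python sketch. The function names, signatures, and the classical stand-in for the quantum evaluation are illustrative assumptions only; they are not the actual MLIR functions generated by the pass.

```python
import numpy as np

# Stand-in for evaluating the quantum part of a QNode; a real program would
# execute the circuit on a device. Used here only to keep the sketch runnable.
def quantum_expval(theta):
    return np.cos(theta)

# Original QNode, conceptually: classical preprocessing of the gate
# parameters followed by the quantum computation, ending in a measurement.
def circuit(x):
    angle = 2.0 * np.sin(x)          # classical preprocessing
    return quantum_expval(angle)     # quantum part ending in a measurement

# After the splitting transformation (illustrative names):

def circuit_quantum(gate_params):
    # Contains only the quantum computation. Enzyme does not differentiate
    # through this function; a registered custom (quantum) gradient is used
    # for it instead, e.g. adjoint or parameter-shift.
    return quantum_expval(gate_params)

def circuit_preprocess(x):
    # Contains the classical preprocessing (like the previous argmap function)
    # and ends in a call to the quantum function, so it can replace the QNode
    # and keep the hybrid computation graph connected.
    gate_params = 2.0 * np.sin(x)
    return circuit_quantum(gate_params)

# The split pair computes the same value as the original QNode.
assert circuit(0.3) == circuit_preprocess(0.3)
```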

The important implementation quirks are documented on Notion.

Benefits: This new architecture supports differentiation of circuits with classical postprocessing, hybrid programs with multiple QNodes, and purely classical programs.
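
To make the benefits concrete, the following is a hedged usage sketch (not taken from this PR): a hybrid function that calls two QNodes and applies classical postprocessing, differentiated end to end. The device name, wire count, and exact grad invocation follow the usual Catalyst/PennyLane conventions and are assumptions for illustration.

```python
import pennylane as qml
from catalyst import qjit, grad

dev = qml.device("lightning.qubit", wires=1)

@qml.qnode(dev)
def circuit_a(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

@qml.qnode(dev)
def circuit_b(x):
    qml.RY(x ** 2, wires=0)
    return qml.expval(qml.PauliZ(0))

@qjit
def workflow(x):
    # Hybrid function: two QNodes plus classical postprocessing.
    # With the Enzyme-driven architecture this is differentiable end to end.
    def hybrid(y):
        return circuit_a(y) ** 2 + 3.0 * circuit_b(y)

    return grad(hybrid)(x)

print(workflow(0.4))
```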

[sc-41364]
[sc-41375]
[sc-42856]

- WIP conversion to DPS of all generated functions
- Change tape type of custom grad to null ptr vs struct (was causing Enzyme to crash)
- Read values from param vec within the withparams modified QNode (was causing segfaults from dereferencing poison vals)
@rmoyard (Contributor) left a comment

Amazing job 🥇 Happy to approve the PR but I would like @dime10 to review it as well before merging

@josh146 (Member) commented on Aug 24, 2023

This is great work @pengmai! Super excited to have it in 🎉

A quick question: will we also need to update https://docs.pennylane.ai/projects/catalyst/en/latest/dev/quick_start.html#calculating-quantum-gradients?

@pengmai (Contributor, Author) commented on Aug 24, 2023

Thanks Josh!

A quick question: will we also need to update https://docs.pennylane.ai/projects/catalyst/en/latest/dev/quick_start.html#calculating-quantum-gradients?

This PR should be totally backwards compatible and thus the quick start shouldn't require any changes to work, unless you wanted to highlight the new features.

@dime10 (Collaborator) left a comment

This is amazing work 💯

A few comments and questions from my side, but nothing major!

Because this PR is so beefy though, I think it might be really helpful if the PR description contained a bullet point list of what was changed in existing code, what was added, and what some of the quirks or compromises of the implementation are. Doesn't need a lot of explanation, but just so we have a quick overview of everything that was undertaken.

Resolved review threads: doc/changelog.md, frontend/catalyst/pennylane_extensions.py, frontend/test/pytest/test_gradient_postprocessing.py, mlir/lib/Gradient/Transforms/LoweringPatterns.cpp
pengmai and others added 6 commits August 30, 2023 10:41
- Also update comment around BackpropOp copying cotangents
Co-authored-by: David Ittah <dime10@users.noreply.github.com>
…neAI/catalyst into jmp/enzyme-gradient-architecture
@dime10 added this to the v0.3 milestone on Aug 30, 2023
pengmai and others added 2 commits August 30, 2023 14:38
Co-authored-by: David Ittah <dime10@users.noreply.github.com>
@dime10 (Collaborator) commented on Aug 30, 2023

Thanks for the overview in the description, very helpful 💯

@dime10 (Collaborator) left a comment

🧬

@pengmai merged commit 6b319bd into main on Aug 31, 2023 (18 checks passed)
@pengmai deleted the jmp/enzyme-gradient-architecture branch on August 31, 2023 at 14:56
dime10 added a commit that referenced this pull request Aug 31, 2023
**Context:** This PR implements some documentation changes that follow up on #244, particularly [this comment](#244 (comment)).

**Description of the Change:** Update the docstrings for `grad` and `jacobian`, and rename `"defer"` to `"auto"`.

---------

Co-authored-by: David Ittah <dime10@users.noreply.github.com>