Smaller fixes to the backprop operation & lowering #224

Merged: dime10 merged 5 commits into enzyme_integration from backprop on Jul 31, 2023

Conversation
@dime10 (Contributor) commented on Jul 28, 2023

Some improvements & fixes:

  • a bug fix in the frontend to allow grad on qjit(f) where f is not a QNode (see the sketch below)
  • some refactoring of the backprop op
  • new verifiers for the backprop op
  • a switch to allocating primal output buffers during bufferization rather than during the backprop -> llvm lowering
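
As a concrete illustration of the first bullet, here is a minimal sketch of the newly supported pattern, assuming a recent Catalyst install; the function names (`square`, `workflow`) are illustrative, not taken from the PR:

```python
from catalyst import grad, qjit

@qjit
def square(x: float):
    # A bare Python function wrapped in qjit -- not a QNode or Function object.
    return x**2

@qjit
def workflow(x: float):
    # Before the fix, grad assumed the qjit target wrapped a QNode or Function;
    # now a classical qjit-compiled function can be differentiated as well.
    return grad(square)(x)

print(workflow(3.0))  # expected: 6.0
```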

Commits:

Fix bug with grad on classical qjit function

If the grad target is a qjit object, the current implementation assumes that the wrapped function is already a "JAX function", that is, a `QNode` or `Function` object. This is not the case, however, when a bare Python function is wrapped in qjit.

Refactor `compDiffArgIndices` and add `computeDiffArgs` as a convenience method to filter the callee argument list (sketched below).
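
The helper's behavior, rendered as a short Python sketch (hypothetical names; the actual method lives in the C++ lowering code):

```python
def compute_diff_args(callee_args, diff_arg_indices):
    # Keep only the callee arguments that are being differentiated.
    return [callee_args[i] for i in diff_arg_indices]
```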
Tighten backprop type declarations & add verifier

For example, the op didn't check whether the declared number of gradient results matches the number of differentiable arguments, which had let a bug slip into one of the bufferization tests.

The `quantum_jacobian` operand was renamed to `cotangents`, since that name more generally describes the argument's purpose, and as the op is used now it only receives a slice of the quantum Jacobian anyway.
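
The `cotangents` name follows standard vector-Jacobian-product terminology. A JAX analogy may help (illustrative only; the backprop op itself is an MLIR operation, not JAX code):

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x

# vjp returns the primal output plus a function that pulls back cotangents.
primal_out, vjp_fn = jax.vjp(f, jnp.array([0.5, 1.5]))
cotangent = jnp.ones_like(primal_out)  # the "callee result shadow"
(grad_x,) = vjp_fn(cotangent)          # gradient with respect to the input
```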
Allocate primal out buffers during bufferization

Reorganizes the backprop op arguments into:
- primal `args`
- `diffArgShadows`, the gradient result buffers
- `calleeResults`, the primal output buffers
- `cotangents`, the callee result shadows

A toy sketch of how these groups interact follows.
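
A hypothetical reference semantics for the four operand groups, sketched in NumPy for a callee computing y = W @ x (the names come from the commit message; this is not the actual op API):

```python
import numpy as np

def backprop(args, diff_arg_shadows, callee_results, cotangents, W):
    (x,) = args
    callee_results[0][...] = W @ x                    # fill primal output buffer
    diff_arg_shadows[0][...] += W.T @ cotangents[0]   # accumulate gradient

x = np.array([1.0, 2.0])
W = np.array([[1.0, 0.0], [0.0, 2.0]])
dx = np.zeros_like(x)   # gradient result buffer (shadow of x)
y = np.empty(2)         # primal output buffer
backprop([x], [dx], [y], [np.ones(2)], W)
print(y, dx)            # y = W @ x, dx = W.T @ ones
```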
@dime10 requested a review from pengmai on July 28, 2023
@github-actions bot commented:
Hello. You may have forgotten to update the changelog!
Please edit doc/changelog.md on your branch with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

codecov bot commented on Jul 28, 2023

Codecov Report

Merging #224 (1f033c3) into enzyme_integration (467526f) will not change coverage.
The diff coverage is 100.00%.

@@                 Coverage Diff                 @@
##           enzyme_integration     #224   +/-   ##
===================================================
  Coverage               99.12%   99.12%           
===================================================
  Files                      37       37           
  Lines                    6624     6624           
  Branches                  345      345           
===================================================
  Hits                     6566     6566           
  Misses                     33       33           
  Partials                   25       25           
Files Changed                               Coverage Δ
frontend/catalyst/pennylane_extensions.py   97.73% <100.00%> (ø)

@pengmai (Contributor) left a comment:
Looks great!

Review thread on mlir/lib/Gradient/Transforms/BufferizationPatterns.cpp (outdated, resolved)
Update mlir/lib/Gradient/Transforms/BufferizationPatterns.cpp (Co-authored-by: Jacob Mai Peng <jacobmpeng@gmail.com>)
@rmoyard (Contributor) left a comment:
Thanks, great PR 👍

Review threads on frontend/test/pytest/test_gradient.py and mlir/include/Gradient/IR/GradientOps.td (resolved)
@dime10 merged commit 822ce6d into enzyme_integration on Jul 31, 2023 (18 checks passed).
@dime10 deleted the backprop branch on July 31, 2023 at 15:51.
rmoyard added a commit that referenced this pull request on Aug 2, 2023:
* Draft integration

* Shape

* optional

* Add Enzyme

* Draft

* more

* Checkout 0.0.69

* Update

* Right versions

* Compile works

* Wheels and tests

* Typo

* Black

* Change hash

* Add dep version

* Typo

* Remove cache llvm

* Update lit

* Remove bin

* Remove

* Add ls

* Change key

* Enzyme path

* test

* Change

* Add tests

* Black

* Typo

* Compiling

* Revive param count

* Revive param count

* Update

* Remove comment

* Compute backprop res type

* Readd tests

* Update buff

* Change tests

* Changes

* Working insert slice

* Add diff arg indices

* Add copyright

* Update

* Fix tests buff and hybrid

* Test param shift

* Fix classical and adjoint

* Fix parameter shift tests

* Fix tests

* Pylint disable

* Add globals

* Typos

* Update tests openqasm

* Comment from review

* More

* Correct enzyme path

* Add free

* Typos

* Explicitly call preserve-nvvm with Enzyme

* Add loose types

* Remove tests and revive pcount in argmap

* Update to 0.0.74

* Update Enzyme

* Update format

* Fix wheels

* Attempt to fix failing upload to codecov by bumping Enzyme to working build

* [MLIR] Integrated BackpropOp lowering with Enzyme (#193)

* Rework Enzyme calling convention to unpack pointers to fix incorrect zero gradients

* Exclude Enzyme from autoformatting

* Update MemSetOp builder API

* Switch metadata approach from enzyme_function_like to enzyme_allocation_like to avoid double free error with generated derivative code

* Remove Enzyme wrapper function, switch argmap to destination-passing style

* Remove parameter count from argmap parameters

* Fix bug with Enzyme calling convention around inactive memrefs

* Update backprop lowering test

* Add index dialect to dependent dialects of gradient_to_llvm

* Update pinned version of Enzyme to one that builds with LLVM 17, only use 'auto' when clear from context

* Get generic alloc/free names from MLIR utilities, only register them when use-generic-functions option is set

* Update changelog

* Remove enzyme-loose-types flag

Co-authored-by: Romain Moyard <rmoyard@gmail.com>

* Add tests for PreEnzymeOpt pipeline, add explicit enzyme_const annotation for floats

* Remove index -> llvm and memref -> llvm patterns from gradient -> llvm, run those passes separately afterwards

* Autoformatting

---------

Co-authored-by: Romain Moyard <rmoyard@gmail.com>

* Enzyme integration + Deallocation (#201)

* Re-enable buffer deallocation passes.

* Register _mlir_memref_to_llvm_free as "like" free.

Using the allocation registration mechanism for Enzyme fails to automatically register the free-like function as free, likely due to a possible difference in arity.

* Explicitly name post enzyme file as ".postenzyme.ll"

* Comments.

* Readd tests

* Fix adjoint

* Fix parameter shift

* Update classical jac

* format

* Smaller fixes to the backprop operation & lowering (#224)

* Fix bug with grad on classical qjit function

If the grad target is a qjit object, the current implementation assumes that the wrapped function is already a "JAX function", that is, a `QNode` or `Function` object. This is not the case, however, when a bare Python function is wrapped in qjit.

* Refactor `compDiffArgIndices` and add...

... `computeDiffArgs` as a convenience method to filter the callee
argument list.

* Tighten backprop type declarations & add verifier

For example, the op didn't check whether the declared number of gradient results matches the number of differentiable arguments, which had let a bug slip into one of the bufferization tests.

The `quantum_jacobian` operand was renamed to `cotangents`, since that name more generally describes the argument's purpose, and as the op is used now it only receives a slice of the quantum Jacobian anyway.

* Allocate primal out buffers during bufferization

Reorganizes backprop op arguments into:
- primal `args`
- `diffArgShadows` or gradient result buffers
- `calleeResults` or primal output buffers
- `cotangents` or callee result shadows

* Update mlir/lib/Gradient/Transforms/BufferizationPatterns.cpp

Co-authored-by: Jacob Mai Peng <jacobmpeng@gmail.com>

---------

Co-authored-by: Jacob Mai Peng <jacobmpeng@gmail.com>

* Remove too many frees

* Format

* Right indentation

* Change function like

* Update from review

* [MLIR] Add support for differentiating scalars with the backprop op (#225)

* Add docstring explaining computeMemRefSizeInBytes

* Update changelog

* Support scalars in BackpropOp

* WIP: Adding back removed tests

* Add float type returns to backprop op

* Bug fixes with scalar differentiation support

* Add scalar support to verifier, revert changes to test

* Add back more tests

* Try to clarify role of scalarIndices

* Add verifier for shaped differentiable types, fix bug from merge commit

* Update comment

Co-authored-by: David Ittah <dime10@users.noreply.github.com>

* Add back more deleted tests, comment explaining scalar unpacking of the enzyme_autodiff call

* Clarify terminology around quantum gradients/cotangents

* Update shadow vs data in terminology in comments

---------

Co-authored-by: David Ittah <dime10@users.noreply.github.com>

* Update doc/changelog.md

Co-authored-by: David Ittah <dime10@users.noreply.github.com>

* More tests

* Add tests

* Changelog.

---------

Co-authored-by: Jacob Mai Peng <jacobmpeng@gmail.com>
Co-authored-by: erick-xanadu <110487834+erick-xanadu@users.noreply.github.com>
Co-authored-by: David Ittah <dime10@users.noreply.github.com>
Co-authored-by: Erick <erick.ochoalopez@xanadu.ai>