
Batch run #2069

Merged: 48 commits into v0.22.0-rc0 on Mar 8, 2022

Conversation

@jackaraz (Contributor) commented Dec 27, 2021

Context:
In classical ML applications, training examples are run in batches, and the mean or sum of an objective function over the batch is computed before applying gradient descent. This function allows the collective execution of a batched input tape, with the same weights used for the entire batch.

Description of the Change:
Additional transformation function.

Benefits:
A batch of training or validation examples can be run at the same time. This will also allow collective job submissions to IBM Q, instead of submitting one circuit tape at a time.

Possible Drawbacks:
The code assumes that the arguments are ordered with the batched inputs first, followed by the non-batched inputs. If the user supplies any other order, this will lead to wrong results.

Related GitHub Issues:
Improvement request mentioned in issue #2037

This function creates multiple circuit executions for batched input examples and executes all batch inputs with the same trainable variables. The main difference between the version proposed in the issue and this commit is the input `argnum`: it indicates the location of the given input and hence gives the ability to work across platforms.
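The mechanism described above can be sketched framework-agnostically: a batched, non-trainable input is split into per-example executions, each reusing the same trainable weights. The helper and circuit names below are illustrative stand-ins, not the PR's actual API:

```python
import numpy as np

def expand_batch(batch, weights, circuit):
    """Run `circuit` once per example in `batch`, reusing the same
    trainable `weights` for every execution, and stack the results."""
    return np.stack([circuit(x, weights) for x in batch])

# A stand-in "circuit": any function of (input, weights) works here.
def circuit(x, weights):
    return np.cos(x @ weights)

batch = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])  # shape (n_batch, n_features)
weights = np.array([0.7, 0.9])                          # shared across the whole batch

out = expand_batch(batch, weights, circuit)
print(out.shape)  # (3,): one result per batched example
```

A real transform would additionally have to know *which* argument carries the batch dimension, which is what the `argnum` input above provides.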
@josh146 (Member) commented Jan 7, 2022

Hi @jackaraz, thanks for making this PR! I just wanted to check in to see if you had any questions. If not, would this be ready for a code review?

@jackaraz (Contributor, Author) commented Jan 7, 2022

Hi @josh146, not at the moment. I guess my code does not meet all the code-formatting requirements you have; I can reformat it with black if you like. Also, after seeing @antalszava's proposal here, I was thinking it might be possible to update the TensorFlow layer a bit further. However, this will require creating it as a Model rather than a layer, which might create usability issues for a regular user.

@codecov bot commented Jan 7, 2022

Codecov Report

❗ No coverage uploaded for pull request base (v0.22.0-rc0@15c779f). Click here to learn what that means.
The diff coverage is n/a.


@@              Coverage Diff               @@
##             v0.22.0-rc0    #2069   +/-   ##
==============================================
  Coverage               ?   99.32%           
==============================================
  Files                  ?      242           
  Lines                  ?    19138           
  Branches               ?        0           
==============================================
  Hits                   ?    19008           
  Misses                 ?      130           
  Partials               ?        0           


@josh146 (Member) commented Jan 11, 2022

I can modify it with black if you like.

Yes, that would be great 🙂

Also, after seeing @antalszava's proposal here, I was thinking it might be possible to update the TensorFlow layer a bit further. However, this will require creating it as a Model rather than a layer, which might create usability issues for a regular user.

Feel free to discuss any thoughts you have regarding this in more detail, either here or in the issue! As you are currently working on a model that requires this feature, your feedback is invaluable.

@jackaraz (Contributor, Author) commented Jan 26, 2022

Hi @josh146

I can modify it with black if you like.

Yes, that would be great 🙂

This is done, but I still see some linting errors; I believe it doesn't like how I present argnum in the Keras layer.

Also, after seeing @antalszava's proposal here, I was thinking it might be possible to update the TensorFlow layer a bit further. However, this will require creating it as a Model rather than a layer, which might create usability issues for a regular user.

Feel free to discuss any thoughts you have regarding this in more detail, either here or in the issue! As you are currently working on a model that requires this feature, your feedback is invaluable.

I do have a few more TF-based implementations using PennyLane, such as quantum natural gradient (as far as I know, this does not exist with the TF backend at the moment; my implementation is based on PennyLane's original one) and parallelization on GPU/CPU. However, my implementation is not very generic: I'm basically writing a Keras model with custom training inside it, and different implementations require different models, i.e. one model for pure quantum circuit-based networks, one for hybrids where the classical portion can train with traditional SGD while the quantum portion trains with QNGD within the same training sequence, etc. As I said, these are case-dependent implementations. We will release the paper soon, and I can show you the implementations, but I'm not sure if they would be relevant for PennyLane.

@josh146 (Member) commented Jan 28, 2022

Thanks @jackaraz!

I do have a few more implementations based on TF using pennylane, such as quantum natural gradient (as far as I know, this does not exist with TF backend at the moment, my implementation is based on pennylane's original one)

This actually would be very interesting to see, as we have long been wanting to extend the QNG optimizer to support other interfaces. However, we still have various design questions, so your implementation (even if not eventually merged into the codebase) could still be helpful in resolving them!

@vbelis commented Feb 22, 2022

Hello everyone. I was preparing to open a new issue regarding the batching of input data. However, it seems that @jackaraz and @josh146 have already discussed this in the forum before starting this PR, and I would like to check whether we have the same goal in mind.

The implementation of transform.batch_params(source) adds a batch dimension to each argument that is trainable by default, or to all QNode arguments if all_operations=True is enabled. This is, in a sense, the "opposite" of what is typically required in a neural-network training workflow, where it is the inputs (encoded data features) that have a batch dimension, whereas the weights don't. Hence, the weight tensors should be kept constant throughout the forward pass, whereas the input x should have shape (n_batch, n_features).

My understanding is that the goal of this PR is to implement the above functionality. I am wondering whether your implementation, @jackaraz, is TensorFlow specific. Have you tested whether it can be used without an interface, i.e. vanilla PennyLane, or with PyTorch (qnn.TorchLayer)?
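The shape convention being discussed can be made concrete with a minimal NumPy sketch (no quantum interface involved), contrasting the two behaviors:

```python
import numpy as np

rng = np.random.default_rng(0)
n_batch, n_features = 4, 2

# Desired behavior (this PR): inputs are batched, weights are shared.
x = rng.random((n_batch, n_features))   # (n_batch, n_features)
w = rng.random(n_features)              # no batch dimension
y = np.tanh(x @ w)                      # (n_batch,): same weights for every example

# batch_params-style behavior (the "opposite"): the weights carry the batch
# dimension instead, i.e. one weight set per execution for a fixed input.
x_fixed = rng.random(n_features)
w_batched = rng.random((n_batch, n_features))
y_params = np.tanh(w_batched @ x_fixed)  # (n_batch,): one result per weight set

print(y.shape, y_params.shape)
```

Both produce one output per batch entry, but only the first keeps the weight tensor constant across the forward pass, which is what a standard training loop expects.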

@jackaraz (Contributor, Author)

Hello everyone. I was preparing to open a new issue regarding the batching of input data. However, it seems that @jackaraz and @josh146 have already discussed this in the forum before starting this PR, and I would like to check whether we have the same goal in mind.

The implementation of transform.batch_params(source) adds a batch dimension to each argument that is trainable by default, or to all QNode arguments if all_operations=True is enabled. This is, in a sense, the "opposite" of what is typically required in a neural-network training workflow, where it is the inputs (encoded data features) that have a batch dimension, whereas the weights don't. Hence, the weight tensors should be kept constant throughout the forward pass, whereas the input x should have shape (n_batch, n_features).

My understanding is that the goal of this PR is to implement the above functionality. I am wondering whether your implementation, @jackaraz, is TensorFlow specific. Have you tested whether it can be used without an interface, i.e. vanilla PennyLane, or with PyTorch (qnn.TorchLayer)?

Hi @vbelis, exactly: the goal is to batch as in the ML context. The function that I implemented should be generic. I tested it with the PennyLane NumPy interface and TF, but not PyTorch, since I don't use PyTorch. However, since it uses the PennyLane backend, the separation between trainable and non-trainable parameters should already be available to the batch_transform class, hence the function should work with PyTorch as well.

@vbelis commented Feb 22, 2022

Hi @jackaraz, thanks for the fast reply. OK, I will pull and do some tests locally to check the behavior, since I am using PyTorch. Has the PR not been concluded due to the design mismatch that the CI flagged, or are you also trying to enhance it further before pushing?

@jackaraz (Contributor, Author)

Hi @vbelis, I wasn't planning to add anything else. I'm not really sure why the tests are failing; any feedback to help finalize this PR would be much appreciated.

@vbelis commented Feb 22, 2022

I will try to take a look soon. However, it would be much more efficient if some of the main developers, e.g. @josh146, could provide some tips regarding the failed checks (Codecov and Sphinx). In the failed CodeFactor check, they seem to have an upper limit of 5 arguments for the constructor. Do you believe there is a good way to remove that argument, or to include it in the keyword arguments, to stay consistent with PennyLane's code design? It might also be good to merge the master branch into your branch to get the latest commits (there is a warning about this as well).

Regarding the failed checks by codecov/patch of the form Added line #L... was not covered by tests, I don't have any suggestion off the top of my head, sorry...

@jackaraz (Contributor, Author) left a comment

Regarding the failed CodeFactor check.

pennylane/qnn/keras.py
@josh146 (Member) commented Feb 22, 2022

Hi @vbelis, welcome!

Regarding the codefactor warning,

Too many arguments (6/5) (too-many-arguments)

I think it is fine to disable this one. That can be done by adding a comment

# pylint: disable=too-many-arguments

directly after the function signature 🙂
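For placement, a sketch with a hypothetical constructor (the class and argument names here are illustrative, mirroring the six-argument warning, not the actual KerasLayer signature):

```python
# Hypothetical constructor with too many positional arguments; only the
# placement of the pylint pragma matters here.
class ExampleLayer:
    def __init__(self, qnode, weight_shapes, output_dim, weight_specs, argnum):
        # pylint: disable=too-many-arguments
        self.qnode = qnode
        self.weight_shapes = weight_shapes
        self.output_dim = output_dim
        self.weight_specs = weight_specs
        self.argnum = argnum

layer = ExampleLayer(None, {"w": (2,)}, 1, {}, [0])
print(layer.argnum)  # [0]
```

The pragma silences the too-many-arguments check for that function only, rather than disabling it project-wide.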

@josh146 (Member) commented Feb 22, 2022

Regarding the failed checks by codecov/patch of the form Added line #L... was not covered by tests, I don't have any suggestion off the top of my head, sorry...

This is a CI check that verifies that all lines of code added in this PR are tested :) In this case, it appears that this particular line is not being called by any of the unit tests.

OK I will pull and do some tests locally to check the behavior, since I am using PyTorch.

This is much appreciated @vbelis! Let me know if you have any questions :)

@jackaraz jackaraz requested a review from josh146 March 3, 2022 12:44
@jackaraz (Contributor, Author) left a comment

All requests have been implemented.

@josh146 (Member) left a comment

Thanks @jackaraz for taking into account all suggestions! This is a really nice addition, will be great to have this feature in 🎉

I've left some minor comments mostly regarding the documentation, but happy to now approve this PR 🙂

doc/releases/changelog-dev.md
pennylane/qnn/keras.py
pennylane/transforms/batch_input.py
tests/transforms/test_batch_inputs.py
@jackaraz (Contributor, Author) commented Mar 7, 2022

Thanks @josh146; all suggestions have been implemented.

@josh146 josh146 changed the base branch from master to v0.22.0-rc0 March 8, 2022 06:55
@josh146 josh146 merged commit 628493d into PennyLaneAI:v0.22.0-rc0 Mar 8, 2022
antalszava added a commit that referenced this pull request Mar 11, 2022
* [Bug] Exclude Snapshot from adjoint backwards pass (#2289)

* Exclude Snapshot from adjoint backwards pass

* Add snapshots test for diff_methods

* Changelog

* Trigger rebuild

Co-authored-by: antalszava <antalszava@gmail.com>

* Work on consistency of `Operator`s (#2287)

* some inconsistencies

* swap basis

* undo duplicated wire in test

* changelog

* revert snapshot wires change

* unused import

* merge rc

* revert accidental changelog merge

Co-authored-by: Josh Izaac <josh146@gmail.com>

* Batch run (#2069)

* batching ability for non-trainable inputs only following issue #2037

This function creates multiple circuit executions for batched input examples
and executes all batch inputs with the same trainable variables. The main
difference between the proposed version in the issue and this commit is the
input `argnum` this indicates the location of the given input hence gives the
ability to work across platforms.

* adaptation for batch execution

* improvements according to PR rules

* minor update according to PR errors

* modify according to codefactor-io

* reformatted code style

* adjust line lenght for linting

* update linting

* disable linting for too many arguments

* add testing for batch input in keras

* format test_keras.py

* add tests for remaining functions

* adapt the defaults

* update docstring according to @josh146 's suggestions

* remove keras sterilazation

* add batch_input to the docstring

* docstring update for readability: pennylane/transforms/batch_input.py

Co-authored-by: Josh Izaac <josh146@gmail.com>

* minor fix in documentation

* change assertion error to valueerror

* test valueerror

* modify the definition of argnum

* change argnum -> batch_idx

* update changelog-dev.md

* apply @josh146 's suggestions

* linting

* tests

* more

Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>

* Circuit cutting: Tidy up documentation (#2279)

* Redo imports

* Update docs

* Update wording

* Fix ID

* Add to docs

* Add to docs

* Fix

* Update docstrings

* Use nx.MultiDiGraph

* Update fragment_graph

* Update graph_to_tape

* Update remap_tape_wires

* Rename to expand_fragment_tape

* Update expand_fragment_tape

* Update CutStrategy

* Update qcut_processing_fn

* Remove note

* Update cut_circuit

* Work on docs

* Add to docs

* Update pennylane/transforms/qcut.py

* Add to changelog

* Move device definition

* Mention WireCut

* Move details

* QCut module

* Fix image location

* Fix init

* Apply suggestions from code review

Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>

* Add link to communication graph

* Reword

* Move around

* fix

* fix

Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>

* Circuit cutting: update changelog (#2290)

* Redo imports

* Update docs

* Update wording

* Fix ID

* Add to docs

* Add to docs

* Fix

* Update docstrings

* Use nx.MultiDiGraph

* Update fragment_graph

* Update graph_to_tape

* Update remap_tape_wires

* Rename to expand_fragment_tape

* Update expand_fragment_tape

* Update CutStrategy

* Update qcut_processing_fn

* Remove note

* Update cut_circuit

* Work on docs

* Add to docs

* Update pennylane/transforms/qcut.py

* Add to changelog

* Move device definition

* Mention WireCut

* Move details

* QCut module

* Fix image location

* Fix init

* Update changelog

* Link to docs page

* Update wording

* Apply suggestions from code review

Co-authored-by: Josh Izaac <josh146@gmail.com>

* Move

* Update doc/releases/changelog-dev.md

Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>

* Remove

* Update

Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>

* Minor gradient fixes (#2299)

* Produce consisten output shapes

In the absence of trainable params, some gradient transforms did not
produce an empty tuple yet like the rest of our functions.

* Minor formatting changes in param_shift_hessian

* Fix param_shift_hessian for all zero diff_methods

* Fix missing requires_grad & catch expected warning

* Changelog

Co-authored-by: Jay Soni <jbsoni@uwaterloo.ca>

Co-authored-by: David Ittah <dime10@users.noreply.github.com>
Co-authored-by: David Wierichs <davidwierichs@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Jack Y. Araz <jackaraz@gmail.com>
Co-authored-by: Tom Bromley <49409390+trbromley@users.noreply.github.com>
Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>
Co-authored-by: Jay Soni <jbsoni@uwaterloo.ca>
josh146 added a commit that referenced this pull request Mar 15, 2022
antalszava added a commit that referenced this pull request Mar 16, 2022

* Update tests/test_debugging.py

* Update tests/ops/test_snapshot.py

* Update tests/ops/test_snapshot.py

* changelog

* trigger build

Co-authored-by: Antal Szava <antalszava@gmail.com>

* ControlledQubitUnitary should raise `DecompositionUndefinedError` (#2320)

* DecompositionUndefinedError

* changelog

* trigger check

* `v0.22.0` release notes (#2303)

* version

* log ref

* rename

* sections; emojis

* format

* improvements order

* format

* addition; collabs; v0.21.0 collab alphabet fix

* reorder

* collab; deprecation item

* more PRs; collab list extended

* update

* sections

* op section break up

* correct matrix example

* suggestions

* suggestions

* a few more

* fix typo in code

* update

* no tf import

* update

Co-authored-by: Josh Izaac <josh146@gmail.com>

* Pin Lightning `>=0.22` (#2324)

* pin lightning >=0.22

* Update tests.yml

Co-authored-by: Josh Izaac <josh146@gmail.com>

* bump the version to 0.22.1

* v0.22.1 release notes

* Fix queuing unexpected operators with qml.measure; changelog

* notes

Co-authored-by: David Ittah <dime10@users.noreply.github.com>
Co-authored-by: David Wierichs <davidwierichs@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Jack Y. Araz <jackaraz@gmail.com>
Co-authored-by: Tom Bromley <49409390+trbromley@users.noreply.github.com>
Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>
Co-authored-by: Jay Soni <jbsoni@uwaterloo.ca>
Co-authored-by: Christina Lee <christina@xanadu.ai>
Co-authored-by: Maria Schuld <mariaschuld@gmail.com>
Co-authored-by: Guillermo Alonso-Linaje <65235481+KetpuntoG@users.noreply.github.com>
Jaybsoni added a commit that referenced this pull request Mar 18, 2022
…product ops (#2276)

* sorted wires to match eig vals when computing expectation value!

* Added tests and change log entry

* lint

* remove redundant lines of code

* small typo

* [Bug] Exclude Snapshot from adjoint backwards pass (#2289)

* Exclude Snapshot from adjoint backwards pass

* Add snapshots test for diff_methods

* Changelog

* Trigger rebuild

Co-authored-by: antalszava <antalszava@gmail.com>

* Work on consistency of `Operator`s (#2287)

* some inconsistencies

* swap basis

* undo duplicated wire in test

* changelog

* revert snapshot wires change

* unused import

* merge rc

* revert accidental changelog merge

Co-authored-by: Josh Izaac <josh146@gmail.com>

* Batch run (#2069)

* batching ability for non-trainable inputs only following issue #2037

This function creates multiple circuit executions for batched input examples
and executes all batch inputs with the same trainable variables. The main
difference between the version proposed in the issue and this commit is the
input `argnum`, which indicates the location of the given input and hence
gives the transform the ability to work across platforms.
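
The batching scheme this commit describes can be sketched in plain NumPy (a conceptual illustration under stated assumptions; the toy `circuit` and the `batch_input` helper here are hypothetical stand-ins, not PennyLane's actual API):

```python
import numpy as np

def batch_input(circuit, batched_inputs, weights):
    """Sketch of the batch-input idea: run one circuit execution
    per batched (non-trainable) input example, reusing the same
    trainable weights for every execution."""
    return np.stack([circuit(x, weights) for x in batched_inputs])

def circuit(x, w):
    # toy deterministic "circuit": a function of input and weights
    return np.cos(x @ w)

batch = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])  # 3 examples
weights = np.array([0.7, 0.8])                          # shared weights
out = batch_input(circuit, batch, weights)
print(out.shape)  # (3,)
```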

* adaptation for batch execution

* improvements according to PR rules

* minor update according to PR errors

* modify according to codefactor-io

* reformatted code style

* adjust line length for linting

* update linting

* disable linting for too many arguments

* add testing for batch input in keras

* format test_keras.py

* add tests for remaining functions

* adapt the defaults

* update docstring according to @josh146 's suggestions

* remove keras serialization

* add batch_input to the docstring

* docstring update for readability: pennylane/transforms/batch_input.py

Co-authored-by: Josh Izaac <josh146@gmail.com>

* minor fix in documentation

* change assertion error to valueerror

* test valueerror

* modify the definition of argnum

* change argnum -> batch_idx

* update changelog-dev.md

* apply @josh146 's suggestions

* linting

* tests

* more

Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>

* removed ordering in

* re-push to run tests

* push

* debugging

* Circuit cutting: Tidy up documentation (#2279)

* Redo imports

* Update docs

* Update wording

* Fix ID

* Add to docs

* Add to docs

* Fix

* Update docstrings

* Use nx.MultiDiGraph

* Update fragment_graph

* Update graph_to_tape

* Update remap_tape_wires

* Rename to expand_fragment_tape

* Update expand_fragment_tape

* Update CutStrategy

* Update qcut_processing_fn

* Remove note

* Update cut_circuit

* Work on docs

* Add to docs

* Update pennylane/transforms/qcut.py

* Add to changelog

* Move device definition

* Mention WireCut

* Move details

* QCut module

* Fix image location

* Fix init

* Apply suggestions from code review

Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>

* Add link to communication graph

* Reword

* Move around

* fix

* fix

Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>

* Circuit cutting: update changelog (#2290)

* Minor gradient fixes (#2299)

* Deprecate jacobian tape (#2306)

* `qml.generator` doc fixes (#2309)

* Snapshot: remove temporary fixes for lightning device (#2291)

* Docs fixes for v0.22.0 release (#2312)

* Extend the conditional operations documentation (#2294)

* Add `qml.generator(op)` backwards compatibility (#2305)

* I think I have solved the issue

* removed prints

* clean up

* more cleaning

* pin pennylane-lightning version in CI (#2318)

* Added permutation on probability vector

* Finally fixed issue, just need to add tests and format

* general clean up

* More clean up

* added comment explaining prob vect permutation

* added tests for new device method get_ordered_subset()

* Update .github/workflows/tests.yml

* Update pennylane/_qubit_device.py

* Added more tests

* Lint

* Added changelog entry, fixed up tests

* Added PR link to changelog

* lint

* fixed tests

* typo in tests

* Fixed get_ordered_subset() to raise a more useful error message

* regex match error in tests

* Update doc/releases/changelog-dev.md

* Update pennylane/_device.py

Co-authored-by: antalszava <antalszava@gmail.com>

* Update doc/releases/changelog-dev.md

* Apply suggestions from code review

Co-authored-by: antalszava <antalszava@gmail.com>

* Update pennylane/_device.py

Co-authored-by: antalszava <antalszava@gmail.com>

* Renamed device method for getting ordered subset

* update name in qubit device

* Wrapped permuting logic into private method

* updated tests using pytest.parameterize

* moved tests to device test suite

* Moved tests to device test suite

* added doc string to tests

* lint and added check for type of mapped wires

* updated device test to use tol as a func instead of a val and run on proper device

* typo in tests

* address codefactor

* more code factor

* codefactor

* override global variable name

* lint

* Apply suggestions from code review

Co-authored-by: antalszava <antalszava@gmail.com>

* format

* updated comment in _permute_wires func

* updated doc strings in test_measurements.py

Co-authored-by: David Ittah <dime10@users.noreply.github.com>
Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: David Wierichs <davidwierichs@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Jack Y. Araz <jackaraz@gmail.com>
Co-authored-by: Tom Bromley <49409390+trbromley@users.noreply.github.com>
Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>
Co-authored-by: Christina Lee <christina@xanadu.ai>
Co-authored-by: Maria Schuld <mariaschuld@gmail.com>
antalszava added a commit that referenced this pull request Mar 21, 2022
* Fix output shape of batch transforms

Remove squeezing from the outputs of current batch transforms such as
gradient_transform and hessian_transform, and instead produce, directly
at the batch_transform level, the same output shape that qml.QNode
generates.

* Do contract with cjac=[1] to remove unit dimension

* Fix var_param_shift iterating over 0d array

* Fix extra dimension in var_param_shift

Due to the mask always being 2d, there is an extra dimension for
scalar-valued QNodes, even after contraction with the classical
jacobian. Adjust the mask's shape to match that of the arrays holding
the result values.
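
The broadcasting problem described in this commit can be illustrated in NumPy (an illustration of the shape issue only, not the actual var_param_shift code; the arrays here are made up):

```python
import numpy as np

# Results of a scalar-valued QNode for several shifted evaluations:
results = np.array([0.3, 0.5, 0.7])          # shape (3,), one scalar each

# A mask built as always-2D introduces a spurious unit dimension:
mask_2d = np.array([[1.0], [0.0], [1.0]])    # shape (3, 1)
print((mask_2d * results[:, None]).shape)    # (3, 1): extra dimension

# Matching the mask's shape to the result arrays avoids it:
mask = mask_2d.reshape(results.shape)        # shape (3,)
print((mask * results).shape)                # (3,)
```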

* Fix mitigate processing_fn

* Fix tests expecting unit dimensions

* Fix dimensionality in param_shift_cv

* Fix metric tensor tape processing

* [Bug] Exclude Snapshot from adjoint backwards pass (#2289)

* Work on consistency of `Operator`s (#2287)

* Batch run (#2069)

* Fix batch execution documented return type

* Remove obsolete safe_squeeze

* Circuit cutting: Tidy up documentation (#2279)

* Circuit cutting: update changelog (#2290)

* Minor gradient fixes (#2299)

* Deprecate jacobian tape (#2306)

* `qml.generator` doc fixes (#2309)

* Move tape result squeezing to each transform

The previous placement of the squeezing inside the batch transform
assumes that all uses of the batch transform are similar to that of the
gradient transforms, which is not the case. To avoid breaking other
types of transforms with this change, the squeezing is now placed inside
each gradient transform.

Tapes constructed from QNodes now also carry the `_qfunc_output`
attribute.
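
The per-transform squeezing described in this commit can be sketched as follows (a conceptual illustration; `processing_fn` is a hypothetical stand-in for a gradient transform's post-processing, not PennyLane's actual code):

```python
import numpy as np

def processing_fn(tape_results):
    """Sketch of per-transform squeezing: instead of squeezing at
    the generic batch_transform level (which would affect every
    kind of transform), each gradient transform squeezes its own
    tape results before post-processing them."""
    return [np.squeeze(np.asarray(r)) for r in tape_results]

# a raw device result often carries a leading unit dimension per tape
raw = [np.array([[0.1, 0.9]]), np.array([[0.4, 0.6]])]
print([r.shape for r in processing_fn(raw)])  # [(2,), (2,)]
```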

* Fix linting errors

* Snapshot: remove temporary fixes for lightning device (#2291)

* Don't stack the tape result list, only squeeze each element

* Fix `_qfunc_output` missing from expanded tape

* Revert "Fix metric tensor tape processing"

This reverts commit 03479ac.

* Fix issue when stacking scalars

`np.stack` cannot deal with scalar arrays, so stacking can simply be
skipped in such cases. This situation can occur in the gradient
transforms when the output is a scalar array of type object.
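
The guard described in this commit can be sketched in NumPy (an illustration of the workaround only; `maybe_stack` is a hypothetical helper, not the actual PennyLane code):

```python
import numpy as np

def maybe_stack(res):
    """np.stack iterates over its argument, so a 0-d (scalar) array
    would raise "iteration over a 0-d array"; skip stacking then."""
    res = np.asarray(res)
    if res.ndim == 0:
        return res          # scalar result: nothing to stack
    return np.stack(res)

print(maybe_stack(np.array(1.5)))   # 1.5 (returned unchanged)
print(maybe_stack([np.array([1.0, 2.0]), np.array([3.0, 4.0])]).shape)  # (2, 2)
```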

* Docs fixes for v0.22.0 release (#2312)

* Extend the conditional operations documentation (#2294)

* Add `qml.generator(op)` backwards compatibility (#2305)

* Remove `safe_squeeze` tests

* pin pennylane-lightning version in CI (#2318)

* Amend docstring examples for `compute_matrix` and `compute_eigvals` (#2314)

* Few docstring updates in prep for `v0.22.0` (#2311)

* Support for controlled & adjoint in Snapshot/Barrier (#2315)

* ControlledQubitUnitary should raise `DecompositionUndefinedError` (#2320)

* `v0.22.0` release notes (#2303)

* Pin Lightning `>=0.22` (#2324)

* Review: comment fixes

Co-authored-by: Josh Izaac <josh146@gmail.com>

* Review: simplify code

Co-authored-by: David Wierichs <davidwierichs@gmail.com>

* Add tests demonstrating bug resolution

* Changelog

* Undo difficult to test change for CV device

Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: David Wierichs <davidwierichs@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Jack Y. Araz <jackaraz@gmail.com>
Co-authored-by: Tom Bromley <49409390+trbromley@users.noreply.github.com>
Co-authored-by: anthayes92 <34694788+anthayes92@users.noreply.github.com>
Co-authored-by: Nathan Killoran <co9olguy@users.noreply.github.com>
Co-authored-by: Jay Soni <jbsoni@uwaterloo.ca>
Co-authored-by: Christina Lee <christina@xanadu.ai>
Co-authored-by: Maria Schuld <mariaschuld@gmail.com>