
Add support for the new VQE workflows to the differentiable batch execution pipeline #1608

Merged: 84 commits into master, Sep 7, 2021

Conversation

@josh146 (Member) commented Aug 30, 2021

Context: In #1483, #1596, and #1551, support was added for:

  • Directly specifying expval(H) inside of a QNode
  • Differentiating Hamiltonian coefficients
  • Allowing certain devices, including default.qubit, to support Hamiltonians directly.

However, these changes were not ported over to the new differentiable batch-execute pipeline.

Description of the Change:

  • Adds support for differentiating one or more Hamiltonians to qml.gradients.param_shift.

  • Modifies hamiltonian_grad to support multiple Hamiltonians (return expval(H1), expval(H2)), since this is supported execution behaviour.

  • Fixes a couple of bugs in the Autograd and TensorFlow batch interfaces.

  • Adds tests.

Benefits: returning expval(H) from QNodes is now supported within the new batch-differentiable pipeline.

Possible Drawbacks: n/a

Related GitHub Issues: n/a
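For intuition, the two gradient rules this PR combines can be sketched outside PennyLane: for a gate parameter the standard two-term parameter-shift rule applies, and since ⟨H⟩ = Σᵢ cᵢ⟨Pᵢ⟩ is linear in the coefficients, the gradient with respect to cᵢ is just ⟨Pᵢ⟩ (the idea behind hamiltonian_grad). A minimal plain-Python sketch, where the circuit RY(θ)|0⟩ with H = a·Z + b·X is an illustrative choice and not taken from the PR:

```python
# Toy model (not PennyLane): |psi> = RY(theta)|0>, H = a*Z + b*X,
# so <Z> = cos(theta) and <X> = sin(theta).
import math

def expval(theta, a, b):
    # <H> = a*<Z> + b*<X>
    return a * math.cos(theta) + b * math.sin(theta)

def param_shift_theta(theta, a, b, s=math.pi / 2):
    # Two-term parameter-shift rule for the gate parameter theta.
    return (expval(theta + s, a, b) - expval(theta - s, a, b)) / 2

def coeff_grads(theta):
    # <H> is linear in (a, b), so d<H>/da = <Z> and d<H>/db = <X>.
    return math.cos(theta), math.sin(theta)

theta, a, b = 0.3, 0.5, -0.2
g_theta = param_shift_theta(theta, a, b)
g_a, g_b = coeff_grads(theta)
# Analytic check: d<H>/dtheta = -a*sin(theta) + b*cos(theta), exactly.
assert abs(g_theta - (-a * math.sin(theta) + b * math.cos(theta))) < 1e-12
```

The shift rule is exact here (not a finite difference), which is why the same two-evaluation recipe works inside the differentiable batch-execution pipeline.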

@josh146 josh146 added review-ready 👌 PRs which are ready for review by someone from the core team. and removed WIP 🚧 Work-in-progress labels Aug 30, 2021
@josh146 (Member Author) commented Aug 30, 2021

[ch8557]

Base automatically changed from grad-docs to master August 31, 2021 13:54
@glassnotes (Contributor) left a comment:

@josh146 only minor comments, this will be such a nice feature to have! 💯

Review threads (resolved) on:
pennylane/tape/tape.py
pennylane/gradients/parameter_shift.py
pennylane/gradients/hamiltonian_grad.py

a, b, c = coeffs
x, y = weights
tape.trainable_params = {0, 1, 2, 4}
(Contributor):

Any reason it is not using all 3 coeffs of H?

@josh146 (Member Author):

Oh... very good question 🤔 I remember doing this deliberately, but for the life of me cannot remember the reason!

@josh146 (Member Author):

Oh! Sorry, I misread this line. At first I thought it was a comparison (==), but now I realize it is an assignment (=).

Basically, I want to test the case where not all Hamiltonian coefficients are trainable, just to make the test more complicated.
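The behaviour under test can be mimicked in plain Python (this is an illustration of the idea, not the PennyLane tape machinery): gradients are produced only for the indices marked trainable, so the bookkeeping between a parameter's tape index and its position in the gradient output has to line up. Here the five parameters stand in for three Hamiltonian coefficients and two gate weights, with index 3 left non-trainable as in the snippet above:

```python
# Hedged sketch: differentiate only a subset of parameters, mirroring
# tape.trainable_params = {0, 1, 2, 4}. Names and f are illustrative.
import math

def f(params):
    a, b, c, x, y = params  # 3 "coefficients" and 2 "weights"
    return a * math.cos(x) + b * math.sin(y) + c

def grad_subset(f, params, trainable, h=1e-6):
    # Central finite differences, computed only for trainable indices.
    grads = {}
    for i in sorted(trainable):
        up, dn = list(params), list(params)
        up[i] += h
        dn[i] -= h
        grads[i] = (f(up) - f(dn)) / (2 * h)
    return grads

params = [0.5, -0.2, 0.1, 0.3, 0.7]
g = grad_subset(f, params, trainable={0, 1, 2, 4})
assert set(g) == {0, 1, 2, 4}   # index 3 (x) was skipped entirely
assert abs(g[2] - 1.0) < 1e-5   # df/dc = 1: linear in the coefficient
```

Skipping an index in the middle (rather than a trailing one) is exactly the kind of case that catches off-by-one bugs in the index mapping.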

Review threads (resolved) on:
tests/gradients/test_parameter_shift.py
tests/interfaces/test_batch_autograd.py
pennylane/gradients/parameter_shift.py
@@ -738,6 +738,29 @@ def trainable_params(self, param_indices):

self._trainable_params = param_indices

def get_operation(self, idx):
(Contributor):

Just a thought; could name this get_trainable_operation?

@josh146 (Member Author):

@anthayes92 that was my original name 😆

Oddly, I found myself getting confused while writing this, and decided it was better to mirror the existing tape.get_parameters() method (which by default only gets trainable parameters).
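The lookup being discussed can be sketched in isolation (this is a toy model of the idea behind a get_operation(idx)-style helper; the data layout and names are illustrative, not PennyLane's actual internals): a trainable-parameter index is first mapped to a flat parameter index, then walked through the operation list to find the owning operation.

```python
# Toy tape: each operation owns some parameters; only some flat
# parameter indices are trainable.
ops = [("RY", [0.3]), ("Rot", [0.1, 0.2, 0.4]), ("RX", [0.7])]
trainable = [0, 2, 4]  # flat parameter indices marked trainable

def get_operation(idx):
    # Trainable index -> flat parameter index -> (op name, index in op).
    flat = trainable[idx]
    for op_name, op_params in ops:
        if flat < len(op_params):
            return op_name, flat
        flat -= len(op_params)
    raise IndexError(idx)

assert get_operation(1) == ("Rot", 1)  # trainable idx 1 -> flat idx 2
```

Indexing by trainable position (rather than flat position) is what makes the helper consistent with a get_parameters() that returns trainable parameters by default.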

@anthayes92 (Contributor):

Awesome addition @josh146! Very thorough testing.

Co-authored-by: Olivia Di Matteo <2068515+glassnotes@users.noreply.github.com>
@josh146 josh146 merged commit b9231d4 into master Sep 7, 2021
@josh146 josh146 deleted the gradient-vqe branch September 7, 2021 08:05
4 participants