Adds Double Excitation operations #1123
Conversation
Just did a quick run-through, looks great so far!
Small comment re: autograd, JAX, and TensorFlow: it will be important to have some backprop tests, just to ensure everything remains differentiable!
.github/CHANGELOG.md
Outdated
- Added the `DoubleExcitation` four-qubit operation, which is particularly useful for quantum
  chemistry applications. [(#1121)](https://github.com/PennyLaneAI/pennylane/pull/1121)
Minor: the second half of the sentence, "which is particularly useful for quantum chemistry applications", sounds a bit weird to me in a changelog 🤔
Suggest making it more specific (saying in 2-3 extra words why it is useful), or removing 'particularly'?
pennylane/devices/tf_ops.py
Outdated
U = [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, c, 0, 0, 0, 0, 0, 0, 0, 0, -s, 0, 0, 0],
     [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
     [0, 0, 0, s, 0, 0, 0, 0, 0, 0, 0, 0, c, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]]
Is this because tensor indexing doesn't work in TensorFlow? You could try using tf.scatter_nd to avoid writing out the full matrix.
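For reference, a sketch of what that suggestion could look like with element-wise (row, column) indices (a hypothetical illustration, assuming the half-angle convention c = cos(phi/2), s = sin(phi/2); not code from the PR):

import tensorflow as tf

def double_excitation_scatter(phi):
    # Scatter only the four rotation entries; gradients flow through `updates`
    c = tf.cos(phi / 2)
    s = tf.sin(phi / 2)
    rotation = tf.scatter_nd(
        indices=[[3, 3], [3, 12], [12, 3], [12, 12]],
        updates=tf.stack([c, -s, s, c]),
        shape=[16, 16],
    )
    # Identity on the rest: ones on the diagonal except entries (3, 3) and (12, 12)
    diag = tf.tensor_scatter_nd_update(tf.ones(16), [[3], [12]], [0.0, 0.0])
    return tf.linalg.diag(diag) + rotation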
Thanks Josh! I took a look and it's still pretty cumbersome to write down the matrix using tf.scatter_nd. As far as I can tell, the only way to use tf.scatter_nd is to populate an entire tensor index at a time for a tensor of all zeros. In this case it means populating entire rows. This would still be helpful if some rows were all zeros, but instead it seems we have to specify each full row directly, which is as difficult as writing down the entire matrix. For example, to build the 2x2 identity we'd have to do something like:
import tensorflow as tf

indices = tf.constant([[0], [1]])
updates = tf.constant([[1, 0], [0, 1]])
shape = tf.constant([2, 2])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)

which prints:

tf.Tensor(
[[1 0]
 [0 1]], shape=(2, 2), dtype=int32)
Ah. To be honest I don't 100% understand scatter_nd 😆
pennylane/devices/autograd_ops.py
Outdated
U = np.zeros((16, 16)) | ||
U[3, 3] = c # 3 (dec) = 0011 (bin) | ||
U[3, 12] = -s # 12 (dec) = 1100 (bin) | ||
U[12, 3] = s | ||
U[12, 12] = c |
Might be important to check that this remains differentiable --- I think autograd (and maybe JAX) do not like setting array elements via indexing
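For what it's worth, here is one way such a matrix can be built without item assignment (a hypothetical sketch, assuming the half-angle convention c = cos(phi/2), s = sin(phi/2); not necessarily what the PR adopts). Since the basis vectors are constants, only the scalars c and s are traced, so the construction stays differentiable under autograd:

from autograd import numpy as np

def double_excitation_matrix(phi):
    c = np.cos(phi / 2)
    s = np.sin(phi / 2)
    e3 = np.eye(16)[3]    # |0011>
    e12 = np.eye(16)[12]  # |1100>
    # Identity everywhere except the {|0011>, |1100>} subspace
    base = np.eye(16) - np.outer(e3, e3) - np.outer(e12, e12)
    # Givens rotation on that subspace (signs match the diff above)
    rot = (c * np.outer(e3, e3) - s * np.outer(e3, e12)
           + s * np.outer(e12, e3) + c * np.outer(e12, e12))
    return base + rot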
Thanks for the suggestion! What would that check look like exactly? 🤔
Is this not checked in the device tests?
Ah, so what you will need to do is just add a new integration test, which:
- Creates a QNode parametrized on the diff method (both parameter-shift and backprop)
- Uses the new gate
- Differentiates the QNode and compares it against the expected result 🙂
If everything is fine, the tests will pass for both diff methods.
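A minimal sketch of what such a test could look like (hypothetical test name; assumes default.qubit and the autograd interface, and compares against a central finite-difference estimate rather than a hard-coded analytic value):

import pytest
import pennylane as qml
from pennylane import numpy as np

@pytest.mark.parametrize("diff_method", ["parameter-shift", "backprop"])
def test_double_excitation_gradient(diff_method):
    dev = qml.device("default.qubit", wires=4)

    @qml.qnode(dev, diff_method=diff_method)
    def circuit(phi):
        qml.BasisState(np.array([1, 1, 0, 0]), wires=[0, 1, 2, 3])
        qml.DoubleExcitation(phi, wires=[0, 1, 2, 3])
        return qml.expval(qml.PauliZ(0))

    phi = np.array(0.123, requires_grad=True)
    grad = qml.grad(circuit)(phi)

    # Central finite-difference estimate of the same derivative
    eps = 1e-6
    estimate = (circuit(phi + eps) - circuit(phi - eps)) / (2 * eps)
    assert np.allclose(grad, estimate, atol=1e-5)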
pennylane/devices/tf_ops.py
Outdated
return tf.convert_to_tensor(U)

return U
Suggested change:
return U
Codecov Report
@@            Coverage Diff             @@
##           master    #1123      +/-   ##
==========================================
+ Coverage   98.02%   98.05%   +0.02%
==========================================
  Files         142      142
  Lines       10398    10522     +124
==========================================
+ Hits        10193    10317     +124
  Misses        205      205
Continue to review full report at Codecov.
Left some suggestions; nearing approval from my side. 👍 I'd double-check in another quick iteration that the changes required for the docs are made. Checking the rendered docs is a good way to confirm that the docstrings display correctly.
pennylane/devices/autograd_ops.py
Outdated
def DoubleExcitationPlus(phi):
    r"""
The previous suggestion for the docstrings is applicable here and to the other additions too.
pennylane/devices/tf_ops.py
Outdated
def DoubleExcitation(phi):
    r"""
Side note: the docstring formatting is needed for the docs to render correctly, not just for cosmetics:
https://pennylane--1123.org.readthedocs.build/en/1123/code/api/pennylane.devices.tf_ops.DoubleExcitation.html?highlight=doubleexcitation#pennylane.devices.tf_ops.DoubleExcitation
pennylane/ops/qubit.py
Outdated
r"""DoubleExcitationPlus(phi, wires) | ||
Double excitation rotation with positive phase-shift outside the rotation subspace. | ||
|
||
.. math:: U_+(\phi) = \begin{bmatrix} |
In my browser, the matrix seems to be too big when rendered in the docs:
https://pennylane--1123.org.readthedocs.build/en/1123/code/api/pennylane.DoubleExcitationPlus.html?highlight=doubleexcitationplus#pennylane.DoubleExcitationPlus
Thanks @ixfoduap. Left a few minor suggestions. Happy to approve after checking them.
@@ -145,6 +145,9 @@ def circuit():
    "CRY": jax_ops.CRY,
    "CRZ": jax_ops.CRZ,
    "MultiRZ": jax_ops.MultiRZ,
    "DoubleExcitation": jax_ops.DoubleExcitation,
    "DoubleExcitationPlus": jax_ops.DoubleExcitationPlus,
    "DoubleExcitationMinus": jax_ops.DoubleExcitationMinus,
}

C_DTYPE = jnp.complex64
Just out of curiosity: why is the precision lowered here compared with autograd and tensorflow?
@Thenerdstation
Co-authored-by: antalszava <antalszava@gmail.com>
Co-authored-by: agran2018 <45397799+agran2018@users.noreply.github.com>
…to double_excitations
Looks good to me! 🥇 🥳
|1100\rangle,|0011\rangle\}`. More precisely, it performs the transformation

.. math::

    &|0011\rangle \rightarrow \cos(\phi) |0011\rangle - \sin(\phi) |1100\rangle\\
    &|1100\rangle \rightarrow \cos(\phi) |1100\rangle + \sin(\phi) |0011\rangle,
😍 🥇
.github/CHANGELOG.md
Outdated
expectation value calculations, following results from
[Kottmann et al.](https://arxiv.org/abs/2011.05938).
pennylane/devices/autograd_ops.py
Outdated
phi (float): rotation angle
Returns:
    array[float]: Double excitation rotation matrix
pennylane/devices/jax_ops.py
Outdated
phi (float): rotation angle
Returns:
    jnp.Tensor[float]: Double excitation rotation matrix
pennylane/devices/tf_ops.py
Outdated
phi (float): rotation angle
Returns:
    tf.Tensor[complex]: Double excitation rotation matrix
while leaving all other basis states unchanged.
(Minor): is it worth further specifying that it's the computational basis?
Leaves them all unchanged, so the basis doesn't matter!
Thanks @ixfoduap, looks great!
Context:
Current tools for constructing quantum circuits for quantum chemistry are inefficient. By relying on fermionic-to-qubit mappings to define operations and circuit decompositions to compute gradients, the resulting circuits for creating single and double excitation operations lead to lengthy computations. Through research work, we've identified that Givens rotations perform the same role as fermionic excitation operations, while allowing for simple analytical gradients and faster implementation in hardware and simulators.
Description of the Change:
This PR adds the 4-qubit DoubleExcitation operation, with support across all native devices. Its analytical gradient requires a decomposition into two gates, DoubleExcitationPlus and DoubleExcitationMinus, which are also added as part of this PR.
Benefits:
Much faster implementation of quantum circuits for quantum chemistry.
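For illustration, a minimal usage sketch (hypothetical circuit; assumes default.qubit):

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def circuit(phi):
    # Prepare a Hartree-Fock-like reference state |1100>
    qml.BasisState(np.array([1, 1, 0, 0]), wires=[0, 1, 2, 3])
    # Excite amplitude from |1100> to |0011> via a Givens rotation
    qml.DoubleExcitation(phi, wires=[0, 1, 2, 3])
    return qml.expval(qml.PauliZ(0))

print(circuit(0.5))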
Note:
The DoubleExcitationPlus and DoubleExcitationMinus gates have generators G such that G^2 = 1 (self-inverse). This means they satisfy the standard parameter-shift rule, and no custom gradient formula needs to be passed.
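As a reminder of why this works (a standard result, not specific to this PR): for a gate of the form U(phi) = exp(-i phi G / 2) with G^2 = 1, any expectation value f(phi) obeys the two-term shift rule

.. math:: \frac{df}{d\phi} = \frac{1}{2}\left[f\left(\phi + \frac{\pi}{2}\right) - f\left(\phi - \frac{\pi}{2}\right)\right]

which is exactly the rule the generic parameter-shift machinery applies, so no custom recipe is required.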