
Adds Double Excitation operations #1123

Merged
merged 40 commits into from
Mar 23, 2021
Conversation

@ixfoduap (Contributor) commented Mar 4, 2021

Context:
Current tools for constructing quantum circuits for quantum chemistry are inefficient. By relying on fermionic-to-qubit mappings to define operations, and on circuit decompositions to compute gradients, the resulting circuits for single and double excitation operations lead to lengthy computations. Our research has identified that Givens rotations play the same role as fermionic excitation operations, while allowing simple analytical gradients and faster implementation on hardware and in simulators.

Description of the Change:
This PR adds the 4-qubit DoubleExcitation operation with support across all native devices. Its analytical gradient requires a decomposition into two gates, DoubleExcitationPlus and DoubleExcitationMinus, which are also added as part of this PR.

Benefits:
Much faster implementation of quantum circuits for quantum chemistry.

Note:
The DoubleExcitationPlus and DoubleExcitationMinus gates have generators G such that G^2 = 1 (self-inverse). This means they satisfy the standard two-term parameter-shift rule, so no custom gradient formula needs to be passed.
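The self-inverse property is what makes the two-term rule exact. A minimal numeric sketch (toy coefficients `A`, `B`, `C` of my own choosing, assuming the usual exp(-i θ G / 2) convention, under which any expectation value has the single-frequency form A·cos θ + B·sin θ + C):

```python
import numpy as np

# Toy single-frequency expectation value, as produced by a gate exp(-i t G / 2)
# with G^2 = 1 (A, B, C are arbitrary/hypothetical coefficients)
A, B, C = 0.8, -0.3, 0.1
f = lambda t: A * np.cos(t) + B * np.sin(t) + C

t = 1.234
grad_shift = 0.5 * (f(t + np.pi / 2) - f(t - np.pi / 2))  # two-term parameter shift
grad_exact = -A * np.sin(t) + B * np.cos(t)               # analytic derivative
# grad_shift equals grad_exact up to float rounding: the rule is exact,
# not a finite-difference approximation
```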

@ixfoduap ixfoduap self-assigned this Mar 4, 2021
@josh146 (Member) left a comment:

Just did a quick run through, looks great so far!

Small comment re: autograd, jax, and tensorflow: it will be important to have some backprop tests, just to ensure everything remains differentiable!

Comment on lines 5 to 6
- Added the `DoubleExcitation` four-qubit operation, which is particularly useful for quantum
chemistry applications. [(#1121)](https://github.com/PennyLaneAI/pennylane/pull/1121)
Member:

Minor: the second half of the sentence "which is particularly useful for quantum chemistry applications", sounds a bit weird to me in a changelog 🤔

Suggest making it more specific (saying in 2-3 extra words why it is useful), or removing 'particularly'?

Comment on lines 208 to 223
```python
U = [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, c, 0, 0, 0, 0, 0, 0, 0, 0, -s, 0, 0, 0],
     [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
     [0, 0, 0, s, 0, 0, 0, 0, 0, 0, 0, 0, c, 0, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]]
```
Member:

is this because tensor indexing doesn't work in tensorflow? You could try using tf.scatter_nd to avoid writing out the full matrix.

Contributor Author:

Thanks Josh! I took a look and it's still pretty cumbersome to write down the matrix using `tf.scatter_nd`. As far as I can tell, the only way to use `tf.scatter_nd` is to populate an entire tensor index at a time of a tensor of all zeros; in this case that means populating entire rows. This would still be helpful if some rows were all zeros, but instead it seems we have to specify each full row directly, which is as difficult as writing down the entire matrix. For example, to build the 2x2 identity we'd have to do something like:

```python
import tensorflow as tf

indices = tf.constant([[0], [1]])
updates = tf.constant([[1, 0], [0, 1]])
shape = tf.constant([2, 2])
scatter = tf.scatter_nd(indices, updates, shape)

print(scatter)
# tf.Tensor(
# [[1 0]
#  [0 1]], shape=(2, 2), dtype=int32)
```

Member:

Ah. To be honest I don't 100% understand scatter_nd 😆

Comment on lines 196 to 200
```python
U = np.zeros((16, 16))
U[3, 3] = c    # 3 (dec) = 0011 (bin)
U[3, 12] = -s  # 12 (dec) = 1100 (bin)
U[12, 3] = s
U[12, 12] = c
```
Member:

Might be important to check that this remains differentiable --- I think autograd (and maybe JAX) do not like setting array elements via indexing
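For reference, JAX arrays are immutable, so in-place assignment like `U[3, 3] = c` raises an error there; the functional `.at[...].set(...)` API is the usual workaround. A sketch of the equivalent construction (my own illustration, not the implementation in this PR):

```python
import jax.numpy as jnp

def double_excitation_matrix(phi):
    c, s = jnp.cos(phi), jnp.sin(phi)
    U = jnp.eye(16)
    # jnp arrays are immutable: .at[...].set(...) returns an updated copy
    U = U.at[3, 3].set(c)    # 3  (dec) = 0011 (bin)
    U = U.at[3, 12].set(-s)  # 12 (dec) = 1100 (bin)
    U = U.at[12, 3].set(s)
    U = U.at[12, 12].set(c)
    return U
```

These functional updates stay differentiable under `jax.grad` and work inside `jax.jit`.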

Contributor Author:

Thanks for the suggestion! What would that check look like, exactly? 🤔

Is this not checked in the device tests?

Member:

Ah, so what you will need to do is just add a new integration test, which

  • Creates a QNode parametrized on the diff method (both parameter-shift and backprop)
  • Uses the new gate
  • Differentiates the QNode and compares it against the expected result 🙂

If everything is fine, the tests will pass for both diff methods.
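Framework details aside, the shape of such a gradient check can be sketched with plain numpy (toy observable `H` of my own choosing, using the cos(φ)/sin(φ) matrix convention from the snippet above; the in-repo test would go through a QNode with `diff_method="parameter-shift"` and `"backprop"` instead):

```python
import numpy as np

def double_excitation_matrix(phi):
    # Givens rotation on basis states 3 (0011) and 12 (1100), per the PR snippet
    c, s = np.cos(phi), np.sin(phi)
    U = np.eye(16)
    U[3, 3], U[3, 12] = c, -s
    U[12, 3], U[12, 12] = s, c
    return U

rng = np.random.default_rng(42)
H = rng.normal(size=(16, 16))
H = H + H.T                 # toy symmetric observable (hypothetical)
psi0 = np.zeros(16)
psi0[3] = 1.0               # Hartree-Fock-like reference state |0011>

def expval(phi):
    psi = double_excitation_matrix(phi) @ psi0
    return psi @ H @ psi

phi = 0.7321
# With the initial state inside the {|0011>, |1100>} subspace, the expectation
# has the single frequency 2*phi, so a two-term shift of pi/4 is exact here;
# general states need the Plus/Minus decomposition discussed in this PR
grad_shift = expval(phi + np.pi / 4) - expval(phi - np.pi / 4)
grad_fd = (expval(phi + 1e-6) - expval(phi - 1e-6)) / 2e-6
```

The shift-rule gradient and the finite-difference estimate agree to high precision.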

.github/CHANGELOG.md (resolved review thread)
@ixfoduap ixfoduap changed the title Adds double excitation operations [WIP, no tests for now] Adds double excitation operations Mar 10, 2021
pennylane/devices/tf_ops.py (resolved review thread)

return tf.convert_to_tensor(U)

return U
Contributor Author:

Suggested change
return U

codecov bot commented Mar 11, 2021

Codecov Report

Merging #1123 (f23ba82) into master (6aa1228) will increase coverage by 0.02%.
The diff coverage is 97.42%.

Impacted file tree graph

```diff
@@            Coverage Diff             @@
##           master    #1123      +/-   ##
==========================================
+ Coverage   98.02%   98.05%   +0.02%
==========================================
  Files         142      142
  Lines       10398    10522     +124
==========================================
+ Hits        10193    10317     +124
  Misses        205      205
```

| Impacted Files | Coverage Δ |
|---|---|
| pennylane/devices/default_mixed.py | 100.00% <ø> (ø) |
| pennylane/devices/default_qubit.py | 100.00% <ø> (ø) |
| pennylane/devices/default_qubit_tf.py | 90.66% <90.66%> (ø) |
| pennylane/devices/default_qubit_jax.py | 94.02% <94.02%> (ø) |
| pennylane/devices/jax_ops.py | 97.22% <97.22%> (+0.85%) ⬆️ |
| pennylane/devices/autograd_ops.py | 100.00% <100.00%> (ø) |
| pennylane/devices/default_qubit_autograd.py | 100.00% <100.00%> (ø) |
| pennylane/devices/tf_ops.py | 100.00% <100.00%> (ø) |
| pennylane/ops/qubit.py | 98.19% <100.00%> (+0.16%) ⬆️ |

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 6aa1228...f23ba82. Read the comment docs.

@ixfoduap ixfoduap changed the title Adds double excitation operations Adds Double Excitation operations Mar 12, 2021
tests/ops/test_qubit_ops.py (3 resolved review threads)
@ixfoduap ixfoduap added core :atom: Core functionality (Operations, QNodes, CircuitGraphs) qchem ⚛️ Related to the QChem package labels Mar 19, 2021
pennylane/devices/autograd_ops.py (resolved review thread)
pennylane/devices/jax_ops.py (resolved review thread)
pennylane/devices/tf_ops.py (resolved review thread)
pennylane/ops/qubit.py (resolved review thread)
@josh146 josh146 removed their request for review March 19, 2021 15:12
@antalszava (Contributor) left a comment:

Left some suggestions, nearing approval from my side. 👍 I'd double-check that the changes required for the docs are made in a quick follow-up iteration. Checking the rendered docs is a good way to confirm that the docstrings display correctly.

tests/ops/test_qubit_ops.py (resolved review thread)
.github/CHANGELOG.md (resolved review thread)


def DoubleExcitationPlus(phi):
r"""
Contributor:

The previous suggestion for the docstrings applies here and to the other additions too.

pennylane/devices/tf_ops.py (resolved review thread)
pennylane/ops/qubit.py (resolved review thread)
tests/ops/test_qubit_ops.py (3 resolved review threads)


def DoubleExcitation(phi):
r"""
Contributor:

r"""DoubleExcitationPlus(phi, wires)
Double excitation rotation with positive phase-shift outside the rotation subspace.

.. math:: U_+(\phi) = \begin{bmatrix}
@agran2018 (Contributor) left a comment:

Thanks @ixfoduap. Left a few minor suggestions. Happy to approve after checking them.

pennylane/devices/tf_ops.py (resolved review thread)
pennylane/ops/qubit.py (resolved review thread)
```diff
@@ -145,6 +145,9 @@ def circuit():
     "CRY": jax_ops.CRY,
     "CRZ": jax_ops.CRZ,
     "MultiRZ": jax_ops.MultiRZ,
+    "DoubleExcitation": jax_ops.DoubleExcitation,
+    "DoubleExcitationPlus": jax_ops.DoubleExcitationPlus,
+    "DoubleExcitationMinus": jax_ops.DoubleExcitationMinus,
 }

 C_DTYPE = jnp.complex64
```
Contributor:

Just out of curiosity, why is the precision lowered here as compared with autograd and tensorflow?

Member:

@Thenerdstation
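For context (an editorial aside, not from this thread): `jnp.complex64` stores float32 real and imaginary parts, which carry roughly 7 significant decimal digits versus ~16 for complex128, and JAX runs in 32-bit mode by default unless `jax.config.update("jax_enable_x64", True)` is set. A quick numpy illustration of the precision gap:

```python
import numpy as np

theta = 0.123456789
exact = np.exp(1j * theta)    # complex128 by default in numpy
stored = np.complex64(exact)  # what a complex64 simulation would keep
err = abs(stored - exact)     # float32 rounding error, on the order of 1e-8
```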

@antalszava (Contributor) left a comment:

Looks good to me! 🥇 🥳

Comment on lines +1979 to +1984
|1100\rangle,|0011\rangle\}`. More precisely, it performs the transformation

.. math::

&|0011\rangle \rightarrow \cos(\phi) |0011\rangle - \sin(\phi) |1100\rangle\\
&|1100\rangle \rightarrow \cos(\phi) |1100\rangle + \sin(\phi) |0011\rangle,
Contributor:

😍 🥇

expectation value calculations, following results from
[Kottmann et al.](https://arxiv.org/abs/2011.05938).


Contributor:

Suggested change

phi (float): rotation angle
Returns:
array[float]: Double excitation rotation matrix

Contributor:

Suggested change

phi (float): rotation angle
Returns:
jnp.Tensor[float]: Double excitation rotation matrix

Contributor:

Suggested change

phi (float): rotation angle
Returns:
tf.Tensor[complex]: Double excitation rotation matrix

Contributor:

Suggested change


while leaving all other basis states unchanged.
Contributor:

(Minor): is it worth further specifying that it's the computational basis?

Contributor Author:

Leaves them all unchanged, so the basis doesn't matter!

@agran2018 (Contributor) left a comment:

Thanks @ixfoduap, looks great!
