
[Templates] Rewrite embeddings as operations #1156

Merged
92 commits merged into master on Apr 6, 2021
Changes from 83 commits
21e30b8
Remove old core
josh146 Jan 29, 2021
bab8c08
more
josh146 Jan 29, 2021
6a62534
more tests
josh146 Jan 29, 2021
25e6085
more tests
josh146 Jan 29, 2021
5074218
more tests
josh146 Jan 29, 2021
23147d6
merge master
josh146 Jan 29, 2021
0f31b36
fixed more tests
josh146 Jan 29, 2021
8b5828a
Merge branch 'master' into rip-out-core
josh146 Jan 30, 2021
e13a1e5
more tests passing
josh146 Jan 30, 2021
f79f20f
Merge branch 'master' into rip-out-core
josh146 Feb 3, 2021
f6408fe
merge master
josh146 Feb 8, 2021
f014c89
update jax test
josh146 Feb 8, 2021
6d1fb63
merge master
josh146 Feb 11, 2021
33f4c8e
merge master
josh146 Feb 16, 2021
95e35e4
merge master
josh146 Feb 18, 2021
8c7fbc6
Merge branch 'master' into rip-out-core
josh146 Feb 22, 2021
14ebf35
tests passing
josh146 Feb 22, 2021
643113f
fix
josh146 Feb 22, 2021
153401d
fix
josh146 Feb 22, 2021
951b924
linting
josh146 Feb 22, 2021
0fdf4b4
fix docs
josh146 Feb 22, 2021
1d6d32d
Merge branch 'master' into rip-out-core
mariaschuld Feb 24, 2021
24119bc
Merge branch 'master' into rip-out-core
josh146 Mar 1, 2021
da62820
merge master:
josh146 Mar 4, 2021
4443b57
fix
josh146 Mar 4, 2021
672d963
fix
josh146 Mar 4, 2021
f4dc056
Merge branch 'master' into rip-out-core
josh146 Mar 8, 2021
4d100f3
merge master
josh146 Mar 10, 2021
c4f609a
fix
josh146 Mar 10, 2021
f8a2747
Update pennylane/tape/operation_recorder.py
josh146 Mar 10, 2021
5e1df17
fix tests after changing observable underline
josh146 Mar 10, 2021
be672e3
Update tests/test_queuing.py
josh146 Mar 12, 2021
50b074a
Update pennylane/circuit_graph.py
josh146 Mar 12, 2021
30c3bb1
Update pennylane/tape/tape.py
josh146 Mar 12, 2021
426c1ed
merge master
josh146 Mar 12, 2021
55e2206
merge master
josh146 Mar 12, 2021
c7f1a41
Merge branch 'master' into rip-out-core
antalszava Mar 14, 2021
c1d2874
Update pennylane/measure.py
josh146 Mar 15, 2021
39e7768
Update doc/code/qml_tape.rst
josh146 Mar 15, 2021
899911b
Update doc/code/qml_tape.rst
josh146 Mar 15, 2021
006bc97
Update tests/interfaces/test_qnode_torch.py
josh146 Mar 16, 2021
595186d
Update tests/interfaces/test_qnode_torch.py
josh146 Mar 16, 2021
16ba95c
Update tests/interfaces/test_qnode_autograd.py
josh146 Mar 16, 2021
9c99910
Update tests/interfaces/test_qnode_autograd.py
josh146 Mar 16, 2021
72c688e
Update tests/interfaces/test_qnode_tf.py
josh146 Mar 16, 2021
7884630
Update tests/interfaces/test_tape_tf.py
josh146 Mar 16, 2021
4eb9b76
Update tests/interfaces/test_qnode_tf.py
josh146 Mar 16, 2021
ebea0b0
Update tests/interfaces/test_tape_autograd.py
josh146 Mar 16, 2021
b86e613
Update tests/interfaces/test_tape_torch.py
josh146 Mar 16, 2021
4cad2e4
Update pennylane/interfaces/torch.py
josh146 Mar 16, 2021
c0632db
suggested changes
josh146 Mar 16, 2021
0343c22
Update pennylane/qnode.py
josh146 Mar 18, 2021
e54694b
suggested changes
josh146 Mar 18, 2021
a3cf592
Merge branch 'rip-out-core' of github.com:PennyLaneAI/pennylane into …
josh146 Mar 18, 2021
840fd79
merge master
josh146 Mar 18, 2021
032d2b1
fix test
josh146 Mar 18, 2021
25555cb
Merge branch 'master' into rip-out-core
trbromley Mar 18, 2021
4a08e84
rewrite amplitudeembedding
mariaschuld Mar 22, 2021
f8762c5
add tests
mariaschuld Mar 22, 2021
50693b5
finish amplitude embedding
mariaschuld Mar 22, 2021
bfd2f57
finish angle embedding
mariaschuld Mar 22, 2021
b548100
finish basis embedding
mariaschuld Mar 23, 2021
990d726
fix conflicts
mariaschuld Mar 23, 2021
a6c2977
finished basis embedding
mariaschuld Mar 23, 2021
b2656d8
finished displacement
mariaschuld Mar 23, 2021
319c84d
finished squeezing
mariaschuld Mar 23, 2021
6a325f0
finished all embeddings
mariaschuld Mar 23, 2021
4994ec3
polish
mariaschuld Mar 23, 2021
9171a66
Merge branch 'master' into rewrite-embeddings
mariaschuld Mar 23, 2021
17b7eb1
black
mariaschuld Mar 23, 2021
f9a1feb
Merge branch 'rewrite-embeddings' of github.com:PennyLaneAI/pennylane…
mariaschuld Mar 23, 2021
da33280
Merge branch 'master' into rewrite-embeddings
mariaschuld Mar 24, 2021
f12f205
removed merge conflict
mariaschuld Mar 24, 2021
985f043
polish docstrings
mariaschuld Mar 24, 2021
7c70c47
add interface tests beyond gradients
mariaschuld Mar 24, 2021
a075755
typo
mariaschuld Mar 24, 2021
18fa0e2
improve angle emb tests
mariaschuld Mar 24, 2021
38e15ad
improve tests further
mariaschuld Mar 24, 2021
258b4d6
add list/tuple input test
mariaschuld Mar 24, 2021
5088d98
update docstring of basisembedding
mariaschuld Mar 24, 2021
13147fd
fix conflict
mariaschuld Mar 25, 2021
b52f181
black
mariaschuld Mar 25, 2021
9a171c6
fix bug
mariaschuld Mar 25, 2021
898fa00
Update .github/CHANGELOG.md
mariaschuld Mar 29, 2021
925e868
Merge branch 'master' of github.com:PennyLaneAI/pennylane into rewrit…
mariaschuld Mar 29, 2021
d874902
typos
mariaschuld Mar 29, 2021
37c0211
Merge branch 'rewrite-embeddings' of github.com:PennyLaneAI/pennylane…
mariaschuld Mar 29, 2021
6350488
Merge branch 'master' into rewrite-embeddings
mariaschuld Apr 6, 2021
60ed4f8
Update tests/templates/test_embeddings/test_angle.py
mariaschuld Apr 6, 2021
7ac7d5a
applied review suggestions
mariaschuld Apr 6, 2021
69e7a9c
Merge branch 'rewrite-embeddings' of github.com:PennyLaneAI/pennylane…
mariaschuld Apr 6, 2021
28c7dcb
black
mariaschuld Apr 6, 2021
7 changes: 4 additions & 3 deletions .github/CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -423,11 +423,12 @@
1: ──RY(1.35)──╰X──RY(0.422)──╰X──┤
```

- The `QAOAEmbedding` and `BasicEntanglerLayers` are now classes inheriting
- The embedding templates, as well as `BasicEntanglerLayers`, are now classes inheriting
from `Operation`, and define the ansatz in their `expand()` method. This
change does not affect the user interface.
change does not affect the user interface.
[(#1156)](https://github.com/PennyLaneAI/pennylane/pull/1156)

For convenience, the class has a method that returns the shape of the
For convenience, `BasicEntanglerLayers` has a method that returns the shape of the
trainable parameter tensor, i.e.,

```python
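The convenience method the changelog mentions can be sketched in plain Python. `BasicEntanglerLayersSketch` below is a hypothetical stand-in, not PennyLane's actual class; it only illustrates the idea of a method that reports the trainable-parameter shape (one rotation angle per wire, per layer):

```python
class BasicEntanglerLayersSketch:
    """Illustrative stand-in for the shape() convenience method."""

    @staticmethod
    def shape(n_layers, n_wires):
        # one rotation angle per wire, per layer
        return (n_layers, n_wires)


print(BasicEntanglerLayersSketch.shape(2, 4))  # (2, 4)
```

A caller can then allocate a correctly-shaped random parameter tensor without hard-coding the dimensions.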
174 changes: 82 additions & 92 deletions pennylane/templates/embeddings/amplitude.py
@@ -12,82 +12,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
Contains the ``AmplitudeEmbedding`` template.
Contains the AmplitudeEmbedding template.
"""
# pylint: disable-msg=too-many-branches,too-many-arguments,protected-access
import warnings
import numpy as np

import pennylane as qml
from pennylane.templates.decorator import template
from pennylane.operation import Operation, AnyWires
from pennylane.ops import QubitStateVector
from pennylane.wires import Wires

# tolerance for normalization
TOLERANCE = 1e-10


def _preprocess(features, wires, pad_with, normalize):
"""Validate and pre-process inputs as follows:

* Check that the features tensor is one-dimensional.
* If pad_with is None, check that the first dimension of the features tensor
has length :math:`2^n` where :math:`n` is the number of qubits. Else check that the
first dimension of the features tensor is not larger than :math:`2^n` and pad features with value if necessary.
* If normalize is false, check that first dimension of features is normalised to one. Else, normalise the
features tensor.

Args:
features (tensor_like): input features to pre-process
wires (Wires): wires that template acts on
pad_with (float): constant used to pad the features tensor to required dimension
normalize (bool): whether or not to normalize the features vector

Returns:
tensor: pre-processed features
"""

shape = qml.math.shape(features)

# check shape
if len(shape) != 1:
raise ValueError(f"Features must be a one-dimensional tensor; got shape {shape}.")

n_features = shape[0]
if pad_with is None and n_features != 2 ** len(wires):
raise ValueError(
f"Features must be of length {2 ** len(wires)}; got length {n_features}. "
f"Use the 'pad' argument for automated padding."
)

if pad_with is not None and n_features > 2 ** len(wires):
raise ValueError(
f"Features must be of length {2 ** len(wires)} or "
f"smaller to be padded; got length {n_features}."
)

# pad
if pad_with is not None and n_features < 2 ** len(wires):
padding = [pad_with] * (2 ** len(wires) - n_features)
features = qml.math.concatenate([features, padding], axis=0)

# normalize
norm = qml.math.sum(qml.math.abs(features) ** 2)

if not qml.math.allclose(norm, 1.0, atol=TOLERANCE):
if normalize or pad_with:
features = features / np.sqrt(norm)
else:
raise ValueError(
f"Features must be a vector of length 1.0; got length {norm}."
"Use 'normalize=True' to automatically normalize."
)

return features
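The validation, padding, and normalization steps above can be sketched as a stand-alone function. The list-based, plain-Python version below (no `qml.math`, and using an explicit `pad_with is not None` check) is illustrative only and is not the code under review:

```python
import math

# tolerance for normalization, as in the module above
TOLERANCE = 1e-10


def preprocess(features, n_wires, pad_with=None, normalize=False):
    """Validate, pad, and normalize a feature vector for n_wires qubits."""
    dim = 2 ** n_wires
    n_features = len(features)

    if pad_with is None and n_features != dim:
        raise ValueError(f"Features must be of length {dim}; got {n_features}.")
    if pad_with is not None and n_features > dim:
        raise ValueError(f"Features must be of length {dim} or smaller; got {n_features}.")

    # pad up to the full Hilbert-space dimension
    if pad_with is not None and n_features < dim:
        features = list(features) + [pad_with] * (dim - n_features)

    # normalize (or reject) vectors whose squared norm is not 1
    norm = sum(abs(f) ** 2 for f in features)
    if abs(norm - 1.0) > TOLERANCE:
        if normalize or pad_with is not None:
            features = [f / math.sqrt(norm) for f in features]
        else:
            raise ValueError("Features are not normalized; use normalize=True.")
    return features


print(preprocess([1, 1], n_wires=1, normalize=True))  # two entries of ~0.7071
```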


@template
def AmplitudeEmbedding(features, wires, pad_with=None, normalize=False, pad=None):
class AmplitudeEmbedding(Operation):
r"""Encodes :math:`2^n` features into the amplitude vector of :math:`n` qubits.

By setting ``pad_with`` to a real or complex number, ``features`` is automatically padded to dimension
Expand All @@ -108,9 +48,8 @@ def AmplitudeEmbedding(features, wires, pad_with=None, normalize=False, pad=None
gradients with respect to the features cannot be computed by PennyLane.

Args:
features (tensor-like): input vector of length ``2^n``, or less if `pad_with` is specified
wires (Iterable or :class:`.wires.Wires`): Wires that the template acts on.
Accepts an iterable of numbers or strings, or a Wires object.
features (tensor_like): input tensor of dimension ``(2^n,)``, or less if `pad_with` is specified
wires (Iterable): wires that the template acts on
Review comment: Updated the docstring everywhere:

  • Wires is not user facing (and an iterable anyways), so removed mention
  • tensor_like is more accurate than array

pad_with (float or complex): if not None, the input is padded with this constant to size :math:`2^n`
normalize (bool): whether to automatically normalize the features
pad (float or complex): same as `pad`, to be deprecated
@@ -142,21 +81,7 @@ def circuit(f=None):
**Differentiating with respect to the features**

Due to non-trivial classical processing to construct the state preparation circuit,
the features argument is **not always differentiable**.

.. code-block:: python

from pennylane import numpy as np

@qml.qnode(dev)
def circuit(f):
AmplitudeEmbedding(features=f, wires=range(2))
return qml.expval(qml.PauliZ(0))

>>> g = qml.grad(circuit, argnum=0)
>>> f = np.array([1, 1, 1, 1], requires_grad=True)
>>> g(f)
ValueError: Cannot differentiate wrt parameter(s) {0, 1, 2, 3}.
the features argument is in general **not differentiable**.

**Normalization**

@@ -216,17 +141,82 @@ def circuit(f=None):

"""

wires = Wires(wires)
num_params = 1
num_wires = AnyWires
par_domain = "A"

def __init__(self, features, wires, pad_with=None, normalize=False, pad=None, do_queue=True):

# pad is replaced with the more verbose pad_with
if pad is not None:
warnings.warn(
"The pad argument will be replaced by the pad_with option in future versions of PennyLane.",
PendingDeprecationWarning,
)
if pad_with is None:
pad_with = pad

wires = Wires(wires)
self.pad_with = pad_with
self.normalize = normalize

features = self._preprocess(features, wires, pad_with, normalize)
super().__init__(features, wires=wires, do_queue=do_queue)

def expand(self):

with qml.tape.QuantumTape() as tape:
QubitStateVector(self.parameters[0], wires=self.wires)

return tape

# pad is replaced with the more verbose pad_with
if pad is not None:
warnings.warn(
"The pad argument will be replaced by the pad_with option in future versions of PennyLane.",
PendingDeprecationWarning,
)
if pad_with is None:
pad_with = pad
@staticmethod
def _preprocess(features, wires, pad_with, normalize):
"""Validate and pre-process inputs as follows:

features = _preprocess(features, wires, pad_with, normalize)
* Check that the features tensor is one-dimensional.
* If pad_with is None, check that the first dimension of the features tensor
has length :math:`2^n` where :math:`n` is the number of qubits. Else check that the
first dimension of the features tensor is not larger than :math:`2^n` and pad features with value if necessary.
* If normalize is false, check that first dimension of features is normalised to one. Else, normalise the
features tensor.
"""

shape = qml.math.shape(features)

# check shape
if len(shape) != 1:
raise ValueError(f"Features must be a one-dimensional tensor; got shape {shape}.")

n_features = shape[0]
if pad_with is None and n_features != 2 ** len(wires):
raise ValueError(
f"Features must be of length {2 ** len(wires)}; got length {n_features}. "
f"Use the 'pad' argument for automated padding."
)

if pad_with is not None and n_features > 2 ** len(wires):
raise ValueError(
f"Features must be of length {2 ** len(wires)} or "
f"smaller to be padded; got length {n_features}."
)

QubitStateVector(features, wires=wires)
# pad
if pad_with is not None and n_features < 2 ** len(wires):
padding = [pad_with] * (2 ** len(wires) - n_features)
features = qml.math.concatenate([features, padding], axis=0)

# normalize
norm = qml.math.sum(qml.math.abs(features) ** 2)

if not qml.math.allclose(norm, 1.0, atol=TOLERANCE):
if normalize or pad_with:
features = features / np.sqrt(norm)
else:
raise ValueError(
f"Features must be a vector of length 1.0; got length {norm}."
"Use 'normalize=True' to automatically normalize."
)

features = qml.math.cast(features, np.complex128)
Review comment: Added this line - it turns out that tf cannot deal with real inputs, but this is a typical use case for the embedding.

return features
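The `expand()` pattern used throughout this rewrite (record the decomposition on a tape inside a context manager) can be mimicked without PennyLane. `MiniTape` and the `(name, params, wires)` tuples below are hypothetical stand-ins for `qml.tape.QuantumTape` and real gate operations, shown only to illustrate the queuing mechanism:

```python
class MiniTape:
    """Context manager that collects operations queued inside it."""

    _active = None

    def __init__(self):
        self.operations = []

    def __enter__(self):
        MiniTape._active = self
        return self

    def __exit__(self, *exc):
        MiniTape._active = None
        return False


def queue(name, params, wires):
    # append to the currently active tape, if any
    if MiniTape._active is not None:
        MiniTape._active.operations.append((name, params, wires))


def expand(state, wires):
    # mirrors AmplitudeEmbedding.expand(): one state-preparation op on a tape
    with MiniTape() as tape:
        queue("QubitStateVector", state, wires)
    return tape


tape = expand([1, 0], wires=[0])
print(tape.operations)  # [('QubitStateVector', [1, 0], [0])]
```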
79 changes: 34 additions & 45 deletions pennylane/templates/embeddings/angle.py
@@ -16,40 +16,14 @@
"""
# pylint: disable-msg=too-many-branches,too-many-arguments,protected-access
import pennylane as qml
from pennylane.templates.decorator import template
from pennylane.templates import broadcast
from pennylane.wires import Wires
from pennylane.ops import RX, RY, RZ
from pennylane.operation import Operation, AnyWires


def _preprocess(features, wires):
"""Validate and pre-process inputs as follows:
ROT = {"X": RX, "Y": RY, "Z": RZ}
Review comment: Using a dict rather than the previous three if statements to check which gate is used.


* Check that the features tensor is one-dimensional.
* Check that the first dimension of the features tensor
has length :math:`n` or less, where :math:`n` is the number of qubits.

Args:
features (tensor_like): input features to pre-process
wires (Wires): wires that template acts on

Returns:
int: number of features
"""
shape = qml.math.shape(features)

if len(shape) != 1:
raise ValueError(f"Features must be a one-dimensional tensor; got shape {shape}.")

n_features = shape[0]
if n_features > len(wires):
raise ValueError(
f"Features must be of length {len(wires)} or less; got length {n_features}."
)
return n_features


@template
def AngleEmbedding(features, wires, rotation="X"):
class AngleEmbedding(Operation):
r"""
Encodes :math:`N` features into the rotation angles of :math:`n` qubits, where :math:`N \leq n`.

@@ -66,26 +40,41 @@ def AngleEmbedding(features, wires, rotation="X"):
``features`` than rotations, the circuit does not apply the remaining rotation gates.

Args:
features (array): input array of shape ``(N,)``, where N is the number of input features to embed,
features (tensor_like): input tensor of shape ``(N,)``, where N is the number of input features to embed,
with :math:`N\leq n`
wires (Iterable or Wires): Wires that the template acts on. Accepts an iterable of numbers or strings, or
a Wires object.
wires (Iterable): wires that the template acts on
rotation (str): type of rotations used

"""

wires = Wires(wires)
n_features = _preprocess(features, wires)
wires = wires[:n_features]
num_params = 1
num_wires = AnyWires
par_domain = "A"

def __init__(self, features, wires, rotation="X", do_queue=True):

if rotation not in ROT:
raise ValueError(f"Rotation option {rotation} not recognized.")
self.rotation = ROT[rotation]

shape = qml.math.shape(features)
if len(shape) != 1:
raise ValueError(f"Features must be a one-dimensional tensor; got shape {shape}.")
n_features = shape[0]
if n_features > len(wires):
raise ValueError(
f"Features must be of length {len(wires)} or less; got length {n_features}."
)

wires = wires[:n_features]
super().__init__(features, wires=wires, do_queue=do_queue)

def expand(self):

if rotation == "X":
broadcast(unitary=qml.RX, pattern="single", wires=wires, parameters=features)
features = self.parameters[0]
Review comment: I am inconsistent here, but thought the code is more readable if I give the first parameter a name?


elif rotation == "Y":
broadcast(unitary=qml.RY, pattern="single", wires=wires, parameters=features)
with qml.tape.QuantumTape() as tape:

elif rotation == "Z":
broadcast(unitary=qml.RZ, pattern="single", wires=wires, parameters=features)
for i in range(len(self.wires)):
Review comment: The reason why I iterate over a range is that some tensors (tf, if I recall correctly) don't like to be iterated over in something like `for t in tensor`.

self.rotation(features[i], wires=self.wires[i])

else:
raise ValueError(f"Rotation option {rotation} not recognized.")
return tape
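The dispatch-dict plus per-wire loop above can be sketched stand-alone. The function below records `(gate, angle, wire)` tuples instead of queuing real `qml.RX`/`qml.RY`/`qml.RZ` operations, and its name is illustrative only:

```python
# map the rotation label to a gate name, as the ROT dict does above
ROT = {"X": "RX", "Y": "RY", "Z": "RZ"}


def angle_embedding_ops(features, wires, rotation="X"):
    """Return one rotation instruction per feature, mirroring AngleEmbedding."""
    if rotation not in ROT:
        raise ValueError(f"Rotation option {rotation} not recognized.")
    if len(features) > len(wires):
        raise ValueError("Features must be of length len(wires) or less.")
    # unused trailing wires receive no gate
    wires = wires[: len(features)]
    return [(ROT[rotation], f, w) for f, w in zip(features, wires)]


print(angle_embedding_ops([0.1, 0.2], wires=[0, 1, 2], rotation="Y"))
# [('RY', 0.1, 0), ('RY', 0.2, 1)]
```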