
Fixes bug in tape.set_parameters() #923

Merged
5 commits merged on Nov 24, 2020
Changes from 4 commits
36 changes: 21 additions & 15 deletions .github/CHANGELOG.md
@@ -2,14 +2,14 @@

<h3>New features since last release</h3>

- * The ``ExpvalCost`` class (previously ``VQECost``) now provides observable optimization using the
- ``optimize`` argument, resulting in potentially fewer device executions.
+ * The `ExpvalCost` class (previously `VQECost`) now provides observable optimization using the
+ `optimize` argument, resulting in potentially fewer device executions.
  [(#902)](https://github.com/PennyLaneAI/pennylane/pull/902)

> **Member Author:** A result of my text editor fixing incorrect markdown syntax!

This is achieved by separating the observables composing the Hamiltonian into qubit-wise
commuting groups and evaluating those groups on a single QNode using functionality from the
- ``grouping`` module:
+ `grouping` module:

```python
qml.enable_tape()
commuting_obs = [qml.PauliX(0), qml.PauliX(0) @ qml.PauliZ(1)]
@@ -23,9 +23,9 @@

params = qml.init.strong_ent_layers_uniform(3, 2)
```

Grouping these commuting observables leads to fewer device executions:

```pycon
>>> cost_opt(params)
>>> ex_opt = dev.num_executions
```

@@ -89,8 +89,8 @@
* The MultiRZ gate now has a defined generator.
[(#912)](https://github.com/PennyLaneAI/pennylane/pull/912)

- * The CRot gate now has a ``decomposition`` method, which breaks the gate down into rotations
- and CNOT gates. This allows ``CRot`` to be used on devices that do not natively support it.
+ * The CRot gate now has a `decomposition` method, which breaks the gate down into rotations
+ and CNOT gates. This allows `CRot` to be used on devices that do not natively support it.
[(#908)](https://github.com/PennyLaneAI/pennylane/pull/908)

* QNodes in tape mode now support returning observables on the same wire if the observables are
@@ -99,7 +99,7 @@
transformed to the computational basis using a shared set of single-qubit rotations.
[(#882)](https://github.com/PennyLaneAI/pennylane/pull/882)

- The following shows how to return the Pauli words ``XX`` and ``XI``:
+ The following shows how to return the Pauli words `XX` and `XI`:

```python
qml.enable_tape()
```

@@ -269,8 +269,8 @@
- `qnn.TorchLayer` [(#865)](https://github.com/PennyLaneAI/pennylane/pull/865)
- `qaoa` module [(#905)](https://github.com/PennyLaneAI/pennylane/pull/905)

- * A new function, ``qml.refresh_devices()``, has been added, allowing PennyLane to
- rescan installed PennyLane plugins and refresh the device list. In addition, the ``qml.device``
+ * A new function, `qml.refresh_devices()`, has been added, allowing PennyLane to
+ rescan installed PennyLane plugins and refresh the device list. In addition, the `qml.device`
loader will attempt to refresh devices if the required plugin device cannot be found.
This will result in an improved experience if installing PennyLane and plugins within
a running Python session (for example, on Google Colab), and avoid the need to
@@ -296,8 +296,8 @@

<h3>Breaking changes</h3>

- - The ``VQECost`` class has been renamed to ``ExpvalCost`` to reflect its general applicability
- beyond VQE. Use of ``VQECost`` is still possible but will result in a deprecation warning.
+ - The `VQECost` class has been renamed to `ExpvalCost` to reflect its general applicability
+ beyond VQE. Use of `VQECost` is still possible but will result in a deprecation warning.
[(#913)](https://github.com/PennyLaneAI/pennylane/pull/913)
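As a usage illustration of the rename, a rough sketch is shown below; the Hamiltonian and ansatz are invented for the example and are not taken from this changelog:

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

# Illustrative Hamiltonian and ansatz, invented for this sketch.
H = qml.Hamiltonian([1.0, 0.5], [qml.PauliZ(0), qml.PauliZ(0) @ qml.PauliZ(1)])

def ansatz(params, wires, **kwargs):
    qml.templates.StronglyEntanglingLayers(params, wires=wires)

cost = qml.ExpvalCost(ansatz, H, dev)   # new name
legacy = qml.VQECost(ansatz, H, dev)    # still accepted, but emits a deprecation warning
```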

<h3>Documentation</h3>
@@ -329,10 +329,16 @@
* Fixes a bug whereby binary Python operators were not properly propagating the `requires_grad`
attribute to the output tensor.
[(#889)](https://github.com/PennyLaneAI/pennylane/pull/889)

* Fixes a bug which prevents `TorchLayer` from doing `backward` when CUDA is enabled.
[(#899)](https://github.com/PennyLaneAI/pennylane/pull/899)

+ * Fixes a bug in `QuantumTape.set_parameters()`. The previous implementation assumed
+ that the `self.trainable_params` set would always be iterated over in increasing integer
+ order. However, this is not guaranteed behaviour and can lead to incorrect tape parameters
+ being set.
+ [(#923)](https://github.com/PennyLaneAI/pennylane/pull/923)
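To make the iteration-order pitfall behind this fix concrete, here is a standalone sketch in plain Python (no PennyLane involved); the indices are chosen so that CPython's set layout differs from sorted order:

```python
# Standalone sketch of the failure mode: Python sets iterate in hash-table
# order, not numerical order, so zipping a set of parameter indices with an
# ordered list of new values can pair them up incorrectly.
trainable_params = {8, 1}   # parameter indices marked as trainable
new_values = [0.1, 0.2]     # intended: index 1 -> 0.1, index 8 -> 0.2

print(dict(zip(trainable_params, new_values)))
# CPython prints {8: 0.1, 1: 0.2}: the values land on the wrong indices.

print(dict(zip(sorted(trainable_params), new_values)))
# {1: 0.1, 8: 0.2}: sorting first restores the intended pairing.
```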

<h3>Contributors</h3>

This release contains contributions from (in alphabetical order):
2 changes: 1 addition & 1 deletion pennylane/tape/tapes/tape.py
@@ -704,7 +704,7 @@ def set_parameters(self, params, trainable_only=True):

```diff
         [4, 1, 6]
         """
         if trainable_only:
-            iterator = zip(self.trainable_params, params)
+            iterator = zip(sorted(self.trainable_params), params)
```
> **Contributor (@antalszava):** How come the sortedness only matters here? Wouldn't we want the sorting to happen whenever the trainable parameters are queried, i.e. change the definition of the `trainable_params` property to return `sorted(self._trainable_params)`, instead of sorting only here?

> **Member Author:** Hey @antalszava, in fact this is the only place in the code base where the order of `self.trainable_params` matters. Other parts of the code base do iterate over `self.trainable_params`, but they do not zip it with ordered data like we do here.
>
> I suppose this was just me being cautious about overhead: rather than sorting and converting to a list everywhere `self.trainable_params` is accessed, I'm only doing it here to avoid the additional cost.

> **Contributor:** That's good to know, thanks! Yes, I also think the overhead aspect means the sorting should only happen where it is needed. One thing that could be nice is to leave a comment explaining why the sorting is needed here, but not elsewhere.

```diff
             required_length = self.num_params
         else:
             iterator = enumerate(params)
```
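For context, the alternative raised in the thread above (sorting inside the property itself) might look roughly like the following; this is a hypothetical sketch with a simplified stand-in class, not the change adopted in this PR:

```python
class TapeSketch:
    """Hypothetical stand-in for QuantumTape (not the change adopted in this PR)."""

    def __init__(self, trainable_params):
        self._trainable_params = set(trainable_params)

    @property
    def trainable_params(self):
        # Sort on every access: guarantees ordering everywhere the property is
        # used, at the cost of a sort (and list conversion) per query.
        return sorted(self._trainable_params)


print(TapeSketch({8, 1}).trainable_params)  # [1, 8]
```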
18 changes: 18 additions & 0 deletions tests/tape/tapes/test_tape.py
@@ -498,6 +498,24 @@ def test_setting_free_parameters(self, make_tape):

```diff
             params[4],
         ]

+    def test_setting_parameters_unordered(self, make_tape, monkeypatch):
+        """Test that an 'unordered' trainable_params set does not affect
+        the setting of parameter values"""
+        tape, params = make_tape
+        new_params = [-0.654, 0.3]
+
+        with monkeypatch.context() as m:
+            m.setattr(tape, "_trainable_params", {3, 1})
+            tape.set_parameters(new_params)
+
+        assert tape.get_parameters(trainable_only=False) == [
```
> **Contributor:** How about moving the assert into the monkeypatch context and querying `tape.get_parameters(trainable_only=True)`? I think it would make the test case a bit clearer.

> **Member Author:** Done!

```diff
+            params[0],
+            new_params[0],
+            params[2],
+            new_params[1],
+            params[4],
+        ]
+
     def test_setting_all_parameters(self, make_tape):
         """Test that all parameters are correctly modified after construction"""
         tape, params = make_tape
```
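The follow-up commit implementing the review suggestion is not among the four commits shown in this view. Presumably the revised test looks something like the sketch below (a method of the same test class, using the same `make_tape` fixture), with the assertion moved inside the monkeypatch context; the merged version may additionally query `tape.get_parameters(trainable_only=True)` as suggested, and its exact final form is not shown here.

```python
# Assumed shape of the revised test (not taken from the merged commit): the
# full parameter list is checked while the patched trainable_params set
# {3, 1} is still active, pinning each new value to its intended index.
def test_setting_parameters_unordered(self, make_tape, monkeypatch):
    """Test that an 'unordered' trainable_params set does not affect
    the setting of parameter values"""
    tape, params = make_tape
    new_params = [-0.654, 0.3]

    with monkeypatch.context() as m:
        m.setattr(tape, "_trainable_params", {3, 1})
        tape.set_parameters(new_params)

        assert tape.get_parameters(trainable_only=False) == [
            params[0],
            new_params[0],
            params[2],
            new_params[1],
            params[4],
        ]
```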