diff --git a/.github/CHANGELOG.md b/.github/CHANGELOG.md
index b7ab8329123..251942be308 100644
--- a/.github/CHANGELOG.md
+++ b/.github/CHANGELOG.md
@@ -2,14 +2,14 @@

New features since last release

-* The ``ExpvalCost`` class (previously ``VQECost``) now provides observable optimization using the
-  ``optimize`` argument, resulting in potentially fewer device executions.
+* The `ExpvalCost` class (previously `VQECost`) now provides observable optimization using the
+  `optimize` argument, resulting in potentially fewer device executions.
   [(#902)](https://github.com/PennyLaneAI/pennylane/pull/902)
-
+
   This is achieved by separating the observables composing the Hamiltonian into qubit-wise
   commuting groups and evaluating those groups on a single QNode using functionality from the
-  ``grouping`` module:
-
+  `grouping` module:
+
   ```python
   qml.enable_tape()
   commuting_obs = [qml.PauliX(0), qml.PauliX(0) @ qml.PauliZ(1)]
@@ -23,9 +23,9 @@
 
   params = qml.init.strong_ent_layers_uniform(3, 2)
   ```
-
+
   Grouping these commuting observables leads to fewer device executions:
-
+
   ```pycon
   >>> cost_opt(params)
   >>> ex_opt = dev.num_executions
@@ -89,8 +89,8 @@
 * The MultiRZ gate now has a defined generator.
   [(#912)](https://github.com/PennyLaneAI/pennylane/pull/912)
 
-* The CRot gate now has a ``decomposition`` method, which breaks the gate down into rotations
-  and CNOT gates. This allows ``CRot`` to be used on devices that do not natively support it.
+* The CRot gate now has a `decomposition` method, which breaks the gate down into rotations
+  and CNOT gates. This allows `CRot` to be used on devices that do not natively support it.
   [(#908)](https://github.com/PennyLaneAI/pennylane/pull/908)
 
 * QNodes in tape mode now support returning observables on the same wire if the observables are
@@ -99,7 +99,7 @@
   transformed to the computational basis using a shared set of single-qubit rotations.
   [(#882)](https://github.com/PennyLaneAI/pennylane/pull/882)
 
-  The following shows how to return the Pauli words ``XX`` and ``XI``:
+  The following shows how to return the Pauli words `XX` and `XI`:
 
   ```python
   qml.enable_tape()
@@ -269,8 +269,8 @@
   - `qnn.TorchLayer` [(#865)](https://github.com/PennyLaneAI/pennylane/pull/865)
   - `qaoa` module [(#905)](https://github.com/PennyLaneAI/pennylane/pull/905)
 
-* A new function, ``qml.refresh_devices()``, has been added, allowing PennyLane to
-  rescan installed PennyLane plugins and refresh the device list. In addition, the ``qml.device``
+* A new function, `qml.refresh_devices()`, has been added, allowing PennyLane to
+  rescan installed PennyLane plugins and refresh the device list. In addition, the `qml.device`
   loader will attempt to refresh devices if the required plugin device cannot be found.
   This will result in an improved experience if installing PennyLane and plugins within
   a running Python session (for example, on Google Colab), and avoid the need to
@@ -296,8 +296,8 @@

Breaking changes

-- The ``VQECost`` class has been renamed to ``ExpvalCost`` to reflect its general applicability
-  beyond VQE. Use of ``VQECost`` is still possible but will result in a deprecation warning.
+- The `VQECost` class has been renamed to `ExpvalCost` to reflect its general applicability
+  beyond VQE. Use of `VQECost` is still possible but will result in a deprecation warning.
   [(#913)](https://github.com/PennyLaneAI/pennylane/pull/913)

Documentation

@@ -329,10 +329,16 @@
 * Fixes a bug whereby binary Python operators were not properly propagating the `requires_grad`
   attribute to the output tensor.
   [(#889)](https://github.com/PennyLaneAI/pennylane/pull/889)
-
+
 * Fixes a bug which prevents `TorchLayer` from doing `backward` when CUDA is enabled.
   [(#899)](https://github.com/PennyLaneAI/pennylane/pull/899)
+* Fixes a bug in `QuantumTape.set_parameters()`. The previous implementation assumed
+  that the `self.trainable_params` set would always be iterated over in increasing integer
+  order. However, this is not guaranteed behaviour and can lead to the wrong tape
+  parameters being set.
+  [(#923)](https://github.com/PennyLaneAI/pennylane/pull/923)
+

Contributors

 This release contains contributions from (in alphabetical order):
diff --git a/pennylane/tape/tapes/tape.py b/pennylane/tape/tapes/tape.py
index dc53605a3db..3203772da7b 100644
--- a/pennylane/tape/tapes/tape.py
+++ b/pennylane/tape/tapes/tape.py
@@ -704,7 +704,7 @@ def set_parameters(self, params, trainable_only=True):
         [4, 1, 6]
         """
         if trainable_only:
-            iterator = zip(self.trainable_params, params)
+            iterator = zip(sorted(self.trainable_params), params)
             required_length = self.num_params
         else:
             iterator = enumerate(params)
diff --git a/tests/tape/tapes/test_tape.py b/tests/tape/tapes/test_tape.py
index 99554157e66..d5a253db43d 100644
--- a/tests/tape/tapes/test_tape.py
+++ b/tests/tape/tapes/test_tape.py
@@ -498,6 +498,29 @@ def test_setting_free_parameters(self, make_tape):
             params[4],
         ]
 
+    def test_setting_parameters_unordered(self, make_tape, monkeypatch):
+        """Test that an 'unordered' trainable_params set does not affect
+        the setting of parameter values"""
+        tape, params = make_tape
+        new_params = [-0.654, 0.3]
+
+        with monkeypatch.context() as m:
+            m.setattr(tape, "_trainable_params", {3, 1})
+            tape.set_parameters(new_params)
+
+            assert tape.get_parameters(trainable_only=True) == [
+                new_params[0],
+                new_params[1],
+            ]
+
+            assert tape.get_parameters(trainable_only=False) == [
+                params[0],
+                new_params[0],
+                params[2],
+                new_params[1],
+                params[4],
+            ]
+
     def test_setting_all_parameters(self, make_tape):
         """Test that all parameters are correctly modified after construction"""
         tape, params = make_tape
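Reviewer note, not part of the diff above: the following is a minimal standalone sketch of the failure mode that the `sorted(self.trainable_params)` change guards against. It does not use PennyLane; `toy_set_parameters` and the example index set are hypothetical stand-ins for `QuantumTape.set_parameters()` and the tape's `trainable_params`.

```python
# Hypothetical stand-in for QuantumTape.set_parameters(), illustrating why the
# trainable-parameter indices are sorted before being zipped with the new values.

def toy_set_parameters(all_params, trainable_indices, new_values):
    """Return a copy of all_params with new_values written at the trainable indices.

    new_values are supplied in increasing-index order, so the index set must be
    iterated in that same order; sorting makes this explicit instead of relying
    on the iteration order of a Python set, which is an implementation detail.
    """
    updated = list(all_params)
    for idx, value in zip(sorted(trainable_indices), new_values):
        updated[idx] = value
    return updated


params = [0.1, 0.2, 0.3, 0.4, 0.5]
trainable = {3, 1}  # mirrors the monkeypatched `_trainable_params` in the new test

print(toy_set_parameters(params, trainable, [-0.654, 0.3]))
# [0.1, -0.654, 0.3, 0.3, 0.5] -> index 1 receives -0.654, index 3 receives 0.3
```

Without the `sorted(...)` call, the mapping of values to indices would depend on how the set happens to iterate, which is the scenario the new `test_setting_parameters_unordered` test exercises.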