Ensure that `MottonenStatePreparation` is fully differentiable #1361
The following discussion forum post provides a bug fix to get differentiability working with PyTorch: https://discuss.pennylane.ai/t/hybrid-network-not-differentiating/1079/3

In particular, here is the relevant git diff:

```diff
diff --git a/pennylane/templates/state_preparations/mottonen.py b/pennylane/templates/state_preparations/mottonen.py
index 860eeb80..a7a83544 100644
--- a/pennylane/templates/state_preparations/mottonen.py
+++ b/pennylane/templates/state_preparations/mottonen.py
@@ -203,6 +203,9 @@ def _get_alpha_y(a, n, k):
     with np.errstate(divide="ignore", invalid="ignore"):
         division = numerator / denominator
+    division = qml.math.cast(division, np.float64)
+    denominator = qml.math.cast(denominator, np.float64)
+
     division = qml.math.where(denominator != 0.0, division, 0.0)
     return 2 * qml.math.arcsin(qml.math.sqrt(division))
```
The fix was incorporated in #1400. One test that reproduced the bug was added, though we could extend that with the entire class of tests as suggested in the original description.
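The masking that surrounds the fixed lines is easiest to see outside of PennyLane. Below is a minimal NumPy-only sketch (the array values are made up for illustration) of the divide-by-zero handling in `_get_alpha_y`; the actual fix above additionally casts `division` and `denominator` to real `float64` via `qml.math.cast` so that the comparison and `arcsin` receive real-valued inputs in every interface:

```python
import numpy as np

# Hypothetical numerator/denominator values; the second entry is a 0/0
# case, which is what happens for zero-amplitude branches of the state.
numerator = np.array([1.0, 0.0])
denominator = np.array([2.0, 0.0])

with np.errstate(divide="ignore", invalid="ignore"):
    division = numerator / denominator  # second entry becomes nan

# Mirror the template's masking: undefined ratios are replaced by 0.0,
# so sqrt/arcsin receive well-defined inputs.
division = np.where(denominator != 0.0, division, 0.0)
alpha = 2 * np.arcsin(np.sqrt(division))
print(alpha)  # the 0/0 entry is masked to angle 0.0
```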
This issue has been tagged for contributions during unitaryHACK
Issue description
At the moment, the `MottonenStatePreparation` template, which prepares a quantum state whose amplitude vector encodes some specified input vector, is not fully differentiable. This is why the test cases for differentiability/interfaces are currently reduced to a trivial example.

The template should be fully differentiable with respect to `state_vector` in all interfaces (`tf`, `torch`, `autograd`/`numpy`, `jax`).

At the moment, differentiation in some interfaces does not work. This is likely due to the complicated input-processing pipelines in the template, which were rewritten with PennyLane's `math` library to be interface-agnostic, but which do not yet seem to work 100%. A detailed investigation of which cases work and which don't may be part of the solution to this issue.
Additional information
A minimum goal could be to copy the differentiation tests from another template (like this class of tests example) and make them pass. To come up with a circuit that one can compare the template against, one would have to figure out the decomposition (and how the gate angles depend on the input vector to the template) for a very simple case, which may be a bit tricky.
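For the single-qubit case, the decomposition can be worked out by hand: a normalized two-element state with nonnegative real entries `[a, b]` is prepared from |0⟩ by a single `RY(α)` with `α = 2 arcsin(b)`, which matches the `_get_alpha_y` formula. A NumPy-only sketch of this (function names here are illustrative helpers, not PennyLane API) that also finite-difference-checks that the angle varies smoothly with the input:

```python
import numpy as np

def mottonen_angle(state):
    # RY angle preparing a normalized 2-element nonnegative real state
    # from |0>; mirrors the single-qubit case of _get_alpha_y.
    division = state[1] ** 2 / (state[0] ** 2 + state[1] ** 2)
    return 2 * np.arcsin(np.sqrt(division))

def prepared_state(angle):
    # State produced by RY(angle) acting on |0>.
    return np.array([np.cos(angle / 2), np.sin(angle / 2)])

state = np.array([0.6, 0.8])
angle = mottonen_angle(state)
assert np.allclose(prepared_state(angle), state)

# Finite-difference gradient of the angle w.r.t. state[1]; for [a, b]
# with a > 0 the analytic value is 2a / (a**2 + b**2), here 1.2.
eps = 1e-6
bump = np.array([0.0, eps])
grad = (mottonen_angle(state + bump) - mottonen_angle(state - bump)) / (2 * eps)
print(grad)
```

A differentiation test in any interface could compare the template's gradient against this kind of analytically known angle for the simple case.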
`MottonenStatePreparation` is called by `AmplitudeEmbedding` and `QubitStateVector` if the device needs a decomposition. By extension, these two templates/operations should become differentiable on more devices.