Make GlobalPhase not differentiable #5620
Conversation
Thanks for this @Tarun-Kumar07! For the failures due to errors: that would be expected, and we should shift the measurement to expectation values. For the failures due to differing values: those are legitimately different results, so we can safely say we are getting wrong results in that case 😢 I'll investigate.
I left a couple of small comments and one major suggestion: could we set `GlobalPhase.grad_method = "F"`? This will produce unnecessary shifted tapes for expectation values and probabilities, but it will avoid wrong results when differentiating `qml.state` with `finite_diff` and `param_shift`.
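The distinction matters because the two-term parameter-shift rule is only derived for expectation values, while finite differences assume nothing beyond smoothness. A minimal sketch (plain Python with `cmath`, not PennyLane code) of how the shift rule goes wrong on a raw state amplitude that only picks up a global phase, while finite differences stay correct:

```python
import cmath


def state(theta):
    # Amplitude acquired from a global-phase-like gate exp(-i * theta / 2).
    return cmath.exp(-1j * theta / 2)


def param_shift(f, theta, shift=cmath.pi / 2):
    # Two-term shift rule; valid for expectation values, not raw amplitudes.
    return (f(theta + shift) - f(theta - shift)) / 2


def finite_diff(f, theta, h=1e-6):
    # Central finite difference; only assumes f is smooth.
    return (f(theta + h) - f(theta - h)) / (2 * h)


theta = 0.3
exact = -0.5j * state(theta)  # analytic derivative of state(theta)
print(abs(param_shift(state, theta) - exact))  # noticeably off
print(abs(finite_diff(state, theta) - exact))  # essentially exact
```

The shift-rule result has the wrong magnitude on the amplitude (a factor of sin(π/2·1/2) instead of 1/2), which is the kind of silent error the `grad_method = "F"` suggestion avoids for `qml.state`.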
tests/templates/test_state_preparations/test_mottonen_state_prep.py (outdated; resolved)
@Tarun-Kumar07 @albi3ro Not sure you got to this yet, but it seems that the decomposition of those state preparation methods handles special parameter values differently than others. This makes the derivative wrong at those special values. Basically, the decomposition does something like the following:

```python
def compute_decomposition(theta, wires):
    if not qml.math.is_abstract(theta) and qml.math.isclose(theta, 0):
        return []
    return [qml.RZ(theta, wires)]
```

This is correct, but it does not have the correct parameter-shift derivative at 0. It looks like an independent bug to me, and one that could theoretically be hiding across the codebase for other ops as well.
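As a toy illustration of why skipping gates at a special value breaks the shift rule (plain Python with a hypothetical `RY` gate and expectation helper, not PennyLane's actual API): if the decomposition built at the special value records no parameters, there is nothing to shift, and the computed derivative collapses to zero even when the true derivative is not.

```python
import math


def compute_decomposition(theta):
    # Toy version of the pattern above: skip the gate at the special value.
    if math.isclose(theta, 0.0):
        return []                       # empty decomposition at theta == 0
    return [("RY", theta)]


def expval_x(ops):
    # <X> after the decomposition: identity circuit gives 0,
    # a single RY(theta) gives sin(theta).
    return math.sin(ops[0][1]) if ops else 0.0


def shift_rule_grad(theta):
    # Differentiate by shifting the parameters recorded in the decomposition.
    ops = compute_decomposition(theta)
    if not ops:
        return 0.0                      # nothing recorded -> derivative 0
    plus = expval_x([("RY", ops[0][1] + math.pi / 2)])
    minus = expval_x([("RY", ops[0][1] - math.pi / 2)])
    return (plus - minus) / 2


print(shift_rule_grad(1e-4))  # close to the true derivative cos(0) = 1
print(shift_rule_grad(0.0))   # 0.0, although the true derivative is 1.0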
@Tarun-Kumar07 Sorry for taking so long with this! We decided to move ahead with this PR as you originally drafted it (with
Hey @dwierichs, once PR #5774 is merged I will revert the changes to
**Context:** The decomposition of `MottonenStatePreparation` skips some gates for special parameter values/input states. See the linked issue for details.

**Description of the Change:** This PR introduces a check for differentiability so that the gates are only skipped when no derivatives are being computed. Note that this does *not* fix the non-differentiability at other special parameter points, which is also referenced in #5715 and is already warned against in the docs. Also, the linked issue is about multiple operations, and we only address `MottonenStatePreparation` here.

**Benefits:** Fixes parts of #5715. Unblocks #5620.

**Possible Drawbacks:**

**Related GitHub Issues:** #5715
@Tarun-Kumar07 It is merged :)
Hi @Tarun-Kumar07,
Hi @dwierichs, I am currently tied up until August 17th. After that, I will be able to work on this. Thank you for understanding. |
Thanks @Tarun-Kumar07. |
Approving the first part of this PR. I modified it slightly, which will need approval by someone else.
Codecov Report: All modified and coverable lines are covered by tests ✅

```diff
@@            Coverage Diff            @@
##           master    #5620    +/-   ##
========================================
  Coverage   99.65%   99.65%
========================================
  Files         430      430
  Lines       41505    41210     -295
========================================
- Hits        41362    41069     -293
+ Misses        143      141       -2
```

☔ View full report in Codecov by Sentry.
Co-authored-by: Thomas R. Bromley <49409390+trbromley@users.noreply.github.com>
Thank you so much for this contribution @Tarun-Kumar07!! 🚀
**Context:** When using the following state preparation methods (`AmplitudeEmbedding`, `StatePrep`, `MottonenStatePreparation`) with `jit` and `grad`, the error `ValueError: need at least one array to stack` was encountered.

**Description of the Change:** All state preparation strategies used `GlobalPhase` under the hood, which caused the above error. After this PR, `GlobalPhase` may not be differentiable anymore, as its `grad_method` is set to `None`.

**Benefits:**

**Possible Drawbacks:**

**Related GitHub Issues:** Fixes #5541.
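The reported error itself comes from stacking an empty list of shifted-tape results. A minimal reproduction of just the stacking step (assuming only that NumPy is the array backend; this is a sketch, not PennyLane's actual code path):

```python
import numpy as np

# A shift-rule gradient transform builds one shifted tape per trainable
# parameter. If every parameter belongs to an op without a shift rule,
# the list of results to combine is empty, and stacking it raises the
# error reported in #5541.
shifted_results = []  # no shifted tapes were generated
try:
    np.stack(shifted_results)
except ValueError as err:
    print(err)  # need at least one array to stack
```

Setting `grad_method = None` on `GlobalPhase` tells the gradient machinery up front that the parameter has no shift rule, so the empty stack is never attempted.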