Fix qml.grad so that the returned gradient always matches the cost function return type if only a single argument is differentiated #1067
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@           Coverage Diff           @@
##           master    #1067   +/-   ##
=======================================
  Coverage   97.74%   97.74%
=======================================
  Files         153      153
  Lines       11584    11590    +6
=======================================
+ Hits        11323    11329    +6
  Misses        261      261

View full report in Codecov by Sentry.
if len(args) == 1:
    grad = (grad,)
We have to turn grad into a tuple because we're reversing what we did to argnum earlier, right?
Yep, the optimizer logic currently assumes grad is always iterable with respect to argument number. I figured it was easier to leave this logic intact and simply re-tuple the grad.
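A rough sketch of that assumption (the helper names below are illustrative, not the actual PennyLane optimizer source): the update step loops over one gradient entry per trainable argument, so a bare array has to be wrapped back into a tuple first.

```python
# Hypothetical sketch of the optimizer-side logic; not the real PennyLane code.
def compute_grad(grad_fn, args):
    grad = grad_fn(*args)  # with this PR: a single array when only one argument is differentiated
    if len(args) == 1:
        grad = (grad,)     # re-tuple so the per-argument update below keeps working
    return grad

def apply_grad(grad, args, stepsize=0.01):
    # one gradient entry is expected per argument, hence grad must be iterable
    return [arg - stepsize * g for arg, g in zip(args, grad)]
```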
Agreed.
Co-authored-by: Chase Roberts <chase@xanadu.ai>
Context:
I realised yesterday that autograd.grad behaves differently depending on whether the argnum argument is a sequence or an integer. We never take this into account in PennyLane, which can lead to gradients often being returned as a tuple of size 1.
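To make the difference concrete, here is a minimal standalone illustration (the cost function is made up for the example):

```python
import autograd
import autograd.numpy as np

def cost(x, y):
    return np.sin(x) * y

x, y = 0.5, 2.0

# Integer argnum: the gradient is returned directly, matching the argument's type.
g_int = autograd.grad(cost, argnum=0)(x, y)

# Sequence argnum, even of length 1: the gradient comes back wrapped in a tuple.
g_seq = autograd.grad(cost, argnum=[0])(x, y)

print(g_int)  # y * cos(x)
print(g_seq)  # (y * cos(x),)
```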
Description of the Change:
If the argnum argument is a sequence of length 1, only the contained integer is passed to autograd.grad.
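Roughly, the unwrapping could look like the sketch below (simplified, with a hypothetical wrapper name, not the literal diff):

```python
from autograd import grad as _autograd_grad

def _make_grad_fn(fun, argnum):
    # If argnum is a length-1 sequence, collapse it to its integer so that
    # autograd returns the gradient directly rather than a length-1 tuple.
    if isinstance(argnum, (list, tuple)) and len(argnum) == 1:
        argnum = argnum[0]
    return _autograd_grad(fun, argnum)
```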
Benefits:
The return of qml.grad should now always match the return type of the function being differentiated. A lot of superfluous [0] could be removed from the tests (e.g., qml.grad(circuit)(x)[0]).
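For instance, a hedged usage sketch (the device and circuit are illustrative, not taken from the test suite):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(x):
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

x = np.array(0.3, requires_grad=True)

# With this change, differentiating a single argument returns the gradient
# directly, matching the argument's type, so no trailing [0] is needed.
print(qml.grad(circuit)(x))
```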
Possible Drawbacks:
The optimizers have to add in the redundant tuple where needed so that the gradient remains iterable. But this is an easy change and leads to an intuitive UI, so maybe it's better overall? Some of the demos might need to be updated to remove unneeded [0] after gradient computations. But I think very few demos actually show the gradients.
Related GitHub Issues: n/a