Allow specifying a subset of parameters to estimate the jacobian by passing the `argnum` kwarg #1371

Conversation
Hello. You may have forgotten to update the changelog!
Codecov Report

```
@@           Coverage Diff           @@
##           master    #1371   +/-  ##
=======================================
  Coverage   98.19%   98.19%
=======================================
  Files         157      157
  Lines       11713    11729    +16
=======================================
+ Hits        11501    11517    +16
  Misses        212      212
```

Continue to review full report at Codecov.
tests/tape/test_jacobian_tape.py
Outdated

```
@@ -454,6 +482,55 @@ def test_single_expectation_value(self, tol):
        expected = np.array([[-np.sin(y) * np.sin(x), np.cos(y) * np.cos(x)]])
        assert np.allclose(res, expected, atol=tol, rtol=0)

    def test_single_expectation_value_with_num_params_all(self, tol):
```
This and the following test reuse the test_single_expectation_value case to make sure that parameter selection works correctly.

Maybe instead of providing
Maybe we could instead put this functionality into `_grad_method_validation`? Non-selected parameters would simply get their `grad_method` set to `"0"`. This would keep the logic in `tape.jacobian` simpler for an already complicated function.
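A rough sketch of that suggestion (the helper name and the `"0"` sentinel are illustrative assumptions based on this comment, not PennyLane's actual implementation):

```python
def mark_unselected_params(diff_methods, argnum):
    # Non-selected parameters get their gradient method set to "0",
    # so downstream jacobian logic would skip them automatically.
    if argnum is None:
        return list(diff_methods)
    return [m if i in argnum else "0" for i, m in enumerate(diff_methods)]
```

For example, `mark_unselected_params(["A", "F", "A"], [1])` keeps only the method of parameter 1 and zeroes out the rest.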
pennylane/tape/jacobian_tape.py
Outdated

```python
if num_params is None:
    return enumerate(diff_methods)

if len(diff_methods) < num_params or num_params < 1:
```
I suggest breaking these two cases into separate warnings. Then, when `len(diff_methods) < num_params` we can return `enumerate(diff_methods)` as demonstrated, but return an empty array when `num_params < 1`. I think that would be more intuitive behaviour.
Fair point! With the changed logic, the first case no longer makes sense, as a new error is raised instead. The second condition is treated separately and returns an empty list.
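The changed logic described here could look roughly like the following (a hypothetical sketch with assumed names, not the PR's code):

```python
import random


def select_methods(diff_methods, num_params):
    # All parameters selected when num_params is unspecified.
    if num_params is None:
        return list(enumerate(diff_methods))
    # New behaviour: asking for more parameters than exist is an error.
    if num_params > len(diff_methods):
        raise ValueError("num_params exceeds the number of trainable parameters")
    # Asking for fewer than one parameter yields an empty selection.
    if num_params < 1:
        return []
    return random.sample(list(enumerate(diff_methods)), k=num_params)
```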
pennylane/tape/jacobian_tape.py
Outdated

```
@@ -393,6 +395,39 @@ def hessian_pd(self, i, j, params, **options):
        """
        raise NotImplementedError

    @staticmethod
    def _choose_params_with_methods(diff_methods, num_params):
```
Should there be a way to specify the seed for reproducibility?
Could we do that via `random.seed(my_seed)` in the test? Note that the `random` library was used for sampling here (though I'm open to changing this if another approach, or using `numpy`, would be preferred).
pennylane/tape/jacobian_tape.py
Outdated

```python
            computing the jacobian. Specifies the number of parameters to sample.

    Returns:
        object or list: map of the trainable parameter indices and
```
```diff
-        object or list: map of the trainable parameter indices and
+        generator or object or list: map of the trainable parameter indices and
```
Was this a suggestion for the `enumerate` case? I'm changing `object` to `enumerate` here, though I'm not sure if we also have a `generator` in other cases.
Thanks @antalszava! Great tests, and I'm happy with this PR in its current form (once @albi3ro's comments are addressed).

@albi3ro makes a good point regarding choosing the random number of parameters higher up in the stack. Currently, the `jacobian` method computes the derivative based on `tape.trainable_params`, which is set by the interface. This is worth considering, but it should not block merging this PR; we can continue to improve this functionality in later PRs.
```
@@ -2,6 +2,9 @@

<h3>New features since last release</h3>

* The number of parameters used to estimate the jacobian can now be specified.
  [(#1371)](https://github.com/PennyLaneAI/pennylane/pull/1371)
```
I suggest adding an example here, as this is quite a user-facing feature!
Sure! Added
Co-authored-by: Christina Lee <christina@xanadu.ai>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Thank you for the review comments! 🙂 Not sure if
Reading this right now, I'm confused about "number of parameters" versus "which parameters". `argnum` to me connotes the usage in `qml.grad`. Before I continue to review, I'd like that distinction to be clearer.
Hi @albi3ro, with the latest behaviour of the code, I would think that we have the same behaviour as with `qml.grad`. Let me know if the confusion comes from the fact that the PR name, changelog, etc. still use the "number of parameters" phrasing (I can update those places).
Mostly, the PR just has some holdovers from the previous functionality.

Also, I still think this functionality merges well with `_grad_method_validation`.
pennylane/tape/jacobian_tape.py
Outdated

```python
diff_methods_to_sample = map(diff_methods.__getitem__, argnum)
mapped_diff_methods = zip(argnum, diff_methods_to_sample)

return random.sample(list(mapped_diff_methods), k=num_params)
```
So we are still selecting a random sample? But `num_params` is the length of `argnum`, and thus of `mapped_diff_methods`, so we are randomly sampling all of them? Is this just permuting their order?
Oh, very good point! Removed the sampling; indeed, it was a leftover from the previous behaviour.
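Without the sampling, the selection reduces to pairing each requested index with its differentiation method, e.g. (illustrative values, not the PR's code):

```python
diff_methods = ["A", "F", "A"]  # hypothetical per-parameter gradient methods
argnum = [0, 2]                 # indices the user asked to differentiate

# pair each selected parameter index with its gradient method; no sampling
selected = list(zip(argnum, (diff_methods[i] for i in argnum)))
```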
pennylane/tape/jacobian_tape.py
Outdated

```python
if argnum is None:
    return enumerate(diff_methods)

if not isinstance(argnum, list):
```
Maybe more specifically check `isinstance(argnum, int)`? Since the options for `argnum` are `int`, `list`, or `None`, and we already returned if it is `None`, if it's not a list then it's an int. I think...
Updated
Are you sure that's updated?
```python
if not isinstance(argnum, list):
    argnum = [argnum]

if not all(ind in self.trainable_params for ind in argnum):
```
For `argnum` in `qml.grad`, the specification of `argnum` overrides any `requires_grad` information. Should we be able to override and make a non-trainable parameter trainable via `argnum`?
Good point; after our chat it seems that we'll not consider this (also, it seems that `qml.grad` doesn't cover all cases here).
pennylane/tape/jacobian_tape.py
Outdated

```python
            with respect to. When there are fewer parameters specified than the
            total number of trainable parameters, the jacobian is being estimated by
            sampling ``argnum`` many parameters from the set of trainable parameters.
```
Is this sampling behaviour still the case? Does this docstring need to be updated?
Good point, removed that part.
tests/tape/test_jacobian_tape.py
Outdated

```python
res = res.flatten()
expected = expected.flatten()

assert any(np.allclose(r, e, atol=tol, rtol=0) for r, e in zip(res, expected))
```
This is deterministic now, so instead have `expected = np.array([[0, np.cos(y)*np.cos(x)]])`?
Great point! Updated
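As a sanity check on those expected values: using f(x, y) = cos(x)·sin(y) as a stand-in for the circuit's expectation value (it matches the analytic Jacobian quoted in the test), central finite differences recover both entries, and with `argnum=[1]` the non-selected entry stays zero:

```python
import math

x, y = 0.3, 0.7
h = 1e-6


def f(x, y):
    # stand-in expectation value consistent with the test's analytic Jacobian
    return math.cos(x) * math.sin(y)


dx = (f(x + h, y) - f(x - h, y)) / (2 * h)  # ≈ -sin(x)·sin(y)
dy = (f(x, y + h) - f(x, y - h)) / (2 * h)  # ≈ cos(x)·cos(y)

# with argnum=[1] only the second entry is computed; the first stays 0
expected = [0.0, math.cos(y) * math.cos(x)]
```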
@albi3ro thank you for the comments and suggestions here! Great catches. They should be updated now.
Thanks for fixing all the things.

I made a comment about the `if not isinstance(argnum, list):` line, but that's not blocking.
Context:
#958

Description of the Change:
Adds the `argnum` keyword argument to the `JacobianTape.jacobian` method, inspired by the `argnum` kwarg of the `qml.grad` function. Using this keyword argument, the trainable parameters to use for estimating the jacobian can be specified. The logic is organized into a separate private method to allow easy unit testing and to prepare for future extensions.

Note: as this is an addition to the `JacobianTape`, this option only applies to the differentiation methods that depend on such an object and its `jacobian` method: `"parameter-shift"`, `"finite-diff"` and `"reversible"`.

Benefits:
Jacobians can now be estimated using only a subset of the trainable parameters.

Possible Drawbacks:
N/A

Related GitHub Issues:
Closes #958
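The overall idea can be illustrated without PennyLane: a plain finite-difference gradient routine that only evaluates the entries listed in `argnum` and leaves the rest at zero (a hedged sketch under assumed names, not the PR's implementation):

```python
import math


def finite_diff_gradient(f, params, argnum=None, h=1e-6):
    """Estimate the gradient of scalar f at params via central
    differences, only for the indices in argnum (all if None).
    Non-selected entries stay at 0.0."""
    n = len(params)
    grad = [0.0] * n
    indices = range(n) if argnum is None else argnum
    for i in indices:
        plus = list(params)
        plus[i] += h
        minus = list(params)
        minus[i] -= h
        grad[i] = (f(plus) - f(minus)) / (2 * h)
    return grad


# only parameter 1 is estimated; entry 0 stays zero
x, y = 0.3, 0.7
grad = finite_diff_gradient(lambda p: math.cos(p[0]) * math.sin(p[1]), [x, y], argnum=[1])
```

Selecting a subset this way saves one function evaluation pair per skipped parameter, which is the motivation for the `argnum` kwarg.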