added multi_dispatch decorator #2084
Conversation
I noticed that in its current state, when I clone the original pennylane repo, install and just run
Thanks @Qottmann, the new PR diff looks good :) If you are happy with it, I can begin a review.
🤔 This makes me wonder if this is due to a version incompatibility. What version of Black do you currently have installed? Locally, I have

$ black --version
black, 21.12b0 (compiled: no)
$ black -l 100 tests pennylane
All done! ✨ 🍰 ✨
431 files left unchanged.

Perhaps this may be solved by upgrading your Black version?
Codecov Report

@@           Coverage Diff           @@
##           master    #2084   +/- ##
=======================================
  Coverage   99.18%   99.18%
=======================================
  Files         226      226
  Lines       17373    17396    +23
=======================================
+ Hits        17232    17255    +23
  Misses        141      141

Continue to review full report at Codecov.
Indeed, I had an old version of black. If it is okay, I added black to the requirements; from my side, it is ready for review then :)
doc/requirements.txt
Outdated
@@ -21,3 +21,4 @@ tensornetwork==0.3
toml
torch==1.9.0+cpu
torchvision==0.10.0+cpu
black>=21
@Qottmann it's a good idea to include this! Better though to place it in the requirements.txt file, as this one seems to prevent the documentation from building 🤔 I will remove it for now, just so that the documentation builds correctly.
black>=21
Ah right! I added it to this requirements.txt. Regarding the remaining error: do I understand it right that it is complaining that the case where argnum and tensor_list are given as integers is not tested? I can modify the tests to include those cases then!
Thanks @Qottmann! This is a great contribution, and will make adding framework-agnostic functions to PennyLane much easier going forward.
I've gone through and left some comments and suggestions, but it is looking in great shape --- I think only more 'administrative' work is required to get this merge-ready, including fixing up the docstring and adding a few more tests.
do I understand it right, that it is complaining that the case that argnum and tensor_list are given as integers, is not tested? I can modify the test to include those tests then!
Yes, that's correct - we have a coverage check that verifies that all new code is adequately tested 🙂 While your current test is testing part of the new logic, it is likely not testing all logical paths and all input types.
doc/releases/changelog-dev.md
Outdated
* Added `multi_dispatch` decorator that helps ease the definition of new functions.
  [(#2082)](https://github.com/PennyLaneAI/pennylane/pull/2084)
It would be good to add a short description and code example here to help explain to readers the importance/impact of this change 🙂
The reason this tends to be important is that the readers of the changelog are a very specific demographic --- it includes both regular users, who are likely to be more aware of changes occurring, but also non-regular users who maybe only interact with PennyLane once in a while by reading the changelog.
So when writing the changelog entries, we often try to think --- 'how can we communicate this change to a skim-reader in a succinct manner?'
Note that this approach is a bit unique to PL; quite a few OSS projects tend to have simpler changelogs! The reason we take this approach is that we often use our changelog to advertise the new releases.
In this case, you could likely copy and adapt the intro to the multi_dispatch docstring.
pennylane/math/multi_dispatch.py
Outdated
def multi_dispatch(argnum=None, tensor_list=None):
    """Decorater to dispatch arguments handled by the interface.
Here is a link to the rendered version: https://pennylane--2084.org.readthedocs.build/en/2084/code/api/pennylane.math.multi_dispatch.html It looks really nice!
tests/math/test_multi_disptach.py
Outdated
([1.0, 0.0], [2.0, 3.0]),
onp.array([[1.0, 0.0], [2.0, 3.0]]),
np.array([[1.0, 0.0], [2.0, 3.0]]),
# torch.tensor([[1.,0.],[2.,3.]]),
How come this one is commented out?
Co-authored-by: Josh Izaac <josh146@gmail.com>
@josh146 I committed your suggestions.
Thanks @Qottmann! Very nice documentation, and great to see the additional tests. I have spotted a potential bug in the implementation, but I believe you might have also noticed it, as I just saw a GitHub notification :)
pennylane/math/multi_dispatch.py
Outdated
>>> @math.multi_dispatch(argnum=0,tensor_list=0)
>>> def custom_function(values, like, coefficient=10):
>>> # values is a list of vectors
>>> # like can force the interface (optional)
>>> return coefficient * np.sum([math.dot(v,v) for v in values])
Suggested change:

>>> @qml.math.multi_dispatch(argnum=0, tensor_list=0)
... def custom_function(values, like, coefficient=10):
...     # values is a list of vectors
...     # like can force the interface (optional)
...     return coefficient * np.sum([math.dot(v, v) for v in values])
In Python console ("pycon") syntax, ">>>" precedes a new command, "..." indicates a continuation line of a command, and output lines are not prefixed by anything.
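As a self-contained illustration of these pycon conventions, Python's doctest module only accepts examples that follow them exactly. The scaled_sum function below is hypothetical, not part of PennyLane:

```python
import doctest

def scaled_sum(values, coefficient=10):
    """Multiply the sum of ``values`` by ``coefficient``.

    >>> scaled_sum([1, 2, 3],
    ...            coefficient=2)
    12
    """
    return coefficient * sum(values)

# doctest parses ">>>" as a new command and "..." as a continuation line;
# the expected-output line ("12") carries no prefix at all.
result = doctest.testmod()
print(result.failed)  # 0 when the example follows pycon conventions
```

Using ">>>" on every line (as in the original docstring above) would make doctest treat each line as a separate statement and report a failure.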
It might be nice to include an example of branching based on like, e.g.,
@qml.math.multi_dispatch(argnum=0, tensor_list=0)
def custom_function(values, like, coefficient=10):
squares = sum([qml.math.dot(v, v) for v in values])
if like == "tensorflow":
...
return 1j * coefficient * squares
just to show what it looks like for developers. Unfortunately, I cannot think of an example that doesn't look very contrived!
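A dependency-free sketch of what such branching looks like in practice; the "tensorflow" branch below is purely illustrative and does nothing real:

```python
# Hypothetical framework-agnostic function: `like` names the interface the
# decorator detected, and interface-specific handling can branch on it.
def custom_norm(values, like="numpy", coefficient=10):
    squares = sum(v * v for v in values)
    if like == "tensorflow":
        # interface-specific handling (e.g. casting) would go here
        pass
    return coefficient * squares

print(custom_norm([1.0, 2.0]))  # 50.0
```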
Thanks for the comments and corrections!
Unfortunately, I cannot think of an example that doesn't look very contrived!
I also don't know of one off the top of my head. In math.dot() there are exceptions for torch and tensorflow, but I think that example may be a bit too convoluted to put there. For now, I just left it with an empty if clause and a comment in the docstring that this is a possibility.
Yes, I also had a look at existing functions to see if there were any that would be nice to use as an example, but all were either too complicated or too specific 😆
pennylane/math/multi_dispatch.py
Outdated
def decorator(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        argnums = argnum or list(range(len(args)))
@Qottmann I think there might be a bug here (and likewise with tensor_lists below!). The issue is that argnum=0 and tensor_list=0 are both valid inputs, but Python treats 0 as False, and so the boolean statement will fall back to the default argument in both cases!
This is a sign that more tests should be added, to ensure these edge cases work as expected.
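A minimal standalone demonstration of this pitfall and one possible fix (hypothetical function names, not the actual PennyLane code):

```python
def buggy_default(argnum, n_args):
    # `argnum or default` falls back to the default whenever argnum is
    # falsy -- which wrongly includes the perfectly valid input argnum=0.
    return argnum or list(range(n_args))

def fixed_default(argnum, n_args):
    # Comparing against None treats 0 as a legitimate value.
    return argnum if argnum is not None else list(range(n_args))

print(buggy_default(0, 3))     # [0, 1, 2] -- bug: the 0 was silently discarded
print(fixed_default(0, 3))     # 0         -- intended behaviour
print(fixed_default(None, 3))  # [0, 1, 2] -- default still applies for None
```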
def tensordot(x, like, axes=None):
    return np.tensordot(x[0], x[1], axes=axes)

assert fn.allequal(tensordot(x, axes=(0, 0)).numpy(), 2)
very nice!
tests/math/test_multi_disptach.py
Outdated
def test_multi_dispatch_decorate2(t1, t2):
    """Test decorating a standard numpy function for PennyLane, this time using 2 separate inputs"""

    @fn.multi_dispatch(argnum=[0, 1], tensor_list=None)
A good idea to test different values of argnum and tensor_list :) It might be good to also have a test that doesn't set either of these?
Indeed, this is now done in test_multi_dispatch_decorate2!
Co-authored-by: Josh Izaac <josh146@gmail.com>
if not isinstance(argnums, Sequence):
    argnums = [argnums]
if not isinstance(tensor_lists, Sequence):
    tensor_lists = [tensor_lists]
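The normalization above can be sketched as a standalone helper (hypothetical normalize function, not the PennyLane implementation):

```python
from collections.abc import Sequence

def normalize(arg):
    # Wrap a bare scalar into a list; leave sequences as they are, so
    # downstream code can always iterate uniformly.
    # Caveat: strings are also Sequences and would pass through unwrapped,
    # which is fine here where arguments are ints or lists of ints.
    if not isinstance(arg, Sequence):
        return [arg]
    return list(arg)

print(normalize(0))       # [0]
print(normalize([0, 2]))  # [0, 2]
```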
Nice 💯
Thanks @Qottmann for fixing that final bug, this looks ready from my end now for merging in 💯 💪
Co-authored-by: Josh Izaac <josh146@gmail.com>
Great! From my end too, it is good to go. I committed your suggestions and all tests seem to pass :)
Context:
Added multi_dispatch decorator to help make new function definitions easier in terms of variable interfaces.

Description of the Change:
Added the function multi_dispatch in multi_dispatch.py.
Note that this is a reopening of #2082 after encountering some git issues.

Benefits:
Handles arguments for new functions that need to be dispatched to an appropriate interface. Makes the interface argument "like" optional, as it utilizes _multi_dispatch to automatically recognize the interface based on the tensor objects.

Possible Drawbacks:
Decorators are confusing to some users.

Related GitHub Issues:
closes #1820