
added multi_dispatch decorator #2084

Merged
merged 28 commits into from
Jan 17, 2022

Conversation

@Qottmann (Contributor) commented Jan 12, 2022

Context:
Added a multi_dispatch decorator to make it easier to define new functions that work across different tensor interfaces.

Description of the Change:
Added the function multi_dispatch in multi_dispatch.py
Note that this is a reopening of #2082 after encountering some git issues.

Benefits:
Handles arguments for new functions that need to be dispatched to an appropriate interface. Makes the interface argument "like" optional, as it uses _multi_dispatch to automatically recognize the interface from the tensor objects.

Possible Drawbacks:
Decorators are confusing to some users.

Related GitHub Issues:
closes #1820
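A rough sketch of what such a decorator can look like (illustrative only: the names and the interface-detection logic below are simplified stand-ins, not the actual PennyLane implementation):

```python
import functools

def multi_dispatch(argnum=None, tensor_list=None):
    """Simplified stand-in for a multi_dispatch-style decorator.

    Inspects the arguments selected by ``argnum`` and injects a ``like``
    keyword naming the detected interface. ``tensor_list`` handling is
    omitted in this sketch.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # Compare against None explicitly so argnum=0 stays a valid input.
            argnums = list(range(len(args))) if argnum is None else argnum
            if not isinstance(argnums, (list, tuple)):
                argnums = [argnums]
            # Toy interface detection based on each argument's module;
            # the real code would use pennylane.math's interface detection.
            interface = "numpy"
            for i in argnums:
                root = type(args[i]).__module__.split(".")[0]
                if root in ("torch", "tensorflow", "jax", "jaxlib"):
                    interface = root
                    break
            kwargs.setdefault("like", interface)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@multi_dispatch(argnum=0)
def custom_sum(values, like=None):
    # `like` is filled in automatically when not passed explicitly
    return like, sum(values)
```

Calling custom_sum([1.0, 2.0]) then returns ("numpy", 3.0), since plain Python lists fall back to the NumPy interface in this sketch, while an explicit like="torch" keyword is left untouched.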

@Qottmann (Contributor Author)

I noticed that in its current state, when I clone the original pennylane repo, install, and just run black -l 100 pennylane/, it reformats 31 files. I don't know if this is intended, but to avoid submitting 31 unintended changes I only ran Black on the files I have been working on.

@josh146 (Member) commented Jan 12, 2022

Thanks @Qottmann, the new PR diff looks good :) If you are happy with it, I can begin a review.

I noticed that in its current state, when I clone the original pennylane repo, install and just run black -l 100 pennylane/ it reformats 31 files.

🤔 This makes me wonder if this is due to a version incompatibility. What version of Black do you currently have installed? Locally, I have

$ black --version
black, 21.12b0 (compiled: no)
$ black -l 100 tests pennylane
All done! ✨ 🍰 ✨
431 files left unchanged.

Perhaps this may be solved by upgrading your Black version?

@codecov bot commented Jan 12, 2022

Codecov Report

Merging #2084 (ba7cb0d) into master (1275736) will increase coverage by 0.00%.
The diff coverage is 100.00%.

Impacted file tree graph

@@           Coverage Diff           @@
##           master    #2084   +/-   ##
=======================================
  Coverage   99.18%   99.18%           
=======================================
  Files         226      226           
  Lines       17373    17396   +23     
=======================================
+ Hits        17232    17255   +23     
  Misses        141      141           
Impacted Files Coverage Δ
pennylane/math/__init__.py 100.00% <ø> (ø)
pennylane/math/multi_dispatch.py 100.00% <100.00%> (ø)

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1275736...ba7cb0d. Read the comment docs.

@Qottmann (Contributor Author)

Indeed, I had an old version of Black:

$ black --version
black, version 19.10b0

With pip install --upgrade black I am now on

$ black --version
black, 21.12b0 (compiled: no)

If it is okay, I have added black>=21 to requirements.txt.

From my side, it is ready for review then :)

@@ -21,3 +21,4 @@ tensornetwork==0.3
toml
torch==1.9.0+cpu
torchvision==0.10.0+cpu
black>=21
Member:

@Qottmann it's a good idea to include this! It would be better, though, to place this in the requirements.txt file, as this one seems to break the documentation build 🤔 I will remove it for now, just so that the documentation builds correctly.

Suggested change
black>=21

Contributor Author:

Ah right! I added it to requirements.txt. Regarding the remaining error: do I understand correctly that it is complaining that the case where argnum and tensor_list are given as integers is not tested? I can modify the test to cover those cases then!

@josh146 (Member) left a comment:

Thanks @Qottmann! This is a great contribution, and will make adding framework-agnostic functions to PennyLane much easier going forward.

I've gone through and left some comments and suggestions, but it is looking in great shape --- I think only some more 'administrative' work is required to get this merge-ready, including fixing up the docstring and adding a few more tests.

do I understand it right, that it is complaining that the case that argnum and tensor_list are given as integers, is not tested? I can modify the test to include those tests then!

Yes, that's correct - we have a coverage check that verifies that all new code is adequately tested 🙂 While your current test is testing part of the new logic, it is likely not testing all logical paths and all input types.

pennylane/math/multi_dispatch.py (outdated, resolved)
Comment on lines 276 to 277
* Added `multi_dispatch` decorator that helps ease the definition of new functions.
[(#2084)](https://github.com/PennyLaneAI/pennylane/pull/2084)
Member:
It would be good here to add a short description and code example here to help explain to readers the importance/impact of this change 🙂

The reason that this tends to be important is that the readers of the changelog tend to be a very specific demographic --- it includes both regular users, who are likely to be more aware of changes occurring, but also non-regular users who may only interact with PennyLane once in a while by reading the changelog.

So when writing the changelog entries, we often try to think --- 'how can we communicate this change to a skim-reader in a succinct manner?'

Member:

Note that this approach is a bit unique to PL; quite a few OSS projects tend to have simpler changelogs! The reason we take this approach is that we often use our changelog to advertise the new releases.

In this case, you could likely copy and adapt the intro to the multi_dispatch docstring.

Comment on lines 86 to 87
def multi_dispatch(argnum=None, tensor_list=None):
"""Decorater to dispatch arguments handled by the interface.
pennylane/math/multi_dispatch.py (outdated, resolved)
pennylane/math/multi_dispatch.py (outdated, resolved)
pennylane/math/multi_dispatch.py (resolved)
([1.0, 0.0], [2.0, 3.0]),
onp.array([[1.0, 0.0], [2.0, 3.0]]),
np.array([[1.0, 0.0], [2.0, 3.0]]),
# torch.tensor([[1.,0.],[2.,3.]]),
Member:

How come this one is commented out?

tests/math/test_multi_disptach.py (resolved)
tests/math/test_multi_disptach.py (outdated, resolved)
tests/math/test_multi_disptach.py (resolved)
Qottmann and others added 8 commits January 13, 2022 16:24
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
@Qottmann (Contributor Author)

@josh146 I committed your suggestions and

  • expanded tests in tests/math/test_multi_dispatch.py
  • added an example of a custom function decorated by multi_dispatch() in the docstring of multi_dispatch. Is this in line with what you had in mind?

@josh146 (Member) left a comment:

Thanks @Qottmann! Very nice documentation, and great to see the additional tests. I have spotted a potential bug in the implementation, but I believe you might have also noticed it, as I just saw a GitHub notification :)

doc/releases/changelog-dev.md (outdated, resolved)
doc/releases/changelog-dev.md (outdated, resolved)
pennylane/math/multi_dispatch.py (outdated, resolved)
pennylane/math/multi_dispatch.py (resolved)
pennylane/math/multi_dispatch.py (outdated, resolved)
Comment on lines 134 to 138
>>> @math.multi_dispatch(argnum=0,tensor_list=0)
>>> def custom_function(values, like, coefficient=10):
>>> # values is a list of vectors
>>> # like can force the interface (optional)
>>> return coefficient * np.sum([math.dot(v,v) for v in values])
Member:

Suggested change
>>> @math.multi_dispatch(argnum=0,tensor_list=0)
>>> def custom_function(values, like, coefficient=10):
>>> # values is a list of vectors
>>> # like can force the interface (optional)
>>> return coefficient * np.sum([math.dot(v,v) for v in values])
>>> @qml.math.multi_dispatch(argnum=0, tensor_list=0)
... def custom_function(values, like, coefficient=10):
... # values is a list of vectors
... # like can force the interface (optional)
... return coefficient * np.sum([math.dot(v, v) for v in values])

In Python console ('pycon') syntax,

  • >>> prepends a new command
  • ... indicates a continuation line of a command
  • output is not prepended by anything

Member:

It might be nice to include in the example an example of branching based on like, e.g.,

@qml.math.multi_dispatch(argnum=0, tensor_list=0)
def custom_function(values, like, coefficient=10):
    squares = sum([qml.math.dot(v, v) for v in values])

    if like == "tensorflow":
        ...

    return 1j * coefficient * squares

just to show what it looks like for developers. Unfortunately, I cannot think of an example that doesn't look very contrived!

Contributor Author:

Thanks for the comments and corrections!

Unfortunately, I cannot think of an example that doesn't look very contrived!

I also don't know of an example off the top of my head. In math.dot() there are exceptions for torch and tensorflow, but I think that example may be a bit too convoluted to put there. For now, I just left it with an empty if clause and a comment in the docstring noting that this is a possibility.

Member:

Yes, I also had a look at existing functions to see if there were any that would be nice to use as an example, but all were either too complicated or too specific 😆

pennylane/math/multi_dispatch.py (outdated, resolved)
def decorator(fn):
@functools.wraps(fn)
def wrapper(*args, **kwargs):
argnums = argnum or list(range(len(args)))
Member:

@Qottmann I think there might be a bug here (and likewise with tensor_lists below!) The issue being that argnum=0 and tensor_list=0 are both valid inputs, but Python treats 0 as falsy, so the boolean or statement will fall back to the default in both cases!

This is a sign that more tests should be added, to ensure these edge cases work as expected.
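The pitfall described here is easy to reproduce in isolation. A minimal sketch (function names are mine, not from the PR):

```python
def pick_argnums_buggy(argnum, nargs):
    # `or` treats 0 as falsy, so argnum=0 silently falls back to the default
    return argnum or list(range(nargs))

def pick_argnums_fixed(argnum, nargs):
    # Compare against None explicitly so that 0 remains a valid argnum
    return list(range(nargs)) if argnum is None else argnum

print(pick_argnums_buggy(0, 3))   # [0, 1, 2] -- the caller's 0 was discarded
print(pick_argnums_fixed(0, 3))   # 0 -- preserved as intended
print(pick_argnums_fixed(None, 3))  # [0, 1, 2] -- default still works
```

The same `x if y is None else z` pattern applies to tensor_list; any falsy-but-valid value (0, an empty list) is a candidate edge case for the tests mentioned above.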

def tensordot(x, like, axes=None):
return np.tensordot(x[0], x[1], axes=axes)

assert fn.allequal(tensordot(x, axes=(0, 0)).numpy(), 2)
Member:

very nice!

def test_multi_dispatch_decorate2(t1, t2):
"""Test decorating a standard numpy function for PennyLane, this time using 2 separate inputs"""

@fn.multi_dispatch(argnum=[0, 1], tensor_list=None)
Member:

A good idea to test different values of argnum and tensor_list :) It might be good to also have a test that doesn't set either of these?

Contributor Author:

Indeed, this is now done in test_multi_dispatch_decorate2 !

Qottmann and others added 6 commits January 14, 2022 15:32
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
doc/releases/changelog-dev.md (outdated, resolved)
pennylane/math/multi_dispatch.py (outdated, resolved)
pennylane/math/multi_dispatch.py (outdated, resolved)
Comment on lines +156 to +159
if not isinstance(argnums, Sequence):
argnums = [argnums]
if not isinstance(tensor_lists, Sequence):
tensor_lists = [tensor_lists]
Member:

Nice 💯
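The normalization quoted above works because a bare int is not a collections.abc.Sequence, while lists and tuples are. A minimal sketch of the same pattern (the function name is mine):

```python
from collections.abc import Sequence

def as_sequence(value):
    """Wrap a bare value (e.g. argnum=0) in a list so later code can iterate.

    Caveat: strings are also Sequences, so a str would pass through unchanged;
    that is fine here because argnum/tensor_list are ints or lists of ints.
    """
    if not isinstance(value, Sequence):
        return [value]
    return value
```

This is also why the argnum=0 edge case from the earlier bug report now works: after the None check, 0 is wrapped into [0] rather than being treated as falsy.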

tests/math/test_multi_disptach.py (outdated, resolved)
tests/math/test_multi_disptach.py (outdated, resolved)
@josh146 (Member) left a comment:

Thanks @Qottmann for fixing that final bug, this looks ready from my end now for merging in 💯 💪

josh146 and others added 6 commits January 17, 2022 01:02
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
Co-authored-by: Josh Izaac <josh146@gmail.com>
@Qottmann (Contributor Author)

Great! Also from my end it is good to go. I committed your suggestions and all tests seem to pass :)

Successfully merging this pull request may close these issues.

Simplify adding multiple dispatch functions to the qml.math module