
Allow Observables to return objects and still be differentiable #1291

Merged: 19 commits, May 25, 2021

Conversation

cvjjm
Contributor

@cvjjm cvjjm commented May 11, 2021

Before making this into a proper PR I wanted to quickly get your opinion on whether this change has a chance of making it upstream.

Background: I realized that the only thing that is preventing users from constructing custom Observables that return an object (in my case an OpenFermion.QubitOperator is super handy) is a very small portion of code in jacobian_tape.py that makes assumptions about return types having certain properties.

With the rather minimal change below, I can have a device support a custom Observable that returns an object, provide the device gradient for it, use it via return qml.expval(CustomObservable) in a QNode, and even differentiate that QNode with qml.grad().

An alternative would be to implement .dtype, __len__(), and .flatten() in the class of the returned object and only change line 585 to take the dtype from g rather than a hardcoded float. Would that be preferred?
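For concreteness, that alternative would look roughly like the following minimal sketch (the class and its names are hypothetical, not OpenFermion or PennyLane API):

```python
class ObjectExpval:
    """Hypothetical wrapper for an object-valued expectation (e.g. an
    OpenFermion QubitOperator), exposing just enough of the NumPy array
    surface that jacobian_tape.py duck-types against."""

    dtype = object  # read by jacobian_tape instead of a hardcoded float

    def __init__(self, op):
        self.op = op

    def __len__(self):
        # behave like a length-1 array for shape bookkeeping
        return 1

    def flatten(self):
        # already "flat": a single wrapped object
        return [self.op]

res = ObjectExpval("0.5 [Z0 Z1]")
print(len(res), res.flatten())  # 1 ['0.5 [Z0 Z1]']
```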

@cvjjm cvjjm changed the title allow Observables to return objects and still be differentiable Allow Observables to return objects and still be differentiable May 11, 2021
@josh146
Member

josh146 commented May 11, 2021

With the rather minimal change below, I can have a device support a custom Observable that returns an object, provide the device gradient for it, use it via return qml.expval(CustomObservable) in a QNode, and even differentiate that QNode with qml.grad().

@cvjjm this is great! I am in favour of this solution. The current code is written deliberately using duck typing, with the blocker here solely to distinguish between standard NumPy arrays and ragged arrays -- I'm not sure it makes sense to force objects to overwrite flatten() etc. just to keep to this.

My only concern is more one of maintenance. Best to leave a detailed comment here to make sure that future development adheres to this duck typing, and doesn't inadvertently introduce any code that limits the returned object type.

@co9olguy
Member

Very much in favour of having a user's custom operations remain differentiable.

@cvjjm
Contributor Author

cvjjm commented May 11, 2021

Great! I added some comments. Let me know if you think this is sufficient and please approve the running of tests.

@josh146 josh146 self-requested a review May 11, 2021 16:53
Member

@josh146 josh146 left a comment


Thanks @cvjjm! This is a nice small modification that opens up a very nice feature. Before this can be merged in, we'll need to:

  1. Work out why the tests are failing
  2. Add a new test to ensure the new behaviour works (and catch it from breaking in the future).

pennylane/tape/jacobian_tape.py (outdated diff; conversation resolved)
Comment on lines +234 to +238
if hasattr(g, "flatten"):
# flatten only if g supports flattening to allow for
# objects other than numpy ndarrays
return g.flatten()
return g
Member


Not sure if this is too obfuscating, but we could avoid the if statement by doing this?

getattr(g, "flatten", lambda: g)()
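For what it's worth, the two spellings behave identically; a quick sketch with a hypothetical class:

```python
class NoFlatten:
    """Hypothetical object without a flatten() method."""

g = NoFlatten()

# if-based version: flatten when supported, pass through otherwise
a = g.flatten() if hasattr(g, "flatten") else g
# getattr-based one-liner: same behaviour in a single expression
b = getattr(g, "flatten", lambda: g)()

print(a is g, b is g)  # True True
```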

Contributor Author


Too obfuscating for my taste :-)

Comment on lines 597 to 598
dtype = float if isinstance(g, float) else np.object
jac = np.zeros((self._output_dim, len(params)), dtype=dtype)
Member


@cvjjm I noticed that quite a few of the tests are failing, specifically the np.allclose calls within the tests. I'm not 100% sure why, but I think the issue might be the logic here?

Should it instead be tied to the existence of a __len__ attribute?

dtype = float if hasattr(g, "__len__") else np.object
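To illustrate the intent of this duck-typed check, here is a toy sketch mirroring the suggested line, with a hypothetical object-valued result. (Plain `object` stands in for the `np.object` alias used in the diff, which NumPy has since removed.)

```python
import numpy as np

class SpecialValue:
    """Hypothetical object-valued measurement result."""
    def __init__(self, val):
        self.val = val

def jac_dtype(g):
    # Anything array-like (has __len__) keeps a float Jacobian;
    # everything else falls back to object so arbitrary values fit.
    return float if hasattr(g, "__len__") else object

jac = np.zeros((2, 2), dtype=jac_dtype(SpecialValue(0.1)))
jac[0, 0] = SpecialValue(0.1)  # storing an object works with dtype=object
print(jac.dtype, jac_dtype(np.array([1.0, 2.0])))  # object <class 'float'>
```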

Contributor Author


I wasn't sure what I needed to put here to preserve the default behaviour. I will look into this.

Co-authored-by: Josh Izaac <josh146@gmail.com>
@cvjjm
Contributor Author

cvjjm commented May 21, 2021

For me all the existing tests pass now and I have added a test of the newly supported behavior.

One last thing I am not totally happy with: if I declare my QNode to accept an array as a parameter, like this:

        # force diff_method='parameter-shift' because otherwise
        # PennyLane swaps out dev for default.qubit.autograd
        @qml.qnode(dev, diff_method='parameter-shift')
        def qnode(x):
            qml.RY(x[0], wires=0)
            qml.RY(x[1], wires=0)
            return qml.expval(SpecialObservable(wires=0))

        @qml.qnode(dev, diff_method='parameter-shift')
        def reference_qnode(x):
            qml.RY(x[0], wires=0)
            qml.RY(x[1], wires=0)
            return qml.expval(qml.PauliZ(wires=0))

        params = np.array([0.2, 0.1])
        assert np.isclose(qnode(params).item().val, reference_qnode(params))
        assert np.isclose(qml.jacobian(qnode)(params).item().val, qml.jacobian(reference_qnode)(params))

then differentiation no longer works, because of the following error:

Traceback (most recent call last):
  File "/.../pennylane/tests/tape/test_jacobian_tape.py", line 723, in test_special_observable_qnode_differentiation
    assert np.isclose(qml.jacobian(qnode)([0.2, 0.1]).item().val, qml.jacobian(reference_qnode)([0.2, 0.1]))
  File "/.../pennylane/pennylane/_grad.py", line 182, in _jacobian_function
    return _jacobian(func, argnum[0])(*args, **kwargs)
  File "/.../miniforge3/envs/.../lib/python3.8/site-packages/autograd/wrap_util.py", line 20, in nary_f
    return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)
  File "/.../miniforge3/envs/.../lib/python3.8/site-packages/autograd/differential_operators.py", line 59, in jacobian
    jacobian_shape = ans_vspace.shape + vspace(x).shape
TypeError: can only concatenate tuple (not "list") to tuple

I suspect there must be a way to teach autograd to construct proper VSpaces for my SpecialObject, which would fix this, but I am unsure how. This, however, is not a PennyLane problem. Just in case you have an idea of how this could also be made to work, it would be very nice if you could tell me.

@josh146
Member

josh146 commented May 21, 2021

Thanks for the update @cvjjm!

Just in case you have an idea how also this could be made to work it would be very nice if you told me how.

Unfortunately, while I have come across this error many times, I am not 100% sure how to validate new type mappings for Autograd.

I can show you how I managed to add the qml.numpy.tensor class as a valid Autograd type though, perhaps this will be helpful:

# Note: `tensor` and `NonDifferentiableError` below are PennyLane's own
# (defined in the module this excerpt is taken from), and `onp` is
# vanilla NumPy (`import numpy as onp`).
from autograd.tracer import Box
from autograd.numpy.numpy_boxes import ArrayBox
from autograd.numpy.numpy_vspaces import ComplexArrayVSpace, ArrayVSpace
from autograd.core import VSpace

def tensor_to_arraybox(x, *args):
    """Convert a :class:`~.tensor` to an Autograd ``ArrayBox``.

    Args:
        x (array_like): Any data structure in any form that can be converted to
            an array. This includes lists, lists of tuples, tuples, tuples of tuples,
            tuples of lists and ndarrays.

    Returns:
        autograd.numpy.numpy_boxes.ArrayBox: Autograd ArrayBox instance of the array
    """
    if isinstance(x, tensor):
        if x.requires_grad:
            return ArrayBox(x, *args)

        raise NonDifferentiableError(
            "{} is non-differentiable. Set the requires_grad attribute to True.".format(x)
        )

    return ArrayBox(x, *args)


Box.type_mappings[tensor] = tensor_to_arraybox
VSpace.mappings[tensor] = lambda x: ComplexArrayVSpace(x) if onp.iscomplexobj(x) else ArrayVSpace(x)
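The two assignments at the end are just type-keyed dispatch: autograd looks up the concrete type of a traced value in those dicts. A toy sketch of the same pattern (hypothetical names, not autograd internals):

```python
class ToyBox:
    """Stands in for autograd's ArrayBox: wraps a traced value."""
    def __init__(self, value):
        self.value = value

class ToyTensor:
    """Stands in for a custom differentiable type."""

# type-keyed registry, analogous to Box.type_mappings
type_mappings = {ToyTensor: lambda x: ToyBox(x)}

def box(x):
    # dispatch on the concrete type, as autograd does while tracing
    try:
        return type_mappings[type(x)](x)
    except KeyError:
        raise TypeError(
            f"{type(x).__name__} is not a registered differentiable type"
        ) from None

boxed = box(ToyTensor())
print(type(boxed).__name__)  # ToyBox
```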

@codecov

codecov bot commented May 21, 2021

Codecov Report

Merging #1291 (2010bfa) into master (499e5eb) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master    #1291   +/-   ##
=======================================
  Coverage   98.27%   98.27%           
=======================================
  Files         150      150           
  Lines       11256    11262    +6     
=======================================
+ Hits        11062    11068    +6     
  Misses        194      194           
Impacted Files Coverage Δ
pennylane/tape/jacobian_tape.py 97.96% <100.00%> (+0.06%) ⬆️


@cvjjm
Contributor Author

cvjjm commented May 25, 2021

Great. I didn't know about the Box.type_mappings thing. I will play around with it, but for this PR it should be irrelevant. The last commit should have fixed the remaining code formatting problems. From my side this is now ready to merge.

@cvjjm
Contributor Author

cvjjm commented May 25, 2021

Oops, forgot the -l 100 flag when calling black... Now all should be well.

Comment on lines +712 to +714
# force diff_method='parameter-shift' because otherwise
# PennyLane swaps out dev for default.qubit.autograd
@qml.qnode(dev, diff_method="parameter-shift")
Member


Hopefully this will be fixed soon via #1334!

return qml.expval(qml.PauliZ(wires=0))

assert np.isclose(qnode(0.2).item().val, reference_qnode(0.2))
assert np.isclose(qml.jacobian(qnode)(0.2).item().val, qml.jacobian(reference_qnode)(0.2))
Member


A very thorough test example! This test will help ensure this (originally accidental) functionality keeps working.

@josh146
Member

josh146 commented May 25, 2021

@cvjjm, before merging, could I get you to:

  • Add a short summary of the change to the changelog, along with a link back to the PR
  • Add your name or GitHub username (whichever you prefer) to the Contributors section

Thanks 🙂
