
assert_valid function for testing operators #4764

Merged
merged 50 commits into master from operation-validity-func on Nov 16, 2023

Conversation

Contributor
@albi3ro albi3ro commented Oct 31, 2023

This function, `qml.ops.functions.assert_valid`, is a tool for any developer creating their own `Operator` subclass. It checks multiple details of the class, such as:

  • dunder methods
  • property and method return types
  • interactions between different methods and properties
  • interaction with other packages like pickle and jax

This function is also designed to be extended as we find more mistakes and misconceptions worth checking.

[sc-43674]

timmysilv and others added 22 commits October 24, 2023 13:42
**Context:**
We need `autoray.ndim`, and it was only introduced in 0.6.1

**Description of the Change:**
- Pin autoray's lower bound to 0.6.1
- update `transmon_interaction` to check if `qubit_freq` is callable
before calling `ndim` on it (calling ndim on a function fails with
autoray 0.6.1, just returns 0 with 0.6.3, but we shouldn't do it at all
because it doesn't quite make sense)
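The guard described above amounts to something like this sketch (illustrative only; `safe_ndim` is a hypothetical helper, and the real logic lives in `transmon_interaction`):

```python
import numpy as np

def safe_ndim(qubit_freq):
    """Return an array dimension, treating callables as scalar-like."""
    # a callable frequency has no meaningful ndim, so don't ask numpy for one
    return 0 if callable(qubit_freq) else np.ndim(qubit_freq)
```

For example, `safe_ndim(lambda t: 2.0)` returns `0`, while `safe_ndim([1.0, 2.0])` returns `1`, so the callable case never reaches `ndim` at all.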

**Benefits:**
This will ensure autoray always works with PennyLane with a standard
`pip install pennylane`

**Possible Drawbacks:**
N/A. People have to make their peace with old autoray no longer being
supported?
[sc-48521]


When run with a `QNode`, the `LocalHilbertSchmidt` template raises:
```
RuntimeError: No queuing context available to append operation to.
```

The decomposition uses `qml.apply` without checking that something is
recording. This failure was not found because there were no integration
tests checking that the template worked in a `QNode`.
**Description of the Change:**

Add support for expand functions that return multiple tapes when a transform is applied directly to tapes.
**Context:**
We switched default.qubit to the new device API implementation, and some
docstrings need updating.

**Description of the Change:**
Make all docstrings reflect what actually happens. Some notes:
- Snapshot has exposed that states may be floats instead of complex
before being returned, though this does not affect any results. I'll add
a story to track our decision on this
- `_grad.py` needed updating for the jacobian docs because
`qml.jacobian` doesn't work with >1 MP being returned, so I return
`qml.probs()` to get measurements that increase the returned dimension
- made other changes like tracker and batch_execute to use the new
device API

**Benefits:**
The docstrings are correct!

**Possible Drawbacks:**
- Ugly PR!
- I made quite a few changes about the interface returned, but I left a
few out. I think there's more work to be done here, like how we handle
no-arg QNodes and the interface returned there. It's always autograd
instead of None, but that's not the worst of it

---------

Co-authored-by: Christina Lee <christina@xanadu.ai>
Fixes Issue #4723 [sc-48555]

---------

Co-authored-by: Mudit Pandey <mudit.pandey@xanadu.ai>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Matthew Silverman <matthews@xanadu.ai>
Co-authored-by: Romain Moyard <rmoyard@gmail.com>
The `qml.pulse.rydberg_interaction` function was queuing projectors, which are now treated as operations and affect the output of the simulation.

This fix prevents them from being queued.
**Context:**
Sometimes, tapes will use standard wires (eg. `[0, 1, 2]`) but will
report them out of order, depending on the order in which they are used.
If we are drawing a qnode (in other words, a circuit bound to a device),
we have the true wire order available to us.

**Description of the Change:**
Use the device wire order wherever possible in drawing logic

**Benefits:**
We don't accidentally draw in funky wire orders like `[1, 0]`. See the
docstring that I changed in `tape_expand` as an example (before this PR,
it would swap wires without the `wire_order` specified)

**Possible Drawbacks:**
N/A

**Related GitHub Issues:**
first found in #4711
Closes #4731 [sc-48599]

Currently, transforms deep copy the qnode when they are applied.
Unfortunately, this also deep copies the device.

Copying the device leads to issues like:

1) pickling errors for `lightning.kokkos` (see above issue)

2) Unnecessary allocation of large amounts of memory 

3) No longer being able to use the original device's tracker:

```
dev = qml.device('default.qubit')

@qml.transforms.compile
@qml.qnode(dev)
def circuit():
    return qml.state()

with dev.tracker:
    circuit()
dev.tracker.totals
```
```
{}
```

This bugfix downgrades the deep copy of a qnode to a shallow copy that
preserves the device.

To better separate the original from the copy, QNode now defines a
custom `__copy__` method that shallow copies the `execute_kwargs`,
`transform_program`, and `gradient_kwargs`. These were the three fully
mutable properties of the qnode.
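The pattern can be sketched generically (a simplified stand-in for QNode, with attribute names mirroring the ones above):

```python
import copy

class Node:
    def __init__(self, device, execute_kwargs):
        self.device = device                  # expensive shared resource: keep it
        self.execute_kwargs = execute_kwargs  # mutable config: copy it

    def __copy__(self):
        new = type(self).__new__(type(self))
        new.__dict__.update(self.__dict__)
        # shallow-copy only the mutable containers so edits don't leak back
        new.execute_kwargs = copy.copy(self.execute_kwargs)
        return new

original = Node(device="dev-handle", execute_kwargs={"shots": 100})
clone = copy.copy(original)
assert clone.device is original.device        # device is shared, not duplicated
clone.execute_kwargs["shots"] = 10
assert original.execute_kwargs["shots"] == 100  # original config untouched
```

This keeps the copy independent where it matters (configuration) while preserving identity where duplication is harmful (the device).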
**Context:**
SPSA tests have been failing with numpy >= 1.25, and we couldn't find
out why. The true reason is a difference in behaviour with
~autograd.numpy's implementation of divide~
`pennylane.numpy.tensor.__array_ufunc__`, but I can explain that below.

**Description of the Change:**
- ~Change the custom sampler in SPSA tests to use vanilla numpy~
- Cast the `where` argument passed to `qml.math.divide` to a vanilla
python list to avoid side effects
- Unpin numpy in docker builds now that this is fixed

**Benefits:**
Our tests pass with the latest numpy!

**Possible Drawbacks:**
N/A

[sc-47695]

## Details about what's happening

### Investigation Part 1

We use `qml.math.divide` on the results of a sampler call [in
spsa_gradient.py](https://github.com/PennyLaneAI/pennylane/blob/4e1f7495853df620852a50ba2daa9bceeb4bd86c/pennylane/gradients/spsa_gradient.py#L332-L335).
The custom sampler in these tests was using pennylane.numpy instead of
vanilla numpy, but they appear to behave differently:

```pycon
>>> div = lambda x: qml.math.divide(1, x, where=(x != 0), out=qml.math.zeros_like(x))
>>> good = np.array([0., 1.])
>>> bad = pnp.array([0., 1.])
>>> div(good), div(bad)
(array([0., 1.]), tensor([0., 0.], requires_grad=True))
```

What's happening here? For each term of the denominator (`x`), if the
`where` clause evaluates True, `divide` evaluates the quotient for that
term. If it instead evaluated False (in other words, we're about to
divide by zero), it gets the value from `out` as the effective default.
For some reason, `out` must match the input dimension, so I can't change
this to be `qml.math.convert_like(0., x)`.
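The mechanism can be reproduced with vanilla numpy alone:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])
out = np.zeros_like(x)  # default values used wherever the mask is False
res = np.divide(1, x, where=(x != 0), out=out)
# positions with x == 0 keep the 0 already in `out`; others hold the quotient
# res -> array([0. , 1. , 0.5])
```

Note that `np.divide` writes into and returns `out`, which is why its shape must match the input.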

With vanilla numpy, it only replaces values with the `out`-defaults when
the denominator was zero, but with pennylane numpy, it just returns
`out`. I'm just avoiding pennylane-numpy in tests here because
it's unnecessary. Naughty autograd.

### Investigation Part 2

See my comment below. Basically, passing `where` to numpy functions now
behaves differently if the value implements `__array_ufunc__`, and
pennylane-numpy tensors do in fact override it.
**Context:**
This is already being done in the text drawer, I'm just copying it over
for consistency.

**Description of the Change:**
If `expansion_strategy` is _not_ "device" but we're drawing a QNode with a
new device, run the QNode transform program on the tape before drawing
it. This change applies to `draw_mpl`; the behaviour is already present in `draw`.

**Benefits:**
Parity between the two draw tools

**Possible Drawbacks:**
N/A

**Related GitHub Issues:**
done in #4595 for the text drawer
Updates to the release notes for 0.33.

Replaces the older PR:
#4647

---------

Co-authored-by: Isaac De Vlugt <isaacdevlugt@gmail.com>
Co-authored-by: Isaac De Vlugt <34751083+isaacdevlugt@users.noreply.github.com>
Fixes Issue #4738 

The `ProbabilityMP.shape` did not account for probabilities broadcast
across all available wires.

[sc-48626]
The QuEra hardware sometimes has failed measurements, which return as
NaN. Because of this, all samples from the device are returned with type
`float`, and some of them are `np.NaN`. These were not handled well in
counts. On the `_default_qubit` device, nothing was converted to
integers, so it did this:

```
{'0.00.00.0': 149,
'0.00.01.0': 281, 
'0.00.0nan': 2,
'0.01.00.0': 277,
'0.01.01.0': 4,
'0.0nan0.0': 4,
'1.00.00.0': 268,
'1.00.0nan': 2,
'1.01.00.0': 5,
'nan0.00.0': 8}
```

We don't like that. Now they are converted into integers if there is no
observable, just like in `CountsMP`.

Also added a line both there and in `CountsMP` to avoid converting the
`NaN` values to 0 and implying they were ground state measurements -
instead, any sample where at least one wire failed is disregarded when
totalling the number of counts.
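The filtering logic amounts to something like this sketch (a hypothetical helper, not the actual `CountsMP` code):

```python
import numpy as np

def samples_to_counts(samples):
    """Convert float-valued samples to bitstring counts, dropping NaN shots."""
    counts = {}
    for shot in samples:
        if np.isnan(shot).any():
            continue  # at least one wire failed: disregard the whole shot
        key = "".join(str(int(b)) for b in shot)
        counts[key] = counts.get(key, 0) + 1
    return counts

samples = np.array([[0.0, 0.0], [1.0, np.nan], [1.0, 0.0], [1.0, 0.0]])
result = samples_to_counts(samples)  # {'00': 1, '10': 2}
```

The shot containing `np.nan` is dropped entirely rather than rounded to 0, so failed measurements never masquerade as ground-state results.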

Has been tested with the QuEra hardware, and now returns nicely:

```
>>> res = circuit(params)
>>> res
{'000': 230, '001': 251, '010': 252, '011': 1, '100': 249, '110': 4}
```

with 13 disregarded failed measurements:
```
>>> sum(count for count in res.values())
987
```

codecov bot commented Oct 31, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (aab9ed9) 99.64% compared to head (d3ca117) 99.65%.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #4764   +/-   ##
=======================================
  Coverage   99.64%   99.65%           
=======================================
  Files         381      382    +1     
  Lines       34346    34247   -99     
=======================================
- Hits        34224    34128   -96     
+ Misses        122      119    -3     


Contributor

@trbromley trbromley left a comment


Nice! Not many comments from me, though I think a nice follow up is to have checks for broadcasting support.

Contributor

@mudit2812 mudit2812 left a comment


Thanks Christina! Very excited to get this in :)

Contributor

@lillian542 lillian542 left a comment


Looks very useful, thanks @albi3ro!

We already knew this, but the ParameterizedEvolution operator fails the pytree check. Maybe an additional xfail test and corresponding shortcut story could be added?

@albi3ro albi3ro enabled auto-merge (squash) November 16, 2023 15:20
@albi3ro albi3ro merged commit bce513e into master Nov 16, 2023
33 checks passed
@albi3ro albi3ro deleted the operation-validity-func branch November 16, 2023 15:33
timmysilv added a commit that referenced this pull request Dec 12, 2023
**Context:**
Now that we have the `assert_valid` function (#4764), we can easily
validate operators meeting our expectations. The trouble is, we'd have
to test every single operator manually. It would be nice to
automatically generate an instance of each operator (with valid
parameters) and use that to validate the class.

**Description of the Change:**
Add `tests/ops/functions/conftest.py`, which generates a parametrization
of all subclasses of `Operator` (specifics described below). This had to
be put in `conftest.py` because pytest-xdist panics otherwise. Also
update `test_assert_valid.py` to auto-generate an instance of each of
these subclasses and assert that instance's validity. There are some
details to mention about what I've done: I use `op_cls.ndim_params` and
`op_cls.num_params` to generate valid parameters to construct the
instance of the class. This doesn't always work well for various
reasons:
- If they are defined as `property` variables, you can't view them at the
class level, so I just guess
- I use `len(ndim_params)` to guess the number of parameters, resorting
to `num_params` if that fails
- I use `1.`, `[1.]*(2**n)`, or `np.eye(n)` if the `ndim_params` is 0, 1
or 2 respectively (for each parameter). I assume no other values exist,
and that these will be the appropriate magnitudes. Not always true, eg.
`BasisState` has `ndim_params=(1,)` to get an array, but the length is
`n`, not `2**n`
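The heuristic in the bullets above can be sketched as follows (illustrative only; the names are hypothetical and the actual conftest handles more edge cases):

```python
import numpy as np

def guess_params(ndim_params, n_wires):
    """Guess one plausible parameter per entry of ndim_params."""
    dim = 2 ** n_wires
    makers = {
        0: lambda: 1.0,          # scalar, e.g. a rotation angle
        1: lambda: [1.0] * dim,  # vector sized to the Hilbert space
        2: lambda: np.eye(dim),  # square matrix, e.g. a unitary
    }
    return [makers[d]() for d in ndim_params]
```

For a one-wire, two-parameter operator with `ndim_params = (0, 1)`, this yields `[1.0, [1.0, 1.0]]`, which is exactly where cases like `BasisState` (expecting length `n`, not `2**n`) break the guess.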

`_SKIP_OP_TYPES` is a mish-mash of things we don't test. I wrote in-line
comments in the dict to clarify what's what here. Anyway, they are:
- arithmetic ops
- templates
- ops that fail for unknown reasons
- ops that fail BUT have explicit test cases added (currently just a
handful so we can have a test available with a parametrization for the
future)

`_ABSTRACT_OR_META_TYPES` is a set of class types that we shouldn't try
to create instances of. In `get_all_classes`, if `c` is abstract (e.g.
`Operator`, although Python never explicitly marks it as abstract) or
meta (e.g. `ControlledOp`, used for internal construction only), we
don't add it to the final list of classes to instantiate. We do still
add their child classes.
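Collecting instantiable subclasses while recursing past abstract and meta bases can be sketched like this (simplified; the class names below are toy stand-ins, and the real `get_all_classes` lives in `tests/ops/functions/conftest.py`):

```python
def collect_concrete(base, skip):
    """Walk the subclass tree of `base`, keeping classes not in `skip`."""
    found = []
    for cls in base.__subclasses__():
        if cls not in skip:
            found.append(cls)  # instantiable: include it
        # always recurse: children of a skipped base may still be valid
        found.extend(collect_concrete(cls, skip))
    return found

class Operator: ...
class RX(Operator): ...
class Meta(Operator): ...           # internal-only base, skipped
class ControlledRX(Meta): ...       # but its child is still collected

names = [c.__name__ for c in collect_concrete(Operator, skip={Meta})]
# -> ['RX', 'ControlledRX']
```

The key detail is that recursion continues through skipped classes, which is how child classes of abstract or meta bases still end up in the parametrization.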

There are also some types that I explicitly xfail in the test. These
_succeed_ in being auto-generated, but they _fail_ validation. I've made
this into a dict, so you have to give a reason explaining why it fails
validation if you're putting it here.

**Benefits:**
- We are more confident in our existing operators!
- Any newly-added Operator will automatically get picked up by this
test. Good ones get added easily, bad ones fail this test in CI and the
dev knows how to fix their operator.

**Possible Drawbacks:**
The only thing I'm worried about now is ensuring that adding tests to make
this totally complete is as easy as possible. It might be tough because the
configuration is in `conftest.py`, and not the test file itself. I've
added comments to the test itself, so hopefully people look at them.

I'll list possible future work. In order of what I'd like to see:
1. Start adding explicit test cases for everything in `_SKIP_OP_TYPES`
(except qutrit ops, see next task)
2. make `assert_valid` work for qutrit ops then remove the xfail

[sc-51653]

---------

Co-authored-by: Christina Lee <christina@xanadu.ai>
mudit2812 pushed a commit that referenced this pull request Dec 15, 2023
mudit2812 pushed a commit that referenced this pull request Jan 19, 2024