
Adds CLI and Python invocations for the device test suite #733

Merged
merged 19 commits into from Aug 4, 2020
1 change: 1 addition & 0 deletions pennylane/plugins/__init__.py
@@ -27,6 +27,7 @@
default_gaussian
tf_ops
autograd_ops
tests
"""
from .default_qubit import DefaultQubit
from .default_gaussian import DefaultGaussian
222 changes: 204 additions & 18 deletions pennylane/plugins/tests/__init__.py
@@ -15,27 +15,12 @@
This subpackage provides integration tests for the devices with PennyLane's core
functionalities. At the moment, the tests only run on devices based on the 'qubit' model.

The tests require that ``pytest``, ``pytest-mock``, and ``flaky`` be installed.
These can be installed using ``pip``:

.. code-block:: console

    pip install pytest pytest-mock flaky

The tests can also be run on an external device from a PennyLane plugin, such as
``'qiskit.aer'``. For this, make sure you have the correct dependencies installed.
@@ -47,4 +32,205 @@
For non-analytic tests, the tolerance of the assert statements
is set to a value high enough to account for stochastic fluctuations. The ``flaky``
plugin is used to automatically repeat failed tests.

There are several methods for running the tests against a particular device (i.e., for
``'default.qubit'``), detailed below.

Using pytest
------------

.. code-block:: console

pytest path_to_pennylane_src/plugins/tests --device=default.qubit --shots=10000 --analytic=False

The location of your PennyLane installation may differ depending on installation method and
operating system. To find the location, you can use the :func:`~.get_device_tests` function:

>>> from pennylane.plugins.tests import get_device_tests
>>> get_device_tests()

The pl-device-test CLI
----------------------

Alternatively, PennyLane provides a command line interface for invoking the device tests.

.. code-block:: console

pl-device-test --device default.qubit --shots 10000 --analytic False

Within Python
-------------

Finally, the tests can be invoked within a Python session via the :func:`~.test_device`
function:

>>> from pennylane.plugins.tests import test_device
>>> test_device("default.qubit")

For more details on the available arguments, see the :func:`~.test_device` documentation.

Functions
---------
"""
# pylint: disable=import-outside-toplevel,too-many-arguments
import argparse
import pathlib
import subprocess
import sys


# determine if running in an interactive environment
import __main__

interactive = False

try:
    __main__.__file__
except AttributeError:
    interactive = True
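The detection trick above can be exercised on its own; ``running_interactively`` is an illustrative helper name, not part of PennyLane:

```python
import __main__


def running_interactively():
    """Return True when there is no executing script file (e.g. a REPL)."""
    try:
        # scripts executed with `python script.py` define __main__.__file__;
        # interactive interpreters (python, ipython) do not
        __main__.__file__
    except AttributeError:
        return True
    return False


result = running_interactively()
```

Computing this once at import time, as the module does, avoids repeating the lookup on every call to ``test_device``.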


def get_device_tests():
    """Returns the location of the device integration tests."""
    return str(pathlib.Path(__file__).parent.absolute())
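The same ``pathlib`` pattern works for locating the directory of any module file; ``module_dir`` below is an illustrative standalone helper, not part of the package:

```python
import os
import pathlib


def module_dir(module_file):
    # resolve the absolute directory containing a module's source file,
    # mirroring what get_device_tests() does for this subpackage
    return str(pathlib.Path(module_file).parent.absolute())


# for example, the directory of the standard-library os module
stdlib_dir = module_dir(os.__file__)
```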


def test_device(
    device, analytic=None, shots=None, skip_ops=True, flaky_report=False, pytest_args=None, **kwargs
):
    """Run the device integration tests using an installed PennyLane device.

    Args:
        device (str): the name of the device to test
        analytic (bool): Whether to run the device in analytic mode (where
            expectation values and probabilities are computed exactly from the quantum state)
            or non-analytic/"stochastic" mode (where probabilities and expectation
            values are *estimated* using a finite number of shots).
            If not provided, the device default is used.
        shots (int): The number of shots/samples used to estimate expectation
            values and probabilities. Only takes effect if ``analytic=False``. If not
            provided, the device default is used.
        skip_ops (bool): whether to skip tests that use operations not supported
            by the device
        flaky_report (bool): whether to show the full flaky report in the terminal
        pytest_args (list[str]): additional pytest arguments and flags
        **kwargs: additional device keyword arguments

    **Example**

    >>> from pennylane.plugins.tests import test_device
    >>> test_device("default.qubit")
Comment on lines +120 to +121:

Contributor: Worked great when run from an ipython session or as part of a Python script! 🙂 💯
In a Jupyter notebook the output was directed into the terminal where jupyter notebook was
invoked (and there was no output in the notebook itself).

Member Author: Good catch about Jupyter notebook! Do you think this is a common use-case we
should support? I think we should not support running tests from within a Jupyter notebook.

Contributor: Kind of on the fence here 🤔 I think it's not too crucial, though if interactive
sessions are supported then a Jupyter notebook would also be an important case. Having said
that, I think it's not a major point.
    ================================ test session starts =======================================
    platform linux -- Python 3.7.7, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
    rootdir: /home/josh/xanadu/pennylane/pennylane/plugins/tests, inifile: pytest.ini
    plugins: flaky-3.6.1, cov-2.8.1, mock-3.1.0
    collected 86 items
    xanadu/pennylane/pennylane/plugins/tests/test_gates.py ..............................
    ............................... [ 70%]
    xanadu/pennylane/pennylane/plugins/tests/test_measurements.py .......sss...sss..sss [ 95%]
    xanadu/pennylane/pennylane/plugins/tests/test_properties.py .... [100%]
    ================================= 77 passed, 9 skipped in 0.78s ============================
    """
    try:
        import pytest  # pylint: disable=unused-import
        import pytest_mock  # pylint: disable=unused-import
        import flaky  # pylint: disable=unused-import
    except ImportError:
        raise ImportError(
            "The device tests require the following Python packages:"
            "\npytest pytest-mock flaky"
            "\nThese can be installed using pip."
        )

    pytest_args = pytest_args or []
    test_dir = get_device_tests()

    cmds = ["pytest"]
    cmds.append(test_dir)
    cmds.append(f"--device={device}")

    if shots is not None:
        cmds.append(f"--shots={shots}")

    if analytic is not None:
        cmds.append(f"--analytic={analytic}")

    if skip_ops:
        cmds.append("--skip-ops")

    if not flaky_report:
        cmds.append("--no-flaky-report")

    if kwargs:
        device_kwargs = " ".join([f"{k}={v}" for k, v in kwargs.items()])
        cmds += ["--device-kwargs", device_kwargs]

    try:
        subprocess.run(cmds + pytest_args, check=not interactive)
    except subprocess.CalledProcessError as e:
        # pytest return codes:
        # Exit code 0: All tests were collected and passed successfully
        # Exit code 1: Tests were collected and run but some of the tests failed
        # Exit code 2: Test execution was interrupted by the user
        # Exit code 3: Internal error happened while executing tests
        # Exit code 4: pytest command line usage error
        # Exit code 5: No tests were collected
        if e.returncode in range(1, 6):
            sys.exit(1)
        raise e
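For reference, the argument handling above reduces to the following standalone sketch (``build_pytest_command`` and the ``path_to_tests`` placeholder are illustrative, not part of the package):

```python
def build_pytest_command(device, shots=None, analytic=None, skip_ops=True, flaky_report=False):
    # mirrors how test_device assembles the pytest invocation:
    # optional arguments only appear in the command when explicitly set
    cmds = ["pytest", "path_to_tests", f"--device={device}"]
    if shots is not None:
        cmds.append(f"--shots={shots}")
    if analytic is not None:
        cmds.append(f"--analytic={analytic}")
    if skip_ops:
        cmds.append("--skip-ops")
    if not flaky_report:
        cmds.append("--no-flaky-report")
    return cmds


cmd = build_pytest_command("default.qubit", shots=10000, analytic=False)
```

Passing the resulting list to ``subprocess.run`` keeps each argument as a separate token, so no shell quoting is needed.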


def cli():
    """The PennyLane device test command line interface.

    The ``pl-device-test`` CLI is a convenience wrapper that calls
    pytest for a particular device.

    .. code-block:: console

        $ pl-device-test --help
        usage: pl-device-test [-h] [--device DEVICE] [--shots SHOTS]
                              [--analytic ANALYTIC] [--skip-ops]

        See below for available options and commands for working with the PennyLane
        device tests.

        General Options:
          -h, --help           show this help message and exit
          --device DEVICE      The device to test.
          --shots SHOTS        Number of shots to use in stochastic mode.
          --analytic ANALYTIC  Whether to run the tests in stochastic or exact mode.
          --skip-ops           Skip tests that use unsupported device operations.
          --flaky-report       Show the flaky report in the terminal.
          --device-kwargs KEY=VAL [KEY=VAL ...]
                               Additional device kwargs.

    Note that additional pytest command line arguments and flags can also be passed:

    .. code-block:: console

        $ pl-device-test --device default.qubit --shots 1234 --analytic False --tb=short -x
    """
    from .conftest import pytest_addoption

    parser = argparse.ArgumentParser(
        description="See below for available options and commands for working with the PennyLane device tests."
    )
    parser._optionals.title = "General Options"  # pylint: disable=protected-access
    pytest_addoption(parser)
    args, pytest_args = parser.parse_known_args()

    flaky = False
    if "--flaky-report" in pytest_args:
        pytest_args.remove("--flaky-report")
        flaky = True

    test_device(
        args.device,
        analytic=args.analytic,
        shots=args.shots,
        skip_ops=args.skip_ops,
        flaky_report=flaky,
        pytest_args=pytest_args,
        **args.device_kwargs,
    )
52 changes: 47 additions & 5 deletions pennylane/plugins/tests/conftest.py
Expand Up @@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
"""Contains shared fixtures for the device tests."""
import argparse
import os

import numpy as np
@@ -127,31 +128,71 @@ def pytest_runtest_setup(item):
# These functions are required to define the device name to run the tests for


class StoreDictKeyPair(argparse.Action):
    """Argparse action for storing key-value pairs as a dictionary.

    For example, calling a CLI program with ``--mydict v1=k1 v2=5``:

    >>> parser.add_argument("--mydict", dest="my_dict", action=StoreDictKeyPair, nargs="+")
    >>> args = parser.parse_args()
    >>> args.my_dict
    {"v1": "k1", "v2": "5"}
    """

    # pylint: disable=too-few-public-methods

    def __init__(self, option_strings, dest, nargs=None, **kwargs):
        self._nargs = nargs
        super().__init__(option_strings, dest, nargs=nargs, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        my_dict = {}
        for kv in values:
            k, v = kv.split("=")
            my_dict[k] = v
        setattr(namespace, self.dest, my_dict)
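A minimal self-contained demonstration of the action (a simplified version of the class is repeated here so the snippet runs on its own):

```python
import argparse


class StoreDictKeyPair(argparse.Action):
    """Store KEY=VAL command line pairs as a dict (simplified from conftest.py)."""

    # pylint: disable=too-few-public-methods

    def __call__(self, parser, namespace, values, option_string=None):
        # each value is a "key=value" string; split once and collect into a dict
        setattr(namespace, self.dest, dict(kv.split("=") for kv in values))


parser = argparse.ArgumentParser()
parser.add_argument(
    "--device-kwargs",
    dest="device_kwargs",
    action=StoreDictKeyPair,
    nargs="+",
    metavar="KEY=VAL",
    default={},
)
args = parser.parse_args(["--device-kwargs", "backend=statevector_simulator", "cache=10"])
# args.device_kwargs == {"backend": "statevector_simulator", "cache": "10"}
```

Note that all values arrive as strings; any type conversion is left to the device constructor.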


def pytest_addoption(parser):
    """Add command line options to pytest."""

    if hasattr(parser, "add_argument"):
        addoption = parser.add_argument
    else:
        addoption = parser.addoption

    # The options are the three arguments every device takes
    addoption("--device", action="store", default=None, help="The device to test.")
    addoption(
        "--shots",
        action="store",
        default=None,
        type=int,
        help="Number of shots to use in stochastic mode.",
    )
    addoption(
        "--analytic",
        action="store",
        default=None,
        help="Whether to run the tests in stochastic or exact mode.",
    )
    addoption(
        "--skip-ops",
        action="store_true",
        default=False,
        help="Skip tests that use unsupported device operations.",
    )

    addoption(
        "--device-kwargs",
        dest="device_kwargs",
        action=StoreDictKeyPair,
        default={},
        nargs="+",
        metavar="KEY=VAL",
        help="Additional device kwargs.",
    )
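The ``add_argument``/``addoption`` duck typing above lets one registration function serve both an ``argparse.ArgumentParser`` (used by the CLI) and a pytest parser. A minimal sketch of the pattern, exercised with argparse (``register_device_option`` is an illustrative name):

```python
import argparse


def register_device_option(parser):
    # argparse parsers expose add_argument(); pytest parsers expose addoption()
    addoption = parser.add_argument if hasattr(parser, "add_argument") else parser.addoption
    addoption("--device", action="store", default=None, help="The device to test.")


parser = argparse.ArgumentParser()
register_device_option(parser)
args = parser.parse_args(["--device", "default.qubit"])
# args.device == "default.qubit"
```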


def pytest_generate_tests(metafunc):
    """Set up fixtures from command line options."""
@@ -161,6 +202,7 @@
            "name": opt.device,
            "shots": opt.shots,
            "analytic": opt.analytic,
            **opt.device_kwargs,
        }
# ===========================================
@@ -209,7 +251,7 @@ def pytest_runtest_makereport(item, call):
    # and those using not implemented features
    if (
        call.excinfo.type == qml.DeviceError
        and "supported" in str(call.excinfo.value)
Contributor: How come this needed a change?

Member Author: I noticed that not all of the plugins were standardized in their error messages
for unsupported behaviour :( For example, the Cirq plugin raises an 'unsupported' error if you
attempt to use certain gates in some situations.

Contributor: Oh, I see! Hmm, would it be worth adding an extra condition here with ``or``?

    or call.excinfo.type == NotImplementedError
    ):
        tr.wasxfail = "reason:" + str(call.excinfo.value)
4 changes: 4 additions & 0 deletions pennylane/plugins/tests/pytest.ini
@@ -0,0 +1,4 @@
[pytest]
markers =
    skip_unsupported: skip a test if it uses an operation unsupported on a device

9 changes: 5 additions & 4 deletions setup.py
@@ -44,15 +44,16 @@
        'default.tensor = pennylane.beta.plugins.default_tensor:DefaultTensor',
        'default.tensor.tf = pennylane.beta.plugins.default_tensor_tf:DefaultTensorTF',
    ],
    'console_scripts': [
        'pl-device-test=pennylane.plugins.tests:cli'
    ]
},
'description': 'PennyLane is a Python quantum machine learning library by Xanadu Inc.',
'long_description': open('README.rst').read(),
'provides': ["pennylane"],
'install_requires': requirements,
'command_options': {
    'build_sphinx': {
        'version': ('setup.py', version),
        'release': ('setup.py', version)}},
'package_data': {'pennylane': ['plugins/tests/pytest.ini']},
'include_package_data': True
Comment on lines +55 to +56:

Contributor: If not for this addition, would plugins/tests/pytest.ini be ignored when the test
suite is invoked by using the CLI?

Member Author: The issue is that the pytest.ini file isn't included in the packaged wheel
without this addition! So this forces it to be packaged in the wheel. By default, setuptools
will only package up .py files.
}

classifiers = [