pytest
To run tests that depend on external data files you need to install the `dials-data` package by running `libtbx.pip install -U dials-data`.
For nicer test output you can additionally `libtbx.pip install pytest-sugar`.
- Go into the dials module directory and run `pytest --regression -n auto`
- Go into the xia2 module directory and run `pytest --regression -n auto`
  - If you want to run all xia2 tests, including the full data processing jobs, run `pytest --regression-full -n auto`
- Go into the dxtbx module directory and run `pytest --regression -n auto`
With `pytest` you can run a subset of tests. If you run `pytest` in a subdirectory then it will only pick up tests from that subdirectory (and no libtbx tests).
To only run tests from certain files use `pytest $file $file [..]`.
While developing a test it might be useful to use `pytest -s $file`, which stops pytest from capturing standard output.
When you run `pytest` you will notice that it only runs one test at a time. The equivalent of `libtbx.run_tests_parallel nproc=$n` is `pytest -n $n`.
Run `pytest --pdb` to get a debug console on test failures.
You can use `-x` to stop pytest after the first test failure, or `--maxfail=$n` to stop after $n test failures.
After you have fixed the problem, run `pytest --lf` to rerun only the failed tests. Or run `pytest --ff` to run all tests, but the previously failed tests first.
Run `pytest --durations=10` to show the 10 slowest steps (which may include setup/teardown times).
Check out the `ptw` command (provided by the `pytest-watch` package).
Basically, name your test files `test_*.py`, name your test functions `test_*()`, and then use `assert` statements.
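A minimal test file following these conventions might look like this; the `add` function is made up for illustration:

```python
# test_example.py -- pytest discovers this file because of the test_ prefix.

def add(a, b):
    # Trivial function under test, here purely for illustration.
    return a + b

def test_add():
    # Plain assert statements are all pytest needs; on failure it
    # reports the values on both sides of the comparison.
    assert add(2, 3) == 5
```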
Here are some more advanced things you can do in pytest:
Really you shouldn't be using dials_regression any more; use `dials_data` instead. Testing on Jenkins may lead to problems if you update dials_regression. If you really have to, you can use the `dials_regression` fixture:

```python
def test_only_runs_when_dials_regression_is_present(dials_regression):
    print("The path to dials_regression is %s" % dials_regression)
```

Simply add an argument named `dials_regression` to your test function. When the test is run, this is replaced with a string containing the full path to the dials_regression directory. If dials_regression is not present then the test is skipped.
Don't attempt `try: import ...` / `except ...` blocks. This is the simple and correct solution:

```python
import pytest

def test_only_runs_when_package_mock_is_installed():
    mock = pytest.importorskip("mock")
```

If necessary you can specify a minimum version, too:

```python
mock = pytest.importorskip("mock", minversion="1.0")
```
Use the `tmpdir` fixture:

```python
def test_requires_a_temporary_directory(tmpdir):
    print("This is my temporary directory: %s" % tmpdir.strpath)
```

By adding an argument named `tmpdir` to your test function you are passed a unique, existing temporary directory as a `py.path.local` object. You can get the path as a string via the `.strpath` attribute, as shown above.
It may be worth checking whether you actually need the path as a string, though. `py.path` objects have a ton of interesting methods, for example:

```python
with tmpdir.as_cwd():
    # temporarily change the current directory
    pass
# on leaving the block you go back to the original location

tmpdir.chdir()
# go there and stay

assert (tmpdir / 'modules' / 'dials') == tmpdir.join('modules', 'dials')
# look familiar?

tmpdir.join('some', 'new', 'path', 'filename.txt').write('data', ensure=True)
# write data to a file, creating missing directories if required

assert 'data' == tmpdir.join('some', 'new', 'path', 'filename.txt').read()
# read from file

assert tmpdir.check(dir=1)
# check that the location exists and is a directory
```
Now, if your test uses tmpdir and fails, you may want to know where your files ended up. If you ran the test through libtbx then this will be somewhere in `pytest/t???/`. Otherwise it will be in your system temp directory. Pytest will tell you:

```
============================== FAILURES ===============================
___________________________ test_something ____________________________

tmpdir = local('/tmp/pytest-of-username/pytest-54/test_something0')

    def test_something(tmpdir):
>       assert False
E       assert False

test_thing.py:2: AssertionError
====================== 1 failed in 0.04 seconds =======================
```

You will find the temporary files of the last couple of runs in `/tmp/pytest-of-username`; pytest will automatically remove directories older than 3 runs.
If you want to have your temporary directories in a different place you can run e.g. `pytest --basetemp=/dls/tmp/$(whoami)/pytest` (NB: existing contents of the given directory will be deleted!)
Chances are that you should split up your test instead. (But pytest does have a fixture for that, too.)
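If the goal is to reuse an expensive setup step across several tests, the usual pytest mechanism is a scoped fixture. This is a minimal sketch with made-up names (`make_resource` stands in for whatever expensive setup you have):

```python
import pytest

def make_resource():
    # Stand-in for an expensive setup step (hypothetical).
    return {"setup_cost": "paid once"}

@pytest.fixture(scope="module")
def expensive_resource():
    # Runs once per module; every test in the module that requests
    # this fixture receives the same object.
    return make_resource()

def test_first_use(expensive_resource):
    assert expensive_resource["setup_cost"] == "paid once"

def test_second_use(expensive_resource):
    assert expensive_resource["setup_cost"] == "paid once"
```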
Rather than commenting out the entire test - or maybe worse, converting the Python code into a string by adding triple-quotes around the function - please just mark the test as skipped and give a reason why:

```python
import pytest

@pytest.mark.skip("This test does not work, and I don't know why")
def test_broken():
    assert False
```

This way the test still shows up in the overview, but is marked as skipped with a useful comment. That makes it harder to forget about the test and rediscover it five years later.
If you want to skip a test conditionally you can use:

```python
import pytest

def test_weekend():
    if get_weekday() in ('Saturday', 'Sunday'):
        pytest.skip('I do not work on weekends and neither does this test')
```
Try `pytest.approx` instead:

```python
assert approx_equal(cell, [78.6, 78.6, 78.6, 90, 90, 90], eps=1e-1)  # is equivalent to
assert cell == pytest.approx([78.6, 78.6, 78.6, 90, 90, 90], abs=1e-1)

assert approx_equal(a, b)  # is equivalent to
assert a == pytest.approx(b, abs=1e-6)
```
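As a quick sanity check of the translation above, this snippet is directly runnable; the cell values are illustrative, not from a real processing run:

```python
import pytest

# Unit cell values within 0.1 of the reference pass with abs=1e-1:
cell = [78.55, 78.64, 78.61, 90.02, 89.98, 90.01]
assert cell == pytest.approx([78.6, 78.6, 78.6, 90, 90, 90], abs=1e-1)

# Without an explicit tolerance, pytest.approx defaults to a
# relative tolerance of 1e-6:
assert 1.0000001 == pytest.approx(1.0)
assert 1.001 != pytest.approx(1.0)
```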