Pytest plugin to randomly order tests and control random.seed.
All of these features are on by default but can be disabled with flags.
- Randomly shuffles the order of test items. This is done first at the level of modules, then at the level of test classes (if you have them), then at the level of functions. This also works with things like doctests.
- Resets random.seed() at the start of every test case and test to a fixed number - this defaults to time.time() from the start of your test run, but you can pass in --randomly-seed to repeat a randomness-induced failure.
- If factory boy is installed, its random state is reset at the start of every test. This allows for repeatable use of its random 'fuzzy' features.
- If faker is installed, its random state is reset at the start of every test. This is also for repeatable fuzzy data in tests - factory boy uses faker for lots of data. This is also done if you're using the faker pytest fixture, by defining the faker_seed fixture.
- If numpy is installed, its random state is reset at the start of every test.
- If additional random generators are used, they can be registered under the pytest_randomly.random_seeder entry point and will have their seed reset at the start of every test. Register a function that takes the current seed value.
- Works with pytest-xdist.
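To illustrate the seed-resetting feature above: because pytest-randomly calls random.seed() with the same value before every test, code that draws "random" data produces the same values on every run with that seed. A minimal sketch of that effect (the seed value and fake_name helper are hypothetical, not part of the plugin):

```python
import random


def fake_name():
    # pytest-randomly would perform this seeding automatically before each
    # test; the seed value here is a stand-in for the run's --randomly-seed.
    random.seed(1553614239)
    return "".join(random.choice("abcdef") for _ in range(5))


first = fake_name()
second = fake_name()
assert first == second  # same seed -> identical "random" data every time
```

This is why fuzzy values from factory boy or faker stay repeatable across reruns of the same seed.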
Randomness in testing can be quite powerful to discover hidden flaws in the tests themselves, as well as giving a little more coverage to your system.
By randomly ordering the tests, the risk of surprising inter-test dependencies is reduced - a technique used in many places, for example Google's C++ test runner googletest. Research suggests that "dependent tests do exist in practice" and a random order of test executions can effectively detect such dependencies. Alternatively, a reverse order of test executions, as provided by pytest-reverse, may find less dependent tests but can achieve a better benefit/cost ratio.
By resetting the random seed to a repeatable number for each test, tests can create data based on random numbers and yet remain repeatable, for example factory boy's fuzzy values. This is good for ensuring that tests specify the data they need and that the tested system is not affected by any data that is filled in randomly due to not being specified.
Additionally, I appeared on the Test and Code podcast to talk about pytest-randomly.
Install from pip with:
python -m pip install pytest-randomly
Python 3.5 to 3.9 supported.
Testing a Django project? Check out my book Speed Up Your Django Tests which covers loads of best practices so you can write faster, more accurate tests.
Pytest will automatically find the plugin and use it when you run pytest. The output will start with an extra line that tells you the random seed that is being used:
$ pytest
...
platform darwin -- Python 3.7.2, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
Using --randomly-seed=1553614239
...
If the tests fail due to ordering or randomly created data, you can restart them with that seed using the flag as suggested:

pytest --randomly-seed=1553614239

Or more conveniently, use the special value last:

pytest --randomly-seed=last
Since the ordering is by module, then by class, you can debug inter-test pollution failures by narrowing down which tests are being run to find the bad interaction by rerunning just the module/class:
pytest --randomly-seed=1234 tests/module_that_failed/
You can disable behaviours you don't like with the following flags:
--randomly-dont-reset-seed - turn off the reset of random.seed() at the start of every test
--randomly-dont-reorganize - turn off the shuffling of the order of tests
The plugin appears to Pytest with the name 'randomly'. To disable it
altogether, you can use the
-p argument, for example:
pytest -p no:randomly
If you're using a different randomness generator in your third party package, you can register an entrypoint to be called every time pytest-randomly reseeds. Implement the entrypoint pytest_randomly.random_seeder, pointing to a function/callable that takes one argument, the new seed (int).
For example, in your setup.cfg:

[options.entry_points]
pytest_randomly.random_seeder =
    mypackage = mypackage.reseed
Sai Zhang, Darioush Jalali, Jochen Wuttke, Kıvanç Muşlu, Wing Lam, Michael D. Ernst, and David Notkin. 2014. Empirically revisiting the test independence assumption. In Proceedings of the 2014 International Symposium on Software Testing and Analysis (ISSTA 2014). Association for Computing Machinery, New York, NY, USA, 385-396. https://doi.org/10.1145/2610384.2610404