reference.rst

API Reference

This page contains the full reference to pytest's API.

Functions

pytest.approx

pytest.approx
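A minimal sketch of how approx is typically used: it compares numbers within a tolerance (relative 1e-6 by default), avoiding exact-equality failures caused by binary floating-point rounding.

```python
import pytest

# 0.1 + 0.2 is not exactly 0.3 in binary floating point;
# approx compares within a relative tolerance (1e-6 by default).
assert 0.1 + 0.2 == pytest.approx(0.3)

# approx also works elementwise on sequences.
assert [0.1 + 0.2, 0.2 + 0.4] == pytest.approx([0.3, 0.6])
```

The tolerance can be tuned with the rel and abs keyword arguments, e.g. pytest.approx(0.3, rel=1e-3).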

pytest.fail

Tutorial: skipping

pytest.fail

pytest.skip

pytest.skip(msg, [allow_module_level=False])
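A sketch of imperative skipping inside a test body, for conditions that can only be checked at runtime (the platform check here is just an illustrative example):

```python
import sys

import pytest


def test_posix_paths():
    # Imperative skip: mark this test as skipped when the
    # precondition is not met at runtime.
    if sys.platform == "win32":
        pytest.skip("requires a POSIX platform")
    import posixpath
    assert posixpath.join("a", "b") == "a/b"
```

With allow_module_level=True, pytest.skip may also be called at module level to skip everything in the module.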

pytest.importorskip

pytest.importorskip
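A sketch of the usual importorskip pattern: it returns the imported module, or skips the calling test (or, at module level, the whole module) when the import fails. sqlite3 is used here only because it is always importable.

```python
import pytest

# Returns the module on success; skips instead of erroring on ImportError.
sqlite3 = pytest.importorskip("sqlite3")


def test_connect_in_memory():
    conn = sqlite3.connect(":memory:")
    assert conn.execute("select 1").fetchone() == (1,)
    conn.close()
```

A minversion argument can additionally skip when the module's __version__ is too old.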

pytest.xfail

pytest.xfail

pytest.exit

pytest.exit

pytest.main

pytest.main

pytest.param

pytest.param(*values, [id], [marks])
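A sketch of pytest.param inside a parametrize decorator, showing the id and marks keywords (the test itself is a made-up example):

```python
import pytest


@pytest.mark.parametrize(
    "a,b,expected",
    [
        (1, 2, 3),
        # id= overrides the auto-generated test id for this parameter set.
        pytest.param(2, 3, 5, id="two-plus-three"),
        # marks= attaches marks to a single parameter set.
        pytest.param(1, 0, None, marks=pytest.mark.skip(reason="undefined")),
    ],
)
def test_add(a, b, expected):
    assert a + b == expected
```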

pytest.raises

Tutorial: assertraises.

pytest.raises(expected_exception: Exception [, *, match])
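A minimal sketch of pytest.raises as a context manager; divide is a made-up helper for illustration. The match argument is applied with re.search against the string representation of the exception.

```python
import pytest


def divide(a, b):
    if b == 0:
        raise ZeroDivisionError("division by zero")
    return a / b


def test_divide_by_zero():
    # Fails the test unless the block raises ZeroDivisionError
    # whose message matches the given pattern.
    with pytest.raises(ZeroDivisionError, match="division by zero"):
        divide(1, 0)
```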

pytest.deprecated_call

Tutorial: ensuring_function_triggers.

pytest.deprecated_call()
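A sketch of asserting that calling a (made-up) legacy function triggers a deprecation warning:

```python
import warnings

import pytest


def legacy_api():
    warnings.warn("use new_api() instead", DeprecationWarning)
    return 42


def test_legacy_api_warns():
    # Fails unless a DeprecationWarning or PendingDeprecationWarning
    # is emitted inside the block.
    with pytest.deprecated_call():
        assert legacy_api() == 42
```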

pytest.register_assert_rewrite

Tutorial: assertion-rewriting.

pytest.register_assert_rewrite

pytest.warns

Tutorial: assertwarnings

pytest.warns(expected_warning: Exception, [match])
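A sketch of pytest.warns checking both the warning category and, via match, its message (the warning text here is invented for the example):

```python
import warnings

import pytest


def test_config_warning():
    # Fails unless a UserWarning matching the pattern is emitted.
    with pytest.warns(UserWarning, match="deprecated config"):
        warnings.warn("deprecated config key 'foo'", UserWarning)
```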

pytest.freeze_includes

Tutorial: freezing-pytest.

pytest.freeze_includes

Marks

Marks can be used to apply metadata to test functions (but not fixtures), which can then be accessed by fixtures or plugins.

pytest.mark.filterwarnings

Tutorial: filterwarnings.

Add warning filters to marked test items.

pytest.mark.parametrize

Tutorial: parametrize.

This mark has the same signature as Metafunc.parametrize <_pytest.python.Metafunc.parametrize>; see there.

pytest.mark.skip

Tutorial: skip.

Unconditionally skip a test function.

pytest.mark.skipif

Tutorial: skipif.

Skip a test function if a condition is True.

pytest.mark.usefixtures

Tutorial: usefixtures.

Mark a test function as using the given fixture names.

Note

When using usefixtures in hooks, it can only load fixtures when applied to a test function before test setup (for example in the pytest_collection_modifyitems hook).

Also note that this mark has no effect when applied to fixtures.

pytest.mark.xfail

Tutorial: xfail.

Marks a test function as expected to fail.

custom marks

Marks are created dynamically using the factory object pytest.mark and applied as a decorator.

For example:

@pytest.mark.timeout(10, "slow", method="thread")
def test_function():
    ...

Will create and attach a Mark <_pytest.mark.structures.Mark> object to the collected Item <_pytest.nodes.Item>, which can then be accessed by fixtures or hooks with Node.iter_markers <_pytest.nodes.Node.iter_markers>. The mark object will have the following attributes:

mark.args == (10, "slow")
mark.kwargs == {"method": "thread"}

Fixtures

Tutorial: fixture.

Fixtures are requested by test functions or other fixtures by declaring them as argument names.

Example of a test requiring a fixture:

def test_output(capsys):
    print("hello")
    out, err = capsys.readouterr()
    assert out == "hello\n"

Example of a fixture requiring another fixture:

@pytest.fixture
def db_session(tmpdir):
    fn = tmpdir / "db.file"
    return connect(str(fn))

For more details, consult the full fixtures docs <fixture>.

@pytest.fixture

pytest.fixture

cache

config.cache

Tutorial: cache.

The config.cache object allows other plugins and fixtures to store and retrieve values across test runs. To access it from fixtures request pytestconfig into your fixture and get it with pytestconfig.cache.

Under the hood, the cache plugin uses the simple dumps/loads API of the json stdlib module.

_pytest.cacheprovider

Cache.get

Cache.set

Cache.makedir

capsys

capsys

Tutorial: capture.

_pytest.capture

capsys()

Returns an instance of CaptureFixture.

Example:

def test_output(capsys):
    print("hello")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"

CaptureFixture()

capsysbinary

capsysbinary

Tutorial: capture.

capsysbinary()

Returns an instance of CaptureFixture.

Example:

def test_output(capsysbinary):
    print("hello")
    captured = capsysbinary.readouterr()
    assert captured.out == b"hello\n"

capfd

capfd

Tutorial: capture.

capfd()

Returns an instance of CaptureFixture.

Example:

def test_system_echo(capfd):
    os.system('echo "hello"')
    captured = capfd.readouterr()
    assert captured.out == "hello\n"

capfdbinary

capfdbinary

Tutorial: capture.

capfdbinary()

Returns an instance of CaptureFixture.

Example:

def test_system_echo(capfdbinary):
    os.system('echo "hello"')
    captured = capfdbinary.readouterr()
    assert captured.out == b"hello\n"

doctest_namespace

doctest_namespace

Tutorial: doctest.

_pytest.doctest.doctest_namespace()

Usually this fixture is used in conjunction with another autouse fixture:

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace["np"] = numpy

For more details: doctest_namespace.

request

request

Tutorial: request example.

The request fixture is a special fixture providing information about the requesting test function.

_pytest.fixtures.FixtureRequest()

pytestconfig

pytestconfig

_pytest.fixtures.pytestconfig()

record_property

record_property

Tutorial: record_property example.

_pytest.junitxml.record_property()

record_testsuite_property

record_testsuite_property

Tutorial: record_testsuite_property example.

_pytest.junitxml.record_testsuite_property()

caplog

caplog

Tutorial: logging.

_pytest.logging.caplog()

This returns a _pytest.logging.LogCaptureFixture instance.

_pytest.logging.LogCaptureFixture

monkeypatch

monkeypatch

_pytest.monkeypatch

Tutorial: monkeypatch.

_pytest.monkeypatch.monkeypatch()

This returns a MonkeyPatch instance.

_pytest.monkeypatch.MonkeyPatch
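A sketch of a typical monkeypatch use: temporarily setting an environment variable for the duration of one test. get_upload_dir is a hypothetical helper invented for this example; all modifications are undone automatically after the test finishes.

```python
import os


# Hypothetical helper, defined here only for the example.
def get_upload_dir():
    return os.environ.get("UPLOAD_DIR", "/tmp")


def test_upload_dir(monkeypatch):
    # setenv is reverted automatically during test teardown.
    monkeypatch.setenv("UPLOAD_DIR", "/data/uploads")
    assert get_upload_dir() == "/data/uploads"
```

Other commonly used methods include setattr, delattr, setitem, delitem, chdir and syspath_prepend.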

testdir

testdir

_pytest.pytester

This fixture provides a Testdir instance, useful for black-box testing of test files, making it ideal for testing plugins.

To use it, include in your top-most conftest.py file:

pytest_plugins = "pytester"

Testdir()

RunResult()

LineMatcher()

recwarn

recwarn

Tutorial: assertwarnings

_pytest.recwarn

recwarn()

_pytest.recwarn.WarningsRecorder()

Each recorded warning is an instance of warnings.WarningMessage.

Note

RecordedWarning was changed from a plain class to a namedtuple in pytest 3.1

Note

DeprecationWarning and PendingDeprecationWarning are treated differently; see ensuring_function_triggers.

tmp_path

tmp_path

Tutorial: tmpdir

_pytest.tmpdir

tmp_path()
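A sketch of a test using tmp_path: the fixture provides a pathlib.Path pointing at a fresh temporary directory unique to this test invocation.

```python
def test_write_and_read(tmp_path):
    # tmp_path is a pathlib.Path to a per-test temporary directory.
    f = tmp_path / "hello.txt"
    f.write_text("hello")
    assert f.read_text() == "hello"
    assert [p.name for p in tmp_path.iterdir()] == ["hello.txt"]
```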

tmp_path_factory

tmp_path_factory

Tutorial: tmp_path_factory example

tmp_path_factory instances have the following methods:

_pytest.tmpdir

TempPathFactory.mktemp

TempPathFactory.getbasetemp

tmpdir

tmpdir

Tutorial: tmpdir

_pytest.tmpdir

tmpdir()

tmpdir_factory

tmpdir_factory

Tutorial: tmpdir factory example

tmpdir_factory instances have the following methods:

_pytest.tmpdir

TempdirFactory.mktemp

TempdirFactory.getbasetemp

Hooks

Tutorial: writing_plugins.

_pytest.hookspec

Reference to all hooks which can be implemented by conftest.py files <localplugin> and plugins <plugins>.

Bootstrapping hooks

Bootstrapping hooks called for plugins registered early enough (internal and setuptools plugins).

pytest_load_initial_conftests

pytest_cmdline_preparse

pytest_cmdline_parse

pytest_cmdline_main

Initialization hooks

Initialization hooks called for plugins and conftest.py files.

pytest_addoption

pytest_addhooks

pytest_configure

pytest_unconfigure

pytest_sessionstart

pytest_sessionfinish

pytest_plugin_registered

Collection hooks

pytest calls the following hooks for collecting files and directories:

pytest_collection

pytest_ignore_collect

pytest_collect_directory

pytest_collect_file

pytest_pycollect_makemodule

For influencing the collection of objects in Python modules you can use the following hook:

pytest_pycollect_makeitem

pytest_generate_tests

pytest_make_parametrize_id

After collection is complete, you can modify the order of items, delete or otherwise amend the test items:

pytest_collection_modifyitems

pytest_collection_finish
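A minimal conftest.py sketch of a post-collection hook: pytest_collection_modifyitems reordering items so that tests carrying a hypothetical "slow" marker run last. The marker name is an assumption for illustration, not something pytest defines.

```python
# conftest.py -- sketch; "slow" is an assumed custom marker.
def pytest_collection_modifyitems(config, items):
    # Stable in-place sort: unmarked tests keep their relative
    # order and run first, "slow"-marked tests run last.
    items.sort(key=lambda item: 1 if item.get_closest_marker("slow") else 0)
```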

Test running (runtest) hooks

All runtest related hooks receive a pytest.Item <_pytest.main.Item> object.

pytest_runtestloop

pytest_runtest_protocol

pytest_runtest_logstart

pytest_runtest_logfinish

pytest_runtest_setup

pytest_runtest_call

pytest_runtest_teardown

pytest_runtest_makereport

For deeper understanding you may look at the default implementation of these hooks in _pytest.runner and maybe also in _pytest.pdb, which interacts with _pytest.capture and its input/output capturing in order to immediately drop into interactive debugging when a test failure occurs.

pytest_pyfunc_call

Reporting hooks

Session related reporting hooks:

pytest_collectstart

pytest_make_collect_report

pytest_itemcollected

pytest_collectreport

pytest_deselected

pytest_report_header

pytest_report_collectionfinish

pytest_report_teststatus

pytest_terminal_summary

pytest_fixture_setup

pytest_fixture_post_finalizer

pytest_warning_captured

pytest_warning_recorded

Central hook for reporting about test execution:

pytest_runtest_logreport

Assertion related hooks:

pytest_assertrepr_compare

pytest_assertion_pass

Debugging/Interaction hooks

There are a few hooks which can be used for special reporting or interaction with exceptions:

pytest_internalerror

pytest_keyboard_interrupt

pytest_exception_interact

pytest_enter_pdb

Objects

Full reference to objects accessible from fixtures <fixture> or hooks <hook-reference>.

CallInfo

_pytest.runner.CallInfo()

Class

_pytest.python.Class()

Collector

_pytest.nodes.Collector()

CollectReport

_pytest.reports.CollectReport()

Config

_pytest.config.Config()

ExceptionInfo

_pytest._code.ExceptionInfo

ExitCode

_pytest.config.ExitCode

File

_pytest.nodes.File()

FixtureDef

_pytest.fixtures.FixtureDef()

FSCollector

_pytest.nodes.FSCollector()

Function

_pytest.python.Function()

Item

_pytest.nodes.Item()

MarkDecorator

_pytest.mark.MarkDecorator

MarkGenerator

_pytest.mark.MarkGenerator

Mark

_pytest.mark.structures.Mark

Metafunc

_pytest.python.Metafunc

Module

_pytest.python.Module()

Node

_pytest.nodes.Node()

Parser

_pytest.config.argparsing.Parser()

PluginManager

pluggy.PluginManager()

PytestPluginManager

_pytest.config.PytestPluginManager()

Session

_pytest.main.Session()

TestReport

_pytest.reports.TestReport()

_Result

Result used within hook wrappers <hookwrapper>.

pluggy.callers._Result

pluggy.callers._Result.get_result

pluggy.callers._Result.force_result

Global Variables

pytest treats some global variables in a special manner when defined in a test module or conftest.py files.

collect_ignore

Tutorial: customizing-test-collection

Can be declared in conftest.py files to exclude test directories or modules. Needs to be list[str].

collect_ignore = ["setup.py"]

collect_ignore_glob

Tutorial: customizing-test-collection

Can be declared in conftest.py files to exclude test directories or modules with Unix shell-style wildcards. Needs to be list[str] where str can contain glob patterns.

collect_ignore_glob = ["*_ignore.py"]

pytest_plugins

Tutorial: available installable plugins

Can be declared at the global level in test modules and conftest.py files to register additional plugins. Can be either a str or Sequence[str].

pytest_plugins = "myapp.testsupport.myplugin"
pytest_plugins = ("myapp.testsupport.tools", "myapp.testsupport.regression")

pytestmark

Tutorial: scoped-marking

Can be declared at the global level in test modules to apply one or more marks <marks ref> to all test functions and methods. Can be either a single mark or a list of marks (applied in left-to-right order).

import pytest

pytestmark = pytest.mark.webtest

import pytest

pytestmark = [pytest.mark.integration, pytest.mark.slow]

Environment Variables

Environment variables that can be used to change pytest's behavior.

PYTEST_ADDOPTS

This contains a command line (parsed by the shlex module) that will be prepended to the command line given by the user; see adding default options for more information.

PYTEST_CURRENT_TEST

This is not meant to be set by users, but is set by pytest internally with the name of the current test so other processes can inspect it, see pytest current test env for more information.

PYTEST_DEBUG

When set, pytest will print tracing and debug information.

PYTEST_DISABLE_PLUGIN_AUTOLOAD

When set, disables plugin auto-loading through setuptools entrypoints. Only explicitly specified plugins will be loaded.

PYTEST_PLUGINS

Contains a comma-separated list of modules that should be loaded as plugins:

export PYTEST_PLUGINS=mymodule.plugin,xdist

PY_COLORS

When set to 1, pytest will use color in terminal output. When set to 0, pytest will not use color. PY_COLORS takes precedence over NO_COLOR and FORCE_COLOR.

NO_COLOR

When set (regardless of value), pytest will not use color in terminal output. PY_COLORS takes precedence over NO_COLOR, which takes precedence over FORCE_COLOR. See no-color.org for other libraries supporting this community standard.

FORCE_COLOR

When set (regardless of value), pytest will use color in terminal output. PY_COLORS and NO_COLOR take precedence over FORCE_COLOR.

Exceptions

_pytest.config.UsageError()

Warnings

Custom warnings generated in some situations such as improper usage or deprecated features.

pytest.PytestWarning

pytest.PytestAssertRewriteWarning

pytest.PytestCacheWarning

pytest.PytestCollectionWarning

pytest.PytestConfigWarning

pytest.PytestDeprecationWarning

pytest.PytestExperimentalApiWarning

pytest.PytestUnhandledCoroutineWarning

pytest.PytestUnknownMarkWarning

Consult the internal-warnings section in the documentation for more information.

Configuration Options

Here is a list of builtin configuration options that may be written in a pytest.ini, pyproject.toml, tox.ini or setup.cfg file, usually located at the root of your repository. To see each file format in detail, see config file formats.

Warning

Usage of setup.cfg is not recommended except for very simple use cases. .cfg files use a different parser than pytest.ini and tox.ini, which might cause hard-to-track-down problems. When possible, it is recommended to use the latter files, or pyproject.toml, to hold your pytest configuration.

Configuration options may be overridden on the command line by using -o/--override-ini, which can also be passed multiple times. The expected format is name=value. For example:

pytest -o console_output_style=classic -o cache_dir=/tmp/mycache

addopts

Add the specified OPTS to the set of command line arguments as if they had been specified by the user. Example: if you have this ini file content:

# content of pytest.ini
[pytest]
addopts = --maxfail=2 -rf  # exit after 2 failures, report fail info

issuing pytest test_hello.py actually means:

pytest --maxfail=2 -rf test_hello.py

Default is to add no options.

cache_dir

Sets the directory where the cache plugin stores its content. The default directory is .pytest_cache, which is created in rootdir <rootdir>. The path may be relative or absolute; a relative path is created relative to rootdir <rootdir>. Additionally, the path may contain environment variables, which will be expanded. For more information about the cache plugin please refer to cache_provider.

confcutdir

Sets a directory in which the upward search for conftest.py files stops. By default, pytest stops searching for conftest.py files upwards from the pytest.ini/tox.ini/setup.cfg file of the project, if any, or up to the file-system root.

console_output_style

Sets the console output style while running tests:

  • classic: classic pytest output.
  • progress: like classic pytest output, but with a progress indicator.
  • count: like progress, but shows progress as the number of tests completed instead of a percent.

The default is progress, but you can fall back to classic if you prefer or if the new mode causes unexpected problems:

# content of pytest.ini
[pytest]
console_output_style = classic

doctest_encoding

Default encoding to use to decode text files with docstrings. See how pytest handles doctests <doctest>.

doctest_optionflags

One or more doctest flag names from the standard doctest module. See how pytest handles doctests <doctest>.

empty_parameter_set_mark

Allows picking the action for empty parameter sets in parametrization:

  • skip skips tests with an empty parameterset (default)
  • xfail marks tests with an empty parameterset as xfail(run=False)
  • fail_at_collect raises an exception if parametrize collects an empty parameter set
# content of pytest.ini
[pytest]
empty_parameter_set_mark = xfail

Note

The default value of this option is planned to change to xfail in future releases as this is considered less error prone, see #3155 for more details.

faulthandler_timeout

Dumps the tracebacks of all threads if a test takes longer than X seconds to run (including fixture setup and teardown). Implemented using the faulthandler.dump_traceback_later function, so all caveats there apply.

# content of pytest.ini
[pytest]
faulthandler_timeout=5

For more information please refer to faulthandler.

filterwarnings

Sets a list of filters and actions that should be taken for matched warnings. By default all warnings emitted during the test session will be displayed in a summary at the end of the test session.

# content of pytest.ini
[pytest]
filterwarnings =
    error
    ignore::DeprecationWarning

This tells pytest to ignore deprecation warnings and turn all other warnings into errors. For more information please refer to warnings.

junit_duration_report

New in version 4.1.

Configures how durations are recorded into the JUnit XML report:

  • total (the default): duration times reported include setup, call, and teardown times.
  • call: duration times reported include only call times, excluding setup and teardown.
[pytest]
junit_duration_report = call

junit_family

New in version 4.2.

Configures the format of the generated JUnit XML file. The possible options are:

  • xunit1 (or legacy): produces old style output, compatible with the xunit 1.0 format. This is the default.
  • xunit2: produces xunit 2.0 style output, which should be more compatible with recent Jenkins versions.

[pytest]
junit_family = xunit2

junit_logging

New in version 3.5.

Changed in version 5.4: the log, all and out-err options were added.

Configures whether captured output should be written to the JUnit XML file. Valid values are:

  • log: write only logging captured output.
  • system-out: write captured stdout contents.
  • system-err: write captured stderr contents.
  • out-err: write both captured stdout and stderr contents.
  • all: write captured logging, stdout and stderr contents.
  • no (the default): no captured output is written.
[pytest]
junit_logging = system-out

junit_log_passing_tests

4.6

If junit_logging != "no", configures whether the captured output should be written to the JUnit XML file for passing tests. Default is True.

[pytest]
junit_log_passing_tests = False

junit_suite_name

To set the name of the root test suite xml item, you can configure the junit_suite_name option in your config file:

[pytest]
junit_suite_name = my_suite

log_auto_indent

Allow selective auto-indentation of multiline log messages.

Supports command line option --log-auto-indent [value] and config option log_auto_indent = [value] to set the auto-indentation behavior for all logging.

[value] can be:
  • True or "On" - Dynamically auto-indent multiline log messages
  • False or "Off" or 0 - Do not auto-indent multiline log messages (the default behavior)
  • [positive integer] - auto-indent multiline log messages by [value] spaces
[pytest]
log_auto_indent = False

Supports passing kwarg extra={"auto_indent": [value]} to calls to logging.log() to specify auto-indentation behavior for a specific entry in the log. extra kwarg overrides the value specified on the command line or in the config.

log_cli

Enable log display during test run (also known as "live logging" <live_logs>). The default is False.

[pytest]
log_cli = True

log_cli_date_format

Sets a time.strftime-compatible string that will be used when formatting dates for live logging.

[pytest]
log_cli_date_format = %Y-%m-%d %H:%M:%S

For more information, see live_logs.

log_cli_format

Sets a logging-compatible string used to format live logging messages.

[pytest]
log_cli_format = %(asctime)s %(levelname)s %(message)s

For more information, see live_logs.

log_cli_level

Sets the minimum log message level that should be captured for live logging. The integer value or the names of the levels can be used.

[pytest]
log_cli_level = INFO

For more information, see live_logs.

log_date_format

Sets a time.strftime-compatible string that will be used when formatting dates for logging capture.

[pytest]
log_date_format = %Y-%m-%d %H:%M:%S

For more information, see logging.

log_file

Sets a file name relative to the pytest.ini file where log messages should be written to, in addition to the other logging facilities that are active.

[pytest]
log_file = logs/pytest-logs.txt

For more information, see logging.

log_file_date_format

Sets a time.strftime-compatible string that will be used when formatting dates for the logging file.

[pytest]
log_file_date_format = %Y-%m-%d %H:%M:%S

For more information, see logging.

log_file_format

Sets a logging-compatible string used to format logging messages redirected to the logging file.

[pytest]
log_file_format = %(asctime)s %(levelname)s %(message)s

For more information, see logging.

log_file_level

Sets the minimum log message level that should be captured for the logging file. The integer value or the names of the levels can be used.

[pytest]
log_file_level = INFO

For more information, see logging.

log_format

Sets a logging-compatible string used to format captured logging messages.

[pytest]
log_format = %(asctime)s %(levelname)s %(message)s

For more information, see logging.

log_level

Sets the minimum log message level that should be captured for logging capture. The integer value or the names of the levels can be used.

[pytest]
log_level = INFO

For more information, see logging.

log_print

If set to False, disables displaying captured logging messages for failed tests.

[pytest]
log_print = False

For more information, see logging.

markers

When the --strict-markers or --strict command-line arguments are used, only known markers - defined in code by core pytest or some plugin - are allowed.

You can list additional markers in this setting to add them to the whitelist, in which case you probably want to add --strict-markers to addopts to avoid future regressions:

[pytest]
addopts = --strict-markers
markers =
    slow
    serial

Note

The use of --strict-markers is highly preferred. --strict was kept for backward compatibility only and may be confusing for others as it only applies to markers and not to other options.

minversion

Specifies a minimal pytest version required for running tests.

# content of pytest.ini
[pytest]
minversion = 3.0  # will fail if we run with pytest-2.8

norecursedirs

Set the directory basename patterns to avoid when recursing for test discovery. The individual (fnmatch-style) patterns are applied to the basename of a directory to decide whether to recurse into it. Pattern matching characters:

*       matches everything
?       matches any single character
[seq]   matches any character in seq
[!seq]  matches any char not in seq

Default patterns are '.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg', 'venv'. Setting a norecursedirs replaces the default. Here is an example of how to avoid certain directories:

[pytest]
norecursedirs = .svn _build tmp*

This would tell pytest to not look into typical subversion or sphinx-build directories or into any tmp prefixed directory.

Additionally, pytest will attempt to intelligently identify and ignore a virtualenv by the presence of an activation script. Any directory deemed to be the root of a virtual environment will not be considered during test collection unless --collect-in-virtualenv is given. Note also that norecursedirs takes precedence over --collect-in-virtualenv; e.g. if you intend to run tests in a virtualenv with a base directory that matches '.*' you must override norecursedirs in addition to using the --collect-in-virtualenv flag.

python_classes

One or more name prefixes or glob-style patterns determining which classes are considered for test collection. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any class prefixed with Test as a test collection. Here is an example of how to collect tests from classes that end in Suite:

[pytest]
python_classes = *Suite

Note that unittest.TestCase derived classes are always collected regardless of this option, as unittest's own collection framework is used to collect those tests.

python_files

One or more glob-style file patterns determining which Python files are considered test modules. Search for multiple glob patterns by adding a space between patterns:

[pytest]
python_files = test_*.py check_*.py example_*.py

Or one per line:

[pytest]
python_files =
    test_*.py
    check_*.py
    example_*.py

By default, files matching test_*.py and *_test.py will be considered test modules.

python_functions

One or more name prefixes or glob-patterns determining which test functions and methods are considered tests. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any function prefixed with test as a test. Here is an example of how to collect test functions and methods that end in _test:

[pytest]
python_functions = *_test

Note that this has no effect on methods that live on a unittest.TestCase derived class, as unittest's own collection framework is used to collect those tests.

See change naming conventions for more detailed examples.

required_plugins

A space separated list of plugins that must be present for pytest to run. Plugins can be listed with or without version specifiers directly following their name. Whitespace between different version specifiers is not allowed. If any one of the plugins is not found, emit an error.

[pytest]
required_plugins = pytest-django>=3.0.0,<4.0.0 pytest-html pytest-xdist>=1.0.0

testpaths

Sets a list of directories that should be searched for tests when no specific directories, files or test ids are given on the command line when executing pytest from the rootdir <rootdir> directory. Useful when all project tests are in a known location, to speed up test collection and to avoid accidentally picking up undesired tests.

[pytest]
testpaths = testing doc

This tells pytest to only look for tests in testing and doc directories when executing from the root directory.

usefixtures

List of fixtures that will be applied to all test functions; this is semantically the same as applying the @pytest.mark.usefixtures marker to all test functions.

[pytest]
usefixtures =
    clean_db

xfail_strict

If set to True, tests marked with @pytest.mark.xfail that actually succeed will by default fail the test suite. For more information, see xfail strict tutorial.

[pytest]
xfail_strict = True