Release 4.3.0 #4805
Conversation
Merge master into features
This adds the `--ignore-glob` option to allow Unix-style wildcards so that `--ignore-glob=integration*` excludes all tests that reside in files starting with `integration`. Fixes: pytest-dev#3711
This adds the `collect_ignore_glob` option for `conftest.py` to allow Unix-style wildcards for excluding files.
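For illustration, a minimal sketch of how the new glob-based ignores might be used (the file names and patterns here are hypothetical):

```python
# conftest.py -- a hedged sketch, assuming pytest >= 4.3
# Skip collecting any file whose name matches these Unix-style globs;
# the patterns below are made-up examples.
collect_ignore_glob = ["integration*", "*_wip.py"]
```

The same effect is available on the command line via `pytest --ignore-glob='integration*'`.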
emit warning when pytest.warns receives unknown keyword arguments
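As a hedged sketch of the behavior this guards against: `match` is a recognized keyword of `pytest.warns`, while an unrecognized keyword (for example the typo `mtach=`) was previously ignored silently and now produces a warning:

```python
import warnings

import pytest


def test_deprecation_warning():
    # match= is a supported keyword argument of pytest.warns;
    # an unknown keyword (e.g. the hypothetical typo mtach=) now
    # triggers a warning instead of being silently dropped.
    with pytest.warns(UserWarning, match="deprecated"):
        warnings.warn("this API is deprecated", UserWarning)
```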
Merge master into features
Add ability to use globs when using --ignore
Add __repr__ for RunResult
tox: add generic xdist factor
This reverts commit eb92e57.
Display --help/--version with ArgumentErrors
AppVeyor: use xdist for py?? envs, drop py27/py37
Remove py27 py34 deprecation warning
Conflicts: tox.ini
Merge master into features
- This patch allows setting the log_file path from a hook.
Signed-off-by: Thomas Hisch
Signed-off-by: Andras Mitzki <andras.mitzki@balabit.com>
LoggingPlugin: Support to customize log_file from hook
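A hedged sketch of how a conftest.py hook might customize the log file per test. Looking the plugin up under the name "logging-plugin" and the `set_log_path` method are assumptions based on this change, not a documented stable API:

```python
import os

import pytest


@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_setup(item):
    # Assumption: the logging plugin is registered as "logging-plugin"
    # and exposes set_log_path() after this change.
    logging_plugin = item.config.pluginmanager.get_plugin("logging-plugin")
    os.makedirs("logs", exist_ok=True)
    logging_plugin.set_log_path(os.path.join("logs", item.name + ".log"))
    yield
```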
Codecov Report
@@            Coverage Diff            @@
##           master   #4805      +/-   ##
=========================================
- Coverage   95.65%   95.5%    -0.15%
=========================================
  Files         113     113
  Lines       25057   25141       +84
  Branches     2489    2494        +5
=========================================
+ Hits        23968   24011       +43
- Misses        768     799       +31
- Partials      321     331       +10
Continue to review full report at Codecov.
Codecov Report
@@            Coverage Diff            @@
##           master   #4805      +/-   ##
==========================================
+ Coverage   95.67%   95.69%   +0.01%
==========================================
  Files         113      113
  Lines       25057    25141      +84
  Branches     2489     2494       +5
==========================================
+ Hits        23973    24058      +85
- Misses        766      767       +1
+ Partials      318      316       -2
Continue to review full report at Codecov.
test_caching.py:17: AssertionError
-------------------------- Captured stdout setup ---------------------------
running expensive computation...
Why this change?
Not sure, it came from master, it seems.
If I run `tox -e regen` it appears again. The resulting diff is rather huge:
diff --git i/doc/en/assert.rst w/doc/en/assert.rst
index b119adcf..9076d9c7 100644
--- i/doc/en/assert.rst
+++ w/doc/en/assert.rst
@@ -32,17 +32,17 @@ you will see the return value of the function call:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
- test_assert1.py F [100%]
- ================================= FAILURES =================================
______________________________ test_function _______________________________
-
def test_function():
assert f() == 4
E + where 3 = f()
- test_assert1.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
@@ -167,12 +167,12 @@ if you run this module:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
- test_assert2.py F [100%]
- ================================= FAILURES =================================
___________________________ test_set_comparison ____________________________
-
def test_set_comparison():
    set1 = set("1308")
    set2 = set("8035")
@@ -183,7 +183,7 @@ if you run this module:
E Extra items in the right set:
E '5'
E Use -v to get the full diff
- test_assert2.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
@@ -238,14 +238,14 @@ the conftest file:
F [100%]
================================= FAILURES =================================
_______________________________ test_compare _______________________________
-
def test_compare():
    f1 = Foo(1)
    f2 = Foo(2)
assert f1 == f2
E vals: 1 != 2
- test_foocompare.py:11: AssertionError
1 failed in 0.12 seconds
diff --git i/doc/en/builtin.rst w/doc/en/builtin.rst
index a40dfc22..0acce74d 100644
--- i/doc/en/builtin.rst
+++ w/doc/en/builtin.rst
@@ -19,13 +19,13 @@ For information about fixtures, see :ref:fixtures
. To see a complete list of a
$ pytest -q --fixtures
cache
Return a cache object that can persist state between testing sessions.
-
cache.get(key, default)
cache.set(key, value)
-
Keys must be a ``/`` separated value, where the first part is usually the name of your plugin or application to avoid clashes with other cache users.
-
capsys
Values can be any object handled by the json stdlib module.
Enable capturing of writes to sys.stdout and sys.stderr and make
@@ -51,9 +51,9 @@ For information about fixtures, see :ref:fixtures
. To see a complete list of a
Fixture that returns a :py:class:dict
that will be injected into the namespace of doctests.
pytestconfig
Session-scoped fixture that returns the :class:_pytest.config.Config
object.
-
Example::
-
def test_foo(pytestconfig):
    if pytestconfig.getoption("verbose"):
        ...
@@ -63,9 +63,9 @@ For information about fixtures, see :ref:fixtures
. To see a complete list of a
configured reporters, like JUnit XML.
The fixture is callable with (name, value)
, with value being automatically
xml-encoded.
-
Example::
-
record_xml_attribute
def test_function(record_property):
    record_property("example_key", 1)
@@ -74,9 +74,9 @@ For information about fixtures, see :ref:fixtures
. To see a complete list of a
automatically xml-encoded
caplog
Access and control log capturing.
-
Captured logs are available through the following properties/methods::
-
* caplog.text -> string containing formatted log output
* caplog.records -> list of logging.LogRecord instances
* caplog.record_tuples -> list of (logger_name, level, message) tuples
@@ -84,7 +84,7 @@ For information about fixtures, see :ref:fixtures
. To see a complete list of a
monkeypatch
The returned monkeypatch
fixture provides these
helper methods to modify objects, dictionaries or os.environ::
-
monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
@@ -93,14 +93,14 @@ For information about fixtures, see :ref:fixtures
. To see a complete list of a
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)
-
recwarn
All modifications will be undone after the requesting test function or fixture has finished. The ``raising`` parameter determines if a KeyError or AttributeError will be raised if the set/deletion operation has no target.
Return a :class:WarningsRecorder
instance that records all warnings emitted by test functions.
-
tmpdir_factory
See http://docs.python.org/library/warnings.html for information on warning categories.
@@ -113,7 +113,7 @@ For information about fixtures, see :ref:fixtures
. To see a complete list of a
created as a sub directory of the base temporary
directory. The returned object is a py.path.local path object.
-
tmp_path
.. _`py.path.local`: https://py.readthedocs.io/en/latest/path.html
Return a temporary directory path object
@@ -121,11 +121,11 @@ For information about fixtures, see :ref:fixtures
. To see a complete list of a
created as a sub directory of the base temporary
directory. The returned object is a :class:pathlib.Path
object.
-
.. note::
-
in python < 3.6 this is a pathlib2.Path
- no tests ran in 0.12 seconds
You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like::
diff --git i/doc/en/cache.rst w/doc/en/cache.rst
index c5a81b13..6bc441f0 100644
--- i/doc/en/cache.rst
+++ w/doc/en/cache.rst
@@ -51,26 +51,26 @@ If you run this for the first time you will see two failures:
.................F.......F........................ [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________
- i = 17
-
@pytest.mark.parametrize("i", range(50)) def test_num(i): if i in (17, 25):
pytest.fail("bad luck")
- test_50.py:6: Failed
_______________________________ test_num[25] _______________________________
- i = 25
-
@pytest.mark.parametrize("i", range(50)) def test_num(i): if i in (17, 25):
pytest.fail("bad luck")
- test_50.py:6: Failed
2 failed, 48 passed in 0.12 seconds
@@ -85,31 +85,31 @@ If you then run it with --lf
:
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items / 48 deselected / 2 selected
run-last-failure: rerun previous 2 failures
- test_50.py FF [100%]
- ================================= FAILURES =================================
_______________________________ test_num[17] _______________________________
- i = 17
-
@pytest.mark.parametrize("i", range(50)) def test_num(i): if i in (17, 25):
pytest.fail("bad luck")
- test_50.py:6: Failed
_______________________________ test_num[25] _______________________________
- i = 25
-
@pytest.mark.parametrize("i", range(50)) def test_num(i): if i in (17, 25):
pytest.fail("bad luck")
- test_50.py:6: Failed
================= 2 failed, 48 deselected in 0.12 seconds ==================
@@ -129,31 +129,31 @@ of FF
and dots):
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items
run-last-failure: rerun previous 2 failures first
- test_50.py FF................................................ [100%]
- ================================= FAILURES =================================
_______________________________ test_num[17] _______________________________
- i = 17
-
@pytest.mark.parametrize("i", range(50)) def test_num(i): if i in (17, 25):
pytest.fail("bad luck")
- test_50.py:6: Failed
_______________________________ test_num[25] _______________________________
- i = 25
-
@pytest.mark.parametrize("i", range(50)) def test_num(i): if i in (17, 25):
pytest.fail("bad luck")
- test_50.py:6: Failed
=================== 2 failed, 48 passed in 0.12 seconds ====================
@@ -210,16 +210,18 @@ If you run this command for the first time, you can see the print statement:
F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________
- mydata = 42
-
def test_function(mydata):
assert mydata == 23
E -42
E +23
- test_caching.py:17: AssertionError
- -------------------------- Captured stdout setup ---------------------------
- running expensive computation...
1 failed in 0.12 seconds
If you run it a second time the value will be retrieved from
@@ -231,15 +233,15 @@ the cache and nothing will be printed:
F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________
- mydata = 42
-
def test_function(mydata):
assert mydata == 23
E -42
E +23
- test_caching.py:17: AssertionError
1 failed in 0.12 seconds
@@ -262,99 +264,19 @@ You can always peek at the content of the cache using the
cachedir: $PYTHON_PREFIX/.pytest_cache
------------------------------- cache values -------------------------------
cache/lastfailed contains:
-
{'a/test_db.py::test_a1': True,
-
'a/test_db2.py::test_a2': True,
-
'b/test_error.py::test_root': True,
-
'failure_demo.py::TestCustomAssertMsg::test_custom_repr': True,
-
'failure_demo.py::TestCustomAssertMsg::test_multiline': True,
-
'failure_demo.py::TestCustomAssertMsg::test_single_line': True,
-
'failure_demo.py::TestFailing::test_not': True,
-
'failure_demo.py::TestFailing::test_simple': True,
-
'failure_demo.py::TestFailing::test_simple_multiline': True,
-
'failure_demo.py::TestMoreErrors::test_compare': True,
-
'failure_demo.py::TestMoreErrors::test_complex_error': True,
-
'failure_demo.py::TestMoreErrors::test_global_func': True,
-
'failure_demo.py::TestMoreErrors::test_instance': True,
-
'failure_demo.py::TestMoreErrors::test_startswith': True,
-
'failure_demo.py::TestMoreErrors::test_startswith_nested': True,
-
'failure_demo.py::TestMoreErrors::test_try_finally': True,
-
'failure_demo.py::TestMoreErrors::test_z1_unpack_error': True,
-
'failure_demo.py::TestMoreErrors::test_z2_type_error': True,
-
'failure_demo.py::TestRaises::test_raise': True,
-
'failure_demo.py::TestRaises::test_raises': True,
-
'failure_demo.py::TestRaises::test_raises_doesnt': True,
-
'failure_demo.py::TestRaises::test_reinterpret_fails_with_print_for_the_fun_of_it': True,
-
'failure_demo.py::TestRaises::test_some_error': True,
-
'failure_demo.py::TestRaises::test_tupleerror': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_attrs': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_dataclass': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_dict': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_list': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_list_long': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_long_text': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_long_text_multiline': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_longer_list': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_multiline_text': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_set': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_similar_text': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_eq_text': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_in_list': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_not_in_text_multiline': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long': True,
-
'failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long_term': True,
-
'failure_demo.py::test_attribute': True,
-
'failure_demo.py::test_attribute_failure': True,
-
'failure_demo.py::test_attribute_instance': True,
-
'failure_demo.py::test_attribute_multiple': True,
-
'failure_demo.py::test_dynamic_compile_shows_nicely': True,
-
'failure_demo.py::test_generative[3-6]': True,
-
'test_50.py::test_num[17]': True,
-
'test_50.py::test_num[25]': True,
-
'test_anothersmtp.py::test_showhelo': True,
'test_assert1.py::test_function': True,
'test_assert2.py::test_set_comparison': True,
-
'test_backends.py::test_db_initialized[d2]': True,
'test_caching.py::test_function': True,
-
'test_checkconfig.py::test_something': True,
-
'test_class.py::TestClass::test_two': True,
-
'test_compute.py::test_compute[4]': True,
-
'test_example.py::test_error': True,
-
'test_example.py::test_fail': True,
-
'test_foocompare.py::test_compare': True,
-
'test_module.py::test_call_fails': True,
-
'test_module.py::test_ehlo': True,
-
'test_module.py::test_ehlo[mail.python.org]': True,
-
'test_module.py::test_ehlo[smtp.gmail.com]': True,
-
'test_module.py::test_event_simple': True,
-
'test_module.py::test_fail1': True,
-
'test_module.py::test_fail2': True,
-
'test_module.py::test_func2': True,
-
'test_module.py::test_interface_complex': True,
-
'test_module.py::test_interface_simple': True,
-
'test_module.py::test_noop': True,
-
'test_module.py::test_noop[mail.python.org]': True,
-
'test_module.py::test_noop[smtp.gmail.com]': True,
-
'test_module.py::test_setup_fails': True,
-
'test_parametrize.py::TestClass::test_equals[1-2]': True,
-
'test_sample.py::test_answer': True,
-
'test_show_warnings.py::test_one': True,
-
'test_simple.yml::hello': True,
-
'test_smtpsimple.py::test_ehlo': True,
-
'test_step.py::TestUserHandling::test_modification': True,
-
'test_strings.py::test_valid_string[!]': True,
-
'test_tmp_path.py::test_create_file': True,
-
'test_tmpdir.py::test_create_file': True,
-
'test_tmpdir.py::test_needsfiles': True,
-
'test_unittest_db.py::MyTest::test_method1': True,
-
'test_unittest_db.py::MyTest::test_method2': True}
-
cache/nodeids contains:
['test_caching.py::test_function']
cache/stepwise contains:
[]
example/value contains:
42
- ======================= no tests ran in 0.12 seconds =======================
Clearing Cache content
diff --git i/doc/en/capture.rst w/doc/en/capture.rst
index 78390034..cfbea9e3 100644
--- i/doc/en/capture.rst
+++ w/doc/en/capture.rst
@@ -71,16 +71,16 @@ of the failing function and hide the other one:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
- test_module.py .F [100%]
- ================================= FAILURES =================================
________________________________ test_func2 ________________________________
-
def test_func2():
assert False
- test_module.py:9: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>
diff --git i/doc/en/doctest.rst w/doc/en/doctest.rst
index 5aadc111..5d48deec 100644
--- i/doc/en/doctest.rst
+++ w/doc/en/doctest.rst
@@ -68,9 +68,9 @@ then you can just invoke pytest
without command line options:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 1 item
- mymodule.py . [100%]
- ========================= 1 passed in 0.12 seconds =========================
It is possible to use fixtures using the getfixture
helper::
diff --git i/doc/en/example/markers.rst w/doc/en/example/markers.rst
index 864c1e80..47e6904a 100644
--- i/doc/en/example/markers.rst
+++ w/doc/en/example/markers.rst
@@ -37,9 +37,9 @@ You can then restrict a test run to only run tests marked with webtest
:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected / 1 selected
- test_server.py::test_send_http PASSED [100%]
- ================== 1 passed, 3 deselected in 0.12 seconds ==================
Or the inverse, running all tests except the webtest ones:
@@ -52,11 +52,11 @@ Or the inverse, running all tests except the webtest ones:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected / 3 selected
- test_server.py::test_something_quick PASSED [ 33%]
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]
- ================== 3 passed, 1 deselected in 0.12 seconds ==================
Selecting tests based on their node ID
@@ -74,9 +74,9 @@ tests based on their module, class, method, or function name:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item
- test_server.py::TestClass::test_method PASSED [100%]
- ========================= 1 passed in 0.12 seconds =========================
You can also select on the class:
@@ -89,9 +89,9 @@ You can also select on the class:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item
- test_server.py::TestClass::test_method PASSED [100%]
- ========================= 1 passed in 0.12 seconds =========================
Or select multiple nodes:
@@ -104,10 +104,10 @@ Or select multiple nodes:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
- test_server.py::TestClass::test_method PASSED [ 50%]
test_server.py::test_send_http PASSED [100%]
- ========================= 2 passed in 0.12 seconds =========================
.. _node-id:
@@ -144,9 +144,9 @@ select tests based on their names:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected / 1 selected
- test_server.py::test_send_http PASSED [100%]
- ================== 1 passed, 3 deselected in 0.12 seconds ==================
And you can also run all tests except the ones that match the keyword:
@@ -159,11 +159,11 @@ And you can also run all tests except the ones that match the keyword:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected / 3 selected
- test_server.py::test_something_quick PASSED [ 33%]
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]
- ================== 3 passed, 1 deselected in 0.12 seconds ==================
Or to select "http" and "quick" tests:
@@ -176,10 +176,10 @@ Or to select "http" and "quick" tests:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 2 deselected / 2 selected
- test_server.py::test_send_http PASSED [ 50%]
test_server.py::test_something_quick PASSED [100%]
- ================== 2 passed, 2 deselected in 0.12 seconds ==================
.. note::
@@ -215,23 +215,23 @@ You can ask which markers exist for your test suite - the list includes our just
$ pytest --markers
@pytest.mark.webtest: mark a test as a webtest.
- @pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
- @pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
- @pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
- @pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/latest/skipping.html
- @pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/latest/skipping.html
- @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/latest/parametrize.html for more info and examples.
- @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/latest/fixture.html#usefixtures
- @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/latest/fixture.html#usefixtures
- @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
- @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
For an example on how to add and work with markers from a plugin, see
:ref:adding a custom marker from a plugin
.
@@ -368,9 +368,9 @@ the test needs:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
- test_someenv.py s [100%]
- ======================== 1 skipped in 0.12 seconds =========================
and here is one that specifies exactly the environment needed:
@@ -383,32 +383,32 @@ and here is one that specifies exactly the environment needed:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
- test_someenv.py . [100%]
- ========================= 1 passed in 0.12 seconds =========================
The --markers
option always gives you a list of available markers::
$ pytest --markers
@pytest.mark.env(name): mark test to run only on named environment
- @pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
- @pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
- @pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
- @pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/latest/skipping.html
- @pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/latest/skipping.html
- @pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/latest/parametrize.html for more info and examples.
- @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/latest/fixture.html#usefixtures
- @pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/latest/fixture.html#usefixtures
- @pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
- @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
.. _passing callables to custom markers
:
@@ -551,11 +551,11 @@ then you will see two tests skipped and two executed tests as expected:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
- test_plat.py s.s. [100%]
========================= short test summary info ==========================
SKIPPED [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux
- =================== 2 passed, 2 skipped in 0.12 seconds ====================
Note that if you specify a platform via the marker-command line option like this:
@@ -568,9 +568,9 @@ Note that if you specify a platform via the marker-command line option like this
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 3 deselected / 1 selected
- test_plat.py . [100%]
- ================== 1 passed, 3 deselected in 0.12 seconds ==================
then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.
@@ -622,9 +622,9 @@ We can now use the -m option
to select one set:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 2 deselected / 2 selected
- test_module.py FF [100%]
- ================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple
@@ -646,9 +646,9 @@ or to select both "event" and "interface" tests:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 1 deselected / 3 selected
- test_module.py FFF [100%]
- ================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple
diff --git i/doc/en/example/nonpython.rst w/doc/en/example/nonpython.rst
index bf7173ee..fe32fae2 100644
--- i/doc/en/example/nonpython.rst
+++ w/doc/en/example/nonpython.rst
@@ -33,9 +33,9 @@ now execute the test specification:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collected 2 items
- test_simple.yml F. [100%]
- ================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
@@ -68,10 +68,10 @@ consulted when reporting inverbose
mode:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collecting ... collected 2 items
- test_simple.yml::hello FAILED [ 50%]
test_simple.yml::ok PASSED [100%]
- ================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
@@ -96,5 +96,5 @@ interesting to just look at the collection tree:
- ======================= no tests ran in 0.12 seconds =======================
diff --git i/doc/en/example/parametrize.rst w/doc/en/example/parametrize.rst
index b5d4693a..adeec6e4 100644
--- i/doc/en/example/parametrize.rst
+++ w/doc/en/example/parametrize.rst
@@ -59,13 +59,13 @@ let's run the full monty:
....F [100%]
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________
- param1 = 4
-
def test_compute(param1):
assert param1 < 4
- test_compute.py:3: AssertionError
1 failed, 4 passed in 0.12 seconds
@@ -157,7 +157,7 @@ objects, they are still using the default pytest representation:
<Function test_timedistance_v2[20011211-20011212-expected1]>
<Function test_timedistance_v3[forward]>
<Function test_timedistance_v3[backward]>
- ======================= no tests ran in 0.12 seconds =======================
In test_timedistance_v3
, we used pytest.param
to specify the test IDs
@@ -207,9 +207,9 @@ this is a fully self-contained example which you can run with:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
- test_scenarios.py .... [100%]
- ========================= 4 passed in 0.12 seconds =========================
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:
@@ -228,7 +228,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
<Function test_demo2[basic]>
<Function test_demo1[advanced]>
<Function test_demo2[advanced]>
- ======================= no tests ran in 0.12 seconds =======================
Note that we told metafunc.parametrize()
that your scenario values
@@ -292,7 +292,7 @@ Let's first see how it looks like at collection time:
<Function test_db_initialized[d1]>
<Function test_db_initialized[d2]>
- ======================= no tests ran in 0.12 seconds =======================
And then when we run the test:
@@ -303,15 +303,15 @@ And then when we run the test:
.F [100%]
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________
- db = <conftest.DB2 object at 0xdeadbeef>
-
def test_db_initialized(db):
    # a dummy test
    if db.__class__.__name__ == "DB2":
        pytest.fail("deliberately failing for demo purposes")
- test_backends.py:6: Failed
1 failed, 1 passed in 0.12 seconds
@@ -357,7 +357,7 @@ The result of this test will be successful:
collected 1 item
<Function test_indirect[a-b]>
- ======================= no tests ran in 0.12 seconds =======================
.. regendoc:wipe
@@ -405,15 +405,15 @@ argument sets to use for each test function. Let's run it:
F.. [100%]
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________
- self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2
-
def test_equals(self, a, b):
assert a == b
E -1
E +2
- test_parametrize.py:18: AssertionError
1 failed, 2 passed in 0.12 seconds
@@ -436,10 +436,8 @@ Running it results in some skips if we don't have all the python interpreters in
.. code-block:: pytest
$ pytest -rs -q multipython.py
- ...sss...sssssssss...sss... [100%]
- ========================= short test summary info ==========================
- SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.4' not found
- 12 passed, 15 skipped in 0.12 seconds
- ........................... [100%]
- 27 passed in 0.12 seconds
Indirect parametrization of optional implementations/imports
@@ -492,11 +490,11 @@ If you run this with reporting for skips enabled:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
- test_module.py .s [100%]
========================= short test summary info ==========================
SKIPPED [1] $REGENDOC_TMPDIR/conftest.py:11: could not import 'opt2'
- =================== 1 passed, 1 skipped in 0.12 seconds ====================
You'll see that we don't have an opt2
module and thus the second test run
@@ -550,11 +548,11 @@ Then run pytest
with verbose mode and with only the basic
marker:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 17 items / 14 deselected / 3 selected
- test_pytest_param_example.py::test_eval[1+7-8] PASSED [ 33%]
test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
test_pytest_param_example.py::test_eval[basic_6*9] XFAIL [100%]
- ============ 2 passed, 14 deselected, 1 xfailed in 0.12 seconds ============
As the result:
diff --git i/doc/en/example/pythoncollection.rst w/doc/en/example/pythoncollection.rst
index 750bc58d..873c2e94 100644
--- i/doc/en/example/pythoncollection.rst
+++ w/doc/en/example/pythoncollection.rst
@@ -142,7 +142,7 @@ The test collection would look like this:
- ======================= no tests ran in 0.12 seconds =======================
You can check for multiple glob patterns by adding a space between the patterns::
@@ -199,7 +199,7 @@ You can always peek at the collection tree without running tests like this:
- ======================= no tests ran in 0.12 seconds =======================
.. _customizing-test-collection:
@@ -267,7 +267,7 @@ file will be left out:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items
- ======================= no tests ran in 0.12 seconds =======================
It's also possible to ignore files based on Unix shell-style wildcards by adding
diff --git i/doc/en/example/reportingdemo.rst w/doc/en/example/reportingdemo.rst
index 9fcc72ff..09761e1e 100644
--- i/doc/en/example/reportingdemo.rst
+++ w/doc/en/example/reportingdemo.rst
@@ -17,82 +17,82 @@ get on the terminal - we are working on that):
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/assertion, inifile:
collected 44 items
- failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [100%]
- ================================= FAILURES =================================
___________________________ test_generative[3-6] ___________________________
- param1 = 3, param2 = 6
-
@pytest.mark.parametrize("param1, param2", [(3, 6)]) def test_generative(param1, param2):
assert param1 * 2 < param2
- failure_demo.py:22: AssertionError
_________________________ TestFailing.test_simple __________________________
- self = <failure_demo.TestFailing object at 0xdeadbeef>
-
def test_simple(self):
    def f():
        return 42

    def g():
        return 43

    assert f() == g()
E + where 42 = <function TestFailing.test_simple..f at 0xdeadbeef>()
E + and 43 = <function TestFailing.test_simple..g at 0xdeadbeef>()
- failure_demo.py:33: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
- self = <failure_demo.TestFailing object at 0xdeadbeef>
-
def test_simple_multiline(self):
otherfunc_multi(42, 6 * 9)
- failure_demo.py:36:
-
- failure_demo.py:36:
-
- a = 42, b = 54
-
def otherfunc_multi(a, b):
assert a == b
- failure_demo.py:17: AssertionError
___________________________ TestFailing.test_not ___________________________
- self = <failure_demo.TestFailing object at 0xdeadbeef>
-
def test_not(self):
    def f():
        return 42

    assert not f()
E + where 42 = <function TestFailing.test_not..f at 0xdeadbeef>()
- failure_demo.py:42: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_text(self):
assert "spam" == "eggs"
E - spam
E + eggs
- failure_demo.py:47: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_similar_text(self):
assert "foo 1 bar" == "foo 2 bar"
@@ -100,12 +100,12 @@ get on the terminal - we are working on that):
E ? ^
E + foo 2 bar
E ? ^
- failure_demo.py:50: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_multiline_text(self):
assert "foo\nspam\nbar" == "foo\neggs\nbar"
@@ -113,12 +113,12 @@ get on the terminal - we are working on that):
E - spam
E + eggs
E bar
- failure_demo.py:53: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_long_text(self):
    a = "1" * 100 + "a" + "2" * 100
    b = "1" * 100 + "b" + "2" * 100
@@ -130,12 +130,12 @@ get on the terminal - we are working on that):
E ? ^
E + 1111111111b222222222
E ? ^
- failure_demo.py:58: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_long_text_multiline(self):
    a = "1\n" * 100 + "a" + "2\n" * 100
    b = "1\n" * 100 + "b" + "2\n" * 100
@@ -148,25 +148,25 @@ get on the terminal - we are working on that):
E 1
E 1
E 1...
- E
- E
E ...Full output truncated (7 lines hidden), use '-vv' to show
- failure_demo.py:63: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_list(self):
assert [0, 1, 2] == [0, 1, 3]
E At index 2 diff: 2 != 3
E Use -v to get the full diff
- failure_demo.py:66: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_list_long(self):
    a = [0] * 100 + [1] + [3] * 100
    b = [0] * 100 + [2] + [3] * 100
@@ -174,12 +174,12 @@ get on the terminal - we are working on that):
E assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
E At index 100 diff: 1 != 2
E Use -v to get the full diff
- failure_demo.py:71: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_dict(self):
assert {"a": 0, "b": 1, "c": 0} == {"a": 0, "b": 2, "d": 0}
@@ -190,14 +190,14 @@ get on the terminal - we are working on that):
E {'c': 0}
E Right contains more items:
E {'d': 0}...
- E
- E
E ...Full output truncated (2 lines hidden), use '-vv' to show
- failure_demo.py:74: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_set(self):
assert {0, 10, 11, 12} == {0, 20, 21}
@@ -208,34 +208,34 @@ get on the terminal - we are working on that):
E Extra items in the right set:
E 20
E 21...
- E
- E
E ...Full output truncated (2 lines hidden), use '-vv' to show
- failure_demo.py:77: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_longer_list(self):
assert [1, 2] == [1, 2, 3]
E Right contains more items, first extra item: 3
E Use -v to get the full diff
- failure_demo.py:80: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_in_list(self):
assert 1 in [0, 2, 3, 4, 5]
- failure_demo.py:83: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_not_in_text_multiline(self):
    text = "some multiline\ntext\nwhich\nincludes foo\nand a\ntail"
assert "foo" not in text
@@ -247,14 +247,14 @@ get on the terminal - we are working on that):
E includes foo
E ? +++
E and a...
- E
- E
E ...Full output truncated (2 lines hidden), use '-vv' to show
- failure_demo.py:87: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_not_in_text_single(self):
    text = "single foo line"
assert "foo" not in text
@@ -262,46 +262,46 @@ get on the terminal - we are working on that):
E 'foo' is contained here:
E single foo line
E ? +++
- failure_demo.py:91: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_not_in_text_single_long(self):
    text = "head " * 50 + "foo " + "tail " * 20
assert "foo" not in text
E 'foo' is contained here:
- E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
- E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? +++
- failure_demo.py:95: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_not_in_text_single_long_term(self):
    text = "head " * 50 + "f" * 70 + "tail " * 20
assert "f" * 70 not in text
E 'ffffffffffffffffff...fffffffffffffffffff' is contained here:
- E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
- E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- failure_demo.py:99: AssertionError
______________ TestSpecialisedExplanations.test_eq_dataclass _______________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_dataclass(self):
    from dataclasses import dataclass

    @dataclass
    class Foo(object):
        a: int
        b: str

    left = Foo(1, "b")
    right = Foo(1, "c")
assert left == right
@@ -309,20 +309,20 @@ get on the terminal - we are working on that):
E Omitting 1 identical items, use -vv to show
E Differing attributes:
E b: 'b' != 'c'
- failure_demo.py:111: AssertionError
________________ TestSpecialisedExplanations.test_eq_attrs _________________
- self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
-
def test_eq_attrs(self):
    import attr

    @attr.s
    class Foo(object):
        a = attr.ib()
        b = attr.ib()

    left = Foo(1, "b")
    right = Foo(1, "c")
assert left == right
@@ -330,136 +330,136 @@ get on the terminal - we are working on that):
E Omitting 1 identical items, use -vv to show
E Differing attributes:
E b: 'b' != 'c'
- failure_demo.py:123: AssertionError
______________________________ test_attribute ______________________________
-
def test_attribute():
    class Foo(object):
        b = 1

    i = Foo()
assert i.b == 2
E + where 1 = <failure_demo.test_attribute..Foo object at 0xdeadbeef>.b
- failure_demo.py:131: AssertionError
_________________________ test_attribute_instance __________________________
-
def test_attribute_instance():
    class Foo(object):
        b = 1

    assert Foo().b == 2
E + where 1 = <failure_demo.test_attribute_instance..Foo object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_instance..Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance..Foo'>()
- failure_demo.py:138: AssertionError
__________________________ test_attribute_failure __________________________
-
def test_attribute_failure():
    class Foo(object):
        def _get_b(self):
            raise Exception("Failed to get attrib")

        b = property(_get_b)

    i = Foo()
assert i.b == 2
- failure_demo.py:149:
-
- failure_demo.py:149:
-
- self = <failure_demo.test_attribute_failure..Foo object at 0xdeadbeef>
-
def _get_b(self):
raise Exception("Failed to get attrib")
- failure_demo.py:144: Exception
_________________________ test_attribute_multiple __________________________
-
def test_attribute_multiple():
    class Foo(object):
        b = 1

    class Bar(object):
        b = 2

    assert Foo().b == Bar().b
E + where 1 = <failure_demo.test_attribute_multiple..Foo object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_multiple..Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple..Foo'>()
E + and 2 = <failure_demo.test_attribute_multiple..Bar object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_multiple..Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple..Bar'>()
- failure_demo.py:159: AssertionError
__________________________ TestRaises.test_raises __________________________
- self = <failure_demo.TestRaises object at 0xdeadbeef>
-
def test_raises(self):
    s = "qwe"
raises(TypeError, int, s)
- failure_demo.py:169: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
- self = <failure_demo.TestRaises object at 0xdeadbeef>
-
def test_raises_doesnt(self):
raises(IOError, int, "3")
- failure_demo.py:172: Failed
__________________________ TestRaises.test_raise ___________________________
- self = <failure_demo.TestRaises object at 0xdeadbeef>
-
def test_raise(self):
raise ValueError("demo error")
- failure_demo.py:175: ValueError
________________________ TestRaises.test_tupleerror ________________________
- self = <failure_demo.TestRaises object at 0xdeadbeef>
-
def test_tupleerror(self):
a, b = [1] # NOQA
- failure_demo.py:178: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
- self = <failure_demo.TestRaises object at 0xdeadbeef>
-
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
    items = [1, 2, 3]
    print("items is %r" % items)
a, b = items.pop()
- failure_demo.py:183: TypeError
--------------------------- Captured stdout call ---------------------------
items is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
- self = <failure_demo.TestRaises object at 0xdeadbeef>
-
def test_some_error(self):
if namenotexi: # NOQA
- failure_demo.py:186: NameError
____________________ test_dynamic_compile_shows_nicely _____________________
-
def test_dynamic_compile_shows_nicely():
    import imp
    import sys

    src = "def foo():\n assert 1 == 0\n"
    name = "abc-123"
    module = imp.new_module(name)
@@ -467,65 +467,65 @@ get on the terminal - we are working on that):
six.exec_(code, module.__dict__)
sys.modules[name] = module
> module.foo()
- failure_demo.py:204:
-
- failure_demo.py:204:
-
-
def foo():
assert 1 == 0
E AssertionError
- <0-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:201>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
- self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
-
def test_complex_error(self):
    def f():
        return 44

    def g():
        return 43

    somefunc(f(), g())
- failure_demo.py:215:
-
- failure_demo.py:215:
-
failure_demo.py:13: in somefunc
otherfunc(x, y)
-
- a = 44, b = 43
-
def otherfunc(a, b):
assert a == b
- failure_demo.py:9: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
- self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
-
def test_z1_unpack_error(self):
    items = []
a, b = items
- failure_demo.py:219: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
- self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
-
def test_z2_type_error(self):
    items = 3
a, b = items
- failure_demo.py:223: TypeError
______________________ TestMoreErrors.test_startswith ______________________
- self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
-
def test_startswith(self):
    s = "123"
    g = "456"
@@ -533,93 +533,93 @@ get on the terminal - we are working on that):
E AssertionError: assert False
E + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
- failure_demo.py:228: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
- self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
-
def test_startswith_nested(self):
    def f():
        return "123"

    def g():
        return "456"

    assert f().startswith(g())
E + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
E + where '123' = <function TestMoreErrors.test_startswith_nested..f at 0xdeadbeef>()
E + and '456' = <function TestMoreErrors.test_startswith_nested..g at 0xdeadbeef>()
- failure_demo.py:237: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
- self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
-
def test_global_func(self):
assert isinstance(globf(42), float)
E + where False = isinstance(43, float)
E + where 43 = globf(42)
- failure_demo.py:240: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
- self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
-
def test_instance(self):
    self.x = 6 * 7
assert self.x != 42
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
- failure_demo.py:244: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
- self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
-
def test_compare(self):
assert globf(10) < 5
E + where 11 = globf(10)
- failure_demo.py:247: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
- self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
-
def test_try_finally(self):
    x = 1
    try:
assert x == 0
- failure_demo.py:252: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________
- self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
-
def test_single_line(self):
    class A(object):
        a = 1

    b = 2
assert A.a == b, "A.a appears not to be b"
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line..A'>.a
- failure_demo.py:263: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________
- self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
-
def test_multiline(self):
    class A(object):
        a = 1

    b = 2
assert (
A.a == b
@@ -629,19 +629,19 @@ get on the terminal - we are working on that):
E one of those
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline..A'>.a
- failure_demo.py:270: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________
- self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
-
def test_custom_repr(self):
    class JSON(object):
        a = 1

        def __repr__(self):
            return "This is JSON\n{\n 'foo': 'bar'\n}"

    a = JSON()
    b = 2
assert a.a == b, a
@@ -651,6 +651,6 @@ get on the terminal - we are working on that):
E }
E assert 1 == 2
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
- failure_demo.py:283: AssertionError
======================== 44 failed in 0.12 seconds =========================
diff --git i/doc/en/example/simple.rst w/doc/en/example/simple.rst
index 5904ea5a..87451fb2 100644
--- i/doc/en/example/simple.rst
+++ w/doc/en/example/simple.rst
@@ -51,9 +51,9 @@ Let's run this without supplying our new option:
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________
- cmdopt = 'type1'
-
def test_answer(cmdopt):
    if cmdopt == "type1":
        print("first")
@@ -61,7 +61,7 @@ Let's run this without supplying our new option:
print("second")
> assert 0 # to see what was printed
E assert 0
- test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
@@ -75,9 +75,9 @@ And now with supplying a command line option:
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________
- cmdopt = 'type2'
-
def test_answer(cmdopt):
    if cmdopt == "type1":
        print("first")
@@ -85,7 +85,7 @@ And now with supplying a command line option:
print("second")
> assert 0 # to see what was printed
E assert 0
- test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
@@ -131,7 +131,7 @@ directory with the above conftest.py:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
- ======================= no tests ran in 0.12 seconds =======================
.. _excontrolskip
:
@@ -192,11 +192,11 @@ and when running it will see a skipped "slow" test:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
- test_module.py .s [100%]
========================= short test summary info ==========================
SKIPPED [1] test_module.py:8: need --runslow option to run
- =================== 1 passed, 1 skipped in 0.12 seconds ====================
Or run it including the slow
marked test:
@@ -209,9 +209,9 @@ Or run it including the slow
marked test:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
- test_module.py .. [100%]
- ========================= 2 passed in 0.12 seconds =========================
Writing well integrated assertion helpers
@@ -251,11 +251,11 @@ Let's run our little function:
F [100%]
================================= FAILURES =================================
______________________________ test_something ______________________________
-
def test_something():
checkconfig(42)
- test_checkconfig.py:11: Failed
1 failed in 0.12 seconds
@@ -353,7 +353,7 @@ which will add the string to the test header accordingly:
project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
- ======================= no tests ran in 0.12 seconds =======================
.. regendoc:wipe
@@ -383,7 +383,7 @@ which will add info only when run with "--v":
did you?
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 0 items
- ======================= no tests ran in 0.12 seconds =======================
and nothing when run plainly:
@@ -396,7 +396,7 @@ and nothing when run plainly:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items
- ======================= no tests ran in 0.12 seconds =======================
profiling test duration
@@ -436,9 +436,9 @@ Now we can profile which test functions execute the slowest:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
- test_some_are_slow.py ... [100%]
- ========================= slowest 3 test durations =========================
0.30s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
@@ -511,18 +511,18 @@ If we run this:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
- test_step.py .Fx. [100%]
- ================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
- self = <test_step.TestUserHandling object at 0xdeadbeef>
-
def test_modification(self):
assert 0
- test_step.py:11: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::test_deletion
@@ -595,12 +595,12 @@ We can run this:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 7 items
- test_step.py .Fx. [ 57%]
a/test_db.py F [ 71%]
a/test_db2.py F [ 85%]
b/test_error.py E [100%]
- ================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file $REGENDOC_TMPDIR/b/test_error.py, line 1
@@ -608,37 +608,37 @@ We can run this:
E fixture 'db' not found
available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
use 'pytest --fixtures [testpath]' for help on them.
- $REGENDOC_TMPDIR/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
- self = <test_step.TestUserHandling object at 0xdeadbeef>
-
def test_modification(self):
assert 0
- test_step.py:11: AssertionError
_________________________________ test_a1 __________________________________
- db = <conftest.DB object at 0xdeadbeef>
-
def test_a1(db):
assert 0, db # to show value
E assert 0
- a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________
- db = <conftest.DB object at 0xdeadbeef>
-
def test_a2(db):
assert 0, db # to show value
E assert 0
- a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ==========
@@ -709,25 +709,25 @@ and run them:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
- test_module.py FF [100%]
- ================================= FAILURES =================================
________________________________ test_fail1 ________________________________
- tmpdir = local('PYTEST_TMPDIR/test_fail10')
-
def test_fail1(tmpdir):
assert 0
- test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________
-
def test_fail2():
assert 0
- test_module.py:6: AssertionError
========================= 2 failed in 0.12 seconds =========================
@@ -811,36 +811,36 @@ and run it:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
- test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F
- ================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________
-
@pytest.fixture
def other():
assert 0
- test_module.py:7: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________
- something = None
-
def test_call_fails(something):
assert 0
- test_module.py:15: AssertionError
________________________________ test_fail2 ________________________________
-
def test_fail2():
assert 0
- test_module.py:19: AssertionError
==================== 2 failed, 1 error in 0.12 seconds =====================
diff --git i/doc/en/fixture.rst w/doc/en/fixture.rst
index 4c8e24b9..0a576991 100644
--- i/doc/en/fixture.rst
+++ w/doc/en/fixture.rst
@@ -76,20 +76,20 @@ marked smtp_connection fixture function. Running the test looks like this:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
- test_smtpsimple.py F [100%]
- ================================= FAILURES =================================
________________________________ test_ehlo _________________________________
- smtp_connection = <smtplib.SMTP object at 0xdeadbeef>
-
def test_ehlo(smtp_connection):
    response, msg = smtp_connection.ehlo()
    assert response == 250
assert 0 # for demo purposes
- test_smtpsimple.py:11: AssertionError
========================= 1 failed in 0.12 seconds =========================
@@ -217,32 +217,32 @@ inspect what is going on and can now run the tests:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
- test_module.py FF [100%]
- ================================= FAILURES =================================
________________________________ test_ehlo _________________________________
- smtp_connection = <smtplib.SMTP object at 0xdeadbeef>
-
def test_ehlo(smtp_connection):
    response, msg = smtp_connection.ehlo()
    assert response == 250
    assert b"smtp.gmail.com" in msg
assert 0 # for demo purposes
- test_module.py:6: AssertionError
________________________________ test_noop _________________________________
- smtp_connection = <smtplib.SMTP object at 0xdeadbeef>
-
def test_noop(smtp_connection):
    response, msg = smtp_connection.noop()
    assert response == 250
assert 0 # for demo purposes
- test_module.py:11: AssertionError
========================= 2 failed in 0.12 seconds =========================
@@ -366,7 +366,7 @@ Let's execute it::
$ pytest -s -q --tb=no
FFteardown smtp
- 2 failed in 0.12 seconds
We see that the smtp_connection instance is finalized after the two
@@ -475,7 +475,7 @@ again, nothing much has changed::
$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
- 2 failed in 0.12 seconds
Let's quickly create another test module that actually sets the
@@ -600,51 +600,51 @@ So let's just do another run:
FFFF [100%]
================================= FAILURES =================================
________________________ test_ehlo[smtp.gmail.com] _________________________
- smtp_connection = <smtplib.SMTP object at 0xdeadbeef>
-
def test_ehlo(smtp_connection):
    response, msg = smtp_connection.ehlo()
    assert response == 250
    assert b"smtp.gmail.com" in msg
assert 0 # for demo purposes
- test_module.py:6: AssertionError
________________________ test_noop[smtp.gmail.com] _________________________
- smtp_connection = <smtplib.SMTP object at 0xdeadbeef>
-
def test_noop(smtp_connection):
    response, msg = smtp_connection.noop()
    assert response == 250
assert 0 # for demo purposes
- test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________
- smtp_connection = <smtplib.SMTP object at 0xdeadbeef>
-
def test_ehlo(smtp_connection):
    response, msg = smtp_connection.ehlo()
    assert response == 250
assert b"smtp.gmail.com" in msg
- test_module.py:5: AssertionError
-------------------------- Captured stdout setup ---------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
________________________ test_noop[mail.python.org] ________________________
- smtp_connection = <smtplib.SMTP object at 0xdeadbeef>
-
def test_noop(smtp_connection):
    response, msg = smtp_connection.noop()
    assert response == 250
assert 0 # for demo purposes
- test_module.py:11: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
@@ -719,7 +719,7 @@ Running the above tests results in the following test IDs being used:
<Function test_noop[smtp.gmail.com]>
<Function test_ehlo[mail.python.org]>
<Function test_noop[mail.python.org]>
- ======================= no tests ran in 0.12 seconds =======================
.. _fixture-parametrize-marks:
@@ -751,11 +751,11 @@ Running this test will skip the invocation of data_set with value 2:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 3 items
- test_fixture_marks.py::test_data[0] PASSED [ 33%]
test_fixture_marks.py::test_data[1] PASSED [ 66%]
test_fixture_marks.py::test_data[2] SKIPPED [100%]
- =================== 2 passed, 1 skipped in 0.12 seconds ====================
.. _interdependent fixtures:
@@ -796,10 +796,10 @@ Here we declare an app fixture which receives the previously defined
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
- test_appsetup.py::test_smtp_connection_exists[smtp.gmail.com] PASSED [ 50%]
test_appsetup.py::test_smtp_connection_exists[mail.python.org] PASSED [100%]
- ========================= 2 passed in 0.12 seconds =========================
Due to the parametrization of smtp_connection, the test will run twice with two
@@ -867,26 +867,26 @@ Let's run the tests in verbose mode and with looking at the print-output:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items
- test_module.py::test_0[1] SETUP otherarg 1
RUN test0 with otherarg 1
PASSED TEARDOWN otherarg 1
- test_module.py::test_0[2] SETUP otherarg 2
RUN test0 with otherarg 2
PASSED TEARDOWN otherarg 2
- test_module.py::test_1[mod1] SETUP modarg mod1
RUN test1 with modarg mod1
PASSED
test_module.py::test_2[mod1-1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod1
PASSED TEARDOWN otherarg 1
- test_module.py::test_2[mod1-2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod1
PASSED TEARDOWN otherarg 2
- test_module.py::test_1[mod2] TEARDOWN modarg mod1
SETUP modarg mod2
RUN test1 with modarg mod2
@@ -894,13 +894,13 @@ Let's run the tests in verbose mode and with looking at the print-output:
test_module.py::test_2[mod2-1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod2
PASSED TEARDOWN otherarg 1
- test_module.py::test_2[mod2-2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod2
PASSED TEARDOWN otherarg 2
TEARDOWN modarg mod2
- ========================= 8 passed in 0.12 seconds =========================
You can see that the parametrized module-scoped modarg resource caused an
diff --git i/doc/en/getting-started.rst w/doc/en/getting-started.rst
index a9f7d1d1..19f06468 100644
--- i/doc/en/getting-started.rst
+++ w/doc/en/getting-started.rst
@@ -50,17 +50,17 @@ That’s it. You can now execute the test function:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
- test_sample.py F [100%]
- ================================= FAILURES =================================
_______________________________ test_answer ________________________________
-
def test_answer():
assert func(3) == 5
E + where 4 = func(3)
- test_sample.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
@@ -121,15 +121,15 @@ Once you develop multiple tests, you may want to group them into a class. pytest
.F [100%]
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
- self = <test_class.TestClass object at 0xdeadbeef>
-
def test_two(self):
    x = "hello"
assert hasattr(x, 'check')
E + where False = hasattr('hello', 'check')
- test_class.py:8: AssertionError
1 failed, 1 passed in 0.12 seconds
@@ -153,14 +153,14 @@ List the name tmpdir in the test function signature and pytest will look
F [100%]
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
- tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')
-
def test_needsfiles(tmpdir):
    print(tmpdir)
assert 0
- test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0
diff --git i/doc/en/index.rst w/doc/en/index.rst
index 000793d2..e84d9fbf 100644
--- i/doc/en/index.rst
+++ w/doc/en/index.rst
@@ -32,17 +32,17 @@ To execute it:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
- test_sample.py F [100%]
- ================================= FAILURES =================================
_______________________________ test_answer ________________________________
-
def test_answer():
assert inc(3) == 5
E + where 4 = inc(3)
- test_sample.py:6: AssertionError
========================= 1 failed in 0.12 seconds =========================
diff --git i/doc/en/parametrize.rst w/doc/en/parametrize.rst
index d1d23c67..05963db0 100644
--- i/doc/en/parametrize.rst
+++ w/doc/en/parametrize.rst
@@ -60,14 +60,14 @@ them in turn:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
- test_expectation.py ..F [100%]
- ================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________
- test_input = '6*9', expected = 42
-
@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    ("2+4", 6),
@@ -77,7 +77,7 @@ them in turn:
> assert eval(test_input) == expected
E AssertionError: assert 54 == 42
E + where 54 = eval('6*9')
- test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.12 seconds ====================
@@ -112,9 +112,9 @@ Let's run this:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items
- test_expectation.py ..x [100%]
- =================== 2 passed, 1 xfailed in 0.12 seconds ====================
The one parameter set which caused a failure previously now
@@ -186,15 +186,15 @@ Let's also run with a stringinput that will lead to a failing test:
F [100%]
================================= FAILURES =================================
___________________________ test_valid_string[!] ___________________________
- stringinput = '!'
-
def test_valid_string(stringinput):
assert stringinput.isalpha()
E + where False = <built-in method isalpha of str object at 0xdeadbeef>()
E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha
- test_strings.py:3: AssertionError
1 failed in 0.12 seconds
diff --git i/doc/en/skipping.rst w/doc/en/skipping.rst
index dd0b5711..62faa51d 100644
--- i/doc/en/skipping.rst
+++ w/doc/en/skipping.rst
@@ -333,12 +333,12 @@ Running it with the report-on-xfail option gives this output:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/example, inifile:
collected 7 items
- xfail_demo.py xxxxxxx [100%]
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
-
reason: [NOTRUN]
-
XFAIL xfail_demo.py::test_hello3
reason: [NOTRUN]
condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4
@@ -348,7 +348,7 @@ Running it with the report-on-xfail option gives this output:
XFAIL xfail_demo.py::test_hello6
reason: reason
XFAIL xfail_demo.py::test_hello7
- ======================== 7 xfailed in 0.12 seconds =========================
.. _skip/xfail with parametrize
:
diff --git i/doc/en/tmpdir.rst w/doc/en/tmpdir.rst
index 3d73d614..72b7941f 100644
--- i/doc/en/tmpdir.rst
+++ w/doc/en/tmpdir.rst
@@ -45,14 +45,14 @@ Running this would result in a passed test except for the last
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
- test_tmp_path.py F [100%]
- ================================= FAILURES =================================
_____________________________ test_create_file _____________________________
- tmp_path = PosixPath('PYTEST_TMPDIR/test_create_file0')
-
def test_create_file(tmp_path):
    d = tmp_path / "sub"
    d.mkdir()
@@ -62,7 +62,7 @@ Running this would result in a passed test except for the last
assert len(list(tmp_path.iterdir())) == 1
> assert 0
E assert 0
- test_tmp_path.py:13: AssertionError
========================= 1 failed in 0.12 seconds =========================
@@ -108,14 +108,14 @@ Running this would result in a passed test except for the last
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
- test_tmpdir.py F [100%]
- ================================= FAILURES =================================
_____________________________ test_create_file _____________________________
- tmpdir = local('PYTEST_TMPDIR/test_create_file0')
-
def test_create_file(tmpdir):
    p = tmpdir.mkdir("sub").join("hello.txt")
    p.write("content")
@@ -123,7 +123,7 @@ Running this would result in a passed test except for the last
assert len(tmpdir.listdir()) == 1
> assert 0
E assert 0
- test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.12 seconds =========================
diff --git i/doc/en/unittest.rst w/doc/en/unittest.rst
index 7eb92bf4..944c815c 100644
--- i/doc/en/unittest.rst
+++ w/doc/en/unittest.rst
@@ -132,30 +132,30 @@ the self.db values in the traceback:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
- test_unittest_db.py FF [100%]
- ================================= FAILURES =================================
___________________________ MyTest.test_method1 ____________________________
- self = <test_unittest_db.MyTest testMethod=test_method1>
-
def test_method1(self):
    assert hasattr(self, "db")
assert 0, self.db # fail for demo purposes
E assert 0
- test_unittest_db.py:9: AssertionError
___________________________ MyTest.test_method2 ____________________________
- self = <test_unittest_db.MyTest testMethod=test_method2>
-
def test_method2(self):
assert 0, self.db # fail for demo purposes
E assert 0
- test_unittest_db.py:12: AssertionError
========================= 2 failed in 0.12 seconds =========================
diff --git i/doc/en/usage.rst w/doc/en/usage.rst
index b894e0fd..576a9504 100644
--- i/doc/en/usage.rst
+++ w/doc/en/usage.rst
@@ -196,25 +196,25 @@ Example:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 6 items
- test_example.py .FEsxX [100%]
- ================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________
-
@pytest.fixture
def error_fixture():
assert 0
- test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________
-
def test_fail():
assert 0
- test_example.py:14: AssertionError
========================= short test summary info ==========================
SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
@@ -223,7 +223,7 @@ Example:
XPASS test_example.py::test_xpass always xfail
ERROR test_example.py::test_error
FAILED test_example.py::test_fail
-
1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds
-
1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds
The -r option accepts a number of characters after it, with a used above meaning "all except passes".
@@ -248,30 +248,30 @@ More than one character can be used, so for example to only see failed and skipp
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 6 items
- test_example.py .FEsxX [100%]
- ================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________
-
@pytest.fixture
def error_fixture():
assert 0
- test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________
-
def test_fail():
assert 0
- test_example.py:14: AssertionError
========================= short test summary info ==========================
FAILED test_example.py::test_fail
SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:23: skipping this test
-
1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds
-
1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds
Using p lists the passing tests, whilst P adds an extra section "PASSES" with those tests that passed but had captured output:
@@ -284,25 +284,25 @@ captured output:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 6 items
- test_example.py .FEsxX [100%]
- ================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________
-
@pytest.fixture
def error_fixture():
assert 0
- test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________
-
def test_fail():
assert 0
- test_example.py:14: AssertionError
========================= short test summary info ==========================
PASSED test_example.py::test_ok
@@ -310,7 +310,7 @@ captured output:
_________________________________ test_ok __________________________________
--------------------------- Captured stdout call ---------------------------
ok
-
1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds
-
1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12 seconds
.. _pdb-option:
@@ -697,23 +697,23 @@ hook was invoked::
$ python myinvoke.py
.FEsxX. [100%]*** test run reporting finishing
- ================================== ERRORS ==================================
_______________________ ERROR at setup of test_error _______________________
-
@pytest.fixture
def error_fixture():
assert 0
- test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________
-
def test_fail():
assert 0
- test_example.py:14: AssertionError
.. note::
diff --git i/doc/en/warnings.rst w/doc/en/warnings.rst
index 11f73f43..e9314b98 100644
--- i/doc/en/warnings.rst
+++ w/doc/en/warnings.rst
@@ -28,14 +28,14 @@ Running pytest now produces this output:
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
- test_show_warnings.py . [100%]
- ============================= warnings summary =============================
test_show_warnings.py::test_one
$REGENDOC_TMPDIR/test_show_warnings.py:4: UserWarning: api v1, should use functions from v2
warnings.warn(UserWarning("api v1, should use functions from v2"))
- -- Docs: https://docs.pytest.org/en/latest/warnings.html
=================== 1 passed, 1 warnings in 0.12 seconds ===================
@@ -48,17 +48,17 @@ them into errors:
F [100%]
================================= FAILURES =================================
_________________________________ test_one _________________________________
-
def test_one():
assert api_v1() == 1
- test_show_warnings.py:8:
-
- test_show_warnings.py:8:
-
-
def api_v1():
warnings.warn(UserWarning("api v1, should use functions from v2"))
- test_show_warnings.py:4: UserWarning
1 failed in 0.12 seconds
@@ -375,12 +375,12 @@ defines an __init__ constructor, as this prevents the class from being instantiated
.. code-block:: pytest
$ pytest test_pytest_warnings.py -q
- ============================= warnings summary =============================
test_pytest_warnings.py:1
$REGENDOC_TMPDIR/test_pytest_warnings.py:1: PytestWarning: cannot collect test class 'Test' because it has a __init__ constructor
class Test:
- -- Docs: https://docs.pytest.org/en/latest/warnings.html
1 warnings in 0.12 seconds
diff --git i/doc/en/writing_plugins.rst w/doc/en/writing_plugins.rst
index bc1bcda0..27a70f8e 100644
--- i/doc/en/writing_plugins.rst
+++ w/doc/en/writing_plugins.rst
@@ -416,14 +416,14 @@ additionally it is possible to copy examples for an example folder before runnin
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 2 items
- test_example.py .. [100%]
- ============================= warnings summary =============================
test_example.py::test_plugin
$REGENDOC_TMPDIR/test_example.py:4: PytestExperimentalApiWarning: testdir.copy_example is an experimental api that may change over time
testdir.copy_example("test_example.py")
- -- Docs: https://docs.pytest.org/en/latest/warnings.html
=================== 2 passed, 1 warnings in 0.12 seconds ===================
that's because we later run pre-commit after tox -e regen. Could you run pre-commit manually? If it fixes it then please push your commit to this branch. 👍
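For anyone reproducing this locally, the suggested sequence would look roughly like this (a sketch only: it assumes pre-commit is installed, and the commit message is a placeholder):

$ tox -e regen                 # regenerate the documentation examples
$ pre-commit run --all-files   # apply the repo's formatting hooks to everything
$ git commit -am "regen docs"  # placeholder commit message
$ git push

If pre-commit rewrites the regenerated files, that would confirm the diff above comes from the missing formatting pass rather than from regendoc itself.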
@blueyed anything else or can we proceed with the release?
oh, I think I know what causes that, we should
To the tox env?
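Something along these lines might work, as a purely hypothetical sketch (the env name, deps, and commands are illustrative, not copied from the actual tox.ini):

# hypothetical regen env that runs the formatting hooks after regendoc
[testenv:regen]
changedir = doc/en
deps =
    sphinx
    regendoc>=0.6.1
    pre-commit
whitelist_externals = make
commands =
    make regen
    pre-commit run --all-files

Chaining pre-commit run --all-files after the regen step would keep tox -e regen from producing output that the hooks later rewrite.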