diff --git a/changelog/13895.bugfix.rst b/changelog/13895.bugfix.rst deleted file mode 100644 index 5acd47cff38..00000000000 --- a/changelog/13895.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -Restore support for skipping tests via ``raise unittest.SkipTest``. diff --git a/changelog/13896.bugfix.rst b/changelog/13896.bugfix.rst deleted file mode 100644 index 821af0c96b4..00000000000 --- a/changelog/13896.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -The terminal progress plugin added in pytest 9.0 is now automatically disabled when iTerm2 is detected, it generated desktop notifications instead of the desired functionality. diff --git a/changelog/13904.bugfix.rst b/changelog/13904.bugfix.rst deleted file mode 100644 index 739c16e12bd..00000000000 --- a/changelog/13904.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -Fixed the TOML type of the verbosity settings in the API reference from number to string. diff --git a/changelog/13910.bugfix.rst b/changelog/13910.bugfix.rst deleted file mode 100644 index f399f95b375..00000000000 --- a/changelog/13910.bugfix.rst +++ /dev/null @@ -1 +0,0 @@ -Fixed `UserWarning: Do not expect file_or_dir` on some earlier Python 3.12 and 3.13 point versions. diff --git a/changelog/13933.contrib.rst b/changelog/13933.contrib.rst deleted file mode 100644 index feab3e0dc03..00000000000 --- a/changelog/13933.contrib.rst +++ /dev/null @@ -1,4 +0,0 @@ -The tox configuration has been adjusted to make sure the desired -version string can be passed into its :ref:`package_env` through -the ``SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST`` environment -variable as a part of the release process -- by :user:`webknjaz`. 
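As an aside on the ``SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST`` entry above: setuptools-scm reads a ``SETUPTOOLS_SCM_PRETEND_VERSION_FOR_<NORMALIZED_DIST_NAME>`` environment variable to override the version it would otherwise derive from git metadata. A minimal sketch of how a release script might pin the version before building — the ``9.0.1`` value and the echoed message are illustrative, not part of this diff:

```shell
# Pin the version setuptools-scm will report, instead of letting it
# derive one from git tags/commits; the actual build step (tox, or
# python -m build) is elided here.
export SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST=9.0.1
echo "building pytest ${SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST}"
```

The tox change in this PR makes sure the variable survives into tox's packaging environment, so the value set in the release workflow reaches the build backend.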
diff --git a/changelog/13933.packaging.rst b/changelog/13933.packaging.rst deleted file mode 120000 index 79139f98eed..00000000000 --- a/changelog/13933.packaging.rst +++ /dev/null @@ -1 +0,0 @@ -13933.contrib.rst \ No newline at end of file diff --git a/doc/en/announce/index.rst b/doc/en/announce/index.rst index f5e00d34cdb..2859e6210ff 100644 --- a/doc/en/announce/index.rst +++ b/doc/en/announce/index.rst @@ -6,6 +6,7 @@ Release announcements :maxdepth: 2 + release-9.0.1 release-9.0.0 release-8.4.2 release-8.4.1 diff --git a/doc/en/announce/release-9.0.1.rst b/doc/en/announce/release-9.0.1.rst new file mode 100644 index 00000000000..46af130e03c --- /dev/null +++ b/doc/en/announce/release-9.0.1.rst @@ -0,0 +1,18 @@ +pytest-9.0.1 +======================================= + +pytest 9.0.1 has just been released to PyPI. + +This is a bug-fix release, being a drop-in replacement. + +The full changelog is available at https://docs.pytest.org/en/stable/changelog.html. + +Thanks to all of the contributors to this release: + +* Bruno Oliveira +* Ran Benita +* 🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко) + + +Happy testing, +The pytest Development Team diff --git a/doc/en/builtin.rst b/doc/en/builtin.rst index a7b0ff2e5c8..5b66626fd20 100644 --- a/doc/en/builtin.rst +++ b/doc/en/builtin.rst @@ -18,7 +18,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a $ pytest --fixtures -v =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collected 0 items diff --git a/doc/en/changelog.rst b/doc/en/changelog.rst index 5220afc46e3..b55577d3f85 100644 --- a/doc/en/changelog.rst +++ b/doc/en/changelog.rst @@ -31,6 +31,44 @@ with advance notice in the **Deprecations** section of releases. .. 
towncrier release notes start +pytest 9.0.1 (2025-11-11) +========================= + +Bug fixes +--------- + +- `#13895 <https://github.com/pytest-dev/pytest/issues/13895>`_: Restore support for skipping tests via ``raise unittest.SkipTest``. + + +- `#13896 <https://github.com/pytest-dev/pytest/issues/13896>`_: The terminal progress plugin added in pytest 9.0 is now automatically disabled when iTerm2 is detected, as it generated desktop notifications instead of the desired functionality. + + +- `#13904 <https://github.com/pytest-dev/pytest/issues/13904>`_: Fixed the TOML type of the verbosity settings in the API reference from number to string. + + +- `#13910 <https://github.com/pytest-dev/pytest/issues/13910>`_: Fixed ``UserWarning: Do not expect file_or_dir`` on some earlier Python 3.12 and 3.13 point versions. + + + +Packaging updates and notes for downstreams +------------------------------------------- + +- `#13933 <https://github.com/pytest-dev/pytest/issues/13933>`_: The tox configuration has been adjusted to make sure the desired + version string can be passed into its :ref:`package_env` through + the ``SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST`` environment + variable as a part of the release process -- by :user:`webknjaz`. + + + +Contributor-facing changes +-------------------------- + +- `#13933 <https://github.com/pytest-dev/pytest/issues/13933>`_: The tox configuration has been adjusted to make sure the desired + version string can be passed into its :ref:`package_env` through + the ``SETUPTOOLS_SCM_PRETEND_VERSION_FOR_PYTEST`` environment + variable as a part of the release process -- by :user:`webknjaz`. 
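A reviewer's note on #13895: the restored behavior is plain ``unittest``-style skipping, where raising ``unittest.SkipTest`` inside a test body marks the test as skipped rather than failed. A minimal self-contained sketch (the class name and skip message are illustrative, not from this diff):

```python
# Sketch of the behavior restored by #13895: a test that raises
# unittest.SkipTest is reported as skipped, not as a failure/error.
import unittest


class TestSkipping(unittest.TestCase):
    def test_skipped_via_raise(self):
        raise unittest.SkipTest("demonstrating raise-based skipping")


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(TestSkipping).run(result)
print(len(result.skipped))  # prints 1
```

Under pytest, such a test shows up as ``s`` in the progress output, the same as one using ``pytest.mark.skip``.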
+ + pytest 9.0.0 (2025-11-05) ========================= diff --git a/doc/en/example/customdirectory.rst b/doc/en/example/customdirectory.rst index 1e4d7e370de..6e326352a7e 100644 --- a/doc/en/example/customdirectory.rst +++ b/doc/en/example/customdirectory.rst @@ -42,7 +42,7 @@ An you can now execute the test specification: customdirectory $ pytest =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project/customdirectory configfile: pytest.ini collected 2 items @@ -62,7 +62,7 @@ You can verify that your custom collector appears in the collection tree: customdirectory $ pytest --collect-only =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project/customdirectory configfile: pytest.ini collected 2 items diff --git a/doc/en/example/markers.rst b/doc/en/example/markers.rst index afb1ece0fe8..cbe417e8a3e 100644 --- a/doc/en/example/markers.rst +++ b/doc/en/example/markers.rst @@ -47,7 +47,7 @@ You can then restrict a test run to only run tests marked with ``webtest``: $ pytest -v -m webtest =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... 
collected 4 items / 3 deselected / 1 selected @@ -62,7 +62,7 @@ Or the inverse, running all tests except the webtest ones: $ pytest -v -m "not webtest" =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... collected 4 items / 1 deselected / 3 selected @@ -82,7 +82,7 @@ keyword arguments, e.g. to run only tests marked with ``device`` and the specifi $ pytest -v -m "device(serial='123')" =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... collected 4 items / 3 deselected / 1 selected @@ -106,7 +106,7 @@ tests based on their module, class, method, or function name: $ pytest -v test_server.py::TestClass::test_method =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... collected 1 item @@ -121,7 +121,7 @@ You can also select on the class: $ pytest -v test_server.py::TestClass =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... 
collected 1 item @@ -136,7 +136,7 @@ Or select multiple nodes: $ pytest -v test_server.py::TestClass test_server.py::test_send_http =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... collected 2 items @@ -180,7 +180,7 @@ The expression matching is now case-insensitive. $ pytest -v -k http # running with the above defined example module =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... collected 4 items / 3 deselected / 1 selected @@ -195,7 +195,7 @@ And you can also run all tests except the ones that match the keyword: $ pytest -k "not send_http" -v =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... collected 4 items / 1 deselected / 3 selected @@ -212,7 +212,7 @@ Or to select "http" and "quick" tests: $ pytest -k "http or quick" -v =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... 
collected 4 items / 2 deselected / 2 selected @@ -418,7 +418,7 @@ the test needs: $ pytest -E stage2 =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project collected 1 item @@ -432,7 +432,7 @@ and here is one that specifies exactly the environment needed: $ pytest -E stage1 =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project collected 1 item @@ -625,7 +625,7 @@ then you will see two tests skipped and two executed tests as expected: $ pytest -rs # this option reports skip reasons =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project collected 4 items @@ -641,7 +641,7 @@ Note that if you specify a platform via the marker-command line option like this $ pytest -m linux =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project collected 4 items / 3 deselected / 1 selected @@ -704,7 +704,7 @@ We can now use the ``-m option`` to select one set: $ pytest -m interface --tb=short =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project collected 4 items / 2 deselected / 2 selected @@ -730,7 +730,7 @@ or to select both "event" and "interface" tests: $ pytest -m "interface or event" --tb=short =========================== test 
session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project collected 4 items / 1 deselected / 3 selected diff --git a/doc/en/example/nonpython.rst b/doc/en/example/nonpython.rst index a8d172937c5..3010601da8e 100644 --- a/doc/en/example/nonpython.rst +++ b/doc/en/example/nonpython.rst @@ -28,7 +28,7 @@ now execute the test specification: nonpython $ pytest test_simple.yaml =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project/nonpython collected 2 items @@ -41,6 +41,8 @@ now execute the test specification: no further details known at this point. ========================= short test summary info ========================== FAILED test_simple.yaml::hello - usecase execution failed + spec failed: 'some': 'other' + no further details known at this point. ======================= 1 failed, 1 passed in 0.12s ======================== .. regendoc:wipe @@ -64,7 +66,7 @@ consulted when reporting in ``verbose`` mode: nonpython $ pytest -v =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project/nonpython collecting ... collected 2 items @@ -79,6 +81,8 @@ consulted when reporting in ``verbose`` mode: no further details known at this point. ========================= short test summary info ========================== FAILED test_simple.yaml::hello - usecase execution failed + spec failed: 'some': 'other' + no further details known at this point. ======================= 1 failed, 1 passed in 0.12s ======================== .. 
regendoc:wipe @@ -90,7 +94,7 @@ interesting to just look at the collection tree: nonpython $ pytest --collect-only =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project/nonpython collected 2 items diff --git a/doc/en/example/parametrize.rst b/doc/en/example/parametrize.rst index b27dae18c32..19dbbe3a544 100644 --- a/doc/en/example/parametrize.rst +++ b/doc/en/example/parametrize.rst @@ -158,11 +158,11 @@ objects, they are still using the default pytest representation: $ pytest test_time.py --collect-only =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project collected 8 items - + @@ -221,7 +221,7 @@ this is a fully self-contained example which you can run with: $ pytest test_scenarios.py =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project collected 4 items @@ -235,11 +235,11 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia $ pytest --collect-only test_scenarios.py =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project collected 4 items - + @@ -314,11 +314,11 @@ Let's first see how it looks like at collection time: $ pytest test_backends.py --collect-only =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, 
pluggy-1.x.y rootdir: /home/sweet/project collected 2 items - + @@ -344,7 +344,7 @@ And then when we run the test: test_backends.py:8: Failed ========================= short test summary info ========================== - FAILED test_backends.py::test_db_initialized[d2] - Failed: deliberately f... + FAILED test_backends.py::test_db_initialized[d2] - Failed: deliberately failing for demo purposes 1 failed, 1 passed in 0.12s The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase. @@ -413,7 +413,7 @@ The result of this test will be successful: $ pytest -v test_indirect_list.py =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... collected 1 item @@ -566,7 +566,7 @@ If you run this with reporting for skips enabled: $ pytest -rs test_module.py =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project collected 2 items @@ -627,7 +627,7 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker: $ pytest -v -m basic =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python cachedir: .pytest_cache rootdir: /home/sweet/project collecting ... 
collected 24 items / 21 deselected / 3 selected diff --git a/doc/en/example/pythoncollection.rst b/doc/en/example/pythoncollection.rst index 09489418773..eba40d50d1b 100644 --- a/doc/en/example/pythoncollection.rst +++ b/doc/en/example/pythoncollection.rst @@ -137,12 +137,12 @@ The test collection would look like this: $ pytest --collect-only =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project configfile: pytest.toml collected 2 items - + @@ -200,12 +200,12 @@ You can always peek at the collection tree without running tests like this: . $ pytest --collect-only pythoncollection.py =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project configfile: pytest.toml collected 3 items - + @@ -284,7 +284,7 @@ file will be left out: $ pytest --collect-only =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project configfile: pytest.toml collected 0 items diff --git a/doc/en/example/reportingdemo.rst b/doc/en/example/reportingdemo.rst index 8040ee9b957..1b86be04d48 100644 --- a/doc/en/example/reportingdemo.rst +++ b/doc/en/example/reportingdemo.rst @@ -9,7 +9,7 @@ Here is a nice run of several failures and how ``pytest`` presents things: assertion $ pytest failure_demo.py =========================== test session starts ============================ - platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y + platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y rootdir: /home/sweet/project/assertion collected 44 items @@ -146,9 +146,14 @@ Here is a nice run of 
several failures and how ``pytest`` presents things: E 1 E 1 E 1 - E 1... - E - E ...Full output truncated (7 lines hidden), use '-vv' to show + E 1 + E 1 + E - b2 + E + a2 + E 2 + E 2 + E 2 + E 2 failure_demo.py:62: AssertionError _________________ TestSpecialisedExplanations.test_eq_list _________________ @@ -160,7 +165,16 @@ Here is a nice run of several failures and how ``pytest`` presents things: E assert [0, 1, 2] == [0, 1, 3] E E At index 2 diff: 2 != 3 - E Use -v to get more diff + E + E Full diff: + E [ + E 0, + E 1, + E - 3, + E ? ^ + E + 2, + E ? ^ + E ] failure_demo.py:65: AssertionError ______________ TestSpecialisedExplanations.test_eq_list_long _______________ @@ -174,7 +188,214 @@ Here is a nice run of several failures and how ``pytest`` presents things: E assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...] E E At index 100 diff: 1 != 2 - E Use -v to get more diff + E + E Full diff: + E [ + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E 0, + E - 2, + E ? ^ + E + 1, + E ? 
^ + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E 3, + E ] failure_demo.py:70: AssertionError _________________ TestSpecialisedExplanations.test_eq_dict _________________ @@ -192,7 +413,19 @@ Here is a nice run of several failures and how ``pytest`` presents things: E {'c': 0} E Right contains 1 more item: E {'d': 0} - E Use -v to get more diff + E + E Full diff: + E { + E 'a': 0, + E - 'b': 2, + E ? ^ + E + 'b': 1, + E ? ^ + E - 'd': 0, + E ? ^ + E + 'c': 0, + E ? ^ + E } failure_demo.py:73: AssertionError _________________ TestSpecialisedExplanations.test_eq_set __________________ @@ -210,7 +443,20 @@ Here is a nice run of several failures and how ``pytest`` presents things: E Extra items in the right set: E 20 E 21 - E Use -v to get more diff + E + E Full diff: + E { + E 0, + E - 20, + E ? ^ + E + 10, + E ? ^ + E - 21, + E ? ^ + E + 11, + E ? 
^ + E + 12, + E } failure_demo.py:76: AssertionError _____________ TestSpecialisedExplanations.test_eq_longer_list ______________ @@ -222,7 +468,13 @@ Here is a nice run of several failures and how ``pytest`` presents things: E assert [1, 2] == [1, 2, 3] E E Right contains one more item: 3 - E Use -v to get more diff + E + E Full diff: + E [ + E 1, + E 2, + E - 3, + E ] failure_demo.py:79: AssertionError _________________ TestSpecialisedExplanations.test_in_list _________________ @@ -679,46 +931,424 @@ Here is a nice run of several failures and how ``pytest`` presents things: ========================= short test summary info ========================== FAILED failure_demo.py::test_generative[3-6] - assert (3 * 2) < 6 FAILED failure_demo.py::TestFailing::test_simple - assert 42 == 43 + + where 42 = .f at 0xdeadbeef0002>() + + and 43 = .g at 0xdeadbeef0003>() FAILED failure_demo.py::TestFailing::test_simple_multiline - assert 42 == 54 FAILED failure_demo.py::TestFailing::test_not - assert not 42 - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_text - Asser... - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_similar_text - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_multiline_text - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text - ... - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text_multiline - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list - asser... - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list_long - ... - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dict - Asser... - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_set - assert... - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_longer_list - FAILED failure_demo.py::TestSpecialisedExplanations::test_in_list - asser... 
- FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_multiline - FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single - FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long - FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long_term - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dataclass - ... - FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_attrs - Asse... + + where 42 = .f at 0xdeadbeef0006>() + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_text - AssertionError: assert 'spam' == 'eggs' + + - eggs + + spam + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_similar_text - AssertionError: assert 'foo 1 bar' == 'foo 2 bar' + + - foo 2 bar + ? ^ + + foo 1 bar + ? ^ + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_multiline_text - AssertionError: assert 'foo\nspam\nbar' == 'foo\neggs\nbar' + + foo + - eggs + + spam + bar + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text - AssertionError: assert '111111111111...2222222222222' == '111111111111...2222222222222' + + Skipping 90 identical leading characters in diff, use -v to show + Skipping 91 identical trailing characters in diff, use -v to show + - 1111111111b222222222 + ? ^ + + 1111111111a222222222 + ? ^ + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text_multiline - AssertionError: assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n...n2\n2\n2\n2\n' + + Skipping 190 identical leading characters in diff, use -v to show + Skipping 191 identical trailing characters in diff, use -v to show + 1 + 1 + 1 + 1 + 1 + - b2 + + a2 + 2 + 2 + 2 + 2 + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list - assert [0, 1, 2] == [0, 1, 3] + + At index 2 diff: 2 != 3 + + Full diff: + [ + 0, + 1, + - 3, + ? ^ + + 2, + ? 
^ + ] + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list_long - assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...] + + At index 100 diff: 1 != 2 + + Full diff: + [ + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + 0, + - 2, + ? ^ + + 1, + ? ^ + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + 3, + ] + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dict - AssertionError: assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0} + + Omitting 1 identical items, use -vv to show + Differing items: + {'b': 1} != {'b': 2} + Left contains 1 more item: + {'c': 0} + Right contains 1 more item: + {'d': 0} + + Full diff: + { + 'a': 0, + - 'b': 2, + ? ^ + + 'b': 1, + ? ^ + - 'd': 0, + ? ^ + + 'c': 0, + ? ^ + } + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_set - assert {0, 10, 11, 12} == {0, 20, 21} + + Extra items in the left set: + 10 + 11 + 12 + Extra items in the right set: + 20 + 21 + + Full diff: + { + 0, + - 20, + ? ^ + + 10, + ? ^ + - 21, + ? ^ + + 11, + ? 
^ + + 12, + } + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_longer_list - assert [1, 2] == [1, 2, 3] + + Right contains one more item: 3 + + Full diff: + [ + 1, + 2, + - 3, + ] + FAILED failure_demo.py::TestSpecialisedExplanations::test_in_list - assert 1 in [0, 2, 3, 4, 5] + FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_multiline - AssertionError: assert 'foo' not in 'some multil...nand a\ntail' + + 'foo' is contained here: + some multiline + text + which + includes foo + ? +++ + and a + tail + FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single - AssertionError: assert 'foo' not in 'single foo line' + + 'foo' is contained here: + single foo line + ? +++ + FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long - AssertionError: assert 'foo' not in 'head head h...l tail tail ' + + 'foo' is contained here: + head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail + ? +++ + FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long_term - AssertionError: assert 'fffffffffff...ffffffffffff' not in 'head head h...l tail tail ' + + 'ffffffffffffffffff...fffffffffffffffffff' is contained here: + head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail + ? 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dataclass - AssertionError: assert TestSpecialis...oo(a=1, b='b') == TestSpecialis...oo(a=1, b='c') + + Omitting 1 identical items, use -vv to show + Differing attributes: + ['b'] + + Drill down into differing attribute b: + b: 'b' != 'c' + - c + + b + FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_attrs - AssertionError: assert Foo(a=1, b='b') == Foo(a=1, b='c') + + Omitting 1 identical items, use -vv to show + Differing attributes: + ['b'] + + Drill down into differing attribute b: + b: 'b' != 'c' + - c + + b FAILED failure_demo.py::test_attribute - assert 1 == 2 - FAILED failure_demo.py::test_attribute_instance - AssertionError: assert ... - FAILED failure_demo.py::test_attribute_failure - Exception: Failed to get... - FAILED failure_demo.py::test_attribute_multiple - AssertionError: assert ... - FAILED failure_demo.py::TestRaises::test_raises - ValueError: invalid lit... - FAILED failure_demo.py::TestRaises::test_raises_doesnt - Failed: DID NOT ... 
+ + where 1 = .Foo object at 0xdeadbeef0018>.b + FAILED failure_demo.py::test_attribute_instance - AssertionError: assert 1 == 2 + + where 1 = .Foo object at 0xdeadbeef0019>.b + + where .Foo object at 0xdeadbeef0019> = .Foo'>() + FAILED failure_demo.py::test_attribute_failure - Exception: Failed to get attrib + FAILED failure_demo.py::test_attribute_multiple - AssertionError: assert 1 == 2 + + where 1 = .Foo object at 0xdeadbeef001b>.b + + where .Foo object at 0xdeadbeef001b> = .Foo'>() + + and 2 = .Bar object at 0xdeadbeef001c>.b + + where .Bar object at 0xdeadbeef001c> = .Bar'>() + FAILED failure_demo.py::TestRaises::test_raises - ValueError: invalid literal for int() with base 10: 'qwe' + FAILED failure_demo.py::TestRaises::test_raises_doesnt - Failed: DID NOT RAISE FAILED failure_demo.py::TestRaises::test_raise - ValueError: demo error - FAILED failure_demo.py::TestRaises::test_tupleerror - ValueError: not eno... - FAILED failure_demo.py::TestRaises::test_reinterpret_fails_with_print_for_the_fun_of_it - FAILED failure_demo.py::TestRaises::test_some_error - NameError: name 'na... + FAILED failure_demo.py::TestRaises::test_tupleerror - ValueError: not enough values to unpack (expected 2, got 1) + FAILED failure_demo.py::TestRaises::test_reinterpret_fails_with_print_for_the_fun_of_it - TypeError: cannot unpack non-iterable int object + FAILED failure_demo.py::TestRaises::test_some_error - NameError: name 'namenotexi' is not defined FAILED failure_demo.py::test_dynamic_compile_shows_nicely - AssertionError FAILED failure_demo.py::TestMoreErrors::test_complex_error - assert 44 == 43 - FAILED failure_demo.py::TestMoreErrors::test_z1_unpack_error - ValueError... - FAILED failure_demo.py::TestMoreErrors::test_z2_type_error - TypeError: c... - FAILED failure_demo.py::TestMoreErrors::test_startswith - AssertionError:... - FAILED failure_demo.py::TestMoreErrors::test_startswith_nested - Assertio... 
+    FAILED failure_demo.py::TestMoreErrors::test_z1_unpack_error - ValueError: not enough values to unpack (expected 2, got 0)
+    FAILED failure_demo.py::TestMoreErrors::test_z2_type_error - TypeError: cannot unpack non-iterable int object
+    FAILED failure_demo.py::TestMoreErrors::test_startswith - AssertionError: assert False
+      + where False = ('456')
+      + where = '123'.startswith
+    FAILED failure_demo.py::TestMoreErrors::test_startswith_nested - AssertionError: assert False
+      + where False = ('456')
+      + where = '123'.startswith
+      + where '123' = .f at 0xdeadbeef0029>()
+      + and '456' = .g at 0xdeadbeef002a>()
     FAILED failure_demo.py::TestMoreErrors::test_global_func - assert False
+      + where False = isinstance(43, float)
+      + where 43 = globf(42)
     FAILED failure_demo.py::TestMoreErrors::test_instance - assert 42 != 42
+      + where 42 = .x
     FAILED failure_demo.py::TestMoreErrors::test_compare - assert 11 < 5
+      + where 11 = globf(10)
     FAILED failure_demo.py::TestMoreErrors::test_try_finally - assert 1 == 0
-    FAILED failure_demo.py::TestCustomAssertMsg::test_single_line - Assertion...
-    FAILED failure_demo.py::TestCustomAssertMsg::test_multiline - AssertionEr...
-    FAILED failure_demo.py::TestCustomAssertMsg::test_custom_repr - Assertion...
+    FAILED failure_demo.py::TestCustomAssertMsg::test_single_line - AssertionError: A.a appears not to be b
+      assert 1 == 2
+      + where 1 = .A'>.a
+    FAILED failure_demo.py::TestCustomAssertMsg::test_multiline - AssertionError: A.a appears not to be b
+      or does not appear to be b
+      one of those
+      assert 1 == 2
+      + where 1 = .A'>.a
+    FAILED failure_demo.py::TestCustomAssertMsg::test_custom_repr - AssertionError: This is JSON
+      {
+          'foo': 'bar'
+      }
+      assert 1 == 2
+      + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
     ============================ 44 failed in 0.12s ============================
diff --git a/doc/en/example/simple.rst b/doc/en/example/simple.rst
index e150e7ca00b..ad70ce740ae 100644
--- a/doc/en/example/simple.rst
+++ b/doc/en/example/simple.rst
@@ -235,7 +235,7 @@ directory with the above conftest.py:
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 0 items
@@ -299,7 +299,7 @@ and when running it will see a skipped "slow" test:
     $ pytest -rs    # "-rs" means report details on the little 's'
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 2 items
@@ -315,7 +315,7 @@ Or run it including the ``slow`` marked test:
     $ pytest --runslow
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 2 items
@@ -444,7 +444,7 @@ which will add the string to the test header accordingly:
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     project deps: mylib-1.1
     rootdir: /home/sweet/project
     collected 0 items
@@ -472,7 +472,7 @@ which will add info only when run with "--v":
     $ pytest -v
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
     cachedir: .pytest_cache
     info1: did you know that ...
     did you?
@@ -487,7 +487,7 @@ and nothing when run plainly:
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 0 items
@@ -526,7 +526,7 @@ Now we can profile which test functions execute the slowest:
     $ pytest --durations=3
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 3 items
@@ -632,7 +632,7 @@ If we run this:
     $ pytest -rx
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 4 items
@@ -714,7 +714,7 @@ We can run this:
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 7 items
@@ -765,8 +765,10 @@ We can run this:
     test_step.py:11: AssertionError
     ========================= short test summary info ==========================
-    FAILED a/test_db.py::test_a1 - AssertionError:
+      assert 0
+    FAILED a/test_db2.py::test_a2 - AssertionError:
+      assert 0
     FAILED test_step.py::TestUserHandling::test_modification - assert 0
     ERROR b/test_error.py::test_root
     ============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12s ==============
@@ -836,7 +838,7 @@ and run them:
     $ pytest test_module.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 2 items
@@ -947,7 +949,7 @@ and run it:
     $ pytest -s test_module.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 3 items
diff --git a/doc/en/getting-started.rst b/doc/en/getting-started.rst
index ec1ef60a605..26c2571db77 100644
--- a/doc/en/getting-started.rst
+++ b/doc/en/getting-started.rst
@@ -20,7 +20,7 @@ Install ``pytest``
 .. code-block:: bash
     $ pytest --version
-    pytest 9.0.0
+    pytest 9.0.1
 .. _`simpletest`:
@@ -45,7 +45,7 @@ The test
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 1 item
@@ -62,6 +62,7 @@ The test
     test_sample.py:6: AssertionError
     ========================= short test summary info ==========================
     FAILED test_sample.py::test_answer - assert 4 == 5
+      + where 4 = func(3)
     ============================ 1 failed in 0.12s =============================
 The ``[100%]`` refers to the overall progress of running all test cases. After it finishes, pytest then shows a failure report because ``func(3)`` does not return ``5``.
@@ -148,6 +149,7 @@ Once you develop multiple tests, you may want to group them into a class. pytest
     test_class.py:8: AssertionError
     ========================= short test summary info ==========================
     FAILED test_class.py::TestClass::test_two - AssertionError: assert False
+      + where False = hasattr('hello', 'check')
     1 failed, 1 passed in 0.12s
 The first test passed and the second failed. You can easily see the intermediate values in the assertion to help you understand the reason for the failure.
@@ -195,6 +197,7 @@ This is outlined below:
     test_class_demo.py:9: AssertionError
     ========================= short test summary info ==========================
     FAILED test_class_demo.py::TestClassDemoInstance::test_two - assert 0 == 1
+      + where 0 = .value
     1 failed, 1 passed in 0.12s
 Note that attributes added at class level are *class attributes*, so they will be shared between tests.
diff --git a/doc/en/how-to/assert.rst b/doc/en/how-to/assert.rst
index 4dfceda0fad..0ceee7aa248 100644
--- a/doc/en/how-to/assert.rst
+++ b/doc/en/how-to/assert.rst
@@ -29,7 +29,7 @@ you will see the return value of the function call:
     $ pytest test_assert1.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 1 item
@@ -46,6 +46,7 @@ you will see the return value of the function call:
     test_assert1.py:6: AssertionError
     ========================= short test summary info ==========================
     FAILED test_assert1.py::test_function - assert 3 == 4
+      + where 3 = f()
     ============================ 1 failed in 0.12s =============================
 ``pytest`` has support for showing the values of the most common subexpressions
@@ -404,7 +405,7 @@ if you run this module:
     $ pytest test_assert2.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 1 item
@@ -423,11 +424,33 @@ if you run this module:
     E '1'
     E Extra items in the right set:
     E '5'
-    E Use -v to get more diff
+    E
+    E Full diff:
+    E {
+    E '0',
+    E + '1',
+    E '3',
+    E - '5',
+    E '8',
+    E }
     test_assert2.py:4: AssertionError
     ========================= short test summary info ==========================
-    FAILED test_assert2.py::test_set_comparison - AssertionError: assert {'0'...
+    FAILED test_assert2.py::test_set_comparison - AssertionError: assert {'0', '1', '3', '8'} == {'0', '3', '5', '8'}
+
+      Extra items in the left set:
+      '1'
+      Extra items in the right set:
+      '5'
+
+      Full diff:
+      {
+      '0',
+      + '1',
+      '3',
+      - '5',
+      '8',
+      }
     ============================ 1 failed in 0.12s =============================
 Special comparisons are done for a number of cases:
@@ -501,6 +524,7 @@ the conftest file:
     test_foocompare.py:12: AssertionError
     ========================= short test summary info ==========================
     FAILED test_foocompare.py::test_compare - assert Comparing Foo instances:
+      vals: 1 != 2
     1 failed in 0.12s
 .. _`return-not-none`:
diff --git a/doc/en/how-to/cache.rst b/doc/en/how-to/cache.rst
index e3209b79359..bfc1902cae0 100644
--- a/doc/en/how-to/cache.rst
+++ b/doc/en/how-to/cache.rst
@@ -86,7 +86,7 @@ If you then run it with ``--lf``:
     $ pytest --lf
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 2 items
     run-last-failure: rerun previous 2 failures
@@ -132,7 +132,7 @@ of ``FF`` and dots):
     $ pytest --ff
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 50 items
     run-last-failure: rerun previous 2 failures first
@@ -281,7 +281,7 @@ You can always peek at the content of the cache using the
     $ pytest --cache-show
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     cachedir: /home/sweet/project/.pytest_cache
     --------------------------- cache values for '*' ---------------------------
@@ -301,7 +301,7 @@ filtering:
     $ pytest --cache-show example/*
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     cachedir: /home/sweet/project/.pytest_cache
     ----------------------- cache values for 'example/*' -----------------------
diff --git a/doc/en/how-to/capture-stdout-stderr.rst b/doc/en/how-to/capture-stdout-stderr.rst
index e6affd80ea1..cd5cb6d798f 100644
--- a/doc/en/how-to/capture-stdout-stderr.rst
+++ b/doc/en/how-to/capture-stdout-stderr.rst
@@ -89,7 +89,7 @@ of the failing function and hide the other one:
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 2 items
diff --git a/doc/en/how-to/capture-warnings.rst b/doc/en/how-to/capture-warnings.rst
index 8ed546bedf7..2e61a4f1815 100644
--- a/doc/en/how-to/capture-warnings.rst
+++ b/doc/en/how-to/capture-warnings.rst
@@ -28,7 +28,7 @@ Running pytest now produces this output:
     $ pytest test_show_warnings.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 1 item
@@ -77,7 +77,7 @@ as an error:
     test_show_warnings.py:5: UserWarning
     ========================= short test summary info ==========================
-    FAILED test_show_warnings.py::test_one - UserWarning: api v1, should use ...
+    FAILED test_show_warnings.py::test_one - UserWarning: api v1, should use functions from v2
     1 failed in 0.12s
 The same option can be set in the configuration file using the
diff --git a/doc/en/how-to/doctest.rst b/doc/en/how-to/doctest.rst
index 601f5c0afd0..de6679bc452 100644
--- a/doc/en/how-to/doctest.rst
+++ b/doc/en/how-to/doctest.rst
@@ -30,7 +30,7 @@ then you can just invoke ``pytest`` directly:
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 1 item
@@ -58,7 +58,7 @@ and functions, including from test modules:
     $ pytest --doctest-modules
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 2 items
diff --git a/doc/en/how-to/fixtures.rst b/doc/en/how-to/fixtures.rst
index 0c4ddb8b4dc..c75dac7ef15 100644
--- a/doc/en/how-to/fixtures.rst
+++ b/doc/en/how-to/fixtures.rst
@@ -433,7 +433,7 @@ marked ``smtp_connection`` fixture function. Running the test looks like this:
     $ pytest test_module.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 2 items
@@ -773,7 +773,7 @@ For yield fixtures, the first teardown code to run is from the right-most fixtur
     $ pytest -s test_finalizers.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 1 item
@@ -807,7 +807,7 @@ For finalizers, the first fixture to run is last call to `request.addfinalizer`.
     $ pytest -s test_finalizers.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 1 item
@@ -1170,7 +1170,8 @@ Running it:
     ------------------------- Captured stdout teardown -------------------------
     finalizing  (mail.python.org)
     ========================= short test summary info ==========================
-    FAILED test_anothersmtp.py::test_showhelo - AssertionError: (250, b'mail....
+    FAILED test_anothersmtp.py::test_showhelo - AssertionError: (250, b'mail.python.org')
+      assert 0
 voila! The ``smtp_connection`` fixture function picked up our mail server name
 from the module namespace.
@@ -1356,7 +1357,7 @@ So let's just do another run:
     ========================= short test summary info ==========================
     FAILED test_module.py::test_ehlo[smtp.gmail.com] - assert 0
     FAILED test_module.py::test_noop[smtp.gmail.com] - assert 0
-    FAILED test_module.py::test_ehlo[mail.python.org] - AssertionError: asser...
+    FAILED test_module.py::test_ehlo[mail.python.org] - AssertionError: assert b'smtp.gmail.com' in b'mail.python.org\nPIPELINING\nSIZE 51200000\nETRN\nSTARTTLS\nAUTH DIGEST-MD5 NTLM CRAM-MD5\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8\nCHUNKING'
     FAILED test_module.py::test_noop[mail.python.org] - assert 0
     4 failed in 0.12s
@@ -1419,11 +1420,11 @@ Running the above tests results in the following test IDs being used:
     $ pytest --collect-only
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 12 items
-
+
@@ -1474,7 +1475,7 @@ Running this test will *skip* the invocation of ``data_set`` with value ``2``:
     $ pytest test_fixture_marks.py -v
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
     cachedir: .pytest_cache
     rootdir: /home/sweet/project
     collecting ... collected 3 items
@@ -1524,7 +1525,7 @@ Here we declare an ``app`` fixture which receives the previously defined
     $ pytest -v test_appsetup.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
     cachedir: .pytest_cache
     rootdir: /home/sweet/project
     collecting ... collected 2 items
@@ -1604,7 +1605,7 @@ Let's run the tests in verbose mode and with looking at the print-output:
     $ pytest -v -s test_module.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
     cachedir: .pytest_cache
     rootdir: /home/sweet/project
     collecting ... collected 8 items
diff --git a/doc/en/how-to/output.rst b/doc/en/how-to/output.rst
index e03f477b22d..34e5f098793 100644
--- a/doc/en/how-to/output.rst
+++ b/doc/en/how-to/output.rst
@@ -102,7 +102,18 @@ Executing pytest normally gives us this output (we are skipping the header to fo
     E AssertionError: assert ['banana', 'a...elon', 'kiwi'] == ['banana', 'a...elon', 'kiwi']
     E
     E At index 2 diff: 'grapes' != 'orange'
-    E Use -v to get more diff
+    E
+    E Full diff:
+    E [
+    E 'banana',
+    E 'apple',
+    E - 'orange',
+    E ? ^ ^^
+    E + 'grapes',
+    E ? ^ ^ +
+    E 'melon',
+    E 'kiwi',
+    E ]
     test_verbosity_example.py:8: AssertionError
     ____________________________ test_numbers_fail _____________________________
@@ -118,7 +129,23 @@ Executing pytest normally gives us this output (we are skipping the header to fo
     E {'1': 1, '2': 2, '3': 3, '4': 4}
     E Right contains 4 more items:
     E {'10': 10, '20': 20, '30': 30, '40': 40}
-    E Use -v to get more diff
+    E
+    E Full diff:
+    E {
+    E '0': 0,
+    E - '10': 10,
+    E ? - -
+    E + '1': 1,
+    E - '20': 20,
+    E ? - -
+    E + '2': 2,
+    E - '30': 30,
+    E ? - -
+    E + '3': 3,
+    E - '40': 40,
+    E ? - -
+    E + '4': 4,
+    E }
     test_verbosity_example.py:14: AssertionError
     ___________________________ test_long_text_fail ____________________________
@@ -130,9 +157,46 @@ Executing pytest normally gives us this output (we are skipping the header to fo
     test_verbosity_example.py:19: AssertionError
     ========================= short test summary info ==========================
-    FAILED test_verbosity_example.py::test_words_fail - AssertionError: asser...
-    FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: ass...
-    FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: a...
+    FAILED test_verbosity_example.py::test_words_fail - AssertionError: assert ['banana', 'a...elon', 'kiwi'] == ['banana', 'a...elon', 'kiwi']
+
+      At index 2 diff: 'grapes' != 'orange'
+
+      Full diff:
+      [
+      'banana',
+      'apple',
+      - 'orange',
+      ? ^ ^^
+      + 'grapes',
+      ? ^ ^ +
+      'melon',
+      'kiwi',
+      ]
+    FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: assert {'0': 0, '1':..., '3': 3, ...} == {'0': 0, '10'...'30': 30, ...}
+
+      Omitting 1 identical items, use -vv to show
+      Left contains 4 more items:
+      {'1': 1, '2': 2, '3': 3, '4': 4}
+      Right contains 4 more items:
+      {'10': 10, '20': 20, '30': 30, '40': 40}
+
+      Full diff:
+      {
+      '0': 0,
+      - '10': 10,
+      ? - -
+      + '1': 1,
+      - '20': 20,
+      ? - -
+      + '2': 2,
+      - '30': 30,
+      ? - -
+      + '3': 3,
+      - '40': 40,
+      ? - -
+      + '4': 4,
+      }
+    FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ips... sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet '
     ======================= 3 failed, 1 passed in 0.12s ========================
 Notice that:
@@ -170,9 +234,14 @@ Now we can increase pytest's verbosity:
     E Full diff:
     E [
     E 'banana',
-    E 'apple',...
-    E
-    E ...Full output truncated (7 lines hidden), use '-vv' to show
+    E 'apple',
+    E - 'orange',
+    E ? ^ ^^
+    E + 'grapes',
+    E ? ^ ^ +
+    E 'melon',
+    E 'kiwi',
+    E ]
     test_verbosity_example.py:8: AssertionError
     ____________________________ test_numbers_fail _____________________________
@@ -188,9 +257,23 @@ Now we can increase pytest's verbosity:
     E {'1': 1, '2': 2, '3': 3, '4': 4}
     E Right contains 4 more items:
     E {'10': 10, '20': 20, '30': 30, '40': 40}
-    E ...
     E
-    E ...Full output truncated (16 lines hidden), use '-vv' to show
+    E Full diff:
+    E {
+    E '0': 0,
+    E - '10': 10,
+    E ? - -
+    E + '1': 1,
+    E - '20': 20,
+    E ? - -
+    E + '2': 2,
+    E - '30': 30,
+    E ? - -
+    E + '3': 3,
+    E - '40': 40,
+    E ? - -
+    E + '4': 4,
+    E }
     test_verbosity_example.py:14: AssertionError
     ___________________________ test_long_text_fail ____________________________
@@ -202,9 +285,46 @@ Now we can increase pytest's verbosity:
     test_verbosity_example.py:19: AssertionError
     ========================= short test summary info ==========================
-    FAILED test_verbosity_example.py::test_words_fail - AssertionError: asser...
-    FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: ass...
-    FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: a...
+    FAILED test_verbosity_example.py::test_words_fail - AssertionError: assert ['banana', 'a...elon', 'kiwi'] == ['banana', 'a...elon', 'kiwi']
+
+      At index 2 diff: 'grapes' != 'orange'
+
+      Full diff:
+      [
+      'banana',
+      'apple',
+      - 'orange',
+      ? ^ ^^
+      + 'grapes',
+      ? ^ ^ +
+      'melon',
+      'kiwi',
+      ]
+    FAILED test_verbosity_example.py::test_numbers_fail - AssertionError: assert {'0': 0, '1':..., '3': 3, ...} == {'0': 0, '10'...'30': 30, ...}
+
+      Omitting 1 identical items, use -vv to show
+      Left contains 4 more items:
+      {'1': 1, '2': 2, '3': 3, '4': 4}
+      Right contains 4 more items:
+      {'10': 10, '20': 20, '30': 30, '40': 40}
+
+      Full diff:
+      {
+      '0': 0,
+      - '10': 10,
+      ? - -
+      + '1': 1,
+      - '20': 20,
+      ? - -
+      + '2': 2,
+      - '30': 30,
+      ? - -
+      + '3': 3,
+      - '40': 40,
+      ? - -
+      + '4': 4,
+      }
+    FAILED test_verbosity_example.py::test_long_text_fail - AssertionError: assert 'hello world' in 'Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet Lorem ipsum dolor sit amet '
     ======================= 3 failed, 1 passed in 0.12s ========================
 Notice now that:
@@ -421,7 +541,7 @@ Example:
     $ pytest -ra
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 6 items
@@ -478,7 +598,7 @@ More than one character can be used, so for example to only see failed and skipp
     $ pytest -rfs
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 6 items
@@ -513,7 +633,7 @@ captured output:
     $ pytest -rpP
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 6 items
diff --git a/doc/en/how-to/parametrize.rst b/doc/en/how-to/parametrize.rst
index dba2ac0b91e..47abae35f06 100644
--- a/doc/en/how-to/parametrize.rst
+++ b/doc/en/how-to/parametrize.rst
@@ -57,7 +57,7 @@ them in turn:
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 3 items
@@ -76,7 +76,8 @@ them in turn:
     test_expectation.py:6: AssertionError
     ========================= short test summary info ==========================
-    FAILED test_expectation.py::test_eval[6*9-42] - AssertionError: assert 54...
+    FAILED test_expectation.py::test_eval[6*9-42] - AssertionError: assert 54 == 42
+      + where 54 = eval('6*9')
     ======================= 1 failed, 2 passed in 0.12s ========================
 .. note::
@@ -177,7 +178,7 @@ Let's run this:
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 3 items
@@ -290,6 +291,8 @@ Let's also run with a stringinput that will lead to a failing test:
     test_strings.py:4: AssertionError
     ========================= short test summary info ==========================
     FAILED test_strings.py::test_valid_string[!] - AssertionError: assert False
+      + where False = ()
+      + where = '!'.isalpha
     1 failed in 0.12s
 As expected our test function fails.
diff --git a/doc/en/how-to/tmp_path.rst b/doc/en/how-to/tmp_path.rst
index d19950431e5..04c663bb986 100644
--- a/doc/en/how-to/tmp_path.rst
+++ b/doc/en/how-to/tmp_path.rst
@@ -35,7 +35,7 @@ Running this would result in a passed test except for the last
     $ pytest test_tmp_path.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 1 item
diff --git a/doc/en/how-to/unittest.rst b/doc/en/how-to/unittest.rst
index a8c56c266bd..5e9d3dae687 100644
--- a/doc/en/how-to/unittest.rst
+++ b/doc/en/how-to/unittest.rst
@@ -137,7 +137,7 @@ the ``self.db`` values in the traceback:
     $ pytest test_unittest_db.py
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 2 items
@@ -168,8 +168,10 @@ the ``self.db`` values in the traceback:
     test_unittest_db.py:14: AssertionError
     ========================= short test summary info ==========================
-    FAILED test_unittest_db.py::MyTest::test_method1 - AssertionError: .DummyDB object at 0xdeadbeef0001>
+      assert 0
+    FAILED test_unittest_db.py::MyTest::test_method2 - AssertionError: .DummyDB object at 0xdeadbeef0001>
+      assert 0
     ============================ 2 failed in 0.12s =============================
 This default pytest traceback shows that the two test methods
diff --git a/doc/en/how-to/writing_plugins.rst b/doc/en/how-to/writing_plugins.rst
index 6382edc4797..ec10c0e261c 100644
--- a/doc/en/how-to/writing_plugins.rst
+++ b/doc/en/how-to/writing_plugins.rst
@@ -446,7 +446,7 @@ in our configuration file to tell pytest where to look for example files.
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     configfile: pytest.toml
     collected 2 items
diff --git a/doc/en/index.rst b/doc/en/index.rst
index 2d9e3bed42c..e4fd00fa446 100644
--- a/doc/en/index.rst
+++ b/doc/en/index.rst
@@ -67,7 +67,7 @@ To execute it:
     $ pytest
     =========================== test session starts ============================
-    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
+    platform linux -- Python 3.x.y, pytest-9.x.y, pluggy-1.x.y
     rootdir: /home/sweet/project
     collected 1 item
@@ -84,6 +84,7 @@ To execute it:
     test_sample.py:6: AssertionError
     ========================= short test summary info ==========================
     FAILED test_sample.py::test_answer - assert 4 == 5
+      + where 4 = inc(3)
     ============================ 1 failed in 0.12s =============================
 Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used.