
RFE: provide a command-line option to finish the pytest script with exit 0 when there are no units #9976

Closed
kloczek opened this issue May 19, 2022 · 13 comments

@kloczek
Contributor

kloczek commented May 19, 2022

What's the problem this feature will solve?

Sometimes it is helpful to be able to finish pytest execution with exit 0 if pytest was not able to find any units.

Describe the solution you'd like

Add a new --exit-0-if-no-units command-line option to allow pytest to finish with exit 0 if it cannot find any units.

I'm packaging Python modules as rpm/Solaris IPS packages. Part of the build procedure is the obligatory execution of the available test suite. I now have 930+ such packages, and for about 1/9 of them it is not possible to use pytest because there is no pytest support in the module code.

[tkloczko@devel-g2v SPECS]$ grep ^%pytest python-* | wc -l; grep ^%tox python-* | wc -l; grep ^%py3_test python-*|wc -l; ls -1 python-* | wc -l
830
1
27
933

I'm trying to reach the goal of having 100% of modules tested using pytest.
The obstacle on the road to that goal is that pytest finishes with a non-zero exit code when there are no units.
A pytest command-line option allowing it to finish with exit 0 when it cannot find any units would remove that obstacle.

The use case for such a command-line option would be to spread %pytest --exit-0-if-no-units everywhere, covering those modules where pytest is not able to find units.
That command-line option would create a kind of base platform for mass testing of all modules with a redefined %pytest macro carrying additional --black, --cov or --flake8 switches, to assess which modules pass the tests of particular extensions and to present a public ranking of all modules compliant with those additional tests.
In the spec file it would be something like:

%check
%pytest --exit-0-if-no-units

This would also be a clear indicator that a given module has no units.

Additional context

I don't think that such functionality should be configurable through a pytest.ini entry.
IMO it would be best to have only the command-line option.

@RonnyPfannschmidt
Member

Closing, as the proposed use case is out of scope.

If you package things that don't have a testsuite, don't run pytest on them

@kloczek
Contributor Author

kloczek commented May 19, 2022

If you package things that don't have a testsuite, don't run pytest on them

Please read what I wrote one more time. This is not for the final build of the production package.

That option would allow reusing already completed build procedures, in the form of rpm spec files, to perform additional tests without touching those build procedures. All that would be necessary is to add, for example, the python-pytest-black package to the build environment and redefine %pytest to add the --black switch.

@RonnyPfannschmidt
Member

Then create a plugin that supports packaging-related use cases that way.

@nicoddemus
Member

nicoddemus commented May 19, 2022

To complement @RonnyPfannschmidt's answer, here's a simple plugin which implements this:

import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--exit-0-if-no-units",
        action="store_true",
        default=False,
    )


def pytest_sessionfinish(session):
    if (
        session.config.getoption("exit_0_if_no_units")
        and session.exitstatus is pytest.ExitCode.NO_TESTS_COLLECTED
    ):
        session.exitstatus = pytest.ExitCode.OK
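As an aside for readers following along: pytest exposes its exit statuses as the `pytest.ExitCode` enum (`OK` is 0, `NO_TESTS_COLLECTED` is 5). The remapping the hook above performs can be sketched in isolation with a stand-in enum, so the sketch runs even without pytest installed (the values mirror pytest's documented ones):

```python
from enum import IntEnum


# Stand-in mirroring pytest.ExitCode's documented values, used here so
# this sketch runs without pytest installed.
class ExitCode(IntEnum):
    OK = 0
    TESTS_FAILED = 1
    NO_TESTS_COLLECTED = 5


def remap_exit_status(status: ExitCode, exit_0_if_no_units: bool) -> ExitCode:
    """What the pytest_sessionfinish hook above boils down to."""
    if exit_0_if_no_units and status is ExitCode.NO_TESTS_COLLECTED:
        return ExitCode.OK
    return status


# "No tests collected" is remapped to success only when the flag is set.
assert remap_exit_status(ExitCode.NO_TESTS_COLLECTED, True) is ExitCode.OK
assert remap_exit_status(ExitCode.NO_TESTS_COLLECTED, False) is ExitCode.NO_TESTS_COLLECTED
# Real failures are never masked by the flag.
assert remap_exit_status(ExitCode.TESTS_FAILED, True) is ExitCode.TESTS_FAILED
```

Note that the hook only touches the NO_TESTS_COLLECTED status, so failing tests still fail the build.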

@kloczek
Contributor Author

kloczek commented May 19, 2022

I've been thinking about a similar extension; however, the issue is that such a plugin needs to be explicitly installed.
Embedding such functionality (disabled by default) would be way easier to spread across all necessary modules.

IMO such functionality would also allow a quicker start with pytest for modules written from scratch.
In other words, IMO there is a significant set of use cases beyond my own needs.

BTW, about that code snippet: correct me if I'm wrong, but probably the first part of the code needs to be:

def pytest_addoption(parser):
    parser.addoption(
        "--exit-0-if-no-units",
        action="store_true",
        dest="exit_0_if_no_units",
        default=False,
    )

Am I right?

@nicoddemus
Member

I've been thinking about a similar extension; however, the issue is that such a plugin needs to be explicitly installed.

But if you can customize the command line to test the packages, surely you can customize the build environment to include that plugin?

Am I right?

Not needed, dest is optional and generated automatically from the command-line option if not explicitly given.
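pytest's option parsing builds on Python's argparse here, which derives `dest` the same way: strip the leading dashes, then turn the remaining dashes into underscores. A quick standard-library illustration of that derivation:

```python
import argparse

parser = argparse.ArgumentParser()
# No explicit dest: argparse derives "exit_0_if_no_units" from the option
# string by stripping the leading "--" and replacing "-" with "_".
parser.add_argument("--exit-0-if-no-units", action="store_true", default=False)

args = parser.parse_args(["--exit-0-if-no-units"])
assert args.exit_0_if_no_units is True

args = parser.parse_args([])
assert args.exit_0_if_no_units is False
```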

@kloczek
Contributor Author

kloczek commented May 19, 2022

I've been thinking about a similar extension; however, the issue is that such a plugin needs to be explicitly installed.

But if you can customize the command line to test the packages, surely you can customize the build environment to include that plugin?

Consider that at some point other distributions may try to apply a similar approach to testing.
Availability of such a function OOTB would make something like that easier to absorb.
Simply put, I don't want to keep that only for my own needs.

Am I right?

Not needed, dest is optional and generated automatically from the command-line option if not explicitly given.

OK, thanks.

I just made a patch which I'm going to integrate into my python-pytest.spec to be able to start testing it.
May I ask you to have a look at it?

--- a/src/_pytest/python.py~    2022-04-23 11:33:44.000000000 +0000
+++ b/src/_pytest/python.py     2022-05-19 12:20:16.257788166 +0000
@@ -131,6 +131,12 @@
         help="disable string escape non-ascii characters, might cause unwanted "
         "side effects(use at your own risk)",
     )
+    group.addoption(
+        "--exit-0-if-no-units",
+        action="store_true",
+        default=False,
+        help="finish with exit 0 if there is no test units",
+    )


 def pytest_cmdline_main(config: Config) -> Optional[Union[int, ExitCode]]:
--- a/src/_pytest/main.py~      2022-04-23 11:33:44.000000000 +0000
+++ b/src/_pytest/main.py       2022-05-19 13:13:19.239785220 +0000
@@ -323,7 +323,7 @@

     if session.testsfailed:
         return ExitCode.TESTS_FAILED
-    elif session.testscollected == 0:
+    elif session.testscollected == 0 and not session.config.getoption("exit_0_if_no_units"):
         return ExitCode.NO_TESTS_COLLECTED
     return None

This is only a POC, and if it works it would probably be good to add some bits to the documentation as well (if it is accepted as a PR 😋).

@nicoddemus
Member

Simply put, I don't want to keep that only for my own needs.

Oh I see, we probably should have been clearer: this code can be packaged and published to PyPI as a pytest plugin, so it is easily reused (it is just a matter of installing that package into the testing environment).

May I ask you to have a look at it?

Overall looks OK.

if it is accepted as a PR

What do you mean, as a PR to pytest itself? If that's the case, we don't think this needs to be in the core; that code can be easily packaged as a plugin and reused accordingly.

@nicoddemus
Member

Reference on how to write a plugin that is installable by others: https://docs.pytest.org/en/stable/how-to/writing_plugins.html?highlight=plugins#writing-your-own-plugin
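In short, the linked guide comes down to declaring the plugin module under the pytest11 entry-point group, so pytest auto-loads the plugin whenever the package is installed into the environment. A minimal pyproject.toml sketch (package and module names here are hypothetical):

```toml
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "pytest-exit-zero-if-no-units"  # hypothetical package name
version = "0.1.0"
dependencies = ["pytest"]

# The "pytest11" entry-point group is how pytest discovers installed plugins.
[project.entry-points.pytest11]
exit_zero_if_no_units = "pytest_exit_zero_if_no_units"  # hypothetical module with the hooks
```

With such a package installed into the build environment, the --exit-0-if-no-units option would become available without patching pytest itself.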

@kloczek
Contributor Author

kloczek commented May 19, 2022

What do you mean, as a PR to pytest itself? If that's the case, we don't think this needs to be in the core; that code can be easily packaged as a plugin and reused accordingly.

Hmm... what kind of criteria need to be fulfilled for extended functionality to be accepted into the core? 🤔

@nicoddemus
Member

There are no set-in-stone criteria, but in general, if something is not used by the vast majority of users and can be made available as a plugin, then going with a plugin is the recommended path.

@kloczek
Contributor Author

kloczek commented May 19, 2022

Testing that patch, I just found that one unit is now failing.
It looks like it is not related to my patch.
May I ask you to have a look at that failure?

+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-7.1.2-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /home/tkloczko/rpmbuild/BUILDROOT/python-pytest-7.1.2-2.fc35.x86_64/usr/bin/pytest -ra --import-mode=importlib -p no:flaky -p no:randomly
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.13, pytest-7.1.2, pluggy-1.0.0
rootdir: /home/tkloczko/rpmbuild/BUILD/pytest-7.1.2, configfile: pyproject.toml, testpaths: testing
plugins: forked-1.4.0, xdist-2.5.0, hypothesis-6.41.0
collected 3145 items

[..]

================================================================================= FAILURES =================================================================================
_____________________________________________________________ TestRaises.test_raises_exception_looks_iterable ______________________________________________________________

self = <testing.python.raises.TestRaises object at 0x7f239939a040>

    def test_raises_exception_looks_iterable(self):
        class Meta(type):
            def __getitem__(self, item):
                return 1 / 0

            def __len__(self):
                return 1

        class ClassLooksIterableException(Exception, metaclass=Meta):
            pass

        with pytest.raises(
            Failed,
            match=r"DID NOT RAISE <class 'raises(\..*)*ClassLooksIterableException'>",
        ):
>           pytest.raises(ClassLooksIterableException, lambda: None)

testing/python/raises.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

expected_exception = <class 'testing.python.raises.TestRaises.test_raises_exception_looks_iterable.<locals>.ClassLooksIterableException'>
args = (<function TestRaises.test_raises_exception_looks_iterable.<locals>.<lambda> at 0x7f2213da5af0>,), kwargs = {}, __tracebackhide__ = True
excepted_exceptions = (<class 'testing.python.raises.TestRaises.test_raises_exception_looks_iterable.<locals>.ClassLooksIterableException'>,)
exc = <class 'testing.python.raises.TestRaises.test_raises_exception_looks_iterable.<locals>.ClassLooksIterableException'>
message = "DID NOT RAISE <class 'testing.python.raises.TestRaises.test_raises_exception_looks_iterable.<locals>.ClassLooksIterableException'>"
func = <function TestRaises.test_raises_exception_looks_iterable.<locals>.<lambda> at 0x7f2213da5af0>

    def raises(
        expected_exception: Union[Type[E], Tuple[Type[E], ...]], *args: Any, **kwargs: Any
    ) -> Union["RaisesContext[E]", _pytest._code.ExceptionInfo[E]]:
        r"""Assert that a code block/function call raises ``expected_exception``
        or raise a failure exception otherwise.

        :kwparam match:
            If specified, a string containing a regular expression,
            or a regular expression object, that is tested against the string
            representation of the exception using :py:func:`re.search`. To match a literal
            string that may contain :std:ref:`special characters <re-syntax>`, the pattern can
            first be escaped with :py:func:`re.escape`.

            (This is only used when :py:func:`pytest.raises` is used as a context manager,
            and passed through to the function otherwise.
            When using :py:func:`pytest.raises` as a function, you can use:
            ``pytest.raises(Exc, func, match="passed on").match("my pattern")``.)

        .. currentmodule:: _pytest._code

        Use ``pytest.raises`` as a context manager, which will capture the exception of the given
        type::

            >>> import pytest
            >>> with pytest.raises(ZeroDivisionError):
            ...    1/0

        If the code block does not raise the expected exception (``ZeroDivisionError`` in the example
        above), or no exception at all, the check will fail instead.

        You can also use the keyword argument ``match`` to assert that the
        exception matches a text or regex::

            >>> with pytest.raises(ValueError, match='must be 0 or None'):
            ...     raise ValueError("value must be 0 or None")

            >>> with pytest.raises(ValueError, match=r'must be \d+$'):
            ...     raise ValueError("value must be 42")

        The context manager produces an :class:`ExceptionInfo` object which can be used to inspect the
        details of the captured exception::

            >>> with pytest.raises(ValueError) as exc_info:
            ...     raise ValueError("value must be 42")
            >>> assert exc_info.type is ValueError
            >>> assert exc_info.value.args[0] == "value must be 42"

        .. note::

           When using ``pytest.raises`` as a context manager, it's worthwhile to
           note that normal context manager rules apply and that the exception
           raised *must* be the final line in the scope of the context manager.
           Lines of code after that, within the scope of the context manager will
           not be executed. For example::

               >>> value = 15
               >>> with pytest.raises(ValueError) as exc_info:
               ...     if value > 10:
               ...         raise ValueError("value must be <= 10")
               ...     assert exc_info.type is ValueError  # this will not execute

           Instead, the following approach must be taken (note the difference in
           scope)::

               >>> with pytest.raises(ValueError) as exc_info:
               ...     if value > 10:
               ...         raise ValueError("value must be <= 10")
               ...
               >>> assert exc_info.type is ValueError

        **Using with** ``pytest.mark.parametrize``

        When using :ref:`pytest.mark.parametrize ref`
        it is possible to parametrize tests such that
        some runs raise an exception and others do not.

        See :ref:`parametrizing_conditional_raising` for an example.

        **Legacy form**

        It is possible to specify a callable by passing a to-be-called lambda::

            >>> raises(ZeroDivisionError, lambda: 1/0)
            <ExceptionInfo ...>

        or you can specify an arbitrary callable with arguments::

            >>> def f(x): return 1/x
            ...
            >>> raises(ZeroDivisionError, f, 0)
            <ExceptionInfo ...>
            >>> raises(ZeroDivisionError, f, x=0)
            <ExceptionInfo ...>

        The form above is fully supported but discouraged for new code because the
        context manager form is regarded as more readable and less error-prone.

        .. note::
            Similar to caught exception objects in Python, explicitly clearing
            local references to returned ``ExceptionInfo`` objects can
            help the Python interpreter speed up its garbage collection.

            Clearing those references breaks a reference cycle
            (``ExceptionInfo`` --> caught exception --> frame stack raising
            the exception --> current frame stack --> local variables -->
            ``ExceptionInfo``) which makes Python keep all objects referenced
            from that cycle (including all local variables in the current
            frame) alive until the next cyclic garbage collection run.
            More detailed information can be found in the official Python
            documentation for :ref:`the try statement <python:try>`.
        """
        __tracebackhide__ = True

        if isinstance(expected_exception, type):
            excepted_exceptions: Tuple[Type[E], ...] = (expected_exception,)
        else:
            excepted_exceptions = expected_exception
        for exc in excepted_exceptions:
            if not isinstance(exc, type) or not issubclass(exc, BaseException):
                msg = "expected exception must be a BaseException type, not {}"  # type: ignore[unreachable]
                not_a = exc.__name__ if isinstance(exc, type) else type(exc).__name__
                raise TypeError(msg.format(not_a))

        message = f"DID NOT RAISE {expected_exception}"

        if not args:
            match: Optional[Union[str, Pattern[str]]] = kwargs.pop("match", None)
            if kwargs:
                msg = "Unexpected keyword arguments passed to pytest.raises: "
                msg += ", ".join(sorted(kwargs))
                msg += "\nUse context-manager form instead?"
                raise TypeError(msg)
            return RaisesContext(expected_exception, message, match)
        else:
            func = args[0]
            if not callable(func):
                raise TypeError(f"{func!r} object (type: {type(func)}) must be callable")
            try:
                func(*args[1:], **kwargs)
            except expected_exception as e:
                # We just caught the exception - there is a traceback.
                assert e.__traceback__ is not None
                return _pytest._code.ExceptionInfo.from_exc_info(
                    (type(e), e, e.__traceback__)
                )
>       fail(message)

../../BUILDROOT/python-pytest-7.1.2-2.fc35.x86_64/usr/lib/python3.8/site-packages/_pytest/python_api.py:934:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

reason = "DID NOT RAISE <class 'testing.python.raises.TestRaises.test_raises_exception_looks_iterable.<locals>.ClassLooksIterableException'>", pytrace = True, msg = None

    @_with_exception(Failed)
    def fail(
        reason: str = "", pytrace: bool = True, msg: Optional[str] = None
    ) -> "NoReturn":
        """Explicitly fail an executing test with the given message.

        :param reason:
            The message to show the user as reason for the failure.

        :param pytrace:
            If False, msg represents the full failure information and no
            python traceback will be reported.

        :param msg:
            Same as ``reason``, but deprecated. Will be removed in a future version, use ``reason`` instead.
        """
        __tracebackhide__ = True
        reason = _resolve_msg_to_reason("fail", reason, msg)
>       raise Failed(msg=reason, pytrace=pytrace)
E       Failed: DID NOT RAISE <class 'testing.python.raises.TestRaises.test_raises_exception_looks_iterable.<locals>.ClassLooksIterableException'>

../../BUILDROOT/python-pytest-7.1.2-2.fc35.x86_64/usr/lib/python3.8/site-packages/_pytest/outcomes.py:196: Failed

During handling of the above exception, another exception occurred:

self = <testing.python.raises.TestRaises object at 0x7f239939a040>

    def test_raises_exception_looks_iterable(self):
        class Meta(type):
            def __getitem__(self, item):
                return 1 / 0

            def __len__(self):
                return 1

        class ClassLooksIterableException(Exception, metaclass=Meta):
            pass

        with pytest.raises(
            Failed,
            match=r"DID NOT RAISE <class 'raises(\..*)*ClassLooksIterableException'>",
        ):
>           pytest.raises(ClassLooksIterableException, lambda: None)
E           AssertionError: Regex pattern "DID NOT RAISE <class 'raises(\\..*)*ClassLooksIterableException'>" does not match "DID NOT RAISE <class 'testing.python.raises.TestRaises.test_raises_exception_looks_iterable.<locals>.ClassLooksIterableException'>".

testing/python/raises.py:259: AssertionError
========================================================================= short test summary info ==========================================================================
SKIPPED [1] testing/test_capture.py:1432: only on windows
SKIPPED [22] testing/test_nose.py:6: could not import 'nose': No module named 'nose'
SKIPPED [1] testing/test_pathlib.py:436: Windows only
SKIPPED [1] testing/test_tmpdir.py:221: win only
SKIPPED [1] testing/test_assertrewrite.py:770: importlib.resources.files was introduced in 3.9
SKIPPED [1] ../../BUILDROOT/python-pytest-7.1.2-2.fc35.x86_64/usr/lib/python3.8/site-packages/_pytest/pathlib.py:434: symlinks not supported: [Errno 17] File exists: '/tmp/pytest-of-tkloczko/pytest-126/test_collect_symlink_dir0/symlink_dir' -> '/tmp/pytest-of-tkloczko/pytest-126/test_collect_symlink_dir0/dir'
SKIPPED [1] testing/test_config.py:1849: does not work with xdist currently
SKIPPED [1] testing/test_conftest.py:361: only relevant for case insensitive file systems
SKIPPED [1] testing/test_parseopt.py:330: argcomplete not available
SKIPPED [1] testing/test_unittest.py:1287: could not import 'asynctest': No module named 'asynctest'
SKIPPED [3] testing/test_warnings.py:521: not relevant until pytest 8.0
SKIPPED [41] ../../BUILDROOT/python-pytest-7.1.2-2.fc35.x86_64/usr/lib/python3.8/site-packages/_pytest/pytester.py:1497: could not import 'pexpect': No module named 'pexpect'
SKIPPED [1] testing/test_faulthandler.py:71: sometimes crashes on CI (#7022)
XFAIL testing/acceptance_test.py::TestInvocationVariants::test_noclass_discovery_if_not_testcase
  decide: feature or bug
XFAIL testing/test_capture.py::TestPerTestCapturing::test_capture_scope_cache
  unimplemented feature
XFAIL testing/test_collection.py::TestPrunetraceback::test_collect_report_postprocessing
  other mechanism for adding to reporting needed
XFAIL testing/test_config.py::TestParseIni::test_confcutdir
  probably not needed
XFAIL testing/test_doctest.py::TestLiterals::test_number_non_matches['3.1416'-'3.14']
XFAIL testing/test_mark.py::TestKeywordSelection::test_keyword_extra_dash
XFAIL testing/test_pytester.py::test_make_hook_recorder
  reason: internal reportrecorder tests need refactoring
XFAIL testing/test_runner.py::test_runtest_in_module_ordering
XFAIL testing/python/fixtures.py::TestAutouseDiscovery::test_setup_enabled_functionnode
  'enabled' feature not implemented
XFAIL testing/python/fixtures.py::TestRequestBasic::test_request_garbage
  reason: this test is flaky when executed with xdist
FAILED testing/python/raises.py::TestRaises::test_raises_exception_looks_iterable - AssertionError: Regex pattern "DID NOT RAISE <class 'raises(\\..*)*ClassLooksIterable...
==================================================== 1 failed, 3058 passed, 76 skipped, 10 xfailed in 177.58s (0:02:57) ====================================================

@kloczek
Contributor Author

kloczek commented May 19, 2022

Sorry, I just found that I've already reported this in #9764.
