Is there a way to run doctests and continue on failure? #3149

Closed · will133 opened this issue Jan 24, 2018 · 6 comments
@will133 (Contributor) commented Jan 24, 2018

Say you have a file named t.py:

"""
>>> print(1)
>>> print(1)
>>> print(1)
"""

if __name__ == "__main__":
    import doctest
    doctest.testmod()

When I run this, I would get:

$ python t.py
**********************************************************************
File "t.py", line 2, in __main__
Failed example:
    print(1)
Expected nothing
Got:
    1
**********************************************************************
File "t.py", line 3, in __main__
Failed example:
    print(1)
Expected nothing
Got:
    1
**********************************************************************
File "t.py", line 4, in __main__
Failed example:
    print(1)
Expected nothing
Got:
    1
**********************************************************************
1 items had failures:
   3 of   3 in __main__
***Test Failed*** 3 failures.

However, when I run with pytest, I would only get the first failure:

$ pytest t.py
========================================================= test session starts =========================================================
platform linux2 -- Python 2.7.14, pytest-3.3.2, py-1.5.2, pluggy-0.5.2
rootdir: /auto/cnvtvws/wlee/fcnvt/dates, inifile: setup.cfg
plugins: xdist-1.20.1, forked-0.2, flakes-1.0.1, flake8-0.9, cov-2.5.1, hypothesis-3.38.5, pep8-1.0.6
collected 2 items

t.py sF                                                                                                                         [100%]
======================================================= short test summary info =======================================================
FAIL t.py::t

============================================================== FAILURES ===============================================================
_____________________________________________________________ [doctest] t _____________________________________________________________
001
002 >>> print(1)
Expected nothing
Got:
    1

.../t.py:2: DocTestFailure
================================================= 1 failed, 1 skipped in 0.01 seconds =================================================

I've tried to dig through all the options in the documentation, but I wasn't able to figure it out. Is there a way to have pytest report all the failures at once? Otherwise I'd have to fix the failures one by one, which gets tedious. I tried various values for --doctest-report (e.g. --doctest-report='none') but couldn't get it to work.

Here is the relevant section in setup.cfg:

[tool:pytest]
addopts = -rf --strict --doctest-modules --flake8 --doctest-glob='*.rst'

doctest_optionflags = NORMALIZE_WHITESPACE
maxfail = 5

Here is my pip list:

alabaster (0.7.10)
asn1crypto (0.24.0)
attrs (17.3.0)
Babel (2.5.3)
certifi (2017.11.5)
cffi (1.11.4)
chardet (3.0.4)
configparser (3.5.0)
coverage (4.4.2)
cryptography (2.1.4)
docutils (0.14)
enum34 (1.1.6)
execnet (1.2.0)
flake8 (3.5.0)
funcsigs (1.0.2)
hypothesis (3.38.5)
idna (2.6)
imagesize (0.7.1)
ipaddress (1.0.19)
Jinja2 (2.10)
llvmlite (0.15.0)
MarkupSafe (1.0)
mccabe (0.6.1)
numba (0.30.1+0.g8c1033f.dirty)
numpy (1.11.3)
pep8 (1.7.1)
pip (9.0.1)
pluggy (0.6.0)
py (1.5.2)
pycodestyle (2.3.1)
pycparser (2.18)
pyflakes (1.6.0)
Pygments (2.2.0)
Pympler (0.5)
pyOpenSSL (17.5.0)
PySocks (1.6.7)
pytest (3.3.2)
pytest-cache (1.0)
pytest-cov (2.5.1)
pytest-flake8 (0.9)
pytest-flakes (1.0.1)
pytest-forked (0.2)
pytest-pep8 (1.0.6)
pytest-runner (3.0)
pytest-xdist (1.20.1)
python-dateutil (2.4.1)
pytz (2015.7)
requests (2.18.4)
setuptools (38.4.0)
singledispatch (3.4.0.3)
six (1.11.0)
snowballstemmer (1.2.1)
Sphinx (1.6.6)
sphinxcontrib-websupport (1.0.1)
typing (3.6.2)
urllib3 (1.22)
wheel (0.30.0)
zope.interface (4.4.3)
@nicoddemus (Member) commented Jan 24, 2018

Hi @will133 thanks for reporting this.

It seems like this is a bug in pytest and I don't know of any workaround, I'm afraid. 😕

@will133 (Contributor) commented Jan 29, 2018

I ran a debugger in the two cases, and it seems that _pytest/doctest.py uses doctest.DebugRunner, whereas the testmod() case uses doctest.DocTestRunner. The DebugRunner raises an exception when it calls self.report_failure(), so it does not continue running the rest of the examples. The DocTestRunner outputs the Expected/Got failure but does not raise an exception.
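The difference is easy to demonstrate with the stdlib alone. A minimal sketch (the docstring and names here are illustrative, not pytest's code): both runners execute the same failing examples, but only DocTestRunner runs past the first failure.

```python
import doctest

# Build a small DocTest with two examples that each expect no output
# but actually print something, so both fail.
parser = doctest.DocTestParser()
docstring = ">>> print(1)\n\n>>> print(2)\n"
test = parser.get_doctest(docstring, {}, "demo", "demo.py", 0)

# DocTestRunner (what testmod() uses) records each failure and keeps going.
runner = doctest.DocTestRunner(verbose=False)
results = runner.run(test, out=lambda s: None)  # suppress report output
print(results.failed)  # 2 -- both examples were executed

# DebugRunner raises on the first failure instead of reporting it.
debug = doctest.DebugRunner(verbose=False)
first_failure = None
try:
    debug.run(test, out=lambda s: None)
except doctest.DocTestFailure as exc:
    first_failure = exc
print(first_failure.example.source.strip())  # print(1) -- stopped at example 1
```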

I'm not so familiar with the _pytest/doctest.py module, but do you know why it would use a DebugRunner there?

@nicoddemus (Member) commented Jan 29, 2018

I'm not sure; indeed, one of the differences between them is that DebugRunner stops at the first failure, while DocTestRunner accumulates statistics. Perhaps it's worth a try to see how it goes?

@will133 (Contributor) commented Jan 29, 2018

It seems that DoctestItem's runtest calls self.runner.run(). In the DebugRunner case, this raises a doctest.DocTestFailure when an example fails. By the time it gets to repr_failure() for DoctestItem (which tests the excinfo for doctest.DocTestFailure to report the error), execution has already been short-circuited.

I'm not sure how to make it work properly. Should the collect() methods for DoctestTextfile and DoctestModule be changed to use a custom subclass of doctest.DocTestRunner? I suppose self.runner.run() inside runtest would then collect all the errors and raise at the end with a custom MultipleDoctestFailure? I'm pretty new to the plugin API, so I'm not sure if this is the right way to go. I'd imagine repr_failure() would have to be changed as well to handle that case.

@nicoddemus (Member) commented Jan 31, 2018

@will133 sorry for the delay.

Your proposal of using a custom subclass of doctest.DocTestRunner, which overrides doctest.DocTestRunner.report_unexpected_exception and doctest.DocTestRunner.report_failure to accumulate the errors, seems feasible. Then, as you said, we would need to update repr_failure() to generate the failure text from the accumulated errors, instead of from a single failure exception as is done today.

Would you like to give this a try?

@will133 (Contributor) commented Feb 1, 2018

I haven't contributed to the pytest repo before, so please let me know what I'd need to change for the pull request.

will133 added a commit to will133/pytest that referenced this issue Feb 24, 2018

will133 added a commit to will133/pytest that referenced this issue Feb 24, 2018

@nicoddemus nicoddemus closed this Feb 27, 2018
