unittest subTest failure causes result to be omitted from listing #70082
The title can barely be called accurate; the description of the problem isn't easy to condense to title length. Here's the issue:

$ cat subtest_test.py
import os
import unittest


class TestClass(unittest.TestCase):
    def test_subTest(self):
        for t in map(int, os.environ.get('tests', '1')):
            with self.subTest(t):
                if t > 1:
                    raise unittest.SkipTest('skipped')
                self.assertTrue(t)


if __name__ == '__main__':
    unittest.main()
$ ./python.exe subtest_test.py
.
Ran 1 test in 0.000s

OK

======================================================================
Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

Ran 1 test in 0.001s

FAILED (failures=1)

Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

Ran 1 test in 0.001s

FAILED (failures=1, skipped=1)

Note that on the first run, the short summary is ".", as expected. The second is "", when one of the subTests fails, but then the third is "s", when one subtest fails but another is skipped. This also extends to verbose mode:

$ ./python.exe subtest_test.py -v
test_subTest (__main__.TestClass) ... ok

Ran 1 test in 0.001s

OK

Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

Ran 1 test in 0.001s

FAILED (failures=1)

======================================================================
Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

Ran 1 test in 0.001s

FAILED (failures=1, skipped=1)

Note the first run shows "... ok", the second "... ", and the third "... skipped 'skipped'". I'm unsure what the solution should be. There should at least be some indication that the test finished, but should mixed results be reported as 'm' ("mixed results" in verbose mode), or should failure/error take precedence, or should every different result be represented? |
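(For context on where this reporting hooks in: each subtest outcome reaches the result object through TestResult.addSubTest(test, subtest, outcome), where outcome is None for a passing subtest and an exc_info tuple otherwise. Below is a minimal, purely illustrative sketch of a result class that writes one status character per subtest; the class name is made up, and this is not what the default TextTestResult does.)

import unittest

class SubTestCharResult(unittest.TextTestResult):
    """Illustrative only: write '.', 'F' or 'E' for each subtest outcome.

    Skipped subtests already produce an 's' through addSkip(), so they
    are not handled here.
    """

    def addSubTest(self, test, subtest, outcome):
        # outcome is None if the subtest passed, otherwise (exc_type, exc_value, tb)
        super().addSubTest(test, subtest, outcome)
        if self.showAll or not self.dots:
            return  # only sketch the dot-style (non-verbose) output
        if outcome is None:
            char = '.'
        elif issubclass(outcome[0], test.failureException):
            char = 'F'
        else:
            char = 'E'
        self.stream.write(char)
        self.stream.flush()

if __name__ == '__main__':
    unittest.main(testRunner=unittest.TextTestRunner(resultclass=SubTestCharResult))

(This only shows the hook; it does not answer the open question here of what the aggregate status for the whole test should be.)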
Okay, so you have a test with subtests. You have presented three cases.
What is the use case? Why not skip the test before any subtests are started? |
Martin Panter added the comment:
> Okay, so you have a test with subtests. You have presented three cases:

1. One or several subtests which pass. No problems.
2. Any of multiple subtests fail, and there is no indication in the short summary that anything went wrong.
3. Only the subtest is skipped (which should be valid, or documented as invalid):

$ tests=210 ./python.exe subtest_test.py -v
test_subTest (__main__.TestClass) ... skipped 'skipped'

======================================================================
Traceback (most recent call last):
  File "subtest_test.py", line 14, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

Ran 1 test in 0.001s

FAILED (failures=1, skipped=1)

But the summary makes it seem as though the entire test was skipped. Hopefully this makes it a bit clearer :) |
I believe this was discussed at the time subTest was added and deemed an acceptable tradeoff for a simpler implementation. I'm not sure it is, but I'm not prepared to write code to fix it :) I'm bothered every time I see this, but I have to admit that the tracebacks are the most important feedback and you do get those. |
Yes, now I understand. If a subtest fails, there is no status update (not even a newline in verbose mode), and each subtest skip triggers a separate status update. My gut feeling is that any subtest failure should be counted as the whole test failing. I'm not sure how the failure vs. error cases should be handled; maybe error should trump failure. Judging by <https://bugs.python.org/issue16997#msg180259>, Antoine intended for SkipTest to skip subtests. But I'm not sure that should be reported as the whole test being skipped. |
I think the priority should be error > failure > skip > pass. |
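(A tiny sketch of that precedence rule, with made-up names just to pin down the semantics; nothing like this exists in unittest:)

# Collapse subtest outcomes into one status for the parent test,
# using the proposed precedence error > failure > skip > pass.
_PRIORITY = {'error': 0, 'failure': 1, 'skip': 2, 'pass': 3}

def overall_status(subtest_statuses):
    # min() by priority picks 'error' over 'failure' over 'skip' over 'pass';
    # a test with no subtests simply counts as passing.
    return min(subtest_statuses, key=_PRIORITY.__getitem__, default='pass')

# overall_status(['pass', 'skip', 'failure']) -> 'failure'
# overall_status(['pass', 'skip'])            -> 'skip'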
The basic model is this: we certainly must not report the test as a whole passing if any subtest did not pass. Long term I want to remove the error/failure partitioning of exceptions; it's not actually useful. The summary for the test, when subtests are used, should probably enumerate the states, e.g. test_foo (3 passed, 2 skipped, 1 failure), in much the same way the run as a whole is enumerated. |
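(For illustration, a per-test summary of that shape could be produced with a simple counter; the helper name is invented and is not part of unittest:)

from collections import Counter

def enumerate_states(test_name, subtest_states):
    # e.g. enumerate_states('test_foo', ['passed'] * 3 + ['skipped'] * 2 + ['failure'])
    # returns 'test_foo (3 passed, 2 skipped, 1 failure)'
    counts = Counter(subtest_states)
    parts = ', '.join(f'{n} {state}' for state, n in counts.items())
    return f'{test_name} ({parts})'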
Weren't subtests proposed as a more flexible replacement for parametrized tests? I think that every subtest should be counted as a separate test case: in verbose mode it should output a separate line, and in non-verbose mode it should output a separate character. |
PR 28082 is a draft that implements this idea. Skipped and failed (but not successfully passed) subtests are now reported separately, as a character ('s', 'F', 'E') or a line ("skipped", "FAIL", "ERROR"). The description of the subtest is included in the line. For example:

$ tests=.sFE ./python test_issue25894.py
sFE
======================================================================
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/test_issue25894.py", line 15, in test_subTest
    raise Exception('error')
    ^^^^^^^^^^^^^^^^^^^^^^^^
Exception: error

======================================================================
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/test_issue25894.py", line 13, in test_subTest
    self.fail('failed')
    ^^^^^^^^^^^^^^^^^^^
AssertionError: failed

Ran 1 test in 0.001s

FAILED (failures=1, errors=1, skipped=1)

$ tests=.sFE ./python test_issue25894.py -v
test_subTest (__main__.TestClass) ...
  test_subTest (__main__.TestClass) [1] (t='s') ... skipped 'skipped'
  test_subTest (__main__.TestClass) [2] (t='F') ... FAIL
  test_subTest (__main__.TestClass) [3] (t='E') ... ERROR

======================================================================
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/test_issue25894.py", line 15, in test_subTest
    raise Exception('error')
    ^^^^^^^^^^^^^^^^^^^^^^^^
Exception: error

======================================================================
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/test_issue25894.py", line 13, in test_subTest
    self.fail('failed')
    ^^^^^^^^^^^^^^^^^^^
AssertionError: failed

Ran 1 test in 0.001s

FAILED (failures=1, errors=1, skipped=1)

As a side effect, the test description is also repeated for every error in the test cleanup code (in tearDown() and doCleanups()). Similar changes should also be made in RegressionTestResult; if bpo-45057 is applied first, they will be much simpler. bpo-29152 may be related. If addError() and addFailure() are called from addSubTest(), PR 28082 will need to be rewritten. |
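(The test_issue25894.py script itself is not shown in the thread; judging from the tracebacks and the tests=.sFE variable above, it presumably looks roughly like the following reconstruction, which is a guess rather than the actual file:)

import os
import unittest


class TestClass(unittest.TestCase):
    def test_subTest(self):
        # Each character of the 'tests' environment variable drives one subtest:
        # '.' passes, 's' skips, 'F' fails, 'E' raises an unexpected exception.
        for t in os.environ.get('tests', '.'):
            with self.subTest(t=t):
                if t == 's':
                    raise unittest.SkipTest('skipped')
                elif t == 'F':
                    self.fail('failed')
                elif t == 'E':
                    raise Exception('error')


if __name__ == '__main__':
    unittest.main()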
Any suggestions for the output format? Currently PR 28082 formats the lines for subtest skipping, failure or error with a 2-space indentation. Lines for skipping, failure or error in tearDown() or in functions registered with addCleanup() do not differ from lines for skipping, failure or error in the test method itself. I am not sure about backporting this change. On one hand, it fixes an old flaw in the unittest output. On the other hand, the change affects more than just subtests, and it can confuse programs that parse unittest output because test descriptions can now occur multiple times. |
I'm fine with that. Ultimately I don't think differentiating subtest status from method status is that important.
Since we're too late for 3.10.0 and this isn't a trivially small change (it changes test output), I think we shouldn't be backporting it. There are tools in editors that parse this output; I'm afraid it's too risky since we haven't given the community enough time to test for the output differences. I'm marking this as resolved. Thanks for your patch, Serhiy! If you feel strongly about backporting, we'd have to reopen and mark this as a release blocker. |