Include output in passing tests; fix expected failures; adjust verbose output #186
Conversation
Generally speaking, expected occurrences are lowercase whereas anything exceptional and worthy of notice is uppercase. Examples of the former include tests passing, skips and expected failures; examples of the latter include failures, errors and unexpected successes.
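By way of illustration only (hypothetical test names, and not necessarily the runner's literal strings), a verbose run under this convention might read:

    test_addition (tests.MathTest) ... ok
    test_legacy_api (tests.MathTest) ... skip
    test_known_bug (tests.MathTest) ... expected failure
    test_subtraction (tests.MathTest) ... FAIL
    test_division (tests.MathTest) ... ERROR
    test_fixed_bug (tests.MathTest) ... UNEXPECTED SUCCESS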
    -    testinfo = self.infoclass(self, test, self.infoclass.ERROR, err)
         testinfo.test_exception_name = 'ExpectedFailure'
         testinfo.test_exception_message = 'EXPECTED FAILURE: {}'.format(testinfo.test_exception_message)
    +    testinfo = self.infoclass(self, test, self.infoclass.SKIP, err)
https://docs.python.org/3/library/unittest.html#unittest.expectedFailure

> Mark the test as an expected failure. If the test fails it will be considered a success. If the test passes, it will be considered a failure.

Can you please elaborate why it should be marked as SKIP instead of ERROR or FAILURE?
To me a failure is a failure, expected or not; so I'm not sure I understand the change.
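For context, a minimal, self-contained sketch of the decorator's semantics (the test case here is hypothetical):

    import unittest

    class Demo(unittest.TestCase):
        @unittest.expectedFailure
        def test_known_bug(self):
            # Fails, so the run records an expected failure and still succeeds.
            self.assertEqual(1, 2)

        @unittest.expectedFailure
        def test_fixed_bug(self):
            # Passes, so the run records an unexpected success.
            self.assertEqual(1, 1)

    if __name__ == '__main__':
        unittest.main()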
Can you actually split the changes and address this semantic change separately? I'm okay merging the rest.
I'll say that for Django's test suite, a recent change in unittest-xml-reporting is making Jenkins mark our builds as "Unstable" rather than "Success", because tests marked with @unittest.expectedFailure are failing (as expected).
@timgraham, could be unrelated to this PR but related to some of my changes. Please file a ticket with more info... I don't see django/django on Travis, so I'm not sure which CI is used.
What I meant is that I think this change would fix that issue. Possibly the regression is due to cc05679. Our CI is at https://djangoci.com/.
> https://docs.python.org/3/library/unittest.html#unittest.expectedFailure
>
> Mark the test as an expected failure. If the test fails it will be considered a success. If the test passes, it will be considered a failure.
>
> Can you please elaborate why it should be marked as SKIP instead of ERROR or FAILURE?
> To me a failure is a failure, expected or not; so I'm not sure I understand the change.
An expected failure is a way of marking a test as “incorrect”: a good example is a test that is in fact correct, but which tests functionality that is not currently working. The entire point of an expected failure is to keep the test suite passing while semantically marking the test as “wrong” or “broken”.
Compared to the obvious alternative of skipping the test, the main feature of expectedFailure is that the test run actually fails if the test suddenly starts working, so that cannot go unnoticed. From the CPython 3.7 sources:
    def wasSuccessful(self):
        """Tells whether or not this result was a success."""
        [snip]
        return ((len(self.failures) == len(self.errors) == 0) and
                (not hasattr(self, 'unexpectedSuccesses') or
                 len(self.unexpectedSuccesses) == 0))

Given that the JUnit XML format does not support this feature, a “skip” is a reasonably close approximation, although one might also mark it as a success. The latter feels a bit wrong to me. An unexpected success, however, is a clear failure.
As with Django, the motivation for me was that our test suite suddenly started becoming “unstable” on Jenkins. I suspect that wasn't the case with the code I originally submitted, but you never know…
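To make the mapping concrete, here is a sketch of how an expected failure could then appear in the report; the element and attribute layout is assumed from the common JUnit XML schema and the message prefix from the diff above, not copied verbatim from this PR's output:

    <testcase classname="tests.Demo" name="test_known_bug" time="0.001">
      <skipped message="EXPECTED FAILURE: AssertionError: 1 != 2"/>
    </testcase>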
        'message',
        test_result.test_exception_message
    )
    if test_result.stdout:
nit: PEP 8: we shouldn't use "if x" when we mean "if x is not None".
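The distinction matters here because captured output can be an empty string, which is falsy; a quick sketch:

    stdout = ''                  # captured output may legitimately be empty
    if stdout:                   # False: empty output is treated as absent
        print('attach output')
    if stdout is not None:       # True: empty output still counts as present
        print('attach output, possibly empty')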
Codecov Report

    @@            Coverage Diff             @@
    ##           master     #186      +/-   ##
    ==========================================
    - Coverage   99.49%   98.87%    -0.63%
    ==========================================
      Files          17       17
      Lines        1393     1595     +202
    ==========================================
    + Hits         1386     1577     +191
    - Misses          7       18      +11

Continue to review full report at Codecov.
        self.assertIn(b'<error', outdir.getvalue())
        self.assertNotIn(b'<skip', outdir.getvalue())

    def test_xmlrunner_safe_xml_encoding_name(self):
duplicate test, issue with coverage
This PR adjusts expected failures to be reported as skips, includes output in successful tests, and makes the verbose output more consistent with unittest.
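For anyone wanting to try the changed behaviour, a minimal driver in line with the project's documented usage (the output directory name is an arbitrary choice):

    import unittest
    import xmlrunner  # provided by the unittest-xml-reporting package

    if __name__ == '__main__':
        unittest.main(
            testRunner=xmlrunner.XMLTestRunner(output='test-reports'),
            verbosity=2,  # exercises the adjusted verbose output
        )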