Fix subtests #178
Conversation
Codecov Report
@@ Coverage Diff @@
## master #178 +/- ##
==========================================
+ Coverage 99.46% 99.48% +0.01%
==========================================
Files 17 17
Lines 1307 1351 +44
==========================================
+ Hits 1300 1344 +44
Misses 7 7
Continue to review full report at Codecov.
xmlrunner/result.py
Outdated
testinfo.test_exception_message = 'EXPECTED FAILURE: {}'.format(testinfo.test_exception_message)

self.expectedFailures.append((testinfo, self._exc_info_to_string(err, test)))
self._prepare_callback(testinfo, [], 'EXPECTED FAILURE', 'EF')
Since all other short outputs are one character, we can't distinguish 'EF' from 'E' + 'F', which would read as one error followed by one failure. Suggested single-character codes:
'u' -> unexpected success
'x' -> expected failure
done
xmlrunner/result.py
Outdated
errorValue = None
errorList = None
if issubclass(err[0], test.failureException):
    # FAIL
comment does not provide value here.
removed
xmlrunner/result.py
Outdated
    errorList = self.failures
else:
    # ERROR
same here
removed
xmlrunner/result.py
Outdated
""" | ||
self._save_output_data() | ||
|
||
outcome = self.infoclass.ERROR # this (easy) way the XML reports as error, which is not success neither fail |
only used in one place, so please use the constant directly - it's also more readable as that's what the other methods look like.
done
xmlrunner/result.py
Outdated
""" | ||
self._save_output_data() | ||
|
||
outcome = self.infoclass.ERROR # this (easy) way the XML reports as error, which is not success neither fail |
same comment here.
done
xmlrunner/result.py
Outdated
testinfo.outcome = outcome
# But since we want to have an error outcome, we need to provide additional fields:
testinfo.test_exception_name = 'UnexpectedSuccess'
testinfo.test_exception_message = 'This test was marked as expected failure but passed, please review it'
I think this should start with 'UNEXPECTED SUCCESS: '.
done
xmlrunner/result.py
Outdated
testinfo.test_exception_message = 'This test was marked as expected failure but passed, please review it'

self.unexpectedSuccesses.append(testinfo)
self._prepare_callback(testinfo, [], 'UNEXPECTED SUCCESS', 'US')
Use one character instead of 'US' ('U' + 'S', where 'S' reads as skipped).
changed to U as mentioned in first comment
if subTest:
    self.test_id = subTest.id()
    self.subDescription = subTest._subDescription()
Should this update the test_description to be that of the subtest? That would allow the existing description function to work as-is once subtest information is stored in the test result.
i.e.: self.test_result.getDescription(subTest)
I don't really understand what you mean... I did not find any getDescription definition in this repo?
It's not in this repo - it's from the unittest base class which this inherits from. You can see its use a few lines above this.
Instead of getting the description for the test case, as we currently do, we can get the description for the subTest. unittest's default behavior is to include the test description in the subtest.
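To illustrate the suggestion: getDescription() is inherited from unittest.TextTestResult, and passing it the subtest object (rather than the parent test case) yields the parent test's name together with the subtest parameters. CapturingResult and Demo below are hypothetical names used only for this sketch:

```python
import io
import unittest

class Demo(unittest.TestCase):
    def test_values(self):
        with self.subTest(i=1):
            self.assertTrue(True)

# A throwaway result class that records what getDescription() returns
# when handed the subtest object in addSubTest().
class CapturingResult(unittest.TextTestResult):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.descriptions_seen = []

    def addSubTest(self, test, subtest, err):
        super().addSubTest(test, subtest, err)
        self.descriptions_seen.append(self.getDescription(subtest))

result = CapturingResult(io.StringIO(), descriptions=False, verbosity=0)
Demo('test_values').run(result)
print(result.descriptions_seen[0])  # parent test name plus '(i=1)'
```

The subtest parameters (here '(i=1)') come along for free because str() of the subtest object already embeds them after the parent description.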
@@ -156,8 +156,10 @@ def __init__(self, test_result, test_method, outcome=SUCCESS, err=None, subTest=

self.test_name = testcase_name(test_method)
self.test_id = test_method.id()
self.subDescription = None
Should the addSubTest method also be updated to handle cases where the subtest succeeded? This can be detected when no err is provided to the addSubTest method.
From the docs for addSubTest(test, subtest, outcome): "If outcome is None, the subtest succeeded."
Link: https://docs.python.org/3.4/library/unittest.html?highlight=unittest#unittest.TestResult.addSubTest
I don't know, but with my changes a successful subtest does not get reported as failed or anything (or so I think).
My question was whether this method should also be updated to include successful subtests in the result XML. The docs indicate that unittest does nothing by default, but I think the XML runner might want to do something different in this case.
Since subtest successes aren't currently tracked, the reports show a variable number of tests executed depending on subtest success or failure. Additionally, as is, it is not possible to track the history of subtests, since every success looks like a no-run for the subtest.
The docs indicate that unittest does nothing by default, but I think that the xml runner might want to do something different in this case.
But why? That may very well confuse people; we could make this opt-in via a flag instead.
Generators can make subtests so easy to create that reporting every success may not be relevant for everybody, or the set of subtests may differ on every run if fuzzing is involved.
I'm merging as-is, @bbartholomew-om1 if you want to work on this additional piece I can look at it.
The reasoning I was thinking through was the second paragraph above, though I may not have articulated it well. The test history, reliability rate, and time taken for subtests are less meaningful unless successes are included in the report. A subtest might typically fail in 3 ms but take 32 seconds when succeeding; not recording the successes may mask test-infrastructure issues and make test analysis more difficult.
Another way to look at the above is that a subtest success should be treated like a test success. In the current implementation, test successes are considered interesting and hence included in the final report; why would subtest successes be different? unittest does nothing with addSuccess either, but the library overloads that method to add the result to the XML.
I'm not sure I understand your point about confusing others; could you elaborate? I'm also not sure how generators making subtests easy is relevant to that confusion. Agreed that a flag would simplify integration and avoid back-compat problems and confusion.
I can open an issue to get feedback from others if that would be more useful. Though based on my understanding of how subtests are used within unittest, I'd expect the framework to treat the results the same as a regular test.
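The opt-in idea discussed above could look roughly like the following sketch (not part of this PR; SubTestAwareResult, report_subtest_successes, and Demo are hypothetical names): a result class that records successful subtests only when a flag is set, leaving the default behavior unchanged.

```python
import unittest

class SubTestAwareResult(unittest.TestResult):
    def __init__(self, report_subtest_successes=False):
        super().__init__()
        self.report_subtest_successes = report_subtest_successes
        self.subtest_successes = []

    def addSubTest(self, test, subtest, err):
        super().addSubTest(test, subtest, err)
        # err is None exactly when the subtest succeeded.
        if err is None and self.report_subtest_successes:
            self.subtest_successes.append(subtest.id())

class Demo(unittest.TestCase):
    def test_values(self):
        for i in range(3):
            with self.subTest(i=i):
                self.assertGreaterEqual(i, 0)

result = SubTestAwareResult(report_subtest_successes=True)
Demo('test_values').run(result)
print(len(result.subtest_successes))  # 3
```

With the flag off, successful subtests are ignored exactly as unittest does today, so existing reports keep their current test counts.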
Fix for #155: