
Fix subtests #178

Merged: 4 commits merged into xmlrunner:master on Nov 27, 2018

Conversation

XonqNopp (Contributor)

Fix for #155:

  • treat subTest AssertionErrors as FAILURE instead of ERROR
  • report subtest description
  • add missing methods to handle expectedFailure and unexpectedSuccess
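
A minimal reproduction of the behaviors these changes target (a sketch with hypothetical test names, not code from the PR):

import unittest

class ExampleTest(unittest.TestCase):
    def test_with_subtests(self):
        for i in range(3):
            with self.subTest(i=i):
                # An AssertionError raised inside subTest() should now be
                # reported as a FAILURE rather than an ERROR, and the report
                # should carry the subtest description "[i=2]".
                self.assertLess(i, 2)

    @unittest.expectedFailure
    def test_known_bug(self):
        # Exercises the newly added expectedFailure/unexpectedSuccess handling.
        self.fail("known bug")

if __name__ == "__main__":
    unittest.main()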

@codecov-io commented Nov 19, 2018

Codecov Report

Merging #178 into master will increase coverage by 0.01%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master     #178      +/-   ##
==========================================
+ Coverage   99.46%   99.48%   +0.01%     
==========================================
  Files          17       17              
  Lines        1307     1351      +44     
==========================================
+ Hits         1300     1344      +44     
  Misses          7        7
Impacted Files Coverage Δ
xmlrunner/result.py 98.34% <100%> (+0.14%) ⬆️
tests/testsuite.py 100% <100%> (ø) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@coveralls

Coverage Status

Coverage decreased (-0.2%) to 99.251% when pulling f646ea5 on XonqNopp:fix_subtests into 028ad40 on xmlrunner:master.

11 similar comments

@coveralls commented Nov 19, 2018

Coverage Status

Coverage increased (+0.02%) to 99.481% when pulling a1ed8c4 on XonqNopp:fix_subtests into 000969d on xmlrunner:master.

testinfo.test_exception_message = 'EXPECTED FAILURE: {}'.format(testinfo.test_exception_message)

self.expectedFailures.append((testinfo, self._exc_info_to_string(err, test)))
self._prepare_callback(testinfo, [], 'EXPECTED FAILURE', 'EF')
Member:

Since all other short outputs are one character, we can't distinguish 'EF' from 'E' + 'F', which would read as one error followed by one failure.

'u' -> unexpected success
'x' -> expected failure.

Contributor Author:

done
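
For reference, the suggested codes match the stock unittest text runner's own progress characters; a quick, self-contained way to see them (not from the PR):

import unittest

class Codes(unittest.TestCase):
    @unittest.expectedFailure
    def test_expected_failure(self):
        self.fail("expected to fail")

    @unittest.expectedFailure
    def test_unexpected_success(self):
        pass  # passes despite the decorator -> unexpected success

if __name__ == "__main__":
    # Dot-mode output prints 'x' for an expected failure and 'u' for an
    # unexpected success, so the progress line contains "xu".
    unittest.main(verbosity=1)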

errorValue = None
errorList = None
if issubclass(err[0], test.failureException):
    # FAIL
Member:

comment does not provide value here.

Contributor Author:

removed

    errorList = self.failures

else:
    # ERROR
Member:

same here

Contributor Author:

removed
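
The distinction being cleaned up here is unittest's standard one; a standalone sketch (classify is an illustrative helper, not the PR's code):

import unittest

def classify(err, test):
    # unittest treats an exception as a failure when it is an instance of
    # the test's failureException (AssertionError by default); anything
    # else is an error. err is a sys.exc_info()-style tuple.
    exc_type = err[0]
    return 'FAIL' if issubclass(exc_type, test.failureException) else 'ERROR'

class Demo(unittest.TestCase):
    def test_noop(self):
        pass

test = Demo('test_noop')
print(classify((AssertionError, AssertionError('boom'), None), test))  # FAIL
print(classify((ValueError, ValueError('boom'), None), test))          # ERROR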

"""
self._save_output_data()

outcome = self.infoclass.ERROR # this (easy) way the XML reports as error, which is not success neither fail
Member:

Only used in one place, so please use the constant directly; it's also more readable since that's what the other methods look like.

Contributor Author:

done

"""
self._save_output_data()

outcome = self.infoclass.ERROR # this (easy) way the XML reports as error, which is not success neither fail
Member:

same comment here.

Contributor Author:

done

testinfo.outcome = outcome
# But since we want to have error outcome, we need to provide additional fields:
testinfo.test_exception_name = 'UnexpectedSuccess'
testinfo.test_exception_message = 'This test was marked as expected failure but passed, please review it'
Member:

I think this should start with 'UNEXPECTED SUCCESS:'.

Contributor Author:

done

testinfo.test_exception_message = 'This test was marked as expected failure but passed, please review it'

self.unexpectedSuccesses.append(testinfo)
self._prepare_callback(testinfo, [], 'UNEXPECTED SUCCESS', 'US')
Member:

Use one character instead of 'US'; otherwise it reads as 'U' + 'S', where 'S' means 'skipped'.

Contributor Author:

Changed to 'U' as mentioned in the first comment.

@dnozay mentioned this pull request Nov 24, 2018
if subTest:
    self.test_id = subTest.id()
    self.subDescription = subTest._subDescription()

Reviewer:

Should this update test_description to be that of the subtest? That would allow the existing description function to work as-is once subtest information is stored in the test result.

Reviewer:

i.e.: self.test_result.getDescription(subTest)

Contributor Author:

I don't really understand what you mean... I did not find any getDescription definition in this repo?

Reviewer:

It's not in this repo - it's from the unittest base class which this inherits from. You can see its use a few lines above this.

Reviewer:

Instead of getting the description for the testcase, as we are currently doing, we can get the description for the subTest. The default behavior of unittest is to include the test description in the subtest.
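
A standalone illustration of that suggestion (not the PR's code): unittest's TextTestResult.getDescription() already combines the parent test id, the subtest parameters, and the docstring when handed the subtest object itself.

import io
import unittest

class Demo(unittest.TestCase):
    def test_values(self):
        """Docstring picked up by getDescription()."""
        with self.subTest(i=1):
            self.fail("boom")

# verbosity=0 keeps TextTestResult from writing progress to the stream.
result = unittest.TextTestResult(io.StringIO(), descriptions=True, verbosity=0)
unittest.defaultTestLoader.loadTestsFromTestCase(Demo).run(result)

# result.failures holds (subtest, traceback_text) pairs; passing the subtest
# to getDescription() yields something like
# "test_values (__main__.Demo) [i=1]" plus the docstring line.
for failed, _ in result.failures:
    print(result.getDescription(failed))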

@@ -156,8 +156,10 @@ def __init__(self, test_result, test_method, outcome=SUCCESS, err=None, subTest=

self.test_name = testcase_name(test_method)
self.test_id = test_method.id()
self.subDescription = None
Reviewer:

Should the addSubTest method also be updated to handle cases where the subtest succeeded? This can be detected when no err is provided to addSubTest.

Reviewer:

From the docs for addSubTest(test, subtest, outcome):

If outcome is None, the subtest succeeded

Link: https://docs.python.org/3.4/library/unittest.html?highlight=unittest#unittest.TestResult.addSubTest
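
A minimal, self-contained sketch of that detection (SubtestAwareResult is a hypothetical name, not this PR's implementation):

import unittest

class SubtestAwareResult(unittest.TestResult):
    """Hypothetical result class that records successful subtests."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.subtest_successes = []

    def addSubTest(self, test, subtest, err):
        super().addSubTest(test, subtest, err)
        if err is None:  # per the docs quoted above: None means success
            self.subtest_successes.append(subtest)

class Demo(unittest.TestCase):
    def test_values(self):
        for i in range(2):
            with self.subTest(i=i):
                pass  # both subtests succeed

result = SubtestAwareResult()
unittest.defaultTestLoader.loadTestsFromTestCase(Demo).run(result)
print(len(result.subtest_successes))  # 2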

Contributor Author:

I don't know, but with my changes a successful subtest does not get reported as failed or anything (or so I think).

Reviewer:

My question was whether this method should also be updated to include successful subtests in the result XML. The docs indicate that unittest does nothing by default, but I think the XML runner might want to do something different in this case.

Since subtest successes aren't currently tracked, the reports show a variable number of tests executed depending on subtest success or failure. Additionally, as is, it is not possible to track the history of subtests, since all successes appear as a no-run for the subtest.

Member:

> The docs indicate that unittest does nothing by default, but I think the XML runner might want to do something different in this case.

But why? That may very well confuse people; we could make this opt-in via a flag instead.
Generators can make subtests so easy that it may not be relevant for everybody, or it may always be a different set if fuzzing is involved.

I'm merging as-is. @bbartholomew-om1, if you want to work on this additional piece I can look at it.

Reviewer:

The reasoning I was thinking through was the second paragraph above, though I may not have articulated it well. The test history, reliability rate, and time taken for subtests are less meaningful unless you include successes as part of the report. A subtest might typically fail in 3 ms but take 32 seconds when succeeding; not recording the successes may mask test infrastructure issues and makes test analysis more difficult.

Another way to look at the above is that a subtest success should be treated like a test success. Based on the current implementation, test successes are considered interesting and hence included in the final report. Why would subtest successes be different? unittest does nothing with addSuccess either, but the library overloads this functionality to add the result to the XML.

I'm not sure I understand your point about confusing others; could you elaborate? I'm also not sure how generators making subtests easy is relevant to this confusion. Agreed that a flag would simplify integration and avoid back-compat problems and confusion.

I can open an issue to get feedback from others if that would be more useful. Though based on my understanding of how subtests are used within unittest, I'd expect the framework to treat the results the same as a regular test.

@dnozay merged commit 0703e1d into xmlrunner:master Nov 27, 2018
@XonqNopp deleted the fix_subtests branch November 28, 2018 10:39