Unexpected Successes do not result in overall failure #189

Closed
electrofelix opened this issue Jul 12, 2018 · 2 comments · Fixed by #190
@electrofelix

Issue description

It appears that tests marked with @unittest.expectedFailure that nevertheless pass do not trigger an overall test suite failure.

Given the behaviour change in Python 3.4 from the fix that landed for https://bugs.python.org/issue20165, this is rather unexpected.

Expected behavior and actual behavior

Mark any successful test with the decorator @unittest.expectedFailure
Run the tests using stestr run --slowest
Exit code should be non-zero

Actual behaviour: the exit code is zero (0).

Steps to reproduce the problem

Mark any successful test with the decorator @unittest.expectedFailure and run tests with stestr.
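For illustration, a minimal test module that reproduces this (the file, class, and method names here are hypothetical):

```python
# test_uxsuccess.py - hypothetical reproduction module
import unittest


class TestUnexpectedSuccess(unittest.TestCase):

    @unittest.expectedFailure
    def test_passes_anyway(self):
        # The body succeeds, so the runner records an unexpected success.
        self.assertTrue(True)
```

Since the Python 3.4 change referenced above, `python -m unittest` exits non-zero for a module like this, while `stestr run` on the same module exits 0, which is the behaviour reported here.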

Specifications like the version of the project, operating system, or hardware

System information

stestr version (stestr --version): stestr 2.1.0

Python release (python --version): Python 3.5.2

pip packages (pip freeze):
-f /var/cache/pip
cliff==2.13.0
cmd2==0.9.2
colorama==0.3.9
extras==1.0.0
fixtures==3.0.0
flake8==3.5.0
future==0.16.0
linecache2==1.0.0
mccabe==0.6.1
pbr==4.1.0
pkg-resources==0.0.0
prettytable==0.7.2
pycodestyle==2.3.1
pyflakes==1.6.0
pyparsing==2.2.0
pyperclip==1.6.2
python-mimeparse==1.6.0
python-subunit==1.3.0
PyYAML==3.13
six==1.11.0
stestr==2.1.0
stevedore==1.28.0
testtools==2.3.0
traceback2==1.4.0
unittest2==1.1.0
voluptuous==0.11.1
wcwidth==0.1.7

Additional information

It would appear that the testtools TestResult class has a method with the correct behaviour at https://github.com/testing-cabal/testtools/blob/master/testtools/testresult/real.py#L174. It should be possible to fix this in stestr by changing the call to summary_result.wasSuccessful() at https://github.com/mtreinish/stestr/blob/master/stestr/commands/load.py#L222 to testtools.TestResult.wasSuccessful(summary_result) instead. A few other places would also need the same update.
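As a rough sketch of that substitution (assuming `summary_result` is the StreamSummary instance built in stestr/commands/load.py; the wrapper function name is made up for this example):

```python
import testtools


def summary_was_successful(summary_result):
    # StreamSummary.wasSuccessful() does not count unexpected successes,
    # so call testtools.TestResult's implementation unbound instead; per
    # the linked source it also checks the recorded unexpectedSuccesses.
    return testtools.TestResult.wasSuccessful(summary_result)
```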

Based on the tests in testing-cabal/testtools, it may be that the StreamSummary class deliberately maintains backwards compatibility with unittest.TestResult as it behaved before the Python 3.4 fix, and downstream projects are expected to handle this themselves. I will log an issue there as well, referencing this one, to determine the correct place to submit a fix.

@mtreinish
Owner

I'm fine with switching all the wasSuccessful() calls over to the TestResult method now. Even if this is treated as a bug in testtools, updating StreamResult.wasSuccessful() and getting it into a release will probably take a while. So in the meantime I'd like to fix this on the stestr side and get a bugfix release out the door. @electrofelix, do you want to push a patch to switch things over?

If/when testtools pushes out a release with a fix for StreamResult, we can switch all the calls back.

@mtreinish mtreinish added the bug label Jul 12, 2018
@electrofelix
Author

@mtreinish I'll try to put together a PR sometime tomorrow or Monday

mtreinish added a commit that referenced this issue Aug 3, 2018
This commit addresses a bug when a test suite produces unexpected success results. stestr was relying on the wasSuccessful() method of the StreamResult class from testtools, which doesn't properly account for unexpected successes. So when stestr was run on such a suite it wouldn't treat tests with an unexpected success result as failures and would report the run as successful. This addresses the issue by adjusting our testtools API usage to call the wasSuccessful() method from the TestResult class instead, which has the correct behavior.

An issue has been opened in testtools (testing-cabal/testtools#273) regarding the accounting error with StreamResult.wasSuccessful(); if/when a fix lands in a released testtools we can investigate switching these calls back to that function. But for the time being this mitigates the issue.

Fixes #189
mtreinish added a commit that referenced this issue Aug 3, 2018
This commit addresses a bug when a test suite produces unexpected success results. stestr was relying on the wasSuccessful() method of the StreamResult class from testtools, which doesn't properly account for unexpected successes. So when stestr was run on such a suite it wouldn't treat tests with an unexpected success result as failures and would report the run as successful. This addresses the issue by adjusting our testtools API usage to manually query the results object for unexpected success results instead of relying on the existing wasSuccessful() method from the StreamResult class. This is essentially the same as the wasSuccessful() method from the testtools.TestResult class, which exhibits the correct behavior, but isn't something we're using.

An issue has been opened in testtools (testing-cabal/testtools#273) regarding the accounting error with StreamResult.wasSuccessful(); if/when a fix lands in a released testtools we can investigate switching these calls back to that method and deleting our local helper function. But for the time being this mitigates the issue.

Fixes #189
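A hedged sketch of the kind of local helper the commit message describes (the function name and attribute access here are illustrative, not the actual stestr code):

```python
def run_was_successful(summary):
    # Fail the run if any unexpected successes were recorded, then fall
    # back to the summary's own wasSuccessful() for failures/errors.
    # Assumes the summary exposes an unexpectedSuccesses list, as
    # unittest.TestResult and testtools.TestResult do.
    if getattr(summary, 'unexpectedSuccesses', None):
        return False
    return summary.wasSuccessful()
```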