uncaught exception from testcase - exit without traceback #60
Reproduce with:

```shell
mkdir green-60
cd green-60
git clone --depth 1 --branch green-60 https://github.com/krisztianfekete/lib.git
cd lib
./test
```
I have managed to reproduce it with a smaller code sample, see testing-cabal/testtools#144. Even if the original problem is elsewhere, I still think this should be handled by green.
Ah, thank you! The smaller code helps. Sorry for the slow response; I have been swamped by life in general at the moment. I agree, this should definitely be handled by green.
Okay, I finally got some time to sit down and look at this. Unfortunately, I can't reproduce your issue! What am I missing? I'm using this code (adapted from the post you linked to above):

```python
# test.py
import unittest


class Test_testtools(unittest.TestCase):
    def test_1(self):
        pass

    def test_2(self):
        raise SystemExit(0)

    def test_3(self):
        pass
```
The first code fragment is the relevant one, the one with testtools in it, as testtools causes the problem:

```python
# test.py
import unittest
import testtools


class Test_testtools(testtools.TestCase):
    def test_1(self):
        pass

    def test_2(self):
        raise SystemExit(0)

    def test_3(self):
        pass


if __name__ == '__main__':
    unittest.main()
```
It appears to be a conscious design choice that testtools will not catch SystemExit. If you would like, you may review the relevant testtools code yourself (in their current master branch) at the following locations:
That is contrary to the design used by unittest, unittest2, py.test, trial, nose, etc. -- at least as far as I am aware, all of the other tools that provide their own TestCase subclasses have designed them to catch all exceptions. Green relies on the internal exception handling of those subclasses as part of the "Traditional" aspect of things (see green's feature list in the README).

Since this is a design choice that testtools has made, for whatever reason, I'm not going to start battling them by overriding their behavior when tests are run by green. Going down that road would eventually lead to me battling other behavior of other tools and eventually creating ill will among their users and developers. Definitely not something that's worth it to me. So I think your options for handling it in your own code are:
I recommend number 1. Quick, easy, and it doesn't pick fights with anyone's design decisions. Good luck!
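A minimal sketch of option 1, catching the exception inside the test itself. Here `run_risky_code` is a hypothetical stand-in for whatever library code raises `SystemExit`, and plain `unittest.TestCase` is used so the sketch runs without testtools installed; the same pattern applies to a `testtools.TestCase` subclass:

```python
import unittest


def run_risky_code():
    # Hypothetical stand-in for library code that raises
    # SystemExit (e.g. by calling sys.exit()) while under test.
    raise SystemExit(0)


class TestOption1(unittest.TestCase):
    def test_exit_is_caught(self):
        # Catching SystemExit inside the test turns a silent
        # interpreter exit into an ordinary, reportable test outcome.
        self.assertRaises(SystemExit, run_risky_code)
```

With this guard in place the runner stays alive, the test passes (or fails visibly if the exception stops being raised), and the remaining tests still run.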
…f investigating issue #60. It turned out that the issue was actually a design decision by the testtools devs and happens when you subclass testtools.TestCase, so this test already passed and I didn't change anything. But I figured it wouldn't hurt to have one more test that tests something, so I'll leave this one in.
I am disappointed; maybe I have failed to convey my problem. Let me try again. I expect the following from a test runner:
I have presented a test case where all three of the above were compromised. This is clearly a problem. The "whatever" in this case is a widely used test library; these are testtools' statistics on PyPI at the time of writing:
testtools behaving this unexpectedly breaks compatibility with its ancestors (the L from SOLID). Can I trust green (or unittest, etc.) anymore, when third-party libraries as widely used as testtools can cause them to fail silently? Zero exit status, no failure, no traceback! When I found this problem it was due to a failing test in a library. As for "my options":
It looks quite funny (or not) when applied to the test in question, and option 1 is not DRY; I would rather assume and follow convention.
I agree that catching it in your one test won't prevent this situation from occurring again in other tests. Catching the specific instance of SystemExit is only a short-term workaround; solving it in the long term is always better. It would be great for testtools to conform to standard Python TestCase behavior so that other Python testing tools, including alternate test runners like green, will not each have to individually compensate for its behavior. The testtools project appears to have established a good pattern of accepting pull requests. If you proposed a specific change in a pull request, maybe that would be enough to get them to change the behavior.
Fixes CleanCut#60 - uncaught exception from testcase - exit without traceback
Patching it in testtools will be hard and time consuming; it looks like a conscious decision, and they have even defended that decision over the years. But even if it is patched, green still does not print a stack trace the next time something like this happens. Would you accept a pull request that prints a stack trace?
I think your idea has merit. Global handling of any unanticipated exceptions would be an excellent way to handle corner cases caused by poorly behaved testing frameworks (including my own, if I fall into the same trap). I will move over to the pull request and continue commenting there instead of on this issue.
Version 1.10.0 was just released which contains the change that addresses this (and more...) |
Python 2.7.10 & 3.4.3
I have a failing test (in fact it is expected at the moment, lacking some implementation), yet I get no traceback, nor do any remaining tests run: a clean exit after printing a red E, which is quite embarrassing.
In fact I get the same clean exit from plain `unittest`. Note: even a final newline was not printed! `unittest2` shows a relevant traceback and stops afterwards (no new tests are run). Actually I am using `testtools`, which depends on `unittest2`, so it works with that as well. `nose` runs properly, even capturing the uncaught second exception in `unittest2` and continuing. Unfortunately I could not create a minimal reproduction yet (my attempts so far mysteriously worked), but I could pinpoint the place where it could be fixed.
Changing this line makes things work (though this might not be the proper fix):

green/green/suite.py, line 111 (at 2a3b302)