
Better output for unittest #38312

Closed

theller opened this issue Apr 16, 2003 · 19 comments
Labels
stdlib Python modules in the Lib dir

Comments

@theller

theller commented Apr 16, 2003

BPO 722638
Nosy @gvanrossum, @tim-one, @brettcannon, @theller, @rhettinger
Files
  • unittest.diff: Patch for unittest
  • unittest-2.diff: 2nd version of patch
  • unittest.py.diff: unittest-3.diff
  • test_sample.py: test_sample.py
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = <Date 2003-12-06.13:19:38.000>
    created_at = <Date 2003-04-16.17:49:36.000>
    labels = ['library']
    title = 'Better output for unittest'
    updated_at = <Date 2003-12-06.13:19:38.000>
    user = 'https://github.com/theller'

    bugs.python.org fields:

    activity = <Date 2003-12-06.13:19:38.000>
    actor = 'purcell'
    assignee = 'purcell'
    closed = True
    closed_date = None
    closer = None
    components = ['Library (Lib)']
    creation = <Date 2003-04-16.17:49:36.000>
    creator = 'theller'
    dependencies = []
    files = ['5185', '5186', '5187', '5188']
    hgrepos = []
    issue_num = 722638
    keywords = ['patch']
    message_count = 19.0
    messages = ['43364', '43365', '43366', '43367', '43368', '43369', '43370', '43371', '43372', '43373', '43374', '43375', '43376', '43377', '43378', '43379', '43380', '43381', '43382']
    nosy_count = 6.0
    nosy_names = ['gvanrossum', 'tim.peters', 'brett.cannon', 'theller', 'rhettinger', 'purcell']
    pr_nums = []
    priority = 'normal'
    resolution = 'accepted'
    stage = None
    status = 'closed'
    superseder = None
    type = None
    url = 'https://bugs.python.org/issue722638'
    versions = []

    @theller
    Author

    theller commented Apr 16, 2003

    This patch enables more useful output for unittests: If
    a test crashes (raises an unexpected exception), a full
    traceback is printed.

    If a test fails, the output is something like this:

    ========================================
    FAIL: test_failUnlessEqual (main.FailingTests)
    ----------------------------------------------------------------------
    TestFailed: 0 != 1
    File "xunit.py", line 12, in test_failUnlessEqual
    self.failUnlessEqual(self.a, self.b)

    ========================================

    Before, this was printed:

    ========================================
    FAIL: test_failIfEqual (main.FailingTests)
    ----------------------------------------------------------------------

    Traceback (most recent call last):
      File "xunit.py", line 15, in test_failIfEqual
        self.failIfEqual(self.a, self.a)
      File "c:\python23\lib\unittest.py", line 300, in
    failIfEqual
        raise self.failureException, \
    AssertionError: 0 == 0

    ========================================

    If needed, I can upload the test script I use, together
    with the results before and after the patch.

    This has been discussed briefly on c.l.p; the response was
    mostly positive.
    http://tinyurl.com/9obf
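
    For context, here is a minimal reconstruction of the kind of test script
    behind the output above (hypothetical: the actual xunit.py is not attached
    here, the class and attribute names are guessed from the printed failures,
    and failUnlessEqual/failIfEqual are the 2003-era spellings of today's
    assertEqual/assertNotEqual):

    import unittest

    class FailingTests(unittest.TestCase):
        def setUp(self):
            self.a, self.b = 0, 1

        def test_failUnlessEqual(self):
            # Fails with "0 != 1"; under the patch only this frame is shown.
            self.failUnlessEqual(self.a, self.b)

        def test_failIfEqual(self):
            # Fails with "0 == 0"; before the patch the frame inside
            # unittest.py's failIfEqual also appeared in the traceback.
            self.failIfEqual(self.a, self.a)

    if __name__ == "__main__":
        unittest.main()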

    @theller theller closed this as completed Apr 16, 2003
    @theller theller added the stdlib Python modules in the Lib dir label Apr 16, 2003
    @theller
    Author

    theller commented Apr 17, 2003

    Attaching new version of the patch (unittest-2.diff). This
    gives better output for failUnlessRaises, like this:

    ======================================================================
    FAIL: test_failUnlessRaises (main.FailingTests)
    ----------------------------------------------------------------------
    TestFailed: wrong exception, expected TypeError
    got: 'ValueError: 10'
    File "xunit.py", line 18, in test_failUnlessRaises
    self.failUnlessRaises(TypeError, self._raise,
    ValueError, 10)

    ======================================================================
    FAIL: test_failUnlessRaises_2 (main.FailingTests)
    ----------------------------------------------------------------------
    TestFailed: wrong exception, expected TypeError, IndexError,
    or AttributeError
    got: 'ValueError: 10'
    File "xunit.py", line 21, in test_failUnlessRaises_2
    self.failUnlessRaises((TypeError, IndexError,
    AttributeError), self._raise, ValueError, 10)

    ----------------------------------------------------------------------
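
    Again for context, a hypothetical sketch of the tests behind this output
    (the _raise helper name is taken from the traceback above; passing a tuple
    of exception classes to failUnlessRaises/assertRaises is a real feature):

    import unittest

    class FailingTests(unittest.TestCase):
        def _raise(self, exc, *args):
            raise exc(*args)

        def test_failUnlessRaises(self):
            # Expects TypeError, but the helper raises ValueError(10).
            self.failUnlessRaises(TypeError, self._raise, ValueError, 10)

        def test_failUnlessRaises_2(self):
            # Any one of the listed exception types would satisfy the check.
            self.failUnlessRaises((TypeError, IndexError, AttributeError),
                                  self._raise, ValueError, 10)

    if __name__ == "__main__":
        unittest.main()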

    @brettcannon
    Member

    I like the new output, personally. I am +1 on letting Thomas add the
    changes.
    Does this mean we no longer treat unittest as a separate project?

    @purcell
    Mannequin

    purcell mannequin commented Apr 25, 2003

    This behaviour of trimming the traceback was implemented in a previous
    version of PyUnit, but dropped because it did not work with Jython. My
    aim is that the same 'unittest.py' should work out of the box with both
    CPython and Jython.

    @theller
    Author

    theller commented Apr 25, 2003

    What a pity! What exactly does not work in Jython?

    Before giving up on this, there are at least two ways to
    proceed:

    • Behave as before in Jython, and use the better output in
      CPython.
    • Apply this patch only to the unittest version bundled
      with CPython.

    What are the chances for one of these?

    @purcell
    Mannequin

    purcell mannequin commented Apr 25, 2003

    After investigation, this seems to work with Jython (though not JPython,
    which didn't have tb_next etc.).

    In general I've been trying hard to keep 'unittest.py' vanilla, since a lot
    of people are still using it with Python 1.5 and even JPython. Hence the
    complete absence of string methods, list comprehensions and other new
    language features. Don't know if this policy makes sense in the longer
    term, but I value it right now.

    In that sense, I'm not sure if it's worth changing the message.

    @theller
    Author

    theller commented Apr 25, 2003

    Last attempt to convince you: I could try to port the
    changes to Python 1.5, if you want to stay compatible.

    If you still reject the patch (you're the unittest boss),
    I'll have to live with subclassing unittest ;-)
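
    A minimal sketch of the kind of subclassing alluded to here, written
    against the modern (Python 2.7+) unittest API rather than the 2003 module;
    ShortFailureResult and its summarising behaviour are assumptions, not part
    of the patch:

    import traceback
    import unittest

    class ShortFailureResult(unittest.TextTestResult):
        # Keep full tracebacks for errors (unexpected exceptions), but report
        # plain failures with only the final exception line, much as the patch
        # proposes.
        def addFailure(self, test, err):
            super().addFailure(test, err)   # keeps the usual "F"/"FAIL" output
            exc_type, exc_value, _tb = err
            summary = "".join(
                traceback.format_exception_only(exc_type, exc_value))
            self.failures[-1] = (test, summary)   # replace the stored traceback

    if __name__ == "__main__":
        unittest.main(testRunner=unittest.TextTestRunner(
            resultclass=ShortFailureResult))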

    @rhettinger
    Contributor

    I would like to see Thomas's patch or some conformant
    variant go in. Usability problems are a bug. Friendlier
    output makes it more likely that unittest will be used in
    the first place.

    @tim-one
    Member

    tim-one commented May 6, 2003

    I'm split. The current output when assertRaises fails is a
    frequent cause of head-scratching ("what? it's complaining
    because ValueError got raised? ... no, it's complaining
    because ValueError wasn't raised? ..."). OTOH, I see no
    value in trimming the traceback. Now that *could* be
    because the assertRaises output can be so confusing that
    we end up using the rest of the traceback to figure out what
    unittest is trying to tell us in those cases.

    @theller
    Author

    theller commented May 6, 2003

    That's exactly how I was feeling. When an assertRaises test
    failed, I usually inserted the call it made before this
    line, to see the real traceback.

    And that's what this patch tries to fix. I don't want to see
    tracebacks when a test fails, I want a clear indication that
    it failed (the patch prints "TestFailed" instead of
    "Traceback:").

    For the output of a failed assertRaises, see the first
    comment I added. IMO it clearly says which exception
    was expected, and which one was raised.

    @tim-one
    Member

    tim-one commented May 6, 2003

    That's why I'm split: I do like the new *messages* better
    (their content), but I don't like losing the tracebacks.
    Sometimes it's a bug in the test driver -- or in 20 layers of test
    driver code -- and sometimes it's even a bug in unittest itself.
    The traceback is a fundamental tool when things go wrong,
    so I'm never in favor of hiding parts of tracebacks (hiding could
    be appropriate if you *knew* the true cause isn't in the part
    you're hiding -- but you can't know that).

    @theller
    Author

    theller commented Oct 14, 2003

    Assigned to Steve for pronouncement (didn't he already
    comment on python-dev some time ago?)

    @purcell
    Mannequin

    purcell mannequin commented Oct 16, 2003

    I'm looking at all this and will certainly incorporate some of
    the suggestions:

    • I'm +1 on the clearer message for assertRaises()
    • I'm +1 on clearer messages for _all_ assert*()/fail*()
      methods
    • The TestFailed exception doesn't really add much, since
      AssertionError works well already
    • I'm loath to ever suppress tracebacks, or fiddle with them
      much: the traceback is the canonical way for IDEs to find
      lines in the code

    @theller
    Author

    theller commented Dec 5, 2003

    I've brought the patch up to date with current CVS, and made
    small changes: the 'Traceback (most recent call last):' line
    is no longer removed from the output. And, following
    Steve's suggestion, AssertionError is used again instead of
    the TestFailed exception.

    The patch has been discussed in this thread on python-dev:
    http://mail.python.org/pipermail/python-dev/2003-December/040623.html

    New patch attached: unittest-3.diff

    @theller
    Author

    theller commented Dec 5, 2003

    Attached a test script together with what is printed before
    and after the patch:

    test_sample.py

    @gvanrossum
    Member

    I love it!

    We're all waiting for a +1 from Steve so this can be checked
    into 2.4. (I'd love it in 2.3 too, but that's probably going
    to be blocked on the "no new features" rule. :-)

    @theller
    Author

    theller commented Dec 5, 2003

    Cool, although after some thought I again like
    TestFailed more than AssertionError. (But I won't insist.)

    @gvanrossum
    Member

    Thomas, can you make the TestFailed issue a separate
    bug/patch/feature request?

    @purcell
    Mannequin

    purcell mannequin commented Dec 6, 2003

    Hi all, 
     
    I've accepted this patch, with some modifications, and checked it into 
    both Pyunit and Python CVS. 
     
    If the callable passed to assertRaises() throws an exception other than 
    the one expected, the test result should really be ERROR; the proposed 
    changes to assertRaises() resulted in tracebacks for such unexpected 
    exceptions being lost, since only the exception name and message were 
    formatted into the failure string. I've therefore left the logic there as it 
    was. 
     
    Also, rather than hard-code the number of levels of traceback to skip, 
    the changes I have checked in automatically determine the number of 
    levels to skip. Please take a look and tell me if the trick I have used is 
    excessively horrible. 
     
    As for the separate TestFailed issue, for me this is not terribly attractive, 
    since I know that many people find it very convenient to use the 'assert' 
    keyword in their test code. (Note that by setting the 'failureException' 
    attribute of your test cases, you can use an exception other than 
    AssertionError.) 
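
    The failureException hook mentioned here is a real unittest.TestCase
    attribute; a minimal sketch of using it (TestFailed and MyTests are
    made-up names for illustration):

    import unittest

    class TestFailed(Exception):
        pass

    class MyTests(unittest.TestCase):
        failureException = TestFailed   # assert*/fail* methods now raise this

        def test_numbers(self):
            # Reported as a FAIL; the traceback ends with "TestFailed: 1 != 2".
            self.assertEqual(1, 2)

        def test_bare_assert(self):
            # A plain `assert` still raises AssertionError, which is no longer
            # the failureException, so this is counted as an ERROR rather than
            # a FAIL. That is purcell's reason for keeping AssertionError as
            # the default.
            assert 1 == 2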
     
    For your reference, the output of Thomas' sample test script is now as 
    follows: 
     
     
    FFE 
    ====================================================================== 
    ERROR: test_4 (__main__.MyTestCase) 
    ---------------------------------------------------------------------- 
    Traceback (most recent call last): 
      File "heller.py", line 21, in test_4 
        self.assertRaises(ValueError, getattr, self, "spam") 
      File "/export/home/steve/projects/pyunit/pyunit-cvs/unittest.py", line 
    319, in failUnlessRaises 
        callableObj(*args, **kwargs) 
    AttributeError: 'MyTestCase' object has no attribute 'spam' 
     
    ====================================================================== 
    FAIL: test_1 (__main__.MyTestCase) 
    ---------------------------------------------------------------------- 
    Traceback (most recent call last): 
      File "heller.py", line 6, in test_1 
        self.do_this() 
      File "heller.py", line 9, in do_this 
        self.do_that() 
      File "heller.py", line 12, in do_that 
        self.failUnlessEqual(1, 2) 
    AssertionError: 1 != 2 
     
    ====================================================================== 
    FAIL: test_3 (__main__.MyTestCase) 
    ---------------------------------------------------------------------- 
    Traceback (most recent call last): 
      File "heller.py", line 17, in test_3 
        self.assertRaises(AttributeError, getattr, self, "silly") 
    AssertionError: AttributeError not raised 
     
    ---------------------------------------------------------------------- 
    Ran 3 tests in 0.008s 
     
    FAILED (failures=2, errors=1)

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 9, 2022