bpo-4080: unittest durations #12271

Open · wants to merge 47 commits into base: main

Changes from all commits (47 commits)
6f0b1fe
print durations
giampaolo Mar 10, 2019
2b16e57
don't print duration if not verbose
giampaolo Mar 10, 2019
49aa791
add cmdline arg (bool) to toggle durations
giampaolo Mar 10, 2019
4830509
make --durations arg accept a mandatory int value
giampaolo Mar 10, 2019
f8a49d3
print durations
giampaolo Mar 10, 2019
875e93d
hide durations < 0.000
giampaolo Mar 10, 2019
59dfc67
minor cosmetic changes
giampaolo Mar 10, 2019
c92bc24
don't use bare except clause
giampaolo Mar 10, 2019
d88087e
add a public addDuration method to TestResult
giampaolo Mar 10, 2019
1952ffa
fix TypeError occurring on skip
giampaolo Mar 10, 2019
9f308dc
Merge branch 'master' into unittest-durations
giampaolo Mar 10, 2019
226288a
fix unit tests
giampaolo Mar 11, 2019
89e915f
add a couple of tests for runner
giampaolo Mar 11, 2019
968c3af
don't add durations multiple times + write durations test
giampaolo Mar 11, 2019
a8b38e9
don't print header if there are no durations
giampaolo Mar 11, 2019
f95edc7
small refactoring
giampaolo Mar 11, 2019
9beb931
add doc
giampaolo Mar 11, 2019
86fc3f8
update doc
giampaolo Mar 11, 2019
a1f5e49
improve test
giampaolo Mar 11, 2019
9d98536
use example from stdlib
giampaolo Mar 11, 2019
7223bc9
ignore me
giampaolo Mar 11, 2019
3c4af91
merge from master
giampaolo Apr 26, 2019
3f4d638
address PR review (amongst which, show slowest tests firsts)
giampaolo Apr 26, 2019
b3bb3e0
update doc samples
giampaolo Apr 26, 2019
a93b460
fix doc syntax
giampaolo Apr 29, 2019
0a6bcbb
Merge branch 'master' into unittest-durations
giampaolo Apr 29, 2019
961ba55
merge from master
giampaolo Feb 15, 2020
d0330f6
update doc
giampaolo Feb 15, 2020
9935349
update doc
giampaolo Feb 15, 2020
f4cd001
rephrasing
giampaolo Feb 16, 2020
4471928
forgot pdb line
giampaolo Feb 16, 2020
6412439
update whatsnew
giampaolo May 26, 2020
1a9c098
update code to latest main branch
giampaolo Nov 24, 2022
c917206
updated Misc/NEWS entry
giampaolo Nov 24, 2022
5a4e21b
merge from main
giampaolo Nov 24, 2022
2ffe250
address review comments
giampaolo Nov 24, 2022
2fdfe1d
add test
giampaolo Nov 24, 2022
c0e137e
no longer print individual test durations
giampaolo Nov 24, 2022
86a094c
remove duplicated test
giampaolo Nov 24, 2022
3649013
calculate duration time after cleanup methods
giampaolo Nov 24, 2022
3217b89
reword doc / cmdline help
giampaolo Nov 24, 2022
b2b37ad
move TextRunner test in test_runner.py
giampaolo Nov 24, 2022
2b7f3dc
improve unit test
giampaolo Nov 24, 2022
cfd65b0
remove code which is no longer used
giampaolo Nov 24, 2022
4772f96
Merge branch 'main' into unittest-durations
giampaolo Nov 24, 2022
3ea0be8
Merge branch 'main' into unittest-durations
giampaolo Nov 28, 2022
19f9c4b
address @ezio-melotti code review comments
giampaolo Nov 28, 2022
@@ -244,6 +244,10 @@ Command-line options

Show local variables in tracebacks.

.. cmdoption:: --durations N

Show the N slowest test cases (N=0 for all).

.. versionadded:: 3.2
The command-line options ``-b``, ``-c`` and ``-f`` were added.

@@ -253,10 +257,12 @@ Command-line options
.. versionadded:: 3.7
The command-line option ``-k``.

.. versionadded:: 3.12
The command-line option ``--durations``.

The command line can also be used for test discovery, for running all of the
tests in a project or just a subset.


.. _unittest-test-discovery:

Test Discovery
@@ -2009,6 +2015,13 @@ Loading and running tests
A list containing :class:`TestCase` instances that were marked as expected
failures, but succeeded.

.. attribute:: collectedDurations

A list containing 2-tuples of :class:`TestCase` instances and floats
representing the elapsed time of each test that was run.

.. versionadded:: 3.12

.. attribute:: shouldStop

Set to ``True`` when the execution of tests should stop by :meth:`stop`.
@@ -2160,14 +2173,27 @@ Loading and running tests

.. versionadded:: 3.4

.. method:: addDuration(test, elapsed)

Called when the test case finishes. *elapsed* is the time represented in
seconds, and it includes the execution of cleanup functions.

.. versionadded:: 3.12

.. class:: TextTestResult(stream, descriptions, verbosity, *, durations=None)

A concrete implementation of :class:`TestResult` used by the
:class:`TextTestRunner`. Subclasses should accept ``**kwargs`` to ensure
compatibility as the interface changes.

.. versionadded:: 3.2

.. versionadded:: 3.12
Added *durations* keyword argument.

.. versionchanged:: 3.12
Subclasses should accept ``**kwargs`` to ensure compatibility as the
interface changes.

.. data:: defaultTestLoader

@@ -2177,7 +2203,8 @@ Loading and running tests


.. class:: TextTestRunner(stream=None, descriptions=True, verbosity=1, failfast=False, \
buffer=False, resultclass=None, warnings=None, *, \
tb_locals=False, durations=None)

A basic test runner implementation that outputs results to a stream. If *stream*
is ``None``, the default, :data:`sys.stderr` is used as the output stream. This class
@@ -2195,14 +2222,17 @@ Loading and running tests
*warnings* to ``None``.

.. versionchanged:: 3.2
Added the *warnings* parameter.

.. versionchanged:: 3.2
The default stream is set to :data:`sys.stderr` at instantiation time rather
than import time.

.. versionchanged:: 3.5
Added the *tb_locals* parameter.

.. versionchanged:: 3.12
Added the *durations* parameter.

.. method:: _makeResult()

@@ -319,6 +319,26 @@ sys
with contributions from Gregory P. Smith [Google] and Mark Shannon
in :gh:`96123`.)

unittest
--------

Added the ``--durations`` command-line option, showing the N slowest test cases::

python3 -m unittest --durations=3 lib.tests.test_threading
.....
Slowest test durations
----------------------------------------------------------------------
1.210s test_timeout (Lib.test.test_threading.BarrierTests)
1.003s test_default_timeout (Lib.test.test_threading.BarrierTests)
0.518s test_timeout (Lib.test.test_threading.EventTests)

(0.000 durations hidden. Use -v to show these durations.)
----------------------------------------------------------------------
Ran 158 tests in 9.869s

OK (skipped=3)

(Contributed by Giampaolo Rodola in :issue:`4080`.)

Optimizations
=============
@@ -136,3 +136,19 @@ def addSuccess(self, test):

def wasSuccessful(self):
return True


class BufferedWriter:
    def __init__(self):
        self.result = ''
        self.buffer = ''

    def write(self, arg):
        self.buffer += arg

    def flush(self):
        self.result += self.buffer
        self.buffer = ''

    def getvalue(self):
        return self.result
@@ -236,6 +236,7 @@ def __init__(self, catchbreak):
self.testRunner = FakeRunner
self.test = test
self.result = None
self.durations = None

p = Program(False)
p.runTests()
@@ -244,7 +245,8 @@ def __init__(self, catchbreak):
'verbosity': verbosity,
'failfast': failfast,
'tb_locals': False,
'warnings': None,
'durations': None})])
self.assertEqual(FakeRunner.runArgs, [test])
self.assertEqual(p.result, result)

@@ -259,7 +261,8 @@ def __init__(self, catchbreak):
'verbosity': verbosity,
'failfast': failfast,
'tb_locals': False,
'warnings': None,
'durations': None})])
self.assertEqual(FakeRunner.runArgs, [test])
self.assertEqual(p.result, result)

@@ -284,14 +284,16 @@ def testRunTestsRunnerClass(self):
program.failfast = 'failfast'
program.buffer = 'buffer'
program.warnings = 'warnings'
program.durations = '5'

program.runTests()

self.assertEqual(FakeRunner.initArgs, {'verbosity': 'verbosity',
'failfast': 'failfast',
'buffer': 'buffer',
'tb_locals': False,
'warnings': 'warnings',
'durations': '5'})
self.assertEqual(FakeRunner.test, 'test')
self.assertIs(program.result, RESULT)

@@ -320,7 +322,8 @@ def test_locals(self):
'failfast': False,
'tb_locals': True,
'verbosity': 1,
'warnings': None,
'durations': None})

def testRunTestsOldRunnerClass(self):
program = self.program
@@ -333,6 +336,7 @@ def testRunTestsOldRunnerClass(self):
program.failfast = 'failfast'
program.buffer = 'buffer'
program.test = 'test'
program.durations = '0'

program.runTests()

@@ -356,6 +360,7 @@ def fakeInstallHandler():

program = self.program
program.catchbreak = True
program.durations = None

program.testRunner = FakeRunner

@@ -6,7 +6,9 @@

import traceback
import unittest
from unittest import mock
from unittest.util import strclass
from test.test_unittest.support import BufferedWriter


class MockTraceback(object):
@@ -33,22 +35,6 @@ def bad_cleanup2():
raise ValueError('bad cleanup2')


class BufferedWriter:
    def __init__(self):
        self.result = ''
        self.buffer = ''

    def write(self, arg):
        self.buffer += arg

    def flush(self):
        self.result += self.buffer
        self.buffer = ''

    def getvalue(self):
        return self.result


class Test_TestResult(unittest.TestCase):
# Note: there are not separate tests for TestResult.wasSuccessful(),
# TestResult.errors, TestResult.failures, TestResult.testsRun or
@@ -8,8 +8,11 @@
import unittest
from unittest.case import _Outcome

from test.test_unittest.support import (
    BufferedWriter,
    LoggingResult,
    ResultWithNoStartTestRunStopTestRun,
)


def resultFactory(*_):
@@ -1176,6 +1179,7 @@ def test_init(self):
self.assertTrue(runner.descriptions)
self.assertEqual(runner.resultclass, unittest.TextTestResult)
self.assertFalse(runner.tb_locals)
self.assertIsNone(runner.durations)

def test_multiple_inheritance(self):
class AResult(unittest.TestResult):
@@ -1362,6 +1366,65 @@ def testSpecifiedStreamUsed(self):
runner = unittest.TextTestRunner(f)
self.assertTrue(runner.stream.stream is f)

def test_durations(self):
    def run(test, expect_durations):
        stream = BufferedWriter()
        runner = unittest.TextTestRunner(stream=stream, durations=5, verbosity=2)
        result = runner.run(test)
        self.assertEqual(result.durations, 5)
        stream.flush()
        text = stream.getvalue()
        regex = r"\n\d+\.\d\d\ds"
        if expect_durations:
            self.assertEqual(len(result.collectedDurations), 1)
            self.assertIn('Slowest test durations', text)
            self.assertRegex(text, regex)
        else:
            self.assertEqual(len(result.collectedDurations), 0)
            self.assertNotIn('Slowest test durations', text)
            self.assertNotRegex(text, regex)

    # success
    class Foo(unittest.TestCase):
        def test_1(self):
            pass

    run(Foo('test_1'), True)

    # failure
    class Foo(unittest.TestCase):
        def test_1(self):
            self.assertEqual(0, 1)

    run(Foo('test_1'), True)

    # error
    class Foo(unittest.TestCase):
        def test_1(self):
            1 / 0

    run(Foo('test_1'), True)

    # error in setUp and tearDown
    class Foo(unittest.TestCase):
        def setUp(self):
            1 / 0
        tearDown = setUp
        def test_1(self):
            pass

    run(Foo('test_1'), True)

    # skip (expect no durations)
    class Foo(unittest.TestCase):
        @unittest.skip("reason")
        def test_1(self):
            pass

    run(Foo('test_1'), False)



if __name__ == "__main__":
    unittest.main()
@@ -9,6 +9,7 @@
import collections
import contextlib
import traceback
import time
import types

from . import result
@@ -572,6 +573,15 @@ def _addUnexpectedSuccess(self, result):
else:
addUnexpectedSuccess(self)

def _addDuration(self, result, elapsed):
    try:
        addDuration = result.addDuration
    except AttributeError:
        warnings.warn("TestResult has no addDuration method",
                      RuntimeWarning)
    else:
        addDuration(self, elapsed)

def _callSetUp(self):
self.setUp()

@@ -612,6 +622,7 @@ def run(self, result=None):
getattr(testMethod, "__unittest_expecting_failure__", False)
)
outcome = _Outcome(result)
start_time = time.perf_counter()
try:
self._outcome = outcome

@@ -625,6 +636,7 @@ def run(self, result=None):
with outcome.testPartExecutor(self):
self._callTearDown()
self.doCleanups()
self._addDuration(result, (time.perf_counter() - start_time))

if outcome.success:
if expecting_failure: