pytest-tldr
===========

A pytest plugin that limits the output of pytest to just the things you need to see.

One of my biggest personal complaints about pytest is that its console output is very, very chatty. It tells you it's starting. It tells you it's working. It tells you it's done. And if a test fails, it doesn't just tell you which test failed. It dumps pages and pages of code onto your console.

And it does all this in Glorious Technicolor. Better hope you have perfect color vision, and that your console color scheme has enough contrast.

Yes: pytest has many, many command line options, and some of these behaviors can be configured or turned off with feature flags. But some people (presumably, at the very least, the pytest core team) like pytest's output format. So if you're the odd one out on a team that doesn't like pytest's output, you can't commit "better" options into a default configuration; you have to specify your options manually every time you run the test suite.
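To make that concrete: the "default configuration" in question is pytest's addopts setting in pytest.ini (or setup.cfg). A sketch of the quieter defaults you might wish you could commit; the flags are real pytest options, but the specific choices here are illustrative, not a recommendation from this project:

# An illustrative pytest.ini; pick flags to taste.
[pytest]
addopts = -q --tb=short --color=no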

Luckily, pytest also has a plugin system, so we can fix this.

pytest-tldr is a plugin that gives you minimalist output, in monochrome, while still giving an indication of test suite progress.

Installation
------------

You can install "pytest-tldr" via pip from PyPI:

$ pip install pytest-tldr

Then you can just run your test suite as normal:

$ pytest tests
EF..s..........ux
======================================================================
ERROR: tests/test_things.py::TestTests::test_error
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rkm/projects/sample/tests/test_things.py", line 182, in test_error
    raise Exception("this is really bad")
Exception: this is really bad

======================================================================
FAIL: tests/test_things.py::TestTests::test_failed
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rkm/projects/sample/tests/test_things.py", line 179, in test_failed
    self.fail('failed!')
  File "/Users/rkm/.pyenv/versions/3.6.2/lib/python3.6/unittest/case.py", line 670, in fail
    raise self.failureException(msg)
AssertionError: failed!

======================================================================
UNEXPECTED SUCCESS: tests/test_things.py::TestTests::test_upassed

----------------------------------------------------------------------
Ran 17 tests in 2.11s

FAILED (errors=1, failures=1, skipped=1, expected failures=1, unexpected successes=1)
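For reference, here is a minimal sketch of a test module that would produce a run like the one above. It is reconstructed from the sample output (the test names, the skip reason, and the failure messages all appear there); the real tests/test_things.py may differ, and the ten passing test_thing_N tests plus test_output are omitted for brevity:

import unittest

class TestTests(unittest.TestCase):
    def test_passed(self):
        # A plain passing test: reported as "." in the progress line.
        self.assertEqual(1 + 1, 2)

    def test_failed(self):
        # Reported as "F", with the traceback shown in the summary.
        self.fail('failed!')

    def test_error(self):
        # Reported as "E": the test raised an exception rather than
        # failing an assertion.
        raise Exception("this is really bad")

    @unittest.skip('tra-la-la')
    def test_skipped(self):
        # Reported as "s"; the skip reason is shown in verbose mode.
        pass

    @unittest.expectedFailure
    def test_xfailed(self):
        # Fails, as expected: reported as "x".
        self.fail('expected failure')

    @unittest.expectedFailure
    def test_upassed(self):
        # Passes despite being marked as an expected failure, so it is
        # reported as "u" (unexpected success).
        pass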

Or, if you need a little more detail, use the verbosity option:

$ pytest tests -v
platform darwin -- Python 3.6.2
pytest==3.6.1
py==1.5.2
pluggy==0.6.0
rootdir: /Users/rkm/projects/sample
plugins: xdist-1.22.0, forked-0.2, tldr-0.1.0
cachedir: .pytest_cache

----------------------------------------------------------------------
tests/test_things.py::TestTests::test_error ... ERROR
tests/test_things.py::TestTests::test_failed ... FAIL
tests/test_things.py::TestTests::test_output ... ok
tests/test_things.py::TestTests::test_passed ... ok
tests/test_things.py::TestTests::test_skipped ... Skipped: tra-la-la
tests/test_things.py::TestTests::test_thing_0 ... ok
tests/test_things.py::TestTests::test_thing_1 ... ok
tests/test_things.py::TestTests::test_thing_2 ... ok
tests/test_things.py::TestTests::test_thing_3 ... ok
tests/test_things.py::TestTests::test_thing_4 ... ok
tests/test_things.py::TestTests::test_thing_5 ... ok
tests/test_things.py::TestTests::test_thing_6 ... ok
tests/test_things.py::TestTests::test_thing_7 ... ok
tests/test_things.py::TestTests::test_thing_8 ... ok
tests/test_things.py::TestTests::test_thing_9 ... ok
tests/test_things.py::TestTests::test_upassed ... unexpected success
tests/test_things.py::TestTests::test_xfailed ... expected failure

======================================================================
ERROR: tests/test_things.py::TestTests::test_error
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rkm/projects/sample/tests/test_things.py", line 182, in test_error
    raise Exception("this is really bad")
Exception: this is really bad

======================================================================
FAIL: tests/test_things.py::TestTests::test_failed
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/rkm/projects/sample/tests/test_things.py", line 179, in test_failed
    self.fail('failed!')
  File "/Users/rkm/.pyenv/versions/3.6.2/lib/python3.6/unittest/case.py", line 670, in fail
    raise self.failureException(msg)
AssertionError: failed!

======================================================================
UNEXPECTED SUCCESS: tests/test_things.py::TestTests::test_upassed

----------------------------------------------------------------------
Ran 17 tests in 2.07s

FAILED (errors=1, failures=1, skipped=1, expected failures=1, unexpected successes=1)