
References to tests in === FAILURES === report should be suitable as input to select tests. #3546

Closed
TauPan opened this issue Jun 7, 2018 · 6 comments
Labels
status: needs information (reporter needs to provide more information; can be closed after 2 or more weeks of inactivity)
topic: reporting (related to terminal output and user-facing messages and errors)
type: question (general question, might be closed after 2 weeks of inactivity)

Comments

@TauPan

TauPan commented Jun 7, 2018

Hi!

I'd like to be able to quickly re-run a specific failing test after running the whole suite. --last-failed provides similar functionality, but not the same when multiple (or many) tests fail.

Whenever I have failures, the "FAILED/PASSED" lines in the session report list them in a form suitable for selecting them in a subsequent run (as outlined in https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests):

project/package/tests/test_or.py::TestSomeClass::test_some_behaviour FAILED  [  1%]

However, when I'm investigating a failure, I'm often looking at the traceback and variables in the === FAILURES === section of pytest's output:

============================================== FAILURES ==============================================
___________________________ TestSomeClass.test_some_behaviour ____________________________

self = <project.package.tests.test_or.TestSomeClass testMethod=test_some_behaviour>

    def test_some_behaviour(self):
[...]
self       = <project.package.tests.test_or.TestSomeClass testMethod=test_some_behaviour>

project/package/tests/test_or.py:158: AssertionError

And to get the node id of the test in question, I either have to scroll up (quite far, if my test suite contains over 500 tests) or piece the information together from the output.

I can use the method name test_some_behaviour together with -k, but that might run multiple tests with the same name.
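As a sketch of why -k alone can be ambiguous (file and test names here are hypothetical, and pytest is assumed to be installed): -k matches by name substring, while a full node id selects exactly one test.

```shell
# Hypothetical test file: two classes each define test_some_behaviour.
cd "$(mktemp -d)"
cat > test_or.py <<'EOF'
class TestA:
    def test_some_behaviour(self):
        assert True

class TestB:
    def test_some_behaviour(self):
        assert True
EOF

# -k matches by name substring, so BOTH tests run (2 passed):
python -m pytest -q -k test_some_behaviour test_or.py

# The full node id is unambiguous and runs exactly one test (1 passed):
python -m pytest -q "test_or.py::TestA::test_some_behaviour"
```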

In this concrete example it would be convenient for me if project/package/tests/test_or.py::TestSomeClass::test_some_behaviour appeared somewhere in the FAILURES output.

Thanks!

@pytestbot
Contributor

GitMate.io thinks possibly related issues are #50 (Select tests according to their mark), #2579 (Test report text issue), #2156 (py.test reports IndexError/KeyError as an Failure instead of Error), #2549 (Report "pytestmark = skip" only once), and #157 (--verbose should report why a test case was skipped).

@pytestbot pytestbot added the plugin: unittest label Jun 7, 2018
@TauPan
Author

TauPan commented Jun 7, 2018

I just checked the issues @pytestbot referenced, and I don't think any of them is closely related.

@nicoddemus nicoddemus added the type: question, topic: reporting and status: needs information labels and removed the plugin: unittest label Jun 7, 2018
@nicoddemus
Member

Hi @TauPan,

The -ra flag can be used to produce a summary at the end of the test run that can be used to copy/paste test node ids:

====================================================== FAILURES =======================================================
________________________________________________________ test _________________________________________________________

    def test():
>       assert 0
E       assert 0

test_foo.py:2: AssertionError
=============================================== short test summary info ===============================================
FAIL test_foo.py::test
============================================== 1 failed in 0.27 seconds ===============================================

Do you think that's enough?
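As a runnable sketch of that suggestion (assuming pytest is installed; the failing test file here is hypothetical): -rf restricts the short summary to failed tests, and the node ids it prints can be pasted straight back into pytest.

```shell
cd "$(mktemp -d)"
cat > test_foo.py <<'EOF'
def test():
    assert 0
EOF

# -rf prints a "short test summary info" section listing failed node ids:
python -m pytest -rf test_foo.py

# A node id copied from that summary re-runs exactly that test:
python -m pytest "test_foo.py::test"
```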

@TauPan
Author

TauPan commented Jun 11, 2018

Yes, thanks. In fact -rf would be enough for me.

I failed to find this in any documentation, although I now see that it's mentioned in the --help output.

Is this mentioned anywhere else? https://docs.pytest.org/en/latest/contents.html doesn't seem to have a section describing the reporting flags in more detail.

@nicoddemus
Member

Hi,

It is shown in some places, but you are correct that it should be displayed more prominently somewhere.

@nicoddemus
Member

Opened #3566 to track adding a blurb about -r to the docs. Thanks for the report @TauPan!
