
Stop testing a file on first fail #9515

Open
james10424 opened this issue Jan 15, 2022 · 3 comments
Labels
type: enhancement new feature or API change, should be merged into features branch type: proposal proposal for a new feature, often to gather opinions or design the API around the new feature

Comments

@james10424

What's the problem this feature will solve?

This would let me gather all failing files more quickly; once a file has one failure, I don't need to know whether its remaining tests pass.

Describe the solution you'd like

This would be equivalent to the -x option, but on a per-file basis: pytest stops running a file's remaining tests as soon as one test in that file fails.

This can save time when running on CI services like GitHub Actions: if the run fails fast, I'm charged less, since I know I have to re-run it anyway.

Alternative Solutions

I'm not aware of a plugin that solves this problem.

Additional context

@tirkarthi
Contributor

As a workaround, you can have a fixture that records the module name in a list on the first failure and raises a skip for the remaining tests in the same module. Note that this won't work with pytest-xdist, since the list lives at module level in the conftest; for parallel runs you would need to store it somewhere shared and reliable.

This can save time when running on CI services like GitHub Actions: if the run fails fast, I'm charged less, since I know I have to re-run it anyway.

-x is useful in this case. There is also --maxfail=n to exit after n failures. pytest also has modes like --lf/--last-failed: if you persist the .pytest_cache folder between runs, you can rerun only the tests that failed at the last run, e.g. per PR.

  -x, --exitfirst       exit instantly on first error or failed test.
  --maxfail=num         exit after first num failures or errors.
  --lf, --last-failed   rerun only the tests that failed at the last run (or all
                        if none failed)
  --ff, --failed-first  run all tests, but run the last failures first.
  --lfnf={all,none}, --last-failed-no-failures={all,none}
                        which tests to run with no previously (known) failures.
  --sw, --stepwise      exit on test failure and continue from last failing test
  --sw-skip, --stepwise-skip
                        ignore the first failing test but stop on the next
                        failing test
  -f, --looponfail      run tests in subprocess, wait for modified files and
                        re-run failing test set until all pass.
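For illustration, the fail-fast options above can be combined like this (a sketch, assuming pytest is installed; /tmp/pytest_demo is a throwaway path used only for this example):

```shell
# Set up a throwaway project with one passing test
mkdir -p /tmp/pytest_demo && cd /tmp/pytest_demo
printf 'def test_ok():\n    assert True\n' > test_ok.py

# Stop the entire run at the first failing test
pytest -x -q

# Exit after at most 3 failures or errors
pytest --maxfail=3 -q

# Rerun only the tests that failed last time; this needs the .pytest_cache
# directory to persist between runs (e.g. restore it from a CI cache step)
pytest --lf -q
```

In CI, the --lf workflow only helps if the cache directory survives between jobs, so it pairs naturally with the cache step of the CI service.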
import pytest

# Module names that have already seen a failure in this run.
skip_modules = []


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Attach each phase's report (setup/call/teardown) to the test item
    # so that fixtures can inspect the test outcome.
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)


@pytest.fixture(autouse=True)
def skip_file_on_fail(request):
    mod_name = request.module.__name__

    if mod_name in skip_modules:
        pytest.skip(f"Skipping rest of tests in {mod_name}")

    yield

    # rep_call is absent if the test never reached the call phase
    # (e.g. a setup error), so guard with getattr.
    rep = getattr(request.node, "rep_call", None)
    if rep is not None and rep.failed:
        skip_modules.append(mod_name)
@Zac-HD Zac-HD added type: proposal proposal for a new feature, often to gather opinions or design the API around the new feature type: question general question, might be closed after 2 weeks of inactivity labels Jan 16, 2022
@Zac-HD Zac-HD closed this as completed Feb 6, 2022
@The-Compiler
Member

Not sure if this should be closed, @Zac-HD - it seems like a useful proposal at first sight.

@Zac-HD Zac-HD added type: enhancement new feature or API change, should be merged into features branch and removed type: question general question, might be closed after 2 weeks of inactivity labels Feb 6, 2022
@Zac-HD Zac-HD reopened this Feb 6, 2022
@pansila
Copy link

pansila commented Apr 26, 2022

Worth a mention in the docs.
