add run parameter to pytest.xfail() #9027

Open
okken opened this issue Aug 20, 2021 · 10 comments · May be fixed by #10096
Labels
type: proposal proposal for a new feature, often to gather opinions or design the API around the new feature

Comments

@okken
Contributor

okken commented Aug 20, 2021

What's the problem this feature will solve?

The marker pytest.mark.xfail() still runs the test, and the test results in XFAIL if it fails or XPASS if it passes.
The API function pytest.xfail() stops execution of the test immediately, and the test results in XFAIL.
This is surprising behavior, and it is not mentioned at https://docs.pytest.org/en/6.2.x/reference.html#pytest-xfail

There is a workaround, described at #6300.
The workaround is to replace:

def test_foo():
    if runtime_condition:
        pytest.xfail(reason='reasons')
    assert 1 == 1 # doesn't run

with:

def test_foo(request):
    if runtime_condition:
        request.node.add_marker(pytest.mark.xfail(reason='reasons'))
    assert 1 == 1 # does run

However, the workaround is rather clunky.

Describe the solution you'd like

I'd like to see:

  1. Add a run parameter, defaulted to False, to pytest.xfail().
  2. Add an xfail_run_default option to change the default for a whole test suite by setting it to True in pytest.ini.
  3. Update the API overview to explicitly tell people that the default is to stop execution.
  4. Also tell people in the API overview that the default will change in a future release.
  5. Deprecate the old behavior.
  6. Change the default to True in a future pytest release.

I think the behavior of stopping execution was accidental and not intentionally designed.
It is weird to have the mark and the API function behave so differently.

With the new parameter, the example above could be written as:

def test_foo():
    if runtime_condition:
        pytest.xfail(run=True, reason='reasons')
    assert 1 == 1 # does run

Or by setting xfail_run_default=True in pytest.ini:

[pytest]
xfail_run_default = True

Alternative Solutions

  • An alternative solution is listed above, with the request.node.add_marker(pytest.mark.xfail(reason='reasons')) method.
  • An alternate final solution would be to just do steps 1-3 and not deprecate anything.
  • Also, obviously, the names run and xfail_run_default could be changed if there are better names.
  • A final alternative would be to call the current behavior a defect and just change it.

Additional context

One thing to keep in mind is that --runxfail already exists to "report the results of xfail tests as if they were not marked".
This is a great flag. However, it does confuse the issue here a bit, as the natural option for my feature would be run_xfail, which is confusingly close to --runxfail with a completely different intent.

@okken
Contributor Author

okken commented Aug 20, 2021

Another alternate workaround is to put the original workaround in a fixture:

import pytest

@pytest.fixture()
def xfail(request):
    def _xfail(reason=''):
        # Apply the marker at runtime so the rest of the test body keeps running.
        request.node.add_marker(pytest.mark.xfail(reason=reason))
    return _xfail

def test_xfail_pass(xfail):
    xfail(reason='should xpass')
    assert 1 == 1


def test_xfail_fail(xfail):
    xfail(reason='should xfail')
    assert 1 == 2

That's fun.
A better name may be runtime_xfail.
Or maybe not, that's a bit long.

@nicoddemus
Member

Hi @okken,

Thanks for writing this proposal.

> An alternate final solution would be to just do steps 1-3 and not deprecate anything.

I think this is the only way to go; changing xfail to continue to run (even with a deprecation period) would be way too disruptive.

Naming parameters aside, in order to implement this cleanly (without global hacks), I think we would also need the request object, as the implementation would likely do the same thing as the workaround (apply the mark).
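
For illustration only, here is a rough sketch of that idea. This is not pytest's actual implementation; xfail_with_run is a made-up name, and the explicit request parameter is exactly the awkward part under discussion:

import pytest

def xfail_with_run(reason='', *, run=False, request=None):
    # Hypothetical helper, not part of pytest: with run=True and a request
    # object, apply the marker so the test keeps executing and is reported
    # as XFAIL/XPASS; otherwise fall back to today's behavior and stop here.
    if run and request is not None:
        request.node.add_marker(pytest.mark.xfail(reason=reason))
    else:
        pytest.xfail(reason)

A test would have to call it as xfail_with_run(run=True, reason='reasons', request=request), which shows why the request object keeps leaking into the signature.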

@nicoddemus nicoddemus added the type: proposal proposal for a new feature, often to gather opinions or design the API around the new feature label Aug 20, 2021
@okken
Contributor Author

okken commented Aug 20, 2021

After looking at the code I was worried about the need for a request object to be passed in.
It's not a really clean interface, then, but it would work.
Is there any other way to get at the request object from pytest.xfail without passing it in?

Unfortunately, if we have to pass in request, there's no way to change the default with an option.
And then, really, could the presence of request be the deciding factor?
If someone passes in a request object, then just add the mark; if not, keep the current behavior?
It's weird, but cleaner than having to pass in two parameters, maybe.

@RonnyPfannschmidt
Member

These days, a contextvar holding the test state of the current thread/task may be an acceptable way to get the test state passed back into APIs,
as long as it's explicitly specified as a boundary helper and not used in the internals themselves.
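
A minimal sketch of that idea, assuming a conftest.py-level hook wrapper that publishes the current item through a contextvar (the names _current_item and runtime_xfail are invented for illustration, not pytest internals):

import contextvars

import pytest

# conftest.py (sketch): publish the item currently being executed.
_current_item = contextvars.ContextVar("pytest_current_item", default=None)

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    token = _current_item.set(item)
    try:
        yield
    finally:
        _current_item.reset(token)

def runtime_xfail(reason=""):
    # Boundary helper: look up the current item and apply the xfail marker,
    # so the rest of the test still runs and is reported as XFAIL/XPASS.
    item = _current_item.get()
    if item is None:
        raise RuntimeError("runtime_xfail() called outside of a running test")
    item.add_marker(pytest.mark.xfail(reason=reason))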

@okken
Contributor Author

okken commented Aug 26, 2021

@RonnyPfannschmidt I'm glad you have an idea about this. I don't really understand how a contextvar would be applied here.

@okken
Contributor Author

okken commented Aug 26, 2021

Until we get this sorted out, I've added a plugin for the workaround, pytest-runtime-xfail. The name's a bit long, but it works-ish. I'd rather have this built into pytest.xfail().

@manueljacob
Contributor

I agree that it would be very helpful to have a simple way of marking a test as expected to fail from inside the test. When a test is parametrized over multiple parameters and only specific combinations of the parametrized arguments are expected to fail, pytest.mark.xfail can’t be used. What I’ve used so far is hooking into pytest_collection_modifyitems / pytest_itemcollected. Adding the marker with add_marker, as described above, improves on this by making it possible to put the xfail logic close to the test instead of inside conftest.py.
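
For readers unfamiliar with that hook, here is a rough sketch of the conftest.py approach described above (the parameter names a and b and the failing combination are invented for illustration):

# conftest.py (sketch)
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        callspec = getattr(item, "callspec", None)
        if callspec is None:
            continue
        # Mark one specific combination of stacked parametrize arguments as xfail.
        if callspec.params.get("a") == 1 and callspec.params.get("b") == 2:
            item.add_marker(pytest.mark.xfail(reason="a=1 with b=2 is known to fail"))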

However, I didn’t find the behavior of pytest.xfail() too surprising. It seems to be in line with pytest.skip() and pytest.fail(). Just as code can “skip a test” or “fail a test”, it can also “xfail a test”.

I don’t find a run parameter very intuitive. I wouldn’t implement the feature as a parameter to pytest.xfail(). What do you think about one of the following?

  • pytest.expect_failure()
  • pytest.expect_fail()
  • pytest.expect_to_fail()
  • pytest.mark_xfail()

manueljacob added a commit to manueljacob/pytest that referenced this issue Jul 1, 2022
TODO: discuss API in pytest-dev#9027
TODO: write documentation
TODO: write changelog entry
TODO: add me to AUTHORS
TODO: fix linting errors
TODO: write proper commit message
@The-Compiler
Member

I'm really not a fan of the proposed expect_failure API (nor any of the other expect_* spellings). Confusion between pytest.xfail and pytest.mark.xfail is already something I see a lot in my pytest trainings - what this does is introduce another variant, whose only difference from pytest.xfail is that it spells out the abbreviation "xfail". I don't see how that would make it any clearer how the behavior differs. The only scenario where I feel like this would be okay is if we aim to deprecate and replace pytest.xfail entirely, but having pytest.mark.xfail, pytest.xfail and pytest.expect_failure all with subtly different behavior sounds like a no-go to me.

In comparison, I don't think run=True is unintuitive at all - quite the opposite, actually! We already have run=False for pytest.mark.xfail, so having a run=True for pytest.xfail seems like a very consistent and intuitive extension to that, to me. It's using a perfectly matching API which we already have. Could you elaborate on what you don't like about it?
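
For context, the existing marker parameter works roughly like this (the test body is illustrative only):

import pytest

# run=False tells pytest to report the test as XFAIL without executing it at all,
# which is typically used for tests that would crash the interpreter.
@pytest.mark.xfail(run=False, reason="known to crash")
def test_known_crash():
    trigger_the_crash()  # hypothetical helper, never actually executed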

@klxfeiyang

klxfeiyang commented Jul 1, 2022

I second what @The-Compiler mentioned. I think the original intention of @okken's proposal is to converge the behavior of pytest.mark.xfail and pytest.xfail, and introducing a third variant would muddy the waters even more from my perspective.

From a user perspective, I can hardly think of a case where pytest.xfail should behave similarly to pytest.skip or pytest.fail. If I mark a test as xfail, I expect the test to be executed before the actual outcome is determined. In contrast, if I mark a test as pytest.skip or pytest.fail, I have predetermined what the test outcome should be and test execution is irrelevant.

For these reasons, I agree with @okken's original API proposal and my vote would go towards adding a parameter to pytest.xfail.

bluetech added a commit to bluetech/pytest that referenced this issue Aug 12, 2023
@bluetech
Member

I think we ought to avoid context vars; IMO they will cause more headache than they're worth. So this makes it necessary to pass in request, at which point the existing add_marker solution seems better.

So my opinion is that we should reject this proposal in favor of documenting the add_marker solution.

bluetech added a commit to bluetech/pytest that referenced this issue Aug 12, 2023
