## Description

### What's the problem this feature will solve?
The marker `pytest.mark.xfail()` still runs the test: the result is XFAIL if the test fails and XPASS if it passes. The API function `pytest.xfail()`, by contrast, stops execution of the test immediately, and the test results in XFAIL. This is surprising behavior, and it is not mentioned at https://docs.pytest.org/en/6.2.x/reference.html#pytest-xfail.
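The reason execution stops can be seen in how the imperative API is implemented: `pytest.xfail()` raises an internal exception (`XFailed`), so nothing after the call runs. A minimal sketch, assuming pytest is installed:

```python
import pytest

# pytest.xfail() works by raising an internal exception (XFailed),
# which aborts the test body -- that is why code after the call
# never executes.
try:
    pytest.xfail(reason="demonstration")
    reached = True  # never reached
except BaseException as exc:
    reached = False
    exc_name = type(exc).__name__

print(exc_name)  # XFailed
```

Because the exception derives from `BaseException` rather than `Exception`, an ordinary `try/except Exception` inside a test will not accidentally swallow it.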
There is a workaround, described at #6300.
The workaround is to replace:
```python
def test_foo():
    if runtime_condition:
        pytest.xfail(reason='reasons')
    assert 1 == 1  # doesn't run
```
with:
```python
def test_foo(request):
    if runtime_condition:
        request.node.add_marker(pytest.mark.xfail(reason='reasons'))
    assert 1 == 1  # does run
```
However, the workaround is rather clunky.
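The clunkiness can be hidden behind a small helper. This is only a sketch: the helper name `runtime_xfail` is hypothetical (not part of pytest), and the fake request/node classes below stand in for pytest's real fixtures purely for demonstration.

```python
import pytest

# Hypothetical helper (not part of pytest): apply the xfail marker
# to the running test without stopping it.
def runtime_xfail(request, reason):
    """Mark the current test as xfail at runtime; execution continues."""
    request.node.add_marker(pytest.mark.xfail(reason=reason))

# Minimal stand-ins for pytest's request/node objects, so the helper
# can be demonstrated outside a test run:
class _FakeNode:
    def __init__(self):
        self.own_markers = []

    def add_marker(self, marker):
        self.own_markers.append(marker)

class _FakeRequest:
    def __init__(self):
        self.node = _FakeNode()

req = _FakeRequest()
runtime_xfail(req, "reasons")
print(len(req.node.own_markers))  # 1
```

In a real suite the helper would be called from a test that takes the `request` fixture, e.g. `runtime_xfail(request, 'reasons')`.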
### Describe the solution you'd like

I'd like to see:
- Add a `run` parameter, defaulted to `False`, to `pytest.xfail()`.
- Add an `xfail_run_default` option to change the default for a test suite by setting this to `True` in `pytest.ini`.
- Update the API overview to explicitly tell people that the default is to stop execution.
- Also tell people in the API overview that the default will change in a future release.
- Deprecate the old behavior.
- Change the default to `True` in a future pytest release.
I think the behavior of stopping execution was accidental and not intentionally designed.
It is weird to have the mark and the API function behave so differently.
With the new parameter, the example above could be written as:
```python
def test_foo():
    if runtime_condition:
        pytest.xfail(run=True, reason='reasons')
    assert 1 == 1  # does run
```
Or by setting `xfail_run_default = True` in `pytest.ini`:

```ini
[pytest]
xfail_run_default = True
```
### Alternative Solutions
- An alternative solution is listed above, with the `request.node.add_marker(pytest.mark.xfail(reason='reasons'))` method.
- An alternate final solution would be to just do steps 1-3 and not deprecate anything.
- Also, obviously, the names `run` and `xfail_run_default` could be changed if there are better names.
- A final alternative would be to call the current behavior a defect and just change it.
### Additional context
One thing to keep in mind is that `--runxfail` already exists to "report the results of xfail tests as if they were not marked". This is a great flag. However, it does confuse the issue here a bit, as the natural option name for my feature would be `run_xfail`, which is confusingly close to `--runxfail` despite having a completely different intent.