add `run` parameter to `pytest.xfail()` #9027
Comments
Another alternate workaround is to put the original workaround in a fixture:

```python
import pytest

@pytest.fixture()
def xfail(request):
    def _xfail(reason=''):
        # Add the xfail marker to the running test at runtime.
        request.node.add_marker(pytest.mark.xfail(reason=reason))
    return _xfail

def test_xfail_pass(xfail):
    xfail(reason='should xpass')
    assert 1 == 1

def test_xfail_fail(xfail):
    xfail(reason='should xfail')
    assert 1 == 2
```

That's fun.
Hi @okken, thanks for writing this proposal.
I think this is the only way to go. Naming of parameters aside, in order to implement this cleanly (without global hacks), I think we would also need the `request` object to be available to `pytest.xfail()`.
After looking at the code, I was worried about the need for a `request` object. Unfortunately, it seems we would indeed have to pass in `request` for this to work cleanly.
These days, a contextvar holding the test state of the current thread/task may be an acceptable way to get the test state passed back into APIs, as long as it's explicitly specified as a boundary helper not to be used in the internals themselves.
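A minimal sketch of how that might look. The helper names (`_current_item`, `run_test`, `runtime_xfail`) are hypothetical, not pytest's actual internals:

```python
import contextvars
import pytest

# Hypothetical contextvar holding the currently running test item.
_current_item = contextvars.ContextVar("current_item")

def run_test(item):
    """Framework-side boundary: set the contextvar around each test."""
    token = _current_item.set(item)
    try:
        item.runtest()
    finally:
        _current_item.reset(token)

def runtime_xfail(reason=""):
    """User-facing helper: reach the current item without passing request."""
    item = _current_item.get()
    item.add_marker(pytest.mark.xfail(reason=reason))
```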
@RonnyPfannschmidt I'm glad you have an idea about this, but I don't really understand how a contextvar would be applied.
Until we get this sorted out, I've added a plugin for the workaround, pytest-runtime-xfail. The name's a bit long, but it works-ish. I'd rather have it built into pytest, though.
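For reference, using the plugin looks roughly like this; the `runtime_xfail` fixture name is taken from the plugin's README and the snippet is a sketch:

```python
def test_fail(runtime_xfail):
    runtime_xfail()   # mark this test xfail at runtime, then keep running
    assert 1 == 2     # executed; test reports XFAIL
```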
I agree that it would be very helpful to have a simple way of marking a test as expected to fail from inside the test. When parametrizing multiple parameters and only specific combinations of the parametrized arguments are expected to fail, marking at runtime is far simpler than attaching marks to each parameter set. However, I didn't find the behavior of `pytest.xfail()` stopping execution obvious, and I don't find a suite-wide ini option necessary.
- TODO: discuss API in pytest-dev#9027
- TODO: write documentation
- TODO: write changelog entry
- TODO: add me to AUTHORS
- TODO: fix linting errors
- TODO: write proper commit message
I'm really not a fan of the proposed `xfail_run_default` ini option. In comparison, I don't think a `run=` parameter for `pytest.xfail()` would be a problem.
I second what @The-Compiler mentioned. I think the original intention of the proposal by @okken is to converge the behavior difference between `pytest.mark.xfail` and `pytest.xfail()`. From a user perspective, I can hardly think of a case where a suite-wide ini option changing the default would be preferable to being explicit at the call site. For these reasons, I agree with @okken's original API proposal, and my vote would go towards adding a `run` parameter to `pytest.xfail()`.
I think we ought to avoid contextvars; IMO they will cause more headaches than they're worth. That makes it necessary to pass in `request` explicitly, so my opinion is that we should reject this proposal in favor of documenting the `request.node.add_marker(pytest.mark.xfail(...))` workaround.
What's the problem this feature will solve?
The marker `pytest.mark.xfail()` still runs the test, but the test results in XFAIL if it fails and XPASS if it passes. The API function `pytest.xfail()` stops execution of the test immediately and the test results in XFAIL. This is surprising behavior, and it is not mentioned on https://docs.pytest.org/en/6.2.x/reference.html#pytest-xfail
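A minimal demonstration of the difference (illustrative test names):

```python
import pytest

@pytest.mark.xfail(reason='marker: test still runs')
def test_marker():
    assert 1 == 2  # executed; reported as XFAIL (XPASS if it passed)

def test_call():
    pytest.xfail(reason='call: test stops here')
    assert 1 == 2  # never executed; test is always reported as XFAIL
```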
There is a workaround, described at #6300.
The workaround is to replace:
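```python
# Illustrative reconstruction; the original snippet was lost in extraction.
import pytest

def test_xfail():
    pytest.xfail(reason='reasons')
    assert 1 == 2  # never executed; pytest.xfail() stops the test here
```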
with:
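```python
# Illustrative reconstruction, based on the add_marker workaround from #6300.
import pytest

def test_xfail(request):
    request.node.add_marker(pytest.mark.xfail(reason='reasons'))
    assert 1 == 2  # still executed; test reports XFAIL
```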
However, the workaround is rather clunky.
Describe the solution you'd like
I'd like to see:

- A `run` parameter, defaulted to `False`, added to `pytest.xfail()`.
- An `xfail_run_default` option to change the default for a test suite by setting it to `True` in `pytest.ini`.
- The default possibly changing to `True` in a future pytest release.

I think the behavior of stopping execution was accidental and not intentionally designed.
It is weird to have the mark and the API function behave so differently.
With the new parameter, the example above could be written as:
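```python
# Illustrative sketch of the proposed API; the run= parameter does not
# exist in pytest today.
import pytest

def test_xfail():
    pytest.xfail(reason='reasons', run=True)
    assert 1 == 2  # would still execute; test reports XFAIL
```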
Or by setting `xfail_run_default=True` in `pytest.ini`:
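```python
# With the proposed xfail_run_default = True in pytest.ini (a hypothetical
# option), a plain pytest.xfail() call would no longer stop the test.
import pytest

def test_xfail():
    pytest.xfail(reason='reasons')
    assert 1 == 2  # would still execute; test reports XFAIL
```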
Alternative Solutions
- Keep the current behavior and document the `request.node.add_marker(pytest.mark.xfail(reason='reasons'))` method.
- `run` and `xfail_run_default` could be changed if there are better names.

Additional context
One thing to keep in mind is that `--runxfail` already exists to "report the results of xfail tests as if they were not marked". This is a great flag. However, it does confuse the issue here a bit, as the natural option name for my feature would be `run_xfail`, which is confusingly close to `--runxfail` but has a completely different intent.