RFC: parametrizing conditional raising with pytest.raises #1830
Comments
how about something completely different, like a different context manager? So that instead of parameterizing with booleans, we pass different contexts:

```python
@pytest.mark.parametrize('inp, expectation', [
    (1, pytest.raises(ValueError)),
    (2, pytest.raises(TypeError)),
    (3, pytest.wont_raise()),
])
def test_bar(inp, expectation):
    with expectation:
        bar(inp)
```
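A runnable sketch of this idea, assuming a hypothetical `wont_raise` helper (no such thing exists in pytest) and a toy `bar` function invented for illustration:

```python
from contextlib import contextmanager

import pytest


@contextmanager
def wont_raise():
    # No-op context manager: any exception simply propagates and fails
    # the test, exactly as it would without a context manager.
    yield


def bar(inp):
    # Toy function for illustration only (an assumption, not from the thread).
    if inp == 1:
        raise ValueError(inp)
    if inp == 2:
        raise TypeError(inp)
    return inp


@pytest.mark.parametrize('inp, expectation', [
    (1, pytest.raises(ValueError)),
    (2, pytest.raises(TypeError)),
    (3, wont_raise()),
])
def test_bar(inp, expectation):
    with expectation:
        bar(inp)
```

Note that the non-raising case and the raising cases become uniform: the parametrization supplies a context manager either way.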
@RonnyPfannschmidt @nicoddemus I have an idea for a completely different implementation. You can see it here.
@purpleP thanks for sharing! Your implementation seems interesting, but I would say it is a new way to declare tests altogether (more declarative than imperative) and independent from this proposal. 😁

Seems interesting and just as easy to implement as the original proposal. @The-Compiler what do you think?
@nicoddemus It is indeed declarative, and IMHO it's good for such simple tests. What I think you're wrong about is that it isn't independent. It's dependent in the sense that if one used my plugin or something similar, there wouldn't be a need for the rewrite.
Not sure, I think both can co-exist: declarative style is great for code which is functional and without side-effects, but might not scale well depending on what you are testing. And we are talking about a small addition/improvement to `pytest.raises`.
What I am worried about is loosening a contract: taking a function and then suddenly making it work differently seems very problematic to me, in particular since even an optional_raises helper is 7 lines including the contextmanager decorator. Loose contracts in general are a maintenance nightmare later on, because functions lose uniform behaviour and act all over the place. I'm strictly opposed to modifying pytest.raises to support None.
@nicoddemus Yes, declarative style isn't great when we're trying to test side-effects. But I thought that the OP is talking about exactly that.
Yeah I agree, and I think your proposal fits nicely with what @The-Compiler had in mind. @The-Compiler, thoughts? Here is an example which I think also illustrates @The-Compiler's use case, taken from pytest-qt:

```python
def context_manager_wait(qtbot, signal, timeout, multiple, raising,
                         should_raise):
    """
    Waiting for signal using context manager API.
    """
    func = qtbot.waitSignals if multiple else qtbot.waitSignal
    if should_raise:
        with pytest.raises(qtbot.SignalTimeoutError):
            with func(signal, timeout, raising=raising) as blocker:
                pass
    else:
        with func(signal, timeout, raising=raising) as blocker:
            pass
    return blocker
```

This could be changed to:

```python
def context_manager_wait(qtbot, signal, timeout, multiple, raising,
                         raise_expectation):
    """
    Waiting for signal using context manager API.
    """
    func = qtbot.waitSignals if multiple else qtbot.waitSignal
    with raise_expectation:
        with func(signal, timeout, raising=raising) as blocker:
            return blocker
```
I see your point, but are you arguing for us to drop this topic because your solution is enough? In that case I disagree; it is still worth discussing how to change `pytest.raises`.
@nicoddemus And about whether or not to rewrite it: if anything, I'd think @RonnyPfannschmidt's approach is better. Only I propose a slightly different version: what if we make a context manager that would kinda turn an Exception into an Either?

And then users can write tests like this.

If we can somehow implement custom comparison for exceptions in the assert statement, then this should work.
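A minimal sketch of what this Either-style idea might look like, assuming hypothetical names `capture` and `Outcome` (nothing like this exists in pytest): the context manager swallows the exception and exposes it, so the test can assert on either the value or the exception afterwards, without any branching.

```python
from contextlib import contextmanager


class Outcome:
    """Holds either a result or an exception, Either-style."""

    def __init__(self):
        self.value = None
        self.exception = None


@contextmanager
def capture():
    outcome = Outcome()
    try:
        yield outcome
    except Exception as exc:
        # Instead of propagating, store the exception for later assertions.
        outcome.exception = exc


# Usage: no if/else needed, just assert on the captured outcome.
with capture() as outcome:
    outcome.value = int('not a number')  # raises ValueError

assert isinstance(outcome.exception, ValueError)
assert outcome.value is None
```

Custom exception comparison in the assert statement, as suggested above, would be a separate feature on top of this.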
@nicoddemus No, I'm not arguing to drop the topic, I'm arguing that making a new default value isn't the best way to do it.
Oh sorry, I wasn't very clear then. After seeing @RonnyPfannschmidt's response, I think the best approach is to just provide a new context manager which ensures nothing is raised. This way users can use it in the parametrization:

```python
@pytest.mark.parametrize('inp, expectation', [
    (-1, pytest.raises(ValueError)),
    (3.5, pytest.raises(TypeError)),
    (5, pytest.does_not_raise()),
    (10, pytest.does_not_raise()),
])
def test_bar(inp, expectation):
    with expectation:
        validate_positive_integer(inp)
```

Just to clarify, this does not require changes to `pytest.raises` itself.
@purpleP This is mainly a matter of taste of course, but I don't find your declarative test examples very readable personally. I still think having something like my or @RonnyPfannschmidt's proposal would make sense, and it is completely orthogonal to what you propose. The proposal from @RonnyPfannschmidt sounds interesting indeed. There's one issue I have with it though: given how many beginners I've seen asking "how do I check with pytest that code doesn't raise an exception?", I kind of fear it being overused where it actually wouldn't be needed at all...
"this is a context manager that does nothing; it exists simply to ease composition of test parameters that include failure/non-failure cases", plus including an example of how 2 tests can be turned into one, should suffice
But notice that your original proposal also has the same opportunity for "abuse":

```python
with pytest.raises(None):
    validate_positive_integer(5)
```

😉 But I think the advantage outweighs this small disadvantage. Plus, we can (and should) mention this in the docs.
@RonnyPfannschmidt beat me to the reply by a few seconds, but I agree, I think this could be addressed in the docs.
@The-Compiler How about my other proposal? It doesn't introduce overuse or anything, and it automatically compares messages from exceptions.
@purpleP I don't understand it. You're saying you define an…

@The-Compiler Yeah, that's a typo.
@The-Compiler can we close this issue - and should we open up one for a does-not-raise context manager?

@RonnyPfannschmidt I'm a bit late - but I don't see the point in opening a separate issue for the same thing (and then "losing" the history and rationale behind doing it that way)
Have found myself wanting this exact functionality a few times now. Seems there was a PR put together a few months back, but it was closed. 😕 Would be really nice to see something coalesce. What seems to be the blocker to implementing this in some form?

Hey @jakirkham! 😄 It was closed only because we couldn't really reach a consensus on the naming. 😕
On the naming of the argument passed to…

From my POV the correct way was a new name, because it's a new behaviour, and bike-shedding killed it in the end.
Just my 2c, this is going to get abused. The original example in the issue would really be two tests, as it is testing two behaviors (one with acceptable inputs and one with unacceptable (exceptional) inputs). The reason I say this is going to be abused is that we had a similar context manager in testify and found that developers (especially beginners) would add it to every test despite it being a noop. Their thought being that it is better to be explicit that it doesn't raise, but the mistake being that an exception already failed the test. EDIT (for example):

```python
def test_x():
    with pytest.does_not_raise():
        assert x(13) == 37
```

when really they wanted just

```python
def test_x():
    assert x(13) == 37
```
Is there any option to prevent users from misusing it, apart from the docs?

Not sure, don't think so...
fortunately, if you really want this behaviour you can get it today from the standard library:

```python
import sys

import pytest

if sys.version_info >= (3, 7):
    from contextlib import nullcontext
elif sys.version_info >= (3, 3):
    from contextlib import ExitStack as nullcontext
else:
    from contextlib2 import ExitStack as nullcontext


@pytest.mark.parametrize(
    ('x', 'expected_raises'),
    (
        (True, pytest.raises(ValueError)),
        (False, nullcontext()),
    ),
)
def test(x, expected_raises):
    with expected_raises:
        foo(x)
```
Let's consider the scenarios of having/not having the noop context manager.

Worst thing that can happen if we DO have it: some people use it in tests where it isn't needed at all.

Worst thing that can happen if we DON'T have it: people keep writing the cumbersome if/else workaround by hand.

Bottom line: just implement it - it's a small and uncertain harm (I doubt someone will complain) compared to a big and certain benefit (many happy users).
@Sup3rGeo my problem is it encourages two bad patterns:…

is it insufficient to…
```python
from contextlib import suppress as do_not_raise

import pytest
from pytest import raises


@pytest.mark.parametrize('example_input,expectation', [
    (3, do_not_raise()),
    (2, do_not_raise()),
    (1, do_not_raise()),
    (0, raises(ZeroDivisionError)),
])
def test_division(example_input, expectation):
    """Test how much I know division."""
    with expectation:
        assert (6 / example_input) is not None
```

```
Test session starts (platform: linux, Python 3.6.6, pytest 3.6.3, pytest-sugar 0.9.1)
rootdir: /home/sik/code/mne-python, inifile: setup.cfg
plugins: smother-0.2, sugar-0.9.1, pudb-0.6, ipdb-0.1, faulthandler-1.5.0, cov-2.5.1

matplotlib_test.py ✓✓✓✓ 100% ██████████

======================================= slowest 20 test durations =======================================
0.00s setup    matplotlib_test.py::test_division[3-expectation0]
0.00s setup    matplotlib_test.py::test_division[2-expectation1]
0.00s setup    matplotlib_test.py::test_division[1-expectation2]
0.00s teardown matplotlib_test.py::test_division[3-expectation0]
0.00s setup    matplotlib_test.py::test_division[0-expectation3]
0.00s call     matplotlib_test.py::test_division[3-expectation0]
0.00s teardown matplotlib_test.py::test_division[2-expectation1]
0.00s teardown matplotlib_test.py::test_division[1-expectation2]
0.00s call     matplotlib_test.py::test_division[2-expectation1]
0.00s call     matplotlib_test.py::test_division[1-expectation2]
0.00s teardown matplotlib_test.py::test_division[0-expectation3]
0.00s call     matplotlib_test.py::test_division[0-expectation3]

Results (0.08s):
    4 passed
```
as a backport is available in…

I provided a full recipe here, including…

closing after the follow-up is created
Addressing issues pytest-dev#4324 and pytest-dev#1830
Other options not bikeshedded here yet:…

Regarding this second point, it's arguably clearer this way anyhow, since the use cases are sufficiently different. If code is raising, you usually want to make an assertion on the exception somehow; if code is not raising, you usually want to make an assertion on the results somehow. That's a high-level difference, suggesting they should be different tests in the first place. It's already somewhat apparent even in the example tests of the PR recently created.
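To illustrate the "two separate tests" argument, the parametrized example from this thread could be split so that the non-raising cases assert on results and the raising cases assert on exceptions. The `validate_positive_integer` implementation below is an assumption, written only to make the example runnable:

```python
import pytest


def validate_positive_integer(value):
    # Assumed implementation for illustration, not from the thread.
    if not isinstance(value, int):
        raise TypeError('expected an int, got %r' % (value,))
    if value <= 0:
        raise ValueError('expected a positive int, got %r' % (value,))
    return value


@pytest.mark.parametrize('inp', [5, 10])
def test_accepts_valid_input(inp):
    # Non-raising case: assert on the result.
    assert validate_positive_integer(inp) == inp


@pytest.mark.parametrize('inp, exc_type', [(-1, ValueError), (3.5, TypeError)])
def test_rejects_invalid_input(inp, exc_type):
    # Raising case: assert on the exception.
    with pytest.raises(exc_type):
        validate_positive_integer(inp)
```

Each test then needs no conditional logic at all, at the cost of one extra test function.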
A plugin indeed seems like a good option
Currently, parametrizing conditional raising of an exception is rather cumbersome. Consider this:

When we write a test for it, we need to repeat ourselves:

An easy way to solve this would be to add something like an `activated` argument to `pytest.raises`, where it simply does nothing. But then sometimes, it's handy to parametrize the exception too - consider this:

and the test:

So maybe we could just use `None` as a special value where `pytest.raises` does nothing?

Or if we're worried about `None` being accidentally used (though I'm not sure how?), we could use a special value, potentially `pytest.raises.nothing` or so 😆

Opinions?
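As a concrete sketch of the cumbersome pattern the RFC describes (the function `f` below is an assumed example, not code from the issue): with plain `pytest.raises`, the parametrized test needs an if/else branch.

```python
import pytest


def f(x):
    # Example function that conditionally raises (an assumption).
    if x < 0:
        raise ValueError(x)
    return x


@pytest.mark.parametrize('x, expect_error', [(1, False), (-1, True)])
def test_f(x, expect_error):
    # This if/else duplication is exactly what the RFC wants to avoid:
    # the call to f(x) has to be written twice.
    if expect_error:
        with pytest.raises(ValueError):
            f(x)
    else:
        f(x)
```

The proposals in this thread all aim to collapse that branch into a single `with expectation:` block supplied by the parametrization.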