pytest function scoped fixtures should run once per example, not once per test #377
Related to #59.
Note that the …
Actually… it might be possible to incorporate a version of …
Maybe, but the version would look very different, assuming you want this to be the new default behavior of `@given`.
I think we need to be quite careful about this. There are a bunch of problems with …
Fair enough! I didn't expect it to be that easy, but thought it was worth mentioning.
I think @ossronny can judge better than me how bad the hack really is. I'm not really sure about adding this to Hypothesis, but I'm also not sure it will be fixed in pytest either.
After using subtesthack for a while, I am now of the opinion that it's a good thing for the user to explicitly think about which fixtures really need setup/teardown, or whether they are even testing the right thing if they have fixtures like that. I am not the kind of guy who has a strong opinion on which tests are supposed to test what (unit tests vs integration tests… most of the time I am just glad to have any tests at all), but I think Hypothesis doesn't really work that well with I/O-heavy tests. I'm constantly finding old integration tests of mine that use Hypothesis and run way slower than necessary while testing less of the input space than they could.
My time doing triage on the pytest issue tracker has left me feeling the same way, as well as less optimistic that there's a nice way to hook into fixture internals (even if there were, it would be horribly pytest-version-dependent). @DRMacIver, how would you feel about outright deprecating use of function-scoped fixtures with `@given`? I think we can detect that, and it's a much cleaner solution than the current silent, subtle, maybe-broken state they're left in…
Not sure what exact changes you are proposing? My comment was advocating for the status quo, where the default behavior is as-is and people can opt into more test isolation with subtesthack.
Allow use of fixtures with broader-than-function scope; emit warnings for function-scoped fixtures. IMO this is a strict improvement over the current situation, where we silently reuse them and surprise users who actually did need those values scoped to a single test invocation.
I think this is a good idea. Ideally I'd still like to transparently support it in future, but we're not likely to make any progress on this in the foreseeable future, and adding a deprecation warning now gives us a certain leeway later: we can either make it an error, or we can support it "properly", but until we do one or the other of these it will have significantly fewer gotchas for users if we warn them.
How do I disable this warning? Am I supposed to refactor my tests to nested fns?
That would be less of a gotcha for sure, and at the same time we could get rid of some of the argument-faking code in Hypothesis.
It's certainly an option. I've been idly thinking it would be sensible to have a decorator to make it easier to run tests directly (basically just the equivalent of …). Another option, which would get the correct semantics (though I'm not sure how easy it is to do), is to refactor your function-scoped fixtures into functions you can call explicitly within the test body; see the sketch below.
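For illustration, a minimal sketch of that refactoring, assuming a hypothetical fixture whose body moves into a plain context manager that the test enters once per example (`temp_store` and the test name are made up for this example):

```python
from contextlib import contextmanager

from hypothesis import given, strategies as st


@contextmanager
def temp_store():
    # Hypothetical setup/teardown that used to live in a
    # function-scoped pytest fixture.
    store = {}
    try:
        yield store
    finally:
        store.clear()


@given(st.text())
def test_round_trip(s):
    # Entered once per generated example, so every example gets
    # fresh state, unlike a function-scoped fixture under @given.
    with temp_store() as store:
        store["key"] = s
        assert store["key"] == s
```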
This wouldn't be true, FWIW. We'd still need the argument-handling code for less fine-grained fixtures and for e.g. handling …
Forwarding, sure, but not going through the effort of making …
No, we would still need that in order to get proper support for module-scoped fixtures. All pytest fixtures are passed through function arguments regardless of their scope, as the sketch below illustrates.
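A minimal sketch of that point, with a hypothetical `db` fixture: even a module-scoped fixture arrives through the test's parameter list, so the argument-handling code is needed for any scope.

```python
import pytest


@pytest.fixture(scope="module")
def db():
    # Hypothetical module-scoped resource: created once per module
    # and shared by every test (and every Hypothesis example) in it.
    return {"connected": True}


def test_uses_db(db):  # still received as a plain function argument
    assert db["connected"]
```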
Got it. I was thinking of discouraging decorating a test function altogether (and always going for nested functions), since a lot of fixtures are function-scoped anyway.
Well, we can emit a warning if any function decorated with `@given` uses a function-scoped fixture (see master...Zac-HD:fixture-scope-warning). IMO the status quo is the best option short of forcing function-scoped fixtures to be example-scoped instead, which might even be possible these days.
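As a rough sketch of how such a warning could be detected, not Hypothesis's actual implementation: it leans on `item._fixtureinfo` and `FixtureDef.scope`, private pytest internals that are exactly as version-dependent as noted earlier in the thread, so treat every attribute here as an assumption.

```python
# conftest.py -- a sketch only; private pytest internals may change.
import warnings

import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    # Hypothesis marks its wrapped tests with `is_hypothesis_test`.
    if getattr(getattr(item, "function", None), "is_hypothesis_test", False):
        info = getattr(item, "_fixtureinfo", None)
        name2defs = info.name2fixturedefs if info is not None else {}
        for name, defs in name2defs.items():
            if any(d.scope == "function" for d in defs):
                warnings.warn(
                    f"{item.nodeid} mixes @given with the function-scoped "
                    f"fixture {name!r}; it is set up once per test, not "
                    "once per generated example."
                )
                break
    yield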
Could we exclude autouse fixtures and warn only on ones that are passed as explicit arguments?
Given how often the subject comes up, I think it's pretty clear it's a problem in practice!
Is the conclusion here that Hypothesis will warn in certain situations where it detects a fixture leak? Will Hypothesis include utilities to facilitate fixture setup/teardown as part of its pytest plugin?
Yes: if a test decorated with `@given` requests a function-scoped fixture, it will warn.
No. While I'd like this to work, the difficulty of making it robust is what led us to use the warning instead.
Okay, yeah. I think the warning message will need a lot of work, because right now it reads like "this feature is not implemented yet" while we actually mean "this feature cannot be implemented, and what you're doing is probably an antipattern". I think you'll see a bunch of feature requests popping up :)
It's an antipattern in the sense that it doesn't work, but might seem to for long enough to corrupt your caches or whatever 🤷‍♂️ If anyone wants to do, or to fund, the work involved, I'd be delighted though!
Hypothesis 6.6 will produce:

    E  hypothesis.errors.FailedHealthCheck: test/test_flow_control_window.py::TestAutomaticFlowControl::test_mixing_update_forms uses the 'frame_factory' fixture, which is reset between function calls but not between test cases generated by `@given(...)`. You can change it to a module- or session-scoped fixture if it is safe to reuse; if not we recommend using a context manager inside your test function. See https://docs.pytest.org/en/latest/fixture.html#sharing-test-data for details on fixture scope.
    E  See https://hypothesis.readthedocs.io/en/latest/healthchecks.html for more information about this. If you want to disable just this health check, add HealthCheck.function_scoped_fixture to the suppress_health_check settings for this test.

Since the tests already work around the problem, acknowledging HypothesisWorks/hypothesis#377, we can safely disable the check. Hypothesis 5.49 introduced the function_scoped_fixture health check value, hence it is now the lowest required version of Hypothesis.
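For a downstream project that has already worked around the issue, suppressing the check looks roughly like this; the fixture and test names come from the error message above, while the strategy and body are placeholders:

```python
from hypothesis import HealthCheck, given, settings, strategies as st


@settings(suppress_health_check=[HealthCheck.function_scoped_fixture])
@given(st.binary())
def test_mixing_update_forms(frame_factory, data):
    # frame_factory is a function-scoped fixture; suppressing the health
    # check acknowledges that it is shared across all generated examples.
    ...
```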
Because Hypothesis exposes just a single test function, pytest runs its fixtures once for that function.
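A minimal sketch of the symptom (fixture and test names are hypothetical): the list below is built once by pytest, so every example generated by `@given` sees state left behind by earlier examples.

```python
import pytest
from hypothesis import given, strategies as st


@pytest.fixture  # function-scoped by default: set up once per *test*, not per example
def events():
    return []


@given(st.integers())
def test_accumulates_state(events, n):
    events.append(n)
    # On Hypothesis versions without the health check, this fails on the
    # second generated example: `events` was never reset, so it still holds
    # the value appended by the first. Newer versions raise
    # FailedHealthCheck for this pattern instead.
    assert len(events) == 1
```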
This is a long-running problem with Hypothesis's interaction with pytest, and there is an open bug about it on the pytest side too.
Fixing this is tricky. The fixtures-integration branch has a partial prototype that makes some progress towards making it work, but it's very far from complete and has a number of issues that are likely to need substantial work to fix.
I am currently seeking funding to work on this ticket and it is unlikely to make much progress without said funding.