@pytest.fixture integration? #65
Forgot to say, my code is at https://github.com/jab/bidict/blob/ff5d343/tests/test_hypothesis.py in case it helps!
Yeah, I know about pytest fixtures, but I basically don't think it's possible to use the combination of Hypothesis and them like this. You could use something like `strategy(something).example()` to generate the fixture (don't do that; it has problems for other reasons), but I'm almost 100% certain you can't get `assume` in a fixture to work.

Fundamentally, the problem is that Hypothesis and pytest have very different notions of how tests are executed. Integration between the two basically happens by Hypothesis going to some length to stay out of pytest's way and exposing an interface it can use. Using `given` for some arguments will not prevent fixtures, `parametrize`, etc. from working with the others, but that's about the best I can do. The major limitations you'd run into are: (a) Hypothesis mixes example generation and test execution, which the py.test test generator really doesn't like, and (b) once Hypothesis has found a failure, it has to minimize it.

(b) is also the major problem with treating Hypothesis as a fixture source. The examples Hypothesis generates are really messy and only become readable after the minimization process. If Hypothesis were just providing examples and not minimizing, which is what the fixture mode of operation requires, it would be basically useless.

I'm also not really clear on why you want this. Are you just worried about the performance implications of running the generation process multiple times? If so, is this actually proving a problem for you, or is it just a theoretical concern? If the former, maybe you're hitting a Hypothesis bug, because generation is usually pretty fast.

Also, for the tests you linked to, have you seen the support for mapping strategies? You might find it useful given the data conversion at the beginning of each test.
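For concreteness, here is a minimal sketch of the mapping-strategies suggestion, written with the current `hypothesis.strategies` API rather than the older `strategy(...)` one; the dict-to-items conversion is an invented stand-in for whatever conversion sits at the top of each test:

```python
from hypothesis import given, strategies as st

# Instead of converting raw data at the start of every test body,
# do the conversion inside the strategy with .map(), so tests receive
# data already in the shape they need.
items = st.dictionaries(st.integers(), st.integers()).map(
    lambda d: sorted(d.items())
)

@given(items)
def test_pairs_have_unique_keys(pairs):
    # Keys are unique because the pairs came from a dict.
    keys = [k for k, _ in pairs]
    assert len(keys) == len(set(keys))
```

The `.map()` call runs during generation, so it also participates in shrinking: minimized failures are reported in the already-converted form.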
Thanks for the suggestions @DRMacIver. Implemented in https://github.com/jab/bidict/blob/f31a193/tests/test_hypothesis.py#L20...L27.
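For readers landing here later: one way to avoid repeating the same `@given(...)` line in every test is to build the decorator once at module scope and reuse it. A hedged sketch using the modern API (the strategy below is illustrative, not the one from the linked file):

```python
from hypothesis import given, strategies as st

# Compose the strategy once...
items = st.dictionaries(st.text(), st.integers()).map(
    lambda d: list(d.items())
)

# ...and bind it to a single reusable decorator object, so each test
# applies @given_items instead of repeating the full expression.
given_items = given(items)

@given_items
def test_roundtrips_through_dict(pairs):
    assert sorted(dict(pairs).items()) == sorted(pairs)

@given_items
def test_lengths_match(pairs):
    # Keys are unique (they came from a dict), so no pairs collapse.
    assert len(dict(pairs)) == len(pairs)
```

This keeps generation per-test (so shrinking still works), while removing the syntactic duplication the question is about.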
Pytest has a feature where you can apply the `@pytest.fixture` decorator to a function `foo`, and then any test Pytest discovers that accepts a `foo` argument will automatically be passed whatever `foo()` returns (see https://pytest.org/latest/fixture.html#fixtures-as-function-arguments for more). You can even have fixtures derived from other fixtures:
If I wanted Hypothesis to generate the input `foo()` is supplying in the above, then have `bar` transform it in some way (applying some `assume` statements while it's at it), and then have Pytest feed that as input to all my tests, would that be possible (using `pytest.fixture` or otherwise)?

I'm currently duplicating the same `@given(...)` line for every one of my tests, which means Hypothesis has to do a lot of duplicate work. This seems like a common enough use case, but I haven't found anything that addresses it in the docs. (I did find https://pypi.python.org/pypi/hypothesis-pytest, but it looks like it currently only improves Pytest reporting.)
Thanks for any help and apologies if I'm missing something (I'm new to both Hypothesis and Pytest). And thanks for releasing Hypothesis, it's awesome!