Hypothesis provides ways to reproduce a failing test (e.g. https://hypothesis.readthedocs.io/en/latest/reproducing.html). But when the test does not fail with a Python exception and instead crashes the interpreter (segfault), this information is never printed.
Is there a way to reproduce failures in that case?
Would it be possible to have a mode where the @reproduce_failure content is printed before running the actual test, so it is still available after a crash? Or where the example is added to the database before running the test?
The specific context is that we run Hypothesis tests on CI, and from time to time those tests crash, but we currently have no way to reproduce, and therefore fix, those crashes.
We can't print failures in advance in general, because with e.g. st.data() (and other APIs) some of the inputs are chosen in response to the behavior of the code under test, so the full example doesn't exist until the test has run.
However, you can run the test case in a subprocess using the "executors" API. I suspect subprocesses don't work very well with interactive data, because the failing example lives in the child process (which we can probably fix or work around?), but hopefully that's OK for your use case.
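As a minimal sketch of the executors hook mentioned above: Hypothesis supports custom function execution on test classes that define an `execute_example` method, and each generated example is then run through that method. The class and test below are illustrative names, and this version just calls the function directly; a real recipe would dispatch `f()` to a subprocess instead.

```python
import unittest

from hypothesis import given, strategies as st


class TestWithExecutor(unittest.TestCase):
    # Hypothesis's custom-executor hook: when a test class defines
    # execute_example(self, f), each example is executed via this method.
    # Calling f() directly reproduces the default behavior; a crash-safe
    # variant would run f in a child process instead.
    def execute_example(self, f):
        return f()

    @given(st.integers())
    def test_square_non_negative(self, n):
        # Trivial property used only to demonstrate the executor hook.
        self.assertGreaterEqual(n * n, 0)
```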
Looking at this again, I think it has to be solved downstream:

- Users can write test functions that run the code-under-test in a subprocess or similar
- But Hypothesis can't do this for arbitrary code without breaking it at least sometimes
I'd be happy to include a recipe for delegating to a subprocess in our docs if someone proposed one, but I'm closing the issue since there's sadly no action for Hypothesis to take here.
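One hedged sketch of such a recipe: the test function itself launches the risky code in a child Python process, so a segfault kills only the child while the parent test survives, letting Hypothesis shrink and report the failing example as usual. `CHILD_SCRIPT` here is a hypothetical stand-in for the real code under test.

```python
import subprocess
import sys

from hypothesis import given, strategies as st

# Placeholder child program: replace the decode call with the real
# (potentially segfaulting) call into a C extension or similar.
CHILD_SCRIPT = """
import sys
data = sys.stdin.buffer.read()
data.decode("utf-8", errors="ignore")  # stand-in for crashy code
"""


@given(st.binary())
def test_code_under_test_does_not_crash(data):
    proc = subprocess.run(
        [sys.executable, "-c", CHILD_SCRIPT],
        input=data,
        timeout=10,
    )
    # On POSIX, a segfault surfaces as a negative returncode (-SIGSEGV)
    # in the child; the parent test process is unaffected, so Hypothesis
    # can record and minimize the crashing input.
    assert proc.returncode == 0, f"child exited with {proc.returncode}"
```

The trade-off is per-example process startup cost, and it won't work with strategies like st.data() whose draws depend on running the code in-process.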