
A way to reproduce a failure in case of a crash? #3487

Closed

jorisvandenbossche opened this issue Oct 21, 2022 · 2 comments
Labels
question not sure it's a bug? questions welcome

Comments

@jorisvandenbossche

Hypothesis provides some ways to reproduce a failing test (e.g. https://hypothesis.readthedocs.io/en/latest/reproducing.html). But if the test does not fail with a Python error and instead crashes the process (a segfault), this information is never printed.

Is there a way to reproduce failures in this case?

Would it be possible to have a mode where the @reproduce_failure content is printed before running the actual test (so it still gets printed in the case of a crash)? Or where the example is added to the database before running the test?
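For reference, the workflow being referred to looks roughly like this (a sketch; the @reproduce_failure line and its payload are what Hypothesis prints after an ordinary, non-crashing failure, and the version/blob shown are placeholders):

```python
from hypothesis import given, settings, strategies as st

@settings(print_blob=True)  # ask Hypothesis to print an @reproduce_failure(...) line on failure
@given(st.lists(st.integers()))
def test_is_sorted(xs):
    assert xs == sorted(xs)  # fails for most inputs

# After a normal failure, Hypothesis prints something like
#   @reproduce_failure('6.x.y', b'...')
# which can be pasted onto the test to replay exactly that input -- but this
# only happens after the test has run, so it is lost if the process crashes.
```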

The specific context is that we run Hypothesis tests on CI, and from time to time those tests crash, but we currently have no way to investigate or fix those crashes because we can't reproduce them.

@Zac-HD Zac-HD added the question not sure it's a bug? questions welcome label Oct 21, 2022
Zac-HD (Member) commented Oct 21, 2022

We can't in general print the failing example in advance, because with e.g. st.data() (or other APIs) some of the inputs are chosen in response to the behavior of the code under test.
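For illustration, a test using interactive data looks roughly like this (a minimal sketch; the draws depend on values only known while the test body is running):

```python
from hypothesis import given, strategies as st

@given(st.data())
def test_interactive_draws(data):
    # The size of the second draw depends on the first value, so the full
    # input cannot be known (or printed) before the test body actually runs.
    n = data.draw(st.integers(min_value=0, max_value=10))
    xs = data.draw(st.lists(st.integers(), min_size=n, max_size=n))
    assert len(xs) == n
```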

However, you can run the test case in a subprocess using the "executors" API. I suspect subprocesses don't work very well with interactive data because the failing example is in the child process (which we can probably fix or work around?), but hopefully that's OK for your use-case.
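An executor hook looks roughly like this (a sketch; the print call is just a placeholder for whatever isolation or recording you want around each example):

```python
from unittest import TestCase
from hypothesis import given, strategies as st

class TestWithExecutor(TestCase):
    # Hypothesis calls execute_example(f) once per example for test methods
    # on this class; this is the hook where delegation to a subprocess or
    # other crash isolation could live.
    def execute_example(self, f):
        print("running one example")  # would be the last output seen before a crash
        return f()

    @given(st.integers())
    def test_roundtrip(self, x):
        assert int(str(x)) == x
```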

Zac-HD (Member) commented Jun 4, 2023

Looking at this again, I think it has to be solved downstream:

  • Users can write test functions that run the code-under-test in a subprocess or similar
  • But Hypothesis can't do this for arbitrary code without breaking it at least sometimes

I'd be happy to include a recipe for delegating to a subprocess in our docs if someone proposed one, but I'm closing the issue since there's sadly no action for Hypothesis to take here.
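Such a recipe might look roughly like the following sketch, assuming the code under test can be invoked from a module-level function (run_flaky_native_code and the 30-second timeout are illustrative, not part of Hypothesis):

```python
import multiprocessing as mp
from hypothesis import given, strategies as st

def run_flaky_native_code(payload):
    # Stand-in for the extension/native code that may segfault.
    return sum(payload)

def _worker(queue, payload):
    queue.put(run_flaky_native_code(payload))

@given(st.lists(st.integers()))
def test_runs_in_subprocess(payload):
    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    proc = ctx.Process(target=_worker, args=(queue, payload))
    proc.start()
    proc.join(timeout=30)
    # A segfault in the child surfaces here as a nonzero (or None, if hung)
    # exit code, so Hypothesis sees an ordinary assertion failure and can
    # shrink and report the input as usual.
    assert proc.exitcode == 0, f"child exited with code {proc.exitcode}"
    assert queue.get(timeout=5) == sum(payload)
```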
