Need to be able to downgrade Unsatisfiable and Exhausted to warnings #22
Comments
I am interested in this also. My problem is that some of my tests are necessarily generated from the environment in which the tests are run: a sufficiently rich sysfs will generate enough tests, but a dead simple one will not. Right now I'm making two versions of each function, along the lines of the sketch below, which is grim. I think it would be simpler and more consistent if sampled_from and one_of did not raise errors when given nothing to sample from.
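For illustration, a minimal sketch of the kind of duplication being described; `_discover_devices` and the test name are hypothetical stand-ins for whatever the real suite derives from sysfs:

```python
import pytest
from hypothesis import given
from hypothesis.strategies import sampled_from


def _discover_devices():
    # Hypothetical helper: in the real suite this would enumerate devices
    # from sysfs, and it may legitimately return an empty list on a very
    # minimal system.
    return []


_DEVICES = _discover_devices()

if _DEVICES:
    # Rich environment: define the real property-based test.
    @given(device=sampled_from(_DEVICES))
    def test_device_properties(device):
        assert device is not None
else:
    # Bare environment: sampled_from([]) raised at definition time in the
    # Hypothesis of the day, so a skipped stand-in has to be defined instead.
    @pytest.mark.skip(reason="no devices found in this environment")
    def test_device_properties():
        pass
```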
I don't think you need to do that? Hypothesis understands that sampled_from has only N elements, and if N < settings.min_satisfying_examples then, as long as you don't assume() away every example, it won't fail the test. RE sampled_from and one_of not raising errors, I completely disagree: it would require providing strategies that can never produce any values (which seems questionable) and would serve mostly to mask usage errors.
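A small sketch of the point being made: a strategy with only a handful of elements, combined with assume() calls that do not reject every example, does not cause the test to fail (the test name and values are illustrative only):

```python
from hypothesis import assume, given
from hypothesis.strategies import sampled_from


@given(value=sampled_from([1, 2, 3]))  # only three possible examples
def test_small_sample(value):
    # Rejecting some, but not all, of the examples is fine.
    assume(value != 1)
    assert value in (2, 3)
```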
Checking against 0 is slightly better than checking against _MIN_SATISFYING_EXAMPLES, but it still leaves the main problem. It's not that I don't think some sort of warning should occur if no tests can be run; it's that I would like a much better way of skipping such a test and being able to report that it was skipped. But there doesn't seem to be any way I can use pytest skip markers for this test, because the exception is raised by hypothesis before pytest can implement the skip.

Essentially, it would be nice if I could do something like the sketch below instead of what I am doing. I think the problem is caused by the eager exception raising when the collection to be sampled from has length 0; the test does seem to be skipped successfully if there is at least one element in the set to be sampled from.

I think the deeper issue is that the hypothesis strategies are constructed when the module is loaded, regardless of whether the test is skipped or not. So this whole issue could probably be avoided if the step of constructing the strategies were skipped entirely whenever the test itself is to be skipped.
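A sketch of the pattern being asked for, using the same hypothetical names as above. At the time, sampled_from([]) raised as soon as it was called during module load, so the skipif marker never got a chance to run:

```python
import pytest
from hypothesis import given
from hypothesis.strategies import sampled_from


def _discover_devices():
    # Hypothetical sysfs-inspecting helper, as in the earlier sketch.
    return []


_DEVICES = _discover_devices()


# Desired behaviour: skip the test when the environment provides nothing
# to sample from.
@pytest.mark.skipif(len(_DEVICES) == 0, reason="no devices in this environment")
@given(device=sampled_from(_DEVICES))  # this call raised eagerly when _DEVICES was empty
def test_device_properties(device):
    assert device is not None
```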
Right, I see. The problem is that the error occurs at definition time rather than test execution time. That makes sense. I'll open an issue about supporting that use case. |
Thanks, that sums it up perfectly. I thought that the skipif markers were processed prior to execution time, which led me astray. |
They are, but the problem is that the functions you're calling aren't. You're calling sampled_from([]) when the module is first loaded, which is where the error occurs, so pytest (or any other runner) never gets the opportunity to start running tests at all.
Closed by #176, I think, and certainly the desired examples above work on recent versions of Hypothesis with DeferredStrategy.
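A minimal sketch of the deferred behaviour being referred to, assuming a Hypothesis version in which strategy arguments are validated lazily rather than when the strategy object is constructed:

```python
from hypothesis.strategies import sampled_from

# Constructing the strategy no longer raises immediately; the invalid
# argument is only reported when the strategy is actually used to run a
# test, by which point pytest has had the chance to apply any skip/skipif
# markers, so the examples above can be skipped cleanly.
empty = sampled_from([])
```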
I think there are two separate issues here: the one @mulkieran and I talked about in this thread is somewhat different from the original issue. But again, I'm not sure the original idea was a good one, so I'm happy to close the bug.
So... I'll keep closing years-old issues if I think the bug is fixed? You're welcome to reopen if I get too excited.
Works for me. |
I was happy with that solution: pyudev/pyudev#146. |
This is blocked on Issue #11 as we need a feedback mechanism to do it, but basically Hypothesis needs to be able to run in a "no false positives" mode, which requires Unsatisfiable to not cause the tests to fail. It should instead log an error and move on.