
Need to be able to downgrade Unsatisfiable and Exhausted to warnings #22

Closed
DRMacIver opened this issue Jan 13, 2015 · 11 comments

@DRMacIver
Member

This is blocked on Issue #11 as we need a feedback mechanism to do it, but basically Hypothesis needs to be able to run in a "no false positives" mode, which requires Unsatisfiable to not cause the tests to fail. It should instead log an error and move on.
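
As a concrete illustration of the requested behaviour, here is a minimal sketch, not Hypothesis's own API, of a wrapper that downgrades Unsatisfiable (and Exhausted, where that error class exists in your version) to a warning so the run can move on:

import functools
import warnings

from hypothesis import errors

# Collect the error classes to soften; Exhausted is assumed to be
# version-specific, so it is only included if it is actually present.
_SOFT_ERRORS = tuple(
    exc
    for exc in (getattr(errors, "Unsatisfiable", None), getattr(errors, "Exhausted", None))
    if exc is not None
)

def warn_on_unsatisfiable(test):
    """Run a @given-decorated test, warning instead of failing when it cannot be satisfied."""
    @functools.wraps(test)
    def wrapper(*args, **kwargs):
        try:
            return test(*args, **kwargs)
        except _SOFT_ERRORS as err:
            warnings.warn("Hypothesis could not satisfy this test: %s" % (err,))
    return wrapper

# Usage sketch: apply @warn_on_unsatisfiable above the @given decorator.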

@DRMacIver changed the title from "Need to be able to downgrade Unsatisfiable to a warning" to "Need to be able to downgrade Unsatisfiable and Exhausted to warnings" on Jan 13, 2015
@mulkieran
Contributor

I am interested in this also. My problem is that some of my tests are necessarily generated from the environment in which the tests are run. So, a sufficiently rich sysfs will generate enough tests, but a dead simple one will not.

Right now, I'm writing two versions of each test function, like:

_devices = list(filter(lambda x: x.tags, _DEVICES))
if len(_devices) >= MIN_SATISFYING_EXAMPLES:
    # Enough devices: define the Hypothesis-backed test.
    @given(strategies.sampled_from(_devices))
    def test_junk(self, device):
        assert ...
else:
    # Too few devices: define a stub that just skips.
    def test_junk(self):
        skip("not enough...")

which is grim.

I also think it would be simpler and more consistent if sampled_from() and one_of() did not raise an exception when the list of elements to sample from is empty.

@DRMacIver
Member Author

I don't think you need to do that. Hypothesis understands that sampled_from has only N elements, and if N < settings.min_satisfying_examples then, as long as you don't assume() away every example, it won't fail the test.

Re: sampled_from and one_of not raising errors, I completely disagree. It would require providing strategies that can never produce any values (which seems questionable) and would serve mostly to mask usage errors.
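
A minimal sketch of that suggestion, with Device and _DEVICES as hypothetical stand-ins for the sysfs objects in the thread: pass the filtered list straight to sampled_from() and let Hypothesis cope with there being only a handful of elements, rather than guarding on MIN_SATISFYING_EXAMPLES by hand (an empty list will of course still error, which is the remaining problem discussed below):

from collections import namedtuple

from hypothesis import given, strategies

Device = namedtuple("Device", ["name", "tags"])
_DEVICES = [Device("sda", ["disk"]), Device("loop0", [])]  # hypothetical sample data

# May be very short on a bare machine; no explicit length check needed.
_devices = [d for d in _DEVICES if d.tags]

@given(strategies.sampled_from(_devices))
def test_every_tagged_device_has_a_name(device):
    assert device.name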

@mulkieran
Contributor

Checking against 0 is slightly better than checking against _MIN_SATISFYING_EXAMPLES, but it still leaves the main problem.

It's not that I don't think some sort of warning should occur if no tests can be run; it's that I would like a much better way of skipping such a test and being able to report that it was skipped. But there doesn't seem to be any way I can use pytest skip markers for this test, because the exception is raised by Hypothesis before pytest can apply the skip.

Essentially, it would be nice if I could do this:

@pytest.mark.skipif(len(_devices) == 0, reason="no devices")
@given(strategies.sampled_from(_devices))
def test_junk(self, device):
    assert ...

instead of what I am doing.

I think the problem is caused by the eager exception raising when the list to be sampled from has length 0. The test does seem to be skipped successfully if there is at least one element in the set to be sampled from. I think the deeper issue is that the Hypothesis machinery is constructed regardless of whether the test is skipped or not. So this whole issue could probably be avoided if the step of constructing the Hypothesis machinery were skipped entirely whenever the test itself is to be skipped.

@DRMacIver
Member Author

Right, I see. The problem is that the error occurs at definition time rather than test execution time. That makes sense. I'll open an issue about supporting that use case.

@mulkieran
Contributor

Thanks, that sums it up perfectly. I thought that the skipif markers were processed prior to execution time, which led me astray.

@DRMacIver
Member Author

They are, but the problem is that the functions you're calling aren't. You're calling sampled_from([]) when the module is first loaded, which is where it errors, so you never get the opportunity for pytest or similar to start running tests at all.
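
To make the sequencing concrete, here is a minimal sketch of such a test module (hypothetical names): both decorator arguments are evaluated when the module is imported, so on the Hypothesis version discussed here sampled_from([]) raised before pytest ever consulted the skipif marker:

import pytest
from hypothesis import given, strategies

_devices = []  # a machine with no matching devices

@pytest.mark.skipif(len(_devices) == 0, reason="no devices")  # never consulted here
@given(strategies.sampled_from(_devices))  # argument evaluated at import time; this is where it errored
def test_junk(device):
    assert device is not None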

@Zac-HD
Member

Zac-HD commented May 11, 2017

Closed by #176, I think, and certainly the desired examples above work on recent versions of Hypothesis with DeferredStrategy.

@Zac-HD closed this as completed on May 11, 2017
@DRMacIver
Member Author

I think there are two separate issues here: the one @mulkieran and I talked about in this bug thread is somewhat different from the original issue.

But again I'm not sure that the original idea was a good one, so I'm happy to close the bug.

@Zac-HD
Member

Zac-HD commented May 11, 2017

So... I'll keep closing years-old issues if I think the bug is fixed? You're welcome to reopen if I get too excited 😉

@DRMacIver
Member Author

Works for me.

@mulkieran
Contributor

I was happy with that solution: pyudev/pyudev#146.
