Hypothesis doesn't tell me when a rule never assumes something successfully #213

Closed
radix opened this issue Oct 31, 2015 · 5 comments · Fixed by #3869
Labels
legibility make errors helpful and Hypothesis grokable

Comments

@radix
Contributor

radix commented Oct 31, 2015

I expect this program to raise an exception because the okay rule never successfully runs:

from hypothesis.stateful import RuleBasedStateMachine, Bundle, rule
from hypothesis import assume

class StatefulTests(RuleBasedStateMachine):
    state = Bundle('state')

    @rule(target=state)
    def whatever(self):
        # Every value ever added to the bundle is the string "Okay."
        return "Okay."

    @rule(initial=state)
    def okay(self, initial):
        # This can never hold, so the rule never gets past the assume().
        assume(initial == "Not Okay.")

StatefulTests.TestCase().runTest()
@DRMacIver
Member

To summarize some discussion from IRC:

It's almost impossible to solve this perfectly. For state machines with lots of rules, the chances are high that at least one rule won't be properly exercised in any given run, which means that even a set of perfectly reasonable assumes will probably include some that never pass in that run. Simply failing whenever a rule never passes its assumptions would therefore be error-prone, to say the least.

However, not doing anything about this is a major usability issue, so I'm going to try to figure out how to make this better, although it's unlikely that it can be made perfect.
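As an illustrative sketch (not from the thread), here is a machine with a perfectly reasonable assume that can still go unsatisfied in a given run, because it only holds after another rule has fired at least once:

from hypothesis import assume, strategies as st
from hypothesis.stateful import RuleBasedStateMachine, rule

class QueueMachine(RuleBasedStateMachine):
    # Illustrative sketch, not taken from the original report.
    def __init__(self):
        super().__init__()
        self.items = []

    @rule(value=st.integers())
    def push(self, value):
        self.items.append(value)

    @rule()
    def pop(self):
        # Reasonable, but only satisfiable once push() has run; a run that
        # under-exercises push() never gets past this assume().
        assume(self.items)
        self.items.pop()

QueueMachine.TestCase().runTest()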

@Zac-HD Zac-HD added the enhancement it's not broken, but we want it to be better label May 31, 2017
@Zac-HD
Member

Zac-HD commented May 31, 2017

Closed by the new @invariant decorator in 3.7: use it to decorate checks that should run after every step.
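For context, a minimal sketch of how @invariant is used; the machine and the check here are illustrative, not taken from this issue:

from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, invariant, rule

class CounterMachine(RuleBasedStateMachine):
    # Illustrative sketch of the @invariant decorator.
    def __init__(self):
        super().__init__()
        self.count = 0

    @rule(n=st.integers(min_value=1, max_value=10))
    def increment(self, n):
        self.count += n

    @invariant()
    def count_never_negative(self):
        # Invariants are checked after every step of the state machine.
        assert self.count >= 0

CounterMachine.TestCase().runTest()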

@Zac-HD Zac-HD closed this as completed May 31, 2017
@DRMacIver DRMacIver reopened this May 31, 2017
@DRMacIver
Member

I don't think so? The problem here is with rules that are not supposed to be run after every step, but should still sometimes run.

@Zac-HD Zac-HD added legibility make errors helpful and Hypothesis grokable and removed enhancement it's not broken, but we want it to be better labels Mar 14, 2018
@Zac-HD
Member

Zac-HD commented Apr 20, 2020

I think emitting a warning for each precondition that was never satisfied would be more reasonable, since we notionally check each of them before each step.

master...Zac-HD:report-unsat-preconditions suggests to me that this check should only run once, i.e. at the end of the generate phase, rather than at the end of each example. Otherwise it still tends to be flaky, as swarm testing makes it easy to never satisfy a predicate that requires some other rule to have run.
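For context, a minimal sketch of a rule guarded by @precondition; the machine is illustrative, and the warning described above is a proposal rather than current behaviour:

from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, precondition, rule

class PreconditionedQueue(RuleBasedStateMachine):
    # Illustrative sketch, not taken from this issue.
    def __init__(self):
        super().__init__()
        self.items = []

    @rule(value=st.integers())
    def push(self, value):
        self.items.append(value)

    @precondition(lambda self: len(self.items) > 0)
    @rule()
    def pop(self):
        # pop() is only attempted when its precondition holds; the proposal
        # above is to warn once, at the end of the generate phase, about any
        # precondition that was never satisfied during the run.
        self.items.pop()

PreconditionedQueue.TestCase().runTest()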

@Zac-HD
Member

Zac-HD commented Jan 15, 2024

Closing this issue in favor of the list of observability ideas linked above 🙂
