
Introduce distribution logging to constrained-generators #4217

Closed · MaximilianAlgehed opened this issue Mar 25, 2024 · 0 comments · Fixed by #4301
Labels: 🕵️ testing, enhancement

MaximilianAlgehed (Collaborator) commented:

One thing that we're missing in constrained-generators is the ability to see what the distribution of generated data is (to spot cases where we aren't testing some feature because of a mistake in the generator).

The natural way to implement this would be with something like a monitor :: (Property -> Property) -> Pred fn predicate, allowing us to do something like:

spec :: Spec fn (Either Int Bool)
spec = constrained $ \ e ->
  caseOn e
    (branch $ \ i -> [ monitor $ label "left", assert $ 0 <. i ])
    (branch $ \ b -> [ monitor $ label "right", assert b ])

or create reusable helpers like:

monitorPositive :: Term fn Int -> Pred fn
monitorPositive i =
  ifElse (0 <. i)
    (monitor $ label "positive")
    (monitor $ label "non-positive")
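Since these use QuickCheck's label, running a property built from such a spec would print a distribution table over the labels, so a generator that never hits one of the branches becomes immediately visible.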

This could be combined with reify to classify on any boolean function over the generated values as well, and would generally be useful to make sure distributions don't suck.
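For instance, classifying on a derived value might look something like the sketch below (monitorParity is a hypothetical helper, and this assumes a reify of roughly the shape reify :: Term fn a -> (a -> b) -> (Term fn b -> Pred fn) -> Pred fn, modulo constraints):

-- Hypothetical helper; assumes
-- reify :: Term fn a -> (a -> b) -> (Term fn b -> Pred fn) -> Pred fn
monitorParity :: Term fn Int -> Pred fn
monitorParity i =
  reify i even $ \ isEven ->
    ifElse isEven
      (monitor $ label "even")
      (monitor $ label "odd")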

The tricky thing is making the design work out in practice, but I think it can be done by adding a Property -> Property field to the relevant constructors and collecting the coverage information during spec checking: collectCoverageSpec :: Spec fn a -> a -> Property -> Property (or with a Writer component on one of the established runner functions).
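As a rough, self-contained sketch of the shape of that idea (toy types only, not the real Pred fn; Monitor and collectCoverage are hypothetical names):

{-# LANGUAGE GADTs #-}
import Test.QuickCheck (Property)

-- Toy model only: the real Pred fn has many more constructors.
data Pred where
  Monitor :: (Property -> Property) -> Pred
  And     :: [Pred] -> Pred

-- Fold all the monitors in a predicate into a single Property
-- modifier, to be applied when the spec is checked.
collectCoverage :: Pred -> (Property -> Property)
collectCoverage (Monitor f) = f
collectCoverage (And ps)    = foldr ((.) . collectCoverage) id ps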

All in all, a worthwhile thing to do that shouldn't be too hard and will give us some certainty that our tests are actually meaningful.
