One thing that we're missing in constrained-generators is the ability to see what the distribution of generated data is (to spot things where we aren't testing some feature because of some mistake in the generator).
The natural way to implement this would be with a predicate like monitor :: (Property -> Property) -> Pred fn, allowing us to write something like:
spec :: Spec fn (Either Int Bool)
spec = constrained $ \ e ->
  caseOn e
    (branch $ \ i -> [monitor $ label "left", assert $ 0 <. i])
    (branch $ \ b -> [monitor $ label "right", assert b])
or to create reusable combinators like:
monitorPositive :: Term fn Int -> Pred fn
monitorPositive i = ifElse (0 <. i) (monitor $ label "positive") (monitor $ label "non-positive")
This could be combined with reify to classify based on any boolean function over generated values, and would generally be useful to make sure distributions don't suck.
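For example, assuming reify has roughly the shape reify :: Term fn a -> (a -> b) -> (Term fn b -> Pred fn) -> Pred fn, a reusable classifier over an arbitrary Haskell predicate could look like this (a hypothetical, untested sketch; monitorBy and monitorParity are made-up names, and the exact reify signature is an assumption):

```haskell
-- Hypothetical sketch: label a generated Int by an arbitrary
-- Haskell predicate, lifted into the constraint language via reify.
monitorBy :: String -> String -> (Int -> Bool) -> Term fn Int -> Pred fn
monitorBy yes no f t =
  reify t f $ \ b ->
    ifElse b (monitor $ label yes) (monitor $ label no)

-- e.g. track the parity distribution of generated values
monitorParity :: Term fn Int -> Pred fn
monitorParity = monitorBy "even" "odd" even
```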
The tricky thing is making the design work out in practice, but I think it can be done by adding a Property -> Property field to the Pred constructors and collecting the coverage information during spec checking: collectCoverageSpec :: Spec fn a -> a -> Property -> Property (or with a Writer component on one of the established runner functions).
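To illustrate the collection step, here is a minimal standalone sketch (not the real constrained-generators API; Property is a toy stand-in for QuickCheck's type, and Pred is reduced to three constructors): checking a predicate both decides satisfaction and folds the Property -> Property modifiers carried by Monitor constructors, in the spirit of collectCoverageSpec.

```haskell
-- Toy stand-in for QuickCheck's Property: just labels plus a result.
data Property = Property { labels :: [String], holds :: Bool }
  deriving Show

label :: String -> Property -> Property
label s p = p { labels = s : labels p }

-- Miniature predicate language over a value of type a.
data Pred a
  = Assert (a -> Bool)                   -- plain boolean check
  | Monitor (a -> Property -> Property)  -- coverage modifier
  | Block [Pred a]                       -- conjunction

-- Check a predicate against a value while threading the Property,
-- so Monitor fields accumulate coverage information as we go.
checkPred :: Pred a -> a -> Property -> Property
checkPred (Assert f)  x p = p { holds = holds p && f x }
checkPred (Monitor m) x p = m x p
checkPred (Block ps)  x p = foldr (\q acc -> checkPred q x acc) p ps

monitorPositive :: Pred Int
monitorPositive = Monitor $ \ i ->
  label (if 0 < i then "positive" else "non-positive")

main :: IO ()
main = do
  let run v = checkPred (Block [monitorPositive, Assert (> 0)]) v
                        (Property [] True)
  print (run 5)     -- labelled "positive", holds
  print (run (-3))  -- labelled "non-positive", fails
```

The same shape works if the collection instead lives in a Writer component on an existing runner: the Monitor case would tell its modifier rather than apply it directly.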
All in all, a worthwhile thing to do that shouldn't be too hard, and it will give us some certainty that our tests are actually meaningful.