magic() function to write the entire test, like hypothesis-auto
#2118
Comments
|
Having thought about this some more, I think that creating a function at runtime is the wrong approach - that kind of magic is very hard to learn from, let alone edit and extend. However, spitting out example source code would answer all my objections: it gives users a solid starting point, educates as well as assists, and there's no difference between the output and any other Hypothesis test. The module could be called something like `ghostwriter`. As a bonus, test templates give us an easy way to show several common properties and test tactics, from round-trips to simple fuzzing. |
|
Demo: master...Zac-HD:ghostwriter

The implementation is pretty ugly, but it does spit out tests which usually work and are always a decent starting point for actual development. CC @timothycrosley FYI - you might have some feedback based on ... |
|
@HypothesisWorks/hypothesis-python-contributors: a demo, for your viewing pleasure and comments:

```
$ hypothesis.ghostwriter fuzz re.compile --except re.error
```

```python
# This test template was produced by the `hypothesis.ghostwriter` module.
import re

from hypothesis import assume, given, strategies as st

# TODO: replace st.nothing() with an appropriate strategy

@given(pattern=st.nothing(), flags=st.just(0))
def test_fuzz_compile(pattern, flags):
    try:
        re.compile(pattern=pattern, flags=flags)
    except re.error:
        assume(False)
```

And if you replace ...

Here's a somewhat longer demo: json parsing is really configurable...

```
$ hypothesis.ghostwriter roundtrip json.dumps json.loads
```

Ghostwritten json-round-trip test:

```python
# This test template was produced by the `hypothesis.ghostwriter` module.
import json

from hypothesis import assume, given, strategies as st

# TODO: replace st.nothing() with an appropriate strategy

@given(
    allow_nan=st.booleans(),
    check_circular=st.booleans(),
    cls=st.none(),
    default=st.none(),
    ensure_ascii=st.booleans(),
    indent=st.none(),
    obj=st.nothing(),
    separators=st.none(),
    skipkeys=st.booleans(),
    sort_keys=st.booleans(),
    encoding=st.none(),
    object_hook=st.none(),
    object_pairs_hook=st.none(),
    parse_constant=st.none(),
    parse_float=st.none(),
    parse_int=st.none(),
)
def test_roundtrip_dumps_loads(
    allow_nan,
    check_circular,
    cls,
    default,
    ensure_ascii,
    indent,
    obj,
    separators,
    skipkeys,
    sort_keys,
    encoding,
    object_hook,
    object_pairs_hook,
    parse_constant,
    parse_float,
    parse_int,
):
    value0 = json.dumps(
        obj=obj,
        skipkeys=skipkeys,
        ensure_ascii=ensure_ascii,
        check_circular=check_circular,
        allow_nan=allow_nan,
        cls=cls,
        indent=indent,
        separators=separators,
        default=default,
        sort_keys=sort_keys,
    )
    value1 = json.loads(
        s=value0,
        encoding=encoding,
        cls=cls,
        object_hook=object_hook,
        parse_float=parse_float,
        parse_int=parse_int,
        parse_constant=parse_constant,
        object_pairs_hook=object_pairs_hook,
    )
    assert obj == value1
```

(Note that the output is formatted with Black, if it's available!) Admittedly you'll probably want to delete a bunch of these parameters, but as a template to start from I think it's better to err on the side of specifying too much - it's much safer to rely on users deleting arguments they don't care about than adding those they should. So - your thoughts?
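As a rough illustration of that trimming step (not ghostwriter output; the recursive `json_values` strategy and the pared-down signature are assumptions made for this example), the template above might end up looking something like:

```python
# Hand-trimmed version of the ghostwritten template - illustrative only.
# Assumes default dumps/loads options and a hand-written recursive JSON strategy.
import json

from hypothesis import given, strategies as st

# JSON-representable values: None, bools, ints, finite floats, strings,
# and lists/string-keyed dicts of the same (NaN excluded so equality holds).
json_values = st.recursive(
    st.none()
    | st.booleans()
    | st.integers()
    | st.floats(allow_nan=False, allow_infinity=False)
    | st.text(),
    lambda children: st.lists(children) | st.dictionaries(st.text(), children),
)

@given(obj=json_values)
def test_roundtrip_dumps_loads(obj):
    assert obj == json.loads(json.dumps(obj))
```

Everything left at its `json`-module default simply disappears from the test, which is exactly the "delete the arguments you don't care about" workflow described above.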
|
|
amazing! |
|
Thanks @auvipy! I know I'm glad to have finally shipped this, and I hope people find it useful 😄 |
|
@Zac-HD this is awesome! Is it safe for me to redirect anyone who stumbles onto the hypothesis-auto experiment, to this new feature of hypothesis? Is there anything missing? Really happy to see this functionality in the main project! |
Thanks! I am also super excited to have it working upstream. I still have #2548 to add some final bonus features, but it's feature-complete right now - if you want to make a final release of hypothesis-auto ...

My plan at the moment is to keep this relatively quiet and seek feedback from early users, and then do a bigger promotional push after my talk at PyCon Australia in two weeks. So feel free to share it around in small groups, especially if people are willing to let me know how it goes, but please hold off on submitting it to Reddit or whatever 😉 |
I would like to use it in open source projects like celery and its dependencies to find subtle bugs alongside the regular example-based tests! py-amqp, kombu and billiard would be the best places to start, I guess, as they are celery dependencies! |
Sounds good - just remember that outputs from the ghostwriter are just normal property-based tests. It does the first 50-80% of the job for you (depending on type annotations, mostly) but in the end it's your pull request 😄. Good luck and skill! |
yeah definitely! but it will reduce at least 50-80% of my effort O:) and will work as a Swiss Army knife for generating property-based tests <3 |
@timothycrosley's hypothesis-auto extension made some waves on social media lately, going a step beyond our existing inference logic for strategies to create your entire test! As discussed on #2103, we're mutually keen to grow that kind of functionality upstream, so this issue is to track where we're up to and hash out any design issues.

The ideal is to provide an easy way to start property-based testing, and a seamless transition to our current API once you need it. Users getting stuck and abandoning PBT altogether would not be a good outcome!
Proposed API
Minimal changes from the existing downstream API, aimed mostly at making it smaller, easier to teach, and easy to transition off at the end. By example:
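As a hedged stand-in for that example, this is the kind of call the downstream hypothesis-auto package already supports, together with a hypothetical minimal upstream spelling - the `magic` name comes from this issue's title and is not a committed design:

```python
# Downstream today: hypothesis-auto builds and runs an entire test from a callable,
# inferring strategies from the type annotations.
from hypothesis_auto import auto_test

def add(a: int, b: int) -> int:
    return a + b

auto_test(add)  # generates inputs from the annotations and calls add()

# Hypothetical upstream equivalent sketched in this issue (assumed, not shipped):
# from hypothesis import magic
# test_add = magic(add)
```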
We could add a `kwargs` argument for a dictionary of strategies, an `oracle` to check that the return value is OK, and perhaps `settings` (more options, but an easier transition than `auto_runs_`). My concern is that over-complicating it encourages people to invest really deeply in the `auto_*` model, and discourages moving to traditional Hypothesis tests. Perhaps add all the arguments, but make it an error to specify more than one of them? Passing two or three could print source code for an equivalent test, which I'd like to have accessible some other way too... Not really sure of this part, but that's what discussion is for!
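To make those options concrete, here is a hedged sketch of what such a call could desugar to, built only on the public `@given`/`settings` API; the `auto_test`, `kwargs`, and `oracle` names are taken from the discussion above and do not exist as a shipped interface.

```python
# Hypothetical desugaring of the options discussed above - a sketch only.
# auto_test/kwargs/oracle are names from this thread, not a shipped interface;
# @given, settings, and strategies are the real Hypothesis APIs underneath.
from hypothesis import given, strategies as st
from hypothesis import settings as hyp_settings

def auto_test(func, *, kwargs, oracle=None, settings=None):
    # Bundle the per-argument strategies into a single dict-valued strategy.
    @given(st.fixed_dictionaries(kwargs))
    def inner(generated):
        result = func(**generated)
        if oracle is not None:
            assert oracle(result)  # user-supplied check on the return value
    if settings is not None:
        inner = settings(inner)  # a settings object works as a decorator
    inner()  # run immediately, as hypothesis-auto does

def encode(text: str, sep: str = "-") -> str:
    return sep.join(text)

auto_test(
    encode,
    kwargs={"text": st.text(), "sep": st.just("|")},
    oracle=lambda result: isinstance(result, str),
    settings=hyp_settings(max_examples=50),
)
```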
What next?
Timothy has a few other things on the go - including easier `isort` config with `black`, which we'll probably be using soon - but does intend to come back to this soon. There might be a `@given`-based version downstream first, or we might do it here and add docs + runtime warnings recommending the new upstream version.