Strategies from type descriptions #293
I've been discussing this with @DRMacIver by email, copied below for anyone interested and for posterity.
Hi David. I've been looking at (this issue) over the weekend, and I think I could put together a solid extras module for it. I'd be targeting extras because it's only practical on 3.5+. There are also some cases where there's no reasonable output, and I'd like to build something solid but with clearly defined boundaries - i.e. automatic, not magic. Or at least not terribly black magic... This could be useful in several ways. I imagine it would make Hypothesis a little more popular with mypy enthusiasts, but more interesting to me is what you could build on top: for example, a generic test which asserts that two functions have identical output (oracle pattern), that two functions invert each other's output (round-trip), a check for idempotence, etc. - a library of predefined properties that a user can just drop their functions into. Before I go through the less exciting bits of handling all the fiddly edge cases, docs, tests, etc., would you accept such a pull?
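To make the "library of predefined properties" idea concrete, here is a minimal sketch of what such property templates could look like. The helper names (`assert_equivalent`, `assert_roundtrip`, `assert_idempotent`) are made up for illustration; only the ordinary Hypothesis `given` decorator is real API.

```python
from hypothesis import given, strategies as st

def assert_equivalent(f, g, strategy):
    """Oracle pattern: f and g must agree on every generated input."""
    @given(strategy)
    def inner(x):
        assert f(x) == g(x)
    inner()

def assert_roundtrip(encode, decode, strategy):
    """Round-trip: decode must invert encode."""
    @given(strategy)
    def inner(x):
        assert decode(encode(x)) == x
    inner()

def assert_idempotent(f, strategy):
    """Idempotence: applying f twice is the same as applying it once."""
    @given(strategy)
    def inner(x):
        assert f(f(x)) == f(x)
    inner()

# Users just drop their functions in:
assert_roundtrip(repr, eval, st.integers())
assert_idempotent(sorted, st.lists(st.integers()))
```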
(reply email by @DRMacIver) "It's complicated." I'm not opposed to such a pull request, but there are a huge number of gotchas, and difficult things that I think the feature would require in order to be useful. So I'm not saying no, but I am warning you that between my requirements and the intrinsic difficulty of the task, it's going to be a fairly big undertaking. Here is my minimum set of requirements for such a feature (note: all names are made up on the spot and you should feel free to choose your own):
There might be others I'm currently forgetting! The fundamental reason for these requirements is this: type-based property-based testing is an attractive nuisance. It's convenient but badly limited: it works nicely for maybe 80% of use cases and then completely screws you over for the remaining 20%. Hypothesis's strategy library is much more flexible than anything you could possibly do with type-based approaches. Anything which makes it even slightly inconvenient for people to switch from type-based strategies to explicitly specified ones is, in the end, a net negative for the end user. On top of this:
If despite these caveats you still want to do it, I'd be happy to help with any questions you might have, etc., from fairly early on in the process. I do think this feature is probably inevitable at some point; it's just a bit scary and I'm not in any rush.
(me again) I think we have different things in mind, which is complicating the discussion: you seem to be talking about an upgrade to inference throughout Hypothesis, while I have in mind a "lookup with helper functions" approach. This would automatically meet requirements R3 and R5. R1 and R2 are basically the same thing; conceptually, a parameter for a supplementary lookup dict of type: strategy would suffice in practice.

Challenges: C1 and C2 I think are bypassed by the "lookup with helper functions" approach; I don't expect people to use this on Hypothesis internals (which would need to be type-annotated first...). C3 and C4: I agree. The typing module is young, there's no consistent way to get the element type of container types, backward compatibility is terrible, etc. I think this is all manageable - a lot of work to cover all the edge cases and provide a nice interface - but workable if I can stick it in an extra that requires 3.5+ and avoid doing too much to the internals. Does this scope seem reasonable to you?
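A minimal sketch of the supplementary-lookup idea from R1/R2, assuming a hypothetical `strategy_for` helper (not Hypothesis API at the time): a default dict of type: strategy, with user-supplied entries taking precedence.

```python
from hypothesis import strategies as st

# Hypothetical default registry; real coverage would be much wider.
DEFAULT_LOOKUP = {
    int: st.integers(),
    float: st.floats(),
    bool: st.booleans(),
    str: st.text(),
}

def strategy_for(typ, extra_lookup=None):
    """Resolve a type to a strategy, preferring user-supplied entries."""
    lookup = dict(DEFAULT_LOOKUP)
    lookup.update(extra_lookup or {})
    try:
        return lookup[typ]
    except KeyError:
        raise NotImplementedError('No strategy known for %r' % (typ,))

# Overriding a default is just another dict entry:
nonneg = strategy_for(int, extra_lookup={int: st.integers(min_value=0)})
```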
(reply email by @DRMacIver)
That seems much more reasonable, yes, and is largely the subset of this feature that I think is a good idea, so I'm keen. :-)
Can you outline a use case for this? Would this be used instead of given, or within given? Either way, this seems to fall afoul of making it difficult to transition between type-based and strategy-based test definitions. Also, what do you have in mind for asserting that the return type is correct? Are you just planning to look at function annotations, or to dynamically check the resulting data?
It probably would be nice to get things type annotated, especially the strategies module. Might help with the "check the return type is correct" part too. Up to you if you want to tackle that though.
It does. I'd quite like it if you could look into how much work it is to make this work with the backport of the typing module - if the answer is little to none, then it would be nice to make it work across all the Python versions Hypothesis supports. If the answer is lots, then I have no problem restricting it to 3.5+ only. Even if you do, it might be worth shimming the typing module into some sort of compatibility layer.
Now we're out of email territory, and I'm writing only as myself. Hopefully that wasn't too hard to follow!
It's a lookup function that returns a strategy, just like the type-based lookup (and hopefully merged with it). So you call it with a function and get a strategy, inline in given. The inference / manual transition is no better or worse than for purely type-based lookup - plenty of work, but manageable.
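A hedged sketch of that function-based lookup, reusing the hypothetical `strategy_for` helper from above: read the function's parameter annotations and turn them into keyword strategies for `given`. None of these names were Hypothesis API at the time.

```python
import inspect
from hypothesis import given

def strategies_from(func, extra_lookup=None):
    """Map each annotated parameter of func to a strategy via strategy_for."""
    sig = inspect.signature(func)
    return {
        name: strategy_for(param.annotation, extra_lookup)
        for name, param in sig.parameters.items()
        if param.annotation is not inspect.Parameter.empty
    }

def sorted_pair(a: int, b: int):
    return (a, b) if a <= b else (b, a)

# Used inline in given, as described:
@given(**strategies_from(sorted_pair))
def test_sorted_pair(a, b):
    lo, hi = sorted_pair(a, b)
    assert lo <= hi
```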
Each time the search strategy draws, it asserts that the return value is an instance of the annotated type before returning it. I'm not certain how failures would show up (i.e. at draw time), but I think it's a health problem for the strategy to return values of the wrong type.
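One hedged way to realise "assert at draw time", purely as an illustration: wrap a strategy of argument tuples in `.map()` so every drawn result is type-checked before the strategy returns it. `returns_checked` is a made-up name, and failure reporting would need more care in practice.

```python
from hypothesis import strategies as st

def returns_checked(func, args_strategy, return_type):
    """A strategy of func(*args) results that asserts the annotated type."""
    def call_and_check(args):
        result = func(*args)
        assert isinstance(result, return_type), (
            '%r returned %r, not an instance of %r' % (func, result, return_type))
        return result
    return args_strategy.map(call_and_check)

# Example: an annotation the implementation silently violates.
def halve(x: int) -> int:
    return x / 2  # true division returns float, not int, in Python 3

results = returns_checked(halve, st.tuples(st.integers()), int)
```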
I think that's a logically separate pull - and this work will be large enough as it is.
Hmm. I think it's probably possible to make this work with the backport, but without type annotations (3.5+ for the typing module; function annotations themselves go back to 3.0 IIRC), it's going to be substantially less useful. Let's park this for now - I agree that it would be nice, but I'll develop a working version for 3.5 first and then revisit backports. Same principle for a compatibility shim - it may well be necessary, or at least desirable, at some point, but I'll get an MVP without it first.
Right, I misunderstood what you meant here. That sounds fine. A thing that might be worth considering: instead of making this a separate thing, add it into the main strategies module.
I suspect you will find this finicky - a lot of return types cannot be checked at runtime (e.g. function types), or are expensive to check (e.g. the element types of parametrised lists). I've no objection in principle to it doing this, though.
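A quick illustration of why this is finicky: `isinstance()` refuses parametrised generics outright, so a cheap runtime check can only see the erased container type. (Exact error wording varies by Python version.)

```python
from typing import Callable, List

# Parametrised generics cannot be used in instance checks at all:
try:
    isinstance([1, 2, 3], List[int])
except TypeError as err:
    print(err)  # e.g. "Subscripted generics cannot be used with ... checks"

# A cheap check only sees the erased origin type:
print(isinstance([1, 2, 3], list))  # True, but element types go unchecked
print(isinstance(len, Callable))    # True, but the signature goes unchecked
```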
These are all fine by me.
I like the idea, but would be nervous about inference in …
It turns out that …
I now have a WIP draft, so it's probably time to think about how this might be merged. I see two main parts and one optional part to the work.
@DRMacIver, any feedback you have on part 1 would be welcome, so I can work on things useful for a pull; everything else should fall out nicely as we go from there.
Referring to the `typing` module. This would make it easy to generate structured mock data. The recent PEP thing got me interested, as well as python/mypy, which @Daenyth clued me in on.

The first part - generating strategies from types - is basically done (some work here), although not all possible types are supported.
The second - getting strategies from function annotations - is similarly straightforward in Python 3. But I'm not sure whether it's possible to get the function annotations in Python 2; I'd welcome info on this if anyone knows.
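For reference, the Python 3 mechanism is just the `__annotations__` attribute (also exposed via `inspect.signature`); Python 2 has no function annotation syntax, which is what makes the backport question hard.

```python
def scale(vector: list, factor: float) -> list:
    return [x * factor for x in vector]

print(scale.__annotations__)
# {'vector': <class 'list'>, 'factor': <class 'float'>, 'return': <class 'list'>}
```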