Automating keeping documentation examples up to date #711
FWIW pytest uses regendoc for this. I'm not sure how suitable this is for projects other than pytest (and it's pretty bare-bones at the moment, check pytest's doc makefile for how to use it). It seems to work pretty well for pytest though.
As an additional data-point about the shape of what a solution to this should look like, I've just changed something in #710 on the basis of how bad it made the documentation look. This is a strong argument in favour of making the diffs visible in the changelog. It also means I have to redo all of the bits I've manually fixed so far. :-(
@The-Compiler Interesting. Does this actually do the doctest stuff already? It looks more like it's for keeping version numbers and stuff up to date.
Hmm, right, I don't think it does doctest-like things; it's more intended for keeping command output in the docs up to date. See e.g. https://github.com/pytest-dev/pytest/pull/1780/files for a bigger change - it just reruns all invocations and updates the output.
Yeah, the specific use-case is that we have a lot of console examples that are currently run via doctests, but their output necessarily changes when Hypothesis's implementation does.
Initial investigation of regendoc is that it's a bit on the primitive side. In particular there isn't any sort of global setup like we currently do for doctest. OTOH it is also very small, so we could just maintain a patched fork...
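For readers who haven't seen it, here is a minimal sketch of the kind of global doctest setup being referred to, using sphinx.ext.doctest's doctest_global_setup option; the imports and the fixed seed are illustrative, not Hypothesis's actual configuration.

```python
# docs/conf.py (sketch, illustrative only).  sphinx.ext.doctest runs the code
# in doctest_global_setup before the doctests in each documented file, which
# is the sort of shared setup a plain example-rerunning tool doesn't offer.
extensions = ["sphinx.ext.doctest"]

doctest_global_setup = """
import random
from hypothesis import strategies as st

# Pin the global RNG so example output has a chance of being reproducible.
random.seed(0)
"""
```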
Yeah, the current situation is completely untenable I'm afraid. I've just discovered another problem: because it's relying on the global random number generator, any change in testing has arbitrary knock-on effects for other changes in testing. I would much rather have out-of-date docs than deal with this for any changes I'm making to Hypothesis, so I'm going to open a pull request disabling doctests until we can resolve this issue. Sorry.
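A toy illustration (not Hypothesis code) of the knock-on effect described above: once examples share the global random number generator, changing how many values one example draws silently changes what every later example prints.

```python
import random

# Two "documentation examples" sharing the module-level RNG state.
random.seed(0)

# Example A: if a change elsewhere makes this draw four values instead of
# three, Example B's documented output changes too, even though B is untouched.
example_a = [random.randint(0, 100) for _ in range(3)]

# Example B: its output depends on how many draws happened before it ran.
example_b = random.randint(0, 100)

print(example_a, example_b)
```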
Helpful suggestion from Twitter (https://twitter.com/meejah/status/881545340360376320): basically, if we split the doctest examples out into their own files and include them in sphinx with a literalinclude, we can then run doctest on those files rather than on the documentation. It's then much easier to update those examples. We'd still have to write a small script to do it, but it should be comparatively straightforward I think.
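A sketch of the checking half of that suggestion, assuming the examples end up as standalone text files under something like docs/examples/ (the directory and *.txt glob are made up for illustration): literalinclude pulls them into the rendered docs, and CI runs the stdlib doctest runner directly over the files.

```python
# check_examples.py (sketch) - run doctest over standalone example files that
# the documentation pulls in via literalinclude.  The docs/examples directory
# and the *.txt extension are assumptions, not Hypothesis's real layout.
import doctest
import pathlib
import sys

failed = 0
for path in sorted(pathlib.Path("docs/examples").glob("*.txt")):
    result = doctest.testfile(str(path), module_relative=False)
    print(f"{path}: {result.attempted} examples, {result.failed} failures")
    failed += result.failed

sys.exit(1 if failed else 0)
```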
(Referenced pull request: Don't run doctests until #711 is resolved)
I have a strong preference for option #1, to catch examples that we deprecate or outright break. Nobody wants to read the docs and see an example followed by a massive traceback - even if it's honest, fixing the example is better. I also prefer to keep the examples inline as they are now for ease of editing, so I volunteer.
We accept your sacrifice. 😈 More seriously, if you're willing to do it then I agree that this would be the best option. Thanks.
Yup, fine by me.
Turns out that the implementation of the sphinx doctest builder is far wartier than I'd expected, due to shoehorning doctests into a builder API, but we still want to use it for its nice setup and discovery features. So it looks like the best way forward is to... parse the output of
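The comment trails off here, so the following is only a guess at the shape of that approach: invoke the doctest builder as a subprocess and scan what it prints for failure reports. The paths and the failure markers matched below are assumptions, not a documented, stable output format.

```python
# Sketch: run the Sphinx doctest builder and pull the failure reports out of
# its output so a later tool could locate and regenerate those examples.
import subprocess
import sys

proc = subprocess.run(
    ["sphinx-build", "-b", "doctest", "docs", "docs/_build/doctest"],
    capture_output=True,
    text=True,
)

# The builder reports failures in the usual doctest format; keep the lines
# that identify the failing document and example.
interesting = [
    line
    for line in proc.stdout.splitlines()
    if line.startswith("File ") or "Failed example" in line
]
print("\n".join(interesting))
sys.exit(proc.returncode)
```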
I really like that we check in CI that the documentation examples are up to date. I think it's a great addition to the build and very much want to keep it.

But as it stands it's basically untenable, because it means that every time I make a change to something low-level in Hypothesis that has an impact on the data generation, I have to run the following loop: rebuild the docs, find the examples whose output has changed, update them by hand, and repeat until the build passes.

In particular, if I do something that changes the distribution of integers() I have to run this loop several dozen times. This is both slow (the docs build process is ~10 seconds) and incredibly annoying. I'm doing this for #710 at the moment and I basically never want to do it again ever. This makes me want to not touch Hypothesis internals at all, which is pretty counterproductive.

So in order to get the good without the annoyance, we need some form of automation.
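To make "some form of automation" concrete, here is a rough sketch of an updater, borrowing the assumption from the literalinclude suggestion above that each console example lives in a standalone doctest text file: it reruns every example and rewrites the recorded output in place. It deliberately ignores indented examples, expected exceptions, and doctest option flags, so it is a starting point rather than a working tool.

```python
# regen_examples.py (sketch) - rerun the examples in standalone doctest files
# and replace their recorded output with whatever they print now.  Handles
# only top-level (unindented) examples; expected exceptions and doctest
# option flags are ignored.
import contextlib
import doctest
import io
import pathlib
import sys


def regenerate(path: pathlib.Path) -> None:
    parser = doctest.DocTestParser()
    globs: dict = {}
    pieces = []
    # parse() returns plain text chunks interleaved with Example objects.
    for part in parser.parse(path.read_text(), str(path)):
        if not isinstance(part, doctest.Example):
            pieces.append(part)
            continue
        # Re-emit the example's source with interactive prompts...
        lines = part.source.rstrip("\n").split("\n")
        pieces.append(">>> " + lines[0] + "\n")
        pieces.extend("... " + line + "\n" for line in lines[1:])
        # ...then execute it and record its current output as the new expectation.
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):
            exec(compile(part.source, str(path), "single"), globs)
        pieces.append(buffer.getvalue())
    path.write_text("".join(pieces))


if __name__ == "__main__":
    for name in sys.argv[1:]:
        regenerate(pathlib.Path(name))
```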
Options I can think of:
I also propose adding an item to the guide to the tune of "to the greatest extent possible, any checks for which the fixes could be automated should be automated".