
Support custom doctest parser hooks (perhaps module-specific) #26774

Open
embray opened this issue Nov 27, 2018 · 2 comments

Comments

embray (Contributor) commented Nov 27, 2018

It would be nice if individual modules or sub-packages in Sage could register custom doctest result parsers (perhaps still requiring manual enablement within individual tests via an appropriate # <keyword> tag).

This would allow specific code areas to register custom logic for parsing specialized doctest output. For example, this would have been useful for the complex_arb tests in #26360, for parsing complex ball representations.

Rather than build every possibility directly into the doctest framework, it would be better if customized parsers could be registered and enabled as needed. Perhaps they could be enabled once on a per-file basis, so each file that needs a custom parser would have to enable it explicitly for it to work. This would avoid clutter in the majority of cases where those special cases aren't needed.
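A purely hypothetical sketch of what such a registration API might look like (every name here, including register_doctest_parser and the complex_ball keyword, is invented for illustration and is not an existing Sage API):

```python
# Hypothetical sketch only -- none of this is an existing Sage API.
# A module registers a comparison hook under a keyword; a doctest
# would then opt in with a marker such as "# parser: complex_ball".

_CUSTOM_PARSERS = {}

def register_doctest_parser(keyword, compare):
    """Register ``compare(want, got) -> bool`` under ``keyword``."""
    _CUSTOM_PARSERS[keyword] = compare

def check_output(want, got, keyword=None):
    """Use the registered hook if one is enabled, else exact matching."""
    if keyword in _CUSTOM_PARSERS:
        return _CUSTOM_PARSERS[keyword](want, got)
    return want == got

# A module such as sage/rings/complex_arb.py could then register a
# lenient comparison for ball output, e.g. one ignoring whitespace:
register_doctest_parser(
    'complex_ball',
    lambda want, got: want.replace(' ', '') == got.replace(' ', ''),
)
```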

This would also be useful for third-party libraries that use Sage and want to use the Sage doctester.

Component: doctest framework

Issue created by migration from https://trac.sagemath.org/ticket/26774

nbruin (Contributor) commented Apr 16, 2019

comment:1

I think it is wasted effort to go to great lengths to parse output to validate tests. It's a test! It's under our control. Your parser will probably just be undoing the work that the repr method has just performed.

Just write the test so that it prints True if the result matches expectations and False otherwise. If you want to verify arb results, just test that the centre point and radius are where you expect them to be. If you want to illustrate print output without insisting on an exact string match, mark the test # random and then explicitly test the properties of the object you want to validate.
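For instance, a doctest following that pattern might look like this (a sketch assuming Sage's ComplexBallField, with .mid() and .rad() giving the midpoint and radius; the printed ball is illustrative):

```python
sage: b = ComplexBallField(53)(1/3)
sage: b  # random
[0.3333333333333333 +/- 3.71e-17]
sage: abs(b.mid() - 1/3) < 1e-15 and b.rad() < 1e-15
True
```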

If you want to allow for a little bit of variation in float results (with IEEE this should not be necessary), then string matching is not the right tool.

Similarly, for testing sets: either construct the expected set explicitly and test for equality with it, or do something (sort by string rep?) to the output so that string matching gives a good test of the output.
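Both variants might look roughly like this (an invented example; Set and sorted are ordinary Sage/Python):

```python
sage: S = Set([3, 1, 2])
sage: S == Set([1, 2, 3])  # compare against the expected set directly
True
sage: sorted(S)            # or normalize the output before printing
[1, 2, 3]
```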

embray (Contributor, Author) commented Apr 16, 2019

comment:2

I mostly agree, of course; the real problem here is overreliance on the doctest framework in places where a simple assert want == got style test would do.

Nevertheless, sometimes it's also desirable to have doctests that serve as actual documentation: meaningful, readable examples that are themselves tested. I'm okay with using # random for these in some cases, as long as there's an equivalent test elsewhere that actually checks the value. But in many simple cases I don't think one needs to go to "great lengths" either.

See, for example, all the workarounds I've added for normalizing output differences between Python 2 and 3.
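As one illustration of the kind of difference involved, the same example sometimes had to be written twice during the Python 3 transition, roughly along these lines (a sketch assuming the # py2 / # py3 doctest tags Sage's doctester supported during the transition):

```python
sage: type(range(3))  # py2
<type 'list'>
sage: type(range(3))  # py3
<class 'range'>
```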
