
implement testing #3

Open
haraldschilly opened this issue Apr 10, 2018 · 0 comments
haraldschilly commented Apr 10, 2018

The goal of this ticket is to implement an annotation scheme for each entry which, in the end, makes sure that the examples we want to test actually work.

So far, there is a primitive text_examples function here in the main file, and an associated attribute for testing here. This approach won't work, because running all examples from scratch takes way too much time. It would be better to wrap this in a persistent session and reset the environment before each example. Maybe this can be done with the usual doctest machinery, or we have to use pexpect or Jupyter kernels. My idea is to start a "main session" and, for each test, fork off from that process and drive it via pexpect; I don't know whether doctest already does something like this.
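
As a minimal sketch of the "persistent session plus per-example reset" idea, something along these lines could work with pexpect. The helper names, the prompt handling, and the reset strategy are all assumptions, and it only handles single-line statements (continuation prompts would need extra care):

```python
import pexpect

PROMPT = ">>> "

def start_session():
    """Spawn one long-lived Python interpreter shared by all examples."""
    child = pexpect.spawn("python3", ["-i", "-q"], encoding="utf-8", timeout=30)
    child.expect_exact(PROMPT)
    return child

def run_snippet(child, code):
    """Send a single line of code and return whatever the interpreter printed."""
    child.sendline(code)
    child.expect_exact(PROMPT)
    # child.before holds the echoed input plus its output; drop the echo line
    return child.before.partition("\n")[2].strip()

def reset(child):
    """Crude per-example reset of the interpreter's global namespace."""
    run_snippet(child, "globals().clear()")
```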

Besides that, it also needs to take the "setup:..." code part into account.
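
Building on the same hypothetical helpers, the setup part could simply be replayed after the reset and before the example itself; the entry keys ("setup", "example") are assumptions about the data format, not the actual schema:

```python
def run_entry(child, entry):
    """Reset, replay the optional "setup:" code, then run the example itself."""
    reset(child)
    for line in entry.get("setup", "").splitlines():
        run_snippet(child, line)
    return [run_snippet(child, line) for line in entry["example"].splitlines()]
```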

Finally, this is only an "opt-in" mechanism. Maybe we want to switch it to "opt-out", so that all examples are run and checked for errors, unless an entry carries a marker saying the example is expected not to work.
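
As a rough sketch of the opt-out variant: the runner would test everything by default and skip only entries carrying an explicit marker (the field name below is purely illustrative):

```python
def should_test(entry):
    # Opt-out: every example is tested unless the entry is explicitly
    # flagged as known-broken.
    return not entry.get("skip_test", False)
```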
