YAML test loader #17

Open · chronitis wants to merge 3 commits into main
Conversation

@chronitis (Contributor)

(This one is perhaps a bit more speculative).

For the purposes of providing a cross-language test mechanism, this avoids the need to write Python/unittest boilerplate around what is otherwise a declarative list of code samples to test against. Instead, a loader script generates and runs the unittest from a YAML file.

The YAML file has the same structure as the unittest class variables that would otherwise be written. Indeed, all the loader does is load the top-level object as a dictionary and then construct a KernelTest class around it. Multiple YAML documents (each defining one kernel to test) can be bundled in one file with --- separators.
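As a rough sketch, the loader amounts to something like this (assuming jupyter_kernel_test's KernelTests base class; the runner details here are illustrative rather than the exact code in this PR):

```python
# Build a unittest.TestCase subclass per YAML document and run them all.
import sys
import unittest

import yaml
import jupyter_kernel_test


def make_test_class(spec):
    # Each top-level YAML key becomes a class attribute, mirroring a
    # hand-written KernelTests subclass.
    name = str(spec.get("kernel_name", "kernel")).title() + "Test"
    return type(name, (jupyter_kernel_test.KernelTests,), dict(spec))


if __name__ == "__main__":
    suite = unittest.TestSuite()
    with open(sys.argv[1]) as f:
        # safe_load_all handles multiple '---'-separated documents.
        for spec in yaml.safe_load_all(f):
            cls = make_test_class(spec)
            suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(cls))
    unittest.TextTestRunner().run(suite)
```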

@takluyver (Member)

Thanks for thinking about this. However, I don't think there is a major advantage to using YAML for this:

  • The test machinery, and the other Jupyter machinery it uses, is written in Python, so you still need a Python setup to run it, even if you're not writing a .py file.
  • With a Python file, it's easy to add custom tests for your kernel just by writing more methods/functions (see the sketch after this list).
  • A different format requires extra machinery to load and run the tests (your test_kernels script), whereas the Python test definitions use completely standard Python testing machinery, so there's less to maintain and they can easily be run by different test runners.
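To illustrate the second point: a hand-written test file is just a normal unittest module, and a custom check is one more method. A minimal sketch, using the attribute and helper names from jupyter_kernel_test (treat the exact signatures as assumptions):

```python
import unittest

import jupyter_kernel_test


class IPythonTests(jupyter_kernel_test.KernelTests):
    # The declarative part -- the same data a YAML/TOML file would hold.
    kernel_name = "python3"
    language_name = "python"
    code_hello_world = "print('hello, world')"

    # The extensible part: any extra check is just another method.
    def test_simple_arithmetic(self):
        reply, outputs = self.execute_helper(code="1 + 1")
        self.assertEqual(reply["content"]["status"], "ok")


if __name__ == "__main__":
    unittest.main()
```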

I also dislike YAML as a format - it's big and complex, and the Python bindings are big and complex, and they're not even safe by default: 'declarative' files can execute arbitrary code on load. If we do define a way to specify tests in a declarative format (which I'm not sold on, as described), I'd prefer to use TOML, which is simpler and lighter.
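To make the "not safe by default" point concrete: with PyYAML, loading a document with the full (unsafe) loader can instantiate arbitrary Python objects during parsing, which is why safe_load exists. A small illustration (assuming PyYAML):

```python
import yaml

# A 'declarative' document that is actually a code-execution payload:
doc = "exploit: !!python/object/apply:os.system ['echo pwned']"

# yaml.load(doc, Loader=yaml.UnsafeLoader) would run `echo pwned` while
# parsing -- the tag constructs os.system(...) as part of deserialisation.

# safe_load only builds plain types (str, int, list, dict, ...) and raises
# a ConstructorError on the python/* tags above.
print(yaml.safe_load("greeting: hello"))  # {'greeting': 'hello'}
```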

@chronitis (Contributor, Author)

My thought was that a declarative format seems better suited to establishing a standard, where kernels would provide a jkt.yaml/toml/whatever file for whatever CI testing or similar might be possible. I suppose arguing that this is better than a Python source file is more style than substance, however.

I don't particularly mind changing the format. I don't think safe parsing or the size of the bindings is particularly problematic, though the differing string-parsing rules and the resulting handling of newlines and escapes can be a bit tricky. In any case, I sense from the above that the choice of format isn't the main issue.

@takluyver (Member)

That is a reasonable point - I've been thinking in terms of kernels setting up tests themselves, e.g. with Travis (as we've done with IRkernel). But for an external service that runs the tests, a declarative config file may be more palatable. Setting up the environment to install and run the kernel is still challenging, though. I'll keep thinking about this.

@jankatins

knitpy has a similar thing, but using text files: https://github.com/JanSchulz/knitpy/tree/master/knitpy/tests

As the returned message is JSON, one could just dump the JSON content. The flow would be (sketched below):

  • write a small test.input
  • run the test, which fails -> this produces a test.output
  • check that it's the right result and rename it to test.expected
  • rerun the test and it passes
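A rough sketch of that flow as a unittest (run_kernel here stands in for "execute the input and dump the JSON replies" and is stubbed so the skeleton is self-contained; the file names are the only real interface):

```python
import json
import os
import unittest


def run_kernel(code):
    # Hypothetical stand-in: a real version would send `code` to the kernel
    # and serialise the reply messages as JSON.
    return json.dumps({"code": code, "status": "ok"}, indent=2)


class GoldenFileTest(unittest.TestCase):
    def test_against_expected(self):
        with open("test.input") as f:
            result = run_kernel(f.read())
        if not os.path.exists("test.expected"):
            # First run: write test.output and fail, so a human can inspect it.
            with open("test.output", "w") as f:
                f.write(result)
            self.fail("no test.expected yet; check test.output and rename it")
        with open("test.expected") as f:
            # Later runs: compare against the blessed output.
            self.assertEqual(result, f.read())


if __name__ == "__main__":
    unittest.main()
```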

@chronitis (Contributor, Author)

Just for comparison, I think this TOML document is equivalent to the YAML version. It's workable, but it also has some non-optimal features:

  • (opinion) expressing multiple tests is a bit ugly, since each test requires its own table with several keys:

```toml
[[code_display_data]]
code = "from IPython.display import HTML, display; display(HTML('<b>test</b>'))"
mime = "text/html"

[[code_display_data]]
code = "from IPython.display import Math, display; display(Math('\\frac{1}{2}'))"
mime = "text/latex"
```
  • since the file consists mostly of top-level keys (e.g. code_hello_world = "string"), they have to be listed before the nested tables ([[completion_samples]] and the like), otherwise they end up in the wrong dict. This could be solved by restructuring the data a bit, though (a loader sketch follows below):

```toml
[kernel_info]
kernel_name = "ipython3"
language_name = "python"

[basic_tests]
code_hello_world ...
```
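With that restructuring, ordering no longer matters and a loader can consume the file with any standard TOML parser. A minimal sketch (using Python 3.11's tomllib purely for illustration; the section names follow the example above):

```python
import tomllib  # Python 3.11+; the third-party tomli/toml packages behave similarly

with open("jkt.toml", "rb") as f:  # tomllib requires a binary file handle
    spec = tomllib.load(f)

# Flatten the two sections back into the attribute dict a KernelTest expects.
attrs = dict(spec.get("kernel_info", {}))
attrs.update(spec.get("basic_tests", {}))
print(attrs["kernel_name"])  # "ipython3"
```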

In any case, I suspect this PR as it currently stands is not particularly useful; the details of how such a file would be consumed, and how it should be formatted, should be decided before an implementation.

@williamstein

Copying this comment from here:

So, tests of the kernels use a Python library and Python unit testing. The data that defines the tests is currently embedded as code in those Python (or other) files, rather than being in some declarative "data" format that could easily be re-used by other testing systems. It might be useful to have a suite of declarative test data, since that same data could alternatively be used for mocking kernels (for other testing/dev purposes). Anyway, over the years, I've found automated testing to be incredibly powerful for improving development speed...

The point is that for some people like me, who like to write tests and use test data, this exact PR is potentially VERY valuable.
