D4.8: Facilities for running notebooks as verification tests #98
This will build on top of diff tools developed as part of #95 |
Does this mean we could rerun a notebook, verify that it still works, and verify that the output of each cell is the same as it used to be? In that case it sounds like a great feature to me! |
That would be the general idea, yes, and also showing diffs of the output when it changes. Note that there are already some tools out there for running tests with notebooks; I haven't looked into that much yet, as my focus now is on the diff and merge. |
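A minimal sketch of the idea under discussion, assuming the standard Jupyter `nbformat` and `nbconvert` packages: re-execute a notebook and compare each code cell's fresh output against the output stored in the file. The comparison is deliberately naive and the notebook path hypothetical; real tools diff the outputs rather than just flagging a mismatch.

```python
# Sketch: re-execute a notebook and check that each code cell still
# produces the output stored in the file. nbformat and nbconvert's
# ExecutePreprocessor are real Jupyter APIs; the comparison is naive.
import copy
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

def check_notebook(path):
    nb = nbformat.read(path, as_version=4)
    # Snapshot the stored outputs before re-execution mutates the notebook.
    stored = [copy.deepcopy(cell.get("outputs", []))
              for cell in nb.cells if cell.cell_type == "code"]
    ExecutePreprocessor(timeout=60).preprocess(nb, {"metadata": {"path": "."}})
    fresh = [cell.outputs for cell in nb.cells if cell.cell_type == "code"]
    for i, (old, new) in enumerate(zip(stored, fresh)):
        if old != new:
            print(f"code cell {i}: output changed")

check_notebook("example.ipynb")  # hypothetical notebook path
```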
One of the testing tools is this: https://github.com/computationalmodelling/pytest_validate_nb A use case for us (and a reason to develop it) is to re-execute documentation and tutorials and check that they still execute: it seems a common problem that you sit down and write some examples and tutorials at some point, but fail to update them as interfaces change. By running those notebooks as tests, we get the additional testing for free. Particular kinds of text output (times, dates, memory addresses, etc.) need additional attention, as they can change from run to run. |
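On the run-to-run variation point: tools in this space typically normalize (sanitize) both the stored and the freshly generated output with regular expressions before comparing; nbval, for instance, reads regex/replacement pairs from a user-supplied sanitization file. The patterns below are illustrative assumptions, not nbval's actual internals:

```python
import re

# Illustrative normalization patterns for output that legitimately varies
# between runs; real tools (e.g. nbval) read similar regex/replacement
# pairs from a sanitization config rather than hard-coding them.
SANITIZERS = [
    (re.compile(r"0x[0-9a-fA-F]+"), "0xADDRESS"),   # memory addresses
    (re.compile(r"\d{4}-\d{2}-\d{2}"), "DATE"),      # ISO dates
    (re.compile(r"\d+\.\d+ ?s(econds)?"), "TIME"),   # wall-clock timings
]

def sanitize(text):
    """Apply every pattern so variable output compares equal across runs."""
    for pattern, replacement in SANITIZERS:
        text = pattern.sub(replacement, text)
    return text

assert sanitize("<object at 0x7f3a2c>") == sanitize("<object at 0x55de91>")
```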
That is a very common problem, yes. Are you using this for any large-ish projects? Is it stable and/or used by others outside your team? Either way it should serve as a base, or at least inspiration, later. |
No-ish. We have used this technology for a large (internal) project, and it worked well. (Until an IPython upgrade broke our homegrown scripts, and we failed to fix it.) The code in this repository is intended to be a replacement for that, and also increasingly used for new projects. I don't know about any other users. As you say, it may serve as inspiration only; and if it will be used for OpenDreamKit, that's also great. |
@takluyver I think it makes sense to build on nbval rather than creating a new tool; what is missing from nbval to cover this deliverable? I'll look into it more some time soon. |
@martinal We have gathered a wish list of features for nbval at https://github.com/computationalmodelling/nbval/issues |
I think that the assignment of me to this deliverable is in error. I do not seem to be affiliated with it in any way, in particular not as lead beneficiary; the deliverable lead is SR.
…On 21/11/2016 17:19, bpilorget wrote:
@minrk <https://github.com/minrk> (WP leader) and @kohlhase
<https://github.com/kohlhase> (Lead Beneficiary)
This deliverable is due for February 2017
|
@kohlhase it certainly is. I've unassigned you and assigned myself. |
@kohlhase is the lead for "Facilities for running notebooks as verification tests", which was listed here as D4.9 whereas it is D4.8 in the grant; hence the confusion. I have corrected it. |
Dear M18 deliverable leaders, just a reminder that reports are due mid-February, to buy us some time for proofreading, feedback, and final submission before February 28th. See our README for details on the process. In practice, I'll be offline February 12-19, and the week right after will be pretty busy. It would therefore be helpful if a first draft could be available sometime this week, so that I can get a head start reviewing it. Thanks in advance! |
@nthiery @fangohr @takluyver I've pushed a draft of the D4.8 report if you'd like to have a look and propose more content. In particular, I'd like to know who I should add to the authors list. |
Hi Min, many thanks for putting this together. I'll try to read / extend / give feedback soon. Regarding authors: we had many people involved in the initial development, but they probably don't need to feature on the deliverable report. |
Hi @minrk,
I just had a look, and this is a good start!
Could you edit the GitHub description with some words of context (what Jupyter notebooks are; the task at hand; how it fits within the reproducible science aim of ODK), a brief description of what was achieved, and the connection with nbdime? This would make for a nice abstract for the report.
Small suggested additions (unless I missed them):
- nbval enables the inclusion of notebook testing in Continuous Integration frameworks (see the sketch after this comment)
- nbval lets software projects test their demonstration notebooks
|
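As a hedged sketch of the Continuous Integration point above: since nbval is a pytest plugin, a CI job can validate notebooks with a single pytest invocation. The snippet drives pytest programmatically; the notebook path is a hypothetical example.

```python
# Minimal sketch: run nbval from a CI script by invoking pytest
# programmatically. Equivalent to running
#   pytest --nbval docs/tutorial.ipynb
# on the command line. The notebook path is a hypothetical example.
import sys
import pytest

exit_code = pytest.main(["--nbval", "docs/tutorial.ipynb"])
sys.exit(exit_code)  # non-zero exit fails the CI job when outputs differ
```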
For a description of what Jupyter notebooks are, you can, for example, copy-paste the one from D4.4's issue description (#93). |
@nthiery thanks, I've updated the GitHub description. @fangohr the guidelines suggest that the authors should include only the ODK participants on the report, since it's a mostly internal thing. I'm just not sure which contributors from your side are on that list. |
I just made a few minor changes. |
Hi @minrk, @takluyver, @fangohr, |
Almost there. I am going through a checklist, but will be done by the end of the day. |
@nthiery I believe this one is ready to go. |
Nothing further from me to contribute: making sure we have the items from the checklist should complete this item; thank you @minrk. @nthiery I updated the report some time ago already, taking into account your feedback from 14 days ago. [The only remaining consideration would be to include the documentation in the report, but if we link to its URL, that should do just as well. And I should have mentioned it earlier anyway.] I won't have any time to contribute further in the next few days, so please proceed without me from here. |
I am very pleased with the tool :) |
I did some minor edits, and added nbval's homepage and documentation as appendices to the report (f31abea). About to submit! |
By the way: two suggestions about nbval itself (not needed for the report):
|
Submitted! Thanks everyone for all the cool work! |
The Jupyter Notebook is a web application that enables the creation and sharing of executable documents containing live code, equations, visualizations, and explanatory text. Thanks to its modular design, Jupyter can be used with any computational system that provides a so-called Jupyter kernel, which implements the Jupyter messaging protocol to communicate with the notebook. OpenDreamKit therefore promotes the Jupyter notebook as its user interface of choice, notably because it is well suited to building modular, web-based Virtual Research Environments.
This deliverable aims at enabling the testing of Jupyter notebooks, with a good balance of convenience and configurability to address the range of possible ways to validate notebooks. Testing is integral to ODK's goal of enabling reproducible practices in computational mathematics and science, and this work enables validating notebooks as documentation and communication products, extending the scope of testing beyond traditional software.
Accomplishments: