Disable IPython History in executing preprocessor #1017

Merged
merged 7 commits into jupyter:master from alexrudy:fix-history-multiprocessing on May 23, 2019

Conversation

Contributor

alexrudy commented May 7, 2019

IPython has a feature where command history is written to a SQLite database. When running multiple kernels in parallel, this feature runs into problems, as the SQLite database will be open from several processes, and sometimes hits a locking error.

To fix this, we have disabled SQLite history in IPython Kernels launched from the preprocessing executor, as recommended by ipython/ipython#11460.
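The recommended workaround boils down to a single command-line option passed when launching the kernel. A minimal sketch (the helper name below is mine, not the PR's actual code; --HistoryManager.hist_file is the standard IPython config flag):

```python
# Sketch: extra command-line arguments that point IPython's history
# at a transient in-memory SQLite database instead of a shared file.
# history_args is a hypothetical helper, not code from this PR.
def history_args(hist_file=":memory:"):
    """Arguments to append when launching an IPython kernel.

    With ':memory:' each kernel gets a private in-memory database,
    so parallel kernels never contend for a lock on history.sqlite.
    """
    return ["--HistoryManager.hist_file={}".format(hist_file)]
```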

This issue was originally brought up in the context of multiprocessing (see nteract/papermill#239). This PR partially solves that problem, and adds a test to prove that two notebook kernels, started from the same process, can be run simultaneously.

I am working on a separate PR to test and maintain multiprocessing support, as there is at least one more issue to be solved.

Thanks @MSeal for all of the help and guidance during PyCon sprints to figure this out.

IPython saves history in an SQLite DB. This can break in unfortunate ways when
running multiple IPython kernels simultaneously.

NBConvert will now default to disabling history saving in IPython kernels.
@alexrudy alexrudy force-pushed the alexrudy:fix-history-multiprocessing branch from 53c190b to 81d3148 May 7, 2019
@alexrudy alexrudy force-pushed the alexrudy:fix-history-multiprocessing branch from be4927d to e348ee1 May 7, 2019
@MSeal MSeal requested review from mpacer and MSeal and removed request for mpacer May 8, 2019
MSeal approved these changes May 8, 2019
Collaborator

MSeal left a comment

Thanks for the fix! Looking forward to the final PR for the zeromq issues around multiprocessing. Will wait for @mpacer to review before merging.

The two notebooks spawned here use the filesystem to check that the other notebook
wrote to the filesystem."""

opts = dict(kernel_name="python")


MSeal May 8, 2019

Collaborator

Did you end up needing to overwrite the kernel?


alexrudy May 8, 2019

Author Contributor

I'm not sure what part of this you are thinking about here?

Kernels (and zmq.Context, the underlying culprit) are thread-safe, but not process-safe. I am working on some other tests which demonstrate and then fix that for nbconvert, and thinking about how best to demonstrate it upstream in jupyter/jupyter_client.


MSeal May 8, 2019

Collaborator

I meant that this line is replacing the kernel from the notebook, and I was curious whether the line was actually needed.


alexrudy May 8, 2019

Author Contributor

Ahh, yes, I remember discussing this. It is probably unnecessary; I'll double-check.


alexrudy May 9, 2019

Author Contributor

Indeed, it was redundant. To better match the other tests here, I removed the kernel spec from the notebook file. Is that okay with you? It might make it run on python2, but I think this should work on both python2 and python3. Though my understanding of kernel names is admittedly weak!

"metadata": {},
"outputs": [],
"source": [
"other_notebook = {'A':'B', 'B':'A'}[this_notebook]\n",


MSeal May 8, 2019

Collaborator

Might be good to add a comment about how this works, with A waiting for B's file and vice versa.


MSeal May 8, 2019

Collaborator

Also, you probably want a comment noting that this_notebook is undefined here and is injected by the test.


alexrudy May 8, 2019

Author Contributor

👍 I've added some comments to the notebook to explain a bit about how it works.


MSeal May 8, 2019

Collaborator

Thanks
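The coordination being documented here can be sketched as a wait-for-file loop; the names and timeouts below are illustrative, not the PR's actual test code:

```python
import os
import time

def wait_for_peer(path, timeout=5.0, poll=0.05):
    """Block until the peer notebook's marker file appears on disk.

    Notebook A waits for the file written by notebook B and vice
    versa, proving that both kernels really ran at the same time.
    """
    deadline = time.time() + timeout
    while not os.path.exists(path):
        if time.time() > deadline:
            raise TimeoutError("peer never wrote " + path)
        time.sleep(poll)
```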

@@ -220,6 +220,20 @@ class ExecutePreprocessor(Preprocessor):
)
).tag(config=True)

ipython_hist_file = Unicode(
default_value=':memory:',


MSeal May 8, 2019

Collaborator

👍 We should have probably had this default a long time ago regardless of parallelism concerns.

Collaborator

MSeal commented May 12, 2019

Bumped priority for getting other reviewers, but I'll merge this next week if we don't get more eyes on it.

minrk approved these changes May 13, 2019
Member

minrk left a comment

Great!

@@ -268,6 +282,8 @@ def start_new_kernel(self, **kwargs):
'kernelspec', {}).get('name', 'python')
km = self.kernel_manager_class(kernel_name=self.kernel_name,
config=self.config)
if km.ipykernel and self.ipython_hist_file:


mpacer May 13, 2019

Member

Is ipykernel an attribute of a KernelManager? It doesn't seem to be https://jupyter-client.readthedocs.io/en/stable/api/manager.html#jupyter_client.KernelManager

If not, I'd prefer to know where this is being assigned.


alexrudy May 13, 2019

Author Contributor

It is a property of the KernelManager, but it isn't documented. I used it to encapsulate the logic around passing an IPython-specific argument, rather than doing my own check against the kernel name before passing the argument in.
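The encapsulation described above amounts to a small guard; a sketch under assumed names (is_ipykernel stands in for the undocumented km.ipykernel property):

```python
# Sketch: only IPython kernels receive the history argument;
# other kernels (e.g. an R kernel) would reject the flag.
# extra_kernel_args is a hypothetical helper for illustration.
def extra_kernel_args(is_ipykernel, hist_file=":memory:"):
    if is_ipykernel and hist_file:
        return ["--HistoryManager.hist_file={}".format(hist_file)]
    return []
```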

@@ -12,7 +12,10 @@
import glob
import io
import os
import logging


mpacer May 13, 2019

Member

Why are we adding logging? I don't see it being used, and the traitlets LoggingConfigurable should have logging functionality that will do a better job of tracing where the logged messages come from.


alexrudy May 13, 2019

Author Contributor

My mistake – will remove.

@@ -218,6 +224,12 @@ def assert_notebooks_equal(expected, actual):
actual_execution_count = actual_cell.get('execution_count', None)
assert expected_execution_count == actual_execution_count

def notebook_resources():
res = ResourcesDict()
res['metadata'] = ResourcesDict()


mpacer May 13, 2019

Member

Does this need to be a nested ResourcesDict()? I feel like we should have a convenience class for creating a ResourcesDict that is prepopulated with a value for metadata, rather than doing this.

The pattern is fine for right now, but it would be useful to have a docstring describing the purpose of this new function.


alexrudy May 13, 2019

Author Contributor

🤔 I took this snippet from test_run_all_notebooks and extracted it. I'm not actually sure what is required in the metadata, but I didn't want to copy/paste this around the tests in this file.

I'll add a quick docstring, to the effect of "prepares a notebook resources dictionary"


MSeal May 13, 2019

Collaborator

I'm fine with the test doing what it does and not blocking a merge on test helpers. It's not critical path code and it's readable for anyone who wants to tweak it in the future imo.
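For illustration only, the convenience class mpacer describes might look like this (hypothetical name; a plain dict stands in for nbconvert's ResourcesDict):

```python
class NotebookResources(dict):
    """A resources dict prepopulated with an empty metadata mapping,
    saving each test from building the nested structure by hand.

    Illustrative sketch only; not part of the PR.
    """

    def __init__(self, *args, **kwargs):
        super(NotebookResources, self).__init__(*args, **kwargs)
        # Only fill in metadata when the caller did not supply one.
        self.setdefault("metadata", {})
```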

assert_notebooks_equal(input_nb, output_nb)


def label_parallel_notebook(nb, label):


mpacer May 13, 2019

Member

I think this would more idiomatically be treated as a custom preprocessor. You would pass the label value in via resources and then override the preprocess method:

def preprocess(self, nb, resources):
    """
    Preprocessing to apply on each notebook.

    Must return modified nb, resources.

    If you wish to apply your preprocessing to each cell, you might want
    to override the preprocess_cell method instead.

    Parameters
    ----------
    nb : NotebookNode
        Notebook being converted
    resources : dictionary
        Additional resources used in the conversion process. Allows
        preprocessors to pass variables into the Jinja engine.
    """
    for index, cell in enumerate(nb.cells):
        nb.cells[index], resources = self.preprocess_cell(cell, resources, index)
    return nb, resources


alexrudy May 13, 2019

Author Contributor

That makes sense – do you think that is worthwhile for this set of tests?

I was hesitant to re-implement papermill's templating here, but happy to do so if that makes it more maintainable.


MSeal May 13, 2019

Collaborator

I share that hesitation just for testing purposes.

Used for parallel testing to label two notebooks which are run simultaneously.
"""
label_cell = nbformat.NotebookNode(


mpacer May 13, 2019

Member

I think this would be cleaner if it used nbformat.v4's new_code_cell function:
https://github.com/jupyter/nbformat/blob/11903688167d21af96c92f7f5bf0634ab51819f1/nbformat/v4/nbbase.py#L98-L110
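For reference, the suggestion amounts to replacing the hand-built NotebookNode with the v4 builder, roughly like this (a sketch assuming nbformat is importable; the source string is illustrative):

```python
from nbformat.v4 import new_code_cell

# new_code_cell fills in cell_type, metadata, outputs and
# execution_count, so only the source needs to be supplied.
label_cell = new_code_cell(source="this_notebook = 'A'")
```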

Contributor Author

alexrudy commented May 20, 2019

Follow up!

I think the following are the outstanding questions from the review of this PR:

  • Is it okay to use the ipykernel property of KernelManager? It is not a documented property, but it seems like the best way to identify IPython kernels and avoid sending the history argument to other kernels.
  • Do I need to refactor notebook_resources() in the test cases?
  • Does the parallel test need to implement a full preprocessor for the test, and use that preprocessor in the notebook execution?

We've started the discussion for each of these questions above – I would love more feedback about all of them so that we can merge this PR (rather than having it get stale).

Collaborator

MSeal commented May 23, 2019

Sorry, we've been swamped catching up after PyCon. I agree: let's not let this go stale, and get some merges in. On that note, I'll follow up on getting a jupyter_client release for the other fix you made.

I checked the first box. I think that's fine and I've got some more permissions on that project now to help maintain it.

The other two I find maintainable enough as is. I'm going to merge now and if anyone feels strongly enough they can open an issue or PR to address it differently.

Thanks for the improvement!

@MSeal MSeal merged commit 4d8efc0 into jupyter:master May 23, 2019
1 check passed
continuous-integration/travis-ci/pr: The Travis CI build passed
@MSeal MSeal added this to the 5.6 milestone Jul 30, 2019

meeseeksmachine commented Aug 9, 2019

This pull request has been mentioned on Jupyter Community Forum. There might be relevant details there:

https://discourse.jupyter.org/t/nbconvert-5-6-0-release/1867/1
