Jupyter/IPython Kernel Tools


A Jupyter kernel base class in Python that includes core magics for help, command and file-path completion, parallel and distributed processing, downloads, and much more.

![PyPI version](https://badge.fury.io/py/metakernel.png) ![Coverage Status](https://coveralls.io/repos/Calysto/metakernel/badge.png?branch=master)

See Jupyter's docs on wrapper kernels.
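A new kernel is typically a small subclass of MetaKernel that overrides one execution hook. Below is a minimal sketch of an echo kernel; the class attributes and the `do_execute_direct` hook follow MetaKernel's conventions, but treat the exact API (and the `run_as_main` entry point mentioned in the comment) as assumptions to check against the docs:

```python
# Minimal sketch of a MetaKernel-based kernel (assumed API).
try:
    from metakernel import MetaKernel
except ImportError:
    # Stand-in base so this sketch runs without metakernel installed.
    class MetaKernel(object):
        pass

class EchoKernel(MetaKernel):
    implementation = 'Echo'
    implementation_version = '1.0'
    language = 'text'
    language_version = '0.1'
    banner = "Echo kernel - repeats whatever you type"
    language_info = {'mimetype': 'text/plain', 'name': 'echo'}

    def do_execute_direct(self, code):
        # The return value becomes the cell's execution result.
        return code.strip()

# With metakernel installed, `EchoKernel.run_as_main()` under a
# `if __name__ == '__main__':` guard would start the kernel.
```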

Additional magics can be installed within the new kernel package under a magics subpackage.
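By convention, a magics module defines a `Magic` subclass whose `line_<name>`/`cell_<name>` methods implement the magic, plus a `register_magics` hook the kernel calls on load. A hedged sketch (`HelloMagic` is invented here for illustration; verify the method and hook names against metakernel's bundled magics):

```python
# Sketch of a custom line magic for a magics subpackage,
# assuming metakernel's Magic base class and naming convention.
try:
    from metakernel import Magic
except ImportError:
    # Stand-in base so the sketch runs without metakernel installed.
    class Magic(object):
        def __init__(self, kernel=None):
            self.kernel = kernel

class HelloMagic(Magic):
    def line_hello(self, name="world"):
        """
        %hello NAME - greet NAME.
        """
        return "Hello, %s!" % name

def register_magics(kernel):
    # Hook the kernel is assumed to call when loading this module.
    kernel.register_magics(HelloMagic)
```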


  • Basic set of line and cell magics for all kernels.
    • Python magic for accessing the Python interpreter.
    • Run kernels in parallel.
    • Shell magics.
    • Classroom management magics.
  • Tab completion for magics and file paths.
  • Help for magics using ? or Shift+Tab.
  • Plot magic for setting default plot behavior.

Kernels based on Metakernel

... and many others.


You can install Metakernel through pip:

pip install metakernel --upgrade

Installing metakernel from the conda-forge channel can be achieved by adding conda-forge to your channels with:

conda config --add channels conda-forge

Once the conda-forge channel has been enabled, metakernel can be installed with:

conda install metakernel

It is possible to list all of the versions of metakernel available on your platform with:

conda search metakernel --channel conda-forge

Use MetaKernel Magics in IPython

Although MetaKernel is a system for building new kernels, you can use a subset of the magics in the IPython kernel.

Run the following in IPython to register the magics:

```python
from metakernel import register_ipython_magics
register_ipython_magics()
```

To load them automatically, put the following in your (or a system-wide) ipython_config.py file:

```python
# /etc/ipython/ipython_config.py
c = get_config()
startup = [
    'from metakernel import register_ipython_magics',
    'register_ipython_magics()',
]
c.InteractiveShellApp.exec_lines = startup
```

Use MetaKernel Languages in Parallel

To use a MetaKernel language in parallel, do the following:

  1. Make sure that the Python module ipyparallel is installed. In the shell, type:

     ```shell
     pip install ipyparallel
     ```

  2. To enable the extension in the notebook, in the shell, type:

     ```shell
     ipcluster nbextension enable
     ```

  3. To start up a cluster with 10 nodes on a local IP address, in the shell, type:

     ```shell
     ipcluster start --n=10 --ip=
     ```

  4. Initialize the code to use the 10 nodes, inside the notebook, from a host kernel MODULE and CLASSNAME (can be any MetaKernel kernel):

     ```
     %parallel MODULE CLASSNAME
     ```

     For example:

     ```
     %parallel calysto_scheme CalystoScheme
     ```

  5. Run code in parallel, inside the notebook. Execute a single line in parallel:

     ```
     %px (+ 1 1)
     ```

     Or execute the entire cell in parallel:

     ```
     %%px
     (* cluster_rank cluster_rank)
     ```

Results come back in a Python list (Scheme vector), in cluster_rank order. (This will be a JSON representation in the future).

Therefore, the above would produce the result:

```scheme
#10(0 1 4 9 16 25 36 49 64 81)
```

You can get the results back in any of the parallel magics (%px, %%px, or %pmap) in the host kernel by accessing the variable _ (single underscore), or by using the --set_variable VARIABLE flag, like so:

```
%%px --set_variable results
(* cluster_rank cluster_rank)
```

Then, in the next cell, you can access results.

Notice that you can use the variable cluster_rank to partition parts of a problem so that each node is working on something different.
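As a plain-Python sketch of that idea (`my_slice` is a hypothetical helper, not part of MetaKernel): each node keeps only the work items whose index maps to its own rank, so the ranks jointly cover the input exactly once with no coordination.

```python
# Each node filters a shared work list down to its own share:
# item i belongs to the node whose rank equals i % n_nodes.
def my_slice(rank, n_nodes, items):
    return [x for i, x in enumerate(items) if i % n_nodes == rank]

items = list(range(100))
n_nodes = 10

# Simulate what each of the 10 nodes would compute locally.
shares = [my_slice(rank, n_nodes, items) for rank in range(n_nodes)]

# Together the shares cover every item exactly once.
covered = sorted(x for share in shares for x in share)
assert covered == items
```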

In the examples above, use -e to evaluate the code in the host kernel as well. Note that cluster_rank is not defined on the host machine, and that this assumes the host kernel is the same as the parallel machines.


Example notebooks can be viewed here.

Documentation is available online. Magics also have interactive help (via ? or Shift+Tab).

For version information, see the Revision History.