An MCP server that exposes a persistent IPython kernel to Claude Code (and any MCP client). Claude can execute Python interactively, inspect variables, switch conda environments on the fly, and export the session as a Jupyter notebook.
Inspired by GPT-Auto-Data-Analytics — the tool-calling loop from that project, ported from OpenAI function-calling to the MCP standard so it works natively with Claude Code and any MCP-compatible agent.
| Pain point | How this helps |
|---|---|
| Variables lost between tool calls | Kernel state persists for the whole session |
| Plots not visible to the agent | Images returned as native MCP image content |
| Hard to share results | kernel_save_notebook exports .ipynb / HTML / PDF |
| Locked to one Python env | kernel_use_env('myenv') switches conda envs on the fly |
| Session lost when Claude exits | Attach to a running JupyterLab kernel for persistence |
```
Claude Code
 └── MCP client
      └── ipython-kernel-mcp (this server)
           ├── mode: in-process → IPython InteractiveShell (default, fresh each session)
           └── mode: attached   → Jupyter kernel via jupyter_client
                ├── kernel_use_env('myenv') → launches kernel in any conda env
                └── kernel_attach(id)       → connects to running JupyterLab kernel
```
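The in-process mode boils down to running each cell against one long-lived namespace. Here is a stdlib-only sketch of that idea (the actual server uses IPython's `InteractiveShell`, which additionally captures rich output and plots):

```python
import io
from contextlib import redirect_stdout

namespace = {}  # one dict for the whole session: this is what makes state persist

def exec_cell(code: str) -> str:
    """Run a code cell in the shared namespace and capture its stdout."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(code, namespace)
    return buf.getvalue()

exec_cell("x = 21")                # first "tool call" defines a variable
print(exec_cell("print(x * 2)"))  # a later call still sees it: prints 42
```

Because every call shares `namespace`, a variable defined in one tool call is visible in the next, which is exactly the pain point the table above addresses.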
| Tool | Description |
|---|---|
| `kernel_exec(code)` | Run Python — stdout, rich text, and plots returned as images |
| `kernel_inspect(var_name)` | `repr()` of any variable in the kernel namespace |
| `kernel_list_vars()` | Show all user-defined variables and their types |
| `kernel_status()` | Current mode, active env, and execution count |
| `kernel_restart()` | Clear all variables and history |
| `kernel_add_markdown(text)` | Insert a markdown cell into the exported notebook |
| `kernel_save_notebook(path, save_html, save_pdf)` | Export session as `.ipynb` / HTML / PDF |
| `kernel_list_envs()` | List all conda/mamba environments on this machine |
| `kernel_use_env(env_name)` | Start a kernel in a named conda env and auto-attach |
| `kernel_list_running()` | List running Jupyter kernels (for JupyterLab attach) |
| `kernel_attach(kernel_id)` | Attach to an existing running Jupyter kernel |
| `kernel_detach()` | Disconnect and return to in-process mode |
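As an illustration of what a tool like `kernel_list_vars` has to do, here is a hypothetical helper (not the server's actual implementation) that maps user variable names to type names, skipping private names and imported modules:

```python
import types

def list_user_vars(ns: dict) -> dict:
    """Map user-defined variable names to their type names,
    skipping private/dunder names and imported modules."""
    return {
        name: type(value).__name__
        for name, value in ns.items()
        if not name.startswith("_") and not isinstance(value, types.ModuleType)
    }

ns = {"df_rows": [1, 2, 3], "label": "exp1", "_cache": {}, "types": types}
print(list_user_vars(ns))  # {'df_rows': 'list', 'label': 'str'}
```

Filtering out modules and underscore-prefixed names keeps the listing focused on the variables the user actually created.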
```bash
git clone https://github.com/Animadversio/ipython-kernel-mcp.git
cd ipython-kernel-mcp
pip install -e .
```

To install inside a specific conda environment (recommended — the server then runs in that env by default):

```bash
conda activate myenv
pip install -e /path/to/ipython-kernel-mcp
```

Quickest way — use the `claude` CLI to register globally (user scope = all projects):

```bash
claude mcp add --scope user ipython-kernel /path/to/env/bin/ipython-kernel-mcp
```

Manual config — add to `~/.claude.json` or a project `.mcp.json`:
```json
{
  "mcpServers": {
    "ipython-kernel": {
      "command": "/path/to/your/env/bin/ipython-kernel-mcp"
    }
  }
}
```

Restart Claude Code. Tools appear as `mcp__ipython-kernel__kernel_exec` etc.
Install the package into that env and point the MCP command at its binary:
```bash
# Install into your preferred env
conda activate torch
pip install -e /path/to/ipython-kernel-mcp

# Register
claude mcp add --scope user ipython-kernel \
    ~/miniforge3/envs/torch/bin/ipython-kernel-mcp
```

The server (and the in-process kernel) then run entirely inside that environment.
Once registered, the agent can switch environments mid-session using `kernel_use_env`:

```
User:   analyze this dataset — it needs torch and sklearn
Claude: [calls kernel_list_envs()] → sees: base, torch, ml-env
Claude: [calls kernel_use_env('torch')] → starts kernel in torch env
Claude: [calls kernel_exec('import torch; print(torch.__version__)')] → 2.x.x
Claude: [calls kernel_exec('..analysis code..')]
Claude: [calls kernel_save_notebook('results.ipynb')]
```
The target env needs `ipykernel` installed (`conda install ipykernel`).
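Discovering envs is straightforward: `conda env list --json` prints a payload whose `envs` list holds env paths, and the env name is the last path component. A hedged sketch of that parsing step (the server may resolve envs differently):

```python
import json

def parse_conda_envs(env_list_json: str) -> list:
    """Extract env names from the JSON printed by `conda env list --json`.
    Assumes the standard {"envs": ["/path/to/env", ...]} payload."""
    paths = json.loads(env_list_json)["envs"]
    return [p.rstrip("/").split("/")[-1] for p in paths]

sample = '{"envs": ["/Users/me/miniforge3", "/Users/me/miniforge3/envs/torch"]}'
print(parse_conda_envs(sample))  # ['miniforge3', 'torch']
```

Note the root install shows up under its directory name here; conda itself labels it `base`, so a real implementation would special-case that entry.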
This lets kernel state persist across Claude Code sessions:
- Start JupyterLab in your env and run some cells to set up your data
- In Claude Code: `kernel_list_running()` → see the kernel ID, then `kernel_attach('<kernel_id>')` → Claude now operates in your live kernel
- Close and reopen Claude Code — re-attach to pick up where you left off
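For context on how running kernels can be found: each live Jupyter kernel writes a `kernel-<id>.json` connection file into the Jupyter runtime directory (`jupyter --runtime-dir` prints its location). A stdlib sketch of extracting the IDs; the directory is passed in here rather than resolved, which a real implementation would do via `jupyter_core`:

```python
import glob
import os

def list_kernel_ids(runtime_dir: str) -> list:
    """Return kernel IDs from kernel-<id>.json connection files in runtime_dir."""
    pattern = os.path.join(runtime_dir, "kernel-*.json")
    return [
        os.path.basename(p)[len("kernel-"):-len(".json")]
        for p in glob.glob(pattern)
    ]

# e.g. list_kernel_ids("/Users/me/Library/Jupyter/runtime")
```

The same connection file is what `kernel_attach` would hand to `jupyter_client` to open the kernel's channels.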
```
You:    Load ~/data/experiment.csv and give me a statistical summary with plots
Claude: [kernel_exec] import pandas as pd; df = pd.read_csv('~/data/experiment.csv')
Claude: [kernel_exec] df.describe() → table output
Claude: [kernel_exec] df.hist(figsize=(12,8)) → 📊 image returned
Claude: [kernel_add_markdown] "## Distribution Analysis"
Claude: [kernel_exec] df.corr() → correlation matrix
Claude: [kernel_save_notebook] 'experiment_analysis.ipynb', save_html=True
        → opens in browser, all code + outputs + plots included
```
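Notebook export works because the session is just an ordered list of cells. A minimal stdlib sketch of assembling a valid nbformat-v4 notebook from `(cell_type, source)` pairs; the real server uses `nbformat` and also records each cell's captured outputs:

```python
import json

def build_notebook(cells) -> dict:
    """Build a minimal nbformat-v4 notebook dict from (cell_type, source) pairs."""
    nb_cells = []
    for cell_type, source in cells:
        cell = {"cell_type": cell_type, "metadata": {}, "source": source}
        if cell_type == "code":
            cell["execution_count"] = None
            cell["outputs"] = []
        nb_cells.append(cell)
    return {"cells": nb_cells, "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

nb = build_notebook([
    ("markdown", "## Distribution Analysis"),
    ("code", "df.corr()"),
])
# json.dump(nb, open("experiment_analysis.ipynb", "w")) yields an openable notebook
```

HTML/PDF export then reduces to running `nbconvert` over the saved `.ipynb`.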
- Python ≥ 3.10
- `mcp[cli]` ≥ 1.0, `ipython` ≥ 8.0, `nbformat`, `nbconvert`, `pillow`
- Target conda envs need `ipykernel` for `kernel_use_env`
Tested on macOS. Should work on Linux. Windows support untested (paths use `/`).
This project is inspired by and ported from GPT-Auto-Data-Analytics.
That project built a local code-interpreter loop using OpenAI function-calling: an LLM agent writes Python, executes it in a live IPython kernel, reads the output, and iterates — with vision API support for interpreting generated plots.
This repo takes the same core idea and ports it to the MCP (Model Context Protocol) standard so it works natively with Claude Code and any MCP-compatible agent, without a custom tool loop. Claude's own agent loop handles the iteration; the kernel is just exposed as tools.