SpikeLab is a Python library for loading, analyzing, visualizing, and exporting neuronal spike train data from multi-electrode array (MEA) electrophysiology experiments.
📖 Documentation: spikelab.braingeneers.gi.ucsc.edu
📄 Preprint: www.biorxiv.org/content/10.64898/2026.04.25.720833v1
- Load data from common neuroscience formats (HDF5, NWB, KiloSort/Phy, SpikeInterface)
- Represent spike trains as `SpikeData` objects with per-unit spike times in milliseconds
- Compute firing rates as `RateData` objects (instantaneous firing rates binned over time)
- Slice around events to create `SpikeSliceStack` or `RateSliceStack` objects for event-aligned analysis
- Conduct analyses at the single-unit, pairwise, and population levels
- Export data to KiloSort, NWB, and other formats
- Store and organize results using the `AnalysisWorkspace` for multi-stage analysis projects
- Access programmatically via a built-in MCP server for tool-based workflows
- Run spike sorting on electrophysiology recordings with built-in pipelines for Kilosort2, Kilosort4, and rt-sort (`spikelab.spike_sorting`)
- Submit batch jobs to remote Kubernetes clusters for compute-heavy workloads via `spikelab.batch_jobs`
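For intuition about what rate binning involves, here is a minimal NumPy sketch that turns per-unit spike times (in milliseconds) into a `(units, time_bins)` array of rates in Hz, the layout used by `RateData`. The function `bin_firing_rates` is hypothetical and for illustration only; it is not SpikeLab's implementation:

```python
import numpy as np

def bin_firing_rates(spike_times_ms, bin_size_ms, duration_ms):
    """Bin per-unit spike times (ms) into a (units, time_bins) array of rates in Hz."""
    edges = np.arange(0.0, duration_ms + bin_size_ms, bin_size_ms)
    # One histogram of spike counts per unit, stacked into a 2-D array
    counts = np.stack([np.histogram(t, bins=edges)[0] for t in spike_times_ms])
    return counts / (bin_size_ms / 1000.0)  # counts per bin -> spikes per second

# Two units, 300 ms of data, 100 ms bins
rates = bin_firing_rates(
    [np.array([50.0, 150.0, 160.0]), np.array([250.0])],
    bin_size_ms=100.0,
    duration_ms=300.0,
)
print(rates)  # unit 0: [10., 20., 0.]; unit 1: [0., 0., 10.]
```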
You need Python 3.10 or later. If you don't have Python installed, we recommend installing it via Miniconda.
```
pip install spikelab
```

This installs SpikeLab and its core dependencies (numpy, scipy, matplotlib, h5py).
If you prefer a conda environment with all dependencies pre-configured:
```
git clone https://github.com/braingeneers/SpikeLab.git
cd SpikeLab
conda env create -f environment.yml
conda activate spikelab
pip install spikelab
```

For development, clone the repository and install in editable mode:
```
git clone https://github.com/braingeneers/SpikeLab.git
cd SpikeLab
pip install -e .
```

Open a Python prompt and run:
```python
from spikelab import SpikeData
print("SpikeLab is installed correctly!")
```

If you see the success message, you're ready to go.
Some features require additional packages that are not installed by default. Install them by appending the extra in brackets:
```
pip install "spikelab[s3]"
pip install "spikelab[s3,ml,mcp]"   # multiple extras
pip install "spikelab[all]"         # everything except kilosort4
```

| Extra | Install command | What it enables |
|---|---|---|
| `mcp` | `pip install "spikelab[mcp]"` | Built-in MCP server for tool-based workflows |
| `sse` | `pip install "spikelab[sse]"` | SSE transport for the MCP server (uvicorn + starlette) |
| `s3` | `pip install "spikelab[s3]"` | Upload/download data from Amazon S3 (or any S3-compatible store) |
| `io` | `pip install "spikelab[io]"` | Extra I/O helpers (pandas) |
| `ml` | `pip install "spikelab[ml]"` | scikit-learn, UMAP, networkx, python-louvain |
| `neo` | `pip install "spikelab[neo]"` | NWB / neo / quantities for reading NWB files |
| `ibl` | `pip install "spikelab[ibl]"` (+ `pip install git+https://github.com/int-brain-lab/paper-brain-wide-map.git`) | Query and load IBL Brain-Wide Map datasets (ONE-api; brainwidemap not on PyPI) |
| `gplvm` | `pip install "spikelab[gplvm]"` | Gaussian Process Latent Variable Model fitting |
| `numba` | `pip install "spikelab[numba]"` | Numba-accelerated routines |
| `spike-sorting` | `pip install "spikelab[spike-sorting]"` (+ MATLAB for Kilosort2) | Kilosort2 / rt-sort pipelines via `spikelab.spike_sorting` |
| `kilosort4` | `pip install "spikelab[kilosort4]"` (+ PyTorch with CUDA, installed separately) | Kilosort4 pipeline |
| `batch-jobs` | `pip install "spikelab[batch-jobs]"` | Submit jobs to remote Kubernetes clusters (`spikelab-batch-jobs` CLI) |
| `docs` | `pip install "spikelab[docs]"` | Sphinx + theme + autodoc-typehints for building the docs |
| `dev` | `pip install "spikelab[dev]"` | pytest, black, and other dev utilities |
| `all` | `pip install "spikelab[all]"` | All of the above except `kilosort4` |
When installing from a local source checkout, replace `spikelab` with `-e .` (e.g. `pip install -e ".[s3]"`).
```python
from spikelab import SpikeData
from spikelab.data_loaders import load_spikedata_from_nwb

# Load spike data from an NWB file
sd = load_spikedata_from_nwb("recording.nwb")

# Basic properties
print(f"Units: {sd.N}")
print(f"Duration: {sd.length} ms")

# Compute instantaneous firing rates (100 ms bins)
rates = sd.rates(bin_size=100.0)

# Get a binary spike raster (1 ms bins)
raster = sd.raster(bin_size_ms=1.0)

# Compute pairwise spike time tiling coefficients
sttc_matrix = sd.spike_time_tilings(delt=20.0)

# Export to KiloSort format
sd.to_kilosort("ks_output/", fs_Hz=20000.0)
```

- All spike times are in milliseconds throughout the library.
- `SpikeData` holds per-unit spike times and is the starting point for all analyses.
- `RateData` holds binned instantaneous firing rates with shape `(units, time_bins)`.
- `SpikeSliceStack` / `RateSliceStack` hold event-aligned slices for comparative analysis.
- `PairwiseCompMatrix` holds an N x N comparison matrix (e.g., STTC between unit pairs).
- `AnalysisWorkspace` stores intermediate results across multi-stage analysis pipelines.
SpikeLab includes a set of built-in skills that guide your CLI agent through data analysis, spike sorting, library development, and education — all through natural language conversation.
The skills ship inside the installed package at `spikelab/agent/skills/`. A lightweight `spikelab` router skill (installed separately into your agent's skills directory) handles environment detection (conda vs. system Python, installing SpikeLab if missing) and then delegates to the in-repo skill that best matches the user's request:
| In-repo skill | Use when the user wants to… |
|---|---|
| `spikelab-analysis-implementer` | Load data, write/run analysis scripts, generate publication-quality figures, manage results, and keep repo maps current |
| `spikelab-spikesorter` | Sort raw recordings (Kilosort2, Kilosort4, RT-Sort), curate units, run stim-aligned sorting, and inspect sorting outputs |
| `spikelab-developer` | Promote ad-hoc analysis code into the library: identify reusable methods, integrate novel computations, write tests, expose via MCP, and submit a PR |
| `spikelab-educator` | Explain what an analysis does, how a method works, or what a result means (read-only, no code execution) |
| `spikelab-map-updater` | Regenerate the repo map files after library changes |
CLI agents that load skills from installed packages pick up the in-repo skills automatically; alternatively, copy or symlink them into the agent's skills directory. As an alternative to the skills, MCP tools are available for all methods in the library.
```
SpikeLab/
├── src/
│   └── spikelab/              # Installable Python package
│       ├── spikedata/         # Core data structures and analysis
│       │   ├── spikedata.py       # SpikeData class
│       │   ├── ratedata.py        # RateData class
│       │   ├── spikeslicestack.py # SpikeSliceStack class
│       │   ├── rateslicestack.py  # RateSliceStack class
│       │   ├── pairwise.py        # PairwiseCompMatrix and PairwiseCompMatrixStack
│       │   ├── utils.py           # Shared utility functions
│       │   └── plot_utils.py      # Visualization helpers
│       ├── data_loaders/      # File I/O
│       │   ├── data_loaders.py    # Load from HDF5, NWB, KiloSort, SpikeInterface
│       │   ├── data_exporters.py  # Export to KiloSort, NWB, and other formats
│       │   └── s3_utils.py        # Amazon S3 upload/download utilities
│       ├── spike_sorting/     # Spike-sorting pipelines
│       │   ├── pipeline.py        # Top-level sorting pipeline + config
│       │   ├── ks2_runner.py      # Kilosort2 runner (requires MATLAB)
│       │   ├── ks4_runner.py      # Kilosort4 runner (PyTorch / CUDA)
│       │   ├── rt_sort/           # rt-sort runner
│       │   └── stim_sorting/      # Stimulation-aware sorting helpers
│       ├── workspace/         # Analysis workspace for storing intermediate results
│       │   ├── workspace.py       # AnalysisWorkspace class
│       │   └── hdf5_io.py         # HDF5 serialization for workspace objects
│       ├── mcp_server/        # MCP protocol server for programmatic access
│       │   ├── server.py          # MCP server implementation
│       │   └── tools/             # MCP tool definitions
│       ├── batch_jobs/        # Remote Kubernetes job submission
│       │   ├── cli.py             # spikelab-batch-jobs CLI
│       │   ├── session.py         # RunSession entry point
│       │   ├── policy.py          # Pre-submission policy checks
│       │   ├── profiles/          # Built-in cluster profiles (YAML)
│       │   └── templates/         # Jinja2 manifest templates
│       └── agent/             # Bundled agent skills (analysis-implementer, …)
│           └── skills/
├── tests/                     # Test suite (pytest)
├── docs/                      # Sphinx documentation source
├── examples/                  # Example scripts and notebooks
├── environment.yml            # Conda environment specification
└── pyproject.toml             # Package configuration
```
```
git clone https://github.com/braingeneers/SpikeLab.git
cd SpikeLab
pip install -e ".[dev]"
pytest tests/ -v
```

If you use SpikeLab in your work, please cite:

```bibtex
@article{vandermolen_spikelab_2026,
  title = {{SpikeLab}: Agentic Tools for Spike Data Analysis},
  author = {van der Molen, Tjitse and Cheney, Luka and Hussain, Kamran and Brahme, Ojas and Robbins, Ash and Lim, Max and Spaeth, Alex and Geng, Jinghui and Parks, David F. and Kosik, Kenneth S. and Teodorescu, Mircea and Haussler, David and Sharf, Tal},
  year = {2026},
  journal = {bioRxiv},
  doi = {10.64898/2026.04.25.720833},
  url = {https://doi.org/10.64898/2026.04.25.720833}
}
```

Contributions are welcome! Please open an issue or pull request on the GitHub repository.
All code must be formatted with Black. You can check formatting with:
```
black --check .
```

SpikeLab is released under the MIT License.