🎉 Accepted to the ML4PS workshop @ NeurIPS 2024
Benchmark coupled ODE surrogate models on curated datasets with reproducible training, evaluation, and visualization pipelines. CODES helps you answer: Which surrogate architecture fits my data, accuracy target, and runtime budget?
- Baseline surrogates (MultiONet, FullyConnected, LatentNeuralODE, LatentPoly) with configurable hyperparameters
- Rich datasets spanning chemistry, astrophysics, and dynamical systems
- Optional studies for interpolation/extrapolation, sparse data regimes, uncertainty estimation, and batch scaling
- Automated reporting: accuracy tables, resource usage, gradient analyses, and dozens of diagnostic plots
### uv (recommended)
```bash
git clone https://github.com/robin-janssen/CODES-Benchmark.git
cd CODES-Benchmark
uv sync  # creates .venv from pyproject/uv.lock
source .venv/bin/activate
uv run python run_training.py --config configs/train_eval/config_minimal.yaml
uv run python run_eval.py --config configs/train_eval/config_minimal.yaml
```

### pip alternative
```bash
git clone https://github.com/robin-janssen/CODES-Benchmark.git
cd CODES-Benchmark
python -m venv .venv && source .venv/bin/activate
pip install -e .
pip install -r requirements.txt
python run_training.py --config configs/train_eval/config_minimal.yaml
python run_eval.py --config configs/train_eval/config_minimal.yaml
```

Outputs land in `trained/<training_id>`, `results/<training_id>`, and `plots/<training_id>`. The `configs/` folder contains ready-to-use templates (`train_eval/config_minimal.yaml`, `config_full.yaml`, etc.). Copy a file there and adjust datasets, surrogates, and modalities before running the CLIs.
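Editing a template usually means changing the dataset and surrogate selection. The sketch below is purely illustrative — the key names and dataset identifier are assumptions, not the actual CODES schema, which is documented in the configuration reference on the docs site:

```yaml
# Hypothetical trimmed config -- key names are illustrative and may
# differ from the real schema in configs/train_eval/config_minimal.yaml.
training_id: my_first_run
dataset:
  name: osu2008            # assumed identifier for a bundled dataset
surrogates:                # pick from the baseline architectures
  - MultiONet
  - LatentNeuralODE
epochs: 100
batch_size: 256
```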
The GitHub Pages site now hosts the narrative guides, configuration reference, and interactive notebooks alongside the generated API docs.
| Path | Purpose |
|---|---|
| `configs/` | Ready-to-edit benchmark configs (`train_eval/`, `tuning/`, etc.) |
| `datasets/` | Bundled datasets + download helper (`data_sources.yaml`) |
| `codes/` | Python package with surrogates, training, tuning, and benchmarking utilities |
| `run_training.py`, `run_eval.py`, `run_tuning.py` | CLI entry points for the main workflows |
| `docs/` | Sphinx project powering the GitHub Pages site (guides, tutorials, API reference) |
| `scripts/` | Convenience tooling (dataset downloads, analysis utilities) |
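The exact schema of `datasets/data_sources.yaml` is not shown here; the sketch below only illustrates the general pattern of a YAML dataset registry, assuming a simple name-to-URL mapping. The dataset names and URLs are placeholders, not the real entries:

```python
import yaml  # PyYAML

# Hypothetical stand-in for datasets/data_sources.yaml -- the real file's
# structure and entries may differ; this only demonstrates parsing a
# name -> metadata registry.
example = """
osu2008:
  url: https://example.org/osu2008.hdf5
lotka_volterra:
  url: https://example.org/lotka_volterra.hdf5
"""

sources = yaml.safe_load(example)
for name, meta in sources.items():
    print(f"{name}: {meta['url']}")
```

A download helper would then iterate over such a mapping and fetch each file into `datasets/`.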
Contribution guidelines are documented in `CONTRIBUTING.md`.
In short: open or pick an issue, make your changes on a branch, and submit a pull request with test/docs updates as needed.