Daedalus extends ToolMaker to automatically discover tasks from Jupyter notebooks and output tools as MCP servers and ToolUniverse wrappers.
Given a GitHub repository URL, Daedalus:
- Discovers notebooks and infers task specifications (`task.yaml`)
- Builds the tool using ToolMaker's self-correction loop (Docker-isolated)
- Wraps the result as an MCP server and/or a ToolUniverse tool
```shell
uv run python -m toolmaker auto <GITHUB_URL> --name <tool> --output-mcp
```

```
GitHub Repo
     │
     ▼
DISCOVER ──── analyze_repo() → parse_notebook() → generate_task_spec() → task.yaml
     │
     ▼
BUILD ─────── install (Docker) → create (LLM loop) → validate (tests)
     │
     ▼
WRAP ──────── wrap_as_mcp() → *_mcp.py
              generate_tooluniverse_tool() → *_tu.py + *_config.json
```
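Since notebooks are plain JSON on disk, the kind of cell extraction the DISCOVER stage performs can be illustrated with the standard library alone. This is a minimal sketch, not Daedalus's actual parser; the notebook content and the import-scanning heuristic are illustrative assumptions.

```python
import json

# A notebook as it might sit on disk (content is illustrative only).
raw = json.dumps({
    "nbformat": 4,
    "cells": [
        {"cell_type": "markdown", "source": ["# Preprocess counts\n"]},
        {"cell_type": "code", "source": ["import scanpy as sc\n",
                                         "adata = sc.read_h5ad('input.h5ad')\n"]},
    ],
})

nb = json.loads(raw)
# Keep only executable cells; markdown cells carry prose, not the task.
code = ["".join(c["source"]) for c in nb["cells"] if c["cell_type"] == "code"]
# Top-level imports hint at the dependencies a task spec must declare.
deps = sorted({line.split()[1] for src in code for line in src.splitlines()
               if line.startswith("import ")})
print(deps)  # ['scanpy']
```

A real discovery pass would additionally look at cell outputs and file I/O calls to infer the task's inputs and outputs.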
```shell
# Clone with submodules
git clone --recursive https://github.com/<you>/daedalus
cd daedalus

# Install dependencies
uv sync

# Configure LLM backend
cat > .env << 'EOF'
TOOLMAKER_LLM_BACKEND=ollama
TOOLMAKER_MODEL=qwen2.5-coder:7b
OLLAMA_BASE_URL=http://localhost:11434
EOF

# Verify setup
uv run python verify_setup.py

# Run full pipeline
uv run python -m toolmaker auto https://github.com/scverse/scanpy \
  --name scanpy_preprocess \
  --output-mcp \
  --output-tooluniverse
```

- `task.yaml` is the canonical intermediate representation — all outputs derive from it
- Tool code executes inside Docker containers, never on the host
- Only open-source LLMs are required (Ollama, vLLM, llama.cpp)
- A single MCP server exposes all tools from a repository (see ADR-0001)
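As a sketch of that intermediate representation, a discovered spec for the scanpy example above might look like the following. The field names here are illustrative assumptions, not the actual `task.yaml` schema:

```yaml
# Hypothetical task.yaml (field names are illustrative, not the real schema)
name: scanpy_preprocess
repo: https://github.com/scverse/scanpy
description: Filter cells and normalize an AnnData matrix
inputs:
  - name: input_path
    type: file
    format: h5ad
outputs:
  - name: output_path
    type: file
    format: h5ad
environment:
  docker: true
  dependencies: [scanpy]
```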
| Command | Description |
|---|---|
| `toolmaker auto <URL>` | Full pipeline: discover → build → wrap |
| `toolmaker discover <URL>` | Generate `task.yaml` specs from notebooks |
| `toolmaker wrap <TOOL_DIR>` | Wrap a validated tool as MCP server |
| `toolmaker tooluniverse <TOOL_DIR>` | Generate ToolUniverse wrapper |
See docs/cli.md for full options.
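Once wrapped, a tool is invoked over MCP's JSON-RPC 2.0 transport. The sketch below shows what a client's `tools/call` request might look like; the tool name and arguments are illustrative, not a fixed Daedalus schema.

```python
import json

# MCP requests are JSON-RPC 2.0 messages; "tools/call" invokes a named tool.
# The tool name and arguments here are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "scanpy_preprocess",
        "arguments": {"input_path": "data/pbmc.h5ad"},
    },
}
payload = json.dumps(request)
print(payload)
```

The generated `*_mcp.py` server answers such requests by running the validated tool inside its Docker container and returning the result.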
| Backend | Use Case |
|---|---|
| Ollama | Development, single GPU |
| vLLM | Production, high throughput |
| OpenAI-compatible | llama.cpp, LM Studio, LocalAI |
| LiteLLM | Multi-provider routing |
See docs/setup.md for configuration details.
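Switching backends follows the same `.env` pattern shown in the quickstart. As a sketch only: the variable names and values below for a vLLM server are assumptions modeled on the Ollama example, not documented settings — check `docs/setup.md` for the real keys.

```shell
# Hypothetical .env for a vLLM backend (values are assumptions, not
# documented settings). vLLM serves an OpenAI-compatible API, by default
# on port 8000.
TOOLMAKER_LLM_BACKEND=vllm
TOOLMAKER_MODEL=Qwen/Qwen2.5-Coder-7B-Instruct
VLLM_BASE_URL=http://localhost:8000/v1
```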
- Python 3.12+
- Docker 24.0+
- A running local LLM (Ollama recommended)
- NVIDIA GPU + Container Toolkit (optional, for CUDA tools)
*LLM Agents Making Agent Tools*, Georg Wölflein et al., ACL 2025.
Original repo: KatherLab/ToolMaker