Daedalus

Daedalus extends ToolMaker to automatically discover tasks from Jupyter notebooks and output tools as MCP servers and ToolUniverse wrappers.

What it does

Given a GitHub repository URL, Daedalus:

  1. Discovers notebooks and infers task specifications (task.yaml)
  2. Builds the tool using ToolMaker's self-correction loop (Docker-isolated)
  3. Wraps the result as an MCP server and/or a ToolUniverse tool

For example:

uv run python -m toolmaker auto <GITHUB_URL> --name <tool> --output-mcp

Architecture

GitHub Repo
    │
    ▼
DISCOVER ─── analyze_repo() → parse_notebook() → generate_task_spec() → task.yaml
    │
    ▼
BUILD ─────── install (Docker) → create (LLM loop) → validate (tests)
    │
    ▼
WRAP ──────── wrap_as_mcp() → *_mcp.py
              generate_tooluniverse_tool() → *_tu.py + *_config.json
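
The task.yaml emitted by the DISCOVER stage might look like the following sketch. The field names here are illustrative assumptions, not the actual Daedalus schema:

```yaml
# Hypothetical task.yaml sketch -- field names are illustrative,
# not the authoritative Daedalus schema.
name: scanpy_preprocess
repo: https://github.com/scverse/scanpy
description: Filter, normalize, and log-transform an AnnData object
inputs:
  - name: input_path
    type: path
    description: Path to the input .h5ad file
outputs:
  - name: output_path
    type: path
    description: Path to the preprocessed .h5ad file
```

Because every output (MCP server, ToolUniverse wrapper) derives from this spec, editing it by hand before the BUILD stage is a natural intervention point.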

Quick Start

# Clone with submodules
git clone --recursive https://github.com/<you>/daedalus
cd daedalus

# Install dependencies
uv sync

# Configure LLM backend
cat > .env << 'EOF'
TOOLMAKER_LLM_BACKEND=ollama
TOOLMAKER_MODEL=qwen2.5-coder:7b
OLLAMA_BASE_URL=http://localhost:11434
EOF

# Verify setup
uv run python verify_setup.py

# Run full pipeline
uv run python -m toolmaker auto https://github.com/scverse/scanpy \
  --name scanpy_preprocess \
  --output-mcp \
  --output-tooluniverse

Key Concepts

  • task.yaml is the canonical intermediate representation — all outputs derive from it
  • Tool code executes inside Docker containers, never on the host
  • Only open-source LLMs are required (Ollama, vLLM, llama.cpp)
  • A single MCP server exposes all tools from a repository (see ADR-0001)

CLI Reference

Command                              Description
toolmaker auto <URL>                 Full pipeline: discover → build → wrap
toolmaker discover <URL>             Generate task.yaml specs from notebooks
toolmaker wrap <TOOL_DIR>            Wrap a validated tool as an MCP server
toolmaker tooluniverse <TOOL_DIR>    Generate a ToolUniverse wrapper

See docs/cli.md for full options.
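
The pipeline can also be run in stages, so the generated spec can be reviewed before building. The commands below are an illustrative sketch based on the table above; exact flags may differ (see docs/cli.md):

```shell
# Stage 1: generate task.yaml specs from the repository's notebooks
uv run python -m toolmaker discover https://github.com/scverse/scanpy

# Inspect or edit the generated task.yaml, then wrap the validated
# tool directory as an MCP server (<TOOL_DIR> is wherever the build
# step placed the tool)
uv run python -m toolmaker wrap <TOOL_DIR>
```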

LLM Backends

Backend              Use Case
Ollama               Development, single GPU
vLLM                 Production, high throughput
OpenAI-compatible    llama.cpp, LM Studio, LocalAI
LiteLLM              Multi-provider routing

See docs/setup.md for configuration details.
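
As an example, switching to a vLLM backend might look like the fragment below. Only TOOLMAKER_LLM_BACKEND and TOOLMAKER_MODEL appear in the Quick Start; the base-URL variable name and model tag are assumptions, so check docs/setup.md for the authoritative names:

```
# Hypothetical .env for a vLLM backend -- base-URL variable name
# and model tag are assumptions, not confirmed by this README.
TOOLMAKER_LLM_BACKEND=vllm
TOOLMAKER_MODEL=Qwen/Qwen2.5-Coder-32B-Instruct
VLLM_BASE_URL=http://localhost:8000/v1
```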

Requirements

  • Python 3.12+
  • Docker 24.0+
  • A running local LLM (Ollama recommended)
  • NVIDIA GPU + Container Toolkit (optional, for CUDA tools)

Documentation

See docs/cli.md for CLI options and docs/setup.md for LLM backend configuration.
Based on

Georg Wölflein et al., "LLM Agents Making Agent Tools", ACL 2025. Original repository: KatherLab/ToolMaker.
