Automated ML approach planning, implementation, tuning, and reporting on Modal GPU infrastructure with a Supabase-backed realtime dashboard.
```
.
├── modal-agent-swarm/                  # Modal backend (Python)
│   ├── orchestrator.py                 # 4-phase async pipeline (Plan -> Implement -> Tune -> Report)
│   ├── dashboard_launcher_service.py   # Local FastAPI launcher service for dashboard run starts
│   ├── llm_service.py                  # Deployed shared LLM service app (ml-agent-llm-service)
│   ├── modal_app.py                    # Modal app/resources and shared images
│   ├── agents/                         # Plan/impl/tuning/report agents + LLM handle binding
│   ├── schemas/                        # Pydantic contracts for all phase handoffs
│   ├── supabase_helpers.py             # Supabase run/update helpers for dashboard
│   └── test_dashboard.py               # Optional fake pipeline simulator for UI testing
│
├── dashboard-next/                     # Next.js frontend (TypeScript)
│   └── src/app/
│       ├── page.tsx                    # Landing page
│       ├── login/                      # Email/password auth
│       ├── signup/                     # Account creation
│       ├── dashboard/                  # Main ML pipeline dashboard + "Start New Run" form
│       └── api/runs/start/route.ts     # Server route that forwards to the launcher service
│
└── supabase/
    └── migrations/20260404_init.sql    # swarm_runs schema + RLS + realtime
```
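The Plan -> Implement -> Tune -> Report flow in `orchestrator.py` can be sketched as a chain of async phases, each consuming the previous phase's output. This is a simplified illustration only — the phase names match the pipeline above, but the payload fields and function bodies are placeholders, not the real schemas:

```python
import asyncio


async def plan(task: str) -> dict:
    # Produce an approach plan for the task (placeholder payload).
    return {"task": task, "approach": "baseline-model"}

async def implement(plan_out: dict) -> dict:
    # Turn the plan into a trained artifact (placeholder).
    return {**plan_out, "artifact": f"model-for-{plan_out['approach']}"}

async def tune(impl_out: dict) -> dict:
    # Hyperparameter tuning step (placeholder metric).
    return {**impl_out, "best_score": 0.91}

async def report(tune_out: dict) -> dict:
    # Summarize the run for the dashboard (placeholder report).
    return {"summary": f"{tune_out['task']}: score={tune_out['best_score']}"}

async def run_pipeline(task: str) -> dict:
    # Phases run sequentially; each hands off to the next.
    out = await plan(task)
    out = await implement(out)
    out = await tune(out)
    return await report(out)

result = asyncio.run(run_pipeline("Binary classification"))
```

In the real pipeline, each handoff is validated against the Pydantic contracts in `schemas/`.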
- Users can sign up/log in and start a run directly from the dashboard.
- Dashboard accepts:
- dataset file upload
- labels file upload
- task prompt
- optional run name
- The backend uploads files to a Modal Volume and launches the orchestrator automatically.
- Realtime updates stream into the dashboard from Supabase (`swarm_runs`).
- The orchestrator uses the deployed LLM service by default and does not silently fall back.
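Backend writes to `swarm_runs` go through `supabase_helpers.py`. A minimal sketch of that pattern — the column names (`phase`, `status`, `updated_at`) are assumptions, not the real schema, and the `supabase-py` call is commented out so the snippet stays offline:

```python
import os
from datetime import datetime, timezone


def build_run_update(phase: str, status: str) -> dict:
    """Build an update payload for a swarm_runs row (column names assumed)."""
    return {
        "phase": phase,
        "status": status,
        "updated_at": datetime.now(timezone.utc).isoformat(),
    }


def push_run_update(run_id: str, phase: str, status: str) -> dict:
    payload = build_run_update(phase, status)
    # With supabase-py, the write would look like:
    #   from supabase import create_client
    #   client = create_client(os.environ["SUPABASE_URL"],
    #                          os.environ["SUPABASE_SERVICE_ROLE_KEY"])
    #   client.table("swarm_runs").update(payload).eq("id", run_id).execute()
    return payload


update = push_run_update("some-run-id", "tune", "running")
```

Because the dashboard subscribes to `swarm_runs` via Supabase realtime, every such update appears in the UI without polling.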
## Setup

### 1. Supabase

Run the SQL in `supabase/migrations/20260404_init.sql` in the Supabase SQL Editor.
### 2. Backend (Modal)

```bash
cd modal-agent-swarm

# If using uv (recommended)
uv sync

# OR classic venv
python -m venv .venv
.venv\Scripts\activate          # Windows
# source .venv/bin/activate     # macOS/Linux
pip install -r requirements.txt

# Modal auth
modal setup

# Create the backend secret used by Modal workers
modal secret create supabase-secrets SUPABASE_URL="https://your-project.supabase.co" SUPABASE_SERVICE_ROLE_KEY="your-service-role-key"

# Deploy the shared LLM service first (recommended)
modal deploy llm_service.py

# Deploy the orchestrator app
modal deploy orchestrator.py
```

### 3. Frontend (Next.js)

```bash
cd dashboard-next
npm install
```

Create `dashboard-next/.env.local`:

```bash
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
# Local launcher service (optional)
# DASHBOARD_LAUNCHER_URL=http://127.0.0.1:8001/start-run
```

Then run:

```bash
npm run dev
```

### 4. Launcher service

Start the local launcher service in another terminal:

```bash
cd modal-agent-swarm
uv run python dashboard_launcher_service.py
# or: python dashboard_launcher_service.py
```

The orchestrator always uses the deployed LLM service, resolved via `modal.Cls.from_name(...)`.
- `LLM_SERVICE_APP_NAME` (default `ml-agent-llm-service`): deployed LLM app name.
- `LLM_SERVICE_CLASS_NAME` (default `LLMServer`): deployed class name inside the LLM service app.
- `MODAL_ENVIRONMENT` (default `main`): environment used when resolving the deployed class.
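Resolution of the deployed LLM class might look like the following sketch — the env-var names and defaults match the list above; the `modal.Cls.from_name` call is commented out so the example runs without Modal credentials:

```python
import os


def llm_service_target() -> tuple:
    """Read the LLM routing env vars, falling back to the documented defaults."""
    app = os.environ.get("LLM_SERVICE_APP_NAME", "ml-agent-llm-service")
    cls = os.environ.get("LLM_SERVICE_CLASS_NAME", "LLMServer")
    env = os.environ.get("MODAL_ENVIRONMENT", "main")
    return app, cls, env


app_name, class_name, env_name = llm_service_target()
# Resolving the deployed class (requires `modal` and an authenticated client):
#   import modal
#   LLMServer = modal.Cls.from_name(app_name, class_name, environment_name=env_name)
#   server = LLMServer()
```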
## Using the dashboard

- Open http://localhost:3000/signup (or `/login`).
- Go to `/dashboard`.
- In the sidebar "Start New Run" form, provide:
  - task prompt
  - dataset file
  - labels file
  - optional run name
- Click **Start Run**.
- Watch live phase/chat/flow updates as the Modal run executes.
## Running the orchestrator directly

```bash
cd modal-agent-swarm
modal run orchestrator.py --dataset-path /vol/datasets/sample.csv --task-description "Binary classification"
```

With dashboard tracking to an existing `swarm_runs.id`:

```bash
modal run orchestrator.py --dataset-path /vol/datasets/sample.csv --labels-path /vol/datasets/sample.labels --task-description "Binary classification" --swarm-run-id "uuid"
```

## Environment variables

- Frontend:
  - `NEXT_PUBLIC_SUPABASE_URL`
  - `NEXT_PUBLIC_SUPABASE_ANON_KEY`
  - `DASHBOARD_LAUNCHER_URL` (optional, default `http://127.0.0.1:8001/start-run`)
- Modal/runtime:
  - `MODAL_APP_NAME` (default `ml-agent-swarm`)
  - `MODAL_VOLUME_NAME` (default `ml-agent-swarm-data`)
  - `MODAL_ENVIRONMENT` (default `main`)
- Launcher behavior:
  - The dashboard launcher uses the Modal Python SDK (`modal.Volume.batch_upload`, `modal.Function.from_name(...).spawn(...)`) and does not shell out to the `modal` CLI.
- LLM routing:
  - `LLM_SERVICE_APP_NAME` (default `ml-agent-llm-service`)
  - `LLM_SERVICE_CLASS_NAME` (default `LLMServer`)
- Supabase (for backend writes):
  - `SUPABASE_URL`
  - `SUPABASE_SERVICE_ROLE_KEY`
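The launcher's SDK-based upload-and-spawn flow can be sketched roughly as follows — the per-run remote path layout and the spawned function name (`run_pipeline`) are assumptions, and the Modal calls are commented out so the snippet runs offline:

```python
import os
from pathlib import PurePosixPath


def remote_paths(run_id: str, dataset: str, labels: str) -> dict:
    """Compute per-run destinations on the Modal Volume (layout assumed)."""
    base = PurePosixPath("/datasets") / run_id
    return {
        "dataset": str(base / os.path.basename(dataset)),
        "labels": str(base / os.path.basename(labels)),
    }


paths = remote_paths("run-123", "local/train.csv", "local/train.labels")
# With the Modal Python SDK, the launcher would then do roughly:
#   import modal
#   vol = modal.Volume.from_name(os.environ.get("MODAL_VOLUME_NAME", "ml-agent-swarm-data"))
#   with vol.batch_upload() as batch:
#       batch.put_file("local/train.csv", paths["dataset"])
#       batch.put_file("local/train.labels", paths["labels"])
#   fn = modal.Function.from_name("ml-agent-swarm", "run_pipeline")  # function name assumed
#   call = fn.spawn(dataset_path=paths["dataset"], labels_path=paths["labels"])
```

`spawn` returns immediately with a call handle, which is what lets the launcher respond to the dashboard without waiting for the run to finish.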
## Type checks and tests

```bash
cd dashboard-next
npx tsc --noEmit
npm run build
```

```bash
cd modal-agent-swarm
pytest -q
```

## Troubleshooting

- Dashboard start API returns 500:
  - Check the response JSON `details` and verify `dashboard_launcher_service.py` is running.
  - Verify `DASHBOARD_LAUNCHER_URL` matches the launcher host/port.
- LLM service mismatch / unexpected local fallback:
  - Ensure `llm_service.py` is deployed to the same `MODAL_ENVIRONMENT` used by the orchestrator.
- No realtime updates:
  - Confirm the Supabase realtime publication is enabled for `swarm_runs` (the migration includes this).
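For the 500 case, a quick way to sanity-check where `DASHBOARD_LAUNCHER_URL` actually points (default shown; the connectivity probe is commented out since it needs the launcher running):

```python
import os
from urllib.parse import urlsplit


def launcher_host_port() -> tuple:
    """Return (host, port) from DASHBOARD_LAUNCHER_URL, using the documented default."""
    url = os.environ.get("DASHBOARD_LAUNCHER_URL", "http://127.0.0.1:8001/start-run")
    parts = urlsplit(url)
    return parts.hostname, parts.port or 80


host, port = launcher_host_port()
# To confirm something is listening there:
#   import socket
#   socket.create_connection((host, port), timeout=2).close()
```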
