Your AI development environment's doctor.
Daedalus is a Copilot CLI agent that autonomously probes your entire AI development
environment: IDE, OS, hardware (CPU/RAM/GPU/disk), all configured MCP servers, network
latency to AI APIs, and the model's own capabilities. It then produces a single,
self-contained report.html with a health score, a classified issue list, and
copy-pasteable fix commands.
| File | Purpose |
|---|---|
| `agents/debugger-agent.agent.md` | Top-level agent registration (Copilot CLI) |
| `agents/debugger-agent/daedalus.agent.md` | Full agent definition + execution protocol |
| `agents/debugger-agent/SKILL.md` | Skill trigger entry point (legacy compat) |
| `agents/debugger-agent/README.md` | Agent-level README with architecture docs |
| `install.ps1` | Windows installer (PowerShell) |
| `install.sh` | macOS / Linux installer (Bash) |
| `LICENSE` | MIT License |
| Requirement | Details |
|---|---|
| GitHub Copilot | Active subscription (Individual, Business, or Enterprise) |
| Copilot CLI | Installed and authenticated (`copilot-cli`) |
| IDE | VS Code, Cursor, or Windsurf with Copilot extension |
| OS | Windows 10+, macOS 12+, or Ubuntu 20.04+ |
| Node.js (optional) | 18+ (only needed if using the MCP server companion) |
```sh
git clone https://github.com/SufficientDaikon/daedalus-debugger.git
cd daedalus-debugger
```

Windows (PowerShell):

```powershell
.\install.ps1
```

macOS / Linux:

```sh
chmod +x install.sh
./install.sh
```

Close and reopen VS Code / Cursor / Windsurf so the agent is picked up.
Open your IDE's Copilot Chat panel and type:
@debugger-agent Run a full diagnostic and give me report.html
Daedalus runs autonomously through all 7 phases and drops a self-contained
report.html in your working directory. Open it in any browser.
Daedalus executes a fully autonomous 7-phase diagnostic pipeline, with no prompts:
```
┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐
│ Phase 0  │   │ Phase 1  │   │ Phase 2  │   │ Phase 3  │
│ Orient   │──▶│ Hardware │──▶│ MCP      │──▶│ Network  │
│          │   │ Baseline │   │ Audit    │   │ Probe    │
└──────────┘   └──────────┘   └──────────┘   └──────────┘
                                                   │
┌──────────┐   ┌──────────┐   ┌──────────┐         │
│ Phase 6  │   │ Phase 5  │   │ Phase 4  │◀────────┘
│ Report   │◀──│ Issues   │◀──│ Model    │
│ .html    │   │ + Score  │   │ Bench    │
└──────────┘   └──────────┘   └──────────┘
```
| Phase | Name | What Happens |
|---|---|---|
| 0 | Orient | Detects IDE, model, OS, shell, Node/Python, MCP config path |
| 1 | Hardware | Benchmarks CPU, RAM, GPU (VRAM + temp), disk I/O |
| 2 | MCP Audit | Tests every configured MCP server: connection + all tools |
| 3 | Network | Measures latency to Anthropic, OpenAI, Google, GitHub APIs |
| 4 | Model Bench | Self-benchmarks latency, tool use, JSON output, code gen |
| 5 | Issue Scan | Classifies issues by severity, computes health score (0β100) |
| 6 | Report | Writes self-contained report.html with fixes and roadmap |
The health score is computed from classified issues:
```
score = 100 - (CRITICAL × 20) - (HIGH × 10) - (MEDIUM × 5) - (LOW × 1)
score = clamp(score, 0, 100)
```
| Severity | Penalty | Cap | Example |
|---|---|---|---|
| CRITICAL | −20 | −60 | MCP server unreachable, model offline |
| HIGH | −10 | −30 | GPU driver outdated, high RAM usage |
| MEDIUM | −5 | −10 | API latency > 300 ms, missing extension |
| LOW | −1 | −5 | Suboptimal config, minor inefficiency |
| INFO | 0 | n/a | Observations (not problems) |
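The scoring rule above can be sketched in Python. This is a minimal illustration, assuming each severity's cap limits that severity's total penalty contribution:

```python
def health_score(critical=0, high=0, medium=0, low=0):
    # Sum per-severity penalties, capping each severity's total
    # contribution at the cap from the table above.
    penalty = (
        min(critical * 20, 60)
        + min(high * 10, 30)
        + min(medium * 5, 10)
        + min(low * 1, 5)
    )
    # Clamp the final score to the 0-100 range.
    return max(0, min(100, 100 - penalty))
```

For example, one CRITICAL and two HIGH issues give 100 − 20 − 20 = 60, and ten CRITICAL issues bottom out at 40 because the CRITICAL penalty is capped at −60.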
| Score Range | Label | Meaning |
|---|---|---|
| 90β100 | Healthy | All systems nominal |
| 75β89 | Good | Minor issues only |
| 60β74 | Degraded | Some features impaired |
| 40β59 | Impaired | Multiple significant issues |
| 20β39 | Critical | Core functionality broken |
| 0β19 | Down | Environment non-functional |
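A matching lookup for the label bands, as a small sketch:

```python
def score_label(score):
    # Walk the bands from the table above, highest threshold first.
    bands = [(90, "Healthy"), (75, "Good"), (60, "Degraded"),
             (40, "Impaired"), (20, "Critical"), (0, "Down")]
    for threshold, label in bands:
        if score >= threshold:
            return label
```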
@debugger-agent Run a full diagnostic and give me report.html
@debugger-agent Pre-flight check before I start a coding session
@debugger-agent Is my environment healthy enough to run the SDD pipeline?
@debugger-agent My github MCP server stopped working. Debug it.
@debugger-agent Network feels slow. Check API latencies.
@debugger-agent My GPU benchmark is slow. Find the bottleneck.
@debugger-agent Run diagnostics and export the session to ./session-today.json
@debugger-agent Compare today's environment with last week's at ./session-old.json
@debugger-agent Full diagnostic. Fail if health_score < 60.
@debugger-agent Stress test all MCP servers. I want per-tool latency breakdowns.
A Copilot CLI agent is a markdown file that defines a specialized AI persona for
GitHub Copilot. When you type @agent-name in the Copilot Chat panel, Copilot loads
that agent's instructions and tools, giving it a focused capability set.
Daedalus is one such agent. It has:
- A system prompt that defines its diagnostic behavior (`daedalus.agent.md`)
- Tool access to PowerShell, file system, grep, glob, and optionally an MCP server
- Handoffs to other agents (e.g., `@spec-writer` after a healthy diagnostic)
- Constraints (no destructive commands, 30 s stress-test cap, always produce a report)
The agent files live in ~/.copilot/agents/ and are automatically discovered by
Copilot CLI and compatible IDEs.
By default, Daedalus writes report.html in your current working directory. Tell it
where to write:
@debugger-agent Run diagnostics. Output to ~/Desktop/env-report.html
@debugger-agent Run diagnostics but skip GPU benchmarks (I don't have a GPU).
@debugger-agent Full diagnostic. Only show issues if health_score < 80.
@debugger-agent Only test my MCP servers β skip everything else.
@debugger-agent Benchmark my GPU and nothing else.
- Verify the files are in the correct location:
  - `~/.copilot/agents/debugger-agent.agent.md`
  - `~/.copilot/agents/debugger-agent/daedalus.agent.md`
  - `~/.copilot/agents/debugger-agent/SKILL.md`
- Restart your IDE completely (not just reload window).
- Check that Copilot CLI is installed and authenticated: `copilot --version`
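As a quick sanity check, a small Python sketch (the relative paths are taken from the file table above) that reports which expected agent files are missing:

```python
from pathlib import Path

# Relative paths the installer should have created under ~/.copilot.
EXPECTED = [
    "agents/debugger-agent.agent.md",
    "agents/debugger-agent/daedalus.agent.md",
    "agents/debugger-agent/SKILL.md",
]

def missing_agent_files(root=Path.home() / ".copilot"):
    # Return the subset of expected files that do not exist yet.
    return [p for p in EXPECTED if not (root / p).exists()]
```

An empty result means the installation is in place; otherwise re-run the installer.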
Daedalus works with or without its companion MCP server. When MCP tools are unavailable, it falls back to PowerShell commands automatically:
| MCP Tool | Fallback |
|---|---|
| `debug_detect_environment` | PowerShell env var inspection |
| `debug_probe_hardware` | WMI queries / `nvidia-smi` |
| `debug_test_all_mcp_servers` | Manual per-server test |
| `debug_probe_network` | `Test-NetConnection` |
| `debug_run_stress_test` | Inline PowerShell benchmarks |
| `debug_generate_report` | Creates HTML with `create` tool |
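The fallback behavior amounts to a simple dispatch: try the MCP tool first, and if the call fails, shell out to the local command. A hypothetical sketch (the `mcp_call` callable stands in for a real MCP tool invocation and is not part of Daedalus itself):

```python
import subprocess

def run_with_fallback(mcp_call, fallback_cmd):
    # Prefer the MCP tool; on any failure (server missing,
    # unreachable, tool error), run the local fallback command
    # and return its trimmed stdout instead.
    try:
        return mcp_call()
    except Exception:
        result = subprocess.run(
            fallback_cmd, shell=True, capture_output=True, text=True
        )
        return result.stdout.strip()
```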
Daedalus is designed to always produce a report, even with partial data. If no report appears:
- Check your working directory for `report.html`
- Look for error messages in the Copilot Chat output
- Try again with: `@debugger-agent Run diagnostics. Force report generation even if phases fail.`
Daedalus is part of the OMNISKILL universal AI agent & skills framework: 48 skills, 8 bundles, 7 agents, and 5 pipelines that work across Claude Code, Copilot CLI, Cursor, Windsurf, and Antigravity.
Install the full framework:
```sh
git clone https://github.com/SufficientDaikon/omniskill.git
cd omniskill
python scripts/install.py --platform copilot-cli
```

MIT License. Copyright © 2026 SufficientDaikon