
πŸ” Daedalus β€” Autonomous AI Environment Debugger

Your AI development environment's doctor.


Daedalus is a Copilot CLI agent that autonomously probes your entire AI development environment: IDE, OS, hardware (CPU/RAM/GPU/disk), all configured MCP servers, network latency to AI APIs, and the model's own capabilities. It then produces a single, self-contained report.html with a health score, a classified issue list, and copy-pasteable fix commands.


What's Included

| File | Purpose |
| --- | --- |
| `agents/debugger-agent.agent.md` | Top-level agent registration (Copilot CLI) |
| `agents/debugger-agent/daedalus.agent.md` | Full agent definition + execution protocol |
| `agents/debugger-agent/SKILL.md` | Skill trigger entry point (legacy compat) |
| `agents/debugger-agent/README.md` | Agent-level README with architecture docs |
| `install.ps1` | Windows installer (PowerShell) |
| `install.sh` | macOS / Linux installer (Bash) |
| `LICENSE` | MIT License |

Prerequisites

| Requirement | Details |
| --- | --- |
| GitHub Copilot | Active subscription (Individual, Business, or Enterprise) |
| Copilot CLI | Installed and authenticated (`copilot-cli`) |
| IDE | VS Code, Cursor, or Windsurf with the Copilot extension |
| OS | Windows 10+, macOS 12+, or Ubuntu 20.04+ |
| Node.js (optional) | 18+ (only needed if using the MCP server companion) |

Quick Install

1. Clone

git clone https://github.com/SufficientDaikon/daedalus-debugger.git
cd daedalus-debugger

2. Run the installer

Windows (PowerShell):

.\install.ps1

macOS / Linux:

chmod +x install.sh
./install.sh

3. Restart your IDE

Close and reopen VS Code / Cursor / Windsurf so the agent is picked up.


Quick Start

Open your IDE's Copilot Chat panel and type:

@debugger-agent Run a full diagnostic and give me report.html

Daedalus runs autonomously through all 7 phases and drops a self-contained report.html in your working directory. Open it in any browser.


How It Works

Daedalus executes a fully autonomous 7-phase diagnostic pipeline with no interactive prompts:

```
┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐
│ Phase 0  │  │ Phase 1  │  │ Phase 2  │  │ Phase 3  │
│  Orient  │─▶│ Hardware │─▶│   MCP    │─▶│ Network  │
│          │  │ Baseline │  │  Audit   │  │  Probe   │
└──────────┘  └──────────┘  └──────────┘  └──────────┘
                                               │
┌──────────┐  ┌──────────┐  ┌──────────┐       │
│ Phase 6  │  │ Phase 5  │  │ Phase 4  │◀──────┘
│  Report  │◀─│  Issues  │◀─│  Model   │
│ .html    │  │ + Score  │  │  Bench   │
└──────────┘  └──────────┘  └──────────┘
```
| Phase | Name | What Happens |
| --- | --- | --- |
| 0 | Orient | Detects IDE, model, OS, shell, Node/Python, MCP config path |
| 1 | Hardware | Benchmarks CPU, RAM, GPU (VRAM + temp), disk I/O |
| 2 | MCP Audit | Tests every configured MCP server: connection + all tools |
| 3 | Network | Measures latency to the Anthropic, OpenAI, Google, and GitHub APIs |
| 4 | Model Bench | Self-benchmarks latency, tool use, JSON output, code gen |
| 5 | Issue Scan | Classifies issues by severity, computes health score (0–100) |
| 6 | Report | Writes self-contained report.html with fixes and roadmap |
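As a rough illustration of what Phase 3's network probe measures, the sketch below times a TCP connect to each API host in plain Python. This is only an approximation of the idea, not Daedalus's actual implementation; the host list mirrors the providers named in the table.

```python
import socket
import time

API_HOSTS = ["api.anthropic.com", "api.openai.com",
             "generativelanguage.googleapis.com", "api.github.com"]

def connect_latency_ms(host, port=443, timeout=3.0):
    """Return TCP connect time to host:port in milliseconds, or None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000
    except OSError:
        return None

if __name__ == "__main__":
    for host in API_HOSTS:
        ms = connect_latency_ms(host)
        print(f"{host}: {'unreachable' if ms is None else f'{ms:.0f} ms'}")
```

Note that TCP connect time understates full request latency (no TLS handshake or HTTP round trip), but it is a quick, dependency-free proxy.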

Health Score

The health score is computed from classified issues:

score = 100 - (CRITICAL × 20) - (HIGH × 10) - (MEDIUM × 5) - (LOW × 1)
score = clamp(score, 0, 100)

Each severity's total deduction is capped, per the table below:

| Severity | Penalty | Cap | Example |
| --- | --- | --- | --- |
| CRITICAL | −20 | −60 | MCP server unreachable, model offline |
| HIGH | −10 | −30 | GPU driver outdated, high RAM usage |
| MEDIUM | −5 | −10 | API latency > 300 ms, missing extension |
| LOW | −1 | −5 | Suboptimal config, minor inefficiency |
| INFO | 0 | — | Observations (not problems) |
| Score Range | Label | Meaning |
| --- | --- | --- |
| 90–100 | Healthy | All systems nominal |
| 75–89 | Good | Minor issues only |
| 60–74 | Degraded | Some features impaired |
| 40–59 | Impaired | Multiple significant issues |
| 20–39 | Critical | Core functionality broken |
| 0–19 | Down | Environment non-functional |
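The scoring rule, the per-severity caps, and the label bands above can be sketched in a few lines of Python. The function names here are illustrative, not part of Daedalus:

```python
def health_score(counts):
    """Compute a 0-100 health score from issue counts per severity.

    `counts` maps severity name to issue count, e.g. {"CRITICAL": 1, "HIGH": 2}.
    Penalties and per-severity caps follow the severity table.
    """
    penalties = {"CRITICAL": 20, "HIGH": 10, "MEDIUM": 5, "LOW": 1}
    caps = {"CRITICAL": 60, "HIGH": 30, "MEDIUM": 10, "LOW": 5}
    score = 100
    for sev, per_issue in penalties.items():
        deduction = counts.get(sev, 0) * per_issue
        score -= min(deduction, caps[sev])  # cap each severity's total deduction
    return max(0, min(100, score))  # clamp to 0-100

def label(score):
    """Map a numeric score to its band label."""
    bands = [(90, "Healthy"), (75, "Good"), (60, "Degraded"),
             (40, "Impaired"), (20, "Critical"), (0, "Down")]
    for floor, name in bands:
        if score >= floor:
            return name

print(health_score({"CRITICAL": 1, "MEDIUM": 2}))  # 100 - 20 - 10 = 70
print(label(70))  # Degraded
```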

Prompt Examples

Getting Started

@debugger-agent Run a full diagnostic and give me report.html
@debugger-agent Pre-flight check before I start a coding session
@debugger-agent Is my environment healthy enough to run the SDD pipeline?

Common

@debugger-agent My github MCP server stopped working. Debug it.
@debugger-agent Network feels slow β€” check API latencies.
@debugger-agent My GPU benchmark is slow. Find the bottleneck.

Advanced

@debugger-agent Run diagnostics and export the session to ./session-today.json
@debugger-agent Compare today's environment with last week's at ./session-old.json
@debugger-agent Full diagnostic. Fail if health_score < 60.
@debugger-agent Stress test all MCP servers β€” I want per-tool latency breakdowns.
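The "fail if health_score < 60" prompt suggests a CI gate. If you export the session JSON as in the first advanced prompt, a tiny script can enforce the threshold; note that the `health_score` key and the export's structure are assumptions for illustration, not documented behavior:

```python
import json
import sys

def gate(session_path, threshold=60):
    """Return 0 if the exported session's health score meets the threshold, else 1."""
    with open(session_path) as f:
        session = json.load(f)
    score = session.get("health_score", 0)  # key name assumed, see note above
    print(f"health_score={score} (threshold={threshold})")
    return 0 if score >= threshold else 1

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(gate(sys.argv[1]))
```

Run it after the diagnostic, e.g. `python gate.py session-today.json`, and your CI job fails on a degraded environment.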

What Is a Copilot CLI Agent?

A Copilot CLI agent is a markdown file that defines a specialized AI persona for GitHub Copilot. When you type @agent-name in the Copilot Chat panel, Copilot loads that agent's instructions and tools, giving it a focused capability set.

Daedalus is one such agent. It has:

  • A system prompt that defines its diagnostic behavior (daedalus.agent.md)
  • Tool access to PowerShell, file system, grep, glob, and optionally an MCP server
  • Handoffs to other agents (e.g., @spec-writer after a healthy diagnostic)
  • Constraints (no destructive commands, 30s stress test cap, always produce report)

The agent files live in ~/.copilot/agents/ and are automatically discovered by Copilot CLI and compatible IDEs.
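If you want to see which agent files are on disk before restarting your IDE, a quick check like the following works. The directory layout is the one described above; the helper itself is hypothetical, not part of Daedalus or Copilot CLI:

```python
from pathlib import Path

def discover_agents(agents_dir=Path.home() / ".copilot" / "agents"):
    """List *.agent.md files under the Copilot agents directory (recursively)."""
    root = Path(agents_dir)
    if not root.is_dir():
        return []
    return sorted(str(p.relative_to(root)) for p in root.rglob("*.agent.md"))

print(discover_agents())
```

For a correctly installed Daedalus, the output should include `debugger-agent.agent.md` and `debugger-agent/daedalus.agent.md`.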


Customization

Change the report output path

By default, Daedalus writes report.html in your current working directory. Tell it where to write:

@debugger-agent Run diagnostics. Output to ~/Desktop/env-report.html

Skip specific phases

@debugger-agent Run diagnostics but skip GPU benchmarks (I don't have a GPU).

Set a health threshold

@debugger-agent Full diagnostic. Only show issues if health_score < 80.

Focus on a single component

@debugger-agent Only test my MCP servers β€” skip everything else.
@debugger-agent Benchmark my GPU and nothing else.

Troubleshooting

Agent not showing in picker

  1. Verify the files are in the correct location:
    ~/.copilot/agents/debugger-agent.agent.md
    ~/.copilot/agents/debugger-agent/daedalus.agent.md
    ~/.copilot/agents/debugger-agent/SKILL.md
    
  2. Restart your IDE completely (not just reload window).
  3. Check that Copilot CLI is installed and authenticated:
    copilot --version

MCP tools unavailable

Daedalus works with or without its companion MCP server. When MCP tools are unavailable, it falls back to PowerShell commands automatically:

| MCP Tool | Fallback |
| --- | --- |
| `debug_detect_environment` | PowerShell environment variable inspection |
| `debug_probe_hardware` | WMI queries / `nvidia-smi` |
| `debug_test_all_mcp_servers` | Manual per-server test |
| `debug_probe_network` | `Test-NetConnection` |
| `debug_run_stress_test` | Inline PowerShell benchmarks |
| `debug_generate_report` | Creates HTML with the `create` tool |
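The fallback behavior in the table amounts to "call the MCP tool if it is available, otherwise shell out". A minimal sketch of that dispatch pattern, assuming a registry mapping tool names to callables (the dispatcher itself is hypothetical):

```python
import subprocess

def run_with_fallback(mcp_tools, tool_name, fallback_cmd):
    """Call an MCP tool if registered; otherwise run the shell fallback.

    `mcp_tools` maps tool names to callables; `fallback_cmd` is a shell
    command string (e.g. a PowerShell probe on Windows).
    """
    tool = mcp_tools.get(tool_name)
    if tool is not None:
        return tool()
    result = subprocess.run(fallback_cmd, shell=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

# With no MCP tools registered, the shell fallback runs instead:
print(run_with_fallback({}, "debug_probe_network", "echo fallback-probe"))
```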

Report not generated

Daedalus is designed to always produce a report, even with partial data. If no report appears:

  1. Check your working directory for report.html
  2. Look for error messages in the Copilot Chat output
  3. Try again with: @debugger-agent Run diagnostics. Force report generation even if phases fail.

Part of OMNISKILL

Daedalus is part of the OMNISKILL universal AI agent & skills framework: 48 skills, 8 bundles, 7 agents, and 5 pipelines that work across Claude Code, Copilot CLI, Cursor, Windsurf, and Antigravity.

Install the full framework:

git clone https://github.com/SufficientDaikon/omniskill.git
cd omniskill
python scripts/install.py --platform copilot-cli

License

MIT. Copyright © 2026 SufficientDaikon
