Single source of truth for Copilot customization across VS Code Chat and Copilot CLI.
- Canonical source lives under `resources/`
- Runtime targets are generated/synced into standard discovery paths:
  - `.github/agents`
  - `.github/prompts`
  - `.github/instructions`
  - `.github/copilot-instructions.md`
  - `.github/skills`
  - `.github/hooks`
  - `AGENTS.md`
  - `.vscode/mcp.json`
resources/ keeps authored assets organized and portable, while generated targets keep behavior aligned with current Copilot discovery expectations.
- Epic #1: Migrate existing agents, prompts, and instructions
- Epic #2: Build source-to-target deployment mapping
- Epic #3: Define unified Copilot customization architecture
- Epic #4: Achieve VS Code Chat + Copilot CLI parity
- Epic #5: Create canonical resources taxonomy
- `resources/`: canonical authored assets
- `targets/`: mapping and deployment contract
- `docs/`: architecture, compatibility, migration
- `scripts/`: migration and sync helpers
- `.github/`: generated/runtime discovery targets
- `.vscode/`: workspace MCP config
Use the launcher script when you want Copilot CLI to pick up work in the background without manually assembling the full command line.
Run from the repository root:
```bash
./scripts/start_copilot_handoff.sh --write-workspace-mcp
```

Default model policy:

- default: `gpt-5-mini`
- allowed overrides: `gpt-5-mini`, `gpt-4.1`
- the current CLI does not advertise `raptor-mini`, so it is not allowed by the launcher yet
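The policy above amounts to a small allow-list check. A minimal sketch in Python, purely illustrative: the launcher implements this in shell, and the function and constant names here are assumptions, not the launcher's actual code.

```python
from typing import Optional

# Illustrative allow-list for the launcher's model policy; the real
# check lives inside the launcher script and may be structured differently.
DEFAULT_MODEL = "gpt-5-mini"
ALLOWED_MODELS = {"gpt-5-mini", "gpt-4.1"}  # raptor-mini intentionally absent

def resolve_model(requested: Optional[str]) -> str:
    """Return the model to run with, rejecting overrides outside the allow-list."""
    model = requested or DEFAULT_MODEL
    if model not in ALLOWED_MODELS:
        raise ValueError(f"model {model!r} is not allowed by the launcher yet")
    return model
```

With no override, the sketch falls back to the default; an unadvertised model raises until the allow-list grows to include it.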
What it does:

- syncs canonical `resources/` into runtime `.github/` targets
- renders a runtime MCP config from `resources/mcp/mcp.json` plus values from `~/Library/Application Support/Code/User/mcp.json`
- optionally writes the rendered MCP config into `.vscode/mcp.json` for workspace parity
- starts a background Copilot CLI session using the `scrum-master` agent
- writes logs under `.copilot-runtime/logs/`
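The sync step is conceptually a directory mirror from canonical sources into discovery paths. A rough Python sketch under an assumed mapping; the actual deployment contract lives under `targets/`, and the launcher's copy logic may differ:

```python
import shutil
from pathlib import Path

# Hypothetical source-to-target mapping; the real mapping is defined
# by the repository's deployment contract and may differ.
SYNC_MAP = {
    "resources/agents": ".github/agents",
    "resources/prompts": ".github/prompts",
    "resources/instructions": ".github/instructions",
}

def sync_resources(repo_root: Path) -> None:
    """Mirror each canonical directory into its runtime discovery path."""
    for src, dst in SYNC_MAP.items():
        src_dir, dst_dir = repo_root / src, repo_root / dst
        if src_dir.is_dir():
            shutil.copytree(src_dir, dst_dir, dirs_exist_ok=True)
```

The `dirs_exist_ok=True` flag makes the mirror idempotent, so repeated launches re-sync without failing on existing targets.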
After launching, verify runtime evidence immediately:

```bash
./scripts/handoff_status.sh --tail
```

This reports:

- running status (`yes`/`no`)
- process id (PID)
- start timestamp (UTC)
- log file path
- first log lines plus the latest tail
The launcher also writes latest-run state to `.copilot-runtime/handoff.latest.env`.
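Because that state file is plain `KEY=VALUE` lines, tooling can read it without spawning a shell. A small Python sketch; the specific variable names the launcher writes (for example a PID or a log path) are assumptions here:

```python
from pathlib import Path

def read_env_file(path: Path) -> dict:
    """Parse simple KEY=VALUE lines, ignoring blanks, comments, and quotes."""
    state = {}
    for line in path.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            state[key.strip()] = value.strip().strip('"')
    return state
```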
In VS Code Chat, use a direct request such as:

> Start a background scrum-master handoff for the open copilot-config issues.

> Run the open-issues handoff in the background and write the workspace MCP config.

The intended chat behavior is to use the launcher script rather than reconstructing the full CLI command by hand.
For intent-based chat handoff with dispatch label transitions and runtime evidence output, use:
```bash
./scripts/chat_cli_handoff.sh \
  --mode start \
  --message "handoff to CLI for issue #63" \
  --issue-url "https://forgejo.138corban.toyboxcreations.net/slowder/copilot-config/issues/63"
```

The handoff launcher and related scripts include a few environment flags to help with safe, reproducible automation and optional automatic install of the Copilot CLI. These are intentionally opt-in.

- `COPILOT_AUTO_INSTALL=1`: if set, the launcher will attempt to install the `copilot` CLI automatically when it cannot find it on PATH. Recommended for disposable CI runners only.
- `COPILOT_AUTO_INSTALL_ALLOW_UNTRUSTED=1`: allow the launcher to run the upstream install script (https://gh.io/copilot-install). This runs remote code; only enable it when you trust the environment.
- `ALLOW_WORKTREE_RUN=1`: override the repository-level guard that refuses to run sync/validation/launch logic from ephemeral git worktrees. Prefer running validators from the canonical repository clone or CI.
Example (CI runner) that installs into a temporary prefix and verifies the binary exists:
```bash
export COPILOT_AUTO_INSTALL=1
export COPILOT_AUTO_INSTALL_ALLOW_UNTRUSTED=1
export PREFIX="$RUNNER_TEMP"
curl -fsSL https://gh.io/copilot-install | PREFIX="$PREFIX" bash
export PATH="$PREFIX/bin:$PATH"
copilot --version
```

Notes:

- The launcher already passes an additional MCP config to the CLI using `--additional-mcp-config "@<path>"`. The runtime MCP is built from `resources/mcp/mcp.json` combined with the user's MCP file by `scripts/build_runtime_mcp_config.py`.
- For durable correctness, CI runs that sync resources and validate agent handoffs are recommended; see `.github/workflows/handoff-e2e.yml`.
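Conceptually, the runtime MCP build is a two-document JSON merge with the user's file taking precedence on conflicts. A minimal sketch, assuming a top-level `servers` map; `scripts/build_runtime_mcp_config.py` is the authoritative implementation, and its schema and precedence rules may differ:

```python
import json
from pathlib import Path

def build_runtime_mcp(canonical: Path, user: Path) -> dict:
    """Merge the canonical MCP config with the user's, user entries winning.

    Assumes both files carry a top-level "servers" map; the real build
    script may use a different schema or precedence.
    """
    base = json.loads(canonical.read_text()) if canonical.exists() else {}
    overlay = json.loads(user.read_text()) if user.exists() else {}
    merged = dict(base)
    merged["servers"] = {**base.get("servers", {}), **overlay.get("servers", {})}
    return merged
```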
Workflow details and validation checklist:

- `docs/chat-to-cli-handoff.md`
- `scripts/validate_chat_cli_handoff.sh`
The optional cadence prompt is available at:
`resources/prompts/autonomous-dispatch-cadence.prompt.md`
Use it when you want scrum-master to run a periodic ready-queue scan and return a deterministic next dispatch batch with blockers and owner approvals called out.
Use `Tasks: Run Task` and choose one of these workspace tasks:

- Copilot: Start Background Handoff
- Copilot: Start Background Handoff (Default)
- Copilot: Start Foreground Handoff
- Copilot: Tail Latest Handoff Log
- Copilot: Show Latest Handoff Status
The task definitions live in `.vscode/tasks.json` and call the same launcher script, so chat, tasks, and direct shell usage all go through one workflow.
```bash
./scripts/start_copilot_handoff.sh \
  --write-workspace-mcp \
  --prompt "Pick up the next two ready issues only and stop after opening PRs."
```

To override the model for a single run:

```bash
./scripts/start_copilot_handoff.sh --model gpt-4.1
```

Or set it for the session:

```bash
export COPILOT_HANDOFF_MODEL=gpt-4.1
./scripts/start_copilot_handoff.sh
```

To run in the foreground instead:

```bash
./scripts/start_copilot_handoff.sh --foreground
```

- runtime MCP overlay: `.copilot-runtime/mcp.runtime.json`
- session logs: `.copilot-runtime/logs/`
- local runtime state is gitignored and should not be committed
- Complete and maintain migration inventory from legacy repos
- Dogfood in selected repos and publish validation evidence
- Deprecate old split config methods after parity checks