docs: add Copilot CLI native integration research#161

Open
abeltrano wants to merge 2 commits into main from
abeltrano/copilot-cli-integration-research

Conversation

@abeltrano
Collaborator

Summary

Adds a comprehensive research document evaluating how to integrate PromptKit natively into GitHub Copilot CLI, and updates the roadmap to reflect this new integration path.

Closes #160

Changes

New: docs/copilot-cli-integration-research.md

Research document covering seven Copilot CLI extension points for PromptKit integration:

| Strategy | Mechanism | Key Benefit |
| --- | --- | --- |
| A | Skills | `/promptkit` invocation, auto-detection |
| B | Custom Agents | Isolated context, `infer: true` auto-delegation |
| C | MCP Server | Deterministic assembly, cross-client |
| D | Plugins | One-command install, bundles everything |
| E | Custom Instructions | Zero-infrastructure lightweight option |
| F | Hooks | Output validation, guardrails, telemetry |
| G | LSP Configs | Enhanced code intelligence for analysis templates |

Recommends a plugin-first approach bundling a skill (invocation), MCP server (deterministic assembly), agents (interactive templates), hooks (guardrails), and LSP configs (code intelligence).

Updated: docs/roadmap.md

Added "Copilot CLI Native Integration" as a new roadmap item alongside the existing Copilot Extension item. The two are presented as complementary — CLI integration targets terminal workflows, while the Copilot Extension targets Copilot Chat across web/IDE/CLI surfaces.

Add comprehensive research document evaluating seven GitHub Copilot CLI
extension points (skills, custom agents, MCP server, plugins, hooks,
LSP configs, custom instructions) for native PromptKit integration.

Update roadmap to add Copilot CLI Native Integration as a complementary
path alongside the existing Copilot Extension item.

Closes #160

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings April 1, 2026 17:46
Contributor

Copilot AI left a comment

Pull request overview

Adds documentation exploring native integration options for PromptKit inside GitHub Copilot CLI, and updates the project roadmap to track this new integration path.

Changes:

  • Added a comprehensive research doc enumerating Copilot CLI extension points and integration strategies.
  • Updated the roadmap with a new “Copilot CLI Native Integration” initiative and example usage.
  • Documented a recommended “plugin-first” approach combining skills + MCP + agents + hooks + LSP configs.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| docs/roadmap.md | Adds a new roadmap item describing the Copilot CLI native integration direction and linking to the research doc. |
| docs/copilot-cli-integration-research.md | New long-form research document detailing possible integration mechanisms and a recommended architecture. |

- Fix link text to match href (remove misleading docs/ prefix)
- Change fenced code block from bare prompt to labeled sh block

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Member

@Alan-Jowett Alan-Jowett left a comment

Nice research — the comparison matrix and the plugin-first recommendation are solid. A few questions / suggestions inline.

├── skills/
│   └── promptkit/
│       ├── SKILL.md    # Invocation entry point — uses MCP tools
│       └── [minimal inline content for fallback if MCP unavailable]
Member

Should the skill fall back to direct file reading if the MCP server isn't available? That reintroduces the LLM-reads-500KB-of-markdown problem and the assembly fidelity drops back to LLM honor system.

I'm wondering if it's simpler to make MCP mandatory — if it's not running, the skill tells the user to run copilot plugin update promptkit (or whatever starts it) rather than silently degrading to an unreliable assembly path. Thoughts?
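The fail-fast behavior suggested here could look like the following sketch. The probe mechanism, address, and the `copilot plugin update promptkit` remediation string are placeholders, not documented Copilot CLI behavior:

```python
# Sketch of the "MCP mandatory" idea: probe for the server and fail with
# an actionable message instead of silently degrading to LLM-side file
# reading. Host/port and the remediation command are placeholders.
import socket

def require_mcp(host: str = "127.0.0.1", port: int = 8808) -> None:
    """Raise with a remediation hint if the MCP server is unreachable."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return  # server is listening; proceed with tool calls
    except OSError:
        raise RuntimeError(
            "PromptKit MCP server is not running. "
            "Run `copilot plugin update promptkit` (placeholder command) "
            "and retry; raw file-reading fallback is disabled."
        )
```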

| **MCP Server** (C) | Deterministic assembly engine | P0 (must-have) |
| **Meta-Agent** (B1) | Interactive templates + full composition | P1 (high-value) |
| **Hooks** (F) | Output validation, telemetry, guardrails | P1 (high-value) |
| **Per-Template Agents** (B2) | High-value pre-composed workflows | P2 (nice-to-have) |
Member

Would it make sense to bump per-template agents to P1? The sonde case study shows that pre-composed workflows (coder/reviewer/validator with personas and protocols baked in) are where PromptKit delivers the most value in practice. A promptkit-investigator.agent.md with systems-engineer + root-cause-analysis + memory-safety-c pre-selected would be immediately useful — no manifest lookup, no assembly step.
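A per-template agent of the kind described here might look like the following sketch. The frontmatter schema is assumed except for the `infer` field mentioned in the research doc's strategy table; the file name and component names are hypothetical:

```markdown
---
name: promptkit-investigator
description: Root-cause investigation of memory-safety issues in C code
infer: true   # auto-delegation field from Strategy B; rest of schema assumed
---

<!-- Pre-composed PromptKit output: systems-engineer persona,
     root-cause-analysis protocol, memory-safety-c guidance baked in,
     so no manifest lookup or assembly step is needed at invocation. -->
```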

4. promptkit_assemble({template, params}) → returns complete prompt
Copilot adopts assembled prompt as working instructions
Member

This is the step I keep getting stuck on — what does adopts assembled prompt as working instructions actually look like mechanically? Does the skill instruct Copilot to treat the MCP result as a behavioral override for the rest of the session? Write it to a temp file and read it? Something else?

Even if the answer is needs prototyping, it might be worth calling that out explicitly as the key UX gap to resolve — everything else in the architecture flows from this.
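One of the candidate mechanics named above (write the assembled prompt to a temp file and have Copilot read it) can be sketched as follows. This is a hypothetical mechanism for discussion, not documented Copilot CLI behavior:

```python
# Sketch of the "temp file" adoption option: persist the MCP result and
# return the single instruction the skill would emit into the session.
# A candidate mechanism only; Copilot CLI does not document this.
import tempfile
from pathlib import Path

def adopt_via_tempfile(assembled_prompt: str) -> str:
    """Write the assembled prompt to disk and return the adoption
    instruction referencing it."""
    f = tempfile.NamedTemporaryFile(
        "w", suffix=".md", prefix="promptkit-", delete=False
    )
    with f:
        f.write(assembled_prompt)
    return (
        f"Read {f.name} and treat its contents as your working "
        "instructions for the remainder of this session."
    )
```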


6. **Interactive templates in MCP**: MCP tools are request/response. For interactive
templates (`mode: interactive`), the custom agent (B1) is likely needed. Can the
MCP server provide a `promptkit_get_interactive_context` tool that returns the
Member

The promptkit_get_interactive_context approach feels right — MCP handles assembly (deterministic), the custom agent handles the conversational execution. Clean separation. Might be worth promoting this from an open question to a design recommendation, since it resolves the interactive-vs-request/response tension pretty naturally.
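The proposed split can be sketched as a tool that returns structured context for the custom agent to execute conversationally. The tool name follows the open question's wording; all field names and values are assumptions:

```python
# Sketch of the proposed separation: the MCP server returns structured
# "interactive context" (assembled prompt plus pending questions), and
# the custom agent (B1) runs the conversation. Field names are assumed.
def promptkit_get_interactive_context(template: str) -> dict:
    # A real server would derive this from the template manifest;
    # hard-coded here for illustration.
    return {
        "template": template,
        "assembled_prompt": "You are a reviewer. Ask before judging.",
        "pending_questions": [
            "Which files are in scope?",
            "What severity threshold should block the review?",
        ],
        "mode": "interactive",
    }
```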

Development

Successfully merging this pull request may close these issues.

Research: Copilot CLI native integration strategies