Merged
20 changes: 20 additions & 0 deletions .markdownlint.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,20 @@
# Default state for all rules
default: true

# Path to configuration file to extend
extends: null

# Line length
MD013: false

# Duplicate headings
MD024: false

# Inline HTML
MD033: false

# Bare URLs
MD034: false

# First line in file must be a top-level heading
MD041: false
8 changes: 0 additions & 8 deletions .pre-commit-config.yaml
@@ -20,14 +20,6 @@ repos:
rev: v0.45.0
hooks:
- id: markdownlint-fix
args:
- "--disable"
- "MD013" # Line length
- "MD024" # Duplicate headings
- "MD033" # Inline HTML
- "MD034" # Bare URLs
- "MD041" # First line in file must be a top-level heading
- "--"
exclude: "CHANGELOG.md|LICENSE"

- repo: https://github.com/alessandrojcm/commitlint-pre-commit-hook
141 changes: 94 additions & 47 deletions README.md
@@ -1,36 +1,76 @@
# Spec Driven Development (SDD) MCP
<div align="center">
<img src="./misc/header.png" alt="Spec Driven Development header" width="400"/>
<h1>🧭 Spec-Driven Development Workflow</h1>
<h3><em>Build predictable software with a repeatable AI-guided workflow.</em></h3>
</div>

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](LICENSE)

<img alt="Spec Driven Development MCP header" src="./misc/header.png" width="400">
<p align="center">
<strong>Spec-driven development tools for collaborating with AI agents to deliver reliable outcomes.</strong>
</p>

## Why does this exist?
<p align="center">
<a href="https://github.com/liatrio-labs/spec-driven-workflow-mcp/actions/workflows/ci.yml"><img src="https://github.com/liatrio-labs/spec-driven-workflow-mcp/actions/workflows/ci.yml/badge.svg" alt="CI Status"/></a>
<a href="https://github.com/liatrio-labs/spec-driven-workflow-mcp/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License"/></a>
<a href="https://github.com/liatrio-labs/spec-driven-workflow-mcp/stargazers"><img src="https://img.shields.io/github/stars/liatrio-labs/spec-driven-workflow-mcp?style=social" alt="GitHub stars"/></a>
<a href="docs/operations.md"><img src="https://img.shields.io/badge/docs-Operations-blue" alt="Documentation"/></a>
</p>

This project provides a ubiquitous framework for spec driven development (SDD) that can be used anywhere an AI agent is used as a collaborator.
## Highlights

MCP technology provides a standard way to represent and exchange information between AI agents, and this framework provides a way to use that information to guide the process of refining and implementing specifications of all kinds. Using MCP allows users to take advantage of the framework with whatever AI tool and AI model they choose, in whatever workflow they prefer.
- **Prompt-first workflow:** Use curated prompts to go from idea → spec → task list → implementation-ready backlog.
- **Predictable delivery:** Every step emphasizes demoable slices, proof artifacts, and collaboration with junior developers in mind.
- **Bonus MCP tooling:** Optionally pair the workflow with an MCP server for automation inside modern AI clients.

## Goals
## Why Spec-Driven Development?

- **Simple:** Easy to use and understand with transparent access to the underlying tools and processes.
- **Ubiquitous:** Can be used anywhere an AI agent is used as a collaborator.
- **Reliable:** Reliable and can be trusted to deliver consistent results.
- **Flexible:** Can be used with any AI tool and AI model inside any workflow.
- **Scalable:** Can be used with any size of project.
Spec-Driven Development (SDD) keeps AI collaborators and human developers aligned around a shared source of truth. This repository packages a lightweight, prompt-centric workflow that turns an idea into a reviewed specification, an actionable plan, and a disciplined execution loop. By centering on markdown artifacts instead of tooling, the workflow travels with you—across projects, models, and collaboration environments.

Future functionality will include:
MCP technology remains available as an optional integration, but the heart of the project is the trio of prompts that guide teams from idea to demoable outcomes with consistent artifacts.

- User-defined output formats (Markdown task list, Jira objects via Atlassian MCP, GitHub issues, etc.)
- Ability to customize the prompts used to drive the SDD workflow
- TBD
## Guiding Principles

- **Clarify intent before delivery:** The spec prompt enforces clarifying questions so requirements are explicit and junior-friendly.
- **Ship demoable slices:** Every stage pushes toward thin, end-to-end increments with clear demo criteria and proof artifacts.
- **Make work transparent:** Tasks live in versioned markdown files so stakeholders can review, comment, and adjust scope anytime.
- **Progress one slice at a time:** The management prompt enforces single-threaded execution to reduce churn and unfinished work-in-progress.
- **Stay automation ready:** While SDD works with plain Markdown, the prompts are structured for MCP, IDE agents, or other AI integrations.

## Prompt Workflow

All prompts live in `prompts/` and are designed for use inside your preferred AI assistant.

1. **`generate-spec`** (`prompts/generate-spec.md`): Ask clarifying questions, then author a junior-friendly spec with demoable slices.
2. **`generate-task-list-from-spec`** (`prompts/generate-task-list-from-spec.md`): Transform the approved spec into actionable parent tasks and sub-tasks with proof artifacts.
3. **`manage-tasks`** (`prompts/manage-tasks.md`): Coordinate execution, update task status, and record outcomes as you deliver value.

Each prompt writes Markdown outputs into `tasks/`, giving you a lightweight backlog that is easy to review, share, and implement.

## How does it work?

The MCP is driven by basic Markdown files that function as prompts for the AI agent. Users can reference the specific MCP tools in their prompts to use specific flows within the SDD workflow. Users can manage the context of the AI by using the tools of their existing workflows (GitHub CLI, Atlassian MCP, etc.). The AI agent can use the tools of the user's existing workflows to perform actions (e.g., create a new issue, update an existing issue, etc.)
The workflow is driven by Markdown prompts that function as reusable playbooks for the AI agent. Reference the prompts directly, or invoke them via supported tooling, to keep the AI focused on structured outcomes. Users can manage context with their existing workflows (GitHub CLI, Atlassian MCP, etc.), and optionally let the MCP server automate portions of the process.

## Workflow Overview

Three prompts in `/prompts` define the full lifecycle. Use them sequentially to move from concept to completed work.

### Stage 1 — Generate the Spec ([prompts/generate-spec.md](./prompts/generate-spec.md))

- Directs the AI assistant to use clarifying questions with the user before writing a Markdown spec.
- Produces `/tasks/000X-spec-<feature>.md` with goals, demoable units of work, functional/non-goals, metrics, and open questions.
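
The zero-padded numbering convention for spec files can be automated. The helper below is a hypothetical sketch (not part of this repo) showing one way to compute the next spec filename from the contents of a `tasks/` directory:

```python
import re
from pathlib import Path

def next_spec_filename(tasks_dir: Path, feature: str) -> str:
    """Return the next zero-padded spec filename, e.g. '0003-spec-login.md'.

    Hypothetical helper: assumes specs follow the 000X-spec-<feature>.md
    convention described in the README.
    """
    pattern = re.compile(r"^(\d{4})-spec-.*\.md$")
    numbers = [
        int(m.group(1))
        for p in tasks_dir.glob("*.md")
        if (m := pattern.match(p.name))
    ]
    next_number = max(numbers, default=0) + 1
    return f"{next_number:04d}-spec-{feature}.md"
```

With `0001-spec-auth.md` and `0002-spec-search.md` already present, `next_spec_filename(tasks_dir, "login")` yields `0003-spec-login.md`; on an empty directory it starts at `0001`.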

### Stage 2 — Generate the Task List ([prompts/generate-task-list-from-spec.md](./prompts/generate-task-list-from-spec.md))

- Reads the approved spec, inspects the repo for context, and drafts parent tasks first.
- On confirmation from the user, expands each parent task into sequenced subtasks with demo criteria, proof artifacts, and relevant files.
- Outputs `/tasks/tasks-000X-spec-<feature>.md` ready for implementation.

### SDD Workflow Overview
### Stage 3 — Manage Tasks ([prompts/manage-tasks.md](./prompts/manage-tasks.md))

Here is a detailed diagram of the SDD workflow:
- Enforces disciplined execution: mark in-progress immediately, finish one subtask before starting the next, and log artifacts as you go.
- Bakes in commit hygiene, validation steps, and communication rituals so handoffs stay tight.

### Detailed SDD Workflow Diagram

```mermaid
sequenceDiagram
@@ -69,24 +109,34 @@ sequenceDiagram
MT->>CODE: Iterate changes
```

### Available Prompts
## Core Artifacts

The server provides three core prompts for spec-driven development:
- **Specs:** `000X-spec-<feature>.md` — canonical requirements, demo slices, and success metrics.
- **Task Lists:** `tasks-000X-spec-<feature>.md` — parent/subtask checklist with relevant files and proof artifacts.
- **Status Keys:** `[ ]` not started, `[~]` in progress, `[x]` complete, mirroring the manage-tasks guidance.
- **Proof Artifacts:** URLs, CLI commands, screenshots, or tests captured per task to demonstrate working software.
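
Because statuses live in plain Markdown checklists, they can be tallied mechanically. This is a minimal sketch (a hypothetical helper, not shipped with the repo) that summarizes a manage-tasks style file using the status keys above:

```python
import re
from collections import Counter

# Status keys from the manage-tasks guidance.
STATUS_LABELS = {" ": "not started", "~": "in progress", "x": "complete"}

def summarize_statuses(markdown: str) -> Counter:
    """Count checklist statuses in a task-list Markdown document."""
    counts = Counter()
    for match in re.finditer(r"^\s*- \[( |~|x)\]", markdown, flags=re.MULTILINE):
        counts[STATUS_LABELS[match.group(1)]] += 1
    return counts

# Illustrative fragment of a tasks-000X-spec-<feature>.md checklist.
sample = """\
- [x] 1.0 Scaffold the endpoint
- [~] 2.0 Wire up persistence
- [ ] 3.0 Add proof artifacts
"""
print(summarize_statuses(sample))
```

A summary like this could feed a dashboard or a pre-commit check that flags more than one task in progress at a time, matching the single-threaded execution rule.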

- `generate-spec`: Create a detailed specification from a feature description
- `generate-task-list-from-spec`: Generate an actionable task list from a spec
- `manage-tasks`: Manage and track progress on task lists
## Hands-On Usage (No MCP Required)

## Technologies Used
1. **Kick off a spec:** Copy or reference `prompts/generate-spec.md` inside your preferred AI chat. Provide the feature idea, answer the clarifying questions, and review the generated spec before saving it under `/tasks`.
2. **Plan the work:** Point the assistant to the new spec and walk through `prompts/generate-task-list-from-spec.md`. Approve parent tasks first, then request the detailed subtasks and relevant files. Commit the result to `/tasks`.
3. **Execute with discipline:** Follow `prompts/manage-tasks.md` while implementing. Update statuses as you work, attach proof artifacts, and pause for reviews at each demoable slice.

| Technology | Description | Link |
| --- | --- | --- |
| `uv` | Modern Python package and project manager | <https://docs.astral.sh/uv/> |
| FastMCP | Tool for building MCP servers and clients | <https://github.com/jlowin/fastmcp> |
| `pre-commit` | Git hooks for code quality and formatting | <https://pre-commit.com/> |
| Semantic Release | Automated release process (via GitHub Actions) | <https://github.com/python-semantic-release/python-semantic-release> |
### Slash Command Integration (TBD)

Guides are coming for wiring these prompts as first-class slash commands in popular IDEs and AI tools (Windsurf, VS Code, Cursor, Claude Code, Codex, and more).

## Optional: Automate with the MCP Server

Prefer tighter tooling? This repository also ships an MCP server that exposes the same prompts programmatically. Treat it as an accelerator—everything above works without it.

> Note: MCP prompt support is not uniform across AI tools. See [docs/mcp-prompt-support.md](./docs/mcp-prompt-support.md) for details.

## Quick Start
### Workflow Essentials

1. Open `prompts/generate-spec.md` inside your AI assistant and follow the instructions to produce a new spec in `tasks/`.
2. Point the assistant at the generated spec and run `prompts/generate-task-list-from-spec.md` to create the implementation backlog.
3. Use `prompts/manage-tasks.md` while executing work to keep status, demo criteria, and proof artifacts up to date.

### Installation

@@ -97,41 +147,38 @@ cd spec-driven-workflow-mcp

# Install dependencies
uv sync

# Run tests
uv run pytest
```

### Running the Server
### Run the MCP Server

**STDIO Transport (for local development):**
**STDIO (local development):**

```bash
uvx fastmcp run server.py
```

# Or start an MCP Inspector instance for local testing along with the app:
**With MCP Inspector:**

```bash
uvx fastmcp dev server.py
```

**HTTP Transport (for remote access):**
**HTTP Transport:**

```bash
uvx fastmcp run server.py --transport http --port 8000
```

See [docs/operations.md](docs/operations.md) and [CONTRIBUTING.md](CONTRIBUTING.md) for detailed configuration, contribution workflow, and deployment options.
See [docs/operations.md](docs/operations.md) and [CONTRIBUTING.md](CONTRIBUTING.md) for advanced configuration, deployment, and contribution guidelines.

## References

| Reference | Description | Link |
| --- | --- | --- |
| MCP | MCP is a standard way to represent and exchange information between AI agents | <https://modelcontextprotocol.io/docs/getting-started/intro> |
| FastMCP | The fast, Pythonic way to build MCP servers and clients. | <https://gofastmcp.com/getting-started/welcome> |
| AI Dev Tasks | Example of a basic SDD workflow using only markdown files. | <https://github.com/snarktank/ai-dev-tasks> |
| AI Dev Tasks (customized) | A customized version of AI Dev Tasks | <https://github.com/liatrio/read-me/tree/main/damien-storm/ai-stuff#feature-development-flow> |
| Spec Driven Workflow | Liatrio app that provides a unified development workflow system | <https://github.com/liatrio-labs/spec-driven-workflow> |
| AI Dev Tasks | Foundational example of an SDD workflow expressed entirely in Markdown. | <https://github.com/snarktank/ai-dev-tasks> |
| MCP | Standard protocol for AI agent interoperability, used here as an optional integration layer. | <https://modelcontextprotocol.io/docs/getting-started/intro> |
| FastMCP | Python tooling for building MCP servers and clients that power this repo's automation. | <https://github.com/jlowin/fastmcp> |

## License

This project is licensed under the Apache License, Version 2.0. See the
[LICENSE](LICENSE) file for details.
This project is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for details.
46 changes: 46 additions & 0 deletions docs/mcp-prompt-support.md
@@ -0,0 +1,46 @@
# MCP Prompt Support

This guide tracks how well popular IDEs, CLIs, and agent shells load the Spec Driven Development (SDD) prompts exposed by the MCP server. Use it to choose the smoothest environment, understand current limitations, and contribute new findings.

## Support Matrix

| Tool | Version<br />Tested | Loads MCP? | Prompt Actions | Experience | Workarounds / Notes |
| --- | --- | --- | --- | --- | --- |
| Claude Code CLI | TBD | Yes | Slash commands generated automatically | Ideal | Prompts appear as native slash commands. |
| Claude Code Desktop | TBD | Yes | TBD | Ideal | Loads successfully; verifying how quickly prompts become slash commands. |
| Claude Code IDE (JetBrains) | TBD | Yes | TBD | Ideal | Successful load; documenting slash-command behavior. |
| Cursor | TBD | Yes | Implicit trigger (no slash commands) | Ideal | Natural-language requests ("generate a spec") invoke the prompts. |
| Gemini CLI | TBD | Yes | Slash commands generated automatically | Ideal | Prompts appear as native slash commands. |
| OpenCode | TBD | Yes | Implicit trigger (no slash commands) | Ideal | Prompts are invoked through natural language requests. |
| Windsurf | TBD | Yes | No | Not good | MCP loads but returns `Error: no tools returned.` Adding a dummy tool unblocks basic use. |
| VS Code | TBD | Yes | Slash commands generated, but not executed | Not good | Prompts appear as commands but are inserted verbatim into chat; AI ignores them. |
| Codex CLI | TBD | Yes | No | Non-existent | Prompts not recognized; manual copy/paste required. |
| Codex IDE Plugin | TBD | Yes | No | Non-existent | Same as CLI—no prompt awareness. |
| Goose | TBD | Yes | TBD | TBD | Loads successfully; behavior still being evaluated. |
| Crush | TBD | TBD | TBD | TBD | Awaiting confirmation. |
| Q Developer CLI | TBD | TBD | TBD | TBD | Awaiting confirmation. |
| Q Developer IDE Plugin | TBD | TBD | TBD | TBD | Awaiting confirmation. |

## Interpretation

- **Ideal** environments either supply native slash commands or automatically invoke the correct prompt flows from natural language requests.
- **Not good** means the MCP connection succeeds but prompt usage is clumsy or broken without manual intervention.
- **Non-existent** indicates the tool ignores MCP prompts entirely today.
- **TBD** rows invite contributors to validate behavior and update this document.

## Field Notes & Tips

- Tools that surface the prompts as first-class slash commands (Claude Code CLI/Desktop, Gemini CLI) provide the fastest path to running the SDD workflow without touching raw Markdown.
- When slash commands are absent but the tool still uses the MCP (Cursor, OpenCode), instruct the assistant with the stage name ("generate spec", "generate task list", etc.) to trigger the correct prompt.
- Windsurf currently requires registering a simple placeholder tool to prevent the `no tools returned` error. Even after that workaround, prompts are still not recognized.
- VS Code recognizes the prompts but pastes the entire template back into chat. Until native execution improves, reference the relevant prompt file and run it manually in the chat window.

## How to Contribute Updates

1. Launch the MCP server with the environment you are testing.
2. Note whether prompts load automatically and how the assistant responds to each stage of the SDD workflow.
3. Capture any error messages or required workarounds.
4. Update the support matrix and notes above with your findings.
5. Open a pull request summarizing the change so the community keeps an accurate inventory.

Have results for a tool marked **TBD**? Please add them—this table is only as useful as the data we collectively maintain.
6 changes: 6 additions & 0 deletions mcp_server/__init__.py
@@ -5,6 +5,8 @@
"""

from fastmcp import FastMCP
from starlette.requests import Request
from starlette.responses import PlainTextResponse

from .config import config
from .prompts_loader import register_prompts
@@ -21,6 +23,10 @@ def create_app() -> FastMCP:
# Initialize FastMCP server
mcp = FastMCP(name="spec-driven-development-mcp")

@mcp.custom_route("/health", methods=["GET"])
async def health_check(request: Request) -> PlainTextResponse:
return PlainTextResponse("OK")

# Load prompts from the prompts directory and register them
register_prompts(mcp, config.prompts_dir)
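
The `register_prompts` implementation lives in `prompts_loader` and is not shown in this diff. A simplified stand-in for the discovery step it presumably performs, under the assumption that prompt names mirror Markdown filenames in the prompts directory, might look like:

```python
from pathlib import Path

def discover_prompts(prompts_dir: Path) -> dict[str, str]:
    """Map prompt names (Markdown file stems) to their prompt text.

    Hypothetical sketch: the real prompts_loader also registers each
    prompt with the FastMCP instance, which is omitted here.
    """
    return {
        path.stem: path.read_text(encoding="utf-8")
        for path in sorted(prompts_dir.glob("*.md"))
    }
```

Against this repo's layout, the stems would be `generate-spec`, `generate-task-list-from-spec`, and `manage-tasks`, matching the slash-command names that tools like Claude Code CLI surface.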
