Skip to content

DRAFT Protocol — Intake governance for AI tool calls. Ensures AI understands human intent before execution begins.

License

Notifications You must be signed in to change notification settings

manifold-vectors/draft-protocol

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

31 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

DRAFT Protocol

CI Python 3.10+ License: Apache 2.0 Typed

Every AI guardrail watches what already went wrong. DRAFT prevents it from going wrong in the first place.

DRAFT (Define, Rules, Artifacts, Flex, Test) is a structured intake governance protocol that forces AI agents to confirm they understand your intent before they act. Five dimensions. Three tiers. One rule: questions come before answers.

pip install draft-protocol
Output Guardrails DRAFT Protocol
When it acts After the LLM responds Before the LLM acts
What it checks Toxicity, format, policy Intent, scope, assumptions
Failure mode Catches bad output, wastes the call Prevents bad calls entirely
Evidence basis Synthetic benchmarks 50+ real governed sessions
Complementary? Yes Yes — use both for defense-in-depth

The Problem

AI agents are getting powerful. They can write code, manage files, query databases, deploy infrastructure. But they all share the same failure mode: they act on what they think you meant, not what you actually meant.

The result? Scope creep, misunderstood requirements, wasted work, and sometimes real damage — all because no one verified intent before execution.

Current solutions focus on output safety (content filtering, guardrails). Almost nobody governs the intake — the moment where intent is captured and interpreted.

How DRAFT Works

DRAFT maps every request across five dimensions:

Dimension Question Why It Matters
Define What exactly are we building? Prevents vague starts
Rules Who decides? What's forbidden? Prevents authority drift
Artifacts What goes in? What comes out? Prevents garbage-in/garbage-out
Flex What can change? What can't? Prevents scope creep
Test How do we know it worked? Prevents "done" without evidence

Each field is labeled SATISFIED, AMBIGUOUS, or MISSING. Ambiguous and missing fields generate targeted questions. The confirmation gate blocks execution until all applicable fields are confirmed by the human.

Three Tiers

Not every message needs the same scrutiny:

  • CASUAL — "What's the weather?" → Internal mapping only. No visible ceremony.
  • STANDARD — "Build a REST API" → Full pipeline. Questions for gaps. Assumptions surfaced.
  • CONSEQUENTIAL — "Restructure the auth system" → Maximum rigor. All dimensions mandatory. Devil's Advocate on assumptions. Quality review required.

Tier classification is automatic (keyword matching + optional LLM), with manual override.

Quick Start

pip install draft-protocol

MCP Clients (stdio — default)

Works with any MCP-compatible AI client. Add to your config:

Claude Desktopclaude_desktop_config.json
{
  "mcpServers": {
    "draft-protocol": {
      "command": "python",
      "args": ["-m", "draft_protocol"],
      "env": {}
    }
  }
}
Cursor.cursor/mcp.json
{
  "mcpServers": {
    "draft-protocol": {
      "command": "python",
      "args": ["-m", "draft_protocol"],
      "env": {}
    }
  }
}
Windsurf~/.codeium/windsurf/mcp_config.json
{
  "mcpServers": {
    "draft-protocol": {
      "command": "python",
      "args": ["-m", "draft_protocol"],
      "env": {}
    }
  }
}
Continue~/.continue/config.json
{
  "experimental": {
    "modelContextProtocolServers": [
      {
        "transport": { "type": "stdio", "command": "python", "args": ["-m", "draft_protocol"] }
      }
    ]
  }
}
VS Code Copilot.vscode/settings.json
{
  "github.copilot.chat.mcpServers": {
    "draft-protocol": {
      "command": "python",
      "args": ["-m", "draft_protocol"]
    }
  }
}

Web & HTTP Clients (SSE / Streamable HTTP)

For web-based MCP clients, browser extensions, or remote access:

# SSE transport (Server-Sent Events)
python -m draft_protocol --transport sse --port 8420

# Streamable HTTP (new MCP standard)
python -m draft_protocol --transport streamable-http --port 8420

Connect any SSE-capable MCP client to http://127.0.0.1:8420/sse.

REST API (for non-MCP clients & Chrome extension)

python -m draft_protocol --transport rest --port 8420

Endpoints: /classify, /session, /map, /confirm, /gate, /elicit, /assumptions, /status, /health. Full CORS support.

Chrome Extension (any AI chat)

The included Chrome extension adds DRAFT governance to any AI chat interface:

  1. Start the REST server: python -m draft_protocol --transport rest
  2. Load the extension: Chrome → chrome://extensions → Developer mode → Load unpacked → select extension/
  3. Visit any supported AI chat — a governance badge appears automatically

Supported platforms: ChatGPT, Claude, Gemini, Copilot, Mistral, Poe, Perplexity, HuggingFace Chat.

The badge shows real-time tier classification as you type. Click it for full session status. Open the side panel for the complete DRAFT workflow.

Environment Variables

Variable Default Description
DRAFT_TRANSPORT stdio Transport: stdio, sse, streamable-http, rest
DRAFT_HOST 127.0.0.1 Bind address for HTTP transports
DRAFT_PORT 8420 Port for HTTP transports
DRAFT_DB_PATH ~/.draft_protocol/draft.db SQLite database location
DRAFT_LLM_PROVIDER none LLM provider: none, ollama, openai, anthropic
DRAFT_LLM_MODEL (empty) Model name (auto-detects provider if not set)
DRAFT_EMBED_MODEL (empty) Embedding model name
DRAFT_API_KEY (empty) API key for cloud providers
DRAFT_API_BASE (empty) Custom API endpoint URL

Optional: Enhanced Intelligence with Any LLM

DRAFT works out of the box with keyword matching and heuristics. For better accuracy, connect any LLM provider:

Ollama (local, free)
{
  "env": {
    "DRAFT_LLM_PROVIDER": "ollama",
    "DRAFT_LLM_MODEL": "llama3.2:3b",
    "DRAFT_EMBED_MODEL": "nomic-embed-text"
  }
}
OpenAI
{
  "env": {
    "DRAFT_LLM_PROVIDER": "openai",
    "DRAFT_LLM_MODEL": "gpt-4o-mini",
    "DRAFT_EMBED_MODEL": "text-embedding-3-small",
    "DRAFT_API_KEY": "sk-..."
  }
}
Anthropic
{
  "env": {
    "DRAFT_LLM_PROVIDER": "anthropic",
    "DRAFT_LLM_MODEL": "claude-sonnet-4-20250514",
    "DRAFT_API_KEY": "sk-ant-..."
  }
}

Note: Anthropic does not provide an embeddings API. When using the Anthropic provider, DRAFT uses Claude for chat-based classification but embedding-based features (semantic field matching) are unavailable. For full functionality, pair Anthropic chat with a separate embedding provider, or use Ollama/OpenAI which support both chat and embeddings.

Any OpenAI-compatible API (Together, Groq, LM Studio, etc.)
{
  "env": {
    "DRAFT_LLM_PROVIDER": "openai",
    "DRAFT_LLM_MODEL": "meta-llama/Llama-3-70b-chat-hf",
    "DRAFT_API_KEY": "...",
    "DRAFT_API_BASE": "https://api.together.xyz/v1"
  }
}

Add the env block to any MCP client config above. With an LLM, DRAFT gets semantic tier classification, embedding-based field assessment, and context-aware suggestions. Without one, you still get full governance via keyword heuristics. Auto-detection: set DRAFT_LLM_MODEL without a provider and DRAFT infers it (gpt-* → openai, claude-* → anthropic, anything else → ollama).

Tools

Tool Purpose
draft_intake Start a session. Classifies tier automatically.
draft_map Map all 5 dimensions against your context.
draft_elicit Generate questions for gaps.
draft_confirm Record your answer for a field.
draft_assumptions Surface key assumptions as falsifiable claims.
draft_verify Confirm or reject an assumption.
draft_gate Check if all fields are confirmed. Blocks execution if not.
draft_review Quality self-assessment of the elicitation.
draft_status View current session state.
draft_escalate Manually increase tier.
draft_deescalate Manually decrease tier (logged).
draft_unscreen Reverse a dimension marked N/A.
draft_add_assumption Add a manual or Devil's Advocate assumption.
draft_override Override a blocked gate (logged, auditable).
draft_close Close the current session.

Example Flow

You: "Build a Python CLI that backs up my PostgreSQL database to S3"

DRAFT classifies: STANDARD (keyword: "build")

DRAFT maps dimensions and finds gaps:

  • D1 ✅ SATISFIED — CLI tool for PostgreSQL backup to S3
  • D3 ❓ MISSING — What fails without it? (manual backups? no backups at all?)
  • R3 ❓ MISSING — What's forbidden? (drop tables? modify data?)
  • T1 ❓ MISSING — How do we know it worked?
  • A2 ❓ MISSING — What inputs should be rejected?

DRAFT asks targeted questions:

"What currently handles backups? If nothing, what's the risk of the current approach?" "Are there any operations this tool must never perform?" "What does a successful backup look like — file in S3, notification, verification?"

You answer. DRAFT confirms. Gate opens. AI executes with verified intent.

End-to-End Transcript (REST API)

A complete session using curl — copy-paste to try it yourself:

# Start the server
python -m draft_protocol --transport rest --port 8420 &

# 1. Create a session
curl -s -X POST http://127.0.0.1:8420/session \
  -H "Content-Type: application/json" \
  -d '{"message": "Build a CLI that backs up PostgreSQL to S3"}'
# → {"session_id": "abc123", "tier": "STANDARD", "reasoning": "Keyword match: build", "confidence": 0.85}

# 2. Map dimensions (provide context about your project)
curl -s -X POST http://127.0.0.1:8420/map \
  -H "Content-Type: application/json" \
  -d '{"session_id": "abc123", "context": "Python CLI using boto3. Only read-only DB access. Success = verified S3 upload."}'
# → Returns dimension map with SATISFIED, AMBIGUOUS, and MISSING fields

# 3. Check what's blocking the gate
curl -s -X POST http://127.0.0.1:8420/gate \
  -H "Content-Type: application/json" \
  -d '{"session_id": "abc123"}'
# → {"passed": false, "confirmed": 2, "total": 5, "blockers": ["D3: MISSING", "A2: MISSING", "T2: MISSING"]}

# 4. Confirm a missing field
curl -s -X POST http://127.0.0.1:8420/confirm \
  -H "Content-Type: application/json" \
  -d '{"session_id": "abc123", "field_key": "D3", "value": "Without this, backups are manual and unreliable"}'
# → {"field": "D3", "status": "CONFIRMED", "value": "Without this, backups are manual and unreliable"}

# 5. Repeat for remaining blockers, then check the gate again
curl -s -X POST http://127.0.0.1:8420/gate \
  -H "Content-Type: application/json" \
  -d '{"session_id": "abc123"}'
# → {"passed": true, "confirmed": 5, "total": 5, "blockers": [], "summary": "[PASS]: 5/5"}

# Gate passed — safe to execute.

Tip: In MCP mode the AI client calls these tools automatically. The REST API is for non-MCP integrations and the Chrome extension.

Security

DRAFT includes hardened input validation:

  • Empty/whitespace message rejection at intake
  • Minimum content threshold on field confirmations (prevents bypass)
  • Empty dimension detection at gate check
  • Prompt extraction pattern detection (OWASP LLM07) — automatically escalates suspicious messages
  • Full audit trail in SQLite (every tool call logged with timestamp)

Storage

Sessions are stored in SQLite at ~/.draft_protocol/draft.db (configurable via DRAFT_DB_PATH). The database includes a full audit trail of every action.

Part of Vector Gate

DRAFT Protocol is the intake governance layer of Vector Gate, a three-gate AI governance pipeline:

  • Gate 1 — DRAFT (this project): Intake governance. Ensures AI understands intent.
  • Gate 2 — Guardian: Output governance. Checks responses against constitutional rules.
  • Gate 3 — GovMCP: Execution governance. Enforces authorized execution boundaries.

DRAFT works standalone. The full pipeline provides defense in depth.

License

Apache 2.0 — see LICENSE.

Contributing

We welcome contributions! See CONTRIBUTING.md for development setup, code style, and PR guidelines.

If you find a governance gap (gate bypassed when it shouldn't be), that's a critical bug — please report it immediately.


Built by Manifold Vector LLC. AI governance that works mechanically, not behaviorally.

About

DRAFT Protocol — Intake governance for AI tool calls. Ensures AI understands human intent before execution begins.

Topics

Resources

License

Code of conduct

Contributing

Security policy

Stars

Watchers

Forks

Packages

 
 
 

Contributors