A portable governance framework for AI coding agents
Don't make the same mistake twice. Write judgment into structure.
A framework for controlling how AI agents behave in your development environment.
Instead of relying on prompt instructions like "please don't repeat mistakes," this framework uses engineering structures — files, protocols, and registries that agents must read and follow.
Agent = Model + Harness
The model provides reasoning. The harness determines reliability.
| Component | What It Does |
|---|---|
| AGENTS.md | Root governance contract — routing, session protocol, working agreements |
| Corrections Registry | Append-only log of past mistakes (Pitfalls) and user constraints (Constraints) |
| Execution Lanes | Five levels of execution complexity — always pick the lightest sufficient one |
| Session Protocol | Six-step checklist agents follow at session startup |
| Task Contract | Four-role model (Planner/Analyst/Executor/Verifier) with hard boundaries |
| Working Agreements | Eight rules including mandatory correction capture |
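The Execution Lanes rule above ("always pick the lightest sufficient one") can be sketched as a small dispatcher. This is a hypothetical illustration, not part of the framework's scripts; the lane names and task categories below are invented for the example — the framework itself defines the five levels.

```shell
# Hypothetical lane picker: map a task category to the lightest
# sufficient lane. Lane names and categories are illustrative only.
pick_lane() {
  case "$1" in
    typo|comment)      echo "lane-1" ;;  # trivial, no verification needed
    bugfix|refactor)   echo "lane-3" ;;  # scoped change with verification
    migration|release) echo "lane-5" ;;  # full Planner/Analyst/Executor/Verifier flow
    *)                 echo "lane-3" ;;  # when unsure, default to a middle lane
  esac
}

pick_lane bugfix   # prints the lane chosen for a bug fix
```

The point is the direction of the default: escalate to a heavier lane only when the task demands it, never the other way around.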
```shell
# 1. Clone the repo
git clone https://github.com/watsonctl/agent-harness.git

# 2. Preview what will be deployed
bash agent-harness/scripts/bootstrap.sh --dry-run

# 3. Deploy to your home directory
bash agent-harness/scripts/bootstrap.sh

# 4. Validate the deployment
bash agent-harness/scripts/lint.sh
```

After bootstrap:
- `~/AGENTS.md` — your root governance contract
- `~/control/references/CORRECTIONS_REGISTRY.md` — your corrections registry
- `~/control/GOVERNANCE.md` — your governance rules
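A quick sanity check that the deployed files landed where expected can be written in a few lines of shell. `check_deploy` is a hypothetical helper, not one of the framework's scripts; only the three file paths are taken from the bootstrap output.

```shell
# Hypothetical post-bootstrap check: confirm the three deployed files
# exist under a base directory (defaults to $HOME).
check_deploy() {
  base="${1:-$HOME}"
  status=0
  for f in AGENTS.md \
           control/references/CORRECTIONS_REGISTRY.md \
           control/GOVERNANCE.md; do
    if [ ! -f "$base/$f" ]; then
      echo "missing: $base/$f" >&2
      status=1
    fi
  done
  return $status   # non-zero if anything is missing
}
```

Call `check_deploy` with no argument to check your home directory, or pass a directory to inspect a staged copy before deploying.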
```
framework/   Core concepts (modular, one file per concept)
templates/   Deploy-ready templates with {{variable}} placeholders
examples/    Sanitized real-world examples
scripts/     Bootstrap and lint tools
docs/        Architecture, customization guide, background
```
```
User corrects AI
       │
       ▼
Agent fixes it ──► Captures to Registry + Memory
                                  │
Next session reads Registry ◄─────┘
       │
       ▼
Agent doesn't repeat
```
This closes the loop that most AI setups leave open: corrections evaporate after the session ends.
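The capture step in the loop above amounts to appending an entry to the registry file. A minimal sketch, assuming the registry path deployed by bootstrap; the entry fields here are illustrative placeholders, not the framework's actual schema:

```shell
# Sketch of the "Captures to Registry" step. Append-only: entries are
# added at the end and never rewritten, matching the registry's design.
REGISTRY="${REGISTRY:-$HOME/control/references/CORRECTIONS_REGISTRY.md}"
mkdir -p "$(dirname "$REGISTRY")"

cat >> "$REGISTRY" <<EOF

## $(date +%Y-%m-%d) Pitfall
- What happened: (describe the mistake the user corrected)
- Rule going forward: (the constraint the agent must follow)
EOF
```

Because the next session's startup checklist reads this file, the correction survives the session boundary instead of evaporating with the context window.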
Anyone using AI coding agents (Claude, Gemini, Cursor, Copilot, etc.) who wants:
- Agents that don't repeat mistakes after correction
- A structured way to express and enforce constraints
- Portable governance that survives machine changes
- A framework they can share with teammates
- deepengram-harness — Obsidian + Engram knowledge governance framework
MIT