A unified-memory protocol for working with AI on a real codebase.
When you build software with both a strategist AI (Claude.ai) and an executor AI (Claude Code), you have three agents working on the same repo: you, the strategist, and the executor. Each agent has different strengths and different blind spots. None of them remembers everything across sessions. The Third Agent is a six-file protocol that keeps all three minds in sync by treating the repo as the source of truth and every mind as a cache.
This repo is the spec, the file templates, and the session protocol — extracted from a real production project where the absence of this protocol caused a seven-day debugging cascade that should have been a twelve-minute task.
Three agents work on the same codebase. Each has a different consistency model.
| Agent | Strengths | Blind spots |
|---|---|---|
| You (operator) | Continuous memory across days and weeks. Real intent. Final authority on what's true. | Forgets specifics under pressure. Cannot recall exact code edits from last session. |
| Claude.ai (strategist) | Rich session context. Reads project knowledge. Plans, drafts, advises. | No memory across conversations. Cannot run code or read live infrastructure. |
| Claude Code (executor) | Live filesystem, git, can execute anything in the repo. | No memory across sessions. Doesn't know why decisions were made. |
When their gaps line up at the wrong moment, the system fails. A scope upgrade that should take twelve minutes takes seven days. A reinstall that should preserve state uninstalls everything. A decision that was made and documented gets re-litigated three sessions in a row.
The cause is not intelligence. It's memory fragmentation across agents with no shared protocol.
If memory contradicts the repo, the repo wins.
The repo is the source of truth. All three minds are caches.
Session start = read canonical state. Session end = write canonical state. Documentation changes ride in the same commit as code changes — never separate.
This is a familiar pattern from distributed systems: eventually-consistent caches with a canonical store, write-through on session end, cache invalidation on session start. The novelty isn't the pattern. It's applying the pattern to multi-agent AI workflows where two of the three agents have no native memory at all.
Each file has a single purpose. Together they cover the failure modes that caused the original cascade.
| File | Purpose | Read when |
|---|---|---|
| `CLAUDE.md` | Architectural blueprint, conventions, hard rules, session protocols. The "how this project is built and how to behave" file. | Session start (every session) |
| `docs/current-state.md` | Living snapshot of right now. Subsystem status, what's broken, what's in flight, known gaps. The "where are we" file. | Session start (every session) |
| `docs/lessons.md` | Append-only correction log. Every time an AI was wrong, the lesson goes here. Never deleted. | Conditional — grep when relevant |
| `docs/decisions.md` | ADR-lite. Every meaningful architectural decision with reasoning. The "we already settled this" file. | Conditional — grep before proposing changes |
| `docs/recent-incidents.md` | Auto-rolled 30-day summary of significant events. Distills lessons + decisions + runbook triggers into one short read. | Session start (every session) |
| `docs/runbooks/*.md` | Reactive procedures. "When X breaks, do Y, in this order." One file per failure mode. | On demand, when symptoms match |
A seventh file, .claude-context/README.md, is the universal session-start pointer. It tells any AI which files to read first and in what order. It is the entry point for every fresh AI session.
```
your-project/
├── CLAUDE.md                    ← architectural blueprint + hard rules
├── .claude-context/
│   └── README.md                ← session-start pointer
└── docs/
    ├── current-state.md         ← living snapshot
    ├── lessons.md               ← append-only correction log
    ├── decisions.md             ← ADR-lite decision log
    ├── recent-incidents.md      ← rolling 30-day summary
    └── runbooks/
        ├── README.md            ← runbook index
        └── (one file per failure mode)
```
The bridge files live inside the project repo they govern. They are not a separate repo — they are part of the codebase. Documentation and code share one git history.
Every AI session — Claude.ai or Claude Code — follows the same start and end protocols. The files alone are not enough; the protocol is the binding behavior.
- Read `CLAUDE.md` — architectural context and hard rules
- Read `docs/current-state.md` — what's true right now
- Read `docs/recent-incidents.md` — last 30 days of significant events
- Read `docs/runbooks/README.md` — index of available recovery procedures
- Acknowledge loaded state in one sentence
- Proceed with the user's request — grounded in canonical state, not memory or guesswork
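The session-start reads can be sketched as a small helper script. This is illustrative, not part of the protocol — the file paths follow the repository layout above, and the stub contents exist only for the demo:

```shell
#!/bin/sh
set -eu

# read_canonical_state: print the bridge files in the order the protocol specifies.
read_canonical_state() {
    for f in CLAUDE.md docs/current-state.md docs/recent-incidents.md docs/runbooks/README.md; do
        if [ -f "$f" ]; then
            printf '===== %s =====\n' "$f"
            cat "$f"
        else
            # A missing bridge file is itself a signal: the protocol is not fully installed.
            printf 'WARNING: missing bridge file: %s\n' "$f" >&2
        fi
    done
}

# Demo in a throwaway directory with stub bridge files.
demo=$(mktemp -d)
cd "$demo"
mkdir -p docs/runbooks
echo 'architectural blueprint' > CLAUDE.md
echo 'subsystem status'        > docs/current-state.md
echo 'last 30 days'            > docs/recent-incidents.md
echo 'runbook index'           > docs/runbooks/README.md

read_canonical_state
```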
- `docs/lessons.md` — grep for keywords related to the current task. "Have we been wrong about this before?"
- `docs/decisions.md` — grep before proposing architectural changes. "Have we already settled this?"
- `docs/runbooks/{X}.md` — when an incident matches a runbook's symptom
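The conditional reads are just greps over two files. A hedged sketch — the keyword, the entries, and the throwaway directory are all hypothetical, for demonstration only:

```shell
#!/bin/sh
# Before touching a subsystem, ask the two conditional questions:
#   "Have we been wrong about this before?"  -> docs/lessons.md
#   "Have we already settled this?"          -> docs/decisions.md
set -eu

# Stub bridge files in a throwaway directory for the demo.
demo=$(mktemp -d)
cd "$demo"
mkdir -p docs
cat > docs/lessons.md <<'EOF'
- 2024-05-01: assumed nginx reload preserved websocket state. It does not.
EOF
cat > docs/decisions.md <<'EOF'
- D-007: chose nginx over caddy for the edge proxy (team familiarity).
EOF

keyword="nginx"   # hypothetical keyword for the current task
grep -in "$keyword" docs/lessons.md docs/decisions.md \
    || echo "no prior history for: $keyword"
```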
- Update `docs/current-state.md` if subsystem status, in-flight work, or material changes occurred
- Append to `docs/lessons.md` if a correction was logged this session
- Append to `docs/decisions.md` if an architectural decision was made
- Update the runbook's "Last triggered" field if a runbook was used
- Commit doc changes alongside code changes — same commit, never separate
- Re-upload changed bridge files to the Claude.ai project knowledge base so the cache stays in sync with the repo
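The "same commit, never separate" rule looks like this in practice. The file names and commit message are illustrative, and the throwaway repo exists only for the demo:

```shell
#!/bin/sh
# Doc changes ride in the same commit as the code they describe — never separate.
set -eu

# Throwaway repo for the demo.
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email dev@example.com
git config user.name  dev
mkdir -p docs src

echo 'retry with backoff' > src/client.py                    # the code change
echo 'client: retries with backoff' > docs/current-state.md  # the matching state update

# One commit carries both, so code and canonical state can never drift apart in history.
git add src/client.py docs/current-state.md
git commit -qm "client: add retry with backoff (state doc updated in same commit)"
git show --stat --oneline HEAD
```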
Each mind can fail to remember. The bridge guarantees the other minds, or the repo itself, can restore canonical state.
| If... | Recovery |
|---|---|
| You forget where you left off | Read current-state.md. The "In Flight" and "Recent Material Changes" sections exist for this. |
| You forget why you chose X over Y | Grep decisions.md for the keyword. Reasoning and tradeoffs are logged. |
| You forget if a problem is new or old | Grep lessons.md. If it has been seen before, it is there. |
| Claude.ai forgets a past conversation's context | Claude.ai reads canonical state at session start. Re-upload bridge files to project knowledge if they have drifted. |
| Claude.ai recommends something that contradicts a decision | Point at the relevant decisions.md entry. The hard rules require deferring to the repo when memory and repo conflict. |
| Claude Code starts fresh with zero memory | Claude Code reads CLAUDE.md and the bridge files first. .claude-context/README.md is the entry point. |
| Both Claude.ai and Claude Code disagree about state | Whichever matches current-state.md wins. If neither does, the repo is wrong — update it after verifying with the operator. |
| A failure mode appears that's not in any runbook | Resolve it manually. Then write the runbook before the session ends. |
| You forget to update the bridge after a session | Drift accumulates silently. Catch it at the next session start when current-state.md doesn't match what you find. |
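One cheap way to catch that silent drift at session start: check whether code has changed since the last commit that touched `current-state.md`. This is a coarse heuristic, not part of the protocol — code can change without the canonical state materially changing — and the throwaway repo below is only a demo:

```shell
#!/bin/sh
# drift-check: flag when src/ changed after docs/current-state.md was last updated.
set -eu

# Throwaway repo reproducing the drift scenario.
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email dev@example.com
git config user.name  dev
mkdir -p docs src
echo 'state' > docs/current-state.md
echo 'code'  > src/app.py
git add .
git commit -qm "initial: code and state in sync"

echo 'more code' >> src/app.py
git add src/app.py
git commit -qm "code-only change"   # the bridge update was forgotten

last_code_commit=$(git log -1 --format=%H -- src/)
last_docs_commit=$(git log -1 --format=%H -- docs/current-state.md)
if [ "$last_code_commit" != "$last_docs_commit" ]; then
    echo "DRIFT: src/ changed after docs/current-state.md was last updated"
fi
```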
This repo contains working file templates for every bridge file. To adopt the protocol in your own project:
- Copy the bridge files into your project repo at the paths shown in Repository layout
- Replace the placeholder content with your own architectural context, current state, and known decisions
- Add `CLAUDE.md` and the bridge files to your Claude.ai project's knowledge base
- Reference `.claude-context/README.md` at the start of every Claude Code session
- Commit. From this point forward, the protocol is live. Every session starts by reading these files. Every session that materially changes state ends by updating them.
The protocol is intentionally lightweight. Six files, two protocols, one rule. No tooling required. No external dependencies. Just discipline in reading and writing the canonical state.
The protocol was extracted from a real production project where the absence of shared memory across three agents caused a seven-day debugging cascade. The fault was not in any single agent's reasoning. It was in the system: three minds, each with partial information, no shared protocol, and no canonical state to fall back on. The same incident, with the bridge in place, would have taken twelve minutes.
The protocol is not theoretical. It was built reactively, after the cost had already been paid, to make sure the same shape of failure never compounds again.
The Third Agent is a living protocol, not a finished framework. It will be wrong in places. It will be incomplete. If you have a counter-example, a war story from your own setup, or a clarification that would make this clearer, the door is open:
- Open a Discussion for conceptual conversation — counter-examples, extensions to other stacks, questions about the framing.
- Open an Issue if you've spotted something concretely wrong or unclear in the protocol document itself.
- Open a Pull Request if you want to propose a specific change to the text.
Replies may not be immediate, but they will come.
MIT. See LICENSE.
The Third Agent was created by David Askaripour (publicly: Dave Mate), the operator behind Askarione — an ongoing journal about AI orchestration, written by someone building real products with AI rather than writing about other people building them.
If this protocol resonates, the journal goes deeper on adjacent topics: how to structure projects so AI can actually reason about them, how to delegate to Claude Code without losing the thread, how to build internal infrastructure before building products to sell. Episodes drop every two weeks.
- Journal: askarione.com/journal
- AI Project Starter Kit (the production-grade scaffolding referenced in the journal): askarione.com/kit
- YouTube: @askarionehq
- Contact: admin@askarione.com
Built reactively in response to a seven-day cascade. Released publicly because the next person hitting the same wall shouldn't have to rebuild it from scratch.