An MCP server that gives AI agents a persistent, shared memory. Insights learned in one session are available to every future session — across agents, projects, and teams.
Agents contribute operational insights they discover during work (e.g., "CommonJS requires .js extensions in imports for Node runtime"). Contributions are filtered for PII, overly specific content, and harmful patterns, then stored with confidence scores. Other agents query the knowledge base by domain, tech stack, or task description and get ranked, relevant insights back. Agents validate existing insights by confirming or contradicting them, which adjusts confidence over time — bad insights decay and go dormant.
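The lifecycle above can be sketched as a record type. This is an illustrative shape only — the field names are assumptions, not SAKE's actual schema (which lives in `src/types.ts`):

```typescript
// Illustrative shape of a stored insight. Field names are assumptions,
// not the server's real schema.
interface Insight {
  id: string;
  insight: string;       // the operational lesson itself
  domains: string[];     // e.g. ["node", "modules"]
  evidence: string;      // how the contributing agent verified it
  confidence: number;    // 0..1, recomputed over time
  confirmations: number; // validations that agreed
  contradictions: number;// validations that disagreed
}

const example: Insight = {
  id: "ins_001",
  insight: "CommonJS requires .js extensions in imports for Node runtime",
  domains: ["node", "modules"],
  evidence: "Build failed without extension; passed after adding it",
  confidence: 0.7,
  confirmations: 3,
  contradictions: 0,
};

console.log(example.domains.includes("node")); // true
```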
```bash
# Install and build
npm install
npm run build

# Register as an MCP server in Claude Code (user-scoped)
claude mcp add sake-server -s user -- node /absolute/path/to/dist/index.js

# Verify registration
/mcp
```

A blinking cursor after `npm start` means the stdio server is running correctly.
| Tool | Purpose | Key Parameters |
|---|---|---|
| `sake_query` | Search the knowledge base | `domains`, `task`, `stack`, `min_confidence`, `max_results`, `max_age_days` |
| `sake_contribute` | Add a new insight | `insight`, `domains`, `evidence` (required); `detail`, `actionable`, `context`, `task_types` (optional) |
| `sake_validate` | Confirm or contradict an insight | `insight_id`, `validation` (`confirmed` / `contradicted`), `evidence` |
| `sake_stats` | Get aggregate knowledge base and verification metrics | `include_verification` (optional, default `true`) |
| `sake_instructions` | Returns operational instructions for using SAKE effectively | (none) |
Contributions are automatically filtered (PII, harmful content, generalisation) and merged with similar existing insights when detected.
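A minimal sketch of this kind of pre-storage filtering (the patterns and function name are illustrative; SAKE's actual filters live in `src/filters.ts`):

```typescript
// Hypothetical contribution filter: reject PII-bearing or overly specific
// text before storage. Patterns are illustrative only.
const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/;
const ABSOLUTE_PATH = /(^|\s)\/(home|Users)\/\w+/;

function filterContribution(insight: string): { accepted: boolean; reason?: string } {
  if (EMAIL.test(insight)) return { accepted: false, reason: "PII: email address" };
  if (ABSOLUTE_PATH.test(insight)) return { accepted: false, reason: "overly specific: user path" };
  return { accepted: true };
}

console.log(filterContribution("Contact bob@example.com for access").accepted); // false
console.log(filterContribution("CommonJS requires .js extensions in imports").accepted); // true
```

Real filters would also check for harmful patterns and rewrite session-specific details into generalised form, but the shape — inspect, reject with a reason, or pass through — is the same.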
Copy `.env.example` to `.env` and adjust as needed:

| Variable | Default | Description |
|---|---|---|
| `SAKE_TRANSPORT` | `stdio` | `stdio` for local/Claude Code, `http` for hosted deployment |
| `SAKE_STORE` | `memory` | `memory` for local dev, `sql` for Azure SQL |
| `DATABASE_URL` | — | Required when `SAKE_STORE=sql` |
| `PORT` | `3000` | HTTP port (only used when `SAKE_TRANSPORT=http`) |
| `SAKE_API_KEY` | — | API key for HTTP auth; empty = no auth (dev mode) |
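At startup these variables resolve to a config roughly like the following (a sketch, not the server's exact code):

```typescript
// Sketch of resolving the environment variables above with their defaults.
const config = {
  transport: (process.env.SAKE_TRANSPORT ?? "stdio") as "stdio" | "http",
  store: (process.env.SAKE_STORE ?? "memory") as "memory" | "sql",
  databaseUrl: process.env.DATABASE_URL, // required only for the sql store
  port: Number(process.env.PORT ?? 3000), // only used for http transport
  apiKey: process.env.SAKE_API_KEY,      // undefined => no auth (dev mode)
};

// Fail fast on an impossible combination rather than at first query.
if (config.store === "sql" && !config.databaseUrl) {
  throw new Error("DATABASE_URL is required when SAKE_STORE=sql");
}
```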
```
Agents (Claude Code, Cursor, etc.)
        | MCP (stdio or HTTP)
        v
SAKE Server (Node.js / TypeScript)
  ├── Contribution filters (PII, generalisation, harmful content)
  ├── Similarity detection (merge duplicates)
  ├── Confidence engine (temporal decay, evidence weighting)
  └── Store
        ├── In-memory (Map, local dev)
        └── Azure SQL (production)
```
Dual transport — stdio for local Claude Code integration; HTTP (Streamable HTTP) for hosted Azure deployment. Both use the same MCP tool registration.
Dual store — In-memory store for local dev and testing; Azure SQL for production. Both implement the IKnowledgeStore interface. The knowledge base starts empty and grows organically from real agent sessions.
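The shared store interface might look roughly like this — method names are guesses from the tool surface, not the real `src/types.ts` — with the in-memory variant backed by a `Map` as in local dev:

```typescript
// Hypothetical shape of the interface both store backends implement.
interface InsightRecord {
  id: string;
  insight: string;
  domains: string[];
  confidence: number;
}

interface IKnowledgeStore {
  query(filter: { domains?: string[]; minConfidence?: number }): Promise<InsightRecord[]>;
  contribute(record: Omit<InsightRecord, "id">): Promise<InsightRecord>;
  validate(id: string, verdict: "confirmed" | "contradicted"): Promise<void>;
}

// In-memory implementation: a Map keyed by insight id.
class MemoryStore implements IKnowledgeStore {
  private insights = new Map<string, InsightRecord>();
  private nextId = 1;

  async query(filter: { domains?: string[]; minConfidence?: number }) {
    return [...this.insights.values()].filter(
      (i) =>
        (filter.minConfidence === undefined || i.confidence >= filter.minConfidence) &&
        (filter.domains === undefined || i.domains.some((d) => filter.domains!.includes(d)))
    );
  }

  async contribute(record: Omit<InsightRecord, "id">) {
    const full = { id: `ins_${this.nextId++}`, ...record };
    this.insights.set(full.id, full);
    return full;
  }

  async validate(id: string, verdict: "confirmed" | "contradicted") {
    const i = this.insights.get(id);
    if (i) i.confidence += verdict === "confirmed" ? 0.05 : -0.1;
  }
}
```

Keeping both backends behind one interface is what lets the tool handlers stay transport- and storage-agnostic.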
Confidence engine — Scores are recomputed at query time from four factors: evidence strength, consensus ratio, domain-aware recency decay, and context specificity. Insights below 0.2 confidence go dormant and are excluded from results.
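The recomputation could be sketched as below. The four factor names come from the description above and the 0.2 dormancy threshold is real; the weights and decay curve are invented for illustration:

```typescript
// Illustrative confidence recomputation from the four factors. Weights and
// the half-life decay curve are made up for this sketch.
function computeConfidence(opts: {
  evidenceStrength: number;   // 0..1
  confirmations: number;
  contradictions: number;
  ageDays: number;
  halfLifeDays: number;       // domain-aware decay speed
  contextSpecificity: number; // 0..1
}): number {
  const total = opts.confirmations + opts.contradictions;
  const consensus = total === 0 ? 0.5 : opts.confirmations / total;
  const recency = Math.pow(0.5, opts.ageDays / opts.halfLifeDays);
  const score =
    0.35 * opts.evidenceStrength +
    0.35 * consensus +
    0.2 * recency +
    0.1 * opts.contextSpecificity;
  return Math.min(1, Math.max(0, score));
}

const DORMANCY_THRESHOLD = 0.2; // below this, insights are excluded from results

const fresh = computeConfidence({
  evidenceStrength: 0.8, confirmations: 4, contradictions: 0,
  ageDays: 10, halfLifeDays: 180, contextSpecificity: 0.6,
});
const stale = computeConfidence({
  evidenceStrength: 0.2, confirmations: 0, contradictions: 5,
  ageDays: 900, halfLifeDays: 90, contextSpecificity: 0.1,
});
console.log(fresh > DORMANCY_THRESHOLD, stale < DORMANCY_THRESHOLD); // true true
```

Recomputing at query time (rather than on write) is what makes decay automatic: an insight nobody revalidates simply drifts below the threshold.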
See docs/CASE-STUDY-AMDA.md — how SAKE could have cut a 6.5-hour agent build session in half.
SAKE works with any MCP-compatible client: Claude Code, OpenClaw, Cursor, Windsurf, or custom agents. See docs/INTEGRATIONS.md for platform-specific setup guides.
```
src/
  index.ts              Entry point (stdio + HTTP transports)
  types.ts              Shared interfaces (Insight, IKnowledgeStore, etc.)
  store.ts              In-memory store
  store-sql.ts          Azure SQL store
  store-factory.ts      Store creation factory
  confidence.ts         Confidence computation with temporal decay
  filters.ts            Contribution filtering (PII, generalisation, harmful)
  similarity.ts         Duplicate detection and merge
  tools/
    query.ts            sake_query handler
    contribute.ts       sake_contribute handler
    validate.ts         sake_validate handler
    stats.ts            sake_stats handler
    instructions.ts     sake_instructions handler
  verification-store.ts Verification record tracking
test/
  harness.ts            Test harness (29 scenarios)
sql/
  001-init.sql          Azure SQL schema
  002-seed.sql          Empty (knowledge base grows organically)
  003-verification.sql  Verification records schema
  run-migrations.ts     Idempotent migration runner
infra/
  main.bicep            Azure infrastructure (SQL + Container App)
  deploy.sh             Deployment script
```
```bash
npm test   # Runs 29 scenarios against store and tool handlers directly
```

Tests cover query filtering, contribution validation/rejection, insight merging, confidence computation, and dormancy thresholds.
To test the HTTP transport locally:

```bash
npm run build
SAKE_TRANSPORT=http SAKE_STORE=memory node dist/index.js
# Logs: "SAKE server listening on port 3000 (HTTP transport, memory store)"
```

```bash
# Build
docker build -t sake-server .

# Run (in-memory store for local testing)
docker run -p 3000:3000 -e SAKE_STORE=memory sake-server
```

The container defaults to HTTP transport + SQL store. Override `SAKE_STORE=memory` for local testing without a database.
Infrastructure is defined in Bicep (`infra/main.bicep`) and provisions:

- Azure SQL Server (serverless, auto-pause after 60 min)
- Azure SQL Database (`sake-db`, 1 vCore, 1 GB)
- Container App Environment with scale-to-zero (0–3 replicas)
Deploy with:

```bash
./infra/deploy.sh
```

Post-deployment: push the Docker image, run `sql/run-migrations.ts` against the database, then test the endpoint with `curl -H 'x-api-key: YOUR_KEY' https://<app-url>/mcp`.