The universal memory layer for AI coding tools.
One integration. Every AI editor. Persistent memory that never forgets.
Every developer using AI tools has felt this frustration:
Monday morning. You've spent 3 hours with Claude explaining your authentication architecture, the edge cases, why you chose JWT over sessions, the rate limiting strategy. The AI finally gets it. You ship great code together.
Monday afternoon. New chat window. The AI has no idea what JWT is in your context. It suggests sessions. You explain everything again.
Tuesday. You switch to Cursor for a quick refactor. Start from zero. "We use TypeScript with strict mode." "Our API follows REST conventions." "The user service is in /src/services." Again.
A week later. "Why did we build it this way?" Nobody remembers. The decision rationale is buried in a closed Slack thread. The AI certainly doesn't know.
A month later. New team member joins. Days of onboarding conversations. Explaining the same architectural decisions. Documenting tribal knowledge that should already exist.
This isn't a minor inconvenience. It's death by a thousand cuts.
Every re-explanation is lost productivity. Every forgotten decision is technical debt. Every context switch is cognitive load. Every new teammate is weeks of redundant knowledge transfer.
Your AI is brilliant for 30 minutes at a time. Then it's a goldfish.
ContextStream gives your AI a permanent brain.
You: "Initialize session. Remember: we use PostgreSQL, TypeScript strict mode,
and JWT for auth. Rate limits are 100 req/min per user."
...3 weeks later, different tool, new conversation...
You: "What database do we use?"
AI: "You're using PostgreSQL. You also prefer TypeScript with strict mode
and JWT authentication with 100 req/min rate limiting per user."
It remembers. Across sessions. Across tools. Forever.
Not just facts. It remembers decisions, context, and reasoning:
You: "Why did we choose PostgreSQL over MongoDB?"
AI: "Based on your captured decision from March 15th: 'Chose PostgreSQL for
ACID compliance and complex joins in the reporting module. MongoDB
considered but rejected due to transaction requirements.'"
Sign up at contextstream.io → Settings → API Keys → Create
Claude Code / Cursor / Windsurf / VS Code:
{
  "mcpServers": {
    "contextstream": {
      "command": "npx",
      "args": ["-y", "@contextstream/mcp-server"],
      "env": {
        "CONTEXTSTREAM_API_URL": "https://api.contextstream.io",
        "CONTEXTSTREAM_API_KEY": "your_api_key"
      }
    }
  }
}

Codex CLI (~/.codex/config.toml):
[mcp_servers.contextstream]
command = "npx"
args = ["-y", "@contextstream/mcp-server"]

[mcp_servers.contextstream.env]
CONTEXTSTREAM_API_URL = "https://api.contextstream.io"
CONTEXTSTREAM_API_KEY = "your_api_key"

Note: Codex expects snake_case `mcp_servers` keys. After editing, fully restart Codex.
Codex rules (recommended): Create an AGENTS.md in your project root (project rules) or a common parent folder like ~/dev/AGENTS.md (global rules) with the ContextStream rule content. See the full template in the MCP docs: https://contextstream.io/docs/mcp
You: "Initialize session and remember I prefer functional React components"
Open a new conversation (even in a different tool):
You: "What's my React preference?"
AI: "You prefer functional React components."
That's it. Your AI remembers now.
Memory is just the foundation. ContextStream understands your codebase at a deeper level.
When your AI makes a mistake (wrong approach, broken build, production issue), capture it as a lesson:
You: "Capture lesson: Always run tests before pushing to main"
These lessons surface automatically in future sessions. Before the AI takes a similar action, it sees the warning. Your AI learns from mistakes just like you do.
| Trigger | Example |
|---|---|
| User correction | "No, we use PostgreSQL not MySQL" |
| Production issue | "That deploy broke the API" |
| Workflow mistake | "You forgot to run the linter" |
Lessons are categorized by severity (critical, high, medium, low) and automatically retrieved when relevant context is detected.
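As an illustration only (not ContextStream's actual implementation), severity-ranked lesson retrieval can be sketched like this; the `Lesson` shape, `tags` field, and `relevantLessons` function are hypothetical names invented for the example:

```typescript
// Hypothetical sketch: surface lessons whose tags overlap the current
// context, most severe first. Not the real ContextStream implementation.
type Severity = "critical" | "high" | "medium" | "low";

interface Lesson {
  text: string;
  severity: Severity;
  tags: string[]; // keywords used here for simple relevance matching
}

const severityRank: Record<Severity, number> = {
  critical: 0,
  high: 1,
  medium: 2,
  low: 3,
};

function relevantLessons(lessons: Lesson[], contextTags: string[]): Lesson[] {
  return lessons
    .filter((l) => l.tags.some((t) => contextTags.includes(t)))
    .sort((a, b) => severityRank[a.severity] - severityRank[b.severity]);
}

const lessons: Lesson[] = [
  { text: "Always run tests before pushing to main", severity: "critical", tags: ["push", "main"] },
  { text: "Run the linter before committing", severity: "medium", tags: ["commit", "lint"] },
];

// Before a "push" action, only the matching critical lesson surfaces.
console.log(relevantLessons(lessons, ["push"]).map((l) => l.text));
```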
You: "What breaks if I change the UserService class?"
See all dependencies and side effects before you refactor. No more surprise breakages.
You: "Find where we handle authentication errors"
Search by meaning, not keywords. Find code by what it does, not what it's named.
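Semantic search ranks results by comparing embedding vectors rather than matching keywords. A toy sketch of the idea, with made-up three-dimensional vectors (real systems use an embedding model with hundreds of dimensions):

```typescript
// Toy embedding search: rank code snippets by cosine similarity to a
// query vector. The vectors and snippet names are invented for illustration.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const snippets = [
  { name: "handleAuthError", vec: [0.9, 0.1, 0.0] },
  { name: "formatDate", vec: [0.0, 0.2, 0.9] },
];

// Embedding of the query "where do we handle authentication errors?"
const queryVec = [0.85, 0.15, 0.05];

const ranked = [...snippets].sort(
  (a, b) => cosine(b.vec, queryVec) - cosine(a.vec, queryVec)
);
console.log(ranked[0].name);
```

The query never mentions the function's name, yet the authentication handler ranks first because its vector points in a similar direction.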
Decisions, code, and documentation, all connected. Ask "why" and get answers with full context.
Context loads automatically on first interaction. No manual setup:
───────────────────────────────────────────
AUTO-CONTEXT LOADED (ContextStream)
───────────────────────────────────────────
Workspace: acme-corp
Project: backend-api
Recent Decisions:
  • Use PostgreSQL for persistence
  • JWT for authentication
Active Lessons:
  • Always run tests before pushing
Recent Context:
  • [decision] API rate limiting strategy
  • [preference] TypeScript strict mode
───────────────────────────────────────────
| Tool | What It Does |
|---|---|
| `session_init` | Initialize with auto-context loading |
| `context_smart` | Get relevant context for any message |
| `session_remember` | Natural language: "Remember X" |
| `session_recall` | Natural language: "What did we decide about X?" |
| `session_capture` | Store decisions, insights, preferences |
| `session_capture_lesson` | Capture mistakes to prevent repeating them |
| `session_get_lessons` | Retrieve relevant lessons |
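Under the hood, an MCP client invokes these as standard MCP `tools/call` requests. A sketch of the JSON-RPC message a client might send for `session_remember`; the JSON-RPC envelope follows the MCP specification, but the `arguments` field name here is an assumption, not ContextStream's documented schema:

```typescript
// Illustrative MCP "tools/call" request for session_remember.
// The envelope (jsonrpc/id/method/params) is the MCP wire format;
// the "content" argument name is hypothetical.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "session_remember",
    arguments: {
      content: "We use TypeScript strict mode", // hypothetical field name
    },
  },
};

console.log(JSON.stringify(request, null, 2));
```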
| Tool | What It Does |
|---|---|
| `search_semantic` | Find code by meaning |
| `search_hybrid` | Semantic + keyword combined |
| `graph_dependencies` | See what depends on what |
| `graph_impact` | Understand change impact |
| `graph_call_path` | Trace execution flows |
| `graph_unused_code` | Find dead code |
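Conceptually, impact analysis like `graph_impact` is a reachability query over the dependency graph: start at the changed symbol and walk its reverse dependencies. A toy sketch with an invented graph (not ContextStream's actual algorithm or data model):

```typescript
// Toy impact analysis: given "X depends on Y" edges, find everything that
// transitively depends on a changed symbol via breadth-first search.
const dependsOn: Record<string, string[]> = {
  AuthController: ["UserService"],
  ReportJob: ["UserService"],
  AdminPanel: ["AuthController"],
};

function impactOf(changed: string): string[] {
  // Invert the edges: for each dependency, record who depends on it.
  const dependents: Record<string, string[]> = {};
  for (const [node, deps] of Object.entries(dependsOn)) {
    for (const d of deps) (dependents[d] ??= []).push(node);
  }
  // BFS outward from the changed symbol through reverse edges.
  const seen = new Set<string>();
  const queue = [changed];
  while (queue.length > 0) {
    const cur = queue.shift()!;
    for (const dep of dependents[cur] ?? []) {
      if (!seen.has(dep)) {
        seen.add(dep);
        queue.push(dep);
      }
    }
  }
  return [...seen];
}

console.log(impactOf("UserService").sort());
```

Changing `UserService` flags not only its direct callers but also `AdminPanel`, which reaches it only through `AuthController`.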
| Tool | What It Does |
|---|---|
| `ai_context` | Build LLM-ready context |
| `ai_context_budget` | Context within token limits |
| `ai_plan` | Generate development plans |
| `ai_tasks` | Break work into tasks |
| Built-in "Memory" | ContextStream |
|---|---|
| Locked to one vendor | Universal: works with Cursor, Claude, Windsurf, any MCP client |
| Expires or resets | Persistent: never lose context |
| Basic key-value | Semantic: understands meaning and relationships |
| Personal only | Team-ready: shared workspace, instant onboarding |
| No lessons | Learns from mistakes: captures and surfaces lessons |
| No code understanding | Deep analysis: dependencies, impact, knowledge graph |
| Hope it remembers | Deterministic: you control what's stored |
- Encrypted at rest: AES-256 encryption for all stored data
- Never trains on your data: your code is yours. Period.
- You control access: workspace permissions, API key management
- Delete anytime: full data deletion on request
ContextStream uses the Model Context Protocol (MCP), the emerging standard for AI tool integrations.
Supported today:
- Claude Code
- Cursor
- Windsurf
- VS Code (with MCP extension)
- Codex CLI
- Any MCP-compatible client
One integration. Every tool. Same memory.
| Variable | Required | Description |
|---|---|---|
| `CONTEXTSTREAM_API_URL` | Yes | `https://api.contextstream.io` |
| `CONTEXTSTREAM_API_KEY` | Yes | Your API key |
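A quick pre-flight check for these variables can be sketched as follows; this is illustrative only (the server performs its own validation), and `missingVars` is a name invented for the example:

```typescript
// Hypothetical pre-flight check: confirm the required ContextStream
// variables are set before launching the MCP server.
const required = ["CONTEXTSTREAM_API_URL", "CONTEXTSTREAM_API_KEY"];

function missingVars(env: Record<string, string | undefined>): string[] {
  // Report any required variable that is unset or empty.
  return required.filter((name) => !env[name]);
}

const missing = missingVars(process.env);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(", ")}`);
}
```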
# Install globally
npm install -g @contextstream/mcp-server

# Or run via npx (recommended for MCP configs)
npx @contextstream/mcp-server

- Add MCP config to your editor
- Start a conversation: "Initialize session for [project-name]"
- Tell it your preferences: "Remember we use TypeScript strict mode"
- Make a decision: "Capture decision: Using PostgreSQL for the user database"
- Open a new conversation and ask: "What are my preferences?"
| Resource | URL |
|---|---|
| Website | contextstream.io |
| Documentation | contextstream.io/docs |
| MCP Setup Guide | contextstream.io/docs/mcp |
| npm Package | @contextstream/mcp-server |
| GitHub | contextstream/mcp-server |
We welcome contributions:
- Report bugs β Open an issue
- Request features β Share ideas in GitHub Issues
- Submit PRs β Fork, branch, and submit
git clone https://github.com/contextstream/mcp-server.git
cd mcp-server
npm install
npm run dev # Development mode
npm run build # Production build
npm run typecheck

License: MIT
Stop re-explaining. Start building.