# Condensate

**Standardizing the "Brain" of AI Agents.**

Condensate is an open-source Memory Condensation OS that gives AI agents structured, deterministic, and verifiable long-term memory. It replaces the "bag of text" RAG approach with a rigorous ontology of Events, Learnings, and Policies, enforcing Traffic Control (No-LLM paths) and Cognitive Provenance (Proof Envelopes).
## Installation

**Python**

```bash
pip install condensate
```

**TypeScript / Node.js**

```bash
npm install @condensate/sdk
```

**Claude / Cursor / Windsurf (MCP)**

```bash
npx -y @condensate/core
```

**Rust**

```bash
cargo add condensate
```

**Go**

```bash
go get github.com/condensate/condensate-go-sdk
```

## Prerequisites

- Docker & Docker Compose
- Python 3.11+
## Quick Start

```bash
git clone https://github.com/condensate-io/core
cd core
cp .env.example .env
# Edit .env with your settings (see Environment Variables below)
./start.sh
```

This starts:

- Condensate Core API on http://localhost:8000
- Admin Dashboard on http://localhost:3010
- Qdrant (vector store) on http://localhost:6333
- Ollama (local LLM) on http://localhost:11434
Open http://localhost:3010 → **API Keys** → **Create Key**. Copy the `sk-...` value.
```python
from condensate import CondensateClient

client = CondensateClient("http://localhost:8000", "sk-your-key")
client.store_memory(content="User prefers dark mode.", type="episodic")
result = client.retrieve("What are the user's preferences?")
print(result["answer"])
```

## Environment Variables

Copy `.env.example` to `.env` and configure:
### Database & Vector Store

| Variable | Description | Default |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | `postgresql://condensate:password@db:5432/condensate_db` |
| `QDRANT_HOST` | Qdrant hostname (used in docker-compose) | `qdrant` |
| `QDRANT_PORT` | Qdrant port | `6333` |
| `QDRANT_URL` | Full Qdrant URL (overrides HOST+PORT when set) | `http://{QDRANT_HOST}:{QDRANT_PORT}` |
| `QDRANT_API_KEY` | Qdrant API key (required for Qdrant Cloud) | — |
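The `QDRANT_URL` precedence rule (explicit URL wins over HOST+PORT) can be sketched as below. This is an illustrative helper, not Condensate's actual internals; `resolve_qdrant_url` is a hypothetical name:

```python
import os

def resolve_qdrant_url(env=os.environ):
    """Return the Qdrant URL, preferring an explicit QDRANT_URL override."""
    url = env.get("QDRANT_URL")
    if url:
        return url  # QDRANT_URL set: use it verbatim
    # Otherwise fall back to the documented defaults for host and port
    host = env.get("QDRANT_HOST", "qdrant")
    port = env.get("QDRANT_PORT", "6333")
    return f"http://{host}:{port}"
```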
### LLM Extraction

| Variable | Description | Default |
|---|---|---|
| `LLM_ENABLED` | Enable LLM-based extraction pipeline | `false` |
| `LLM_BASE_URL` | OpenAI-compatible base URL | `http://ollama:11434/v1` |
| `LLM_API_KEY` | LLM provider API key | `ollama` |
| `LLM_MODEL` | Model name for extraction | `phi3` |
### Hugging Face

| Variable | Description | Default |
|---|---|---|
| `HF_TOKEN` | Hugging Face token; enables authenticated downloads and higher rate limits for the ModernBERT NER model. Strongly recommended to avoid cold-start failures. | — |
### Security

| Variable | Description | Default |
|---|---|---|
| `CONDENSATE_SECRET` | HMAC secret for signing Proof Envelopes | `changeme_in_production` |
| `ADMIN_USERNAME` | Admin dashboard username | `admin` |
| `ADMIN_PASSWORD` | Admin dashboard password | `admin` |
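`CONDENSATE_SECRET` is the key used to sign Proof Envelopes. A minimal sketch of how such an HMAC signature can be produced and verified with Python's standard library follows; the actual envelope schema and canonicalization are defined by Condensate, so treat the payload shape and function names here as illustrative assumptions:

```python
import hashlib
import hmac
import json

SECRET = b"changeme_in_production"  # value of CONDENSATE_SECRET

def sign_envelope(payload: dict) -> str:
    """HMAC-SHA256 over a canonical JSON serialization of the payload."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hmac.new(SECRET, canonical.encode(), hashlib.sha256).hexdigest()

def verify_envelope(payload: dict, signature: str) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(sign_envelope(payload), signature)
```

Any tampering with the payload invalidates the signature, which is what makes the envelope a verifiable provenance record.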
### Review & Guardrails

| Variable | Description | Default |
|---|---|---|
| `REVIEW_MODE` | Assertion review mode: `manual` (HITL queue) or `auto` | `manual` |
| `INSTRUCTION_BLOCK_THRESHOLD` | Guardrail threshold for instruction injection (0.0–1.0) | `0.5` |
| `SAFETY_BLOCK_THRESHOLD` | Guardrail threshold for safety violations (0.0–1.0) | `0.7` |
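The thresholds gate ingestion on classifier scores in the 0.0–1.0 range. A hedged sketch of the likely blocking logic, assuming an input is rejected when either score reaches its threshold (the function name and exact comparison are illustrative, not Condensate's implementation):

```python
def should_block(instruction_score: float, safety_score: float,
                 instruction_threshold: float = 0.5,
                 safety_threshold: float = 0.7) -> bool:
    """Block the input if either guardrail score reaches its threshold.

    Defaults mirror INSTRUCTION_BLOCK_THRESHOLD and SAFETY_BLOCK_THRESHOLD.
    """
    return (instruction_score >= instruction_threshold
            or safety_score >= safety_threshold)
```

Under this reading, raising a threshold makes that guardrail more permissive.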
### Ingestion

| Variable | Description | Default |
|---|---|---|
| `INGEST_WORKERS` | Parallel worker threads for `ingest_codebase.py` | `8` |
| `UPLOAD_DIR` | Directory for file uploads (relative to app root) | `uploads` |
### SDK CLI

| Variable | Description | Default |
|---|---|---|
| `CONDENSATE_URL` | Server URL used by the Python SDK CLI | `http://localhost:8000` |
| `CONDENSATE_API_KEY` | API key used by the Python SDK CLI | — |
### Migration

| Variable | Description | Default |
|---|---|---|
| `LOCALMEMCP_PATH` | Path to LocalMem data directory for bootstrap import | `/app/localmemcp_data` |
| `OLD_QDRANT_HOST` | Old Qdrant host for data migration | `host.docker.internal` |
| `OLD_QDRANT_PORT` | Old Qdrant port for data migration | `6333` |
**Example: hosted OpenAI**

```bash
LLM_ENABLED=true
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-openai-xxxx
LLM_MODEL=gpt-4o-mini
```

**Example: local Ollama**

```bash
LLM_ENABLED=true
LLM_BASE_URL=http://ollama:11434/v1
LLM_API_KEY=ollama
LLM_MODEL=phi3
```

## SDKs

| SDK | Package | Docs |
|---|---|---|
| Python | `condensate` | `sdks/python` |
| TypeScript | `@condensate/sdk` | `sdks/ts` |
| MCP Bridge | `@condensate/core` | `sdks/mcp-bridge` |
| Rust | `condensate` | `sdks/rust` |
| Go | `condensate-go-sdk` | `sdks/go` |
## Architecture

```
Raw Input (Chat / Docs / API)
        │
        ▼
[Ingress Agent] ──── stores EpisodicItem + vector embedding
        │
        ▼
[Condenser] ──── NER → LLM Extraction → Entity Canonicalization
        │        → Assertion Consolidation → Edge Synthesis
        ▼
[Knowledge Graph] ─── Entities, Assertions, Relations (Postgres)
        │
        ▼
[Memory Router] ──── Vector search + Graph traversal + Hebbian updates
        │
        ▼
[MCP / API] ──── Agents, SDKs, Admin Dashboard
```
## Releasing

Releases are triggered by pushing a version tag:

```bash
git tag v1.2.3
git push origin v1.2.3
```

This triggers the GitHub Actions release workflow, which:

- Builds Rust binaries for Linux, macOS (x64 + arm64), and Windows
- Publishes `condensate` to PyPI
- Publishes `@condensate/sdk` and `@condensate/core` to npm
- Publishes `condensate` to crates.io
- Creates a GitHub Release with binary attachments
### Required Secrets

| Secret | Description |
|---|---|
| `NPM_TOKEN` | npm Automation token (`npm token create --type=automation`) |
| `PYPI_API_TOKEN` | PyPI API token (starts with `pypi-`) |
| `CARGO_REGISTRY_TOKEN` | crates.io API token |
| `GITHUB_TOKEN` | Injected automatically by GitHub Actions |
## Testing

```bash
./run_tests.sh
```

## Compatibility

Condensate works with any OpenAI-compatible LLM provider and any MCP-compatible agent:

- Model Providers: OpenAI, Anthropic, Azure OpenAI, Google Gemini, Mistral
- Local Inference: Ollama, LM Studio, LocalAI
- Agent Frameworks: LangChain, LlamaIndex, AutoGen, CrewAI
- Agent Hosts: Claude Desktop, Cursor, Windsurf, Codeium
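For MCP hosts such as Claude Desktop, a server entry along the following lines is typical. The exact config file location and schema depend on the host, and the `"condensate"` server name and env wiring here are illustrative assumptions rather than documented Condensate configuration:

```json
{
  "mcpServers": {
    "condensate": {
      "command": "npx",
      "args": ["-y", "@condensate/core"],
      "env": {
        "CONDENSATE_URL": "http://localhost:8000",
        "CONDENSATE_API_KEY": "sk-your-key"
      }
    }
  }
}
```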
## License

Apache 2.0 — see [LICENSE](LICENSE).