AI memory that forgets intelligently.
The first memory framework built on cognitive science.
Quick Start · Architecture · Core vs Pro · Website · API
Every AI memory solution stores memories. Mnemo is the first to forget intelligently.
Humans don't remember everything equally: important memories consolidate, trivial ones fade, frequently recalled knowledge strengthens. Mnemo models this with:
- Weibull decay: stretched-exponential forgetting, `exp(-(t/λ)^β)`, with tier-specific β
- Triple-path retrieval: Vector + BM25 + Knowledge Graph fused with RRF
- Three-layer contradiction detection: regex signal → LLM 5-class → dedup pipeline
- 10-stage retrieval pipeline: from preprocessing to context injection
The result: your AI agent's memory stays relevant instead of drowning in noise.
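The decay curve can be sketched directly from the formula. λ (the time scale) and the sample numbers here are illustrative; only the formula and the tier-specific β values come from the docs:

```typescript
// Weibull (stretched-exponential) retention with tier-specific β.
type Tier = 'core' | 'working' | 'peripheral';

const TIER_BETA: Record<Tier, number> = {
  core: 0.8,       // sub-exponential: long tail, fades slowly
  working: 1.0,    // β = 1 reduces to plain exponential decay
  peripheral: 1.3, // drops off faster than exponential
};

/** Retention in [0, 1] after t time units, with scale λ (lambda). */
function weibullRetention(t: number, lambda: number, beta: number): number {
  return Math.exp(-Math.pow(t / lambda, beta));
}

// With the same λ, a Core memory outlives a Peripheral one past t = λ:
const core = weibullRetention(30, 10, TIER_BETA.core);
const peripheral = weibullRetention(30, 10, TIER_BETA.peripheral);
// core > peripheral
```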
| Capability | Mem0 $249 | Zep $199 | Letta $49 | Cognee $149 | Mnemo Core FREE | Mnemo Pro $69 |
|---|---|---|---|---|---|---|
| Vector search | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| BM25 keyword search | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ |
| Knowledge graph | Pro | ✅ | ❌ | ✅ | ✅ | ✅ |
| Forgetting model | ❌ | Basic | Basic | ❌ | Weibull | Weibull |
| Memory tiers | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ |
| Cross-encoder rerank | ❌ | Basic | ❌ | ❌ | ✅ | ✅ |
| Contradiction detection | ❌ | ❌ | ❌ | Partial | ✅ | ✅ |
| Triple-path fusion | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
| Scope isolation | Basic | ✅ | ❌ | ❌ | ✅ | ✅ |
| Emotional salience | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
| WAL crash recovery | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Session reflection | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Self-improvement | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Observability | Partial | ✅ | ❌ | ❌ | ❌ | ✅ |
| Self-hosted | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
Mnemo Core (free) already outperforms most $99+/mo paid solutions on retrieval quality.
```
┌─────────────── Write Layer (6 channels) ───────────────┐
│ ① Hook realtime          ④ Daily archive extractor     │
│ ② Plugin SmartExtract    ⑤ File watcher (fs.watch)     │
│ ③ L1 Distiller (cron)    ⑥ Manual memory_store         │
└───────────────────────────┬────────────────────────────┘
                            ▼
              store.ts (dedup + contradiction L1)
                       ┌────┴────┐
                       ▼         ▼
                  LanceDB     Graphiti/Neo4j
              (Vec + BM25)    (Knowledge Graph + WAL)
```
```
┌───────────── Retrieval Layer (10 stages) ──────────────┐
│ S0 Preprocessing         S5 Min-score filter           │
│ S1 Resonance gate        S6 Cross-encoder rerank       │
│ S2 Multi-hop detection   S7 Weibull decay              │
│ S3 Triple-path parallel  S8 Hard cutoff + normalize    │
│    (Vector│BM25│Graph)   S9 MMR deduplication          │
│ S4 RRF fusion            S10 Session dedup + inject    │
└───────────────────────────┬────────────────────────────┘
                            ▼
                   Top-K → Agent Context
```
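The S4 fusion step is standard Reciprocal Rank Fusion. A minimal sketch, assuming the conventional k = 60 constant (the docs don't state Mnemo's actual value):

```typescript
// Reciprocal Rank Fusion: each ranked list contributes 1/(k + rank) per item,
// so items that appear high in several lists beat items that top only one.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      // i is 0-based; RRF uses the 1-based rank.
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// 'b' and 'c' appear in all three paths and outrank 'a', which tops only one:
const fused = rrfFuse([
  ['a', 'b', 'c'], // vector
  ['b', 'c', 'd'], // BM25
  ['c', 'b', 'e'], // graph
]);
// fused[0] === 'b', fused[1] === 'c'
```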
```
┌─────────────────── Lifecycle Layer ────────────────────┐
│ Tier classification: Core (β=0.8) / Working (β=1.0)    │
│                      / Peripheral (β=1.3)              │
│ Weibull decay: exp(-(t/λ)^β)                           │
│ Access reinforcement (spaced repetition)               │
│ Emotional salience modulation (up to 1.5×)             │
│ Session reflection + overnight consolidation           │
└────────────────────────────────────────────────────────┘
```
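A hypothetical sketch of how access reinforcement and emotional salience could modulate the decay scale. The field names and the logarithmic reinforcement shape are assumptions; only the 1.5× salience cap comes from the docs:

```typescript
interface MemoryMeta {
  baseLambda: number;        // base decay scale λ, e.g. in days (illustrative)
  accessCount: number;       // how many times the memory was recalled
  emotionalSalience: number; // 0..1
}

function effectiveLambda(m: MemoryMeta): number {
  // Spaced-repetition-style reinforcement: each recall stretches λ,
  // with diminishing returns (assumed logarithmic shape).
  const reinforcement = 1 + Math.log1p(m.accessCount);
  // Salience multiplies the scale, capped at 1.5× per the docs.
  const salience = Math.min(Math.max(m.emotionalSalience, 0), 1);
  const salienceBoost = 1 + 0.5 * salience;
  return m.baseLambda * reinforcement * salienceBoost;
}

// A never-recalled, emotionally neutral memory keeps its base scale:
effectiveLambda({ baseLambda: 10, accessCount: 0, emotionalSalience: 0 }); // 10
```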
```shell
git clone https://github.com/Methux/mnemo.git
cd mnemo
cp .env.example .env    # add your API keys
docker compose up -d    # starts Neo4j + Graphiti + Dashboard
```

Dashboard at http://localhost:18800.
```shell
# Install Ollama models
ollama pull nomic-embed-text      # embedding
ollama pull qwen3:8b              # smart extraction LLM
ollama pull bge-reranker-v2-m3    # cross-encoder rerank

# Use local config
cp config/mnemo.local.example.json ~/.mnemo/mnemo.json
docker compose up -d              # Neo4j + Graphiti
```

Full Core functionality (embedding, extraction, rerank, graph) runs locally. Zero API cost.
```shell
npm install @mnemoai/core
```

```typescript
import { createMnemo } from '@mnemoai/core';

const mnemo = await createMnemo({
  embedding: {
    provider: 'openai-compatible',
    apiKey: process.env.VOYAGE_API_KEY,
    baseURL: 'https://api.voyageai.com/v1',
    model: 'voyage-3-large',
    dimensions: 1024,
  },
  dbPath: './memory-db',
});

// Store a memory
await mnemo.store({
  text: 'User prefers dark mode and minimal UI',
  category: 'preference',
  importance: 0.8,
});

// Recall: automatically applies decay, rerank, MMR
const results = await mnemo.recall('UI preferences', { limit: 5 });
```

Or run the guided setup wizard:

```shell
npm run init    # guided wizard: generates config + .env
```

For OpenClaw:

```shell
openclaw plugins install mnemo
```

The open-source foundation. Full retrieval engine, no restrictions.
| Feature | Details |
|---|---|
| Storage | Pluggable backend: LanceDB (default), Qdrant, Chroma, PGVector |
| Retrieval | Triple-path (Vector + BM25 + Graphiti) with RRF fusion |
| Rerank | Cross-encoder (Voyage rerank-2) |
| Decay | Weibull stretched-exponential, tier-specific β |
| Tiers | Core (β=0.8) / Working (β=1.0) / Peripheral (β=1.3) |
| Contradiction | Three-layer detection (regex + LLM + dedup) |
| Extraction | Smart extraction with GPT-4.1 |
| Graph | Graphiti/Neo4j knowledge graph |
| Scopes | Multi-agent isolation |
| Emotional salience | Amygdala-modeled half-life adjustment |
| Noise filtering | Embedding-based noise bank + regex |
| Temporal queries | Date format expansion (中/EN) |
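As a rough illustration of the cheap first (regex) layer of the contradiction pipeline: flag memory pairs that share a topic but differ in polarity, and pass only those to the LLM classifier. Everything here (patterns, stopwords, thresholds) is invented for illustration, not Mnemo's actual code:

```typescript
// Layer-1 contradiction signal: cheap regex polarity check over memory pairs.
const NEGATIONS = /\b(not|no longer|never|stopped|doesn't|don't|won't)\b/i;

function negated(text: string): boolean {
  return NEGATIONS.test(text);
}

// Crude shared-topic check: keyword overlap after stopword removal.
const STOPWORDS = new Set(['the', 'a', 'an', 'user', 'prefers', 'to', 'is', 'and']);

function keywords(text: string): Set<string> {
  return new Set(
    text.toLowerCase().split(/\W+/).filter(w => w.length > 2 && !STOPWORDS.has(w)),
  );
}

function contradictionSignal(a: string, b: string): boolean {
  const ka = keywords(a);
  const kb = keywords(b);
  const overlap = [...ka].filter(w => kb.has(w)).length;
  // Same topic (enough shared keywords) but opposite polarity → candidate
  // for the more expensive LLM 5-class stage.
  return overlap >= 2 && negated(a) !== negated(b);
}
```

A pair like "User prefers dark mode for the editor" vs. "User does not want dark mode in the editor" would be flagged, while unrelated memories would pass through untouched.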
Everything in Core, plus enterprise features:
| Feature | Details |
|---|---|
| WAL | Write-ahead log for crash recovery |
| Session reflection | Deep summary at session boundaries |
| Self-improvement | Learning from interaction patterns |
| Memory tools | memory_store / search / delete for agents |
| MCP Server | Model Context Protocol integration |
| Observability | Query tracking, latency monitoring, health checks |
| Access tracking | Spaced repetition with reinforcement |
```shell
# Activate Pro
export MNEMO_LICENSE_TOKEN="mnemo_your_token"
# Auto-activates on first run, binds to this machine
```

| Plan | Price | Devices | Support |
|---|---|---|---|
| Core | Free forever | Unlimited | GitHub Issues |
| Indie | $69/mo · $690/yr | 1 | |
| Team | $199/mo · $1,990/yr | 5 | Priority + Slack |
| Enterprise | Custom | Unlimited | Dedicated + SLA |
Mnemo requires external models for embedding, extraction, and reranking. You bring your own API keys; Mnemo does not proxy or bundle API costs. Choose a setup that fits your budget:
| Setup | Embedding | LLM Extraction | Rerank | Est. API Cost |
|---|---|---|---|---|
| Local | Ollama nomic-embed-text | Ollama qwen3:8b | Ollama bge-reranker | $0/mo |
| Hybrid | Voyage voyage-3-large | GPT-4.1-mini | Voyage rerank-2 | ~$20/mo |
| Cloud | Voyage voyage-3-large | GPT-4.1 | Voyage rerank-2 | ~$45/mo |
These are your own API costs, not Mnemo subscription fees. All setups use the same Core/Pro features; the difference is model quality.
- Local: Runs entirely offline via Ollama. Good enough to beat most paid competitors.
- Hybrid: Best quality-to-cost ratio. Recommended for most users.
- Cloud: Maximum extraction quality for high-volume production.
See `config/mnemo.local.example.json` for the $0 local setup, or `config/mnemo.example.json` for the cloud setup.
Mnemo's design maps directly to established memory research:
| Human Memory | Mnemo Implementation |
|---|---|
| Ebbinghaus forgetting curve | Weibull decay: exp(-(t/λ)^β) |
| Spaced repetition effect | Access reinforcement extends half-life |
| Memory consolidation (sleep) | Session reflection + overnight cron |
| Core vs peripheral memory | Tier system with differential β |
| Spreading activation | Graphiti 1-hop neighborhood traversal |
| Amygdala emotional tagging | emotionalSalience modulates half-life (up to 1.5×) |
| Interference / false memories | MMR deduplication + noise bank |
| Selective attention | Resonance gating (adaptive threshold) |
| Metamemory | mnemo-doctor + Web Dashboard |
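The interference row maps to MMR deduplication (stage S9). A minimal sketch, assuming cosine similarity over stored embeddings and an illustrative λ = 0.7 relevance/novelty trade-off:

```typescript
interface Candidate {
  id: string;
  relevance: number; // fused retrieval score
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Maximal Marginal Relevance: greedily pick the candidate that is relevant
// AND dissimilar to everything already selected.
function mmrSelect(cands: Candidate[], k: number, lambda = 0.7): Candidate[] {
  const selected: Candidate[] = [];
  const pool = [...cands];
  while (selected.length < k && pool.length > 0) {
    let bestIdx = 0;
    let bestScore = -Infinity;
    pool.forEach((c, i) => {
      const maxSim = selected.length
        ? Math.max(...selected.map(s => cosine(c.embedding, s.embedding)))
        : 0;
      const score = lambda * c.relevance - (1 - lambda) * maxSim;
      if (score > bestScore) {
        bestScore = score;
        bestIdx = i;
      }
    });
    selected.push(pool.splice(bestIdx, 1)[0]);
  }
  return selected;
}

// 'b' is a near-duplicate of 'a', so MMR picks the less relevant but
// novel 'c' instead:
const picked = mmrSelect([
  { id: 'a', relevance: 0.9,  embedding: [1, 0] },
  { id: 'b', relevance: 0.85, embedding: [1, 0.01] },
  { id: 'c', relevance: 0.5,  embedding: [0, 1] },
], 2);
// picked ids: ['a', 'c']
```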
| Tool | Description | Run |
|---|---|---|
| `mnemo init` | Interactive config wizard | `npm run init` |
| `mnemo-doctor` | One-command health check | `npm run doctor` |
| `validate-config` | Config validation gate | `npm run validate` |
| Dashboard | Web UI for browsing, debugging, monitoring | http://localhost:18800 |
- Architecture Deep Dive
- Configuration Reference
- Retrieval Pipeline
- Cognitive Science Model
- API Reference
- OpenClaw Integration
This project uses a dual-license model:

- MIT: files marked `SPDX-License-Identifier: MIT` (Core features)
- Commercial: files marked `SPDX-License-Identifier: LicenseRef-Mnemo-Pro` (Pro features)
See LICENSE and packages/pro/LICENSE for details.
We welcome contributions to Mnemo Core (MIT-licensed files). See CONTRIBUTING.md.
Areas where we'd love help:
- Benchmark evaluation (LOCOMO, MemBench)
- New embedding provider adapters
- Retrieval pipeline optimizations
- Language-specific SDKs (Python, Go)
- Documentation and examples
Built with cognitive science, not hype.
**Trademarks:** LanceDB is a trademark of LanceDB, Inc. Neo4j is a trademark of Neo4j, Inc. Qdrant is a trademark of Qdrant Solutions GmbH. Mnemo is not affiliated with, endorsed by, or sponsored by any of these organizations. Storage backends are used under their respective open-source licenses.