Tempera gives Claude Code a persistent memory that learns from experience. Instead of starting fresh each session, Claude can recall past solutions, learn what works, and get smarter over time.
The Problem: Claude Code forgets everything between sessions. You solve the same problems repeatedly, and Claude can't learn from past successes or failures.
The Solution: Tempera captures coding sessions as "episodes", indexes them for semantic search, and uses reinforcement learning to surface the most valuable memories when relevant.
```
Without Tempera:                With Tempera:

┌─────────────┐                 ┌─────────────┐
│  Session 1  │ ──forgotten──>  │  Session 1  │ ──captured──┐
└─────────────┘                 └─────────────┘             │
┌─────────────┐                 ┌─────────────┐             ▼
│  Session 2  │ ──forgotten──>  │  Session 2  │ <──recalls──┤
└─────────────┘                 └─────────────┘             │
┌─────────────┐                 ┌─────────────┐             │
│  Session 3  │ ──forgotten──>  │  Session 3  │ <──recalls──┘
└─────────────┘                 └─────────────┘
      │                               │
      ▼                               ▼
 No learning               Continuous improvement
```
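Under the hood, recalling a past session is semantic retrieval: episodes and queries are embedded as vectors and matched by similarity. Tempera uses BGE-Small embeddings stored in LanceDB; the toy bag-of-words sketch below only illustrates the idea of similarity-ranked recall and mirrors nothing of the actual implementation.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real system uses a neural model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

episodes = [
    "Fixed open redirect by validating return URLs",
    "Added database connection pooling",
]
query = "login redirect bug"
ranked = sorted(episodes, key=lambda e: cosine(embed(query), embed(e)), reverse=True)
print(ranked[0])  # the redirect fix ranks first
```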
```
┌──────────────────────────────────────────────────────────────┐
│ 1. START TASK                                                │
│    User: "Fix the login redirect bug"                        │
└──────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│ 2. RETRIEVE MEMORIES                                         │
│    Claude searches: "login redirect bug"                     │
│    Finds: "Fixed similar issue by sanitizing return URLs"    │
└──────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│ 3. SOLVE FASTER                                              │
│    Claude uses past experience to solve the problem          │
└──────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│ 4. CAPTURE SESSION                                           │
│    Claude saves: what was done, what worked, what failed     │
└──────────────────────────────────────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────┐
│ 5. LEARN FROM FEEDBACK                                       │
│    User: "That memory was helpful!"                          │
│    → Episode utility increases                               │
│    → Similar episodes get boosted (Bellman propagation)      │
│    → Unhelpful memories fade over time                       │
└──────────────────────────────────────────────────────────────┘
```
| Mechanism | What It Does |
|---|---|
| Feedback | Helpful episodes gain utility score |
| Bellman Propagation | Value spreads to semantically similar episodes |
| Temporal Credit | Episodes before successes get credit |
| Decay | Unused memories fade (1% per day) |
| Retrieval Ranking | High-utility episodes surface first |
Over time, frequently helpful knowledge rises to the top, while stale or unhelpful memories fade away.
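As a rough illustration of the decay and ranking mechanisms in the table above (the formulas below are assumptions for exposition; only the 1%-per-day decay default comes from this document):

```python
DECAY_RATE = 0.01  # the documented default: 1% utility decay per day

def decayed(utility: float, days_unused: int) -> float:
    # Unused memories fade multiplicatively, ~1% per day.
    return utility * (1 - DECAY_RATE) ** days_unused

def rank_score(similarity: float, utility: float) -> float:
    # Hypothetical ranking rule: among similar matches,
    # high-utility episodes surface first.
    return similarity * utility

print(round(decayed(0.85, 30), 3))  # ~0.63 after a month unused
```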
```bash
# Clone and build
git clone https://github.com/anvanster/tempera.git
cd tempera
cargo build --release

# Two binaries are created:
# - target/release/tempera      (CLI tool)
# - target/release/tempera-mcp  (MCP server for Claude Code)
```

Or install directly with cargo:

```bash
cargo install tempera
```

On first use, Tempera downloads the BGE-Small embedding model (~128MB) for semantic search. This happens automatically and only once:
```bash
# Initialize and trigger model download
tempera init

# Output:
# Loading embedding model (this may download the model on first run)...
# Embedding model loaded
```

The model is cached globally at `~/.tempera/models/` and shared across all projects.
```bash
claude mcp add tempera --scope user -- /path/to/Tempera/target/release/tempera-mcp
```

The `--scope user` flag makes the server available across all your projects.
Exit and restart Claude Code to load the new MCP server.
Run `/mcp` in Claude Code. You should see `tempera` listed with 7 tools.
Once connected, Claude has access to these tools:
| Tool | Description | When to Use |
|---|---|---|
| `tempera_retrieve` | Search memories by query, list all, or show details | Start of session - always check first |
| `tempera_capture` | Save session as episode (auto-propagates utility) | End of task - capture successes proactively |
| `tempera_feedback` | Mark episodes as helpful/not helpful | After using retrieved memories |
| `tempera_status` | Check memory health for current project | Understand memory state |
| `tempera_stats` | View overall memory statistics | Analytics and monitoring |
| `tempera_propagate` | Spread value to similar episodes | Periodic maintenance |
| `tempera_review` | Consolidate and clean up memories | After a series of related tasks |
Start of session: Claude should automatically check for relevant memories using tempera_retrieve before starting non-trivial tasks.
End of task: Claude should proactively capture successful sessions using tempera_capture - don't wait to be asked. The capture tool automatically runs utility propagation.
Cross-project insights: Use tempera_capture(project: "other-project") to tag memories for a different project.
```
Search:       tempera_retrieve(query: "authentication bug")
List all:     tempera_retrieve(all: true)
Show details: tempera_retrieve(query: "abc12345")   # episode ID
```
```
User: Fix the login redirect vulnerability

Claude: Let me check if we've solved similar problems before...
  [Calls tempera_retrieve("login redirect vulnerability")]

  Found 2 relevant past experiences:
  1. "Fixed open redirect by validating return URLs against whitelist"
     - 78% similarity, 85% utility
  2. "Auth redirect issue - added URL sanitization"
     - 65% similarity, 72% utility

  Based on past experience, I'll implement URL validation...
  [Works on the fix]

  Done! Let me save this session.
  [Calls tempera_capture(summary: "Fixed login redirect...", outcome: "success")]

User: That retrieved memory was really helpful!

Claude: [Calls tempera_feedback(helpful: true, episodes: ["abc123"])]
  Marked as helpful - this will improve future retrievals!
```
```bash
# Initialize Tempera
tempera init

# Capture an episode manually
tempera capture --prompt "Fixed the authentication bug"

# Index episodes for semantic search
tempera index

# Search memories
tempera retrieve "database connection issues"

# Provide feedback
tempera feedback helpful --episodes abc123,def456

# Run utility propagation
tempera propagate --temporal

# Prune old/low-value episodes
tempera prune --older-than 90 --min-utility 0.2 --execute

# View statistics
tempera stats
```

Tempera stores everything locally in `~/.tempera/` (shared across all projects):
```
~/.tempera/
├── config.toml                  # Configuration
├── episodes/                    # Episode JSON files
│   └── 2026-01-25/
│       └── session-abc123.json
├── vectors/                     # Vector database
│   └── episodes.lance/          # LanceDB embeddings
└── models/                      # Embedding model cache (~128MB)
    └── models--Xenova--bge-small-en-v1.5/
```
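Because episodes are plain JSON files organized by capture date, the store is easy to inspect with ordinary tools. A minimal sketch (the per-episode schema is not documented here, so this only creates and counts files, using a scratch directory standing in for `~/.tempera`):

```python
import json
import tempfile
from pathlib import Path

def count_episodes(root: Path) -> int:
    # Episodes live under <root>/episodes/<YYYY-MM-DD>/<session>.json
    return sum(1 for _ in root.glob("episodes/*/*.json"))

# Scratch directory standing in for ~/.tempera
root = Path(tempfile.mkdtemp())
day = root / "episodes" / "2026-01-25"
day.mkdir(parents=True)
(day / "session-abc123.json").write_text(json.dumps({"summary": "demo"}))
print(count_episodes(root))  # 1
```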
All projects share the same memory database, enabling cross-project learning.
Tempera uses reinforcement learning concepts:
| Parameter | Default | Purpose |
|---|---|---|
| `decay_rate` | 0.01 | 1% utility decay per day |
| `discount_factor` | 0.9 | RL gamma for Bellman updates |
| `learning_rate` | 0.1 | Conservative alpha for updates |
| `propagation_threshold` | 0.5 | Minimum similarity for propagation |
Episode Lifecycle:
```
Captured → Indexed → Retrieved → Feedback → Utility Updated → Propagated
                                                 │
                                        [Low utility + old]
                                                 │
                                                 ▼
                                              Pruned
```
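The prune step corresponds to the CLI's `--older-than 90 --min-utility 0.2` flags. A sketch of the selection rule, assuming (per the "[Low utility + old]" condition in the lifecycle) that both conditions must hold:

```python
from datetime import date, timedelta

MAX_AGE_DAYS = 90    # --older-than 90
MIN_UTILITY = 0.2    # --min-utility 0.2

def should_prune(captured: date, utility: float, today: date) -> bool:
    # Prune episodes that are BOTH old and low-utility.
    old = today - captured > timedelta(days=MAX_AGE_DAYS)
    return old and utility < MIN_UTILITY

today = date(2026, 1, 25)
print(should_prune(date(2025, 9, 1), 0.1, today))   # old + low utility -> True
print(should_prune(date(2026, 1, 20), 0.1, today))  # recent -> False
```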
Run periodically to keep memory healthy:
```bash
# Weekly: Propagate utility values
tempera propagate --temporal

# Monthly: Clean up old/useless episodes
tempera prune --older-than 90 --min-utility 0.2 --execute

# As needed: Check health
tempera stats
```

| Variable | Description |
|---|---|
| `ANTHROPIC_API_KEY` | For LLM-based intent extraction (`--extract-intent`) |
| `TEMPERA_DATA_DIR` | Override the default data directory |
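For example, `TEMPERA_DATA_DIR` can be resolved the way such overrides conventionally are (illustrative Python, not Tempera's Rust code):

```python
import os
from pathlib import Path

def data_dir() -> Path:
    # TEMPERA_DATA_DIR, when set, overrides the default ~/.tempera location.
    override = os.environ.get("TEMPERA_DATA_DIR")
    return Path(override) if override else Path.home() / ".tempera"

os.environ["TEMPERA_DATA_DIR"] = "/tmp/tempera-data"
print(data_dir())  # /tmp/tempera-data
```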
- Check path: `ls /path/to/tempera-mcp`
- Check config: `cat ~/.claude.json`
- Restart Claude Code completely
- Run `/mcp` to verify
The BGE-Small model (~128MB) downloads on first use from HuggingFace. This requires internet access. After download, the model is cached at `~/.tempera/models/` and works offline.
Run `tempera index` to create or update the vector database.
If behind a firewall or proxy, ensure access to huggingface.co. The model files are downloaded via HTTPS.
Apache 2.0
Contributions welcome! Please open an issue or PR.