Structured, Temporal Memory for AI Agents
Docs • Getting Started • PyPI
🧠 Predict-Calibrate extraction • ⏱️ Bi-Temporal validity • 🔍 Hybrid retrieval • 📦 SQLite default
Most memory systems extract everything and hope retrieval sorts it out. memv is different:
| Typical Approach | memv |
|---|---|
| Extract all facts upfront | Extract only what we failed to predict |
| Overwrite old facts | Invalidate with temporal bounds |
| Retrieve by similarity | Hybrid vector + BM25 + RRF |
| Timestamps only | Bi-temporal: event time + transaction time |
Result: Less noise, better retrieval, accurate history.
```bash
pip install memvee
```

```python
from memv import Memory
from memv.embeddings import OpenAIEmbedAdapter
from memv.llm import PydanticAIAdapter

memory = Memory(
    db_path="memory.db",
    embedding_client=OpenAIEmbedAdapter(),
    llm_client=PydanticAIAdapter("openai:gpt-4o-mini"),
)

async with memory:
    # Store conversation
    await memory.add_exchange(
        user_id="user-123",
        user_message="I just started at Anthropic as a researcher.",
        assistant_message="Congrats! What's your focus area?",
    )

    # Extract knowledge
    await memory.process("user-123")

    # Retrieve context
    result = await memory.retrieve("What does the user do?", user_id="user-123")
    print(result.to_prompt())
```

That's it. Your agent now has:
- ✅ Episodic memory — conversations grouped into coherent episodes
- ✅ Semantic knowledge — facts extracted via predict-calibrate
- ✅ Temporal awareness — knows when facts were true
- ✅ Hybrid retrieval — vector + text search with RRF fusion
🧠 Predict-Calibrate Extraction
Extracts only what the model failed to predict: importance emerges from prediction error, not from upfront scoring. Based on Nemori.
⏱️ Bi-Temporal Validity
Track when facts were true (event time) vs when you learned them (transaction time). Query history at any point in time. Based on Graphiti.
🔍 Hybrid Retrieval
Combines vector similarity and BM25 text search with Reciprocal Rank Fusion. Configurable weighting.
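For reference, Reciprocal Rank Fusion scores each result by summing reciprocal ranks across the two rankings. The sketch below is illustrative only; the function name and `vector_weight` parameter are assumptions, not memv's API:

```python
def rrf_fuse(vector_ranking: list[str], bm25_ranking: list[str],
             k: int = 60, vector_weight: float = 0.5) -> list[str]:
    """Illustrative Reciprocal Rank Fusion: score(d) = sum_r w_r / (k + rank_r(d))."""
    scores: dict[str, float] = {}
    for weight, ranking in ((vector_weight, vector_ranking),
                            (1.0 - vector_weight, bm25_ranking)):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)
```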
📝 Episode Segmentation
Automatically groups messages into coherent conversation episodes. Handles interleaved topics.
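A conceptual picture of what boundary detection produces (plain data for illustration; this is not memv's internal representation):

```python
# Illustration only: consecutive messages on one topic form an episode;
# a topic shift starts a new one, even when topics interleave.
messages = [
    "I just started at Anthropic as a researcher.",   # episode 1
    "Congrats! What's your focus area?",              # episode 1
    "Unrelated: can you suggest a place for lunch?",  # boundary -> episode 2
]
episodes = [messages[:2], messages[2:]]
```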
🔄 Contradiction Handling
New facts automatically invalidate conflicting old facts. Full history preserved.
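Conceptually, a superseded fact keeps its row; only its validity bounds change. The field names below are assumptions made for illustration, not memv's actual schema:

```python
# Illustration only: field names are assumed, not memv's real schema.
old_fact = {
    "text": "User works at Initech",
    "valid_from": "2023-03-01",      # event time: when it became true
    "valid_to": "2024-01-15",        # closed when the contradicting fact arrived
    "recorded_at": "2023-03-02",     # transaction time: when we learned it
    "invalidated_at": "2024-01-16",  # the row is kept, so history stays queryable
}
new_fact = {
    "text": "User works at Anthropic",
    "valid_from": "2024-01-15",
    "valid_to": None,                # still current
    "recorded_at": "2024-01-16",
    "invalidated_at": None,
}
```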
📅 Temporal Parsing
Relative dates ("last week", "yesterday") resolved to absolute timestamps at extraction time.
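For example, a message sent on 2024-06-10 that says "I interviewed there last week" is anchored to an absolute date instead of the relative phrase (illustration only, not memv's API):

```python
from datetime import datetime, timedelta, timezone

# Illustration only: relative phrases resolve against the message timestamp.
message_time = datetime(2024, 6, 10, tzinfo=timezone.utc)
valid_from = message_time - timedelta(weeks=1)  # "last week" -> 2024-06-03
```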
⚡ Async Processing
Non-blocking `process_async()` with auto-processing when the message threshold is reached.
🗄️ SQLite Default
Zero-config local storage with sqlite-vec for vectors and FTS5 for text search.
memv's bi-temporal model lets you query knowledge as it was at any moment:
```python
from datetime import datetime

# What did we know about the user's job in January 2024?
result = await memory.retrieve(
    "Where does user work?",
    user_id="user-123",
    at_time=datetime(2024, 1, 1),
)

# Show full history including superseded facts
result = await memory.retrieve(
    "Where does user work?",
    user_id="user-123",
    include_expired=True,
)
```

```
Messages (append-only)
        │
        ▼
Episodes (segmented conversations)
        │
        ▼
Knowledge (extracted facts with bi-temporal validity)
        │
        ├── Vector Index (sqlite-vec)
        └── Text Index (FTS5)
```
Extraction Flow:
- Messages buffered until threshold
- Boundary detection segments into episodes
- Episode narrative generated
- Predict what episode should contain (given existing KB)
- Compare prediction vs actual → extract only the gaps (see the sketch below)
- Store with embeddings + temporal bounds
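A rough sketch of the predict-calibrate step (steps 4 and 5). This is illustrative pseudocode, not memv's internals; the `extract_gaps` function and the assumed `llm.complete(prompt)` call exist only for this example:

```python
# Illustrative pseudocode for predict-calibrate gap extraction.
# Assumes an `llm` object exposing a single `complete(prompt) -> str` call.
async def extract_gaps(episode_narrative: str, known_facts: list[str], llm) -> list[str]:
    # Predict: what should this episode contain, given only what we already know?
    predicted = await llm.complete(
        "Known facts:\n" + "\n".join(known_facts)
        + "\n\nPredict what this episode says about the user."
    )
    # Calibrate: whatever the actual narrative contains that the prediction
    # missed is the prediction error, i.e. the only thing worth storing.
    gaps = await llm.complete(
        f"Prediction:\n{predicted}\n\nActual episode:\n{episode_narrative}\n\n"
        "Return one line per fact present in the actual episode "
        "but missing from the prediction."
    )
    return [line.strip() for line in gaps.splitlines() if line.strip()]
```

Only the gap facts get embeddings and temporal bounds; anything the model could already predict from the existing knowledge base is skipped.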
memv works with any agent framework:
```python
class MyAgent:
    def __init__(self, memory: Memory, llm):
        self.memory = memory
        self.llm = llm

    async def run(self, user_input: str, user_id: str) -> str:
        # 1. Retrieve relevant context
        context = await self.memory.retrieve(user_input, user_id=user_id)

        # 2. Generate response with context
        response = await self.llm.generate(
            f"{context.to_prompt()}\n\nUser: {user_input}"
        )

        # 3. Store the exchange
        await self.memory.add_exchange(user_id, user_input, response)
        return response
```

- Getting Started — Installation, setup, first example
- Core Concepts — Predict-calibrate, episodes, bi-temporal, retrieval
- API Reference — All public classes and methods
```bash
git clone https://github.com/vstorm-co/memv.git
cd memv
make install
make all
```

See CONTRIBUTING.md for details.
MIT — see LICENSE
Built with ❤️ by vstorm
