Agent Memory is a high-density, persistent storage engine designed specifically for autonomous AI agents. It evolves from a simple key-value store into a "thinking" system by using an LSM-based architecture coupled with an asynchronous, LLM-powered semantic compaction process.
Traditional agent memory systems either store every interaction (leading to context window exhaustion) or use simple "keep newest" strategies (losing critical historical evolution). Agent Memory solves this by mimicking human memory consolidation: as data ages, it is semantically merged and abstracted into higher-density summaries.
The semantic compactor is the "brain" of the storage engine. Instead of performing a mechanical merge, the background compaction process uses an LLM to read through the history of a specific key and synthesize a unified, updated record. For example, if a user changes their preference from Go to Rust over several sessions, the semantic compactor consolidates these entries into a single updated fact: "User previously used Go but has transitioned to Rust as their primary language."
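A minimal sketch of how such a consolidation step might assemble a key's version history into an LLM prompt. The names `MemoryVersion` and `build_consolidation_prompt` are hypothetical illustrations, not part of the library's actual API:

```python
from dataclasses import dataclass

@dataclass
class MemoryVersion:
    timestamp: int
    content: str

def build_consolidation_prompt(key: str, versions: list[MemoryVersion]) -> str:
    """Hypothetical helper: lay out a key's version history so the LLM
    can synthesize one up-to-date record instead of keeping every copy."""
    history = "\n".join(
        f"[t={v.timestamp}] {v.content}"
        for v in sorted(versions, key=lambda v: v.timestamp)
    )
    return (
        f"Key: {key}\n"
        f"Version history (oldest first):\n{history}\n\n"
        "Synthesize a single updated record that preserves how the "
        "fact evolved, not just its latest value."
    )

versions = [
    MemoryVersion(1, "User's primary language is Go."),
    MemoryVersion(5, "User is experimenting with Rust."),
    MemoryVersion(9, "User now writes all new services in Rust."),
]
prompt = build_consolidation_prompt("user/profile/language", versions)
print(prompt)
```

The LLM's response to this prompt would then replace the stacked versions during compaction, trading raw history for a denser summary.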
- Zero Write Latency: Put operations are recorded instantly to an in-memory MemTable and a Write-Ahead Log (WAL).
- Atomic State Swapping: Uses the Read-Copy-Update (RCU) pattern to ensure that background compaction never blocks read or write operations.
- Efficient Retrieval: Implements Bloom Filters and hierarchical SSTables to minimize disk I/O during lookups.
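The RCU-style state swap mentioned above can be sketched in a few lines: readers dereference an immutable snapshot with no lock, while compaction builds a fresh state off to the side and publishes it with one atomic reference assignment. This is an illustrative sketch, not the engine's actual classes:

```python
import threading

class LSMState:
    """Immutable snapshot of the on-disk layout (illustrative)."""
    def __init__(self, sstables):
        self.sstables = tuple(sstables)  # frozen: never mutated after creation

class Engine:
    def __init__(self):
        self._state = LSMState([])           # readers load this reference
        self._write_lock = threading.Lock()  # serializes writers, not readers

    def read_snapshot(self) -> LSMState:
        # Read side of RCU: a single reference load, no lock taken.
        return self._state

    def compact(self, new_sstables):
        # Write side: build the new state, then publish it with one atomic
        # reference swap. In-flight readers keep their old snapshot.
        with self._write_lock:
            self._state = LSMState(new_sstables)

engine = Engine()
snap = engine.read_snapshot()
engine.compact(["L1-0001.sst"])
assert snap.sstables == ()  # an earlier reader still sees the old snapshot
assert engine.read_snapshot().sstables == ("L1-0001.sst",)
```

In CPython, rebinding an attribute is atomic, so readers always observe either the old snapshot or the new one, never a partially built state.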
Memories are stored as structured Markdown with YAML frontmatter. This makes the data both machine-readable (for vector search and tool use) and human-inspectable for debugging.
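To make the format concrete, here is a minimal frontmatter splitter. It handles only flat `key: value` pairs (a full YAML parser would do more) and the function name is illustrative, not the library's `Parser` API:

```python
def split_frontmatter(doc: str) -> tuple[dict, str]:
    """Split a memory record into (frontmatter, body).
    Minimal sketch: flat `key: value` pairs only."""
    lines = doc.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, doc  # no frontmatter block at all
    meta = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            # Closing delimiter found: everything after it is the body.
            return meta, "\n".join(lines[i + 1:]).strip()
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return {}, doc  # unterminated frontmatter: treat the whole doc as body

record = "---\nname: Alex\nrole: architect\n---\nAlex is a software architect."
meta, body = split_frontmatter(record)
print(meta["name"], "-", body)
```

Because the metadata is plain YAML and the body is plain Markdown, both halves remain greppable and diffable during debugging.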
The project uses uv for dependency management.
```bash
# Clone the repository
git clone https://github.com/your-repo/agent-memory.git
cd agent-memory

# Install dependencies
uv sync
```

```python
from agent_memory import AgentMemory, Config, SemanticCompactor

# Configure the engine
config = Config()
config.lsm.data_dir = "data/my_agent"

# Initialize with a semantic compactor (requires an LLM API)
compactor = SemanticCompactor()
memory = AgentMemory(config=config, compactor=compactor)

# Store a memory
memory.put("user/profile", "---\nname: Alex\n---\nAlex is a software architect.")

# Retrieve a memory
profile = memory.get("user/profile")
print(profile)

# Perform a semantic search
results = memory.search("What language does Alex prefer?")
```

Agent Memory is built on several key components:
- LSMEngine: Manages the MemTable, WAL, and SSTables on disk.
- CompactionWorker: A background daemon that triggers when Level 0 SSTables reach a threshold.
- SemanticCompactor: Integrates with LLMs (Mistral/OpenAI) to consolidate overlapping memory versions during compaction.
- Parser: Handles the extraction and validation of Markdown frontmatter.
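The CompactionWorker's trigger loop can be sketched as a background thread that wakes periodically and compacts once Level 0 crosses a size threshold. The class shape, the threshold value, and `FakeEngine` are all illustrative assumptions, not the project's real implementation:

```python
import threading
import time

class CompactionWorker:
    """Background daemon sketch: compacts when Level 0 holds too many
    SSTables. Threshold and interval values are illustrative."""
    def __init__(self, engine, l0_threshold: int = 4, interval: float = 0.05):
        self.engine = engine
        self.l0_threshold = l0_threshold
        self.interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _run(self):
        while not self._stop.is_set():
            if len(self.engine.level0()) >= self.l0_threshold:
                self.engine.compact_level0()
            self._stop.wait(self.interval)  # sleep, but wake early on stop

class FakeEngine:
    """Stand-in engine so the sketch is runnable."""
    def __init__(self):
        self.l0 = ["a.sst", "b.sst", "c.sst", "d.sst"]
        self.compactions = 0
    def level0(self):
        return self.l0
    def compact_level0(self):
        self.compactions += 1
        self.l0 = []  # merged into Level 1 (elided here)

engine = FakeEngine()
worker = CompactionWorker(engine)
worker.start()
time.sleep(0.2)
worker.stop()
```

Running compaction on its own thread is what keeps Put latency flat: the foreground path only ever touches the MemTable and WAL.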
For a deeper dive into the system design, see arch.md.
This project is currently in the Alpha stage. It is an experimental implementation of semantic storage concepts and is not yet recommended for production use without thorough testing.