
agentmemory

Your AI finally remembers what you told it.



You tell your AI something. It says "got it." Next conversation, it has no idea. So you explain again. And again. Every session starts from zero.

agentmemory gives your AI a permanent memory. It captures corrections, decisions, and preferences from your conversations, stores them on your computer, and feeds them back automatically. Every future conversation starts where the last one left off.

Comic: user says 'no implementation, we're in research.' Agent says 'got it!' Next session: 'Ready to implement?' User: 'I TOLD YOU. TWICE.' Agent: '...three times, actually!'


How It Works (the short version)

You talk to your AI normally. agentmemory listens in the background.

| What you say | What happens |
|---|---|
| "I'm allergic to shellfish" | Remembered. Surfaces whenever food comes up. |
| "No, my budget is $2000 not $5000" | Correction. Replaces the old belief. |
| "I prefer detailed explanations" | Preference. Shapes every future response. |
| "ok good" or "no that's wrong" | Natural feedback. Strengthens or weakens what the AI just used. |

Next session -- next week -- next month -- it's all still there. No re-explaining.
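The correction behavior above can be sketched as a tiny store where a new statement about a topic supersedes the older belief. This is a minimal illustration, assuming a hypothetical `Belief`/`BeliefStore` model, not agentmemory's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    topic: str
    text: str
    superseded: bool = False

@dataclass
class BeliefStore:
    beliefs: list = field(default_factory=list)

    def remember(self, topic, text):
        # A new statement about a topic supersedes earlier beliefs on it.
        for b in self.beliefs:
            if b.topic == topic:
                b.superseded = True
        self.beliefs.append(Belief(topic, text))

    def current(self, topic):
        # The active belief is the most recent non-superseded one.
        for b in reversed(self.beliefs):
            if b.topic == topic and not b.superseded:
                return b.text
        return None

store = BeliefStore()
store.remember("budget", "budget is $5000")
store.remember("budget", "budget is $2000, not $5000")  # correction
print(store.current("budget"))  # the corrected value wins
```

The key property is that corrections replace rather than accumulate: the old belief stays in history but never surfaces again.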


The "I Already Told You" Problem

Everyone who uses AI regularly hits this wall. You build up context over a conversation -- your preferences, your situation, your constraints -- and then the session ends and it's all gone. The next conversation, the AI doesn't know you prefer concise answers, doesn't remember your tech stack, doesn't know you already tried that approach and it failed.

This isn't a feature gap. It's the fundamental limitation. Every AI assistant today has amnesia between sessions. agentmemory fixes that.

Session 1:  "I use Python 3.12 with uv for package management."
Session 2:  "pip install requests"        <- forgot already

Session 1:  (with agentmemory) "I use Python 3.12 with uv."
Session 47: "uv add requests"             <- still remembers

What Makes This Different

agentmemory isn't a note-taking app. It doesn't just store what you said -- it understands what matters and delivers it at the right moment.

It's selective. When you say "push to github", it doesn't dump 200 memories on the AI. It finds the 2-3 that are relevant right now ("use the publish script, not git push directly") and injects them before the AI responds.

It learns from feedback. Say "ok good" and the memories that informed that response get stronger. Say "no that's wrong" and they get weaker. Over time, it gets better at surfacing what helps.

It catches contradictions. When two memories disagree, it flags them: "these contradict each other -- resolve before proceeding." No more silent confusion.

It self-corrects. Even permanent rules can be updated if enough evidence accumulates that they're wrong. Durable, not dogmatic.
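The feedback loop described above can be sketched as a score update applied to whichever memories informed the last response. The multiplicative factors here are illustrative, not agentmemory's actual weights:

```python
def apply_feedback(scores, used_ids, positive):
    """Strengthen or weaken the memories that informed the last response.

    scores:   dict mapping memory id -> retrieval weight
    used_ids: memories that were injected into the previous prompt
    positive: True for "ok good", False for "no that's wrong"
    """
    factor = 1.2 if positive else 0.8  # illustrative factors
    for mid in used_ids:
        scores[mid] = scores[mid] * factor
    return scores

scores = {"use-uv": 1.0, "budget-2000": 1.0}
apply_feedback(scores, ["use-uv"], positive=True)        # "ok good"
apply_feedback(scores, ["budget-2000"], positive=False)  # "no that's wrong"
```

Over many sessions, memories that keep helping climb in retrieval weight, while ones that keep misleading fade.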


Who Can Use It Today

agentmemory works with any AI tool that supports MCP (Model Context Protocol):

  • Claude Code (CLI, desktop app, VS Code, JetBrains)
  • Codex (OpenAI)
  • Any MCP-compatible client

pip install agentmemory-rrs
agentmemory setup

Restart your AI tool. That's it.

Broader platform support (ChatGPT, Gemini, native macOS app) is on the roadmap. The architecture is ready -- we're waiting for those platforms to support the protocol.


Your Data Stays Yours

  • 100% local. Everything stored in SQLite on your machine.
  • No cloud, no accounts. Nothing leaves your computer. Ever.
  • No telemetry. Not "opt-out telemetry." Zero telemetry.
  • No GPU required. Runs on any machine that runs Python.

For Power Users and Developers

This is where agentmemory really shines. If you're building software with AI agents, the memory problem compounds fast.

Real Example: Deploy Workflow

You type push the release to github. Before the agent sees your message, agentmemory fires a multi-layer search in ~45ms:

== ACTIVE CONSTRAINTS (VERIFY before proceeding) ==
  1. VERIFY: NEVER use git push github directly. Use scripts/publish-to-github.sh
  2. VERIFY: Pre-push hook scans for PII; direct push bypasses safety checks

== OPERATIONAL STATE ==
[!] GitHub account renamed (changed 2d ago)

== BACKGROUND ==
- Remote 'github' points to git@github.com:robotrocketscience/agentmemory.git

Without agentmemory: the agent runs git push github main, bypassing every safety check.

With agentmemory: three words triggered the full procedure -- publish script, PII guards, pre-push hook. That procedure accumulated from corrections over weeks.
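One of the retrieval layers mentioned later is full-text search, and that layer is easy to picture with SQLite's built-in FTS5. The schema and stored rows below are illustrative, not agentmemory's actual tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE memories USING fts5(kind, text)")
con.executemany("INSERT INTO memories VALUES (?, ?)", [
    ("constraint", "NEVER use git push github directly; use scripts/publish-to-github.sh"),
    ("state", "GitHub account renamed 2 days ago"),
    ("background", "Remote 'github' points at the agentmemory repository"),
    ("preference", "User prefers detailed explanations"),
])

# Match the user's message against stored memories, best matches first.
rows = con.execute(
    "SELECT kind, text FROM memories WHERE memories MATCH ? ORDER BY rank",
    ("push github",),
).fetchall()
for kind, text in rows:
    print(f"[{kind}] {text}")
```

The query "push github" surfaces the publish-script constraint while leaving the unrelated preference out, which is the selectivity the deploy example depends on.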

The Markdown Workaround Problem

Most developer power users end up building the same thing: a growing collection of markdown files. STATE.md for current position. ROADMAP.md for what's next. DECISIONS.md for why you stopped doing it that way. Cross-references everywhere.

Every prompt is a bet. You write "see docs/deploy-runbook.md" and hope the agent reads it. When it doesn't, you get silent failures.

agentmemory makes every prompt a contract. Relevant context is found and injected as part of the prompt. The agent doesn't choose to read it -- it's already there.

| Manual approach | What breaks | agentmemory |
|---|---|---|
| Rules in config files | Agent reads them, doesn't follow them | Injected per-prompt |
| Cross-references to docs | Agent skips the reference | Content extracted and injected directly |
| State files updated by hand | One missed update breaks the chain | State tracked automatically |
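The "contract" framing is concrete: retrieved memories are concatenated into the prompt before the agent ever sees the message, so reading them is not a choice. A minimal sketch, reusing the section labels from the deploy example (the function itself is illustrative):

```python
def build_prompt(user_message, constraints=(), state=(), background=()):
    """Assemble the final prompt: injected memories first, user message last."""
    parts = []
    if constraints:
        parts.append("== ACTIVE CONSTRAINTS (VERIFY before proceeding) ==")
        parts += [f"  {i}. VERIFY: {c}" for i, c in enumerate(constraints, 1)]
    if state:
        parts.append("== OPERATIONAL STATE ==")
        parts += [f"[!] {s}" for s in state]
    if background:
        parts.append("== BACKGROUND ==")
        parts += [f"- {b}" for b in background]
    parts.append(user_message)
    return "\n".join(parts)

prompt = build_prompt(
    "push the release to github",
    constraints=["NEVER use git push github directly. Use scripts/publish-to-github.sh"],
    state=["GitHub account renamed (changed 2d ago)"],
    background=["Remote 'github' points to the agentmemory repo"],
)
print(prompt)
```

Because the memories are part of the prompt text itself, there is no reference for the agent to skip.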

Developer Commands

| Command | What it does |
|---|---|
| /mem:search <query> | Find specific memories |
| /mem:lock <rule> | Create a permanent rule |
| /mem:wonder <topic> | Deep parallel research across the memory graph |
| /mem:reason <question> | Test a hypothesis against stored evidence |
| /mem:stats | See what's in memory |
| /mem:health | Run diagnostics |

Full command reference: docs/COMMANDS.md.


It Gets Smarter Over Time

Early sessions are sparse. But the graph densifies as you work.

"Didn't we already decide this?" Memories link chronologically. When the AI retrieves a decision, it knows what led to it. No more re-suggesting abandoned approaches.

"Wait, those two contradict each other." Conflicting memories get flagged before the AI responds. You resolve it once, not repeatedly.

"That rule is outdated." Locked rules resist casual noise but yield to consistent evidence. Three independent signals that something changed will trigger a review.


Emergent Behavior

These weren't planned. They emerged from persistent memory + reasoning tools.

The agent audited its own memory and found a bug

user    Clean up the config file, I think agentmemory handles this now.

agent   [searches memory, compares against config]
        Found 2 redundant items. But also: this locked belief about
        your server setup is stale. It references infrastructure
        you removed. Every session is getting incorrect instructions.

user    Fix it.

agent   [corrects the belief, supersedes the old one]
        Done. Want to lock the updated version?

The system caught its own error because it had enough accumulated context to notice the inconsistency.


Under the Hood

Conversations are broken into individual beliefs stored in a local SQLite database. Retrieval uses full-text search, graph traversal, and vocabulary bridging. The system extracts structure from your codebase -- function signatures, docstrings, class definitions -- and links them to conversational knowledge through typed edges.
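The codebase-structure step can be pictured with Python's standard `ast` module: parse a module, collect function signatures, class names, and docstrings as facts that could then be linked to conversational beliefs. The helper below is an illustration of the idea, not agentmemory's extractor:

```python
import ast

SOURCE = '''
def publish(tag: str) -> None:
    """Push a release via the publish script."""

class Memory:
    """A single stored belief."""
'''

def extract_structure(source):
    # Walk the module and collect (kind, name, signature, docstring) facts.
    facts = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            facts.append(("function", node.name, f"{node.name}({args})",
                          ast.get_docstring(node)))
        elif isinstance(node, ast.ClassDef):
            facts.append(("class", node.name, node.name,
                          ast.get_docstring(node)))
    return facts

for fact in extract_structure(SOURCE):
    print(fact)
```

Each extracted fact is a graph node; typed edges can then connect, say, the `publish` function to the conversational belief "always release through the publish script".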

Every design decision is backed by an experiment. Every claim links to a test that reproduces it.

300+ experiments. 1250+ tests. 5 academic benchmarks. 35 modules. 31 tools.

Architecture | Benchmarks | Graph Model

Knowledge graph visualization showing thousands of interconnected beliefs built up over weeks of use

The knowledge graph after a few weeks of daily use. Each dot is a belief. Lines are relationships -- supports, contradicts, supersedes, temporal, semantic.


Obsidian Integration (Optional)

Sync the belief graph to an Obsidian vault. Each belief becomes a markdown note with wikilinks. Browse, edit, or visualize the entire knowledge network.

Setup: agentmemory sync-obsidian --vault ~/path/to/vault. Details: docs/OBSIDIAN.md.
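The sync can be pictured as rendering each belief to a markdown note whose related beliefs become [[wikilinks]]. The note format below is an illustrative sketch, not the exact output of sync-obsidian:

```python
from pathlib import Path
import tempfile

def belief_to_note(vault, title, text, related):
    """Write one belief as a markdown note; relationships become wikilinks."""
    links = "\n".join(f"- [[{r}]]" for r in related)
    note = f"# {title}\n\n{text}\n\n## Related\n{links}\n"
    path = Path(vault) / f"{title}.md"
    path.write_text(note, encoding="utf-8")
    return path

vault = tempfile.mkdtemp()
path = belief_to_note(vault, "Use uv for packages",
                      "Install dependencies with uv, not pip.",
                      ["Python 3.12", "Deploy via publish script"])
print(path.read_text())
```

Because wikilinks are plain text, Obsidian's graph view reconstructs the belief relationships with no extra tooling.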


What's Next

  • One-click install for macOS (Homebrew, native app)
  • Broader platform support (ChatGPT, Gemini -- waiting on MCP adoption)
  • Cross-project memory (beliefs that flow between projects)
  • Continuous regression (CI that runs only the tests your changes affect)

Documentation

Development

git clone https://github.com/robotrocketscience/agentmemory.git
cd agentmemory
uv sync --all-groups
uv run pytest tests/ -x -q

Contributions welcome. See CONTRIBUTING.md.

Citation

@software{agentmemory2026,
  author    = {robotrocketscience},
  title     = {agentmemory: Persistent Memory for AI Coding Agents},
  year      = {2026},
  url       = {https://github.com/robotrocketscience/agentmemory},
  license   = {MIT}
}

License

MIT
