# memtomem

Official website & docs: https://memtomem.com

Python 3.12+ · License: Apache 2.0

Give your AI agent a long-term memory.

memtomem turns your markdown notes, documents, and code into a searchable knowledge base that any AI coding agent can use. Write notes as plain .md files — memtomem indexes them and makes them searchable by both keywords and meaning.

```mermaid
flowchart LR
    A["Your files\n.md .json .py"] -->|Index| B["memtomem"]
    B -->|Search| C["AI agent\n(Claude Code, Cursor, etc.)"]
```

First time here? Follow the Getting Started guide — you'll have a working setup in under 5 minutes.


## Why memtomem?

| Problem | How memtomem solves it |
| --- | --- |
| AI forgets everything between sessions | Index your notes once, search them in every session |
| Keyword search misses related content | Hybrid search: exact keywords + meaning-based similarity |
| Notes scattered across tools | One searchable index for markdown, JSON, YAML, Python, JS/TS |
| Vendor lock-in | Your `.md` files are the source of truth; the DB is a rebuildable cache |
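The "index your notes once" workflow stays cheap because re-indexing is incremental: memtomem diffs chunk-level SHA-256 hashes (see Key Features below) so only changed chunks are re-embedded. A minimal sketch of the idea — not memtomem's actual implementation:

```python
import hashlib

def sha256(text: str) -> str:
    """Hex digest of a chunk's text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def changed_chunks(old_chunks: list[str], new_chunks: list[str]) -> list[str]:
    """Return only the chunks whose hash is not already in the index."""
    indexed = {sha256(c) for c in old_chunks}
    return [c for c in new_chunks if sha256(c) not in indexed]

# Editing one section of a note leaves the other chunks' hashes untouched,
# so only the edited chunk goes back to the embedding model.
old = ["# Deploy", "Run `mm init` first."]
new = ["# Deploy", "Run `mm init -y` first."]
print(changed_chunks(old, new))  # ['Run `mm init -y` first.']
```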

## Quick Start

### 1. Install

```sh
ollama pull nomic-embed-text          # local embeddings (~270MB, free)
uv tool install memtomem              # or: pipx install memtomem
```

No GPU? Pick OpenAI in the wizard — see Embeddings.

### 2. Setup

```sh
mm init                               # 8-step wizard (or: mm init -y for CI)
```

The wizard picks your embedding model, points at the folder you want indexed, and registers memtomem with your AI editor.

### 3. Use

```text
"Call the mem_status tool"   →  confirms the server is connected
"Index my notes folder"      →  mem_index(path="~/notes")
"Search for deployment"      →  mem_search(query="deployment checklist")
"Remember this insight"      →  mem_add(content="...", tags=["ops"])
```

### Other install options

Project-scoped (per-project isolation):

```sh
uv add memtomem && uv run mm init    # all commands need the `uv run` prefix
```

No install (uvx on demand):

```sh
claude mcp add memtomem -s user -- uvx --from memtomem memtomem-server
```

See MCP Client Setup for Cursor / Windsurf / Claude Desktop / Gemini CLI.
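For clients configured via a JSON file (the format Claude Desktop and Cursor use), the equivalent entry would look roughly like this — the `"memtomem"` server key is illustrative:

```json
{
  "mcpServers": {
    "memtomem": {
      "command": "uvx",
      "args": ["--from", "memtomem", "memtomem-server"]
    }
  }
}
```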


## Key Features

- **Hybrid search** — BM25 keyword + dense vector + RRF fusion in one query
- **Semantic chunking** — heading-aware Markdown, AST-based Python, tree-sitter JS/TS, structure-aware JSON/YAML/TOML
- **Incremental indexing** — chunk-level SHA-256 diff; only changed chunks get re-embedded
- **Namespaces** — organize memories into scoped groups, auto-derived from folder names
- **Maintenance** — near-duplicate detection, time-based decay, TTL expiration, auto-tagging
- **Web UI** — visual dashboard for search, sources, tags, sessions, health monitoring
- **MCP tools** — the `mem_do` meta-tool routes all non-core actions in core mode for minimal context usage
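Reciprocal Rank Fusion (RRF), mentioned in the hybrid-search bullet above, merges the BM25 and vector rankings without comparing their incompatible scores: each list contributes `1 / (k + rank)` per document. A minimal sketch of the general technique — not memtomem's actual code:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists (best first) with Reciprocal Rank Fusion.

    Each list contributes 1 / (k + rank) to a document's score;
    documents appearing high in multiple lists win.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A doc ranked well by BOTH keyword and vector search beats one
# that appears in only a single list.
bm25   = ["deploy.md", "ops.md", "notes.md"]
vector = ["deploy.md", "todo.md", "ops.md"]
print(rrf_fuse([bm25, vector]))
# ['deploy.md', 'ops.md', 'todo.md', 'notes.md']
```

The constant `k` (60 is the value from the original RRF paper) damps the gap between top ranks so a single #1 placement cannot dominate consistent mid-list appearances.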

## Ecosystem

| Package | Description |
| --- | --- |
| memtomem | Core — MCP server, CLI, Web UI, hybrid search, storage |
| memtomem-stm | STM proxy — proactive memory surfacing via tool interception |

## Documentation

| Guide | Description |
| --- | --- |
| Getting Started | Install, setup wizard, first use |
| Hands-On Tutorial | Follow-along with example files |
| Interactive Notebooks | Jupyter notebooks for the Python API — hello, indexing, sessions, tuning, LangGraph |
| User Guide | Complete feature walkthrough |
| Configuration | All `MEMTOMEM_*` environment variables |
| Embeddings | ONNX, Ollama, and OpenAI embedding providers |
| LLM Providers | Ollama, OpenAI, Anthropic, and compatible endpoints |
| MCP Client Setup | Editor-specific configuration |
| Agent Memory Guide | Sessions, working memory, procedures |
| Web UI | Visual dashboard |
| Hooks | Claude Code hooks for auto-indexing |

## Contributing

See CONTRIBUTING.md for setup instructions and the contributor guide.

## License

Apache License 2.0. Contributions are accepted under the terms of the Contributor License Agreement.
