# memtomem

PyPI Python 3.12+ License: Apache 2.0 CLA

Give your AI agent a long-term memory.

memtomem turns your markdown notes, documents, and code into a searchable knowledge base that any AI coding agent can use. Write notes as plain .md files — memtomem indexes them and makes them searchable by both keywords and meaning.

```mermaid
flowchart LR
    A["Your files\n.md .json .py"] -->|Index| B["memtomem"]
    B -->|Search| C["AI agent\n(Claude Code, Cursor, etc.)"]
```

First time here? Follow the Getting Started guide — you'll have a working setup in under 5 minutes.


## Why memtomem?

| Problem | How memtomem solves it |
| --- | --- |
| AI forgets everything between sessions | Index your notes once, search them in every session |
| Keyword search misses related content | Hybrid search: exact keywords + meaning-based similarity |
| Notes scattered across tools | One searchable index for markdown, JSON, YAML, Python, JS/TS |
| Vendor lock-in | Your `.md` files are the source of truth; the DB is a rebuildable cache |

## Quick Start

### 1. Install

```shell
ollama pull nomic-embed-text          # local embeddings (~270MB, free)
uv tool install memtomem              # or: pipx install memtomem
```

No GPU? Pick OpenAI in the wizard — see Embeddings.

### 2. Setup

```shell
mm init                               # 7-step wizard (or: mm init -y for CI)
```

The wizard picks your embedding model, points at the folder you want indexed, and registers memtomem with your AI editor.

### 3. Use

```text
"Call the mem_status tool"   →  confirms the server is connected
"Index my notes folder"      →  mem_index(path="~/notes")
"Search for deployment"      →  mem_search(query="deployment checklist")
"Remember this insight"      →  mem_add(content="...", tags="ops")
```
### Other install options

Project-scoped (per-project isolation):

```shell
uv add memtomem && uv run mm init    # all commands need `uv run` prefix
```

No install (uvx on demand):

```shell
claude mcp add memtomem -s user -- uvx --from memtomem memtomem-server
```

See MCP Client Setup for Cursor / Windsurf / Claude Desktop / Gemini CLI.


## Key Features

- **Hybrid search** — BM25 keyword + dense vector + RRF fusion in one query
- **Semantic chunking** — heading-aware Markdown, AST-based Python, tree-sitter JS/TS, structure-aware JSON/YAML/TOML
- **Incremental indexing** — chunk-level SHA-256 diff; only changed chunks get re-embedded
- **Namespaces** — organize memories into scoped groups with auto-derivation from folder names
- **Maintenance** — near-duplicate detection, time-based decay, TTL expiration, auto-tagging
- **Web UI** — visual dashboard for search, sources, tags, sessions, health monitoring
- **72 MCP tools** — `mem_do` meta-tool routes 64 actions in core mode for minimal context usage
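To make the hybrid-search bullet concrete, here is a minimal sketch of Reciprocal Rank Fusion (RRF), the standard technique for merging a BM25 keyword ranking with a vector-similarity ranking. The function name, document IDs, and the choice of `k=60` are illustrative, not memtomem's actual API or configuration.

```python
def rrf_fuse(rankings, k=60):
    """Merge several ranked lists of doc IDs into one fused ranking.

    Each document scores sum(1 / (k + rank)) over the lists it appears
    in; k=60 is the conventional damping constant from the RRF paper.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["deploy.md", "ops.md", "notes.md"]    # keyword ranking
dense = ["deploy.md", "arch.md", "ops.md"]    # vector ranking
print(rrf_fuse([bm25, dense]))
# ['deploy.md', 'ops.md', 'arch.md', 'notes.md']
```

A document ranked well by both retrievers (`deploy.md`) beats one ranked highly by only a single retriever, which is why RRF needs no score normalization between BM25 and cosine similarity.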
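The incremental-indexing bullet can be sketched the same way: hash each chunk with SHA-256 and re-embed only chunks whose hash changed since the last run. The helper names and chunk-ID scheme below are hypothetical; only the hash-and-diff idea comes from the feature description.

```python
import hashlib

def chunk_digest(text: str) -> str:
    """SHA-256 hex digest of a chunk's content."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def changed_chunks(old_index: dict[str, str], chunks: dict[str, str]) -> list[str]:
    """Return IDs of chunks that are new or whose content changed.

    old_index maps chunk ID -> stored digest from the previous run;
    chunks maps chunk ID -> current text.
    """
    return [cid for cid, text in chunks.items()
            if old_index.get(cid) != chunk_digest(text)]

old = {"notes.md#intro": chunk_digest("Hello"),
       "notes.md#ops": chunk_digest("Deploy checklist")}
new = {"notes.md#intro": "Hello",              # unchanged -> skipped
       "notes.md#ops": "Deploy checklist v2"}  # edited -> re-embed
print(changed_chunks(old, new))  # ['notes.md#ops']
```

Because the stored digests are derived entirely from the source files, deleting the index and re-running this diff rebuilds it from scratch, which is what makes the DB a disposable cache.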

## Ecosystem

| Package | Description |
| --- | --- |
| memtomem | Core — MCP server, CLI, Web UI, hybrid search, storage |
| memtomem-stm | STM proxy — proactive memory surfacing via tool interception |

## Documentation

| Guide | Description |
| --- | --- |
| Getting Started | Install, setup wizard, first use |
| Hands-On Tutorial | Follow-along with example files |
| Interactive Notebooks | Jupyter notebooks for the Python API — hello, indexing, sessions, tuning, LangGraph |
| User Guide | Complete feature walkthrough |
| Configuration | All `MEMTOMEM_*` environment variables |
| Embeddings | Ollama and OpenAI providers |
| MCP Client Setup | Editor-specific configuration |
| Agent Memory Guide | Sessions, working memory, procedures |
| Web UI | Visual dashboard |
| Hooks | Claude Code hooks for auto-indexing |

## Contributing

See CONTRIBUTING.md for setup instructions and the contributor guide.

## License

Apache License 2.0. Contributions are accepted under the terms of the Contributor License Agreement.
