tagmem is local memory storage and retrieval for LLM agents.
It is built around a simple model:
- **entries** store verbatim text
- **tags** are the primary way to organize and filter memory
- **depth** indicates how close a memory should stay to the surface
- **facts** store structured knowledge
- **diary** stores agent-specific notes
The system is local-first, retrieval-oriented, and designed to be usable through:
- CLI
- MCP
The recommended way to run tagmem is in Docker.
Published image:

```
ghcr.io/codysnider/tagmem
```

Run the CLI from the published image:

```
docker run --rm ghcr.io/codysnider/tagmem:latest help
```

Run the MCP server from the published image:

```
docker run -i --rm ghcr.io/codysnider/tagmem:latest mcp
```

If you want to build locally from source:

```
go build ./cmd/tagmem
```

This creates the tagmem binary in the current directory.

If you want a direct Go-based install:

```
go install github.com/codysnider/tagmem/cmd/tagmem@latest
```

This is best used once a release/tag workflow is in place. Docker is still the preferred runtime path.
Initialize storage:

```
tagmem init
```

Add an entry:

```
tagmem add --depth 0 --title "Working identity" --body "You are helping ship a local-first memory system."
```

Search:

```
tagmem search "identity"
tagmem search --depth 2 "auth migration"
tagmem search --tag auth "token refresh"
```

The Docker workflow keeps model files, cache, and benchmark artifacts outside the repo in mounted volumes.

Default Docker data root:

```
$HOME/.local/share/tagmem
```

Override it if you want Docker state elsewhere:

```
export TAGMEM_DATA_ROOT=/path/to/tagmem-data
```

The helper `just` commands are primarily for development and benchmarking. Most users only need the published image or the go install path.
Core commands:
```
tagmem init
tagmem ingest
tagmem split
tagmem add
tagmem list
tagmem search
tagmem show
tagmem status
tagmem context
tagmem depths
tagmem paths
tagmem doctor
tagmem repair
tagmem mcp
tagmem bench
```
Examples:
```
tagmem ingest --mode files --depth 1 ~/projects/my_app
tagmem ingest --mode conversations --depth 2 ~/chats
tagmem ingest --mode conversations --extract general ~/chats
tagmem split ~/chats
tagmem status
tagmem context --depth 0
tagmem context --tag auth
tagmem show 1
```

Run the MCP server over stdio:

```
tagmem mcp
```

Current MCP tools:

- tagmem_status
- tagmem_paths
- tagmem_list_depths
- tagmem_list_tags
- tagmem_get_tag_map
- tagmem_list_entries
- tagmem_search
- tagmem_show_entry
- tagmem_check_duplicate
- tagmem_add_entry
- tagmem_delete_entry
- tagmem_kg_query
- tagmem_kg_add
- tagmem_kg_invalidate
- tagmem_kg_timeline
- tagmem_kg_stats
- tagmem_graph_traverse
- tagmem_find_bridges
- tagmem_graph_stats
- tagmem_diary_write
- tagmem_diary_read
- tagmem_doctor
The embedded backend runs locally.
Default embedded configuration:
```
export TAGMEM_EMBED_PROVIDER=embedded
export TAGMEM_EMBED_MODEL=bge-small-en-v1.5
export TAGMEM_EMBED_ACCEL=auto
```

Example OpenAI-compatible configuration, here pointing at a local Ollama endpoint:

```
export TAGMEM_EMBED_PROVIDER=openai
export TAGMEM_OPENAI_MODEL=nomic-embed-text
export TAGMEM_OPENAI_BASE_URL=http://localhost:11434/v1
export TAGMEM_OPENAI_API_KEY=
```

| Variable | Default | Purpose |
|---|---|---|
| TAGMEM_EMBED_PROVIDER | embedded | Selects the embedding backend: embedded, openai, or embedded-hash. |
| TAGMEM_EMBED_MODEL | bge-small-en-v1.5 | Selects the embedded local model. Supported values currently include all-MiniLM-L6-v2, bge-small-en-v1.5, and bge-base-en-v1.5. |
| TAGMEM_EMBED_ACCEL | auto | Embedded acceleration mode: auto, cuda, or cpu. |
| TAGMEM_OPENAI_MODEL | nomic-embed-text | Model name for OpenAI-compatible embeddings. |
| OPENAI_MODEL | unset | Fallback model name for OpenAI-compatible mode. |
| TAGMEM_OPENAI_BASE_URL | unset | Base URL for an OpenAI-compatible embeddings endpoint. If no path is provided, /v1 is assumed. |
| OPENAI_BASE_URL | unset | Fallback base URL for OpenAI-compatible mode. |
| OLLAMA_HOST | unset | Convenience fallback base URL, normalized to /v1 if used. |
| TAGMEM_OPENAI_API_KEY | unset | API key for an OpenAI-compatible endpoint. |
| OPENAI_API_KEY | unset | Fallback API key for OpenAI-compatible mode. |
| TAGMEM_DATA_ROOT | $HOME/.local/share/tagmem | Host-side root directory for Docker state, including XDG data, model caches, datasets, and benchmark results. |
| TAGMEM_BENCH_ROOT | Docker-only | Root path for benchmark outputs in the Docker workflow. |
| TAGMEM_DATASET_ROOT | Docker-only | Root path for benchmark datasets in the Docker workflow. |
| XDG_CONFIG_HOME | platform default | XDG config root used for config and identity files. |
| XDG_DATA_HOME | platform default | XDG data root used for storage, vectors, knowledge graph, diaries, and models. |
| XDG_CACHE_HOME | platform default | XDG cache root. |
Default storage layout:

- data: `~/.local/share/tagmem/store.json`
- vector index: `~/.local/share/tagmem/vector/`
- knowledge graph: `~/.local/share/tagmem/knowledge.json`
- diaries: `~/.local/share/tagmem/diaries/`
- models: `~/.local/share/tagmem/models/`
- config: `~/.config/tagmem/`
- cache: `~/.cache/tagmem/`
tagmem is local-first, keeps original text intact, avoids lossy memory dialects, and uses simple user-facing concepts: entries, tags, depth, facts, and diary.
Current benchmark snapshot:

```mermaid
xychart-beta
    title "LongMemEval Recall@5"
    x-axis ["bge-base", "bge-small", "MemPalace", "Mastra", "Hindsight", "Stella", "Contriever", "BM25"]
    y-axis "Recall@5" 0.65 --> 1.00
    bar [0.992, 0.990, 0.966, 0.9487, 0.914, 0.85, 0.78, 0.70]
```

- tagmem (bge-small-en-v1.5): Recall@1 0.924, Recall@5 0.990, MRR 0.955
- tagmem (bge-base-en-v1.5): Recall@1 0.922, Recall@5 0.992, MRR 0.953
- MemPalace raw baseline: Recall@5 0.966
- Source-reported comparisons from MemPalace docs: Mastra 0.9487, Hindsight 0.914, Stella ~0.85, Contriever ~0.78, BM25 ~0.70
FalseMemBench is a standalone adversarial distractor benchmark focused on conflicting, stale, and near-miss memories.
```mermaid
xychart-beta
    title "FalseMemBench Recall@1"
    x-axis ["tagmem", "BM25", "MemPalace", "Contriever", "Stella"]
    y-axis "Recall@1" 0.40 --> 0.90
    bar [0.8674, 0.6946, 0.6632, 0.6527, 0.4258]
```

```mermaid
xychart-beta
    title "FalseMemBench MRR"
    x-axis ["tagmem", "BM25", "MemPalace", "Contriever", "Stella"]
    y-axis "MRR" 0.60 --> 0.95
    bar [0.9288, 0.8278, 0.8154, 0.8049, 0.6465]
```

- tagmem: Recall@1 0.8674, Recall@5 0.9983, MRR 0.9288
- BM25: Recall@1 0.6946, Recall@5 0.9930, MRR 0.8278
- MemPalace raw-style: Recall@1 0.6632, Recall@5 0.9948, MRR 0.8154
- Contriever: Recall@1 0.6527, Recall@5 0.9843, MRR 0.8049
- Stella: Recall@1 0.4258, Recall@5 0.9791, MRR 0.6465
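The Recall@k and MRR figures in both benchmark sections follow the standard definitions over the 1-based rank of the first relevant result per query (0 meaning it was not retrieved). A minimal sketch, not the benchmark harness itself:

```go
package main

import "fmt"

// recallAtK is the fraction of queries whose first relevant result
// appears within the top k.
func recallAtK(ranks []int, k int) float64 {
	hit := 0
	for _, r := range ranks {
		if r > 0 && r <= k {
			hit++
		}
	}
	return float64(hit) / float64(len(ranks))
}

// mrr averages the reciprocal rank of the first relevant result,
// counting misses (rank 0) as 0.
func mrr(ranks []int) float64 {
	sum := 0.0
	for _, r := range ranks {
		if r > 0 {
			sum += 1.0 / float64(r)
		}
	}
	return sum / float64(len(ranks))
}

func main() {
	ranks := []int{1, 1, 2, 5, 0} // first-relevant ranks for five queries
	fmt.Printf("Recall@1 %.2f  Recall@5 %.2f  MRR %.2f\n",
		recallAtK(ranks, 1), recallAtK(ranks, 5), mrr(ranks))
	// Recall@1 0.40  Recall@5 0.80  MRR 0.54
}
```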
| Model | LongMemEval R@5 | LongMemEval Time | LoCoMo Avg Recall | MemBench R@5 | ConvoMem Avg Recall |
|---|---|---|---|---|---|
| all-MiniLM-L6-v2 | 0.982 | 14.4s | 0.915 | 0.778 | 0.931 |
| bge-small-en-v1.5 | 0.990 | 23.0s | 0.941 | 0.804 | 0.898 |
| bge-base-en-v1.5 | 0.992 | 44.1s | 0.949 | 0.802 | 0.920 |
For methodology, machine specs, charts, and raw JSON outputs, see:
