Persistent memory with semantic search for Claude and MCP-compatible clients
Give your AI assistant persistent memory that survives between conversations. Save context once, retrieve it intelligently forever.
- 🔍 Semantic Search - Find memories by meaning, not just keywords
- 💾 SQLite Storage - Fast, reliable, and scalable
- 👤 User Biography - Structured profile (name, occupation, tech stack, etc.)
- 🏠 100% Local - No external APIs, all processing on your machine
- ⚡ Fast - Powered by all-MiniLM-L6-v2 embeddings (~50ms searches)
- 🔒 Private - Your data never leaves your computer
Problem: Claude forgets everything between conversations. You constantly re-explain your context, projects, preferences, and tech stack.
Solution: This MCP server gives Claude persistent memory with intelligent semantic search. Save information once, and Claude retrieves it automatically when relevant.
```
// Save once
save_memory("project-info", "Working on an e-commerce site with Next.js and Stripe")

// Days later, in a new conversation
User: "How do I add payments to my project?"
Claude: *searches memory* "Since you're using Stripe in your e-commerce project..."
```

```bash
# Clone the repository
git clone https://github.com/GFYURI/mcp-semantic-memory.git
cd mcp-semantic-memory

# Install dependencies (pnpm recommended)
pnpm install
# or: npm install
```

Add to your MCP client config (e.g., Claude Desktop):
Windows: %APPDATA%\Claude\claude_desktop_config.json
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
```json
{
  "mcpServers": {
    "semantic-memory": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-semantic-memory/index.js"]
    }
  }
}
```

Example (Windows):
```json
{
  "mcpServers": {
    "semantic-memory": {
      "command": "node",
      "args": ["C:\\Users\\YourName\\mcp-semantic-memory\\index.js"]
    }
  }
}
```

- Restart your MCP client (e.g., Claude Desktop)
- The server will download the embedding model (~25MB) on first use
- Start saving memories!
Save a memory with semantic embedding.
```
save_memory({
  id: "my-cat",
  text: "My cat's name is Mia, she's orange and very playful",
  metadata: { category: "personal", type: "pet" }
})
```

Search memories by semantic similarity.
```
search_memory({
  query: "what's my pet's name?",
  n_results: 5,   // optional, default: 5
  threshold: 0.3  // optional, default: 0.3 (0-1 scale)
})
```

Retrieve a specific memory by ID.
Delete a memory permanently.
List all stored memories (ordered by last update).
Get the user's complete biographical profile.
Create or update user biography. All fields are optional.
```
set_user_bio({
  nombre: "Angel",
  ocupacion: "Student",
  ubicacion: "Santiago, Chile",
  tecnologias: ["Python", "JavaScript", "Node.js"],
  herramientas: ["VS Code", "Docker", "pnpm"],
  idiomas: ["Spanish", "English"],
  timezone: "America/Santiago",
  mascotas: ["Mia (cat)"]
})
```

Update a single field in the biography.
```
update_user_bio({
  field: "tecnologias",
  value: ["Python", "JavaScript", "TypeScript"]
})
```

- Remember your tech stack and project context
- Store solutions to common problems
- Keep track of configurations and preferences
- Save study notes and learning progress
- Remember assignment deadlines and requirements
- Track research topics and sources
- Personal preferences and interests
- Important dates and events
- Conversation context across sessions
Traditional keyword search:
Query: "what's my pet's name?"
Memory: "My cat Mia is orange"
Result: ❌ No matches (different words)
Semantic search:
Query: "what's my pet's name?"
Memory: "My cat Mia is orange"
Result: ✅ 78% similarity (understands meaning)
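The keyword-search failure above is easy to reproduce with a toy tokenizer (an illustration only, not how any real search engine tokenizes):

```javascript
// Naive keyword search: lowercase, extract word tokens, intersect the sets.
const tokenize = (s) => new Set(s.toLowerCase().match(/[a-z']+/g));

const query = tokenize("what's my pet's name?");
const memory = tokenize("My cat Mia is orange");
const shared = [...query].filter((w) => memory.has(w));

console.log(shared); // only the stopword "my" — no content words overlap
```

Nothing meaningful matches, even though any human (or embedding model) can see the two sentences are about the same cat.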
- Embeddings: all-MiniLM-L6-v2 (384 dimensions)
- Storage: SQLite with optimized indexes
- Search: Cosine similarity between vectors
- Performance: ~50-100ms per save, ~200ms search in 100 memories
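The cosine-similarity step can be sketched in a few lines of plain JavaScript (illustrative, not the server's exact code):

```javascript
// Cosine similarity: dot product of the two vectors divided by the
// product of their magnitudes. Result lies in [-1, 1]; higher = closer.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

console.log(cosine([1, 0, 1], [1, 0, 1])); // 1 (identical direction)
console.log(cosine([1, 0], [0, 1]));       // 0 (orthogonal, unrelated)
```

Because all-MiniLM-L6-v2 vectors are only 384 floats, scoring even thousands of memories this way stays in the millisecond range.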
```sql
-- Memories table
CREATE TABLE memories (
  id TEXT PRIMARY KEY,
  text TEXT NOT NULL,
  embedding TEXT NOT NULL, -- JSON array of 384 floats
  metadata TEXT,           -- JSON object
  created_at TEXT NOT NULL,
  updated_at TEXT NOT NULL
);

-- User biography table
CREATE TABLE user_bio (
  id INTEGER PRIMARY KEY CHECK (id = 1),
  nombre TEXT,
  ocupacion TEXT,
  ubicacion TEXT,
  tecnologias TEXT,  -- JSON array
  herramientas TEXT, -- JSON array
  idiomas TEXT,      -- JSON array
  timezone TEXT,
  mascotas TEXT,     -- JSON array
  created_at TEXT NOT NULL,
  updated_at TEXT NOT NULL
);
```

| Feature | This MCP | @modelcontextprotocol/server-memory |
|---|---|---|
| Semantic Search | ✅ | ❌ |
| User Biography | ✅ | ❌ |
| Storage | SQLite | In-memory |
| Persistence | ✅ Disk | ❌ RAM only |
| Scalability | 1000s of memories | Limited |
| Search Speed | Fast (indexed) | N/A |
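Given the schema above (embeddings stored as JSON text), a search boils down to: decode each row's embedding, score it against the query vector, then apply the `threshold` and `n_results` cut. Here is a sketch in plain JavaScript with made-up three-dimensional vectors; `searchRows` is a hypothetical helper, not the server's actual implementation:

```javascript
// Cosine similarity between two equal-length numeric vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank rows shaped like the `memories` table: JSON-decode each stored
// embedding, score it, drop low-similarity hits, return the top n.
function searchRows(rows, queryEmbedding, nResults = 5, threshold = 0.3) {
  return rows
    .map((r) => ({
      id: r.id,
      text: r.text,
      score: cosine(JSON.parse(r.embedding), queryEmbedding),
    }))
    .filter((r) => r.score >= threshold)
    .sort((x, y) => y.score - x.score)
    .slice(0, nResults);
}

// Toy 3-dimensional embeddings (the real model uses 384 dimensions).
const rows = [
  { id: "my-cat", text: "My cat's name is Mia", embedding: "[1, 0, 0]" },
  { id: "stack", text: "I use Next.js and Stripe", embedding: "[0, 1, 0]" },
];
const hits = searchRows(rows, [0.9, 0.1, 0]);
console.log(hits.map((h) => h.id)); // only "my-cat" clears the 0.3 threshold
```

The same shape works at 384 dimensions; the threshold simply filters out memories that are semantically unrelated to the query.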
```bash
# Install dependencies
pnpm install

# Run locally
node index.js

# Test with MCP inspector
npx @modelcontextprotocol/inspector node index.js
```

- Node.js >= 18.0.0
- ~100MB disk space (model + dependencies)
- MCP-compatible client (Claude Desktop, LM Studio, etc.)
The embedding model (~25MB) downloads on first use; subsequent runs start instantly.
```bash
pnpm rebuild sharp
# or
pnpm install --force
```

Close other connections to memory.db or restart your MCP client.
Check that the absolute path in your MCP config is correct.
Contributions are welcome! Feel free to:
- Report bugs
- Suggest features
- Submit pull requests
- Improve documentation
MIT License - feel free to use this in your own projects!
- Built with @modelcontextprotocol/sdk
- Embeddings by @xenova/transformers
- Powered by better-sqlite3
If you find this useful, consider giving it a star! It helps others discover the project.
Made with ❤️ for the MCP community