
MemoGraph 🧠


A graph-based memory system for LLMs with intelligent retrieval. MemoGraph addresses the LLM memory problem by combining knowledge graphs, hybrid retrieval, and semantic search.

✨ Features

  • 🤖 Smart Auto-Organization Engine: Automatically extracts structured information from memories using LLMs
    • Topics, subtopics, and recurring themes
    • People with roles and organizations
    • Action items with assignees and deadlines
    • Decisions, questions, and sentiment analysis
    • Risks, ideas, and timeline events
  • Graph-Based Memory: Navigate knowledge using bidirectional wikilinks and backlinks
  • Hybrid Retrieval: Combines keyword matching, graph traversal, and optional vector embeddings
  • Markdown-Native: Human-readable markdown files with YAML frontmatter
  • Memory Types: Support for episodic, semantic, procedural, and fact-based memories
  • Smart Indexing: Efficient caching system that only re-indexes changed files
  • CLI & Python API: Use via command line or integrate into your Python applications
  • Multiple LLM Providers: Works with Ollama, Claude, and OpenAI
  • Context Compression: Intelligent token budgeting for optimal context windows
  • Salience Scoring: Memory importance ranking for better retrieval
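
The "Context Compression" and "Salience Scoring" features above can be pictured as greedy packing: rank memories by salience and include them until a token budget is exhausted. The sketch below is illustrative only; the function name, the word-count tokenizer, and the greedy strategy are assumptions, not MemoGraph's actual implementation.

```python
def pack_context(memories, budget_tokens):
    """Greedily pack the highest-salience memories into a token budget.

    Illustrative sketch only: MemoGraph's real budgeting and
    tokenization are internal details not described in this README.
    """
    chosen = []
    used = 0
    # Visit memories from most to least salient
    for text, salience in sorted(memories, key=lambda m: m[1], reverse=True):
        cost = len(text.split())  # crude stand-in for a real tokenizer
        if used + cost <= budget_tokens:
            chosen.append(text)
            used += cost
    return "\n\n".join(chosen)

memories = [
    ("Decided to use BFS graph traversal for retrieval.", 0.9),
    ("Lunch menu for Tuesday.", 0.1),
    ("PostgreSQL chosen for ACID guarantees.", 0.8),
]
print(pack_context(memories, budget_tokens=12))
```

Note that greedy packing can skip a high-salience memory that does not fit and still include a smaller, lower-salience one; real budgeting strategies often trade off salience against size more carefully.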

🚀 Quick Start

Installation

pip install memograph

Install with optional dependencies:

# For OpenAI support
pip install memograph[openai]

# For Anthropic Claude support
pip install memograph[anthropic]

# For Ollama support
pip install memograph[ollama]

# For embedding support
pip install memograph[embeddings]

# Install everything
pip install memograph[all]

Python Usage

from memograph import MemoryKernel, MemoryType

# Initialize the kernel attached to your vault path
kernel = MemoryKernel("~/my-vault")

# Ingest all notes in the vault
stats = kernel.ingest()
print(f"Indexed {stats['indexed']} memories.")

# Programmatically add a new memory
kernel.remember(
    title="Meeting Note",
    content="Decided to use BFS graph traversal for retrieval.",
    memory_type=MemoryType.EPISODIC,
    tags=["design", "retrieval"]
)

# Retrieve context for an LLM query
context = kernel.context_window(
    query="how does retrieval work?",
    tags=["retrieval"],
    depth=2,
    top_k=8
)

print(context)

🔌 MCP Server (Model Context Protocol)

MemoGraph includes a full-featured MCP server for seamless integration with AI assistants like Cline and Claude Desktop.

19 Available Tools

| Category   | Tools                                                                                 | Description                           |
|------------|---------------------------------------------------------------------------------------|---------------------------------------|
| Search     | search_vault, query_with_context                                                       | Semantic search and context retrieval |
| Create     | create_memory, import_document                                                         | Add memories and import documents     |
| Read       | list_memories, get_memory, get_vault_info                                              | Browse and retrieve memories          |
| Update     | update_memory                                                                          | Modify existing memories              |
| Delete     | delete_memory                                                                          | Remove memories by ID                 |
| Analytics  | get_vault_stats                                                                        | Vault statistics and insights         |
| Discovery  | list_available_tools                                                                   | List all available tools              |
| Autonomous | auto_hook_query, auto_hook_response, configure_autonomous_mode, get_autonomous_config  | Autonomous memory management          |
| Graph      | relate_memories, search_by_graph, find_path                                            | Graph-native linking and traversal    |
| Bulk       | bulk_create                                                                            | Create multiple memories in one call  |
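
Under the hood, MCP clients invoke these tools with JSON-RPC 2.0 `tools/call` requests. As an illustration, a request to the search tool might look like the following; the argument names ("query", "top_k") are assumptions for this sketch, not MemoGraph's documented schema:

```python
import json

# Hypothetical JSON-RPC 2.0 request an MCP client might send to call
# the search_vault tool. The "arguments" keys are illustrative
# assumptions, not MemoGraph's actual tool schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_vault",
        "arguments": {"query": "Python", "top_k": 5},
    },
}
print(json.dumps(request, indent=2))
```

In practice your MCP client (Cline, Claude Desktop, etc.) constructs these requests for you from natural-language instructions.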

Quick Setup for Cline

Add to your ~/.cline/mcp_settings.json:

{
  "mcp": {
    "servers": {
      "memograph": {
        "command": "python",
        "args": ["-m", "memograph.mcp.run_server"],
        "env": {
          "MEMOGRAPH_VAULT": "/path/to/your/vault"
        }
      }
    }
  }
}

Quick Setup for Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "memograph": {
      "command": "python",
      "args": ["-m", "memograph.mcp.run_server", "--vault", "/path/to/your/vault"]
    }
  }
}

Install from MCP Registry

NEW: MemoGraph is now available in the official MCP Registry!

# Install via MCP CLI (if available)
mcp install io.github.indhar01/memograph

# Or manually configure in your MCP client:
{
  "mcpServers": {
    "memograph": {
      "command": "python",
      "args": ["-m", "memograph.mcp.run_server"],
      "env": {
        "MEMOGRAPH_VAULT": "~/my-vault"
      }
    }
  }
}

Benefits of MCP Registry:

  • ✅ Official registry backed by Anthropic, GitHub, and Microsoft
  • ✅ Automatic version updates from PyPI
  • ✅ Discoverable by all MCP-compatible clients
  • ✅ Verified and trusted installation

See MCP_REGISTRY_GUIDE.md for the complete submission and configuration guide.

Usage Examples

Once configured, use natural language with your AI assistant:

"Search my vault for memories about Python"
"Create a memory titled 'Project Ideas' with content '...'"
"Update memory abc-123 to have salience 0.9"
"Delete memory xyz-456"
"What tools are available?"
"Get vault statistics"

See CONFIG_REFERENCE.md for the complete MCP configuration guide.

🎯 CLI Usage

MemoGraph comes with a powerful CLI for managing your vault and chatting with it.

Ingest

Index your markdown files into the graph database:

memograph --vault ~/my-vault ingest

Force re-indexing all files:

memograph --vault ~/my-vault ingest --force

Remember

Quickly add a memory from the command line:

memograph --vault ~/my-vault remember \
    --title "Team Sync" \
    --content "Discussed Q3 goals." \
    --tags planning q3

Context Window

Generate context for a query:

memograph --vault ~/my-vault context \
    --query "What did we decide about the database?" \
    --tags architecture \
    --depth 2 \
    --top-k 5

Ask (Interactive Chat)

Start an interactive chat session with your vault context:

memograph --vault ~/my-vault ask --chat --provider ollama --model llama3

Or ask a single question:

memograph --vault ~/my-vault ask \
    --query "Summarize our design decisions" \
    --provider claude \
    --model claude-3-5-sonnet-20240620

Diagnostics

Check your environment and connection to LLM providers:

memograph --vault ~/my-vault doctor

📖 Core Concepts

Memory Types

MemoGraph supports different types of memories inspired by cognitive science:

  • Episodic: Personal experiences and events (e.g., meeting notes)
  • Semantic: Facts and general knowledge (e.g., documentation)
  • Procedural: How-to knowledge and processes (e.g., tutorials)
  • Fact: Discrete factual information (e.g., configuration values)

Graph Traversal

The library uses BFS (Breadth-First Search) to traverse your knowledge graph:

# Retrieve nodes with depth=2 (2 hops from seed nodes)
nodes = kernel.retrieve_nodes(
    query="graph algorithms",
    depth=2,  # Traverse up to 2 levels deep
    top_k=10  # Return top 10 relevant memories
)
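
The depth-limited BFS behind `retrieve_nodes` can be sketched as follows. This is an illustrative standalone implementation, not MemoGraph's internal code (which lives in memograph/core/graph.py and may differ):

```python
from collections import deque

def bfs_neighborhood(adjacency, seeds, depth):
    """Collect every node within `depth` hops of the seed nodes.

    Illustrative sketch of the depth-limited BFS described above;
    returns a mapping of node -> hop distance from the nearest seed.
    """
    visited = {seed: 0 for seed in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        if visited[node] == depth:
            continue  # do not expand past the hop limit
        for neighbor in adjacency.get(node, []):
            if neighbor not in visited:
                visited[neighbor] = visited[node] + 1
                queue.append(neighbor)
    return visited

# Example: a tiny wikilink graph; with depth=2, node "d" is out of reach
graph = {"a": ["b"], "b": ["c"], "c": ["d"]}
print(bfs_neighborhood(graph, ["a"], depth=2))  # {'a': 0, 'b': 1, 'c': 2}
```

With depth=2, retrieval pulls in memories up to two wikilink hops from the best keyword/vector matches, which is how loosely related notes end up in the context window.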

Salience Scoring

Each memory has a salience score (0.0-1.0) that represents its importance:

---
title: "Critical Architecture Decision"
salience: 0.9
memory_type: semantic
---

We decided to use PostgreSQL for better ACID guarantees...
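
Reading the salience score back out of a note's frontmatter can be sketched with the stdlib alone. This is a minimal illustration, not MemoGraph's actual parser (memograph/core/parser.py), and the 0.5 default is an assumption:

```python
def read_salience(markdown_text, default=0.5):
    """Extract the salience value from YAML frontmatter.

    Minimal sketch: assumes frontmatter is delimited by '---' markers
    and that salience is a plain float on its own line.
    """
    if not markdown_text.startswith("---"):
        return default
    _, frontmatter, _body = markdown_text.split("---", 2)
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "salience":
            return float(value.strip())
    return default

note = """---
title: "Critical Architecture Decision"
salience: 0.9
memory_type: semantic
---

We decided to use PostgreSQL for better ACID guarantees...
"""
print(read_salience(note))  # 0.9
```

A real parser would use a YAML library to handle nested keys and quoting, but the idea is the same: salience lives beside the content and travels with the note.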

πŸ—οΈ Project Structure

MemoGraph/
β”œβ”€β”€ memograph/          # Main package
β”‚   β”œβ”€β”€ core/           # Core functionality
β”‚   β”‚   β”œβ”€β”€ kernel.py   # Memory kernel
β”‚   β”‚   β”œβ”€β”€ graph.py    # Graph implementation
β”‚   β”‚   β”œβ”€β”€ retriever.py # Hybrid retrieval
β”‚   β”‚   β”œβ”€β”€ indexer.py  # File indexing
β”‚   β”‚   └── parser.py   # Markdown parsing
β”‚   β”œβ”€β”€ adapters/       # LLM and embedding adapters
β”‚   β”‚   β”œβ”€β”€ embeddings/ # Embedding providers
β”‚   β”‚   β”œβ”€β”€ frameworks/ # Framework integrations
β”‚   β”‚   └── llm/        # LLM providers
β”‚   β”œβ”€β”€ storage/        # Storage and caching
β”‚   β”œβ”€β”€ mcp/            # MCP server implementation
β”‚   └── cli.py          # CLI implementation
β”œβ”€β”€ tests/              # Test suite
β”œβ”€β”€ examples/           # Example usage
└── scripts/            # Utility scripts

🀝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

  1. Clone the repository:

    git clone https://github.com/Indhar01/MemoGraph.git
    cd MemoGraph
  2. Install in development mode:

    pip install -e ".[all,dev]"
  3. Install pre-commit hooks:

    pre-commit install
  4. Run tests:

    pytest

Code Quality

We maintain high code quality standards:

  • Linting: Ruff for fast Python linting
  • Formatting: Ruff formatter for consistent code style
  • Type Checking: MyPy for static type analysis
  • Testing: Pytest with comprehensive test coverage
  • Pre-commit Hooks: Automated checks before each commit

📚 Documentation

🔒 Security

See our Security Policy for reporting vulnerabilities.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🌟 Acknowledgments

Inspired by the need for better memory management in LLM applications. Built with:

  • Graph-based knowledge representation
  • Hybrid retrieval strategies
  • Cognitive science principles

📬 Contact & Support

🚦 Status

Current Version: 0.1.0 (Alpha - Marketplace Ready)

This project is in active development with a focus on code quality and stability:

  • ✅ Core functionality is stable and tested
  • ✅ All linter checks passing (Ruff)
  • ✅ Type checking configured (MyPy)
  • ✅ Pre-commit hooks enabled
  • ✅ Comprehensive test suite
  • ⚠️ API may change in minor versions until v1.0.0

Recent Improvements:

  • 🎉 Published to the official MCP Registry (io.github.indhar01/memograph)
  • Enhanced code quality with Ruff linting and formatting
  • Added comprehensive type checking with MyPy
  • Improved project structure and organization
  • Updated MCP server with 19 tools including autonomous features and graph operations
  • Added AGENTS.md for AI assistant integration
  • Created comprehensive MCP Registry submission guide

Made with ❤️ for better LLM memory management
