
Engrama

Graph-based long-term memory framework for AI agents.


Engrama gives any AI agent persistent, structured memory backed by a Neo4j knowledge graph. Instead of flat key-value stores or opaque vector databases, Engrama stores entities, observations, and relationships — and lets agents traverse that graph to reason about their accumulated knowledge.

Inspired by Karpathy's second brain concept, but built for agents rather than humans — and with graphs instead of wikis.


Why graphs?

|  | Flat JSON / KV | Vector DB | Engrama (Graph) |
|---|---|---|---|
| Relationship queries |  |  | ✅ native |
| Scales to 10k+ memories | ❌ slow |  |  |
| Works without embeddings |  |  | ✅ (optional Ollama) |
| Local-first / private |  | depends |  |
| "What projects use FastMCP?" | full scan | approximate | 1-hop traversal |
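The last row is the crux: in a graph, "What projects use FastMCP?" is a single hop over incoming USES edges rather than a scan of every record. A minimal sketch of the idea using a plain Python adjacency map (illustrative only — this is not Engrama's internal storage):

```python
# Toy knowledge graph: (subject, relation, object) triples,
# indexed by object so reverse lookups cost O(degree), not O(N).
from collections import defaultdict

triples = [
    ("Engrama", "USES", "FastMCP"),
    ("DemoBot", "USES", "FastMCP"),
    ("Engrama", "USES", "Neo4j"),
]

incoming = defaultdict(list)
for subj, rel, obj in triples:
    incoming[obj].append((rel, subj))

def one_hop(entity, relation):
    """Answer 'what points at `entity` via `relation`?' in one hop."""
    return sorted(s for r, s in incoming[entity] if r == relation)

print(one_hop("FastMCP", "USES"))  # ['DemoBot', 'Engrama']
```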

Prerequisites

You need three things installed before starting. If you already have them, skip to Quick start.

| Requirement | Version | How to check | Install guide |
|---|---|---|---|
| Python | 3.11 or newer | `python --version` | python.org/downloads |
| Docker Desktop | any recent | `docker --version` | docker.com/products/docker-desktop |
| uv (Python package manager) | any recent | `uv --version` | docs.astral.sh/uv |

Windows users: after installing Python, make sure "Add Python to PATH" is checked. After installing uv, you may need to restart your terminal.



Quick start

Step 1: Clone the repository

```shell
git clone https://github.com/scops/engrama
cd engrama
```

Step 2: Configure credentials

Copy the example environment file and set a password:

```shell
# Linux / macOS / Git Bash
cp .env.example .env

# PowerShell (Windows)
Copy-Item .env.example .env
```

Now open .env in any text editor and set two values:

  1. NEO4J_PASSWORD — change CHANGE_ME_BEFORE_FIRST_RUN to a password of your choice
  2. VAULT_PATH — the absolute path to your Obsidian vault folder (e.g. VAULT_PATH=C:\Users\you\Documents\obsidian_vault\vault)

VAULT_PATH is required for Obsidian sync tools (engrama_sync_note, engrama_sync_vault, engrama_write_insight_to_vault). If you don't use Obsidian, you can leave it empty — the graph tools will still work.
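Taken together, a finished .env might look like this (illustrative values — the password and vault path are placeholders, not defaults):

```env
NEO4J_PASSWORD=s0me-l0ng-random-password
VAULT_PATH=C:\Users\you\Documents\obsidian_vault\vault
```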

Step 3: Start Neo4j

```shell
docker compose up -d
```

Wait ~15 seconds for the database to start. You can check it's healthy with:

```shell
docker ps
```

You should see `engrama-neo4j` with status `Up ... (healthy)`.

Step 4: Install dependencies

```shell
uv sync
```

This creates a virtual environment in .venv/ and installs all dependencies.

Step 5: Initialise the schema

This generates the graph schema from the developer profile and applies it to Neo4j:

```shell
uv run engrama init --profile developer
```

You should see:

```text
Generating schema from developer.yaml...
Schema files generated.
Applying schema to Neo4j...
Schema applied successfully.
```

Step 6: Verify everything works

```shell
uv run engrama verify
```

Expected output: `Connected to Neo4j at bolt://localhost:7687`

Optionally, run the test suite:

```shell
uv run pytest tests/ -v
```

Step 7: Use it

You have three ways to use Engrama:

A) From Claude Desktop (recommended) — see the MCP section below.

B) From Python:

```python
from engrama import Engrama

with Engrama() as eng:
    eng.remember("Technology", "Neo4j", "Graph database for knowledge graphs")
    results = eng.search("Neo4j")
```

C) From the command line:

```shell
uv run engrama search "Neo4j"
uv run engrama reflect
```

Note: all engrama CLI commands must be prefixed with `uv run` unless you activate the virtual environment first with `.venv\Scripts\Activate.ps1` (Windows) or `source .venv/bin/activate` (Linux/macOS).

Embedding setup (optional)

Engrama works out of the box with fulltext search only. If you want semantic similarity search (finding conceptually related nodes, not just keyword matches), you can enable local embeddings via Ollama.

1. Install Ollama — download from ollama.com and make sure it's running (ollama serve or launch the desktop app).

2. Pull the embedding model:

```shell
ollama pull nomic-embed-text
```

3. Enable embeddings in .env:

```env
EMBEDDING_PROVIDER=ollama
EMBEDDING_MODEL=nomic-embed-text
EMBEDDING_DIMENSIONS=768
OLLAMA_URL=http://localhost:11434
```

4. Verify the model is available:

```shell
ollama list
```

You should see nomic-embed-text:latest in the output.

Note: embeddings are generated locally — no data leaves your machine. The nomic-embed-text model is ~274 MB and supports 8192-token context.
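If you want to sanity-check the endpoint yourself, Ollama exposes a plain HTTP API. A quick probe using only the standard library (our own helper names, not part of Engrama; assumes Ollama on its default port):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama port

def build_payload(text, model="nomic-embed-text"):
    """Request body for Ollama's /api/embeddings endpoint."""
    return {"model": model, "prompt": text}

def embed(text, model="nomic-embed-text"):
    """Return the embedding vector for `text` (requires Ollama running)."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/embeddings",
        data=json.dumps(build_payload(text, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

# Usage (needs a running Ollama):
#   vec = embed("graph-based memory")
#   len(vec) should be 768 for nomic-embed-text
```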

What's next?

The Quick Start sets you up with the default developer profile. If you're not a developer, or you want a graph that fits your specific workflow, see the Personalizing your graph section below.

If you have existing Obsidian notes and want to populate the graph from them, connect via Claude Desktop (next section) and ask Claude to run engrama_sync_vault.


MCP integration (Claude Desktop)

Engrama acts as an abstraction layer between the AI agent and the database. Claude Desktop connects to the Engrama MCP server — it never sees database credentials, connection strings, or raw queries.

1. Find your Claude Desktop config file:

  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

2. Add the Engrama server. Open the file and add (or merge into) the mcpServers section:

```json
{
  "mcpServers": {
    "engrama": {
      "command": "uv",
      "args": [
        "run", "--directory", "C:\\Proyectos\\engrama",
        "--extra", "mcp", "engrama-mcp"
      ]
    }
  }
}
```

Important: change C:\\Proyectos\\engrama to the actual path where you cloned the repo. On macOS/Linux use forward slashes (e.g. /home/you/engrama). No database credentials are needed here — the server reads them from .env.

3. Restart Claude Desktop completely (quit and reopen, not just close the window).

You should now see the Engrama tools available. There are eleven:

| Tool | Description |
|---|---|
| `engrama_search` | Hybrid search (vector + fulltext + graph boost) |
| `engrama_remember` | Create or update a node (always MERGE) |
| `engrama_relate` | Create a relationship between two nodes |
| `engrama_context` | Retrieve the neighbourhood of a node |
| `engrama_sync_note` | Sync a single Obsidian note to the graph |
| `engrama_sync_vault` | Full vault scan, reconcile all notes |
| `engrama_ingest` | Read content + extract knowledge automatically |
| `engrama_reflect` | Adaptive cross-entity pattern detection → Insights |
| `engrama_surface_insights` | Read pending Insights for review |
| `engrama_approve_insight` | Approve or dismiss an Insight |
| `engrama_write_insight_to_vault` | Write approved Insight to Obsidian |

See examples/claude_desktop/system-prompt.md for a ready-to-paste system prompt that teaches Claude how to use the memory graph.


Python SDK

Use Engrama directly from any Python script — no MCP required:

```python
from engrama import Engrama

with Engrama() as eng:
    # Write
    eng.remember("Technology", "FastAPI", "High-performance async framework")
    eng.associate("MyProject", "Project", "USES", "FastAPI", "Technology")

    # Read
    results = eng.recall("FastAPI", hops=2)
    hits = eng.search("microservices", limit=5)

    # Reflect
    insights = eng.reflect()
    pending = eng.surface_insights()
    eng.approve_insight(pending[0].title)

    # Forget
    eng.forget("Technology", "OldLib")
    eng.forget_by_ttl("Technology", days=365, purge=True)
```

All methods are documented with docstrings — use help(Engrama) or your IDE autocomplete to explore.


CLI reference

All commands require the `uv run` prefix (or an activated virtualenv):

```shell
uv run engrama init --profile developer                        # Standalone profile
uv run engrama init --profile base --modules hacking teaching  # Composable
uv run engrama init --profile developer --dry-run              # Preview without writing
uv run engrama verify                                          # Check Neo4j connectivity
uv run engrama search "microservices"                          # Fulltext search
uv run engrama reflect                                         # Run pattern detection
uv run engrama reindex                                         # Batch re-embed all nodes
uv run engrama decay --dry-run                                 # Preview confidence decay
uv run engrama decay --rate 0.01                               # Apply gentle decay
uv run engrama decay --rate 0.1 --min-confidence 0.05          # Aggressive + archive
```

Search modes

Engrama supports two search modes depending on your configuration:

Fulltext only (EMBEDDING_PROVIDER=none, default) — keyword matching via Neo4j's built-in fulltext index. Works out of the box, no extra dependencies.

Hybrid (EMBEDDING_PROVIDER=ollama) — combines semantic similarity (vector search) with keyword matching plus graph topology boost and temporal scoring. Finds conceptually related nodes even without exact keyword matches. Requires Ollama running locally with nomic-embed-text model.

How to activate hybrid search:

  1. Set EMBEDDING_PROVIDER=ollama in .env (see Embedding setup)
  2. Run uv run engrama reindex to embed existing nodes
  3. New nodes are embedded automatically on creation

The scoring formula is `final = α × vector + (1-α) × fulltext + β × graph_boost + γ × temporal`, with α=0.6, β=0.15, γ=0.1 by default. α and β are configurable via the `.env` variables HYBRID_ALPHA and HYBRID_GRAPH_BETA.
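As a sanity check, here is the formula above in plain Python (our own sketch — Engrama's actual implementation may normalise scores differently):

```python
def hybrid_score(vector, fulltext, graph_boost, temporal,
                 alpha=0.6, beta=0.15, gamma=0.1):
    """final = alpha*vector + (1-alpha)*fulltext + beta*graph_boost + gamma*temporal"""
    return (alpha * vector
            + (1 - alpha) * fulltext
            + beta * graph_boost
            + gamma * temporal)

# A perfect vector match with a weak keyword match still scores high:
print(hybrid_score(vector=1.0, fulltext=0.2, graph_boost=0.5, temporal=0.0))  # ≈ 0.755
```

Note how the vector term dominates: a conceptually close node with no keyword overlap can still outrank an exact-but-isolated keyword hit.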


Personalizing your graph (onboarding)

Engrama ships with a developer profile, but the graph schema should match your world, not a generic template. A nurse's graph looks nothing like a developer's graph — and that's the point.

Option A: Use the built-in developer profile

If you're a developer or technical instructor, the default profile already works:

```shell
uv run engrama init --profile developer
```

This creates nodes for Projects, Technologies, Decisions, Problems, Courses, Concepts, and Clients.

Option B: Let Claude build your modules (recommended)

This is the easiest path, and it works for any role or combination of roles. Open Claude Desktop with Engrama connected and say:

"I want to set up Engrama for my work. I'm a nurse with a master's in biology, I teach undergraduate students, and I love cooking on weekends."

Claude will interview you for about 5 minutes — what you track day to day, how things connect in your head — and then generate custom domain modules tailored to you: nursing.yaml, biology.yaml, teaching.yaml, cooking.yaml. It composes them with the universal base.yaml and applies the schema, all in one conversation. No YAML knowledge required.

Option C: Compose from existing modules

Engrama ships with a few example modules to get you started. Combine any of them with the universal base profile:

```shell
uv run engrama init --profile base --modules hacking teaching photography ai
```

This merges profiles/base.yaml (Project, Concept, Decision, Problem, Technology, Person) with domain-specific nodes and relations from profiles/modules/.

Included example modules:

| Module | Adds |
|---|---|
| `hacking` | Target, Vulnerability, Technique, Tool, CTF |
| `teaching` | Course, Client, Exercise, Material |
| `photography` | Photo, Location, Species, Gear |
| `ai` | Model, Dataset, Experiment, Pipeline |

These four are examples, not a closed list. The real power is that anyone can create a module for any domain — see Option D below.

Option D: Write your own module

A module is just a small YAML file in profiles/modules/. Here's a complete example for someone who tracks cooking:

```yaml
name: cooking
description: Recipes, techniques, and ingredients

nodes:
  - label: Recipe
    properties: [name, cuisine, difficulty, time, notes]
    required: [name]
    description: "A dish or preparation."
  - label: Ingredient
    properties: [name, category, season, notes]
    required: [name]
    description: "A food ingredient — vegetable, spice, protein."
  - label: CookingTechnique
    properties: [name, type, notes]
    required: [name]
    description: "A culinary method — sous vide, fermentation, braising."

relations:
  - {type: USES,      from: Recipe,           to: Ingredient}
  - {type: APPLIES,   from: Recipe,           to: CookingTechnique}
  - {type: RELATED,   from: Ingredient,       to: Concept}        # 'Concept' comes from base.yaml
  - {type: DOCUMENTS, from: Recipe,           to: Project}        # 'Project' comes from base.yaml
```

Save it as profiles/modules/cooking.yaml, then compose:

```shell
uv run engrama init --profile base --modules cooking teaching
```

Rules for modules:

  • Nodes use PascalCase labels and name or title as the merge key
  • Relations can reference any label in base.yaml (Project, Concept, Decision, Problem, Technology, Person) without redefining them
  • If two modules define the same label, properties are merged automatically
  • Relationship types should be verbs (USES, TREATS, COVERS), not nouns
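The "properties are merged automatically" rule boils down to a union keyed by label. Roughly (our own sketch, not the actual schema loader):

```python
def merge_nodes(*modules):
    """Union node definitions by label; same label -> merged property list."""
    merged = {}
    for module in modules:
        for node in module["nodes"]:
            entry = merged.setdefault(
                node["label"], {"label": node["label"], "properties": []}
            )
            for prop in node["properties"]:
                if prop not in entry["properties"]:
                    entry["properties"].append(prop)
    return merged

# Two hypothetical modules both define Recipe with different properties:
cooking = {"nodes": [{"label": "Recipe", "properties": ["name", "cuisine"]}]}
baking = {"nodes": [{"label": "Recipe", "properties": ["name", "oven_temp"]}]}
print(merge_nodes(cooking, baking)["Recipe"]["properties"])
# ['name', 'cuisine', 'oven_temp']
```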

See profiles/developer.yaml for a complete standalone profile, and engrama/skills/onboard/references/example-profiles.md for worked profiles across very different domains (nurse, lawyer, PM, freelance creative).

Tips for good profiles

  • 3 to 5 node types per module is the sweet spot. The base already gives you 6. A typical multi-role user ends up with 12–18 total, which is fine.
  • Use title as the merge key for sentence-like things (decisions, problems, protocols). Use name for everything else.
  • Always include status on nodes with a lifecycle — the reflect skill uses it to distinguish open vs resolved items.
  • When in doubt, let Claude generate the module for you (Option B).
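For instance, a hypothetical lifecycle node carrying `status` might look like this (sketch — the status values are illustrative, not a fixed enum):

```yaml
- label: Experiment
  properties: [name, status, notes]   # status: open | resolved | archived
  required: [name]
```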

Documentation


License

Engrama is licensed under the Apache License 2.0. Copyright 2026 Sinensia IT Solutions.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

You are free to use, modify, and distribute Engrama in both personal and commercial projects. The Apache 2.0 license includes an explicit patent grant, giving you confidence to adopt Engrama in enterprise environments without IP concerns.

Contributing

By submitting a pull request or contribution, you agree that your contribution is licensed under the same Apache 2.0 terms. We use a Developer Certificate of Origin (DCO) — sign off your commits with `git commit -s` to certify that you have the right to submit the code under this license.

Commercial Extensions

Certain premium features (such as managed hosting, multi-tenant collaboration, and advanced analytics) may be offered under a separate commercial license. The core engine, MCP tools, and all community-facing functionality remain fully open source under Apache 2.0. For commercial licensing inquiries, contact sinensiaitsolutions@gmail.com.

