
mindlayer

PyPI version CI License: MIT Python 3.9+

Open-source, model-agnostic memory layer for LLMs.

Give any LLM application persistent, structured memory with a single pip install. No API key required. No infrastructure. No vendor lock-in.

import mindlayer

with mindlayer.MemCore() as mem:
    mem.add("My name is Alice. I prefer dark mode and I work in Python.")
    results = mem.search("programming preferences")
    for r in results:
        print(r.content)

Why mindlayer?

Most LLM apps lose context between sessions. Vector databases are heavy to set up. Existing memory libraries tie you to a specific LLM or cloud service.

mindlayer is different:

|         | mindlayer                                |
| ------- | ---------------------------------------- |
| Setup   | `pip install mindlayer`, nothing else    |
| Storage | SQLite, embedded, zero config            |
| LLM     | Bring your own, or use the built-in Gemma |
| Offline | Fully offline capable                    |
| License | MIT                                      |

Installation

pip install mindlayer

With semantic vector search (downloads ~130MB embedding model on first use):

pip install "mindlayer[vector]"

With LLM-powered extraction (downloads Gemma ~800MB on first use):

pip install "mindlayer[llm]"

Architecture

3-Layer Memory Model

Inspired by how human memory works: short-term facts get promoted to long-term knowledge based on how often they are accessed.

flowchart TD
    Input(["Raw Text"]) --> Ingestion["Ingestion"]
    Ingestion --> W["Working Memory\nshort-term facts"]
    W -->|"accessed 3+ times"| E["Episodic Memory\nmid-term facts"]
    E -->|"accessed 10+ times"| S["Semantic Memory\nlong-term knowledge"]

    Decay(["Decay"]) -.->|"prunes idle entries"| W
    Decay -.->|"prunes idle entries"| E

    style W fill:#dbeafe,stroke:#3b82f6
    style E fill:#fef9c3,stroke:#eab308
    style S fill:#dcfce7,stroke:#22c55e
    style Input fill:#f3f4f6,stroke:#6b7280
    style Decay fill:#fee2e2,stroke:#ef4444
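The promotion rule in the diagram can be sketched in a few lines. This is an illustrative model only: the access thresholds (3 and 10) come from the diagram, but the class and field names here are hypothetical, not mindlayer's internal API.

```python
from dataclasses import dataclass

LAYERS = ["working", "episodic", "semantic"]
PROMOTION_THRESHOLDS = {"working": 3, "episodic": 10}

@dataclass
class MemoryEntry:
    content: str
    layer: str = "working"
    access_count: int = 0

    def touch(self) -> None:
        """Record an access and promote the entry once it crosses the threshold."""
        self.access_count += 1
        threshold = PROMOTION_THRESHOLDS.get(self.layer)
        if threshold is not None and self.access_count >= threshold:
            self.layer = LAYERS[LAYERS.index(self.layer) + 1]
            self.access_count = 0  # start counting fresh in the new layer

entry = MemoryEntry("Alice prefers dark mode")
for _ in range(3):
    entry.touch()
print(entry.layer)  # after 3 accesses the fact has moved to episodic
```

Entries that never get re-accessed stay in working memory, where the decay pass can prune them.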

Data Flow

flowchart LR
    T(["Text Input"]) --> X["Extractor\nRules / Gemma"]
    X --> F["Discrete Facts"]
    F --> EM["Embedder\nFastEmbed / bge-small"]
    EM --> DB[("SQLite\n+ vec0")]

    Q(["Search Query"]) --> EM2["Embedder"]
    EM2 -->|"vector similarity"| DB
    DB --> R(["Ranked Results"])

    style DB fill:#f3f4f6,stroke:#6b7280
    style R fill:#dcfce7,stroke:#22c55e
    style T fill:#dbeafe,stroke:#3b82f6
    style Q fill:#dbeafe,stroke:#3b82f6
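The "Rules" branch of the Extractor stage can be approximated with plain sentence splitting plus a simple filter. This is a toy stand-in to show the idea, not mindlayer's actual rule set:

```python
import re

def extract_facts(text: str) -> list[str]:
    # Naive rule-based extraction: split on sentence boundaries and keep
    # first-person statements, which usually carry user facts and preferences.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if re.match(r"^(I|My)\b", s)]

facts = extract_facts("My name is Alice. The sky is blue. I work in Python.")
# keeps the two first-person sentences, drops the generic one
```

Each extracted fact is then embedded and written to SQLite as a separate row, so search ranks individual facts rather than whole conversations.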

5 Core Primitives

flowchart LR
    A["Ingestion"] --> B["Conflict\nResolution"]
    B --> C["Consolidation"]
    C --> D["Retrieval"]
    D --> E["Decay"]

    style A fill:#dbeafe,stroke:#3b82f6
    style B fill:#fce7f3,stroke:#ec4899
    style C fill:#fef9c3,stroke:#eab308
    style D fill:#dcfce7,stroke:#22c55e
    style E fill:#fee2e2,stroke:#ef4444

Usage

Default: rule-based extractor, no LLM needed

import mindlayer

mem = mindlayer.MemCore()
mem.add("I am a Python developer. I love open source.")
results = mem.search("developer")

Semantic vector search: best recall

# pip install "mindlayer[vector]"
mem = mindlayer.MemCore(use_vector=True)
mem.add("I prefer concise explanations and dislike verbose output.")
results = mem.search("communication style")
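Under the hood, vector search ranks stored facts by embedding similarity even when they share no keywords with the query. A toy version with hand-rolled cosine similarity and made-up 3-dimensional "embeddings" (the real backend uses bge-small via FastEmbed, which produces 384-dimensional vectors, and sqlite-vec for storage):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; a real model maps text to these vectors automatically.
store = {
    "prefers dark mode": [0.9, 0.1, 0.0],
    "works in Python":   [0.1, 0.9, 0.2],
}
query = [0.2, 0.8, 0.1]  # pretend embedding of "programming preferences"
best = max(store, key=lambda k: cosine(query, store[k]))
```

This is why `search("communication style")` can surface "I prefer concise explanations" without any word overlap.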

Gemma LLM extractor: best extraction quality

# pip install "mindlayer[llm]"
mem = mindlayer.MemCore(use_llm=True)
mem.add("Long conversation text with lots of context...")

Bring your own extractor

from mindlayer.extractors.base import BaseExtractor

class MyExtractor(BaseExtractor):
    def extract(self, text: str) -> list[str]:
        # call OpenAI, Anthropic, Ollama, anything
        return ["fact 1", "fact 2"]

mem = mindlayer.MemCore(extractor=MyExtractor())

Bring your own storage backend

from mindlayer.storage.base import BaseStorage

class PostgresStorage(BaseStorage):
    # implement the interface
    ...

mem = mindlayer.MemCore(storage=PostgresStorage())

Memory maintenance

mem.consolidate()  # promote memories across layers
mem.decay()        # decay and prune stale memories
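A decay pass typically scores each entry by time since last access and prunes anything below a cutoff. The exponential-decay sketch below is illustrative; the half-life, cutoff, and scoring function are assumptions, and mindlayer's actual decay logic may differ:

```python
HALF_LIFE_DAYS = 7.0
CUTOFF = 0.1

def decay_score(days_since_access: float) -> float:
    # Exponential decay: the score halves every HALF_LIFE_DAYS.
    return 0.5 ** (days_since_access / HALF_LIFE_DAYS)

def prune(entries: dict[str, float]) -> dict[str, float]:
    """Keep only entries whose decayed score stays above the cutoff."""
    return {k: d for k, d in entries.items() if decay_score(d) >= CUTOFF}

entries = {"fresh fact": 1.0, "stale fact": 40.0}  # days since last access
kept = prune(entries)  # the 40-day-old entry is pruned
```

Calling `mem.decay()` on a schedule (e.g. once per session) keeps the store from accumulating stale facts.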

Roadmap

  • SQLite storage with vector search (sqlite-vec)
  • Rule-based extractor
  • Gemma LLM extractor (auto-download)
  • 3-layer memory model
  • Async support
  • PostgreSQL storage backend
  • LLM-based conflict resolution
  • REST API server mode
  • JavaScript / TypeScript port

Contributing

Contributions are welcome. Please open an issue before submitting large PRs.

git clone https://github.com/Genious07/mindlayer
cd mindlayer
pip install -e ".[dev,vector]"
pytest

License

MIT

About

mindlayer is an open-source, model-agnostic memory layer for Large Language Models that gives AI applications persistent, structured memory without requiring external APIs or complex infrastructure.
