Mnem

Persistent memory and execution infrastructure for local AI systems.

Mnem solves a specific problem: AI interfaces are stateless. Every session starts from zero. Mnem is the layer underneath that makes continuity possible — storing what matters, loading what's needed, and coordinating execution across a structured pipeline.

It is not an AI. It is the infrastructure an AI runs on.


The Problem

Consumer AI interfaces have no memory between sessions. Every conversation starts cold. Any context, preferences, working knowledge, or session state has to be re-established from scratch — or stuffed into an ever-growing system prompt that degrades with length.

For personal or professional use cases where the AI is a long-term collaborator, this is a fundamental limitation. The model is capable. The infrastructure around it isn't built for persistence.

Mnem is built for persistence.


What It Does

Selective memory loading — Personality, tool history, and operational context are stored in a vector database (ChromaDB) and loaded at boot by relevance tier. High-priority context loads immediately. Everything else is queryable on demand. The system doesn't bloat the context window — it loads what's needed.
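The tiering idea can be sketched in a few lines. This is a minimal stand-in using an in-memory list rather than ChromaDB, and the tier numbers and record fields are illustrative assumptions, not mnem's actual schema:

```python
# Stand-in records mimicking what the vector store might hold.
# Tier 0 = boot-critical; higher tiers are fetched on demand.
MEMORY = [
    {"id": "identity", "tier": 0, "text": "Core personality and tone."},
    {"id": "tools",    "tier": 0, "text": "Registered tool history."},
    {"id": "projects", "tier": 1, "text": "Active project context."},
    {"id": "archive",  "tier": 2, "text": "Old session transcripts."},
]

def load_boot_context(max_tier=0):
    """Only records at or below the boot-priority tier enter the boot window."""
    return [m["text"] for m in MEMORY if m["tier"] <= max_tier]

def query_on_demand(keyword):
    """Lower-priority tiers stay out of boot; fetch them by query instead."""
    return [m["text"] for m in MEMORY if keyword.lower() in m["text"].lower()]
```

The point is the split: `load_boot_context` keeps the boot window lean, while everything else remains reachable through a query path.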

Structured knowledge base — A three-tier SQLite knowledge layer separates static facts, learned knowledge (verified during sessions), and live web-sourced data. Each tier has different trust levels and update cycles.
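A three-tier SQLite layout with trust-ordered lookup might look like the following. Table and column names here are assumptions for illustration, not mnem's actual schema:

```python
import sqlite3

# One table per trust tier; static facts are most trusted, web data least.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE static_facts (key TEXT PRIMARY KEY, value TEXT);
CREATE TABLE learned      (key TEXT PRIMARY KEY, value TEXT, verified_at TEXT);
CREATE TABLE web_sourced  (key TEXT PRIMARY KEY, value TEXT, fetched_at TEXT);
""")

def lookup(key):
    """Consult tiers in descending trust order: static > learned > web."""
    for table in ("static_facts", "learned", "web_sourced"):
        row = conn.execute(
            f"SELECT value FROM {table} WHERE key = ?", (key,)
        ).fetchone()
        if row:
            return row[0], table
    return None, None
```

Separate tables (rather than one table with a tier column) let each tier carry its own metadata, such as a verification timestamp for learned facts or a fetch time for web data.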

Task orchestration — A coordination loop receives natural language requests, plans them into structured steps, routes each step to the appropriate execution method (script, model call, or system), and returns results. Multi-step tasks are tracked with state across execution.
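The plan-route-execute loop can be sketched as below. The `Step` shape and the toy planner are assumptions; a real planner would call a model rather than split on a keyword:

```python
from dataclasses import dataclass

@dataclass
class Step:
    kind: str      # one of "script", "model", "system"
    payload: str
    status: str = "pending"

def plan(request):
    """Stub planner: split a request into steps on the word 'then'."""
    return [Step("script", part.strip()) for part in request.split(" then ")]

def execute(step):
    step.status = "done"               # state tracked across the run
    return f"ran {step.kind}: {step.payload}"

def orchestrate(request):
    """Plan the request, execute each step, return results in order."""
    return [execute(s) for s in plan(request)]
```

Because every step carries its own `status`, a failed multi-step task can report exactly where it stopped.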

Model routing — Requests are routed to the appropriate model based on task type and complexity. The routing layer is configurable and model-agnostic — swap endpoints without touching orchestration logic.
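A task-type routing table of the kind `config.json` might hold could look like this. Endpoint and model names below are placeholders, not mnem's actual configuration:

```python
# Routing table keyed by task type; swapping an endpoint means editing
# this data, not the orchestration code.
ROUTES = {
    "chat":    {"endpoint": "local-ollama", "model": "llama3"},
    "code":    {"endpoint": "api-primary",  "model": "coder-large"},
    "default": {"endpoint": "api-primary",  "model": "general"},
}

def route(task_type):
    """Select an endpoint by task type, falling back to the default route."""
    return ROUTES.get(task_type, ROUTES["default"])
```

Since callers only ever ask for a task type, any endpoint behind a route can be replaced without touching the code that dispatches requests.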

Script registry — Capabilities are registered in a database, not hardcoded. Adding a new tool means dropping a script and registering it — the orchestrator picks it up automatically.
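The registry pattern reduces to a lookup table. This sketch uses an in-memory SQLite table with an assumed layout:

```python
import sqlite3

reg = sqlite3.connect(":memory:")
reg.execute(
    "CREATE TABLE registry (name TEXT PRIMARY KEY, path TEXT, description TEXT)"
)

def register(name, path, description):
    """Dropping a script plus one INSERT makes it visible to the orchestrator."""
    reg.execute("INSERT OR REPLACE INTO registry VALUES (?, ?, ?)",
                (name, path, description))

def resolve(name):
    """The orchestrator looks capabilities up at run time, not write time."""
    row = reg.execute(
        "SELECT path FROM registry WHERE name = ?", (name,)
    ).fetchone()
    return row[0] if row else None
```

A `resolve` miss returning `None` is what lets the orchestrator fail gracefully on unknown tools instead of crashing on a hardcoded import.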

Boot context generation — At session start, a boot script pulls essential context from both the vector DB and knowledge base and surfaces it as structured markdown. The AI interface reads this as its opening context — effectively waking up with memory intact.
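The rendering step can be sketched as a small markdown assembler. Section names and inputs here are illustrative, not what `boot_context.py` actually emits:

```python
def render_boot_context(identity, facts):
    """Assemble the structured markdown the interface reads at session start."""
    lines = ["# Boot Context", "", "## Identity"]
    lines += [f"- {item}" for item in identity]
    lines += ["", "## Knowledge"]
    lines += [f"- **{k}**: {v}" for k, v in facts.items()]
    return "\n".join(lines)
```

Markdown is a deliberate choice of output format: every AI interface can ingest it as an opening message without any custom parsing.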


Architecture

┌─────────────────────────────────────┐
│           Interface Layer           │  ← Your AI of choice
│    (Claude, local model, or API)    │
└────────────────┬────────────────────┘
                 │
┌────────────────▼────────────────────┐
│           Control Layer             │
│  orchestrator · task_planner        │
│  tool_router · model_router         │
│  state_manager · reflection         │
└────────────────┬────────────────────┘
                 │
┌────────────────▼────────────────────┐
│          Execution Layer            │
│   scripts/ · model calls · system   │
└────────────────┬────────────────────┘
                 │
┌────────────────▼────────────────────┐
│            Data Layer               │
│  memory/vector.db  (ChromaDB)       │
│  knowledge/knowledge.db  (SQLite)   │
│  ops/  (state · logs · tasks)       │
└─────────────────────────────────────┘

Structure

mnem/
├── core/
│   ├── orchestrator.py      — main coordination loop
│   ├── task_planner.py      — breaks requests into executable steps
│   ├── tool_router.py       — routes steps to scripts or models
│   ├── model_router.py      — model selection and API dispatch
│   ├── state_manager.py     — tracks active tasks and context state
│   └── reflection.py        — post-execution evaluation and memory updates
├── ops/
│   ├── boot_context.py      — generates session context from memory and knowledge
│   └── session.py           — session lifecycle management
├── scripts/                 — registered capability scripts (one per tool)
├── memory/                  — vector DB (user-provided, not tracked)
├── knowledge/               — knowledge DB (user-provided, not tracked)
├── dpo_calibration.md       — DPO dataset methodology for model fine-tuning
├── architecture.json        — system architecture spec
├── config.json              — model endpoints and routing configuration
├── secrets.env.example      — API key template
└── setup.sh                 — one-shot environment setup

Setup

Requirements: Python 3.9+, API keys for your chosen model endpoints.

git clone https://github.com/nikuhkid/mnem
cd mnem
bash setup.sh
cp secrets.env.example secrets.env
# edit secrets.env with your keys

Run:

source venv/bin/activate
python3 ops/boot_context.py          # verify memory loads correctly
python3 core/orchestrator.py "your request here"

Design Principles

Memory is tiered, not flat. Not all context is equal. Essential identity and tool history load at boot. Experiences and relationships are queried on demand. The boot window stays lean.

Capabilities are registered, not hardcoded. The script registry means the orchestrator doesn't need to know what tools exist at write time. New capability = new script + registration. Nothing else changes.

The data layer is yours. Memory and knowledge databases are gitignored by design. The infrastructure is open. Your data stays local.

Model-agnostic by design. The model router abstracts API calls behind a task-type interface. The orchestration logic doesn't care whether the model is Groq, Gemini, a local Ollama instance, or Claude. Route configuration lives in config.json.

Built to migrate local. The current implementation uses API-based models. The architecture is designed to swap in local models (Ollama, llama.cpp) without touching orchestration logic. The long-term target is fully local inference.


Related

  • IRIS (Inference, Response & Input System) — the controlled AI runtime built on top of Mnem.

Status

Active development. Core infrastructure is functional. IRIS integration is the current focus.


Mnem — from Mnemosyne, Greek Titan of memory.
