A persistent, cross-platform memory system for AI agents and LLMs, organized into four memory types modeled on human cognition.
```
┌─────────────────────────────────────────────────────────────────────────┐
│                           Streamlit Dashboard                           │
│ ┌───────────────┐  ┌─────────────────────┐  ┌──────────────────────┐    │
│ │ Memory Graph  │  │     Memory Chat     │  │   Live Memory Feed   │    │
│ │ (Nodes+Edges) │  │    (RAG-enabled)    │  │ (Real-time updates)  │    │
│ └───────────────┘  └─────────────────────┘  └──────────────────────┘    │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                             FastAPI Backend                             │
│ ┌──────────┐ ┌──────────┐ ┌───────────┐ ┌──────┐ ┌────────────────┐     │
│ │ Episodic │ │ Semantic │ │Procedural │ │Graph │ │   RAG Engine   │     │
│ │  Router  │ │  Router  │ │  Router   │ │Router│ │    (GPT-4o)    │     │
│ └──────────┘ └──────────┘ └───────────┘ └──────┘ └────────────────┘     │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                   PostgreSQL + pgvector (Supabase)                      │
│ ┌─────────────────┐ ┌────────────────┐ ┌────────────────┐ ┌──────────┐  │
│ │ episodic_memory │ │semantic_memory │ │ procedural_mem │ │knowledge │  │
│ │  (hypertable)   │ │ (VECTOR(1536)) │ │    (JSONB)     │ │  _graph  │  │
│ └─────────────────┘ └────────────────┘ └────────────────┘ └──────────┘  │
└─────────────────────────────────────────────────────────────────────────┘
```
| Type | Description | Storage | Decay |
|---|---|---|---|
| 🧠 Episodic | Time-stamped conversation history | PostgreSQL hypertable | 30 days (configurable) |
| 🔍 Semantic | Vector embeddings for meaning search | pgvector (384-dim) | Never |
| ⚙️ Procedural | User preferences, settings, workflows | JSONB | Never |
| 🕸️ Associative | Knowledge graph relationships | Nodes + Edges | Never |
```bash
# Clone the repository
git clone <repo-url>
cd memorylayer

# Set your OpenAI API key
export OPENAI_API_KEY=sk-...

# Start all services
docker-compose up --build

# Access the dashboard
open http://localhost:8501
```
1. Create Supabase Project
   - Go to https://app.supabase.com
   - Create a new project
   - Enable the pgvector extension:

     ```sql
     CREATE EXTENSION IF NOT EXISTS "vector";
     ```

2. Run Migration
   - Open the Supabase SQL Editor
   - Copy and run `supabase/migrations/001_initial_schema.sql`

3. Configure Backend

   ```bash
   cd backend
   cp .env.example .env
   # Edit .env with your DATABASE_URL and OPENAI_API_KEY
   ```

4. Start Backend

   ```bash
   pip install -r requirements.txt
   uvicorn app.main:app --reload --port 8000
   ```

5. Start Frontend

   ```bash
   cd frontend
   pip install -r requirements.txt
   # Update .streamlit/secrets.toml with your backend URL
   streamlit run app.py
   ```
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/agents/{user_id}/episodes` | GET/POST | List/Create episodic memories |
| `/api/v1/agents/{user_id}/semantics` | GET/POST | List/Create semantic memories |
| `/api/v1/agents/{user_id}/semantic/search` | POST | Vector similarity search |
| `/api/v1/agents/{user_id}/procedural/settings` | GET/POST | Get/Upsert procedural memory |
| `/api/v1/agents/{user_id}/graph/nodes` | GET/POST | List/Create knowledge nodes |
| `/api/v1/agents/{user_id}/graph/edges` | GET/POST | List/Create knowledge edges |
| `/api/v1/agents/{user_id}/graph/path` | POST | Find path between nodes |
| `/api/v1/rag/chat` | POST | RAG-enhanced chat |
| `/api/v1/memory/stats/{user_id}` | GET | Memory statistics |
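As a sketch of calling the vector search endpoint from the table above with only the standard library. The `query` and `top_k` payload fields are assumptions for illustration; the real request schema lives in `app/schemas/memory.py`.

```python
# Sketch of a semantic-search request against the backend. The payload
# field names ("query", "top_k") are assumed, not taken from the schema.
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # backend port from the setup steps

def build_search_request(user_id: str, query: str, top_k: int = 5) -> urllib.request.Request:
    """Build (but do not send) a POST to the semantic search endpoint."""
    url = f"{BASE_URL}/api/v1/agents/{user_id}/semantic/search"
    body = json.dumps({"query": query, "top_k": top_k}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request("alice", "preferred programming language")
print(req.full_url)  # → http://localhost:8000/api/v1/agents/alice/semantic/search
# Send with urllib.request.urlopen(req) once the backend is running.
```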
```
nexmem/
├── backend/
│   ├── app/
│   │   ├── main.py              # FastAPI application
│   │   ├── config.py            # Settings from env vars
│   │   ├── database.py          # Async SQLAlchemy connection
│   │   ├── models/
│   │   │   └── memory.py        # SQLAlchemy ORM models
│   │   ├── schemas/
│   │   │   └── memory.py        # Pydantic request/response schemas
│   │   ├── routers/
│   │   │   ├── episodic.py      # Episodic memory endpoints
│   │   │   ├── semantic.py      # Semantic search endpoints
│   │   │   ├── procedural.py    # Procedural memory endpoints
│   │   │   ├── graph.py         # Knowledge graph endpoints
│   │   │   └── rag.py           # RAG chat endpoint
│   │   └── services/
│   │       ├── embedder.py      # OpenAI embedding service
│   │       └── llm.py           # LLM service (GPT-4o)
│   ├── requirements.txt
│   ├── .env.example
│   └── Dockerfile
├── frontend/
│   ├── app.py                   # Streamlit dashboard
│   ├── requirements.txt
│   ├── .env.example
│   ├── .streamlit/
│   │   └── secrets.toml
│   └── Dockerfile
├── supabase/
│   └── migrations/
│       └── 001_initial_schema.sql
├── docker-compose.yml
└── README.md
```
| Variable | Default | Description |
|---|---|---|
| `DATABASE_URL` | - | PostgreSQL connection string |
| `OPENAI_API_KEY` | - | OpenAI API key |
| `OPENAI_EMBEDDING_MODEL` | `all-MiniLM-L6-v2` | Embedding model |
| `OPENAI_LLM_MODEL` | `gpt-4o` | LLM for RAG responses |
| `MEMORY_DECAY_DAYS` | `30` | Days before episodic cleanup |
| `SEMANTIC_TOP_K` | `5` | Default search results |
| `DEBUG` | `false` | Enable debug mode |
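The backend reads these variables in `app/config.py`. A minimal stdlib sketch of equivalent loading logic, with the key names and defaults mirroring the table (the dict shape is illustrative, not the actual settings class):

```python
# Illustrative settings loader mirroring the environment-variable table.
# The real backend uses app/config.py; this sketch shows the same defaults.
import os

def load_settings() -> dict:
    return {
        "database_url": os.environ.get("DATABASE_URL"),        # required, no default
        "openai_api_key": os.environ.get("OPENAI_API_KEY"),    # required, no default
        "embedding_model": os.environ.get("OPENAI_EMBEDDING_MODEL", "all-MiniLM-L6-v2"),
        "llm_model": os.environ.get("OPENAI_LLM_MODEL", "gpt-4o"),
        "memory_decay_days": int(os.environ.get("MEMORY_DECAY_DAYS", "30")),
        "semantic_top_k": int(os.environ.get("SEMANTIC_TOP_K", "5")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

settings = load_settings()
print(settings["llm_model"])
```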
- End-to-end retrieval loop works for a single agent
- Semantic search returns relevant results
- Episodic history informs the LLM system prompt
- Data retention and privacy flags behave as specified
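The checklist item about episodic history informing the system prompt can be sketched as a prompt-assembly step. This is a hypothetical helper, not the backend's implementation (the real logic lives in `app/routers/rag.py` and `app/services/llm.py`):

```python
# Hypothetical sketch of RAG prompt assembly: recent episodic history and
# top-k semantic hits are folded into the system prompt before the LLM call.
# Illustrative only; not taken from the backend's actual code.

def build_system_prompt(episodes: list[str], semantic_hits: list[str]) -> str:
    """Compose a system prompt from retrieved memories."""
    lines = ["You are an assistant with persistent memory."]
    if episodes:
        lines.append("Recent conversation history:")
        lines.extend(f"- {e}" for e in episodes)
    if semantic_hits:
        lines.append("Relevant remembered facts:")
        lines.extend(f"- {s}" for s in semantic_hits)
    return "\n".join(lines)

prompt = build_system_prompt(
    episodes=["User asked how to enable pgvector."],
    semantic_hits=["User prefers Python examples."],
)
print(prompt)
```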