An open-source long-term memory engine for developers. It provides primitives to ingest events, produce layered digests, retrieve memory, and (optionally) answer questions grounded in that memory.
This is not a consumer assistant app. You bring your own infrastructure and secrets via environment variables.
- Start infra

  ```sh
  cd project-memory
  docker-compose up -d
  ```

  This starts Postgres + Redis only.
- Install deps

  ```sh
  pnpm install
  ```
- Set env

  ```sh
  cp .env.example .env
  ```

  Apps auto-load the repo root `.env` on startup. You can override per app by creating `apps/<app>/.env` (for example `apps/worker/.env`).
- DB migrate + seed

  ```sh
  pnpm db:generate
  pnpm db:migrate
  pnpm seed
  ```
- Run services

  ```sh
  pnpm dev:api
  pnpm dev:worker
  pnpm dev:telegram
  ```

Optional CLI:

```sh
pnpm dev:cli -- scopes
```

No-LLM smoke test:

```sh
./scripts/smoke-no-llm.sh
```

LLM smoke test:

```sh
FEATURE_LLM=true OPENAI_API_KEY=... ./scripts/smoke-llm.sh
```

Reminder smoke test:

```sh
./scripts/smoke-reminders.sh
```

All smoke tests:

```sh
pnpm smoke
```

Core unit tests (digest control layer):

```sh
pnpm --filter @project-memory/core test
```

Benchmark (performance + reliability score):

```sh
pnpm benchmark
```

Required for all:
- `DATABASE_URL`, `REDIS_URL`

API (`apps/api`):

- `PORT`, `LOCAL_USER_TOKEN` (dev)
- Optional LLM: `FEATURE_LLM=true` + `OPENAI_*`

Worker (`apps/worker`):

- `FEATURE_LLM=true` + `OPENAI_*` for digests
- Optional Telegram reminder delivery: `FEATURE_TELEGRAM=true` + `TELEGRAM_BOT_TOKEN`
- Digest control vars: `DIGEST_EVENT_BUDGET_TOTAL`, `DIGEST_EVENT_BUDGET_DOCS`, `DIGEST_EVENT_BUDGET_STREAM`, `DIGEST_NOVELTY_THRESHOLD`, `DIGEST_MAX_RETRIES`, `DIGEST_USE_LLM_CLASSIFIER`, `DIGEST_DEBUG`, `DIGEST_REBUILD_CHUNK_SIZE`

Telegram adapter (`apps/adapter-telegram`):

- `FEATURE_TELEGRAM=true`, `TELEGRAM_BOT_TOKEN`, `PUBLIC_BASE_URL`, `TELEGRAM_WEBHOOK_PATH`, `API_BASE_URL`
- `ADAPTER_PORT` (optional)

CLI (`apps/cli`):

- `API_BASE_URL`
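Since every service fails without `DATABASE_URL` and `REDIS_URL`, it can help to verify them once at boot. A minimal sketch (the `assertEnv` helper is hypothetical, not part of this repo):

```typescript
// Hypothetical helper: check required env vars up front and fail fast with
// one actionable message instead of a later connection error.
function assertEnv(
  env: Record<string, string | undefined>,
  required: string[]
): void {
  const missing = required.filter((key) => !env[key] || env[key]!.trim() === "");
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}

// Usage at process startup:
// assertEnv(process.env, ["DATABASE_URL", "REDIS_URL"]);
```

The same pattern extends to the per-app optional vars listed above.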
Set `PUBLIC_BASE_URL` and call the adapter:

```sh
curl -X POST "http://localhost:3001/telegram/webhook/set"
```

```ts
import { ProjectMemoryClient } from "@projectmemory/client";

const client = new ProjectMemoryClient({
  baseUrl: process.env.API_BASE_URL!,
  userId: "dev-user"
});

const scope = await client.createScope({ name: "Chat App" });

await client.ingestEvent({
  scopeId: scope.id,
  type: "stream",
  source: "sdk",
  content: "User asked about pricing"
});

await client.ingestEvent({
  scopeId: scope.id,
  type: "document",
  source: "sdk",
  key: "note:roadmap",
  content: "Updated roadmap draft"
});
```

Set `FEATURE_LLM=true` and provide `OPENAI_API_KEY` to enable `/memory/answer` and digest jobs. If disabled, the API returns a clear error and worker jobs fail fast.
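The fail-fast behavior can be pictured as a small gate checked before any LLM work. This is an illustrative sketch, not the API's actual code; the `llmGate` name and return shape are assumptions:

```typescript
// Illustrative gate: when FEATURE_LLM is off (or the key is missing), return
// a 400-style error immediately instead of attempting an LLM call.
type GateResult = { ok: true } | { ok: false; status: number; message: string };

function llmGate(env: Record<string, string | undefined>): GateResult {
  if (env.FEATURE_LLM !== "true") {
    return {
      ok: false,
      status: 400,
      message: "FEATURE_LLM disabled: set FEATURE_LLM=true and OPENAI_API_KEY",
    };
  }
  if (!env.OPENAI_API_KEY) {
    return { ok: false, status: 400, message: "FEATURE_LLM enabled but OPENAI_API_KEY is missing" };
  }
  return { ok: true };
}
```

Worker jobs can run the same check at the top of each digest job handler.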
Digests are produced by a controlled pipeline, not a single LLM call:
- Event selection with dedupe and per-type budgets
- Delta detection with novelty threshold
- Protected deterministic state merge for stable facts
- LLM stage with strict JSON schema
- Consistency checks + retry (`DIGEST_MAX_RETRIES`)
- Rebuild/backfill endpoint: `POST /memory/digest/rebuild`
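The first stage above can be sketched as a pure function: dedupe events, then apply per-type and total budgets (mirroring `DIGEST_EVENT_BUDGET_DOCS` / `_STREAM` / `_TOTAL`). This is illustrative only; the real logic lives in `packages/core` and the names here are assumptions:

```typescript
// Illustrative event selection: documents dedupe by key, streams by id,
// and both respect per-type caps plus an overall budget.
interface MemEvent {
  id: string;
  type: "document" | "stream";
  key?: string;
  content: string;
}

function selectEvents(
  events: MemEvent[],
  budget: { total: number; docs: number; stream: number }
): MemEvent[] {
  const seen = new Set<string>();
  const counts = { document: 0, stream: 0 };
  const picked: MemEvent[] = [];
  for (const ev of events) {
    const dedupeKey = ev.key ?? ev.id; // documents collapse to latest-keyed entry
    if (seen.has(dedupeKey)) continue;
    const cap = ev.type === "document" ? budget.docs : budget.stream;
    if (counts[ev.type] >= cap || picked.length >= budget.total) continue;
    seen.add(dedupeKey);
    counts[ev.type] += 1;
    picked.push(ev);
  }
  return picked;
}
```

Later stages (novelty delta, protected merge, LLM call, consistency retry) consume the selected slice rather than the raw event log.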
```mermaid
flowchart LR
  U[Adapter / CLI / SDK] --> A[API]
  A --> DB[(Postgres)]
  A --> Q[(Redis Queue)]
  Q --> W[Worker]
  W --> LLM[OpenAI-compatible LLM]
  W --> DB
  A --> U
```
- API validates input with shared Zod contracts and scopes all requests by user identity.
- Core engine (`packages/core`) performs selection/delta/state/consistency logic.
- Worker executes digest and rebuild jobs asynchronously via BullMQ.
- Digests are stored as first-class records, with an optional `rebuildGroupId` for backfills.
- SDK and adapters call the API only (no direct database coupling).
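The "protected deterministic state merge" step can be illustrated with a small sketch: stable facts under protected keys survive every digest cycle, while the LLM stage may only add or update unprotected keys. Function and field names here are assumptions, not the core API:

```typescript
// Illustrative protected merge: values under protected keys that already
// exist are never overwritten by LLM-proposed state; everything else merges.
function mergeState(
  current: Record<string, string>,
  proposed: Record<string, string>,
  protectedKeys: Set<string>
): Record<string, string> {
  const next = { ...current };
  for (const [key, value] of Object.entries(proposed)) {
    if (protectedKeys.has(key) && key in current) continue; // keep stable fact
    next[key] = value;
  }
  return next;
}
```

Keeping this step deterministic means a flaky LLM output can be retried without ever corrupting established facts.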
Use the built-in benchmark runner to generate reproducible metrics and a score report:
- Ingest throughput + p95 latency
- Retrieve semantic/strict hit-rate + p95 latency
- Digest success/consistency/latency (when `FEATURE_LLM=true`)
- Reminder due-to-sent delay

Run:

```sh
pnpm benchmark
```

Optional profile:

```sh
BENCH_PROFILE=stress pnpm benchmark
```

Reports are generated in `benchmark-results/` as JSON + Markdown.
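For reproducing the p95 latency numbers by hand, the usual approach is the nearest-rank percentile over the sorted samples; the actual benchmark runner may compute it differently, so treat this as a sketch:

```typescript
// Nearest-rank p95: sort samples ascending and take the value at
// rank ceil(0.95 * n), so at least 95% of samples are <= the result.
function p95(latenciesMs: number[]): number {
  if (latenciesMs.length === 0) throw new Error("no samples");
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const rank = Math.ceil(0.95 * sorted.length);
  return sorted[rank - 1];
}
```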
- Prisma runs from `packages/db`, so copy `.env` to `packages/db/.env` before `pnpm db:migrate`.
- If the API or worker reports `FEATURE_LLM disabled` but `.env` is set, restart the process after updating `.env`.
- Ensure the Postgres port mapping matches `DATABASE_URL` (e.g. `5433:5432` in `docker-compose.yml`).
- Reminder smoke test depends on the worker's 60s scheduler; keep the worker running and allow ~1–2 minutes.
- Digest and rebuild endpoints require `FEATURE_LLM=true`; otherwise the API returns an actionable 400 message.
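The reminder timing caveat above follows from the sweep design: a periodic tick selects reminders that are due and not yet sent, so the observed due-to-sent delay can be up to one full tick (~60s) plus queue latency. A sketch under assumed field names (`dueAt`, `sentAt` are not guaranteed to match the schema):

```typescript
// Illustrative sweep predicate: each scheduler tick picks reminders whose
// due time has passed and that have no sent timestamp yet.
interface Reminder {
  id: string;
  dueAt: number;    // epoch ms
  sentAt?: number;  // set once delivered
}

function dueReminders(reminders: Reminder[], now: number): Reminder[] {
  return reminders.filter((r) => r.sentAt === undefined && r.dueAt <= now);
}
```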
- `apps/api`: NestJS REST API
- `apps/worker`: BullMQ workers
- `apps/adapter-telegram`: Telegram reference adapter
- `apps/cli`: Developer CLI
- `packages/core`: domain services + pipelines
- `packages/contracts`: Zod schemas + shared enums
- `packages/prompts`: prompt templates
- `packages/sdk-client`: `@projectmemory/client`
- `packages/sdk-react`: `@projectmemory/react` (hooks only)
- `packages/db`: Prisma schema + client
See docs/api.md for endpoint details.
See docs/glossary.md for term definitions.
See docs/technical-overview.md for architecture and pipeline internals.
See docs/benchmarking.md for benchmark methodology and scoring.
See docs/release.md for release/versioning workflow.
See docs/release-v0.1.0.md for the initial release notes draft.
See ROADMAP.md for planned milestones.
- Contribution guide: `CONTRIBUTING.md`
- Code of conduct: `CODE_OF_CONDUCT.md`
- Security policy: `SECURITY.md`
- Changelog: `CHANGELOG.md`
- CI workflow: `.github/workflows/ci.yml`
- `pnpm format:check` for formatting checks
- `pnpm lint` for strict TypeScript checks across workspaces
- `pnpm build` for full workspace compilation
- `pnpm --filter @project-memory/core test` for core unit tests
- `.github/workflows/integration-smoke.yml` runs API + worker smoke tests (no LLM)