SecondBrain gives your AI a memory of your real life.
It is an experimental memory layer for real-world relationships.
It helps you remember:
- who someone is
- where you met
- what mattered in your last conversation
- what you promised to do
Instead of treating interactions as raw transcript logs, SecondBrain stores structured memory episodes so context can be surfaced instantly when it matters.
Most tools remember documents and prompts. They do not remember your life.
SecondBrain is built for the moment right before you speak to someone and realize:
- you forgot their name
- you forgot where you met
- you forgot what you owe them
SecondBrain makes that moment recoverable by turning interactions into memory that can be searched and used in real time.
During a live session, the system identifies (or suggests) who is in front of you and immediately loads context.
Conversation text is transformed into structured fields:
- people involved
- key topics
- explicit promises
- actionable next steps
Before the conversation starts, you get a short context card:
- name
- where met
- one key detail
- one open loop
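The structured fields and the context card above can be sketched as TypeScript types. These shapes are illustrative, not the repo's actual schema; `contextNudge` is a hypothetical helper that renders the card as a one-line nudge.

```typescript
// Illustrative shapes for SecondBrain's structured memory.
// Field names are assumptions, not the repo's actual schema.
interface MemoryEpisode {
  people: string[];     // people involved
  topics: string[];     // key topics
  promises: string[];   // explicit promises
  nextSteps: string[];  // actionable next steps
}

interface ContextCard {
  name: string;      // who this is
  whereMet: string;  // where you met
  keyDetail: string; // one key detail
  openLoop: string;  // one open loop
}

// Render the pre-conversation nudge as a single short line.
function contextNudge(card: ContextCard): string {
  return `${card.name} (met: ${card.whereMet}) | ${card.keyDetail} | open: ${card.openLoop}`;
}
```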
Use the dashboard + chat to ask questions like:
- what did I promise them?
- who did I meet this week?
- what follow-ups are still open?
Face tracking and matching are implemented as a practical experiment pipeline:
- Face is detected from webcam frames.
- The face is vectorized into an embedding.
- Stored identity vectors are grouped by person.
- A centroid vector is used as a fast representative for each person.
- Distances are computed and top-K candidates are returned.
- Confidence gating determines auto-resolve vs manual selection fallback.
This gives a resilient demo path:
- fast best-guess identity
- candidate fallback when uncertain
- quick manual correction loop
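Under these assumptions, the centroid matching and confidence gating described above might look like the following sketch. The distance threshold and function names are illustrative placeholders, not the repo's actual implementation.

```typescript
// Sketch of centroid-based identity matching with confidence gating.
// Threshold and names are assumptions for illustration.
type Vec = number[];

// Average a person's stored identity vectors into one representative.
function centroid(vectors: Vec[]): Vec {
  const dim = vectors[0].length;
  const sum = new Array(dim).fill(0);
  for (const v of vectors) for (let i = 0; i < dim; i++) sum[i] += v[i];
  return sum.map((s) => s / vectors.length);
}

// Euclidean distance between two embeddings.
function dist(a: Vec, b: Vec): number {
  return Math.sqrt(a.reduce((acc, x, i) => acc + (x - b[i]) ** 2, 0));
}

interface Candidate { personId: string; distance: number; }

// Compare a query embedding against each person's centroid; return top-K closest.
function topK(query: Vec, centroids: Map<string, Vec>, k: number): Candidate[] {
  return [...centroids.entries()]
    .map(([personId, c]) => ({ personId, distance: dist(query, c) }))
    .sort((a, b) => a.distance - b.distance)
    .slice(0, k);
}

// Confidence gate: auto-resolve only when the best match is close enough;
// null signals the UI to fall back to manual candidate selection.
function resolve(candidates: Candidate[], threshold = 0.6): string | null {
  return candidates.length && candidates[0].distance < threshold
    ? candidates[0].personId
    : null;
}
```

The centroid keeps per-frame matching cheap: one distance per person instead of one per stored vector, at the cost of some precision when a person's embeddings are spread out.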
SecondBrain is local-first in this repo:
- local SQLite persistence for people + episodes
- local profile/session artifacts
- local development environment for rapid iteration
This keeps iteration fast and demo behavior deterministic.
This repo is a hackathon experiment, not a production compliance product.
- Privacy/compliance guarantees are out of scope in this build.
- This is an exploration of ideas and product direction.
- Do not treat this repository as a finalized privacy architecture.
In the long term, SecondBrain could become a wearable context layer that supports people with memory loss by helping reconnect names, faces, and personal history in real time.
Think: a lightweight Meta Glasses-style attachment that adds context for people you have already met and indexes new people as you meet them.
Pipeline:
camera + mic -> identity resolution -> person_id -> memory retrieval -> context nudge -> transcript extraction -> memory update
Key boundaries:
- Nia: memory/retrieval layer
- Recognition system: identity resolution
- Dashboard + live UI: user-facing interaction surface
- Next.js + React frontend (frontend/)
- API routes under frontend/app/api/*
- Local DB and memory utilities under frontend/lib/*
- Dashboard routes in frontend/app/dashboard/*
```
cd frontend
npm install
npm run dev
```

Then open http://localhost:3000.
Set frontend/.env.local as needed:
- OPENAI_API_KEY
- NIA_API_KEY
- NIA_BASE_URL
Optional:
- GOOGLE_CLIENT_ID
- GOOGLE_CLIENT_SECRET
- GOOGLE_REDIRECT_URI
- TAVILY_API_KEY
- APOLLO_API_KEY
- OPENAI_MEMORY_MODEL
From frontend/:
```
npm run dev
npm run build
npm run start
npm run lint
```
MIT. See LICENSE.