A local, markdown-first personal wiki with AI-assisted ingestion, semantic indexing, retrieval, and chat.
- Turns raw source markdown in `raw/` into structured notes in `wiki/`.
- Builds a reusable vector index at `.wiki-index/index.json`.
- Answers grounded questions over your notes with citations.
- Runs an interactive chat over the same retrieval pipeline.
- Install dependencies: `pnpm install`
- Create environment file: `cp .env.example .env`
- Set your API key in `.env`: `OPENAI_API_KEY="..."`
- Ingest a raw markdown file: `pnpm ingest raw/ai/large-language-model.md`
- Build/update the index: `pnpm index`
- Query: `pnpm query "What are the main limitations of LLMs?"`
- Start chat: `pnpm chat`

`pnpm ingest <file>` converts a markdown file under `raw/` into a structured note under `wiki/`, preserving the relative path.
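The relative-path preservation can be sketched as follows; `rawToWikiPath` is a hypothetical helper for illustration, not necessarily the project's actual implementation:

```typescript
import * as path from "path";

// Hypothetical sketch: map a source file under raw/ to its note under wiki/,
// keeping the same relative path (as described above).
function rawToWikiPath(rawFile: string): string {
  const rel = path.relative("raw", rawFile); // e.g. "ai/large-language-model.md"
  return path.join("wiki", rel);             // e.g. "wiki/ai/large-language-model.md"
}
```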
Options:
- `--model <id>`: generation model (default: `WIKI_MODEL`, fallback `gpt-4.1-mini`).
- `--max-chars <n>`: max source chars to ingest (default: `100000`).
Example:
`pnpm ingest raw/philosophy/nihilism.md --model gpt-5-mini --max-chars 80000`

`pnpm index` builds or updates the semantic index from files in `wiki/`.
Options:
- `--embedding-model <id>`: embedding model (default: `WIKI_EMBEDDING_MODEL`, fallback `text-embedding-3-small`).
- `--chunk-size <n>`: chunk size in characters (default: `1100`, must be `> 200`).
- `--chunk-overlap <n>`: overlap in characters (default: `180`, must be `>= 0`).
- `--batch-size <n>`: embedding batch size (default: `32`).
- `--index-path <path>`: custom index file path.
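The `--chunk-size` / `--chunk-overlap` semantics can be illustrated with a minimal sketch. This is an assumption about plain character-based splitting; the real indexer may split on document structure instead, and `chunkText` is a hypothetical name:

```typescript
// Illustrative character-based chunking with overlap, mirroring the
// --chunk-size / --chunk-overlap constraints described above.
function chunkText(text: string, chunkSize = 1100, overlap = 180): string[] {
  if (chunkSize <= 200) throw new Error("chunk size must be > 200");
  if (overlap < 0) throw new Error("overlap must be >= 0");
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step back by `overlap` chars so chunks share context
  }
  return chunks;
}
```

Overlap keeps a sentence that straddles a chunk boundary fully present in at least one chunk, which helps retrieval quality.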
Example:
`pnpm index --chunk-size 1200 --chunk-overlap 200`

`pnpm query <question>` retrieves the top chunks and produces a grounded answer with a confidence rating and source references.
Options:
- `--model <id>`: answer model (default: `WIKI_ANSWER_MODEL`, fallback `gpt-5-mini`).
- `--embedding-model <id>`: query embedding model (default: `WIKI_EMBEDDING_MODEL`).
- `--k <n>`: number of retrieved chunks (default: `8`).
- `--index-path <path>`: custom index file path.
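Retrieving the top `k` chunks presumably amounts to scoring each indexed chunk against the query embedding. A minimal cosine-similarity sketch, using an assumed chunk shape rather than the actual `index.json` schema:

```typescript
// Assumed index entry shape (illustrative only, not the real schema).
interface Chunk { file: string; text: string; embedding: number[] }

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Keep the k chunks most similar to the query embedding.
function topK(query: number[], chunks: Chunk[], k = 8): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```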
Example:
`pnpm query "Compare absurdism and nihilism" --k 10`

`pnpm chat` starts an interactive retrieval chat using the same model/index options as `query`.
Chat commands:
- `/sources`: show citations from the last answer.
- `/reset`: clear conversation history.
- `/exit`: quit chat.
See `.env.example`:

```
OPENAI_API_KEY=""
WIKI_MODEL="gpt-5-mini"
WIKI_ANSWER_MODEL="gpt-5-mini"
WIKI_EMBEDDING_MODEL="text-embedding-3-small"
```

Notes:

- `OPENAI_API_KEY` is required for ingestion, indexing (when new embeddings are needed), query, and chat.
- CLI flags override env vars.
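The flag-over-env-over-fallback precedence can be sketched as follows (function and variable names are illustrative, not the CLI's actual code):

```typescript
// Sketch of the precedence described above: CLI flag wins over the env var,
// which wins over the built-in fallback.
function resolveModel(flag: string | undefined, envName: string, fallback: string): string {
  return flag ?? process.env[envName] ?? fallback;
}
```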
```
raw/         # source markdown clips
wiki/        # generated canonical notes
lib/         # ingestion, indexing, retrieval, chat modules
scripts/     # CLI entrypoint
.wiki-index/ # generated embedding index
```
- Add or update markdown files in `raw/`.
- Run `pnpm ingest <raw-file>` for each new/changed source.
- Run `pnpm index` to refresh embeddings incrementally.
- Use `pnpm query` or `pnpm chat` to explore your wiki.