KnowledgeWeaver was inspired by Andrej Karpathy's LLM Wiki note — the idea that LLMs should compile sources into a persistent intermediate layer instead of re-reading raw text on every query. In KnowledgeWeaver, that intermediate layer is made of Cognodes: small, typed units of knowledge that stay readable on disk.
Most RAG systems chunk raw documents at query time. KnowledgeWeaver does the structure work up front: each source becomes a set of typed Cognodes such as `concept`, `fact`, `experience`, `narrative`, `opinion`, and `known_unknown`. Those units can point to each other through typed relations such as `depends_on`, `instantiates`, and `evidences`.
That gives you:
- readable markdown you can grep, diff, and version
- structure-first retrieval over typed fields and relations
- a compiled Postgres index you can delete and rebuild from markdown at any time
See `cognodes/examples/karpathy-llm-wiki` for a worked example.
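To make the idea concrete, a single Cognode file might carry its type and relations in YAML frontmatter. The sketch below is illustrative only: the types and relation names come from the description above, but the field layout and identifiers (`id`, `title`, `relations`, `target`) are assumptions, not the actual KnowledgeWeaver schema.

```yaml
# Hypothetical Cognode frontmatter -- field names are illustrative,
# not the real KnowledgeWeaver schema.
id: fact-q2-revenue-drop
type: fact            # one of: concept, fact, experience, narrative, opinion, known_unknown
title: Q2 revenue dropped 4%
relations:
  - type: evidences   # typed relation to another Cognode
    target: opinion-enterprise-churn-risk
  - type: depends_on
    target: concept-enterprise-renewals
```

Because each unit stays as readable markdown on disk, you can grep for `type: fact`, diff relation changes, and version the whole knowledge base like source code.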
- `kw distill`: turn raw text or files into Cognode markdown
- `kw query-cognode`: answer questions from Cognode markdown on disk
- `kw index`: compile Cognodes into Postgres
- `kw query`: query the compiled Postgres index
- `kw eval-distill`: run YAML distillation evals
Before you start, make sure you have:
- Python 3.10+
- Docker Desktop or Docker Engine with `docker compose`
- Ollama installed and running at `http://localhost:11434/v1`
- local Ollama models configured for both generation and embeddings
Notes:
- The bundled Postgres setup is started through Docker, so you do not need to install Postgres separately if you use the commands below.
- The default config is written to `~/.knowledgeweaver/kw.yaml` and the generated Docker env file is written to `~/.knowledgeweaver/docker-compose.env`.
- If your Ollama model names differ from the defaults, edit `~/.knowledgeweaver/kw.yaml` after `kw config init` so the `extraction`, `query_*`, and `embedding` model entries match what you have installed locally.
- Run the Docker command from the repo root so `docker-compose.postgres.yml` and `Dockerfile.postgres` are available.
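The model-name edit from the note above might look like this in `~/.knowledgeweaver/kw.yaml`. Treat this as a sketch: the nesting under `profiles.<name>.models` follows the config section later in this README, but the profile name, the `query_answer` key (the README only guarantees a `query_*` family), and the model tags are assumptions based on a typical local Ollama install.

```yaml
# Sketch of the relevant kw.yaml section -- key and model names are assumed.
profiles:
  default:
    models:
      extraction:                  # model used when distilling sources
        provider: ollama
        model: llama3.1:8b         # replace with a tag from `ollama list`
      query_answer:                # one of the query_* entries
        provider: ollama
        model: llama3.1:8b
      embedding:
        provider: ollama
        model: nomic-embed-text    # assumed local embedding model
```

After editing, `kw config validate` should confirm the entries resolve before you start distilling.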
    pip install .
    kw config init
    # edit ~/.knowledgeweaver/kw.yaml if you need different Ollama model names
    kw config validate

    # start the bundled pgvector/Postgres instance on port 55432
    docker compose --env-file ~/.knowledgeweaver/docker-compose.env -f docker-compose.postgres.yml up -d --build
    kw db init

    kw distill --text "Revenue dropped by 4% in Q2 due to lower enterprise renewals."
    kw query-cognode "What caused the Q2 revenue drop?"
    kw index
    kw query "What caused the Q2 revenue drop?"

The default config assumes:
- Ollama at `http://localhost:11434/v1`
- local Postgres credentials stored in `~/.knowledgeweaver/docker-compose.env`
- config at `~/.knowledgeweaver/kw.yaml`
Claude Code auto-registers the bundled skills under `skills/cognode-distillation/` and `skills/cognode-query/`.
For Codex or other agents, point them at the skill files directly and keep the Cognode markdown as the canonical artifact. The intended workflow is still:
- distill raw text into Cognodes
- save Cognodes as markdown
- query markdown directly with `kw query-cognode`, or compile into Postgres with `kw index`
Show config resolution:
    kw config path
    kw config show
    kw config validate

Distill:
    kw distill --input ./notes/article.txt
    kw distill --text "Revenue dropped by 4% in Q2 due to lower enterprise renewals."
    cat article.txt | kw distill --stdin
    kw distill --input ./inbox --recursive

Query markdown directly:
    kw query-cognode "What caused the Q2 revenue drop?"

Compile and query Postgres:
    kw index
    kw query "What caused the Q2 revenue drop?"
    kw query "What caused the Q2 revenue drop?" --debug

Initialize Postgres schema explicitly:
    kw db init
    kw db init --profile postgres_local

Useful flags:
- `--config`: use a non-default config file
- `--profile`: select a profile from the config
- `--doc-id`, `--title`, `--author`, `--source-uri`, `--created-at`: attach metadata during distillation
- `--top-k`: control retrieval breadth for `query` and `query-cognode`
kw looks for config in this order:
1. `--config <path>`
2. `$KW_HOME/kw.yaml` if `KW_HOME` is set, otherwise `~/.knowledgeweaver/kw.yaml`
3. `./kw.yaml`
Inside the config, `kw_home` is the root for generated files:

- `${kw_home}/cognodes`
- `${kw_home}/cognodes/manifests`
- `${kw_home}/logs`
- `${kw_home}/docker-compose.env`
After changing `kw_home`, run `kw config validate` again so kw refreshes `docker-compose.env`.
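A sketch of moving the root, assuming `kw_home` is a top-level key in `kw.yaml` (the key's exact position in the file is an assumption):

```yaml
# Assumed top-level key; the generated paths listed above derive from it.
kw_home: /srv/knowledgeweaver   # previously ~/.knowledgeweaver
# Cognodes would then land in /srv/knowledgeweaver/cognodes,
# logs in /srv/knowledgeweaver/logs, and the Docker env file in
# /srv/knowledgeweaver/docker-compose.env.
```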
This repo includes `docker-compose.postgres.yml` and `Dockerfile.postgres` for running a local pgvector-enabled Postgres.
Start the bundled local Postgres:
    kw config validate
    docker compose --env-file ~/.knowledgeweaver/docker-compose.env -f docker-compose.postgres.yml up -d --build
    kw db init

The bundled setup uses port 55432.
Supported provider styles:
- `ollama`
- `openai-compatible`
- `anthropic`
For each model under `profiles.<name>.models.<key>`, you usually set:

- `provider`
- `model`
- `api_base`
- one auth method: `api_key`, `api_key_env`, `oauth_token`, or `oauth_token_env`
- optional tuning fields such as `timeout_seconds`, `temperature`, `max_tokens`, and `dimensions`
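Putting those fields together, a single model entry might look like the following. Only the field names come from the list above; the values are placeholders, and the choice of `api_key_env` over the other auth methods is arbitrary.

```yaml
# Sketch of one entry under profiles.<name>.models -- values are placeholders.
extraction:
  provider: openai-compatible
  model: some-model-name              # placeholder model identifier
  api_base: http://localhost:11434/v1
  api_key_env: MY_PROVIDER_API_KEY    # one auth method; placeholder env var name
  timeout_seconds: 120                # optional tuning
  temperature: 0.0
  max_tokens: 2048
```

Prefer the `*_env` auth variants when the config file is committed to version control, so secrets stay out of `kw.yaml`.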
`kw config show` redacts inline secrets and DSN passwords before printing.
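So a profile configured with an inline key might print roughly like this. The output shape and masking string are guesses; only the redaction behavior itself is documented.

```yaml
# Hypothetical excerpt of `kw config show` output -- exact format is assumed.
api_key: '***'                                        # inline secret redacted
dsn: postgresql://kw:***@localhost:55432/kw           # DSN password redacted
```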
- Use `kw query-cognode` when you want the markdown-only workflow.
- Use `kw query` when you want structure-first retrieval over the compiled Postgres index.
- Rebuilding Postgres should never require editing Cognodes.
- Changing `kw_home` moves Cognodes, manifests, logs, and Docker helper files together.