Konteks is a memory engine for AI coding agents.
It builds a project-local context graph through autonomous knowledge curation, ensuring you never re-explain your project to an AI agent.
Memory artifacts are stored directly inside your repository, and an MCP server exposes compact, task-specific recall without requiring a global install or cloud dependencies.
- Zero-Install: Run anywhere via `npx` or `bunx` without native dependencies.
- Language-Aware: Precise semantic parsing via Tree-sitter (not just regex).
- Local-First: Your project memory stays in your repo—no cloud, no accounts.
- Token-Efficient: High-fidelity context synthesis designed for LLM economy.
For a deep dive into the philosophy, architecture, and usage, see the Full Documentation.
- Overview: Vision, Philosophy, and the "Why."
- Session Lifecycle: How to work with Konteks (Warm Up -> Build -> Save).
- Architecture Overview: How the memory engine works under the hood.
- Glossary: Short definitions for Konteks terms.
Konteks uses specialized grammars for semantic extraction:
- TypeScript / JavaScript
- HTML / JSDoc
- JSON
- PHP
- (More coming soon)
Konteks runs on Node.js (>=22) or Bun. Start by initializing memory from your project root:
```shell
npx -y konteks-cli init
```

Continue with the Quickstart for MCP setup and the Warm Up -> Build -> Save flow.
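Most MCP clients register servers through a JSON config entry that names the command to launch. A minimal sketch in the common `mcpServers` style, assuming a hypothetical `serve` subcommand (the actual subcommand and config keys for your client are documented in the Quickstart):

```json
{
  "mcpServers": {
    "konteks": {
      "command": "npx",
      "args": ["-y", "konteks-cli", "serve"]
    }
  }
}
```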
Konteks writes local memory under .konteks/. It uses SQLite (WASM) for the graph/indexes and a content-addressed object store for payloads. No host SQLite client or native modules are required.
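The content-addressed idea above can be sketched in a few lines: each payload is keyed by the hash of its own bytes, so identical content is stored exactly once and every key verifiably matches its payload. This is a minimal in-memory illustration of the general technique, assuming nothing about Konteks's actual store layout; the `ObjectStore` class and its methods are illustrative names.

```typescript
import { createHash } from "node:crypto";

// Minimal content-addressed store: the key IS the SHA-256 of the payload.
class ObjectStore {
  private objects = new Map<string, Buffer>();

  // Storing the same bytes twice yields the same key (automatic dedup).
  put(payload: Buffer): string {
    const key = createHash("sha256").update(payload).digest("hex");
    this.objects.set(key, payload);
    return key;
  }

  get(key: string): Buffer | undefined {
    return this.objects.get(key);
  }
}

const store = new ObjectStore();
const a = store.put(Buffer.from("context payload"));
const b = store.put(Buffer.from("context payload"));
console.log(a === b); // true: duplicate payloads share one object
```

A real store would write objects to disk (e.g. under a directory keyed by hash prefix), but the invariant is the same: look up by content hash, never by location.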
MIT Licensed. See LICENSE for details.