One file. Every layer. Zero drift.
Define your entire application — schema, types, validation, API, UI, and tests — in a single .kappa file.
Generate production code for any stack. Nothing is repeated. Nothing falls out of sync.
Language Spec · Dense Notation · Examples · Contributing
You describe a User six times to ship one feature:
Prisma schema → model User { id Int @id @default(autoincrement()) ... }
TypeScript interface → interface User { id: number; email: string; ... }
Zod validation → z.object({ email: z.string().email(), ... })
API route → router.get('/users/:id', async (req, res) => { ... })
React form → <input type="email" required pattern="..." />
Test fixture → const mockUser = { id: 1, email: 'test@example.com', ... }
Six files. Six chances for things to drift. Six places to update when you add a field.
For AI-assisted development, this is worse: an LLM spends 70–80% of its context window reading boilerplate before it can do anything useful.
User { email: s@~#email, name: s(1,100), role: (admin|editor|viewer), active: b=true, created: dt!^ }
One line. Five fields. Every decision is explicit: unique (@), indexed (~), format-annotated (#email), length-constrained ((1,100)), defaulted (=true), immutable (!), hidden (^). Fields are required by default — no * needed.
A parser reads this single line and generates the database column, the TypeScript type, the validation rule, the API endpoint, the form input, and the test case for every field.
Same information. Written once. Generated everywhere.
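As an illustrative sketch (not Kappa's actual codegen output — names and shapes here are assumptions), this is the kind of TypeScript type and validator a generator could derive from that one `User` line:

```typescript
// Sketch: plausible TypeScript a generator might emit for
// `User { email: s@~#email, name: s(1,100), role: (admin|editor|viewer), active: b=true, created: dt!^ }`.
// Shapes are illustrative assumptions, not the project's real generated code.

type Role = "admin" | "editor" | "viewer";

interface User {
  email: string;          // s@~#email → unique, indexed, email format
  name: string;           // s(1,100) → length 1..100
  role: Role;             // inline enum
  active: boolean;        // b=true → defaults to true
  readonly created: Date; // dt!^ → immutable, hidden from API output
}

// A validator derived from the same constraints:
function validateUser(input: { email: string; name: string; role: string }): string[] {
  const errors: string[] = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(input.email)) errors.push("email: invalid format");
  if (input.name.length < 1 || input.name.length > 100) errors.push("name: length must be 1..100");
  if (!["admin", "editor", "viewer"].includes(input.role)) errors.push("role: not in enum");
  return errors;
}
```

The point is that both artifacts encode the same constraints, so they cannot drift apart.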
Kappa was built from the ground up for LLM workflows.
Minimal token footprint. Every character carries meaning. No decorative syntax, no boilerplate, no repetition. An LLM can express a complete entity in a single line instead of spending hundreds of tokens across multiple files.
Streaming parse. The dense notation parses incrementally, token by token — no buffering, no lookahead. Each field emits a complete AST node on the comma delimiter. When an LLM streams Kappa output, code generation begins before the spec is fully written. The schema column for email is generated while the model is still producing the next field.
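The emit-on-comma idea can be sketched in a few lines. This is a minimal illustration of the technique, not Kappa's actual streaming parser; the function name and callback shape are assumptions:

```typescript
// Sketch: feed characters one at a time, emit each complete field the
// moment its delimiting comma arrives at brace/paren/bracket depth zero.
function createStreamingFieldSplitter(onField: (field: string) => void) {
  let buf = "";
  let depth = 0;
  return {
    push(ch: string) {
      if (ch === "(" || ch === "{" || ch === "[") depth++;
      if (ch === ")" || ch === "}" || ch === "]") depth--;
      if (ch === "," && depth === 0) {
        onField(buf.trim()); // a complete field: downstream codegen can start now
        buf = "";
      } else {
        buf += ch;
      }
    },
    end() {
      if (buf.trim()) onField(buf.trim());
    },
  };
}

// Usage: fields arrive before the whole line has been seen.
const fields: string[] = [];
const splitter = createStreamingFieldSplitter((f) => fields.push(f));
for (const ch of "email: s@~#email, name: s(1,100), active: b=true") splitter.push(ch);
splitter.end();
// fields → ["email: s@~#email", "name: s(1,100)", "active: b=true"]
```

Note that the comma inside the `(1,100)` constraint is at depth one, so it does not split the field.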
Constrained vocabulary. A small, precise set of type codes and modifiers means the LLM has fewer ways to be wrong. The grammar is unambiguous — there's exactly one way to express any given constraint.
.kappa file → Parser → AST → Generators → target code
The parser is deterministic. The generators are deterministic. Input adapters read existing schemas (OpenAPI, SQL, GraphQL) and produce Kappa. Output generators read Kappa and produce code for any stack. Same spec, different targets.
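The Parser → AST step can be sketched for a single scalar field. The node shape below (`name`, `type`, modifier flags) is an assumption for illustration; Kappa's real AST is defined by its spec, and this sketch ignores enums, references, and arrays:

```typescript
// Sketch: parse one dense scalar field spec, e.g. "email: s@~#email",
// into a small AST node. Illustrative only — not Kappa's actual AST.
interface FieldNode {
  name: string;
  type: string;
  optional: boolean;
  unique: boolean;
  indexed: boolean;
  immutable: boolean;
  hidden: boolean;
  format?: string;
  default?: string;
  constraint?: string;
}

function parseField(spec: string): FieldNode {
  const [name, rest] = spec.split(":").map((s) => s.trim());
  const node: FieldNode = {
    name, type: "", optional: false, unique: false,
    indexed: false, immutable: false, hidden: false,
  };
  let s = rest;
  const def = s.match(/=([^\s]+)$/); // =val default
  if (def) { node.default = def[1]; s = s.replace(def[0], "").trim(); }
  const fmt = s.match(/#(\w+)/);     // #fmt format annotation
  if (fmt) { node.format = fmt[1]; s = s.replace(fmt[0], ""); }
  const con = s.match(/\(([^)]*)\)/); // (min,max) constraint
  if (con) { node.constraint = con[1]; s = s.replace(con[0], ""); }
  node.optional = s.includes("?");
  node.unique = s.includes("@");
  node.indexed = s.includes("~");
  node.immutable = s.includes("!");
  node.hidden = s.includes("^");
  node.type = s.replace(/[?@~!^]/g, "").trim();
  return node;
}
```

Because every modifier is a distinct character, parsing is a matter of stripping annotations in a fixed order; there is no ambiguity for a generator to resolve.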
The compact syntax. One entity per line.
Product { sku: s@~(8,20), name: s(1,200), price: m(0.01,), stock: i(0,)=0, status: (draft|active|discontinued), category: Category, created: dt!^ }
Fields are required by default. Use ? for optional/nullable.
Quick reference
| Code | Type | Modifier | Meaning |
|---|---|---|---|
| `s` | String | `?` | Optional |
| `t` | Text | `*` | Required (emphasis) |
| `i` | Integer | `!` | Immutable |
| `f` | Float | `~` | Indexed |
| `m` | Decimal | `@` | Unique |
| `b` | Boolean | `^` | Hidden (internal) |
| `d` | Date | `=val` | Default |
| `dt` | DateTime | `(min,max)` | Constraint |
| `id` | Identifier | `#fmt` | Format annotation |
| `x` | Binary | `++` | Auto-increment |
References: `author: User` (required), `team: Team?` (optional) · Enums: `(a|b|c)` · Named enums: `enum Role (a|b|c)` · Arrays: `[s]`
Full reference: Dense Notation Spec
When you need computed fields, authorization, or workflows — things dense notation can't express:
entity Order {
items: [OrderItem]
status: (pending|paid|shipped|cancelled) = "pending"
total: Float = fn() => this.items |> sum(item => item.price * item.quantity)
capability owner {
scope: fn(user: User) => this.customer == user
actions: ["read", "update", "cancel"]
}
workflow onUpdate {
when this.status == "paid" then {
notify(this.customer, "Payment confirmed")
inventory.reserve(this.items)
}
}
}
Both syntaxes mix in the same file. Both produce the same AST.
| Example | What it covers |
|---|---|
| Blog | Users, posts, comments |
| E-commerce | Products, orders, line items |
| SaaS Project Manager | Multi-tenant orgs, projects, tasks |
| AI Chat Platform | Conversations, messages, tool calls, billing |
| ML Platform | Experiments, runs, datasets, model registry |
| Compiler Pipeline | Source files, AST, symbols, IR, diagnostics |
| Quantum Lab | Backends, circuits, jobs, calibration |
| Order with Logic | Computed fields, authorization, workflows |
| Streaming Parse | Token-by-token incremental parsing |
# Parse a file and output the AST as JSON
kappa parse schema.kappa
# Validate one or more files
kappa validate src/*.kappa
# Reformat to canonical dense notation
kappa fmt schema.kappa --write
- Language Specification — complete reference
- Dense Notation Reference — quick reference
- Dense Grammar (EBNF) — formal grammar
- Full Grammar (EBNF) — formal grammar
The specification is stable. The toolchain is under active development:
- Language specification (v2 — required-by-default, implicit id, `m` decimal, `^` hidden, `#` format, named enums, `@` unique)
- Dense and full syntax with unified AST
- Formal EBNF grammars
- Parser generator — one script produces parsers for 5 languages
- Reference parsers (TypeScript, Python, Rust, Go, Java)
- Streaming parser (character-by-character, emits on comma)
- Cross-language test suite (116 AST tests + 13,500 property-based + fuzz)
- CLI tooling (`parse`, `validate`, `fmt`)
- Input adapters (OpenAPI, SQL, GraphQL → Kappa)
- Output generators (Drizzle, Zod, tRPC, React)
- VS Code extension with syntax highlighting
The spec is the product right now. Read it, try writing .kappa files for your own domain, and open an issue with what you find.
If you want to build a parser or generator, see CONTRIBUTING.md.