The backend AI can actually understand.
Pure functions. Explicit effects. Safe evolution.
LambdaGraph is a self-contained application platform built on functional programming principles. Think of it as a modern MS Access: database, logic, and UI in a single deployable unit - but designed for the era where your co-developer is an AI.
┌──────────────────────────────────────────────────────────────────┐
│                           LambdaGraph                            │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│    Pure Functions  +  Explicit Effects        =      AI-Safe     │
│    (no hidden I/O)    (declared capabilities)     (auditable)    │
│                                                                  │
│   ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│   │    Types     │    │  Functions   │    │     Apps     │       │
│   │  (structs,   │───▶│  (pure, with │───▶│  (TEA-style  │       │
│   │    enums)    │    │    caps)     │    │     UI)      │       │
│   └──────────────┘    └──────────────┘    └──────────────┘       │
│                              │                                   │
│                              ▼                                   │
│                      ┌──────────────┐                            │
│                      │ Capabilities │                            │
│                      │  KV │ Blob   │                            │
│                      │ HTTP │ Sched │                            │
│                      └──────────────┘                            │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘
Modern frameworks (Express, Django, Rails, etc.) evolved before AI-assisted development. They share the same problems:
- Side effects everywhere - Database calls buried in helpers, implicit state, middleware magic
- Hard to reason about - "What does this function actually do?" requires reading the entire codebase
- Brittle to change - AI makes a change → breaks something three modules away
- Testing is painful - Mock half the universe to test one function
This was manageable when humans wrote all the code. But when AI generates code, these problems become critical. You can't trust what you can't audit.
Every function is pure or explicitly declares its effects.
// This function is PURE - no side effects, deterministic
fn calculate_total(items: Vec<Item>) -> f64 {
items.iter().map(|i| i.price * i.quantity).sum()
}
// This function uses KV storage - it SAYS SO
#[capabilities(KV)]
fn save_order(order: Order) -> Result<(), Error> {
kv_set(&format!("order:{}", order.id), &order)
}
// This function calls external APIs - it SAYS SO
#[capabilities(HTTP)]
fn fetch_weather(city: String) -> Result<Weather, Error> {
http_get(&format!("https://api.weather.com/{}", city))
}
When you (or an AI) look at a function, you know exactly what it can do:
- No capabilities? It's pure. Same input = same output, always.
- Has KV? It might read/write storage.
- Has HTTP? It might call external services.
No surprises. No hidden landmines.
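One way to make that guarantee mechanical is to hand a function only the handles for the capabilities it declared, so undeclared effects are not even expressible. A minimal sketch of this idea; the `Kv` type and its methods are illustrative, not LambdaGraph's actual API:

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// A capability handle: holding a &Kv is what grants key-value access.
struct Kv {
    data: RefCell<HashMap<String, String>>, // in-memory stand-in for real storage
}

impl Kv {
    fn set(&self, key: &str, val: &str) {
        self.data.borrow_mut().insert(key.to_string(), val.to_string());
    }
    fn get(&self, key: &str) -> Option<String> {
        self.data.borrow().get(key).cloned()
    }
}

// Declaring #[capabilities(KV)] amounts to "receives a &Kv handle";
// a function without that parameter simply cannot touch storage.
fn save_greeting(kv: &Kv, name: &str) {
    kv.set(&format!("greeting:{}", name), "hello");
}
```

The audit then reduces to reading a signature: no handles, no effects.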
Every type and function is identified by its content hash (BLAKE3):
FunctionId: 7a3f8b2c1d4e5f6a7b8c9d0e1f2a3b4c...
This means:
- Same code = same identity - Duplicate detection is free
- Old versions are never lost - Just not pointed to anymore
- Rollback is instant - Change a pointer, done
- AI can experiment safely - Every change is reversible
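The mechanics behind those bullets can be sketched in a few lines: ids are derived from the definition's bytes, objects are append-only, and a name is just a movable pointer to an id. LambdaGraph uses BLAKE3; std's `DefaultHasher` stands in here only to keep the sketch dependency-free, and the struct shape is illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

struct Store {
    objects: HashMap<u64, String>, // id → definition (append-only)
    head: HashMap<String, u64>,    // name → current id (the only mutable part)
}

impl Store {
    fn put(&mut self, name: &str, source: &str) -> u64 {
        // Identity is the content hash: same code, same id, duplicates free.
        let mut h = DefaultHasher::new();
        source.hash(&mut h);
        let id = h.finish();
        self.objects.insert(id, source.to_string()); // old versions stay stored
        self.head.insert(name.to_string(), id);      // only the pointer moves
        id
    }

    fn rollback(&mut self, name: &str, id: u64) {
        self.head.insert(name.to_string(), id); // instant: change a pointer
    }
}
```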
LambdaGraph stores its own schema in its own storage. The system describes itself:
type:{id} → TypeDef (JSON)
func:{id} → FunctionDef (JSON)
app:{id} → AppDef (JSON)
One binary. One database file. Deploy anywhere.
# Clone and build
git clone https://github.com/pjankiewicz/lambdagraph
cd lambdagraph
cargo build --release
# Run in demo mode (in-memory, sample data)
cargo run --bin server
# Run with persistence
cargo run --bin server -- --data-dir ./data
Open http://localhost:4000 to access:
- UI: Browse and edit types, functions, apps
- API: REST endpoints at /api
- Docs: Swagger UI at /api/docs
- MCP: AI integration at /mcp
Define your data structures:
// A simple struct
struct User {
id: String,
name: String,
email: String,
created_at: DateTime,
}
// An enum for states
enum OrderStatus {
Pending,
Processing,
Shipped { tracking: String },
Delivered,
Cancelled { reason: String },
}
Write pure logic with optional capabilities:
// Pure function - no side effects
fn validate_email(email: String) -> Result<(), ValidationError> {
if email.contains('@') && email.contains('.') {
Ok(())
} else {
Err(ValidationError::InvalidEmail)
}
}
// Function with KV capability
#[capabilities(KV)]
fn get_user(id: String) -> Option<User> {
kv_get(&format!("user:{}", id))
}
// Function with multiple capabilities
#[capabilities(KV, HTTP, Log)]
fn sync_user_from_api(id: String) -> Result<User, Error> {
log_info("Fetching user from external API");
let user: User = http_get(&format!("https://api.example.com/users/{}", id))?;
kv_set(&format!("user:{}", id), &user)?;
Ok(user)
}
Build interactive applications using The Elm Architecture:
// Model - your app's state
struct TodoModel {
items: Vec<TodoItem>,
input: String,
}
// Msg - things that can happen
enum TodoMsg {
AddItem,
DeleteItem(usize),
UpdateInput(String),
ToggleComplete(usize),
}
// init - starting state
fn todo_init() -> TodoModel {
TodoModel { items: vec![], input: "".to_string() }
}
// update - handle messages, return new state
fn todo_update(msg: TodoMsg, model: TodoModel) -> TodoModel {
match msg {
TodoMsg::AddItem => { /* ... */ }
TodoMsg::DeleteItem(i) => { /* ... */ }
// ...
}
}
// view - render state to UI
fn todo_view(model: TodoModel) -> View {
// Returns a declarative UI description
}
Side effects are explicit and controlled:
| Capability | What it allows |
|---|---|
| KV | Read/write key-value storage |
| Blob | Store/retrieve binary files |
| HTTP | Make external HTTP requests |
| Scheduler | Schedule future function calls |
| Log | Emit structured logs |
Functions without capabilities are pure: deterministic, cacheable, safe to run anywhere.
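Determinism is what makes the caching safe: the result depends only on the input, so it can be memoized by input alone. A sketch (not LambdaGraph's runtime) using the `calculate_total` shape from earlier:

```rust
use std::collections::HashMap;

// Pure: same input always yields the same output.
fn calculate_total(items: &[(f64, u32)]) -> f64 {
    items.iter().map(|(price, qty)| price * *qty as f64).sum()
}

// Memoization keyed on the input; the debug-formatted input serves as the
// cache key purely for illustration.
fn cached_total(cache: &mut HashMap<String, f64>, items: &[(f64, u32)]) -> f64 {
    let key = format!("{:?}", items);
    *cache.entry(key).or_insert_with(|| calculate_total(items))
}
```

A function with capabilities could not be cached this way, since its output may depend on storage or the network.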
Expose functions as HTTP endpoints:
Route {
path: "/api/users/{id}",
method: GET,
function: "get_user",
}
// GET /api/users/123 → calls get_user("123")
LambdaGraph is designed for AI-assisted development:
Connect Claude or other AI assistants:
{
"mcpServers": {
"lambdagraph": {
"type": "http",
"url": "http://localhost:4000/mcp"
}
}
}
The AI can:
- List and search types/functions
- Execute functions with test inputs
- Create and modify definitions
- Deploy changes atomically
- Pure functions are predictable - AI can reason about behavior without simulating the universe
- Explicit effects are auditable - "This function only reads KV" is verifiable
- Content-addressing enables safe experimentation - Every change is reversible
- Structured mutations - AI proposes changes as data, not raw code edits
- Type system catches errors - Before deployment, not in production
See docs/AI_NATIVE.md for the full philosophy.
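"Changes as data" can be pictured like this: instead of editing text, the AI proposes a value describing the change, which the system can validate, log, and apply (or reject) atomically. Variant and field names below are illustrative, not LambdaGraph's actual mutation schema:

```rust
// A proposed change is a plain value, inspectable before it takes effect.
#[derive(Debug)]
enum Mutation {
    CreateFunction { name: String, source: String, capabilities: Vec<String> },
    RepointHead { name: String, new_id: String }, // deploy or roll back
}

// Validation runs on the proposal itself, before anything is applied.
fn validate(m: &Mutation) -> Result<(), String> {
    match m {
        Mutation::CreateFunction { name, source, .. }
            if name.is_empty() || source.is_empty() =>
        {
            Err("name and source must be non-empty".to_string())
        }
        _ => Ok(()),
    }
}
```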
| Mode | Storage | Use Case |
|---|---|---|
| Local | SQLite + filesystem | Development, personal projects |
| Cloud Cheap | S3/GCS/R2 | Small teams, $5-20/month |
| Cloud Performance | Redis + S3 | Production, low latency |
| Enterprise | Postgres + S3 | Self-hosted, compliance |
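One way the storage modes above can coexist in a single binary is for every backend to implement one small trait, selected at startup. The trait shape is a sketch, not LambdaGraph's actual storage interface:

```rust
use std::collections::HashMap;

// Minimal backend contract; SQLite, S3, or Redis adapters would each
// implement this.
trait Storage {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn put(&mut self, key: &str, value: Vec<u8>);
}

// In-memory backend, standing in for the real adapters.
struct InMemory(HashMap<String, Vec<u8>>);

impl Storage for InMemory {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
    fn put(&mut self, key: &str, value: Vec<u8>) {
        self.0.insert(key.to_string(), value);
    }
}
```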
# Local (default)
cargo run --bin server -- --data-dir ./data
# With config file
cargo run --bin server -- --config lambdagraph.toml
- Architecture Overview
- AI-Native Design Philosophy
- Vision
- Types & Functions
- Capabilities
- TEA Apps
- HTTP Routing
- API Reference
- Functions and data structures - That's all you need
- Explicit effects - No hidden I/O, ever
- Single binary - No deployment complexity
- Content-addressed - Every version preserved, rollback instant
- AI-native - Designed for AI to understand and evolve
- Observable by default - Logging and metrics built in
We're not building AWS. We're building what you actually need.
LambdaGraph is in active development. Core features work:
- Types, functions, apps
- WASM compilation and execution
- Capability system (KV, Blob, HTTP, Scheduler)
- REST API with Swagger
- React frontend
- MCP server for AI integration
- SQLite persistence
See CONTRIBUTING.md for how to build, test, and submit pull requests.
Built for the era where your co-developer is an AI.