"Make each program do one thing well" assumed humans were writing them — slow, deliberate, by hand. Agents flip this. They can author small, single-purpose programs constantly: for one task, for one user, torn down when no longer useful. The philosophy scales in a way its authors couldn't have imagined.
But durable work needs somewhere to live. Not a conversation context that evaporates. Not a workflow SaaS built for humans clicking through a GUI.
cue is that runtime. Any MCP-supporting agent — Claude Code, Cursor, Codex, a chatbot backend, an eval harness — calls create_action and create_trigger and walks away with a persistent, sandboxed, addressable mini-app. Actions the agent authors become callable — by a schedule, a webhook, an app the agent spun up, another agent. Each invocation runs in a fresh unitask unikernel under declarative policy, so scale doesn't mean blast radius.
"App" here is deliberately minimal: actions (named code snippets that run on demand) + triggers (cron schedules, webhook endpoints). No UI layer — whatever medium the agent is talking to the user on is the UI surface. Claude.ai artifact, Claude Code terminal output, Slack message, your own app — all the same on the backend.
```
agent ──authors──▶ cue action ──fires on──▶ cron
                   (durable)                webhook POST
                   (sandboxed)              HTTP URL
                   (addressable)            another agent
```
Here's how an agent OS starts. See demos/ for end-to-end walkthroughs of an agent building real apps in ~4 messages (push notifications, live dashboards, …) with verbatim captured output.
- Actions — named JS snippets, invoked on demand, each call runs in a fresh unikernel
- Triggers — `cron` and `webhook`, managed by the daemon, fire actions with captured input
- Addressable — every action gets a stable `http://<host>:<port>/a/<id>` invoke URL + bearer token so UIs, webhooks, and humans can call it
- MCP server — stdio and streamable-HTTP transports, same tool surface, one daemon. Local agents over stdio; remote/multi-tenant over HTTP.
- Policy (inherited from unitask) — per-action `allowNet`, `allowTcp`, `secrets`, `files`, `dirs`, `timeoutSeconds`, `memoryMb`. Project-root `.cue.toml` sets the ceiling; effective policy = intersection.
- Namespaces — first-class isolation boundary with lifecycle (`active | paused | archived`). Every action, trigger, secret, state entry, and artifact is namespace-scoped. `cue ns create | list | inspect | pause | resume | archive | delete` covers the full lifecycle; `pause` stops invocations without deleting state, `archive` is read-only/frozen, `delete` cascades.
- Artifacts — agent-hosted static assets (HTML/JS/CSS/images). The agent uploads bytes via `create_artifact`; cue serves them at `GET /u/<namespace>/<path>` on the same origin as webhooks, so browser-side fetches to `/w/:id` work without CORS or mixed-content issues. Public by default; per-artifact view tokens for non-public.
- Storage — SQLite for metadata, local disk for run blobs. Postgres + S3-compatible adapters designed for fleet, not yet shipped. See docs/storage.md.
- Run records — every invocation captures stdout, stderr, exit, input, trigger id, and the unitask run id; metadata in SQL, output bytes in `~/.cue/blobs/runs/<id>/`.
- `doctor` — verifies unitask is on PATH, the daemon is up, the port is reachable
- unitask on PATH (`unitask doctor` green)
- Node.js ≥ 22.5 (uses `node:sqlite`)
```sh
git clone https://github.com/jnormore/cue.git
cd cue
npm install && npm run build && npm link
cue doctor
```

```sh
cue serve   # starts HTTP + MCP + cron on localhost
```

Leave it running — terminal pane, tmux, launchd, systemd, your call. Everything else (`cue mcp`, the CLI subcommands, the MCP clients agents spawn) talks to this one process over HTTP.
Binds to 127.0.0.1 by default — local agents over stdio need nothing more. For a remote or shared daemon, bind to a routable interface with --host, terminate TLS at a reverse proxy, and give each client a scoped agent token (see Agent tokens). /mcp refuses the master token, so an exposed daemon can't be taken over by a misconfigured client.
The daemon generates a master token at ~/.cue/token (mode 0600) on first start. It is the operator's credential for POST /a/:id (action invocation), /state/:ns/:key (state log), and /admin/* (operator CRUD on actions, triggers, secrets, agent tokens, namespaces). The cue CLI uses it to talk to the daemon — every operator command is an authenticated HTTP call; the daemon owns the database. /mcp does not accept the master token; every MCP client must carry a scoped agent token minted via cue token create (see Agent tokens). This split means a misconfigured agent client cannot silently run as operator. Webhook triggers and state logs have their own scoped tokens.
```sh
cue mcp config claude-code      # → JSON snippet + the path it goes in
cue mcp config claude-desktop   # also: cursor, vscode-copilot
```

Every invocation mints a fresh wildcard-scoped agent token. The locally-connected agent can create and operate as many namespaces as it wants (each namespace = one app). The agent allocates namespaces via the `create_namespace` MCP tool; `cue token list` shows the minted tokens for revocation. For multi-tenant deployments, don't use this path — mint scoped tokens explicitly with `cue token create --namespace <pattern>`.
cue mcp config requires a running daemon (it mints the token via the daemon's admin API). Run cue serve first.
For a stdio client:
```json
{
  "mcpServers": {
    "cue": { "command": "cue", "args": ["mcp", "--token", "atk_..."] }
  }
}
```

`cue mcp --token <agent-token>` is the stdio↔HTTP bridge. It forwards tool calls to the running daemon using the supplied agent token — it does not read the master token. Paste, restart, done.
Want a scoped (non-wildcard) token — e.g., a token that can only touch shop, or any namespace under an acme- prefix? Skip cue mcp config and mint explicitly:
```sh
cue token create --namespace shop                       # literal: only `shop`
cue token create --namespace shop --namespace billing   # literal allowlist
cue token create --namespace "acme-*"                   # prefix: anything starting with acme-
```

Paste the resulting bearer into the client's MCP config yourself. The three pattern shapes are documented in Agent tokens.
Point any MCP client that supports streamable-HTTP directly at the daemon:
```sh
cue mcp config claude-desktop --http
cue mcp config claude-desktop --url https://cue.example.com/mcp
```

Same auto-sandbox behavior as the stdio path. The snippet:
```json
{
  "mcpServers": {
    "cue": {
      "url": "http://cue.example.com/mcp",
      "headers": { "Authorization": "Bearer atk_..." }
    }
  }
}
```

No bridge needed — the client handles HTTP directly. The bearer is a scoped agent token (never the master token). Use this for a remote/shared daemon. For local single-user setups, stdio is simpler.
If you're writing a backend that talks to cue over MCP, use the SDK with a scoped agent token (never the master token — /mcp rejects it):
```js
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const agentToken = "atk_..."; // minted via `cue token create --namespace <ns>`

const client = new Client(
  { name: "my-app", version: "0.1.0" },
  { capabilities: {} },
);

await client.connect(
  new StreamableHTTPClientTransport(new URL("http://cue.example.com/mcp"), {
    requestInit: { headers: { authorization: `Bearer ${agentToken}` } },
  }),
);

await client.callTool({
  name: "create_action",
  arguments: {
    /* ... — must target a namespace in the token's scope */
  },
});
```

For operator-style tooling (minting tokens, managing all namespaces, invoking actions outside of MCP), skip the MCP SDK and talk to ~/.cue/ + /a/:id directly — see Agent tokens and the operator model section below.
- `create_action(name, code, namespace?, policy?)` → `{ id, invokeUrl }`
- `update_action(id, patch)` / `delete_action(id)`
- `invoke_action(id, input?)` → `{ stdout, stderr, exitCode, runId }`
- `get_action(id)` / `list_actions(namespace?)` / `list_action_runs(id)` / `inspect_run(runId)`
- `create_trigger({ type, config, actionId, namespace? })` → `{ id, webhookUrl? }`
- `delete_trigger(id)` / `get_trigger(id)` / `list_triggers(namespace?)`
- `set_secret(namespace, name, value)` — store a secret scoped to one namespace; read by actions declaring it in `policy.secrets`
- `create_artifact(namespace, path, content, mimeType?, public?)` → `{ url, viewToken? }` — publish a static asset under the namespace, served at `GET /u/<namespace>/<path>` on the daemon
- `update_artifact(namespace, path, patch)` / `get_artifact` / `read_artifact` / `list_artifacts(namespace)` / `delete_artifact(namespace, path)`
- `state_append(namespace, key, entry)` → `{ seq, at }` — append to a namespace's shared log (see State)
- `state_read(namespace, key, since?, limit?)` → `{ entries, lastSeq }`
- `state_delete(namespace, key)`
- `create_namespace(name, displayName?, description?)` — allocate a new namespace. Token must permit the chosen name (wildcard or prefix scope grants this; an explicit allowlist does not).
- `get_namespace(name)` / `update_namespace(name, patch)` — read or relabel (displayName/description). Status changes are operator-only.
- `delete_namespace(name)` — cascades actions, triggers, secrets, state
- `whoami()` → `{ principal, namespaces[{name, status, displayName?}] }` — what this token can touch and the lifecycle status of each namespace
- `doctor()`
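Continuing the SDK example above, a typical first exchange is one `create_action` followed by one `create_trigger`. A sketch, assuming the connected `client` from that example; the argument values (namespace, code body, cron expression, config shape) are illustrative rather than a verbatim schema, and the tool signatures listed above are the source of truth:

```js
// Sketch only: field values below are hypothetical; `client` is the connected
// MCP client from the SDK example, and the tool names match the list above.
const created = await client.callTool({
  name: "create_action",
  arguments: {
    name: "daily-report",
    namespace: "shop",                                   // must be in the token's scope
    code: `console.log("report for", new Date().toISOString());`,
    policy: { allowNet: ["api.example.com"], timeoutSeconds: 30 },
  },
});
// The result encodes { id, invokeUrl } per the return shape above.

await client.callTool({
  name: "create_trigger",
  arguments: {
    type: "cron",
    actionId: "act_...",                                 // the id returned by create_action
    namespace: "shop",
    config: { schedule: "0 9 * * *" },                   // config shape is an assumption
  },
});
```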
Operator-only operations (minting/revoking agent tokens, cascading namespace deletes, secret CRUD) go through the daemon's /admin/* HTTP surface, master-token gated. The cue CLI is a thin client over those routes — never an MCP tool. See Agent tokens.
Secrets are scoped to a namespace and stored on the daemon (rows in the secrets table; encryption at rest is a future Cloud-day-2 concern — see docs/storage.md). The daemon's own `process.env` is never forwarded — the only way a value reaches an action's unikernel is `set_secret` plus a matching `policy.secrets` entry on the action. Cross-namespace reads are prohibited: an action in `namespace: "evil"` cannot resolve `shop/SHOPIFY_TOKEN`.
Typical agent-driven flow:
- Agent writes an action declaring `policy.secrets: ["SHOPIFY_TOKEN"]`.
- Invoke fails — `process.env.SHOPIFY_TOKEN` is `undefined` inside the guest.
- Agent asks the user for the token, calls `set_secret({ namespace: "shop", name: "SHOPIFY_TOKEN", value: "shpat_…" })`.
- Re-invoke succeeds. unitask redacts the value from the run record's stdout.
delete_namespace wipes the namespace's secrets along with its actions and triggers.
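As a concrete sketch of step 1 above, this is roughly what such an action could look like when created over MCP (re-using the connected client from the SDK example). The names, hostnames, and code body are illustrative; the secret only materializes in the guest's `process.env` once `set_secret` has stored a matching value:

```js
// Hypothetical follow-on to the flow above, re-using the connected MCP client.
// Namespace, hostnames, and the action body are illustrative.
await client.callTool({
  name: "create_action",
  arguments: {
    name: "shopify-sync",
    namespace: "shop",
    code: `
      const token = process.env.SHOPIFY_TOKEN;           // undefined until set_secret runs
      if (!token) throw new Error("SHOPIFY_TOKEN not set for namespace 'shop'");
      console.log("token present, length:", token.length);
    `,
    policy: { secrets: ["SHOPIFY_TOKEN"], allowNet: ["myshop.myshopify.com"] },
  },
});

// Later, once the user has supplied the value:
await client.callTool({
  name: "set_secret",
  arguments: { namespace: "shop", name: "SHOPIFY_TOKEN", value: "shpat_..." },
});
```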
A namespace-scoped, durable, append-only log that multiple actions in the same namespace can share. Exists because actions run in fresh unikernels that can't see each other's memory, and the dirs injection is read-only — so when a webhook-fired action needs to hand data to a polled action, or vice versa, you need a primitive that outlives the unikernel and lives on the daemon.
An action opts in with policy.state: true. Inside the unikernel, require('/cue-state') returns:
```js
const state = require("/cue-state");

await state.append("orders", { total: 99 });                    // → { seq, at }
const { entries, lastSeq } = await state.read("orders", { since: 0 });
await state.delete("orders");                                   // wipe one key
```

All calls are implicitly scoped to the action's namespace — the helper carries a per-namespace token and the daemon enforces that the URL's namespace matches. An action in `ns: evil` cannot read `ns: shop`'s log.
Storage is backed by a StateAdapter, picked the same way as the store/runtime/cron adapters (CUE_STATE=sqlite by default, .cue.toml key state = "sqlite"). The SQLite adapter shares the daemon's cue.db file with the main store; appends compute MAX(seq) + 1 inside a write transaction so concurrent writers serialize through the SQL write lock. Each entry is capped at 64KB — larger payloads should go through the blob store with a reference in the entry. For fleet/scale-out, swap to a Postgres adapter without touching action code; the interface doesn't change. See docs/storage.md.
From outside a unikernel (agents pre-seeding, debugging, inspection) use the state_append / state_read / state_delete MCP tools or the /state/:namespace/:key HTTP routes. delete_namespace cascades state (logs + tokens) along with actions, triggers, and secrets.
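For example, an agent pre-seeding a log before wiring up its actions can go through the same MCP tools. A small sketch re-using the connected SDK client; the namespace and key names are illustrative:

```js
// Seed two entries, then read them back to confirm the sequence advanced.
await client.callTool({
  name: "state_append",
  arguments: { namespace: "shop", key: "orders", entry: { total: 99 } },
});
await client.callTool({
  name: "state_append",
  arguments: { namespace: "shop", key: "orders", entry: { total: 12 } },
});

// state_read returns { entries, lastSeq } per the tool list above.
const readBack = await client.callTool({
  name: "state_read",
  arguments: { namespace: "shop", key: "orders", since: 0, limit: 100 },
});
console.log(readBack);
```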
cue has two principal types:
| Principal | Bearer | Where it's honored | Used by |
|---|---|---|---|
| master | `~/.cue/token` | `POST /a/:id`, `/state/:ns/:key`, `/admin/*` | the local cue CLI, operator scripts |
| agent | `atk_<id>.<hex>` | `/mcp`, `POST /a/:id`, `/state/:ns/:key` | MCP clients (Claude Desktop, Claude Code, Cursor, ...) |
The master token is not accepted on /mcp. Every MCP client must carry a scoped agent token — there is no way to configure an agent to run as the operator. The master token gates the /admin/* operator surface, which is what the cue CLI uses; agent tokens are explicitly rejected there.
An agent token's scope.namespaces is a list of patterns. Each entry is one of:
| Pattern | Matches | Use |
|---|---|---|
| `"*"` | any namespace | local-dev default — `cue mcp config` mints this |
| `"acme-*"` | anything starting with `acme-` | multi-tenant: one prefix per workspace/project |
| `"shop"` | exactly `shop` | explicit allowlist (composable: `--namespace shop --namespace weather`) |
When an MCP client authenticates with an agent token, the daemon:
- Filters `list_actions` / `list_triggers` to namespaces matching any pattern in scope.
- Returns `NotFound` for `get_action` / `invoke_action` / `inspect_run` / `list_action_runs` / `get_trigger` / `delete_action` / `delete_trigger` / `update_action` on records whose namespace doesn't match — existence is hidden, not just access.
- Returns `Forbidden` on `create_action` / `create_trigger` / `create_namespace` / `set_secret` / `state_append` / `state_read` / `state_delete` / `delete_namespace` targeting an out-of-scope namespace.
- Never exposes agent-token CRUD over MCP — minting and revoking happen via the local `cue token` CLI, which calls the daemon's `/admin/agent-tokens` route with the master token.
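A connected agent can check what its token actually grants by calling `whoami`. A quick sketch with the SDK client from earlier:

```js
// Returns { principal, namespaces: [{ name, status, displayName? }] } per the tool list.
const who = await client.callTool({ name: "whoami", arguments: {} });
console.log(who); // namespaces outside the token's scope simply don't appear
```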
Mint one explicitly (the daemon must be running):
```sh
# wildcard — equivalent to what cue mcp config produces
cue token create --namespace "*" --label "trusted-agent"

# prefix — multi-tenant slice
cue token create --namespace "acme-*" --label "acme-workspace"

# literal allowlist (composable; can mix with patterns)
cue token create --namespace shop --namespace weather --label "claude-desktop"

# → { "id": "atk_...", "token": "atk_....<hex>", "scope": {...}, ... }
```

The bearer string is printed once; there's no way to recover it later. Re-mint if you lose it.
Or wire an MCP client locally in one command:
```sh
cue mcp config claude-desktop
```

This mints a wildcard-scoped token and emits an MCP config snippet. The connected agent can create and operate any namespace. For stdio clients the snippet contains `cue mcp --token atk_...`; for HTTP clients (`--http`) it contains the bearer in the Authorization header. The master token never leaves the box.
Inspect and revoke:
```sh
cue token list
cue token delete atk_01K...
```

Revocation is immediate — the token's next MCP or HTTP request returns 401.
Storage: an agent_tokens row in ~/.cue/cue.db. Cross-adapter: the AgentTokenStore interface lives under StoreAdapter alongside SecretStore, so the Postgres adapter slots in without changing call sites. Tokens use constant-time compare on verify to avoid timing leaks.
Webhook tokens are orthogonal. A webhook trigger's scoped token gates one specific trigger's URL and is unaffected by any agent-token scope. A webhook firing into shop/order-created still works even if the caller has no agent-token scope for shop.
The daemon is the only process that touches ~/.cue/cue.db. The CLI is a thin HTTP client.
- CLI → daemon HTTP, every command. Every `cue action`, `cue trigger`, `cue token`, `cue secret`, `cue ns` command sends an authenticated request to the daemon's `/admin/*` routes using the master token at `~/.cue/token`. The daemon must be running.
- Cron reconciliation is in-process. When the daemon mutates a trigger (via `/admin/triggers`), the in-process Subscribers notify the cron registry synchronously; a 1-second poll covers any out-of-process write (future fleet peers, manual DB edits). No `fs.watch`.
- Action invocation uses `/a/:id`. The one operation that has always been daemon-only (spawn a unikernel, stream output, record a run) is unchanged. The master token works there too.
- `cue doctor` runs local. Instantiates each adapter in-process and calls its `doctor()` probe (read-only on the DB). Separately pings `/health` (unauth) to report daemon liveness. Works with no daemon running — `daemonUp: false` is a valid result.
The complete HTTP surface:
| Route | Auth | Purpose |
|---|---|---|
| `GET /health` | none | liveness probe |
| `POST /a/:id` | master or agent (scoped) | invoke an action |
| `POST /w/:id` | webhook token (per-trigger) | fire a webhook |
| `/state/:ns/:key[/append]` | master or state-token (scoped) or agent (scoped) | append-log I/O |
| `/admin/*` | master only — agent rejected | operator CRUD (actions, triggers, …) |
| `/mcp` | agent only — master rejected | MCP streamable-HTTP for agents |
If you're writing operator tooling in another language, the mental model is: POST to /admin/* with the master token for storage operations, POST to /a/:id with the master token (or any agent token in scope) for action invocation. The on-disk database schema is an implementation detail — don't write to it directly.
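A minimal sketch of that mental model in Node: the token path and the invoke route are the documented ones, while the action id, the JSON body, and the response handling are assumptions.

```js
import { readFile } from "node:fs/promises";
import { homedir } from "node:os";
import { join } from "node:path";

// Operator credential: the master token the daemon wrote on first start.
const master = (await readFile(join(homedir(), ".cue", "token"), "utf8")).trim();

// Invoke an action directly (POST /a/:id); "act_..." is a placeholder id,
// and 4747 is the default daemon port.
const res = await fetch("http://127.0.0.1:4747/a/act_...", {
  method: "POST",
  headers: {
    authorization: `Bearer ${master}`,
    "content-type": "application/json",   // assuming this action takes JSON input
  },
  body: JSON.stringify({ reason: "manual operator run" }),
});
console.log(res.status, await res.text());
```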
The CLI is intentionally narrow. Apps are authored by agents through MCP, not by humans through the CLI. The cue command is for running and configuring the daemon — nothing more. There are deliberately no cue action create / cue trigger create / cue secret set commands; those would mirror the MCP surface and tell the wrong story.
What the CLI covers:
```sh
# start the daemon (leave running)
cue serve

# wire your MCP client — auto-mints a sandbox token + namespace
cue mcp config claude-code       # also: claude-desktop, cursor, vscode-copilot, ...

# health check
cue doctor

# operator: see what's running on the daemon
cue ns list
cue ns inspect demo

# operator: pause / resume / archive / delete a namespace
cue ns pause demo
cue ns resume demo
cue ns archive demo
cue ns delete demo

# operator: mint an explicit-namespace token (instead of auto-sandbox)
cue token create --namespace shop --label "shopify-team"
cue token list
cue token delete atk_01K…
```

Everything an agent does — creating actions, attaching triggers, setting secrets, invoking — happens via MCP tools. See the MCP tools section.
cue reads configuration from three places: CLI flags on cue serve, environment variables, and a project-level .cue.toml (walked up from cwd, like git/tsc). Flag > env > .cue.toml > default.
| Variable | What it does | Default |
|---|---|---|
| `CUE_HOME` | State directory (token, port, cue.db, blobs). | `~/.cue` |
| `CUE_PORT` | Daemon port. | `4747`, or last value in `<CUE_HOME>/port` |
| `CUE_RUNTIME` | Runtime adapter. Shipped: `unitask`. | `unitask` |
| `CUE_STORE` | Store adapter. Shipped: `sqlite`. | `sqlite` |
| `CUE_CRON` | Cron scheduler. Shipped: `node-cron`. | `node-cron` |
| `CUE_STATE` | State adapter. Shipped: `sqlite`. | `sqlite` |
| `CUE_UNITASK_BIN` | Path to the unitask binary. | resolved via PATH |
--port <n> / -p, --host <h> (default 127.0.0.1), --runtime <name>, --store <name>, --cron <name>. cue serve --help for the full list.
Unknown runtime/store/cron names hard-fail at startup, as does a failed doctor() on the selected adapter — there's no silent fallback.
Drop a .cue.toml in your project root (or any parent — cue walks up) to cap every action's requested policy. Same shape as .unitask.toml:
```toml
memoryMb = 512
timeoutSeconds = 60
allowNet = ["api.github.com", "api.openai.com"]
allowTcp = ["127.0.0.1:5432"]
secrets = ["GITHUB_TOKEN", "OPENAI_API_KEY"]
files = ["/Users/me/work/config.yml"]
dirs = ["/Users/me/work"]
```

Effective policy = requested ∩ ceiling. Denials land in the run record for the audit trail. Missing fields mean no ceiling on that field.
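To make the intersection concrete, here is a rough sketch of the idea in JavaScript. It is not cue's implementation, just the shape of "requested ∩ ceiling": list-valued fields are filtered against the ceiling, numeric limits take the smaller value, and a missing ceiling field leaves the request untouched:

```js
// Illustrative only: field names match the policy keys above, the merge rules are assumptions.
function effectivePolicy(requested, ceiling) {
  const capList = (req = [], cap) =>
    cap === undefined ? req : req.filter((item) => cap.includes(item));
  const capNumber = (req, cap) =>
    cap === undefined ? req : Math.min(req ?? cap, cap);

  return {
    allowNet: capList(requested.allowNet, ceiling.allowNet),
    allowTcp: capList(requested.allowTcp, ceiling.allowTcp),
    secrets: capList(requested.secrets, ceiling.secrets),
    memoryMb: capNumber(requested.memoryMb, ceiling.memoryMb),
    timeoutSeconds: capNumber(requested.timeoutSeconds, ceiling.timeoutSeconds),
  };
}

// e.g. a request for allowNet: ["api.github.com", "api.stripe.com"] against the
// ceiling above yields ["api.github.com"]; a requested memoryMb of 1024 is capped to 512.
effectivePolicy(
  { allowNet: ["api.github.com", "api.stripe.com"], memoryMb: 1024 },
  { allowNet: ["api.github.com", "api.openai.com"], memoryMb: 512 },
);
```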
The same file can pin adapter selection for the project:
```toml
runtime = "unitask"
store = "sqlite"
cron = "node-cron"
state = "sqlite"
```

```sh
npm test              # unit
npm run smoke         # boots a real daemon, drives both MCP transports
npm run cli           # exercises the `cue` CLI against a real daemon
npm run integration   # hits a real `unitask` binary (must be on PATH)
npm run verify        # typecheck + build + unit + smoke + cli
```

MIT.