Mission Control is a Node.js TypeScript workflow and mission runtime for long-lived application flows.
It targets a focused MVP with:
- a typed mission definition DSL
- durable mission inspection state
- explicit waits, retries, timers, and signals
- single-instance runtime orchestration through ticks
- an in-memory adapter and a durable SQLite adapter
The Mission Control MVP is intentionally narrow:
- available adapters: `@mission-control/in-memory-commander`, `@mission-control/adapter-sqlite`
- runtime requirement: Node.js 24 minimum; Node.js 25+ recommended
- runtime model: single-instance, tick-driven
- side-effect model: at-least-once unless application code provides idempotency
- source code uses erasable TypeScript syntax only; npm packages publish built ESM output generated by `tsc`
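A `tsconfig.json` enforcing this build shape might look like the following sketch. The field values are illustrative, not the repo's actual configuration; `erasableSyntaxOnly` requires TypeScript 5.8+.

```jsonc
{
  "compilerOptions": {
    // Reject TypeScript-only runtime syntax (enums, namespaces,
    // parameter properties) so sources stay fully erasable.
    "erasableSyntaxOnly": true,
    // Emit ESM resolved the way Node.js resolves it.
    "module": "nodenext",
    "moduleResolution": "nodenext",
    "target": "es2022",
    "declaration": true,
    "outDir": "dist",
    "strict": true
  }
}
```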
| Package | Status | Notes |
|---|---|---|
| `@mission-control/core` | Supported | Mission DSL, engine, contracts |
| `@mission-control/runtime` | Supported | Single-instance tick runtime |
| `@mission-control/client` | Supported | Runtime-owned mission client helpers |
| `@mission-control/testing` | Supported | Shared testing helpers |
| `@mission-control/in-memory-commander` | Supported | Local/in-memory adapter surface |
| `@mission-control/adapter-sqlite` | Supported | Durable SQLite adapter surface backed by `node:sqlite` |
| `examples/*` | Private examples | Reference usage patterns, not published APIs |
`@mission-control/core` owns:
- mission definition DSL
- shared contracts and types
- validation helpers
- retry and timer primitives
- shared execution engine
- commander abstractions
This package remains runtime-neutral.
`@mission-control/in-memory-commander` owns:
- explicit in-memory runtime adapter
- local testing helpers
- deterministic local behavior
`@mission-control/adapter-sqlite` owns:
- sqlite schema and migrations
- serialization and persistence
- durable recovery behavior for waits and retries
- backend-specific tests
The primary integration path is `createSqlitePersistenceAdapter(...)` with
`createCommander(...)` or `createCommanderRuntime(...)`. `SQLiteCommander` is
kept as a compatibility convenience wrapper over the same shared commander path.
`@mission-control/runtime` owns:
- startup tick to inspect incomplete jobs
- explicit next-tick scheduling (`setNextTickAt`, `setNextTickIn`)
- single-flight tick guarantees (one tick at a time)
- logger and metric hooks around runtime/tick lifecycle
- `@mission-control/client`: mission-native client helpers
- `@mission-control/testing`: shared test helpers
Mission Control is durable for:
- mission state snapshots
- waits, retries, timers, cancellation requests
- restart recovery coordination
Mission Control does not provide exactly-once side-effect guarantees for user code.
Practical rule:
- treat mission state as durable
- treat external side effects as at-least-once unless your application code is idempotent
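One common way to make application code idempotent under at-least-once delivery is an idempotency key checked before running each side effect. The sketch below is illustrative, not a Mission Control API: `runIdempotent` and its in-memory key store are hypothetical, and a real implementation would persist keys durably alongside mission state.

```typescript
// Hypothetical idempotency guard: run a side effect at most once per key,
// even if the surrounding mission step is retried.
const completed = new Set<string>(); // stand-in for a durable key store

async function runIdempotent<T>(
  key: string,
  effect: () => Promise<T>,
): Promise<T | undefined> {
  if (completed.has(key)) {
    return undefined; // already performed on an earlier attempt
  }
  const result = await effect();
  completed.add(key); // record only after the effect succeeds
  return result;
}

// A retried step re-invokes the wrapper, but the effect runs only once
// for a given mission/step key.
let sends = 0;
await runIdempotent("mission-1:send-email", async () => { sends += 1; });
await runIdempotent("mission-1:send-email", async () => { sends += 1; });
console.log(sends); // 1
```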
`waitForCompletion()` rejects when a mission fails. `result()` resolves to the
terminal mission snapshot for completed, failed, and cancelled missions.
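The difference between the two can be illustrated with a simplified mock of a mission handle. The shapes below are illustrative stand-ins, not the package's actual types:

```typescript
// Simplified sketch of the two completion APIs' contracts.
type Snapshot = { status: "completed" | "failed" | "cancelled"; output?: unknown };

function makeHandle(terminal: Snapshot) {
  return {
    // result(): always resolves with the terminal snapshot, whatever it is.
    result: async (): Promise<Snapshot> => terminal,
    // waitForCompletion(): rejects when the mission did not complete.
    waitForCompletion: async (): Promise<Snapshot> => {
      if (terminal.status !== "completed") {
        throw new Error(`Mission ${terminal.status}`);
      }
      return terminal;
    },
  };
}

const failed = makeHandle({ status: "failed" });
const snapshot = await failed.result(); // resolves even for a failed mission
let rejected = false;
try {
  await failed.waitForCompletion();
} catch {
  rejected = true; // waitForCompletion rejects on failure
}
console.log(snapshot.status, rejected); // "failed" true
```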
The runtime is built around explicit ticks:
- startup runs a tick that checks incomplete jobs
- next tick can be scheduled by timeout (`setNextTickAt`/`setNextTickIn`)
- only one tick runs at a time
- a tick can start or continue missions but does not own full mission scope
- startup can schedule ticks for persisted `start_at` timer entries
- a tick can run even when no jobs are due
- a tick does not chain a follow-up tick automatically
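The shape of this tick loop can be approximated by a process-local single-flight guard plus an explicit timer. The sketch below is self-contained and illustrative, not the runtime's actual implementation; the name `isTickRunning` follows this document, everything else is hypothetical.

```typescript
// Illustrative single-flight tick runner: one tick at a time, explicit
// scheduling, and no automatic chaining of follow-up ticks.
let isTickRunning = false;
let ticks = 0;

async function runTick(): Promise<boolean> {
  if (isTickRunning) return false; // single-flight: skip overlapping ticks
  isTickRunning = true;
  try {
    ticks += 1; // a real tick would scan incomplete missions here
    return true;
  } finally {
    isTickRunning = false;
  }
}

// setNextTickIn-style scheduling: defer the next tick by a timeout.
// Nothing chains automatically; each tick window is requested explicitly.
function setNextTickIn(ms: number): void {
  setTimeout(() => void runTick(), ms);
}

await runTick();   // startup tick
setNextTickIn(0);  // explicitly schedule one follow-up tick
await new Promise<void>((resolve) => setTimeout(resolve, 10));
console.log(ticks); // 2
```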
This MVP removes the polling and lease-claim runtime model entirely.
Concept mapping from older runtime designs:
- polling loop (`pollIntervalMs`) -> explicit scheduling (`setNextTickAt`, `setNextTickIn`)
- claim batches (`batchSize`) -> one single-instance tick pass over incomplete missions
- lease ownership (`identity` + lease timeout) -> process-local single-flight tick guard (`isTickRunning`)
- claim completion/failure hooks -> mission resume logs and metrics (`mission-resume-started`, `mission-resume-failed`)
- claim release on shutdown -> timer cleanup + in-flight tick drain on `stop()`
Migration checklist:
- Remove runtime configuration fields tied to polling and leases.
- Configure startup with `createCommanderRuntime(...)`, then call `start()`.
- Use `setNextTickAt(...)` or `setNextTickIn(...)` from application code when deferred work should trigger another tick window.
- Keep retry/timer durability in adapter persistence; do not attempt to recreate multi-worker claim ownership logic.
- Update operational expectations: one process, one tick at a time, no automatic tick chaining.
Release work for MVP is intentionally explicit and human-driven.
- Run validation: `npm run release:check`
- Verify package tarballs: `npm run release:pack`
- Smoke-test the built tarballs in a clean consumer project on Node.js 24+.
- Ensure docs and changelog are updated for API/semantics changes.
- Keep internal `@mission-control/*` package dependencies on exact matching versions for published packages.
- Human operators perform dependency updates, generated-file actions, lockfile modifications, and publish commands.
Out of scope for the MVP:
- multi-instance claim/lease orchestration
- multi-cluster coordination
- visual workflow builders
- browser-first runtime
- broad backend matrix
Example usage:

```ts
import { createCommander, m } from "@mission-control/core";

// An approval mission: validate start input, send an email, then wait
// for an external "receive-approval" signal before completing.
const approvalMission = m
  .define("approval")
  .start({
    input: {
      parse: (input) => {
        const value = input as { email?: unknown };
        if (typeof value.email !== "string" || !value.email.includes("@")) {
          throw new Error("Invalid approval input.");
        }
        return { email: value.email };
      },
    },
    run: async ({ ctx }) => ({ email: ctx.events.start.input.email }),
  })
  .step("send-email", async ({ ctx }) => ({
    sentTo: ctx.events.start.output.email,
  }))
  .needTo("receive-approval", {
    parse: (input) => {
      const value = input as { approvedBy?: unknown };
      if (typeof value.approvedBy !== "string") {
        throw new Error("Invalid approval signal.");
      }
      return { approvedBy: value.approvedBy };
    },
  })
  .end();

const commander = createCommander({
  definitions: [approvalMission],
});

// Start a mission instance, deliver the approval signal, and inspect state.
const mission = await commander.start(approvalMission, {
  email: "ops@example.com",
});
await mission.signal("receive-approval", { approvedBy: "reviewer-1" });
console.log(mission.inspect());
```