Local-first agent orchestration for TypeScript.
Define agents, chain them into pipelines, and run them with any model provider — all from a single typed API.
MVP status. Haven is in early development. The core API (`agent`, `pipeline`, `run`, middleware, adapters, plugins, CLI) is functional and tested with 232 tests. Some features listed under Roadmap are planned but not yet shipped.
- Agent — define named agents with model, system prompt, tools, and middleware
- Pipeline — chain agents sequentially; each step's output feeds the next
- `run()` — execute any agent or pipeline with a single function call
- Streaming — incremental output via `onEvent` callbacks with `stream: true`
- Channels — typed communication between agents
- Middleware — `logger`, `retry`, `timeout`, `costLimit`, `progressiveTrust`, `compose`
- Adapters — `FakeAdapter`, `OpenAIAdapter`, `AnthropicAdapter`, `OllamaAdapter`
- More adapters — Gemini, Mistral, and custom provider support
- Plugins — `havensdk-plugin-fs` (file system), `havensdk-plugin-web` (fetch/search), `havensdk-plugin-shell` (shell commands), `havensdk-plugin-git` (git ops), `havensdk-plugin-browser` (browser automation)
- Store — `havensdk-store` with SQLite-backed run persistence
- CLI — `havensdk-cli` (`haven init` and `haven run`)
```sh
bun add havensdk
```

Official companion packages: `havensdk-cli`, `havensdk-store`, `havensdk-plugin-fs`, `havensdk-plugin-web`, `havensdk-plugin-shell`, `havensdk-plugin-git`, `havensdk-plugin-browser`.
For monorepo development:

```sh
git clone https://github.com/doanbactam/haven.git
cd haven
bun install
```

```ts
import { run, FakeAdapter } from "havensdk";

const result = await run("What is the capital of France?", {
  modelOverrides: { test: new FakeAdapter("Paris") },
});

console.log(result.output); // "Paris"
console.log(result.metrics.totalTokens); // 15
```

For a named agent:
```ts
import { agent, run, FakeAdapter } from "havensdk";

const greeter = agent({
  name: "greeter",
  model: "test:fake",
  system: "You are a friendly greeting assistant.",
});

const result = await run(greeter, "Say hello!", {
  modelOverrides: { test: new FakeAdapter("Hello! Welcome to Haven.") },
});
```

For a pipeline:
```ts
import { agent, pipeline, run, FakeAdapter } from "havensdk";

const researcher = agent({ name: "researcher", model: "research:mock" });
const writer = agent({ name: "writer", model: "writer:mock" });

const result = await run(pipeline(researcher, writer), "Research and summarize X.", {
  modelOverrides: {
    research: new FakeAdapter("Research notes: Haven is local-first."),
    writer: new FakeAdapter("Summary: Haven is a local-first orchestration SDK."),
  },
});
```

An agent is a named, configured unit of work. It has a model, an optional system prompt, optional tools, and optional middleware.
```ts
const myAgent = agent({
  name: "assistant",
  model: "openai:gpt-4o",
  system: "You are a helpful assistant.",
  tools: [...],
  middleware: [...],
});
```

A pipeline chains agents sequentially. The output of each step becomes the input of the next.

```ts
const p = pipeline(step1, step2, step3);
p.use(logger()); // pipeline-level middleware
```

Middleware wraps execution. Use it for logging, retries, timeouts, cost limits, or custom cross-cutting concerns.
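For intuition, middleware can be pictured as functions that wrap a `next` call and compose outside-in. The sketch below uses hypothetical names (`Middleware`, `applyMiddleware`) to illustrate the pattern; it is not Haven's internal implementation:

```ts
// Conceptual sketch of middleware composition; the types and helpers here
// are hypothetical, not havensdk internals.
type Handler = (input: string) => Promise<string>;
type Middleware = (next: Handler) => Handler;

// A logger-style middleware: records entry and exit around the wrapped handler.
const makeLogger = (log: string[]): Middleware => (next) => async (input) => {
  log.push(`before:${input}`);
  const out = await next(input);
  log.push(`after:${out}`);
  return out;
};

// A retry-style middleware: re-invokes the handler on failure.
const makeRetry = (maxRetries: number): Middleware => (next) => async (input) => {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await next(input);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
};

// Compose right-to-left so the first middleware listed is the outermost wrapper.
const applyMiddleware = (base: Handler, ...mws: Middleware[]): Handler =>
  mws.reduceRight((next, mw) => mw(next), base);

const log: string[] = [];
let calls = 0;
// A handler that fails once, then succeeds — exercises the retry wrapper.
const flaky: Handler = async (input) => {
  calls++;
  if (calls < 2) throw new Error("transient");
  return `echo:${input}`;
};

const handler = applyMiddleware(flaky, makeLogger(log), makeRetry(2));
const result = await handler("hi"); // "echo:hi" after one retry
```

Because the logger is listed first, it is the outermost wrapper and sees only one successful pass, while the retry middleware absorbs the failed attempt inside it.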
```ts
const myAgent = agent({
  name: "robust",
  model: "openai:gpt-4o",
  middleware: [logger(), retry({ maxRetries: 3 }), timeout(30000)],
});
```

Adapters connect Haven to model providers. The built-in adapters are:
| Adapter | Provider | Env var |
|---|---|---|
| `FakeAdapter` | `test` | None |
| `OpenAIAdapter` | `openai` | `OPENAI_API_KEY` |
| `AnthropicAdapter` | `anthropic` | `ANTHROPIC_API_KEY` |
| `OllamaAdapter` | `ollama` | `OLLAMA_BASE_URL` |
| `GeminiAdapter` | `google` | `GOOGLE_API_KEY` or `GEMINI_API_KEY` |
| `MistralAdapter` | `mistral` | `MISTRAL_API_KEY` |
| `CustomAdapter` | (configurable) | None (pass `baseUrl` + optional `apiKey`) |
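Model strings throughout the README follow a `provider:model` pattern (e.g. `openai:gpt-4o`, `test:fake`), and `modelOverrides` keys match the provider prefix. A small sketch of that split, using a hypothetical `parseModel` helper that is not part of the Haven API:

```ts
// Hypothetical helper illustrating the provider:model convention;
// parseModel is not an actual havensdk export.
function parseModel(spec: string): { provider: string; model: string } {
  const idx = spec.indexOf(":");
  if (idx === -1) throw new Error(`expected "provider:model", got "${spec}"`);
  // The provider prefix is what modelOverrides and registerAdapter key on.
  return { provider: spec.slice(0, idx), model: spec.slice(idx + 1) };
}

console.log(parseModel("openai:gpt-4o")); // { provider: "openai", model: "gpt-4o" }
console.log(parseModel("test:fake"));     // { provider: "test", model: "fake" }
```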
Override adapters per-run with `modelOverrides`:

```ts
await run(myAgent, "Hello", {
  modelOverrides: { test: new FakeAdapter("Hi!") },
});
```

Or register globally:

```ts
import { registerAdapter, OpenAIAdapter } from "havensdk";

registerAdapter("openai", new OpenAIAdapter());
```

Tools are typed functions agents can call. Define them with the `tool()` helper:
```ts
import { tool } from "havensdk";

const echo = tool<{ message: string }, { echo: string }>({
  name: "echo",
  description: "Echoes the input",
  parameters: { type: "object", properties: { message: { type: "string" } }, required: ["message"] },
  execute: async (input) => ({ echo: input.message }),
  trustLevel: 0, // optional: minimum trust level required to execute
});
```

A plugin bundles tools and optional middleware under a name.
```ts
import type { HavenPlugin } from "havensdk";

const myPlugin: HavenPlugin = {
  name: "my-plugin",
  tools: [echo],
  middleware: [logger()],
  onInit: async () => { /* setup */ },
  onDestroy: async () => { /* cleanup */ },
};
```

Persist run data to SQLite with `havensdk-store`:
```ts
import { SqliteStore } from "havensdk-store";

const store = new SqliteStore(".haven/data.db");
await store.init();

const result = await run(myAgent, "Hello", { store });

await store.close();
```

Pass `stream: true` to receive incremental output through `onEvent` callbacks:
```ts
import { agent, run, FakeAdapter } from "havensdk";

const a = agent({ name: "writer", model: "openai:gpt-4o" });

const result = await run(a, "Write a haiku about coding", {
  stream: true,
  onEvent: (event) => {
    if (event.type === "stream_delta") {
      process.stdout.write(event.content);
    }
  },
});
```

Channels provide typed communication between agents in a pipeline. They enforce type-safe data transfer between steps, with optional runtime validation.
```ts
import { agent, pipeline, channel, execute, FakeAdapter } from "havensdk";
import type { RunEvent } from "havensdk";

interface ResearchData {
  topic: string;
  sources: { url: string; summary: string }[];
  keyFindings: string[];
}

const findings = channel<ResearchData>();

const researcher = agent({
  name: "researcher",
  model: "test:fake",
  output: findings, // typed output
});

const writer = agent({
  name: "writer",
  model: "test:fake",
  input: findings, // typed input
});

const result = await execute(
  pipeline(researcher, writer),
  "Research AI trends",
  {
    modelOverrides: {
      test: new FakeAdapter(JSON.stringify({ topic: "AI", sources: [], keyFindings: ["LLMs evolving"] })),
    },
  },
);

// channel_send and channel_receive events are emitted during handoff.
// Access the structured data after the pipeline completes:
const data = findings.read();
console.log(data); // { topic: "AI", sources: [], keyFindings: ["LLMs evolving"] }
```

Channels are opt-in — pipelines work without them (string-based handoff). When an agent declares `output: channel`, structured data is written to the channel after the step completes. When the next agent declares `input: channel`, data is injected into its prompt as `[Channel Data]`. Optional parse functions provide runtime validation.
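For intuition, a typed channel can be modeled as a small typed cell with an optional `parse` validator applied on write. This is a conceptual sketch using a hypothetical `makeChannel` factory, not Haven's actual `channel()` implementation:

```ts
// Conceptual sketch of a typed channel with optional runtime validation.
// makeChannel is hypothetical; it is not havensdk's channel() implementation.
interface Channel<T> {
  write(raw: string): void; // parse and store structured data from a step's output
  read(): T | undefined;    // read typed data after the producing step completes
}

function makeChannel<T>(parse: (raw: string) => T = JSON.parse): Channel<T> {
  let value: T | undefined;
  return {
    write(raw) {
      value = parse(raw); // throws if the payload fails validation
    },
    read() {
      return value;
    },
  };
}

interface ResearchData {
  topic: string;
  keyFindings: string[];
}

// A parse function gives runtime validation on top of the static type.
const findings = makeChannel<ResearchData>((raw) => {
  const data = JSON.parse(raw);
  if (typeof data.topic !== "string" || !Array.isArray(data.keyFindings)) {
    throw new Error("invalid ResearchData payload");
  }
  return data as ResearchData;
});

// A producing step writes its raw model output; a consuming step reads typed data.
findings.write(JSON.stringify({ topic: "AI", keyFindings: ["LLMs evolving"] }));
const researchData = findings.read();
console.log(researchData?.topic); // "AI"
```

The key idea is that the string handoff between steps still happens, but the channel turns it into a typed value at the boundary, failing fast when a step emits malformed output.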
```sh
# Initialize a new project
haven init my-agent

# Run an agent or pipeline
haven run src/main.ts

# Run with SQLite persistence
haven run src/main.ts --store

# Override model
haven run src/main.ts --model openai:gpt-4o
```

Inside this monorepo, the raw CLI source is mainly for development and smoke tests. A freshly scaffolded standalone project still needs `bun install` so the generated `package.json` can install `haven`.
| Package | Tools |
|---|---|
| `havensdk-plugin-fs` | `fs_read_file`, `fs_write_file`, `fs_list_dir`, `fs_exists` |
| `havensdk-plugin-web` | `web_fetch`, `web_search` |
| `havensdk-plugin-shell` | `shell_run` |
| `havensdk-plugin-git` | `git_status`, `git_diff`, `git_log`, `git_commit`, `git_worktree` |
| `havensdk-plugin-browser` | `browser_navigate`, `browser_screenshot`, `browser_click` (experimental) |
| # | Example | What it shows |
|---|---|---|
| 01 | one-liner | Smallest possible run() call |
| 02 | named-agent | Agent with system prompt |
| 03 | research-pipeline | Multi-agent pipeline |
| 04 | local-tools | Plugin tools via execution context |
| 05 | custom-plugin | Custom plugin with tool() helper |
| 06 | streaming | Streaming output with onEvent |
| 07 | trust | Progressive trust escalation |
| 08 | mesh | Parallel agent execution |
| 09 | real-provider | Using real model providers |
Run any example from the repo root:

```sh
bun run examples/01-one-liner/main.ts
```

```sh
bun install                          # Install all workspace dependencies
bun test                             # Run the full test suite
bunx turbo run build                 # Build all packages
bunx turbo run types                 # Type-check all packages
bun run verify:release               # Run the release gate used by CI
bun packages/cli/src/index.ts help   # CLI help
```

The repository ships with a minimal GitHub Actions workflow at `.github/workflows/ci.yml`.
It installs dependencies with Bun and runs the same `verify:release` gate used locally.
These features are planned but not yet shipped:
- Plugin registry — discover and install community plugins
- Observability — built-in tracing and metrics export
- Vector search — semantic search via sqlite-vec
- MCP protocol — Model Context Protocol adapter
```
haven/
├── packages/
│   ├── core/              # havensdk — agent, pipeline, run, middleware, adapters
│   ├── cli/               # havensdk-cli — haven init, haven run
│   ├── store/             # havensdk-store — SQLite persistence
│   └── plugins/
│       ├── plugin-fs/     # havensdk-plugin-fs
│       ├── plugin-web/    # havensdk-plugin-web
│       ├── plugin-shell/  # havensdk-plugin-shell
│       ├── plugin-git/    # havensdk-plugin-git
│       └── plugin-browser/ # havensdk-plugin-browser (experimental)
├── examples/              # Runnable example programs
├── docs/                  # Documentation
└── package.json           # Monorepo root
```
MIT