Hanami is a local-first Electron app for autonomous knowledge-work tasks. It wraps an AI tool-using agent with a desktop UI, workspace-scoped file tools, session persistence, and approval prompts.
The core MVP is implemented:
- Electron main/preload shell with typed IPC
- React renderer with chat, session switching, settings, file explorer, and approval UI
- Session-scoped agent loop using the Vercel AI SDK
- Workspace-sandboxed file tools and web search
- Web page fetching, structured planning, and local skill discovery
- Asynchronous shell commands, code execution, and background task waiting
- Focused subagents launched as background tasks
- MCP client support for stdio servers: tools, resources, and prompts
- SQLite persistence via Drizzle
- Secure API-key storage through Electron `safeStorage`
- OpenRouter model catalog loading in settings
- First-run onboarding and crash logging
- Headless CLI runner for engine testing
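The typed IPC mentioned above can be pictured as a single channel map that both the main process and the preload bridge reference, so each channel's request and response types stay in sync. A minimal sketch (the channel names and payload shapes here are illustrative, not Hanami's actual IPC contract; the real wrapper would delegate to `ipcRenderer.invoke` in the preload script):

```typescript
// Hypothetical channel map: each channel declares its request and response types.
type IpcContract = {
  "sessions:list": { request: void; response: { id: string; title: string }[] };
  "settings:get": { request: { key: string }; response: string | null };
};

// Build a typed invoke() from a table of handlers. TypeScript rejects calls
// with the wrong channel name or a mismatched payload at compile time.
function makeInvoke(handlers: {
  [C in keyof IpcContract]: (req: IpcContract[C]["request"]) => IpcContract[C]["response"];
}) {
  return function invoke<C extends keyof IpcContract>(
    channel: C,
    request: IpcContract[C]["request"]
  ): IpcContract[C]["response"] {
    return handlers[channel](request);
  };
}

// In-memory stand-in for the main-process handlers, for demonstration only.
const invoke = makeInvoke({
  "sessions:list": () => [{ id: "s1", title: "First session" }],
  "settings:get": (req) => (req.key === "model" ? "openrouter/auto" : null),
});

console.log(invoke("sessions:list", undefined).length); // 1
console.log(invoke("settings:get", { key: "model" })); // openrouter/auto
```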
Two items from the original plan remain incomplete in this repository:
- Cross-platform packaging has only been verified locally in this environment, not on macOS/Windows/Linux release targets.
- Document parsing tools from the original plan are intentionally not shipped yet.
Requirements:

- Node.js 22+
- npm 10+
- An OpenRouter API key
Install dependencies:

```bash
npm install
```

If Electron reports a native module ABI mismatch, rebuild the Electron-native dependencies:

```bash
npm run native:rebuild
```

Run the renderer and the Electron shell in development:

```bash
npm run dev:electron
```

On Linux, unpackaged development runs automatically add `--no-sandbox` and `--disable-setuid-sandbox` so Electron starts without requiring a root-owned `chrome-sandbox` helper. Packaged builds are not affected.
If you want Chromium's sandbox enabled during development, fix the helper permissions and opt back in:

```bash
sudo chown root:root node_modules/electron/dist/chrome-sandbox
sudo chmod 4755 node_modules/electron/dist/chrome-sandbox
HANAMI_FORCE_SANDBOX=1 npm run dev:electron
```

Typecheck:
```bash
npm run typecheck
```

Build production assets:

```bash
npm run build
```

To launch the built app without the Vite dev server:

```bash
npx electron .
```

On Linux, if a direct Electron launch still hits a sandbox error, either apply the same `chrome-sandbox` fix above or run:

```bash
npx electron --no-sandbox --disable-setuid-sandbox .
```

Package the desktop app:

```bash
npm run package
```

The Phase 2 CLI runner exercises the engine without the GUI and prints every streamed event as JSON:
```bash
npm run agent:cli -- \
  --api-key "$OPENROUTER_API_KEY" \
  --workspace . \
  --message "Summarize the current repository and suggest one improvement."
```

Add `--deny-sensitive` to reject confirm/dangerous tool calls during the run.
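Because every streamed event is printed as JSON, downstream scripts can consume the runner's output line by line. A sketch of tallying such a stream, assuming one JSON object per line (the `type` field shown is an assumption for illustration, not the runner's documented event schema):

```typescript
// Parse newline-delimited JSON events and count occurrences of each type.
// The `type` field name is assumed here, not taken from Hanami's schema.
function tallyEvents(ndjson: string): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const line of ndjson.split("\n")) {
    if (!line.trim()) continue; // skip blank lines between events
    const event = JSON.parse(line) as { type: string };
    counts[event.type] = (counts[event.type] ?? 0) + 1;
  }
  return counts;
}

// A captured fragment of hypothetical runner output.
const sample = [
  '{"type":"text-delta","text":"Hel"}',
  '{"type":"text-delta","text":"lo"}',
  '{"type":"finish"}',
].join("\n");

console.log(tallyEvents(sample)["text-delta"]); // 2
console.log(tallyEvents(sample)["finish"]); // 1
```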
Hanami can talk to stdio-based MCP servers through the unified `mcp` tool. Use `action: "list_servers" | "list_tools" | "list_prompts" | "list_resources" | "read_resource" | "get_prompt" | "call_tool"`. The repo includes a tiny example server:
```json
[
  {
    "id": "example",
    "name": "Example MCP",
    "transport": "stdio",
    "command": "npx",
    "args": ["tsx", "scripts/example-mcp-server.ts"]
  }
]
```

Paste that JSON into the MCP servers JSON field in settings while the workspace is this repository.
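Once a server is registered, the agent reaches it through a single `mcp` tool call with an `action` discriminator. A hypothetical `call_tool` payload might look like the following; the `server`, `tool`, and `arguments` field names are an assumption for illustration, not the tool's documented argument schema:

```json
{
  "action": "call_tool",
  "server": "example",
  "tool": "echo",
  "arguments": { "text": "hello from Hanami" }
}
```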
- Visible reply vs reasoning: The UI separates collapsible "Thought" from the main answer when the model/provider emits a distinct reasoning stream. Some models send everything as normal text; in that case the app cannot split thinking from the answer—use models with explicit reasoning support if that matters.
- Markdown in chat: The renderer expects standard GFM; odd structure (e.g. lists glued to prose) may render incorrectly while a message is still streaming or until it completes.
- Shell and code execution are asynchronous task tools. The agent can start multiple tasks and later use `await_tasks` to join them.
- Subagents are implemented as focused child sessions. Their runs are streamed and persisted separately from the parent session.
- Skills are local Markdown guides loaded from the repo `skills/` directory and any workspace `skills/` or `.hanami/skills/` folders.
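The start-then-join pattern behind the asynchronous task tools can be sketched with plain promises. This is a simplification under assumed semantics: the agent holds only task IDs, and the real tools persist task state rather than keeping promises in memory:

```typescript
// Simplified model of asynchronous task tools: start work, hand back an ID,
// and join a chosen subset of tasks later.
const tasks = new Map<string, Promise<string>>();

function startTask(id: string, work: () => Promise<string>): string {
  tasks.set(id, work()); // work begins immediately
  return id; // the agent keeps only the ID
}

async function awaitTasks(ids: string[]): Promise<Record<string, string>> {
  const results: Record<string, string> = {};
  for (const id of ids) {
    const task = tasks.get(id);
    if (task) results[id] = await task; // block until this task settles
  }
  return results;
}

async function main() {
  startTask("shell-1", async () => "exit 0");
  startTask("code-1", async () => "42");
  const joined = await awaitTasks(["shell-1", "code-1"]);
  console.log(joined["code-1"]); // 42
}
main();
```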
Long autonomous work today assumes the desktop app is available; dormant runs can resume when you open a session again. A future separate daemon or headless worker could process a durable job queue while the UI is closed. That would require a distinct security and approval story (workspace trust, sandboxing, unattended policy), scheduling and backoff, and likely a process boundary separate from the interactive Electron agent. It is intentionally out of scope for the current shipped product; the in-repo CLI harness is for development and testing only.
- Hanami is local-first, not fully offline. Model requests and web search can send data to configured providers.
- API keys are stored in Electron `safeStorage` when encryption is available, with a plaintext fallback only on platforms where Electron cannot encrypt locally.
- Crash logs are written under the Electron `userData` directory as `crashes.log`.