Like its comic-book namesake, MASTER MOLD isn't just a single operative—it's the durable, self-aware command center that builds, orchestrates, and unleashes your agentic workforce. Designed as a local-first Model Context Protocol (MCP) server, it provides the heavy-lifting infrastructure—state continuity, multi-agent councils, and root-capable host control—so your AI models can stop chatting and start building.
The repository is intentionally split into two layers:
- Core runtime: durable memory, transcripts, tasks, run ledgers, governance, ADRs, and safety checks.
- Domain packs: optional modules that register domain-specific MCP tools without modifying core infrastructure.
This repository ships with one workflow pack by default:
- `agentic`: GSD/autoresearch-inspired planner and verifier hooks for local development workflows.
The runtime also includes first-class office/orchestration tools (`trichat.*` under the hood) for multi-agent turns, autonomous loops, and tmux-backed nested execution control, plus the newer local control-plane surfaces:
- `tool.search` for live capability discovery from the registered MCP tool registry
- `permission.profile` for durable session permission inheritance across goals, plans, tasks, and sessions
- `budget.ledger` for append-only token/cost tracking and operator budget summaries
- `warm.cache` for startup prefetch and cached operator surfaces
- `feature.flag` for durable rollout state
- `desktop.*`, `patient.zero`, and `privileged.exec` for explicit local desktop and privileged execution lanes
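The ledger surface is easiest to reason about as an append-only log with derived summaries. Here is a minimal conceptual sketch — the types and method names are hypothetical illustrations, not the actual `budget.ledger` schema:

```typescript
// Hypothetical sketch of an append-only token/cost ledger; the real
// budget.ledger tool defines its own durable schema inside the runtime.
type LedgerEntry = { ts: number; tool: string; tokens: number; costUsd: number };

class AppendOnlyLedger {
  private entries: LedgerEntry[] = [];

  // Entries are only ever appended, never mutated or deleted.
  record(entry: LedgerEntry): void {
    this.entries.push({ ...entry });
  }

  // Operator summary: per-tool totals derived from the full history.
  summary(): Record<string, { tokens: number; costUsd: number }> {
    const out: Record<string, { tokens: number; costUsd: number }> = {};
    for (const e of this.entries) {
      const agg = (out[e.tool] ??= { tokens: 0, costUsd: 0 });
      agg.tokens += e.tokens;
      agg.costUsd += e.costUsd;
    }
    return out;
  }
}
```

The design point is that summaries are always recomputable from the immutable history, so budget reporting can never drift from what actually happened.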
Patient Zero is the intended end-state of this repo: a local-first operator partner that can take full bounded control of the host when explicitly armed.
When enabled, Patient Zero is the mode that ties the stack together:
- office and council orchestration
- autonomous continuation through `autonomy.maintain`
- CLI and IDE bridge usage across Codex, Claude CLI, Cursor, Gemini CLI, GitHub Copilot CLI, and `gh`
- local desktop/browser/root-capable host-control lanes
- full auditability through runtime events, runs, ledgers, and operator surfaces
Start here if you want the current docs map:
- Documentation Index
- Quick Setup
- System Interconnects
- IDE + Agent Setup Guide
- Transport Connection Guide
- Provider Bridge Matrix
- TriChat Compatibility Reference
Root-level companion files intentionally left outside docs/:
- `AGENTS.md` for coding-agent operating instructions
- `GEMINI.md` for Gemini-specific local notes
```mermaid
flowchart TD
    Operator["Operator Surfaces<br/>README / docs / Agent Office GUI / tmux / shell wrappers"] --> Clients["IDE + Terminal Clients<br/>Codex / Claude CLI / Cursor / Gemini CLI / GitHub Copilot CLI / shell sessions / gh"]
    Clients --> Transport["MCP Transport Layer<br/>HTTP / STDIO / launchd / app launchers"]
    Transport --> Kernel["MCP Kernel Layer<br/>toolRegistry / server.ts / core tools / office snapshot"]
    Kernel --> Control["Control-Plane Layer<br/>goal.* / plan.* / task.* / kernel.summary / operator.brief / tool.search / permission.profile / warm.cache / feature.flag / budget.ledger"]
    Kernel --> Autonomy["Autonomy Fabric Layer<br/>autonomy.maintain / autonomy.command / goal.autorun / reaction.engine / eval / optimizer"]
    Kernel --> Orchestration["Orchestration Fabric Layer<br/>office council / tmux controller / runtime.worker / worker.fabric / model.router / provider.bridge"]
    Kernel --> Local["Local Host Control Layer<br/>desktop.* / patient.zero / privileged.exec"]
    Control --> State["Durable State Layer<br/>SQLite / warm cache / artifacts / runs / events / daemon configs / local secrets"]
    Autonomy --> State
    Orchestration --> State
    Local --> State
```
This is the operator-facing map of the current server surface.
```mermaid
flowchart TD
    Clients["Codex / Cursor / IDE / HTTP Clients"] --> Transport["MCP Transports<br/>stdio / HTTP / launchd"]
    Transport --> Kernel["MASTER MOLD Server"]
    Kernel --> Memory["Continuity + Knowledge<br/>memory.* / transcript.* / who_knows / knowledge.query / retrieval.hybrid / imprint.*"]
    Kernel --> Control["Execution Control Plane<br/>goal.* / plan.* / dispatch.autorun / goal.autorun* / playbook.*"]
    Kernel --> Worker["Durable Worker Ops<br/>agent.session.* / agent.claim_next / agent.report_result / task.* / run.* / lock.* / event.*"]
    Kernel --> Evidence["Evidence + Governance<br/>artifact.* / experiment.* / policy.evaluate / preflight.check / postflight.verify / adr.create / decision.link / incident.*"]
    Kernel --> Office["Office + Orchestration Ops<br/>trichat.thread* / trichat.turn* / trichat.autopilot / trichat.tmux_controller / trichat.bus / trichat.adapter_telemetry / trichat.slo / trichat.chaos"]
    Kernel --> Health["Runtime + Recovery<br/>kernel.summary / health.* / migration.status / backups / corruption quarantine"]
    Kernel --> Learning["Bounded Agent Learning<br/>agent.learning_* / mentorship notes / MCP memory / ADR trail"]
    Office --> Dashboard["Agent Office Dashboard<br/>curses TUI + tmux war room + macOS app"]
    Worker --> Packs["Domain Packs + Hooks<br/>agentic pack / pack.plan.generate / pack.verify.run"]
    Evidence --> Packs
    Learning --> Office
    Learning --> Worker
```
This is the current ring-leader spawning and delegation shape.
```mermaid
flowchart TD
    User["User / Operator"] --> Ring["Ring Leader<br/>lead agent / council selector / confidence gate / GSD planner"]
    Ring --> DirImpl["implementation-director<br/>implementation planner"]
    Ring --> DirResearch["research-director<br/>research planner"]
    Ring --> DirVerify["verification-director<br/>verification planner"]
    Ring --> LocalImprint["local-imprint<br/>local memory + continuity lane"]
    Ring --> Codex["codex<br/>frontier review / hard problems / integration lane"]
    DirImpl --> CodeSmith["code-smith<br/>leaf SME for implementation slices"]
    DirResearch --> ResearchScout["research-scout<br/>leaf SME for bounded research"]
    DirVerify --> QualityGuard["quality-guard<br/>leaf SME for verification and release checks"]
    Ring --> Claim["agent.claim_next<br/>claim bounded work"]
    Claim --> Council["Office council turn<br/>confidence + plan substance + policy gates"]
    Council --> Execute["Execution router<br/>direct command / tmux dispatch / fallback task batch"]
    Execute --> Leafs["Leaf / SME agents<br/>single-owner bounded tasks"]
    Leafs --> Report["agent.report_result<br/>artifacts / evidence / outcomes / learning signal"]
    Report --> Learn["Bounded learning ledger<br/>prefer / avoid / proof bars / rollback discipline"]
    Learn --> Ring
    Execute --> Tmux["office tmux controller<br/>worker lanes / queue discipline / office telemetry"]
    Tmux --> Dashboard["Agent Office Dashboard<br/>desk work / chat / break / sleep sprites"]
```
This is the current end-to-end local topology: launchers, IDEs, terminal bridges, the MCP runtime, the autonomy fabric, and the local-control lanes.
```mermaid
flowchart LR
    subgraph Operator["Operator Surfaces"]
        OfficeGUI["Agent Office GUI<br/>/office/"]
        OfficeTUI["Agent Office TUI / tmux"]
        Suite["Agentic Suite.app"]
        Shell["Shell wrappers<br/>autonomy_*.sh / provider_bridge.sh"]
    end
    subgraph Clients["IDE + Bridge Clients"]
        Codex["Codex"]
        Claude["Claude CLI"]
        Cursor["Cursor"]
        Gemini["Gemini CLI"]
        Copilot["GitHub Copilot CLI"]
        Browser["Safari"]
    end
    subgraph Transport["Local MCP Transport"]
        HTTP["HTTP transport<br/>/ready /office/api/* / MCP bearer auth"]
        STDIO["STDIO transport<br/>single-client / helper calls"]
    end
    subgraph Kernel["MASTER MOLD MCP Server"]
        Registry["toolRegistry + tool.search"]
        Control["goal.* / plan.* / task.* / agent.session.* / operator.brief / kernel.summary"]
        Fabric["office orchestration / worker.fabric / runtime.worker / model.router / provider.bridge"]
        Flags["permission.profile / feature.flag / budget.ledger / warm.cache"]
        Local["desktop.* / patient.zero / privileged.exec"]
    end
    subgraph State["Durable Local State"]
        SQLite["SQLite state authority<br/>goals / plans / tasks / runs / events / ledgers / daemon configs"]
        Cache["Warm cache + office snapshot cache"]
        Secret["Local secret file<br/>~/.codex/secrets/mcagent_admin_password"]
    end
    subgraph Host["Local Host Capabilities"]
        Desktop["Desktop control<br/>observe / act / listen"]
        Admin["mcagent -> root lane"]
        Runtime["launchd / tmux / local workers"]
    end
    OfficeGUI --> HTTP
    OfficeTUI --> HTTP
    Suite --> HTTP
    Shell --> HTTP
    Codex --> STDIO
    Cursor --> STDIO
    Gemini --> STDIO
    Copilot --> STDIO
    Browser --> HTTP
    HTTP --> Registry
    HTTP --> Control
    HTTP --> Fabric
    STDIO --> Registry
    STDIO --> Control
    STDIO --> Fabric
    Registry --> Flags
    Control --> SQLite
    Fabric --> SQLite
    Flags --> SQLite
    Local --> SQLite
    Control --> Cache
    HTTP --> Cache
    Fabric --> Runtime
    Fabric --> Desktop
    Local --> Desktop
    Local --> Admin
    Admin --> Secret
```
Full diagrams for demos and technical walk-throughs: System Interconnects
```mermaid
flowchart LR
    subgraph Entry["Entry Points"]
        OfficeGUI["Agent Office GUI"]
        OfficeTUI["Agent Office tmux"]
        Suite["Agentic Suite.app"]
        Shell["Terminal sessions<br/>bash / zsh / shell wrappers"]
        Codex["Codex"]
        Claude["Claude CLI"]
        Cursor["Cursor"]
        Gemini["Gemini CLI"]
        Copilot["GitHub Copilot CLI"]
        GH["GitHub CLI (gh)"]
    end
    subgraph MCP["Local MCP Surfaces"]
        HTTP["HTTP<br/>/ready /office/api/* / MCP POST"]
        STDIO["STDIO<br/>client-launched MCP sessions"]
    end
    subgraph Runtime["MASTER MOLD Runtime"]
        Registry["tool registry + capability discovery"]
        Brief["kernel.summary / operator.brief / office.snapshot"]
        Council["office council + autopilot"]
        Workers["runtime.worker / worker.fabric / tmux lanes"]
        LocalCtl["desktop.* / patient.zero / privileged.exec"]
    end
    subgraph State["State + Evidence"]
        DB["SQLite"]
        Cache["warm cache"]
        Events["event trail / runs / artifacts / learning"]
    end
    OfficeGUI --> HTTP
    OfficeTUI --> HTTP
    Suite --> HTTP
    Shell --> HTTP
    Shell --> STDIO
    Codex --> STDIO
    Cursor --> STDIO
    Gemini --> STDIO
    Copilot --> STDIO
    GH --> Shell
    HTTP --> Registry
    STDIO --> Registry
    Registry --> Brief
    Registry --> Council
    Registry --> Workers
    Registry --> LocalCtl
    Brief --> DB
    Council --> DB
    Workers --> DB
    LocalCtl --> DB
    Brief --> Cache
    Council --> Events
    Workers --> Events
    LocalCtl --> Events
```
```mermaid
flowchart TD
    Operator["Operator"] --> Arm["patient.zero enable"]
    Arm --> PZ["Patient Zero posture"]
    PZ --> Desktop["Desktop lanes<br/>observe / act / listen / Safari"]
    PZ --> Root["Privileged lane<br/>mcagent -> root"]
    PZ --> Maintain["autonomy.maintain<br/>self-drive on"]
    PZ --> Autopilot["office autopilot<br/>trichat.autopilot execute enabled"]
    Autopilot --> Toolkit["Terminal toolkit<br/>codex / claude / cursor / gemini / gh"]
    Autopilot --> Bridges["Bridge-capable agents<br/>codex / claude / cursor / gemini / github-copilot"]
    Autopilot --> Locals["Local office agents<br/>directors / leaves / local-imprint"]
    Desktop --> Audit["event.* / run.* / operator surfaces"]
    Root --> Audit
    Maintain --> Audit
    Autopilot --> Audit
    Toolkit --> Audit
    Bridges --> Audit
    Locals --> Audit
```
Most MCP projects repeat the same infrastructure work:
- durability and state continuity
- safe/idempotent writes
- local governance and auditability
- task orchestration
- cross-client interoperability
This template centralizes those concerns so teams can build domain tools directly.
Use this framing with stakeholders:
- This is not a single-purpose assistant.
- This is a local MCP platform with reusable reliability primitives.
- Domain value is delivered through packs, not by rewriting runtime infrastructure.
```mermaid
flowchart LR
    A["Cursor / Codex / IDE Clients"] --> K["Local MCP Kernel"]
    B["Inbox Workers / tmux / Background Automation"] --> K
    C["Office / Council UI"] --> K
    D["Future External Adapters"] --> K
    K --> E["Control Plane<br/>agent.session.*<br/>goal.*<br/>plan.*<br/>dispatch.autorun"]
    K --> F["Execution + Audit<br/>task.*<br/>run.*<br/>lock.*<br/>event.*"]
    K --> G["Evidence + Methodology<br/>artifact.*<br/>experiment.*<br/>playbook.*<br/>pack hooks"]
    G --> H["GSD Delivery Flow"]
    G --> I["autoresearch Optimization Loop"]
    K --> J[("SQLite + local runtime state")]
```
More detail: Architecture Pitch
Methodology automation: Automated GSD + autoresearch Pipeline
Execution roadmap: Bleeding-Edge Execution Roadmap
Execution substrate additions now shipped in core:
- `worker.fabric` for host registry, telemetry, and resource-aware lane routing
- `cluster.topology` for the durable lab plan: active Mac control plane plus planned future CPU-heavy and GPU-heavy nodes
- `model.router` for measured backend selection across local and remote model runtimes, plus topology-backed future placement recommendations for planned hosts
- `benchmark.*` and `eval.*` for isolated execution scoring and router-aware eval suites
- `task.compile` for durable DAG-style plan compilation with owner/evidence/rollback contracts
- `org.program` for versioned ring leader, director, SME, and leaf operating doctrine
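As a rough illustration of what measured backend selection means, here is a hedged sketch; the field names and scoring rule are assumptions for illustration, not the real `model.router` contract:

```typescript
// Hypothetical sketch of measured backend selection. The real model.router
// uses its own telemetry, benchmark scores, and topology data.
type Backend = { id: string; healthy: boolean; p50LatencyMs: number; evalScore: number };

// Prefer healthy backends; among those, take the best eval score and break
// ties on lower measured latency.
function pickBackend(backends: Backend[]): Backend | undefined {
  return backends
    .filter((b) => b.healthy)
    .sort((a, b) => b.evalScore - a.evalScore || a.p50LatencyMs - b.p50LatencyMs)[0];
}
```

The key property is that selection is driven by measurements recorded at runtime, not by a static preference list.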
Practical entrypoint:
- use `playbook.run` to instantiate a GSD/autoresearch workflow and immediately enter `goal.execute`
- let `agent.report_result` feed artifacts, experiment observations, evidence gates, and bounded `goal.autorun` continuation back into the kernel
- use `kernel.summary` for one-shot operator state and `goal.autorun_daemon` for bounded unattended continuation
The canonical live brief surface is `operator.brief`.
- `task.compile` writes a durable `compile.brief` artifact for the active plan
- `runtime.worker` writes a concrete handoff `session_brief.md` into the worker workspace
- `office.snapshot` includes `operator_brief` so the office dashboard and GUI can consume the same canonical brief
- `operator.brief` merges current objective, delegation contract, compile brief, runtime handoff brief, and execution backlog into one operator-facing payload
Shell entrypoint:
```bash
npm run brief:current
# compact JSON for scripts / dashboards
npm run brief:current -- --json --compact
```
Raw MCP example:
```bash
node ./scripts/mcp_tool_call.mjs --tool operator.brief --args '{}' --transport http --url http://127.0.0.1:8787/ --origin http://127.0.0.1 --cwd .
```
Use the provider bridge to distinguish three different states:
- client config is installed
- a provider runtime is present on this host
- the provider can actually see the shared MCP server right now
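Those three states are independent, which is why a single "connected" boolean is not enough. A hypothetical sketch of the distinction (the names are illustrative, not the real `provider.bridge` output shape):

```typescript
// Hypothetical sketch of the three independent bridge states the diagnose
// flow separates; the real provider.bridge checks are richer than this.
type BridgeStatus = {
  configInstalled: boolean; // client config exists and points at this server
  runtimePresent: boolean; // provider CLI binary is present on this host
  serverReachable: boolean; // provider can see the shared MCP server right now
};

function describeBridge(s: BridgeStatus): string {
  if (!s.configInstalled) return "not configured";
  if (!s.runtimePresent) return "configured, runtime missing";
  if (!s.serverReachable) return "configured, runtime present, server unreachable";
  return "live";
}
```

A bridge can be "configured, runtime present, server unreachable" — installed everywhere but still unable to serve a council turn — which is exactly the case a naive connectivity flag hides.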
Commands:
```bash
npm run providers:status
npm run providers:diagnose -- claude-cli gemini-cli cursor github-copilot-cli
```
Notes:
- Claude CLI now defaults to a resilient stdio proxy on this host: it targets the MCP HTTP daemon first and falls back to a local stdio server path if the daemon is unhealthy, while still mapping to the `claude` office agent and `autonomy.ide_ingress`.
- Claude model use still depends on Claude Code being authenticated on the host; `provider.bridge diagnose` distinguishes a configured MCP install from a live authenticated runtime.
- Gemini CLI now installs with an explicit trusted stdio proxy config, working directory, timeout, and HTTP-to-stdio fallback in `~/.gemini/settings.json`.
- Cursor is validated as configured on this host, but runtime MCP status still has to be checked in the Cursor UI because Cursor does not expose a local MCP status CLI on this machine.
```bash
npm run bootstrap:env
npm run start:stdio
```
If this is your first time with MASTER MOLD, think of it as a local AI-agent toolbench rather than a normal app you click through manually. You bootstrap the base runtime, then your MCP-capable AI client uses the tools here to build and adapt project-specific scaffolding, status surfaces, memories, and workflows.
On macOS, do not start with `brew install npm` by itself. That often leaves your terminal on the latest Node/npm pair, which can overshoot this repo's supported range. Use `npm run bootstrap:env:install` or install `node@22` first, then rerun the bootstrap.
On Windows, use the `npm run ...` scripts exactly as shown. Do not manually type bash-style environment prefixes such as `MCP_HTTP=1 node ...`; `npm run start:http` handles that in cross-platform Node code.
Fresh clone:
```bash
git clone https://github.com/driverd12/MASTER-MOLD.git
cd master-mold
npm run bootstrap:env
```
If you already have a local checkout:
```bash
git fetch origin
git checkout main
git pull --ff-only origin main
npm run bootstrap:env
```
If `npm ci` says `EBADENGINE`, stop there and run `npm run bootstrap:env:install`. The repo now hard-stops early on unsupported Node/npm versions and points back to the pinned runtime path.
When `npm run doctor` ends with `Result: ready`, the core MCP setup is complete. Any remaining recommendations are optional lanes such as HTTP auth, Patient Zero/full-device-control permissions, local training, tmux, Ollama, or provider bridges you have not chosen to activate yet.
Start HTTP transport:
```bash
npm run start:http
```
If Windows prints `'MCP_HTTP' is not recognized`, that checkout is old or a direct shell command was copied. Pull the latest `main` and run the npm script above.
Start pure core runtime with workflow hooks disabled:
```bash
npm run start:core
# or
npm run start:core:http
```
The older `trichat:*` script names are still present for compatibility, but the user-facing surface is the Agent Office and its council/autopilot fabric.
Quick launch:
```bash
npm run trichat:tui
npm run trichat:office:gui
npm run autonomy:command -- "Take this objective from intake to durable execution."
```
Full legacy command reference, roster commands, doctor flows, and validation examples now live in TriChat Compatibility Reference.
Launch the animated office monitor directly:
```bash
npm run trichat:office
```
Launch the clickable local GUI control deck:
```bash
npm run trichat:office:gui
```
Start the tmux war room with dedicated windows for the office scene, briefing board, lane monitor, and worker queue:
```bash
npm run trichat:office:tmux
```
Open the intake desk directly when you want to hand the office a plain-language objective and let the autonomous stack run with it:
```bash
npm run autonomy:intake:shell
```
The intake desk now uses the same `autonomy.ide_ingress` path as the IDE wrapper, so office intake, Codex/IDE intake, transcript continuity, thread mirroring, memory capture, and durable background execution all stay on one real lane.
This dashboard is MCP-backed and reads live state from office/orchestration tools, kernel summaries, Patient Zero state, privileged execution state, budgets, flags, and warm-cache surfaces. The compatibility-level tool list is kept in TriChat Compatibility Reference.
The `/office/` GUI is served directly by the HTTP transport. Under normal polling it prefers cached office snapshots; explicit operator actions and forced refreshes are the only paths that demand live snapshot work.
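That cache-preferring polling policy can be sketched as follows. This is a hypothetical simplification; the real snapshot cache lives inside the HTTP transport:

```typescript
// Hypothetical sketch of "prefer cache unless forced": normal polling reuses
// a fresh-enough cached snapshot, while explicit operator actions pass
// forceRefresh and always pay for live snapshot work.
type Snapshot = { takenAt: number; payload: unknown };

function chooseSnapshot(
  cached: Snapshot | undefined,
  now: number,
  maxAgeMs: number,
  forceRefresh: boolean,
  takeLive: () => Snapshot,
): Snapshot {
  if (!forceRefresh && cached && now - cached.takenAt <= maxAgeMs) return cached;
  return takeLive();
}
```

The effect is that a dashboard polling every second costs almost nothing, while a clicked operator action still gets a guaranteed-fresh view.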
The office scene keeps working agents at their desks, moves active chatter to the coffee and water cooler strip, shows resets in the lounge, and parks long-idle agents on the sofa in sleep mode. Action badges reflect real MCP/tmux signals such as desk work, briefing, chatting, break/reset, blocked, offline, and sleep.
Recent polish added:
- a stylized night-shift office banner with a built-in mascot and richer ASCII sprite poses
- animated per-agent states for desk work, supervision, chatter, break, blocked, offline, and sleep
- a `t` hotkey to cycle dashboard themes (`night`, `sunrise`, `mono`)
- a dedicated `intake` tmux window and a `5` hotkey from the office dashboard so the war room can take objectives, not just monitor them
- confidence-check surfacing in the briefing board so ring-leader confidence is explainable, not just numeric
Install the single-click macOS app launcher in /Applications:
```bash
npm run trichat:app:install
```
By default the app opens the built-in `/office/` GUI and keeps the tmux-backed Agent Office substrate available underneath it. If you do not pass `--icon`, it generates its own built-in office mascot icon.
Install the umbrella launcher for the broader local suite:
```bash
npm run agentic:suite:app:install
```
That launcher brings up the Agent Office web surface and opens the local desktop tools listed in `AGENTIC_SUITE_OPEN_APPS` (defaults to `Codex,Cursor`).
Keyboard controls inside the TUI:
- `1` office
- `2` briefing
- `3` lanes
- `4` workers
- `h` help
- `r` refresh
- `p` pause
- `t` cycle theme
- `q` quit
Legacy command names, old app-installer naming, and compatibility branding notes now live in TriChat Compatibility Reference.
The current office/autonomy environment intentionally borrows and reinterprets the strongest open-source ideas from:
- RALPH TUI: multi-pane operator UX, persistent dashboard feel, session-oriented monitoring, and a more playful terminal surface
- Get Shit Done: bounded work packets, single-owner delegation, and orchestration that stays simple while the system grows complex
- autoresearch: small-budget experiment loops, org-first task shaping, and disciplined overnight continuation
- SuperClaude Framework: confidence-before-action methodology and explicit mode/check thinking before implementation
We also reviewed the DAN-prompt gist for stylistic inspiration only. Unsafe jailbreak behavior is intentionally excluded; the only acceptable lift is playful operator-facing mode naming, not guardrail bypassing.
Upstream coverage matrix: Upstream Implementation Matrix
When GitHub push access is unavailable, export a portable handoff bundle for a stronger server:
```bash
npm run replication:export
```
The export includes:
- a `git bundle` for the current branch and commit
- `.env.example`
- `config/trichat_agents.json`
- `bootstrap-server.sh`
- `replication-manifest.json`
On the target server:
```bash
./bootstrap-server.sh /path/to/target /path/to/master-mold-<timestamp>.bundle
```
Copy the template:
```bash
cp .env.example .env
```
Key variables:
- `ANAMNESIS_HUB_DB_PATH` local SQLite path
- `ANAMNESIS_HUB_RUN_QUICK_CHECK_ON_START` run SQLite quick integrity check at startup (`1` by default)
- `ANAMNESIS_HUB_STARTUP_BACKUP` create rotating startup snapshots (`1` by default)
- `ANAMNESIS_HUB_BACKUP_DIR` snapshot directory (default: sibling `backups/` near DB path)
- `ANAMNESIS_HUB_BACKUP_KEEP` retained snapshot count (default: `24`)
- `ANAMNESIS_HUB_AUTO_RESTORE_FROM_BACKUP` auto-restore latest snapshot on startup corruption (`1` by default)
- `ANAMNESIS_HUB_ALLOW_FRESH_DB_ON_CORRUPTION` allow empty DB bootstrap if no backup exists (`0` by default)
- `MCP_HTTP_BEARER_TOKEN` auth token for HTTP transport
- `MCP_HTTP_ALLOWED_ORIGINS` comma-separated local origins
- `MCP_DOMAIN_PACKS` comma-separated pack ids (`agentic`, etc.); defaults to `agentic`, set `none` to disable all packs
- `TRICHAT_AGENT_IDS` comma-separated active office council roster
- `TRICHAT_GEMINI_CMD` override full Gemini bridge command
- `TRICHAT_CLAUDE_CMD` override full Claude bridge command
- `TRICHAT_GEMINI_EXECUTABLE` / `TRICHAT_GEMINI_ARGS` provider CLI override
- `TRICHAT_CLAUDE_EXECUTABLE` / `TRICHAT_CLAUDE_ARGS` provider CLI override
- `TRICHAT_CODEX_EXECUTABLE` / `TRICHAT_CURSOR_EXECUTABLE` override the provider binary inside the wrapper
- `TRICHAT_GEMINI_MODE` select `auto`, `cli`, or `api`
- `TRICHAT_GEMINI_MODEL` override Gemini API model (`gemini-2.0-flash` default)
- `TRICHAT_IMPRINT_MODEL` / `TRICHAT_OLLAMA_URL` control the local imprint lane
- `TRICHAT_LOCAL_INFERENCE_PROVIDER` selects `auto`, `ollama`, or `mlx` for the local bridge lane
- `TRICHAT_MLX_PYTHON` / `TRICHAT_MLX_MODEL` / `TRICHAT_MLX_ENDPOINT` define the optional Metal-backed MLX lane
- `TRICHAT_MLX_ADAPTER_PATH` turns the managed MLX lane into an adapter-backed `mlx_lm.server`
- `TRICHAT_LOCAL_ADAPTER_REGISTRATION_PATH` / `TRICHAT_LOCAL_ADAPTER_ACTIVE_PROVIDER` record which accepted adapter is currently integrated and whether it is active through `mlx` or `ollama`
- `TRICHAT_LOCAL_ADAPTER_OLLAMA_MODEL` records the exported Ollama companion model name when the active integration target is `ollama`
- `TRICHAT_MLX_SERVER_ENABLED=1` enables a managed local `mlx_lm.server` launch agent; leave it `0` to keep MLX installed but not auto-served
- `TRICHAT_BRIDGE_TIMEOUT_SECONDS` bound per-bridge request time
- `TRICHAT_BRIDGE_MAX_RETRIES` / `TRICHAT_BRIDGE_RETRY_BASE_MS` control wrapper-level transient retry behavior
- `GEMINI_API_KEY` or `GOOGLE_API_KEY` enable direct Gemini API fallback
The runtime now quarantines non-SQLite/corrupted artifacts into `corrupt/` before recovery attempts, so startup failures do not silently overwrite evidence.
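The corruption check behind that quarantine step can be illustrated with the SQLite file header: every valid SQLite database begins with the 16 bytes `SQLite format 3\0`. This is a hedged sketch of the idea, not the runtime's actual recovery code, and `quarantinePath` is a hypothetical helper:

```typescript
// A valid SQLite database file starts with this exact 16-byte header.
const SQLITE_MAGIC = Buffer.from("SQLite format 3\0", "latin1");

function looksLikeSqlite(firstBytes: Buffer): boolean {
  return firstBytes.length >= 16 && firstBytes.subarray(0, 16).equals(SQLITE_MAGIC);
}

// Hypothetical quarantine destination that never overwrites evidence:
// a timestamped name under corrupt/ next to the database.
function quarantinePath(dbPath: string, nowMs: number): string {
  const dir = dbPath.slice(0, dbPath.lastIndexOf("/") + 1);
  const name = dbPath.slice(dbPath.lastIndexOf("/") + 1);
  return `${dir}corrupt/${nowMs}-${name}`;
}
```

Moving the failing file aside (rather than deleting or re-initializing it in place) is what preserves the evidence for a later post-mortem.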
Local Metal setup:
- `npm run mlx:setup` creates `.venv-mlx`, installs `mlx` + `mlx-lm`, and writes the repo-local MLX env vars into `.env`
- the control plane now prefers the repo's `.venv-mlx/bin/python` when probing MLX availability
- local bridges can use the MLX chat-completions endpoint when `TRICHAT_LOCAL_INFERENCE_PROVIDER=mlx` or `auto` with a healthy MLX endpoint
- on Apple Silicon, `npm run doctor` now reports whether the host is ready for Ollama's March 30, 2026 MLX preview path; the official Ollama post calls out `qwen3.5:35b-a3b-coding-nvfp4` on Ollama `0.19+` and recommends a Mac with more than 32 GB of unified memory
- `npm run ollama:mlx:preview` is the guarded Apple Silicon-only setup path for that Ollama MLX preview model. It refuses to run on Linux or Windows, checks the Ollama runtime floor, and pulls `qwen3.5:35b-a3b-coding-nvfp4`. It does not cut the active local model over until the post-pull gate passes.
- after that pull completes, the same path automatically runs `scripts/ollama_mlx_postpull.mjs` to stress the local Ollama runtime, run the default local benchmark/eval gate, inspect router readiness plus rollback viability, verify `office.snapshot` truth surfaces, and write a report under `data/imprint/reports/`. Only a fully green gate will cut the active local model over; otherwise the runner records the blockers and leaves the current default untouched. Re-run it manually with `npm run ollama:mlx:postpull`. The runner is single-instance per model, so duplicate manual starts now exit cleanly instead of piling up background waiters.
- `npm run local:training:bootstrap` reuses the repo's `.venv-mlx` setup path and gives the adapter lane a real local trainer backend on Apple Silicon instead of leaving it in a permanent "missing module" state
- `npm run local:training:prepare` + `npm run local:training:train` + `npm run local:training:promote` + `npm run local:training:integrate` + `npm run local:training:cutover` + `npm run local:training:soak` + `npm run local:training:watchdog` now form a truthful bounded training lane: prepare curates the packet, train runs an MLX LoRA pass against a trainable companion model, promote runs the repo's benchmark/eval gate so the adapter is either rejected or registered, integrate materializes the accepted candidate as a real MLX backend or an Ollama companion model, cutover is the explicit router-default switch with rollback if post-cutover verification fails, soak keeps validating the new primary against the rollback path over repeated benchmark/eval cycles with deterministic rollback heuristics tied to the accepted reward score and baseline contract, and watchdog re-runs that bounded confidence pass automatically whenever the last green soak is missing, failed, or stale
- `npm run local:training:verify` is the fail-closed evidence gate that re-checks the registry, manifest, corpus splits, promotion proof, rollback metadata, and primary-watchdog freshness from disk before you trust the reported state
- on this Apple Silicon host, the current Qwen companion adapter is served through MLX because `mlx_lm.server` supports `--adapter-path`. Ollama companion export remains a real path for supported adapter families, but Ollama's documented adapter import support is narrower than the MLX training surface, so not every accepted adapter will be exportable there
- "imprinting" here means durable local memory, profile preferences, and bootstrap context for the control plane; it is not pretending to silently fine-tune model weights
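The promote step's reject-or-register decision reduces to a fail-closed gate. A hypothetical sketch of that shape (the real gate runs the repo's benchmark/eval suite and records its evidence on disk):

```typescript
// Hypothetical sketch of a fail-closed promotion gate; the real promote
// lane derives these values from the benchmark/eval suite, not hand input.
type GateResult = { rewardScore: number; baselineScore: number; evalPassed: boolean };

// An adapter is only registered when the gate is fully green: the eval
// suite passed AND the reward score did not regress below baseline.
// Any missing or failing signal means rejection, never a silent pass.
function shouldPromote(r: GateResult): boolean {
  return r.evalPassed && r.rewardScore >= r.baselineScore;
}
```

Soak and watchdog then keep re-evaluating the same contract against the rollback path, so a regression after cutover is detected rather than assumed away.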
Core runtime tools include:
- Memory and continuity: `memory.*` including `memory.reflection_capture` for externally grounded episodic reflections, `transcript.*`, `who_knows`, `knowledge.query`, `retrieval.hybrid`
- Governance and safety: `policy.evaluate`, `preflight.check`, `postflight.verify`, `mutation.check`
- Durable execution: `run.*`, `task.*`, `lock.*`
- Permanent regression capture: `golden.case_capture` turns research, incidents, and traces into verified golden cases that can seed future benchmark/eval fixtures
- Agentic kernel: `goal.*` including `goal.execute`, `goal.autorun`, and `goal.autorun_daemon`, plus `kernel.summary`, `plan.*`, `artifact.*`, `experiment.*`, `event.*`, `agent.session.*`, `dispatch.autorun`
- Workflow methodology: `playbook.*` including `playbook.run`, plus `pack.hooks.list`, `pack.plan.generate`, `pack.verify.run`
- Decision and incident logging: `adr.create`, `decision.link`, `incident.*`
- Runtime ops: `health.*`, `migration.status`, `imprint.*`, `imprint.inbox.*`
- Office orchestration: `trichat.*` (roster, thread/message/turn, autopilot, tmux_controller, bus, adapter_telemetry, chaos, slo)
- Control-plane discovery and rollout: `tool.search`, `permission.profile`, `feature.flag`, `warm.cache`
- Budget and cost visibility: `budget.ledger`
- Local host control: `desktop.control`, `desktop.observe`, `desktop.act`, `desktop.listen`, `patient.zero`, `privileged.exec`
Workflow/domain packs are loaded at startup from `MCP_DOMAIN_PACKS` or `--domain-packs`.
- Framework: `src/domain-packs/types.ts`, `src/domain-packs/index.ts`
- Default workflow pack: `src/domain-packs/agentic.ts`
Pack authoring guide: Domain Packs
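Pack selection from `MCP_DOMAIN_PACKS` can be sketched as a small resolver. This is a hypothetical simplification of what the loader in `src/domain-packs/index.ts` does; the real implementation may differ in detail:

```typescript
// Hypothetical sketch of pack-id resolution from the environment.
// Documented behavior: default is the agentic pack; "none" disables all packs.
function resolvePacks(envValue: string | undefined): string[] {
  if (envValue === undefined || envValue.trim() === "") return ["agentic"];
  if (envValue.trim() === "none") return [];
  return envValue
    .split(",")
    .map((p) => p.trim())
    .filter((p) => p.length > 0);
}
```

Treating `"none"` as an explicit empty set (rather than the absence of the variable) is what lets "pure core" mode coexist with a non-empty default.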
Connection examples and client setup:
- Documentation Index
- Quick Setup
- IDE + Agent Setup Guide
- Transport Connection Guide
- Coworker Quickstart (Cursor + Codex)
- Provider Bridge Matrix
- System Interconnects
- Presentation Runbook
- Ring Leader MCP Ops
Provider bridge commands:
```bash
npm run providers:status
npm run providers:export
npm run providers:install -- claude-cli cursor gemini-cli github-copilot-cli
```
`provider.bridge` is the truthful federation surface:
- it reports which clients can really connect into this MCP runtime
- it reports which providers are already available as live outbound council agents
- it projects runtime-eligible outbound providers into bridge-backed `model.router` backend candidates
- `autonomy.bootstrap` seeds those eligible bridge backends automatically without replacing the local default backend
- `autonomy.command`, `goal.execute`, and `plan.dispatch` use router output to augment local-first councils with relevant hosted agents instead of treating provider bridges as a separate side path
- it exports config bundles for Claude CLI, Cursor, Gemini CLI, GitHub Copilot, and Codex
- it installs Claude CLI through the native `claude mcp add`/`add-json` path instead of editing opaque hidden formats directly
- it installs both global and workspace-local Cursor MCP config for better editor reliability
- it defaults Claude CLI and Gemini CLI to a resilient stdio proxy on this host, using the MCP HTTP daemon first and a direct stdio fallback when needed
- it preserves `autonomy.ide_ingress` as the one canonical operator/IDE ingress path
Fast STDIO connection example:
```json
{
  "mcpServers": {
    "master-mold": {
      "command": "node",
      "args": ["/absolute/path/to/master-mold/dist/server.js"],
      "env": {
        "ANAMNESIS_HUB_DB_PATH": "/absolute/path/to/master-mold/data/hub.sqlite"
      }
    }
  }
}
```
Pure core / no-pack connection example:
```json
{
  "mcpServers": {
    "master-mold-core-only": {
      "command": "node",
      "args": ["/absolute/path/to/master-mold/dist/server.js"],
      "env": {
        "ANAMNESIS_HUB_DB_PATH": "/absolute/path/to/master-mold/data/hub.sqlite",
        "MCP_DOMAIN_PACKS": "none"
      }
    }
  }
}
```
How to publish an agentic-development-focused fork from this template:
```bash
npm test
npm run mvp:smoke
npm run agentic:micro-soak
```
Local HTTP teammate validation:
```bash
npm run launchd:install
npm run it:http:validate
```
Office and council reliability checks:
```bash
npm run trichat:bridges:test
npm run trichat:doctor
npm run production:doctor
npm run autonomy:status
npm run autonomy:maintain
npm run trichat:smoke
npm run trichat:dogfood
npm run trichat:soak:gate -- --hours 1 --interval-seconds 60
```
Background upkeep is real, not advisory: launchd keepalive drives `autonomy.maintain`, which keeps the control plane ready, keeps `goal.autorun_daemon` alive, refreshes bounded learning visibility, maintains tmux worker lanes, and runs the default eval suite only when it is due. When the MCP HTTP lane is still coming back after a restart, the keepalive runner now exits temporary-failure so launchd retries the upkeep lane immediately instead of waiting for the next timer slot.
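The temporary-failure convention can be sketched with the classic `sysexits.h` code `EX_TEMPFAIL` (75). Under launchd `KeepAlive` with `SuccessfulExit: false`, any nonzero exit triggers a relaunch, so the specific code mainly documents intent. This is a hypothetical sketch, not the actual keepalive runner:

```typescript
// Hypothetical sketch of the keepalive exit-code convention. EX_TEMPFAIL
// (75, from sysexits.h) is the traditional "try again later" code; launchd
// relaunches on any nonzero exit when KeepAlive.SuccessfulExit is false.
const EXIT_OK = 0;
const EXIT_TEMPFAIL = 75;
const EXIT_FAILURE = 1;

function keepaliveExitCode(httpLaneReady: boolean, upkeepSucceeded: boolean): number {
  // HTTP lane still coming back: signal a transient condition so the
  // supervisor retries immediately instead of waiting for the next timer.
  if (!httpLaneReady) return EXIT_TEMPFAIL;
  return upkeepSucceeded ? EXIT_OK : EXIT_FAILURE;
}
```

Separating "the lane is not ready yet" from "the upkeep actually failed" keeps retry behavior aggressive for transient states and honest for real failures.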
Extended validation flows, tmux dry-run examples, legacy env vars, and older compatibility-named autopilot examples now live in TriChat Compatibility Reference.
- `src/server.ts` core MCP runtime and tool registration
- `src/tools/` core reusable tools
- `src/domain-packs/` optional domain modules
- `bridges/` bridge adapters and client-facing helper lanes
- `config/` roster, bridge, and runtime configuration
- `scripts/` operational scripts and smoke checks
- `docs/` centralized human-facing docs, setup guides, and architecture diagrams
- `tests/` integration and persistence tests
- `data/` local runtime state and SQLite database
- `web/office/` browser-based Agent Office GUI
- `ui/` terminal-facing dashboard surfaces