A local-first LLM runtime for developers who demand more.
Persistent sessions · Multi-provider · Branching · Diffing · Sandboxing · Evals · Pipelines · MCP · WhatsApp
Traditional LLM interfaces treat every conversation as a one-off chat that vanishes when you close the tab. JCLAW shifts the paradigm: the LLM becomes a persistent, programmable runtime environment that you own and control.
Every session is a first-class object stored in SQLite on your machine. You can branch conversations, diff model responses, pipe outputs to webhooks or scripts, run structured evaluations, enforce prompt-injection sandboxes, and expose your entire workflow to other AI agents via MCP, all from the CLI or a sleek Star Trek–inspired web dashboard.
🛡️ No cloud accounts needed. No telemetry. All data stays in `~/.jclaw/jclaw.db`.
- **Persistent sessions** — Conversations survive restarts. Each session carries its own model, provider, system prompt, temperature, and accumulated token/cost metrics, permanently tracked in SQLite.
- **Multi-provider** — Swap providers mid-conversation. Supports Anthropic, OpenAI, Groq, Google Gemini, Ollama, and LM Studio. Every message is tagged with the exact model that generated it.
- **Branching** — Fork any session at any message. The history up to that point is copied into a new session; the original remains untouched. Explore alternative reasoning paths without losing your work.
- **Diffing** — Regenerate responses to see word- and line-level diffs. Run the same prompt against multiple models in parallel and compare outputs side-by-side.
- **Agent runtime** — A dedicated agent runtime orchestrates multi-step autonomous tasks with tool-calling loops, powered by MCP tool integrations.
- **Evals** — Define evaluation cases with expected outputs and judge models. Run structured benchmarks against any model, with concurrency control and per-case scoring.
- **Sandboxing** — Server-level prompt-injection detection, system prompt prefix/suffix injection, client-override blocking, and a built-in red-team harness to stress-test your own prompts.
- **Fine-tuning** — Export conversation datasets in JSONL format and submit fine-tuning jobs directly to OpenAI or Groq from within the framework.
- **WhatsApp** — Send and receive WhatsApp messages via the Meta Business Cloud API. Configure webhooks, test sends, view a live message log, and optionally enable auto-replies driven by any JCLAW session.
- **Pipelines** — Pipe LLM responses to files, the system clipboard, webhooks, or arbitrary shell scripts; these are first-class primitives, not afterthoughts.
- **Prompts** — Store reusable prompts with {{variable}} templating.
- **Full-text search** — Instantly search across your entire message history using SQLite FTS5: blazing-fast, local, always available.
- **MCP** — Expose JCLAW as an MCP tool provider for other AI agents. Also acts as an MCP client: connect to any external MCP server and use its tools inside your chat sessions.
- **Dashboard** — The Overview page shows a real-time processing bar (tokens/sec, last model, last provider) and a Framework Status panel with live indicators for every active framework: Sandbox, Red Team, MCP, WhatsApp, Pipeline, Evals, Fine-Tune, and Embeddings.
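The fork semantics above can be sketched as a pure function: copy every message up to and including the fork point into a fresh session, leaving the original untouched. The `Session` and `Message` shapes below are illustrative, not JCLAW's actual schema.

```typescript
interface Message {
  id: string;
  sessionId: string;
  role: "user" | "assistant";
  content: string;
}

interface Session {
  id: string;
  label: string;
  messages: Message[];
}

// Fork `session` at message `atId`: history up to (and including) that
// message is copied into a new session; the original is never mutated.
function forkSession(session: Session, atId: string, newId: string): Session {
  const idx = session.messages.findIndex((m) => m.id === atId);
  if (idx === -1) throw new Error(`message ${atId} not found`);
  return {
    id: newId,
    label: `${session.label} (fork)`,
    messages: session.messages
      .slice(0, idx + 1)
      .map((m) => ({ ...m, sessionId: newId })),
  };
}
```

Because the copy is shallow per-message, forking is cheap even for long sessions; only the history rows are duplicated, not the provider or model state.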
```
┌───────────────────────────────────────────────────────────────────────────┐
│  CLI (commander)                    Web Dashboard (React + Vite)          │
│        │                                        │                         │
│        └────────────────┬─── WebSocket ─────────┘                         │
│                         │                                                 │
│  gate/server.ts  (Express + WebSocketServer)                              │
│   ├─ /webhook/whatsapp  GET (verify) + POST (receive)                     │
│   └─ /health                                                              │
│                         │                                                 │
│  gate/protocol.ts  (JSON-RPC method router, 40+ methods)                  │
│                         │                                                 │
│   ┌─────────────────────┼──────────────────────────────────────┐          │
│   │                     │                                      │          │
│  storage/            runtime/                             providers/      │
│   ├─ db.ts            ├─ chat.ts       → send / stream     ├─ anthropic   │
│   ├─ sessions.ts      ├─ composer.ts   → context budget    ├─ openai      │
│   ├─ messages.ts      ├─ differ.ts     → response diffs    ├─ ollama      │
│   ├─ prompts.ts       ├─ pipeline.ts   → output piping     └─ ...         │
│   ├─ templates.ts     ├─ eval.ts       → benchmarking                     │
│   ├─ sandbox.ts       ├─ finetune.ts   → fine-tune jobs                   │
│   ├─ evals.ts         └─ embeddings.ts → vector caching                   │
│   └─ metrics*.ts                                                          │
│                                                                           │
│  agent/runtime.ts → agentic loop        mcp/ → server + client            │
│  channels/ → I/O plugin channels (WhatsApp, …)                            │
│  plugins/ → plugin registry                                               │
└───────────────────────────────────────────────────────────────────────────┘
```
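The heart of the gateway is `gate/protocol.ts`, a JSON-RPC method router. A minimal sketch of that dispatch pattern follows; the method names and handler bodies here are illustrative placeholders, not JCLAW's actual registry.

```typescript
type Handler = (params: Record<string, unknown>) => unknown;

interface RpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

// Method registry: each JSON-RPC method name maps to a handler.
const methods = new Map<string, Handler>();
methods.set("sessions.list", () => [{ id: "s1", label: "demo" }]);
methods.set("health.ping", () => "pong");

// Parse a raw frame, look up the handler, and return a JSON-RPC response.
function dispatch(raw: string): string {
  const req = JSON.parse(raw) as RpcRequest;
  const handler = methods.get(req.method);
  if (!handler) {
    return JSON.stringify({
      jsonrpc: "2.0",
      id: req.id,
      error: { code: -32601, message: `Method not found: ${req.method}` },
    });
  }
  return JSON.stringify({ jsonrpc: "2.0", id: req.id, result: handler(req.params ?? {}) });
}
```

A map keyed by method name keeps the router flat and makes adding the next method a one-line registration, which is how a 40+ method surface stays manageable.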
Stack: Node.js 20 (ESM) · TypeScript 5.6 · Express · WebSocket · SQLite (better-sqlite3, WAL + FTS5) · React + Vite · Vitest
Prerequisites: Node.js 20+
```bash
# 1. Clone the repository
git clone https://github.com/Jacobcdsmith/jclaw-framework.git
cd jclaw-framework

# 2. Install backend dependencies
npm install

# 3. Build the backend
npm run build

# 4. Install & build the frontend
cd web && npm install && npm run build && cd ..

# 5. Start the gateway
JCLAW_PORT=5000 npm run dev:gate
```

The web dashboard is available at http://localhost:5000: a Star Trek / LCARS terminal aesthetic with a deep-black background, amber and cyan accents, and monospace type.
Set API keys via environment variables or directly on the Providers dashboard page:
| Provider | Environment Variable |
|---|---|
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI / Groq / Gemini | OPENAI_API_KEY |
| Ollama | (no key needed; runs locally) |
| LM Studio | (no key needed; runs locally) |
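A provider adapter only needs to resolve the variables in the table above at startup. A hypothetical resolver (the variable names follow the table; the function and its logic are illustrative, not JCLAW's actual code):

```typescript
// Map each provider to the environment variable named in the table above.
// Local providers (Ollama, LM Studio) need no key, encoded here as null.
const KEY_VARS: Record<string, string | null> = {
  anthropic: "ANTHROPIC_API_KEY",
  openai: "OPENAI_API_KEY",
  groq: "OPENAI_API_KEY",
  gemini: "OPENAI_API_KEY",
  ollama: null,
  lmstudio: null,
};

// Returns the key, null for keyless local providers, or throws if a
// required variable is unset. Pass `process.env` as `env` in real use.
function resolveApiKey(
  provider: string,
  env: Record<string, string | undefined>
): string | null {
  const varName = KEY_VARS[provider];
  if (varName === undefined) throw new Error(`unknown provider: ${provider}`);
  if (varName === null) return null; // local provider, no key needed
  const key = env[varName];
  if (!key) throw new Error(`${varName} is not set`);
  return key;
}
```

Taking `env` as a parameter rather than reading `process.env` inline keeps the lookup testable and lets dashboard-entered keys override the environment.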
```bash
# Create a new session
jclaw sessions start --model claude-sonnet-4-6 --label "my first session"

# Send a message
jclaw chat send <sessionId> -m "Explain monads in one paragraph"

# Stream tokens in real time
jclaw chat send <sessionId> -m "Write me a poem" --stream

# Pipe output to clipboard and a file simultaneously
jclaw chat send <sessionId> -m "Draft release notes" \
  --pipe-file notes.txt \
  --pipe-clipboard

# Fork a session at a specific message to explore a different path
jclaw sessions fork <sessionId> --at <messageId>

# Run a benchmark eval suite
jclaw eval run <suiteId> --model openai:gpt-4o --concurrency 4

# Full-text search across all history
jclaw search "context window optimization"
```

| Page | Description |
|---|---|
| Overview | Live processing bar, framework status panel, session stats, provider health |
| Sessions | Sortable list of all sessions with token/cost info |
| Session Detail | Full message thread with inline model tags |
| Prompts | Saved system prompts with {{variable}} templating |
| Templates | Pre-configured session templates (model, system prompt, cost ceiling) |
| Chat | Live streaming chat with tool-call trace |
| Terminal | Raw JSON-RPC console |
| Activity | Real-time WebSocket frame monitor with tokens/sec counter |
| Metrics | Historical token/latency/cost graphs |
| Providers | API key management, base URL config, connection testing, model listing |
| Sandbox | Prompt-injection protection, prefix/suffix injection, red-team harness |
| MCP | Add/edit/delete MCP server configs, connection status, available tools |
| WhatsApp | Meta Business Cloud API channel: config, live message log, test send |
| Datasets | Conversation dataset management for fine-tuning |
| Fine-Tune | Submit and monitor fine-tuning jobs (OpenAI/Groq) |
| Evals | Create eval suites, run benchmarks, view scored results |
| Embed Search | Semantic search over message history |
| Search | Full-text search (FTS5) across all message history |
JCLAW integrates with the Meta WhatsApp Business Cloud API to send and receive WhatsApp messages directly from the dashboard.
1. Create a Meta app at developers.facebook.com and add the WhatsApp Business product.
2. Copy your credentials from WhatsApp → API Setup:
   - Phone Number ID: a numeric ID tied to your test/production number
   - System User Access Token: a long-lived token from Meta Business Manager
   - App Secret: found in App Dashboard → Basic (used for webhook signature verification)
3. Configure JCLAW: open the WhatsApp dashboard page and enter:
   - Phone Number ID
   - Access Token
   - Verify Token (any string you choose, e.g. `jclaw-verify`)
   - App Secret (optional but recommended; enables `X-Hub-Signature-256` validation on inbound webhooks)
4. Register the webhook in your Meta app under WhatsApp → Configuration:
   - Webhook URL: `https://<your-host>/webhook/whatsapp`
   - Verify Token: the same string you entered above
   - Subscribe to the `messages` field
5. Test: use the Test Send panel to send a message to any number approved in your Meta app.

Note: Inbound messages are broadcast as `whatsapp.message` WebSocket events and appear in the live log instantly. Enable Auto-Reply in the config panel to have JCLAW forward incoming messages to a session and reply automatically.
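Meta signs each webhook POST with an `X-Hub-Signature-256` header: the literal prefix `sha256=` followed by a hex HMAC-SHA256 of the raw request body, keyed with your App Secret. A sketch of the check that the App Secret option implies (the helper name is hypothetical):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify Meta's X-Hub-Signature-256 header against the raw request body.
// The header has the form "sha256=<hex hmac>", keyed with the App Secret.
function verifyWebhookSignature(
  rawBody: string,
  header: string,
  appSecret: string
): boolean {
  const expected =
    "sha256=" + createHmac("sha256", appSecret).update(rawBody).digest("hex");
  // Length guard: timingSafeEqual throws on unequal-length buffers.
  if (header.length !== expected.length) return false;
  // Constant-time comparison to avoid timing side channels.
  return timingSafeEqual(Buffer.from(header), Buffer.from(expected));
}
```

The HMAC must be computed over the raw, unparsed body bytes; re-serializing parsed JSON will change whitespace and break verification.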
| Method | Description |
|---|---|
| `whatsapp.config.get` | Get the current config (access token, verify token, and app secret are masked) |
| `whatsapp.config.set` | Update the Phone Number ID, access token, verify token, app secret, and auto-reply settings |
| `whatsapp.send` | Send a text message to a phone number |
| `whatsapp.messages.list` | List recent inbound/outbound messages |
Inbound messages are also emitted as `whatsapp.message` WebSocket events for real-time display.
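A dashboard or script consuming that WebSocket stream just needs to filter frames by event name. The frame shape below is an assumption about the wire format, not JCLAW's documented schema:

```typescript
interface EventFrame {
  event?: string;
  data?: unknown;
}

// Parse a raw WebSocket frame and return the payload if it is a
// whatsapp.message event, otherwise null. Non-JSON frames are ignored.
function extractWhatsAppMessage(raw: string): unknown | null {
  let frame: EventFrame;
  try {
    frame = JSON.parse(raw) as EventFrame;
  } catch {
    return null; // not JSON, skip
  }
  return frame.event === "whatsapp.message" ? frame.data ?? null : null;
}
```

The same pattern extends to any other event type the gateway broadcasts; returning `null` instead of throwing keeps a live stream handler resilient to frames it does not recognize.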
JCLAW works as both an MCP server and an MCP client:
As a server: expose JCLAW to other agents via stdio or HTTP/SSE:

| Tool | Description |
|---|---|
| `sessions_list` | List all chat sessions |
| `sessions_create` | Create a new session |
| `chat_send` | Send a message and receive a response |
| `messages_search` | Search message history with FTS5 |
As a client: connect to any external MCP server and use its tools directly inside chat sessions. Manage connections from the MCP dashboard page.
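On the wire, MCP tool invocations travel as JSON-RPC 2.0 `tools/call` requests. A sketch of invoking JCLAW's `chat_send` tool from a client (the argument names are assumptions, not a documented schema):

```typescript
// Build an MCP `tools/call` JSON-RPC request for one of the tools above.
// MCP wraps the tool name and its arguments inside `params`.
function buildToolCall(
  id: number,
  tool: string,
  args: Record<string, unknown>
) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// Hypothetical call: ask a JCLAW session for a summary via MCP.
const req = buildToolCall(1, "chat_send", {
  sessionId: "s1",
  message: "Summarize the last three messages",
});
```

The same envelope works for `sessions_list`, `sessions_create`, and `messages_search`; only `name` and `arguments` change.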
```
jclaw-framework/
├── src/
│   ├── agent/                      # Agentic workflow runtime
│   ├── channels/                   # I/O plugin channels
│   │   └── plugins/
│   │       ├── exampleChannel.ts   # Stdout logger (template)
│   │       └── whatsapp.ts         # Meta WhatsApp Business Cloud API
│   ├── cli/                        # CLI entry point (commander)
│   ├── gate/                       # Express + WebSocket server & protocol router
│   │   ├── server.ts               # HTTP + WS server, WhatsApp webhook endpoints
│   │   ├── protocol.ts             # JSON-RPC method router (40+ methods)
│   │   └── whatsapp-store.ts       # In-process WhatsApp message store
│   ├── mcp/                        # MCP server, client manager, shared types
│   ├── plugins/                    # Plugin registry
│   ├── providers/                  # LLM adapters (Anthropic, OpenAI, Ollama, …)
│   ├── runtime/                    # Chat, eval, fine-tune, embeddings, diffing, piping
│   └── storage/                    # SQLite layer (sessions, messages, prompts, sandbox, …)
├── web/                            # React + Vite dashboard frontend
│   └── src/
│       └── pages/
│           ├── Overview.tsx        # Live processing bar + framework status panel
│           ├── WhatsApp.tsx        # WhatsApp channel management page
│           └── ...                 # All other dashboard pages
├── scripts/                        # Utility scripts
├── tsconfig.json
└── package.json
```
Contributions are very welcome! Here's how to get started:
1. Fork the repository
2. Create your feature branch: `git checkout -b feature/my-feature`
3. Commit your changes: `git commit -m 'feat: add my feature'`
4. Push to the branch: `git push origin feature/my-feature`
5. Open a Pull Request and describe what you've built
Please keep PRs focused and include tests where relevant.
This project is licensed under the MIT License; see the LICENSE file for details.
Built with ❤️ and TypeScript · Report a bug · Request a feature