
⚡ JCLAW Framework

Treat the LLM API as a runtime, not a chatbox.


A local-first LLM runtime for developers who demand more.
Persistent sessions · Multi-provider · Branching · Diffing · Sandboxing · Evals · Pipelines · MCP · WhatsApp


🌟 Why JCLAW?

Traditional LLM interfaces treat every conversation as a one-off chat that vanishes when you close the tab. JCLAW shifts the paradigm entirely: the LLM becomes a persistent, programmable runtime environment that you own and control.

Every session is a first-class object stored in SQLite on your machine. You can branch conversations, diff model responses, pipe outputs to webhooks or scripts, run structured evaluations, enforce prompt-injection sandboxes, and expose your entire workflow to other AI agents via MCP, all from the CLI or a sleek Star Trek–inspired web dashboard.

🛡️ No cloud accounts needed. No telemetry. All data stays in ~/.jclaw/jclaw.db.


✨ Feature Highlights

🗃️ Persistent Sessions

Conversations survive restarts. Each session carries its own model, provider, system prompt, temperature, and accumulated token/cost metrics, permanently tracked in SQLite.

🔀 Multi-Provider Support

Swap providers mid-conversation. Supports Anthropic, OpenAI, Groq, Google Gemini, Ollama, and LM Studio. Every message is tagged with the exact model that generated it.

🌿 Conversation Branching

Fork any session at any message. The history up to that point is copied into a new session; the original remains untouched. Explore alternative reasoning paths without losing your work.

🔁 Response Diffing & Model Comparison

Regenerate responses to see word- and line-level diffs. Run the same prompt against multiple models in parallel and compare outputs side-by-side.

🔧 Agentic Workflow Runtime

A dedicated agent runtime orchestrates multi-step autonomous tasks with tool-calling loops, powered by MCP tool integrations.

🧪 Eval & Benchmarking Suite

Define evaluation cases with expected outputs and judge models. Run structured benchmarks against any model, with concurrency control and per-case scoring.

🛡️ Prompt Sandbox & Red Team

Server-level prompt-injection detection, system prompt prefix/suffix injection, client-override blocking, and a built-in red-team harness to stress-test your own prompts.

📊 Fine-Tuning Pipeline

Export conversation datasets in JSONL format and submit fine-tuning jobs directly to OpenAI or Groq from within the framework.

📱 WhatsApp Business Integration

Send and receive WhatsApp messages via the Meta Business Cloud API. Configure webhooks, test sends, view a live message log, and optionally enable auto-replies driven by any JCLAW session.

📡 Automation-Native Output Piping

Pipe LLM responses to files, the system clipboard, webhooks, or arbitrary shell scripts: first-class primitives, not afterthoughts.

📚 Prompt Library & Templates

Store reusable prompts with {{variable}} templating. Create session templates with pre-configured models, system prompts, and cost ceilings.
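The {{variable}} syntax above amounts to a substitution pass over the prompt text. As a minimal sketch (the helper name and the treatment of unknown variables are assumptions, not JCLAW's actual template engine):

```typescript
// Hypothetical sketch of {{variable}} substitution; JCLAW's real template
// engine may differ, e.g. in how it handles unknown variables.
function renderTemplate(template: string, vars: Record<string, string>): string {
  // Replace each {{name}} placeholder with its value; leave unknown names intact.
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}

console.log(renderTemplate("Summarize {{topic}} in {{n}} bullets.", { topic: "monads", n: "3" }));
// Summarize monads in 3 bullets.
```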

🔎 Full-Text Search

Instantly search across your entire message history using SQLite FTS5: blazing-fast, local, always available.

🔌 Model Context Protocol (MCP)

Expose JCLAW as an MCP tool provider for other AI agents. It also acts as an MCP client: connect to any external MCP server and use its tools inside your chat sessions.

📈 Live Processing Dashboard

The Overview page shows a real-time processing bar (tokens/sec, last model, last provider) and a Framework Status panel with live indicators for every active framework: Sandbox, Red Team, MCP, WhatsApp, Pipeline, Evals, Fine-Tune, and Embeddings.


πŸ—οΈ Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  CLI (commander)          Web Dashboard (React + Vite)                  β”‚
β”‚       β”‚                              β”‚                                   β”‚
β”‚       └──────────── WebSocket β”€β”€β”€β”€β”€β”€β”€β”˜                                  β”‚
β”‚                          β”‚                                               β”‚
β”‚              gate/server.ts   (Express + WebSocketServer)               β”‚
β”‚              β”œβ”€ /webhook/whatsapp  GET (verify) + POST (receive)        β”‚
β”‚              └─ /health                                                  β”‚
β”‚                          β”‚                                               β”‚
β”‚              gate/protocol.ts  (JSON-RPC method router, 40+ methods)    β”‚
β”‚                          β”‚                                               β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”‚
β”‚   β”‚                      β”‚                                        β”‚     β”‚
β”‚ storage/             runtime/                               providers/   β”‚
β”‚ β”œβ”€ db.ts             β”œβ”€ chat.ts       ← send / stream       β”œβ”€ anthropic β”‚
β”‚ β”œβ”€ sessions.ts       β”œβ”€ composer.ts   ← context budget       β”œβ”€ openai   β”‚
β”‚ β”œβ”€ messages.ts       β”œβ”€ differ.ts     ← response diffs       β”œβ”€ ollama   β”‚
β”‚ β”œβ”€ prompts.ts        β”œβ”€ pipeline.ts   ← output piping        └─ ...     β”‚
β”‚ β”œβ”€ templates.ts      β”œβ”€ eval.ts       ← benchmarking                    β”‚
β”‚ β”œβ”€ sandbox.ts        β”œβ”€ finetune.ts   ← fine-tune jobs                  β”‚
β”‚ β”œβ”€ evals.ts          └─ embeddings.ts ← vector caching                  β”‚
β”‚ └─ metrics*.ts                                                           β”‚
β”‚                                                                          β”‚
β”‚   agent/runtime.ts   ← agentic loop       mcp/ ← server + client        β”‚
β”‚   channels/          ← I/O plugin channels (WhatsApp, …)                β”‚
β”‚   plugins/           ← plugin registry                                   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Stack: Node.js 20 (ESM) · TypeScript 5.6 · Express · WebSocket · SQLite (better-sqlite3, WAL + FTS5) · React + Vite · Vitest


🚀 Installation

Prerequisites: Node.js 20+

# 1. Clone the repository
git clone https://github.com/Jacobcdsmith/jclaw-framework.git
cd jclaw-framework

# 2. Install backend dependencies
npm install

# 3. Build the backend
npm run build

# 4. Install & build the frontend
cd web && npm install && npm run build && cd ..

🏁 Quick Start

Start the server + dashboard

JCLAW_PORT=5000 npm run dev:gate

The web dashboard is available at http://localhost:5000: a Star Trek / LCARS terminal aesthetic with a deep-black background, amber and cyan accents, and monospace type.

Configure providers

Set API keys via environment variables or directly in the Providers dashboard page:

| Provider | Environment Variable |
| --- | --- |
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI / Groq / Gemini | OPENAI_API_KEY |
| Ollama | (no key needed; runs locally) |
| LM Studio | (no key needed; runs locally) |
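The provider-to-key mapping in the table above can be sketched as a simple lookup. The helper below is purely illustrative (its name and shape are not JCLAW's actual provider wiring); in practice you would pass process.env as the env argument:

```typescript
// Illustrative lookup mirroring the provider table above; hypothetical helper,
// not JCLAW's internals. Pass process.env as `env` in a real Node process.
function apiKeyFor(
  provider: string,
  env: Record<string, string | undefined>
): string | undefined {
  switch (provider) {
    case "anthropic":
      return env.ANTHROPIC_API_KEY;
    case "openai":
    case "groq":
    case "gemini":
      return env.OPENAI_API_KEY;
    case "ollama":
    case "lmstudio":
      return undefined; // local providers run without a key
    default:
      return undefined;
  }
}
```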

💻 CLI Usage

# Create a new session
jclaw sessions start --model claude-sonnet-4-6 --label "my first session"

# Send a message
jclaw chat send <sessionId> -m "Explain monads in one paragraph"

# Stream tokens in real time
jclaw chat send <sessionId> -m "Write me a poem" --stream

# Pipe output to clipboard and a file simultaneously
jclaw chat send <sessionId> -m "Draft release notes" \
  --pipe-file notes.txt \
  --pipe-clipboard

# Fork a session at a specific message to explore a different path
jclaw sessions fork <sessionId> --at <messageId>

# Run a benchmark eval suite
jclaw eval run <suiteId> --model openai:gpt-4o --concurrency 4

# Full-text search across all history
jclaw search "context window optimization"

🖥️ Dashboard Pages

| Page | Description |
| --- | --- |
| Overview | Live processing bar, framework status panel, session stats, provider health |
| Sessions | Sortable list of all sessions with token/cost info |
| Session Detail | Full message thread with inline model tags |
| Prompts | Saved system prompts with {{variable}} templating |
| Templates | Pre-configured session templates (model, system prompt, cost ceiling) |
| Chat | Live streaming chat with tool-call trace |
| Terminal | Raw JSON-RPC console |
| Activity | Real-time WebSocket frame monitor with tokens/sec counter |
| Metrics | Historical token/latency/cost graphs |
| Providers | API key management, base URL config, connection testing, model listing |
| Sandbox | Prompt-injection protection, prefix/suffix injection, red-team harness |
| MCP | Add/edit/delete MCP server configs, connection status, available tools |
| WhatsApp | Meta Business Cloud API channel: config, live message log, test send |
| Datasets | Conversation dataset management for fine-tuning |
| Fine-Tune | Submit and monitor fine-tuning jobs (OpenAI/Groq) |
| Evals | Create eval suites, run benchmarks, view scored results |
| Embed Search | Semantic search over message history |
| Search | Full-text search (FTS5) across all message history |

📱 WhatsApp Integration

JCLAW integrates with the Meta WhatsApp Business Cloud API to send and receive WhatsApp messages directly from the dashboard.

Setup (5 steps)

  1. Create a Meta app at developers.facebook.com and add the WhatsApp Business product.

  2. Copy your credentials from WhatsApp → API Setup:

     • Phone Number ID: a numeric ID tied to your test/production number
     • System User Access Token: a long-lived token from Meta Business Manager
     • App Secret: found in App Dashboard → Basic (used for webhook signature verification)

  3. Configure JCLAW: open the WhatsApp dashboard page and enter:

     • Phone Number ID
     • Access Token
     • Verify Token (any string you choose, e.g. jclaw-verify)
     • App Secret (optional but recommended: enables X-Hub-Signature-256 validation on inbound webhooks)

  4. Register the webhook in your Meta app under WhatsApp → Configuration:

     • Webhook URL: https://<your-host>/webhook/whatsapp
     • Verify Token: the same string you entered above
     • Subscribe to the messages field

  5. Test: use the Test Send panel to send a message to any number approved in your Meta app.

Note: Inbound messages are broadcast as whatsapp.message WebSocket events and appear in the live log instantly. Enable Auto-Reply in the config panel to have JCLAW forward incoming messages to a session and reply automatically.
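The two webhook checks configured in steps 3 and 4 follow Meta's documented flow: a GET handshake that echoes hub.challenge when the verify token matches, and HMAC-SHA256 validation of the raw POST body against the X-Hub-Signature-256 header. A standalone sketch (function names are illustrative, not JCLAW's actual handlers):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// GET verification: Meta calls the webhook with hub.mode, hub.verify_token,
// and hub.challenge; echo the challenge back only when the token matches.
function verifyHandshake(
  query: Record<string, string>,
  verifyToken: string
): string | null {
  return query["hub.mode"] === "subscribe" && query["hub.verify_token"] === verifyToken
    ? query["hub.challenge"]
    : null;
}

// POST validation: X-Hub-Signature-256 carries "sha256=" followed by the
// HMAC-SHA256 of the raw request body, keyed with the App Secret.
// Compare in constant time to avoid timing side channels.
function validateSignature(rawBody: string, header: string, appSecret: string): boolean {
  const expected = "sha256=" + createHmac("sha256", appSecret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(header);
  return a.length === b.length && timingSafeEqual(a, b);
}
```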

Protocol methods

| Method | Description |
| --- | --- |
| whatsapp.config.get | Get current config (access token, verify token, and app secret are masked) |
| whatsapp.config.set | Update Phone Number ID, access token, verify token, app secret, auto-reply settings |
| whatsapp.send | Send a text message to a phone number |
| whatsapp.messages.list | List recent inbound/outbound messages |

Inbound messages are also emitted as whatsapp.message WebSocket events for real-time display.
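Because the gate exposes these methods over JSON-RPC, a call such as whatsapp.send is just a standard JSON-RPC 2.0 envelope. A minimal builder sketch; the parameter names (to, text) are assumptions for illustration, not taken from JCLAW's schema:

```typescript
// Hypothetical JSON-RPC 2.0 request builder; JCLAW's actual parameter shape
// for whatsapp.send may differ.
let nextId = 1;
function rpcRequest(method: string, params: unknown): string {
  // Serialize a spec-compliant request frame with an incrementing id.
  return JSON.stringify({ jsonrpc: "2.0", id: nextId++, method, params });
}

const frame = rpcRequest("whatsapp.send", { to: "+15551234567", text: "Hello from JCLAW" });
console.log(frame);
```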


🔌 MCP Integration

JCLAW works as both an MCP server and an MCP client:

As a server, expose JCLAW to other agents via stdio or HTTP/SSE:

| Tool | Description |
| --- | --- |
| sessions_list | List all chat sessions |
| sessions_create | Create a new session |
| chat_send | Send a message and receive a response |
| messages_search | Search message history with FTS5 |

As a client, connect to any external MCP server and use its tools directly inside chat sessions. Manage connections from the MCP dashboard page.


πŸ“ Project Structure

jclaw-framework/
├── src/
│   ├── agent/        # Agentic workflow runtime
│   ├── channels/     # I/O plugin channels
│   │   └── plugins/
│   │       ├── exampleChannel.ts   # Stdout logger (template)
│   │       └── whatsapp.ts         # Meta WhatsApp Business Cloud API
│   ├── cli/          # CLI entry point (commander)
│   ├── gate/         # Express + WebSocket server & protocol router
│   │   ├── server.ts             # HTTP + WS server, WhatsApp webhook endpoints
│   │   ├── protocol.ts           # JSON-RPC method router (40+ methods)
│   │   └── whatsapp-store.ts     # In-process WhatsApp message store
│   ├── mcp/          # MCP server, client manager, shared types
│   ├── plugins/      # Plugin registry
│   ├── providers/    # LLM adapters (Anthropic, OpenAI, Ollama, …)
│   ├── runtime/      # Chat, eval, fine-tune, embeddings, diffing, piping
│   └── storage/      # SQLite layer (sessions, messages, prompts, sandbox, …)
├── web/              # React + Vite dashboard frontend
│   └── src/
│       └── pages/
│           ├── Overview.tsx   # Live processing bar + framework status panel
│           ├── WhatsApp.tsx   # WhatsApp channel management page
│           └── ...            # All other dashboard pages
├── scripts/          # Utility scripts
├── tsconfig.json
└── package.json

🤝 Contributing

Contributions are very welcome! Here's how to get started:

  1. Fork the repository
  2. Create your feature branch: git checkout -b feature/my-feature
  3. Commit your changes: git commit -m 'feat: add my feature'
  4. Push to the branch: git push origin feature/my-feature
  5. Open a Pull Request and describe what you've built

Please keep PRs focused and include tests where relevant.


📄 License

This project is licensed under the MIT License; see the LICENSE file for details.


Built with ☕ and TypeScript · Report a bug · Request a feature

About

JCLaw parallel agent framework: gateway, plugin system, channels, CLI, and agent runtime.
