Arke is a self-hosted Python RAG system for stress-testing UK competition law skeleton arguments against CAT, EWHC, COA, UKSC, CJEU, and GC authorities. Install it as an MCP server for your AI assistant, or run it on-prem connected to your firm's archive. Try a public demo at arke.legal.
The on-prem mode makes no external calls and uses no third-party APIs, no cloud storage, no sidecars, and no database. Arke is a single monolithic Python server running local inference entirely under your control.
Arke never edits the source documents; it only mirrors them, read-only. When you add, remove, or modify documents, Arke picks up the change automatically.
The system's output is a verbatim mosaic of the source data: a bouquet of common-law authority citations. Because every quoted passage is reproduced verbatim from the corpus, hallucinated text cannot appear in the output.
Everything below is curated by AI for AI. Paste this README into Claude / ChatGPT / Cursor and ask for install or configuration help.
Arke ships a stdio MCP server (arke-mcp) that plugs into Claude Desktop, Claude Code, or Cursor. It exposes two tools to your AI assistant:
- `stress(argument)` — full pipeline. Send a draft skeleton paragraph; receive verbatim cited authorities and an adversarial mosaic. 30-60 s.
- `search(query)` — retrieval only. Top-K authorities with neutral citation, court, date, and snippet. Sub-second.
Both tools query a local UK CAT + EU competition law corpus (~3,400 documents, ~5GB on disk, auto-downloaded on first server boot).
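Under the hood these are ordinary MCP tool calls over stdio. For illustration only, a `search` invocation from your assistant looks roughly like the generic MCP JSON-RPC shape below (this is the wire format your client emits for you, not something you type and not a transcript from Arke):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "abuse of dominance excessive pricing" }
  }
}
```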
- Python 3.12+
- `uv` — Python package manager
- Inference backend — either an OpenAI API key (out-of-the-box default; same trust boundary as your existing Claude Desktop / ChatGPT usage) or a local inference server (llama-cpp-server or similar speaking the `/v1/embeddings` and `/v1/chat/completions` shape). See Configuration below to switch.
- One of: Claude Desktop, Claude Code, Cursor
```sh
git clone https://github.com/padolgot/arke.git
cd arke
./scripts/install.sh
```

The installer prompts you for your OpenAI key, writes a local `.env`, and runs `uv sync`. Your key never leaves your machine.
```sh
uv run arke-server &
```

First boot downloads the corpus (~5GB, 1-5 minutes). Subsequent boots are instant. Leave `arke-server` running in the background while you use your AI assistant.
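To confirm the first boot finished, a quick sanity check is to look at the state directory (`~/arke/` by default, configurable via `ARKE_HOME` — see Configuration below). The exact layout inside it is not guaranteed, so treat this as a rough check:

```sh
# Roughly 5GB of corpus and index data should be present once the download completes
du -sh ~/arke/
ls ~/arke/
```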
Claude Desktop: Settings → Developer → Edit Config. Add to `mcpServers`:
```json
{
  "mcpServers": {
    "arke": {
      "command": "/absolute/path/to/arke/.venv/bin/arke-mcp"
    }
  }
}
```

Restart Claude Desktop.
Claude Code:

```sh
claude mcp add arke /absolute/path/to/arke/.venv/bin/arke-mcp
```

Cursor: Settings → MCP → Add new MCP. Type: stdio. Command: `/absolute/path/to/arke/.venv/bin/arke-mcp`.
In your AI assistant chat:
Use the arke `search` tool to find authorities on "abuse of dominance excessive pricing".
You should see top-K results with neutral citations and snippets.
For a full stress-test:
Use the arke `stress` tool on this draft: [paste a paragraph from your skeleton].
Expect 30-60 seconds. The reply is markdown: each limb under a `##` heading, with verbatim citations as `>` blockquotes.
`scripts/install.sh` writes a working `.env` for the OpenAI path. Override any of these variables to point at a different inference backend (e.g. a local llama-cpp-server); an illustrative override `.env` follows the list:
- `EMBED_URL`, `FAST_URL`, `STRONG_URL` — base URLs. Default: `https://api.openai.com`. Each must speak `/v1/embeddings` (embed) or `/v1/chat/completions` (fast/strong).
- `EMBED_MODEL`, `FAST_MODEL`, `STRONG_MODEL` — model names. Required.
- `INFERENCE_KEY` — bearer token. Required (the server fails fast if empty).
- `ARKE_HOME` — state directory. Default: `~/arke/`. Holds the corpus, indexes, mailbox, and logs.
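For example, a local llama-cpp-server (or any OpenAI-compatible server) on port 8080 could be wired in with a `.env` along these lines. The port and model names here are placeholders, not defaults shipped with Arke:

```sh
EMBED_URL=http://127.0.0.1:8080
FAST_URL=http://127.0.0.1:8080
STRONG_URL=http://127.0.0.1:8080
EMBED_MODEL=your-embedding-model
FAST_MODEL=your-fast-chat-model
STRONG_MODEL=your-strong-chat-model
INFERENCE_KEY=local-placeholder   # still required; the server fails fast if this is empty
```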
The same `arke/inference/client.py` HTTP code path is used regardless of backend; the real no-egress guarantee comes from your network boundary, not from configuration.
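Any backend that answers those two endpoints in the OpenAI-compatible shape will therefore work. As a minimal sketch of that wire format (an illustration of the protocol the client speaks, not the actual contents of `arke/inference/client.py`):

```python
import os
import requests  # illustration only; Arke's own client may use a different HTTP library

def embed(texts: list[str]) -> list[list[float]]:
    """POST /v1/embeddings against EMBED_URL in the OpenAI-compatible shape."""
    r = requests.post(
        f"{os.environ['EMBED_URL']}/v1/embeddings",
        headers={"Authorization": f"Bearer {os.environ['INFERENCE_KEY']}"},
        json={"model": os.environ["EMBED_MODEL"], "input": texts},
        timeout=60,
    )
    r.raise_for_status()
    return [item["embedding"] for item in r.json()["data"]]

def complete(prompt: str) -> str:
    """POST /v1/chat/completions against FAST_URL in the OpenAI-compatible shape."""
    r = requests.post(
        f"{os.environ['FAST_URL']}/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['INFERENCE_KEY']}"},
        json={
            "model": os.environ["FAST_MODEL"],
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]
```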