LocalClaw is a Pinokio package that adapts OpenClaw for strictly local model usage with:

- Ollama (`http://127.0.0.1:11434/v1`)
- LM Studio (`http://127.0.0.1:1234/v1`)

No cloud endpoints are configured by default.
- A local-only config renderer (`scripts/render_local_config.py`) that:
  - rejects non-local hosts,
  - seeds OpenClaw with OpenAI-compatible local model definitions,
  - writes config to `.localclaw/openclaw.local.json`.
- Auto backend detection (`scripts/start_localclaw.sh`) for Ollama vs. LM Studio.
- A reproducible smoke test (`scripts/smoke_test_local.sh`) using a mock OpenAI-compatible server (`scripts/mock_openai_server.py`) that auto-responds with `LOCAL_TEST_PHRASE`.
- Pinokio menu entries for Install / Start / Test / Update / Uninstall.
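To illustrate the smoke-test idea, here is a minimal sketch of a deterministic mock OpenAI-compatible server. It is not the contents of `scripts/mock_openai_server.py`; the handler name, the phrase value, and the port are illustrative assumptions. The point is that every chat-completion request gets the same fixed reply, so a test can assert on an exact string.

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

LOCAL_TEST_PHRASE = "LOCALCLAW_SMOKE_OK"  # assumed placeholder value

class MockOpenAIHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Deterministic reply: every request yields the same assistant message,
        # so a smoke test can assert on the exact phrase.
        body = json.dumps({
            "object": "chat.completion",
            "choices": [{
                "index": 0,
                "message": {"role": "assistant", "content": LOCAL_TEST_PHRASE},
            }],
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test runs quiet
        pass

if __name__ == "__main__":
    # Port is an arbitrary example, not a LocalClaw default.
    ThreadingHTTPServer(("127.0.0.1", 18999), MockOpenAIHandler).serve_forever()
```

A client pointed at this server via `LOCALCLAW_BASE_URL` should see `LOCAL_TEST_PHRASE` in every completion, which is what makes the smoke test reproducible.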
- Click Install.
- Start either:
  - Ollama with an OpenAI-compatible endpoint (`/v1`), or
  - the LM Studio local server.
- Click Start (auto-detects Ollama/LM Studio).
- Optionally run the local API smoke test.
You can change behavior with environment variables:

- `LOCALCLAW_BACKEND=auto|ollama|lmstudio`
- `LOCALCLAW_BASE_URL=http://127.0.0.1:11434/v1`
- `LOCALCLAW_MODEL=llama3.2:latest`
- `LOCALCLAW_GATEWAY_PORT=18789`
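A hypothetical sketch of how the start script could resolve these settings; the variable names match the list above, while the helper name and the fallback defaults are assumptions drawn from the documented example values:

```python
import os

def resolve_settings(env=os.environ):
    """Read LocalClaw settings from the environment, falling back to the
    documented example values (assumed here to be the defaults)."""
    return {
        "backend": env.get("LOCALCLAW_BACKEND", "auto"),  # auto|ollama|lmstudio
        "base_url": env.get("LOCALCLAW_BASE_URL", "http://127.0.0.1:11434/v1"),
        "model": env.get("LOCALCLAW_MODEL", "llama3.2:latest"),
        "gateway_port": int(env.get("LOCALCLAW_GATEWAY_PORT", "18789")),
    }
```

With this shape, exporting `LOCALCLAW_BACKEND=lmstudio` before Start is all it takes to pin the backend.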
OpenClaw validates model IDs against the configured provider models.
Mitigation: set `LOCALCLAW_MODEL=<your-local-model-id>` and rerun Start. The config renderer injects that model ID into the provider definitions.
Auto mode probes port 11434 (Ollama) first, then 1234 (LM Studio).
Mitigation: set `LOCALCLAW_BACKEND` explicitly, or set `LOCALCLAW_BASE_URL` directly.
Strict local mode blocks non-local hosts.
Mitigation: config generation allows only `127.0.0.1`, `localhost`, and `::1`.
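A minimal sketch of such a loopback-only guard, assuming the allowlist above; the helper name is an illustration, not an OpenClaw or LocalClaw API:

```python
from urllib.parse import urlparse

# Allowlist as documented: IPv4 loopback, the localhost name, IPv6 loopback.
ALLOWED_HOSTS = {"127.0.0.1", "localhost", "::1"}

def assert_local(base_url):
    """Raise if base_url points anywhere other than the local machine."""
    host = urlparse(base_url).hostname  # strips port and IPv6 brackets
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"non-local host rejected: {host!r}")
    return base_url
```

Note that `urlparse(...).hostname` already lowercases the host and strips `[...]` from IPv6 literals, so `http://[::1]:1234/v1` passes the check.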
Some OpenAI-compatible servers support only `/v1/chat/completions`.
Mitigation: the LocalClaw config uses the `openai-completions` API mode for compatibility.
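For reference, a stdlib-only sketch of the `/v1/chat/completions` request shape that both Ollama and LM Studio expose on their OpenAI-compatible endpoints; the helper name is hypothetical:

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build a POST request to <base_url>/chat/completions in the
    OpenAI-compatible wire format."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending this with `urllib.request.urlopen` against a running local backend returns a JSON body whose reply text lives at `choices[0].message.content`.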
To maximize visibility when published:

- Keep `pinokio.js`, `pinokio.json`, and `icon.svg` in the repository root.
- Use a clear `title` + `description` in `pinokio.json`.
- Include a concise README, and screenshots if/when UI behavior changes.
- Tag the GitHub repo clearly (`pinokio`, `openclaw`, `ollama`, `lmstudio`, `local-llm`) and provide release notes so community users understand what is local-only.
OpenClaw users commonly run multi-channel automations against cloud providers by default. This package fills the local-first gap by adding:
- hardened localhost-only guardrails,
- backend auto-detection for common local inference runtimes,
- deterministic API emulation tests for safer packaging/sharing.