
fix(docker): enable host.docker.internal for local providers #68702

Closed
skolez wants to merge 2 commits into openclaw:main from skolez:fix/docker-host-internal

Conversation

@skolez

@skolez skolez commented Apr 18, 2026

Summary

  • Problem: When OpenClaw runs in a container, the onboarding wizard prompts for a base URL (e.g. for LM Studio or Ollama) and users naturally enter http://127.0.0.1:1234 / http://127.0.0.1:11434. Inside the container that resolves to the container itself, not the host, so every model call silently fails.
  • Why it matters: Local-provider users on Docker hit a dead end during onboarding with no in-product hint about why.
  • What changed: Mapped host.docker.internal to host-gateway in the bundled docker-compose.yml (so the alias works on Linux, not just Docker Desktop) and added a "Connecting to host services" subsection to docs/install/docker.md documenting the correct base URLs for LM Studio and Ollama.
  • What did NOT change (scope boundary): No changes to the LM Studio or Ollama setup wizards, no plugin SDK additions, no auto-detection logic. Issue suggested wizard auto-detection as one option; kept this PR docs+compose-only as the smallest defensible fix. Wizard hint can be a follow-up if maintainers want it.
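The Compose side of the change amounts to a one-line addition along these lines (service name taken from the bundled stack; the surrounding keys are elided and assumed, not copied from the actual file):

```yaml
# docker-compose.yml (sketch) — only the extra_hosts entry is new
services:
  openclaw-gateway:
    # ...existing image/ports/volumes keys...
    extra_hosts:
      # host-gateway is a Docker magic value that resolves to the host's
      # gateway IP; Docker Desktop already provides this alias implicitly.
      - "host.docker.internal:host-gateway"
```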

AI-assisted (Claude). Lightly tested locally — verified extra_hosts syntax against existing repo precedent (extensions/qa-lab/src/docker-harness.ts); did not stand up a full Docker rebuild on Linux to confirm host-gateway resolution end-to-end.

Change Type (select all)

  • Bug fix
  • Feature
  • Refactor required for the fix
  • Docs
  • Security hardening
  • Chore/infra

Scope (select all touched areas)

  • Gateway / orchestration
  • Skills / tool execution
  • Auth / tokens
  • Memory / storage
  • Integrations
  • API / contracts
  • UI / DX
  • CI/CD / infra

Linked Issue/PR

Closes #68684

Root Cause (if applicable)

  • Root cause: docker-compose.yml did not declare extra_hosts: ["host.docker.internal:host-gateway"], so on Linux the alias does not resolve (Docker Desktop sets it implicitly; Docker Engine does not). The Docker install docs also did not call out the container-vs-host loopback gotcha for local AI providers, so users entering 127.0.0.1 had no hint why their setup silently failed.
  • Missing detection / guardrail: The onboarding wizard for LM Studio/Ollama does not detect container environments and warn about loopback. Out of scope here; deferred to a possible follow-up.
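For illustration only, the deferred wizard guardrail could look roughly like the sketch below. This is not part of this PR and not repo code; the helper names and the /.dockerenv heuristic are assumptions.

```typescript
// Hypothetical guardrail sketch (NOT in this PR): rewrite loopback base URLs
// to host.docker.internal when the wizard runs inside a container.
import * as fs from "fs";

const LOOPBACK_HOSTS = new Set(["127.0.0.1", "localhost", "[::1]"]);

function isInsideDocker(): boolean {
  // Common heuristic: Docker creates /.dockerenv at the container root.
  return fs.existsSync("/.dockerenv");
}

function suggestBaseUrl(input: string, insideDocker: boolean = isInsideDocker()): string {
  try {
    const url = new URL(input);
    if (insideDocker && LOOPBACK_HOSTS.has(url.hostname)) {
      url.hostname = "host.docker.internal"; // swap host only; keep port and path
    }
    return url.toString();
  } catch {
    return input; // not a parseable URL; let the wizard's own validation handle it
  }
}
```

A wizard using this could still let the user override the suggestion; the point is to surface the container-vs-host loopback gotcha instead of failing silently.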

Regression Test Plan (if applicable)

N/A — no runtime/test code is touched. The change is a YAML config addition and a docs section.

User-visible / Behavior Changes

  • Inside the bundled OpenClaw Docker Compose stack, http://host.docker.internal:<port> now resolves to the host gateway on Linux (it already worked on Docker Desktop). Users following the docs can now point LM Studio/Ollama setup at the correct URL on any platform.
  • Docs gain a new "Connecting to host services (LM Studio, Ollama, etc.)" subsection under docs/install/docker.md.

Diagram (if applicable)

Before (Linux, Docker Engine):
[wizard] -> http://127.0.0.1:11434 -> container loopback (no Ollama) -> silent failure
[wizard] -> http://host.docker.internal:11434 -> name resolution failure

After:
[wizard] -> http://host.docker.internal:11434 -> host gateway -> host Ollama -> 200 OK

Security Impact (required)

  • New permissions/capabilities? No
  • Secrets/tokens handling changed? No
  • New/changed network calls? No (only adds a name resolution alias inside the container)
  • Command/tool execution surface changed? No
  • Data access scope changed? No

extra_hosts: ["host.docker.internal:host-gateway"] only resolves a hostname to the existing host gateway IP; it does not open new ports, expose new services, or grant new privileges. Same alias Docker Desktop already provides implicitly.

Repro + Verification

Environment

  • OS: Linux Docker Engine host (issue reporter's environment); also Docker Desktop on macOS/Windows
  • Runtime/container: bundled openclaw-gateway container
  • Model/provider: LM Studio, Ollama (host-side)
  • Integration/channel (if any): N/A
  • Relevant config (redacted): default docker-compose.yml

Steps

  1. Run ./scripts/docker/setup.sh on a Linux Docker Engine host.
  2. During onboarding, configure LM Studio (or Ollama) and enter http://127.0.0.1:1234 (or http://127.0.0.1:11434).
  3. Send a message via any channel.

Expected

  • After this PR: docs guide users to enter http://host.docker.internal:<port> instead, and that hostname resolves correctly on both Docker Desktop and Linux Docker Engine.

Actual

  • Before this PR on Linux: entering host.docker.internal gave name resolution errors; entering 127.0.0.1 reached the container loopback (no provider). Either way, model calls silently failed.

Evidence

  • Failing test/log before + passing after
  • Trace/log snippets
  • Screenshot/recording
  • Perf numbers (if relevant)

Reproduction and root cause are well-documented in #68684 (filed by @safrano9999). No new automated coverage added — this PR is config + docs only.

Human Verification (required)

  • Verified scenarios:
    • Diff review of docker-compose.yml, docs/install/docker.md, and CHANGELOG.md.
    • Confirmed extra_hosts: ["host.docker.internal:host-gateway"] matches the syntax already in use elsewhere in this repo (extensions/qa-lab/src/docker-harness.ts:125, src/agents/sandbox/config-hash.test.ts:17).
    • Confirmed openclaw-cli shares the gateway's network namespace (network_mode: "service:openclaw-gateway"), so adding extra_hosts only on the gateway service is sufficient.
  • Edge cases checked:
    • macOS / Windows Docker Desktop: host.docker.internal already resolves implicitly; adding extra_hosts is a harmless no-op.
    • Linux Docker Engine: host-gateway magic value is supported since Docker 20.10 (released 2020-12), which is well below the project's existing Docker baseline.
  • What I did not verify:
    • Did not stand up a full Linux Docker Engine rebuild end-to-end to confirm name resolution. The compose syntax change is small and matches established repo precedent, so confidence is high but not first-hand validated.

Review Conversations

  • I replied to or resolved every bot review conversation I addressed in this PR.
  • I left unresolved only the conversations that still need reviewer or maintainer judgment.

Compatibility / Migration

  • Backward compatible? Yes
  • Config/env changes? No (no environment variables added)
  • Migration needed? No — existing users who already entered the working URL keep working; the new extra_hosts is additive.

Risks and Mitigations

  • Risk: Some unusual Docker setups (e.g. ancient Docker Engine < 20.10 without host-gateway magic value) might fail to start the container with the new extra_hosts.
    • Mitigation: Docker 20.10 was released in Dec 2020; OpenClaw's bundled tooling already assumes a modern Docker. If this becomes an issue, the line can be made conditional via a Compose override.
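If that escape hatch is ever needed, one hypothetical shape (assuming a Compose version recent enough to support the `!reset` override tag) would be a local override file rather than a patch to the base Compose file:

```yaml
# docker-compose.override.yml (hypothetical opt-out for pre-20.10 engines)
services:
  openclaw-gateway:
    # !reset clears the list merged from the base file instead of appending to it
    extra_hosts: !reset []
```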

@openclaw-barnacle openclaw-barnacle Bot added docs Improvements or additions to documentation docker Docker and sandbox tooling size: XS labels Apr 18, 2026
@greptile-apps

greptile-apps Bot commented Apr 18, 2026

Greptile Summary

This PR adds extra_hosts: ["host.docker.internal:host-gateway"] to the openclaw-gateway service in docker-compose.yml, making the host.docker.internal alias work on Linux Docker Engine (Docker Desktop already sets it implicitly). It also adds a "Connecting to host services" documentation section to docs/install/docker.md with the correct base URLs for LM Studio and Ollama, and a corresponding CHANGELOG entry.

The change is minimal, additive, and well-reasoned. Because openclaw-cli shares openclaw-gateway's network namespace via network_mode: "service:openclaw-gateway", adding extra_hosts only on the gateway service is sufficient for both containers.

Confidence Score: 5/5

Safe to merge — config-only and docs-only change with no runtime code touched.

The extra_hosts entry is syntactically correct, matches established repo precedent, and the host-gateway magic value is supported since Docker 20.10. The docs section accurately covers the loopback-vs-host-gateway distinction and correctly notes that Ollama requires OLLAMA_HOST=0.0.0.0:11434 to be reachable from the container. No P0/P1 issues found.

No files require special attention.


@skolez skolez force-pushed the fix/docker-host-internal branch from 8ca47bb to 47cab92 Compare April 18, 2026 22:15
@steipete

Maintainer review note: the Compose change itself looks right, but I do not think this should land as-is yet.

Two issues from local review:

  • docs/install/docker.md says LM Studio binds to 0.0.0.0 by default. The installed LM Studio CLI here reports a default bind of 127.0.0.1 in lms server start --help. On Linux Docker Engine, host.docker.internal:host-gateway reaches the host gateway address, not a service bound only to host loopback, so the doc should tell LM Studio users to bind explicitly, e.g. lms server start --port 1234 --bind 0.0.0.0, or point them to the equivalent Desktop setting.
  • The CHANGELOG.md entry needs a contributor attribution per repo changelog policy (Thanks @...).

Evidence checked: PR diff only touches docker-compose.yml, docs/install/docker.md, and CHANGELOG.md; CI is green, and lms server start --help on this machine documents --bind <address> default as 127.0.0.1.

@clawsweeper

clawsweeper Bot commented Apr 27, 2026

Codex automated review: keeping this open.

Keep this PR open. The linked Docker/local-provider bug is real, and current main still lacks the host-gateway mapping and Docker provider guidance. However, this PR should not be closed or merged as-is: a maintainer identified a concrete docs error around LM Studio binding plus a changelog policy violation. The Compose change appears directionally correct and limited in scope; the best path is a small revision, not cleanup closure.

Best possible solution:

Keep this PR open and ask for a small revision: update the Docker docs to tell LM Studio users to enable network serving explicitly, such as the GUI Serve on Local Network setting or lms server start --bind 0.0.0.0 --port 1234; keep the Ollama OLLAMA_HOST=0.0.0.0:11434 guidance; add the required changelog Thanks @... attribution; then re-run the repo's relevant docs/changed checks before review/merge. The linked #68684 remains the right bug target for this implementation candidate.

What I checked:

  • Current main still lacks bundled host-gateway mapping: On current main, openclaw-gateway proceeds from the optional Docker socket/group section directly to ports with no extra_hosts; openclaw-cli uses network_mode: "service:openclaw-gateway", so adding the mapping to the gateway service is the relevant Compose surface if this fix lands. (docker-compose.yml:37, 646a268d2710)
  • Current docs do not solve the provider loopback problem: The Docker install guide currently documents gateway LAN/loopback binding and shared-network security, but it does not explain that LM Studio/Ollama running on the host are not reachable from inside the container at 127.0.0.1 or localhost. Public docs: docs/install/docker.md. (docs/install/docker.md:204, 646a268d2710)
  • Maintainer-blocked docs issue is substantive: The PR diff adds Docker docs claiming LM Studio binds to 0.0.0.0 by default, while the maintainer review says local lms server start --help reports default bind as 127.0.0.1. Upstream LM Studio docs describe the default API server as available at http://localhost:1234 and expose Serve on Local Network as a separate setting; NVIDIA OpenShell docs likewise say LM Studio listens on 127.0.0.1:1234 by default and needs Serve on Local Network or lms server start --bind 0.0.0.0 for gateway/container access. (lmstudio.ai) Public docs: docs/install/docker.md. (docs/install/docker.md, 8b3f50092049)
  • Changelog attribution policy blocks as-is: Root policy requires every added changelog entry to include at least one Thanks @author attribution. The provided PR diff adds a Docker/setup changelog bullet without any Thanks @..., matching the maintainer's review note. (AGENTS.md:144, 646a268d2710)
  • Compose direction is plausible and not suspicious: The same host.docker.internal:host-gateway mapping already appears in the QA Docker harness and sandbox config-hash coverage, and Docker docs describe host.docker.internal as the special hostname for reaching host services from a container. The PR's changed file set is limited to docker-compose.yml, Docker docs, and CHANGELOG.md; it does not touch workflows, dependencies, lockfiles, scripts, package publishing metadata, secrets handling, or arbitrary code execution paths. (docs.docker.com) (extensions/qa-lab/src/docker-harness.ts:124, 646a268d2710)

Remaining risk / open question:

Codex Review notes: model gpt-5.5, reasoning high; reviewed against 646a268d2710.

@steipete

Closing as superseded by 66f4b52 on main. Thanks @skolez for the PR.

I kept the core Compose direction from this PR and added the missing pieces from review:

  • Docker setup now marks provider onboarding with OPENCLAW_DOCKER_SETUP=1.
  • LM Studio and Ollama plugin setup use host.docker.internal as the Docker setup default, so users are not prompted toward container loopback.
  • The Docker docs now note that host-side LM Studio/Ollama must bind outside loopback, with concrete commands.
  • The changelog includes the required attribution.

Validation:

  • pnpm test extensions/lmstudio/src/setup.test.ts extensions/ollama/src/setup.test.ts -- --reporter=verbose passed: 59 tests.
  • pnpm check:changed passed.
  • docker compose config --quiet passed and rendered extra_hosts: host.docker.internal=host-gateway.
  • Live Docker probe passed: a Compose one-shot container fetched a host-bound HTTP endpoint via http://host.docker.internal:<port>.

@steipete steipete closed this Apr 27, 2026



Development

Successfully merging this pull request may close these issues.

Docker setup: 127.0.0.1 doesn't work for local AI providers (LM Studio, Ollama) — host.docker.internal required
