## Problem

When using `--dind` with airlock mode enabled, Testcontainers .NET times out trying to connect to the resource reaper (ryuk) and to all other sibling containers. DinD works fine without airlock.
## Root Cause

The airlock compose network is `internal: true`, which prevents the workload container from reaching host port mappings via raw TCP.
The connectivity chain that breaks:

- Broker creates ryuk on the airlock network (`NetworkMode` injection) -- works
- Docker maps ryuk's port 8080 to a random host port (e.g., 32768) -- works
- Testcontainers extracts the hostname from `DOCKER_HOST=tcp://proxy:2375` --> uses `"proxy"` as the host
- If `TESTCONTAINERS_HOST_OVERRIDE=host.docker.internal` is set, it uses that instead
- Testcontainers connects to `host.docker.internal:32768` or `proxy:32768` -- fails

`host.docker.internal` resolves (via `extra_hosts`) but is unreachable because the `internal: true` network has no route to the host. The proxy container only listens on 2375 (socat bridge) and 58080 (HTTP proxy), not on random mapped ports.
Confirmed via testing:

- `/dev/tcp/host.docker.internal/80` from the workload --> "Network is unreachable"
- A manually created ryuk on the airlock network with `DOCKER_HOST=tcp://proxy:2375` starts fine
- The workload CAN reach sibling container IPs directly on the airlock network (connection refused = the route works, there is just no listener)
- Testcontainers .NET v4.8.1 `Hostname` property (decompiled): always uses the `DOCKER_HOST` hostname for TCP, no DinD auto-detection
## Potential Solutions

### A. iptables DNAT on the proxy container

Add `iptables` to the proxy image. When `BROKER_BRIDGE_TARGET` is set, configure DNAT rules to forward all incoming TCP (except 2375/58080) to `host.docker.internal`. Set `TESTCONTAINERS_HOST_OVERRIDE=proxy`. Add the `NET_ADMIN` capability to the proxy (TCP path only).
| Pros | Cons |
| --- | --- |
| Transparent to Testcontainers | Requires `NET_ADMIN` capability on the proxy |
| Minimal code change | Adds an `iptables` dependency to the proxy image |
| Works for any port range | Slightly larger attack surface |
| Follows the existing dual-homed proxy pattern | |
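A sketch of what the DNAT setup could look like in the proxy's entrypoint (the interface-wide PREROUTING scope and the MASQUERADE rule are assumptions, not the project's actual entrypoint):

```shell
# Hypothetical entrypoint fragment (requires NET_ADMIN).
# Forward any inbound TCP port except the broker's own listeners
# (2375, 58080) to the host, so "proxy:<mapped_port>" reaches
# host port mappings.
if [ -n "$BROKER_BRIDGE_TARGET" ]; then
  HOST_IP="$(getent hosts host.docker.internal | awk '{print $1}')"
  iptables -t nat -A PREROUTING -p tcp \
    -m multiport ! --dports 2375,58080 \
    -j DNAT --to-destination "$HOST_IP"
  # Rewrite the source so reply packets route back through the proxy.
  iptables -t nat -A POSTROUTING -p tcp -d "$HOST_IP" -j MASQUERADE
fi
```

Omitting a port in `--to-destination` preserves the original destination port, which is what makes this work for arbitrary mapped ports.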
### B. Extend the Rust proxy to handle generic TCP forwarding

Add a TCP bridge mode to the Rust proxy binary: for any connection on ports other than 58080/2375, forward to `host.docker.internal:<same_port>`.
| Pros | Cons |
| --- | --- |
| No extra capabilities needed | Requires binding many ports or a catch-all mechanism |
| Userspace-only, no kernel features | Significant Rust code change |
| No `iptables` dependency | Binding a port range is expensive (memory/FDs) |
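The per-connection forwarding logic itself is small; a minimal userspace sketch (Python for brevity, the real change would live in the Rust proxy; note this forwards to one fixed target and does not solve the catch-all listening problem listed in the cons):

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until EOF, then half-close the destination.
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def bridge(target_host: str, target_port: int) -> socket.socket:
    # Listen on an ephemeral port; forward each accepted connection to
    # target_host:target_port (the "host.docker.internal:<port>" role).
    srv = socket.create_server(("127.0.0.1", 0))
    def serve() -> None:
        while True:
            client, _ = srv.accept()
            upstream = socket.create_connection((target_host, target_port))
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
    threading.Thread(target=serve, daemon=True).start()
    return srv
```

Doing this for a whole port range is where the FD/memory cost in the table comes from: each forwarded port needs its own listener unless the proxy gains a catch-all mechanism (e.g. TPROXY, which would reintroduce the capability requirement option B is trying to avoid).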
### C. Don't use `internal: true` for the airlock network (when DinD is active)

Remove the `internal: true` flag from the airlock network when `--dind` is enabled. The HTTP proxy env vars (`HTTP_PROXY`/`HTTPS_PROXY`) still control outbound HTTP/HTTPS traffic.
| Pros | Cons |
| --- | --- |
| Zero code change to the proxy | Weakens the airlock security model |
| Simple compose template change | Raw TCP connections bypass the proxy |
| Works immediately | Non-HTTP egress is uncontrolled |
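As a compose sketch (network name hypothetical; the point is that the template would emit `internal: true` only when DinD is off):

```yaml
# Rendered airlock network, assuming the template can branch on --dind.
networks:
  airlock:
    driver: bridge
    # internal: true   # omitted when --dind is active: restores the route
    #                  # to host port mappings -- and to everything else,
    #                  # which is the security trade-off listed above
```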
### D. Run an actual Docker-in-Docker daemon on the airlock network

Add a `docker:dind` service to the compose file. Port mappings from sibling containers would then be published on the DinD daemon's IP, which is reachable on the airlock network.
| Pros | Cons |
| --- | --- |
| Standard DinD pattern | Runs a full Docker daemon in a container |
| Port mappings work naturally | Defeats the purpose of the broker (security gating) |
| No proxy changes needed | Higher resource usage |
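A compose sketch of option D (service names hypothetical; setting `DOCKER_TLS_CERTDIR` to empty disables the `docker:dind` image's default TLS setup so the daemon listens on plain TCP 2375):

```yaml
services:
  dind:
    image: docker:dind
    privileged: true                # dockerd requires this
    environment:
      DOCKER_TLS_CERTDIR: ""        # plain TCP on 2375, no TLS certs
    networks: [airlock]
  workload:
    environment:
      DOCKER_HOST: tcp://dind:2375  # bypasses the broker entirely
    networks: [airlock]
```

Pointing `DOCKER_HOST` at the DinD daemon is exactly why this defeats the broker: the workload's Docker API calls no longer pass through its gating.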
## Current Workaround

Use `--dind` without `--airlock`. The broker still enforces all Docker API rules; you just lose the HTTP proxy network isolation.
## Environment
- macOS (OrbStack / Docker Desktop)
- Testcontainers .NET v4.8.1
- copilot_here v2026.04.09