
Internet DNS resolution no longer works in internal bridge network #7262

Open
paya-cz opened this issue Apr 21, 2024 · 4 comments

Comments


paya-cz commented Apr 21, 2024

Description

I use Compose. I have two networks:

  • private (bridge, internal)
  • public (bridge)

I have these containers:

  • wireguard - attached to both private and public at the same time. Provides the VPN tunnel. Single instance only.
  • worker - in the private network only. Multiple instances. These have a dns config entry with a public DNS server IP to allow resolution of public domains.
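
For reference, the topology above could be sketched in docker-compose.yaml roughly like this (the image names are illustrative placeholders, not my exact file):

```yaml
networks:
  private:
    internal: true   # adds the iptables rules discussed under issue 2
  public: {}

services:
  wireguard:
    image: linuxserver/wireguard   # illustrative image name
    cap_add: [NET_ADMIN]
    networks: [private, public]

  worker:
    image: my-worker               # illustrative image name
    cap_add: [NET_ADMIN]           # needed for the route-change script below
    dns: [8.8.8.8]                 # public DNS server for internet names
    networks: [private]
```

Multiple worker instances can then be started with something like docker compose up --scale worker=3.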

Objective: The worker internet traffic must only ever go through wireguard and never leave the host machine directly. Local DNS must keep working so the containers can resolve each other's IPs and talk to each other, and internet DNS must also resolve.


Issue 1: This is difficult to set up because Docker networking does not have the right tools for the job: it is not possible to point a network's default gateway at a container IP.

Solution for issue 1: I added the NET_ADMIN capability to worker and used this script to change the default route:

# Look up the wireguard container's IP via Docker's embedded DNS
wireguard_ip=$(getent hosts wireguard | awk '{ print $1 }')
# Send all outbound traffic via the wireguard container
ip route replace default via "$wireguard_ip"

Issue 2: With the NET_ADMIN capability, nothing prevents the container from changing the default route again and pointing it back at Docker's default gateway, at which point its traffic would bypass the VPN.

Solution for issue 2: That is why I use the internal flag on the private network, which adds iptables firewall rules that prevent traffic from leaving that network entirely.


Issue 3: Those iptables rules block all traffic with destination IPs outside the private CIDR. So they also block traffic headed for the wireguard VPN, because those IPs fall outside the private CIDR.

Solution for issue 3: I created another container with the NET_ADMIN capability and, critically, with network_mode: host. This container adds further iptables rules to the DOCKER-USER chain. It runs:

# Destination check: let private (RFC 1918) destinations fall through
# to Docker's own rules; accept everything else (internet-bound).
iptables -N ACCEPT-TO-INTERNET
iptables -A ACCEPT-TO-INTERNET -d 10.0.0.0/8 -j RETURN
iptables -A ACCEPT-TO-INTERNET -d 172.16.0.0/12 -j RETURN
iptables -A ACCEPT-TO-INTERNET -d 192.168.0.0/16 -j RETURN
iptables -A ACCEPT-TO-INTERNET -j ACCEPT

# Source check: the same idea for reply traffic coming from internet IPs.
iptables -N ACCEPT-FROM-INTERNET
iptables -A ACCEPT-FROM-INTERNET -s 10.0.0.0/8 -j RETURN
iptables -A ACCEPT-FROM-INTERNET -s 172.16.0.0/12 -j RETURN
iptables -A ACCEPT-FROM-INTERNET -s 192.168.0.0/16 -j RETURN
iptables -A ACCEPT-FROM-INTERNET -j ACCEPT

# Apply both checks to traffic bridged within the private network.
iptables -I DOCKER-USER -i private -o private -j ACCEPT-TO-INTERNET
iptables -I DOCKER-USER -i private -o private -j ACCEPT-FROM-INTERNET

This allows traffic destined for internet IPs to be exchanged between private containers. Crucially, it still blocks traffic sent to Docker's original default gateway, while allowing internet-bound traffic to reach the wireguard container.
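
Those RETURN rules are effectively an RFC 1918 private-range check: private destinations fall through to Docker's own internal-network rules, and everything else is accepted. As a sketch of the predicate (my own illustration, not anything Docker runs):

```python
import ipaddress

# The three RFC 1918 ranges matched by the RETURN rules above
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def chain_verdict(ip: str) -> str:
    """Mimic ACCEPT-TO-INTERNET: RETURN for private IPs, ACCEPT otherwise."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in PRIVATE_RANGES):
        return "RETURN"   # fall through to Docker's internal-network rules
    return "ACCEPT"       # internet-bound traffic is allowed

print(chain_verdict("172.18.0.2"))  # a typical Docker bridge IP -> RETURN
print(chain_verdict("8.8.8.8"))     # internet IP -> ACCEPT
```

ACCEPT-FROM-INTERNET is the mirror image of the same check applied to the source address.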

The reason I chose to do this setup in a container is that I wanted the entire application to be self-contained and portable, instead of relying on host-machine startup scripts.


After jumping through all the hoops above, my networking setup works. Traffic goes from worker in private to wireguard in private, and exits via wireguard in public, now inside the VPN tunnel. The worker containers cannot bypass this even if they are compromised and change their default route.

Now for the actual issue: this setup worked fine up to and including v4.28.0 (139021). Traffic went through correctly, local DNS resolved correctly, and internet DNS resolved correctly. After updating to v4.29.0 (145265), internet DNS no longer works: internet names cannot be resolved, while local resolution continues to work. Traffic itself still flows correctly; I can ping 8.8.8.8 from worker just fine.

I could run something like echo "nameserver 8.8.8.8" > /etc/resolv.conf in worker, which would let me resolve internet names but would also prevent me from resolving local Docker names. I do not know what changed between v4.28 and v4.29, but internet resolution has been broken since v4.29.
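
The underlying tension is that /etc/resolv.conf picks the same resolver for every name, while this setup needs split resolution: Compose service names to Docker's embedded DNS (which listens on 127.0.0.11 inside containers), everything else upstream. A toy sketch of that routing decision (the single-label heuristic is my own illustration, not Docker behavior):

```python
# Docker's embedded DNS listens on 127.0.0.11 inside containers;
# 8.8.8.8 stands in for the public server from the dns: config.
EMBEDDED_DNS = "127.0.0.11"
PUBLIC_DNS = "8.8.8.8"

def pick_resolver(name: str) -> str:
    """Route single-label names (Compose service names like 'worker')
    to the embedded DNS, and fully qualified names upstream."""
    return EMBEDDED_DNS if "." not in name else PUBLIC_DNS

print(pick_resolver("wireguard"))    # -> 127.0.0.11
print(pick_resolver("example.com"))  # -> 8.8.8.8
```

A plain nameserver line cannot express this split, which is why overwriting resolv.conf trades one kind of resolution for the other.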

Reproduce

As you can see, this is a fairly elaborate setup which also requires an active WireGuard connection, which I cannot provide for you. If necessary, I can create a repository with my docker-compose.yaml and the .sh scripts, but you will need to supply your own WireGuard config anyway. Let me know what you need.

Expected behavior

No response

docker version

Client:
 Cloud integration: v1.0.35+desktop.13
 Version:           26.0.0
 API version:       1.45
 Go version:        go1.21.8
 Git commit:        2ae903e
 Built:             Wed Mar 20 15:14:46 2024
 OS/Arch:           darwin/amd64
 Context:           desktop-linux

Server: Docker Desktop 4.29.0 (145265)
 Engine:
  Version:          26.0.0
  API version:      1.45 (minimum version 1.24)
  Go version:       go1.21.8
  Git commit:       8b79278
  Built:            Wed Mar 20 15:18:01 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.28
  GitCommit:        ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc:
  Version:          1.1.12
  GitCommit:        v1.1.12-0-g51d5e94
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker info

Client:
 Version:    26.0.0
 Context:    desktop-linux
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.13.1-desktop.1
    Path:     /Users/xxx/.docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.26.1-desktop.1
    Path:     /Users/xxx/.docker/cli-plugins/docker-compose
  debug: Get a shell into any image or container. (Docker Inc.)
    Version:  0.0.27
    Path:     /Users/xxx/.docker/cli-plugins/docker-debug
  dev: Docker Dev Environments (Docker Inc.)
    Version:  v0.1.2
    Path:     /Users/xxx/.docker/cli-plugins/docker-dev
  extension: Manages Docker extensions (Docker Inc.)
    Version:  v0.2.23
    Path:     /Users/xxx/.docker/cli-plugins/docker-extension
  feedback: Provide feedback, right in your terminal! (Docker Inc.)
    Version:  v1.0.4
    Path:     /Users/xxx/.docker/cli-plugins/docker-feedback
  init: Creates Docker-related starter files for your project (Docker Inc.)
    Version:  v1.1.0
    Path:     /Users/xxx/.docker/cli-plugins/docker-init
  sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
    Version:  0.6.0
    Path:     /Users/xxx/.docker/cli-plugins/docker-sbom
  scout: Docker Scout (Docker Inc.)
    Version:  v1.6.3
    Path:     /Users/xxx/.docker/cli-plugins/docker-scout
WARNING: Plugin "/Users/xxx/.docker/cli-plugins/docker-scan" is not valid: failed to fetch metadata: fork/exec /Users/xxx/.docker/cli-plugins/docker-scan: no such file or directory

Server:
 Containers: 3
  Running: 3
  Paused: 0
  Stopped: 0
 Images: 6
 Server Version: 26.0.0
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  seccomp
   Profile: unconfined
  cgroupns
 Kernel Version: 6.6.22-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 1.92GiB
 Name: docker-desktop
 ID: c0a42fb8-518b-45f0-b43c-4b30d10147a9
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Labels:
  com.docker.desktop.address=unix:///Users/xxx/Library/Containers/com.docker.docker/Data/docker-cli.sock
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5555
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: daemon is not using the default seccomp profile

Diagnostics ID

a

Additional Info

No response

dgageot (Member) commented Apr 22, 2024

cc @djs55

dathbe commented Apr 24, 2024

This is not just an issue on Docker for Mac. I'm running Docker CLI on Debian and seeing a similar issue with resolving DNS.

akerouanton (Member) commented
@dathbe If that issue also exists with no Docker Desktop involved, that's an Engine bug. Could you open a ticket on https://github.com/moby/moby please?

stealthvette commented Apr 26, 2024

I was finally able to get my Docker containers to connect to the VPN. I use Docker Desktop for Mac v4.29.

After weeks of troubleshooting, I found an option under features under development: "Enable Host Networking". I'm not usually a fan of enabling features that are still under testing, but, frustrated, I gave it a try, and everything suddenly went back to normal. I was able to connect to my VPN; the DNS issue was fixed and my setup went back to functioning normally again.

I don't know where or how to access this setting on other platforms, but it worked for me and my setup.

The feature doesn't really feel "optional", since leaving it unselected broke my entire setup and prevented my containers from accessing the internet (in my case, connecting to my VPN). If the feature had to be selected for my setup to keep working after the update, Docker should have selected it for me automatically.
