Docker 20.10 RC1 breaks IPv6 routing #41699

Closed
chris-crone opened this issue Nov 20, 2020 · 11 comments · Fixed by moby/libnetwork#2596
Labels
area/networking kind/bug version/20.10
Milestone
20.10.0

Comments

@chris-crone
Contributor

Description
Prior to 20.10 RC1 (including beta1 and 19.03.x), the following Compose snippet would work:

services:
  test:
    image: nginx
    networks:
      test_net:
    ports:
      - "${IP6_ADDR}:80:80/tcp"

networks:
  test_net:
    enable_ipv6: true
    ipam:
      config:
        - subnet: "fd00:2::/64"

It now fails with:

ERROR: for d67edf86c583_ip6_test_1  Cannot start service test: driver failed programming external connectivity on endpoint ip6_test_1 (207d3ce431f361136e1bd137c9ea429beb6b2e5690ea7b0b05793738125bc8f2):  (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d REDACTED_IPV6_ADDRESS --dport 80 -j DNAT --to-destination 172.31.0.2:80 ! -i br-3c90dbe5ce74: iptables v1.8.4 (legacy): host/network `REDACTED_IPV6_ADDRESS' not found
Try `iptables -h' or 'iptables --help' for more information.
 (exit status 2))

ERROR: for test  Cannot start service test: driver failed programming external connectivity on endpoint ip6_test_1 (207d3ce431f361136e1bd137c9ea429beb6b2e5690ea7b0b05793738125bc8f2):  (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d REDACTED_IPV6_ADDRESS --dport 80 -j DNAT --to-destination 172.31.0.2:80 ! -i br-3c90dbe5ce74: iptables v1.8.4 (legacy): host/network `REDACTED_IPV6_ADDRESS' not found
Try `iptables -h' or 'iptables --help' for more information.
 (exit status 2))

Steps to reproduce the issue:

  1. Run docker-compose up with the Compose snippet above and a valid IPv6 address in IP6_ADDR.


Output of docker version:

$ docker version
Client: Docker Engine - Community
 Version:           20.10.0-rc1
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        5cc2396
 Built:             Tue Nov 17 22:51:53 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.0-rc1
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       131bf7e
  Built:            Tue Nov 17 22:50:10 2020
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          1.4.1
  GitCommit:        c623d1b36f09f8ef6536a057bd658b3aa8632828
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Output of docker-compose version:

docker-compose version 1.27.4, build 40524192
docker-py version: 4.3.1
CPython version: 3.7.7
OpenSSL version: OpenSSL 1.1.0l  10 Sep 2019

Output of docker info:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.4.2-docker)

Server:
 Containers: 7
  Running: 4
  Paused: 0
  Stopped: 3
 Images: 28
 Server Version: 20.10.0-rc1
 Storage Driver: overlay2
  Backing Filesystem: btrfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: c623d1b36f09f8ef6536a057bd658b3aa8632828
 runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-54-generic
 Operating System: Ubuntu 20.04.1 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 15.59GiB
 Name: jupiter
 ID: RROU:WC3V:7XMK:H2BY:2ICR:6A4R:VMTP:RKAU:XHZU:LEY4:M2BM:JXR4
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: true
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
WARNING: No blkio weight support
WARNING: No blkio weight_device support

Docker daemon config

{
  "storage-driver": "overlay2",
  "experimental": true,
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",
  "features": {
          "buildkit": true
  }
}
@AkihiroSuda added area/networking kind/bug version/20.10 labels Nov 20, 2020
@AkihiroSuda added this to the 20.10.0 milestone Nov 20, 2020
@thaJeztah
Member

@arkodg PTAL; possibly related to moby/libnetwork#2572 ?

@thaJeztah added this to To do in 20.10 planning via automation Nov 20, 2020
@arkodg
Contributor

arkodg commented Nov 20, 2020

Yes @thaJeztah, it's related to that PR; will take a look into this.
The title is misleading: this issue seems to be specifically about binding an IPv6 address and port to a container port.
cc: @bboehmke

@bboehmke
Contributor

bboehmke commented Nov 20, 2020

It seems that at some point the detection of the IPv6 address is not working correctly and we get an iptables rule with mixed IPv4 and IPv6 addresses.
I will take a look into the change; hopefully I can find the cause of this issue.

btw: Is there an easy way to get this RC installed to do some tests?

@bboehmke
Contributor

I think the change in portmapper/mapper.go may be the reason for this issue.

Previously the port mapper was only used for IPv4, but it is now also used for IPv6, which causes trouble if ip6tables support is disabled but a mapping for an IPv6 address is requested. In that case the mapping is simply executed with iptables, which fails.

I will try to fix the port mapper so it only uses the right IP version. Will try to provide a PR shortly.
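
For illustration, a minimal Go sketch of the family check the port mapper needs before shelling out (iptablesCmdFor is a hypothetical helper, not the libnetwork code), so that an IPv6 mapping is never programmed with the IPv4 iptables binary:

package main

import (
	"fmt"
	"net"
)

// iptablesCmdFor is a hypothetical helper: it returns the firewall command
// that should program NAT rules for the given host address, ip6tables for
// IPv6 addresses and iptables for IPv4.
func iptablesCmdFor(hostIP net.IP) string {
	if hostIP.To4() == nil {
		return "ip6tables"
	}
	return "iptables"
}

func main() {
	for _, addr := range []string{"172.31.0.2", "fd00:2::2"} {
		fmt.Printf("%-12s -> %s\n", addr, iptablesCmdFor(net.ParseIP(addr)))
	}
}

With a check like this, a rule for an IPv6 host address would not be handed to the IPv4 iptables binary as in the error above.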

@arkodg
Contributor

arkodg commented Nov 20, 2020

It looks like we might have been using the userland proxy for this case earlier, and now it uses iptables, but we can't forward IPv6->IPv4 via iptables.

If we use the IPv6 container IP, then we could use ip6tables to perform the port forwarding.
@bboehmke - you can use curl -fsSL https://get.docker.com/ | CHANNEL=test sh to install the RC

@bboehmke
Contributor

With EnableIP6Tables enabled, ip6tables is used in the same situations in which iptables would be used.

Currently, if EnableIP6Tables is false, ip6tables should not be used at all.

I created a PR for libnetwork that may solve this issue: moby/libnetwork#2596

@arkodg
Contributor

arkodg commented Nov 20, 2020

  1. So it looks like this case was never working before.

  2. Added improved IP validation for port mapper (libnetwork#2596) will skip this case again, going back to the previous state (thanks for the quick PR @bboehmke).

  3. But the issue is that we got an IPv4 container IP due to https://github.com/moby/libnetwork/blob/535ef365dc1dd82a5135803a58bc6198a3b9aa27/portmapper/mapper.go#L262
     In the future, when we support IPv6 NAT, if the host IP is IPv6 we need to use the IPv6 address of the container as well as the ip6tables command (see the sketch after this list).
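
As a sketch of the future behaviour described in point 3 (containerAddrs, pickTarget and the sample addresses are hypothetical, not actual libnetwork code): when the published host address is IPv6, the mapping should target the container's IPv6 address and be programmed with ip6tables.

package main

import (
	"fmt"
	"net"
)

// containerAddrs holds both addresses a dual-stack container may have.
// This struct and pickTarget are for illustration only.
type containerAddrs struct {
	v4 net.IP
	v6 net.IP
}

// pickTarget chooses the container address and firewall command that match
// the family of the host IP being published.
func pickTarget(hostIP net.IP, c containerAddrs) (net.IP, string) {
	if hostIP.To4() == nil {
		// IPv6 host address: NAT to the container's IPv6 address with ip6tables.
		return c.v6, "ip6tables"
	}
	// IPv4 host address: NAT to the container's IPv4 address with iptables.
	return c.v4, "iptables"
}

func main() {
	c := containerAddrs{v4: net.ParseIP("172.31.0.2"), v6: net.ParseIP("fd00:2::2")}
	dst, cmd := pickTarget(net.ParseIP("fd00:1::10"), c)
	fmt.Println(cmd, "DNAT to", dst)
}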

@bboehmke
Contributor

bboehmke commented Nov 21, 2020

Regarding your 3rd point, I think that is not the issue. Based on the error message, the host IP is causing the problem.

The correct port mapper, based on the container IP, is selected in the allocatePort function (drivers/bridge/port_mapping.go), where the IPv6 port mapper should not run any iptables commands because the chain is not initialized (unless EnableIP6Tables is set to true).

So in this case the port mapper for IPv4 (with the IPv4 container IP 172.31.0.2) is called with an IPv6 host address, which causes the iptables command to fail.
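
To illustrate the mismatch, a minimal Go sketch of the kind of validation the fix adds (validateFamilies is a hypothetical helper, not the actual code in moby/libnetwork#2596): reject a mapping whose host and container addresses belong to different IP families before any iptables rule is attempted.

package main

import (
	"errors"
	"fmt"
	"net"
)

// validateFamilies is a hypothetical helper: it returns an error when the
// host and container addresses are not of the same IP family.
func validateFamilies(hostIP, containerIP net.IP) error {
	if (hostIP.To4() == nil) != (containerIP.To4() == nil) {
		return errors.New("host and container IP versions do not match")
	}
	return nil
}

func main() {
	// The mismatch from this issue: an IPv6 host address published to an IPv4 container address.
	fmt.Println(validateFamilies(net.ParseIP("fd00:1::10"), net.ParseIP("172.31.0.2")))
}

Running this prints the mismatch error, which is roughly the kind of failure the port mapper could report instead of the raw iptables error shown above.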

@chris-crone
Contributor Author

Checked with a master build and I think this is now fixed. Thanks!

@thaJeztah
Member

Thanks @chris-crone !

@chris-crone
Contributor Author

Reconfirming that this is fixed in 20.10.0-rc2 🎉 Thanks @bboehmke!
