Proposal: Better IPv6 support #1176

Open
hairyhenderson opened this Issue May 15, 2015 · 7 comments

@hairyhenderson
Contributor

hairyhenderson commented May 15, 2015

Right now, the IPv6 story with Docker Machine is pretty sketchy:

  • not all drivers have a way of enabling IPv6 support on hosts (even when the provider allows this)
  • even for the drivers where you can enable IPv6, you must also remember to use --engine-opt ipv6=true to configure Docker to support IPv6
  • even when Docker is configured to support IPv6, you need to also use --engine-opt fixed-cidr-v6=... to set the net block

But there's a problem with this last bit: providers will usually assign a /64 only when the host is provisioned. So even if a user wanted to specify --engine-opt fixed-cidr-v6=..., they couldn't know a valid value ahead of time.

So, here's the proposal:

  1. Add IPv6 enablement support to the drivers that are missing it
  2. When IPv6 support is enabled (for example, when --digitalocean-ipv6 is set), automatically figure out the assigned /64, and set --ipv6 and --fixed-cidr-v6 appropriately on the engine.
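Step 2 could look roughly like the sketch below: derive the /64 from an address the provider assigned to the host, then feed it through --engine-opt. Everything here is hypothetical (the address is made up, and real code would query the driver); it also assumes the assigned address spells out at least its first four hextets, which is typical for a provider-assigned /64.

```shell
# Sketch only: derive the /64 network from a provider-assigned address.
# The address below is a made-up example; real code would query the driver.
addr="2a03:b0c0:2:d0::42:1"
prefix="$(printf '%s' "$addr" | cut -d: -f1-4)::/64"
echo "$prefix"

# The provisioner could then (hypothetically) pass it along, e.g.:
#   docker-machine create --driver digitalocean --digitalocean-ipv6 \
#     --engine-opt ipv6=true --engine-opt "fixed-cidr-v6=${prefix}" my-v6-host
```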

This is mostly just late-night brainstorming, and I realize there are gaps... But I think this is doable!

/cc @ehazlett @nathanleclaire @sthulb - WDYT?

(please also CC any others who have more clue than me about IPv6 in Docker)

@sthulb

Contributor

sthulb commented May 15, 2015

I think we should support IPv6. The behaviour you're specifying would be a good start.

What do we need to do to support this?

@hairyhenderson

Contributor

hairyhenderson commented May 15, 2015

@sthulb - Well, I was thinking of starting a PR soon for the second part. It'll probably end up being two (or more) PRs though, since all the drivers should be audited to make sure there's a --ipv6 flag.

What I was thinking was to add a GetIPv6CIDR() method to the Driver interface, and that'd make things a bit simpler.

I'm not sure if it's a good idea to just give Docker the host's /64. I can't think of any reason why not, but I'm not totally familiar with how Docker deals with it.

@ehazlett

Member

ehazlett commented May 15, 2015

/cc @mavenugo -- ^

@JeNeSuisPasDave

JeNeSuisPasDave commented Feb 18, 2017

What is the priority on this? Is there some design or technical reason why this shouldn't be pursued?

My interest in this is that I can't use the virtualbox driver to create a docker host machine that properly provisions IPv6 interfaces, which means I can't use docker-machine to help me investigate Docker's IPv6 support and behavior.

@hairyhenderson

Contributor

hairyhenderson commented Feb 18, 2017

@JeNeSuisPasDave It's been a while since I logged this, and a while since I've hacked on docker-machine, but from what I recall it is possible to provision IPv6 with docker-machine and virtualbox.

The key is that you need to set --engine-opt ipv6=true and --engine-opt fixed-cidr-v6=<something>, and I'm pretty sure you need your host already configured with a routable v6 address. For the fixed-cidr-v6 option, you can assign your host's CIDR block - in essence putting containers on the same network as the host.

I haven't had a chance to play with docker-machine's virtualbox support in a while (I use Docker for Mac for local setups), so I'm not sure if you'll run into problems. But, if you do, it'd be useful to update this issue with your findings 🙂.

@JeNeSuisPasDave

JeNeSuisPasDave commented Feb 23, 2017

I haven't been able to get it working, but maybe I don't understand the requirement "have your host already configured with a routable v6 address". I'm just trying to get the container network to be routable and the containers to use IPv6 among themselves, plus one container providing an exported IPv6 address (i.e. a web server container).

First, the version info:

$ docker --version
Docker version 1.12.3, build 6b644ec
$ docker-machine --version
docker-machine version 0.8.2, build e18a919

Next, I did this command to create a Docker VM:

#! /bin/bash
#
# Create a new VirtualBox virtual machine that hosts docker
# Machine name will be 'dev-dkrv6'
#
docker-machine create --driver virtualbox \
  --virtualbox-cpu-count 2 \
  --virtualbox-memory "4096" \
  --virtualbox-disk-size "20000" \
  --engine-opt ipv6=true \
  --engine-opt fixed-cidr-v6="fc00::d0c:0:0:0:1/64" \
  dev-dkrv6

The output was:

Running pre-create checks...
(dev-dkrv6) Default Boot2Docker ISO is out-of-date, downloading the latest release...
(dev-dkrv6) Latest release for github.com/boot2docker/boot2docker is v1.13.1
(dev-dkrv6) Downloading /Users/datihein/.docker/machine/cache/boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v1.13.1/boot2docker.iso...
(dev-dkrv6) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
Creating machine...
(dev-dkrv6) Copying /Users/datihein/.docker/machine/cache/boot2docker.iso to /Users/datihein/.docker/machine/machines/dev-dkrv6/boot2docker.iso...
(dev-dkrv6) Creating VirtualBox VM...
(dev-dkrv6) Creating SSH key...
(dev-dkrv6) Starting the VM...
(dev-dkrv6) Check network to re-create if needed...
(dev-dkrv6) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env dev-dkrv6

That produced this virtualbox host-only network on my Mac (my host):

vboxnet0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	ether 0a:00:27:00:00:00 
	inet 192.168.99.1 netmask 0xffffff00 broadcast 192.168.99.255
	inet6 fe80::800:27ff:fe00:0%vboxnet0 prefixlen 64 scopeid 0xa 
	inet6 fc00:0:0:d0c::1 prefixlen 64 
	nd6 options=1<PERFORMNUD>

And a virtual machine with this ifconfig:

Boot2Docker version 1.13.1, build HEAD : b7f6033 - Wed Feb  8 20:31:48 UTC 2017
Docker version 1.13.1, build 092cba3
docker@dev-dkrv6:~$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:40:F4:72:D6  
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::1/64 Scope:Link
          inet6 addr: fc00:0:0:d0c::1/64 Scope:Global
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr 08:00:27:80:C0:F3  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe80:c0f3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:755 errors:0 dropped:0 overruns:0 frame:0
          TX packets:451 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:137426 (134.2 KiB)  TX bytes:123492 (120.5 KiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:EE:78:2F  
          inet addr:192.168.99.101  Bcast:192.168.99.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feee:782f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:41 errors:0 dropped:0 overruns:0 frame:0
          TX packets:55 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:6372 (6.2 KiB)  TX bytes:58787 (57.4 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Now if I start a container (docker run --rm -i -t alpine:3.4 /bin/ash) and do ifconfig in that container, I see:

/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02  
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2%32608/64 Scope:Link
          inet6 addr: fc00::d0c:0:242:ac11:2%32608/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1792 (1.7 KiB)  TX bytes:508 (508.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1%32608/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

If I then ping those addresses from the Docker host VM, I see:

docker@dev-dkrv6:~$ ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.058 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.080 ms
^C
--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.058/0.069/0.080 ms
docker@dev-dkrv6:~$ ping fe80::42:acff:fe11:2
PING fe80::42:acff:fe11:2 (fe80::42:acff:fe11:2): 56 data bytes
^C
--- fe80::42:acff:fe11:2 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
docker@dev-dkrv6:~$ ping fc00::d0c:0:242:ac11:2
PING fc00::d0c:0:242:ac11:2 (fc00::d0c:0:242:ac11:2): 56 data bytes
64 bytes from fc00::d0c:0:242:ac11:2: seq=0 ttl=64 time=1.733 ms
64 bytes from fc00::d0c:0:242:ac11:2: seq=1 ttl=64 time=0.098 ms
64 bytes from fc00::d0c:0:242:ac11:2: seq=2 ttl=64 time=0.219 ms
^C
--- fc00::d0c:0:242:ac11:2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.098/0.683/1.733 ms
docker@dev-dkrv6:~$ 

The container /etc/hosts looks like:

# cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.2	92f99ad7254c

I expected to also see fe80::42:acff:fe11:2 and fc00::d0c:0:242:ac11:2 in there.

Pinging the Docker host VM from my Mac looks like:

ping6 fc00:0:0:d0c::1
PING6(56=40+8+8 bytes) fc00:0:0:d0c::1 --> fc00:0:0:d0c::1
^C
--- fc00:0:0:d0c::1 ping6 statistics ---
6 packets transmitted, 0 packets received, 100.0% packet loss

So there are several problems:

  1. I cannot ping the IPv6 addresses of the docker container from the host (Mac) (this is probably because there is no IPv6 route to the fc00::d0c:* addresses from the host ... not sure how to make that happen)
  2. I cannot ping the IPv6 addresses of the Docker VM from the host (Mac)
  3. The container doesn't appear to be adding the IPv6 address(es) to DNS or /etc/hosts.
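For problem 1, one untested idea (my own assumption, not something confirmed in this thread) would be a static IPv6 route on the Mac sending the container /64 via the VM's eth1 link-local address on vboxnet0. Note from the ifconfig output above that the Mac's vboxnet0 and the VM's docker0 both hold fc00:0:0:d0c::1, so the addressing overlaps and a different fixed-cidr-v6 may be needed for this to work at all. The snippet only prints the candidate command, with values copied from the output above:

```shell
# Hypothetical, untested: print a route command that would send the container
# /64 via the VM's eth1 link-local address (values copied from the output above).
vm_ll="fe80::a00:27ff:feee:782f"   # VM eth1 link-local address
cidr="fc00:0:0:d0c::/64"           # the fixed-cidr-v6 network
echo "sudo route -n add -inet6 ${cidr} ${vm_ll}%vboxnet0"
```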
@rubdos

rubdos commented Jun 23, 2018

FYI, a use case: I can (for now) live with link-local IPv6, so it's kind of possible to do (although not very pretty).
