IPv6 is not practically usable for production machines #13481

Closed
ghost opened this issue May 26, 2015 · 102 comments

Comments

@ghost commented May 26, 2015

As far as I can tell from the available documentation on Docker with IPv6, it doesn't seem to be designed in a way that is usable in practice.

1.) Enabling it requires specific Docker daemon startup options, but does anybody even start the Docker daemon manually on a production machine? Usually the init system handles this, and by the time I start using things it's already running. Am I supposed to restart the whole Docker daemon just to enable IPv6? Apparently there is no /etc/docker.conf or anything similar either, which would be an obvious place to enable it permanently.

2.) As far as I've read, the IPs are randomly assigned, but there is no NAT functionality available and these are actually public IPs. (Side note: to my knowledge, NAT is still possible with IPv6, albeit unusual, so this could at least be implemented as an option in the future.) Apparently I can't explicitly specify a static IPv6 address for a container either, even if I wanted to. All of this means there is no easy way to tell how a Docker container will be reachable from the outside, and I can't reliably assign a public domain to it. How can I run a public server like that?

Since it's 2015 now, the last reserves of new IPv4 addresses will run out in a few months, and people are already causing routing conflicts over spare IPv4 addresses (see recent news articles), it would be nice if IPv6 support could be fixed to be usable in practice.

@ghost (Author) commented May 28, 2015

Since I need to deploy my server in a few days, it would be nice if someone knows a workaround that is a bit simpler than setting up the whole bridged network interface and all the routing manually (that seems to be the commonly suggested "solution" on various websites).

@thaJeztah (Member) commented May 30, 2015

@Jonast perhaps you could discuss your use case in the #docker-network IRC channel with the libnetwork maintainers. We're currently in feature freeze for the 1.7 release, but discussing it could help the maintainers get a better understanding of what you think is needed.

@ghost (Author) commented May 30, 2015

It's pretty simple, really: if you assign public IPs to the containers, I NEED a way to give a container a predictable IP (predictable from the image, not from container creation), e.g. by allowing me to set it myself through docker run/docker-compose.yml.

If you really want to keep random IPs, there needs to be proper NAT in front. No NAT and random public IPs is just a weird combination that isn't of any practical use, and that's the ONLY way IPv6 with Docker currently operates.

And it should be possible to enable IPv6 Docker-wide without restarting the whole Docker daemon or disrupting IPv4 connections in any way.

@ghost (Author) commented May 30, 2015

Also, all solutions to this currently seem to involve complex custom scripts and rolling my own completely manually configured network bridge for my Docker containers. I could do that, of course, but it's just a waste of time, since Docker could provide a useful implementation to start with.

@thaJeztah (Member) commented May 30, 2015

If this is about assigning an IP address, isn't that the same as moby/libnetwork#161? I see you commented on that already.

@ghost (Author) commented May 30, 2015

Yup, it is! I just wanted to point out that IPv6 support overall (the way it is activated, the way IPs are assigned) seems rushed and not very well designed, as if nobody actually used it in practice, and that for it to be useful, all of this should be done differently.

@thaJeztah (Member) commented May 30, 2015

@mrjana @mavenugo any additional thoughts on this? Just checking if improvements are on the roadmap here.

@ghost (Author) commented Jun 1, 2015

Guys, does no NAT also mean that all ports opened on any container with IPv6 enabled will be exposed to the internet, without any use of docker run's -p?

Or did you at least include some basic firewall to prevent that, so that port publishing still works as intended?

@jansauer commented Jun 3, 2015

@Jonast I like your last point. So far it seems like the plan is to just give every container a globally routable IPv6 address on the docker0 interface. This may lead to security problems, because image authors never consider ports being accessible when not explicitly published.

At the moment the simple task of running two web servers on the same port with different ip addresses is far too difficult.

@jansauer commented Jun 4, 2015

@Jonast I just realised that you can predict the IPv6 address of a container by using --mac-address.
The IPv6 address is simply a combination of the prefix set by --fixed-cidr-v6 and the MAC address set with --mac-address:

--fixed-cidr-v6="2001:db8:1::/64" (on the daemon)
--mac-address="02:42:ac:11:ff:ff" (on the run command)
result in: 2001:db8:1:0:0:242:ac11:ffff
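That combination can be sketched in a few lines of Python, assuming (per the example above) that the 48-bit MAC is placed directly in the low bits of the /64 prefix, with no EUI-64 ff:fe insertion:

```python
import ipaddress

def container_ipv6(prefix_cidr: str, mac: str) -> str:
    """Predict a container's IPv6 address from the daemon's --fixed-cidr-v6
    prefix and the container's --mac-address, assuming the scheme described
    above (MAC bits OR'd into the low 48 bits of the prefix)."""
    net = ipaddress.IPv6Network(prefix_cidr)
    mac_bits = int(mac.replace(":", ""), 16)  # MAC as a 48-bit integer
    return str(ipaddress.IPv6Address(int(net.network_address) | mac_bits))

print(container_ipv6("2001:db8:1::/64", "02:42:ac:11:ff:ff"))
# -> 2001:db8:1::242:ac11:ffff (the compressed form of 2001:db8:1:0:0:242:ac11:ffff)
```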

Regarding your point about Docker startup options: this is more a job for your distribution than for the Docker project itself. Fedora Atomic Host, for example, uses systemd with environment variables, and I can set options for the Docker daemon in /etc/sysconfig/docker-network.

In the end I'm still on your side: using IPv6 with Docker should be the default and not a special use case.
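(For reference: later Docker releases added a daemon configuration file, /etc/docker/daemon.json, so the daemon flags discussed in this thread can be made persistent without touching init scripts. A minimal sketch; the prefix is an example value:)

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
```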

@DominicBoettger commented Jul 8, 2015

--mac-address is nearly unusable: when restarting the container, Docker says that the address is already assigned, and restarting the Docker daemon is then needed. I think this is really a major blocker for production use!

@thaJeztah (Member) commented Jul 8, 2015

ping @mrjana @mavenugo

@mavenugo (Contributor) commented Jul 19, 2015

@DominicBoettger @thaJeztah the issue with --mac-address, --fixed-cidr-v6 and the "address already in use" error is resolved via #10774. Please check the latest Docker master and confirm.

@darconeous commented Sep 1, 2015

I fail to understand why IPv6 can't work exactly the same way it works for IPv4: you assign a private prefix (fd00:dead:beef::/64, for example) to the docker interface and give each container an address with this prefix. You then use NAT to get packets into and out of the container, just as is currently done for IPv4.

There are few cases where I think using a NAT for IPv6 is a reasonable thing to do, but I think this case is the exception.

This doesn't preclude you from giving public IPv6 addresses to your containers if that is really what you want to do, but it will just work for most containers. And that's the whole idea, right? It should do something reasonable by default that just works in a way that people would expect. If you want more control, you have to dive in. Forcing everyone to do all of these extra steps to get IPv6 working is just a PITA.

You could have an option (like --supports-v4-only) which reverts back to the current behavior for containers which only support IPv4 (so that you can still access them via IPv6 on the host).
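In rough form, the proposed scheme would mirror what Docker already does for IPv4 with iptables. A sketch only, not Docker's actual behavior; the ULA prefix is an example (IPv6 NAT requires a reasonably recent kernel):

```shell
# Give the bridge a private (ULA) prefix, then masquerade outbound IPv6
# traffic from containers, mirroring the IPv4 docker0 setup.
ip -6 addr add fd00:dead:beef::1/64 dev docker0
ip6tables -t nat -A POSTROUTING -s fd00:dead:beef::/64 ! -o docker0 -j MASQUERADE
```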

@timcoote commented Sep 29, 2015

@darconeous surely firewalls would make more sense than NAT? I'd have thought that the host would get a /64 and allocate addresses to the containers from that network.

Shouldn't the address construction based on --fixed-cidr-v6 and --mac-address be based on SLAAC if it's meant to map to the MAC address, or use the privacy extensions otherwise? In either case, 'finding' containers would depend on using DNS(?)

@ghost (Author) commented Sep 29, 2015

@timcoote it doesn't really matter whether it's NAT or not, but it does matter that Docker does it automatically, so that the -p/--publish option of docker run behaves similarly, including protection of anything that is not published explicitly. Everything else is just asking for people to run into unexpected security pitfalls.

@Frogging101 commented Sep 29, 2015

It's pretty simple, really: if you assign public IPs to the containers, I NEED a way to give a container a predictable IP (predictable from the image, not from container creation), e.g. by allowing me to set it myself through docker run/docker-compose.yml.

If you really want to keep random IPs, there needs to be proper NAT in front. No NAT and random public IPs is just a weird combination that isn't of any practical use, and that's the ONLY way IPv6 with Docker currently operates.

This is the point I'm interested in too. Say I want to run a web server within Docker and have it be IPv6-compatible. How am I meant to do this if the IP address is a different random value every time I start it up? I can't route that, I can't use DNS with that; there's honestly not much I can do with that at all, aside from outgoing connections.

I understand that part of the purpose of IPv6 is to make NAT unnecessary, but these are containers. Containers are meant to be isolated from each other and from the host, and are also meant to be highly versatile in terms of access control. Giving Docker containers their own random public IPv6 addresses does make NAT unnecessary, but it also makes the containers useless in an IPv6 configuration, as well as insecure, because there is no firewall on them by default. The expectation is that the host acts as access control: if ports aren't published to the host, they won't be accessible. This, as I understand it, is not the case with IPv6 in Docker.

@timcoote commented Sep 30, 2015

I'm not clear why SLAAC, or even DHCPv6, would not solve the problems:

  • The router for the subnet on the docker host would control access to container ports/IP addresses, which are predictable (in both cases there could be a simple mapping between the MAC address and the host identifier of the allocated IP address; the prefix would depend on the prefix delegated to the docker host).
  • The docker host would need to be responsible for updating any network prefix changes in DNS, in much the same way as it would on an IPv4 subnet.

Maybe I'm not understanding the problem description?

NAT is not a firewall; it just makes it much harder to ensure that the designed security has been implemented and is still working. It also subverts mechanisms such as TLS for end-to-end confidentiality and integrity, and significantly complicates container-to-container connectivity, as it can introduce arbitrarily deep NATting between the endpoints.

If the containers are intended to be used both inside a firewalled zone and directly on the internet, then they will need suitable security configuration. There is also an argument that, even within a firewalled zone, good practice would be to not trust other services/containers.

@ghost (Author) commented Sep 30, 2015

I am the original reporter, and as you can see above, no, the point is not to ask for NAT. However, not using NAT has various consequences, which Docker has apparently chosen to ignore at its users' expense and which really should be fixed.

Those are:

  • Don't use random public IPs. Seriously. Nobody can point DNS at that; it's just useless. Allowing us to specify a static IP is the minimum; Docker should probably assign a fixed IPv6 address to a container once it's created.
  • Make sure -p/--publish is still safe. And no, telling people "well, even within a firewalled zone, good practice would be to not trust other services/containers" is NOT helpful. There is no big fat red warning that someone decided to drop the ball on this and unexpectedly expose all ports by default, so unless you put that everywhere, you ARE going to have people who miss it, and you don't want that.

Can we now actually consider fixing this? NAT would have been one way to address some of the problems here; you don't like it, that's understandable, and I agree there are other ways that are probably better. But we shouldn't forget the actual problems, which haven't magically disappeared and really need to be addressed.

Edit: NAT is one simple solution to those problems, which is why people keep bringing it up. But it ruins, for example, the simple use case of running multiple webservers on port 80 at different IPs, which will be a nice thing to do once people stop relying on IPv4, so I agree it's not the wisest future-proof choice. However, that doesn't mean we shouldn't find other ways to fix these things!

@Frogging101 commented Sep 30, 2015

Couldn't have said it better myself. Running two mail or web server instances on one machine with different IP addresses is exactly the sort of use case that containers should excel at. And they do, except where they fall flat, which is the networking.

And yeah, I wasn't saying we should keep using NAT. I'm saying that NAT has the side effect of solving certain problems, and if we're taking it out of the picture, then we need proper solutions to those problems to fill the void.

@cpuguy83 (Member) commented Sep 30, 2015

Networking has been undergoing a major overhaul.
The reality is that the network backend in Docker prior to 1.7 was very rigid and could really only be used one way.
In Docker 1.7 the networking was ripped out and replaced with libnetwork, which has been iterated on since then.

Docker 1.9 should include some major changes to how you can consume networking in containers, and will include for the first time the Container Network Model introduced here: https://blog.docker.com/2015/04/docker-networking-takes-a-step-in-the-right-direction-2/

Regarding stable networking (i.e., the assigned IP stays with the container after a restart): moby/libnetwork#489

For static assignment moby/libnetwork#161

All that to say, this stuff is being worked on, and the libnetwork team is doing an amazing job.

@narqo (Contributor) commented Jul 19, 2016

Speaking about the current Docker for Mac / Windows (e.g. -rc2-beta17): is an IPv6-only network supposed to be usable (in any way) from within the container?

@justincormack (Contributor) commented Jul 20, 2016

We would like to support IPv6 for Docker for Mac and Windows; it is on the roadmap. Because of the setup with a VM, it will probably have to be implemented with a private IPv6 range in the VM and IPv6 NAT on the host, as that is how IPv4 works. So probably not quite the same as this issue suggests, since we would not even have a single real IPv6 address in the VM; that would be on the OS X or Windows host.


@jgunthorpe commented Jul 20, 2016

@justincormack At least on Linux you can also use proxy ARP (well, proxy neighbour discovery) with IPv6 to capture a routable SLAAC address and route it to the tunnel device for the VM. The same can be done with IPv4, but it is more difficult to get an on-demand IPv4 address.

@justincormack (Contributor) commented Jul 20, 2016

That is possible too; then we would need IPv6 NAT in the VM as well.


@jgunthorpe commented Jul 20, 2016

No, you don't need NAT when using proxy ARP.

You still need the private subnet with the VM, but those IPs are only used for packet routing and never appear in any publicly destined packets (really, they are only needed because the VM will be pretending to be Ethernet, not a tunnel device). For IPv4 this can be any private range; for IPv6 you can use the standard link-local addresses.

Once the private IP is set up, you assign the public IP to the VM's interface with a /32 or /128 subnet and set the VM's routing table to default-route all packets to the host's IP in the private range, with a source of the public IP. This gets traffic out of the VM using the public IP.

On the host side, the public IP is captured with proxy ARP and routed to the host side of the virtual VM eth device, directed to the VM's private IP. This is the point where you'd inject iptables as a non-NAT stateful firewall. This gets traffic into the VM using the public IP.

The host simply forwards the public IP it captures with proxy ARP/ND to the tunnel device, and forwards packets from the public IP on the tunnel device back to the network. Netfilter provides the stateful firewall, but that is for security, not for correct forwarding.

This is the basic proxy ARP configuration, with the VM's virtual ethernet device pair substituted for a real ethernet network.

You don't need to use proxy ARP to get a SLAAC address and then NAT that. If you want to use NAT, then just directly NAT IPv6 the same way IPv4 is being NAT'd today, as others in the thread have explained. The purpose of proxy ARP would be to give the docker container an actual public IPv6 address without having to set up special IPv6 subnets (which is largely impractical). It is similar to what ipvlan does, except it is compatible with a VM environment.
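On Linux, the host side of the IPv6 variant of this (proxy neighbour discovery) can be sketched as follows; the address and interface names are examples only, not a tested recipe:

```shell
# Answer neighbour-discovery requests for the VM's public IPv6 address
# on the upstream interface, then route that address to the VM-facing
# virtual ethernet device. A stateful ip6tables firewall can be layered
# on top of the FORWARD path; no NAT is involved.
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv6.conf.all.proxy_ndp=1
ip -6 neigh add proxy 2001:db8:1::100 dev eth0
ip -6 route add 2001:db8:1::100/128 dev veth-vm
```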

@timcoote commented Jul 21, 2016

@jgunthorpe I'm not clear why a host running docker (your laptop, in your example) wouldn't just request an IPv6 prefix and allocate addresses to services on containers from that address space in a deterministic way, using its firewall to constrain reachability while allowing routability. In normal operation I'd expect each container to have several IPv6 addresses, depending on the route to the outside world (e.g. wifi, ethernet, LTE). Am I thinking of the wrong problem?

@bltavares (Contributor) commented Jul 21, 2016

One example is DigitalOcean, which does not allocate a prefix for each droplet and gives you just a few IPs.

@timcoote commented Jul 21, 2016

Not according to this: http://do.co/29YFARS. A droplet can request a /64, like most hosts.

@bprodoehl commented Jul 21, 2016

DigitalOcean gives you 15 usable addresses in a /64. I've asked support how I can use or request more than 15, and they said they don't support that.

@jgunthorpe commented Jul 21, 2016

@timcoote, I don't know what environments you work with, but in most standard IPv6 deployments there is no way to 'request a prefix', AFAIK.

Typically an Ethernet segment is assigned a /64, and the hosts on that segment choose addresses from the /64 pool and cannot go beyond that. Consider something like my corp WLAN, or a coffee shop serving IPv6.

You only get to request full 128 bit addresses via SLAAC using the prefix assigned to the segment. This can be used with ipvlan or proxy ARP as I described above.

Arranging things so that a single host has an entire prefix delegated to it is hard. It requires cooperation from the net-admin responsible for the segment. It requires that every host on the segment have a route entry for my docker box's prefixes (for efficiency), and that the central router(s) have the route entry too. Typically in a datacenter one would run a routing daemon on each VM/docker server to propagate all the IPv6 routes and provide prefix mobility. Presumably DigitalOcean is doing that to provide a /64 to containers.

I'm not able to do that on a WLAN/laptop type of environment.

@wmark commented Jul 21, 2016

@jgunthorpe It's actually standard in the data centers I work with to use prefix delegation. That is, you have a DUID and use the PD option with DHCPv6. (You have a /48, which you split according to your needs; for example, a server gets a /56 to delegate /64s to clients.) YMMV.

Anyway, flags --ipv4= and --ipv6=iiii/mmm (mmm=128 for stable addresses) for the run command would be excellent: not only to define a default address for exposed ports, but to set the source IP address, too.

@timcoote commented Jul 21, 2016

That's how it works for domestic deployments, too. The typical house should get a /56 to carve up into 256 different physical networks so that different media qualities can easily be handled (e.g. keep the video streaming away from the low power wireless networks). The mechanisms to automate this process are under the HOMENET WG of IETF.

This isn't strictly an IPv6 issue. It's to do with the increasing address density of ubiquitous computing. However, since IPv4 cannot really handle double NAT (or even single NAT in many scenarios) with adequate performance, it only becomes apparent when you start to look at IPv6 deployments. There will be issues, like picking a route to another service based on pricing, that will float up into the application space.

@Frogging101 commented Jul 21, 2016

Anyway, flags --ipv4= and --ipv6=iiii/mmm (mmm=128 for stable addresses) for the run command would be excellent: not only to define a default address for exposed ports, but to set the source IP address, too.

Isn't that pretty much how it already works? You use docker network create to define a network with a subnet, and then you can use --ip or --ip6 to assign the container an address within that subnet.
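That flow, with placeholder subnet, address, network name, and image:

```shell
# Create a user-defined network with an IPv6 subnet, then start a
# container with a fixed IPv6 address inside it (values are examples).
docker network create --ipv6 --subnet 2001:db8:1::/64 v6net
docker run -d --network v6net --ip6 2001:db8:1::10 nginx
```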

@jgunthorpe commented Jul 21, 2016

@timcoote that still conflates deployment from the ISP with deployment within the site. I can't foresee turning on prefix delegation for my corp WLAN, for instance, but I still want 'docker build' to work out of the box in that environment with IPv6 connectivity...

@timcoote commented Jul 22, 2016

@jgunthorpe I would expect DHCPv6-PD to work on a corporate LAN. In any case, a ULA prefix could be handed out by the docker daemon. One of the challenges that Homenet addresses is how to allocate prefixes on an unconnected network, so that things work before the internet is connected, or when the connection is lost. I think a lesson learned there was the need for ULAs.

@jgunthorpe commented Jul 22, 2016

@timcoote My understanding of DHCPv6 PD is that you should not allow it for untrusted clients, so no, it will not be enabled on my cell phone, in a coffee shop, or on corp wifi. If docker is the only user, you can bet it will not be available widely...

How does a ULA help? We need public connectivity inside the container, not a ULA.

@timcoote commented Jul 23, 2016

@jgunthorpe on the internet, there's no such thing as a trusted client ;-)

I'd assumed that you were developing on your laptop, using VMs with containers on them, in a coffee shop. So your laptop is a PD client of the wifi; the dev VMs are PD clients, either of something like Homenet on your laptop, or of the wifi. Docker on the VM would handle the subnet for hosted containers. I'm not aware of any particular vulnerabilities in DHCPv6-PD. Where have you seen these mentioned?

As far as I'm aware, PD is just part of DHCPv6. It's not directly available in LTE implementations. There's some work going on to enable subnets or individual addresses to be handed out by hotspots to local devices (including containers running on the phone).

The ULA thought was for when you need to work while not connected to the internet.

@timcoote commented Jul 23, 2016

@Jonast NAT is not nice ;-) Don't confuse IP addresses with hosts. A host can have many IP addresses: the mean number in a corporate datacentre is 4.2 with IPv4 (lower for Windows, higher for test machines). I'd expect that to grow with IPv6; it certainly does for end-user devices.

This increase is a function of increased networking demand rather than of IPv6. NATting has meant that it's not very practical to divide networks up into sensible groupings based on physical medium characteristics (e.g. low-power networks for battery-powered devices separate from low-latency streaming networks, free wifi, paid-for LTE).

I don't think it's a good assumption that sets of services are on the same IP address, is it? You'd lose the flexibility to split them up or recombine them.

Non-reachability is a function of firewalls, not of routers, and the usual default behaviour for subnets is that they are not reachable (but they are routable). There's a significant hidden cost in managing the security of NATted environments, as much of the environment is invisible, so there's no awareness of the actual vulnerabilities that have crept in.

I'm sure that Docker will support NAT6, but I don't think it's a good idea to use it in normal circumstances.

@jgunthorpe commented Jul 23, 2016

@timcoote RFC 3633 discusses the requirement. DHCP is often used as a point-to-point protocol, so trust is often established by controlling one side of the 'cable' (e.g. the FTTH link); for something like WLAN, the recommendation is to use RADIUS/IPsec with the DHCP protocol. This is obvious: otherwise an attacker can trivially exhaust the small PD pool, creating a DoS.

@Jonast You are right: docker already has the bits to manually configure PD, and yet this thread still exists. The desire is for something automatic, universal, and 'works out of the box', which can arguably only be done via NATv6. For deployment, an operator should pick between NATv6, ipvlan, and DHCPv6-PD; they all have various trade-offs.

@lcolitti commented Aug 3, 2016

NAT is needlessly limiting when a /64 block has on the order of 10^19 addresses. Why have 100 containers, but put them all on the same IP address and then go crazy mapping ports? IPv6 has the potential to make things much simpler:

  • Use a different public IPv6 address for every container; you'll never run out.
  • Run a separate SSH daemon in every container, if you like; yes, they can all use port 22 on their own address. Same goes for web servers or anything else.
  • Use firewalling instead of NAT. Instead of publishing ports, allow IPv6 address/port pairs, or entire IPv6 addresses for hosts that have their own firewalls. The security properties are the same, but the management overhead is reduced, and protocols that don't work well behind NAT, like SIP or some forms of WebRTC, now have a much easier time.
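The "allow instead of publish" model in the last point can be sketched in raw ip6tables terms; a rough illustration only, with example address and port, not Docker's actual rule set:

```shell
# Default-deny forwarding to containers, allow return traffic, then open
# a single address/port pair, mirroring -p's semantics without any NAT.
ip6tables -P FORWARD DROP
ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -d 2001:db8:1::10 -p tcp --dport 80 -j ACCEPT
```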

If you look at draft-herbert-nvo3-ila-02, you'll see that one IPv6 address per container is what leading operators like Facebook are doing. (In fact, they go a step further, using the bottom 64 bits of the IPv6 address as a container ID and changing the top 64 bits as the container moves around, but let's keep it simple for now.)

This does of course require the ability to assign static IPv6 addresses to containers. That's critical, I think.

As for IPv6 address availability: RFC 7934 recommends that every customer get access to as much IPv6 address space as they need. For isolation, at least a /64 per customer is recommended; the customer does whatever they want with it. The datacenter operator can implement this via DHCPv6-PD if they want, or at worst with an RA. Each container can do ND proxying to form an address out of the /64, or you can just use bridging and SLAAC. Yes, this will work on a laptop at home.

If a VM provider doesn't assign a user at least a /64, they're needlessly limiting their customers, and said customers should ask them to follow RFC 7934.

@jgunthorpe commented Aug 3, 2016

@lcolitti the downside of SLAAC (e.g. via ipvlan, bridging, etc.) is that it does not set itself up automatically in a multi-interface situation, e.g. my laptop with LAN and wireless. Docker has to guess which interface should be the one to do SLAAC on, and then everything breaks if the network reconfigures (e.g. switching from wired to wireless). NATv6 does not have these downsides. My laptop is not a datacenter; docker should not treat it like one.

@Frogging101 commented Aug 4, 2016

Shouldn't this be a separate bug report? The original problem was that IPv6 was useless on production servers, due to the fact that there was no supported way to get a predictable, routable, DNSable IPv6 address. That issue has since been solved with the new "docker network create" and "docker run --ip" options, since it is now possible to statically assign a container its own IP address.

IPv6 NAT is a valid, but different feature request, and I think it would be tidier to start with a fresh bug report for that one.


On August 4, 2016 8:35:19 AM EDT, Jonas Thiem notifications@github.com wrote:

@lcolitti Nobody here argues NATv6 is the best solution for all use cases. However, I think it has been sufficiently demonstrated that there is a need for NATv6 for some people. That doesn't mean you need to use it, and certainly nobody wants NATv6 to be the only available solution, or an irreversible replacement for the current behavior that cannot be turned off.

For certain environments, especially smaller servers or laptops, NATv6 would be a huge improvement over the current sole available behavior, which is why it should be an option. That datacenters run better with custom iptables and fully customized routing instead of NAT shouldn't come as a surprise to anyone.


You are receiving this because you were mentioned.
Reply to this email directly or view it on GitHub:
#13481 (comment)

@thaJeztah (Member) commented Aug 4, 2016

@Jonast sounds good to me. Following the discussion above, can this issue be closed?

@thaJeztah thaJeztah removed this from the 1.13.0 milestone Aug 4, 2016
@thaJeztah thaJeztah removed the status/needs-attention label Aug 4, 2016
@CMCDragonkai commented Feb 3, 2017

Is there any documentation regarding IPv6 for Docker for Windows' vEthernet (DockerNAT) (the native one that uses Hyper-V)? https://docs.docker.com/engine/userguide/networking/default_network/ipv6/ assumes Linux.

@valentin2105 commented May 5, 2017

#32675

Docker is now unusable in an IPv6-only network.
