A More Usable IPv6 Network Approach #45296
Comments
I would really appreciate this change; I have been having issues using IPv6 addresses for my current project as well.
Just to preempt the question: this would likely require the main interface to be able to participate in a network bridge (related), which means (sadly) that the machine's address would have to be moved from its primary interface (e.g. eth0 or enp1s0) to a bridge interface. This step would be annoying for users, but would only need to be done once, and every other step would be much more sane. The corresponding Docker configuration could simply indicate which bridge interface to participate with.
This may work on standalone Docker; I am a little worried about swarm mode.
@yorickdowne Does using bridged networking (as opposed to routing or something else) fundamentally make this problematic?
I admit I do not know. Swarm mode uses overlay networks; how that interacts with the rest of its networking, I am unsure. I wanted to raise it as something to take into account, though. I generally like the idea of not using NPT for IPv6, so I get where you're coming from; if NPT is required, it wouldn't be the end of the world. The current situation is not truly usable, particularly in a home / PD setting; fully agreed.
@bradleypeabody
Depending on what you define as outbound requests exactly, I don't think it will work out of the box beyond the on-host Docker network. The host running Docker would not automatically announce a container IP from the Docker-network range on its LAN interface for any responses back to the host/container. IIUC, neighbour discovery solicitations will go unanswered when communicating outside of the host, on the LAN or further, because these addresses aren't neighbours on that interface (they're one hop away, on the Docker bridge).

In order to bridge the gap to the Docker network bridge, without requiring a separate routeable subnet from the upstream provider/router, the best one can do is proxy the neighbour discovery packets. This is analogous to proxy-ARP in IPv4. systemd-networkd can already do this; one would have to include every individual address in the IPv6ProxyNDPAddress setting on the LAN interface. ndppd is a tool that can do it independently of the network orchestration used.

Regardless of how it's set up, once the host is answering neighbour discovery solicitations, regular forwarding of packets should work, and to the other hosts on the LAN it would look like these IPv6 addresses are directly available on the LAN, including to the default gateway. No routing setup is needed, yet firewalling is still possible using ip6tables/nftables. I think that would satisfy this generic use case. Issues that will show up in practice, though:
Another downside I see is that a 'proper' network setup, with an IPv6 subnet routed to your Docker host, would become a second-class citizen, as one must disable the automatic 'proxy NDP' feature. 😕 FWIW, using that ndppd/IPv6ProxyNDPAddress approach is how I set up IPv6 global addresses on containers running on a host with a statically assigned /64 subnet (and even larger prefixes), as commonly found on budget cloud hosting services that don't offer a routed subnet. Sorry for being a party-pooper with this long post; I just wanted to raise my concerns over a proposal that seems to oversimplify the actual situation and would set back those running a 'proper' routed network. HTH.
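To make the proxy-NDP option above concrete, here is a rough sketch of the two variants mentioned. The interface name `eth0` and the 2001:db8: documentation prefix are placeholders I'm using for illustration, not anything from this thread:

```
# /etc/ndppd.conf -- answer NDP solicitations on the LAN interface
# for addresses that actually live on the Docker bridge
proxy eth0 {
    rule 2001:db8:1::/64 {
        auto
    }
}
```

Or, with systemd-networkd, one entry per container address on the LAN interface's `.network` file:

```
[Network]
IPv6ProxyNDPAddress=2001:db8:1::2
IPv6ProxyNDPAddress=2001:db8:1::3
```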
@gertvdijk Thanks for the detailed reply.
I think there is still a simpler option using a regular Linux network "bridge". I spent a little time trying to make things work as described, but ran into some issues using dummy devices (and I don't seem to have all of the right tooling to hand to quickly try it with tap devices, which may be necessary). But here's what I was thinking so far. The target environment begins with a regular IPv6 address directly on an ethernet interface:
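For illustration, with a placeholder address from the 2001:db8::/32 documentation prefix, that starting point might look something like:

```
$ ip -6 addr show dev enp1s0
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 2001:db8:1::10/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe12:3456/64 scope link
       valid_lft forever preferred_lft forever
```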
We need to convert this configuration into a bridge, which basically means creating a bridge interface, assigning enp1s0 as a member, and moving this IP address to the bridge (doing these last two steps at once so we don't lose our SSH connection). I would not expect Docker to do this automatically; my suggestion would instead be to simply provide instructions for how to do this in common environments and tell the user to do it. This is not an obscure feature or approach, it's used frequently, but it does modify the main network interface, and I'm thinking the user would have to be responsible for getting it into whatever Linux-distro-specific configuration applies (e.g. [this sort of thing](https://www.cyberciti.biz/faq/how-to-configuring-bridging-in-debian-linux/)) so it is applied again upon reboot, and for ensuring that other network tooling doesn't interfere with the config. Anyway, the command-line way to effect this change would be: Create the bridge interface:
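A minimal sketch of that step with iproute2 (`br0` is an arbitrary name for the new bridge):

```
ip link add name br0 type bridge
ip link set dev br0 up
```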
And then make the main interface a member and move the IP address to the bridge (this is required just due to how Linux bridges work):
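Continuing the sketch with the placeholder names and address from above, chained with `&&` so the whole cut-over happens even as connectivity briefly moves out from under the SSH session:

```
ip link set dev enp1s0 master br0 && \
  ip addr del 2001:db8:1::10/64 dev enp1s0 && \
  ip addr add 2001:db8:1::10/64 dev br0 && \
  ip -6 route replace default via fe80::1 dev br0
```

(The final route step is an assumption on my part; whether it's needed depends on how the default route was originally attached to enp1s0.)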
Which results in the IP address now living on the bridge device. So if we add a new interface and join it to the bridge, it should be exposed to the external network in the same way as the main IP address. Here's the example I tried and died on, using a dummy interface to add another address on the same /64 network:
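Reconstructed as a sketch (again with a placeholder address on the same /64), the failing dummy-interface variant would be along these lines:

```
ip link add name dummy0 type dummy
ip link set dev dummy0 master br0
ip link set dev dummy0 up
ip addr add 2001:db8:1::20/64 dev dummy0
```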
So far this didn't work for me, but I suspect, as mentioned, that it's due to the use of a dummy interface; I could be wrong. (I did also try enabling the arp and multicast options on the interface.) I do know for a fact that this works as expected if you add the additional address to the bridge directly, but obviously that doesn't help for what we're discussing here. Let me know if at least that explanation so far makes sense. Maybe I'm missing some basic aspect of this, but it still seems to me like this can be made to work. References:
Also, to address this:
I do appreciate the detail, and to some degree I understand where you're coming from. The aspect I don't fully understand is why using routing instead of bridging is considered more 'proper'. The way I look at it (and I'm not trying to argue here, just articulate another perspective): network routing is typically handled by routing hardware/software and often involves propagation via a routing protocol such as BGP, OSPF, etc. Although many topologies are possible, I think it's fair to say that a common scenario is to have routing handled by dedicated routing infrastructure ("above" the host, at its gateway(s)), with any number of switches (or software bridges, which are effectively the same thing) within each subnet.

The rationale of my approach is that for a lot of environments (not just home ISP setups; many enterprise and cloud environments too), converting a host running Docker from a single-IP host into a software switch/bridge with multiple IPs on the same subnet fits, I believe, a lot more naturally into many network topologies. Also, just to be clear, I'm thinking this bridged mode would be entirely separate from other network modes that use routed prefixes. I agree that other scenarios are possible and bridging doesn't solve everything for every scenario, but it should be an option for those cases where it fits well with the topology.
I'm going down this rabbit hole a little. There is a solution that works, though it is experimental and it really needs #43033 to land. Without that fix, it looks something like this:
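I don't have the exact configuration from this comment, but a daemon.json along these lines is presumably the shape of it. All prefixes and sizes here are placeholders (the real setup reportedly used 32 IPv6 pools):

```json
{
  "experimental": true,
  "ipv6": true,
  "ip6tables": true,
  "fixed-cidr-v6": "fd00:0:0:1::/64",
  "default-address-pools": [
    { "base": "172.17.0.0/16", "size": 24 },
    { "base": "fd00:0:0:100::/56", "size": 64 }
  ]
}
```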
The "many many address pools" (I actually have 32) is why the lazy allocation, or a fix like it, is needed, so this can become much simpler. And it would be even better if it actually worked with something in the ULA range by default, without needing to set default-address-pools. Baby steps. The advantage of iterating on what exists is that it's, well, iterative: it doesn't require an entirely new way of doing things, changes to how host interfaces are configured, or changes to the v6 networking in the DC. It works with PD as well as without.
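As a usage sketch once such pools are defined (the network name here is made up), each new IPv6-enabled network should draw a v6 subnet from the configured pools without a per-network `--subnet` flag:

```
docker network create --ipv6 demo-net
docker network inspect demo-net --format '{{json .IPAM.Config}}'
```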
Description
After reading through the various features and modes related to IPv6 support, I'd like to propose a behavior that I think would make sense, in the hope that it could simplify IPv6 deployments in the wild.
The basic requirements I would expect from such a feature would be:
What I would expect to see instead is something like:
I believe such a configuration would provide IPv6 functionality that "just works" in most environments, in a way that is also familiar to existing IPv4 Docker users.
Does this seem like a workable solution? Has this been considered?