
docs: Rewrite of IPv6 page #3244

Merged: 12 commits merged into master from docs/ipv6 on Jul 2, 2023

Conversation

georglauterbach
Member

Description

See #1438 and #3057 (with #3057 (comment)) for reference. @polarathene please also apply further findings. I took a comment from #3057 from you and basically copy-pasted it with minor adjustments; please change if you deem that inappropriate, but I found it to be very good advice.

Closes #3061 (supersedes)

Type of change

  • This change is a documentation update

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code

see #1438 and #3057 (with
#3057 (comment))
for reference.

Supersedes #3061
@georglauterbach georglauterbach added kind/improvement Improve an existing feature, configuration file or the documentation area/documentation kind/update Update an existing feature, configuration file or the documentation labels Apr 10, 2023
@georglauterbach georglauterbach added this to the v12.1.0 milestone Apr 10, 2023
@georglauterbach georglauterbach self-assigned this Apr 10, 2023
Member

@polarathene polarathene left a comment

WIP - Just submitting progress to ensure it's not lost by a random system crash overnight (happens 😅 )

4 resolved review comments on docs/content/config/advanced/ipv6.md (outdated)
@vedranmiletic

FWIW, I followed these instructions and I have yet to successfully send or receive an e-mail via IPv6. I have confirmed my mail server is reachable over IPv6. On the plus side, there are no rejections either.

@georglauterbach
Member Author

FWIW, I followed these instructions and I have yet to successfully send or receive an e-mail via IPv6. I have confirmed my mail server is reachable over IPv6.

Probably because most people still use IPv4 and it is the preferred protocol.

On the plus side, there are no rejections either.

This is actually our main concern.

@vedranmiletic

This is actually our main concern.

I will report back as soon as I get a successful sending, reception, or even a rejection on either side. So far, I only got one greylisting from a non-major provider's server, so it doesn't prove anything.

@georglauterbach
Member Author

This is actually our main concern.

I will report back as soon as I get a successful sending, reception, or even a rejection on either side. So far, I only got one greylisting from a non-major provider's server, so it doesn't prove anything.

Thanks a lot for the feedback - very much appreciated! ❤️

@polarathene
Member

I followed these instructions and I have yet to successfully send or receive an e-mail via IPv6.

Are you using a public IPv6 address assigned to your Docker container, or are you using private ULA address with NAT (ip6tables)?

There have been a few reports of unsuccessful IPv6 container setups; can you try with a simple web container? I have a 3-step example detailed here.

If you can get that to work, it should translate to the mail-server too. Make sure you use curl from another IPv6-enabled host; if you access via your web browser and you don't have an IPv6 address, the connection won't be successful AFAIK.
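
(Not the linked 3-step example verbatim, just a rough sketch of the kind of check described above; the ULA subnet and names are placeholders.)

# On the Docker host: an IPv6-enabled user-defined network plus a simple web container
docker network create --ipv6 --subnet fd00:cafe:face:feed::/64 web-test
docker run --rm -d --network web-test -p 80:80 --name whoami traefik/whoami

# From a different IPv6-capable host (not a browser without IPv6 connectivity):
curl --verbose "http://[<public IPv6 address of the Docker host>]:80/"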

Once that is working, if you're not happy with ULA + NAT (which honestly should be fine?), then what are you working with? Is it a /64 subnet? Or only a slice of that from AWS/GCP like /80 or /96? Or just a single IPv6 IP?

@vedranmiletic

Are you using a public IPv6 address assigned to your Docker container, or are you using private ULA address with NAT (ip6tables)?

I'm using a private ULA address in Docker Compose, if that was the question.

There's been a few reports of unsuccessful IPv6 container setups, can you try with a simple web container? I have a 3 step example detailed here.

That's roughly what I have for Docker Mailserver. I can try traefik/whoami.

If you can get that to work, it should translate to the mail-server too. Make sure you use curl from another IPv6 enabled host, if you access via your web browser and you don't have an IPv6 address, the connection won't be successful AFAIK.

I don't have another IPv6 enabled host.

Once that is working, if you're not happy with ULA + NAT (which honestly should be fine?), then what are you working with? Is it a /64 subnet? Or only a slice of that from AWS/GCP like /80 or /96? Or just a single IPv6 IP?

I have a /64. I'm not aware how I can use a subnet of that in Docker Compose to avoid NAT; any pointers are welcome.
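
(Not part of the original reply; for illustration only, one way a slice of that /64 could be handed to a compose network might look roughly like this, using the documentation prefix as a stand-in for the real /64. As the later comments show, assigning the subnet alone is not enough; host-side routing / NDP proxying still has to be handled for remotes to reach the containers.)

networks:
  default:
    enable_ipv6: true
    ipam:
      config:
        # a /80 carved out of the VPS /64 (replace with your real prefix)
        - subnet: 2001:db8:feed:face:f00d::/80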

@vedranmiletic

I have just tried traefik/whoami and it seems to work as expected. Keeping my fingers crossed Docker Mailserver will work just as well. I will report back when I have more info.

@polarathene
Member

I don't have another IPv6 enabled host.

I have just tried traefik/whoami and seems to work as expected.

How did you verify? On the same host only? I used two VPS on Vultr.

I'm using private ULA address in Docker Compose, if that was the question.

I have a /64. I'm not aware how I can use a subnet of that in Docker Compose to avoid NAT; any pointers are welcome.

I've only used ULA with NAT myself. Vultr issues a /64, but I haven't been able to get Docker IPs assigned to a container from it to be reachable by the other VPS instance.

I have found this article which might explain it with an NDP proxy, but I need to sleep now.

@georglauterbach
Member Author

I marked this for v12.1.0, but I do not think we need to tie it to a specific version. What's your opinion @polarathene? I can resolve the conflicts, and we can apply your suggestions and merge this. If you still want to work on this, please let me know :)

@polarathene polarathene modified the milestones: v12.1.0, v13.0.0 Apr 24, 2023
@polarathene
Member

I am coming back to this after PR reviews, delaying until v13 is fine 👍

@vedranmiletic

Sorry for the delay, missed the original notification.

How did you verify? On the same host only? I used two VPS on Vultr.

Connected from Scaleway VPS to Hetzner VPS.

I've only used ULA with NAT myself. Vultr issues a /64, but I haven't been able to get Docker IPs assigned to container from it to be reachable by the other VPS instance.

Ditto.

Anyhow, great improvement!

@polarathene
Member

polarathene commented Apr 30, 2023

I have had success with an IPv6 container not using NAT. I still need to verify a few things, but I think the below documents the steps most would encounter to get this working.

I've not yet tried to test it with DMS containers between hosts. After a bit more work, I'll come back and revise this information into the docs.


This is for a VPS (Vultr, Ubuntu 23.04) that is providing a single /64 IPv6 subnet that seems to be configured with DHCPv6, no SLAAC. Vultr provisioned the instance with the interface enp1s0, with an IPv6 address assigned that is publicly reachable. The /64 network assigned is not routed (not quite sure how that's determined), which apparently requires additional work 🤷‍♂️

My understanding is that Docker can create network interfaces that are subnets of that /64 network address. But you need to handle the routing via enp1s0 using NDP proxying (for remotes to successfully route to containers; within the same host it routes fine IIRC).

IPv6 public IP (GUA)

This is a bit more complicated to set up compared to NAT6 with internal IPv6 ULA addresses for containers (where port management is a bit simpler to keep private/portable).

  1. Network config: Create a new docker network. I am using a /80 subnet here, but anything smaller than /64 should be fine? (Adjust the /64 network prefix below to what was assigned to your VPS):

    docker network create \
      --ipv6 \
      --subnet 2001:db8:feed:face:f00d::/80 \
      --opt com.docker.network.bridge.name=br-example \
      example-ipv6

    The --opt line above is optional, but provides a friendlier name than br-<INTERFACE ID> (docker network ls) when you need to interact with it later.

  2. Prep container: Run a container on the network:

    docker run --rm -d --network example-ipv6 --name test traefik/whoami

    With -p 80:80, it would be reachable on the IPv6 address assigned to enp1s0, just like with the traditional NAT approach, but then only one container can use that public port. The IPv6 address assigned to the container (docker inspect test) fails to be reachable though.

  3. Enable NDP proxying: You need to have forwarding and proxy_ndp enabled (presumably not relevant if you can assign a /64 to a docker network that a remote can reach):

    sysctl net.ipv6.conf.enp1s0.forwarding=1
    sysctl net.ipv6.conf.enp1s0.proxy_ndp=1
    
  4. Proxy table + test: With NDP enabled, we still need to add the container's IP to the NDP proxy table (so that enp1s0 knows about the assignment to route remotes to the internal interface managing that address?):

    ip -6 neigh add proxy 2001:db8:feed:face:f00d::2 dev enp1s0

    You may be able to reach the container via the container's IPv6 address now (especially if you used -p 80:80 where it is bound to both IPs):

    # Should be successful now:
    ping6 2001:db8:feed:face:f00d::2
    
    # Potentially only successful with `-p 80:80` at this stage:
    curl http://[2001:db8:feed:face:f00d::2]:80
  5. Troubleshooting step: If the ping6 was unsuccessful, double check proxy_ndp is still enabled with 1:

    sysctl net.ipv6.conf.enp1s0.proxy_ndp
    

    If it was set back to 0, that can happen due to network changes (such as a container being started). My VPS uses a cloud-init config (/etc/cloud/cloud.cfg) that monitors network updates with a hotplug udev rule (/etc/udev/rules.d/10-cloud-init-hook-hotplug.rules). This activity can be viewed with journalctl --since '10 minutes ago': within about a minute of starting a container, it processed the generated netplan config (/etc/netplan/50-cloud-init.yaml), creating/overwriting a systemd-networkd network config (/run/systemd/network/10-netplan-enp1s0.network) which, despite having no differences, resets proxy_ndp to 0 again. This network manager can configure it to 1 with IPv6ProxyNDP=true, but that is not a configurable setting for cloud-init / netplan.

    Instead, hook script support is provided via networkd-dispatcher (if you're using networkd at least); other network managers have similar features. Create the configured.d/ folder with a script like /etc/networkd-dispatcher/configured.d/enable-ndp.sh:

    #!/bin/bash

    TARGET_IFACE='enp1s0'

    if [[ ${IFACE} == "${TARGET_IFACE}" ]]
    then
      sysctl "net.ipv6.conf.${TARGET_IFACE}.proxy_ndp=1"
    fi

    Run systemctl restart networkd-dispatcher (assuming the package is already installed); the service may not have started successfully when no script files were present in the /etc/networkd-dispatcher/ config folders. After this, you can test with systemctl restart systemd-networkd or by triggering the cloud-init hotplug hook via some network update, like starting a container.

    Whenever a configured event occurs for enp1s0, it will ensure proxy_ndp=1 is set. This prevents the container from becoming unreachable due to unrelated network events, which I think is the cause of many reports about NDP being spotty / unstable?

  6. Troubleshooting step: If you could not curl the container IP address, a firewall is probably active. Next sections describe additional steps needed.

Firewall - UFW

iptables -nvL / ip6tables -nvL may reveal that the forwarding policy is DROP:

Chain FORWARD (policy DROP 0 packets, 0 bytes)

Docker's port publishing with -p can bypass that, but otherwise you have the following options:

  • nano /etc/default/ufw and change the policy to ACCEPT with DEFAULT_FORWARD_POLICY="ACCEPT". After saving, run ufw reload. This is probably not what you want, as it is not scoped to this specific container / network.
  • ufw route allow in on enp1s0 out on br-example will reduce the scope to routing only through enp1s0 to br-example networks. It's still broad, in that all ports open on that container are accessible, unlike with port publishing -p.
  • ufw route allow in on enp1s0 out on br-example to 2001:db8:feed:face:f00d::2 port 80 proto tcp will forward to the container IP only for this port over TCP. You'll have to do this manually for each port AFAIK. You will probably want to assign an explicit IPv6 address to the container for this.
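
(A sketch of the most scoped option above in practice, reusing the placeholder addresses from the earlier steps; adjust interface, network and port to your setup.)

# Check the current forward policy (a DROP policy blocks direct container access):
ip6tables -nvL FORWARD | head -n 1

# Allow routed traffic from the public NIC to the docker bridge for one container/port only:
ufw route allow in on enp1s0 out on br-example to 2001:db8:feed:face:f00d::2 port 80 proto tcp
ufw status verbose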

Firewall - Firewalld

I have not looked into what equivalent configuration is required for this alternative firewall frontend.

Additional Notes for IPv6 GUA setup

Instead of manually adding each container IP to the NDP proxy table, you can have a daemon service configured for proxying the IPs from the docker interface to the main NIC enp1s0:

  • ndppd - A commonly cited choice, especially around 2016 in the official Docker docs (since removed). No official 1.0 release and apparently it has some issues; the last tagged release is from 2016.
  • ndppd 1.0 - A different repo instead of the upstream dev branch. More recent commit activity, with a tagged 1.0 release from Sep 2022. It has the original ndppd author listed as a contributor and says it's the 1.0 release ported to C, but it is unclear how official / endorsed / trusted it is 😅
  • pndpd - Golang alternative. There are a few other projects providing similar functionality.

If you are not affected by cloud-init network management like I encountered, you can try to persist the network tunables across reboots with a /etc/sysctl.d/99-ipv6-ndp.conf file:

net.ipv6.conf.enp1s0.forwarding=1
net.ipv6.conf.enp1s0.proxy_ndp=1

Proxy table is lost upon reboot AFAIK (but should be a non-issue with a background service like above automating the proxy table updates for you).

ip -6 route show will show the IPv6 networks you've set up with Docker, which should not be confused with ip -6 neigh show proxy, which lists the current entries in the NDP proxy table. In contrast, ip -6 neigh show will only show a container IP once a connection has been established to it.
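
(For reference, the three inspection commands mentioned above as one might run them; output omitted.)

# IPv6 networks/routes Docker has set up (e.g. the /80 from step 1):
ip -6 route show

# Current entries in the NDP proxy table (added in step 4 or by a proxy daemon):
ip -6 neigh show proxy

# Neighbour cache; a container IP only shows up here after a connection to it:
ip -6 neigh show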

IPv6 Router Advertisements (accept_ra=2)

sysctl net.ipv6.conf.enp1s0.accept_ra=2 is sometimes mentioned when discussing setup with proxy_ndp=1.

This was 0 for my Vultr VPS instance, and according to the networkd docs for IPv6AcceptRA=yes (which Vultr implicitly configured via cloud-init), systemd will set accept_ra=0 / ignore the kernel router advertisements in favour of its own internal implementation?

The docs also mention:

If true, RAs are accepted; if false, RAs are ignored.

  • When RAs are accepted, they may trigger the start of the DHCPv6 client if the relevant flags are set in the RA data, or if no routers are found on the link.
  • The default is to disable RA reception for bridge devices or when IP forwarding is enabled, and to enable it otherwise.

enp1s0 is an ethernet device, while the custom docker network is a bridge:

$ networkctl list

IDX LINK            TYPE     OPERATIONAL SETUP     
  1 lo              loopback carrier     unmanaged
  2 enp1s0          ether    routable    configured
  3 docker0         bridge   no-carrier  unmanaged
  4 br-example      bridge   routable    unmanaged
  5 veth1f46ccd     ether    enslaved    unmanaged

You can check the related settings for each with (veth is the interface belonging to the container):

sysctl net.ipv6.conf.{enp1s0,docker0,br-example,veth1f46ccd}.{proxy_ndp,accept_ra,forwarding}
net.ipv6.conf.enp1s0.proxy_ndp = 0
net.ipv6.conf.enp1s0.accept_ra = 0
net.ipv6.conf.enp1s0.forwarding = 1
net.ipv6.conf.docker0.proxy_ndp = 0
net.ipv6.conf.docker0.accept_ra = 0
net.ipv6.conf.docker0.forwarding = 1
net.ipv6.conf.br-example.proxy_ndp = 0
net.ipv6.conf.br-example.accept_ra = 0
net.ipv6.conf.br-example.forwarding = 1
net.ipv6.conf.veth1f46ccd.proxy_ndp = 0
net.ipv6.conf.veth1f46ccd.accept_ra = 1
net.ipv6.conf.veth1f46ccd.forwarding = 1

@polarathene
Member

@vedranmiletic I found some time to investigate this again :)

If you have the time to go over the instructions above, it would be good to know if you have success with the container being reachable now 👍

@AlperShal @ki9us @super9mega tagging you from related discussions. Your input would be appreciated as well if you can confirm the IPv6 GUA setup steps work for you too, or if I'm missing something else 😅

@AlperShal

@polarathene Tested and can confirm it works! Looks like the only thing left to be done was adding userland-proxy for me. (I already had everything you documented done, plus a few other things.) Thanks for taking the time to write the docs; they're pretty clear. 👍🏻

@polarathene
Member

polarathene commented May 1, 2023

was adding userland-proxy for me

That is enabled by default I think (although it is being considered for disabling in future); many old guides talk about disabling it IIRC, so thanks for pointing that out, I'll make sure that's mentioned too :)

  • Are you also using ip6tables? I had that enabled from the ULA setup and still need to test if it matters for GUA. (EDIT: not important for IPv6 GUA, see below update)
  • @ki9us had shown an issue of losing the remote client IPv6 address with DMS, Postfix was seeing an IPv6 Docker network gateway address apparently, so I need to look at reproducing that (unless that is what the userland-proxy: false was causing). (EDIT: Appears to be from userland-proxy: true + ip6tables: false, but should only happen when not accessing container IPv6 GUA address directly..)

Update: ip6tables is not relevant to GUA usage. Only when publishing ports to redirect traffic from an external facing interface like enp1s0 via NAT (useful for non-public routable ULA IPv6 addresses).

Remote client IP (traefik/whoami response field RemoteAddr):

  • ip6tables: false + userland-proxy: false (neutral?):
    • Container cannot be reached indirectly as no proxy via NAT is enabled.
    • For a direct connection to the container IP, the firewall must also allow forwarded (FORWARD) traffic from the bridged interface (enp1s0) to the docker network (br-example).
  • ip6tables: false + userland-proxy: true (bad):
    • For a published port, firewall needs to allow it (ufw allow 8000/tcp for -p 8000:80).
    • Container responds with the gateway IP address of the docker network it belongs to. An IPv6 gateway address requires NAT66:
      • NAT64 => Host offered IPv6 to remote client, network is IPv4 only.
      • NAT66 => This time the docker network also has an IPv6 subnet.
  • ip6tables: true + userland-proxy: irrelevant? (good?):
    • Requests from a remote client to the bridged interface (enp1s0) should have different IP and Host response fields, and now correctly provide the expected RemoteAddr.
    • Only relevant if you want the container to also be reachable on the bridged interface (enp1s0), but like with IPv4 prevents containers publishing ports that are already in use on that interface.
    • Firewall config is no longer relevant; -p A:B makes the container reachable on port A (for enp1s0) or port B (container IP), as ip6tables rules added by Docker overrule the UFW rules.
  • Direct container IP (via any assigned publicly routable address) (good):
    • If a firewall config does not prevent it, the container should be reachable directly (may require NDP proxy table entry).
    • IP and Host response fields will match as no proxy involved. RemoteAddr should correctly match the expected remote client IP.
    • Unlike via NAT, this requires additional firewall config to limit port exposure.

Summary:

  • userland-proxy is presently enabled by default, but may not serve a purpose (especially if no port publishing is used?).
  • iptables / ip6tables Docker settings bypass firewall rules.
  • IPv6 ULA => ip6tables: true 👍 (NAT with one public facing IPv6, like IPv4 is handled)
    • Should be familiar to users comfortable with IPv4 and easier to configure, unless you understand IPv6 and related infrastructure/environment well enough to deal with all the gotchas. IPv6 GUA could otherwise avoid these concerns:
      • Some issues with NAT, but ip6tables: true should avoid them similar to IPv4 handling.
      • A reverse proxy like Traefik managing an internal/private IPv6 network for containers may work well, so long as the remote client IP is preserved. That is typically necessary regardless when multiple containers are involved, unless you are IPv6-only.
      • A ULA address can have lower precedence for resolving DNS names than IPv4 private addresses (typically configured in /etc/gai.conf), resulting in IPv4 being preferred over IPv6.
  • IPv6 GUA => can avoid NAT, must be comfortable with firewall config/routing, and ensure NDP proxy is stable.

@vedranmiletic

@vedranmiletic I found some time to investigate this again :)

If you have the time to go over the instructions above, it would be good to know if you have success with the container being reachable now +1

Apologies, I don't have time to tinker with this anytime soon as my mail server needs to keep working. I am glad to have IPv6 working properly via NAT and don't want to break my current setup.

@github-actions github-actions bot added the meta/stale This issue / PR has become stale and will be closed if there is no further activity label May 23, 2023
@docker-mailserver docker-mailserver deleted a comment from github-actions bot May 23, 2023
@polarathene
Member

Almost there. I need to double-check, but I think it should work fine with an IPv6 host and IPv4-only docker networks if ip6tables: true is configured.

I'm linking to an earlier comment for firewall / NDP proxy info regarding IPv6 GUA setup. If I find time to, I'll open a follow-up PR to migrate that into actual docs.

@vedranmiletic

vedranmiletic commented Jun 22, 2023 via email

@polarathene
Member

polarathene commented Jun 22, 2023

I can try the steps you described there on that machine.

That'd be good thanks!

EDIT: I've looked into it and shared results below. If your experience differs do share :)


Results - Importance of container having an IPv6 address

TL;DR: Required to preserve the remote client IP address when connecting to the host via its public IPv6 address (with a published container port). A firewall frontend prevents the misleading gateway-IP connections from being established.

Test Environment:

  • Host runs container (with optional IPv6 user-defined network): docker run --rm -p 80:80 traefik/whoami
  • Client curls the IPv6 address of the host with the container's published port, and verifies that RemoteAddr in the response is the client's IPv6 address (a rough sketch of this setup follows the list).
  • Tested on Vultr with Fedora 38 (firewalld) and Ubuntu 23.04 (ufw) hosts. Docker Engine v24. Tests with a container IPv6 address assigned were via an IPv6 ULA subnet of fd00:cafe:face:feed::/64.
  • Not tested, may vary: Rootless daemon (one known example documented) and Podman
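
(Not spelled out in the original comment; the test setup described above, in the variant with the IPv6 ULA network attached, amounts to roughly the following. Names are placeholders.)

# Host: user-defined network using the ULA test subnet, plus the published container
docker network create --ipv6 --subnet fd00:cafe:face:feed::/64 test-ula
docker run --rm -d --network test-ula -p 80:80 --name whoami traefik/whoami

# Client (separate host): check the RemoteAddr field in the response
curl --max-time 5 "http://[<public IPv6 address of the Docker host>]:80/"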

Container has no IPv6 address assigned:

  • ip6tables: true + userland-proxy: true:
  • ip6tables: false + userland-proxy: true:
    • Responds with RemoteAddr as IPv4 docker gateway IP.
  • ip6tables: true + userland-proxy: false:
  • ip6tables: false + userland-proxy: false:
    • Connection failure with "Failed to connect" message (No proxy process involved)

Container has IPv6 address assigned:

  • ip6tables: true + userland-proxy: true:
  • ip6tables: true + userland-proxy: false:
    • Responds with RemoteAddr as the preserved Client IP.
  • ip6tables: false + userland-proxy: true:
    • Responds with RemoteAddr as IPv6 docker gateway IP.
  • ip6tables: false + userland-proxy: false:
    • Connection timeout (technically --connect-timeout 5 won't trigger, but --max-time 5 will; the connection is established but there is no response?)
    • Could be due to the FORWARD chain default DROP policy, or something to do with the dockerd process having a dummy listener binding the port (default of [::]:PORT provided the container has an IPv6 address, otherwise only 0.0.0.0:PORT according to ss -tlpn).

With Firewall frontends active:

  • firewalld (and its docker zone active):
    • Any above response with a docker gateway IP results in "Failed to connect".
    • The connection timeout becomes a connection failure as well.
    • Affects remote connections, not connections run on the host.
  • ufw:
    • Similar to firewalld, except those connection failures instead hang (as the INPUT chain default policy is set to DROP when UFW is enabled, otherwise it'd be ACCEPT).

Additionally, local connections within the same host (UFW/firewalld don't affect these):

  • Queries from container to container:
    • Direct container IP => RemoteAddr is preserved.
    • Indirectly via host IP:
      • userland-proxy: false preserves RemoteAddr, but cannot connect to container in a separate docker network.
      • userland-proxy: true responds with RemoteAddr as gateway IP.
  • Queries from container to the same container:
    • Direct container IP => RemoteAddr is preserved.
    • Indirectly via host IP => RemoteAddr is gateway IP (IPv6 fails if userland-proxy: false)
  • Queries from host to container:
    • Direct container IP => RemoteAddr is the gateway IP (regardless of ip6tables and userland-proxy settings).
    • Indirectly via host IP published port => RemoteAddr is the host IP (when userland-proxy: true, IPv6 needs ip6tables: true as well), IPv6 fails if userland-proxy: false.

Looks like only thing that's left to be done was adding userland-proxy for me.

I've looked into userland-proxy rather extensively upstream in the Docker repo/issues and its history. It seems this may have only been relevant when querying the host public IP address from the host itself.

userland-proxy: false would return a docker network gateway IP in that scenario, while userland-proxy: true will do the same if you query the host public IP address from a container running on that host.

Additionally, with userland-proxy: false, you have no proxy process involved. So you can't use curl [::1]:PORT or curl -6 localhost:PORT even with ip6tables: true (even with an IPv6 address assigned to the container), as the traffic can't be routed from the loopback like it can be with IPv4 (route_localnet=1). There are a few other gotchas, but by avoiding the userspace proxy, network performance is much better.

@polarathene polarathene changed the title docs: revise IPv6 docs: Rewrite of IPv6 page Jun 27, 2023
@polarathene polarathene enabled auto-merge (squash) July 2, 2023 23:23
@github-actions
Contributor

github-actions bot commented Jul 2, 2023

Documentation preview for this PR is ready! 🎉

Built with commit: c89eec7

Member

@polarathene polarathene left a comment

My PC died a few days ago, I am still looking to get a replacement. I may have lost some WIP changes 😅

I believe this had reached a point where I was quite happy with it and it can be merged. I helped revise / improve the current upstream Docker IPv6 docs as well.


Thanks everyone for your input/feedback!

Sorry that I can't find time to include the IPv6 GUA (no NAT variant), that'll take too much time for me right now to adapt for docs and verify/test. IPv6 ULA should work well for the majority.

For future reference, Docker devs have been focusing on improving both IPv6 and networking in general this year, while nftables support is a ways off, iptables may become legacy in favour of a newer IPVS / plugin approach being worked on. That transition will take time and the current iptables networking will remain around for some time (and as default I think) for legacy compatibility reasons IIRC.

@polarathene polarathene merged commit 9f5d662 into master Jul 2, 2023
3 checks passed
@polarathene polarathene deleted the docs/ipv6 branch July 2, 2023 23:33
@AlperShal

AlperShal commented Jul 3, 2023

Hey there! I have a question and a suggestion.

My question is: what is the difference between this (method "User-defined networks" via docker network create or compose.yaml):

networks:
  ip6net:
    enable_ipv6: true
    subnet: 2001:0DB8::/112

and this (method "Default network" for a compose.yaml):

networks:
  default:
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:1111:2222:3333::/64

If we change the name "ip6net" to "default" or "default" to "ip6net", what difference is left (also the daemon.json is the same if we follow the DMS docs)? Are there any performance differences, or is one better than the other? I couldn't understand why or when to choose one over the other.

And my suggestion is to write these methods (in the screenshot) into the docs as text instead of forwarding to external sites, since they are the important parts (like method 1, method 2, etc.). It's obviously not necessary but would be better I think. (I may create a pull request, but I guess it is clear I am no professional at Docker networking :) )
[screenshot: the IPv6 setup methods listed in the Docker documentation]

@polarathene
Member

If we change the name "ip6net" to "default" or "default" to "ip6net", what difference is left

default is the network that compose creates for each compose.yaml by default. It'll probably be named differently in docker network ls though (likely namespaced to the compose). This does not use the default docker0 bridge that you would get with docker run ... which some daemon.json settings are specifically for (fixed-cidr-v6).

The main difference from choosing default instead of ip6net is that if you override the default network, you are telling compose to create the default network with your own settings. All services defined in the compose.yaml will get that default network. Whereas with ip6net, the default network is still created implicitly and you have to choose which services opt in by adding a networks: entry referencing ip6net.

TL;DR: Preferring default as the network name will minimize config; all services in the compose.yaml will have it as their primary/default network.
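
(An illustrative sketch, not from the original comment: the named-network variant with the explicit per-service opt-in. The service name and subnet are placeholders.)

services:
  mailserver:
    image: ghcr.io/docker-mailserver/docker-mailserver:latest
    networks:
      - ip6net

networks:
  ip6net:
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:1111:2222:3333::/64

If the network were named default instead, the networks: entry under the service would not be needed.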


(also the daemon.json is the same if we follow the DMS docs)?

You only need ip6tables: true and experimental: true there.

userland-proxy varies in behaviour a bit if you have your host making a connection to a container (eg: with curl), or containers that connect to a container indirectly via the host IP (published ports). Neither setting is ideal for that, although my preference is userland-proxy: false unless you identify an issue where userland-proxy: true is needed. It should not affect clients connecting from systems outside of the Docker host, so long as ip6tables: true is enabled.

enable_ipv6: true in compose.yaml is equivalent to ipv6: true in daemon.json; however, AFAIK ipv6: true is only for the docker0 bridge and is not related to anything in compose.yaml. Enabling ipv6: true requires fixed-cidr-v6 to be set, which assigns an IPv6 subnet to the docker0 bridge.

If you don't enable IPv6 but your host can be reached via IPv6, then use userland-proxy: false if possible. It avoids proxying IPv6 connections to the IPv4 gateway on docker networks, which is bad, as outlined in the DMS IPv6 docs.
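
(Not quoted from the docs; a sketch of how the /etc/docker/daemon.json discussed above could look. experimental and ip6tables are the two settings named as required; userland-proxy: false reflects the stated preference and is optional. ipv6 / fixed-cidr-v6 are omitted since they only concern the default docker0 bridge. Restart the Docker daemon after changing the file.)

{
  "experimental": true,
  "ip6tables": true,
  "userland-proxy": false
}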


And my suggestion is to write these methods (in the photo) into the docs as text instead of forwarding to external sites since they are the important parts. (Like method 1, method 2 etc.) It's obviously not necessary but would be better I think.

Sure! I have been short on time, and don't have a proper computer at the moment so I wanted to merge what I have as it was already a big improvement.

Original message (Misunderstanding on my part)

You'll find that there is already an example for IPv6 ULA. I would just reference that. You could indent these into the lists and collapse them by default (use ??? instead of !!!), or if you look at the mkdocs-material docs, they have another feature with tabs (we've used it on the dkim page) that could work well too.

I still think the links are good to keep around though.

EDIT: I thought it was more about documenting how to configure IPv6 GUA better 😅

  • You could document how to configure docker0 for IPv6 with daemon.json if you like. But we'd be repeating official docker docs for the most part there, plus we're very focused on compose.yaml. I'm not against it if you think it's valuable to users.
  • User-defined networks are an option too. The docs mostly assume that a user is familiar with these if they want to use custom defined networks.
  • For everyone else, using compose.yaml is very common and if they don't have any networks defined yet, all we're doing is modifying the default network to have IPv6 enabled. User-defined networks either change the default name in compose.yaml or reference an externally created network (from docker network create) and then additionally configure services to use that network.

My preference is to keep it simple and focused for users, but still provide enough information/resources for more advanced users. We've had users attempt IPv6 GUA in the past but fail; I've put in extra effort to provide more info for them to troubleshoot, but there is quite a bit of environment influence that complicates it and very little advantage offered. I'd prefer we focus support on and encourage IPv6 ULA subnets, at least until Docker improves IPv6 networking further (there are still quite a few GUA features an IPv6 enthusiast will probably have problems with).

I do acknowledge that third-party resources can change (in fact the official Docker IPv6 docs were much worse until recently). I tried to discourage mixing the documentation-only IPv6 subnet with IPv4 private-range addresses in their docs, but they didn't agree, hence my warning in our docs about it, as I don't trust inexperienced IPv6 users to know better otherwise (I've seen it abused when their actual problem was with /etc/gai.conf, which I also documented).

I think the example that is shown directly below your screenshot for those links is sufficient and should lead users (and any support requests) to be focused around IPv6 ULA, preferably with compose.yaml and the default network.

[screenshot: the "User-defined IPv6 ULA subnet" example from the DMS docs, shown below the reference links]

The example might not be as clear with the CLI snippet. It's meant to be used with docker run ..., or would require different configuration in compose.yaml to reference an external network, etc. That could probably be revised to avoid any confusion. Another slip-up might be the example title "User-defined", which is valid for docker network create, and technically for compose.yaml, but the default name is a bit special as explained earlier, requiring fewer configuration changes.

@polarathene
Member

My question is what is the difference between this

Actually, since you've referenced both config snippets from the links, and not the one I showed in the screenshot, I can see how that could be a bit more confusing to compare.

I'd be happy to take a content tabs approach (like shown here in the DNS example) that keeps the reference link, but has an inline snippet directly below to compare against the other tabs.

Would that be better?

@AlperShal

Sorry about my late response. Was a busy day.

First of all thank you for taking your time and writing such a long explanation. Appreciate it.

About the first paragraph: I already know the difference between naming the network "default" or something else. I just wondered what effect configuring it like "ipam:\config:\subnet:" instead of just "subnet:" has. From my understanding they do the same thing, but they are placed under different titles and have different syntax, so I couldn't be sure about that. You being misled into explaining the difference between the default network and the rest is probably caused by my English skills, so sorry about that. This was what I wanted to ask.

About the latter paragraphs, thanks for the information. Made things clearer.

And the last paragraph: yeah, having the methods under tab views would be really nice. Also, I would suggest a little bit of a rewrite instead of just directly copying from the reference site. For example, the instructions in the purple box (title: "User-defined IPv6 ULA subnet") are a mix of method one and method three. Read method 3 and you will see a part saying "Alternatively use a custom bridge network instead:". That's just the same thing as method 1 (if IPAM doesn't make any difference), but using the default network instead of creating a new one. I think anyone would know the difference between the default network and creating a new one, since that's just a pure basic of using Docker Compose. If I understood things right, method 1, method 3's "alternatively..." part, and the purple box are just the same thing (again, assuming IPAM doesn't make any difference) with really minor differences. If you think they are worth a mention, no problem; it's obvious you are experienced in docs writing and the topic, but they just look like repeating the same thing to my eyes.

@polarathene
Member

I just wondered what effect does it have configuring it like "ipam:\config:\subnet:" instead of just "subnet:". From my understanding they do the same thing but they are placed under different titles and have different syntax so couldn't be sure about that.

Yeah, sorry, I don't know about that one. It might be that only the ipam nested config options were supported in the past, and perhaps with Compose v2 (the CLI, not the schema), or potentially earlier, they added some top-level config that presumably maps to the same settings.

Mine was from v3 compose schema docs at the time I believe.


I think anyone would know the difference between default network and creating a new one since that's just a pure basic of using Docker Compose.

It took me a while to get comfortable with network config in compose / docker when I started learning it. I don't think I even bothered much until I got into DNS a few years ago and I had been using Docker since 2015 or so?

The default override, and the fact that it didn't use the same daemon.json default bridge settings as docker0, were both surprises to me until maybe a year ago? I don't think Docker even documented network: bridge (docker0) or network: default (override) at the time; I haven't checked in a while but perhaps they do now. I had learned about it from somewhere else.

Knowing how to add a service to a different network than the implicitly generated default one is basics I'd agree. I'm not sure how common it is to know about default override though.


For example the instructions in the purple box (title: "User-defined IPv6 ULA subnet") is a mix of method one and method three. Read method 3, you will see a part saying "Alternatively use a custom bridge network instead:".

If I understood the things right method 1, method 3's "alternatively..." part and the purple are just the same thing (again, thinking IPAM doesn't make any difference) but with really minor differences.

I don't see any text for "Alternatively" or "custom" that you're referencing sorry?

But you're right and we could remove the third bullet point, instead making it a tip to inform the user the benefit of naming the user-defined network as default.

@AlperShal

AlperShal commented Jul 6, 2023

Looks like I have forgotten to hit the "Comment" button. Sorry for making you wait again.

I don't see any text for "Alternatively" or "custom" that you're referencing sorry?

[screenshot: the "Alternatively use a custom bridge network instead:" part of method 3 being referenced]

we could remove the third bullet point, instead making it a tip to inform the user the benefit of naming the user-defined network as default.

Absolutely that would be the best I think.

Yeah sorry I don't know about that one. It might be that only ipam nested config options was supported in the past and perhaps with Compose v2 (CLI not schema) or potentially earlier they added some top-level config that presumably maps to the same settings.

I just checked the Docker docs again but couldn't find any definition of what it's doing, just how to use it. Having something we don't understand in the docs would not make sense, I guess, but having it mentioned would not hurt either. Your decision.

Knowing how to add a service to a different network than the implicitly generated default one is basics I'd agree. I'm not sure how common it is to know about default override though.

This was the very first thing I needed to learn when using Docker Compose, so I thought it would be something known by most users. If you say it could be something people may miss, then yeah, another tab or some comment/note would not hurt either.

Would you like me to make a rewrite/PR according to these or would you like to do it yourself?

@polarathene
Member

polarathene commented Jul 6, 2023

Oh that screenshot is about a link to my Github comment from Jan 2023 with earlier IPv6 advice on a different project 😅

That explains why I couldn't find it in our docs and got a little confused with what you were saying 😛

We could probably remove that link. I've covered the content fairly well in the docs now and have a section on testing IPv6 is configured properly.

I just checked again the Docker Docs but couldn't find any definition of what it's doing, just how to use it. Having something we don't understand in the docs would not make sense I guess but having it mentioned would not hurt too. Your decision.

The ipam section? It's specifically for configuring network drivers with IPAM (IP Address Management). Here we only set the subnet, but there's a bunch of other keys too (I don't recall if Docker documents them well though).

I assume at some point the Compose schema made common keys available beside ipam for convenience. Like you can specify a subnet, but not an array of subnets (IIRC you can specify multiple IPv4 and IPv6 subnets in the IPAM config).

If the less verbose config still implicitly adds an IPv4 subnet (I think it does), that's simpler to demonstrate in docs then.
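
(A sketch of the more verbose ipam form being discussed, with an IPv4 and an IPv6 subnet declared explicitly; addresses are placeholders.)

networks:
  default:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.20.0.0/24
        - subnet: fd00:cafe:face:feed::/64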


This was the very first thing I have needed to learn when using Docker Compose so I thought it would be something known by most of the users.

default as an override? Maybe it was covered better when you got into Docker Compose; for me that was almost a decade ago and it wasn't something I saw mentioned much online. IPAM/DNS config or other network settings with Docker were also something I didn't get familiar with until around 2020.

Before that I was just doing simple frontend/backend network names or external reference with no other customization, then assigning them to different services. That was common to see in the wild for me and quite simple to grok.

If you say that it could be something people may miss then yeah another tab or some comment/note would not hurt too.

Just a tip admonition about default or a separate example snippet mentioning it should be sufficient.


Would you like me to make a rewrite/PR according to these or would you like to do it yourself?

I'm quite busy lately and still have other PR work I need to catch up on. If you're willing to take a shot at making these improvements and raising a new PR that'd be very much appreciated ❤️

Otherwise opening an issue to request it would be fine so it can be better tracked as a todo item 😅

@AlperShal

Okay I couldn't manage to do it lol. I am opening an issue then. Thanks for taking your time!

Labels
area/documentation kind/improvement Improve an existing feature, configuration file or the documentation kind/update Update an existing feature, configuration file or the documentation stale-bot/ignore Indicates that this issue / PR shall not be closed by our stale-checking CI
Projects
Status: Done
Development

Successfully merging this pull request may close these issues.

None yet

4 participants