X-Forwarded-For does not contain the IP of the original caller when caddy in docker and client uses IPv6 #4339
I'm not sure I understand the issue here; I'd need to see the logs to better understand. Keep in mind that when using Docker, it may use a userland proxy, which would make the remote address on TCP packets look like it's coming from Docker itself and not from the real client.
Thanks to your pointer, I took a closer look at Docker networking. You're right, Docker does some 'magic': exposing a port on the host while the container is in a bridge network is transparently forwarded for IPv4 but not for IPv6, so Caddy could only see the Docker gateway. I created a separate Caddy-in-Docker instance on :81 because I didn't want to stop the main server, and first reproduced the issue described above. Here's the debug log from IPv6:
IPv4:
Then I switched to …

So, it's not a Caddy issue but an issue caused by that Docker proxy. Sorry for bothering you.
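The userland-proxy explanation above can be sanity-checked mechanically: if the first X-Forwarded-For entry is a private or ULA address, the proxy (or NAT gateway) almost certainly replaced the real client address. A minimal sketch using Python's standard `ipaddress` module (the helper name is mine, purely illustrative):

```python
import ipaddress

def looks_like_docker_gateway(xff_entry: str) -> bool:
    """Heuristic: a private/ULA source in X-Forwarded-For suggests the
    userland proxy or NAT gateway replaced the real client address."""
    return ipaddress.ip_address(xff_entry).is_private

# The gateway of a Docker ULA network (e.g. fd0c:add1::1) is private,
# while a real public client address is not.
print(looks_like_docker_gateway("fd0c:add1::1"))    # True
print(looks_like_docker_gateway("94.123.123.123"))  # False
```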
For those struggling to set this up correctly, I want to share the key learnings that made it work for me:

Docker Daemon
This is my docker host's /etc/docker/daemon.json:

```json
{
  "metrics-addr": "127.0.0.1:9323",
  "experimental": true,
  "ipv6": true,
  "ip6tables": true,
  "userland-proxy": false,
  "fixed-cidr-v6": "fd00:1234:5678::/48",
  "live-restore": true,
  "registry-mirrors": ["https://mirror.gcr.io"]
}
```
I think any ULA is fine for this.

Caddy network in docker-compose
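The "any ULA is fine" claim is easy to verify programmatically: a Unique Local Address prefix lies inside fc00::/7 (RFC 4193). A small sketch using Python's standard `ipaddress` module (the function name is mine, not from any library):

```python
import ipaddress

def is_ula(prefix: str) -> bool:
    """Check that an IPv6 prefix lies inside the Unique Local Address
    range fc00::/7 (RFC 4193), as expected for fixed-cidr-v6."""
    return ipaddress.ip_network(prefix).subnet_of(
        ipaddress.ip_network("fc00::/7"))

print(is_ula("fd00:1234:5678::/48"))  # True: the daemon.json value above
print(is_ula("2001:db8::/48"))        # False: a global (documentation) prefix
```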
```yaml
version: "3.9"
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    container_name: caddy
    ports:
      - "80:80"
      - "80:80/udp"
      - "443:443"
      - "443:443/udp"
      - "127.0.0.1:2019:2019"
    volumes:
      - ./data/etc_caddy:/etc/caddy
      - ./data/caddy_data:/data
      - ./data/caddy_config:/config
      - ./logs:/logs
    networks:
      - caddy

networks:
  caddy:
    name: caddy
    driver: bridge
    enable_ipv6: true
    ipam:
      driver: default
      config:
        - subnet: fd0c:add1::/56
```

Additional information

I found the Readme of https://github.com/robbertkl/docker-ipv6nat/ very, very helpful, just to learn the background. Thanks a lot @robbertkl. Luckily, the ipv6nat project is going to be obsolete, and everything already worked for me using only Docker's built-in functionality. See robbertkl/docker-ipv6nat#65 for details (and possible pitfalls, e.g. with WireGuard).
This response is just an update on the current state with Docker, in case it's helpful to anyone 👍

Summary
Below documents the environment used and provides several configs / examples that may clear up any concerns or confusion for netizens landing here :)
I'm not able to see a benefit for ….

This is with an IPv6-capable VPS (Vultr) running Ubuntu 22.10 with Docker Engine ….

There don't appear to be any additional relevant fixes, at a glance, in the Docker Engine release notes since the …
This issue is likely due to Moby Issue #44408 - "Original ip6 is not passed to containers".
@arazilsongweaver no, you need to enable …
This might be a follow-up to #3661, as my observations are at least similar, if not the same.
Caddy Version
v2.4.5 h1:P1mRs6V2cMcagSPn+NWpD+OEYUYLIf6ecOa48cFGeUg= (issue also present in earlier versions)
running the standard docker container
Host is Ubuntu 20.04.3 LTS
Docker Version 20.10.7
Configs
DNS
testing.example.org is set up as BOTH an A record (IPv4) and an AAAA record (IPv6). (Domain name obfuscated, obviously.)
Caddyfile
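The Caddyfile itself didn't survive in the report; for readers following along, a minimal sketch of the kind of site block described (hostname and upstream are placeholders, not the author's actual values):

```Caddyfile
testing.example.org {
	# Caddy adds X-Forwarded-For automatically when reverse-proxying
	reverse_proxy target-container:80
}
```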
/etc/docker/daemon.json
(Be sure to enable ipv6 here...)
compose.yaml
Partial output of `docker inspect caddy`, reverseproxy-net:

So .10 is the upstream address which the target service actually sees when Caddy connects; .1 is the Docker-internal virtual gateway.
Reverse Proxy Target container
Nginx
I initially observed the issue in a container based on phusion/passenger-ruby30 (which is basically an nginx listening on :80) where I modified the log format:
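A log format along these lines (a sketch of the kind of modification described, not necessarily the author's exact one) makes the forwarded header visible alongside the remote address; `log_format` goes in the `http` context:

```nginx
# Hypothetical format: logs the peer address next to the
# X-Forwarded-For header for every request.
log_format xff '$remote_addr - "$http_x_forwarded_for" [$time_local] "$request"';
access_log /var/log/nginx/access.log xff;
```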
Netcat
To be sure it's not nginx modifying something without me knowing, I also tried `nc`. (This does not work as well as I expected; I had to ctrl-C and start again after basically every request.)
Testing and observed results
Client:
(curlie is just a wrapper around `curl`; think `curl -i` plus pretty printing.)

Nginx
First line is requested with IPv6, second line with IPv4.
(real client IP changed to 94.123.123.123 in second line above)
When the client does a request, the remote addr is always the caddy container's IP, as expected. ✅
When the client's request is done using IPv4, the X-Forwarded-For header is set correctly. ✅
When the client's request is done using IPv6, the X-Forwarded-For header contains the Docker-internal network gateway's address, not the client IP ❌
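The pass/fail judgement above can be expressed as a tiny check: compare the first X-Forwarded-For entry against the known client address (the helper is illustrative, not part of any tool):

```python
import ipaddress

def xff_matches_client(xff_value: str, client_ip: str) -> bool:
    """True when the first X-Forwarded-For entry equals the real client
    address (ignoring textual differences such as IPv6 zero-compression)."""
    first_hop = xff_value.split(",")[0].strip()
    return ipaddress.ip_address(first_hop) == ipaddress.ip_address(client_ip)

# IPv4 request: the header carries the real client -> correct
print(xff_matches_client("94.123.123.123", "94.123.123.123"))  # True
# IPv6 request: the header carries the Docker gateway instead -> wrong
print(xff_matches_client("fd0c:add1::1", "2001:db8::1234"))    # False
```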
Netcat
This is just for completeness, to rule out Nginx as culprit:
❌ IPv6: wrong forwarded-for
✅ IPv4: correct forwarded-for