
nginx-proxy only sees docker's virtual interface (and IP) #133

Open
finferflu opened this issue Mar 26, 2015 · 30 comments
Labels
kind/bug Issue reporting a bug

Comments

@finferflu

My front-facing nginx-proxy container doesn't seem to see the real IP a connection is coming from; here is an example:

nginx.1    | 172.17.42.1 - - [26/Mar/2015:14:15:16 +0000] "GET / HTTP/1.1" 200 9319 "http://mydomain.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/600.4.10 (KHTML, like Gecko) Version/8.0.4 Safari/600.4.10" "-"

The IP 172.17.42.1 is actually from the virtual interface docker has created (docker0). For this reason, even if I configure Nginx to set the header to the real IP, it's all for nothing, since Nginx can't see the real IP to start with. So the question is: how do I get nginx-proxy to see the real IP a connection is coming from? Or is this something that should rather be adjusted in the Docker daemon?
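
For illustration, here is the sort of config I mean (a minimal sketch; the server name and upstream are placeholders):

# Hypothetical vhost: pass the client address upstream via X-Real-IP.
# The problem: $remote_addr is already 172.17.42.1 (docker0) by the time
# the request reaches nginx, so the header carries the wrong address.
server {
    listen 80;
    server_name mydomain.com;

    location / {
        proxy_pass http://wordpress;
        proxy_set_header X-Real-IP $remote_addr;
    }
}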

Thanks!

@md5
Contributor

md5 commented Mar 27, 2015

Could you describe how you're generating this HTTP request? What URL are you using, what command are you running, or what are you doing in the browser?

@finferflu
Author

First of all, thanks for your assistance.

The HTTP request is being generated by my web browser, running on my local machine. The server hosting nginx-proxy is a remote VPS. Behind nginx-proxy runs a WordPress container. The URL is generated by simply opening the WordPress blog homepage.

The reason I'm using --iptables=false is that I don't want Docker to override my firewall rules, especially when I'm running a container such as Docker UI, which exposes port 9000 to the public (which I think is unsafe, since Docker UI can pretty much control all the running containers).
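
For reference, I'm setting it through the Docker defaults file on Ubuntu 14.04 (a sketch of my setup; this is the stock path read by the init script):

# /etc/default/docker
DOCKER_OPTS="--iptables=false"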

@finferflu
Author

I just wanted to add that I have only recently switched on the --iptables=false flag, and that I had already noticed this behaviour before that.

@md5
Contributor

md5 commented Mar 27, 2015

Just to confirm, you're putting a DNS name that resolves to the public IP of your VPS (or the public IP itself) in the browser and you're seeing this behavior?

If you're running with --iptables=false, that means that Docker itself is not setting up any IPTables rules to get traffic from the outside world into your container. Since something is making those packets get to your nginx-proxy container, there must be some other IPTables rules in there taking care of things. Seeing your iptables-save output might be helpful to see how it compares with a typical Docker networking setup.

Also, it would probably be good to know what version of Docker you're running.

One other thing that could be involved here is the docker-proxy process that Docker uses as a userland proxy. In recent versions it's only used for IPv6 from what I can see, but I believe older Docker versions may have run it on IPv4 as well.
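
For comparison, publishing a container port on a typical Docker host creates nat-table entries along these lines (an illustrative excerpt of iptables-save output; the container IP and port will differ):

*nat
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80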

@md5
Contributor

md5 commented Mar 27, 2015

This looks possibly relevant to my docker-proxy speculation: moby/moby#7540

@finferflu
Author

Yes, it's an existing DNS name that resolves to the public IP of the VPS.

As far as I can see, the iptables rules that were applied initially by Docker are still in place, even after restarting Docker with --iptables=false (I guess Docker doesn't flush its own rules when that option is disabled). For reference, here is the output from iptables-save.

I am running Docker 1.5 on Ubuntu 14.04, and the VPS's public IP is IPv4.

@chenjie4255

...
I hit this issue when I try to redirect some HTTP requests in nginx for a range of IPs, but I find that the remote_addr (client IP) is always the docker0 bridge interface....

I think it is a bug in boot2docker; when I run it on Ubuntu it works well.

@twang2218

I believe this issue is related to moby/moby#14856.

The solution might be adding --userland-proxy=false to the docker daemon options. I tried it on boot2docker 1.12.0, and it works. Hope it works for you.
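
In case it isn't obvious where that goes, it's a daemon option, not a container option (a sketch; restart the daemon after changing it):

# Either on the daemon command line:
dockerd --userland-proxy=false

# Or in /etc/docker/daemon.json:
{
  "userland-proxy": false
}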

@batazor

batazor commented Aug 16, 2016

I have the same problem with CoreOS. The --userland-proxy=false parameter does not help me. Everything works well on Ubuntu.

CoreOS (1068.9.0 and 1122.0.0)
Docker (1.10 and 1.11)

@lliknart

I have the same issue explained here:
moby/moby#7540 (comment)

@lenovouser

Same problem here 😞

@hiromaily

+1

@aliasmee

+1

@k2xl

k2xl commented Oct 2, 2017

anyone resolve this?

@kspearrin

Same problem :(

@Steiniche

Steiniche commented Jul 24, 2018

I just wanna confirm that the problem is still present.

@emrecanozkok

same problem

@wandebandera

Hi, if you are using a docker-compose file, add the following to your nginx service settings:

nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
  network_mode: "host"  # note: with host networking, the ports mapping above is ignored; nginx binds host port 80 directly

Now the user's real IP will be in the X-Real-IP header of your web services.
Hope it is helpful.

@leon0707

leon0707 commented Aug 26, 2019

@wandebandera Hi. I tried this too. Once all the services are attached to the host network, service names can no longer be resolved. Any solution?

I'm using a Mac, and port 80 is not mapped to the host.

related link: https://stackoverflow.com/questions/43349996/docker-cannot-link-containers-in-net-host-mode

@emrecanozkok

I'm getting it with this:

$request->server('HTTP_X_REAL_IP') ?? $request->ip()

There is no problem now.

@scyto

scyto commented Oct 5, 2019

@emrecanozkok which block does this go into, and is that a complete directive? Were you still using the bridge network too?

@jantoine1

I was also having this issue with local development. For me, it was caused by updating my hosts file to point all my domains at my localhost IP (127.0.0.1). Something about that IP always has it resolve to docker's virtual interface. When I changed the entries so the domains pointed to my eth0 interface (192.168.1.2, etc.), real IP addresses suddenly came through to the reverse proxy container and to the container behind it.
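
In other words (a sketch of my hosts file; the domain and LAN IP are examples):

# /etc/hosts
# 127.0.0.1   mysite.local   # loopback: the proxy logs docker's gateway IP
192.168.1.2   mysite.local   # eth0 IP: the proxy logs the real client IP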

@trwm

trwm commented Jul 21, 2020

Hi all!

I also had the problem that my installation always changed the source IP on packets sent to containers on e.g. the bridge interface.

I found an overly broad MASQUERADE rule which causes this; unfortunately I still cannot explain how it gets there after every restart (most likely it is not created by Docker, as I didn't find it on other Docker installations).

I found it with:
iptables -S -t nat

and the rule looked like this:
-A POSTROUTING -j MASQUERADE

This masquerades everything, which does not make sense, since only traffic from the containers to other (external) networks should be rewritten => you should find other, more specific rules in POSTROUTING for this in your installation.
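
For comparison, the masquerade rule Docker itself creates is scoped to the bridge subnet and excludes traffic leaving via the bridge (the subnet is illustrative):

-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE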

The rule can be deleted with (please don't run this on any system without understanding it in detail):
iptables -t nat -D POSTROUTING -j MASQUERADE

Best regards,
Michael

@gabrielke

I found the solution on the traefik page and used it with jwilder/nginx-proxy.

The relevant part is:

# Completed sketch: service name and image filled in from the surrounding discussion
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
      - target: 80
        published: 80
        mode: host
      # Listen on port 443, default for HTTPS
      - target: 443
        published: 443
        mode: host

The main point is that you don't have to put the whole container into host networking mode, just those ports.
That means it remains part of the default docker-compose network too, so it is still available to all the "microservice" containers.
I tried it with docker-compose file version '3.8' and it works.

@thatjames

thatjames commented Jan 8, 2021

docker0 and any interface created by docker network create (which is what docker-compose uses) are bridged network interfaces.

Your router works the same way: it bridges your internal network to your ISP and performs Network Address Translation between the two.

That's why you don't see the IP your router assigned to you when you check your "public" IP: you actually see the ISP's side of the bridged connection. NAT is a hack to solve the address exhaustion of IPv4. It works well, but it is a pain.

docker0 does the same. Specifically, @trwm found it: it uses the PREROUTING and POSTROUTING chains in the NAT table to MASQUERADE packets between interfaces. (This is incredibly oversimplified for the sake of explanation; technically the kernel does all of this just above the hardware layer.)

Hence: Nginx will never see an external IP if it is behind docker's network bridges.

That's why the only way to do this is to use the host network.

@mfenniak

mfenniak commented Mar 15, 2021

I encountered this issue today and discovered a reason why this can happen that isn't documented (directly) here, so I wanted to add it to the conversation. Most versions of docker will use the userland proxy, rather than iptables/NAT capabilities, for sending traffic to nginx-proxy if the docker daemon is configured to use IPv6. The iptables/NAT capabilities didn't exist for docker's IPv6 networking stack until recently (moby/moby#41622), so the userland proxy was required to compensate in this case. This causes incoming connections to have a container-local IP address rather than the real incoming connection IP.

As this was recently fixed, there is light at the end of the tunnel if IPv6 support is the reason this feature doesn't work for you, as it was for me.

@tkw1536 added the kind/bug (Issue reporting a bug) label Apr 10, 2022
@polarathene
Contributor

Windows / macOS hosts AFAIK don't work with this (I don't have either available to test, but have read those platforms are more problematic; I'm not aware of any progress being made with them).

If it's helpful to those on Linux hosts, the following appears to work.


This is an observation from an IPv6-enabled VPS (Vultr) running Ubuntu 22.10 with Docker Engine 20.10.22 and Docker Compose 2.14.1:

  • IPv4 requests from external clients should already provide the correct remote IP in this environment.
  • userland-proxy only appears helpful when querying the IP of an interface that is externally reachable (e.g. not the loopback for localhost / 127.0.0.1 / [::1]). With it enabled, your local requests show the expected IP instead of the gateway IP.
  • If using IPv6, use ip6tables: true (requires experimental: true). Adding an IPv6 subnet to your containers' network avoids the IPv4 NAT routing, while this config change ensures you get back the IPv6 client/remote IP instead of the IPv6 docker gateway IP.

Below I use traefik/whoami as it's a simple way to test that everything is working properly. I can provide a reverse-proxy example with Caddy, but there's not much else to demonstrate AFAIK; it should be roughly the same with nginx-proxy?

daemon.json config (IPv6 + userland-proxy)

Relevant daemon.json config (userland-proxy is optional, it should be true by default):

/etc/docker/daemon.json:

{
  "ip6tables": true,
  "experimental" : true,
  "userland-proxy": true
}

Apply daemon.json config updates:

# Ensure the above config exists here:
nano /etc/docker/daemon.json
# Restart of docker service required,
# If `userland-proxy` was false, restart the system or manually remove the iptables+ip6tables rules that shouldn't have carried over
# https://github.com/moby/moby/issues/44721#issuecomment-1368603067
systemctl restart docker

IPv6 capable network (to avoid IPv6 client to IPv4 Gateway IP)

You can update the default bridge network for docker run (or network_mode: bridge in compose.yaml) to support / use IPv6 by configuring an IPv6 subnet in your daemon.json:

"ipv6": true,
"fixed-cidr-v6": "fd00:1111:2222:3333::/64",

Alternatively use a custom bridge network instead:

# Via CLI:
docker network create --ipv6 --subnet fd00:1111:2222:3333::/64 example
docker run --rm -d -p 80:80 --network example traefik/whoami
# Or via `compose.yaml` and then `docker compose up -d`:
services:
  test-remote-ip:
    image: traefik/whoami
    # If using a `daemon.json` ipv6 configured network, swap the custom default network for this line:
    #network_mode: bridge
    ports:
      - "80:80"

networks:
  # Overrides the `default` compose generated network, avoids needing to attach to each service:
  default:
    enable_ipv6: true
    # An IPv4 subnet is implicitly configured, IPv6 needs to be specified:
    ipam:
      config:
        - subnet: fd00:1111:2222:3333::/64

The subnet fd00:1111:2222:3333::/64 is an IPv6 ULA subnet (similar to the IPv4 private subnets). If you're not familiar with ULAs or IPv6 addresses in general, here is how I understand it:

  • The 00:1111:2222 part can be whatever hexadecimal values you like AFAIK, so long as they don't conflict with any others in your network.
  • The 3333 part is the subnet ID and can likewise be changed (each network you create with an IPv6 subnet should at least change this subnet ID, and/or the earlier routing prefix part).
  • Finally, ::/64 is the prefix length: the first 64 bits of the IPv6 address (the part defined above) should be left alone, while the remaining 64 bits are the interface ID, which can differ for each container in that subnet.
  • These subnets are just used internally for your container network(s) to connect and NAT against your public IPv6 address AFAIK, and this is required to properly preserve the client IPv6 address.

Testing

In both cases below, RemoteAddr should be an address in the same protocol (IPv4 or IPv6) that was used for the request.

While the example below does not use a reverse-proxy, I have done the same with Caddy, where it adds a header with the original client IP (X-Forwarded-For) when forwarding the request to another container.

In that situation, even with network_mode: host or with the reverse-proxy outside of a Docker container, I observe the remote IP (RemoteAddr below) as 127.0.0.1 or the container IP assigned to the reverse-proxy, not the docker network gateway itself.

I assume it's the same for nginx-proxy or traefik, so this seems to work as intended.
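
For reference, the Caddy side is nothing special; a minimal sketch (the domain and upstream service name are placeholders, and Caddy appends X-Forwarded-For on its own when proxying):

# Hypothetical Caddyfile
example.com {
    reverse_proxy test-remote-ip:80
}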

Client request from the docker host

# Send a request to your servers externally reachable IP, like this IPv6 address:
$ curl -s http://[2001:19f0:7001:13c9:5400:4ff:fe41:5e06] | grep RemoteAddr

# Correct output for `userland-proxy: true` (Interface IP matched):
# NOTE: For the loopback interface it will be the Docker network gateway IP
RemoteAddr: [2001:19f0:7001:13c9:5400:4ff:fe41:5e06]

# Correct output for `userland-proxy: false` (Docker network IPv6 gateway IP):
RemoteAddr: [fd00:1111:2222:3333::1]

Client request from remote server (also with IPv6)

# Send a request to your servers externally reachable IP, like this IPv6 address:
$ curl -s http://[2001:19f0:7001:13c9:5400:4ff:fe41:5e06] | grep RemoteAddr

# Correct output (Requests Client IP is matched):
# NOTE: `userland-proxy` doesn't affect this outcome
RemoteAddr: [2001:19f0:7001:1811:5400:4ff:fe42:cee8]

@pini-gh
Contributor

pini-gh commented Jan 20, 2024

Many thanks @polarathene for this detailed howto. It helped me to better understand the IPv6 Docker documentation.

I share here the Docker daemon configuration I came up with:

$ cat /etc/docker/daemon.json 
{
  "experimental": true,
  "ip6tables": true,

  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80",

  "default-address-pools" : [
    {
      "base" : "172.17.0.0/12",
      "size" : 24
    },
    {
      "base" : "fd00:1::/104",
      "size" : 120
    }
  ]
}

A few notes below.

It is recommended to use ULA addresses. From the IPv6 Docker doc:

The address 2001:db8 in this example is reserved for use in documentation. Replace it with a valid IPv6 network. The default IPv4 pools are from the private address range, the IPv6 equivalent would be ULA networks.

Configuring the IPv6 subnet pools is tricky. I had to try many prefix length / size combinations to understand what was possible and what was not. This note from the Docker documentation is important:

Be aware that the following known limitations exist for IPv6 pools:

Configuring IPv6 subnet pools enables declaring IPv6 networks with no subnet specified. The network configuration from @polarathene's example above becomes:

networks:
  # Overrides the `default` compose generated network, avoids needing to attach to each service:
  default:
    enable_ipv6: true

If the above example produces this error:

ERROR: could not find an available, non-overlapping IPv6 address pool among the defaults to assign to the network

then you'll have to change the subnet pools' prefix length (up) and/or size (down).
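
Each pool yields 2^(size - prefix length) subnets, so my configuration above allows 2^(120-104) = 65536 IPv6 networks of 256 addresses each. A smaller pool (illustrative values) that still allows 256 networks of 256 addresses each would be:

"default-address-pools" : [
  {
    "base" : "fd00:1::/112",
    "size" : 120
  }
]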

I configured IPv4 subnet pools as well, following this helpful blog post. I keep 192.168.0.0/16 out of the pools for manually configured subnets.

@polarathene
Contributor

Many thanks @polarathene for this detailed howto. It helped me to better understand the IPv6 Docker documentation.

You are welcome! Also, I participated in the review and revisions of the Docker IPv6 docs you linked; they are in much better shape than they used to be! 😝

I invested a lot of time going through the same pains (or worse haha), glad to hear it's helpful! ❤️


It is recommended to use ULA addresses. From the IPv6 Docker doc
Configuring the IPv6 subnet pools is tricky. I had to try many prefix length / size configurations to understand what was possible or not.

This was my advice; I emphasized that it was important to document the default IPv4 pools, along with an example of adding an IPv6 pool.

I don't think the IPv6 docs are the best location for that; they may relocate the documentation on default pools to another page in the future.

I thought that all you needed to know about configuring the pools was covered in the IPv6 docs?: https://docs.docker.com/config/daemon/ipv6/#dynamic-ipv6-subnet-allocation

Perhaps the terminology was a bit too much and not as easy to follow when first exposed to it? I would suggest /112 instead of /104, as <IPv6 address>/112 is equivalent to <IPv4 address>/16: 16 bits of host space (65k IPs per subnet). The docs do try to communicate that.


I wasn't happy about their decision to use the documentation IPv6 range while mixing in actual private-range IPv4 subnets, however.

I couldn't win the Docker reviewers over on that inconsistency, but if you look at the IPv6 docs I wrote for Docker Mailserver, you will get some better advice:

[Screenshots from the Docker Mailserver IPv6 documentation]

@polarathene
Contributor

polarathene commented Jan 20, 2024

"fixed-cidr-v6": "fd00::/80"

You don't need to set this so large, by the way; it's unrelated to the IPv6 pools. /112 should work just as well here too.

That is for the default docker0 network (the legacy bridge, which does not behave the same as custom networks from docker network create, or those implicitly created by Docker Compose, which use the address pools). That said, since it's not an address pool (which takes a base and a subnet size to split by), you won't have the memory issue that address pools can have.

/80 had been advised elsewhere in the past for historical reasons (which IIRC may actually still apply to docker0 here), namely using the last 48 bits of the address as the container's MAC. That would still be the case, I think, when you explicitly assign a /80 subnet, but with anything else it'll assign IP addresses incrementally like you'd expect instead of deriving them from the MAC.

In the docs I wrote, I assign a /64 block, the full width of the IPv6 interface identifier, while also clearly defining the full width of the network prefix (routing prefix + subnet ID):

[Screenshot from the Docker Mailserver IPv6 documentation]
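
That is, something of this shape in daemon.json (the ULA prefix below is a placeholder; generate your own):

"ipv6": true,
"fixed-cidr-v6": "fd00:aaaa:bbbb:cccc::/64"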


"ipv6": true

That is only relevant to the docker0 legacy bridge; you don't need it either, unless you want IPv6 addresses when using docker run without --network.
