Original IP is not passed to containers ... #157
Comments
See also moby/moby#15086
@aguedeney thanks for raising! I see that @stephen-turner has linked the Moby issue. We will track here as well :)
Is everyone here using HTTP? Have had some discussion about explicit HTTP support for Compose at least. In which case, having layer 7 routers pass X-Forwarded-For is an option, versus TCP changes.
Nginx or Traefik proxies for Docker are loyal and reliable companions of any Dockerized HTTP server. You can find many ready-to-use examples of Compose files using Google, and the Docker images are on Docker Hub.
I achieved a working solution with docker-compose by using the 'X-Real-IP' header provided by the nginx-proxy container's default configuration. Obviously this is a workaround, just thought I'd put it out there.
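For anyone wanting to try the same workaround, a minimal compose sketch might look like the following (the backend image name and VIRTUAL_HOST value are placeholders, not from the comment above; nginx-proxy generates its config by watching the Docker socket):

```yaml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # nginx-proxy watches the Docker socket to generate its config
      - /var/run/docker.sock:/tmp/docker.sock:ro

  app:
    image: my-app:latest            # placeholder backend image
    environment:
      # nginx-proxy routes requests for this hostname to the container
      - VIRTUAL_HOST=app.example.com
```

The backend then reads the client address from the X-Real-IP (or X-Forwarded-For) request header rather than from the TCP source address, which will still be a Docker-internal IP.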
It is still binding to the Docker socket, which is not the best solution. Can anyone comment on how providers like Traefik get around the security issues with mounting the socket (if they do)?
Every Kubernetes node runs kube-proxy, has the built-in SELinux firewall, and is a member of the Kubernetes cluster VPN. Traffic between Kubernetes nodes is filtered by Istio. The Nginx and Traefik Ingresses have no direct access to the Docker socket. Kubernetes security is tested and proven.
Docker containers and pods never expose Unix domain sockets; all Docker and Podman networking is TCP/IP networking, and there is no way to EXPOSE a Unix socket in a Dockerfile. Only the Docker engine may expose an API socket, but user requests never use the Docker API; it is used only by orchestration engines. When Docker / Podman run on SELinux nodes, the API sockets are protected very well by native SELinux security. The Docker API can't extract and use HTTP headers from Docker API calls because the default Docker API host is a Unix socket. The Docker API host is configured in daemon.json; it uses at least TLS 1.1 via port 2376. Port 2375 can be used without TLS, but with warnings in the docker info output. On the client side, security-related settings are configured in the Docker context and Kubernetes context.
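As a concrete illustration of the daemon.json configuration mentioned above, a sketch like this exposes the API over TLS on port 2376 alongside the default Unix socket (the certificate paths are placeholders you would generate yourself):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem"
}
```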
In host mode it is passed, but host mode is only available on Linux. A special situation is when the Docker host is IPv4 and IPv6 (which is quite normal today) and the containers are IPv4 only. Something IPv6 / libnetwork related seems to be coming in the next release, v20.10...
M$ would be happy and state in their advertisement: "We finally introduced that thing 'privacy'". Real talk: it would be awesome if it were possible to work around this by using host networking and wiring it to the other containers, but as far as I know wiring containers together is only possible in bridge mode - and the Docker docs I've really tried to read (and understand) seem to state that too. We need a fix... and a date ASAP. When can we expect the new Docker (engine?) version? And when does it include the "privacy on" switch, or better, the "turn off non-compliance mode"? ;-)
Some use cases that I constantly have a problem with:
With rootless Docker, the source IP is also not passed to the container (e.g. when trying to log access to a reverse proxy). This is the default behaviour; however, it would be great if this were better documented - it took lots of searching to figure out what was happening.
Hope this can be implemented soon. This is badly affecting our sticky services deployed with Docker. Now we need to add extra configuration and placement for those services, since host network mode is not suitable for production deployment.
We just came across this issue as well, and it's a serious one for us, as we need the client IP for security reasons.
This is a real issue. We need the client IP for security reasons, but in bridge mode it is not possible. Hoping for a solution in bridge mode soon.
To solve the problem for now, set up a reverse proxy outside the swarm that keeps track of the IP and can forward it as a header to the service if necessary.
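A minimal HAProxy sketch of that approach, assuming a swarm node publishing the service on port 8080 (addresses and names are placeholders):

```
# haproxy.cfg - external proxy in front of the swarm that records
# the client IP in a header before forwarding upstream
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend fe_http
    bind *:80
    option forwardfor        # adds "X-Forwarded-For: <client IP>" to each request
    default_backend be_swarm

backend be_swarm
    server swarm1 203.0.113.10:8080   # placeholder swarm node address
```

The service inside the swarm then reads X-Forwarded-For instead of the TCP source address.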
We have a gigantic application for document capture and ECM. We need this functionality to validate SNMP data from multifunction devices; for security reasons, the licenses are based on information in the MIB, such as the serial number and MAC address. I came across this problem, and the worst part is that it's not documented - I spent days trying to work around it somehow. Unfortunately, on researching more deeply, this problem dates back more than 4 years and to this day has not been fixed; I believe they are not caring about this.
Pardon my lack of networking knowledge (a weakness I am actively working to fix), but why does the X-Forwarded-For header get changed in between HAProxy and my container? I thought the path to my container was taken care of by iptables routing/forwarding the data to the correct location. And since headers are part of HTTP, wouldn't iptables just ignore them? My specific case, in case I'm missing something obvious: Ubuntu 20.04 running HAProxy and Docker normally (no Kubernetes, and HAProxy not in a container). HAProxy sends traffic to a docker-compose based php:7-apache app. In the app logs I just see the Docker network's gateway and the container's Docker IP. While in... (Edit: just wondered to myself if the forums would be a better place for this post, but my last few posts there didn't get any replies, and my question is directly related to this issue.)
@jerrac Yes, it would be - as would not asking in an existing issue that targets a different problem.
I have the "original IP is not passed to containers" problem with Docker Desktop on Windows. The problem made my SMTP server function as an open relay, since it thought that all connections were "local" (due to the Mailu default config).
I just had this issue with NetFlow UDP packets. It was working properly for a long while; then I moved the container host to a new subnet, and it continued to work until I moved the docker-compose instructions into a consolidated file with other containers. For some reason, at that point I was seeing one of the two source IPs (the primary router) that go to this service as the Docker gateway IP. The other (the secondary router) was still correct. I restarted the netflow service on the primary router and the container started seeing the correct source IP again, but the secondary router's source IP then became the Docker gateway IP inside the container. I restarted the netflow service on the secondary router, and both now show up correctly in the container again.
It helps to include the host IP when binding the SMTP port from the container.
When there is a connection to the port on the external IP, the source address is preserved properly and authentication is required.
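A compose sketch of that binding, assuming 203.0.113.10 stands in for the host's external address and a hypothetical SMTP image:

```yaml
services:
  mail:
    image: my-smtp-server:latest      # placeholder image
    ports:
      # HOST_IP:HOST_PORT:CONTAINER_PORT - publishing on the external
      # host IP (rather than all interfaces) preserved the source
      # address in this setup
      - "203.0.113.10:25:25"
```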
Lots of lovely discussion about the issue, but little in the way of when we can expect a fix. Given it's been an issue for over 4 years, and on the list for 18 months, I'm guessing it's difficult to fix. Can we get some feedback? I'm rapidly running out of time and will have to de-containerize everything if there is no solution. I just need an estimate, please.
FWIW for people coming here (like myself) looking for a solution to this issue, I think a workaround is to have nginx (or something) in front of your host forwarding
Although that may work, we're in a situation where we can't rely on that and must dockerize Nginx and have it at the front, routing all the traffic to a variety of other services.
At our server we are running multiple containers, and they now get the remote IP address properly. For us, the problem occurred when we ran the same image multiple times. When we compiled a new image for every container and then ran them (on separate ports, of course), they all received the proper remote addresses. The port mappings for the 3 containers that we now use are 8080:443, 8090:443, and 9010:443, without the problem. When the problem has occurred, the Docker service has to be restarted before gearing up the new containers. This is on Linux (CentOS).
I seriously can't believe that in the nearly 6 years since this issue was first brought up with the Docker folks, it has not been fixed, and there has been hardly any communication. I'm at a loss. As a developer myself, I have to wonder what they are doing. If I told a client "I'm looking into the issue" and that was the only feedback I gave over the course of YEARS, I would be fired. I love Docker, but this is completely ridiculous.
Yes, that, and you could have looked into Hyper-V or WSL.
I partially worked around this by installing nginx directly on the macOS host using Homebrew. It listens for all port 80/443 traffic hitting the host and proxy_passes it to itself on a different port (which a second nginx inside Docker is listening on). The host nginx applies the X-Forwarded-For header value with the original source IP. The nginx inside Docker can then read that IP. It's not perfect: it would be nice if I didn't need the host nginx, it only works for traffic which nginx can proxy_pass, and most web applications won't know to look for that header, but it works. For example, the original public IPs that access my web applications inside Docker are visible in logs - which is important for me to see in my Authelia logs, as an example.
@Junto026 Do you now manage SSL certs on the host, assuming you have SSL certs for your applications in the containers? I have the same use case, but I am running Apache in the containers, and SSL certs and virtual hosts are managed inside the container; I just need to reverse proxy all 80 and 443 requests to the container. I would prefer to keep cert management in my Docker logic as well.
@nitishdhar I have SSL Certs for each of the web servers running in docker, generated via certbot. Before, I only had nginx running inside a container and pointed that docker nginx to my cert files. But after I added an additional nginx install onto the host, I now store and point to the certificate files on the host, and I no longer need to reference the certificate files inside the docker nginx. So I think you'd keep doing everything you're already doing, but install a second apache on the host and point your host apache to the cert files you're already generating. You can of course still also point your docker apache to the cert files, but it's not required unless you're encrypting the internal communications or an app requires certs on the internal communications. Example nginx server block on my host nginx:
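The example block itself didn't survive in this thread, but based on the description a reconstruction might look like this (hostname, ports, and cert paths are placeholders):

```nginx
# Host nginx: terminates TLS with the certbot certs stored on the host
# and forwards to the second nginx inside Docker on another port.
server {
    listen 443 ssl;
    server_name app.example.com;   # placeholder hostname

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;   # docker nginx published here
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```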
Example server block on the second nginx in docker (note I since commented out the certificate reference line):
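Again reconstructed from the description rather than the original post (upstream name and ports are placeholders):

```nginx
# Second nginx, inside Docker, listening on the port the host nginx
# forwards to. The certificate references are commented out because
# TLS is already terminated on the host.
server {
    listen 8080;
    server_name app.example.com;    # placeholder hostname

    # ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://app:3000;             # placeholder upstream container
        # pass through the client IP supplied by the host nginx
        proxy_set_header X-Real-IP       $http_x_real_ip;
        proxy_set_header X-Forwarded-For $http_x_forwarded_for;
    }
}
```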
So how many years has it been already? I've lost track.
Same problem here. Are there any updates, or at least a roadmap?
You might need to share a bit more about the issue you're facing? This issue is rather generic in scope; there are more specific issues for each, IIRC. So if you want to ask whether something is fixed or how to fix it, it helps to document the particular issue you're dealing with - better yet, with a reproduction example.

IPv6 client IP replaced by IPv4 gateway IP

TL;DR: Fix with either of the two options below.

If you have a container using a published port, any external IP should be correct. If your Docker host is reachable via IPv6 but the container does not have an IPv6 address assigned (and the implicit default userland proxy is in effect), the connection is proxied over IPv4 and the container sees the gateway IP instead of the client IP. To fix that, you'll want to either disable the userland proxy or enable IPv6 so the container is reachable directly.

Client IP replaced by Reverse Proxy IP

For HTTP/HTTPS, this should usually be a non-issue, as the reverse proxy software can include the appropriate host/forwarded header. In some cases that's not sufficient, or you are proxying TCP connections. Reverse proxies can use PROXY protocol, but this requires the proxied service to support accepting PROXY protocol connections (these just include an additional header at the start of the connection to preserve the real IP).

Something else?

It should matter less for non-production deployments, but could be a bit inconvenient. Depends what you're doing. I'm familiar with Linux and WSL, and WSL I imagine is a bit similar to macOS support, where there's some extra complexity.
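For the IPv6 option, a daemon.json sketch might look like this (the ULA subnet is a placeholder; at the time of writing, ip6tables support still required the experimental flag, and availability depends on your Docker version). The alternative fix is simply "userland-proxy": false instead.

```json
{
  "experimental": true,
  "ipv6": true,
  "ip6tables": true,
  "fixed-cidr-v6": "fd00:dead:beef::/64"
}
```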
The issue described here, IMO, is with swarm: if no load balancer is placed in front of the swarm that sets the original IP as a header when forwarding upstream, then the original IP is lost. So a reverse proxy that exists inside a swarm setup - rather than outside of it, as with an external reverse proxy or load balancer - loses the original IP, and we cannot trace where the requests come from. The solution, as I mentioned before, and as I believe everyone uses, is to simply not rely fully on swarm, but to have an external component/reverse proxy/load balancer in front of the cluster, where the IP still exists and is set as a non-standard header, which the service inside the swarm then has to be adapted to if we want to use the IP. It would, however, be nice if we could have a global reverse proxy in the swarm and, with the help of a floating IP, rely solely on the swarm cluster rather than this external component. Then, if a host goes down, the IP is attached to a different host that now acts as the reverse proxy host, with the difference that the component is still hosted by the swarm.
Ah ok, sorry I missed that. I don't have experience with swarm, I thought that swarm had been on the decline in favor of k8s becoming a more dominant choice for scaling production.
In that case, you should be able to use something like Traefik to proxy TCP/UDP connections to an internal reverse-proxy (or direct to a container service when viable) and internally when forwarding that connection append the PROXY protocol header. That's rather simple to do and I could provide an example of the config for those two reverse proxies. Doing that should preserve the client IP and work well for most?
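To illustrate, a hedged sketch of the Traefik side (v2, file provider; the entrypoint, service names, and upstream address are placeholders, and the "smtp" entrypoint would need to be defined in the static config):

```yaml
# Traefik dynamic configuration - routes raw TCP to an internal
# reverse proxy while adding a PROXY protocol header, so the
# upstream still sees the original client IP.
tcp:
  routers:
    smtp-router:
      entryPoints:
        - smtp
      rule: "HostSNI(`*`)"      # match any connection on this entrypoint
      service: smtp-service

  services:
    smtp-service:
      loadBalancer:
        proxyProtocol:
          version: 2            # send PROXY protocol v2 upstream
        servers:
          - address: "internal-proxy:25"   # placeholder upstream
```

The internal proxy (or the service itself) must be configured to accept PROXY protocol from Traefik, otherwise the extra header is treated as garbage.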
That would be similar to an ingress controller in k8s? I recently updated documentation for a project I maintain where we needed to rely on PROXY protocol for preserving the client IP to a mail server container; within the context of k8s this was more complicated, as the containers are a little less flexible with routing through ingress, instead needing direct container-to-container connections without PROXY protocol support as well.
Yes it has, but that's a different product, and aside from the point.
Well, yes - this is what I just explained to you, and what I believe everyone does to solve the problem: we use a component before traffic reaches the swarm that maps the IP to a non-standard header... or did I misunderstand what you meant? Anyway, this is not the place to converse about it.
Ingress controllers and floating IPs operate at different levels and serve different and distinct purposes, so they are not directly similar. Floating IPs are used to redirect network traffic to any virtual machine within a cloud environment, primarily for ensuring accessibility and facilitating failover. An ingress controller is used to expose HTTP services to the external network. A floating IP is a feature for assigning a static IP address to an instance or resource that can be remapped as needed. While both ingress controllers and floating IPs deal with managing access to services, they do so in fundamentally different ways and can be used together rather than replacing each other. Similar to an ingress controller in k8s, you have the Docker swarm mode routing mesh in Docker swarm: https://docs.docker.com/engine/swarm/ingress/
...which underlines one of the reasons why I prefer to work with swarm; I also often find that k8s is more complex to get work done in. I enjoy the conversation, but this is perhaps not the place for it :) To contextualize, and to answer the underlying and previous question on the topic: the issue is simply that we would like to be able to read the source IP of the client that reached our containers. No matter whether this could be done by using a different product or a different approach, we would like to be able to do it using this product.
I am referring to a standard solution for this called PROXY protocol. It prepends a header to Layer 4 (TCP) connections that the other end of the connection will accept and treat as the actual client IP. You can read about it on this docs page I wrote (linking to the edge version, as it's an unreleased rewrite). A related guide for k8s is here.
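For a sense of how lightweight this is, a PROXY protocol v1 header is a single human-readable line prepended to the TCP stream (addresses and ports below are examples):

```
PROXY TCP4 203.0.113.5 192.0.2.10 56324 25\r\n
...normal application data follows, e.g. the SMTP banner exchange...
```

v2 is an equivalent binary encoding; either way, the receiving end must be configured to expect the header, since it isn't valid application data otherwise.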
I am still inexperienced with k8s myself, I've only worked with Docker / Docker Compose without any scaling needs personally. I helped review and revise the k8s docs I linked. We have the ingress controller there as the external public IP endpoint to route connections to a service at one or more instances via IPs managed by the load balancer IIRC.
Since you know swarm well, it sounds like what you're asking for is support for PROXY protocol? Perhaps a more explicit feature request would serve that better. Here's some context from those links I provided:
That kind of comment doesn't help move anything here along; it's just noise, and everyone subscribed gets pinged/notified pointlessly. Please only chime in when you have something worthwhile to contribute. If you have a problem that isn't resolvable with the advice above, though, please share it. My previous comment right above yours explains how to preserve the IP, while an earlier one a few posts back describes how an IPv6 client IP will be lost if you have an IPv4-only container behind the default userland proxy. The configuration to resolve those concerns is very simple. The PROXY protocol one has nothing to do with Docker if you run into it, while the IPv6 one does. Ideally the IPv6 support could be enabled by default and no longer need the experimental flag; users may still expect IPv6 addresses in the default address pool, though, and perhaps not everyone would be happy with those decisions 🤷‍♂️ I'm not sure what the status is regarding this switch-over; I know there were some plans to move from iptables to IPVS, and the current IPv6 support may not be in good enough shape yet, so Docker tries not to intervene with routing there?
Came here through a circuitous route. This is a deal breaker. Given the amount of time this problem has existed and the lack of a solution... the lack of a simple fix for an important must-have is NOT good. Docker already adds complexity and a learning curve; although it does solve some problems, this is a hard-line reason to avoid it. We cannot operate without the visitor's IP address. All of the effort to get this running, only to hit a "big-broken-dead-and-not-going-to-fix" like this, is disappointing. I have to believe there is a different solution for such a large problem.
Yes, it is (though I don't know it, as I've just settled on using plain Docker and Ansible instead of swarm or kube, because of the learning curve and this).
Do you need swarm? For any other case, I covered above how you should be able to avoid the issue. Please provide more context on the problem / use case you're facing.
Four years after the issue was opened, it is still not resolved xD
Is there any news on this topic yet?
Please provide more context on the problem / use case you're facing. I provided advice above that should let you avoid the issue (except for swarm):
If you need more info, let me know. If the above isn't a solution for you and you're not using Docker Swarm, then share what is different about your setup.
I'm using Docker on a macOS machine, but I don't have access to the macOS system itself. I run a PHP container. When I run phpinfo(), all the relevant IPs are wrong.
The issue affects macOS, Windows, and Linux. We'd like to see it on the roadmap somewhere.
docker/for-mac#180