
Original ip is not passed to containers ... #157

Open
ghost opened this issue Oct 14, 2020 · 52 comments
Labels
community_new (New idea raised by a community contributor), open source (Improvements to open source projects)

Comments

@ghost

ghost commented Oct 14, 2020

The issue affects macOS, Windows, and Linux. We'd like to see it on the roadmap somewhere.

docker/for-mac#180

@stephen-turner

See also moby/moby#15086

@nebuk89 added the community_new and open source labels Oct 16, 2020
@nebuk89
Contributor

nebuk89 commented Oct 16, 2020

@aguedeney thanks for raising! I see that @stephen-turner has linked the Moby issue. We will track here as well :)

@justincormack
Member

Is everyone here using HTTP? We have had some discussion about explicit HTTP support for Compose, at least. In that case, having layer 7 routers pass X-Forwarded-For is an option, versus TCP-level changes.

@PavelSosin-320

Nginx or Traefik proxies for Docker are loyal and reliable companions of any Dockerized HTTP server. You can find many ready-to-use examples of Compose files using Google; the Docker images are on Docker Hub.

@deltabrot

I achieved a working solution with docker-compose by using the 'X-Real-IP' header provided by the nginx-proxy container's default configuration. Obviously this is a workaround, just thought I'd put it out there.
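
A minimal docker-compose sketch of that kind of setup, assuming the nginxproxy/nginx-proxy image (the service names, app image, and VIRTUAL_HOST value below are placeholders, not taken from the comment):

    services:
      nginx-proxy:
        image: nginxproxy/nginx-proxy
        ports:
          - "80:80"
        volumes:
          # nginx-proxy watches the Docker socket to generate its proxy config
          - /var/run/docker.sock:/tmp/docker.sock:ro
      app:
        image: my-app
        environment:
          # nginx-proxy routes requests for this hostname to the container and
          # adds X-Real-IP / X-Forwarded-For headers on the way through
          - VIRTUAL_HOST=app.example.com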

@olegjo

olegjo commented Nov 5, 2020

I achieved a working solution with docker-compose by using the 'X-Real-IP' header provided by the nginx-proxy container's default configuration. Obviously this is a workaround, just thought I'd put it out there.

It is still binding to the Docker socket, which is not the best solution. Can anyone comment on how providers like Traefik get around the security issues with mounting the socket (if they do)?

@PavelSosin-320

Every Kubernetes node runs kube-proxy, has a built-in SELinux firewall, and is a member of the Kubernetes cluster VPN. Traffic between Kubernetes nodes is filtered by Istio. Nginx and Traefik ingresses have no direct access to the Docker socket. Kubernetes security is tested and proven.

@PavelSosin-320

Docker containers and pods never expose Unix domain sockets. All Docker and Podman networking is TCP/IP networking. There is no way to EXPOSE a Unix socket in a Dockerfile. Only the Docker engine may expose its API socket, but user requests never use the Docker API; it is used only by orchestration engines. When Docker/Podman run on SELinux nodes, the API sockets are protected very well by native SELinux security. Docker can't extract and use HTTP headers from Docker API calls because the default Docker API host is a Unix socket. The Docker API host is configured in daemon.json; it uses at least TLS 1.1 via port 2376. Port 2375 can be used without TLS, but with warnings in the docker info output. On the client side, security-related settings are configured in the Docker context and the Kubernetes context.
It is possible to configure other hosts as API sockets in daemon.json, but only Unix or TCP sockets, not HTTP.
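
For illustration only, a daemon.json exposing the API over TCP with TLS might look roughly like this (the certificate paths and bind address are placeholders, not taken from this thread):

    {
      "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
      "tlsverify": true,
      "tlscacert": "/etc/docker/certs/ca.pem",
      "tlscert": "/etc/docker/certs/server-cert.pem",
      "tlskey": "/etc/docker/certs/server-key.pem"
    }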

@kolbma

kolbma commented Nov 12, 2020

In host mode it is passed. But host mode is only available on Linux.

A special situation is when the Docker host has both IPv4 and IPv6 (which is quite normal today) and the containers are IPv4 only.
Clients connecting over IPv4 reach the host over IPv4 (host mode/Linux only) and the correct client source address is seen in the container.
But if the client connects over IPv6, it is routed through the docker_gwbridge via IPv4 and the container can only see the IP address of the bridge. MS would say this is a feature 😄
So it would be nice if we could enable IPv6 in the swarm containers, with support in Compose >= 3.

It seems something IPv6/libnetwork-related is coming in the next release, v20.10...
But yes, there should be some information for the many open IPv6/source-IP-related and duplicate-closed issues with hacks and special-case workarounds on moby/moby. It could fill a book. At the very least, more complete documentation at https://docs.docker.com/config/daemon/ipv6/ would help.
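
For anyone who wants to experiment, a minimal daemon.json sketch for enabling IPv6 looks roughly like this (the ULA subnet is only an example; depending on the Docker version you may also need the experimental flag and/or ip6tables support):

    {
      "ipv6": true,
      "fixed-cidr-v6": "fd00:cafe:1::/64"
    }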

@ksaadDE

ksaadDE commented Nov 23, 2020

MS would say this is a feature 👍

M$ would be happy and state in their advertisement: " We finally introduced that thing "privacy" ".
Since the real ip isn't shown it's a good side-effect (who needs the real ip.... no real compliance problems...)

Real talk: it would be awesome if it were possible to work around this by using host networking and wiring that container to the others, but as far as I know (and the Docker docs I've really tried to read and understand seem to state this too) that kind of wiring is only possible in bridge mode.
TL;DR: it sucks!

We need a fix.... and a date ASAP. When can we expect the new docker (engine?) version? And when does it include the "Privacy on" Switch or better the "turn off non-compliance mode"? ;-)

@jwillmer

Some use cases that I constantly have a problem with:

  • Filtering network traffic based on the IP (corporate (internal) network or public (internet))
    • Used for different apps behind a reverse proxy to limit accessibility by the public
  • Identifying (micro)services
    • We use a workflow engine that gets triggered by other microservices. It would be great to identify the callers by IP to distinguish which microservice triggered the action - very useful for debugging!

@MarkErik

With rootless Docker, the source IP is also not passed to the container (e.g. when trying to log access to a reverse proxy). This is the default behaviour, but it would be great if it were better documented; it took lots of searching to figure out what was happening.

@windmemory

Hope this can be implemented soon.

This is badly affecting our sticky services deployed with docker. Now we need to add extra configurations and placement for those services, since host network mode is not suitable for production deployment.

@rjhancock

We just came across this issue as well and it's a serious issue for us as we need the client ip for security reasons.

@shaharyar-shamshi

This is a real issue. We need the client IP for security reasons, but in bridge mode it is not possible. Hoping for a solution in bridge mode soon.

@superhero

To solve the problem for now, set up a reverse proxy outside the swarm that keeps track of the IP and can forward it as a header to the service if necessary.

@filipeaugustosantos

We have a gigantic application for document capture and ECM. We need this functionality to validate SNMP data from multifunction devices; for security reasons, the licenses are based on information in the MIB, such as serial number and MAC address. I came across this problem, and the worst part is that it's not documented; I spent days trying to work around it somehow. Unfortunately, researching more deeply, this problem dates back more than 4 years, and to this day it has not been implemented. I believe they are not caring about this.

@jerrac

jerrac commented Mar 13, 2021

Pardon my lack of networking knowledge (a weakness I am actively working to fix), but why does the X-Forwarded-For header get changed in between HAProxy and my container? option forwardfor should set it correctly since HAProxy's logs show the correct ip address.

I thought the path to my container was taken care of by iptables routing/forwarding the data to the correct location. And since headers are part of http, wouldn't iptables just ignore them?

My specific case, in case I'm missing something obvious: Ubuntu 20.04 running HAProxy and Docker normally (no Kubernetes, or HAProxy in a container.) HAProxy sends traffic to a docker-compose based php:7-apache based app. In the app logs I just see the Docker network's gateway, and the container's Docker ip. While in /var/log/haproxy.log, I see my client ip just fine.

(Edit: Just wondered to myself if the forums would be a better place for this post, but my last few posts there didn't get any replies, and my question is directly related to this issue.)

@kolbma

kolbma commented Mar 14, 2021

@jerrac Yes, it would be, rather than asking in an existing issue targeting a different problem.
For your problem... Apache doesn't log the X-Forwarded-For address by default. I think you have to modify your Apache config. But let's stop here.
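
For reference, a rough Apache sketch (the "proxied" format name is arbitrary, and the header is only present if the proxy in front actually sets it):

    LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
    CustomLog ${APACHE_LOG_DIR}/access.log proxied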

@DarioFra

DarioFra commented Apr 5, 2021

I have the problem "original IP is not passed to containers" with Docker Desktop on Windows. The problem made my SMTP server function as an open relay, since it thought that all connections were "local" (due to the Mailu default config).
On Linux (CentOS), the bridge network configuration still shows the correct source address (no need to use host networking there, which is good). Will probably run Docker on Linux under System V instead if this is not solved anytime soon.

@avatar4d

I just had this issue with NetFlow UDP packets. It was working properly for a long while, then I moved the container host to a new subnet and it continued to work, until I moved the docker-compose instructions into a consolidated file with other containers. For some reason, at that point one of the two source IPs that send to this service (the primary router) showed up as the Docker gateway IP. The other (the secondary router) was still correct.

I restarted the netflow service on the primary router and the container started seeing it as the correct source IP again, but the secondary router source IP now became the Docker gateway IP inside the container. Restarted the netflow service on the secondary router and both now began showing up correctly in the container again.

@cryptogopher

@DarioFra

I have the problem "original IP is not passed to containers" on docker desktop in windows. The problem made my smtp server function as an open relay since it thought that all connections where "local". (due to the mailu default config)

It helps to include the host IP when binding the SMTP port from the container.
For example, I have 2 entries under the ports: section:

ports:
  - '127.0.0.1:25:25'
  - '<external-IP>:25:25'

When there is a connection to the port on the external IP, the source address is preserved properly and authentication is required.
When there is local delivery, it is sent through 127.0.0.1:25 and the source IP is masqueraded to Docker's bridge interface IP (which can be distinguished, and mail can be accepted without authentication if needed).

@rjhancock

is there any acceptable work around?

Disable the userland proxy.

To underline the "acceptable"

Not to mention, doesn't seem to work (at least on Windows Docker Desktop) with Nginx.

Doesn't work on non-linux OS's.

@ksaadDE

ksaadDE commented Nov 13, 2021

Doesn't work on non-linux OS's.

Docker is anyways buggy on non *nix OS. Also there's no reason for using Windows :P

@rizktouma

Docker is anyways buggy on non *nix OS. Also there's no reason for using Windows :P

My issue was specifically with Docker for Windows, should have mentioned that in my original comment. So if it doesn't work there it's not much of a solution in my case (and no I can't just choose Linux, I don't control the deployment machine)

@dawid-woitaschek

Oh my freaking god... I'm not often freaking out, but c'mon, what is the problem here? Originally coming from Pi-hole, searching for a serious solution, and there is not a single fart coming from the devs? Gawd... Moving to good ol' style VMs then, if something as basic as this doesn't work properly.

@FelixSFD

@dawid-woitaschek https://github.com/docker/code-of-conduct/blob/master/code-of-conduct-EN.md

@Jollyjohn

Lots of lovely discussion about the issue, but little in the way of when we can expect a fix. Given it's been an issue for over 4 years, and on the list for 18 months, I'm guessing it's difficult to fix. Can we get some feedback? I'm rapidly running out of time and will have to de-containerize everything if there is no solution. I just need an estimate, please.

@peabnuts123

FWIW for people coming here (like myself) looking for a solution to this issue, I think a workaround is to have nginx (or something) in front of your host forwarding X-Forwarded-For headers and such to your services (e.g. Traefik). Obviously nginx will have to be running on the host to get the source IP, so either install it natively on a machine (could be a small reverse-proxy machine), or there may be a way to run nginx in Docker with host networking and then connect to containers on a bridge network (sounds theoretically possible, but I haven't explored it yet).
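
A rough Compose sketch of that last idea (image names, ports, and the config path are made up, and network_mode: host only works on Linux):

    services:
      edge-proxy:
        image: nginx
        # host networking, so nginx sees the real client IP
        network_mode: host
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
      app:
        image: my-app
        # publish only on loopback; the host-networked nginx reaches it at 127.0.0.1:8080
        ports:
          - "127.0.0.1:8080:80"

The nginx.conf would then proxy_pass to 127.0.0.1:8080 and set the X-Forwarded-For / X-Real-IP headers, much like the host-level setups described above.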

@rjhancock

Although that may work, we're working on a situation where we can't rely on that and must dockerize Nginx and have it at the front to route all the traffic to a variety of other services.

@dannygorter

dannygorter commented May 4, 2022

On our server we are running multiple containers, and they now get the remote IP address properly. For us, the problem occurred when we ran the same image multiple times. When we compiled a new image for every container and then ran them (on separate ports, of course), they all received the proper remote addresses. The port mappings for the 3 containers that we now use are 8080:443, 8090:443, and 9010:443, without the problem. Once the problem has occurred, the Docker service has to be restarted before bringing up the new containers. This is on Linux (CentOS).

@wiredwiz

wiredwiz commented Sep 4, 2022

I seriously can't believe that in the nearly 6 years since this issue was first brought up with the Docker folks, it has not been fixed, nor has there been much communication. I'm at a loss. As a developer myself... I have to wonder what they are doing. If I told a client "I'm looking into the issue" and that was the only feedback I gave over the course of YEARS, I would be fired. I love Docker, but this is completely ridiculous.

@ksaadDE

ksaadDE commented Feb 11, 2023

My issue was specifically with Docker for Windows, should have mentioned that in my original comment. So if it doesn't work there it's not much of a solution in my case (and no I can't just choose Linux, I don't control the deployment machine)

Yes, that, and you could have looked into Hyper-V or WSL.

@Junto026

Junto026 commented Jun 6, 2023

I partially worked around this by installing nginx directly on the MacOS host using homebrew. It listens for all port 80/443 traffic hitting the host and proxy_pass’s it to itself on a different port (which a second nginx inside docker is listening on).

The host nginx applies the X-Forwarded-For header value with the original source IP. The nginx inside docker can then read that IP.

It's not perfect: it would be nice if I didn't need the host nginx, it only works for traffic which nginx can proxy_pass, and most web applications won't know to look for that header, but it works. The original public IPs that access my web applications inside Docker are now visible in logs, which is important for me to see in my Authelia logs, for example.

@nitishdhar

@Junto026 Do you now manage SSL certs on the host, assuming you have SSL Certs for your applications in the containers? I have the same use case but I am running apache in the containers and SSL certs and virtual hosts are managed inside the container and I just need to reverse proxy all 80 and 443 requests to the container. I would prefer to keep cert management also in my docker logic.

@Junto026

@nitishdhar I have SSL Certs for each of the web servers running in docker, generated via certbot. Before, I only had nginx running inside a container and pointed that docker nginx to my cert files. But after I added an additional nginx install onto the host, I now store and point to the certificate files on the host, and I no longer need to reference the certificate files inside the docker nginx.

So I think you'd keep doing everything you're already doing, but install a second apache on the host and point your host apache to the cert files you're already generating.

You can of course still also point your docker apache to the cert files, but it's not required unless you're encrypting the internal communications or an app requires certs on the internal communications.

Example nginx server block on my host nginx:

	server {
		listen 443 ssl http2;
		listen [::]:443 ssl http2;
		ssl_certificate /host-cert-directory/fullchain.pem;
		ssl_certificate_key /host-cert-directory/privkey.pem;

		location / {
			proxy_pass https://127.0.0.1:9443;

			proxy_set_header Host $host;
			proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
			proxy_set_header X-Forwarded-Proto $scheme;
			proxy_set_header X-Forwarded-Host $http_host;
			proxy_set_header X-Forwarded-Uri $request_uri;
			proxy_set_header X-Forwarded-Ssl on;
			proxy_set_header X-Forwarded-For $remote_addr;
			proxy_set_header X-Real-IP $remote_addr;
			proxy_set_header Connection "";

			proxy_set_header Upgrade $http_upgrade;
			proxy_set_header Connection "Upgrade";
		}
	}

Example server block on the second nginx in docker (note I since commented out the certificate reference line):

	server {
		listen 443 ssl http2;
		listen [::]:443 ssl http2;
		server_name app.mydomain.com;
#		include /etc/nginx/conf.d/certificate_locations;
		include /etc/nginx/conf.d/authelia_protectedapp_connection;

		location / {
			proxy_pass https://192.168.32.103:8443;
			include /etc/nginx/conf.d/authelia_protectedapp_config;
		}
	}

@xucian

xucian commented Feb 25, 2024

So how many years has it been already? I've lost track.
was this solved properly, i.e. without the header trick (that my web apps need to know of) or it's still in a pending state?

@Kleywalker

Same problem here. Are there any updates or at least a roadmap?

@polarathene

was this solved properly, i.e. without the header trick (that my web apps need to know of) or it's still in a pending state?

Same problem here. Are there any updates or at least a roadmap?

You might need to share a bit more about the issue you're facing?

This issue is rather generic in scope; there are more specific issues for each case, IIRC. So if you want to ask whether something is fixed or how to fix it, it helps to document the particular issue you're dealing with, ideally with a reproduction example.

IPv6 client IP replaced by IPv4 gateway IP

TL;DR: Fix with either:

  • userland-proxy: false in daemon settings.
  • Enable IPv6 in daemon settings, and assign a private IPv6 address to the container.

If you have a container using a published port, any external IP should be correct. If your Docker host is reachable via IPv6 but the container does not have an IPv6 address assigned (and the implicit default userland-proxy: true for /etc/docker/daemon.json or equivalent location), you'll have the internal network gateway IP replace the real client IP address.

To fix that, you'll want to either disable the userland-proxy daemon setting (the proxy is mostly useful for routing between the Docker host and a container, and to some degree for indirect connections between containers), or properly enable IPv6 in the daemon config and have your container use IPv6 ULA addresses. The official docs for IPv6 cover this better now, but if something isn't clear let me know.
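
As a sketch of the second option (the subnet is an example ULA range and the service name is made up; the daemon itself still needs IPv6 enabled as described in the docs):

    services:
      web:
        image: nginx
    networks:
      default:
        enable_ipv6: true
        ipam:
          config:
            - subnet: fd00:beef:cafe::/64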

Client IP replaced by Reverse Proxy IP

For HTTP/HTTPS, this should usually be a non-issue as the reverse proxy software can include the appropriate host/forwarded header.

In some cases it's not sufficient, or you are proxying TCP connections. Reverse proxies can use PROXY protocol, but this requires the proxied service to support accepting PROXY protocol connections (these just include an additional header at the start to preserve the real IP).
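
As a rough, untested sketch (the addresses and ports are placeholders), an nginx server accepting PROXY protocol from a frontend proxy could look like:

    server {
        listen 443 ssl proxy_protocol;

        # only trust the PROXY protocol header when it comes from the frontend proxy
        set_real_ip_from 172.16.0.0/12;
        real_ip_header proxy_protocol;

        location / {
            proxy_pass http://127.0.0.1:8080;
            # $remote_addr now holds the original client IP
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }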

Something else?

It should matter less for non-production deployments, but could be a bit inconvenient. Depends what you're doing.

I'm familiar with Linux and WSL, and WSL I imagine is a bit similar to macOS support where there's some extra complexity.

@superhero

superhero commented Mar 17, 2024

The issue described here, IMO, is with swarm: if no load balancer is placed in front of the swarm that sets the original IP as a header when forwarding upstream, then the original IP is lost.

So the reverse proxy that exists in a swarm setup, rather than outside of it (as, for example, an external reverse proxy or load balancer would be), loses the original IP, and we cannot trace where the requests come from.

The solution, as I mentioned before, and as I believe everyone uses, is to simply not rely fully on swarm, but to have an external component/reverse proxy/load balancer in front of the cluster, where the IP still exists and is set as a non-standard header, which the solution inside the swarm then has to be adapted to if we want to use the IP.

It would however be nice if we could have a global reverse proxy in the swarm and, with the help of a floating IP, rely solely on the swarm cluster rather than this external component. So if a host goes down, the IP is attached to a different host that now acts as the reverse proxy host, with the difference that the component is still hosted by the swarm.

@polarathene

The issue described here, IMO, is with swarm: if no load balancer is placed in front of the swarm that sets the original IP as a header when forwarding upstream, then the original IP is lost.

Ah ok, sorry I missed that. I don't have experience with swarm, I thought that swarm had been on the decline in favor of k8s becoming a more dominant choice for scaling production.


The solution, as I mentioned before, and as I believe everyone uses, is to simply not rely fully on swarm, but to have an external component/reverse proxy/load balancer in front of the cluster, where the IP still exists and is set as a non-standard header, which the solution inside the swarm then has to be adapted to if we want to use the IP.

In that case, you should be able to use something like Traefik to proxy TCP/UDP connections to an internal reverse proxy (or directly to a container service when viable) and, when forwarding that connection internally, append the PROXY protocol header. That's rather simple to do, and I could provide an example of the config for those two reverse proxies.
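
For what it's worth, a sketch of the Traefik side via the file provider's dynamic configuration might look like this (the entry point name, router rule, and backend address are illustrative only):

    tcp:
      routers:
        smtp:
          entryPoints:
            - "smtp"
          rule: "HostSNI(`*`)"
          service: smtp
      services:
        smtp:
          loadBalancer:
            # send PROXY protocol v2 so the backend sees the real client IP
            proxyProtocol:
              version: 2
            servers:
              - address: "mailserver:25"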

Doing that should preserve the client IP and work well for most?


It would however be nice if we could have a global reverse proxy in the swarm and, with the help of a floating IP, rely solely on the swarm cluster rather than this external component.

That would be similar to an ingress controller in k8s?

I recently updated documentation for a project I maintain where we needed to rely on PROXY protocol for preserving the client IP to a mail server container, but within the context of k8s this was more complicated as the containers are a little less flexible with routing through ingress, instead needing direct container to container connections without PROXY protocol supported as well.

@superhero

I thought that swarm had been on the decline in favor of k8s becoming a more dominant choice for scaling production.

Yes it has, but it's a different product, and that's beside the point.

Doing that should preserve the client IP and work well for most?

Well yes, this is what I just explained to you and what I believe everyone does to solve the problem: we use a component before we reach the swarm that maps the IP to a non-standard header... or did I misunderstand what you meant? Anyway, this is not the place to converse about it.

That would be similar to an ingress controller in k8s?

Ingress controllers and floating IPs operate at different levels and serve different and distinct purposes, so they are not directly similar.

Floating IPs are used to redirect network traffic to any virtual machine within a cloud environment, primarily for ensuring accessibility and facilitating failover.

An ingress controller is used to expose HTTP services to the external network. A floating IP is a feature used for assigning a static IP address to an instance or resource that can be remapped as needed.

Both ingress controllers and floating IPs deal with managing access to services, but they do so in fundamentally different ways and can be used together rather than replacing each other.

Similar to an ingress controller in k8s, you have the Docker swarm mode routing mesh in Docker swarm: https://docs.docker.com/engine/swarm/ingress/

within the context of k8s this was more complicated as the containers are a little less flexible with routing through ingress

...which underlines one of the reasons why I prefer to work with swarm; I also often find that k8s is more complex to get work done in.


I enjoy the conversation, but this is perhaps not the place for it :)

To give some context, and to answer the underlying question of the topic: the issue is simply that we would like to be able to read what source IP the client has that reached our containers. Even if this could be done by using a different product or a different approach, we would like to be able to do it using this product.

@polarathene

Well yes, this is what I just explained to you and what I believe everyone does to solve the problem: we use a component before we reach the swarm that maps the IP to a non-standard header... or did I misunderstand what you meant? Anyway, this is not the place to converse about it.

I am referring to a standard solution for this called PROXY protocol.

It prepends a header to Layer 4 (TCP) connections that the other end of the connection will accept and treat as the actual client IP. You can read about it in this docs page I wrote (linking to the edge version as it's an unreleased rewrite). A related guide for k8s is here.

An ingress controller is used to expose HTTP services to the external network. A floating IP is a feature used for assigning a static IP address to an instance or resource that can be remapped as needed.

Both ingress controllers and floating IPs deal with managing access to services, but they do so in fundamentally different ways and can be used together rather than replacing each other.

I am still inexperienced with k8s myself, I've only worked with Docker / Docker Compose without any scaling needs personally. I helped review and revise the k8s docs I linked.

We have the ingress controller there as the external public IP endpoint to route connections to a service at one or more instances via IPs managed by the load balancer IIRC.


the issue is simply that we would like to be able to read what source IP the client has that reached our containers

Since you know swarm well, it sounds like what you're asking for is support for PROXY protocol? Perhaps a more explicit feature request would better serve that.

Here's some context from those links I provided:

[screenshots of the PROXY protocol documentation from the links above]

@polarathene

Just came here to say from 2024

That kind of comment doesn't help move anything here along, it's just noise and everyone subscribed gets pinged/notified pointlessly. Please only chime in when you have something worthwhile to contribute.

If you have a problem that isn't resolvable from the advice above though, please share it.


My previous comment right above yours explains how to preserve the IP, while an earlier one a few posts back describes how an IPv6 client IP will be lost if you have userland-proxy: true (default) config but you've not enabled the IPv6 routing support (which last I knew was still opt-in and behind the experimental flag).

The configuration to resolve those concerns is very simple. The PROXY protocol one has nothing to do with Docker if you run into it, while the userland-proxy / IPv6 issue exists because defaults cannot satisfy all audiences; disabling the proxy has pros and cons.

Ideally the IPv6 support could be enabled by default and no longer need the experimental flag. Users may still expect IPv6 addresses in the default address pool though, and perhaps not everyone would be happy with those decisions 🤷‍♂️ I'm not sure what the status is regarding this switch-over; I know there were some plans to move from iptables to IPVS, and that the current IPv6 support may not be in good enough shape yet, so Docker tries not to intervene with routing there?

@runnermann

Came here through a circuitous route. This is a deal breaker. Given the amount of time this problem has existed and the lack of a solution... the lack of a simple fix for an important must-have is NOT good. Docker already adds complexity and a learning curve; although it does solve some problems, this is a hard-line reason to avoid it. We cannot operate without the visitor's IP address. All the effort to get this running, only to hit a "big-broken-dead-and-not-going-to-fix" like this, is disappointing. I have to believe there is a different solution for such a large problem.

@xucian

xucian commented Apr 12, 2024

Came here through a circuitous route. This is a deal breaker. Given the amount of time this problem has existed and the lack of a solution... the lack of a simple fix for an important must-have is NOT good. Docker already adds complexity and a learning curve; although it does solve some problems, this is a hard-line reason to avoid it. We cannot operate without the visitor's IP address. All the effort to get this running, only to hit a "big-broken-dead-and-not-going-to-fix" like this, is disappointing. I have to believe there is a different solution for such a large problem.

Yes, it is (though I don't know it, as I've just settled on using plain Docker and Ansible instead of Swarm or Kube, because of the learning curve and this issue).
Somebody, somewhere, would lose money if this feature gets implemented; simple as that.

@polarathene

We cannot operate without the visitor's IP address.

Do you need swarm? I covered the other cases above, so you should be able to avoid the issue otherwise.

Please provide more context on the problem / use-case you're facing.

@ksaadDE

ksaadDE commented May 7, 2024

Four years after the issue was opened, it is still not resolved xD

Projects: docker-roadmap (Awaiting triage)