Unable to retrieve user's IP address in docker swarm mode #25526
Comments
GordonTheTurtle
added
the
version/1.12
label
Aug 9, 2016
thaJeztah
added
kind/enhancement
area/networking
area/swarm
labels
Aug 9, 2016
|
/cc @aluzzardi @mrjana ptal |
|
@PanJ can you please share some details on how debugging-simple-server determines the |
PanJ
commented
Aug 9, 2016
justincormack
referenced this issue
Sep 16, 2016
Closed
how to get user real ip when use docker service / dns loadbalance #26625
marech
commented
Sep 19, 2016
|
@PanJ do you still use your workaround, or have you found a better solution? |
sanimej
commented
Sep 19, 2016
|
@PanJ When I run your app as a standalone container..
and access the published port from another host I get this
192.168.33.11 is the IP of the host in which I am running curl. Is this the expected behavior ? |
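For reference, a minimal way to reproduce the standalone test described above, using the image from the original report (192.168.33.11 is the curl host from this comment; 192.168.33.10 as the server host is an assumption):
# On the server host: publish the test app as a plain (non-swarm) container
docker run -d -p 80:3000 panj/debugging-simple-server
# From another host: the reported client address should be the caller's real IP,
# not an overlay/ingress address
curl http://192.168.33.10/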
PanJ
commented
Sep 19, 2016
|
@sanimej Yes, that is the expected behavior, and it should apply in swarm mode as well. |
PanJ
commented
Sep 19, 2016
|
@marech I am still using the standalone container as a workaround, which works fine. In my case, there are 2 nginx instances: a standalone one and a swarm one. SSL termination and reverse proxying are done on the standalone nginx. The swarm instance is used to route to other services based on the request host. |
sanimej
commented
Sep 19, 2016
|
@PanJ The way the published port of a container is accessed is different in swarm mode. In swarm mode, a service can be reached from any node in the cluster. To facilitate this we route through an |
PanJ
commented
Sep 19, 2016
|
@sanimej I kinda saw how it works when I dug into the issue. But the use case (the ability to retrieve the user's IP) is quite common. I have limited knowledge of how the fix should be implemented. Maybe a special type of network that does not alter the source IP address? Rancher is similar to Docker swarm mode and it seems to have the expected behavior. Maybe it is a good place to start. |
marech
commented
Sep 20, 2016
PanJ
commented
Sep 20, 2016
|
@marech standalone container listens to port
If you have to do SSL termination, add another server block that listens to port Swarm mode's nginx publishes
|
This was referenced Oct 5, 2016
o3o3o
commented
Oct 24, 2016
|
In our case, our API rate limiting and other functions depend on the user's IP address. Is there any way to work around the problem in swarm mode? |
dack
commented
Nov 1, 2016
|
I've also run into the issue when trying to run logstash in swarm mode (for collecting syslog messages from various hosts). The logstash "host" field always appears as 10.255.0.x, instead of the actual IP of the connecting host. This makes it totally unusable, as you can't tell which host the log messages are coming from. Is there some way we can avoid translating the source IP? |
vfarcic
commented
Nov 2, 2016
|
+1 for a solution to this issue. The inability to retrieve the user's IP prevents us from using monitoring solutions like Prometheus. |
dack
commented
Nov 2, 2016
|
Perhaps the linux kernel IPVS capabilities would be of some use here. I'm guessing that the IP change is taking place because the connections are being proxied in user space. IPVS, on the other hand, can redirect and load balance requests in kernel space without changing the source IP address. IPVS could also be good down the road for building in more advanced functionality, such as different load balancing algorithms, floating IP addresses, and direct routing. |
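As a rough illustration of what this would mean (a sketch of generic IPVS direct routing, not of how the swarm ingress is actually wired up; the addresses and the round-robin scheduler are made up):
# On the load-balancing node: create a virtual service with round-robin scheduling
ipvsadm -A -t 203.0.113.10:80 -s rr
# Add real servers in direct-routing ("gatewaying") mode: packets are forwarded
# in kernel space with the original client source IP left intact
ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.11:80 -g
ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.12:80 -g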
vfarcic
commented
Nov 2, 2016
|
For me, it would be enough if I could somehow find out the relation between the virtual IP and the IP of the server the endpoint belongs to. That way, when Prometheus sends an alert related to some virtual IP, I could find out which server is affected. It would not be a good solution, but it would be better than nothing. |
dack
commented
Nov 2, 2016
|
@vfarcic I don't think that's possible with the way it works now. All client connections come from the same IP, so you can't translate it back. The only way that would work is if whatever is doing the proxy/nat of the connections saved a connection log with timestamp, source ip, and source port. Even then, it wouldn't be much help in most use cases where the source IP is needed. |
vfarcic
commented
Nov 2, 2016
|
I probably did not explain the use case well. I use Prometheus, which is configured to scrape exporters that are running as Swarm global services. It uses tasks.<SERVICE_NAME> to get the IPs of all replicas. So, it's not using the service endpoint but the replica endpoints (no load balancing). What I'd need is to somehow figure out the IP of the node that each of those replica IPs comes from. |
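For anyone unfamiliar with that lookup, a quick way to see the replica IPs it returns (the service name below is a placeholder; run this inside any container attached to the same overlay network):
# Returns one A record per task/replica of the service
nslookup tasks.node-exporter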
vfarcic
commented
Nov 3, 2016
|
I just realized that "docker network inspect <NETWORK_NAME>" provides information about containers and IPv4 addresses of a single node only. Can this be extended so that there is cluster-wide information about a network together with its nodes? Something like:
Note the addition of the "Node". If such information were available for the whole cluster, not only a single node, with the addition of a |
tlvenn
commented
Nov 3, 2016
|
I agree with @dack: given that the ingress network is using IPVS, we should solve this issue using IPVS so that the source IP is preserved and presented to the service correctly and transparently. The solution needs to work at the IP level so that any services that are not based on HTTP can still work properly as well (we can't rely on HTTP headers...). And I can't stress enough how important this is; without it, there are many services that simply can't operate at all in swarm mode. |
tlvenn
commented
Nov 3, 2016
|
That's how HAProxy solves this issue: http://blog.haproxy.com/2012/06/05/preserve-source-ip-address-despite-reverse-proxies/ |
tlvenn
commented
Nov 3, 2016
|
@kobolog might be able to shed some light on this matter given his talk on IPVS at DockerCon. |
thaJeztah
added
the
status/needs-attention
label
Nov 3, 2016
thaJeztah
assigned
mrjana
Nov 3, 2016
ljb2of3
commented
Nov 4, 2016
|
Just adding myself to the list. I'm using logstash to accept syslog messages, and they're all getting pushed into elasticsearch with the host IP set to 10.255.0.4, which makes it unusable. I'm going to have to revert to my non-containerized logstash deployment if there's no fix for this. |
|
@mrjana can you please add the suggestion you had to work around this problem? |
|
IPVS is not a userspace reverse proxy that can fix things up at the HTTP layer. That is the difference between a userspace proxy like HAProxy and this. If you want to use HAProxy you could do that by putting an HAProxy in the cluster and having all your service instances and HAProxy participate in the same network. That way HAProxy can fix up HTTP |
dack
commented
Nov 5, 2016
|
@mrjana The whole idea of using IPVS (instead of whatever docker currently does in swarm mode) would be to avoid translating the source IP to begin with. Adding an X-Forwarded-For might help for some HTTP applications, but it's of no use whatsoever for all the other applications that are broken by the current behaviour. |
tlvenn
commented
Nov 5, 2016
|
@dack my understanding is that the Docker ingress network already uses IPVS. |
tlvenn
commented
Nov 5, 2016
•
That would not work either, @mrjana. The only way for HAProxy to get the client IP is to run outside the ingress network using docker run or directly on the host, but then you can't use any of your services since they are on a different network and you can't access them. Simply put, there is absolutely no way, as far as I know, to deal with this as soon as you use docker services and swarm mode. It would be interesting if the author(s) of the docker ingress network could join the discussion, as they would probably have some insight as to how IPVS is configured/operated under the hood (there are many modes for IPVS) and how we can fix the issue. |
dack
commented
Nov 5, 2016
|
@tlvenn Do you know where this is in the source code? I could be wrong, but I don't believe it is using IPVS based on some things I've observed:
|
tlvenn
commented
Nov 6, 2016
|
Hi @dack, from their blog:
The source code should live in the swarmkit project if I am not wrong. I wonder if @stevvooe can help us understand what the underlying issue is here. |
dack
commented
Nov 6, 2016
|
OK, I've had a brief look through the code and I think I have a slightly better understanding of it now. It does indeed appear to be using IPVS, as stated in the blog. SNAT is done via an iptables rule which is set up in service_linux.go. If I understand correctly, the logic behind it would be something like this (assuming node A receives a client packet for the service running on node B):
I think the reasoning behind the SNAT is that the reply must go through the same node that the original request came through (as that's where the NAT/IPVS state is stored). As requests may come through any node, the SNAT is used so that the service node knows which node to route the request back through. In an IPVS setup with a single load balancing node, that wouldn't be an issue.

So, the question is then how to avoid the SNAT while still allowing all nodes to handle incoming client requests. I'm not totally sure what the best approach is. Maybe there's a way to have a state table on the service node so that it can use policy routing to direct replies instead of relying on SNAT. Or maybe some kind of encapsulation could help (VXLAN?). Or, the direct routing method of IPVS could be used. This would allow the service node to reply directly to the client (rather than via the node that received the original request) and would allow adding new floating IPs for services. However, it would also mean that the service can only be contacted via the floating IP and not the individual node IPs (not sure if that's a problem for any use cases). |
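For readers who want to picture the kind of rule involved, an SNAT rule of this general shape would do what is described above. This is an illustrative sketch with made-up addresses, not the exact rule installed by service_linux.go:
# Rewrite the source of IPVS-load-balanced connections heading into the ingress
# overlay (10.255.0.0/16 here) to this node's ingress address, so that replies
# come back through the node that holds the IPVS/NAT state
iptables -t nat -A POSTROUTING -d 10.255.0.0/16 -m ipvs --ipvs \
  -j SNAT --to-source 10.255.0.2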
tlvenn
commented
Nov 6, 2016
•
|
Pretty interesting discovery @dack! Hopefully a solution will be found to skip that SNAT altogether. In the meantime, there is a possible workaround that was committed not long ago, which introduces host-level port publishing with |
kobolog
commented
Nov 6, 2016
•
|
@tlvenn as far as I know, Docker Swarm uses masquerading, since it's the most straightforward way and is guaranteed to work in most configurations. Plus, this is the only mode that actually allows masquerading ports too [re: @dack], which is handy. In theory, this issue could be solved by using the IPIP encapsulation mode – the packet flow would then look like this:
There are, of course, many caveats and things-which-can-go-wrong, but generally this is possible and IPIP mode is widely used in production. |
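To make the IPIP suggestion concrete, a classic LVS-TUN setup looks roughly like the sketch below (addresses are examples; as discussed later in the thread, the routing mesh's per-host destination IPs make this harder than the textbook case):
# Director: schedule the VIP and add a real server in IP-in-IP tunneling mode
ipvsadm -A -t 203.0.113.10:80 -s rr
ipvsadm -a -t 203.0.113.10:80 -r 10.0.0.11:80 -i
# Real server: terminate the tunnel and own the VIP without answering ARP for it
modprobe ipip
ip addr add 203.0.113.10/32 dev tunl0
ip link set tunl0 up
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2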
daninthewoods
commented
Nov 8, 2016
|
Hoping a solution can be found soon for this, as IP-fixation and other security checks need to be able to receive the correct external IP. |
se7enack
commented
Nov 8, 2016
|
Watching. Our product leverages source IP information for security and analytics. |
tlvenn
commented
Nov 15, 2016
|
@aluzzardi any update for us ? |
bluejaguar
commented
Nov 15, 2016
|
bump, we need this to be working for a very large project we are starting early next year. |
dack
commented
Nov 16, 2016
|
Examining the flow, it seems to currently work like this (in this example, node A receives the incoming traffic and node B is running the service container):
I think the SNAT could be avoided with something like this:
As an added bonus, no NAT state needs to be stored and overlay network traffic is reduced. |
tlvenn
commented
Nov 25, 2016
|
@aluzzardi @mrjana Any update on this please ? A little bit of feedback from Docker would be very much appreciated. |
tlvenn
unassigned
mrjana
Nov 25, 2016
thonatos
commented
Nov 26, 2016
|
Watching. Without source IP information, most of our services can't work as expected. |
mavenugo
commented
Nov 26, 2016
|
@tlvenn seems like a bug in GitHub? @PanJ @tlvenn @vfarcic @dack and others, PTAL #27917. We introduced the ability to set the service publish mode to host. Please try 1.13.0-rc2 and provide feedback. |
tlvenn
commented
Nov 26, 2016
|
ya pretty weird @mavenugo .. Regarding the publish mode, I had already linked this from swarmkit above; this could be a workaround, but I truly hope a proper solution comes with Docker 1.13 to address this issue for good. This issue could very much be categorized as a bug, because preserving the source IP is the behaviour we as users expect, and it's a very serious limitation of docker services right now. I believe both @kobolog and @dack have come up with some potential leads on how to solve this, and it's been almost 2 weeks with no follow-up on those from the Docker side. Could we please have some visibility on who is looking into this issue at Docker and a status update? Thanks in advance. |
|
Other than #27917, there is no other solution for 1.13. The direct-return functionality needs to be analyzed for various use-cases and should not be treated lightly as a simple bug-fix. We can look into this for 1.14. But this also falls under the category of configurable LB behavior, which includes the algorithm (rr vs. 10 other methods) and the data-path (LVS-DR, LVS-NAT & LVS-TUN). If someone is willing to contribute to this, please push a PR and we can get that moving. |
tlvenn
commented
Nov 27, 2016
|
Fair enough I guess @mavenugo, given we have an alternative now. At the very least, can we amend the doc for 1.13 so it clearly states that when using docker services with the default ingress publishing mode, the source IP is not preserved, and hint at using the host mode if this is a requirement for running the service? I think it will help people who are migrating to services not to be burnt by this unexpected behaviour. |
|
Sure and yes a doc update to indicate this behavior and the workaround of using the publish |
virtuman
commented
Jan 6, 2017
|
Just checking back in to see if there have been any new developments in getting this real IP thing figured out? It certainly is a huge limitation for us as well. |
bluejaguar
commented
Jan 6, 2017
|
Is a solution on the roadmap for docker 1.14? We have delayed deploying our solutions using docker due in part to this issue. |
hamburml
commented
Jan 18, 2017
|
Would love to see a custom header added to the http/https request which preserves the client IP. This should be possible, shouldn't it? I don't mind if X-Forwarded-For is overwritten; I just want to have a custom field which is only set the very first time the request enters the swarm. |
sanimej
commented
Feb 17, 2017
|
@dack @kobolog In typical deployments of LVS-Tunnel and LVS-DR mode, the destination IP in the incoming packet will be the service VIP, which is also programmed as a non-ARP IP on the real servers. The routing mesh works in a fundamentally different way: the incoming request could be to any of the hosts. For the real server to accept the packet (in any LVS mode) the destination IP has to be changed to a local IP. There is then no way for the reply packet from the backend container to go back with the right source address. Instead of direct return, we can try to get the reply packet back to the ingress host. But there is no clean way to do it except by changing the source IP, which brings us back to square one.

@thaJeztah I think we should clarify this in the documentation, suggest using host mode if the client IP has to be preserved, and close this issue. |
dack
commented
Feb 18, 2017
|
@sanimej I still don't see why it's impossible to do this without NAT. Couldn't we just have the option to use, for example, the regular LVS-DR flow? Docker adds the non-arp vip to the appropriate nodes, LVS directs the incoming packets to the nodes, and outgoing packets return directly. Why does it matter that the incoming packet could hit any host? That's no different than standard LVS with multiple frontend and multiple backend servers. |
pi0
commented
Feb 19, 2017
|
@thaJeztah thanks for workaround :)

docker service update nginx_proxy \
--publish-rm 80 \
--publish-add "mode=host,published=80,target=80" \
--publish-rm 443 \
--publish-add "mode=host,published=443,target=443" |
sanimej
commented
Feb 21, 2017
|
@dack In the regular LVS-DR flow the destination IP will be the service VIP. So the LB can send the packet to the backend without any dest IP change. This is not the case with routing mesh because the incoming packet's dest IP will be one of the host's IP. |
tlvenn
commented
Feb 21, 2017
sanimej
commented
Feb 21, 2017
|
@tlvenn LVS-IP tunnel works very similarly to LVS-DR, except that the backend gets the packet through an IP-in-IP tunnel rather than a MAC rewrite. So it has the same problem for the routing mesh use case. From the proposal you referred to: the destination IP of the packet would be the IP of the host to which the client sent the packet, not the VIP. If it's not rewritten, the real server would drop it after removing the outer IP header. If the destination IP is rewritten, the real server's reply to the client will have an incorrect source IP, resulting in connection failure. |
tlvenn
commented
Feb 21, 2017
|
Thanks for the clarification @sanimej. Could you perhaps implement the PROXY protocol? It would not provide a seamless solution, but at least it would offer the service a way to resolve the user IP. |
sanimej
commented
Feb 21, 2017
•
|
There is a kludgy way to achieve source IP preservation by splitting the source port range into blocks and assigning a block to each host in the cluster. Then it's possible to do a hybrid NAT+DR approach, where the ingress host does the usual SNAT and sends the packet to a real server. On the host where the real server is running, based on the source IP, do an SNAT to change the source port to a port in the range assigned to the ingress host. Then, on the return packet from the container, match against the source port range (and the target port) and change the source IP to that of the ingress host.
|
sanimej
commented
Feb 21, 2017
|
The NAT+DR approach I mentioned wouldn't work because the source IP can't be changed on the ingress host. Changing only the source port to one in the range for that particular host, and using routing policy on the backend host to get the packet back to the ingress host, might be an option. This still has the other issues I mentioned earlier. |
thaJeztah
referenced this issue
Mar 9, 2017
Closed
How to detect and mitigate DDOS attacks or agggressive scraping with swarm services #31046
lpakula
commented
Mar 17, 2017
•
|
@thaJeztah
|
pi0
commented
Mar 17, 2017
|
@lpakula Please check my answer above + this working nginx configuration |
lpakula
commented
Mar 17, 2017
|
@pi0 Thanks for the reply. I'm using the nginx configuration from the link, but the IP address is still wrong; I must have something missing in my configuration. I have a docker (17.03.0-ce) swarm cluster with an overlay network and two services
Nginx container uses the latest official container https://hub.docker.com/_/nginx/ I'm using global
And then nginx container logs:
Web container logs:
What is missing there? |
PanJ
commented
Mar 17, 2017
|
The IP address will still be wrong. But it will add HTTP headers that contain the real IP address. You must configure the web server of your choice to trust the proxy (use the header instead of the source IP).
On Fri, Mar 17, 2560 at 7:36 PM Lukasz Pakula ***@***.***> wrote:
@pi0 <https://github.com/pi0> Thanks for reply
I'm using nginx configuration from the link, but IP address is still
wrong, i must have something missing in my configuration
I have a docker (*17.03.0-ce*) swarm cluster with overlay network and two
services
docker service create --name nginx --network overlay_network --mode=global \
--publish mode=host,published=80,target=80 \
--publish mode=host,published=443,target=443 \
nginx:1.11.10
docker service create --name web --network overlay_network \
--replicas 1 \
web:newest
Nginx container uses the latest official container
https://hub.docker.com/_/nginx/
Web container runs uwsgi server on port 8000
I'm using global nginx.conf from the link and conf.d/default.conf looks
as follow:
server {
resolver 127.0.0.11;
set $web_upstream http://web:8000;
listen 80;
server_name domain.com;
location / {
proxy_pass $web_upstream;
}
}
And then nginx container logs:
194.168.X.X - - [17/Mar/2017:12:25:08 +0000] "GET / HTTP/1.1" 200
Web container logs:
10.0.0.47 - - [17/Mar/2017 12:25:08] "GET / HTTP/1.1" 200 -
What is missing there?
|
pi0
commented
Mar 17, 2017
|
@lpakula Ah there is another thing your |
lpakula
commented
Mar 17, 2017
sirlp2
commented
Mar 17, 2017
•
|
Bind the port using host mode.
|
dsbudiac
referenced this issue
in docker/dockercloud-haproxy
Mar 22, 2017
Closed
real ip, forwarded for etc? #134
teohhanhui
commented
Apr 21, 2017
•
|
nginx supports IP Transparency using the TPROXY kernel module. @stevvooe Can Docker do something like that too? |
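For context, this is roughly the host-side plumbing TPROXY normally requires (purely illustrative; it assumes a proxy that sets the IP_TRANSPARENT socket option is listening on port 8080):
# Steer intercepted port-80 traffic to the transparent proxy on port 8080
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
  -j TPROXY --on-port 8080 --tproxy-mark 0x1/0x1
# Route the marked packets locally so the proxy can accept them
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100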
This was referenced May 15, 2017
tonysongtl
commented
Jul 25, 2017
|
Can swarm provide a REST API to get the client IP address? |
tonysongtl
unassigned
aboch
Jul 25, 2017
|
@tonysongtl that's not related to this issue |
vonloh
referenced this issue
in containous/traefik
Jul 28, 2017
Closed
expecting X-Forwarded-For: users-real-public-ip #1880
kmbulebu
commented
Aug 8, 2017
•
|
Something else to consider is how your traffic is delivered to your nodes in a highly available setup. A node should be able to go down without creating errors for clients. The current recommendation is to use an external load balancer (ELB, F5, etc) and load balance at Layer 4 to each Swarm node, with a simple Layer 4 health check. I believe F5 uses SNAT, so the best case in this configuration is to capture the single IP of your F5, and not the real client IP. References: |
sandys
commented
Aug 17, 2017
|
Mirroring the comment above - can the proxy protocol not be used? All cloud load balancers and HAProxy use this for source IP preservation. Calico also has an ipip mode - https://docs.projectcalico.org/v2.2/usage/configuration/ip-in-ip - which is one of the reasons why GitHub uses it. https://githubengineering.com/kubernetes-at-github/ |
yangm97
referenced this issue
in GlowstoneMC/Glowstone
Aug 19, 2017
Open
Issue with AuthMe and proxy-support #547
mostolog
commented
Aug 24, 2017
•
|
Hi. For the sake of understanding and completeness, let me summarize and please correct me if I'm wrong: The main issue is that containers aren't receiving original src-IP but swarm VIP. I have replicated this issue with the following scenario:
It seems: when services within swarm use the (default) mesh, swarm does NAT to ensure traffic from the same origin is always sent to the same host running the service? It seems the proposals from @kobolog #25526 (comment) and @dack #25526 (comment) were refuted by @sanimej #25526 (comment) #25526 (comment), but, TBH, his arguments aren't fully clear to me yet, nor do I understand why the thread hasn't been closed if this is definitively impossible. @stevvooe? @sanimej, wouldn't this work?:
Wouldn't an option to enable "reverse proxy instead of NAT" for specific services solve all these issues and satisfy everybody? On the other hand, IIUC, the only option left is to use https://docs.docker.com/engine/swarm/services/#publish-a-services-ports-directly-on-the-swarm-node, which - again IIUC - seems to be like not using the mesh at all, hence I don't see the benefits of using swarm mode (vs compose). In fact, it looks like pre-1.12 swarm, needing Consul and so on. Thanks for your help and patience. |
mostolog
commented
Aug 25, 2017
•
|
@sanimej
|
Jitsusama
commented
Aug 31, 2017
|
I'd just like to chime in; while I do understand that there is no easy way to do this, not having the originating IP address preserved in some manner severely hampers a number of application use cases. Here's a few I can think of off the top of my head:
From my reading of this issue thread, it does not seem that the given work-around(s) work very well when you want to have scalable services within a Docker Swarm. Limiting yourself to one instance per worker node greatly reduces the flexibility of the offering. Also, maintaining a hybrid approach of having an LB/Proxy on the edge running as a non-Swarm orchestrated container before feeding into Swarm orchestrated containers seems like going back in time. Why should the user need to maintain 2 different paradigms for service orchestration? What about being able to dynamically scale the LB/Proxy at the edge? That would have to be done manually, right? Could the Docker team perhaps consider these comments and see if there is some way to introduce this functionality, while still maintaining the quality and flexibility present in the Docker ecosystem? As a further aside, I'm currently getting hit by this now. I have a web application which forwards authorized/authenticated requests to a downstream web server. Our service technicians need to be able to verify whether people have reached the downstream server, which they like to use web access logs for. In the current scenario, there is no way for me to provide that functionality as my proxy server never sees the originating IP address. I want my application to be easily scalable, and it doesn't seem like I can do this with the work-arounds presented, at least not without throwing new VMs around for each scaled instance. |
trapier
referenced this issue
in docker/docker.github.io
Sep 1, 2017
Open
Indicate mode=ingress published ports change source IP address #4493
trajano
commented
Sep 6, 2017
|
@Jitsusama could Kubernetes solve your issue? |
trajano
commented
Sep 6, 2017
|
@thaJeztah is there a way of doing the workaround using docker-compose? I tried
But it seems to take 172.x.x.1 as the source IP |
Jitsusama
commented
Sep 6, 2017
|
@trajano, I have no clue. Does Kubernetes somehow manage to get around this issue? |
monotykamary
commented
Sep 8, 2017
•
|
@Jitsusama
If you're accessing your application locally, that IP should be correct (if you use swarm) since the As for the compose workaround, it is possible. Here, I use the image
This will create a
Note that the end of the header for For port
while also adding networks that I want to reverse-proxy with apps containing the environment variable Ingress controllers on Kubernetes essentially do the same thing, as ingress charts (usually) have support |
sandys
commented
Sep 9, 2017
|
So the Kubernetes documentation is not complete. Another way which is being used pretty commonly is actually ingress + proxy protocol. https://www.haproxy.com/blog/haproxy/proxy-protocol/

The proxy protocol is a widely accepted protocol that preserves source information. HAProxy comes with built-in support for the proxy protocol. Nginx can read but not inject the proxy protocol. Once the proxy protocol is set up, you can access that information from any downstream services, like https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/proxy-protocol/README.md

Even OpenShift leverages this for source IP information: https://docs.openshift.org/latest/install_config/router/proxy_protocol.html

This is the latest haproxy ingress for k8s that injects proxy protocol.

IMHO the way to do this in swarm is to make the ingress able to read the proxy protocol (in case it's receiving traffic from an upstream LB that has already injected proxy protocol) as well as inject proxy protocol information (in case all the traffic actually hits the ingress first). I am not in favour of doing it any other way, especially when there is a generally accepted standard to do this.
|
monotykamary
commented
Sep 10, 2017
|
Traefik did add proxy_protocol support a few weeks ago and is available from v1.4.0-rc1 onwards. |
sandys
commented
Sep 10, 2017
|
This needs to be done at the docker swarm ingress level. If the ingress does not inject proxy protocol data, none of the downstream services (including Traefik, nginx, etc.) will be able to read it.
|
sandys
commented
Sep 11, 2017
|
I'm also confused about the relationship of this bug to InfraKit, e.g. docker/infrakit#601. Can someone comment on the direction that docker swarm is going to take? Will swarm roll up into InfraKit? I'm especially keen on the ingress side of it. |
sandys
referenced this issue
in docker/infrakit
Sep 11, 2017
Merged
Ingress controller for Docker Swarm and loadbalancer SPI #601
blazedd
commented
Oct 10, 2017
|
We are running into this issue as well. We want to know the client IP and the requested IP for inbound connections. For example, if the user makes a raw TCP connection to our server, we want to know what their IP is and which IP on our machine they connected to. |
mostolog
commented
Oct 11, 2017
|
@blazedd As commented previously and in other threads, this is actually possible using publish mode, i.e. services are not handled by the mesh network. IIUC, there is some ongoing progress towards improving how ingress handles this, but that's currently the only solution. We ended up deploying our nginx service using publish mode=host and mode: global, to avoid external LB configuration. |
blazedd
commented
Oct 12, 2017
•
|
@mostolog Thanks for your reply. Just a few notes:
|
mostolog
commented
Oct 13, 2017
|
@blazedd In our stack we have:
and so, I would bet we get real IPs in our logs. |
trajano
commented
Oct 13, 2017
|
@mostolog It does not work on Windows at least. I am still getting the 172.0.0.x address as the source. |
blazedd
commented
Oct 13, 2017
|
@mostolog |
caoli5288
commented
Oct 15, 2017
•
|
Why not use IPVS to route traffic to the containers directly? Bind all swarm nodes' overlay interface IPs as virtual IPs, use
This was referenced Oct 18, 2017
0xcaff
commented
Nov 30, 2017
•
dack
commented
Dec 1, 2017
|
I'm running up against this issue again. My setup is as follows:
I would like to deploy a stack to the swarm and have it listen on port 80 on the virtual IP without mangling the addresses. I can almost get there by doing this: The problem here is that it doesn't allow you to specify which IP address to bind to - it just binds to all. This creates problems if you want to run more than a single service using that port. It needs to to bind only to the one IP. Using different ports isn't an option with DR load balancing. It seems that the devs made the assumption that the same IP will never exist on multiple nodes, which is not the case when using a DR load balancer. In addition, if you use the short syntax, it will ignore the bind IP and still bind to all addresses. The only way I've found to bind to a single IP is to run a non-clustered container (not a service or stack). So now I'm back to having to use standalone containers and having to manage them myself instead of relying on service/stack features to do that. |
mattronix
referenced this issue
in ONLYOFFICE/DocumentServer
Dec 7, 2017
Open
Issue on first load of new document from nextcloud #220
added a commit
to wenzowski-docker/traefik
that referenced
this issue
Dec 9, 2017
added a commit
to xsnippet/xsnippet-infra
that referenced
this issue
Jan 10, 2018
This was referenced Jan 10, 2018
blop
commented
Jan 10, 2018
|
We have the same issue. I could use the "mode=host port publishing" workaround as my service is deployed globally. I created a specific ticket here: docker/libnetwork#2050 |
PanJ commented Aug 9, 2016
Output of docker version:
Output of docker info:
Additional environment details (AWS, VirtualBox, physical, etc.):
Steps to reproduce the issue:
Deploy the service and access it at http://<public-ip>/.
Describe the results you received:
Neither ip nor header.x-forwarded-for is the correct user's IP address.
Describe the results you expected:
ip or header.x-forwarded-for should be the user's IP address. The expected result can be achieved using a standalone docker container: docker run -d -p 80:3000 panj/debugging-simple-server. You can see both of the results via the following links:
http://swarm.issue-25526.docker.takemetour.com:81/
http://container.issue-25526.docker.takemetour.com:82/
Additional information you deem important (e.g. issue happens only occasionally):
This happens on both global mode and replicated mode.
I am not sure if I missed anything that should solve this issue easily.
In the meantime, I think I have to use a workaround, which is running a proxy container outside of swarm mode and letting it forward to the published port in swarm mode (SSL termination should be done on this container too), which breaks the purpose of swarm mode for self-healing and orchestration.
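A minimal sketch of that workaround, assuming the swarm service is published on an internal port (8080 here) and an nginx config at /srv/nginx/nginx.conf that terminates SSL and proxies to 127.0.0.1:8080; these names and ports are illustrative, not from the original report:
# Swarm service, published through the routing mesh on an internal port
docker service create --name app -p 8080:3000 panj/debugging-simple-server
# Standalone edge proxy outside swarm mode: it still sees the real client IP
# and forwards (with SSL termination) to the swarm-published port on localhost
docker run -d --name edge-proxy --net host \
  -v /srv/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:alpine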