
Preserve client IP address on both HTTP and HTTPS #144

Closed
joshrendek opened this issue Nov 5, 2018 · 37 comments

@joshrendek

On other cloud providers you would generally set externalTrafficPolicy: Local to preserve the client IP information being passed along from the LB. Currently DO's load balancers do not support this.
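For reference, a minimal sketch of the kind of Service manifest this refers to (the name, selector, and ports are illustrative placeholders):

```yaml
# Sketch only: name, selector, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  # Route traffic only to node-local endpoints so the original client
  # source IP is preserved instead of being SNAT'd by kube-proxy.
  externalTrafficPolicy: Local
  selector:
    app: traefik
  ports:
    - name: http
      port: 80
      targetPort: 80
```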

Example:

curl https://ifcfg.net returns 10.136.140.29

Headers from the backend:

map[Accept:[*/*] X-Forwarded-Port:[80] X-Forwarded-Server:[traefik-6f7469496d-xwqlk] Accept-Encoding:[gzip] User-Agent:[curl/7.54.0] X-Forwarded-For:[10.136.140.29] X-Forwarded-Host:[ifcfg.net] X-Forwarded-Proto:[http] X-Real-Ip:[10.136.140.29]]

However if I query the nodePort directly for Traefik:

curl -H "Host: ifcfg.net" 68.183.30.xxx:31978
47.201.184.xx

Headers from the backend:

map[User-Agent:[curl/7.54.0] Accept:[*/*] X-Forwarded-Proto:[http] X-Real-Ip:[47.201.184.xx] X-Forwarded-Server:[traefik-6f7469496d-xwqlk] Accept-Encoding:[gzip] X-Forwarded-For:[47.201.184.xx] X-Forwarded-Host:[ifcfg.net] X-Forwarded-Port:[80]]

I get the right response back.

Outstanding issues:

@andrewsykim
Contributor

@joshrendek what is the protocol set on your LB?

@joshrendek
Author

@andrewsykim it was the default that came up (AFAIK it was TCP) when switching the Traefik service from NodePort to type LoadBalancer.

@andrewsykim
Contributor

@joshrendek you probably want to set the LB protocol to http; otherwise it will not do any header modifications (see here for an example).
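For illustration, that would mean annotating the Service along these lines (a sketch: service.beta.kubernetes.io/do-loadbalancer-protocol is the DO CCM annotation; everything else here is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  annotations:
    # Make the DO LB speak HTTP to the backends so it can inject
    # X-Forwarded-For and related headers.
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - name: http
      port: 80
      targetPort: 80
```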

@joshrendek
Author

joshrendek commented Nov 5, 2018

@andrewsykim how would that work for SSL if I want the termination to happen at traefik?

I can get the correct behavior with HAProxy by using the send-proxy directive on the backends.

For example here is what the haproxy config looks like:

listen http
    bind 0.0.0.0:80
    balance roundrobin
    option httpclose
    option forwardfor
    server srv1 10.136.79.198:32573 check send-proxy
    server srv2 10.136.97.101:32573 check send-proxy

# 32573/TCP,443:31125/TCP
listen https
    bind 0.0.0.0:443
    mode            tcp
    log             global
    option          dontlognull
    option          dontlog-normal
    option          log-separate-errors
    timeout         client 30000
    tcp-request     inspect-delay 5s
    tcp-request     content accept if { req.ssl_hello_type 1 }
    acl proto_tls   req.ssl_hello_type 1
    use_backend nodes-https if proto_tls
    default_backend nodes-https

backend nodes-https
    mode            tcp
    log             global
    stick-table     type ip size 512k expire 30m
    stick on        src
    balance         leastconn
    timeout         connect 30000
    timeout         server 300000
    retries         3
    option          ssl-hello-chk
    option          tcplog
    server ssrv1 10.136.79.198:31125 check send-proxy
    server ssrv2 10.136.97.101:31125 check send-proxy

@andrewsykim
Contributor

andrewsykim commented Nov 5, 2018

I'm not familiar with Traefik, but you can specify TLS ports with annotations as well. Here's an example that does TLS termination: https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/examples/https-with-cert-nginx.yml#L9.

@joshrendek
Author

@andrewsykim sorry, I'm not sure I was clear. I don't want the LB to do termination, since Traefik is using Let's Encrypt and has a bunch of different domains behind it. I want Traefik to terminate TLS and have the LB simply pass traffic through, like the HAProxy config does.

@andrewsykim
Contributor

I see, so it seems like you want to keep specifying TLS ports but configure TLS passthrough like this. That should pass all traffic through via TLS and let Traefik handle any application-level routing. Let me know if that's what you were looking for :)
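Sketched as annotations (do-loadbalancer-tls-ports and do-loadbalancer-tls-passthrough are the relevant DO CCM annotations; the rest is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  annotations:
    # Forward port 443 as opaque TLS so Traefik can terminate it itself
    # (e.g. with its Let's Encrypt certificates).
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - name: https
      port: 443
      targetPort: 443
```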

@joshrendek
Author

@andrewsykim awesome, I'll give that a whirl later today. At the very least, if it doesn't work, I'll provide full reproduction steps with a fresh cluster.

@joshrendek
Author

@andrewsykim this only partially works

So if I set up the load balancer as HTTP -> HTTP, I get the X-Forwarded-For header set and a response like:

47.201.184.xx, 10.244.97.0 - which would be okay

If I set up HTTPS as passthrough, however, I get:

10.244.25.1

For context the ruleset is:

[screenshot: DO load balancer forwarding rules]

Similarly, if I just do TCP -> TCP, I get an internal network IP as well: 10.244.25.1

The HAProxy config still works like I'd expect it to (however, I lose the ability to use a DO LB and the redundancy that comes with it).

@andrewsykim
Contributor

Sorry for the late reply here. I'm not sure X-Forwarded-For headers can work over HTTPS passthrough, because the load balancer cannot see or modify the HTTP headers inside the encrypted stream.

@aurokk

aurokk commented Jan 3, 2019

@andrewsykim
Hello, is there any news? :)
As I understand it, the only option is to terminate TLS at the LB?

@andrewsykim
Contributor

IIRC that is correct. You won't be able to see X-Forwarded-For headers when doing HTTPS -> HTTPS.

@aurokk

aurokk commented Jan 4, 2019

@andrewsykim

I thought I could use a K8S ingress and my requests would be processed this way:
client -> K8S ingress (TLS termination) -> K8S svc -> pod

But in fact I have:
client -> DO LB -> K8S ingress (TLS termination) -> K8S svc -> pod

I thought the DO LB and the K8S ingress were one entity :)

So...
What is the right way to use DO K8S and LBs?

Should I create a DO LB for each K8S service?
(without an ingress, i.e.: client -> DO LB (TLS termination) -> K8S svc -> pod)

Wouldn't NGINX with the PROXY protocol solve our problem (preserving client IP information)?
(client -> DO LB (PROXY protocol) -> K8S ingress: NGINX (TLS termination) -> K8S svc -> pod)

🤔 🤔 🤔

@aurokk

aurokk commented Jan 7, 2019

I asked DO support about client IP preservation and the PROXY protocol, and they answered:

Sorry to hear that you are experiencing issues with this. At this time, Proxy protocol is not supported on our Load Balancers and this is why the client IP is not being forwarded. Our engineering team that manages the Load Balancers service is aware of this limitation and is currently working toward a solution, however it is not clear how long that will take. In the meantime, it would not be possible to change the relationship from our Kubernetes Service to our Load Balancers managed on our platform. We hope to be able to offer a solution to this soon. Sorry for the inconvenience.

I think we can close the issue and wait for PROXY protocol support on the LB :)

A temporary solution is to terminate TLS on the LB.

@timoreimann
Collaborator

Let's keep this issue open until client IP addresses can be properly transmitted through TLS connections to LoadBalancer-typed services. Depending on the chosen implementation, this may or may not need a change to CCM once a solution has been found on the DO LB end.

@Berndinox

+1

@timoreimann timoreimann changed the title X-Forwarded-For not working with externalTrafficPolicy: Local Preserve client IP address on both HTTP and HTTPS Feb 24, 2019
@gboor

gboor commented Mar 1, 2019

I also really need this to work. I have a domain running through Cloudflare, so I cannot easily move the DNS to DigitalOcean - which seems to be required to do TLS termination on the DO LB.

Can we get any sort of timeline for this?

@daanwa

daanwa commented Mar 9, 2019

Facing the same problem. Would really like to get this to work.

@lpellegr

lpellegr commented Mar 9, 2019

I discussed this issue with support recently for a similar need. It seems the DigitalOcean team is targeting a solution for the end of this month.

@thedumbtechguy

thedumbtechguy commented Mar 17, 2019

Same here. Supporting the PROXY protocol for TCP would be a good start, as it would solve 90% of the use cases.

@lpellegr

lpellegr commented Mar 19, 2019

@timoreimann Just noticed PROXY protocol support is now available on DigitalOcean load balancers.

I gave it a try by creating a fresh cluster and deploying pods, an ingress, etc. as explained in this excellent DigitalOcean tutorial:

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes

Then I manually enabled the PROXY protocol from the DO dashboard for the new load balancer.

My services and pods are running properly, judging by their status and even their logs. However, when I perform an HTTP GET against a REST endpoint, the only response I get is a 400 Bad Request from Nginx. I suspect it is related to the load balancer.

Is there any documentation that explains how to configure Proxy Protocol with Kubernetes?
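A likely cause, assuming the tutorial's ingress-nginx setup: once the LB prepends a PROXY protocol header, nginx rejects requests as malformed unless it is told to expect that header via its ConfigMap. A sketch (the ConfigMap name and namespace follow a standard ingress-nginx deployment; adjust to yours):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Parse the PROXY protocol header prepended by the DO load balancer.
  # Without this, PROXY-prefixed requests look malformed => 400 Bad Request.
  use-proxy-protocol: "true"
```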

@timoreimann
Collaborator

Hi @lpellegr

yes, we released support for PROXY protocol earlier today 🎉. Our official blog post has more information and points at the service annotation you should use, which is

service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"

Adding that to your Service object should do the trick. A full example is available here.
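As a sketch, a complete Service using the annotation might look like this (the name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  annotations:
    # Ask the DO LB to prepend a PROXY protocol header carrying the
    # original client IP; the backend must be configured to parse it.
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```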

Please note that older clusters need to have their master nodes recycled, which is something our support team can do for you. Newly created clusters will come with PROXY protocol support right away.

Hope this helps!

@timoreimann
Collaborator

Closing this issue, as the feature has been implemented via #198.

Please open a new issue if you run into problems, thanks.

@ptariche

ptariche commented Mar 20, 2019

I can confirm the PROXY protocol implementation also works with Traefik. If anyone is having issues, here are the entrypoint arguments I used:

        - --api
        - --kubernetes
        - --logLevel=INFO
        - --ping
        - --ping.entrypoint=hc
        - --defaultentrypoints=http
        - --entryPoints=Name:https Address::443 TLS Compress:true ProxyProtocol.Default:true ProxyProtocol.TrustedIPs:(CIDR)
        - --entryPoints=Name:http Address::80 Compress:true ProxyProtocol.Default:true ProxyProtocol.TrustedIPs:(CIDR)
        - --entryPoints=Name:hc Address::8000 Compress:true
        - --defaultentrypoints=http
        - --sendanonymoususage=false

@thedumbtechguy

I can also confirm that it works with Kong.

@snuxoll

snuxoll commented Apr 19, 2019

Unfortunately, the PROXY protocol support of the DigitalOcean load balancers does not work properly with cert-manager either. I've opened support ticket 02611202 with DigitalOcean, but I'll post here as well.

It appears DO's load balancers are configured with hairpin NAT or a similar setup: if you access them from a droplet that backs them, your traffic is routed straight back instead of being proxied via the load balancer. This prevents cert-manager from checking the availability of ACME challenges, since configuring an ingress controller to accept PROXY protocol traffic prevents it from accepting plain HTTP(S) requests - and since these self-check requests aren't ACTUALLY going through the LB, they fail.

@jcassee

jcassee commented Apr 19, 2019

@snuxoll I think you are running into #193

@snuxoll

snuxoll commented Apr 19, 2019

@jcassee Similar issue, but I am not using HTTP/HTTPS termination at the load balancer - it is running in pure TCP mode. Either way, it appears to be related to k8s and kube-proxy, so I'll follow kubernetes/kubernetes#66607, which was linked from #193.

@jcassee

jcassee commented Apr 20, 2019

@snuxoll Yeah, I meant it's the same core issue (kubernetes/kubernetes#66607), but I see you already found it.

@michiels

michiels commented Jun 7, 2019

@timoreimann @ptariche

Hi both. Thank you for the PROXY protocol support and the Traefik startup arguments. I've tested the setup, and PROXY protocol support does indeed pass the source IP on to a Traefik ingress controller service in the Kubernetes cluster.

I have one problem, however: which CIDR should we trust? If I enter just the public IP of the DO LB, the source IP is not forwarded to the backend apps. However, if I add the "ProxyProtocol.Insecure = true" flag, it does work.

If I look at my traffic logs, I see an internal 10.x.x.x IP address as the incoming source to the backend service. However, this does not look like one of my node IPs. Can I assume this is the internal source IP of the DigitalOcean load balancer, which we need to put in the Traefik startup arguments as a trusted IP? Or will this IP address change over time due to DO's internal routing or the Kubernetes network setup?

@ptariche

ptariche commented Jun 7, 2019

@michiels the Traefik trusted-IP startup arguments need to contain the IP ranges of the upstream source. So if you're using Cloudflare, you'd want to add its ranges: https://www.cloudflare.com/ips-v4

@michiels

michiels commented Jun 7, 2019

@ptariche Thanks. I'm not using CloudFlare but I'm using a DigitalOcean Load Balancer as the Traefik deployment is set up as a Kubernetes Service with type=LoadBalancer. The DO LB IP goes into our DNS records directly.

@ghost

ghost commented Jun 8, 2019


I'm running into the same problem specifying the LoadBalancer as a trustedIP for ProxyProtocol. I think we need the internal/private IP of the load balancer to accomplish this, but it doesn't seem that DigitalOcean provides this for us.

@ptariche

ptariche commented Jun 9, 2019

For the DO LB you don't know the external public IP until after it's been created, so it may be something you have to do retroactively.

@timoreimann
Collaborator

timoreimann commented Jun 9, 2019

@michiels that 10.x.x.x IP address you're seeing should be the internal one of the proxy, correct. Unfortunately, it cannot be relied upon to be stable -- even if it looks stable-ish now, there are conditions under which it may change, even across two consecutive requests.

Do I understand correctly that the use case here is to prevent client IP address forgery (in the proxy protocol header) by relying on the fronting DO LB to set the proxy protocol header only? If so, I wonder if a different avenue to tackle the problem might be to firewall off access to the NodePorts from all clients except for the LB. I'd have to double-check, but I believe the existing tagging infrastructure would allow for this kind of filtering already. (In fact, #70 may go in this very direction; the PR lost traction at some point.)

Any thoughts on this idea?

@michiels

@timoreimann Right. I already assumed that the internal IP of the LB might change depending on the dynamics for your DO LB internal infrastructure.

It is indeed to prevent client IP address forgery. Only trusted proxies/LBs may forward a trusted client IP to our backend services. Traefik, the ingress router that we use, has been configured to only trust the LB IP for the PROXY protocol (or accepting X-Forwarded-For headers).

If I understand the alternative route you suggest correctly, it would indeed be a good option to only allow the LB to forward traffic into the cluster by fully disabling external access to the NodePorts on the public IPs of the individual droplets.

In fact, ideally I wouldn't want to expose any public IPs or ports of the individual machines in the cluster to the public internet at all. This is how Azure's managed Kubernetes approaches it too, for example. That way you know all traffic coming into the exposed NodePorts can only come from the LB. Of course, I realize that some users would actually like to expose individual services in their cluster to the outside world. But when you put an LB in front of the cluster, I can't come up with a reason to expose individual nodes to the public internet.

I'll have a look at #70 to see if it matches my problem or offers a solution. I'm not a cloud infrastructure developer myself, however :)
