Preserve client IP address on both HTTP and HTTPS #144
Comments
@joshrendek what is the protocol set on your LB?
@andrewsykim it was the defaults that came up (AFAIK it was TCP) when setting Traefik to class
@joshrendek you probably want to set the LB protocol to
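For context, the DO cloud controller manager reads the load balancer protocol from a Service annotation. A minimal sketch, assuming a hypothetical Traefik Service (names and ports are illustrative, not from this thread):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik  # hypothetical Service name
  annotations:
    # Controls the protocol the DO LB speaks to the backends.
    # Defaults to "tcp"; in "http" mode the LB can forward client
    # information via HTTP headers instead of raw TCP.
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - name: http
      port: 80
      targetPort: 80
```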
@andrewsykim how would that work for SSL if I want the termination to happen at Traefik? I can get the correct behavior with HAProxy. For example, here is what the HAProxy config looks like:
Not familiar with Traefik, but you can specify TLS ports with annotations as well. Here's an example that does TLS termination: https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/examples/https-with-cert-nginx.yml#L9
@andrewsykim sorry, not sure I was clear - I don't want the LB to do termination since it's using LetsEncrypt and has a bunch of different domains behind it. I want Traefik to do that and really just have the LB pass through, like HAProxy does.
I see, so it seems like you still want to specify TLS ports and configure TLS pass-through like this. That should pass through all traffic via TLS and let Traefik handle any application-level routing. Let me know if that's what you were looking for :)
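The pass-through setup described above can be sketched with the DO CCM's TLS annotations. A minimal example, assuming a hypothetical Traefik Service; the LB stays in TCP mode and forwards encrypted traffic untouched so Traefik can terminate TLS with its LetsEncrypt certificates:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik  # hypothetical Service name
  annotations:
    # Which LB ports carry TLS traffic.
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    # Pass encrypted traffic through without terminating it at the LB,
    # so the ingress controller behind it does the TLS termination.
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - name: https
      port: 443
      targetPort: 443
```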
@andrewsykim awesome, I'll give that a whirl later today - and at the very least, if it doesn't work, I'll give full reproduction steps with a fresh cluster.
@andrewsykim this only partially works. So if I set up a load balancer as
If I set up HTTPS as pass-through, however, I get:
For context, the ruleset is: Similarly, if I just do The HAProxy config still works like I'd expect it to (however, I lose the ability to use a DO LB and the redundancy that comes with that).
Sorry for the late reply here. I'm not sure if
@andrewsykim
IIRC that is correct. You won't be able to see
I thought I could use a K8s Ingress and my requests would be processed this way: But in fact I have: I thought the DO LB and the K8s Ingress were one entity :) So... should I create a DO LB for each K8s Service? Doesn't the NGINX PROXY protocol support resolve our problem (preserving client IP information)? 🤔
I asked DO support about client IP and the PROXY protocol, and they answered:
I think we can close the issue and wait for PROXY protocol support on the LB :) A temporary solution is to terminate TLS on the LB.
Let's keep this issue open until client IP addresses can be properly transmitted through TLS connections to LoadBalancer-typed services. Depending on the chosen implementation, this may or may not need a change to CCM once a solution has been found on the DO LB end. |
+1 |
I also really need this to work. I have a domain running through Cloudflare, so I cannot move the DNS to DigitalOcean easily - which seems to be required to do TLS termination on the DO LB. Can we get any sort of timeline for this? |
Facing the same problem. Would really like to get this to work. |
I discussed with the support recently about this issue for a similar need. It seems the Digital Ocean team is working on a solution for the end of this month. |
Same here. Supporting Proxy Protocol for TCP should be a good start as it will solve 90% of the use cases. |
@timoreimann Just noticed PROXY protocol is now available with DigitalOcean load balancers. I gave it a try by creating a fresh cluster and deploying pods, ingress, etc. as explained in this great DigitalOcean tutorial: Then I manually enabled PROXY protocol from the DO dashboard for the new load balancer. My services and pods are running properly if I check their status and even their logs. However, when I try to perform an HTTP GET to a REST endpoint, the only response I get is a 400 Bad Request from Nginx. I guess it is related to the load balancer. Is there any documentation that explains how to configure PROXY protocol with Kubernetes?
Hi @lpellegr, yes, we released support for the PROXY protocol earlier today 🎉. Our official blog post has more information and points at the service annotation you should use, which is
Adding that to your Service object should do the trick. A full example is available here. Please note that older clusters need to have their master nodes recycled, which is something our support team can do for you. Newly created clusters will come with PROXY protocol support right away. Hope this helps!
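The annotation in question is documented in the DO CCM repository as `service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol`. A minimal sketch, with a hypothetical Service name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik  # hypothetical Service name
  annotations:
    # Tell the DO LB to prepend a PROXY protocol header carrying the
    # original client IP on every connection to the backends. The
    # ingress controller behind it must also be configured to accept
    # PROXY protocol, or it will reject the traffic.
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - name: https
      port: 443
      targetPort: 443
```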
Closing this issue as the feature has been implemented along #198. Please open a new issue if you run into problems, thanks. |
I can confirm the Proxy Protocol implementation also works with Traefik.
If anyone is having issues. |
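The Traefik configuration referenced above was not captured in this thread. As a rough sketch, for Traefik 1.x the PROXY protocol is enabled per entrypoint via startup arguments; the trusted range below is an assumption, not a value confirmed by DO:

```yaml
# Fragment of a hypothetical Traefik 1.x Deployment spec.
containers:
  - name: traefik
    image: traefik:1.7
    args:
      # Accept PROXY protocol headers only from trusted source ranges;
      # 10.0.0.0/8 is an assumed private range, see the discussion below
      # about the LB's internal IP not being stable.
      - "--entryPoints=Name:http Address::80 ProxyProtocol.TrustedIPs:10.0.0.0/8"
      - "--entryPoints=Name:https Address::443 TLS ProxyProtocol.TrustedIPs:10.0.0.0/8"
```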
I can also confirm that it works with Kong. |
Unfortunately, the PROXY protocol support of the DigitalOcean load balancers does not properly work with cert-manager either. I've opened support ticket 02611202 with DigitalOcean for myself, but I'll post here as well. It appears DO's load balancers are configured with hairpin NAT or a similar configuration: if you try to access them from a droplet that is backed by them, your traffic is routed straight back instead of proxied via the load balancer. This prevents cert-manager from checking the availability of ACME challenges, since configuring an ingress controller to accept PROXY protocol traffic prevents it from accepting normal HTTP(S) requests - and since these requests aren't ACTUALLY going through the LB, they fail.
@snuxoll I think you are running into #193 |
@jcassee Similar issue, but I am not using HTTP/HTTPS termination at the load balancer - it is running in pure TCP mode. Either way it appears to be related to k8s and Kube-proxy, so I'll follow kubernetes/kubernetes#66607 which was linked from #193. |
@snuxoll Yeah, I meant it's the same core issue (kubernetes/kubernetes#66607), but I see you already found it. |
Hi both. Thank you for the PROXY protocol support and your Traefik startup arguments. I've tested the setup, and PROXY support does indeed pass the source IP on to a Traefik ingress controller service in the Kubernetes cluster. I have one problem, however: which CIDR should we enter to trust? If I enter just the public IP of the DO LB, it does not forward the source IP to the backend apps. However, if I add the "ProxyProtocol.Insecure = true" flag, it does work. If I look at my traffic logs, I do see an internal
@michiels The trusted IPs in the Traefik startup arguments need to be the IP ranges of the source. So if you're using Cloudflare, you'd want to add those: https://www.cloudflare.com/ips-v4
@ptariche Thanks. I'm not using CloudFlare but I'm using a DigitalOcean Load Balancer as the Traefik deployment is set up as a Kubernetes Service with type=LoadBalancer. The DO LB IP goes into our DNS records directly. |
I'm running into the same problem specifying the LoadBalancer as a trustedIP for ProxyProtocol. I think we need the internal/private IP of the load balancer to accomplish this, but it doesn't seem that DigitalOcean provides this for us. |
For the DO LB you don't know the external public IP until after it's been created, so it may be something you have to do retroactively.
@michiels that 10.x.x.x IP address you're seeing should be the internal one of the proxy, correct. Unfortunately, it cannot be relied upon to be stable -- even if it looks stable-ish now, there are conditions under which it may change, even across two consecutive requests. Do I understand correctly that the use case here is to prevent client IP address forgery (in the proxy protocol header) by relying on the fronting DO LB to set the proxy protocol header only? If so, I wonder if a different avenue to tackle the problem might be to firewall off access to the NodePorts from all clients except for the LB. I'd have to double-check, but I believe the existing tagging infrastructure would allow for this kind of filtering already. (In fact, #70 may go in this very direction; the PR lost traction at some point.) Any thoughts on this idea? |
@timoreimann Right. I already assumed that the internal IP of the LB might change depending on the dynamics of your DO LB internal infrastructure. It is indeed to prevent client IP address forgery: only trusted proxies/LBs may forward a trusted client IP to our backend services. Traefik, the ingress router that we use, has been configured to only trust the LB IP for the PROXY protocol (or for accepting X-Forwarded-For headers). If I understand the alternative route you suggest correctly, I think it would indeed be a good alternative to only allow the LB to forward traffic into the cluster by fully disabling external access to the NodePorts on the public IPs of the individual droplets in the cluster. In fact, ideally, I wouldn't want to expose any public IPs or ports of the individual machines in the cluster to the public internet. This is how, for example, Azure's Kubernetes clusters approach it too. This way you know all traffic coming into the exposed NodePorts can only come from the LB. Of course, I realize that some users would actually like to expose individual services in their cluster to the outside world. But in the case where you put an LB in front of it, I can't come up with a reason to expose individual nodes to the public internet. I'll have a look at #70 to see whether it matches my problem or offers a solution. I'm not a cloud infrastructure developer myself, however :)
On other cloud providers you would generally set

externalTrafficPolicy: Local

to preserve the IP information being passed along from the LB. Currently DO's load balancers do not support this.

Example:

curl https://ifcfg.net

returns 10.136.140.29
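For reference, `externalTrafficPolicy` is a standard field on Services of type LoadBalancer. A minimal sketch, with a hypothetical Service name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik  # hypothetical Service name
spec:
  type: LoadBalancer
  # "Local" routes LB traffic only to endpoints on the receiving node and
  # skips the extra kube-proxy SNAT hop, preserving the client source IP.
  # Trade-off: nodes without a ready pod fail the LB health check.
  externalTrafficPolicy: Local
  selector:
    app: traefik
  ports:
    - name: https
      port: 443
      targetPort: 443
```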
Headers from the backend:
However if I query the nodePort directly for Traefik:
Headers from the backend:
I get the right response back.
Outstanding issues: