Get proxy protocol working correctly #404

Closed
rjbaat opened this issue Apr 9, 2021 · 44 comments
Labels
kind/question Further information is requested

Comments

@rjbaat

rjbaat commented Apr 9, 2021

Hi All,
I am trying to get proxy protocol enabled on my Traefik setup on DigitalOcean k8s.
The hardware load balancer supports proxy protocol. I can enable it manually, but then I get an error 400 response from my IngressRoute.

I tried to add an additional argument to the web entrypoint with the trusted IPs. I tried it with the internal subnet of DO and 127.0.0.1/32, as stated in the documentation. But on my 3-node cluster it sometimes shows the node IP of one of the 3 cluster nodes instead of my ISP IP. I think this is because that is the node the Traefik container actually lives on.

--entryPoints.web.proxyProtocol.trustedIPs=127.0.0.1/32,<INTERNAL_IP_SUBNET>

I noticed that it only truly works when I add the additional argument like:

--entryPoints.web.proxyProtocol.insecure

Any idea how to get proxy protocol working with the DO load balancer entry points, without getting the node ip?

@rjbaat rjbaat changed the title How to enable proxy protocol Get proxy protocol working correctly Apr 9, 2021
@tanandy

tanandy commented Apr 9, 2021

Hi, are you using externalTrafficPolicy: Local? (Check the LB Service created for Traefik.)

FYI, you get a 400 response when you set up proxy protocol on only one side. (Remember you need to enable it at the LB level and at your ingress level.)
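As a concrete illustration of "both sides" (a minimal sketch, not taken verbatim from this thread; the DO annotation name is from DigitalOcean's docs and `<LB_SUBNET>` is a placeholder):

```yaml
# LB side: ask DigitalOcean to send the PROXY protocol header.
service:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"

# Ingress side: tell Traefik to expect and trust that header on the entrypoint.
additionalArguments:
  - "--entryPoints.web.proxyProtocol.trustedIPs=127.0.0.1/32,<LB_SUBNET>"
```

If only the LB side is enabled, Traefik receives a PROXY header where it expects plain HTTP and answers 400; if only the Traefik side is enabled (or set to `insecure`), any client could spoof the header.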

@rjbaat
Author

rjbaat commented Apr 9, 2021

Hi, when I check the Traefik LB it states:

externalTrafficPolicy: Cluster

@tanandy

tanandy commented Apr 9, 2021

OK sorry, you aren't trying to get the real source IP, are you? Or do you just want proxy protocol enabled?
Because to retrieve the real source IP with the proxy protocol, you will need to set the traffic policy to Local, AFAIK.
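The effect of the traffic policy can be sketched on the Service itself (a hypothetical manifest; the name is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik   # placeholder name
spec:
  type: LoadBalancer
  # Local: traffic is only delivered to pods on the receiving node, so the
  # client source IP is preserved, but nodes without a Traefik pod fail the
  # LB health check.
  # Cluster (the default): kube-proxy may hop to another node and SNATs the
  # packet, so the original client IP is lost.
  externalTrafficPolicy: Local
```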

@rjbaat
Author

rjbaat commented Apr 9, 2021

No, I actually do want to see the client IP and not the IPs in between.
Aha, I will test it by setting it to Local then.

@tanandy

tanandy commented Apr 9, 2021

Could you describe the LB that was created, to see which annotations your LB has for the proxy protocol?

@tanandy

tanandy commented Apr 9, 2021

> Aha, I will test it by setting it to Local then.

OK, then if you also need to retrieve the real IP, you will need externalTrafficPolicy: Local at least.

@tanandy

tanandy commented Apr 9, 2021

For your information, I put a working example here for OVH; you can get some insights from it and adapt it to DO:

https://github.com/tanandy/helm-ovh-ingress/blob/main/ingress/traefik/values.yaml
https://github.com/tanandy/helm-ovh-ingress/blob/main/ingress/traefik/values-upgrade-ips.yaml

@rjbaat
Author

rjbaat commented Apr 9, 2021

I tried setting:

externalTrafficPolicy: Local

But then only 1 node is healthy on the hardware load balancer, because it will not forward traffic to the other nodes. So I don't think that is the solution.

What I did was add this to the helm chart as extra values:

```yaml
additionalArguments:
  - "--entryPoints.web.proxyProtocol.trustedIPs=127.0.0.1/32,10.133.0.0/16"
service:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
```

This is what I will test atm, to see if it solves it.

@rjbaat
Author

rjbaat commented Apr 9, 2021

@tanandy thanks!
I think that this indeed solves it. I only see my own IP now.
Stupid, since I had this annotation for NGINX before as well 🙈.

@tanandy

tanandy commented Apr 9, 2021

You have 2 different things:

1/ You need to activate proxy protocol on the ingress and on the LB (to support proxy protocol).

2/ You need to use externalTrafficPolicy: Local (to be able to retrieve the real IP).

@rjbaat
Author

rjbaat commented Apr 9, 2021

Number 2 isn't needed, since it then breaks the LB.
I guess that would only work if Traefik runs as a DaemonSet.

Edit: I was too soon. The problem still occurs :(

@tanandy

tanandy commented Apr 9, 2021

Really? I have always needed to set externalTrafficPolicy to Local to get the real IP of the user. You may be lucky:
https://blog.getambassador.io/externaltrafficpolicy-local-on-kubernetes-e66e498212f9
https://www.asykim.com/blog/deep-dive-into-kubernetes-external-traffic-policies

@rjbaat
Author

rjbaat commented Apr 9, 2021

Well if I do that, 2 of the 3 nodes in the LB pool will show unhealthy.

@tanandy

tanandy commented Apr 9, 2021

Interesting, I'm not using DO. It could be useful to understand the reason. Maybe it's a difference between the OVH and DO LBs.

@rjbaat
Author

rjbaat commented Apr 9, 2021

Well, I think it might be due to the health checks. These are done on the node ports of the Traefik container on every node in the cluster. But if the health check doesn't reach a Traefik container (because of externalTrafficPolicy: Local), then the LB marks the node as unhealthy.

Anyway, I am not sure what can be done. The strange thing is that the same concept is working for NGINX.

@tanandy

tanandy commented Apr 9, 2021

https://docs.digitalocean.com/products/kubernetes/how-to/configure-load-balancers/#proxy-protocol

External Traffic Policies and Health Checks
Load balancers managed by DOKS assess the health of the endpoints for the LoadBalancer service that provisioned them.

A health check’s behavior is dependent on the service’s externaltrafficpolicy. A service’s externaltrafficpolicy can be set to either Local or Cluster. A Local policy only accepts health checks if the destination pod is running locally, while a Cluster policy allows the nodes to distribute requests to pods in other nodes within the cluster.

Services with a Local policy assess nodes without any local endpoints for the service as unhealthy.

Services with a Cluster policy can assess nodes as healthy even if they do not contain pods hosting that service. To change this setting for a service, run the following command with your desired policy:

Note

If a service has a Cluster policy, requests will lose the original client IP address due to the extra network hop between the load balancer and the nodes. If your service requires retaining the request's original IP address, a Local policy is required.

@tanandy

tanandy commented Apr 9, 2021

You may need to customize the health check?

```yaml
metadata:
  name: health-check-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "80"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/health"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-check-interval-seconds: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-response-timeout-seconds: "5"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-healthy-threshold: "5"
```

@rjbaat
Author

rjbaat commented Apr 9, 2021

Haha, I just wanted to paste the same link to the article! :)
So I think I have to accept that this is a problem that can't be solved, unless I make sure Traefik is scaled out to every node or something.

@tanandy

tanandy commented Apr 9, 2021

You can mitigate the traffic spread using pod anti-affinity, btw. It's not perfect, but it's not that bad.

@rjbaat
Author

rjbaat commented Apr 9, 2021

Yes indeed, or maybe it should be set to kind: DaemonSet? But I am not sure if that's possible with this helm chart's options.

@tanandy

tanandy commented Apr 9, 2021

I don't see why using a DaemonSet would resolve it?

@rjbaat
Author

rjbaat commented Apr 9, 2021

A DaemonSet ensures that all (or some) nodes run a copy of a pod. So it will make sure every node has at least one Traefik container, if I am right about this. https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

So if every node has one Traefik pod and I set externalTrafficPolicy: Local, then the health checks on each node will work and the routing from each Traefik pod should use the preserved IP.
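If the chart allows it, that idea could be sketched roughly like this (a sketch only; `deployment.kind` is an option in recent versions of the official Traefik helm chart, so verify it exists in the chart version you use):

```yaml
deployment:
  kind: DaemonSet   # one Traefik pod on every node
service:
  spec:
    externalTrafficPolicy: Local   # each node now has a local endpoint, so the LB health checks pass
```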

@tanandy

tanandy commented Apr 9, 2021

OK, I was talking about not using externalTrafficPolicy: Local but Cluster. I don't know about the DaemonSet; you could test it. I'm only using the default chart currently, which works for me, but I'm not on DO.

If you want to avoid using Local and the downsides of that approach, you could mix proxy protocol + the X-Forwarded-For header (to preserve security), but you need one more middleware to do the proxy protocol stuff:
https://devcentral.f5.com/s/articles/How-to-persist-real-IP-into-Kubernetes

@rjbaat
Author

rjbaat commented Apr 12, 2021

@tanandy, how does that work? I don't think I quite understand. I tried to add the forwarded headers, like adding:

--entryPoints.web.forwardedHeaders.trustedIPs=127.0.0.1/32,10.133.0.0/16

But it does not seem to work in combination with externalTrafficPolicy: Cluster.

It does work when I add the actual external IP of the node to the array, but that's also not a solution in an elastic setup.

The thing is that with the same setup and with DO loadbalancer + proxy protocol + NGINX, it does work with externalTrafficPolicy: Cluster.

So maybe NGINX uses the headers differently than Traefik.

@tanandy

tanandy commented Apr 12, 2021

I don't get your point. Why are you trying to use forwarded headers? Are you trying to replace proxy protocol with X-Forwarded-For?

> The thing is that with the same setup and with DO loadbalancer + proxy protocol + NGINX, it does work with externalTrafficPolicy: Cluster.

You get the real IP with Cluster?

@rjbaat
Author

rjbaat commented Apr 12, 2021

No, I tried adding them both to see if it would then use the headers as the client IP, but it doesn't matter.
I am still stuck on the fact that it won't work without externalTrafficPolicy: Local. But that would break the DO LB health check.

I was wondering why I do get it to work with NGINX on externalTrafficPolicy: Cluster with the following helm values:

```yaml
controller:
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
```

So that makes me wonder if NGINX has implemented something so it is able to pass the client IP.

@tanandy

tanandy commented Apr 12, 2021

NGINX and Traefik both support forwarded headers and proxy protocol.

@rjbaat
Author

rjbaat commented Apr 12, 2021

Yes, but with NGINX I do get it to work with externalTrafficPolicy: Cluster, and with Traefik I don't.

Both have the DO LB annotation and both have proxy protocol enabled.

@tanandy

tanandy commented Apr 12, 2021

Because you have to adapt the Traefik deployment to make the DO health checks work. I don't have time to dig into it.

Maybe you can try it this way...
#404 (comment)

I'll let you dig into it.

@rjbaat
Author

rjbaat commented Apr 12, 2021

Yes, but the externalTrafficPolicy: Cluster option would still be better, since the routing of the request can run over the whole cluster. With the Local option it only looks for a Traefik container on the node the request enters. So then I have to make sure Traefik runs on all nodes.

Anyway thnx for your help! I will see if I can find a good solution.

@tanandy

tanandy commented Apr 12, 2021

As I told you, you can't use externalTrafficPolicy: Cluster to get the real source IP, AFAIK (for the reason above).
If you want to use it and retrieve the IP without Local, you will probably need to use proxy protocol + forwarded headers (for security reasons) with the architecture above. #404 (comment)

@rjbaat
Author

rjbaat commented Apr 12, 2021

Yes, I read the article, but didn't quite understand the proposed solution.

@tanandy

tanandy commented Apr 12, 2021

The first middleware will handle the proxy protocol part; then at your ingress LB you won't need proxy protocol, only forwarded headers, to pass the real IP.

(You can't rely only on forwarded headers, since a header can be forged.)
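A minimal sketch of that trust boundary (illustrative only; the subnet is a placeholder for wherever the proxy-protocol-terminating edge proxy runs):

```yaml
additionalArguments:
  # Accept X-Forwarded-For only from the edge proxy that terminated proxy
  # protocol; headers arriving from any other source IP are discarded, so
  # outside clients cannot forge the client address.
  - "--entryPoints.web.forwardedHeaders.trustedIPs=10.0.10.0/24"
```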

@SantoDE SantoDE added the kind/question Further information is requested label Apr 21, 2021
@bcdrme

bcdrme commented Aug 20, 2021

I also tried to make it work and spent the day on it; unfortunately, it seems to be a problem with Traefik itself, cf. traefik/traefik#8304.

@Coleslaw3557

Coleslaw3557 commented Sep 3, 2021

With some help from this thread and Traefik support, I was successful in getting proxy protocol to work with DigitalOcean while maintaining "externalTrafficPolicy: Cluster" and, thus, working health checks for DigitalOcean load balancers. I'm using proxy protocol to apply ipwhitelist middlewares on my IngressRoutes. For anyone else trying to do this, here are the changes I made to the helm values.

```yaml
image:
  name: traefik
  tag: "2.5.2"

service:
  enabled: true
  type: LoadBalancer
  annotations:
    # This will tell DigitalOcean to enable the proxy protocol so we can get the client real IP.
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
  spec:
    # This is the default and should stay as Cluster to keep the DO health checks working.
    externalTrafficPolicy: Cluster

additionalArguments:
  # Tell Traefik to only trust incoming headers from the DigitalOcean load balancers.
  - "--entryPoints.web.proxyProtocol.trustedIPs=127.0.0.1/32,10.120.0.0/16"
  - "--entryPoints.websecure.proxyProtocol.trustedIPs=127.0.0.1/32,10.120.0.0/16"
  # Also whitelist the sources of headers to trust: the private IPs of the load balancers displayed on the networking page of DO.
  - "--entryPoints.web.forwardedHeaders.trustedIPs=127.0.0.1/32,10.120.0.0/16"
  - "--entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32,10.120.0.0/16"
```

EDIT: I had some intermittent issues where the ipwhitelist middlewares would stop working when updated or after a couple of hours of use. Using the whoami container, the headers consistently came through correctly from the DO load balancers even when the ipwhitelist wasn't working. In my case, things became stable after I switched from having a separate ipwhitelist middleware for each of my IngressRoutes to using a single middleware shared across all of them. I also lowered the character count of my middleware names from 25 to 17 characters. I can't imagine why either of these things would make a difference, though. Hopefully it's just a fluke with my deployment.
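For reference, a single shared middleware as described above might look like this (a sketch using the Traefik v2 CRDs; the name, namespace, and CIDR are placeholders):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: shared-allowlist
  namespace: default
spec:
  ipWhiteList:
    sourceRange:
      - 203.0.113.0/24   # allowed client range (placeholder)
```

Each IngressRoute would then reference `shared-allowlist` in its route's `middlewares` list instead of defining its own copy.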

@kamikazechaser

Additionally, with @timothydlister's solution, I added a service.beta.kubernetes.io/do-loadbalancer-hostname: "kube.your.tld" annotation to get it working with cert-manager, where kube.your.tld (it can be any subdomain) is a dedicated A record pointing to the LoadBalancer public IP.

@joeldeteves

joeldeteves commented Apr 8, 2022

@timothydlister I have an almost identical setup to yours: DO Managed K8s running behind a DO LB with Traefik ingress. My config file is the same as yours, I have proxyProtocol working and forwardedHeaders enabled, but I'm still only seeing internal IP addresses in the logs.

Do you have any idea what else might be missing from my setup?

EDIT: I figured this out.

I thought I was supposed to use the IP range of my DO VPC, however it turns out the ClusterIP of my DO Load Balancer was not in that range. Therefore, --entryPoints.websecure.proxyProtocol.trustedIPs must be set to the ClusterIP of the DO Load Balancer. Or, if you're worried about that IP changing, you can set it to a block with a CIDR range that covers the range the DO Load Balancer resides in.

Tip: You can view the ClusterIP using kubectl get service -n traefik (or whatever namespace your Traefik instance runs under).

@mloiseleur
Member

Hello,

It seems this issue is solved.
Let us know if you think there is something we can do to help on this subject within this helm chart.

@badrdouah

> additionalArguments:

Hi,
can you please help me with where I should put the additional arguments?
Should it be in the root section of the LB Service yaml?
Thanks

@mloiseleur
Member

Hello @badrdouah

It's in the root section.

@mloiseleur
Member

@badrdouah You are editing a Service, not the values.yaml file for this helm chart. You may compare this helm chart's output with your k8s object if you need. You may also ask for help on the Traefik community forum.

@badrdouah

@mloiseleur I have tried the Traefik forum; I got no help.
Your answer helped me a bit, but it's still not working yet.
What I did:
I did a clean install of traefik:

```shell
helm upgrade --install traefik traefik/traefik --set "ports.websecure.tls.enabled=true" --set "providers.kubernetesIngress.publishedService.enabled=true"
```

Then I applied the following values file:

```yaml
image:
  name: traefik

service:
  enabled: true
  type: LoadBalancer
  annotations:
    # This will tell DigitalOcean to enable the proxy protocol so we can get the client real IP.
    service.beta.kubernetes.io/linode-loadbalancer-enable-proxy-protocol: "true"
  spec:
    # This is the default and should stay as Cluster to keep the DO health checks working.
    externalTrafficPolicy: Cluster

additionalArguments:
  # Tell Traefik to only trust incoming headers from the DigitalOcean load balancers.
  - "--entryPoints.web.proxyProtocol.trustedIPs=127.0.0.1/32,10.120.0.0/16"
  - "--entryPoints.websecure.proxyProtocol.trustedIPs=127.0.0.1/32,10.120.0.0/16"
  # Also whitelist the sources of headers to trust: the private IPs of the load balancers displayed on the networking page of DO.
  - "--entryPoints.web.forwardedHeaders.trustedIPs=127.0.0.1/32,10.120.0.0/16"
  - "--entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32,10.120.0.0/16"
```

```shell
helm upgrade -f /Users/dbadr/Desktop/K8s/values.yaml traefik traefik/traefik
```

In the values file I replaced

service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"

with

service.beta.kubernetes.io/linode-loadbalancer-enable-proxy-protocol: "true"

since I'm using Linode, not DigitalOcean, as the provider.
I'm still getting the load balancer IP address instead of the client's real IP address.

@antoniomyslice

Using Kamal, I got it working by entering the following:

```yaml
entryPoints.web.proxyProtocol.insecure: true
entryPoints.websecure.proxyProtocol.insecure: true
```
