
Please enable proxy protocol in k3s ingress #852

Closed
sandys opened this issue Oct 1, 2019 · 21 comments

@sandys commented Oct 1, 2019

This issue is related to similar issues discussed in the docker swarm world - moby/moby#39465 and moby/moby#25526

We need the source IP address to be passed on through the ingress. The only standards-compliant way to do this while staying compatible with all kinds of upstream LBs (including cloud load balancers) is the PROXY protocol.

https://github.com/containous/traefik/blob/master/docs/content/routing/entrypoints.md

Traefik already supports the PROXY protocol. This needs to be enabled in the right way so that upstream load balancers like ELB can pass on the PROXY protocol info, and so that when clients connect directly, Traefik injects the appropriate headers itself.
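For context, a PROXY protocol v1 header is a single human-readable line that a proxy-aware load balancer prepends to the TCP stream before any application data. A minimal sketch in Python (the helper names are illustrative, not from any library):

```python
# Sketch of the PROXY protocol v1 header (per the haproxy.org spec).
# A proxy-protocol-aware LB prepends one line like this to the TCP stream.

def build_proxy_v1_header(src_ip: str, dst_ip: str,
                          src_port: int, dst_port: int) -> bytes:
    """Build a PROXY protocol v1 header for an IPv4 connection."""
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode("ascii")

def parse_proxy_v1_header(data: bytes):
    """Split the header off the stream; returns ((src_ip, src_port), rest)."""
    header, _, rest = data.partition(b"\r\n")
    parts = header.decode("ascii").split(" ")
    assert parts[0] == "PROXY" and parts[1] in ("TCP4", "TCP6")
    return (parts[2], int(parts[4])), rest

hdr = build_proxy_v1_header("203.0.113.7", "10.0.0.5", 56324, 443)
(src, sport), payload = parse_proxy_v1_header(hdr + b"GET / HTTP/1.1\r\n")
print(src, sport)  # 203.0.113.7 56324
```

The ingress (here, Traefik) strips this line off and uses it as the real client address; everything after the `\r\n` is the untouched application payload.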

@davidnuzik davidnuzik added [zube]: To Triage kind/enhancement An improvement to existing functionality labels Nov 5, 2019
@davidnuzik davidnuzik added this to the v1.x - Backlog milestone Nov 5, 2019
@sandys (Author) commented Feb 15, 2020

Hi guys,
any update on this? Since you already use Traefik as the ingress, this shouldn't be too hard, right?

we are having trouble in a production deploy of k3s on AWS

  1. behind a load balancer (it should pass through the headers)
  2. while just using k3s (it should inject source headers)

@sunpeinju commented

Hi, is there any update on this issue?

@niklaskorz commented Apr 23, 2020

I suppose this would have to be implemented in https://github.com/rancher/klipper-lb @ibuildthecloud? It looks like Amazon supports this behind an annotation for LoadBalancer services: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*". Maybe something similar could be done for k3s' servicelb. The PROXY protocol is specified at http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt; I might be willing to take this task on.
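For reference, the AWS annotation mentioned above goes on the LoadBalancer Service itself; a sketch (the service name, selector, and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
  annotations:
    # Tells the AWS cloud provider to enable PROXY protocol on the ELB
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - name: https
      port: 443
      targetPort: 443
```

With this, the ELB prepends the PROXY header before forwarding, so the backend only has to be configured to trust and parse it.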

@brandond (Contributor) commented
Not sure how it works with klipper-lb (the thing behind k3s's servicelb), but with MetalLB at least you have to set externalTrafficPolicy: Local on the Service entry for the Traefik ingress. Otherwise inbound traffic can come in through other nodes, and you lose the original source address as it passes through another node before hitting the node that's actually running the pod.
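The setting described above lives on the Service; a minimal sketch (name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: LoadBalancer
  # Only route to nodes that have a local endpoint for this Service,
  # so traffic is never SNATed through an intermediate node and the
  # client source IP is preserved.
  externalTrafficPolicy: Local
  selector:
    app: traefik
  ports:
    - name: https
      port: 443
      targetPort: 443
```

The trade-off is that nodes without a local endpoint drop (or fail health checks for) that traffic, so the external LB must health-check nodes accordingly.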

@niklaskorz commented
As far as I know, the goal of the PROXY protocol is to not make this a problem, as the load balancer keeps track of the source IP to forward it properly to the service.

@sandys (Author) commented Apr 23, 2020 via email

@brandond (Contributor) commented Apr 23, 2020

Yes, the PROXY protocol is supposed to pass the information along, but that only works if the first entrypoint into the system supports it. If the traffic passes through kubelet NAT or a kube-proxy tunnel before hitting the Ingress Service, the original layer 3 source and destination that the PROXY header would capture are already lost. Your LB layer needs to either keep the inbound traffic local (MetalLB) or add the PROXY information itself (ELB).

@ibuildthecloud (Contributor) commented
The solution to this issue is going to be flipping some settings in the Traefik chart we install by default, or a special annotation on your Ingress resource. I don't know what that is, but I'd start by finding out how to use the PROXY protocol with Traefik's ingress controller. klipper-lb should have nothing to do with this because it's just a TCP pass-through using iptables. If you are just using a service load balancer and not ingress, the PROXY protocol will already work.

@sandys (Author) commented Apr 24, 2020 via email

@cjellick (Contributor) commented
@ibuildthecloud do you think we should update our traefik chart to support this or is this "on the user" to configure?

@davidnuzik (Contributor) commented
Is this configurable now because of HelmChartConfig CRD? (To be discussed in our next design call)

@brandond (Contributor) commented Oct 6, 2020

@davidnuzik yes, and doing so is actually covered in the example docs!

https://rancher.com/docs/k3s/latest/en/helm/#customizing-packaged-components-with-helmchartconfig - see the proxyProtocol: bit.
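The linked docs use a HelmChartConfig along these lines (a sketch; the exact valuesContent fields depend on the Traefik chart version shipped with your k3s release, and the trusted IP range here is illustrative):

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    proxyProtocol:
      enabled: true
      trustedIPs:
        - 10.0.0.0/8
```

Placed in /var/lib/rancher/k3s/server/manifests/, k3s re-renders the packaged Traefik chart with these overridden values, so Traefik trusts PROXY headers only from the listed source ranges.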

@davidnuzik (Contributor) commented
Test as low priority. Leverage the docs that Brad outlined.

@davidnuzik davidnuzik modified the milestones: Backlog, v1.19.3+k3s1 Oct 8, 2020
@davidnuzik (Contributor) commented
Should not block v1.19.3 release.

@rancher-max (Contributor) commented
Validated in v1.19.3+k3s1 using the example mentioned in the docs:

  • Viewing the logs of the traefik pod shows the PROXY protocol being enabled with the specified trusted IPs:
{"level":"info","msg":"Enabling ProxyProtocol for trusted IPs [10.0.0.0/8]","time":"2020-10-16T17:43:55Z"}
{"level":"info","msg":"Preparing server https \u0026{Address::443 TLS:0xc0003d17a0 Redirect:\u003cnil\u003e Auth:\u003cnil\u003e WhitelistSourceRange:[] WhiteList:\u003cnil\u003e Compress:true ProxyProtocol:0xc00079b7e0 ForwardedHeaders:0xc00079b8c0} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s","time":"2020-10-16T17:43:55Z"}
{"level":"info","msg":"Enabling ProxyProtocol for trusted IPs [10.0.0.0/8]","time":"2020-10-16T17:43:55Z"}

Notice the difference when the config manifest is not included: the logs still mention ProxyProtocol, but without it enabled or any trusted IPs:

$ k logs -n kube-system traefik-5dd496474-btz5n | grep -i proxy
{"level":"info","msg":"Preparing server https \u0026{Address::443 TLS:0xc0006157a0 Redirect:\u003cnil\u003e Auth:\u003cnil\u003e WhitelistSourceRange:[] WhiteList:\u003cnil\u003e Compress:true ProxyProtocol:\u003cnil\u003e ForwardedHeaders:0xc0005e4320} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s","time":"2020-10-16T18:01:00Z"}
{"level":"info","msg":"Preparing server prometheus \u0026{Address::9100 TLS:\u003cnil\u003e Redirect:\u003cnil\u003e Auth:\u003cnil\u003e WhitelistSourceRange:[] WhiteList:\u003cnil\u003e Compress:false ProxyProtocol:\u003cnil\u003e ForwardedHeaders:0xc0005e4460} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s","time":"2020-10-16T18:01:00Z"}
{"level":"info","msg":"Preparing server http \u0026{Address::80 TLS:\u003cnil\u003e Redirect:\u003cnil\u003e Auth:\u003cnil\u003e WhitelistSourceRange:[] WhiteList:\u003cnil\u003e Compress:true ProxyProtocol:\u003cnil\u003e ForwardedHeaders:0xc0005e4300} with readTimeout=0s writeTimeout=0s idleTimeout=3m0s","time":"2020-10-16T18:01:00Z"}
  • All pods come up successfully and appear to be functional

@arctica commented Nov 28, 2020

This is only half working. Traefik supports the PROXY protocol, but it needs the actual proxy data from the upstream injected into the TCP stream. Load balancers from AWS et al. do this for you, but the LoadBalancer service (Klipper) from k3s does not.

Klipper is essentially two lines of iptables and isn't capable of prepending data into the TCP stream. It took me quite some time to realize this.

@rancher-max Your test only checks that Traefik enables the PROXY protocol; it probably did not check whether Traefik actually sees the real client IP, or it was done behind a load balancer from AWS etc. Enabling the setting without the actual PROXY protocol data being sent will not do much.
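To illustrate the point about Klipper: its forwarding amounts to NAT rules conceptually like the following (illustrative only, not the exact rules klipper-lb generates; the pod IP is hypothetical). DNAT and MASQUERADE rewrite packet headers but cannot prepend bytes such as a PROXY header into the TCP payload, and the masquerade step is exactly what replaces the client source IP:

```sh
# Illustrative sketch: forward traffic arriving on the node to a Traefik pod.
# 10.42.0.10 is a hypothetical pod IP.
iptables -t nat -A PREROUTING  -p tcp --dport 443 -j DNAT --to-destination 10.42.0.10:443
iptables -t nat -A POSTROUTING -d 10.42.0.10/32 -p tcp --dport 443 -j MASQUERADE
```

Injecting a PROXY header would require a userspace (or at least L7-aware) proxy terminating the TCP connection, which pure iptables NAT is not.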

@sandys (Author) commented Nov 28, 2020 via email

@arctica commented Nov 28, 2020

@sandys Yes, we also ripped out the k3s built-in load balancer and just exposed Traefik directly via hostPort (note this only works if you explicitly configure the pod to run as root, as the NET_BIND_SERVICE capability does not work either). Many places suggest not doing this, but I can't see the benefit of the Klipper load balancer in this case, or why there need to be svclb-traefik pods running on every node.
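A hostPort-based setup as described might look roughly like this (a sketch with illustrative names and image tag; it runs as root because, per the comment above, NET_BIND_SERVICE alone was not sufficient):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
        - name: traefik
          image: traefik:1.7
          securityContext:
            runAsUser: 0   # root, to bind ports 80/443 on the host
          ports:
            - containerPort: 80
              hostPort: 80    # bind directly on the node, bypassing svclb
            - containerPort: 443
              hostPort: 443
```

Since traffic hits Traefik directly on the node's ports, no NAT hop sits in between and the real client IP is visible without any PROXY header at all.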

@sandys (Author) commented Nov 28, 2020 via email

@jon-nfc commented Jan 25, 2024

Starting to notice a theme here with k3s: a suggestion is made to fix something or make it standards compliant, and nothing happens except the closing of the issue.

Of note, the validation post above was undone by #852 (comment); this issue needs to be addressed.

Considering that knowing the IP address of the actual endpoint(s) is a must for security, this issue should not have been closed without an actual solution, like the OP's suggestion, being added. I have a barebones cluster and use the nginx ingress, which also supports the PROXY protocol, but servicelb doesn't add it, and like everyone else I can't get the real IP because of that.

> Same here. We had to rip out load balancers and traefik and put haproxy everywhere. Not ideal at all.

@sandys, looks like I may have to do the same. I know it's an old post; any major issues doing this on a production cluster?

@brandond (Contributor) commented Jan 25, 2024

Please don't bump a 3+ year old closed issue. If you'd like to reopen the topic, please create a new issue or discussion.

@k3s-io k3s-io locked and limited conversation to collaborators Jan 25, 2024