Enabling enable-ssl-passthrough breaks client IP (all clients are 127.0.0.1) #8052
Comments
@logan2211: This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
Try this documentation: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough
Then please provide all related information from the live state of the cluster, like the curl request in full, ingress describe, service describe, controller pod logs, and any other related information.
If you route traffic directly to the controller without using the proxy protocol, nginx will set the real-ip to `$remote_addr`. The `$remote_addr` contains 127.0.0.1 as its value, since this is the address of the local TCP proxy.
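To illustrate the alternative that comment describes: when a load balancer that speaks PROXY protocol sits in front of the controller, the real client IP can be carried through to nginx via the ingress-nginx ConfigMap. This is only a sketch; the ConfigMap name and namespace below are assumptions that must match your installation, and enabling this without a PROXY-protocol-capable load balancer will break all incoming traffic.

```yaml
# Sketch only: enable PROXY protocol support in ingress-nginx.
# Requires a load balancer in front of the controller that actually
# sends PROXY protocol headers; without one, all requests will fail.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name; match your release
  namespace: ingress-nginx         # assumed namespace
data:
  use-proxy-protocol: "true"
```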
I have this problem as well. IMHO, this part of the documentation does not make it clear how to preserve the client IP address.
And thanks to @xom4ek, mine is working correctly now. My setup is MetalLB + Weave.
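xom4ek's exact configuration is not quoted in the thread, but a commonly reported part of the fix for MetalLB setups is setting `externalTrafficPolicy: Local` on the controller Service, which keeps traffic on the node that received it and prevents kube-proxy from SNAT-ing the source address. A hedged sketch, with the Service name and namespace assumed:

```yaml
# Sketch only: preserve the client source IP with MetalLB by keeping
# traffic on the receiving node (no SNAT by kube-proxy).
# Name and namespace are assumptions; match your installation.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: https
      port: 443
      targetPort: https
```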
@xom4ek Does this config work for ingress objects with the ssl-passthrough annotation? It seems the configuration only fixes the scenario for ingresses without the passthrough annotation.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/close
@rikatz: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks @xom4ek, this works for our AKS cluster! I tried to dig deeper to understand more. Please verify my understanding, to help other folks who find this page: we don't use the proxy protocol; if it is used, the additional nginx configuration should not be needed. The local TCP proxy address is also the nginx ingress controller, which handles the incoming TCP connection for the nginx ingress controller pods.
If extra steps are required to recover client IPs after enabling SSL passthrough, I feel that this should be covered more clearly by the documentation.
PRs for improving the docs will be super welcome. But there should be data somewhere showing a real config on a real cluster, with real curl commands and outputs, plus the corresponding controller logs showing the results.
NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`):
NGINX Ingress controller
  Release:    v1.1.0
  Build:      cacbee8
  Repository: https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.9
Kubernetes version (use `kubectl version`):
Environment:
Cloud provider or hardware configuration:
Limestone Networks Bare Metal Cloud
OS (e.g. from /etc/os-release):
Ubuntu 20.04
Kernel (e.g. `uname -a`): 5.4.0-90-generic #101-Ubuntu
Install tools:
Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
k3s
How was the ingress-nginx-controller installed:
If helm was used then please show output of `helm ls -A | grep -i ingress`:
ingress-nginx-ingress-nginx-private  ingress-nginx  4  2021-12-19 05:48:37.258964116 +0000 UTC  deployed  ingress-nginx-4.0.13  1.1.0
If helm was used then please show output of `helm -n <ingresscontrollernamespace> get values <helmreleasename>`:
What happened:
Prior to enabling `enable-ssl-passthrough`, client IPs are reflected correctly in the ingress controller logs, and features like `whitelist-source-range` work as expected. After enabling `enable-ssl-passthrough`, client IPs are always shown as 127.0.0.1 in the nginx logs, and ingresses using `whitelist-source-range` stop working as expected. Note that this occurs on all ingresses regardless of whether SSL passthrough is enabled on the ingress. To reproduce, simply enable the SSL passthrough feature on the controller.
before:
ingress-nginx-ingress-nginx-private-controller-698f69fd7f-ts5rc controller 10.3.0.66 - - [19/Dec/2021:17:26:20 +0000] "GET / HTTP/2.0" 200 786 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36" 22 0.001 [kube-system-kube-system-kubernetes-dashboard-443] [] 192.168.73.41:8443 786 0.000 200 4d19abd96b2a62d04d74906ecd1ccdac
after:
ingress-nginx-ingress-nginx-private-controller-6f945d6d84-4j9fw controller 2021/12/19 17:23:10 [error] 576#576: *18182 access forbidden by rule, client: 127.0.0.1, server: dashboard.k8s.domain.net, request: "GET / HTTP/2.0", host: "dashboard.k8s.domain.net"
How to reproduce it:
Add `--enable-ssl-passthrough` to the controller command line, and all client IPs will then show up as 127.0.0.1 in nginx.
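For reference, the flag would typically be added to the controller Deployment's container arguments. This is a sketch under the assumption of a standard manifest layout; the container name may differ in your installation:

```yaml
# Sketch only: where --enable-ssl-passthrough lands in the controller
# Deployment spec. Enabling it starts the internal TCP proxy that
# causes all client IPs to appear as 127.0.0.1, as described above.
spec:
  containers:
    - name: controller            # assumed container name
      args:
        - /nginx-ingress-controller
        - --enable-ssl-passthrough
```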