Annotation whitelist-source-range not using client real IP #11319
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug

Please enable proxy-protocol on the NLB as well as in the controller: https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol

/kind support
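For reference, a minimal sketch of what "enable it in the controller" amounts to, assuming the chart's default ConfigMap name and the `nginx` release namespace used in this issue; with Helm, the same key can be set through `controller.config.use-proxy-protocol`:

```yaml
# Sketch only: enable PROXY protocol parsing in ingress-nginx.
# The ConfigMap name follows the chart default; adjust to your release.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: nginx
data:
  # NGINX will now expect a PROXY protocol header on every incoming
  # connection, so the load balancer in front must be configured to send it.
  use-proxy-protocol: "true"
```

Both sides have to match: if only the controller is switched on, the NLB's plain TCP connections will fail to parse; if only the NLB sends the header, NGINX will treat it as malformed traffic.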
Because of this bug report we decided to test this before upgrading. We are not experiencing this problem, so this does indeed seem to be a problem related to your setup and not with the upgrade itself.
Hi,
I did the following operations:
Anyway, during these days, while I was checking other similar issues, I changed the ingress-nginx-controller Service by adding more annotations. Here's the full list of annotations present on the ingress-nginx-controller Service:

```yaml
annotations:
  meta.helm.sh/release-name: ingress-nginx
  meta.helm.sh/release-namespace: nginx
  service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
  service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /healthz
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: '80'
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
  service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: preserve_client_ip.enabled=true
  service.beta.kubernetes.io/aws-load-balancer-type: nlb
```

None of these annotations actually changed the real behaviour of the controller.

@longwuyuan The docs you linked talk about proxy protocol on the AWS ELB, which means the Classic Load Balancer, not the Network Load Balancer. For the Classic version, the linked AWS docs cover only proxy protocol v1, while the Network version supports only v2.

@rouke-broersma Regarding the upgrade, I don't think my problem is strictly related to the ingress-nginx version. I upgraded the controller in another cluster and everything went fine.
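Since NLBs only speak proxy protocol v2, the `aws-load-balancer-proxy-protocol: '*'` annotation (historically aimed at the Classic ELB) may not be the right lever here. A hedged sketch of the alternative, assuming the AWS Load Balancer Controller is the one provisioning the NLB, enables it as a target group attribute instead:

```yaml
# Hedged sketch: with the AWS Load Balancer Controller managing the NLB,
# proxy protocol v2 (the only version NLBs support) is enabled per
# target group rather than via the Classic-ELB-era annotation.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: proxy_protocol_v2.enabled=true,preserve_client_ip.enabled=true
```

Whether this applies depends on which controller actually owns the Service; as far as I can tell, the target-group-attributes annotation is specific to the AWS Load Balancer Controller and is ignored by the legacy in-tree provider.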
There is one issue about proxy-protocol-v2 where they had the same problem (which they solved, AFAIK). Searching for the issue number now.

Check if the info here helps in any way: #10982
I am currently having this same issue but on Azure. Adding
While I was checking on this, I made some modifications and redeployed ingress-nginx. Then I reverted the configuration to the previous one, which at least served non-whitelisted traffic. Anyway, the situation got worse: the services where the IP was wrong now weren't serving traffic anymore. The connections to these services were being closed, and the Chrome browser showed a connection error.

Since this was causing real downtime on the systems, I opted to completely remove ingress-nginx, which led to the removal of the Network Load Balancer on AWS. After reinstalling ingress-nginx, the new Load Balancer was created and everything started working again, the whitelist annotation included.

Something I noticed was that the DNS records on Route53 were actually pointing to the NLB, but they were Alias records typed for use with Classic or Application Load Balancers. I corrected those records too; they may have been managed by an old version of external-dns alongside an old ingress-nginx. Anyway, I don't have any proof that this affected the traffic (which worked until the first update, as I mentioned above in the issue). I suspect there was something not working with that particular NLB instance.

Anyway, if the same problem is happening on Azure, could the problem be in some internal (mis)configuration of ingress-nginx, or in something between ingress-nginx and the Load Balancer?
What happened:
I upgraded ingress-nginx through the Helm chart to the latest version. The old Helm chart version was 4.6.2; now it's 4.10.0, so I'm using NGINX 1.25. After the upgrade, it seems that wherever the annotation `whitelist-source-range` was already present, the client IP received from the requests is now one from the Load Balancer (AWS EKS with NLB) inside my AWS VPC (172.31.0.0/16). This doesn't allow a proper whitelisting of the IPs in my internal platform. I also tried to revert the upgrade, but the problem persisted once the old version was deployed.

What you expected to happen:

I would expect the IP received by ingress-nginx to be the real client IP for every service, not just a few, allowing me to correctly whitelist corporate addresses.
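For context, the annotation in question is the standard per-Ingress allowlist. An illustrative sketch (not from this cluster; the host, Service name, and CIDR are placeholders) looks like:

```yaml
# Illustrative Ingress with a source-range allowlist.
# dev.company.com, the "dev" Service, and 203.0.113.0/24 are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"
spec:
  ingressClassName: nginx
  rules:
    - host: dev.company.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dev
                port:
                  number: 80
```

NGINX evaluates the range against whatever it believes the client address is, which is exactly why seeing a VPC-internal LB address (172.31.0.0/16) breaks the allowlist.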
NGINX Ingress controller version:

```
NGINX Ingress controller
Release:    v1.10.0
Build:      71f78d4
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.25.3
```
Kubernetes version (use `kubectl version`):

```
Client Version: v1.25.9
Kustomize Version: v4.5.7
Server Version: v1.29.1-eks-b9c9ed7
```
Environment:
- Kernel (e.g. `uname -a`):
- How/where was the cluster created (kubeadm/kops/minikube/kind etc.): AWS EKS
- How was ingress-nginx installed: Helm Chart v4.10.0
- `kubectl describe ingressclasses`
- `kubectl -n <ingresscontrollernamespace> get all -A -o wide`
- `kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>`
- `kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>`
- `kubectl -n <appnamespace> describe ing <ingressname>`
I'm sure my IP was in the whitelist at the moment I started the following request:

I read similar issues where the solution was that the `externalTrafficPolicy` should be `Local`, and I'm sure it is. To validate that something is not behaving correctly, I created a copy of one of the services I was having problems with and deployed it with a different subdomain (e.g. dev.company.com -> test.company.com). The IP received on the new domain is correctly the real client IP, so the whitelist is working.

I printed both configurations from the nginx.conf file, but they are the same, except for the name, obviously.
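For completeness, the field being referred to lives on the controller's Service. A sketch, assuming the Service name and namespace implied by the annotations above:

```yaml
# externalTrafficPolicy: Local keeps traffic on the node that received it,
# avoiding the second SNAT hop that would otherwise replace the source IP.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
```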
Another event that may be interesting: half an hour after the upgrade there was DiskPressure on some nodes and many pods were evicted, including some ingress-nginx replicas. The problem was solved by increasing the number of nodes.
How to reproduce this issue:
Actually, I'm not able to describe how to reproduce the issue. I just went through the update and the subsequent problem on the nodes, but I didn't take any other actions on the controller.
Anything else we need to know:
I checked that the AWS Network Load Balancer Target Groups have "Preserve client IP addresses" set to On.

If you need any specific information, please ask and I will provide it.