NodePort only responding on node where pod is running #70222
/sig network
@sfitts See https://kubernetes.io/docs/tutorials/services/source-ip/.
@MrHohn thanks -- that caused me to take a closer look at this -- https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#over-a-nodeport-service. If I want to use `Local` (to preserve the client IP), then I'll need to run the Nginx controller on all nodes (or at least on all nodes in the balancing set). Alternatively, I can keep the routing from every node but lose the IP preservation. Thanks -- closing.
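For reference, the trade-off described above comes down to a single field on the Service. A minimal sketch (the names and labels are illustrative, not taken from this issue):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx          # illustrative name
  namespace: ingress-nginx
spec:
  type: NodePort
  # Cluster (the default): the NodePort answers on every node, but the
  # client source IP is SNATed away on the extra hop.
  # Local: preserves the client source IP, but only nodes that run a
  # ready endpoint pod respond on the NodePort.
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
```

With `Local`, an external load balancer (or health-checked DNS) is expected to steer traffic only at the nodes that actually host a controller pod.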
Please, can you tell me how you solved this problem? I'm hitting the same issue.
@SmartLyu Could you elaborate? Are you having an issue with …
In my case, the use of …
In my case, I configured a Kubernetes cluster on the Vultr cloud. The Vultr instances have two NICs, one private and one public-facing.
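On multi-NIC nodes like these, kube-proxy's `nodePortAddresses` setting (also exposed as the `--nodeport-addresses` flag) controls which node addresses answer NodePort traffic. A hedged sketch of the kube-proxy configuration, assuming the public-facing NICs sit in an illustrative `203.0.113.0/24` range:

```yaml
# Fragment of the kube-proxy ConfigMap; the CIDR below is an assumption
# for illustration, not taken from this issue.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
nodePortAddresses:
  - "203.0.113.0/24"   # only serve NodePorts on the public-facing NIC
```

If this list is set to a range that excludes one of the NICs, NodePort services will appear unreachable on that interface even when everything else is healthy.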
What happened:
I deployed a cluster using kubeadm and Calico. The command line for the cluster creation was:
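The original command line was not preserved in this copy of the issue. For orientation, a typical `kubeadm init` for a Calico cluster looks like the following sketch (the CIDR is the one Calico's default manifest assumes, not necessarily what was used here):

```shell
# Illustrative only -- the actual flags from this issue were not captured.
# Calico's default manifest expects this pod network CIDR.
kubeadm init --pod-network-cidr=192.168.0.0/16
```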
I then followed the Calico instructions and ran:
Lastly, I joined four workers to the cluster using the join command generated by the first step.
I then installed the Nginx controller configuring it to use NodePort. The resulting service definition is:
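The service definition itself is missing from this copy of the issue. For anyone retracing these steps, the field that turned out to matter can be inspected on a live cluster with (namespace and service name are illustrative):

```shell
# Print the traffic policy of the generated Service; an empty result
# means the default, which behaves like "Cluster".
kubectl -n ingress-nginx get svc ingress-nginx \
  -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'
```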
This service is only reachable on the node where the nginx controller pod is running.
What you expected to happen:
The service should be reachable via all nodes in the cluster.
How to reproduce it (as minimally and precisely as possible):
The steps above should reproduce it (any simple NodePort Service plus a Pod will also do). You'll need at least 2 workers to confirm that only one of them provides access to the service.
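A minimal reproduction along the lines described above might look like this sketch (image, names, and ports are illustrative): apply it, then curl `<anyNodeIP>:30080` from outside the cluster and compare nodes.

```yaml
# One pod plus a NodePort Service; with the bug, only the node
# hosting the pod answers on port 30080.
apiVersion: v1
kind: Pod
metadata:
  name: echo
  labels:
    app: echo
spec:
  containers:
    - name: echo
      image: nginx
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  type: NodePort
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```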
Anything else we need to know?:
All other communications in the cluster appear to be working as expected. I have multiple pods deployed which communicate with each other via service names and they show no issues. The only problem appears to be the one with NodePort.
FWIW, I tried the iptables workaround described in #58908 to no avail.
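For context, one commonly cited form of the workaround from that thread (paraphrased here; see #58908 for the original discussion) targets recent Docker versions setting the FORWARD chain policy to DROP, which can black-hole traffic forwarded between nodes:

```shell
# Check the current FORWARD policy first:
sudo iptables -L FORWARD -n | head -1

# Commonly cited workaround, run on every node:
sudo iptables -P FORWARD ACCEPT
```

As noted above, this did not resolve the symptom in this case.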
Environment:
- Kubernetes version (`kubectl version`):
- Cloud provider or hardware configuration: bare-metal K8s running on AliCloud ECS instances
- OS (from `/etc/os-release`): NAME="Ubuntu", VERSION="16.04.4 LTS (Xenial Xerus)"
- Kernel (`uname -a`): Linux kube-master 4.4.0-117-generic #141-Ubuntu SMP Tue Mar 13 11:58:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: kubeadm
- Others: Calico 3.1, Helm 2.9.1
/kind bug