No local endpoints when service.beta.kubernetes.io/external-traffic: OnlyLocal configured #48437
@liggetm There are no sig labels on this issue. Please add a sig label, for example by mentioning the sig:
@kubernetes/sig-network
/sig network
@liggetm Have you confirmed the endpoint was up and ready, e.g. with kubectl get endpoints?
Saw your comment on another thread; this shouldn't be related to health checks. Health checks are used by LoadBalancers from cloud providers, but your cluster is running on bare metal and using a NodePort. From what you described above, it seems the cause is that kube-proxy didn't notice there is a local pod running on the node, so it didn't set up the iptables rules properly. As you mentioned, your backend pod is running on the master; have you tried running the pod on one of the nodes to see if it works?
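A couple of hedged checks for that theory (the service and label names are taken from the manifests quoted later in this issue):

    # Is the endpoint actually ready, and which node is the pod on?
    kubectl get endpoints localsvc -o yaml
    kubectl get pods -o wide -l k8s-app=localapp
    # Did kube-proxy program a balancing rule or a drop rule on this node?
    iptables-save -t nat | grep KUBE-XLB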
I created a v1.5.2 k8s cluster but failed to reproduce your issue. However, I was running the cluster on GCE and the backend pod was on one of the nodes instead of the master.
Is this still a live issue? Over a month old...
Any updates on this issue? I've seen the same issue with 1.5.2.
@lin99shen Any chance you could post more details? Is the endpoint ready? Logs from kube-proxy?
The endpoint is ready, and it is running on the master node. I'm basically trying out the source IP preservation beta feature per https://kubernetes.io/docs/tutorials/services/source-ip/. Without the OnlyLocal annotation in the service spec, curl goes through, but the source IP is MASQed. Once I add the annotation to the service spec, curl times out and tcpdump shows no packets reaching the endpoint any more. iptables-save shows that the same KUBE-MARK-DROP rule is generated in the KUBE-XLB-XP7QDA4CRQ2QA33W chain.
@lin99shen So this seems to be an issue with OnlyLocal endpoints on the master? Or does it happen on a node as well? I took a look at the code; the only thing kube-proxy really checks when deciding whether an endpoint is local is whether the endpoint address's nodeName matches kube-proxy's own hostname (https://github.com/kubernetes/kubernetes/blob/v1.5.2/pkg/proxy/iptables/proxier.go#L591-L600).
Just in case, could you check whether the endpoint on the master has its nodeName set correctly? See the check sketched below.
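A minimal sketch of that check, assuming default hostname detection and the endpoint name used in the reply below:

    # What the endpoints controller recorded for the backend pod:
    kubectl get ep nodeport -o yaml | grep nodeName
    # What kube-proxy uses as its own identity unless overridden:
    hostname
    # kube-proxy only counts the endpoint as local if these two values match.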
I ran kubectl get ep nodeport -o yaml | grep nodeName and the nodeName is just the IP address. Guess I'll need to give it a name?
Humm.. this looks weird; I'm expecting an actual node name there. An Endpoint's nodeName comes from the podSpec (https://github.com/kubernetes/kubernetes/blob/v1.5.2/pkg/controller/endpoint/endpoints_controller.go#L413), so I wonder in which case that would be set to an IP address :/
nodeName should be auto-filled, right? Not specified in the podSpec by the user.
I tried to delete the rule below manually just to verify, but it gets added back automatically right away, even though I had kube-proxy stopped. Any idea?
KUBE-NODEPORTS -p tcp -m comment --comment "default/nodeport:" -m tcp --dport 7895 -j KUBE-XLB-XP7QDA4CRQ2QA33W
Yeah, nodeName should be auto-filled rather than specified by the user.
That is odd; maybe kube-proxy somehow restarted? It should be the only process that writes to the KUBE-NODEPORTS chain.
I checked, it's inactive. Will confirm it again.
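One way to double-check, sketched under the assumption that kube-proxy runs as a systemd unit named kube-proxy:

    systemctl status kube-proxy          # confirm it is really stopped
    iptables -t nat -S KUBE-NODEPORTS    # list the chain in rule form
    # Delete the rule (iptables -D takes the same match arguments):
    iptables -t nat -D KUBE-NODEPORTS -p tcp -m comment --comment "default/nodeport:" -m tcp --dport 7895 -j KUBE-XLB-XP7QDA4CRQ2QA33W
    sleep 5 && iptables -t nat -S KUBE-NODEPORTS   # see whether it reappears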
Just some context, in case there are other options: we have an app that depends on the originating source IP. I'm trying to use the external-traffic annotation to prevent the source IP of incoming packets from being SNAT'ed. I followed the tutorial as-is and ran into this problem. Besides this approach, is there any other way I can achieve the same? For now, our k8s runs on a single VM.
I got the same issue. In my scenario there are two minion nodes. The iptables rules on one node are normal, but those on the other node are weird (and there is actually a pod running on that node).
@lin99shen After poking around, I believe the nodeName ultimately comes from the node name the scheduler selects when it binds the pod (see kubernetes/plugin/pkg/scheduler/generic_scheduler.go, lines 342 to 347 and lines 389 to 392 at f715b26). Or maybe I'm wrong... Is that IP address also the name of your node as shown in kubectl get nodes?
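A quick check of that hypothesis (the endpoint name is assumed from earlier in the thread):

    kubectl get nodes    # is the node's name actually the IP address?
    kubectl get ep nodeport -o jsonpath='{.subsets[0].addresses[0].nodeName}'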
@pnuzyf Are you having the same issue where your endpoints use the IP address as the node name (#48437 (comment)), or are you seeing a different behavior?
Yes, the IP address is my node name. I restarted kube-proxy with --hostname-override set to the node name (the IP address), and the feature works now.
I have the same issue, and I'm using version 1.6. @lin99shen could you please share how you restarted kube-proxy with --hostname-override? From what I see on my cluster, kube-proxy is a pod; I can delete it to have k8s create a new one, but I can't pass the option to it.
@tanjinfu It seems kube-proxy runs as a static pod in your case. You should be able to modify kube-proxy's manifest file under the path specified by the kubelet's --pod-manifest-path flag. Though it seems odd to me that a user needs to explicitly override the hostname. Note that the kubelet also provides a --hostname-override flag, which could explain a mismatch if it is set.
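A sketch of that change, with the manifest path as an assumption (static pod manifests commonly live under /etc/kubernetes/manifests):

    # Edit the static pod manifest; the kubelet restarts the pod on its own.
    vi /etc/kubernetes/manifests/kube-proxy.yaml
    # In the kube-proxy container's command/args, add something like:
    #   - --hostname-override=<node name as shown by kubectl get nodes>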
I'm not overriding the hostname on the kubelet, at least not intentionally.
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /remove-lifecycle stale comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
I'm running a service with NodePort and the OnlyLocal annotation (bare-metal/flannel) but receive no traffic on the local pod. When the annotation is applied to the service, packets are not marked for masquerading in iptables but are instead always dropped by a rule stating that the service "has no local endpoints". This is not the case, however; the service does indeed have local endpoints available.
What you expected to happen:
Traffic should be routed to the local endpoints.
How to reproduce it (as minimally and precisely as possible):
Create a NodePort service with the OnlyLocal annotation:
apiVersion: v1
kind: Service
metadata:
  name: localsvc
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  ports:
  - port: 1620
    protocol: UDP
    nodePort: 30162
  type: LoadBalancer
  selector:
    k8s-app: localapp
Create a replication-controller for a single pod (restricting it to the master via a node-selector):
apiVersion: v1
kind: ReplicationController
metadata:
  name: localrc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: localapp
    spec:
      containers:
      - name: localapp         # container name assumed; not shown in the original report
        image: myImage
        ports:
        - containerPort: 1620  # port assumed from the service spec above
          protocol: UDP
      nodeSelector:
        runOn: master
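To verify the repro, something like the following should show whether the endpoint carries the expected nodeName (the file names are assumptions):

    kubectl create -f localsvc.yaml -f localrc.yaml
    kubectl get pods -o wide -l k8s-app=localapp      # pod should be Running on the master
    kubectl get ep localsvc -o yaml | grep nodeName   # should match the master's hostname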
Anything else we need to know?:
Relevant iptables nat rules:
From the KUBE-NODEPORTS chain:
KUBE-XLB-WGDEPLIALVG6VF4L udp -- 0.0.0.0/0 0.0.0.0/0 /* default/localsvc:snmp */ udp dpt:30162
Chain KUBE-XLB-WGDEPLIALVG6VF4L (1 references)
target prot opt source destination
KUBE-MARK-DROP all -- 0.0.0.0/0 0.0.0.0/0 /* default/localsvc:snmp has no local endpoints */
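For comparison, when kube-proxy does detect a local endpoint, the KUBE-XLB chain ends in a jump to a KUBE-SEP-* endpoint chain instead of KUBE-MARK-DROP. A sketch of healthy output (the KUBE-SEP hash here is made up):

    Chain KUBE-XLB-WGDEPLIALVG6VF4L (1 references)
    target     prot opt source               destination
    KUBE-SEP-ABCDEFGHIJKLMNOP  all  --  0.0.0.0/0            0.0.0.0/0            /* Balancing rule 0 for default/localsvc:snmp */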
Environment:
- Kubernetes version (kubectl version): Client Version: v1.5.4, Server Version: v1.5.2
- Cloud provider or hardware configuration: bare-metal
- OS: CentOS Atomic Host (CentOS Linux release 7.3.1611 (Core))
- Kernel (uname -a): Linux atomic64.jnpr.net 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux