Unstable service ip access from another pod #38802
I also reported a similar issue on Stack Overflow; in that case I was using Kubernetes 1.5.1.
Another possibly related issue: when I used the dig tool to resolve a name from the replicas of a pod, two of them resolved successfully, but on one pod dig reported that the response came from a different IP address.
I did some network sniffing on the cni0 interface of the node that runs the pod that failed to connect to redis, and found something interesting. The IP of the app pod is 10.1.85.5, the pod IP of redis is 10.1.81.3, and the redis service IP is 10.0.0.66.
Why would 10.1.85.5 send a SYN to 10.1.81.3? I expected 10.1.85.5 to talk to 10.0.0.66 only. And why doesn't 10.0.0.66 send back a SYN,ACK? Further investigation showed that the redis pod is scheduled on the same node, so the second SYN packet is the one received by redis. I believe this should be a basic scenario supported by k8s, especially considering minikube runs everything on a single node. Is there any configuration that I didn't set right? @thockin Excuse me, I hope it's OK to bring this to your attention.
@spacexnice Unfortunately, we're using the classical network model.
I am assuming you ran tcpdump or something? I guess the two SYNs are 1) to the svc VIP, and 2) post-NAT to the backend. The SYNACK response should be un-NAT'ed, but it's not.
I'm not sure where to start with this. Can you run `conntrack` and see that you are not running out of conntrack records?
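For anyone retracing this check, a minimal sketch of comparing conntrack usage against the table limit on a node (the `/proc` paths assume a Linux kernel with the nf_conntrack module loaded, which this thread does not state):

```shell
# Read current and maximum conntrack entries; fall back to 0 if the
# nf_conntrack sysctls are not exposed on this kernel.
count=$(cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null || echo 0)
max=$(cat /proc/sys/net/netfilter/nf_conntrack_max 2>/dev/null || echo 0)
echo "conntrack: $count of $max entries in use"
# If count approaches max, new NAT mappings (and therefore new
# connections through service VIPs) start to fail.
```

When the table is full the kernel drops new flows, which would show up as exactly this kind of intermittent connect failure.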
Yes, I used tcpdump. As you said, the SYNACK isn't un-NATed; that's why our app end sends a RST. Should there be an iptables rule that does the un-NAT explicitly? I can't seem to find a rule that does this.
No, it should be un-NATed automatically because a conntrack record is created. Or it's supposed to be...
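To make the point above concrete: only the forward DNAT exists as an explicit rule; the reverse translation is conntrack state, so there is no un-NAT rule to find. A sketch of inspecting this (the KUBE-SERVICES chain name assumes kube-proxy in iptables mode, as used in clusters of this era):

```shell
# kube-proxy programs an explicit DNAT rule per service VIP in the nat
# table; there is no explicit reverse (un-NAT) rule, because conntrack
# rewrites reply packets from the per-flow state recorded at DNAT time.
iptables -t nat -S KUBE-SERVICES 2>/dev/null \
  || echo "needs root, or kube-proxy's iptables chains are not present"
# The per-flow NAT state itself is visible with:
#   conntrack -L
```

If the SYNACK is not un-NATed despite a conntrack entry existing, the reply packet is bypassing the conntrack hook entirely, which is what the bridge sysctl discussed later in this thread controls.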
# conntrack -L
tcp 6 3069 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=59973 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=59973 [ASSURED] mark=0 use=1
tcp 6 1197 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=56533 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=56533 [ASSURED] mark=0 use=1
tcp 6 86388 ESTABLISHED src=10.28.65.157 dst=10.28.65.112 sport=56912 dport=10250 src=10.28.65.112 dst=10.28.65.157 sport=10250 dport=56912 [ASSURED] mark=0 use=1
tcp 6 1037 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=53907 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=53907 [ASSURED] mark=0 use=1
tcp 6 2141 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=57557 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=57557 [ASSURED] mark=0 use=1
tcp 6 86373 ESTABLISHED src=10.28.65.112 dst=10.28.65.157 sport=51402 dport=8080 src=10.28.65.157 dst=10.28.65.112 sport=8080 dport=51402 [ASSURED] mark=0 use=1
tcp 6 86399 ESTABLISHED src=10.10.28.7 dst=10.28.65.157 sport=59492 dport=2379 src=10.28.65.157 dst=10.28.65.112 sport=2379 dport=59492 [ASSURED] mark=0 use=1
tcp 6 86373 ESTABLISHED src=10.28.65.112 dst=10.28.65.157 sport=60802 dport=2379 src=10.28.65.157 dst=10.28.65.112 sport=2379 dport=60802 [ASSURED] mark=0 use=1
tcp 6 1092 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=54808 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=54808 [ASSURED] mark=0 use=1
tcp 6 86395 ESTABLISHED src=10.28.65.112 dst=10.28.65.157 sport=51383 dport=8080 src=10.28.65.157 dst=10.28.65.112 sport=8080 dport=51383 [ASSURED] mark=0 use=1
tcp 6 2713 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=59036 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=59036 [ASSURED] mark=0 use=1
tcp 6 2182 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=58314 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=58314 [ASSURED] mark=0 use=1
tcp 6 86293 ESTABLISHED src=10.28.65.157 dst=10.28.65.112 sport=56684 dport=10250 src=10.28.65.112 dst=10.28.65.157 sport=10250 dport=56684 [ASSURED] mark=0 use=1
tcp 6 3316 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=60865 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=60865 [ASSURED] mark=0 use=1
tcp 6 1182 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=56287 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=56287 [ASSURED] mark=0 use=1
tcp 6 3059 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=59804 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=59804 [ASSURED] mark=0 use=1
tcp 6 86373 ESTABLISHED src=10.28.65.112 dst=10.28.65.157 sport=60804 dport=2379 src=10.28.65.157 dst=10.28.65.112 sport=2379 dport=60804 [ASSURED] mark=0 use=1
tcp 6 1227 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=57034 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=57034 [ASSURED] mark=0 use=2
tcp 6 3054 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=59725 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=59725 [ASSURED] mark=0 use=1
tcp 6 1117 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=55229 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=55229 [ASSURED] mark=0 use=1
tcp 6 3064 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=59884 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=59884 [ASSURED] mark=0 use=1
tcp 6 2146 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=57718 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=57718 [ASSURED] mark=0 use=1
tcp 6 3276 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=60202 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=60202 [ASSURED] mark=0 use=1
tcp 6 86399 ESTABLISHED src=10.28.65.106 dst=10.28.65.112 sport=47056 dport=22 src=10.28.65.112 dst=10.28.65.106 sport=22 dport=47056 [ASSURED] mark=0 use=1
tcp 6 86382 ESTABLISHED src=10.28.65.112 dst=10.28.65.157 sport=51390 dport=8080 src=10.28.65.157 dst=10.28.65.112 sport=8080 dport=51390 [ASSURED] mark=0 use=1
tcp 6 1027 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=53741 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=53741 [ASSURED] mark=0 use=1
tcp 6 86399 ESTABLISHED src=10.28.65.112 dst=10.28.65.157 sport=51398 dport=8080 src=10.28.65.157 dst=10.28.65.112 sport=8080 dport=51398 [ASSURED] mark=0 use=1
tcp 6 3074 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=60050 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=60050 [ASSURED] mark=0 use=1
tcp 6 3291 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=60461 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=60461 [ASSURED] mark=0 use=1
tcp 6 86384 ESTABLISHED src=10.28.65.112 dst=10.28.65.157 sport=51394 dport=8080 src=10.28.65.157 dst=10.28.65.112 sport=8080 dport=51394 [ASSURED] mark=0 use=1
tcp 6 2142 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=57602 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=57602 [ASSURED] mark=0 use=1
tcp 6 1162 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=55960 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=55960 [ASSURED] mark=0 use=1
tcp 6 1187 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=56369 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=56369 [ASSURED] mark=0 use=1
tcp 6 1207 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=56699 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=56699 [ASSURED] mark=0 use=1
tcp 6 1152 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=55794 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=55794 [ASSURED] mark=0 use=1
tcp 6 3301 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=60622 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=60622 [ASSURED] mark=0 use=1
tcp 6 1147 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=55714 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=55714 [ASSURED] mark=0 use=1
tcp 6 1057 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=54235 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=54235 [ASSURED] mark=0 use=1
tcp 6 1047 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=54071 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=54071 [ASSURED] mark=0 use=1
tcp 6 1032 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=53823 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=53823 [ASSURED] mark=0 use=1
tcp 6 1247 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=57364 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=57364 [ASSURED] mark=0 use=1
tcp 6 1172 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=56122 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=56122 [ASSURED] mark=0 use=1
tcp 6 86392 ESTABLISHED src=10.28.65.112 dst=100.100.25.3 sport=54028 dport=80 src=100.100.25.3 dst=10.28.65.112 sport=80 dport=54028 [ASSURED] mark=0 use=1
tcp 6 1112 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=55150 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=55150 [ASSURED] mark=0 use=1
tcp 6 1202 CLOSE_WAIT src=10.10.12.0 dst=10.10.28.7 sport=56617 dport=5222 src=10.10.28.7 dst=10.10.12.0 sport=5222 dport=56617 [ASSURED] mark=0 use=1
tcp 6 86395 ESTABLISHED src=10.28.65.112 dst=10.28.65.157 sport=51948 dport=8080 src=10.28.65.157 dst=10.28.65.112 sport=8080 dport=51948 [ASSURED] mark=0 use=1
conntrack v1.4.3 (conntrack-tools): 46 flow entries have been shown.
@chenchun No, it's 0. Is it a requirement?
Yes, if you are using a bridge device. Can you try to set it and test again?
Found what seems to be the relevant doc: http://kubernetes.io/docs/admin/network-plugins/
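The exchange above can be reproduced directly. A sketch of checking and enabling the sysctl in question (setting it requires root; on newer kernels the file only exists after the br_netfilter module is loaded):

```shell
# 1 means bridged packets traverse iptables, which the kube-proxy DNAT
# rules need in order to see pod-to-service traffic crossing cni0.
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "sysctl missing: try 'modprobe br_netfilter' first"
# To enable it (as root):
#   sysctl -w net.bridge.bridge-nf-call-iptables=1
```

With the value at 0, same-node pod-to-service traffic is switched by the bridge without hitting the conntrack/NAT hooks, which matches the un-NATed SYNACK observed earlier in this thread.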
@chenchun A quick test succeeded! Thanks.
@freehan we should make sure the CNI bridge driver takes over this responsibility.
Should we close this bug?
Opened #38890 for tracking; closing this one.
I hit the same issue. I am using k8s v1.6.4 with flannel v0.7.1. Does flannel need /proc/sys/net/bridge/bridge-nf-call-iptables? The file does not exist on my host system.
Is this a request for help?:
Yes
What keywords did you search in Kubernetes issues before filing this one?:
service ip
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Kubernetes version (use kubectl version):
Environment:
Kernel (uname -a): Linux jumpbox 3.10.0-327.22.2.el7.x86_64 #1 SMP Thu Jun 23 17:05:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
What happened:
I deployed some pods and a redis pod. The redis pod exposes port 6379 as a cluster service.
From one of my pods, I can't establish a TCP connection to the redis service (netstat showed the connection stuck in the SYN_SENT state), while using the redis service from other pods is fine. It also works if I access redis by its pod IP instead of the service IP.
I scaled the pod to a larger number, then tried to access redis from the other copies of the pod; the redis service was OK.
What you expected to happen:
The redis service can be used from any pod in the cluster.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
The overlay network I'm using is flannel:0.6.2.
access redis ok from a pod:
access redis failed from a pod: (exact copy of the succeeded pod, created via kubectl scale)
access redis ok with its pod ip: