kube-proxy cannot set iptables rules for services #36652
Turn up the log level of kube-proxy to --v=3: look for a section that looks like this, and change --v=2 to --v=3. The logs on the next failure will contain more info.
Sorry for the long log file. I did not know exactly what you need, so I posted the last 100 lines.
Also: My kube-proxy systemd service file looks like this:
Can you add the following argument to ExecStart? It is potentially generating incorrect rules because of the missing cluster CIDR.
@bprashanth
Thanks for the report.
We want the VM to accept the LB VIP as "local" if the traffic is coming in from outside the cluster, but we don't want the VM to accept the LB VIP as local for traffic going out from the VM. In both cases the destination is the VIP, so to distinguish them, we check whether the source is within the clusterCIDR. If we don't have the CIDR, we have 2 options:
In an ideal situation, we'd find a way to hit the literal public VIP. Given that both 1 and 2 are hopefully short term, we're doing the easier one (2) and documenting that without podCIDR, traffic gets blackholed in this specific scenario. The intersection of platforms that use packet forwarding (and hence need the local rule for a public LB) but don't provide clusterCIDR should be small enough, if it even exists (i.e. you can set up a kube cluster by hand on GCE and not provide clusterCIDR; then accessing a public VIP of a Type=LoadBalancer service from within a pod might not work if the same node doesn't have any endpoints).
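The distinction described above can be sketched in iptables-save syntax (the chain names, CIDR, and rule layout here are illustrative placeholders, not actual kube-proxy output):

```
# Pod-originated traffic (source inside the clusterCIDR) may be load-balanced
# across all endpoints, because its source IP is preserved either way:
-A KUBE-XLB-EXAMPLE -s 10.244.0.0/16 -j KUBE-SVC-EXAMPLE
# Anything else came from outside the cluster and must stay node-local:
-A KUBE-XLB-EXAMPLE -j KUBE-SEP-EXAMPLE
```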
FTR @mandarjog volunteered to send a PR.
The specific issue here arises when
In this case we populate the XLB chain as follows
If cluster_cidr is not available, there are 2 options:
I think with adequate error feedback, option (1) is better.
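The fix that was eventually merged takes option (1): when clusterCIDR is empty, the XLB-to-SVC jump is simply not emitted, so external traffic is never sprayed across remote endpoints. A minimal Go sketch of that decision logic (the xlbRules helper and chain names are illustrative, not the actual kube-proxy code):

```go
package main

import "fmt"

// xlbRules sketches how a KUBE-XLB chain for an onlyLocal service might be
// populated. Chain names are placeholders, not real kube-proxy output.
func xlbRules(clusterCIDR string) []string {
	var rules []string
	// Traffic originating inside the cluster may be sprayed across all
	// endpoints, because its source IP is preserved either way. Without a
	// clusterCIDR we cannot tell cluster traffic from external traffic,
	// so we skip this rule entirely (option 1).
	if clusterCIDR != "" {
		rules = append(rules,
			fmt.Sprintf("-A KUBE-XLB-EXAMPLE -s %s -j KUBE-SVC-EXAMPLE", clusterCIDR))
	}
	// Everything else must stay on a local endpoint.
	rules = append(rules, "-A KUBE-XLB-EXAMPLE -j KUBE-SEP-EXAMPLE")
	return rules
}

func main() {
	fmt.Println("with clusterCIDR:")
	for _, r := range xlbRules("10.244.0.0/16") {
		fmt.Println(" ", r)
	}
	fmt.Println("without clusterCIDR:")
	for _, r := range xlbRules("") {
		fmt.Println(" ", r)
	}
}
```

With a clusterCIDR two rules are produced; without one, only the node-local rule remains.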
Is 2 valid? You don't need podCIDR to actually implement onlyLocal. If I start up without podCIDR, and you add a blanket rule that just jumps straight to SVC because we don't have podCIDR, that means traffic coming in from the internet will get sprayed across service endpoints. Traffic from the internet going to a Service with onlyLocal should always be either kept local or blackholed. Traffic from pods in the cluster can be sprayed (because source IP is preserved in either case). If we can't differentiate the 2, we have no choice but to handle them the same way (i.e. 1).
While writing tests, is iptables-restore available? I don't see a way to invoke iptables-restore from the tests with custom args. Specifically, how do I verify that the produced filter chains are syntactically correct?
does this verification.
Empty clusterCIDR causes invalid rules generation. Fixes issue kubernetes#36652
while you are in there fixing this issue, also please take a look at issue #36835 |
I think we can do better when clusterCIDR is not specified.
We know with certainty that traffic matching --src-type LOCAL originates in the cluster. Similar logic can be applied when dealing with masquerade.
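In iptables terms, the --src-type LOCAL idea might look roughly like this (an illustrative sketch using the addrtype match, not the change that was merged):

```
# Packets whose source address belongs to this node are certainly
# cluster-originated, even when no clusterCIDR was configured:
-A KUBE-XLB-EXAMPLE -m addrtype --src-type LOCAL -j KUBE-SVC-EXAMPLE
```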
Automatic merge from submit-queue

Handle Empty clusterCIDR

**What this PR does / why we need it**: Handles empty clusterCIDR by skipping the corresponding rule.

**Which issue this PR fixes**: fixes #36652

**Special notes for your reviewer**:
1. Added a test to check for presence/absence of the XLB to SVC rule
2. Changed an error statement to log rules along with the error string in case of a failure; this ensures that full debug info is available in case of iptables-restore errors.

Empty clusterCIDR causes invalid rules generation. Fixes issue #36652
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0+a16c0a7", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"not a git tree", BuildDate:"2016-11-08T01:00:01Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-alpha.2", GitCommit:"cfdaf18277e1ebaa28fcdaed1160a0243eb81be1", GitTreeState:"clean", BuildDate:"2016-10-27T22:10:26Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Hardware: x86_64
OS: Ubuntu 16.04
Kernel (uname -a): 4.4.0-38-generic
Install tools: downloaded tarball from GitHub and installed it manually
What happened:
Installed kubernetes on a fresh Ubuntu machine. Everything was fine; all services started up. I wanted to install a service of type "NodePort" (see below), but the service was not accessible. I checked the kube-proxy logs and it gave me errors (see also below).
service:
errors from kube-proxy:
How to reproduce it (as minimally and precisely as possible):
Install kubernetes 1.5.0-alpha.2 on a ubuntu machine with kube-proxy set to iptables mode, launch a service of type NodePort.
I don't know what else might help you; please ask if you need further information on the issue, and I'll be happy to provide it.