Need tests of KUBE-MARK-DROP #85572
/triage unresolved

Comment `/remove-triage unresolved` when the issue is assessed and confirmed.

🤖 I am a bot run by vllry. 👩🔬
/remove-triage unresolved

/assign @robscott
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

/remove-lifecycle rotten

Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
That was wrong; kube-proxy adds rules to redirect pod-to-LB and node-to-LB traffic directly to the service IP, so traffic from within the cluster will never hit the actual LB regardless of whether there is a DROP rule. (And traffic from outside the cluster is never expected to hit a node with no endpoints.)
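To illustrate the short-circuit: in kube-proxy's nat rules, a packet addressed to the LoadBalancer IP is matched on the node itself and DNATted straight to a backend, so it never leaves the node for the real LB. This is an illustrative sketch, not an exact `iptables-save` excerpt; real chain names carry per-service hashes, and the IPs here are placeholders:

```
# illustrative only -- real chains use per-service hash suffixes
-A KUBE-SERVICES -d 203.0.113.10/32 -p tcp -m tcp --dport 80 -j KUBE-SVC-EXAMPLE   # LB IP matched on the node
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-EXAMPLE                                            # pick an endpoint
-A KUBE-SEP-EXAMPLE -p tcp -m tcp -j DNAT --to-destination 10.244.1.5:8080         # rewrite straight to the pod
```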
This was also wrong because, as above, we also rewrite node-to-LB traffic (a few lines below the pod-to-LB rewrite linked above).

For my next trick, I tried connecting from Node A to the service's NodePort on Node B, where Node B has no endpoints; this then gets forwarded to the XLB chain, but doesn't hit the "node-to-LB" rewrite rule because it's not from the local node. But the traffic still seems to get dropped, even when there is no DROP rule, presumably because the node isn't set up to route traffic that comes from off-node to another off-node IP. So there doesn't seem to be any easy way to test that the KUBE-MARK-DROP rules are working.

However, once we implement KEP-3178, all of the drop-related rules would be in kube-proxy itself, rather than some being in kube-proxy and some being in kubelet. And then, either the existing …
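For reference, a rough shell reproduction of the Node A → Node B experiment above; the address and NodePort are placeholders:

```
# Run on Node A's host network, against Node B, which has no local endpoints
# for the externalTrafficPolicy: Local service. 31234 is the service's NodePort.
NODE_B=192.0.2.11
curl --connect-timeout 5 "http://$NODE_B:31234/"
# Times out whether or not the KUBE-MARK-DROP rule is present, since Node B
# won't forward an off-node source to an off-node endpoint anyway.
```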
/remove-lifecycle rotten

/triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The issue has been marked as an important bug and triaged. Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle frozen
Fixed by getting rid of `KUBE-MARK-DROP`.
I've found that the English translation of the Spanish saying "muerto el perro se acabó la rabia" is "Dead dogs don't bite" 😄
We have apparently been accidentally deleting the `KUBE-MARK-DROP` rule for a few weeks (#85527), and no one noticed. I suspect this is because `KUBE-MARK-DROP` is really only needed if the host accepts all incoming packets by default; if you have any sort of plausible firewall, then `KUBE-MARK-DROP` is redundant, and so the e2e tests that might otherwise catch `KUBE-MARK-DROP` failures don't actually catch them.
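To make the failure mode concrete, here is a minimal sketch of the mark-then-drop mechanism, assuming the default 0x8000 drop mark (exact comments and rule order vary by version):

```
# nat table: kube-proxy jumps to this chain to tag a packet for later dropping
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000

# filter table: kubelet's KUBE-FIREWALL drops anything carrying that mark
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" \
    -m mark --mark 0x8000/0x8000 -j DROP
```

If the MARK rule inside `KUBE-MARK-DROP` gets deleted (as in #85527), every jump to the chain silently becomes a no-op and nothing is ever marked for dropping.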
The iptables proxier uses `KUBE-MARK-DROP` in two cases. Both arise on cloud platforms where we create iptables rules for LoadBalancer IPs (eg, GCE but not AWS), when a service has a load balancer IP and endpoints, and a packet arrives on the node addressed to the load-balancer IP:

1. If the service has `spec.loadBalancerSourceRanges`, and the packet's source IP is not in the source ranges, then we call `KUBE-MARK-DROP` on the packet to drop it later. In theory this is tested by "It should only allow access from service loadbalancer source ranges".
   However, if the `KUBE-MARK-DROP` rule becomes a no-op, then the pod-to-LoadBalancer-IP connection will fall through the firewall chain, never hit the XLB chain, and eventually just get masqueraded and delivered to the LoadBalancer IP like it would for any other cluster-external IP. Since the cloud loadbalancer is also programmed with the source ranges, and the source range in this test is a single pod IP, the load balancer will then reject the packet (since it has the node's IP as its source at this point).

2. If the service has `ServiceExternalTrafficPolicyTypeLocal` and no local endpoints, then we call `KUBE-MARK-DROP` on the packet to drop it later.
   This is not really tested by "It should only target nodes with endpoints", because if the load balancers are working correctly then they won't send any traffic to the nodes that are creating the drop rules anyway. Nor is it tested by "It should work from pods", because a pod-to-LoadBalancer-IP connection will be rewritten to be pod-to-ClusterIP before the only-local check and bypass the drop rule. It could be tested by connecting to the LoadBalancer IP from a `hostNetwork` pod on a node that has no endpoints for the service (a rough sketch follows below): the drop rule ought to cause that connection to fail, but if the drop rule was missing then it would make a connection directly to the LoadBalancer and then succeed.
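A rough sketch of that `hostNetwork` check (the node name, image, and LB IP are placeholders; a real e2e test would drive this from Go):

```
LB_IP=203.0.113.10   # the service's LoadBalancer IP
# Start a hostNetwork pod pinned to a node with no local endpoints for the service.
kubectl run lbtest --restart=Never --image=busybox \
    --overrides='{"apiVersion": "v1", "spec": {"hostNetwork": true, "nodeName": "node-without-endpoints"}}' \
    --command -- sleep 3600
kubectl exec lbtest -- wget -T 5 -qO- "http://$LB_IP/" \
    && echo "UNEXPECTED: connected (drop rule missing?)" \
    || echo "connection failed as expected"
```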
The ipvs proxier refers to the `KUBE-MARK-DROP` chain, but I think it doesn't actually use it...

/sig network
/priority important-soon