K8s can't live without a default route #123120
Comments
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
/kind documentation
I don't think this has anything to do with the default route as such; rather, a route is needed on the host so that packets addressed to a Service are accepted by netfilter. Before the packet arrives in the PREROUTING chain there is a routing decision, and if the destination is found to be unreachable (similar to …) the packet is dropped.
ipvs works because the host's kube-ipvs0 interface is configured with the Service addresses, so packets addressed to a Service reach the PREROUTING chain and are then passed to the IPVS program on the INPUT chain for processing.
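The routing decision described above can be sketched as a longest-prefix-match lookup. This is an illustration only; the interface name, the node subnet, and the ClusterIP 10.96.0.10 are made-up examples, not values from this issue:

```python
import ipaddress

def route_lookup(dst, routes):
    """Return the most specific matching route, or None (destination unreachable)."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, dev) for net, dev in routes
               if addr in ipaddress.ip_network(net)]
    if not matches:
        return None  # kernel would report "Network is unreachable"
    # Longest prefix wins, just like the kernel's routing table
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)

# Hypothetical node routing table with no default route:
routes = [("192.168.1.0/24", "eth0")]

# A packet to a ClusterIP (10.96.0.10 here) matches no route, so it is
# rejected before kube-proxy's DNAT rules can rewrite it:
assert route_lookup("10.96.0.10", routes) is None

# A default route (or any route covering the Service CIDR) fixes this:
routes.append(("0.0.0.0/0", "eth0"))
assert route_lookup("10.96.0.10", routes) == ("0.0.0.0/0", "eth0")
```

This models why the failure happens before netfilter is involved at all: with no matching route, the packet never survives long enough for kube-proxy's rules to rewrite the ClusterIP to a pod IP.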
Good observation. It seems any route that matches the ClusterIP range will do. I tested on KinD:
But then externalIPs will not be routed, and since they are hard to predict, I think a default route really is a requirement.
works 😄 And it should be good enough for security, since stray packets will be discarded as martians.
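For reference, the kind of route being discussed could look like the following on a node. This is a sketch, not a command from the thread; 10.96.0.0/12 is a common default Service CIDR and eth0 is a placeholder interface, so substitute your cluster's values:

```shell
# Sketch (assumptions: Service CIDR 10.96.0.0/12, interface eth0).
# Give the node a route covering the ClusterIP range, so Service traffic
# passes the routing check even when there is no default route:
ip route add 10.96.0.0/12 dev eth0
```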
Yeah, we may need to document that the default route is a basic requirement, and most of the time we don't create special routes for the service. |
hmm, we may need to document that nodes need IPs to communicate too :) People may want to use custom routes and this should still work. I don't think we should make this a requirement; it is part of the network design of the cluster, and it is not Kubernetes' place to define it.
But the custom route must cover the ClusterIP CIDR, which a network admin would not necessarily be aware of.
This is an installation and routing problem; why should the admin not be aware of it? In addition, there are deployments that use multihoming on nodes, with one interface facing the internet and another for internal routing, and the ServiceCIDR should be routed through the internal interface for Services to work. What I'm trying to point out is that cluster setup and installation are the responsibility of cluster-api, kubeadm, kube-spray, home-made scripts, and so on; those are responsible for designing the cluster and its routing. The behavior of the components is a different matter.
The same page recommends having a default route vs. passing custom IPs to all components.
It's a bit worse, actually. If you don't have a default route but you do pass custom IPs to all components, the installation with …
My point is that k8s should not mandate how a cluster must be implemented, just define the architecture and best practices. Asserting that k8s cannot live without a default route is not true; it is complex to set up but feasible, and in sufficiently complex scenarios it is sometimes required.
So, an acceptable best practice recommendation would be something like:
or something like: "when setting up your cluster, you must ensure that your nodes are able to forward packets to the Service CIDR; this is commonly done by defining a default route, but in more complex network setups you may want to set it explicitly"
What are we going to do with this issue? It sounds like we are going to update documentation? |
Yes, that's my proposal. But we can also just ignore it, since having no default route is rare. I can only think of security reasons for it.
The thing that breaks without a default route is load-balancing with … Many crucial components, like CNI-plugins, assume that the "kubernetes" Service can be used to access the API-server. I can explain why it doesn't work, and what routes must be set up if a default route doesn't exist, but this should be done in … @danwinship, is there documentation for …?
It does require it: Services are part of Conformance.
I think just the opposite: it would be a great addition to the general documentation at https://kubernetes.io/docs/concepts/cluster-administration/networking/ ; there is already a diagram there with the 3 conceptual networks: nodes, pods, and services.
Yes, if you want to sell a conformant K8s platform. But the slogan on github is:
And I have heard about installations where K8s is used for SW-management only (but I admit I haven't actually seen one).
It's not about selling. The project defines APIs and behaviors; that is why e2e tests are so important: they are what explain and assert how the APIs behave, so that platforms have consistency. Services are an integral part of Kubernetes; if people want to do custom things with pieces, or strip parts out, of course they can, no problem at all, but then all the bugs are theirs ;)
As @neolit123 points out in #123120 (comment), the …
/close |
@uablrek: Closing this issue. In response to this:
What happened?
Derived from projectcalico/calico#8481
I use a virtual cluster with router VMs. When I start without any router VM, no default route is set up on the K8s nodes. This makes load-balancing to Services fail, at least with proxy-mode=iptables/nftables, and just about all CNI-plugins fail. In short, the cluster is dead.
With proxy-mode=ipvs, Service routing works, but there are more subtle problems, e.g. Calico doesn't start. I haven't investigated further.
What did you expect to happen?
Well, to me it seems OK to require a default route, but the requirement must be documented somewhere cluster admins will see it.
I can't really see any use-case where a default route is not set. Maybe when K8s is only used for SW-management, or for security reasons.
How can we reproduce it (as minimally and precisely as possible)?
In a test cluster:
For instance on KinD:
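The original reproduction commands were lost when this page was captured. A plausible sketch, under the assumptions that you have root on a disposable test node, run proxy-mode=iptables, and that the "kubernetes" Service has the common default ClusterIP 10.96.0.1 (check with `kubectl get svc kubernetes`):

```shell
# WARNING: this breaks node networking; use a disposable test cluster only.
ip route del default          # remove the default route on a node

# Any attempt to reach a ClusterIP now fails before DNAT, typically with
# "Network is unreachable". 10.96.0.1 is an assumed ClusterIP:
curl -k https://10.96.0.1/
```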
Anything else we need to know?
IMO this is a documentation issue.
/sig network
/area kube-proxy
/area documentation
Kubernetes version
All?
Cloud provider
N/A
OS version
N/A
Install tools
N/A
Container runtime (CRI) and version (if applicable)
crio version 1.28.1
Related plugins (CNI, CSI, ...) and versions (if applicable)
Tested with Calico and Flannel (neither works without a default route)