Allow CNI networking to be set to none #1419
Conversation
dghubble commented on Feb 24, 2024
- Set CNI networking to "none" to skip installing any CNI provider (i.e. no flannel, Calico, or Cilium). In this mode, cluster nodes will be NotReady until you add your own CNI stack (see the sketch below these notes)
- Motivation: I now tend to manage CNI components as addon modules, just like other applications overlaid onto a cluster. This allows for faster iteration and may eventually become the recommendation
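For illustration, a minimal module configuration using this mode might look like the sketch below; the module source, ref, and surrounding arguments are placeholders (a real cluster needs the platform's required variables), and only the `networking = "none"` line is the setting this PR adds.

```hcl
module "cluster" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/flatcar-linux/kubernetes?ref=VERSION"

  # ...required cluster variables for your platform elided...

  # Skip installing flannel, Calico, or Cilium. Nodes will report
  # NotReady until you apply your own CNI manifests (e.g. managed
  # as an addon module applied after the cluster).
  networking = "none"
}
```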
Force-pushed from 1a1fac6 to 0e79776
Thanks! I've been looking forward to this feature. Would this be the best way to use it: `networking = "none"`? Or through the new variable from #1421? I see you included the change from #1421 in the release notes, but not this one (#1419). Personally I like setting it up this way as it looks cleaner, and since you're already checking the state of both.
You should set …
Are you saying the … ? Looking at the `networking = var.install_container_networking ? var.networking : "none"`, i.e. what I did by setting:

    if var.install_container_networking == true:
        networking = "CNI"  # in my case I set it to "none"
    else:
        networking = "none"

So wouldn't … ?
The two options serve different purposes. In the general case, deploying a CNI is more than just a bunch of Kubernetes resources; Typhoon needs to know which one you're choosing. It doesn't currently make much difference on bare-metal, where host-level firewall rules aren't used, but it may in the future; you're just in a narrow special case. On cloud platforms it obviously matters. Continue to use them as described in the release notes.
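In other words, the two settings are meant to be combined rather than chosen between: `networking` declares which CNI the cluster will run (so Typhoon can shape platform details like cloud firewall rules to match), while the variable from #1421 only controls whether Typhoon applies that provider's manifests. A hedged sketch of that split, using the variable name quoted earlier in this thread:

```hcl
module "cluster" {
  source = "git::https://github.com/poseidon/typhoon//..." # platform module elided

  # Declare which CNI the cluster will run, so platform-specific
  # details (e.g. cloud firewall rules) can match it...
  networking = "cilium"

  # ...while skipping the install of that provider's manifests, so
  # they can be managed separately as an addon/application.
  install_container_networking = false
}
```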
- Allow for more minimal base cluster setups that manage CoreDNS or kube-proxy as applications, with rolling updates or deploy systems. In the case of kube-proxy, it's becoming more common to not install it and instead use Cilium
- Add a `components` pass-through variable to configure pre-installed components like kube-proxy and CoreDNS. These components can be disabled (individually or together) to allow for managing components with separate plan/apply processes or automations
- terraform-render-bootstrap manifest assets are now structured as manifests/{coredns,kube-proxy,network}, so adapt the controller layout scripts accordingly
- This is similar to some changes in v1.29.2 that allowed the container networking provider manifests to be skipped

Related: #1419, #1421
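Based on the notes above, here is a sketch of how the `components` pass-through variable might be used; the nested attribute names are assumptions inferred from this changelog, not confirmed syntax:

```hcl
module "cluster" {
  source = "git::https://github.com/poseidon/typhoon//..." # platform module elided

  networking = "cilium"

  # Disable individual pre-installed components so they can be managed
  # with separate plan/apply processes or automations.
  # NOTE: the attribute names below are illustrative assumptions.
  components = {
    enable     = true
    coredns    = { enable = false }
    kube_proxy = { enable = false }
  }
}
```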