
Allow CNI networking to be set to none #1419

Merged
1 commit merged into main on Feb 24, 2024
Conversation

dghubble
Member

  • Set CNI networking to "none" to skip installing any CNI provider (i.e. no flannel, Calico, or Cilium). In this mode, cluster nodes will be NotReady until you add your own CNI stack
  • Motivation: I now tend to manage CNI components as addon modules just like other applications overlaid onto a cluster. It allows for faster iteration and may eventually become the recommendation

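As a sketch of the new option, a cluster module could opt out of the bundled CNI provider like this (module source ref and the platform-specific required variables are elided; only the `networking` line comes from this PR):

```
module "cluster" {
  source = "git::https://github.com/poseidon/typhoon//bare-metal/fedora-coreos/kubernetes?ref=..."

  # ... other required variables for your platform ...

  # skip installing any CNI provider (no flannel, Calico, or Cilium);
  # nodes will report NotReady until you add your own CNI stack
  networking = "none"
}
```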
@dghubble dghubble merged commit 0e79776 into main Feb 24, 2024
@dghubble dghubble deleted the dev/dghubble/cni-none branch February 24, 2024 06:58
@kalmufti

Thanks! Been looking forward to this feature.

Would this be the best way to use it:

networking = "none"

Or through the new variable from #1421? I see you included the change from #1421 in the release notes, but not this one (#1419).

Personally I like setting it up this way as it looks cleaner, and since you're already checking the state of both `var.networking` being "none" and the new `install_container_networking` variable in kubernetes/bootstrap.tf.

@dghubble
Member Author

You should set install_container_networking to false if you want to manage the flannel, Calico, or Cilium Kubernetes resources yourself. You will still need to have networking set to the corresponding CNI provider to configure cloud firewall / security group rules or other conditionals. The release notes guide you this direction.
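As a sketch of that recommendation (variable names from this PR and #1421; other cluster arguments elided, and "cilium" is just one example of a CNI choice):

```
# keep `networking` set to your chosen CNI provider so cloud firewall /
# security group rules and other conditionals are still configured...
networking = "cilium"

# ...but skip installing that provider's Kubernetes resources, so you
# can manage them yourself (e.g. as an addon module)
install_container_networking = false
```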

@kalmufti

Are you saying the networking variable is still important to be set for cloud providers only? I'm deploying on bare-metal clusters. I've tested 2 clusters by just setting networking = "none" and it worked as expected, meaning it deployed without CNI and was able to deploy my Cilium setup as I needed.

Looking at the <platform>/fedora-coreos/kubernetes/bootstrap.tf files I see this condition.

networking = var.install_container_networking ? var.networking : "none"

i.e. what I did by setting networking = "none" eventually achieves what I'm after:

if var.install_container_networking == true:
  networking = var.networking # in my case, "none"
else:
  networking = "none"

So wouldn't var.install_container_networking be redundant in this case, unless var.networking is still needed elsewhere?

@dghubble
Member Author

dghubble commented Mar 30, 2024

The two options serve different purposes. In the general case, deploying a CNI involves more than just a bunch of Kubernetes resources; Typhoon needs to know which one you're choosing. It doesn't currently make much difference on bare-metal, where host-level firewall rules aren't used, but it may in the future; you're just in a narrow special case. On cloud platforms it obviously matters.

Continue to use them as described in the release notes.

dghubble added a commit that referenced this pull request May 13, 2024
* Allow for more minimal base cluster setups that manage CoreDNS or
kube-proxy as applications, with rolling updates or deploy systems.
Or in the case of kube-proxy, it's becoming more common to not install
it at all and instead use Cilium
* Add a `components` pass-through variable to configure pre-installed
components like kube-proxy and CoreDNS. These components can be
disabled (individually or together) to allow for managing components
with separate plan/apply processes or automations
* terraform-render-bootstrap manifest assets are now structured as
manifests/{coredns,kube-proxy,network} so adapt the controller
layout scripts accordingly
* This is similar to some changes in v1.29.2 that allowed for the
container networking provider manifests to be skipped

Related: #1419, #1421
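The commit message above suggests usage along these lines. This is only a sketch: the nested field names (`enable`, `coredns`, `kube_proxy`) are assumptions based on the description, not verified against the final variable schema.

```
module "cluster" {
  # ... other cluster arguments ...

  # pass-through toggles for pre-installed components; disable them
  # individually (or together) to manage those components with
  # separate plan/apply processes or automations
  components = {
    enable     = true
    coredns    = { enable = true }
    kube_proxy = { enable = false } # e.g. when Cilium replaces kube-proxy
  }
}
```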