This Helm chart deploys a Kubernetes cluster on vSphere using Cluster API with Kamaji as the control plane provider. The chart implements a hosted control plane architecture where certain controllers run on the management cluster while providing full integration with vSphere.
The chart supports seamless rolling updates of the entire cluster when its configuration changes. This works through Cluster API's machine lifecycle management for:
- Machine parameter changes, e.g. CPU, memory, disk
- Kubernetes version upgrades
- vSphere template changes
- cloud-init configuration updates
The implementation uses hash-suffixed `VSphereMachineTemplate` and `KubeadmConfigTemplate` resources (illustrated in the sketch further below) that:

- Generate a new template with the updated configuration and a unique name on `helm upgrade`
- Update references in the `MachineDeployment` to the new template
- Trigger Cluster API's built-in rolling update process

To perform a rolling update:

- Update `values.yaml` with the new configuration
- Run `helm upgrade cluster-name ./cluster-api-kamaji-vsphere`
- Cluster API automatically replaces the nodes using the new configuration
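To illustrate the hash-suffix mechanism, the rendered objects are expected to look roughly like the sketch below. The exact naming scheme and the hash format are chart internals, so the names here are assumptions, not the chart's actual output:

```yaml
# Illustrative sketch only: resource names and the hash suffix are assumptions.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: cluster-name-default
spec:
  template:
    spec:
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VSphereMachineTemplate
        name: cluster-name-default-7f8d9c   # new hash on every helm upgrade
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: cluster-name-default-7f8d9c # new hash on every helm upgrade
```

Because the reference in the `MachineDeployment` changes, Cluster API treats it as a new machine spec and rolls the nodes.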
The chart deploys vSphere controllers on the management cluster instead of the workload cluster.
This architecture enables:
- Tenant isolation from vSphere credentials
- Simplified networking requirements
- Centralized controller management
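One consequence is that workload cluster nodes never hold vSphere credentials; the vSphere-related controllers live next to the hosted control plane. As a rough check (the namespace is a placeholder and the actual deployment names depend on the chart):

```bash
# Run against the management cluster: the vSphere controllers deployed by the
# chart should appear here, not on the workload cluster's nodes.
kubectl get deployments -n cluster-namespace
```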
The chart includes support for enabling the Cluster Autoscaler for each node pool. This feature lets you mark node pool machines for autoscaling; however, you still need to install the Cluster Autoscaler separately.
The Cluster Autoscaler runs in the management cluster, following the hosted control plane model, and manages the scaling of the workload cluster. To enable autoscaling for a node pool, set the `autoscaling.enabled` field to `true` in your `values.yaml` file:
```yaml
nodePools:
  - name: default
    replicas: 3
    autoscaling:
      enabled: true
      minSize: 2
      maxSize: 6
      labels:
        autoscaling: "enabled"
```
This configuration marks the node pool for autoscaling. The Cluster Autoscaler will use these settings to scale the node pool within the specified limits.
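For context, the Cluster Autoscaler's Cluster API provider discovers node groups through well-known min/max size annotations on the scalable resource. A minimal sketch of what the chart is expected to render on the `MachineDeployment` follows; the resource name and the label are assumptions, only the two annotation keys are the standard ones:

```yaml
# Sketch: annotations read by the Cluster Autoscaler (clusterapi provider).
# The MachineDeployment name and the autoscaling label are illustrative.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: cluster-name-default
  labels:
    autoscaling: "enabled"
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "2"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "6"
```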
You need to install the Cluster Autoscaler in the management cluster. Here is an example using Helm:
```bash
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update
helm upgrade --install ${CLUSTER_NAME}-autoscaler autoscaler/cluster-autoscaler \
  --set cloudProvider=clusterapi \
  --set autoDiscovery.namespace=default \
  --set "autoDiscovery.labels[0].autoscaling=enabled" \
  --set clusterAPIKubeconfigSecret=${CLUSTER_NAME}-kubeconfig \
  --set clusterAPIMode=kubeconfig-incluster
```
This command installs the Cluster Autoscaler and configures it to manage the workload cluster from the management cluster.
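To verify the autoscaler is running and has picked up the node group, you can check its pod and logs on the management cluster. The label selector below relies on the standard Helm instance label and may need adjusting for your chart version:

```bash
# Check the autoscaler pod and recent logs (run against the management cluster)
kubectl get pods -l app.kubernetes.io/instance=${CLUSTER_NAME}-autoscaler
kubectl logs -l app.kubernetes.io/instance=${CLUSTER_NAME}-autoscaler --tail=50
```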
- Kamaji installed and configured
- Cluster API vSphere provider installed and configured
- Cluster API IPAM provider installed and configured (optional)
- Access to vSphere environment
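As a sketch of how the providers above could be installed on the management cluster with `clusterctl` (the provider names are the upstream defaults; adjust versions and flags to your environment):

```bash
# Initialize the management cluster with the Kamaji control plane provider,
# the vSphere infrastructure provider and, optionally, the in-cluster IPAM
# provider.
clusterctl init \
  --control-plane kamaji \
  --infrastructure vsphere \
  --ipam in-cluster
```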
```bash
# Add repository (if published)
helm repo add clastix https://clastix.github.io/charts
helm repo update

# Install with custom values
helm install cluster-name clastix/capi-kamaji-vsphere -f values.yaml
```
Cluster API Provider vSphere (CAPV) supports multiple methods to provide vCenter credentials and authorize clusters to use them:
- Secrets: credentials are provided via a `secret` used by the `VSphereCluster`. This creates a unique relationship between the `VSphereCluster` and the `secret`; the `secret` cannot be used by other clusters.
- VSphereClusterIdentity: credentials are provided via a `VSphereClusterIdentity`, a cluster-scoped resource that enables multiple `VSphereClusters` to share the same set of credentials. The namespaces that are allowed to use the `VSphereClusterIdentity` can also be configured via a `LabelSelector`.

More details in the CAPV documentation: Cluster API Provider vSphere
The chart creates three secrets by default, one for each component that requires vSphere credentials. These secrets are created in the same namespace as the `Cluster` resource and are labeled with the cluster name:
```bash
# Create the vsphere-secret for Cluster API
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-secret
  namespace: cluster-namespace
  labels:
    cluster.x-k8s.io/cluster-name: "cluster-name"
stringData:
  username: "administrator@vsphere.local"
  password: "password"
EOF
```
```bash
# Create the vsphere-config-secret for Cloud Controller Manager
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-config-secret
  namespace: cluster-namespace
  labels:
    cluster.x-k8s.io/cluster-name: "cluster-name"
stringData:
  vsphere.conf: |
    global:
      port: 443
      insecureFlag: false
      password: "password"
      user: "administrator@vsphere.local"
    vcenter:
      vcenter.example.com:
        datacenters:
          - "datacenter-name"
        server: "vcenter.example.com"
EOF
```
The chart can also be configured to use a `VSphereClusterIdentity` for managing vSphere credentials. This allows multiple clusters to share the same credentials.

Deploy a secret with the credentials in the CAPV manager namespace (`capv-system` by default):
```bash
# Create the vsphere-secret for Cluster API
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-secret
  namespace: capv-system
stringData:
  username: "administrator@vsphere.local"
  password: "password"
EOF
```
Deploy a `VSphereClusterIdentity` that references the secret above. The `allowedNamespaces` selector can also be used to control which namespaces are allowed to use the identity:
```bash
# Create the VSphereClusterIdentity
cat <<EOF | kubectl apply -f -
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereClusterIdentity
metadata:
  name: vsphere-cluster-identity
spec:
  secretName: vsphere-secret
  allowedNamespaces:
    selector:
      matchLabels: {} # allow all namespaces
EOF
```
```bash
# Create the vsphere-config-secret for Cloud Controller Manager
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-config-secret
  namespace: cluster-namespace
  labels:
    cluster.x-k8s.io/cluster-name: "cluster-name"
stringData:
  vsphere.conf: |
    global:
      port: 443
      insecureFlag: false
      password: "password"
      user: "administrator@vsphere.local"
    vcenter:
      vcenter.example.com:
        datacenters:
          - "datacenter-name"
        server: "vcenter.example.com"
EOF
```
```bash
# Deploy using the chart
helm install cluster-name ./cluster-api-kamaji-vsphere -f values.yaml

# Check status
kubectl get cluster,machines

# Get kubeconfig
clusterctl get kubeconfig cluster-name > cluster-name.kubeconfig
```
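With the kubeconfig in hand, you can talk to the new workload cluster directly, for example:

```bash
# Inspect the workload cluster using the retrieved kubeconfig
kubectl --kubeconfig cluster-name.kubeconfig get nodes -o wide
```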
```yaml
# Update values.yaml
cluster:
  version: "v1.32.0"

nodePools:
  - name: default
    template: "ubuntu-2204-kube-v1.32.0"

vSphereCloudControllerManager:
  image:
    tag: "v1.32.0"
```

```bash
# Apply upgrade
helm upgrade cluster-name ./cluster-api-kamaji-vsphere -f values.yaml

# Watch the rolling update
kubectl get machines -w
```
```yaml
# Update values.yaml
nodePools:
  - name: default
    replicas: 5
```

```bash
# Apply scaling
helm upgrade cluster-name ./cluster-api-kamaji-vsphere -f values.yaml

# Watch the scaling
kubectl get machines -w
```
```bash
# Delete the cluster
helm uninstall cluster-name
```
If Helm uninstall fails with IP pool deletion errors:
```bash
# Wait for machines to be deleted first
kubectl delete machinedeployment -l cluster.x-k8s.io/cluster-name=cluster-name
kubectl wait --for=delete vspheremachines -l cluster.x-k8s.io/cluster-name=cluster-name

# Retry helm uninstall
helm uninstall cluster-name
```
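If the uninstall still hangs on IP pool resources and you use the Cluster API in-cluster IPAM provider, you can inspect what is left over; the resource kinds below assume that provider is installed:

```bash
# List leftover IPAM objects across namespaces (in-cluster IPAM provider)
kubectl get ipaddressclaims,ipaddresses,inclusterippools -A
```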
If node taints are not removed, check the Cloud Controller Manager logs:

```bash
# Check CPI Controller logs
kubectl logs -l component=cloud-controller-manager
```

Most of the time the issue is an authentication problem with the vSphere credentials. Check the secret used by the `VSphereClusterIdentity` or `VSphereCluster` and ensure that the credentials are correct.
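A quick way to inspect the credentials actually stored in the referenced secret (adjust the secret name and namespace to the credential method you chose):

```bash
# Decode the username from the credentials secret used by CAPV
kubectl get secret vsphere-secret -n capv-system \
  -o jsonpath='{.data.username}' | base64 -d
```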
See the values you can override here.
| Name | Email | Url |
| ---- | ----- | --- |
| Clastix Labs | authors@clastix.labs | |
This project is licensed under the Apache 2.0 License. See the LICENSE file for more details.