Multi-node Kubernetes clusters in VirtualBox.
Still in the early stages...
There must be over a half dozen ways to get K8s on your desktop these days, but none of them bears a very good likeness to real Kubernetes. Nearly all run a single node acting as both master and workload host - and that isn't really a cluster. I found myself missing the true K8s experience I used to have with cluster-builder.
VMware jumped the shark. No point in looking back.
It was time to do a VirtualBox edition.
- VirtualBox 7.x (latest)
- Packer 1.8+
- Ansible 2.14+
- kubectl
brew install virtualbox packer ansible kubectl
How great is that?
Developed and tested on a macOS Monterey 2019 i9 MacBook host thus far, and unlikely to migrate much further. The current build deploys Kubernetes 1.27.1.
Before building the cluster for the first time, there are a few setup steps required.

First, make sure a host-only network exists in VirtualBox (File -> Host Network Manager). If vboxnet0 does not already exist, hit the Create button to create it and name it vboxnet0. Leave the IP address and DHCP server settings at their defaults.
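If you prefer the command line, the host-only interface can also be created with VBoxManage (roughly equivalent to the GUI steps above; note that VirtualBox assigns the vboxnetN name itself, with the first interface created becoming vboxnet0):

# List existing host-only interfaces to check for vboxnet0
VBoxManage list hostonlyifs

# Create a new host-only interface if none exists
VBoxManage hostonlyif create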
You will also need to download the Ubuntu Live Server 22.04 ISO image into the node-packer/iso folder:
curl https://releases.ubuntu.com/22.04/ubuntu-22.04.1-live-server-amd64.iso --output node-packer/iso/ubuntu-22.04.1-live-server-amd64.iso
This ISO is used by Packer to build the base cluster node image. If it is not pre-downloaded, Packer will download it automatically, but unfortunately it does this on every build, as Packer does not cache the ISOs.
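Although not required by the build, it is worth verifying the ISO against Ubuntu's published checksums before baking it into an image (a sketch; adjust the filename if you download a different point release):

# Fetch the published checksum and compare it against the local file
curl -s https://releases.ubuntu.com/22.04/SHA256SUMS | grep live-server-amd64
shasum -a 256 node-packer/iso/ubuntu-22.04.1-live-server-amd64.iso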
When Packer builds the nodes it includes the node-packer/keys/authorized_keys file in the image for passwordless SSH access, which is used by Packer and Ansible and ultimately provides access to the K8s nodes. If this file does not exist in node-packer/keys, the build script will attempt to copy ~/.ssh/id_rsa.pub into an authorized_keys file.
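If you do not have an SSH key pair yet, something like the following will generate one and stage it for the build (per the above, the build script looks for ~/.ssh/id_rsa.pub, so an RSA key is assumed):

# Generate an RSA key pair if ~/.ssh/id_rsa does not exist
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa

# Stage the public key where the build expects it
mkdir -p node-packer/keys
cp ~/.ssh/id_rsa.pub node-packer/keys/authorized_keys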
In the style of the original VMware-based cluster-builder, the Ansible inventory host files serve as the K8s cluster configuration files. They are stored in:
clusters/[org folder]/[cluster name folder]/hosts
For example:
#clusters/local/k8s/hosts
[all:vars]
cluster_name=k8s-local
remote_user=sysop
network_mask=255.255.255.0
network_cidr=/24
network_dn=vm.idstudios.io
[k8s_masters]
k8s-m1.vm.idstudios.io ansible_host=192.168.56.50 numvcpus=2 memsize=2524
[k8s_workers]
k8s-w1.vm.idstudios.io ansible_host=192.168.56.60 numvcpus=2 memsize=2524
k8s-w2.vm.idstudios.io ansible_host=192.168.56.61 numvcpus=2 memsize=2524
There is an example hosts file in clusters/eg.
Note: It is important that all host names resolve, at least at the host machine level. The examples use AWS Route 53 for the DNS names (k8s-m1.vm.idstudios.io), so all of the example DNS names will resolve on any machine. If you use your own host names, which is highly recommended, make sure they resolve on the host machine (such as by adding them to your machine's /etc/hosts file). Also make sure to update network_dn accordingly, as it is used to derive the short VM name.
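For example, if your hosts file used a hypothetical domain like vm.example.local, the corresponding /etc/hosts entries on the host machine would be:

192.168.56.50  k8s-m1.vm.example.local
192.168.56.60  k8s-w1.vm.example.local
192.168.56.61  k8s-w2.vm.example.local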
To build the example cluster:

bash build-cluster eg/k8s
For your own cluster, copy the eg/k8s/hosts file into a new folder named for your group of clusters. The folder can use any organization name you like.
mkdir -p clusters/my-clusters/k8s
cp clusters/eg/k8s/hosts clusters/my-clusters/k8s/
Any folder apart from eg in the clusters folder is ignored by git in this repo, and may be initialized as its own git repository to store your cluster configurations elsewhere.
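For example, to track your configurations in their own repository (the remote URL here is hypothetical):

cd clusters/my-clusters
git init
git add k8s/hosts
git commit -m "Add k8s cluster configuration"
git remote add origin git@github.com:your-org/my-clusters.git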
Once you have created your cluster package folder and inventory hosts file, you can build the k8s cluster:
bash build-cluster my-clusters/k8s
The build runs in 2-3 phases:
- Packer build of cluster node image (only happens once)
- Ansible deployment of cluster node image to VirtualBox nodes defined in hosts file
- Ansible deployment of Kubernetes to nodes
After the cluster node image has been built once, phase one is skipped. To rebuild the cluster node image (which takes some time), remove it:
rm -rf node-packer/images
Phase one will then run again and Packer will build a fresh cluster-node.ova.
When the cluster build has finished, a message will be displayed with instructions for using the cluster. The kubeconfig file is downloaded to the cluster package folder (eg. eg/k8s), which you can then merge into your ~/.kube/config, or reference explicitly.
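One way to merge it into your existing contexts (a sketch, assuming the my-clusters/k8s package from above; kubectl config view --flatten emits a single combined file):

export KUBECONFIG=clusters/my-clusters/k8s/kube-config:$HOME/.kube/config
kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config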
Note: Multi-master k8s is not yet implemented, so only the first master specified in the hosts file will be used. Hopefully coming soon.
When everything is complete you should see something like this in the terminal:
------------------------------------------------------------
SUCCESS: cluster created!
Deployed in: 24 min 19 sec
------------------------------------------------------------
The kube-config file can be found at clusters/local/k8s/kube-config
kubectl --kubeconfig=clusters/local/k8s/kube-config get pods --all-namespaces
To add the cluster to your existing contexts...
export KUBECONFIG="/Users/seanhig/Workspace/cluster-builder-vbox/local/k8s/kube-config:/Users/seanhig/.kube/config"
Enjoy your Kubernetes!
The clusterctl script uses Ansible and the hosts file to easily suspend a running cluster and resume it later, which is very useful in development:

bash clusterctl eg/k8s [start | stop | pause | resume | savestate]
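For example, to park the example cluster at the end of the day and bring it back later (assuming start resumes VMs from their saved state, as VirtualBox does natively):

bash clusterctl eg/k8s savestate
bash clusterctl eg/k8s start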
There are a number of additional Kubernetes components and stacks in the addons folder that can be useful.
For example, to install the NGINX ingress controller:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
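Once applied, you can verify the controller came up and, with MetalLB installed, that its LoadBalancer service received an address from the host-only pool:

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx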
The NFS provisioner addon is set up to use one of the local worker nodes as an NFS storage location. Each worker node has a /storage/nfs-provisioner directory, which is used to store the PVC data depending on the node where the pod is deployed. The addon creates a StorageClass called nfs-dynamic.
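A quick way to exercise the provisioner is to create a throwaway PVC against the nfs-dynamic StorageClass (the claim name is hypothetical):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  storageClassName: nfs-dynamic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should go from Pending to Bound once provisioned
kubectl get pvc nfs-test-claim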
The MetalLB IP pool is set to a range within the default VirtualBox host-only network of 192.168.56.0/24, using addresses 30-45 for the local pool. This allows any service of type LoadBalancer to get a dedicated address on the host-only network.
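To see MetalLB hand out an address, expose any deployment as a LoadBalancer (a hypothetical nginx deployment for illustration):

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer

# EXTERNAL-IP should land in the 192.168.56.30-45 pool
kubectl get svc lb-test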
A quick logging stack configuration that leverages both MetalLB and the NFS provisioner. Not using Helm is always refreshing:
kubectl apply -f elastic.yaml
kubectl apply -f filebeat.yaml
kubectl apply -f logstash.yaml
kubectl apply -f kibana.yaml
Everything goes into the kube-system namespace. The Elasticsearch index is prefixed with k8s-logs, as configured in the Filebeat config YAML.
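If you would rather not hunt for the Kibana LoadBalancer address, a port-forward also works (assuming the service is named kibana, per the kibana.yaml above):

kubectl -n kube-system port-forward svc/kibana 5601:5601
# then browse to http://localhost:5601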
The Istio Install Guide actually works on these clusters.
If a PVC gets stuck when deleting, patch it to set the finalizers field to null; this allows the final unmount from the node, after which the PVC can be deleted:

kubectl patch pvc {PVC_NAME} -p '{"metadata":{"finalizers":null}}'
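If several PVCs are stuck, a small loop applies the same patch to each PVC currently showing Terminating (a sketch):

# Clear finalizers on every PVC stuck in Terminating
for pvc in $(kubectl get pvc --no-headers | awk '$2 == "Terminating" {print $1}'); do
  kubectl patch pvc "$pvc" -p '{"metadata":{"finalizers":null}}'
done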