
Kubernetes Cluster on Hyperkit

Practice real Kubernetes configurations on a local multi-node cluster.

Tested on: Hyperkit 0.20190802 on macOS 10.14.5 w/ APFS, guest images Ubuntu 18.04 and 19.04.

For Hyper-V on Windows see here.


Current state: pre-release; TODO: k8s helm setup

Example usage:

# note: `sudo` is necessary for access to macOS Hypervisor and vmnet frameworks, and /etc/hosts config

# download the script
cd workdir
git clone && cd k8s-hyperkit
# ---- or -----
curl -O
chmod +x

# examine and customize the script, e.g.:

# display a short synopsis of the available commands
./ help

# performs `brew install hyperkit qemu kubernetes-cli kubernetes-helm`.
# (qemu is necessary for `qemu-img`)
# you may perform these manually / selectively instead.
./ install

# display configured variables (edit the script to change them)
./ config
   WORKDIR: ./tmp
   SSHPATH: /Users/name/.ssh/
  DISKFILE: ubuntu-19.04-server-cloudimg-amd64.raw
      CPUS: 4
       RAM: 4GB
       HDD: 40GB
# (optional)
# replaces /Library/Preferences/SystemConfiguration/,
# while setting a new CIDR (chosen by default to avoid colliding with the
# default CIDRs of Kubernetes Pod networking plugins such as Calico).
# (you should examine the vmnet.plist first to see if other apps are using it)
# default CIDRs to avoid:
# - Calico
# - Weave Net
# - Flannel
./ net
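Since the whole point of `net` is avoiding a collision, the overlap test can be sketched in plain shell. The candidate CIDR below is hypothetical, and the three pod-network CIDRs are the commonly documented defaults; verify them against your CNI plugin's manifest:

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Return 0 (true) if the two CIDRs overlap, 1 otherwise.
cidrs_overlap() {
  ip1=$(ip_to_int "${1%/*}"); len1=${1#*/}
  ip2=$(ip_to_int "${2%/*}"); len2=${2#*/}
  # Mask both addresses with the shorter prefix; equal networks => overlap.
  if [ "$len1" -lt "$len2" ]; then len=$len1; else len=$len2; fi
  mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  [ $(( ip1 & mask )) -eq $(( ip2 & mask )) ]
}

# Hypothetical candidate for the Hyperkit vmnet network:
candidate=172.31.0.0/24
# Commonly documented pod-network defaults (check your CNI manifest):
for pod_cidr in 192.168.0.0/16 10.32.0.0/12 10.244.0.0/16; do
  if cidrs_overlap "$candidate" "$pod_cidr"; then
    echo "COLLISION: $candidate overlaps $pod_cidr"
  else
    echo "ok: $candidate does not overlap $pod_cidr"
  fi
done
```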

# (optional)
# only resets the CIDR in /Library/Preferences/SystemConfiguration/,
# while preserving the contents (if the file does not exist, it is auto-created later).
./ cidr

# (optional)
# updates /etc/hosts with currently configured CIDR;
# then you can use e.g. `ssh master` or `ssh node1` etc.
# note: if your Mac's vmnet was already used with this CIDR, you will need to
# adjust the /etc/hosts values manually (according to /var/db/dhcpd_leases).
# (you should examine the dhcpd_leases first to see if other apps are using it)
./ hosts
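The `hosts` command automates the lease-to-hostname mapping; the idea can be sketched standalone. The lease-entry layout below is an assumption about the macOS `bootpd` format, embedded as a sample so the parser runs without the real file:

```shell
#!/bin/sh
# Sample in the (assumed) /var/db/dhcpd_leases entry format:
leases='{
	name=master
	ip_address=192.168.64.2
	hw_address=1,aa:bb:cc:dd:ee:f1
}
{
	name=node1
	ip_address=192.168.64.3
	hw_address=1,aa:bb:cc:dd:ee:f2
}'

# Real use would read the actual file:  leases=$(cat /var/db/dhcpd_leases)
# Emit "<ip><TAB><name>" lines, suitable for appending to /etc/hosts:
printf '%s\n' "$leases" | awk -F= '
	/name=/       { name = $2 }
	/ip_address=/ { print $2 "\t" name }
'
```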

# (optional)
# after changing your CIDR, you may want to prune the MAC address associations in
# the file /var/db/dhcpd_leases (if the file does not exist, it is auto-created later)
./ clean-dhcp

# download, prepare and cache the VM image templates
./ image

# launch the nodes
./ master
./ node1
./ node2
# ---- or -----
./ master node1 node2

# note: the initial cloud-init is set to power down the nodes, as a clear signal that it has finished.
# use the `info` command to see when the nodes have finished initializing,
# then run them again to set up your k8s cluster.
# you can disable this behavior by commenting out the `powerdown` in the cloud-config.
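For reference, cloud-init's standard way to express such a first-boot shutdown is the `power_state` module; the fragment below is a sketch of what the relevant piece may look like (this repo's cloud-config may word it differently):

```yaml
#cloud-config
power_state:
  # comment this section out to keep the nodes running after first boot
  mode: poweroff
  message: "cloud-init finished"
```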

# show info about existing VMs (size, run state)
./ info

master  36399  0.4   2.1   341M  3:51AM   0:26.30  40G   3.1G    RUNNING
node1   36418  0.3   2.1   341M  3:51AM   0:25.59  40G   3.1G    RUNNING
node2   37799  0.4   2.0   333M  3:56AM   0:16.78  40G   3.1G    RUNNING

# ssh to the nodes and install a basic Kubernetes cluster from there.
# IPs can be found in `/var/db/dhcpd_leases`, mapped by MAC address.
# by default, your `.ssh/` key was copied into the VMs' ~/.ssh/authorized_keys.
# (note: the hostnames below work only after `./ hosts`; otherwise use IP addresses)
# use your host username (which is the default), e.g.:
ssh master
ssh node1
ssh node2
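The cluster install itself is left to you; one conventional route is `kubeadm`. The sketch below is a dry run that only prints each step (the node names, pod CIDR, and placeholder token values are hypothetical, and this is a generic kubeadm flow, not something the script performs for you):

```shell
#!/bin/sh
# Dry run: `run` only prints each step. To execute for real, replace the
# `echo` with e.g.:  ssh "$host" "$@"
run() { host=$1; shift; echo "[$host] $*"; }

# 1. initialise the control plane (pod CIDR must match your network add-on)
run master 'sudo kubeadm init --pod-network-cidr=10.244.0.0/16'

# 2. install a pod-network add-on (Flannel shown as one option)
run master 'kubectl apply -f kube-flannel.yml'

# 3. join the workers using the token printed by `kubeadm init`
join='sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>'
run node1 "$join"
run node2 "$join"
```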

# stop all nodes
./ stop

# force-stop all nodes
./ kill

# delete all nodes' data (will not delete image templates)
./ delete

# kill only a particular node
sudo kill -TERM 36399

# delete only a particular node
rm -rf ./tmp/node1/

# remove everything
rm -rf ./tmp

