
nkube (Nested Kubernetes)

nkube is a tool for deploying multinode Kubernetes clusters on Kubernetes itself. It uses helm to deploy a chart consisting of containers running systemd and docker-in-docker. kubeadm is then invoked to bootstrap a new Kubernetes cluster.


While nkube can potentially target any Kubernetes deployment, it is currently only tested with minikube. To get started:

  • Start minikube:
minikube start
  • Initialize helm:
helm init
  • Ensure that the ip6_tables module is loaded on the docker host (required for calico):
minikube ssh
sudo modprobe ip6_tables
  • From the root of a clone of this repo, start a new nested cluster with the calico plugin. Deployment is likely to take 3-5 minutes, depending on the speed of the host and its network connection.

    If you see the error no available release name found, it may be necessary to grant cluster admin privileges to the deployed helm tiller.

./ [helm install args]
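If the no available release name found error appears, one common fix is to grant cluster-admin to the service account tiller runs under. A sketch, assuming helm init left tiller on the default service account in kube-system (adjust the names to match your setup):

```shell
# Grant cluster-admin to the service account tiller runs under.
# Assumes the default service account in kube-system; verify with:
#   kubectl -n kube-system get deployment tiller-deploy -o yaml
kubectl create clusterrolebinding tiller-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default
```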
  • Once the script has finished, a context will have been added that allows access to the new cluster:
kubectl --context=[cluster-id]
  • More than one nested cluster can be deployed at once.

  • Since the cluster is deployed with helm, helm commands can be used to manage the cluster (e.g. helm delete [cluster-id] removes the cluster).

  • ssh access to the nodes of the cluster is not supported. Instead, use kubectl exec to gain shell access to the master and node pods.

  • The number of nodes can be scaled by setting the replica count of the node deployment. The number of nodes is limited only by the capacity of the hosting cluster.
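    Scaling the node count can be sketched with kubectl scale. The deployment name below is an assumption, not something the chart guarantees — check kubectl get deployments for the name the chart actually generates:

```shell
# Scale the nested cluster to 3 worker nodes.
# "[cluster-id]-node" is a hypothetical deployment name; confirm it with:
#   kubectl get deployments
kubectl scale deployment [cluster-id]-node --replicas=3
```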


  • The use of persistent storage for etcd is currently unsupported. If the nested master fails, the cluster state is lost.
  • Due to the way docker-in-docker handles volumes, manual cleanup on the host docker is required:
docker volume ls -qf dangling=true | xargs -r docker volume rm
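When minikube is the host, the cleanup can be run from outside the VM, assuming your minikube version supports passing a command to minikube ssh:

```shell
# Remove dangling docker volumes on the minikube host
minikube ssh -- "docker volume ls -qf dangling=true | xargs -r docker volume rm"
```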