This repository has been archived by the owner. It is now read-only.
Dynamic Provisioning of Kubernetes HostPath Volumes

This repository and its code are unmaintained. You should not use it in production.


```sh
# install the dynamic hostpath provisioner
kubectl create -f
kubectl create -f
kubectl create -f

# create a test PVC and a pod writing to it
kubectl create -f
kubectl create -f

# expect a file to exist on your host
ls -la /var/kubernetes/default-hostpath-test-claim-pvc-*/

# clean up again
kubectl delete pod hostpath-test-pod
kubectl delete pvc hostpath-test-claim

# expect the file and folder to be removed from your host
ls -la /var/kubernetes/default/hostpath-test-claim/
```

The Do-it-yourself Single-node Cluster

Kubernetes is a cloud cluster orchestration system and as such is 100% geared towards running a cluster on multiple compute nodes.

In order to allow storage to be shared between those nodes (be they VMs or physical computers), Kubernetes highly recommends using a storage subsystem that can mount volumes on any of the available nodes. For do-it-yourself clusters this usually means setting up an NFS server backing all containers' volumes.

I wanted to have Kubernetes on my single VM up on netcup to orchestrate my handful of services. I do not plan on, nor do I need, multiple nodes; for me Kubernetes is just a nice way to manage the services that would otherwise run as simple systemd units.

In this simple scenario I do not need NFS or another clever storage provider. All I want are hostPath-backed volumes – easy to manage, easy to inspect, easy to back up, and with no protocol or network overhead on my small VM.
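For reference, this is roughly what a manually defined hostPath-backed volume looks like. The name, size and path here are made up for illustration and are not part of this repository:

```yaml
# A statically defined hostPath PersistentVolume.
# All names and paths are examples only.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-hostpath-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /var/kubernetes/example-volume
```

Defining such volumes by hand for every claim is exactly the chore that dynamic provisioning removes.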

Dynamic Provisioning

In order for dynamic provisioning – the process of allocating and binding a suitable volume to a PersistentVolumeClaim – to happen, a workload (usually a single pod) needs to watch the Kubernetes API for new claims, create volumes for them, and bind each volume to its claim. Similarly, the same workload is responsible for removing unneeded volumes when the claim goes away and the reclaim policy does not say otherwise.
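A claim that such a provisioner would react to might look like this sketch; the claim name and the storageClassName are illustrative, not defined by this repository:

```yaml
# A PersistentVolumeClaim that a watching provisioner would pick up.
# Names here are placeholders for illustration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: hostpath   # must reference a StorageClass served by the provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```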

For Google Compute Engine, Amazon AWS and even for Minikube there are such provisioners that know how to handle the creation of GCE disks, AWS disks or hostPaths for Minikube.

The Dynamic HostPath Provisioner

This is a small modification of the example given in the kubernetes-incubator project on how one could implement such a volume provisioner. It adds the ability to choose a target directory outside of /tmp on the host system, and an option to retain the directories when the claim goes away (by setting PV_RECLAIM_POLICY in the deployment.yaml).
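The relevant part of the deployment.yaml could look roughly like the following sketch. Only the PV_RECLAIM_POLICY environment variable comes from this project; the container name and image are placeholders:

```yaml
# Sketch of the provisioner Deployment's container spec.
# Only PV_RECLAIM_POLICY is from this project; the rest is illustrative.
spec:
  containers:
    - name: hostpath-provisioner
      image: example/hostpath-provisioner:latest
      env:
        - name: PV_RECLAIM_POLICY
          value: "Retain"   # keep the directories when the claim is deleted
```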

It also improves upon the manifest files by adding RBAC (Role-Based Access Control) configuration and a Deployment object.

Finally, it documents why and how Kubernetes will not auto-provision hostPath volume claims without a provisioner like this – something that was not clear to me from reading the docs (they sound as if dynamic provisioning should magically happen once you define a StorageClass. Well, it doesn't.)
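In other words, a StorageClass is only a pointer to a provisioner; without a running workload answering to that provisioner name, claims simply stay Pending. A minimal sketch, where the class and provisioner names are hypothetical:

```yaml
# A StorageClass alone provisions nothing; the provisioner name
# (hypothetical here) must match a running provisioner workload.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath
provisioner: example.org/hostpath
```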

So now that you have read all this, you can scroll up to the TL;DR section and install the provisioner. Good luck!

