A lightweight persistent storage solution for Kubernetes / OpenShift, using GlusterFS in the background.



Note 1: If you like the project, give it a GitHub star :-)


The Kadalu project started with the goal of keeping the minimum number of layers needed to provide Persistent Volumes in the Kubernetes ecosystem. Kadalu uses GlusterFS at its core to provide storage, but strips off the Gluster project's management layer.

The focus of the project is simplicity and stability. We believe simple solutions solve problems better than adding complexity to an already complicated ecosystem.

Get Started


  • Kubernetes version 1.13.0 or higher
  • The host should support XFS (mkfs.xfs)
    • On some systems this may require installing the xfsprogs package
  • mount -t xfs with -o prjquota should work
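
The prerequisites above can be checked on each node with a small script. This is only a sketch; check_node is a hypothetical helper, not part of Kadalu:

```shell
# Hypothetical helper: verify a node meets Kadalu's storage prerequisites.
check_node() {
  # mkfs.xfs is provided by the xfsprogs package on most distributions
  if command -v mkfs.xfs >/dev/null 2>&1; then
    echo "mkfs.xfs: found"
  else
    echo "mkfs.xfs: missing - install the xfsprogs package"
  fi
  # Project quota support requires mounting with -o prjquota;
  # as a basic sanity check, make sure the mount binary is present.
  if command -v mount >/dev/null 2>&1; then
    echo "mount: found"
  else
    echo "mount: missing"
  fi
}
check_node
```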


  1. Deploy the KaDalu Operator using,

    kubectl create -f

    In the case of OpenShift, deploy the Kadalu Operator using,

    oc create -f

    Note: Security Context Constraints can be applied only by admins. Run `oc login -u system:admin` to log in as admin.

2.1 Prepare your configuration file.

The KaDalu Operator watches for storage setup configuration changes and starts the required pods. For example,

# File: storage-config.yaml
apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  # This will be used as name of PV Hosting Volume
  name: storage-pool-1
spec:
  type: Replica1
  storage:
    - node: kube1      # node name as shown in `kubectl get nodes`
      device: /dev/vdc # Device to provide storage to all PVs
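
Kadalu also supports replicated pool types spanning multiple nodes. As an illustrative sketch (node names and devices below are placeholders), a three-way replicated pool might look like:

```yaml
# File: storage-config-replica3.yaml (illustrative)
apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  name: storage-pool-2
spec:
  type: Replica3
  storage:
    - node: kube1
      device: /dev/vdc
    - node: kube2
      device: /dev/vdc
    - node: kube3
      device: /dev/vdc
```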

More config options can be found here

2.2 Now request the kadalu-operator to set up storage using,

kubectl create -f storage-config.yaml

The operator will start the storage export pods as required. With just these two steps, your storage system is up and running.

Check the status of Pods using,

$ kubectl get pods -n kadalu
NAME                             READY   STATUS    RESTARTS   AGE
server-storage-pool-1-kube1-0    1/1     Running   0          84s
csi-nodeplugin-5hfms             2/2     Running   0          30m
csi-nodeplugin-924cc             2/2     Running   0          30m
csi-nodeplugin-cbjl9             2/2     Running   0          30m
csi-provisioner-0                3/3     Running   0          30m
operator-6dfb65dcdd-r664t        1/1     Running   0          30m


CSI to claim Persistent Volumes (PVC/PV)

Now we are ready to create Persistent volumes and use them in application Pods.

Create PVC using,

$ kubectl create -f examples/sample-pvc.yaml
persistentvolumeclaim/pv1 created
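
The referenced examples/sample-pvc.yaml is shaped roughly like the following. This is a sketch consistent with the `kubectl get pvc` output shown next; the actual file in the repo may differ slightly:

```yaml
# Sketch of examples/sample-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pv1
spec:
  storageClassName: kadalu.replica1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```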

and check the status of PVC using,

$ kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
pv1    Bound    pvc-8cbe80f1-428f-11e9-b31e-525400f59aef   1Gi        RWO            kadalu.replica1  42s

Now this PVC is ready to be consumed by your application pod. The sample below shows how to use the PVC in an application pod:

$ kubectl create -f examples/sample-app.yaml
pod1 created
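
For reference, an application pod consuming the pv1 claim can be sketched as follows. The image, command, and mount path are illustrative assumptions; the actual examples/sample-app.yaml may differ:

```yaml
# Sketch of a pod mounting the PVC created above
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: app
      image: busybox          # illustrative image
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /mnt/data
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: pv1
```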


For more information, check out the Try it out guide.

If you want 'External' Gluster Storage to be used as PV, check out this doc.


We track the number of downloads based on 'docker pull' stats, and also through Google Analytics. This commit gives details of what was added for tracking.


See Troubleshooting.

Talks and Blog posts

  1. [Blog] Gluster’s management in k8s
  2. [Blog] Gluster and Kubernetes - Portmap
  3. [Talk] DevConf India - Rethinking Gluster Management using k8s (slides)
  4. [Demo] Asciinema recording - Kadalu Setup
  5. [Demo] Asciinema recording - KaDalu CSI to claim Persistent Volumes
  6. [Blog] kaDalu - Ocean of opportunities

Reach out to some of the developers

You can reach the developers in the following ways:

  1. The best way is to open an issue on GitHub.
  2. Reach us on Slack (note: there is no message history) -