Note 1: If you like the project, give it a GitHub star :-)
The Kadalu project started with the goal of keeping the number of layers needed to provide Persistent Volumes in the Kubernetes ecosystem to a minimum. Kadalu uses GlusterFS at its core to provide storage, but strips off the Gluster project's management layer.
The focus of the project is simplicity and stability. We believe simple solutions solve problems better than adding complexity to an already complicated ecosystem.
- Kubernetes version 1.13.0 or later
- The host should support xfs (`mount -t xfs`). On some systems this might require installation of the `xfsprogs` package.
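To verify the xfs requirement up front, a quick check like the following can be run on each host (a sketch; the exact package-install commands depend on your distribution):

```shell
# Quick check for xfs support on the host.
# Kernel support: xfs appears in /proc/filesystems once built in or loaded.
if grep -qw xfs /proc/filesystems; then
    echo "xfs: kernel support present"
else
    echo "xfs: kernel support missing (try: sudo modprobe xfs)"
fi
# Userspace tools: mkfs.xfs is provided by the xfsprogs package.
if command -v mkfs.xfs >/dev/null 2>&1; then
    echo "xfsprogs: installed"
else
    echo "xfsprogs: missing (install the xfsprogs package)"
fi
```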
Deploy KaDalu Operator using,
kubectl create -f https://raw.githubusercontent.com/kadalu/kadalu/master/manifests/kadalu-operator.yaml
In the case of OpenShift, deploy Kadalu Operator using,
oc create -f https://raw.githubusercontent.com/kadalu/kadalu/master/manifests/kadalu-operator-openshift.yaml
Note: Security Context Constraints can be applied only by admins. Run `oc login -u system:admin` to log in as admin.
2.1 Prepare your configuration file.
KaDalu Operator listens to Storage setup configuration changes and starts the required pods. For example,
```yaml
# File: storage-config.yaml
---
apiVersion: kadalu-operator.storage/v1alpha1
kind: KadaluStorage
metadata:
  # This will be used as name of PV Hosting Volume
  name: storage-pool-1
spec:
  type: Replica1
  storage:
    - node: kube1      # node name as shown in `kubectl get nodes`
      device: /dev/vdc # Device to provide storage to all PVs
```
More config options can be found here
2.2 Now request the kadalu-operator to set up storage using,
kubectl create -f storage-config.yaml
The operator will start the storage export pods as required. With just these two steps, your storage system is up and running.
Check the status of Pods using,
```
$ kubectl get pods -n kadalu
NAME                            READY   STATUS    RESTARTS   AGE
server-storage-pool-1-kube1-0   1/1     Running   0          84s
csi-nodeplugin-5hfms            2/2     Running   0          30m
csi-nodeplugin-924cc            2/2     Running   0          30m
csi-nodeplugin-cbjl9            2/2     Running   0          30m
csi-provisioner-0               3/3     Running   0          30m
operator-6dfb65dcdd-r664t       1/1     Running   0          30m
```
CSI to claim Persistent Volumes (PVC/PV)
Now we are ready to create Persistent volumes and use them in application Pods.
Create PVC using,
```
$ kubectl create -f examples/sample-pvc.yaml
persistentvolumeclaim/pv1 created
```
and check the status of PVC using,
```
$ kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pv1    Bound    pvc-8cbe80f1-428f-11e9-b31e-525400f59aef   1Gi        RWO            kadalu.replica1   42s
```
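For reference, a minimal PVC manifest that would produce output like the above might look as follows (a sketch; the actual `examples/sample-pvc.yaml` in the repository may differ):

```yaml
# Hypothetical sketch of a PVC using the kadalu.replica1 StorageClass;
# the repository's examples/sample-pvc.yaml may differ.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1
spec:
  storageClassName: kadalu.replica1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```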
Now this PVC is ready to be consumed by your application pod. Below is a sample usage of the PVC in an application pod:
```
$ kubectl create -f examples/sample-app.yaml
pod1 created
```
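A minimal application pod consuming this PVC could be sketched as follows (the image, mount path, and container name here are illustrative assumptions; the actual `examples/sample-app.yaml` may differ):

```yaml
# Hypothetical sketch of a pod mounting the PVC pv1;
# the repository's examples/sample-app.yaml may differ.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pv1
```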
For more information, check out the Try it out guide.
If you want 'External' Gluster Storage to be used as PV, check out this doc.
We track the number of downloads based on 'docker pull' stats, and also through Google Analytics. This commit gives details of what was added to the code with respect to tracking.
Talks and Blog posts
- [Blog] Gluster’s management in k8s
- [Blog] Gluster and Kubernetes - Portmap
- [Talk] DevConf India - Rethinking Gluster Management using k8s (slides)
- [Demo] Asciinema recording - Kadalu Setup
- [Demo] Asciinema recording - KaDalu CSI to claim Persistent Volumes
- [Blog] kaDalu - Ocean of opportunities
Reach out to the developers
You can reach the developers in any of the following ways.