
csi-cloudscale

A Container Storage Interface (CSI) driver for cloudscale.ch volumes. The CSI plugin allows you to use cloudscale.ch volumes with your preferred Container Orchestrator.

The CSI plugin is mostly tested on Kubernetes. Theoretically it should also work on other Container Orchestrators, such as Mesos or Cloud Foundry. Feel free to test it on other COs and give us feedback.

Note: If you're using CSI with Ubuntu (typical for Rancher), it might sometimes not work because of this issue.

Volume parameters

This plugin supports the following volume parameters (in the case of Kubernetes: parameters on the StorageClass object):

  • the volume type: ssd or bulk; defaults to ssd if not set

For LUKS encryption:

  • an encryption flag: set to the string "true" if the volume should be encrypted with LUKS
  • the cipher to use; it must be supported by the kernel and LUKS, we suggest aes-xts-plain64
  • the key-size to use; we suggest 512 for aes-xts-plain64

For LUKS-encrypted volumes, a secret that contains the LUKS key needs to be referenced through the secret name and secret namespace parameters. See the included StorageClass definitions and the examples/kubernetes/luks-encrypted-volumes folder for examples.
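As a rough sketch, a LUKS-enabled StorageClass could look something like the following. The parameter keys and the provisioner name shown here are illustrative assumptions, not confirmed by this document; check the bundled StorageClass definitions in deploy/kubernetes/releases for the exact names:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-ssd-luks
provisioner: csi.cloudscale.ch                      # assumed driver name
parameters:
  csi.cloudscale.ch/volume-type: ssd                # assumed key; ssd or bulk
  csi.cloudscale.ch/luks-encrypted: "true"          # assumed key
  csi.cloudscale.ch/luks-cipher: aes-xts-plain64    # assumed key
  csi.cloudscale.ch/luks-key-size: "512"            # assumed key
  # Standard CSI parameters resolving the LUKS-key secret per PVC:
  csi.storage.k8s.io/node-stage-secret-name: ${pvc.name}-luks-key
  csi.storage.k8s.io/node-stage-secret-namespace: ${pvc.namespace}
```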

Pre-defined storage classes

The default deployment bundled in the deploy/kubernetes/releases folder includes the following storage classes:

  • cloudscale-volume-ssd - the default storage class; uses an ssd volume, no luks encryption
  • cloudscale-volume-bulk - uses a bulk volume, no luks encryption
  • cloudscale-volume-ssd-luks - uses an ssd volume that will be encrypted with luks; a luks-key must be supplied
  • cloudscale-volume-bulk-luks - uses a bulk volume that will be encrypted with luks; a luks-key must be supplied

To use one of the shipped luks storage classes, you need to create a secret named ${pvc-name}-luks-key in the same namespace as the persistent volume claim. The secret must contain an element called luksKey that will be used as the luks encryption key.

Example: If you create a persistent volume claim with the name my-pvc, you need to create a secret my-pvc-luks-key.
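One way to create such a secret is sketched below; the kubectl invocation (shown as a comment) is an assumption for illustration, and any method that yields a secret with a luksKey element works:

```shell
# Generate a random 512-bit (64-byte) key, matching the suggested
# key-size of 512 for aes-xts-plain64.
head -c 64 /dev/urandom > luks.key

# Then create the secret next to the PVC (assuming a PVC named my-pvc):
# kubectl create secret generic my-pvc-luks-key \
#     --namespace default --from-file=luksKey=./luks.key
```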


Versioning

The CSI plugin follows semantic versioning. The current version is v1.0.0. The project is still under active development and may not be production ready.

  • Bug fixes will be released as a PATCH update.
  • New features (such as CSI spec bumps) will be released as a MINOR update.
  • Significant breaking changes will be released as a MAJOR update.

Installing to Kubernetes

Requirements:

  • Kubernetes v1.13.0 minimum
  • --allow-privileged flag must be set to true for both the API server and the kubelet
  • (if you use Docker) the Docker daemon of the cluster nodes must allow shared mounts
  • If you want to use LUKS encrypted volumes, the kernel on your nodes must have support for device mapper infrastructure with the crypt target and the appropriate cryptographic APIs
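There is no official preflight check for the LUKS requirement in this document; as a rough, non-authoritative sketch on Linux, you can peek at /proc/crypto. Note that dm-crypt and ciphers may also be loadable on demand, so a miss here is not conclusive:

```shell
# Look for the xts cipher mode among the crypto APIs the running
# kernel currently exposes (LUKS with aes-xts-plain64 needs it).
if grep -qs 'xts' /proc/crypto; then
  xts_status="available"
else
  xts_status="not listed (it may still be loadable on demand)"
fi
echo "xts cipher mode: $xts_status"
```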

1. Create a secret with your API Access Token:

Replace the placeholder string starting with a05... with your own secret and save it as secret.yml:

apiVersion: v1
kind: Secret
metadata:
  name: cloudscale
  namespace: kube-system
stringData:
  access-token: "a05dd2f26b9b9ac2asdas__REPLACE_ME____123cb5d1ec17513e06da"

and create the secret using kubectl:

$ kubectl create -f ./secret.yml
secret "cloudscale" created

You should now see the cloudscale secret in the kube-system namespace along with other secrets:

$ kubectl -n kube-system get secrets
NAME                  TYPE                                  DATA      AGE
default-token-jskxx   kubernetes.io/service-account-token   3         18h
cloudscale            Opaque                                1         18h

2. Deploy the CSI plugin and sidecars:

Before you continue, be sure to check out a tagged release. Always use the latest stable version. For example, to use the latest stable version (v1.0.0), execute the following command:

$ kubectl apply -f

There are also dev images available:

$ kubectl apply -f

The storage classes cloudscale-volume-ssd and cloudscale-volume-bulk will be created. The storage class cloudscale-volume-ssd is set to "default" for dynamic provisioning. If you're using multiple storage classes, you might want to remove the default annotation and re-deploy. This is based on the recommended mechanism of deploying CSI drivers on Kubernetes.
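The default marker is the standard Kubernetes storageclass.kubernetes.io/is-default-class annotation. A hedged sketch for clearing it is shown below; the kubectl call is left as a comment because it needs a live cluster:

```shell
# Patch payload that turns off the default-class marker on a StorageClass.
patch='{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
echo "$patch"

# Against a live cluster, something like:
# kubectl patch storageclass cloudscale-volume-ssd -p "$patch"
```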

Note that the deployment proposal to Kubernetes is still a work in progress and not all of the written features are implemented. When in doubt, open an issue or ask in #sig-storage in Kubernetes Slack.

3. Test and verify:

Create a PersistentVolumeClaim. This makes sure a volume is created and provisioned on your behalf:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: cloudscale-volume-ssd

Check that a new PersistentVolume is created based on your claim:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM             STORAGECLASS            REASON    AGE
pvc-0879b207-9558-11e8-b6b4-5218f75c62b9   5Gi        RWO            Delete           Bound     default/csi-pvc   cloudscale-volume-ssd             3m

The above output means that the CSI plugin successfully created (provisioned) a new volume on your behalf. You should be able to see the newly created volume in the server detail view in the UI.

The volume is not attached to any node yet. It will only be attached to a node once a workload (e.g. a Pod) is scheduled to that node. Now let us create a Pod that refers to the above volume. When the Pod is created, the volume will be attached, formatted and mounted into the specified container:

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      command: [ "sleep", "1000000" ]
      volumeMounts:
      - mountPath: "/data"
        name: my-cloudscale-volume
  volumes:
    - name: my-cloudscale-volume
      persistentVolumeClaim:
        claimName: csi-pvc

Check if the pod is running successfully:

$ kubectl describe pods/my-csi-app

Write inside the app container:

$ kubectl exec -ti my-csi-app /bin/sh
/ # touch /data/hello-world
/ # exit
$ kubectl exec -ti my-csi-app /bin/sh
/ # ls /data
hello-world


Development

Requirements:

  • Go: min v1.10.x

After making your changes, run the unit tests:

$ make test

If you want to test your changes, create a new image with the version set to dev. Docker must be installed, and you may need to add your user to the docker group:

$ docker login --username=cloudscalech
$ VERSION=dev make publish

This will create a binary with version dev and push a Docker image to cloudscalech/cloudscale-csi-plugin:dev.

To run the integration tests, run the following:

$ KUBECONFIG=$(pwd)/kubeconfig make test-integration

Release a new version

To release a new version, first bump the version:

$ make bump-version

Make sure everything looks good. Create a new branch, commit all changes and push it:

$ git checkout -b new-release
$ git add .
$ git commit -m "Bump version"
$ git push origin new-release

After it's merged to master, create a new GitHub release from master with the version v1.0.0 and then publish a new Docker build:

$ git checkout master
$ make publish

This will create a binary with version v1.0.0 and push a Docker image to cloudscalech/cloudscale-csi-plugin:v1.0.0.


Contributing

At cloudscale.ch we value and love our community! If you have any issues or would like to contribute, feel free to open an issue or PR.
