
# NexentaStor CSI Driver


NexentaStor product page: https://nexenta.com/products/nexentastor.

This is a development branch; for the most recent stable version, see "Supported versions".

## Overview

The NexentaStor Container Storage Interface (CSI) Driver provides a CSI interface used by Container Orchestrators (CO) to manage the lifecycle of NexentaStor volumes over NFS and SMB protocols.

## Supported Kubernetes versions matrix

| Kubernetes version | NexentaStor 5.3+ |
| ------------------ | ---------------- |
| 1.13               | 1.1.0            |
| 1.14 & 1.15        | 1.2.0            |
| >=1.16             | 1.3.0            |
| >=1.20             | 1.3.1            |
| >=1.20             | master           |

Releases can be found at https://github.com/Nexenta/nexentastor-csi-driver/releases.

## Feature List

| Feature | Status | CSI Driver Version | CSI Spec Version | Kubernetes Version |
| ------- | ------ | ------------------ | ---------------- | ------------------ |
| Static Provisioning | GA | >= v1.0.0 | >= v1.0.0 | >=1.13 |
| Dynamic Provisioning | GA | >= v1.0.0 | >= v1.0.0 | >=1.13 |
| RW mode | GA | >= v1.0.0 | >= v1.0.0 | >=1.13 |
| RO mode | GA | >= v1.0.0 | >= v1.0.0 | >=1.13 |
| Creating and deleting snapshot | GA | >= v1.2.0 | >= v1.0.0 | >=1.17 |
| Provision volume from snapshot | GA | >= v1.2.0 | >= v1.0.0 | >=1.17 |
| Provision volume from another volume | GA | >= v1.3.0 | >= v1.0.0 | >=1.17 |
| List snapshots of a volume | Beta | >= v1.2.0 | >= v1.0.0 | >=1.17 |
| Expand volume | GA | >= v1.3.0 | >= v1.1.0 | >=1.16 |
| Access list for volume (NFS only) | GA | >= v1.3.0 | >= v1.0.0 | >=1.13 |
| Topology | Beta | >= v1.4.0 | >= v1.0.0 | >=1.17 |
| Raw block device | In development | future | >= v1.0.0 | >=1.14 |
| StorageClass Secrets | Beta | >= v1.3.0 | >= v1.0.0 | >=1.13 |
| Mount options | GA | >= v1.0.0 | >= v1.0.0 | >=1.13 |

## Requirements

  • The Kubernetes cluster must allow privileged pods; this flag must be set for the API server and the kubelet (instructions):
    --allow-privileged=true
    
  • The following feature gates are required for the API server and the kubelet (instructions):
    --feature-gates=VolumeSnapshotDataSource=true,VolumePVCDataSource=true,ExpandInUsePersistentVolumes=true,ExpandCSIVolumes=true,ExpandPersistentVolumes=true,Topology=true,CSINodeInfo=true
    
    If you are planning on using topology, the following feature gates are also required:
    ServiceTopology=true,CSINodeInfo=true
    
  • Mount propagation must be enabled; the Docker daemon for the cluster must allow shared mounts (instructions)
  • Depending on the preferred mount filesystem type, the following utilities must be installed on each Kubernetes node:
    # for NFS
    apt install -y rpcbind nfs-common
    # for SMB
    apt install -y rpcbind cifs-utils
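A quick way to confirm the mount helpers are in place on a node is to probe for them on the PATH. This is a minimal sketch; the binary names assume the standard Ubuntu packages listed above.

```shell
# check that the mount helpers the driver depends on are present on this node
# (binary names assume the standard Ubuntu packages listed above)
check_mount_utils() {
    for bin in mount.nfs mount.cifs rpcbind; do
        if command -v "$bin" >/dev/null 2>&1; then
            echo "$bin: OK"
        else
            echo "$bin: MISSING"
        fi
    done
}
check_mount_utils
```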

## Installation

  1. Create NexentaStor dataset for the driver, example: csiDriverPool/csiDriverDataset. By default, the driver will create filesystems in this dataset and mount them to use as Kubernetes volumes.

  2. Clone driver repository

    git clone https://github.com/Nexenta/nexentastor-csi-driver.git
    cd nexentastor-csi-driver
    git checkout master
  3. Edit deploy/kubernetes/nexentastor-csi-driver-config.yaml file. Driver configuration example:

    nexentastor_map:
      nstor-ssd:
        restIp: https://10.3.3.4:8443,https://10.3.3.5:8443     # [required] NexentaStor REST API endpoint(s)
        username: admin                                         # [required] NexentaStor REST API username
        password: p@ssword                                      # [required] NexentaStor REST API password
        defaultDataset: csiDriverPool/csiDriverDataset          # default 'pool/dataset' to use
        defaultDataIp: 20.20.20.21                              # default NexentaStor data IP or HA VIP
        defaultMountFsType: nfs                                 # default mount fs type [nfs|cifs]
        defaultMountOptions: noatime                            # default mount options (mount -o ...)
        zone: us-east                                           # zone to match kubernetes topology
      nstor-slow:
        restIp: https://10.3.4.4:8443,https://10.3.4.5:8443     # [required] NexentaStor REST API endpoint(s)
        username: admin                                         # [required] NexentaStor REST API username
        password: p@ssword                                      # [required] NexentaStor REST API password
        defaultDataset: csiDriverPool/csiDriverDataset          # default 'pool/dataset' to use
        defaultDataIp: 10.10.10.21                              # default NexentaStor data IP or HA VIP
        defaultMountFsType: nfs                                 # default mount fs type [nfs|cifs]
        defaultMountOptions: noatime                            # default mount options (mount -o ...)
      nstor-slow-NFSv4:
        restIp: https://10.3.3.14:8443,https://10.3.3.15:8443   # [required] NexentaStor REST API endpoint(s)
        username: admin                                         # [required] NexentaStor REST API username
        password: p@ssword                                      # [required] NexentaStor REST API password
        defaultDataset: otherPool/otherDataset                  # default 'pool/dataset' to use
        defaultDataIp: 11.11.22.33                              # default NexentaStor data IP or HA VIP
        defaultMountFsType: nfs                                 # default mount fs type [nfs|cifs]
        defaultMountOptions: vers=4                             # default mount options (mount -o ...)
    
    
    # for CIFS mounts:
    #defaultMountFsType: cifs                               # default mount fs type [nfs|cifs]
    #defaultMountOptions: username=admin,password=Nexenta@1 # username/password must be defined for CIFS

    Note: the keyword nexentastor_map followed by a cluster name of your choice MUST be used even if you are using only one NexentaStor cluster.

    All driver configuration options:

    | Name | Description | Required | Example |
    | ---- | ----------- | -------- | ------- |
    | restIp | NexentaStor REST API endpoint(s); use `,` to separate cluster nodes | yes | https://10.3.3.4:8443 |
    | username | NexentaStor REST API username | yes | admin |
    | password | NexentaStor REST API password | yes | p@ssword |
    | defaultDataset | parent dataset for driver's filesystems [pool/dataset] | no | csiDriverPool/csiDriverDataset |
    | defaultDataIp | NexentaStor data IP or HA VIP for mounting shares | yes for PV | 20.20.20.21 |
    | defaultMountFsType | mount filesystem type [nfs\|cifs] | no | cifs |
    | defaultMountOptions | NFS/CIFS mount options: `mount -o ...` (default: "") | no | NFS: noatime,nosuid; CIFS: username=admin,password=123 |
    | debug | print more logs (default: false) | no | true |
    | zone | zone to match topology.kubernetes.io/zone | no | |
    | v13Compatibility | flag to support volumes already created by driver version 1.3 | no | false |
    | mountPointPermissions | permissions to be set on the volume's mount point | no | 0777 |
    | insecureSkipVerify | skip TLS certificate checks when true (default: true) | no | false |

    Note: if parameter defaultDataset/defaultDataIp is not specified in driver configuration, then parameter dataset/dataIp must be specified in StorageClass configuration.

    Note: all default parameters (default*) may be overwritten in specific StorageClass configuration.

    Note: if defaultMountFsType is set to cifs then parameter defaultMountOptions must include CIFS username and password (username=admin,password=123).

    Note: if v13Compatibility is set to true, then the parameter zone must not be used, and v13Compatibility may be set for only one NexentaStor backend configuration per driver.

  4. Create a Kubernetes secret from the file:

    kubectl create secret generic nexentastor-csi-driver-config --from-file=deploy/kubernetes/nexentastor-csi-driver-config.yaml
  5. Register the driver with Kubernetes:

    kubectl apply -f deploy/kubernetes/nexentastor-csi-driver.yaml

The NexentaStor CSI driver's pods should be running after installation:

```shell
$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
nexentastor-csi-controller-0   3/3     Running   0          42s
nexentastor-csi-node-cwp4v     2/2     Running   0          42s
```
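For scripting an installation, it can help to block until the pods are actually Ready. The sketch below uses standard `kubectl wait`; the label selectors are assumptions, so check deploy/kubernetes/nexentastor-csi-driver.yaml for the labels your driver version actually sets.

```shell
# block until the driver pods report Ready after installation
# (label selectors are assumptions; verify them against the deploy manifest)
wait_for_driver_pods() {
    kubectl wait --for=condition=Ready pod -l app=nexentastor-csi-controller --timeout=120s
    kubectl wait --for=condition=Ready pod -l app=nexentastor-csi-node --timeout=120s
}
# wait_for_driver_pods   # run against your cluster
```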

## Upgrade driver

### Upgrade driver from version 1.3 to 1.4.x

To upgrade the NexentaStor CSI driver from version 1.3 to 1.4.x:

  1. Uninstall the current CSI driver from the Kubernetes cluster:

     kubectl delete -f deploy/kubernetes/nexentastor-csi-driver.yaml

     Note: this manifest should be taken from CSI driver version 1.3.

  2. Delete the current driver configuration:

     kubectl delete secret nexentastor-csi-driver-config

  3. Create a new configuration secret (see the Installation section above) and add the v13Compatibility = true parameter to the configuration section of the backend which has existing volumes on the Kubernetes cluster.

  4. Install the new version of the NexentaStor CSI driver.

## Usage

### Dynamically provisioned volumes

For dynamic volume provisioning, the administrator needs to set up a StorageClass pointing to the driver. In this case Kubernetes generates the volume name automatically (for example pvc-ns-cfc67950-fe3c-11e8-a3ca-005056b857f8). Default driver configuration may be overwritten in the parameters section:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexentastor-csi-driver-cs-nginx-dynamic
provisioner: nexentastor-csi-driver.nexenta.com
mountOptions:                        # list of options for `mount -o ...` command
#  - noatime                         #
#- matchLabelExpressions:            # use the following lines to configure topology by zones
#  - key: topology.kubernetes.io/zone
#    values:
#    - us-east
parameters:
  #configName: nstor-slow            # specify the exact NexentaStor appliance to provision volumes from
  #dataset: customPool/customDataset # to overwrite "defaultDataset" config property [pool/dataset]
  #dataIp: 20.20.20.253              # to overwrite "defaultDataIp" config property
  #mountFsType: nfs                  # to overwrite "defaultMountFsType" config property
  #mountOptions: noatime             # to overwrite "defaultMountOptions" config property
  #nfsAccessList: rw:10.3.196.93, ro:2.2.2.2, 3.3.3.3/10   # optional list to manage access by fqdn
```

#### Parameters

| Name | Description | Example |
| ---- | ----------- | ------- |
| dataset | parent dataset for driver's filesystems [pool/dataset] | customPool/customDataset |
| dataIp | NexentaStor data IP or HA VIP for mounting shares | 20.20.20.253 |
| mountFsType | mount filesystem type [nfs\|cifs] | cifs |
| mountOptions | NFS/CIFS mount options: `mount -o ...` | NFS: noatime; CIFS: username=admin,password=123 |
| configName | name of NexentaStor appliance from the config file | nstor-ssd |
| nfsAccessList | list of addresses to allow NFS access to; format: [accessMode]:[address]/[mask]; accessMode and mask are optional, the default mode is rw | rw:10.3.196.93, ro:2.2.2.2, 3.3.3.3/10 |

#### Example

Run an Nginx pod with a dynamically provisioned volume:

```shell
kubectl apply -f examples/kubernetes/nginx-dynamic-volume.yaml

# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-dynamic-volume.yaml
```
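To confirm provisioning succeeded, you can query the PVC phase. This helper is a sketch using standard `kubectl` jsonpath output; the PVC name to pass is whatever the example manifest defines.

```shell
# report the phase of a PVC; pass the PVC name created by the example manifest
pvc_phase() {
    kubectl get pvc "$1" -o jsonpath='{.status.phase}'
}
# pvc_phase <pvc-name>   # prints "Bound" once dynamic provisioning succeeds
```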

### Pre-provisioned volumes

The driver can use an already existing NexentaStor filesystem; in this case, StorageClass, PersistentVolume and PersistentVolumeClaim should be configured.

#### StorageClass configuration

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexentastor-csi-driver-cs-nginx-persistent
provisioner: nexentastor-csi-driver.nexenta.com
mountOptions:                        # list of options for `mount -o ...` command
#  - noatime                         #
parameters:
  #dataset: customPool/customDataset # to overwrite "defaultDataset" config property [pool/dataset]
  #dataIp: 20.20.20.253              # to overwrite "defaultDataIp" config property
  #mountFsType: nfs                  # to overwrite "defaultMountFsType" config property
  #mountOptions: noatime             # to overwrite "defaultMountOptions" config property
```

#### PersistentVolume configuration

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nexentastor-csi-driver-pv-nginx-persistent
  labels:
    name: nexentastor-csi-driver-pv-nginx-persistent
spec:
  storageClassName: nexentastor-csi-driver-cs-nginx-persistent
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: nexentastor-csi-driver.nexenta.com
    volumeHandle: nstor-ssd:csiDriverPool/csiDriverDataset/nginx-persistent
  #mountOptions:  # list of options for `mount` command
  #  - noatime    #
```

CSI parameters:

| Name | Description | Example |
| ---- | ----------- | ------- |
| driver | installed driver name "nexentastor-csi-driver.nexenta.com" | nexentastor-csi-driver.nexenta.com |
| volumeHandle | NS appliance name from config and path to existing NexentaStor filesystem [configName:pool/dataset/filesystem] | nstor-ssd:PoolA/datasetA/nginx |
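The volumeHandle format above can be composed mechanically. The helper below is a hypothetical convenience, not part of the driver; it just concatenates the three components in the documented configName:pool/dataset/filesystem form.

```shell
# hypothetical helper: compose a volumeHandle for a pre-provisioned PV
# following the documented format: configName:pool/dataset/filesystem
make_volume_handle() {
    config_name="$1"; dataset="$2"; filesystem="$3"
    printf '%s:%s/%s\n' "$config_name" "$dataset" "$filesystem"
}
make_volume_handle nstor-ssd csiDriverPool/csiDriverDataset nginx-persistent
# → nstor-ssd:csiDriverPool/csiDriverDataset/nginx-persistent
```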

#### PersistentVolumeClaim (pointing to the created PersistentVolume)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexentastor-csi-driver-pvc-nginx-persistent
spec:
  storageClassName: nexentastor-csi-driver-cs-nginx-persistent
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      # to create a 1-to-1 relationship between pod and persistent volume, use unique labels
      name: nexentastor-csi-driver-pv-nginx-persistent
```

#### Example

Run an Nginx server using the PersistentVolume.

Note: the pre-configured filesystem should exist on the NexentaStor appliance: csiDriverPool/csiDriverDataset/nginx-persistent.

```shell
kubectl apply -f examples/kubernetes/nginx-persistent-volume.yaml

# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-persistent-volume.yaml
```

### Cloned volumes

We can create a clone of an existing CSI volume. To do so, create a PersistentVolumeClaim with a dataSource spec pointing to the existing PVC that should be cloned. In this case Kubernetes generates the volume name automatically (for example pvc-ns-cfc67950-fe3c-11e8-a3ca-005056b857f8).

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexentastor-csi-driver-pvc-nginx-dynamic-clone
spec:
  storageClassName: nexentastor-csi-driver-cs-nginx-dynamic
  dataSource:
    kind: PersistentVolumeClaim
    apiGroup: ""
    name: nexentastor-csi-driver-pvc-nginx-dynamic # pvc name
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

#### Example

Run an Nginx pod with a dynamically provisioned volume:

```shell
kubectl apply -f examples/kubernetes/nginx-clone-volume.yaml

# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-clone-volume.yaml
```

### Snapshots

Note: this feature is an alpha feature.

```shell
# create snapshot class
kubectl apply -f examples/kubernetes/snapshot-class.yaml

# take a snapshot
kubectl apply -f examples/kubernetes/take-snapshot.yaml

# deploy nginx pod with volume restored from a snapshot
kubectl apply -f examples/kubernetes/nginx-snapshot-volume.yaml

# list snapshot classes
kubectl get volumesnapshotclasses.snapshot.storage.k8s.io

# list snapshots
kubectl get volumesnapshots.snapshot.storage.k8s.io

# list snapshot contents
kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
```
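A PVC restored from a snapshot follows the standard Kubernetes dataSource pattern. The sketch below is an assumption about what the repo's nginx-snapshot-volume.yaml roughly contains; the PVC and VolumeSnapshot names are hypothetical placeholders, while the StorageClass is the dynamic one from the Usage section.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexentastor-csi-driver-pvc-restored    # hypothetical name
spec:
  storageClassName: nexentastor-csi-driver-cs-nginx-dynamic
  dataSource:
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
    name: my-snapshot                          # hypothetical VolumeSnapshot name
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```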

## Checking TLS certificates

The default driver behavior is to skip certificate checks for all REST API calls. The v1.4.4 release introduces a new config parameter, insecureSkipVerify. When insecureSkipVerify is set to false, the driver will enforce certificate checking. To allow adding certificates, nexentastor-csi-driver.yaml has additional volumes added to the nexentastor-csi-controller deployment and the nexentastor-csi-node daemonset.

```yaml
            - name: certs-dir
              mountPropagation: HostToContainer
              mountPath: /usr/local/share/ca-certificates
        - name: certs-dir
          hostPath:
            path: /etc/ssl/  # change this to your tls certificates folder
            type: Directory
```

The /etc/ssl folder is the default certificate location for Ubuntu. Change this according to your OS configuration. If you only want to propagate a specific set of certificates instead of the whole cert folder from the host, you can put them in any folder on the host and set the path in the yaml file accordingly. Note that this must be done on every node of the Kubernetes cluster.
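As a minimal sketch for an Ubuntu node, installing a custom CA certificate looks like the following; the certificate filename is a hypothetical placeholder, and the destination path matches the mountPath shown above.

```shell
# sketch: install a custom CA certificate on an Ubuntu node so the driver can
# verify the NexentaStor REST API endpoint when insecureSkipVerify=false
# (the certificate filename is a hypothetical placeholder)
CERT_SRC=./nexentastor-ca.crt
CERT_DST=/usr/local/share/ca-certificates/nexentastor-ca.crt
if [ -f "$CERT_SRC" ]; then
    sudo cp "$CERT_SRC" "$CERT_DST"
    sudo update-ca-certificates    # refreshes /etc/ssl/certs on Ubuntu
else
    echo "certificate $CERT_SRC not found; copy your CA cert here first"
fi
```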

## Uninstall

Using the same files as for installation:

```shell
# delete driver
kubectl delete -f deploy/kubernetes/nexentastor-csi-driver.yaml

# delete secret
kubectl delete secret nexentastor-csi-driver-config
```

## Troubleshooting

  • Show installed drivers:
    kubectl get csidrivers
    kubectl describe csidrivers
  • Error:
    MountVolume.MountDevice failed for volume "pvc-ns-<...>" :
    driver name nexentastor-csi-driver.nexenta.com not found in the list of registered CSI drivers
    
    Make sure the kubelet is configured with --root-dir=/var/lib/kubelet; otherwise update the paths in the driver yaml file (all requirements).
  • "VolumeSnapshotDataSource" feature gate is disabled:
    vim /var/lib/kubelet/config.yaml
    # ```
    # featureGates:
    #   VolumeSnapshotDataSource: true
    # ```
    vim /etc/kubernetes/manifests/kube-apiserver.yaml
    # ```
    #     - --feature-gates=VolumeSnapshotDataSource=true
    # ```
  • Driver logs
    kubectl logs -f nexentastor-csi-controller-0 driver
    kubectl logs -f $(kubectl get pods | awk '/nexentastor-csi-node/ {print $1;exit}') driver
  • Show termination message in case driver failed to run:
    kubectl get pod nexentastor-csi-controller-0 -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
  • Configure Docker to trust insecure registries:
    # add `{"insecure-registries":["10.3.199.92:5000"]}` to:
    vim /etc/docker/daemon.json
    service docker restart

## Development

Commits should follow the Conventional Commits Spec. Commit messages which include feat: and fix: prefixes will be included in the CHANGELOG automatically.

### Build

```shell
# print variables and help
make

# build go app on local machine
make build

# build container (+ using build container)
make container-build

# update deps
~/go/bin/dep ensure
```

### Run

Without installation to a k8s cluster, only the version command works:

```shell
./bin/nexentastor-csi-driver --version
```

### Publish

```shell
# push the latest built container to the local registry (see `Makefile`)
make container-push-local

# push the latest built container to hub.docker.com
make container-push-remote
```

### Tests

The `test-all-*` targets run all tests. See the `Makefile` for more examples.

```shell
# Test options to be set before running tests:
# - NOCOLORS=true            # to run w/o colors
# - TEST_K8S_IP=10.3.199.250 # e2e k8s tests

# run all tests using local registry (`REGISTRY_LOCAL` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-local-image
# run all tests using hub.docker.com registry (`REGISTRY` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-remote-image

# run tests in container:
# - RSA keys from the host's ~/.ssh directory will be used by the container.
#   Make sure all remote hosts used in tests have the host's RSA key added as trusted
#   (ssh-copy-id -i ~/.ssh/id_rsa.pub user@host)
#
# run all tests using local registry (`REGISTRY_LOCAL` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-local-image-container
# run all tests using hub.docker.com registry (`REGISTRY` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-remote-image-container
```

End-to-end K8s test parameters:

```shell
# Tests install driver to k8s and run nginx pod with mounted volume
# "export NOCOLORS=true" to run w/o colors
go test tests/e2e/driver_test.go -v -count 1 \
    --k8sConnectionString="root@10.3.199.250" \
    --k8sDeploymentFile="../../deploy/kubernetes/nexentastor-csi-driver.yaml" \
    --k8sSecretFile="./_configs/driver-config-single-default.yaml"
```

All development happens in the master branch; when it is time to publish a new version, a new git tag should be created.

  1. Build and test the new version using local registry:

    # build development version:
    make container-build
    # publish to local registry
    make container-push-local
    # test plugin using local registry
    TEST_K8S_IP=10.3.199.250 make test-all-local-image-container
  2. To release a new version, run:

    VERSION=X.X.X make release

    This script does the following:

    • generates a new CHANGELOG.md
    • builds the driver container 'nexentastor-csi-driver'
    • requests login to hub.docker.com
    • publishes driver version 'nexenta/nexentastor-csi-driver:X.X.X' to hub.docker.com
    • creates a new git tag 'vX.X.X' and pushes it to the repository
  3. Update GitHub releases.