Commit 82cde27: Update CSI Deployment

1. support CentOS8
2. add Fio example

weipengzhu committed Sep 23, 2020 (parent: 8cf7b77)
1 changed file: docs/zbs-csi-driver-deployment.md (179 additions, 22 deletions)
This topic explains how to install the ZBS CSI Driver with Kubernetes. Follow the steps below.

## Env

- CentOS7 / CentOS8

- Kubernetes v1.17 or higher

- ZBS v4.0.8-rc2 or higher

## Setup Kubernetes

If there is no Kubernetes cluster, please refer to [Installing Kubernetes](https://kubernetes.io/docs/setup/production-environment/tools/).

### Enable Kubernetes features

Enable the CSI-related features to ensure that the driver works normally.

After a feature graduates to GA, its feature gate is removed within the next few releases. Please refer to **[feature-gates](https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/)** to selectively enable features.

1. Enable feature gates on each `kube-apiserver`: `--feature-gates=CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,VolumePVCDataSource=true,ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true` and `--allow-privileged=true`.

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
containers:
- command:
- kube-apiserver
- --feature-gates=CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,VolumePVCDataSource=true,ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true
- --allow-privileged=true
```

2. Enable feature gates on each `kubelet`: `--feature-gates=CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,VolumePVCDataSource=true,ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true`.

```yaml
# /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --feature-gates=CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,VolumePVCDataSource=true,ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```
> **_Note:_ This configuration file is generated by kubeadm; for other installation methods, modify the corresponding configuration in the same way.**
3. Reload Config
```sh
$ systemctl daemon-reload
$ systemctl restart kubelet
```

4. Wait until kubelet and kube-apiserver are ready
```sh
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Mon 2020-09-23 14:36:18 CST;
...
$ # <suffix> is the name of the node running that kube-apiserver static pod
$ kubectl wait --for=condition=Ready pod/kube-apiserver-<suffix> -n kube-system

pod/kube-apiserver-<suffix> condition met
```
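The `--feature-gates` string is long and easy to mistype; this very commit fixes a duplicated `VolumePVCDataSource` entry in it. A small sketch that assembles the value from a list of gate names, so each gate appears exactly once:

```shell
# Build the --feature-gates value from a whitespace-separated list of gate
# names, enabling each exactly once.
gates="CSINodeInfo CSIDriverRegistry CSIBlockVolume VolumeSnapshotDataSource VolumePVCDataSource ExpandCSIVolumes ExpandInUsePersistentVolumes"
flag="--feature-gates=$(printf '%s=true,' $gates | sed 's/,$//')"
echo "$flag"
```

Paste the printed value into the kube-apiserver manifest and the kubelet drop-in shown above.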

### Deploy Common Snapshot Controller

The volume snapshot controller manages the snapshot CRDs, similar to how the pv/pvc controller manages volumes.
Regardless of the number of CSI drivers deployed on the cluster, there must be exactly one instance of the volume snapshot controller running, and one set of volume snapshot CRDs installed, per cluster.

1. Download the **[external-snapshotter repo](https://github.com/kubernetes-csi/external-snapshotter/tree/release-2.1)**

```sh
$ wget https://github.com/kubernetes-csi/external-snapshotter/archive/release-2.1.zip
$ unzip release-2.1.zip && cd external-snapshotter-release-2.1
```

2. Create Snapshot Beta CRD

```sh
$ kubectl create -f ./config/crd

```

3. Install Common Snapshot Controller

```sh
$ kubectl apply -f ./deploy/kubernetes/snapshot-controller
```

> **_Note:_ Replace the namespace in the manifests with the one you want for your controller, e.g. `kube-system`.**
4. Verify

```sh
$ watch kubectl get statefulset snapshot-controller -n <your-namespace>
NAME                  READY   AGE
snapshot-controller   1/1     32s
```
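Once the controller and CRDs are in place, snapshots are requested through the `snapshot.storage.k8s.io/v1beta1` API that release-2.1 installs. A hypothetical example for later reference: the names `zbs-csi-snapshot-class` and `fio-snapshot` are placeholders, the source PVC must already exist, and the `driver` field must match the provisioner of the CSI driver deployed later in this topic.

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: zbs-csi-snapshot-class
driver: zbs-csi-driver.iomesh.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: fio-snapshot
spec:
  volumeSnapshotClassName: zbs-csi-snapshot-class
  source:
    persistentVolumeClaimName: fio-pvc
```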

## Setup ZBS Cluster

1. Ensure that the kubernetes cluster can access the ZBS cluster through the access network

2. Configure `zbs-cluster-vip` in the access network segment

```sh
$ zbs-task vip set iscsi <zbs-cluster-vip>
```
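Before moving on, it can save debugging time to confirm that the VIP actually answers on the iSCSI port. A sketch using bash's `/dev/tcp`; the `127.0.0.1` value is a placeholder for your real `zbs-cluster-vip`, and iSCSI listens on TCP 3260:

```shell
# Probe TCP 3260 on the iSCSI portal; a 2-second timeout keeps it quick.
vip=127.0.0.1   # replace with your zbs-cluster-vip
if timeout 2 bash -c "echo > /dev/tcp/$vip/3260" 2>/dev/null; then
  status=reachable
else
  status=unreachable
fi
echo "iSCSI portal $vip:3260 is $status"
```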

## Setup open-iscsi

1. Install open-iscsi on each kubernetes node
```sh
$ yum install iscsi-initiator-utils
```

2. Ensure that the `node.startup` option in `/etc/iscsi/iscsid.conf` is `manual`
```sh
$ sed -i 's/^node.startup = automatic$/node.startup = manual/' /etc/iscsi/iscsid.conf
```

3. Disable selinux
```sh
$ setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```

4. Enable and start `iscsid`

```sh
$ systemctl enable --now iscsid
```
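The `sed` expression in step 2 only matches the exact default line, so it is worth rehearsing it on a copy before touching the real `/etc/iscsi/iscsid.conf`. A sketch with a throwaway file:

```shell
# Rehearse the node.startup edit on a temporary copy of iscsid.conf.
conf=$(mktemp)
printf 'node.startup = automatic\nnode.leading_login = No\n' > "$conf"
sed -i 's/^node.startup = automatic$/node.startup = manual/' "$conf"
result=$(grep '^node.startup' "$conf")
echo "$result"
rm -f "$conf"
```

If the option was already changed by hand, the pattern will not match and the file is left untouched, so check the `grep` output rather than the `sed` exit status.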

## Deploy zbs-csi-driver

1. Obtain the `kubernetes-cluster-id` from the cluster administrator or the cluster management system

> **_Note:_ `kubernetes-cluster-id` should be unique and cannot be modified**.
2. Download **[zbs-csi-driver](https://github.com/iomesh/zbs-csi-driver/archive/master.zip)**

```sh
$ wget https://github.com/iomesh/zbs-csi-driver/archive/master.zip && unzip master.zip && cd zbs-csi-driver-master
```

> **_Note:_ zbs-csi-driver is currently a private repo and cannot be downloaded yet**
3. Configure controller plugin

```yaml
# deploy/zbs-csi-driver.yaml
spec:
# for zbs-cluster-vip
hostNetwork: true
serviceAccountName: zbs-csi-controller-account
containers:
- name: zbs-csi-driver
image: iomesh/zbs-csi-driver:v0.1.1
args:
# ...
- "--deployment_mode=EXTERNAL"
```

4. Configure node plugin

If the OS is CentOS8, you need to mount the host's iSCSI lock directory by uncommenting the `iscsi-lock` volume and volumeMount below.

```yaml
# deploy/zbs-csi-driver.yaml
containers:
- name: zbs-csi-driver
image: iomesh/zbs-csi-driver:v0.1.1
args:
# ...
- "--cluster_id=kubernetes-cluster-id"
- "--iscsi_portal=zbs-cluster-vip:3260"
- "--deployment_mode=EXTERNAL"
volumeMounts:
# - name: iscsi-lock
# mountPath: /run/lock/iscsi
volumes:
# - name: iscsi-lock
# hostPath:
# path: /run/lock/iscsi
# type: Directory

```

> **_Note:_ For HCI Deployment, `deployment_mode` is `HCI` , `iscsi_portal` is `127.0.0.1:3260`**
5. Configure StorageClass

```yaml
# deploy/zbs-csi-driver.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zbs-csi-driver-default
provisioner: zbs-csi-driver.iomesh.com
reclaimPolicy: Retain
allowVolumeExpansion: true
parameters:
csi.storage.k8s.io/fstype: "ext4"
replicaFactor: "1"
thinProvision: "true"
```

6. Deploy

```sh
$ kubectl apply -f ./deploy
```

7. Verify

```sh
$ watch kubectl get pod -n iomesh-system
Every 2.0s: kubectl get pod -n iomesh-system Wed Sep 23 14:33:52 2020

NAME                                                READY   STATUS    RESTARTS   AGE
zbs-csi-driver-controller-plugin-5dbfb48d5c-2sk97   6/6     Running   0          42s
zbs-csi-driver-controller-plugin-5dbfb48d5c-cfhwt   6/6     Running   0          42s
zbs-csi-driver-controller-plugin-5dbfb48d5c-drl7s   6/6     Running   0          42s
zbs-csi-driver-node-plugin-25585                    3/3     Running   0          39s
zbs-csi-driver-node-plugin-fscsp                    3/3     Running   0          30s
zbs-csi-driver-node-plugin-g4c4v                    3/3     Running   0          39s

```
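Whether the commented `iscsi-lock` volume in the node plugin manifest needs to be enabled depends on each node's OS. A sketch that derives the decision from an os-release style `VERSION_ID`; the value here is hard-coded for illustration, on a real node read it with `. /etc/os-release`:

```shell
# Decide whether the iscsi-lock hostPath mount is needed (CentOS 8 and later).
version_id="8.2"            # e.g. $(. /etc/os-release; echo "$VERSION_ID")
major=${version_id%%.*}     # keep only the major version number
if [ "$major" -ge 8 ]; then
  verdict="enable the iscsi-lock volume"
else
  verdict="iscsi-lock volume not needed"
fi
echo "$verdict"
```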

## Example

### Fio
1. Apply the following manifest with `kubectl apply -f`
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: fio-pvc
spec:
storageClassName: zbs-csi-driver-default
volumeMode: Block
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
---
apiVersion: v1
kind: Pod
metadata:
name: fio
labels:
app: fio
spec:
volumes:
- name: fio-pvc
persistentVolumeClaim:
claimName: fio-pvc
containers:
- name: fio
image: clusterhq/fio-tool
command:
- tail
args:
- '-f'
- /dev/null
imagePullPolicy: IfNotPresent
volumeDevices:
- devicePath: /mnt/fio
name: fio-pvc
restartPolicy: Always
```

2. Wait until `fio-pvc` is bound and the fio pod is ready
```sh
$ watch kubectl get pvc fio-pvc
Every 2.0s: kubectl get pvc fio-pvc localhost.localdomain: Wed Sep 23 14:40:03 2020

NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
fio-pvc   Bound    pvc-d7916b34-50cd-49bd-86f9-5287db1265cb   30Gi       RWO            zbs-csi-driver-default   15s

$ kubectl wait --for=condition=Ready pod/fio
pod/fio condition met
```

3. Run test
```sh
$ kubectl exec -it fio sh

$ fio --name fio --filename=/mnt/fio --bs=256k --rw=write --ioengine=libaio --direct=1 --iodepth=128 --numjobs=1 --size=$(blockdev --getsize64 /mnt/fio)

$ fio --name fio --filename=/mnt/fio --bs=4k --rw=randread --ioengine=libaio --direct=1 --iodepth=128 --numjobs=1 --size=$(blockdev --getsize64 /mnt/fio)
```

4. Cleanup
```sh
$ kubectl delete pod fio
$ kubectl delete pvc fio-pvc
# You need to delete pv when reclaimPolicy is Retain
$ kubectl delete pv pvc-b0d74bab-2d1a-4727-a236-47c93840545f
```
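In the fio runs above, `--size=$(blockdev --getsize64 /mnt/fio)` resolves to the block device's size in bytes, so fio exercises the whole 30Gi volume. The arithmetic behind that value, as a sketch:

```shell
# blockdev --getsize64 prints the device size in bytes; for the 30Gi PVC
# requested above that is 30 * 1024^3 bytes.
size_bytes=$((30 * 1024 * 1024 * 1024))
echo "$size_bytes"                                  # the value fio receives via --size
echo "$((size_bytes / 1024 / 1024 / 1024))Gi"       # back to the Gi figure from the PVC
```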
