From bf52bd4f46813ce133232f28eacde2ba51651b85 Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Thu, 31 Oct 2019 17:41:18 +0800 Subject: [PATCH 01/11] tidb-in-kubernetes: add local pv example --- .../deploy/tidb-operator.md | 21 --- .../reference/configuration/storage-class.md | 131 +++++++++++++++++- 2 files changed, 127 insertions(+), 25 deletions(-) diff --git a/dev/tidb-in-kubernetes/deploy/tidb-operator.md b/dev/tidb-in-kubernetes/deploy/tidb-operator.md index 0f6e011b83eb..b75dcc1e6d0e 100644 --- a/dev/tidb-in-kubernetes/deploy/tidb-operator.md +++ b/dev/tidb-in-kubernetes/deploy/tidb-operator.md @@ -70,27 +70,6 @@ Refer to [Use Helm](/dev/tidb-in-kubernetes/reference/tools/in-kubernetes.md#use Refer to [Local PV Configuration](/dev/tidb-in-kubernetes/reference/configuration/storage-class.md) to set up local persistent volumes in your Kubernetes cluster. -### Deploy local-static-provisioner - -After mounting all data disks on Kubernetes nodes, you can deploy [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) that can automatically provision the mounted disks as Local PersistentVolumes. - -{{< copyable "shell-regular" >}} - -```shell -kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml -``` - -Check the Pod and PV status with the following commands: - -{{< copyable "shell-regular" >}} - -```shell -kubectl get po -n kube-system -l app=local-volume-provisioner && \ -kubectl get pv | grep local-storage -``` - -The local-volume-provisioner creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375GiB in size and that you need to manually alter the setup to create larger disks. - ## Install TiDB Operator TiDB Operator uses [CRD (Custom Resource Definition)](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` custom resource type, which is a one-time job in your Kubernetes cluster. diff --git a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md index 6a8780ad2e41..f2054402e9fb 100644 --- a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -67,11 +67,28 @@ After volume expansion is enabled, expand the PV using the following method: ## Local PV configuration -Kubernetes currently supports statically allocated local storage. To create a local storage object, use local-volume-provisioner in the [local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) repository. The procedure is as follows: +Kubernetes currently supports statically allocated local storage. To create a local storage object, use `local-volume-provisioner` in the [local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) repository. The procedure is as follows: -1. Allocate local storage in the nodes of the TiKV cluster. See also [Manage Local Volumes in Kubernetes Cluster](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md). +1. Pre-allocate local storage in cluster nodes. See the [operation guide](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md) provided by Kubernetes. 
-2. Deploy local-volume-provisioner. See also [Install local-volume-provisioner with helm](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/tree/master/helm). +2. Deploy `local-volume-provisioner`. + + {{< copyable "shell-regular" >}} + + ```shell + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml + ``` + + Check the Pod and PV status with the following commands: + + {{< copyable "shell-regular" >}} + + ```shell + kubectl get po -n kube-system -l app=local-volume-provisioner && \ + kubectl get pv | grep local-storage + ``` + + `local-volume-provisioner` creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375GiB in size and that you need to manually alter the setup to create larger disks. For more information, refer to [Kubernetes local storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) and [local-static-provisioner document](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#overview). @@ -83,6 +100,112 @@ For more information, refer to [Kubernetes local storage](https://kubernetes.io/ Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md) for more information on local PV in Kubernetes. +## Instance + +For monitoring purposes, components like TiDB Binlog and backup use a local disk to storage data. Or you can mount the data into the corresponding directory and create different `StorageClass` in a SAS disk. Specifically: + +- For a disk storing TiDB Binlog and backup data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/backup` directory. `local-storage` and `StorageClass` can be created afterwards. + +- For a disk storing PD data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/sharedssd` directory. `local-storage` and `StorageClass` can be created afterwards. + +- For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) them into `/mnt/ssd` directory. `local-storage` and `StorageClass` can be created afterwards. + +When you install `local-volume-provisioner`, you need to modify `local-volume-provisioner` [YAML](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml)file and create the necessary `StorageClass` before executing `kubectl apply` statement. 
The following is an example of a modified YAML file according to the mount above: + +``` +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "local-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "ssd-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "shared-ssd-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "backup-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: local-provisioner-config + namespace: kube-system +data: + nodeLabelsForPV: | + - kubernetes.io/hostname + storageClassMap: | + shared-ssd-storage: + hostDir: /mnt/sharedssd + mountDir: /mnt/sharedssd + ssd-storage: + hostDir: /mnt/ssd + mountDir: /mnt/ssd + local-storage: + hostDir: /mnt/disks + mountDir: /mnt/disks + backup-storage: + hostDir: /mnt/backup + mountDir: /mnt/backup +--- + +...... + + volumeMounts: + + ...... + + - mountPath: /mnt/ssd + name: local-ssd + mountPropagation: "HostToContainer" + - mountPath: /mnt/sharedssd + name: local-sharedssd + mountPropagation: "HostToContainer" + - mountPath: /mnt/disks + name: local-disks + mountPropagation: "HostToContainer" + - mountPath: /mnt/backup + name: local-backup + mountPropagation: "HostToContainer" + volumes: + + ...... + + - name: local-ssd + hostPath: + path: /mnt/ssd + - name: local-sharedssd + hostPath: + path: /mnt/sharedssd + - name: local-disks + hostPath: + path: /mnt/disks + - name: local-backup + hostPath: + path: /mnt/backup +...... + +``` + +Finally, execute `kubectl apply` statement to install `local-volume-provisioner`. + +When you later create a TiDB cluster or backup, configure the corresponding `StorageClass` before use. + ## Data safety In general, after a PVC is no longer used and deleted, the PV bound to it is reclaimed and placed in the resource pool for scheduling by the provisioner. To avoid accidental data loss, you can globally configure the reclaim policy of the `StorageClass` to `Retain` or only change the reclaim policy of a single PV to `Retain`. With the `Retain` policy, a PV is not automatically reclaimed. @@ -125,4 +248,4 @@ When the reclaim policy of PVs is set to `Retain`, if the data of a PV can be de kubectl patch pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' ``` -For more details, refer to [Change the Reclaim Policy of a PersistentVolume](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). \ No newline at end of file +For more details, refer to [Change the Reclaim Policy of a PersistentVolume](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). 
From afd3d7214e7a66b835b3b77a7b242621375cb766 Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Mon, 4 Nov 2019 14:18:53 +0800 Subject: [PATCH 02/11] Update storage-class.md --- dev/tidb-in-kubernetes/reference/configuration/storage-class.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md index f2054402e9fb..d10d7c87e802 100644 --- a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -110,7 +110,7 @@ For monitoring purposes, components like TiDB Binlog and backup use a local disk - For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) them into `/mnt/ssd` directory. `local-storage` and `StorageClass` can be created afterwards. -When you install `local-volume-provisioner`, you need to modify `local-volume-provisioner` [YAML](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml)file and create the necessary `StorageClass` before executing `kubectl apply` statement. The following is an example of a modified YAML file according to the mount above: +When you install `local-volume-provisioner`, you need to modify `local-volume-provisioner` [YAML](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) file and create the necessary `StorageClass` before executing `kubectl apply` statement. The following is an example of a modified YAML file according to the mount above: ``` apiVersion: storage.k8s.io/v1 From c532b6a8c6404f828bedb8cb1e9c57054cb630a0 Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Wed, 6 Nov 2019 15:17:09 +0800 Subject: [PATCH 03/11] address comments --- .../reference/configuration/storage-class.md | 19 +++++++++---------- 1 file changed, 9 insertions(+), 10 deletions(-) diff --git a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md index d10d7c87e802..b12adac75329 100644 --- a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -27,7 +27,7 @@ PVs are created automatically by the system administrator or volume provisioner. TiKV uses the Raft protocol to replicate data. When a node fails, PD automatically schedules data to fill the missing data replicas; TiKV requires low read and write latency, so local SSD storage is strongly recommended in the production environment. -PD also uses Raft to replicate data. PD is not an I/O-intensive application, but a database for storing cluster meta information, so a local SAS disk or network SSD storage such as EBS General Purpose SSD (gp2) volumes on AWS or SSD persistent disks on GCP can meet the requirements. +PD also uses Raft to replicate data. PD is not an I/O-intensive application, but a database for storing cluster meta information, so a local SAS drive or network SSD storage such as EBS General Purpose SSD (gp2) volumes on AWS or SSD persistent disks on GCP can meet the requirements. To ensure availability, it is recommended to use network storage for components such as TiDB monitoring, TiDB Binlog and `tidb-backup` because they do not have redundant replicas. 
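+
+As an illustration of the network storage recommended above, the following is a minimal sketch of a `StorageClass` backed by EBS General Purpose SSD (gp2) volumes that such components could reference. It assumes AWS with the in-tree `kubernetes.io/aws-ebs` provisioner and a Kubernetes version that supports volume expansion; the `ebs-gp2` name is a placeholder, and the provisioner and parameters need to be adjusted for other clouds:
+
+{{< copyable "shell-regular" >}}
+
+```shell
+kubectl apply -f - <<EOF
+# Sketch only: "ebs-gp2" is a placeholder name; adjust the provisioner and parameters for your cloud.
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: ebs-gp2
+provisioner: kubernetes.io/aws-ebs
+parameters:
+  type: gp2
+  fsType: ext4
+allowVolumeExpansion: true
+EOF
+```
+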
TiDB Binlog's Pump and Drainer components are I/O-intensive applications that require low read and write latency, so it is recommended to use high-performance network storage such as EBS Provisioned IOPS SSD (io1) volumes on AWS or SSD persistent disks on GCP. @@ -88,7 +88,7 @@ Kubernetes currently supports statically allocated local storage. To create a lo kubectl get pv | grep local-storage ``` - `local-volume-provisioner` creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375GiB in size and that you need to manually alter the setup to create larger disks. + `local-volume-provisioner` creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375GiB in size. For more information, refer to [Kubernetes local storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) and [local-static-provisioner document](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#overview). @@ -100,15 +100,14 @@ For more information, refer to [Kubernetes local storage](https://kubernetes.io/ Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md) for more information on local PV in Kubernetes. -## Instance +## Data mounting -For monitoring purposes, components like TiDB Binlog and backup use a local disk to storage data. Or you can mount the data into the corresponding directory and create different `StorageClass` in a SAS disk. Specifically: +If monitoring data, as well as TiDB Binlog and backup data are also stored on the local disk, you can mount them in a SAS drive and create different `StorageClass` for them. For example: -- For a disk storing TiDB Binlog and backup data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/backup` directory. `local-storage` and `StorageClass` can be created afterwards. - -- For a disk storing PD data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/sharedssd` directory. `local-storage` and `StorageClass` can be created afterwards. - -- For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) them into `/mnt/ssd` directory. `local-storage` and `StorageClass` can be created afterwards. +- For a drive storing monitoring data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/disks` directory, and create `local-storage` `StorageClass` for them. +- For a drive storing TiDB Binlog and backup data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/backup` directory, and create `backup-storage` `StorageClass` for them. 
+- For a drive storing PD data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/sharedssd` directory, and create `shared-ssd-storage` `StorageClass` for them. +- For a drive storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) them into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for them. When you install `local-volume-provisioner`, you need to modify `local-volume-provisioner` [YAML](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) file and create the necessary `StorageClass` before executing `kubectl apply` statement. The following is an example of a modified YAML file according to the mount above: @@ -202,7 +201,7 @@ data: ``` -Finally, execute `kubectl apply` statement to install `local-volume-provisioner`. +Finally, execute `kubectl apply` to install `local-volume-provisioner`. When you later create a TiDB cluster or backup, configure the corresponding `StorageClass` before use. From 87995aa8d9ed7f052b5e12ef78088efb2514193c Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Wed, 6 Nov 2019 15:33:51 +0800 Subject: [PATCH 04/11] Update storage-class.md --- dev/tidb-in-kubernetes/reference/configuration/storage-class.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md index b12adac75329..9762dd148c81 100644 --- a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -100,7 +100,7 @@ For more information, refer to [Kubernetes local storage](https://kubernetes.io/ Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md) for more information on local PV in Kubernetes. -## Data mounting +## Disk mount examples If monitoring data, as well as TiDB Binlog and backup data are also stored on the local disk, you can mount them in a SAS drive and create different `StorageClass` for them. For example: From 51823ab4331365cba99fa4267fc8f58980e52534 Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Fri, 8 Nov 2019 15:06:37 +0800 Subject: [PATCH 05/11] address comments and update v3.0 --- .../reference/configuration/storage-class.md | 14 +- .../deploy/tidb-operator.md | 21 --- .../reference/configuration/storage-class.md | 128 +++++++++++++++++- 3 files changed, 132 insertions(+), 31 deletions(-) diff --git a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md index 9762dd148c81..c32faaf52ef4 100644 --- a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -27,7 +27,7 @@ PVs are created automatically by the system administrator or volume provisioner. TiKV uses the Raft protocol to replicate data. When a node fails, PD automatically schedules data to fill the missing data replicas; TiKV requires low read and write latency, so local SSD storage is strongly recommended in the production environment. -PD also uses Raft to replicate data. 
PD is not an I/O-intensive application, but a database for storing cluster meta information, so a local SAS drive or network SSD storage such as EBS General Purpose SSD (gp2) volumes on AWS or SSD persistent disks on GCP can meet the requirements. +PD also uses Raft to replicate data. PD is not an I/O-intensive application, but a database for storing cluster meta information, so a local SAS disk or network SSD storage such as EBS General Purpose SSD (gp2) volumes on AWS or SSD persistent disks on GCP can meet the requirements. To ensure availability, it is recommended to use network storage for components such as TiDB monitoring, TiDB Binlog and `tidb-backup` because they do not have redundant replicas. TiDB Binlog's Pump and Drainer components are I/O-intensive applications that require low read and write latency, so it is recommended to use high-performance network storage such as EBS Provisioned IOPS SSD (io1) volumes on AWS or SSD persistent disks on GCP. @@ -88,7 +88,7 @@ Kubernetes currently supports statically allocated local storage. To create a lo kubectl get pv | grep local-storage ``` - `local-volume-provisioner` creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375GiB in size. + `local-volume-provisioner` creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375 GiB in size. For more information, refer to [Kubernetes local storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) and [local-static-provisioner document](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#overview). @@ -102,12 +102,12 @@ Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-s ## Disk mount examples -If monitoring data, as well as TiDB Binlog and backup data are also stored on the local disk, you can mount them in a SAS drive and create different `StorageClass` for them. For example: +If the components such as monitoring, TiDB Binlog, and `tidb-backup` use local disk to store data, you can mount them on a SAS disk and create separate `StorageClass` for use. For example: -- For a drive storing monitoring data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/disks` directory, and create `local-storage` `StorageClass` for them. -- For a drive storing TiDB Binlog and backup data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/backup` directory, and create `backup-storage` `StorageClass` for them. -- For a drive storing PD data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/sharedssd` directory, and create `shared-ssd-storage` `StorageClass` for them. -- For a drive storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) them into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for them. 
+- For a disk storing monitoring data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/disks` directory, and create `local-storage` `StorageClass` for them. +- For a disk storing TiDB Binlog and backup data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/backup` directory, and create `backup-storage` `StorageClass` for them. +- For a disk storing PD data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/sharedssd` directory, and create `shared-ssd-storage` `StorageClass` for them. +- For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) them into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for them. When you install `local-volume-provisioner`, you need to modify `local-volume-provisioner` [YAML](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) file and create the necessary `StorageClass` before executing `kubectl apply` statement. The following is an example of a modified YAML file according to the mount above: diff --git a/v3.0/tidb-in-kubernetes/deploy/tidb-operator.md b/v3.0/tidb-in-kubernetes/deploy/tidb-operator.md index 94e65128ee75..6b85b92855bb 100644 --- a/v3.0/tidb-in-kubernetes/deploy/tidb-operator.md +++ b/v3.0/tidb-in-kubernetes/deploy/tidb-operator.md @@ -71,27 +71,6 @@ Refer to [Use Helm](/v3.0/tidb-in-kubernetes/reference/tools/in-kubernetes.md#us Refer to [Local PV Configuration](/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md#local-pv-configuration) to set up local persistent volumes in your Kubernetes cluster. -### Deploy local-static-provisioner - -After mounting all data disks on Kubernetes nodes, you can deploy [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) that can automatically provision the mounted disks as Local PersistentVolumes. - -{{< copyable "shell-regular" >}} - -```shell -kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml -``` - -Check the Pod and PV status with the following commands: - -{{< copyable "shell-regular" >}} - -```shell -kubectl get po -n kube-system -l app=local-volume-provisioner && \ -kubectl get pv | grep local-storage -``` - -The local-volume-provisioner creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375GiB in size and that you need to manually alter the setup to create larger disks. - ## Install TiDB Operator TiDB Operator uses [CRD (Custom Resource Definition)](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` custom resource type, which is a one-time job in your Kubernetes cluster. 
diff --git a/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md b/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md index e61dbccc437a..4642d7f3afc1 100644 --- a/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -67,11 +67,28 @@ After volume expansion is enabled, expand the PV using the following method: ## Local PV configuration -Kubernetes currently supports statically allocated local storage. To create a local storage object, use local-volume-provisioner in the [local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) repository. The procedure is as follows: +Kubernetes currently supports statically allocated local storage. To create a local storage object, use `local-volume-provisioner` in the [local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) repository. The procedure is as follows: -1. Allocate local storage in the nodes of the TiKV cluster. See also [Manage Local Volumes in Kubernetes Cluster](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md). +1. Pre-allocate local storage in cluster nodes. See the [operation guide](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md) provided by Kubernetes. -2. Deploy local-volume-provisioner. See also [Install local-volume-provisioner with helm](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/tree/master/helm). +2. Deploy `local-volume-provisioner`. + + {{< copyable "shell-regular" >}} + + ```shell + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml + ``` + + Check the Pod and PV status with the following commands: + + {{< copyable "shell-regular" >}} + + ```shell + kubectl get po -n kube-system -l app=local-volume-provisioner && \ + kubectl get pv | grep local-storage + ``` + + `local-volume-provisioner` creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375 GiB in size. For more information, refer to [Kubernetes local storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) and [local-static-provisioner document](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#overview). @@ -83,6 +100,111 @@ For more information, refer to [Kubernetes local storage](https://kubernetes.io/ Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md) for more information on local PV in Kubernetes. +## Disk mount examples + +If the components such as monitoring, TiDB Binlog, and `tidb-backup` use local disk to store data, you can mount them on a SAS disk and create separate `StorageClass` for use. For example: + +- For a disk storing monitoring data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/disks` directory, and create `local-storage` `StorageClass` for them. 
+- For a disk storing TiDB Binlog and backup data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/backup` directory, and create `backup-storage` `StorageClass` for them. +- For a disk storing PD data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/sharedssd` directory, and create `shared-ssd-storage` `StorageClass` for them. +- For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) them into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for them. + +When you install `local-volume-provisioner`, you need to modify `local-volume-provisioner` [YAML](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) file and create the necessary `StorageClass` before executing `kubectl apply` statement. The following is an example of a modified YAML file according to the mount above: + +``` +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "local-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "ssd-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "shared-ssd-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "backup-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: local-provisioner-config + namespace: kube-system +data: + nodeLabelsForPV: | + - kubernetes.io/hostname + storageClassMap: | + shared-ssd-storage: + hostDir: /mnt/sharedssd + mountDir: /mnt/sharedssd + ssd-storage: + hostDir: /mnt/ssd + mountDir: /mnt/ssd + local-storage: + hostDir: /mnt/disks + mountDir: /mnt/disks + backup-storage: + hostDir: /mnt/backup + mountDir: /mnt/backup +--- + +...... + + volumeMounts: + + ...... + + - mountPath: /mnt/ssd + name: local-ssd + mountPropagation: "HostToContainer" + - mountPath: /mnt/sharedssd + name: local-sharedssd + mountPropagation: "HostToContainer" + - mountPath: /mnt/disks + name: local-disks + mountPropagation: "HostToContainer" + - mountPath: /mnt/backup + name: local-backup + mountPropagation: "HostToContainer" + volumes: + + ...... + + - name: local-ssd + hostPath: + path: /mnt/ssd + - name: local-sharedssd + hostPath: + path: /mnt/sharedssd + - name: local-disks + hostPath: + path: /mnt/disks + - name: local-backup + hostPath: + path: /mnt/backup +...... + +``` + +Finally, execute `kubectl apply` to install `local-volume-provisioner`. + +When you later create a TiDB cluster or backup, configure the corresponding `StorageClass` before use. + ## Data safety In general, after a PVC is no longer used and deleted, the PV bound to it is reclaimed and placed in the resource pool for scheduling by the provisioner. 
To avoid accidental data loss, you can globally configure the reclaim policy of the `StorageClass` to `Retain` or only change the reclaim policy of a single PV to `Retain`. With the `Retain` policy, a PV is not automatically reclaimed. From d8a6f1440f4a17c1c33dcad016cfd7b414640ea4 Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Thu, 14 Nov 2019 14:19:35 +0800 Subject: [PATCH 06/11] address comments --- .../reference/configuration/storage-class.md | 43 ++++++++++++++----- 1 file changed, 32 insertions(+), 11 deletions(-) diff --git a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md index c32faaf52ef4..e86b702c9602 100644 --- a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -1,6 +1,6 @@ --- title: Persistent Storage Class Configuration in Kubernetes -summary: Learn how to Configure local PVs and network PVs. +summary: Learn how to configure local PVs and network PVs. category: reference aliases: ['/docs/dev/tidb-in-kubernetes/reference/configuration/local-pv/'] --- @@ -88,7 +88,7 @@ Kubernetes currently supports statically allocated local storage. To create a lo kubectl get pv | grep local-storage ``` - `local-volume-provisioner` creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375 GiB in size. + `local-volume-provisioner` creates a PV for each mounted point under discovery directory. Note that on GKE, `local-volume-provisioner` creates a local volume of only 375 GiB in size by default. For more information, refer to [Kubernetes local storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) and [local-static-provisioner document](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#overview). @@ -96,20 +96,35 @@ For more information, refer to [Kubernetes local storage](https://kubernetes.io/ - The path of a local PV is the unique identifier for the local volume. To avoid conflicts, it is recommended to use the UUID of the device to generate a unique path. - For I/O isolation, a dedicated physical disk per volume is recommended to ensure hardware-based isolation. -- For capacity isolation, a dedicated partition per volume is recommended. +- For capacity isolation, a dedicated partition or physical disk per volume is recommended. Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md) for more information on local PV in Kubernetes. ## Disk mount examples -If the components such as monitoring, TiDB Binlog, and `tidb-backup` use local disk to store data, you can mount them on a SAS disk and create separate `StorageClass` for use. For example: +If the components such as monitoring, TiDB Binlog, and `tidb-backup` use a local disk to store data, you can mount a SAS disk and create separate `StorageClass` for them to use. Procedures are as follows: -- For a disk storing monitoring data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/disks` directory, and create `local-storage` `StorageClass` for them. 
-- For a disk storing TiDB Binlog and backup data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/backup` directory, and create `backup-storage` `StorageClass` for them. -- For a disk storing PD data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/sharedssd` directory, and create `shared-ssd-storage` `StorageClass` for them. -- For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) them into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for them. +- For a disk storing monitoring data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. Then, create multiple directories in disk, and bind mount them into `/mnt/disks` directory. After that, create `local-storage` `StorageClass` for them to use. -When you install `local-volume-provisioner`, you need to modify `local-volume-provisioner` [YAML](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) file and create the necessary `StorageClass` before executing `kubectl apply` statement. The following is an example of a modified YAML file according to the mount above: + >**Note:** + > + > In this operation, the number of directories depends on the planned number of TiDB clusters. Each directory has a corresponding PV created. Monitoring data for each TiDB cluster use the 1 PV. + +- For a disk storing TiDB Binlog and backup data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk first. Then, create multiple directories in disk, and bind mount them into `/mnt/backup` directory. Finally, create `backup-storage` `StorageClass` for them to use. + + >**Note:** + > + > In this operation, the number of directories depends on the planned number of TiDB clusters, the number of Pumps in each cluster and the backup method. Each directory has a corresponding PV created. Each Pump uses 1 PV and each Drainer uses 1 PV. Each [Ad-hoc full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#ad-hoc-full-backup) uses 1 PV, and all [scheduled full backups](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#scheduled-full-backup) use 1 PV. + +- For a disk storing PD data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk first. Then, create multiple directories in disk, and bind mount them into `/mnt/sharedssd` directory. Finally, create `shared-ssd-storage` `StorageClass` for them to use. + + >**Note:** + > + > In this operation, the number of directories depends on the planned number of TiDB clusters, and the number of PDs in each cluster. Each directory has a corresponding PV created and each PD uses 1 PV. 
+ +- For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) it into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for it to use. + +Based on the disk mounts above, you need to modify the [`local-volume-provisioner` YAML file](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) accordingly, configure discovery directory and create the necessary `StorageClass`. Here is an example of a modified YAML file: ``` apiVersion: storage.k8s.io/v1 @@ -201,9 +216,15 @@ data: ``` -Finally, execute `kubectl apply` to install `local-volume-provisioner`. +Finally, execute the `kubectl apply` command to deploy `local-volume-provisioner`. + +{{< copyable "shell-regular" >}} + +```shell +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml +``` -When you later create a TiDB cluster or backup, configure the corresponding `StorageClass` before use. +When you later create a TiDB cluster or do a backup, configure the corresponding `StorageClass` for use. ## Data safety From fc27f41be99a5a579f691cfb6e32f5f15a85226f Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Thu, 14 Nov 2019 14:27:37 +0800 Subject: [PATCH 07/11] Update storage-class.md --- dev/tidb-in-kubernetes/reference/configuration/storage-class.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md index e86b702c9602..752f26db3f9d 100644 --- a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -108,7 +108,7 @@ If the components such as monitoring, TiDB Binlog, and `tidb-backup` use a local >**Note:** > - > In this operation, the number of directories depends on the planned number of TiDB clusters. Each directory has a corresponding PV created. Monitoring data for each TiDB cluster use the 1 PV. + > In this operation, the number of directories depends on the planned number of TiDB clusters. Each directory has a corresponding PV created. The monitoring data for each TiDB cluster uses 1 PV. - For a disk storing TiDB Binlog and backup data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk first. Then, create multiple directories in disk, and bind mount them into `/mnt/backup` directory. Finally, create `backup-storage` `StorageClass` for them to use. From 0a9369ebe59d9672bc2f0544262dccabc981cc27 Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Thu, 21 Nov 2019 15:13:20 +0800 Subject: [PATCH 08/11] address comments --- .../reference/configuration/storage-class.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md index 752f26db3f9d..bcb77f69a9fe 100644 --- a/dev/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/dev/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -88,41 +88,41 @@ Kubernetes currently supports statically allocated local storage. 
To create a lo kubectl get pv | grep local-storage ``` - `local-volume-provisioner` creates a PV for each mounted point under discovery directory. Note that on GKE, `local-volume-provisioner` creates a local volume of only 375 GiB in size by default. + `local-volume-provisioner` creates a PV for each mounting point under discovery directory. Note that on GKE, `local-volume-provisioner` creates a local volume of only 375 GiB in size by default. For more information, refer to [Kubernetes local storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) and [local-static-provisioner document](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#overview). ### Best practices -- The path of a local PV is the unique identifier for the local volume. To avoid conflicts, it is recommended to use the UUID of the device to generate a unique path. -- For I/O isolation, a dedicated physical disk per volume is recommended to ensure hardware-based isolation. -- For capacity isolation, a dedicated partition or physical disk per volume is recommended. +- A local PV's path is its unique identifier. To avoid conflicts, it is recommended to use the UUID of the device to generate a unique path. +- For I/O isolation, a dedicated physical disk per PV is recommended to ensure hardware-based isolation. +- For capacity isolation, a partition per PV or a physical disk per PV is recommended. Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md) for more information on local PV in Kubernetes. ## Disk mount examples -If the components such as monitoring, TiDB Binlog, and `tidb-backup` use a local disk to store data, you can mount a SAS disk and create separate `StorageClass` for them to use. Procedures are as follows: +If the components such as monitoring, TiDB Binlog, and `tidb-backup` use local disks to store data, you can mount SAS disks and create separate `StorageClass` for them to use. Procedures are as follows: -- For a disk storing monitoring data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. Then, create multiple directories in disk, and bind mount them into `/mnt/disks` directory. After that, create `local-storage` `StorageClass` for them to use. +- For a disk storing monitoring data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/disks` directory. Then, create `local-storage` `StorageClass` for them to use. >**Note:** > - > In this operation, the number of directories depends on the planned number of TiDB clusters. Each directory has a corresponding PV created. The monitoring data for each TiDB cluster uses 1 PV. + > The number of directories you create depends on the planned number of TiDB clusters. For each directory, a corresponding PV will be created. The monitoring data in each TiDB cluster uses one PV. -- For a disk storing TiDB Binlog and backup data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk first. 
Then, create multiple directories in disk, and bind mount them into `/mnt/backup` directory. Finally, create `backup-storage` `StorageClass` for them to use. +- For a disk storing TiDB Binlog and backup data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/backup` directory. Then, create `backup-storage` `StorageClass` for them to use. >**Note:** > - > In this operation, the number of directories depends on the planned number of TiDB clusters, the number of Pumps in each cluster and the backup method. Each directory has a corresponding PV created. Each Pump uses 1 PV and each Drainer uses 1 PV. Each [Ad-hoc full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#ad-hoc-full-backup) uses 1 PV, and all [scheduled full backups](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#scheduled-full-backup) use 1 PV. + > The number of directories you create depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and your backup method. For each directory, a corresponding PV will be created. Each Pump uses one PV and each Drainer uses one PV. Each [Ad-hoc full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#ad-hoc-full-backup) task uses one PV, and all [scheduled full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#scheduled-full-backup) tasks share one PV. -- For a disk storing PD data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk first. Then, create multiple directories in disk, and bind mount them into `/mnt/sharedssd` directory. Finally, create `shared-ssd-storage` `StorageClass` for them to use. +- For a disk storing data in PD, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/sharedssd` directory. Then, create `shared-ssd-storage` `StorageClass` for them to use. >**Note:** > - > In this operation, the number of directories depends on the planned number of TiDB clusters, and the number of PDs in each cluster. Each directory has a corresponding PV created and each PD uses 1 PV. + > The number of directories you create depends on the planned number of TiDB clusters, and the number of PD servers in each cluster. For each directory, a corresponding PV will be created. Each PD server uses one PV. -- For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) it into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for it to use. +- For a disk storing data in TiKV, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) it into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for it to use. 
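+
+The bind-mount step described in the list above can be sketched as follows. It assumes the SAS disk for TiDB Binlog and backup data is already formatted and mounted at `/mnt/backup_disk` (a placeholder path) and that three PVs are planned; adjust the path and the loop count to your own layout:
+
+{{< copyable "shell-regular" >}}
+
+```shell
+# /mnt/backup_disk is a placeholder: one subdirectory per planned PV,
+# bind-mounted into the /mnt/backup discovery directory and persisted in fstab.
+for i in $(seq 1 3); do
+  sudo mkdir -p /mnt/backup_disk/vol${i} /mnt/backup/vol${i}
+  sudo mount --bind /mnt/backup_disk/vol${i} /mnt/backup/vol${i}
+  echo "/mnt/backup_disk/vol${i} /mnt/backup/vol${i} none bind 0 0" | sudo tee -a /etc/fstab
+done
+```
+
+The same pattern applies to the `/mnt/disks` and `/mnt/sharedssd` directories; only the number of subdirectories changes with the number of PVs planned for each `StorageClass`.
+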
Based on the disk mounts above, you need to modify the [`local-volume-provisioner` YAML file](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) accordingly, configure discovery directory and create the necessary `StorageClass`. Here is an example of a modified YAML file: @@ -224,7 +224,7 @@ Finally, execute the `kubectl apply` command to deploy `local-volume-provisioner kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml ``` -When you later create a TiDB cluster or do a backup, configure the corresponding `StorageClass` for use. +When you later deploy tidb clusters, deploy TiDB Binlog for incremental backups, or do full backups, configure the corresponding `StorageClass` for use. ## Data safety From 355bb3febdc1368dcb9f3f0d377819286888069a Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Thu, 21 Nov 2019 15:59:18 +0800 Subject: [PATCH 09/11] update v3.0 and v3.1 --- .../reference/configuration/storage-class.md | 46 ++++-- .../deploy/tidb-operator.md | 21 --- .../reference/configuration/storage-class.md | 147 +++++++++++++++++- 3 files changed, 173 insertions(+), 41 deletions(-) diff --git a/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md b/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md index 4642d7f3afc1..f246b2f770cc 100644 --- a/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -1,6 +1,6 @@ --- title: Persistent Storage Class Configuration in Kubernetes -summary: Learn how to Configure local PVs and network PVs. +summary: Learn how to configure local PVs and network PVs. category: reference aliases: ['/docs/v3.0/tidb-in-kubernetes/reference/configuration/local-pv/'] --- @@ -88,28 +88,42 @@ Kubernetes currently supports statically allocated local storage. To create a lo kubectl get pv | grep local-storage ``` - `local-volume-provisioner` creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375 GiB in size. + `local-volume-provisioner` creates a PV for each mounting point under discovery directory. Note that on GKE, `local-volume-provisioner` creates a local volume of only 375 GiB in size by default. For more information, refer to [Kubernetes local storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) and [local-static-provisioner document](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#overview). ### Best practices -- The path of a local PV is the unique identifier for the local volume. To avoid conflicts, it is recommended to use the UUID of the device to generate a unique path. -- For I/O isolation, a dedicated physical disk per volume is recommended to ensure hardware-based isolation. -- For capacity isolation, a dedicated partition per volume is recommended. +- A local PV's path is its unique identifier. To avoid conflicts, it is recommended to use the UUID of the device to generate a unique path. +- For I/O isolation, a dedicated physical disk per PV is recommended to ensure hardware-based isolation. +- For capacity isolation, a partition per PV or a physical disk per PV is recommended. Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md) for more information on local PV in Kubernetes. 
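+
+For the UUID recommendation above, a typical preparation sequence looks like the following sketch. `/dev/nvme1n1` and the `ext4` filesystem are assumptions; substitute the dedicated data disk and filesystem you actually use:
+
+{{< copyable "shell-regular" >}}
+
+```shell
+# /dev/nvme1n1 is a placeholder device. Format it, then mount it under a path derived from its UUID.
+sudo mkfs.ext4 /dev/nvme1n1
+DISK_UUID=$(sudo blkid -s UUID -o value /dev/nvme1n1)
+sudo mkdir -p /mnt/ssd/${DISK_UUID}
+sudo mount -t ext4 /dev/nvme1n1 /mnt/ssd/${DISK_UUID}
+echo "UUID=${DISK_UUID} /mnt/ssd/${DISK_UUID} ext4 defaults 0 2" | sudo tee -a /etc/fstab
+```
+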
## Disk mount examples -If the components such as monitoring, TiDB Binlog, and `tidb-backup` use local disk to store data, you can mount them on a SAS disk and create separate `StorageClass` for use. For example: +## Disk mount examples + +If the components such as monitoring, TiDB Binlog, and `tidb-backup` use local disks to store data, you can mount SAS disks and create separate `StorageClass` for them to use. Procedures are as follows: + +- For a disk storing monitoring data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/disks` directory. Then, create `local-storage` `StorageClass` for them to use. + + >**Note:** + > + > The number of directories you create depends on the planned number of TiDB clusters. For each directory, a corresponding PV will be created. The monitoring data in each TiDB cluster uses one PV. +- For a disk storing TiDB Binlog and backup data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/backup` directory. Then, create `backup-storage` `StorageClass` for them to use. + + >**Note:** + > + > The number of directories you create depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and your backup method. For each directory, a corresponding PV will be created. Each Pump uses one PV and each Drainer uses one PV. Each [Ad-hoc full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#ad-hoc-full-backup) task uses one PV, and all [scheduled full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#scheduled-full-backup) tasks share one PV. +- For a disk storing data in PD, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/sharedssd` directory. Then, create `shared-ssd-storage` `StorageClass` for them to use. -- For a disk storing monitoring data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/disks` directory, and create `local-storage` `StorageClass` for them. -- For a disk storing TiDB Binlog and backup data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/backup` directory, and create `backup-storage` `StorageClass` for them. -- For a disk storing PD data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) them into `/mnt/sharedssd` directory, and create `shared-ssd-storage` `StorageClass` for them. 
-- For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) them into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for them. + >**Note:** + > + > The number of directories you create depends on the planned number of TiDB clusters, and the number of PD servers in each cluster. For each directory, a corresponding PV will be created. Each PD server uses one PV. +- For a disk storing data in TiKV, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) it into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for it to use. -When you install `local-volume-provisioner`, you need to modify `local-volume-provisioner` [YAML](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) file and create the necessary `StorageClass` before executing `kubectl apply` statement. The following is an example of a modified YAML file according to the mount above: +Based on the disk mounts above, you need to modify the [`local-volume-provisioner` YAML file](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) accordingly, configure discovery directory and create the necessary `StorageClass`. Here is an example of a modified YAML file: ``` apiVersion: storage.k8s.io/v1 @@ -201,9 +215,15 @@ data: ``` -Finally, execute `kubectl apply` to install `local-volume-provisioner`. +Finally, execute the `kubectl apply` command to deploy `local-volume-provisioner`. + +{{< copyable "shell-regular" >}} + +```shell +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml +``` -When you later create a TiDB cluster or backup, configure the corresponding `StorageClass` before use. +When you later deploy tidb clusters, deploy TiDB Binlog for incremental backups, or do full backups, configure the corresponding `StorageClass` for use. ## Data safety diff --git a/v3.1/tidb-in-kubernetes/deploy/tidb-operator.md b/v3.1/tidb-in-kubernetes/deploy/tidb-operator.md index b326eeba9c94..0a7de2bed908 100644 --- a/v3.1/tidb-in-kubernetes/deploy/tidb-operator.md +++ b/v3.1/tidb-in-kubernetes/deploy/tidb-operator.md @@ -70,27 +70,6 @@ Refer to [Use Helm](/v3.1/tidb-in-kubernetes/reference/tools/in-kubernetes.md#us Refer to [Local PV Configuration](/v3.1/tidb-in-kubernetes/reference/configuration/storage-class.md#local-pv-configuration) to set up local persistent volumes in your Kubernetes cluster. -### Deploy local-static-provisioner - -After mounting all data disks on Kubernetes nodes, you can deploy [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) that can automatically provision the mounted disks as Local PersistentVolumes. - -{{< copyable "shell-regular" >}} - -```shell -kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml -``` - -Check the Pod and PV status with the following commands: - -{{< copyable "shell-regular" >}} - -```shell -kubectl get po -n kube-system -l app=local-volume-provisioner && \ -kubectl get pv | grep local-storage -``` - -The local-volume-provisioner creates a volume for each mounted disk. 
Note that on GKE, this will create local volumes of only 375GiB in size and that you need to manually alter the setup to create larger disks. - ## Install TiDB Operator TiDB Operator uses [CRD (Custom Resource Definition)](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` custom resource type, which is a one-time job in your Kubernetes cluster. diff --git a/v3.1/tidb-in-kubernetes/reference/configuration/storage-class.md b/v3.1/tidb-in-kubernetes/reference/configuration/storage-class.md index 6900b3fa06d1..958017f89a67 100644 --- a/v3.1/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/v3.1/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -1,6 +1,6 @@ --- title: Persistent Storage Class Configuration in Kubernetes -summary: Learn how to Configure local PVs and network PVs. +summary: Learn how to configure local PVs and network PVs. category: reference --- @@ -66,22 +66,155 @@ After volume expansion is enabled, expand the PV using the following method: ## Local PV configuration -Kubernetes currently supports statically allocated local storage. To create a local storage object, use local-volume-provisioner in the [local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) repository. The procedure is as follows: +Kubernetes currently supports statically allocated local storage. To create a local storage object, use `local-volume-provisioner` in the [local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) repository. The procedure is as follows: -1. Allocate local storage in the nodes of the TiKV cluster. See also [Manage Local Volumes in Kubernetes Cluster](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md). +1. Pre-allocate local storage in cluster nodes. See the [operation guide](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md) provided by Kubernetes. -2. Deploy local-volume-provisioner. See also [Install local-volume-provisioner with helm](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/tree/master/helm). +2. Deploy `local-volume-provisioner`. + + {{< copyable "shell-regular" >}} + + ```shell + kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml + ``` + + Check the Pod and PV status with the following commands: + + {{< copyable "shell-regular" >}} + + ```shell + kubectl get po -n kube-system -l app=local-volume-provisioner && \ + kubectl get pv | grep local-storage + ``` + + `local-volume-provisioner` creates a PV for each mounting point under discovery directory. Note that on GKE, `local-volume-provisioner` creates a local volume of only 375 GiB in size by default. For more information, refer to [Kubernetes local storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) and [local-static-provisioner document](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#overview). ### Best practices -- The path of a local PV is the unique identifier for the local volume. To avoid conflicts, it is recommended to use the UUID of the device to generate a unique path. -- For I/O isolation, a dedicated physical disk per volume is recommended to ensure hardware-based isolation. 
-- For capacity isolation, a dedicated partition per volume is recommended. +- A local PV's path is its unique identifier. To avoid conflicts, it is recommended to use the UUID of the device to generate a unique path. +- For I/O isolation, a dedicated physical disk per PV is recommended to ensure hardware-based isolation. +- For capacity isolation, a partition per PV or a physical disk per PV is recommended. Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md) for more information on local PV in Kubernetes. +## Disk mount examples + +If the components such as monitoring, TiDB Binlog, and `tidb-backup` use local disks to store data, you can mount SAS disks and create separate `StorageClass` for them to use. Procedures are as follows: + +- For a disk storing monitoring data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/disks` directory. Then, create `local-storage` `StorageClass` for them to use. + + >**Note:** + > + > The number of directories you create depends on the planned number of TiDB clusters. For each directory, a corresponding PV will be created. The monitoring data in each TiDB cluster uses one PV. +- For a disk storing TiDB Binlog and backup data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/backup` directory. Then, create `backup-storage` `StorageClass` for them to use. + + >**Note:** + > + > The number of directories you create depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and your backup method. For each directory, a corresponding PV will be created. Each Pump uses one PV and each Drainer uses one PV. Each [Ad-hoc full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#ad-hoc-full-backup) task uses one PV, and all [scheduled full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#scheduled-full-backup) tasks share one PV. +- For a disk storing data in PD, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/sharedssd` directory. Then, create `shared-ssd-storage` `StorageClass` for them to use. + + >**Note:** + > + > The number of directories you create depends on the planned number of TiDB clusters, and the number of PD servers in each cluster. For each directory, a corresponding PV will be created. Each PD server uses one PV. +- For a disk storing data in TiKV, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) it into `/mnt/ssd` directory, and create `ssd-storage` `StorageClass` for it to use. 
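The bind-mount step referenced in the list above is described in the linked operations guide; a minimal sketch, assuming the shared backup disk is already mounted at `/data/backup` (an illustrative path) and that three PVs are planned, could look like this:

{{< copyable "shell-regular" >}}

```shell
# Run as root. Assumptions: the backup disk is mounted at /data/backup,
# and /mnt/backup is the discovery directory for the backup-storage class.
for i in $(seq 1 3); do
  mkdir -p /data/backup/vol${i} /mnt/backup/vol${i}
  mount --bind /data/backup/vol${i} /mnt/backup/vol${i}
  # Persist each bind mount across reboots.
  echo "/data/backup/vol${i} /mnt/backup/vol${i} none bind 0 0" >> /etc/fstab
done
```

The same pattern applies to `/mnt/disks` and `/mnt/sharedssd`; only the directory count and the paths differ.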
+ +Based on the disk mounts above, you need to modify the [`local-volume-provisioner` YAML file](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) accordingly, configure discovery directory and create the necessary `StorageClass`. Here is an example of a modified YAML file: + +``` +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "local-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "ssd-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "shared-ssd-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: "backup-storage" +provisioner: "kubernetes.io/no-provisioner" +volumeBindingMode: "WaitForFirstConsumer" +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: local-provisioner-config + namespace: kube-system +data: + nodeLabelsForPV: | + - kubernetes.io/hostname + storageClassMap: | + shared-ssd-storage: + hostDir: /mnt/sharedssd + mountDir: /mnt/sharedssd + ssd-storage: + hostDir: /mnt/ssd + mountDir: /mnt/ssd + local-storage: + hostDir: /mnt/disks + mountDir: /mnt/disks + backup-storage: + hostDir: /mnt/backup + mountDir: /mnt/backup +--- +...... + volumeMounts: + ...... + - mountPath: /mnt/ssd + name: local-ssd + mountPropagation: "HostToContainer" + - mountPath: /mnt/sharedssd + name: local-sharedssd + mountPropagation: "HostToContainer" + - mountPath: /mnt/disks + name: local-disks + mountPropagation: "HostToContainer" + - mountPath: /mnt/backup + name: local-backup + mountPropagation: "HostToContainer" + volumes: + ...... + - name: local-ssd + hostPath: + path: /mnt/ssd + - name: local-sharedssd + hostPath: + path: /mnt/sharedssd + - name: local-disks + hostPath: + path: /mnt/disks + - name: local-backup + hostPath: + path: /mnt/backup +...... +``` + +Finally, execute the `kubectl apply` command to deploy `local-volume-provisioner`. + +{{< copyable "shell-regular" >}} + +```shell +kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml +``` + +When you later deploy tidb clusters, deploy TiDB Binlog for incremental backups, or do full backups, configure the corresponding `StorageClass` for use. + ## Data safety In general, after a PVC is no longer used and deleted, the PV bound to it is reclaimed and placed in the resource pool for scheduling by the provisioner. To avoid accidental data loss, you can globally configure the reclaim policy of the `StorageClass` to `Retain` or only change the reclaim policy of a single PV to `Retain`. With the `Retain` policy, a PV is not automatically reclaimed. 
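For the single-PV case mentioned in the paragraph above, the reclaim policy can be switched to `Retain` with a standard `kubectl patch`; the PV name below is a placeholder:

{{< copyable "shell-regular" >}}

```shell
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

The global variant described above, setting the reclaim policy on the `StorageClass` itself, follows the same idea but applies to every PV provisioned under that class.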
From fa004b023037dc8d67a7c30ebff304d59d541996 Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Fri, 22 Nov 2019 12:06:37 +0800 Subject: [PATCH 10/11] fix CI --- .../tidb-in-kubernetes/reference/configuration/storage-class.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md b/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md index f246b2f770cc..ffe93d329c84 100644 --- a/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -102,8 +102,6 @@ Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-s ## Disk mount examples -## Disk mount examples - If the components such as monitoring, TiDB Binlog, and `tidb-backup` use local disks to store data, you can mount SAS disks and create separate `StorageClass` for them to use. Procedures are as follows: - For a disk storing monitoring data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/disks` directory. Then, create `local-storage` `StorageClass` for them to use. From e7c7db06a2915cb2667839238dd4003a2fa2dfa5 Mon Sep 17 00:00:00 2001 From: anotherrachel Date: Fri, 22 Nov 2019 13:34:02 +0800 Subject: [PATCH 11/11] fix CI --- .../tidb-in-kubernetes/reference/configuration/storage-class.md | 2 +- .../tidb-in-kubernetes/reference/configuration/storage-class.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md b/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md index ffe93d329c84..1ab1b98a7b24 100644 --- a/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -113,7 +113,7 @@ If the components such as monitoring, TiDB Binlog, and `tidb-backup` use local d >**Note:** > - > The number of directories you create depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and your backup method. For each directory, a corresponding PV will be created. Each Pump uses one PV and each Drainer uses one PV. Each [Ad-hoc full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#ad-hoc-full-backup) task uses one PV, and all [scheduled full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#scheduled-full-backup) tasks share one PV. + > The number of directories you create depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and your backup method. For each directory, a corresponding PV will be created. Each Pump uses one PV and each Drainer uses one PV. Each [Ad-hoc full backup](/v3.0/tidb-in-kubernetes/maintain/backup-and-restore.md#ad-hoc-full-backup) task uses one PV, and all [scheduled full backup](/v3.0/tidb-in-kubernetes/maintain/backup-and-restore.md#scheduled-full-backup) tasks share one PV. - For a disk storing data in PD, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/sharedssd` directory. Then, create `shared-ssd-storage` `StorageClass` for them to use. 
>**Note:** diff --git a/v3.1/tidb-in-kubernetes/reference/configuration/storage-class.md b/v3.1/tidb-in-kubernetes/reference/configuration/storage-class.md index 958017f89a67..c882195bd3db 100644 --- a/v3.1/tidb-in-kubernetes/reference/configuration/storage-class.md +++ b/v3.1/tidb-in-kubernetes/reference/configuration/storage-class.md @@ -112,7 +112,7 @@ If the components such as monitoring, TiDB Binlog, and `tidb-backup` use local d >**Note:** > - > The number of directories you create depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and your backup method. For each directory, a corresponding PV will be created. Each Pump uses one PV and each Drainer uses one PV. Each [Ad-hoc full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#ad-hoc-full-backup) task uses one PV, and all [scheduled full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#scheduled-full-backup) tasks share one PV. + > The number of directories you create depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and your backup method. For each directory, a corresponding PV will be created. Each Pump uses one PV and each Drainer uses one PV. Each [Ad-hoc full backup](/v3.1/tidb-in-kubernetes/maintain/backup-and-restore.md#ad-hoc-full-backup) task uses one PV, and all [scheduled full backup](/v3.1/tidb-in-kubernetes/maintain/backup-and-restore.md#scheduled-full-backup) tasks share one PV. - For a disk storing data in PD, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories in disk, and bind mount them into `/mnt/sharedssd` directory. Then, create `shared-ssd-storage` `StorageClass` for them to use. >**Note:**