tidb-in-kubernetes: add local pv example #1623

Merged
merged 19 commits on Nov 22, 2019

Changes from 11 commits
21 changes: 0 additions & 21 deletions dev/tidb-in-kubernetes/deploy/tidb-operator.md
@@ -70,27 +70,6 @@ Refer to [Use Helm](/dev/tidb-in-kubernetes/reference/tools/in-kubernetes.md#use

Refer to [Local PV Configuration](/dev/tidb-in-kubernetes/reference/configuration/storage-class.md) to set up local persistent volumes in your Kubernetes cluster.

### Deploy local-static-provisioner

After mounting all data disks on Kubernetes nodes, you can deploy [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) that can automatically provision the mounted disks as Local PersistentVolumes.

{{< copyable "shell-regular" >}}

```shell
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
```

Check the Pod and PV status with the following commands:

{{< copyable "shell-regular" >}}

```shell
kubectl get po -n kube-system -l app=local-volume-provisioner && \
kubectl get pv | grep local-storage
```

The local-volume-provisioner creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375 GiB in size and that you need to manually alter the setup to create larger disks.

## Install TiDB Operator

TiDB Operator uses [CRD (Custom Resource Definition)](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` custom resource type, which is a one-time job in your Kubernetes cluster.
155 changes: 149 additions & 6 deletions dev/tidb-in-kubernetes/reference/configuration/storage-class.md
@@ -1,6 +1,6 @@
---
title: Persistent Storage Class Configuration in Kubernetes
summary: Learn how to Configure local PVs and network PVs.
summary: Learn how to configure local PVs and network PVs.
category: reference
aliases: ['/docs/dev/tidb-in-kubernetes/reference/configuration/local-pv/']
---
@@ -67,22 +67,165 @@ After volume expansion is enabled, expand the PV using the following method:

## Local PV configuration

Kubernetes currently supports statically allocated local storage. To create a local storage object, use local-volume-provisioner in the [local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) repository. The procedure is as follows:
Kubernetes currently supports statically allocated local storage. To create a local storage object, use `local-volume-provisioner` in the [local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) repository. The procedure is as follows:

1. Allocate local storage in the nodes of the TiKV cluster. See also [Manage Local Volumes in Kubernetes Cluster](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md).
1. Pre-allocate local storage in cluster nodes. See the [operation guide](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md) provided by Kubernetes.

2. Deploy local-volume-provisioner. See also [Install local-volume-provisioner with helm](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/tree/master/helm).
2. Deploy `local-volume-provisioner`.

{{< copyable "shell-regular" >}}

```shell
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
```

Check the Pod and PV status with the following commands:

{{< copyable "shell-regular" >}}

```shell
kubectl get po -n kube-system -l app=local-volume-provisioner && \
kubectl get pv | grep local-storage
```

`local-volume-provisioner` creates a PV for each mount point under the discovery directory. Note that on GKE, `local-volume-provisioner` creates a local volume of only 375 GiB in size by default.
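
To confirm the capacity of the created PVs (for example, the 375 GiB default on GKE), a query such as the following can be used; the custom columns are standard PersistentVolume fields rather than settings specific to this manifest:

{{< copyable "shell-regular" >}}

```shell
kubectl get pv -o custom-columns=NAME:.metadata.name,CAPACITY:.spec.capacity.storage,STORAGECLASS:.spec.storageClassName
```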

For more information, refer to [Kubernetes local storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) and [local-static-provisioner document](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#overview).

### Best practices

- The path of a local PV is the unique identifier for the local volume. To avoid conflicts, it is recommended to use the UUID of the device to generate a unique path (see the sketch after this list).
- For I/O isolation, a dedicated physical disk per volume is recommended to ensure hardware-based isolation.
- For capacity isolation, a dedicated partition per volume is recommended.
- For capacity isolation, a dedicated partition or physical disk per volume is recommended.
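
As a minimal sketch of the UUID-based layout (the device name `/dev/sdc1` and the ext4 filesystem are assumptions, not values taken from this document), a formatted disk can be mounted under a path derived from its UUID:

```shell
# Assumed device: /dev/sdc1, already formatted as ext4.
DISK_UUID=$(sudo blkid -s UUID -o value /dev/sdc1)
sudo mkdir -p /mnt/disks/${DISK_UUID}
sudo mount -t ext4 /dev/sdc1 /mnt/disks/${DISK_UUID}
# Reference the UUID rather than the device name so the mount survives reboots.
echo "UUID=${DISK_UUID} /mnt/disks/${DISK_UUID} ext4 defaults 0 2" | sudo tee -a /etc/fstab
```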

Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md) for more information on local PV in Kubernetes.

## Disk mount examples

If components such as monitoring, TiDB Binlog, and `tidb-backup` use a local disk to store data, you can mount a SAS disk and create a separate `StorageClass` for each of them. The procedures are as follows:

- For a disk storing monitoring data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. Then, create multiple directories on the disk and bind mount them into the `/mnt/disks` directory (see the bind mount sketch after this list). After that, create the `local-storage` `StorageClass` for them to use.

    > **Note:**
    >
    > In this operation, the number of directories depends on the planned number of TiDB clusters. Each directory has a corresponding PV created. The monitoring data for each TiDB cluster uses 1 PV.

- For a disk storing TiDB Binlog and backup data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk first. Then, create multiple directories on the disk and bind mount them into the `/mnt/backup` directory. Finally, create the `backup-storage` `StorageClass` for them to use.

    > **Note:**
    >
    > In this operation, the number of directories depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and the backup method. Each directory has a corresponding PV created. Each Pump uses 1 PV and each Drainer uses 1 PV. Each [Ad-hoc full backup](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#ad-hoc-full-backup) uses 1 PV, and all [scheduled full backups](/dev/tidb-in-kubernetes/maintain/backup-and-restore.md#scheduled-full-backup) use 1 PV.

- For a disk storing PD data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk first. Then, create multiple directories on the disk and bind mount them into the `/mnt/sharedssd` directory. Finally, create the `shared-ssd-storage` `StorageClass` for them to use.

    > **Note:**
    >
    > In this operation, the number of directories depends on the planned number of TiDB clusters, and the number of PDs in each cluster. Each directory has a corresponding PV created and each PD uses 1 PV.

- For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) it into the `/mnt/ssd` directory, and create the `ssd-storage` `StorageClass` for it to use.

Based on the disk mounts above, you need to modify the [`local-volume-provisioner` YAML file](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) accordingly, configure the discovery directories, and create the necessary `StorageClass` objects. Here is an example of a modified YAML file:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "local-storage"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "ssd-storage"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "shared-ssd-storage"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "backup-storage"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: kube-system
data:
  nodeLabelsForPV: |
    - kubernetes.io/hostname
  storageClassMap: |
    shared-ssd-storage:
      hostDir: /mnt/sharedssd
      mountDir: /mnt/sharedssd
    ssd-storage:
      hostDir: /mnt/ssd
      mountDir: /mnt/ssd
    local-storage:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
    backup-storage:
      hostDir: /mnt/backup
      mountDir: /mnt/backup
---

......

          volumeMounts:

            ......

            - mountPath: /mnt/ssd
              name: local-ssd
              mountPropagation: "HostToContainer"
            - mountPath: /mnt/sharedssd
              name: local-sharedssd
              mountPropagation: "HostToContainer"
            - mountPath: /mnt/disks
              name: local-disks
              mountPropagation: "HostToContainer"
            - mountPath: /mnt/backup
              name: local-backup
              mountPropagation: "HostToContainer"
      volumes:

        ......

        - name: local-ssd
          hostPath:
            path: /mnt/ssd
        - name: local-sharedssd
          hostPath:
            path: /mnt/sharedssd
        - name: local-disks
          hostPath:
            path: /mnt/disks
        - name: local-backup
          hostPath:
            path: /mnt/backup
......

```

Finally, execute the `kubectl apply` command to deploy `local-volume-provisioner`.

{{< copyable "shell-regular" >}}

```shell
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
```

When you later create a TiDB cluster or perform a backup, configure the corresponding `StorageClass` for it to use.
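
For illustration only, a hypothetical PVC (not part of any TidbCluster specification) requests a volume from one of these classes by naming it in `storageClassName`:

```shell
# Hypothetical claim against the ssd-storage class; the claim name and size are placeholders.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-ssd-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd-storage
  resources:
    requests:
      storage: 100Gi
EOF
```

Because the classes use `volumeBindingMode: WaitForFirstConsumer`, such a claim stays `Pending` until a Pod that uses it is scheduled.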

## Data safety

In general, after a PVC is no longer used and deleted, the PV bound to it is reclaimed and placed in the resource pool for scheduling by the provisioner. To avoid accidental data loss, you can globally configure the reclaim policy of the `StorageClass` to `Retain` or only change the reclaim policy of a single PV to `Retain`. With the `Retain` policy, a PV is not automatically reclaimed.
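
For example, to switch a single PV to the `Retain` policy (`<pv-name>` is a placeholder):

{{< copyable "shell-regular" >}}

```shell
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```
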
@@ -125,4 +268,4 @@ When the reclaim policy of PVs is set to `Retain`, if the data of a PV can be de
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```

For more details, refer to [Change the Reclaim Policy of a PersistentVolume](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/).
For more details, refer to [Change the Reclaim Policy of a PersistentVolume](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/).
21 changes: 0 additions & 21 deletions v3.0/tidb-in-kubernetes/deploy/tidb-operator.md
@@ -71,27 +71,6 @@ Refer to [Use Helm](/v3.0/tidb-in-kubernetes/reference/tools/in-kubernetes.md#us

Refer to [Local PV Configuration](/v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md#local-pv-configuration) to set up local persistent volumes in your Kubernetes cluster.

### Deploy local-static-provisioner

After mounting all data disks on Kubernetes nodes, you can deploy [local-volume-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) that can automatically provision the mounted disks as Local PersistentVolumes.

{{< copyable "shell-regular" >}}

```shell
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
```

Check the Pod and PV status with the following commands:

{{< copyable "shell-regular" >}}

```shell
kubectl get po -n kube-system -l app=local-volume-provisioner && \
kubectl get pv | grep local-storage
```

The local-volume-provisioner creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375 GiB in size and that you need to manually alter the setup to create larger disks.

## Install TiDB Operator

TiDB Operator uses [CRD (Custom Resource Definition)](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` custom resource type, which is a one-time job in your Kubernetes cluster.
128 changes: 125 additions & 3 deletions v3.0/tidb-in-kubernetes/reference/configuration/storage-class.md
@@ -67,11 +67,28 @@ After volume expansion is enabled, expand the PV using the following method:

## Local PV configuration

Kubernetes currently supports statically allocated local storage. To create a local storage object, use local-volume-provisioner in the [local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) repository. The procedure is as follows:
Kubernetes currently supports statically allocated local storage. To create a local storage object, use `local-volume-provisioner` in the [local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner) repository. The procedure is as follows:

1. Allocate local storage in the nodes of the TiKV cluster. See also [Manage Local Volumes in Kubernetes Cluster](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md).
1. Pre-allocate local storage in cluster nodes. See the [operation guide](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md) provided by Kubernetes.

2. Deploy local-volume-provisioner. See also [Install local-volume-provisioner with helm](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/tree/master/helm).
2. Deploy `local-volume-provisioner`.

{{< copyable "shell-regular" >}}

```shell
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
```

Check the Pod and PV status with the following commands:

{{< copyable "shell-regular" >}}

```shell
kubectl get po -n kube-system -l app=local-volume-provisioner && \
kubectl get pv | grep local-storage
```

`local-volume-provisioner` creates a volume for each mounted disk. Note that on GKE, this will create local volumes of only 375 GiB in size.

For more information, refer to [Kubernetes local storage](https://kubernetes.io/docs/concepts/storage/volumes/#local) and [local-static-provisioner document](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner#overview).

@@ -83,6 +100,111 @@ For more information, refer to [Kubernetes local storage](https://kubernetes.io/

Refer to [Best Practices](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/best-practices.md) for more information on local PV in Kubernetes.

## Disk mount examples

If components such as monitoring, TiDB Binlog, and `tidb-backup` use a local disk to store data, you can mount a SAS disk and create a separate `StorageClass` for each of them. For example:

- For a disk storing monitoring data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) directories on the disk into the `/mnt/disks` directory, and create a `local-storage` `StorageClass` for them.
- For a disk storing TiDB Binlog and backup data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) directories on the disk into the `/mnt/backup` directory, and create a `backup-storage` `StorageClass` for them.
- For a disk storing PD data, you can [bind mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) directories on the disk into the `/mnt/sharedssd` directory, and create a `shared-ssd-storage` `StorageClass` for them.
- For a disk storing TiKV data, you can [mount](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#use-a-whole-disk-as-a-filesystem-pv) it into the `/mnt/ssd` directory, and create an `ssd-storage` `StorageClass` for it (see the sketch after this list).
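
A minimal sketch of the whole-disk mount for TiKV data follows; the device name `/dev/nvme0n1` is an assumption, and formatting the disk erases its contents:

```shell
# Assumed dedicated SSD/NVMe device for TiKV data: /dev/nvme0n1 (formatting erases it).
sudo mkfs.ext4 /dev/nvme0n1
DISK_UUID=$(sudo blkid -s UUID -o value /dev/nvme0n1)
sudo mkdir -p /mnt/ssd/${DISK_UUID}
sudo mount -t ext4 /dev/nvme0n1 /mnt/ssd/${DISK_UUID}
echo "UUID=${DISK_UUID} /mnt/ssd/${DISK_UUID} ext4 defaults 0 2" | sudo tee -a /etc/fstab
```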

When you install `local-volume-provisioner`, you need to modify the `local-volume-provisioner` [YAML](https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml) file and create the necessary `StorageClass` objects before executing the `kubectl apply` command. The following is an example of the YAML file modified according to the disk mounts above:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "local-storage"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "ssd-storage"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "shared-ssd-storage"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: "backup-storage"
provisioner: "kubernetes.io/no-provisioner"
volumeBindingMode: "WaitForFirstConsumer"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: kube-system
data:
  nodeLabelsForPV: |
    - kubernetes.io/hostname
  storageClassMap: |
    shared-ssd-storage:
      hostDir: /mnt/sharedssd
      mountDir: /mnt/sharedssd
    ssd-storage:
      hostDir: /mnt/ssd
      mountDir: /mnt/ssd
    local-storage:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
    backup-storage:
      hostDir: /mnt/backup
      mountDir: /mnt/backup
---

......

          volumeMounts:

            ......

            - mountPath: /mnt/ssd
              name: local-ssd
              mountPropagation: "HostToContainer"
            - mountPath: /mnt/sharedssd
              name: local-sharedssd
              mountPropagation: "HostToContainer"
            - mountPath: /mnt/disks
              name: local-disks
              mountPropagation: "HostToContainer"
            - mountPath: /mnt/backup
              name: local-backup
              mountPropagation: "HostToContainer"
      volumes:

        ......

        - name: local-ssd
          hostPath:
            path: /mnt/ssd
        - name: local-sharedssd
          hostPath:
            path: /mnt/sharedssd
        - name: local-disks
          hostPath:
            path: /mnt/disks
        - name: local-backup
          hostPath:
            path: /mnt/backup
......

```

Finally, execute `kubectl apply` to install `local-volume-provisioner`.
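
For example, assuming the modified manifest is saved locally as `local-volume-provisioner.yaml` (the file name is an assumption):

{{< copyable "shell-regular" >}}

```shell
kubectl apply -f local-volume-provisioner.yaml
```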

When you later create a TiDB cluster or perform a backup, configure the corresponding `StorageClass` for it to use.

## Data safety

In general, after a PVC is no longer used and deleted, the PV bound to it is reclaimed and placed in the resource pool for scheduling by the provisioner. To avoid accidental data loss, you can globally configure the reclaim policy of the `StorageClass` to `Retain` or only change the reclaim policy of a single PV to `Retain`. With the `Retain` policy, a PV is not automatically reclaimed.