tidb-in-kubernetes: unify expressions of horizontal scaling (#1594)
junlan-zhang authored and lilin90 committed Oct 17, 2019
1 parent fbabc5f commit 3828246
Showing 2 changed files with 16 additions and 18 deletions.
17 changes: 8 additions & 9 deletions dev/tidb-in-kubernetes/scale-in-kubernetes.md
@@ -6,19 +6,19 @@ Category: how-to

# Scale TiDB in Kubernetes

- This document introduces how to horizontally and vertically scale up and down a TiDB cluster in Kubernetes.
+ This document introduces how to horizontally and vertically scale a TiDB cluster in Kubernetes.

## Horizontal scaling

- Horizontally scaling TiDB means that you scale TiDB up or down by adding or remove nodes in your pool of resources. When you scale a TiDB cluster, PD, TiKV, and TiDB are scaled up or down sequentially according to the values of their replicas. Scaling up operations add nodes based on the node ID in ascending order, while scaling down operations remove nodes based on the node ID in descending order.
+ Horizontally scaling TiDB means that you scale TiDB out or in by adding or removing nodes in your pool of resources. When you scale a TiDB cluster, PD, TiKV, and TiDB are scaled out or in sequentially according to the values of their replicas. Scaling out operations add nodes based on the node ID in ascending order, while scaling in operations remove nodes based on the node ID in descending order.

### Horizontal scaling operations

To perform a horizontal scaling operation:

1. Modify `pd.replicas`, `tidb.replicas`, `tikv.replicas` in the `value.yaml` file of the cluster to a desired value.

- 2. Run the `helm upgrade` command to scale up or down:
+ 2. Run the `helm upgrade` command to scale out or in:

{{< copyable "shell-regular" >}}

@@ -34,15 +34,14 @@ To perform a horizontal scaling operation:
watch kubectl -n <namespace> get pod -o wide
```

- When the number of Pods for all components reaches the preset value and all components are in the `Running` state, the horizontal scaling is completed.
+ When the number of Pods for all components reaches the preset value and all components go to the `Running` state, the horizontal scaling is completed.
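For example, scaling out TiKV from 3 to 5 instances could look like the following sketch (the release name, namespace, chart name, and `helm upgrade` invocation are assumptions; the exact command appears in the collapsed part of this diff):

```shell
# Sketch of a horizontal scale-out, assuming a release named "tidb-cluster"
# in the "tidb" namespace. First edit the replica counts in values.yaml, e.g.:
#   pd:
#     replicas: 3
#   tikv:
#     replicas: 5   # scaled out from 3
#   tidb:
#     replicas: 2

# Apply the new replica counts (chart name and flags are assumptions):
helm upgrade tidb-cluster pingcap/tidb-cluster -f values.yaml

# Watch the Pods until every component reaches its preset count and is Running:
watch kubectl -n tidb get pod -o wide
```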

> **Note:**
>
- > - The PD and TiKV components do not trigger scaling up and down operations during the rolling update.
- > - When the PD and TiKV components scale down, they call the corresponding interface to take offline the PD and TiKV nodes being deleted. This involves data migration operations, so it might take some time to finish the process.
- >     When the TiKV component scales in, it calls the PD interface to mark the corresponding TiKV instance as offline, and then migrates the data on it to other TiKV nodes. During the data migration, the TiKV Pod is still in the `Running` state, and the corresponding Pod is deleted only after the data migration is completed. The time consumed by scaling in depends on the amount of data on the TiKV instance to be scaled in. You can check whether TiKV is in the `Offline` state by running `kubectl get tidbcluster -n <namespace> <release-name> -o json | jq '.status.tikv.stores'`.
- > - The PVC of the deleted node is retained during the scaling down process, and because the PV's `Reclaim Policy` value is set to `Retain`, the data can be retrieved even if the PVC is deleted.
- > - The TiKV component does not support scale-out while a scale-in operation is in progress. Forcing a scale-out operation might cause anomalies in the cluster. If an anomaly already happens, refer to [TiKV Store is in Tombstone status abnormally](/dev/tidb-in-kubernetes/troubleshoot.md#tikv-store-is-in-tombstone-status-abnormally) to fix it.
+ > - The PD and TiKV components do not trigger scaling in and out operations during the rolling update.
+ > - When the TiKV component scales in, it calls the PD interface to mark the corresponding TiKV instance as offline, and then migrates the data on it to other TiKV nodes. During the data migration, the TiKV Pod is still in the `Running` state, and the corresponding Pod is deleted only after the data migration is completed. The time consumed by scaling in depends on the amount of data on the TiKV instance to be scaled in. You can check whether TiKV is in the `Offline` state by running `kubectl get tidbcluster -n <namespace> <release-name> -o json | jq '.status.tikv.stores'`.
+ > - When the PD and TiKV components scale in, the PVC of the deleted node is retained during the scaling in process. Because the PV's reclaim policy is changed to `Retain`, the data can still be retrieved even if the PVC is deleted.
+ > - The TiKV component does not support scale out while a scale-in operation is in progress. Forcing a scale-out operation might cause anomalies in the cluster. If an anomaly already happens, refer to [TiKV Store is in Tombstone status abnormally](/dev/tidb-in-kubernetes/troubleshoot.md#tikv-store-is-in-tombstone-status-abnormally) to fix it.
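The status check mentioned in the note can be run as follows (the release name and namespace are assumptions, and the store field names are assumed from the TidbCluster status; verify against your cluster):

```shell
# Inspect TiKV store states during a scale-in. A store being taken offline
# reports a state such as "Offline"; after data migration finishes it turns
# "Tombstone" and the corresponding Pod is deleted.
kubectl get tidbcluster -n tidb tidb-cluster -o json | jq '.status.tikv.stores'

# List just store IDs and states (field names are an assumption):
kubectl get tidbcluster -n tidb tidb-cluster -o json \
  | jq '.status.tikv.stores | to_entries[] | {id: .key, state: .value.state}'
```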
## Vertical scaling

17 changes: 8 additions & 9 deletions v3.0/tidb-in-kubernetes/scale-in-kubernetes.md
@@ -7,19 +7,19 @@ aliases: ['/docs/v3.0/how-to/scale/tidb-in-kubernetes/']

# Scale TiDB in Kubernetes

- This document introduces how to horizontally and vertically scale up and down a TiDB cluster in Kubernetes.
+ This document introduces how to horizontally and vertically scale a TiDB cluster in Kubernetes.

## Horizontal scaling

- Horizontally scaling TiDB means that you scale TiDB up or down by adding or remove nodes in your pool of resources. When you scale a TiDB cluster, PD, TiKV, and TiDB are scaled up or down sequentially according to the values of their replicas. Scaling up operations add nodes based on the node ID in ascending order, while scaling down operations remove nodes based on the node ID in descending order.
+ Horizontally scaling TiDB means that you scale TiDB out or in by adding or removing nodes in your pool of resources. When you scale a TiDB cluster, PD, TiKV, and TiDB are scaled out or in sequentially according to the values of their replicas. Scaling out operations add nodes based on the node ID in ascending order, while scaling in operations remove nodes based on the node ID in descending order.

### Horizontal scaling operations

To perform a horizontal scaling operation:

1. Modify `pd.replicas`, `tidb.replicas`, `tikv.replicas` in the `value.yaml` file of the cluster to a desired value.

- 2. Run the `helm upgrade` command to scale up or down:
+ 2. Run the `helm upgrade` command to scale out or in:

{{< copyable "shell-regular" >}}

@@ -35,15 +35,14 @@ To perform a horizontal scaling operation:
watch kubectl -n <namespace> get pod -o wide
```

- When the number of Pods for all components reaches the preset value and all components are in the `Running` state, the horizontal scaling is completed.
+ When the number of Pods for all components reaches the preset value and all components go to the `Running` state, the horizontal scaling is completed.
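As a quick completion check, the Pod phases can also be printed directly (a sketch; the namespace, release name, and label selector are assumptions):

```shell
# Print each Pod of the release together with its phase; scaling is done when
# the Pod count matches the replica values and every phase is "Running".
kubectl -n tidb get pod -l app.kubernetes.io/instance=tidb-cluster \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
```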

> **Note:**
>
- > - The PD and TiKV components do not trigger scaling up and down operations during the rolling update.
- > - When the PD and TiKV components scale down, they call the corresponding interface to take offline the PD and TiKV nodes being deleted. This involves data migration operations, so it might take some time to finish the process.
- >     When the TiKV component scales in, it calls the PD interface to mark the corresponding TiKV instance as offline, and then migrates the data on it to other TiKV nodes. During the data migration, the TiKV Pod is still in the `Running` state, and the corresponding Pod is deleted only after the data migration is completed. The time consumed by scaling in depends on the amount of data on the TiKV instance to be scaled in. You can check whether TiKV is in the `Offline` state by running `kubectl get tidbcluster -n <namespace> <release-name> -o json | jq '.status.tikv.stores'`.
- > - The PVC of the deleted node is retained during the scaling down process, and because the PV's `Reclaim Policy` value is set to `Retain`, the data can be retrieved even if the PVC is deleted.
- > - The TiKV component does not support scale-out while a scale-in operation is in progress. Forcing a scale-out operation might cause anomalies in the cluster. If an anomaly already happens, refer to [TiKV Store is in Tombstone status abnormally](/v3.0/tidb-in-kubernetes/troubleshoot.md#tikv-store-is-in-tombstone-status-abnormally) to fix it.
+ > - The PD and TiKV components do not trigger scaling in and out operations during the rolling update.
+ > - When the TiKV component scales in, it calls the PD interface to mark the corresponding TiKV instance as offline, and then migrates the data on it to other TiKV nodes. During the data migration, the TiKV Pod is still in the `Running` state, and the corresponding Pod is deleted only after the data migration is completed. The time consumed by scaling in depends on the amount of data on the TiKV instance to be scaled in. You can check whether TiKV is in the `Offline` state by running `kubectl get tidbcluster -n <namespace> <release-name> -o json | jq '.status.tikv.stores'`.
+ > - When the PD and TiKV components scale in, the PVC of the deleted node is retained during the scaling in process. Because the PV's reclaim policy is changed to `Retain`, the data can still be retrieved even if the PVC is deleted.
+ > - The TiKV component does not support scale out while a scale-in operation is in progress. Forcing a scale-out operation might cause anomalies in the cluster. If an anomaly already happens, refer to [TiKV Store is in Tombstone status abnormally](/v3.0/tidb-in-kubernetes/troubleshoot.md#tikv-store-is-in-tombstone-status-abnormally) to fix it.
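Because the PVCs of scaled-in nodes are retained, they can be listed and inspected after a scale-in (a sketch; the namespace and label selector are assumptions):

```shell
# List the PVCs that remain after scaling in TiKV (labels are an assumption):
kubectl get pvc -n tidb -l app.kubernetes.io/instance=tidb-cluster,app.kubernetes.io/component=tikv

# Confirm the backing PVs use the Retain reclaim policy:
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy
```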
## Vertical scaling

