
*: update the Note format (#1079)

* *: update the Note format

* Update note format

* Update the Note format
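
The change is mechanical across every file listed below: the single-line note becomes a blockquote with the label on its own line, an empty blockquote line, and then the note text. A representative before/after, taken from the FAQ.md hunk:

```markdown
<!-- Old format: label and text on one line -->
> **Note:** "30m" means only cleaning up the data generated 30 minutes ago, which might consume some extra storage space.

<!-- New format: label on its own line, then the text in a separate blockquote paragraph -->
> **Note:**
>
> "30m" means only cleaning up the data generated 30 minutes ago, which might consume some extra storage space.
```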
CaitinChen authored and lilin90 committed Apr 24, 2019
1 parent 421bd39 commit 0cf5dddaa90da93c3d1ce7698e8c8f05839eb8e4
Showing with 898 additions and 316 deletions.
  1. +3 −1 FAQ.md
  2. +3 −1 benchmark/dm-v1-alpha.md
  3. +6 −2 benchmark/sysbench-v4.md
  4. +3 −1 benchmark/sysbench.md
  5. +3 −1 benchmark/tpch-v2.md
  6. +3 −1 benchmark/tpch.md
  7. +3 −1 dev-guide/deployment.md
  8. +3 −1 dev/how-to/get-started/local-cluster/install-from-dbdeployer.md
  9. +3 −1 dev/how-to/get-started/local-cluster/install-from-homebrew.md
  10. +18 −6 dev/how-to/get-started/local-cluster/install-from-kubernetes.md
  11. +12 −4 dev/how-to/get-started/read-historical-data.md
  12. +9 −3 op-guide/ansible-deployment-rolling-update.md
  13. +12 −4 op-guide/ansible-deployment-scale.md
  14. +42 −14 op-guide/ansible-deployment.md
  15. +3 −1 op-guide/ansible-operation.md
  16. +1 −1 op-guide/binary-deployment.md
  17. +9 −3 op-guide/gc.md
  18. +3 −1 op-guide/horizontal-scale.md
  19. +3 −1 op-guide/migration-incremental.md
  20. +3 −1 op-guide/migration-overview.md
  21. +6 −2 op-guide/migration.md
  22. +3 −1 op-guide/monitor.md
  23. +15 −5 op-guide/offline-ansible-deployment.md
  24. +3 −1 op-guide/security.md
  25. +3 −1 op-guide/tidb-v2.1-upgrade-guide.md
  26. +2 −2 sql/admin.md
  27. +9 −3 sql/character-set-support.md
  28. +3 −1 sql/date-and-time-types.md
  29. +3 −1 sql/ddl.md
  30. +3 −1 sql/encrypted-connections.md
  31. +6 −2 sql/mysql-compatibility.md
  32. +3 −1 sql/optimizer-hints.md
  33. +9 −3 sql/privilege.md
  34. +4 −2 sql/time-zone.md
  35. +3 −1 sql/transaction-isolation.md
  36. +3 −1 sql/transaction-model.md
  37. +3 −1 sql/transaction.md
  38. +3 −1 sql/variable.md
  39. +3 −1 tikv/deploy-tikv-docker-compose.md
  40. +33 −11 tikv/deploy-tikv-using-ansible.md
  41. +3 −1 tispark/tispark-quick-start-guide.md
  42. +3 −1 tispark/tispark-quick-start-guide_v1.x.md
  43. +1 −1 tools/binlog-slave-client.md
  44. +9 −3 tools/dm/cluster-operations.md
  45. +12 −4 tools/dm/data-synchronization-features.md
  46. +12 −4 tools/dm/deployment.md
  47. +3 −1 tools/dm/dm-upgrade.md
  48. +3 −1 tools/dm/dm-worker-intro.md
  49. +3 −1 tools/dm/manage-task.md
  50. +6 −2 tools/dm/manually-handling-sharding-ddl-locks.md
  51. +3 −1 tools/dm/monitor.md
  52. +7 −3 tools/dm/practice.md
  53. +3 −1 tools/dm/relay-log.md
  54. +3 −1 tools/dm/shard-merge-scenario.md
  55. +3 −1 tools/dm/shard-merge.md
  56. +3 −1 tools/dm/simple-synchronization-scenario.md
  57. +2 −1 tools/dm/skip-replace-sqls.md
  58. +3 −1 tools/dm/troubleshooting.md
  59. +3 −1 tools/lightning/checkpoints.md
  60. +3 −1 tools/lightning/deployment.md
  61. +6 −2 tools/pd-control.md
  62. +1 −1 tools/sync-diff-inspector.md
  63. +7 −3 tools/tidb-binlog-cluster.md
  64. +3 −1 tools/tidb-binlog-kafka.md
  65. +4 −2 tools/tikv-control.md
  66. +9 −3 v1.0/QUICKSTART.md
  67. +3 −1 v1.0/benchmark/sysbench.md
  68. +3 −1 v1.0/dev-guide/deployment.md
  69. +21 −7 v1.0/op-guide/ansible-deployment.md
  70. +3 −1 v1.0/op-guide/backup-restore.md
  71. +4 −2 v1.0/op-guide/binary-deployment.md
  72. +9 −3 v1.0/op-guide/history-read.md
  73. +6 −2 v1.0/op-guide/migration-overview.md
  74. +10 −4 v1.0/op-guide/migration.md
  75. +3 −1 v1.0/op-guide/monitor.md
  76. +6 −2 v1.0/op-guide/offline-ansible-deployment.md
  77. +3 −3 v1.0/op-guide/recommendation.md
  78. +6 −2 v1.0/op-guide/root-ansible-deployment.md
  79. +3 −1 v1.0/op-guide/security.md
  80. +2 −2 v1.0/sql/admin.md
  81. +9 −3 v1.0/sql/character-set-support.md
  82. +3 −1 v1.0/sql/ddl.md
  83. +3 −1 v1.0/sql/encrypted-connections.md
  84. +3 −1 v1.0/sql/json-functions-generated-column.md
  85. +6 −2 v1.0/sql/mysql-compatibility.md
  86. +9 −3 v1.0/sql/privilege.md
  87. +4 −2 v1.0/sql/time-zone.md
  88. +3 −1 v1.0/sql/transaction-isolation.md
  89. +3 −1 v1.0/sql/variable.md
  90. +3 −1 v1.0/tispark/tispark-quick-start-guide.md
  91. +1 −0 v1.0/tools/loader.md
  92. +6 −2 v1.0/tools/pd-control.md
  93. +3 −1 v1.0/tools/tidb-binlog-kafka.md
  94. +3 −1 v2.0/FAQ.md
  95. +3 −1 v2.0/benchmark/sysbench.md
  96. +3 −1 v2.0/benchmark/tpch.md
  97. +3 −1 v2.0/dev-guide/deployment.md
  98. +6 −2 v2.0/op-guide/ansible-deployment-rolling-update.md
  99. +12 −4 v2.0/op-guide/ansible-deployment-scale.md
  100. +42 −14 v2.0/op-guide/ansible-deployment.md
  101. +3 −1 v2.0/op-guide/ansible-operation.md
  102. +3 −1 v2.0/op-guide/backup-restore.md
  103. +9 −3 v2.0/op-guide/gc.md
  104. +12 −4 v2.0/op-guide/history-read.md
  105. +3 −1 v2.0/op-guide/horizontal-scale.md
  106. +6 −2 v2.0/op-guide/migration-overview.md
  107. +9 −3 v2.0/op-guide/migration.md
  108. +3 −1 v2.1/FAQ.md
  109. +3 −1 v2.1/benchmark/sysbench.md
  110. +3 −1 v2.1/benchmark/tpch-v2.md
  111. +3 −1 v2.1/benchmark/tpch.md
  112. +3 −1 v2.1/dev-guide/deployment.md
  113. +9 −3 v2.1/op-guide/ansible-deployment-rolling-update.md
  114. +12 −4 v2.1/op-guide/ansible-deployment-scale.md
  115. +39 −13 v2.1/op-guide/ansible-deployment.md
  116. +3 −1 v2.1/op-guide/ansible-operation.md
  117. +9 −3 v2.1/op-guide/gc.md
  118. +12 −4 v2.1/op-guide/history-read.md
  119. +3 −1 v2.1/op-guide/horizontal-scale.md
  120. +6 −2 v2.1/op-guide/migration-overview.md
  121. +9 −3 v2.1/op-guide/migration.md
  122. +3 −1 v2.1/op-guide/monitor.md
  123. +12 −4 v2.1/op-guide/offline-ansible-deployment.md
  124. +2 −2 v2.1/op-guide/recommendation.md
  125. +3 −1 v2.1/op-guide/security.md
  126. +3 −1 v2.1/op-guide/tidb-v2.1-upgrade-guide.md
  127. +2 −2 v2.1/sql/admin.md
  128. +9 −3 v2.1/sql/character-set-support.md
  129. +3 −1 v2.1/sql/ddl.md
  130. +3 −1 v2.1/sql/encrypted-connections.md
  131. +6 −2 v2.1/sql/mysql-compatibility.md
  132. +9 −3 v2.1/sql/privilege.md
  133. +3 −1 v2.1/sql/tidb-specific.md
  134. +4 −2 v2.1/sql/time-zone.md
  135. +3 −1 v2.1/sql/variable.md
  136. +3 −1 v2.1/tikv/deploy-tikv-docker-compose.md
  137. +3 −1 v2.1/tispark/tispark-quick-start-guide.md
  138. +12 −4 v2.1/tools/data-migration-cluster-operations.md
  139. +9 −3 v2.1/tools/data-migration-deployment.md
  140. +3 −1 v2.1/tools/data-migration-manage-task.md
  141. +1 −1 v2.1/tools/data-migration-practice.md
  142. +3 −1 v2.1/tools/data-migration-troubleshooting.md
  143. +3 −1 v2.1/tools/dm-configuration-file-overview.md
  144. +3 −1 v2.1/tools/dm-monitor.md
  145. +3 −1 v2.1/tools/dm-task-config-argument-description.md
  146. +3 −1 v2.1/tools/dm-worker-intro.md
  147. +3 −1 v2.1/tools/lightning/checkpoints.md
  148. +3 −1 v2.1/tools/lightning/deployment.md
  149. +6 −2 v2.1/tools/pd-control.md
  150. +3 −1 v2.1/tools/sync-diff-inspector.md
  151. +7 −3 v2.1/tools/tidb-binlog-cluster.md
  152. +3 −1 v2.1/tools/tidb-binlog-kafka.md
  153. +4 −2 v2.1/tools/tikv-control.md
  154. +3 −1 v2.1/tools/troubleshooting-sharding-ddl-locks.md
FAQ.md

@@ -1072,7 +1072,9 @@ The interval of `GC Life Time` is too short. The data that should have been read
update mysql.tidb set variable_value='30m' where variable_name='tikv_gc_life_time';
```

> **Note:** "30m" means only cleaning up the data generated 30 minutes ago, which might consume some extra storage space.
> **Note:**
>
> "30m" means only cleaning up the data generated 30 minutes ago, which might consume some extra storage space.
### MySQL native error messages

benchmark/dm-v1-alpha.md

@@ -12,7 +12,9 @@ This DM benchmark report describes the test purpose, environment, scenario, and

The purpose of this test is to test the performance of DM incremental replication.

-> **Note**: The results of the testing might vary based on different environmental dependencies.
+> **Note:**
+>
+> The results of the testing might vary based on different environmental dependencies.

## Test environment

benchmark/sysbench-v4.md

@@ -92,7 +92,9 @@ For more detailed information on TiKV performance tuning, see [Tune TiKV Perform

## Test process

-> **Note:** This test was performed without load balancing tools such as HAproxy. We run the Sysbench test on individual TiDB node and added the results up. The load balancing tools and the parameters of different versions might also impact the performance.
+> **Note:**
+>
+> This test was performed without load balancing tools such as HAProxy. We ran the Sysbench test on individual TiDB nodes and added up the results. The load balancing tools and the parameters of different versions might also impact the performance.

### Sysbench configuration

@@ -137,7 +139,9 @@ Adjust the order in which Sysbench scripts create indexes. Sysbench imports data
1. Download the TiDB-modified [oltp_common.lua](https://raw.githubusercontent.com/pingcap/tidb-bench/master/sysbench-patch/oltp_common.lua) file and overwrite the `/usr/share/sysbench/oltp_common.lua` file with it.
2. Move the [235th](https://github.com/akopytov/sysbench/blob/1.0.14/src/lua/oltp_common.lua#L235) to [240th](https://github.com/akopytov/sysbench/blob/1.0.14/src/lua/oltp_common.lua#L240) lines of `/usr/share/sysbench/oltp_common.lua` to be right behind the 198th line.

-> **Note:** This operation is optional and is only to save the time consumed by data import.
+> **Note:**
+>
+> This operation is optional and is only to save the time consumed by data import.

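For anyone scripting this edit rather than doing it by hand, a hedged sketch using standard GNU sed (line numbers refer to the unpatched file, as in the step above):

```sh
# Extract lines 235-240, delete them, then re-insert the block right after line 198
sed -n '235,240p' /usr/share/sysbench/oltp_common.lua > /tmp/index_block.lua
sed -i '235,240d' /usr/share/sysbench/oltp_common.lua
sed -i '198r /tmp/index_block.lua' /usr/share/sysbench/oltp_common.lua
```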
At the command line, enter the following command to start importing data. The config file is the one configured in the previous step:

benchmark/sysbench.md

@@ -10,7 +10,9 @@ draft: true

The purpose of this test is to test the performance and horizontal scalability of TiDB in OLTP scenarios.

-> **Note**: The results of the testing might vary based on different environmental dependencies.
+> **Note:**
+>
+> The results of the testing might vary based on different environmental dependencies.

## Test version, date and place

benchmark/tpch-v2.md

@@ -9,7 +9,9 @@ category: benchmark

This test aims to compare the performances of TiDB 2.0 and TiDB 2.1 in the OLAP scenario.

-> **Note**: Different test environments might lead to different test results.
+> **Note:**
+>
+> Different test environments might lead to different test results.

## Test environment

benchmark/tpch.md

@@ -9,7 +9,9 @@ category: benchmark

This test aims to compare the performances of TiDB 1.0 and TiDB 2.0 in the OLAP scenario.

-> **Note**: Different test environments might lead to different test results.
+> **Note:**
+>
+> Different test environments might lead to different test results.

## Test environment

dev-guide/deployment.md

@@ -2,7 +2,9 @@

## Overview

-Note: **The easiest way to deploy TiDB is to use TiDB Ansible, see [Ansible Deployment](../op-guide/ansible-deployment.md).**
+> **Note:**
+>
+> The easiest way to deploy TiDB is to use TiDB Ansible, see [Ansible Deployment](../op-guide/ansible-deployment.md).

Before you start, check the [supported platforms](../dev-guide/requirements.md#supported-platforms) and [prerequisites](../dev-guide/requirements.md#prerequisites) first.

dev/how-to/get-started/local-cluster/install-from-dbdeployer.md

@@ -10,7 +10,9 @@ DBdeployer is designed to allow multiple versions of TiDB deployed concurrently.

Similar to [Homebrew](/dev/how-to/get-started/local-cluster/install-from-homebrew.md), the DBdeployer installation method installs the tidb-server **without** the tikv-server or pd-server. This is useful for development environments, since you can test your application's compatibility with TiDB without needing to deploy a full TiDB platform.

-> **Note**: Internally this installation uses goleveldb as the storage engine. It is much slower than TiKV, and any benchmarks will be unreliable.
+> **Note:**
+>
+> Internally this installation uses goleveldb as the storage engine. It is much slower than TiKV, and any benchmarks will be unreliable.

<main class="tabs">
<input id="tabMacOS" type="radio" name="tabs" value="MacOSContent" checked>
dev/how-to/get-started/local-cluster/install-from-homebrew.md

@@ -10,7 +10,9 @@ TiDB on Homebrew supports a minimal installation mode of the tidb-server **witho

This installation method is supported on macOS, Linux and Windows (via [WSL](https://docs.microsoft.com/en-us/windows/wsl/install-win10)).

-> **Note**: Internally this installation uses goleveldb as the storage engine. It is much slower than TiKV, and any benchmarks will be unreliable.
+> **Note:**
+>
+> Internally this installation uses goleveldb as the storage engine. It is much slower than TiKV, and any benchmarks will be unreliable.

## Installation steps

dev/how-to/get-started/local-cluster/install-from-kubernetes.md

@@ -17,16 +17,22 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme

- Resources requirement: CPU 2+, Memory 4G+

-> **Note:** For macOS, you need to allocate 2+ CPU and 4G+ Memory to Docker. For details, see [Docker for Mac configuration](https://docs.docker.com/docker-for-mac/#advanced).
+> **Note:**
+>
+> For macOS, you need to allocate 2+ CPU and 4G+ Memory to Docker. For details, see [Docker for Mac configuration](https://docs.docker.com/docker-for-mac/#advanced).

- [Docker](https://docs.docker.com/install/): 17.03 or later

-> **Note:** [Legacy Docker Toolbox](https://docs.docker.com/toolbox/toolbox_install_mac/) users must migrate to [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac) by uninstalling Legacy Docker Toolbox and installing Docker for Mac, because DinD cannot run on Docker Toolbox and Docker Machine.
+> **Note:**
+>
+> [Legacy Docker Toolbox](https://docs.docker.com/toolbox/toolbox_install_mac/) users must migrate to [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac) by uninstalling Legacy Docker Toolbox and installing Docker for Mac, because DinD cannot run on Docker Toolbox and Docker Machine.

- [Helm Client](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client): 2.9.0 or later
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl): 1.10 or later

-> **Note:** The outputs of different versions of `kubectl` might be slightly different.
+> **Note:**
+>
+> The outputs of different versions of `kubectl` might be slightly different.

## Step 1: Deploy a Kubernetes cluster using DinD

@@ -36,7 +42,9 @@ $ cd tidb-operator
$ manifests/local-dind/dind-cluster-v1.12.sh up
```

-> **Note:** If the cluster fails to pull Docker images during the startup due to the firewall, you can set the environment variable `KUBE_REPO_PREFIX` to `uhub.ucloud.cn/pingcap` before running the script `dind-cluster-v1.12.sh` as follows (the Docker images used are pulled from [UCloud Docker Registry](https://docs.ucloud.cn/compute/uhub/index)):
+> **Note:**
+>
+> If the cluster fails to pull Docker images during the startup due to the firewall, you can set the environment variable `KUBE_REPO_PREFIX` to `uhub.ucloud.cn/pingcap` before running the script `dind-cluster-v1.12.sh` as follows (the Docker images used are pulled from [UCloud Docker Registry](https://docs.ucloud.cn/compute/uhub/index)):

```
$ KUBE_REPO_PREFIX=uhub.ucloud.cn/pingcap manifests/local-dind/dind-cluster-v1.12.sh up
@@ -157,7 +165,9 @@ You can scale out or scale in the TiDB cluster simply by modifying the number of
helm upgrade tidb-cluster charts/tidb-cluster --namespace=tidb
```

-> **Note:** If you need to scale in TiKV, the consumed time depends on the volume of your existing data, because the data needs to be migrated safely.
+> **Note:**
+>
+> If you need to scale in TiKV, the consumed time depends on the volume of your existing data, because the data needs to be migrated safely.

## Upgrade the TiDB cluster

@@ -179,7 +189,9 @@ When you are done with your test, use the following command to destroy the TiDB
$ helm delete tidb-cluster --purge
```

-> **Note:** This only deletes the running pods and other resources, the data is persisted. If you do not need the data anymore, run the following commands to clean up the data. (Be careful, this permanently deletes the data).
+> **Note:**
+>
+> This only deletes the running pods and other resources; the data is persisted. If you do not need the data anymore, run the following commands to clean up the data. (Be careful: this permanently deletes the data.)

```sh
$ kubectl get pv -l app.kubernetes.io/namespace=tidb -o name | xargs -I {} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

dev/how-to/get-started/read-historical-data.md

@@ -26,7 +26,9 @@ The `tidb_snapshot` system variable is introduced to support reading history dat
- The variable accepts TSO (Timestamp Oracle) and datetime. TSO is a globally unique time service, which is obtained from PD. The acceptable datetime format is "2016-10-08 16:45:26.999". Generally, the datetime can be set using second precision, for example "2016-10-08 16:45:26".
- When the variable is set, TiDB creates a Snapshot using its value as the timestamp, just for the data structure, and there is no overhead. After that, all the `Select` operations will read data from this Snapshot.

-> **Note:** Because the timestamp in TiDB transactions is allocated by Placement Driver (PD), the version of the stored data is also marked based on the timestamp allocated by PD. When a Snapshot is created, the version number is based on the value of the `tidb_snapshot` variable. If there is a large difference between the local time of the TiDB server and the PD server, use the time of the PD server.
+> **Note:**
+>
+> Because the timestamp in TiDB transactions is allocated by Placement Driver (PD), the version of the stored data is also marked based on the timestamp allocated by PD. When a Snapshot is created, the version number is based on the value of the `tidb_snapshot` variable. If there is a large difference between the local time of the TiDB server and the PD server, use the time of the PD server.

After reading data from history versions, you can read data from the latest version by ending the current Session or using the `Set` statement to set the value of the `tidb_snapshot` variable to "" (empty string).

@@ -102,14 +104,18 @@ Pay special attention to the following two variables:

6. Set the `tidb_snapshot` variable whose scope is Session. The variable is set so that the latest version before the value can be read.

-> **Note:** In this example, the value is set to be the time before the update operation.
+> **Note:**
+>
+> In this example, the value is set to be the time before the update operation.

```sql
mysql> set @@tidb_snapshot="2016-10-08 16:45:26";
Query OK, 0 rows affected (0.00 sec)
```
-> **Note:** You should use `@@` instead of `@` before `tidb_snapshot` because `@@` is used to denote the system variable while `@` is used to denote the user variable.
+> **Note:**
+>
+> You should use `@@` instead of `@` before `tidb_snapshot` because `@@` is used to denote the system variable while `@` is used to denote the user variable.

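The difference is easy to demonstrate; a short sketch:

```sql
-- '@@' addresses the session system variable, so snapshot reads change
set @@tidb_snapshot="2016-10-08 16:45:26";
-- '@' would only create an unrelated user variable and leave reads unaffected
set @tidb_snapshot="2016-10-08 16:45:26";
```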
**Result:** The read from the following statement is the data before the update operation, which is the history data.

@@ -144,4 +150,6 @@ Pay special attention to the following two variables:
3 rows in set (0.00 sec)
```

-> **Note:** You should use `@@` instead of `@` before `tidb_snapshot` because `@@` is used to denote the system variable while `@` is used to denote the user variable.
+> **Note:**
+>
+> You should use `@@` instead of `@` before `tidb_snapshot` because `@@` is used to denote the system variable while `@` is used to denote the user variable.

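As described earlier in this file, setting the variable back to an empty string ends history reads; a minimal sketch:

```sql
-- Leave history-read mode and resume reading the latest version
set @@tidb_snapshot="";
```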

op-guide/ansible-deployment-rolling-update.md

@@ -8,7 +8,9 @@ category: operations

When you perform a rolling update for a TiDB cluster, the service is shut down serially and is started after you update the service binary and the configuration file. If the load balancing is configured in the front-end, the rolling update of TiDB does not impact the running applications. Minimum requirements: `pd*3, tidb*2, tikv*3`.

-> **Note:** If the binlog is enabled, and Pump and Drainer services are deployed in the TiDB cluster, stop the Drainer service before the rolling update. The Pump service is automatically updated in the rolling update of TiDB.
+> **Note:**
+>
+> If the binlog is enabled, and Pump and Drainer services are deployed in the TiDB cluster, stop the Drainer service before the rolling update. The Pump service is automatically updated in the rolling update of TiDB.

## Upgrade the component version

@@ -29,7 +31,9 @@ When you perform a rolling update for a TiDB cluster, the service is shut down s
tidb_version = v2.0.7
```

-> **Note:** If you use `tidb-ansible` of the master branch, you can keep `tidb_version = latest`. The installation package of the latest TiDB version is updated each day.
+> **Note:**
+>
+> If you use `tidb-ansible` of the master branch, you can keep `tidb_version = latest`. The installation package of the latest TiDB version is updated each day.

2. Delete the existing `downloads` directory `/home/tidb/tidb-ansible/downloads/`.

@@ -52,7 +56,9 @@ You can also download the binary manually. Use `wget` to download the binary and
wget http://download.pingcap.org/tidb-v2.0.7-linux-amd64.tar.gz
```

-> **Note:** Remember to replace the version number in the download link with the one you need.
+> **Note:**
+>
+> Remember to replace the version number in the download link with the one you need.

If you use `tidb-ansible` of the master branch, download the binary using the following command:

op-guide/ansible-deployment-scale.md

@@ -90,7 +90,9 @@ For example, if you want to add two TiDB nodes (node101, node102) with the IP ad
ansible-playbook bootstrap.yml -l 172.16.10.101,172.16.10.102
```

-> **Note:** If an alias is configured in the `inventory.ini` file, for example, `node101 ansible_host=172.16.10.101`, use `-l` to specify the alias when executing `ansible-playbook`. For example, `ansible-playbook bootstrap.yml -l node101,node102`. This also applies to the following steps.
+> **Note:**
+>
+> If an alias is configured in the `inventory.ini` file, for example, `node101 ansible_host=172.16.10.101`, use `-l` to specify the alias when executing `ansible-playbook`. For example, `ansible-playbook bootstrap.yml -l node101,node102`. This also applies to the following steps.

3. Deploy the newly added node:

@@ -191,7 +193,9 @@ For example, if you want to add a PD node (node103) with the IP address `172.16.

1. Remove the `--initial-cluster="xxxx" \` configuration.

-> **Note:** You cannot add the `#` character at the beginning of the line. Otherwise, the following configuration cannot take effect.
+> **Note:**
+>
+> You cannot add the `#` character at the beginning of the line. Otherwise, the following configuration cannot take effect.

2. Add `--join="http://172.16.10.1:2379" \`. The IP address (`172.16.10.1`) can be any of the existing PD IP address in the cluster.
3. Manually start the PD service in the newly added PD node:
@@ -206,7 +210,9 @@ For example, if you want to add a PD node (node103) with the IP address `172.16.
./pd-ctl -u "http://172.16.10.1:2379"
```

-> **Note:** `pd-ctl` is a command used to check the number of PD nodes.
+> **Note:**
+>
+> `pd-ctl` is a command used to check the number of PD nodes.

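To make the check concrete, a hedged sketch using pd-ctl's single-command mode; the `-d` flag and address mirror the `store` query used later in this diff, and the `member` subcommand is an assumption here:

```sh
# One-off query: list the PD members to verify that the new node has joined
./pd-ctl -u "http://172.16.10.1:2379" -d member
```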
5. Apply a rolling update to the entire cluster:

@@ -314,7 +320,9 @@ For example, if you want to remove a TiKV node (node9) with the IP address `172.
./pd-ctl -u "http://172.16.10.1:2379" -d store 10
```

-> **Note:** It takes some time to remove the node. If the status of the node you remove becomes Tombstone, then this node is successfully removed.
+> **Note:**
+>
+> It takes some time to remove the node. If the status of the node you remove becomes Tombstone, then this node is successfully removed.

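Because removal takes time, the `store` query shown above can simply be re-run until the state changes; a sketch using the standard `watch` utility:

```sh
# Poll the store every 10 seconds until its reported state reads "Tombstone"
watch -n 10 './pd-ctl -u "http://172.16.10.1:2379" -d store 10'
```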
3. After the node is successfully removed, stop the services on node9:
