From 29d9d351f577ca05b6a3c725b3d4e40ae7e25b4f Mon Sep 17 00:00:00 2001 From: yikeke Date: Thu, 4 Jun 2020 17:51:01 +0800 Subject: [PATCH 1/8] Update upgrade-tidb-using-tiup.md --- upgrade-tidb-using-tiup.md | 66 +++++++++++++++++++++++--------------- 1 file changed, 41 insertions(+), 25 deletions(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 5c961091a451d..c8e993a8dd8d9 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -13,7 +13,7 @@ If you have deployed the TiDB cluster using TiDB Ansible, you can use TiUP to im ## Upgrade caveat -- Rolling back to 3.0 versions after the update is not supported. +- After the upgrade, rolling back to 3.0 or earlier versions is not supported. - To update versions earlier than 3.0 to 4.0, first update this version to 3.0 using TiDB Ansible, and use TiUP to import the TiDB Ansible configuration and update the 3.0 version to 4.0. - After the TiDB Ansible configuration is imported into and managed by TiUP, you can no longer operate on the cluster using TiDB Ansible. Otherwise, conflicts might occur because of the inconsistent metadata. - Currently, you cannot import the TiDB Ansible configuration if the cluster deployed using TiDB Ansible meets one of the following situations: @@ -24,6 +24,15 @@ If you have deployed the TiDB cluster using TiDB Ansible, you can use TiUP to im - `Lightning` / `Importer` is enabled for the cluster. - You still use the `'push'` method to collect monitoring metrics (since v3.0, `pull` is the default mode, which is supported if you have not modified this mode). - In the `inventory.ini` configuration file, the `node_exporter` or `blackbox_exporter` item of the machine is set to non-default ports through `node_exporter_port` or `blackbox_exporter_port`, which is compatible if you have unified the configuration in the `group_vars` directory. +- Support upgrading the versions of TiDB Binlog, TiCDC, TiFlash, and other components. 
+- Before you upgrade from v2.0.6 or an earlier version to 4.0.0, you have to make sure that no DDL operations are running in the cluster, especially the `Add Index` operation that is time-consuming. Perform the upgrade after all DDL operations are completed. +- Starting from v2.1, TiDB enables parallel DDL. Therefore, clusters **older than v2.0.1** cannot be rolling upgraded to version 4.0.0. Instead, you can choose the following solutions: + - Upgrade directly from TiDB v2.0.1 or earlier to 4.0.0 in planned downtime + - Rolling upgrade to v2.0.1 or a later 2.0 version, then rolling upgrade to v4.0.0 + +> **Note:** +> +> Do not execute any DDL request during the upgrade, or an undefined behavior issue might occur. ## Install TiUP on the Control Machine @@ -61,6 +70,10 @@ If you have deployed the TiDB cluster using TiDB Ansible, you can use TiUP to im If you have installed TiUP before, execute the following command to update TiUP to the latest version: +> **Note:** +> +> If the result of `tiup --version` shows that your TiUP version is below v1.0.0, run `tiup update --self` first to update the TiUP version before running the following command. + {{< copyable "shell-regular" >}} ```shell @@ -74,10 +87,11 @@ tiup update cluster > + If the original cluster is deployed using TiUP, you can skip this step. > + Currently, the `inventory.ini` configuration file is identified by default. If your configuration file uses another name, specify this name. > + Ensure that the state of the current cluster is consistent with the topology in `inventory.ini`; that components of the cluster are operating normally. Otherwise, the cluster metadata becomes abnormal after the import. +> + If multiple different `inventory.ini` files and TiDB clusters are managed in one TiDB Ansible directory, when importing one of the clusters into TiUP, you need to specify `--no-backup` to avoid moving the Ansible directory to the TiUP management directory. ### Import the TiDB Ansible cluster to TiUP -1. 
Execute the following command to import the TiDB Ansible cluster into TiUP (for example, in the `/home/tidb/tidb-ansible` path). Do not execute this command in the Ansible directory. +1. Execute the following command to import the TiDB Ansible cluster into TiUP (for example, in the `/home/tidb/tidb-ansible` path). {{< copyable "shell-regular" >}} @@ -127,7 +141,7 @@ After the import is complete, you can check the current cluster status by execut tiup cluster edit-config ``` -3. See the configuration template format of [topology](https://github.com/pingcap-incubator/tiup-cluster/blob/master/examples/topology.example.yaml) and fill in the modified parameters of the original cluster in the `server_configs` section of the topology file. +3. See the configuration template format of [topology](https://github.com/pingcap/tiup/blob/master/examples/topology.example.yaml) and fill in the modified parameters of the original cluster in the `server_configs` section of the topology file. Even if the label has been configured for the cluster, you also need to fill in the label in the configuration according to the format in the template. In later versions, the label will be automatically imported. @@ -141,12 +155,20 @@ After the import is complete, you can check the current cluster status by execut This section describes how to perform a rolling update to the TiDB cluster and how to verify the version after the update. -### Perform a rolling update to the TiDB cluster (to v4.0.0-rc) +### Rolling update the TiDB cluster to a specified version {{< copyable "shell-regular" >}} ```shell -tiup cluster upgrade v4.0.0-rc +tiup cluster upgrade +``` + +For example, if you want to update the cluster to v4.0.0: + +{{< copyable "shell-regular" >}} + +```shell +tiup cluster upgrade v4.0.0 ``` Performing the rolling update to the cluster will update all components one by one. During the upgrade of TiKV, all leaders in a TiKV instance are evicted before stopping the instance. 
The default timeout time is 5 minutes. The instance is directly stopped after this timeout time. @@ -166,32 +188,18 @@ tiup cluster display ``` ``` -Starting /home/tidblk/.tiup/components/cluster/v0.4.3/cluster display +Starting /home/tidblk/.tiup/components/cluster/v1.0.0/cluster display TiDB Cluster: -TiDB Version: v4.0.0-rc +TiDB Version: v4.0.0 ``` ## FAQ This section describes common problems encountered when updating the TiDB cluster using TiUP. -### If an error occurs and the updated is interrupted, how to resume the update from the point of the interruption after fixing this error? +### If an error occurs and the update is interrupted, how to resume the update after fixing this error? -You can specify `--role` or `--node` to update the specified component or node. Here is the command: - -{{< copyable "shell-regular" >}} - -```shell -tiup cluster upgrade v4.0.0-rc --role tidb -``` - -or - -{{< copyable "shell-regular" >}} - -```shell -tiup cluster upgrade v4.0.0-rc --node -``` +Re-execute the `tiup cluster upgrade` command to resume the update. The upgrade operation restarts the nodes that have been previously upgraded. In subsequent 4.0 versions, TiDB will support resuming the upgrade from the interrupted point. ### The evict leader has waited too long during the update. How to skip this step for a quick update? @@ -200,12 +208,12 @@ You can specify `--force`. Then the processes of transferring PD leader and evic {{< copyable "shell-regular" >}} ```shell -tiup cluster upgrade v4.0.0-rc --force +tiup cluster upgrade v4.0.0 --force ``` ### How to update the version of tools such as pd-ctl after updating the TiDB cluster? -Currently, TiUP does not update and manage the version of tools. If you need the tool package of the latest version, directly download the TiDB package and replace `{version}` with the corresponding version such as `v4.0.0-rc`. Here is the download address: +Currently, TiUP does not update and manage the version of tools. 
If you need the tool package of the latest version, directly download the TiDB package and replace `{version}` with the corresponding version such as `v4.0.0`. Here is the download address: {{< copyable "" >}} @@ -213,8 +221,16 @@ Currently, TiUP does not update and manage the version of tools. If you need the https://download.pingcap.org/tidb-{version}-linux-amd64.tar.gz ``` +### Failure to upgrade the TiFlash component during the cluster upgrade + +Before v4.0.0-rc.2, TiFlash might have some incompatibility issues. This might cause problems when you upgrade a cluster that includes the TiFlash component to v4.0.0-rc.2 or a later version. If so, go to [ASK TUG](https://asktug.com/) and ask for R&D support. + ## TiDB 4.0 compatibility changes - If you set the value of the `oom-action` parameter to `cancel`, when the query statement triggers the OOM threshold, the statement is killed. In v4.0, in addition to `select`, DML statements such as `insert`/`update`/`delete` might also be killed. - TiDB v4.0 supports the length check for table names. The length limit is 64 characters. If you rename a table after the upgrade and the new name exceeds this length limit, an error is reported. v3.0 and earlier versions do not have this error reporting. +- TiDB v4.0 supports the length check for partition names of the partitioned tables. The length limit is 64 characters. After the upgrade, if you create or alter a partitioned table with a partition name that exceeds the length limit, an error is expected to occur in 4.0 versions, but not in 3.0 and earlier versions. - In v4.0, the format of the `explain` execution plan is improved. Pay attention to any automatic analysis program that is customized for `explain`. +- TiDB v4.0 supports [read committed isolation level](/transaction-isolation-levels.md#read-committed-isolation-level). After upgrading to v4.0, setting the isolation level to `READ-COMMITTED` in a pessimistic transaction takes effect. 
(Not valid in 3.0 and earlier versions) +- In v4.0, executing `alter reorganize partition` returns an error. In earlier versions, no error is reported because only the syntax is supported and the statement is not taking any effect. +- In v4.0, creating `linear hash partition` or `subpartition` tables does not take effect and they are converted to regular tables. In earlier versions, they are converted to regular partitioned tables. \ No newline at end of file From 153fe4837414919281392a9aec6fd9eb3910965c Mon Sep 17 00:00:00 2001 From: yikeke Date: Thu, 4 Jun 2020 18:01:41 +0800 Subject: [PATCH 2/8] noun: upgrade; verb: update; use rolling upgrade --- upgrade-tidb-using-tiup.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index c8e993a8dd8d9..164b0242cdab2 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -151,11 +151,11 @@ After the import is complete, you can check the current cluster status by execut > > Before upgrading to v4.0, confirm that the parameters modified in v3.0 are compatible in v4.0. See [configuration template](/tikv-configuration-file.md) for details. -## Perform a rolling update to the TiDB cluster +## Perform a rolling upgrade to the TiDB cluster -This section describes how to perform a rolling update to the TiDB cluster and how to verify the version after the update. +This section describes how to perform a rolling upgrade to the TiDB cluster and how to verify the version after the upgrade. -### Rolling update the TiDB cluster to a specified version +### Rolling upgrade the TiDB cluster to a specified version {{< copyable "shell-regular" >}} @@ -171,7 +171,7 @@ For example, if you want to update the cluster to v4.0.0: tiup cluster upgrade v4.0.0 ``` -Performing the rolling update to the cluster will update all components one by one. 
During the upgrade of TiKV, all leaders in a TiKV instance are evicted before stopping the instance. The default timeout time is 5 minutes. The instance is directly stopped after this timeout time. +Performing the rolling upgrade to the cluster will upgrade all components one by one. During the upgrade of TiKV, all leaders in a TiKV instance are evicted before stopping the instance. The default timeout time is 5 minutes. The instance is directly stopped after this timeout time. To perform the upgrade immediately without evicting the leader, specify `--force` in the command above. This method causes performance jitter but not data loss. @@ -197,13 +197,13 @@ TiDB Version: v4.0.0 This section describes common problems encountered when updating the TiDB cluster using TiUP. -### If an error occurs and the update is interrupted, how to resume the update after fixing this error? +### If an error occurs and the upgrade is interrupted, how to resume the upgrade after fixing this error? -Re-execute the `tiup cluster upgrade` command to resume the update. The upgrade operation restarts the nodes that have been previously upgraded. In subsequent 4.0 versions, TiDB will support resuming the upgrade from the interrupted point. +Re-execute the `tiup cluster upgrade` command to resume the upgrade. The upgrade operation restarts the nodes that have been previously upgraded. In subsequent 4.0 versions, TiDB will support resuming the upgrade from the interrupted point. -### The evict leader has waited too long during the update. How to skip this step for a quick update? +### The evict leader has waited too long during the upgrade. How to skip this step for a quick upgrade? -You can specify `--force`. Then the processes of transferring PD leader and evicting TiKV leader are skipped during the update. The cluster is directly restarted to update the version, which has a great impact on the cluster that runs online. Here is the command: +You can specify `--force`. 
Then the processes of transferring PD leader and evicting TiKV leader are skipped during the upgrade. The cluster is directly restarted to update the version, which has a significant impact on a cluster that runs online services. Here is the command: {{< copyable "shell-regular" >}} ```shell tiup cluster upgrade v4.0.0 --force ``` -### How to update the version of tools such as pd-ctl after updating the TiDB cluster? +### How to update the version of tools such as pd-ctl after upgrading the TiDB cluster? Currently, TiUP does not update and manage the version of tools. If you need the tool package of the latest version, directly download the TiDB package and replace `{version}` with the corresponding version such as `v4.0.0`. Here is the download address: From 24bd4bbae02662519170166f14a4361b62796d68 Mon Sep 17 00:00:00 2001 From: Keke Yi <40977455+yikeke@users.noreply.github.com> Date: Thu, 4 Jun 2020 18:07:32 +0800 Subject: [PATCH 3/8] Update upgrade-tidb-using-tiup.md --- upgrade-tidb-using-tiup.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 164b0242cdab2..7c68d0be233a6 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -231,6 +231,6 @@ Before v4.0.0-rc.2, TiFlash might have some incompatibility issues. This might c - TiDB v4.0 supports the length check for table names. The length limit is 64 characters. If you rename a table after the upgrade and the new name exceeds this length limit, an error is reported. v3.0 and earlier versions do not report this error. - TiDB v4.0 supports the length check for partition names of the partitioned tables. The length limit is 64 characters.
After the upgrade, if you create or alter a partitioned table with a partition name that exceeds the length limit, an error is expected to occur in 4.0 versions, but not in 3.0 and earlier versions. - In v4.0, the format of the `explain` execution plan is improved. Pay attention to any automatic analysis program that is customized for `explain`. -- TiDB v4.0 supports [read committed isolation level](/transaction-isolation-levels.md#read-committed-isolation-level). After upgrading to v4.0, setting the isolation level to `READ-COMMITTED` in a pessimistic transaction takes effect. (Not valid in 3.0 and earlier versions) +- TiDB v4.0 supports [read committed isolation level](/transaction-isolation-levels.md#read-committed-isolation-level). After upgrading to v4.0, setting the isolation level to `READ-COMMITTED` in a pessimistic transaction takes effect. In 3.0 and earlier versions, the setting is not valid. - In v4.0, executing `alter reorganize partition` returns an error. In earlier versions, no error is reported because only the syntax is supported and the statement is not taking any effect. -- In v4.0, creating `linear hash partition` or `subpartition` tables does not take effect and they are converted to regular tables. In earlier versions, they are converted to regular partitioned tables. \ No newline at end of file +- In v4.0, creating `linear hash partition` or `subpartition` tables does not take effect and they are converted to regular tables. In earlier versions, they are converted to regular partitioned tables. 
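An editorial aside on the pd-ctl download FAQ revised in the patches above: the `{version}` placeholder substitution in the download URL can be scripted. This is only a sketch, assuming a POSIX-compatible shell on the control machine; `v4.0.0` is just an example value, and the URL pattern is taken verbatim from the document:

```shell
# Build the tools download URL by substituting {version}, as the FAQ describes.
version="v4.0.0"   # example target version; use the version your cluster runs
url="https://download.pingcap.org/tidb-${version}-linux-amd64.tar.gz"
echo "${url}"
# → https://download.pingcap.org/tidb-v4.0.0-linux-amd64.tar.gz
# To actually fetch the package, you could then run, for example:
#   wget "${url}"
```

The substitution keeps the version string in one place, so updating the download target later means changing a single variable.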
From c52dabd84eb5d12c82d0d03f7c4570bc2082fcf8 Mon Sep 17 00:00:00 2001 From: Keke Yi <40977455+yikeke@users.noreply.github.com> Date: Thu, 4 Jun 2020 18:08:34 +0800 Subject: [PATCH 4/8] Update upgrade-tidb-using-tiup.md --- upgrade-tidb-using-tiup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 7c68d0be233a6..705196279d70e 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -231,6 +231,6 @@ Before v4.0.0-rc.2, TiFlash might have some incompatibility issues. This might c - TiDB v4.0 supports the length check for table names. The length limit is 64 characters. If you rename a table after the upgrade and the new name exceeds this length limit, an error is reported. v3.0 and earlier versions do not have this error reporting. - TiDB v4.0 supports the length check for partition names of the partitioned tables. The length limit is 64 characters. After the upgrade, if you create or alter a partitioned table with a partition name that exceeds the length limit, an error is expected to occur in 4.0 versions, but not in 3.0 and earlier versions. - In v4.0, the format of the `explain` execution plan is improved. Pay attention to any automatic analysis program that is customized for `explain`. -- TiDB v4.0 supports [read committed isolation level](/transaction-isolation-levels.md#read-committed-isolation-level). After upgrading to v4.0, setting the isolation level to `READ-COMMITTED` in a pessimistic transaction takes effect. In 3.0 and earlier versions, the setting is not valid. +- TiDB v4.0 supports [read committed isolation level](/transaction-isolation-levels.md#read-committed-isolation-level). After upgrading to v4.0, setting the isolation level to `READ-COMMITTED` in a pessimistic transaction takes effect. In 3.0 and earlier versions, the setting does not take effect. - In v4.0, executing `alter reorganize partition` returns an error. 
In earlier versions, no error is reported because only the syntax is supported and the statement is not taking any effect. - In v4.0, creating `linear hash partition` or `subpartition` tables does not take effect and they are converted to regular tables. In earlier versions, they are converted to regular partitioned tables. From 7b942615db0367c70c453f21680d1e4f309ffd55 Mon Sep 17 00:00:00 2001 From: Keke Yi <40977455+yikeke@users.noreply.github.com> Date: Mon, 8 Jun 2020 14:10:15 +0800 Subject: [PATCH 5/8] Apply suggestions from code review --- upgrade-tidb-using-tiup.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 705196279d70e..4ca2db54b02e8 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -25,10 +25,10 @@ If you have deployed the TiDB cluster using TiDB Ansible, you can use TiUP to im - You still use the `'push'` method to collect monitoring metrics (since v3.0, `pull` is the default mode, which is supported if you have not modified this mode). - In the `inventory.ini` configuration file, the `node_exporter` or `blackbox_exporter` item of the machine is set to non-default ports through `node_exporter_port` or `blackbox_exporter_port`, which is compatible if you have unified the configuration in the `group_vars` directory. - Support upgrading the versions of TiDB Binlog, TiCDC, TiFlash, and other components. -- Before you upgrade from v2.0.6 or an earlier version to 4.0.0, you have to make sure that no DDL operations are running in the cluster, especially the `Add Index` operation that is time-consuming. Perform the upgrade after all DDL operations are completed. -- Starting from v2.1, TiDB enables parallel DDL. Therefore, clusters **older than v2.0.1** cannot be rolling upgraded to version 4.0.0. 
Instead, you can choose the following solutions: - - Upgrade directly from TiDB v2.0.1 or earlier to 4.0.0 in planned downtime - - Rolling upgrade to v2.0.1 or a later 2.0 version, then rolling upgrade to v4.0.0 +- Before you upgrade from v2.0.6 or earlier to v4.0.0 or later, you have to make sure that no DDL operations are running in the cluster, especially the `Add Index` operation that is time-consuming. Perform the upgrade after all DDL operations are completed. +- Starting from v2.1, TiDB enables parallel DDL. Therefore, clusters **older than v2.0.1** cannot be rolling upgraded to v4.0.0 or later. Instead, you can choose the following solutions: + - Upgrade directly from TiDB v2.0.1 or earlier to v4.0.0 or later in planned downtime + - Rolling upgrade to v2.0.1 or a later 2.0 version, then rolling upgrade to v4.0.0 or later > **Note:** > From 664fb6c41c64f72a73caf5eb9fad4d44b61a4760 Mon Sep 17 00:00:00 2001 From: Keke Yi <40977455+yikeke@users.noreply.github.com> Date: Mon, 8 Jun 2020 14:54:59 +0800 Subject: [PATCH 6/8] Apply suggestions from code review Co-authored-by: Lilian Lee --- upgrade-tidb-using-tiup.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 4ca2db54b02e8..6cc6ae4fec942 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -25,14 +25,14 @@ If you have deployed the TiDB cluster using TiDB Ansible, you can use TiUP to im - You still use the `'push'` method to collect monitoring metrics (since v3.0, `pull` is the default mode, which is supported if you have not modified this mode). - In the `inventory.ini` configuration file, the `node_exporter` or `blackbox_exporter` item of the machine is set to non-default ports through `node_exporter_port` or `blackbox_exporter_port`, which is compatible if you have unified the configuration in the `group_vars` directory. 
- Support upgrading the versions of TiDB Binlog, TiCDC, TiFlash, and other components. -- Before you upgrade from v2.0.6 or earlier to v4.0.0 or later, you have to make sure that no DDL operations are running in the cluster, especially the `Add Index` operation that is time-consuming. Perform the upgrade after all DDL operations are completed. -- Starting from v2.1, TiDB enables parallel DDL. Therefore, clusters **older than v2.0.1** cannot be rolling upgraded to v4.0.0 or later. Instead, you can choose the following solutions: +- Before you upgrade from v2.0.6 or earlier to v4.0.0 or later, you must make sure that no DDL operations are running in the cluster, especially the `Add Index` operation that is time-consuming. Perform the upgrade after all DDL operations are completed. +- Starting from v2.1, TiDB enables parallel DDL. Therefore, clusters **older than v2.0.1** cannot be upgraded to v4.0.0 or later via a direct rolling upgrade. Instead, you can choose one of the following solutions: - Upgrade directly from TiDB v2.0.1 or earlier to v4.0.0 or later in planned downtime - Rolling upgrade to v2.0.1 or a later 2.0 version, then rolling upgrade to v4.0.0 or later > **Note:** > -> Do not execute any DDL request during the upgrade, or an undefined behavior issue might occur. +> Do not execute any DDL request during the upgrade, otherwise an undefined behavior issue might occur. ## Install TiUP on the Control Machine @@ -72,7 +72,7 @@ If you have installed TiUP before, execute the following command to update TiUP > **Note:** > -> If the result of `tiup --version` shows that your TiUP version is below v1.0.0, run `tiup update --self` first to update the TiUP version before running the following command. +> If the result of `tiup --version` shows that your TiUP version is earlier than v1.0.0, run `tiup update --self` first to update the TiUP version before running the following command. 
{{< copyable "shell-regular" >}} @@ -155,7 +155,7 @@ After the import is complete, you can check the current cluster status by execut This section describes how to perform a rolling upgrade to the TiDB cluster and how to verify the version after the upgrade. -### Rolling upgrade the TiDB cluster to a specified version +### Upgrade the TiDB cluster to a specified version {{< copyable "shell-regular" >}} @@ -163,7 +163,7 @@ This section describes how to perform a rolling upgrade to the TiDB cluster and tiup cluster upgrade ``` -For example, if you want to update the cluster to v4.0.0: +For example, if you want to upgrade the cluster to v4.0.0: {{< copyable "shell-regular" >}} @@ -171,7 +171,7 @@ For example, if you want to update the cluster to v4.0.0: tiup cluster upgrade v4.0.0 ``` -Performing the rolling upgrade to the cluster will upgrade all components one by one. During the upgrade of TiKV, all leaders in a TiKV instance are evicted before stopping the instance. The default timeout time is 5 minutes. The instance is directly stopped after this timeout time. +Performing a rolling upgrade to the cluster will upgrade all components one by one. During the upgrade of TiKV, all leaders in a TiKV instance are evicted before stopping the instance. The default timeout time is 5 minutes. The instance is directly stopped after this timeout time. To perform the upgrade immediately without evicting the leader, specify `--force` in the command above. This method causes performance jitter but not data loss. @@ -223,7 +223,7 @@ https://download.pingcap.org/tidb-{version}-linux-amd64.tar.gz ### Failure to upgrade the TiFlash component during the cluster upgrade -Before v4.0.0-rc.2, TiFlash might have some incompatibility issues. This might cause problems when you upgrade a cluster that includes the TiFlash component to v4.0.0-rc.2 or a later version. If so, go to [ASK TUG](https://asktug.com/) and ask for R&D support. 
+Before v4.0.0-rc.2, TiFlash might have some incompatibility issues. This might cause problems when you upgrade a cluster that includes the TiFlash component to v4.0.0-rc.2 or a later version. If so, [contact R&D support](mailto:support@pingcap.com). ## TiDB 4.0 compatibility changes - If you set the value of the `oom-action` parameter to `cancel`, when the query statement triggers the OOM threshold, the statement is killed. In v4.0, in addition to `select`, DML statements such as `insert`/`update`/`delete` might also be killed. - TiDB v4.0 supports the length check for table names. The length limit is 64 characters. If you rename a table after the upgrade and the new name exceeds this length limit, an error is reported. v3.0 and earlier versions do not report this error. - TiDB v4.0 supports the length check for partition names of the partitioned tables. The length limit is 64 characters. After the upgrade, if you create or alter a partitioned table with a partition name that exceeds the length limit, an error is expected to occur in 4.0 versions, but not in 3.0 and earlier versions. - In v4.0, the format of the `explain` execution plan is improved. Pay attention to any automatic analysis program that is customized for `explain`. -- TiDB v4.0 supports [read committed isolation level](/transaction-isolation-levels.md#read-committed-isolation-level). After upgrading to v4.0, setting the isolation level to `READ-COMMITTED` in a pessimistic transaction takes effect. In 3.0 and earlier versions, the setting does not take effect. +- TiDB v4.0 supports [Read Committed isolation level](/transaction-isolation-levels.md#read-committed-isolation-level). After upgrading to v4.0, setting the isolation level to `READ-COMMITTED` in a pessimistic transaction takes effect. In 3.0 and earlier versions, the setting does not take effect. - In v4.0, executing `alter reorganize partition` returns an error. In earlier versions, no error is reported because only the syntax is supported and the statement does not take effect.
- In v4.0, creating `linear hash partition` or `subpartition` tables does not take effect and they are converted to regular tables. In earlier versions, they are converted to regular partitioned tables. From 8a9c0edd84759dcb4895eb3820e44d68fa43b10a Mon Sep 17 00:00:00 2001 From: Keke Yi <40977455+yikeke@users.noreply.github.com> Date: Mon, 8 Jun 2020 14:55:13 +0800 Subject: [PATCH 7/8] Apply suggestions from code review Co-authored-by: Lilian Lee --- upgrade-tidb-using-tiup.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 6cc6ae4fec942..98f17279430f4 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -28,7 +28,7 @@ If you have deployed the TiDB cluster using TiDB Ansible, you can use TiUP to im - Before you upgrade from v2.0.6 or earlier to v4.0.0 or later, you must make sure that no DDL operations are running in the cluster, especially the `Add Index` operation that is time-consuming. Perform the upgrade after all DDL operations are completed. - Starting from v2.1, TiDB enables parallel DDL. Therefore, clusters **older than v2.0.1** cannot be upgraded to v4.0.0 or later via a direct rolling upgrade. 
Instead, you can choose one of the following solutions: - Upgrade directly from TiDB v2.0.1 or earlier to v4.0.0 or later in planned downtime - - Rolling upgrade to v2.0.1 or a later 2.0 version, then rolling upgrade to v4.0.0 or later + - Perform a rolling upgrade from the current version to v2.0.1 or a later 2.0 version, then perform another rolling upgrade to v4.0.0 or later > **Note:** > From 8f0beef48861b7876982948ce6f65345c125f606 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 8 Jun 2020 15:11:53 +0800 Subject: [PATCH 8/8] Update tiup-overview.md --- tiup/tiup-overview.md | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/tiup/tiup-overview.md b/tiup/tiup-overview.md index 5e5749e6ae2ed..a726ebbecece8 100644 --- a/tiup/tiup-overview.md +++ b/tiup/tiup-overview.md @@ -7,11 +7,7 @@ aliases: ['/docs/dev/reference/tools/tiup/overview/'] # TiUP Overview -Package manager or package management system is widely used to automate the process of installing and managing system software and application software. Package management tools greatly simplify software's installation, upgrade, and maintenance processes. For example, almost all Linux operating systems that use RPM use Yum for package management, while Anaconda makes it very easy to manage the Python environment and related packages. - -In the past, there was no dedicated package management tool in the TiDB ecosystem. Users could only manually manage various packages through different configuration files and folders. Some third-party monitoring and reporting tools such as Prometheus even required additional special management, which made the operation and maintenance work much more difficult. - -Starting with TiDB 4.0, TiUP, as a new tool, assumes the role of a package manager and is responsible for managing components in the TiDB ecosystem, such as TiDB, PD, TiKV, and so on. 
When you want to run any component in the TiDB ecosystem, you just need to execute a single line of TiUP commands, which is far easier to manage. +Starting with TiDB 4.0, TiUP, as the package manager, makes it far easier to manage different cluster components in the TiDB ecosystem. Now you can run any component with only a single TiUP command. ## Install TiUP
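The control-machine workflow that this patch series documents can be condensed into a few commands. The following is only a sketch, assuming TiUP is already installed on the control machine and a cluster has been imported or deployed; every command below appears in the patched documents themselves, and `v4.0.0` is the example target version the documents use:

```shell
# Check the installed TiUP version first; if it reports a version earlier
# than v1.0.0, run `tiup update --self` before anything else.
tiup --version

# Update the TiUP binary itself, then the cluster component.
tiup update --self
tiup update cluster

# Perform the rolling upgrade to the target version, then verify it.
# TiKV leaders are evicted instance by instance; add --force only if you
# accept performance jitter in exchange for skipping leader eviction.
tiup cluster upgrade v4.0.0
tiup cluster display
```

If the upgrade is interrupted by an error, re-running the same `tiup cluster upgrade` command resumes it, as the FAQ in the patches notes.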