From 3ed7aeb41007befa4d42e4ef9dd4d4624a9a7997 Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Tue, 23 Jun 2020 22:08:27 +0800 Subject: [PATCH 1/4] Update GC documents --- garbage-collection-configuration.md | 39 +++++++++++++++++++++++------ garbage-collection-overview.md | 25 ++++++++++-------- 2 files changed, 45 insertions(+), 19 deletions(-) diff --git a/garbage-collection-configuration.md b/garbage-collection-configuration.md index 7f4298d132009..c3663dccfc906 100644 --- a/garbage-collection-configuration.md +++ b/garbage-collection-configuration.md @@ -9,14 +9,16 @@ aliases: ['/docs/dev/reference/garbage-collection/configuration/'] The GC (Garbage Collection) configuration and operational status are recorded in the `mysql.tidb` system table. You can use SQL statements to query or modify them: -```plain -mysql> select VARIABLE_NAME, VARIABLE_VALUE from mysql.tidb; +{{< copyable "sql" >}} + +```sql +select VARIABLE_NAME, VARIABLE_VALUE from mysql.tidb where VARIABLE_NAME like "tikv_gc%"; +``` + +```sql +--------------------------+----------------------------------------------------------------------------------------------------+ | VARIABLE_NAME | VARIABLE_VALUE | +--------------------------+----------------------------------------------------------------------------------------------------+ -| bootstrapped | True | -| tidb_server_version | 33 | -| system_tz | UTC | | tikv_gc_leader_uuid | 5afd54a0ea40005 | | tikv_gc_leader_desc | host:tidb-cluster-tidb-0, pid:215, start at 2019-07-15 11:09:14.029668932 +0000 UTC m=+0.463731223 | | tikv_gc_leader_lease | 20190715-12:12:14 +0000 | @@ -42,8 +44,8 @@ update mysql.tidb set VARIABLE_VALUE="24h" where VARIABLE_NAME="tikv_gc_life_tim > In addition to the following GC configuration parameters, the `mysql.tidb` system table also contains records that store the status of the storage components in a TiDB cluster, among which GC related ones are included, as listed below: > > - 
`tikv_gc_leader_uuid`, `tikv_gc_leader_desc` and `tikv_gc_leader_lease`: Records the information of the GC leader -> - `tikv_gc_last_run_time`: The duration of the previous GC -> - `tikv_gc_safe_point`: The safe point for the current GC +> - `tikv_gc_last_run_time`: The time of the previous GC run (updated when each round of GC begins) +> - `tikv_gc_safe_point`: The current safe point (updated when each round of GC begins) ## `tikv_gc_enable` @@ -62,10 +64,10 @@ update mysql.tidb set VARIABLE_VALUE="24h" where VARIABLE_NAME="tikv_gc_life_tim > **Note:** > - > - The value of `tikv_gc_life_time` must be greater than that of [`max-txn-time-use`](/tidb-configuration-file.md#max-txn-time-use) in the TiDB configuration file by at least 10 seconds, and must be greater than or equal to 10 minutes. > - In scenarios of frequent updates, a large value (days or even months) for `tikv_gc_life_time` may cause potential issues, such as: > - Larger storage use > - A large amount of history data may affect performance to a certain degree, especially for range queries such as `select count(*) from t` + > - If there is any transaction that has been running longer than `tikv_gc_life_time`, during GC, the data since `start_ts` is retained for this transaction to continue execution. For example, if `tikv_gc_life_time` is configured to 10 minutes and, among all transactions being executed, the transaction that started earliest has been running for 15 minutes, GC will retain data of the recent 15 minutes. ## `tikv_gc_mode` @@ -89,6 +91,25 @@ update mysql.tidb set VARIABLE_VALUE="24h" where VARIABLE_NAME="tikv_gc_life_tim - Specifies the GC concurrency manually. This parameter works only when you set [`tikv_gc_auto_concurrency`](#tikv_gc_auto_concurrency) to `false`. - Default: 2 +## `tikv_gc_scan_lock_mode` (**experimental feature**) + +This parameter specifies the way of scanning locks in the Resolve Locks step of GC, which means whether or not to enable Green GC (experimental feature). 
In the Resolve Locks step of GC, TiKV needs to scan all locks in the cluster. With Green GC disabled, TiDB scans locks in the unit of Regions. Green GC provides the "physical scanning" feature, which means that each TiKV node can bypass the Raft layer to directly scan data. This feature can effectively mitigate the impact of GC waking up all Regions when the [Hibernate Region](/tikv-configuration-file.md#raftstorehibernate-regions-experimental) feature is enabled, thus improving the execution speed in the Resolve Locks step. + +- `"legacy"` (default): Uses the old way of scanning, which means to disable Green GC. +- `"physical"`: Uses the physical scanning method, which means to enable Green GC. + +> **Note:** +> +> Green GC is still an experimental feature. It is recommended **NOT** to use it in the production environment. +> +> The configuration of Green GC is hidden. Execute the following statement when you enable Green GC for the first time: +> +> {{< copyable "sql" >}} +> +> ```sql
> insert into mysql.tidb values ('tikv_gc_scan_lock_mode', 'legacy', '');
> ```
+ ## Notes on GC process changes Since TiDB 3.0, some configuration options have changed with support for the distributed GC mode and concurrent Resolve Locks processing. The changes are shown in the following table: @@ -106,6 +127,8 @@ Since TiDB 3.0, some configuration options have changed with support for the dis - Auto-concurrent: requests are sent to each Region concurrently with the number of TiKV nodes as concurrency value. - Distributed: no need for TiDB to send requests to TiKV to trigger GC because each TiKV handles GC on its own. +In addition, if Green GC (experimental feature) is enabled, which means setting the value of [`tikv_gc_scan_lock_mode`](#tikv_gc_scan_lock_mode-experimental-feature) to `physical`, the processing of GC is not affected by the configuration above. + ## GC I/O limit TiKV supports the GC I/O limit. 
You can configure `gc.max-write-bytes-per-sec` to limit writes of a GC worker per second, and thus to reduce the impact on normal requests. diff --git a/garbage-collection-overview.md b/garbage-collection-overview.md index 665518cb3c687..fa6cce5188ee6 100644 --- a/garbage-collection-overview.md +++ b/garbage-collection-overview.md @@ -13,20 +13,23 @@ TiDB uses MVCC to control transaction concurrency. When you update the data, the Each TiDB cluster contains a TiDB instance that is selected as the GC leader, which controls the GC process. -GC runs periodically on TiDB. The default frequency is once every 10 minutes. For each GC, TiDB firstly calculates a timestamp called "safe point" (defaults to the current time minus 10 minutes). Then, TiDB clears the obsolete data under the premise that all the snapshots after the safe point retain the integrity of the data. Specifically, there are three steps involved in the GC process: +GC runs periodically on TiDB. For each GC, TiDB firstly calculates a timestamp called "safe point". Then, TiDB clears the obsolete data under the premise that all the snapshots after the safe point retain the integrity of the data. Specifically, there are three steps involved in each GC process: -1. Resolve Locks -2. Delete Ranges -3. Do GC +1. Resolve Locks. During this step, TiDB scans locks before the safe point on all Regions and clears these locks. +2. Delete Ranges. During this step, the obsolete data of the entire range generated from the `DROP TABLE`/`DROP INDEX` operation is quickly cleared. +3. Do GC. During this step, each TiKV node scans data on respective node and deletes unneeded old versions of each key. + +In the default configuration, GC is triggered every 10 minutes. Each GC retains data of the recent 10 minutes, which means that the GC life time is 10 minutes by default (safe point = the current time - GC life time). 
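The relationship between the safe point and the GC life time can be checked directly on a running cluster: the status records listed in the configuration document should show `tikv_gc_safe_point` trailing the current time by roughly the configured life time. A query sketch that only reads the status records described in this document:

{{< copyable "sql" >}}

```sql
select VARIABLE_NAME, VARIABLE_VALUE from mysql.tidb
where VARIABLE_NAME in ("tikv_gc_life_time", "tikv_gc_last_run_time", "tikv_gc_safe_point");
```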
If one round of GC has been running for too long, the next round does not start until the current round is completed, even if it is time to trigger the next GC. In addition, for long-duration transactions to run properly after exceeding the GC life time, the safe point does not exceed the start time (start_ts) of the ongoing transactions. + +## Implementation details ### Resolve Locks -The TiDB transaction model is implemented based on [Google's Percolator](https://ai.google/research/pubs/pub36726). It's mainly a two-phase commit protocol with some practical optimizations. When the first phase is finished, all the related keys are locked. Among these locks, one is the primary lock and the others are secondary locks which contain a pointer to the primary lock; in the second phase, the key with the primary lock gets a write record and its lock is removed. The write record indicates the write or delete operation in the history or the transactional rollback record of this key. The type of write record that replaces the primary lock indicates whether the corresponding transaction is committed successfully. Then all the secondary locks are replaced successively. If the threads fail to replace the secondary locks, these locks are retained. +The TiDB transaction model is implemented based on [Google's Percolator](https://ai.google/research/pubs/pub36726). It's mainly a two-phase commit protocol with some practical optimizations. When the first phase is finished, all the related keys are locked. Among these locks, one is the primary lock and the others are secondary locks which contain a pointer to the primary lock; in the second phase, the key with the primary lock gets a write record and its lock is removed. The write record indicates the write or delete operation in the history or the transactional rollback record of this key. The type of write record that replaces the primary lock indicates whether the corresponding transaction is committed successfully. 
Then all the secondary locks are replaced successively. If, for some reason such as failure, the threads fail to replace the secondary locks and these locks are retained, you can still find the primary key based on the information contained in the secondary locks and determine whether the entire transaction is committed based on whether the primary key is committed. However, if the primary key information is cleared by GC and this transaction has uncommitted secondary locks, you will never learn whether these locks should be committed. As a result, data integrity cannot be guaranteed. -The Resolve Locks step rolls back or commits the locks before the safe point, depending on whether their primary key has been committed or not. If the primary key is also retained, the transaction times out and is rolled back. -This step is required. Once GC has cleared the write record of the primary lock, you can never know whether this transaction is successful or not. Also, if the transaction contains retained secondary keys, it's important to know whether it should be rolled back or committed. As a result, data consistency cannot be guaranteed. +The Resolve Locks step clears the locks before the safe point. This means that if the corresponding primary key of a lock is committed, this lock should be committed; otherwise, it should be rolled back. If the primary key is still locked (not committed or rolled back), this transaction is seen as timing out and rolled back. -In the Resolve Lock step, the GC leader processes requests from all Regions. From TiDB 3.0, this process runs concurrently by default, with the default concurrency identical to the number of TiKV nodes in the cluster. For more details on how to configure, see [GC Configuration](/garbage-collection-configuration.md#tikv_gc_auto_concurrency). 
+In the Resolve Lock step, the GC leader sends requests to all Regions to scan obsolete locks, checks the primary key statuses of scanned locks, and sends requests to commit or roll back the corresponding transaction. This process is performed by default. The concurrency number is the same as the number of TiKV nodes. ### Delete Ranges @@ -34,10 +37,10 @@ A great amount of data with consecutive keys is removed during operations such a ### Do GC -The Do GC step clears the outdated versions for all keys. To guarantee that all timestamps after the safe point have consistent snapshots, this step deletes the data committed before the safe point, but retains the last write before the safe point as long as it is not a deletion. +The Do GC step clears the outdated versions for all keys. To guarantee that all timestamps after the safe point have consistent snapshots, this step deletes the data committed before the safe point, but retains the last write for each key before the safe point as long as it is not a deletion. -In the previous GC mechanism for TiDB 2.1 and earlier versions, the GC leader sends GC requests to all Regions. From TiDB 3.0, the GC leader only uploads the safe point to PD for each TiKV node to obtain. When the TiKV node detects a change on the safe point, it performs GC on all leader Regions on the current node. In the meantime, the GC leader can trigger the next round of GC. +In this step, TiDB only needs to send the safe point to PD, and then the whole round of GC is completed. TiKV automatically detects the change of safe point and performs GC for all Region leaders on the current node. At the same time, the GC leader can continue to trigger the next round of GC. > **Note:** > -> You can modify the `tikv_gc_mode` to use the previous GC mechanism. For more details, refer to [GC Configuration](/garbage-collection-configuration.md). +> In TiDB v2.1 or earlier versions, the Do GC step is implemented by TiDB sending requests to each Region. 
In v3.0 or later versions, you can modify the `tikv_gc_mode` to use the previous GC mechanism. For more details, refer to [GC Configuration](/garbage-collection-configuration.md#tikv_gc_mode). From a9bd6406915d28f48d339dc5d408e1d68f08ef2a Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Tue, 23 Jun 2020 22:11:18 +0800 Subject: [PATCH 2/4] Update garbage-collection-configuration.md --- garbage-collection-configuration.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/garbage-collection-configuration.md b/garbage-collection-configuration.md index c3663dccfc906..22e78baf05460 100644 --- a/garbage-collection-configuration.md +++ b/garbage-collection-configuration.md @@ -93,6 +93,10 @@ update mysql.tidb set VARIABLE_VALUE="24h" where VARIABLE_NAME="tikv_gc_life_tim ## `tikv_gc_scan_lock_mode` (**experimental feature**) +> **Note:** +> +> This is still an experimental feature. It is recommended **NOT** to use it in the production environment. + This parameter specifies the way of scanning locks in the Resolve Locks step of GC, which means whether or not to enable Green GC (experimental feature). In the Resolve Locks step of GC, TiKV needs to scan all locks in the cluster. With Green GC disabled, TiDB scans locks in the unit of Regions. Green GC provides the "physical scanning" feature, which means that each TiKV node can bypass the Raft layer to directly scan data. This feature can effectively mitigate the impact of GC waking up all Regions when the [Hibernate Region](/tikv-configuration-file.md#raftstorehibernate-regions-experimental) feature is enabled, thus improving the execution speed in the Resolve Locks step. - `"legacy"` (default): Uses the old way of scanning, which means to disable Green GC. 
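Once the hidden `tikv_gc_scan_lock_mode` record exists in `mysql.tidb` (created with the `insert` statement documented in the first patch), switching Green GC on or off is an ordinary update, in the same style as the other GC parameters. A sketch, keeping the experimental-feature warning above in mind:

{{< copyable "sql" >}}

```sql
update mysql.tidb set VARIABLE_VALUE="physical" where VARIABLE_NAME="tikv_gc_scan_lock_mode";
```

Setting the value back to `"legacy"` with the same statement disables Green GC again.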
From 08e327ed479a58271acfc5584d33f1be1acc5e6e Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 29 Jun 2020 10:28:08 +0800 Subject: [PATCH 3/4] remove duplicate note --- garbage-collection-configuration.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/garbage-collection-configuration.md b/garbage-collection-configuration.md index 22e78baf05460..b396967febb98 100644 --- a/garbage-collection-configuration.md +++ b/garbage-collection-configuration.md @@ -95,7 +95,7 @@ update mysql.tidb set VARIABLE_VALUE="24h" where VARIABLE_NAME="tikv_gc_life_tim > **Note:** > -> This is still an experimental feature. It is recommended **NOT** to use it in the production environment. +> Green GC is still an experimental feature. It is recommended **NOT** to use it in the production environment. This parameter specifies the way of scanning locks in the Resolve Locks step of GC, which means whether or not to enable Green GC (experimental feature). In the Resolve Locks step of GC, TiKV needs to scan all locks in the cluster. With Green GC disabled, TiDB scans locks in the unit of Regions. Green GC provides the "physical scanning" feature, which means that each TiKV node can bypass the Raft layer to directly scan data. This feature can effectively mitigate the impact of GC waking up all Regions when the [Hibernate Region](/tikv-configuration-file.md#raftstorehibernate-regions-experimental) feature is enabled, thus improving the execution speed in the Resolve Locks step. @@ -104,8 +104,6 @@ This parameter specifies the way of scanning locks in the Resolve Locks step of > **Note:** > -> Green GC is still an experimental feature. It is recommended **NOT** to use it in the production environment. > -> > The configuration of Green GC is hidden. 
Execute the following statement when you enable Green GC for the first time: > > {{< copyable "sql" >}} > > ```sql
> insert into mysql.tidb values ('tikv_gc_scan_lock_mode', 'legacy', '');
> ```
From fcb0dbb97c303a566d7a88067b8d37310f4972da Mon Sep 17 00:00:00 2001 From: TomShawn <41534398+TomShawn@users.noreply.github.com> Date: Mon, 29 Jun 2020 16:18:23 +0800 Subject: [PATCH 4/4] Apply suggestions from code review Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com> --- garbage-collection-configuration.md | 12 ++++++------ garbage-collection-overview.md | 8 ++++---- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/garbage-collection-configuration.md b/garbage-collection-configuration.md index b396967febb98..ff3ae32511f12 100644 --- a/garbage-collection-configuration.md +++ b/garbage-collection-configuration.md @@ -44,8 +44,8 @@ update mysql.tidb set VARIABLE_VALUE="24h" where VARIABLE_NAME="tikv_gc_life_tim > In addition to the following GC configuration parameters, the `mysql.tidb` system table also contains records that store the status of the storage components in a TiDB cluster, among which GC related ones are included, as listed below: > > - `tikv_gc_leader_uuid`, `tikv_gc_leader_desc` and `tikv_gc_leader_lease`: Records the information of the GC leader -> - `tikv_gc_last_run_time`: The time of the previous GC run (updated when each round of GC begins) -> - `tikv_gc_safe_point`: The current safe point (updated when each round of GC begins) +> - `tikv_gc_last_run_time`: The time of the latest GC run (updated at the beginning of each round of GC) +> - `tikv_gc_safe_point`: The current safe point (updated at the beginning of each round of GC) ## `tikv_gc_enable` @@ -97,10 +97,10 @@ update mysql.tidb set VARIABLE_VALUE="24h" where VARIABLE_NAME="tikv_gc_life_tim > > Green GC is still an experimental feature. It is recommended **NOT** to use it in the production environment. 
-This parameter specifies the way of scanning locks in the Resolve Locks step of GC, which means whether or not to enable Green GC (experimental feature). In the Resolve Locks step of GC, TiKV needs to scan all locks in the cluster. With Green GC disabled, TiDB scans locks in the unit of Regions. Green GC provides the "physical scanning" feature, which means that each TiKV node can bypass the Raft layer to directly scan data. This feature can effectively mitigate the impact of GC waking up all Regions when the [Hibernate Region](/tikv-configuration-file.md#raftstorehibernate-regions-experimental) feature is enabled, thus improving the execution speed in the Resolve Locks step. +This parameter specifies the way of scanning locks in the Resolve Locks step of GC, that is, whether to enable Green GC (experimental feature) or not. In the Resolve Locks step of GC, TiKV needs to scan all locks in the cluster. With Green GC disabled, TiDB scans locks by Regions. Green GC provides the "physical scanning" feature, which means that each TiKV node can bypass the Raft layer to directly scan data. This feature can effectively mitigate the impact of GC waking up all Regions when the [Hibernate Region](/tikv-configuration-file.md#raftstorehibernate-regions-experimental) feature is enabled, thus improving the execution speed in the Resolve Locks step. -- `"legacy"` (default): Uses the old way of scanning, which means to disable Green GC. -- `"physical"`: Uses the physical scanning method, which means to enable Green GC. +- `"legacy"` (default): Uses the old way of scanning, that is, disable Green GC. +- `"physical"`: Uses the physical scanning method, that is, enable Green GC. > **Note:** > @@ -129,7 +129,7 @@ Since TiDB 3.0, some configuration options have changed with support for the dis - Auto-concurrent: requests are sent to each Region concurrently with the number of TiKV nodes as concurrency value. 
- Distributed: no need for TiDB to send requests to TiKV to trigger GC because each TiKV handles GC on its own. -In addition, if Green GC (experimental feature) is enabled, which means setting the value of [`tikv_gc_scan_lock_mode`](#tikv_gc_scan_lock_mode-experimental-feature) to `physical`, the processing of GC is not affected by the configuration above. +In addition, if Green GC (experimental feature) is enabled, that is, setting the value of [`tikv_gc_scan_lock_mode`](#tikv_gc_scan_lock_mode-experimental-feature) to `physical`, the processing of Resolve Locks is not affected by the concurrency configuration above. ## GC I/O limit diff --git a/garbage-collection-overview.md b/garbage-collection-overview.md index fa6cce5188ee6..e6ef66cc31d99 100644 --- a/garbage-collection-overview.md +++ b/garbage-collection-overview.md @@ -17,7 +17,7 @@ GC runs periodically on TiDB. For each GC, TiDB firstly calculates a timestamp c 1. Resolve Locks. During this step, TiDB scans locks before the safe point on all Regions and clears these locks. 2. Delete Ranges. During this step, the obsolete data of the entire range generated from the `DROP TABLE`/`DROP INDEX` operation is quickly cleared. -3. Do GC. During this step, each TiKV node scans data on respective node and deletes unneeded old versions of each key. +3. Do GC. During this step, each TiKV node scans data on it and deletes unneeded old versions of each key. In the default configuration, GC is triggered every 10 minutes. Each GC retains data of the recent 10 minutes, which means that the GC life time is 10 minutes by default (safe point = the current time - GC life time). If one round of GC has been running for too long, the next round does not start until the current round is completed, even if it is time to trigger the next GC. In addition, for long-duration transactions to run properly after exceeding the GC life time, the safe point does not exceed the start time (start_ts) of the ongoing transactions. 
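For workloads where transactions or reads regularly need more history than the 10-minute default, the adjustment shown in the configuration document is to raise `tikv_gc_life_time`, which directly moves the safe point further into the past. For example (the `30m` value here is only an illustration; pick a value that fits your workload):

{{< copyable "sql" >}}

```sql
update mysql.tidb set VARIABLE_VALUE="30m" where VARIABLE_NAME="tikv_gc_life_time";
```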
@@ -25,11 +25,11 @@ In the default configuration, GC is triggered every 10 minutes. Each GC retains ### Resolve Locks -The TiDB transaction model is implemented based on [Google's Percolator](https://ai.google/research/pubs/pub36726). It's mainly a two-phase commit protocol with some practical optimizations. When the first phase is finished, all the related keys are locked. Among these locks, one is the primary lock and the others are secondary locks which contain a pointer to the primary lock; in the second phase, the key with the primary lock gets a write record and its lock is removed. The write record indicates the write or delete operation in the history or the transactional rollback record of this key. The type of write record that replaces the primary lock indicates whether the corresponding transaction is committed successfully. Then all the secondary locks are replaced successively. If, for some reasons such as failure, the threads fail to replace the secondary locks and these locks are retained, you can still find the primary key based on the information contained in the secondary locks and determines whether the entire transaction is committed based on whether the primary key is committed. However, if the primary key information is cleared by GC and this transaction has uncommitted secondary locks, you will never learn whether these lock should be committed. As a result, data integrity cannot be guaranteed. +The TiDB transaction model is implemented based on [Google's Percolator](https://ai.google/research/pubs/pub36726). It's mainly a two-phase commit protocol with some practical optimizations. When the first phase is finished, all the related keys are locked. Among these locks, one is the primary lock and the others are secondary locks which contain a pointer to the primary lock; in the second phase, the key with the primary lock gets a write record and its lock is removed. 
The write record indicates the write or delete operation in the history or the transactional rollback record of this key. The type of write record that replaces the primary lock indicates whether the corresponding transaction is committed successfully. Then all the secondary locks are replaced successively. If, for some reason such as failure, these secondary locks are retained and not replaced, you can still find the primary key based on the information in the secondary locks and determine whether the entire transaction is committed based on whether the primary key is committed. However, if the primary key information is cleared by GC and this transaction has uncommitted secondary locks, you will never learn whether these locks can be committed. As a result, data integrity cannot be guaranteed. -The Resolve Locks step clears the locks before the safe point. This means that if the corresponding primary key of a lock is committed, this lock should be committed; otherwise, it should be rolled back. If the primary key is still locked (not committed or rolled back), this transaction is seen as timing out and rolled back. +The Resolve Locks step clears the locks before the safe point. This means that if the primary key of a lock is committed, this lock needs to be committed; otherwise, it needs to be rolled back. If the primary key is still locked (not committed or rolled back), this transaction is seen as timing out and rolled back. -In the Resolve Lock step, the GC leader sends requests to all Regions to scan obsolete locks, checks the primary key statuses of scanned locks, and sends requests to commit or roll back the corresponding transaction. 
By default, this process is performed concurrently, and the concurrency number is the same as the number of TiKV nodes. ### Delete Ranges