diff --git a/TOC.md b/TOC.md index f18f5907957bf..c42810781d951 100644 --- a/TOC.md +++ b/TOC.md @@ -80,6 +80,7 @@ + [TiFlash Alert Rules](/tiflash/tiflash-alert-rules.md) + Troubleshoot + [Identify Slow Queries](/identify-slow-queries.md) + + [SQL Diagnostics](/system-tables/system-table-sql-diagnostics.md) + [Identify Expensive Queries](/identify-expensive-queries.md) + [Statement Summary Tables](/statement-summary-tables.md) + [Troubleshoot Cluster Setup](/troubleshoot-tidb-cluster.md) diff --git a/system-tables/system-table-inspection-result.md b/system-tables/system-table-inspection-result.md index 14a646986393a..c0604c2269af2 100644 --- a/system-tables/system-table-inspection-result.md +++ b/system-tables/system-table-inspection-result.md @@ -1,17 +1,17 @@ --- title: INSPECTION_RESULT -summary: Learn the `INSPECTION_RESULT` diagnosis result table. +summary: Learn the `INSPECTION_RESULT` diagnostic result table. category: reference aliases: ['/docs/dev/reference/system-databases/inspection-result/'] --- # INSPECTION_RESULT -TiDB has some built-in diagnosis rules for detecting faults and hidden issues in the system. +TiDB has some built-in diagnostic rules for detecting faults and hidden issues in the system. -The `INSPECTION_RESULT` diagnosis feature can help you quickly find problems and reduce your repetitive manual work. You can use the `select * from information_schema.inspection_result` statement to trigger the internal diagnosis. +The `INSPECTION_RESULT` diagnostic feature can help you quickly find problems and reduce your repetitive manual work. You can use the `select * from information_schema.inspection_result` statement to trigger the internal diagnostics. 
-The structure of the `information_schema.inspection_result` diagnosis result table `information_schema.inspection_result` is as follows: +The structure of the `information_schema.inspection_result` diagnostic result table is as follows: {{< copyable "sql" >}} @@ -38,22 +38,22 @@ desc information_schema.inspection_result; Field description: -* `RULE`: The name of the diagnosis rule. Currently, the following rules are available: - * `config`: Checks whether the configuration is consistent and proper. If the same configuration is inconsistent on different instances, a `warning` diagnosis result is generated. - * `version`: The consistency check of version. If the same version is inconsistent on different instances, a `warning` diagnosis result is generated. - * `node-load`: Checks the server load. If the current system load is too high, the corresponding `warning` diagnosis result is generated. - * `critical-error`: Each module of the system defines critical errors. If a critical error exceeds the threshold within the corresponding time period, a warning diagnosis result is generated. - * `threshold-check`: The diagnosis system checks the thresholds of key metrics. If a threshold is exceeded, the corresponding diagnosis information is generated. -* `ITEM`: Each rule diagnoses different items. This field indicates the specific diagnosis items corresponding to each rule. -* `TYPE`: The instance type of the diagnosis. The optional values are `tidb`, `pd`, and `tikv`. +* `RULE`: The name of the diagnostic rule. Currently, the following rules are available: + * `config`: Checks whether the configuration is consistent and proper. If the same configuration item has inconsistent values on different instances, a `warning` diagnostic result is generated. + * `version`: Checks version consistency. If different versions of the same component exist on different instances, a `warning` diagnostic result is generated. + * `node-load`: Checks the server load.
If the current system load is too high, the corresponding `warning` diagnostic result is generated. + * `critical-error`: Each module of the system defines critical errors. If the number of critical errors exceeds the threshold within the corresponding time period, a `warning` diagnostic result is generated. + * `threshold-check`: The diagnostic system checks the thresholds of key metrics. If a threshold is exceeded, the corresponding diagnostic information is generated. +* `ITEM`: Each rule diagnoses different items. This field indicates the specific diagnostic items corresponding to each rule. +* `TYPE`: The type of the diagnosed instance. The optional values are `tidb`, `pd`, and `tikv`. * `INSTANCE`: The specific address of the diagnosed instance. * `STATUS_ADDRESS`: The HTTP API service address of the instance. -* `VALUE`: The value of a specific diagnosis item. -* `REFERENCE`: The reference value (threshold value) for this diagnosis item. If `VALUE` exceeds the threshold, the corresponding diagnosis information is generated. +* `VALUE`: The value of a specific diagnostic item. +* `REFERENCE`: The reference value (threshold value) for this diagnostic item. If `VALUE` exceeds the threshold, the corresponding diagnostic information is generated. * `SEVERITY`: The severity level. The optional values are `warning` and `critical`. -* `DETAILS`: Diagnosis details, which might also contain SQL statement(s) or document links for further diagnosis. +* `DETAILS`: Diagnostic details, which might also contain SQL statement(s) or document links for further diagnostics. -## Diagnosis example +## Diagnostics example Diagnose issues currently existing in the cluster.
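For reference, the statement that triggers such an inspection is a plain query on the result table (a sketch based on the trigger statement described above; the rows returned depend entirely on the current state of your cluster):

{{< copyable "sql" >}}

```sql
-- Trigger all diagnostic rules and list the findings
select rule, item, type, instance, value, reference, severity, details
from information_schema.inspection_result;
```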
@@ -102,7 +102,7 @@ SEVERITY | warning DETAILS | max duration of 172.16.5.40:20151 tikv rocksdb-write-duration was too slow ``` -The following issues can be detected from the diagnosis result above: +The following issues can be detected from the diagnostic result above: * The first row indicates that TiDB's `log.slow-threshold` value is configured to `0`, which might affect performance. * The second row indicates that two different TiDB versions exist in the cluster. @@ -137,12 +137,12 @@ SEVERITY | warning DETAILS | max duration of 172.16.5.40:10089 tidb get-token-duration is too slow ``` -The following issues can be detected from the diagnosis result above: +The following issues can be detected from the diagnostic result above: * The first row indicates that the `172.16.5.40:4009` TiDB instance is restarted at `2020/03/26 00:05:45.670`. * The second row indicates that the maximum `get-token-duration` time of the `172.16.5.40:10089` TiDB instance is 0.234s, but the expected time is less than 0.001s. -You can also specify conditions, for example, to query the `critical` level diagnosis results: +You can also specify conditions, for example, to query the `critical` level diagnostic results: {{< copyable "sql" >}} @@ -150,7 +150,7 @@ You can also specify conditions, for example, to query the `critical` level diag select * from information_schema.inspection_result where severity='critical'; ``` -Query only the diagnosis result of the `critical-error` rule: +Query only the diagnostic result of the `critical-error` rule: {{< copyable "sql" >}} @@ -158,11 +158,11 @@ Query only the diagnosis result of the `critical-error` rule: select * from information_schema.inspection_result where rule='critical-error'; ``` -## Diagnosis rules +## Diagnostic rules -The diagnosis module contains a series of rules. These rules compare the results with the thresholds after querying the existing monitoring tables and cluster information tables. 
If the results exceed the thresholds, the diagnosis of `warning` or `critical` is generated and the corresponding information is provided in the `details` column. +The diagnostic module contains a series of rules. These rules compare the results with the thresholds after querying the existing monitoring tables and cluster information tables. If the results exceed the thresholds, a `warning` or `critical` diagnostic result is generated and the corresponding information is provided in the `details` column. -You can query the existing diagnosis rules by querying the `inspection_rules` system table: +You can view the existing diagnostic rules by querying the `inspection_rules` system table: {{< copyable "sql" >}} @@ -182,9 +182,9 @@ select * from information_schema.inspection_rules where type='inspection'; +-----------------+------------+---------+ ``` -### `config` diagnosis rule +### `config` diagnostic rule -In the `config` diagnosis rule, the following two diagnosis rules are executed by querying the `CLUSTER_CONFIG` system table: +In the `config` diagnostic rule, the following two checks are executed by querying the `CLUSTER_CONFIG` system table: * Check whether the configuration values of the same component are consistent. Not all configuration items has this consistency check. The white list of consistency check is as follows: @@ -228,9 +228,9 @@ In the `config` diagnosis rule, the following two diagnosis rules are executed b | TiDB | log.slow-threshold | larger than `0` | | TiKV | raftstore.sync-log | `true` | -### `version` diagnosis rule +### `version` diagnostic rule -The `version` diagnosis rule checks whether the version hash of the same component is consistent by querying the `CLUSTER_INFO` system table.
See the following example: {{< copyable "sql" >}} @@ -250,9 +250,9 @@ SEVERITY | critical DETAILS | the cluster has 2 different tidb versions, execute the sql to see more detail: select * from information_schema.cluster_info where type='tidb' ``` -### `critical-error` diagnosis rule +### `critical-error` diagnostic rule -In `critical-error` diagnosis rule, the following two diagnosis rules are executed: +In the `critical-error` diagnostic rule, the following two checks are executed: * Detect whether the cluster has the following errors by querying the related monitoring system tables in the metrics schema: @@ -268,9 +268,9 @@ In `critical-error` diagnosis rule, the following two diagnosis rules are execut * Check whether any component is restarted by querying the `metrics_schema.up` monitoring table and the `CLUSTER_LOG` system table. -### `threshold-check` diagnosis rule +### `threshold-check` diagnostic rule -The `threshold-check` diagnosis rule checks whether the following metrics in the cluster exceed the threshold by querying the related monitoring system tables in the metrics schema: +The `threshold-check` diagnostic rule checks whether the following metrics in the cluster exceed the threshold by querying the related monitoring system tables in the metrics schema: | Component | Monitoring metric | Monitoring table | Expected value | Description | | :---- | :---- | :---- | :---- | :---- | @@ -308,4 +308,4 @@ In addition, this rule also checks whether the CPU usage of the following thread * storage-readpool-low-cpu * split-check-cpu -The built-in diagnosis rules are constantly being improved. If you have more diagnosis rules, welcome to create a PR or an issue in the [`tidb` repository](https://github.com/pingcap/tidb). +The built-in diagnostic rules are constantly being improved. If you want to add more diagnostic rules, feel free to create a PR or an issue in the [`tidb` repository](https://github.com/pingcap/tidb).
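Putting the rules above together, a focused inspection can combine a rule filter with the `time_range` hint that the diagnostic tables accept (a sketch; whether any rows are returned depends on your cluster's state during that window):

{{< copyable "sql" >}}

```sql
-- Run only the threshold-check rule against a one-hour window
select /*+ time_range('2020-03-07 12:00:00','2020-03-07 13:00:00') */ *
from information_schema.inspection_result
where rule='threshold-check' and severity='warning';
```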
diff --git a/system-tables/system-table-inspection-summary.md b/system-tables/system-table-inspection-summary.md index 416a37f1b6596..85a2c2f49e74f 100644 --- a/system-tables/system-table-inspection-summary.md +++ b/system-tables/system-table-inspection-summary.md @@ -48,12 +48,12 @@ Field description: Usage example: -Both the diagnosis result table and the diagnosis monitoring summary table can specify the diagnosis time range using `hint`. `select /*+ time_range('2020-03-07 12:00:00','2020-03-07 13:00:00') */* from inspection_summary` is the monitoring summary for the `2020-03-07 12:00:00` to `2020-03-07 13:00:00` period. Like the monitoring summary table, you can use the `inspection_summary` table to quickly find the monitoring items with large differences by comparing the data of two different periods. +Both the diagnostic result table and the diagnostic monitoring summary table can specify the diagnostic time range using a hint. For example, `select /*+ time_range('2020-03-07 12:00:00','2020-03-07 13:00:00') */* from inspection_summary` returns the monitoring summary for the `2020-03-07 12:00:00` to `2020-03-07 13:00:00` period. Like the monitoring summary table, you can use the `inspection_summary` table to quickly find the monitoring items with large differences by comparing the data of two different periods.
The following example compares the monitoring metrics of read links in two time periods: * `(2020-01-16 16:00:54.933, 2020-01-16 16:10:54.933)` -* `(2020-01-16 16:10:54.933, 2020-01-16 16:20:54.933)` +* `(2020-01-16 16:10:54.933, 2020-01-16 16:20:54.933)` {{< copyable "sql" >}} diff --git a/system-tables/system-table-metrics-schema.md b/system-tables/system-table-metrics-schema.md index c1dd0aba925e9..6a61d08df5d93 100644 --- a/system-tables/system-table-metrics-schema.md +++ b/system-tables/system-table-metrics-schema.md @@ -7,7 +7,7 @@ aliases: ['/docs/dev/reference/system-databases/metrics-schema/'] # Metrics Schema -To dynamically observe and compare cluster conditions of different time ranges, the SQL diagnosis system introduces cluster monitoring system tables. All monitoring tables are in the `metrics_schema` database. You can query the monitoring information using SQL statements in this schema. The data of the three monitoring-related summary tables ([`metrics_summary`](/system-tables/system-table-metrics-summary.md), [`metrics_summary_by_label`](/system-tables/system-table-metrics-summary.md), and `inspection_result`) are all obtained by querying the monitoring tables in the metrics schema. Currently, many system tables are added, so you can query the information of these tables using the [`information_schema.metrics_tables`](/system-tables/system-table-metrics-tables.md) table. +To dynamically observe and compare cluster conditions of different time ranges, the SQL diagnostic system introduces cluster monitoring system tables. All monitoring tables are in the `metrics_schema` database. You can query the monitoring information using SQL statements in this schema. 
The data of the three monitoring-related summary tables ([`metrics_summary`](/system-tables/system-table-metrics-summary.md), [`metrics_summary_by_label`](/system-tables/system-table-metrics-summary.md), and `inspection_result`) are all obtained by querying the monitoring tables in the metrics schema. Many system tables have been added, so you can query the information of these tables using the [`information_schema.metrics_tables`](/system-tables/system-table-metrics-tables.md) table. ## Overview diff --git a/system-tables/system-table-sql-diagnosis.md b/system-tables/system-table-sql-diagnosis.md deleted file mode 100644 index 13f916944bffd..0000000000000 --- a/system-tables/system-table-sql-diagnosis.md +++ /dev/null @@ -1,62 +0,0 @@ ---- -title: SQL Diagnosis -summary: Understand SQL diagnosis in TiDB. -category: reference -aliases: ['/docs/dev/reference/system-databases/sql-diagnosis/'] ---- - -# SQL Diagnosis - -> **Warning:** -> -> SQL diagnosis is still an experimental feature. It is **NOT** recommended that you use it in the production environment. - -SQL diagnosis is a feature introduced in TiDB v4.0. You can use this feature to locate problems in TiDB with higher efficiency. Before TiDB v4.0, you need to use different tools to obtain different information. - -The SQL diagnosis system has the following advantages: - -+ It integrates information from all components of the system as a whole. -+ It provides a consistent interface to the upper layer through system tables. -+ It provides monitoring summaries and automatic diagnosis. -+ You will find it easier to query cluster information. - -## Overview - -The SQL diagnosis system consists of three major parts: - -+ **Cluster information table**: The SQL diagnosis system introduces cluster information tables that provide a unified way to get the discrete information of each instance.
This system fully integrates the cluster topology, hardware information, software information, kernel parameters, monitoring, system information, slow queries, statements, and logs of the entire cluster into the table. So you can query these information using SQL statements. - -+ **Cluster monitoring table**: The SQL diagnosis system introduces cluster monitoring tables. All of these tables are in `metrics_schema`, and you can query monitoring information using SQL statements. Compared to the visualized monitoring before v4.0, you can use this SQL-based method to perform correlated queries on all the monitoring information of the entire cluster, and compare the results of different time periods to quickly identify performance bottlenecks. Because the TiDB cluster has many monitoring metrics, the SQL diagnosis system also provides monitoring summary tables, so you can find abnormal monitoring items more easily. - -+ **Automatic diagnosis**: Although you can manually execute SQL statements to query cluster information tables, cluster monitoring tables, and summary tables to locate issues, the automatic diagnosis allows you to quickly locate common issues. The SQL diagnosis system performs automatic diagnosis based on the existing cluster information tables and monitoring tables, and provides relevant diagnosis result tables and diagnosis summary tables. - -## Cluster information tables - -The cluster information tables bring together the information of all instances and instances in a cluster. With these tables, you can query all cluster information using only one SQL statement. The following is a list of cluster information tables: - -+ From the cluster topology table [`information_schema.cluster_info`](/system-tables/system-table-cluster-info.md), you can get the current topology information of the cluster, the version of each instance, the Git Hash corresponding to the version, the starting time of each instance, and the running time of each instance. 
-+ From the cluster configuration table [`information_schema.cluster_config`](/system-tables/system-table-cluster-config.md), you can get the configuration of all instances in the cluster. For versions earlier than 4.0, you need to access the HTTP API of each instance one by one to get these configuration information. -+ On the cluster hardware table [`information_schema.cluster_hardware`](/system-tables/system-table-cluster-hardware.md), you can quickly query the cluster hardware information. -+ On the cluster load table [`information_schema.cluster_load`](/system-tables/system-table-cluster-load.md), you can query the load information of different instances and hardware types of the cluster. -+ On the kernel parameter table [`information_schema.cluster_systeminfo`](/system-tables/system-table-cluster-systeminfo.md), you can query the kernel configuration information of different instances in the cluster. Currently, TiDB supports querying the sysctl information. -+ On the cluster log table [`information_schema.cluster_log`](/system-tables/system-table-cluster-log.md), you can query cluster logs. By pushing down query conditions to each instance, the impact of the query on cluster performance is less than that of the `grep` command. - -On the system tables earlier than TiDB v4.0, you can only view the current instance. TiDB v4.0 introduces the corresponding cluster tables and you can have a global view of the entire cluster on a single TiDB instance. These tables are currently in `information_schema`, and the query method is the same as other `information_schema` system tables. - -## Cluster monitoring tables - -To dynamically observe and compare cluster conditions in different time periods, the SQL diagnosis system introduces cluster monitoring system tables. All monitoring tables are in `metrics_schema`, and you can query the monitoring information using SQL statements. 
Using this method, you can perform correlated queries on all monitoring information of the entire cluster and compare the results of different time periods to quickly identify performance bottlenecks. - -+ [`information_schema.metrics_tables`](/system-tables/system-table-metrics-tables.md)): Because many system tables exist now, you can query meta-information of these monitoring tables on the `information_schema.metrics_tables` table. - -Because the TiDB cluster has many monitoring metrics, TiDB provides the following monitoring summary tables in v4.0: - -+ The monitoring summary table [`information_schema.metrics_summary`](/system-tables/system-table-metrics-summary.md) summarizes all monitoring data to for you to check each monitoring metric with higher efficiency. -+ [`information_schema.metrics_summary_by_label`](/system-tables/system-table-metrics-summary.md)) also summarizes all monitoring data. Particularly, this table aggregates statistics using different labels of each monitoring metric. - -## Automatic diagnosis - -On the above cluster information tables and cluster monitoring tables, you need to manually execute SQL statements to troubleshoot the cluster. TiDB v4.0 supports the automatic diagnosis. You can use diagnosis-related system tables based on the existing basic information tables, so that the diagnosis is automatically executed. The following are the system tables related to the automatic diagnosis: - -+ The diagnosis result table [`information_schema.inspection_result`](/system-tables/system-table-inspection-result.md) displays the diagnosis result of the system. The diagnosis is passively triggered. Executing `select * from inspection_result` triggers all diagnostic rules to diagnose the system, and the faults or risks in the system are displayed in the results. 
-+ The diagnosis summary table [`information_schema.inspection_summary`](/system-tables/system-table-inspection-summary.md) summarizes the monitoring information of a specific link or module. You can troubleshoot and locate problems based on the context of the entire module or link. diff --git a/system-tables/system-table-sql-diagnostics.md b/system-tables/system-table-sql-diagnostics.md new file mode 100644 index 0000000000000..4561c6264e4bd --- /dev/null +++ b/system-tables/system-table-sql-diagnostics.md @@ -0,0 +1,62 @@ +--- +title: SQL Diagnostics +summary: Understand SQL diagnostics in TiDB. +category: reference +aliases: ['/docs/dev/reference/system-databases/sql-diagnosis/','/docs/dev/system-tables/system-table-sql-diagnosis/'] +--- + +# SQL Diagnostics + +> **Warning:** +> +> SQL diagnostics is still an experimental feature. It is **NOT** recommended that you use it in the production environment. + +SQL diagnostics is a feature introduced in TiDB v4.0. You can use this feature to locate problems in TiDB with higher efficiency. Before TiDB v4.0, you needed to use different tools to obtain different information. + +The SQL diagnostic system has the following advantages: + ++ It integrates information from all components of the system as a whole. ++ It provides a consistent interface to the upper layer through system tables. ++ It provides monitoring summaries and automatic diagnostics. ++ You will find it easier to query cluster information. + +## Overview + +The SQL diagnostic system consists of three major parts: + ++ **Cluster information table**: The SQL diagnostic system introduces cluster information tables that provide a unified way to get the discrete information of each instance. This system fully integrates the cluster topology, hardware information, software information, kernel parameters, monitoring, system information, slow queries, statements, and logs of the entire cluster into the table. So you can query this information using SQL statements.
+ ++ **Cluster monitoring table**: The SQL diagnostic system introduces cluster monitoring tables. All of these tables are in `metrics_schema`, and you can query monitoring information using SQL statements. Compared to the visualized monitoring before v4.0, you can use this SQL-based method to perform correlated queries on all the monitoring information of the entire cluster, and compare the results of different time periods to quickly identify performance bottlenecks. Because the TiDB cluster has many monitoring metrics, the SQL diagnostic system also provides monitoring summary tables, so you can find abnormal monitoring items more easily. + ++ **Automatic diagnostics**: Although you can manually execute SQL statements to query cluster information tables, cluster monitoring tables, and summary tables to locate issues, the automatic diagnostics allows you to quickly locate common issues. The SQL diagnostic system performs automatic diagnostics based on the existing cluster information tables and monitoring tables, and provides relevant diagnostic result tables and diagnostic summary tables. + +## Cluster information tables + +The cluster information tables bring together the information of all instances in a cluster. With these tables, you can query all cluster information using only one SQL statement. The following is a list of cluster information tables: + ++ From the cluster topology table [`information_schema.cluster_info`](/system-tables/system-table-cluster-info.md), you can get the current topology information of the cluster, the version of each instance, the Git Hash corresponding to the version, the starting time of each instance, and the running time of each instance. ++ From the cluster configuration table [`information_schema.cluster_config`](/system-tables/system-table-cluster-config.md), you can get the configuration of all instances in the cluster.
For versions earlier than 4.0, you need to access the HTTP API of each instance one by one to get this configuration information. ++ On the cluster hardware table [`information_schema.cluster_hardware`](/system-tables/system-table-cluster-hardware.md), you can quickly query the cluster hardware information. ++ On the cluster load table [`information_schema.cluster_load`](/system-tables/system-table-cluster-load.md), you can query the load information of different instances and hardware types of the cluster. ++ On the kernel parameter table [`information_schema.cluster_systeminfo`](/system-tables/system-table-cluster-systeminfo.md), you can query the kernel configuration information of different instances in the cluster. Currently, TiDB supports querying the sysctl information. ++ On the cluster log table [`information_schema.cluster_log`](/system-tables/system-table-cluster-log.md), you can query cluster logs. By pushing down query conditions to each instance, the impact of the query on cluster performance is less than that of the `grep` command. + +With the system tables earlier than TiDB v4.0, you can only view information about the current instance. TiDB v4.0 introduces the corresponding cluster tables, so you can have a global view of the entire cluster on a single TiDB instance. These tables are currently in `information_schema`, and the query method is the same as other `information_schema` system tables. + +## Cluster monitoring tables + +To dynamically observe and compare cluster conditions in different time periods, the SQL diagnostic system introduces cluster monitoring system tables. All monitoring tables are in `metrics_schema`, and you can query the monitoring information using SQL statements. Using this method, you can perform correlated queries on all monitoring information of the entire cluster and compare the results of different time periods to quickly identify performance bottlenecks.
+ ++ [`information_schema.metrics_tables`](/system-tables/system-table-metrics-tables.md): Because many monitoring tables now exist, you can query the meta-information of these tables in the `information_schema.metrics_tables` table. + +Because the TiDB cluster has many monitoring metrics, TiDB provides the following monitoring summary tables in v4.0: + ++ The monitoring summary table [`information_schema.metrics_summary`](/system-tables/system-table-metrics-summary.md) summarizes all monitoring data so that you can check each monitoring metric with higher efficiency. ++ [`information_schema.metrics_summary_by_label`](/system-tables/system-table-metrics-summary.md) also summarizes all monitoring data. Particularly, this table aggregates statistics using different labels of each monitoring metric. + +## Automatic diagnostics + +With the cluster information tables and cluster monitoring tables above, you need to manually execute SQL statements to troubleshoot the cluster. TiDB v4.0 supports the automatic diagnostics. You can use diagnostic-related system tables based on the existing basic information tables, so that diagnostics are executed automatically. The following are the system tables related to the automatic diagnostics: + ++ The diagnostic result table [`information_schema.inspection_result`](/system-tables/system-table-inspection-result.md) displays the diagnostic result of the system. Diagnostics are triggered passively. Executing `select * from inspection_result` triggers all diagnostic rules to diagnose the system, and the faults or risks in the system are displayed in the results. ++ The diagnostic summary table [`information_schema.inspection_summary`](/system-tables/system-table-inspection-summary.md) summarizes the monitoring information of a specific link or module. You can troubleshoot and locate problems based on the context of the entire module or link.
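The cluster information tables described above are queried like any other `information_schema` table. A minimal sketch (the exact rows returned depend on your deployment):

{{< copyable "sql" >}}

```sql
-- Compare one configuration item across every instance of the cluster
select `type`, `instance`, `key`, `value`
from information_schema.cluster_config
where `key` = 'log.slow-threshold';
```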
diff --git a/whats-new-in-tidb-4.0.md b/whats-new-in-tidb-4.0.md index 34f50b892ca74..634ac2d070f85 100644 --- a/whats-new-in-tidb-4.0.md +++ b/whats-new-in-tidb-4.0.md @@ -59,7 +59,7 @@ TiUP is a new package manager tool introduced in v4.0 that is used to manage all - Support using the Index Merge feature to access tables. When you make a query on a single table, the TiDB optimizer automatically reads multiple index data according to the query condition and makes a union of the result, which improves the performance of querying on a single table. See [Index Merge](/index-merge.md) for details. - Support the expression index feature (**experimental**). The expression index is also called the function-based index. When you create an index, the index fields do not have to be a specific column but can be an expression calculated from one or more columns. This feature is useful for quickly accessing the calculation-based tables. See [Expression index](/sql-statements/sql-statement-create-index.md) for details. - Support `AUTO_RANDOM` keys as an extended syntax for the TiDB columnar attribute (**experimental**). `AUTO_RANDOM` is designed to address the hotspot issue caused by the auto-increment column and provides a low-cost migration solution from MySQL for users who work with auto-increment columns. See [`AUTO_RANDOM` Key](/auto-random.md) for details. -- Add system tables that provide information of cluster topology, configuration, logs, hardware, operating systems, and slow queries, which helps DBAs to quickly learn, analyze system metrics. See [SQL Diagnosis](/system-tables/system-table-sql-diagnosis.md) for details. +- Add system tables that provide information of cluster topology, configuration, logs, hardware, operating systems, and slow queries, which helps DBAs quickly learn and analyze system metrics. See [SQL Diagnostics](/system-tables/system-table-sql-diagnostics.md) for details.
- Add system tables that provide information of cluster topology, configuration, logs, hardware, operating systems to help DBAs quickly learn the cluster configuration and status: - The `cluster_info` table that stores the cluster topology information.
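The `cluster_info` table mentioned above gives that global view in one statement. A sketch (the column list follows the topology fields described earlier; verify it against your TiDB version with `desc information_schema.cluster_info`):

{{< copyable "sql" >}}

```sql
-- Topology, versions, and uptime of every instance in the cluster
select type, instance, version, git_hash, start_time, uptime
from information_schema.cluster_info;
```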