diff --git a/TOC.md b/TOC.md
index c19785d416126..8b5eefb2e4063 100644
--- a/TOC.md
+++ b/TOC.md
@@ -287,6 +287,8 @@
 - [`metrics_schema`](/reference/system-databases/metrics-schema.md)
 - [`metrics_tables`](/reference/system-databases/metrics-tables.md)
 - [`metrics_summary`](/reference/system-databases/metrics-summary.md)
+ - [`inspection_result`](/reference/system-databases/inspection-result.md)
+ - [`inspection_summary`](/reference/system-databases/inspection-summary.md)
 - [Errors Codes](/reference/error-codes.md)
 - [Supported Client Drivers](/reference/supported-clients.md)
 + Garbage Collection (GC)
diff --git a/reference/system-databases/inspection-result.md b/reference/system-databases/inspection-result.md
new file mode 100644
index 0000000000000..b8db51bbecdcc
--- /dev/null
+++ b/reference/system-databases/inspection-result.md
@@ -0,0 +1,306 @@
+---
+title: INSPECTION_RESULT
+summary: Learn the `INSPECTION_RESULT` diagnosis result table.
+category: reference
+---
+
+# INSPECTION_RESULT
+
+TiDB has some built-in diagnosis rules for detecting faults and hidden issues in the system.
+
+The `INSPECTION_RESULT` diagnosis feature can help you quickly find problems and reduce your repetitive manual work. You can use the `select * from information_schema.inspection_result` statement to trigger the internal diagnosis.
+
+The structure of the `information_schema.inspection_result` diagnosis result table is as follows:
+
+{{< copyable "sql" >}}
+
+```sql
+desc inspection_result;
+```
+
+```sql
++-----------+--------------+------+------+---------+-------+
+| Field     | Type         | Null | Key  | Default | Extra |
++-----------+--------------+------+------+---------+-------+
+| RULE      | varchar(64)  | YES  |      | NULL    |       |
+| ITEM      | varchar(64)  | YES  |      | NULL    |       |
+| TYPE      | varchar(64)  | YES  |      | NULL    |       |
+| INSTANCE  | varchar(64)  | YES  |      | NULL    |       |
+| VALUE     | varchar(64)  | YES  |      | NULL    |       |
+| REFERENCE | varchar(64)  | YES  |      | NULL    |       |
+| SEVERITY  | varchar(64)  | YES  |      | NULL    |       |
+| DETAILS   | varchar(256) | YES  |      | NULL    |       |
++-----------+--------------+------+------+---------+-------+
+8 rows in set (0.00 sec)
+```
+
+Field description:
+
+* `RULE`: The name of the diagnosis rule. Currently, the following rules are available:
+    * `config`: The consistency check of configuration. If the same configuration is inconsistent on different instances, a `warning` diagnosis result is generated.
+    * `version`: The consistency check of version. If the same version is inconsistent on different instances, a `warning` diagnosis result is generated.
+    * `current-load`: If the current system load is too high, the corresponding `warning` diagnosis result is generated.
+    * `critical-error`: Each module of the system defines critical errors. If a critical error exceeds the threshold within the corresponding time period, a `warning` diagnosis result is generated.
+    * `threshold-check`: The diagnosis system checks the thresholds of a large number of metrics. If a threshold is exceeded, the corresponding diagnosis information is generated.
+* `ITEM`: Each rule diagnoses different items. This field indicates the specific diagnosis items corresponding to each rule.
+* `TYPE`: The instance type of the diagnosis. The optional values are `tidb`, `pd`, and `tikv`.
+* `INSTANCE`: The specific address of the diagnosed instance.
+* `VALUE`: The value of a specific diagnosis item.
+* `REFERENCE`: The reference value (threshold value) for this diagnosis item. If the difference between `VALUE` and the threshold is very large, the corresponding diagnosis information is generated.
+* `SEVERITY`: The severity level. The optional values are `warning` and `critical`.
+* `DETAILS`: Diagnosis details, which might also contain SQL statement(s) or document links for further diagnosis.
+
+## Diagnosis example
+
+Diagnose issues currently existing in the cluster.
+
+{{< copyable "sql" >}}
+
+```sql
+select * from inspection_result\G
+```
+
+```sql
+***************************[ 1. row ]***************************
+RULE | config
+ITEM | log.slow-threshold
+TYPE | tidb
+INSTANCE | 172.16.5.40:4000
+VALUE | 0
+REFERENCE | not 0
+SEVERITY | warning
+DETAILS | slow-threshold = 0 will record every query to slow log, it may affect performance
+***************************[ 2. row ]***************************
+RULE | version
+ITEM | git_hash
+TYPE | tidb
+INSTANCE |
+VALUE | inconsistent
+REFERENCE | consistent
+SEVERITY | critical
+DETAILS | the cluster has 2 different tidb version, execute the sql to see more detail: select * from information_schema.cluster_info where type='tidb'
+***************************[ 3. row ]***************************
+RULE | threshold-check
+ITEM | storage-write-duration
+TYPE | tikv
+INSTANCE | 172.16.5.40:23151
+VALUE | 130.417
+REFERENCE | < 0.100
+SEVERITY | warning
+DETAILS | max duration of 172.16.5.40:23151 tikv storage-write-duration was too slow
+***************************[ 4. row ]***************************
+RULE | threshold-check
+ITEM | rocksdb-write-duration
+TYPE | tikv
+INSTANCE | 172.16.5.40:20151
+VALUE | 108.105
+REFERENCE | < 0.100
+SEVERITY | warning
+DETAILS | max duration of 172.16.5.40:20151 tikv rocksdb-write-duration was too slow
+```
+
+The following issues can be detected from the diagnosis result above:
+
+* The first row indicates that TiDB's `log.slow-threshold` value is configured to `0`, which might affect performance.
+* The second row indicates that two different TiDB versions exist in the cluster.
+* The third and fourth rows indicate that the TiKV write delay is too long. The expected delay is no more than 0.1 second, while the actual delay is far longer than expected.
+
+You can also diagnose issues existing within a specified time range, such as from "2020-03-26 00:03:00" to "2020-03-26 00:08:00". To specify the time range, use the `/*+ time_range() */` SQL hint. See the following query example:
+
+{{< copyable "sql" >}}
+
+```sql
+select /*+ time_range("2020-03-26 00:03:00", "2020-03-26 00:08:00") */ * from inspection_result\G
+```
+
+```sql
+***************************[ 1. row ]***************************
+RULE | critical-error
+ITEM | server-down
+TYPE | tidb
+INSTANCE | 172.16.5.40:4009
+VALUE |
+REFERENCE |
+SEVERITY | critical
+DETAILS | tidb 172.16.5.40:4009 restarted at time '2020/03/26 00:05:45.670'
+***************************[ 2. row ]***************************
+RULE | threshold-check
+ITEM | get-token-duration
+TYPE | tidb
+INSTANCE | 172.16.5.40:10089
+VALUE | 0.234
+REFERENCE | < 0.001
+SEVERITY | warning
+DETAILS | max duration of 172.16.5.40:10089 tidb get-token-duration is too slow
+```
+
+The following issues can be detected from the diagnosis result above:
+
+* The first row indicates that the `172.16.5.40:4009` TiDB instance was restarted at `2020/03/26 00:05:45.670`.
+* The second row indicates that the maximum `get-token-duration` time of the `172.16.5.40:10089` TiDB instance is 0.234s, but the expected time is less than 0.001s.
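+
+The time range hint can also be combined with ordinary `WHERE` predicates. The following sketch reuses the time range and the instance address from the example above to narrow the diagnosis to a single TiDB instance:
+
+{{< copyable "sql" >}}
+
+```sql
+-- Diagnose only one TiDB instance within the specified time range
+select /*+ time_range("2020-03-26 00:03:00", "2020-03-26 00:08:00") */ *
+from inspection_result
+where type='tidb' and instance='172.16.5.40:10089';
+```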
+
+You can also specify conditions, for example, to query the `critical` level diagnosis results:
+
+{{< copyable "sql" >}}
+
+```sql
+select * from inspection_result where severity='critical';
+```
+
+Query only the diagnosis result of the `critical-error` rule:
+
+{{< copyable "sql" >}}
+
+```sql
+select * from inspection_result where rule='critical-error';
+```
+
+## Diagnosis rules
+
+The diagnosis module contains a series of rules. These rules query the existing monitoring tables and cluster information tables, and compare the results with preset thresholds. If a result exceeds or falls below its threshold, a `warning` or `critical` result is generated and the corresponding information is provided in the `details` column.
+
+You can see the existing diagnosis rules by querying the `inspection_rules` system table:
+
+{{< copyable "sql" >}}
+
+```sql
+select * from inspection_rules where type='inspection';
+```
+
+```sql
++-----------------+------------+---------+
+| NAME            | TYPE       | COMMENT |
++-----------------+------------+---------+
+| config          | inspection |         |
+| version         | inspection |         |
+| current-load    | inspection |         |
+| critical-error  | inspection |         |
+| threshold-check | inspection |         |
++-----------------+------------+---------+
+```
+
+### `config` diagnosis rule
+
+In the `config` diagnosis rule, the following two checks are executed by querying the `CLUSTER_CONFIG` system table:
+
+* Check whether the configuration values of the same component are consistent across instances. Not all configuration items have this consistency check. The whitelist for the consistency check is as follows:
+
+    ```go
+    // The whitelist of the TiDB configuration consistency check
+    port
+    status.status-port
+    host
+    path
+    advertise-address
+    status.status-port
+    log.file.filename
+    log.slow-query-file
+
+    // The whitelist of the PD configuration consistency check
+    advertise-client-urls
+    advertise-peer-urls
+    client-urls
+    data-dir
+    log-file
+    log.file.filename
+    metric.job
+    name
+    peer-urls
+
+    // The whitelist of the TiKV configuration consistency check
+    server.addr
+    server.advertise-addr
+    server.status-addr
+    log-file
+    raftstore.raftdb-path
+    storage.data-dir
+    ```
+
+* Check whether the values of the following configuration items are as expected.
+
+    | Component | Configuration item | Expected value |
+    | ---- | ---- | ---- |
+    | TiDB | log.slow-threshold | larger than `0` |
+    | TiKV | raftstore.sync-log | `true` |
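+
+To see the results produced by this rule, you can filter the diagnosis result table by the rule name, in the same way as the `critical-error` and `version` examples elsewhere in this document:
+
+{{< copyable "sql" >}}
+
+```sql
+select * from inspection_result where rule='config';
+```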
+
+### `version` diagnosis rule
+
+The `version` diagnosis rule checks whether the version hash of the same component is consistent by querying the `CLUSTER_INFO` system table. See the following example:
+
+{{< copyable "sql" >}}
+
+```sql
+select * from inspection_result where rule='version'\G
+```
+
+```sql
+***************************[ 1. row ]***************************
+RULE | version
+ITEM | git_hash
+TYPE | tidb
+INSTANCE |
+VALUE | inconsistent
+REFERENCE | consistent
+SEVERITY | critical
+DETAILS | the cluster has 2 different tidb versions, execute the sql to see more detail: select * from information_schema.cluster_info where type='tidb'
+```
+
+### `critical-error` diagnosis rule
+
+In the `critical-error` diagnosis rule, the following two checks are executed:
+
+* Detect whether the cluster has the following errors by querying the related monitoring system tables in the metrics schema:
+
+    | Component | Error name | Monitoring table | Error description |
+    | ---- | ---- | ---- | ---- |
+    | TiDB | panic-count | tidb_panic_count_total_count | Panic occurs in TiDB. |
+    | TiDB | binlog-error | tidb_binlog_error_total_count | An error occurs when TiDB writes binlog. |
+    | TiKV | critical-error | tikv_critical_error_total_count | The critical error of TiKV. |
+    | TiKV | scheduler-is-busy | tikv_scheduler_is_busy_total_count | The TiKV scheduler is too busy, which makes TiKV temporarily unavailable. |
+    | TiKV | coprocessor-is-busy | tikv_coprocessor_is_busy_total_count | The TiKV Coprocessor is too busy. |
+    | TiKV | channel-is-full | tikv_channel_full_total_count | The "channel full" error occurs in TiKV. |
+    | TiKV | tikv_engine_write_stall | tikv_engine_write_stall | The "stall" error occurs in TiKV. |
+
+* Check whether any component is restarted by querying the `metrics_schema.up` monitoring table and the `CLUSTER_LOG` system table.
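+
+For example, to check only whether any component was restarted, you can filter this rule by the `server-down` item (a sketch that reuses the rule and item names shown in the earlier time range example):
+
+{{< copyable "sql" >}}
+
+```sql
+select * from inspection_result where rule='critical-error' and item='server-down';
+```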
+
+### `threshold-check` diagnosis rule
+
+The `threshold-check` diagnosis rule checks whether the following metrics in the cluster exceed the threshold by querying the related monitoring system tables in the metrics schema:
+
+| Component | Monitoring metric | Monitoring table | Expected value | Description |
+| :---- | :---- | :---- | :---- | :---- |
+| TiDB | tso-duration | pd_tso_wait_duration | < 50ms | The time it takes to get the transaction TSO timestamp. |
+| TiDB | get-token-duration | tidb_get_token_duration | < 1ms | The time it takes to get the token. The related TiDB configuration item is [`token-limit`](/reference/configuration/tidb-server/configuration.md#token-limit). |
+| TiDB | load-schema-duration | tidb_load_schema_duration | < 1s | The time it takes for TiDB to update and load the schema metadata. |
+| TiKV | scheduler-cmd-duration | tikv_scheduler_command_duration | < 0.1s | The time it takes for TiKV to execute the KV `cmd` request. |
+| TiKV | handle-snapshot-duration | tikv_handle_snapshot_duration | < 30s | The time it takes for TiKV to handle the snapshot. |
+| TiKV | storage-write-duration | tikv_storage_async_request_duration | < 0.1s | The write latency of TiKV. |
+| TiKV | storage-snapshot-duration | tikv_storage_async_request_duration | < 50ms | The time it takes for TiKV to get the snapshot. |
+| TiKV | rocksdb-write-duration | tikv_engine_write_duration | < 100ms | The write latency of TiKV RocksDB. |
+| TiKV | rocksdb-get-duration | tikv_engine_max_get_duration | < 50ms | The read latency of TiKV RocksDB. |
+| TiKV | rocksdb-seek-duration | tikv_engine_max_seek_duration | < 50ms | The latency of TiKV RocksDB to execute `seek`. |
+| TiKV | scheduler-pending-cmd-coun | tikv_scheduler_pending_commands | < 1000 | The number of commands stalled in TiKV. |
+| TiKV | index-block-cache-hit | tikv_block_index_cache_hit | > 0.95 | The hit rate of index block cache in TiKV. |
+| TiKV | filter-block-cache-hit | tikv_block_filter_cache_hit | > 0.95 | The hit rate of filter block cache in TiKV. |
+| TiKV | data-block-cache-hit | tikv_block_data_cache_hit | > 0.80 | The hit rate of data block cache in TiKV. |
+| TiKV | leader-score-balance | pd_scheduler_store_status | < 0.05 | Checks whether the leader score of each TiKV instance is balanced. The expected difference between instances is less than 5%. |
+| TiKV | region-score-balance | pd_scheduler_store_status | < 0.05 | Checks whether the Region score of each TiKV instance is balanced. The expected difference between instances is less than 5%. |
+| TiKV | store-available-balance | pd_scheduler_store_status | < 0.2 | Checks whether the available storage of each TiKV instance is balanced. The expected difference between instances is less than 20%. |
+| TiKV | region-count | pd_scheduler_store_status | < 20000 | Checks the number of Regions on each TiKV instance. The expected number of Regions in a single instance is less than 20,000. |
+| PD | region-health | pd_region_health | < 100 | Detects the number of Regions that are in the process of scheduling in the cluster. The expected number is less than 100 in total. |
+
+In addition, this rule also checks whether the CPU usage of the following threads in a TiKV instance is too high:
+
+* scheduler-worker-cpu
+* coprocessor-normal-cpu
+* coprocessor-high-cpu
+* coprocessor-low-cpu
+* grpc-cpu
+* raftstore-cpu
+* apply-cpu
+* storage-readpool-normal-cpu
+* storage-readpool-high-cpu
+* storage-readpool-low-cpu
+* split-check-cpu
+
+The built-in diagnosis rules are constantly being improved. If you want to contribute more diagnosis rules, you are welcome to create a PR or an issue in the [`tidb` repository](https://github.com/pingcap/tidb).
diff --git a/reference/system-databases/inspection-summary.md b/reference/system-databases/inspection-summary.md
new file mode 100644
index 0000000000000..833feb551d95b
--- /dev/null
+++ b/reference/system-databases/inspection-summary.md
@@ -0,0 +1,75 @@
+---
+title: INSPECTION_SUMMARY
+summary: Learn the `INSPECTION_SUMMARY` inspection summary table.
+category: reference
+---
+
+# INSPECTION_SUMMARY
+
+In some scenarios, you might need to pay attention only to the monitoring summary of specific links or modules. For example, if the number of Coprocessor threads in the thread pool is configured as 8 and the CPU usage of Coprocessor reaches 750%, you can determine in advance that a risk exists and that Coprocessor might become a bottleneck. However, some monitoring metrics vary greatly with different user workloads, so it is difficult to define specific thresholds for them. Troubleshooting is still important in this scenario, so TiDB provides the `inspection_summary` table for summarizing monitoring data by link.
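+
+For example, a minimal query of this table might look as follows (the `read-link` rule name is only an illustration taken from the examples later in this document; as described below, a `rule` predicate is required for the summary to be computed):
+
+{{< copyable "sql" >}}
+
+```sql
+select * from inspection_summary where rule='read-link';
+```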
+
+The structure of the `information_schema.inspection_summary` inspection summary table is as follows:
+
+{{< copyable "sql" >}}
+
+```sql
+desc inspection_summary;
+```
+
+```sql
++--------------+-----------------------+------+------+---------+-------+
+| Field        | Type                  | Null | Key  | Default | Extra |
++--------------+-----------------------+------+------+---------+-------+
+| RULE         | varchar(64)           | YES  |      | NULL    |       |
+| INSTANCE     | varchar(64)           | YES  |      | NULL    |       |
+| METRICS_NAME | varchar(64)           | YES  |      | NULL    |       |
+| LABEL        | varchar(64)           | YES  |      | NULL    |       |
+| QUANTILE     | double unsigned       | YES  |      | NULL    |       |
+| AVG_VALUE    | double(22,6) unsigned | YES  |      | NULL    |       |
+| MIN_VALUE    | double(22,6) unsigned | YES  |      | NULL    |       |
+| MAX_VALUE    | double(22,6) unsigned | YES  |      | NULL    |       |
++--------------+-----------------------+------+------+---------+-------+
+```
+
+Field description:
+
+* `RULE`: Summary rules. Because new rules are being added continuously, you can execute the `select * from inspection_rules where type='summary'` statement to query the latest rule list.
+* `INSTANCE`: The monitored instance.
+* `METRICS_NAME`: The monitoring metric name.
+* `LABEL`: The label of the monitoring metric.
+* `QUANTILE`: Takes effect on monitoring tables that contain `QUANTILE`. You can specify multiple percentiles by pushing down predicates. For example, you can execute `select * from inspection_summary where rule='ddl' and quantile in (0.80, 0.90, 0.99, 0.999)` to summarize the DDL-related monitoring metrics and query the P80/P90/P99/P999 results.
+* `AVG_VALUE`, `MIN_VALUE`, and `MAX_VALUE` respectively indicate the average value, minimum value, and maximum value of the aggregation.
+
+> **Note:**
+>
+> Because summarizing all results causes overhead, the rules in `inspection_summary` are triggered passively. That is, a specified `rule` runs only when it appears in the SQL predicate. For example, executing the `select * from inspection_summary` statement returns an empty result set. Executing `select * from inspection_summary where rule in ('read-link', 'ddl')` summarizes the read link and DDL-related monitoring metrics.
+
+Usage example:
+
+Both the diagnosis result table and the diagnosis monitoring summary table can specify the diagnosis time range using a hint. For example, `select /*+ time_range('2020-03-07 12:00:00','2020-03-07 13:00:00') */ * from inspection_summary` returns the monitoring summary for the period from `2020-03-07 12:00:00` to `2020-03-07 13:00:00`. Like the diagnosis result table, you can use the monitoring summary table to quickly find the monitoring items with large differences by comparing the data of two different periods.
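+
+For instance, a sketch that combines the time range hint from the paragraph above with the `ddl` rule and the percentiles mentioned in the field description:
+
+{{< copyable "sql" >}}
+
+```sql
+-- Summarize DDL-related metrics for one hour and return the P80/P90/P99/P999 results
+select /*+ time_range('2020-03-07 12:00:00','2020-03-07 13:00:00') */ *
+from inspection_summary
+where rule='ddl' and quantile in (0.80, 0.90, 0.99, 0.999);
+```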
+
+The following example compares the monitoring summary of the `read-link` rule between two time ranges, "2020-01-16 16:00:54.933" to "2020-01-16 16:10:54.933" and "2020-01-16 16:10:54.933" to "2020-01-16 16:20:54.933", and sorts the metrics by the ratio of their average values:
+
+{{< copyable "sql" >}}
+
+```sql
+SELECT
+    t1.avg_value / t2.avg_value AS ratio,
+    t1.*,
+    t2.*
+FROM
+    (
+        SELECT
+            /*+ time_range("2020-01-16 16:00:54.933", "2020-01-16 16:10:54.933")*/ *
+        FROM inspection_summary WHERE rule='read-link'
+    ) t1
+    JOIN
+    (
+        SELECT
+            /*+ time_range("2020-01-16 16:10:54.933","2020-01-16 16:20:54.933")*/ *
+        FROM inspection_summary WHERE rule='read-link'
+    ) t2
+    ON t1.metrics_name = t2.metrics_name
+    and t1.instance = t2.instance
+    and t1.label = t2.label
+ORDER BY
+    ratio DESC;
+```
diff --git a/reference/system-databases/sql-diagnosis.md b/reference/system-databases/sql-diagnosis.md
index e08ae1b9b25b7..683a60c4a5d53 100644
--- a/reference/system-databases/sql-diagnosis.md
+++ b/reference/system-databases/sql-diagnosis.md
@@ -53,5 +53,5 @@
 
 On the above cluster information tables and cluster monitoring tables, you need to manually execute SQL statements of a certain mode to troubleshoot the cluster. To improve user experience, TiDB provides diagnosis-related system tables based on the existing basic information tables, so that the diagnosis is automatically executed. The following are the system tables related to the automatic diagnosis:
 
-+ The diagnosis result table `information_schema.inspection_result` displays the diagnosis result of the system. The diagnosis is passively triggered. Executing `select * from inspection_result` triggers all diagnostic rules to diagnose the system, and the faults or risks in the system are displayed in the results.
-+ The diagnosis summary table `information_schema.inspection_summary` summarizes the monitoring information of a specific link or module. You can troubleshoot and locate problems based on the context of the entire module or link.
++ The diagnosis result table [`information_schema.inspection_result`](/reference/system-databases/inspection-result.md) displays the diagnosis result of the system. The diagnosis is passively triggered. Executing `select * from inspection_result` triggers all diagnostic rules to diagnose the system, and the faults or risks in the system are displayed in the results.
++ The diagnosis summary table [`information_schema.inspection_summary`](/reference/system-databases/inspection-summary.md) summarizes the monitoring information of a specific link or module. You can troubleshoot and locate problems based on the context of the entire module or link.