reference: add 3 metrics system tables #2251
Conversation
# Metrics Schema
> To dynamically observe and compare cluster conditions of different time periods, the SQL diagnosis system introduces cluster monitoring system tables. All monitoring tables are in the metrics schema, and you can query the monitoring information using SQL statements in this schema. In fact, the data of the three monitoring-related summary tables ([`metrics_summary`](/reference/system-databases/metrics-summary.md), [`metrics_summary_by_label`](/reference/system-databases/metrics-summary.md), and `inspection_result`) are obtained by querying the monitoring tables in the metrics schema. Currently, many system tables are added and you can query the information of these tables through the [`information_schema.metrics_tables`](/reference/system-databases/metrics-tables.md) table.

Suggested change:

To dynamically observe and compare cluster conditions of different time ranges, the SQL diagnosis system introduces cluster monitoring system tables. All monitoring tables are in the metrics schema, and you can query the monitoring information using SQL statements in this schema. The data of the three monitoring-related summary tables ([`metrics_summary`](/reference/system-databases/metrics-summary.md), [`metrics_summary_by_label`](/reference/system-databases/metrics-summary.md), and `inspection_result`) are all obtained by querying the monitoring tables in the metrics schema. Currently, many system tables are added, so you can query the information of these tables using the [`information_schema.metrics_tables`](/reference/system-databases/metrics-tables.md) table.
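For example, a quick way to list the monitoring tables currently available (a sketch; the exact table set depends on your TiDB version):

```sql
-- List all monitoring tables in the metrics schema.
USE metrics_schema;
SHOW TABLES;
```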
* `PROMQL`: The working principle of the monitoring table is to map SQL statements to `PromQL` and convert Prometheus results into SQL query results. This field is the expression template of `PromQL`. When getting the data of the monitoring table, the query conditions are used to rewrite the variables in this template to generate the final query expression.
* `LABELS`: The label for the monitoring item. `tidb_query_duration` has two labels: `instance` and `sql_type`.
* `QUANTILE`: The percentile. For monitoring data of the histogram type, a default percentile is specified. If the value of this field is `0`, it means that the monitoring item corresponding to the monitoring table is not a histogram.
* `COMMENT`: The comment for the monitoring table. You can see that the `tidb_query_duration` table is used to query the percentile time of the TiDB query execution, such as the query time of P999/P99/P90. The unit is second.
I see that the Chinese version itself is confusing: 可以看出 tidb_query_duration 表的是用来查询 TiDB query 执行的百分位时间,如 P999/P99/P90 的查询耗时,单位是秒。 (You can see that the `tidb_query_duration` table is used to query the percentile time of TiDB query execution, such as the P999/P99/P90 query latency, in seconds.) @reafans Would you please update 表的是 to a clear way? @TomShawn Please confirm it and update if necessary.

Suggested change:

* `COMMENT`: Explanations for the monitoring table. You can see that the `tidb_query_duration` table is used to query the percentile time of the TiDB query execution, such as the query time of P999/P99/P90. The unit is second.
It should be 可以看出 tidb_query_duration 表是用来查询 TiDB query 执行的百分位时间的. I'll update it in the Chinese version.
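The fields above can be read directly from `information_schema.metrics_tables`; a sketch (the output depends on your TiDB version):

```sql
-- The PromQL template, labels, default quantile, and comment
-- recorded for the tidb_query_duration monitoring table.
SELECT promql, labels, quantile, comment
  FROM information_schema.metrics_tables
 WHERE table_name = 'tidb_query_duration';
```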
> The structure of the `tidb_query_duration` table is queried as follows:

Suggested change:

To query the schema of the `tidb_query_duration` table, execute the following statement:
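For example (a sketch; the exact column list depends on the TiDB version):

```sql
-- Show the columns of the monitoring table; labels such as
-- `instance` and `sql_type` appear as columns.
DESC metrics_schema.tidb_query_duration;
```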
> The first row of the above query result means that at the time of 2020-03-25 23:40:00, on the TiDB instance `172.16.5.40:10089`, the P99 execution time of the `Insert` type statement is 0.509929485256 seconds. The meanings of other rows are similar. Other values of the `sql_type` column is described as follows:

Suggested change:

The first row of the above query result means that at the time of 2020-03-25 23:40:00, on the TiDB instance `172.16.5.40:10089`, the P99 execution time of the `Insert` type statement is 0.509929485256 seconds. The meanings of other rows are similar. Other values of the `sql_type` column are described as follows:
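A query along those lines might look like the following sketch (the instance address and time range are taken from the example above; actual output depends on the cluster):

```sql
-- P99 execution time per statement type on one TiDB instance.
SELECT time, instance, sql_type, value
  FROM metrics_schema.tidb_query_duration
 WHERE instance = '172.16.5.40:10089'
   AND quantile = 0.99
   AND time >= '2020-03-25 23:40:00' AND time <= '2020-03-25 23:42:00';
```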
> From the above result, you can see that `PromQL`, `start_time`, `end_time`, and the value of `step`. During actual execution, TiDB calls the `query_range` HTTP API interface of Prometheus to query the monitoring data.

Suggested change:

From the above result, you can see that `PromQL`, `start_time`, `end_time`, and `step` are in the execution plan. During the execution process, TiDB calls the `query_range` HTTP API of Prometheus to query the monitoring data.
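One way to see these fields (a sketch; the plan output format varies by TiDB version) is to ask for the execution plan of a metrics-schema query:

```sql
-- The plan for a metrics-schema query shows the generated PromQL
-- together with start_time, end_time, and step.
EXPLAIN SELECT * FROM metrics_schema.tidb_query_duration
 WHERE time >= '2020-03-25 23:40:00' AND time <= '2020-03-25 23:42:00';
```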
> You might find that during the range of [`2020-03-25 23:40:00`, `2020-03-25 23:42:00`], each label only has three time values. In the execution plan, the value of `step` is 1 minute, which is determined by the following two variables:

Suggested change:

You might find that in the range of [`2020-03-25 23:40:00`, `2020-03-25 23:42:00`], each label only has three time values. In the execution plan, the value of `step` is 1 minute, which is determined by the following two variables:
> * `tidb_metric_query_step`: The resolution step of the query. To get the `query_range` data from Prometheus, you need to specify `start`, `end`, and `step`. `step` uses the value of this variable.

How about using the similar description in Prometheus's doc? @reafans Please also help confirm. Thanks!

Confirm whether it's `start`/`end` or `start_time`/`end_time` here.

Suggested change:

* `tidb_metric_query_step`: The query resolution step width. To get the `query_range` data from Prometheus, you need to specify `start`, `end`, and `step`. `step` uses the value of this variable.
`start_time`/`end_time` is more accurate. I'll fix it in the Chinese version.
> * `tidb_metric_query_range_duration`: When querying the monitoring, the `$RANGE_DURATION` field in `PROMQL` is replaced with the value of this variable. The default value is 60 seconds.

It seems that the subjects are not consistent.

Suggested change:

* `tidb_metric_query_range_duration`: When the monitoring data is queried, the value of the `$RANGE_DURATION` field in `PROMQL` is replaced with the value of this variable. The default value is 60 seconds.
> To view the values of monitoring items with different granularities, you can modify the above two session variables before querying the monitoring table. For example:

Suggest using "above" this way, which is more commonly used. Please also update other places in this PR.

Suggested change:

To view the values of monitoring items with different granularities, you can modify the two session variables above before querying the monitoring table. For example:
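A sketch of such a query (the variable values and the query itself are illustrative):

```sql
-- Query at a finer granularity: 30-second step and 30-second range.
SET @@tidb_metric_query_step = 30;
SET @@tidb_metric_query_range_duration = 30;

SELECT time, instance, quantile, value
  FROM metrics_schema.tidb_query_duration
 WHERE time >= '2020-03-25 23:40:00' AND time < '2020-03-25 23:42:00';
```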
Sure.
@lilin90 All comments are addressed, PTAL again, thanks!

LGTM
# METRICS_SUMMARY
> Because the TiDB cluster has many monitoring metrics, the SQL diagnosis system also provides the following two monitoring summary tables for you to easily find abnormal monitoring items:

Suggested change:

The TiDB cluster has many monitoring metrics. To make it easy to detect abnormal monitoring metrics, TiDB 4.0 introduces the following two monitoring summary tables:

* `information_schema.metrics_summary`
* `information_schema.metrics_summary_by_label`
> The two tables summarize all monitoring data to for you to check each monitoring metric with higher efficiency. Compare to `information_schema.metrics_summary`, the `information_schema.metrics_summary_by_label` table has an additional `label` column and performs differentiated statistics according to different labels.

Typo and grammar mistake?

Suggested change:

The two tables summarize all monitoring data for you to check each monitoring metric efficiently. Compared with `information_schema.metrics_summary`, the `information_schema.metrics_summary_by_label` table has an additional `label` column and performs differentiated statistics according to different labels.
* `QUANTILE`: The percentile. You can specify `QUANTILE` using SQL statements. For example:
    * `select * from metrics_summary where quantile=0.99` specifies viewing the data of the 0.99 percentile.
    * `select * from metrics_summary where quantile in (0.80, 0.90, 0.99, 0.999)` specifies viewing the data of the 0.8, 0.90, 0.99, 0.999 percentiles at the same time.

> * `SUM_VALUE, AVG_VALUE, MIN_VALUE, and MAX_VALUE` respectively mean the sum, the average value, the minimum value, and the maximum value.

Suggested change:

* `SUM_VALUE`, `AVG_VALUE`, `MIN_VALUE`, and `MAX_VALUE` respectively mean the sum, the average value, the minimum value, and the maximum value.
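For instance, these columns can be read together in one query (a sketch; the `METRICS_NAME` column name is assumed from the table description, and the output depends on the cluster):

```sql
-- Aggregated statistics per monitoring item at two percentiles.
SELECT metrics_name, quantile, sum_value, avg_value, min_value, max_value
  FROM information_schema.metrics_summary
 WHERE quantile IN (0.90, 0.99)
 LIMIT 10;
```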
> To query the three groups of monitoring items with the highest average time consumption in the TiDB cluster in the time range of `'2020-03-08 13:23:00', '2020-03-08 13:33:00'`, you can directly query the `information_schema.metrics_summary` table and use the `/*+ time_range() */` hint to specify the time range. The SQL statement is built as follows:

Suggested change:

To query the three groups of monitoring items with the highest average time consumption in the TiDB cluster within the time range of `'2020-03-08 13:23:00', '2020-03-08 13:33:00'`, you can directly query the `information_schema.metrics_summary` table and use the `/*+ time_range() */` hint to specify the time range. The SQL statement is as follows:
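Such a statement might be sketched as follows (the `LIKE` filter and column names are illustrative assumptions, not the PR's exact query):

```sql
-- The /*+ time_range() */ hint sets the time range for the summary;
-- ordering by avg_value surfaces the most time-consuming items.
SELECT /*+ time_range('2020-03-08 13:23:00', '2020-03-08 13:33:00') */
       metrics_name, avg_value, comment
  FROM information_schema.metrics_summary
 WHERE metrics_name LIKE '%duration%'
 ORDER BY avg_value DESC
 LIMIT 3;
```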
> Similarly, below is an example of querying the `metrics_summary_by_label` monitoring summary table:

Suggested change:

Similarly, the following example queries the `metrics_summary_by_label` monitoring summary table:
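A sketch of such a query (the monitoring item name is a hypothetical example; the extra `label` column distinguishes statistics per label value):

```sql
SELECT /*+ time_range('2020-03-08 13:23:00', '2020-03-08 13:33:00') */
       metrics_name, label, avg_value
  FROM information_schema.metrics_summary_by_label
 WHERE metrics_name = 'tidb_kv_request_duration';
```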
> From the query above result:

Suggested change:

From the query result above, you can get the following information:
* `tikv_cop_total_response_size` (the size of the TiKV Coprocessor request result) in period t2 is 192 times higher than that in period t1.
* `tikv_cop_scan_details` (the scan requested by the TiKV Coprocessor) in period t2 is 105 times higher than that in period t1.
> From the result above, you can see that the Coprocessor request in period t2 is much higher than period t1, which causes TiKV Coprocessor to be overloaded, and there is a `cop task` waiting. It might be that some large queries appear in period t2 that bring more load.

Suggested change:

From the result above, you can see that the Coprocessor requests in period t2 are much more than those in period t1. This causes TiKV Coprocessor to be overloaded, and the `cop task` has to wait. It might be that some large queries appear in period t2 that bring more load.
> In fact, during the entire time period from t1 to t2, the `go-ycsb` pressure test is being run. Then 20 `tpch` queries are run during period t2, so it is the `tpch` queries that cause many Coprocessor requests.

Suggested change:

In fact, during the entire time period from t1 to t2, the `go-ycsb` pressure test is running. Then 20 `tpch` queries are running during period t2. So it is the `tpch` queries that cause many Coprocessor requests.
* `TABLE_NAME`: Corresponds to the table name in `metrics_schema`.
* `PROMQL`: The working principle of the monitoring table is to map SQL statements to `PromQL` and convert Prometheus results into SQL query results. This field is the expression template of `PromQL`. When getting the data of the monitoring table, the query conditions are used to rewrite the variables in this template to generate the final query expression.

> * `LABELS`: The label for the monitoring item. Each label corresponds to a column in the monitoring table. If the SQL statement contains filter of the corresponding column, the corresponding `PromQL` changes accordingly.

Suggested change:

* `LABELS`: The label for the monitoring item. Each label corresponds to a column in the monitoring table. If the SQL statement contains the filter of the corresponding column, the corresponding `PromQL` changes accordingly.
* `QUANTILE`: The percentile. For monitoring data of the histogram type, a default percentile is specified. If the value of this field is `0`, it means that the monitoring item corresponding to the monitoring table is not a histogram.
> * `COMMENT`: The comment for the monitoring table.

Generally, use "about" or "on".

Suggested change:

* `COMMENT`: The comment about the monitoring table.
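As a sketch of the `LABELS` behavior described above (the instance address is illustrative), adding a filter on a label column rewrites the generated `PromQL`:

```sql
-- Without a label filter, the generated PromQL covers all label values.
EXPLAIN SELECT * FROM metrics_schema.tidb_query_duration;

-- A filter on the `instance` label column is pushed into the PromQL expression.
EXPLAIN SELECT * FROM metrics_schema.tidb_query_duration
 WHERE instance = '172.16.5.40:10089';
```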
@lilin90 Comment addressed, PTAL again, thanks!

lilin90 left a comment

LGTM

/run-all-tests

cherry pick to release-4.0 in PR #2394
What is changed, added or deleted? (Required)

Add `metrics_schema`, `metrics_tables`, and `metrics_summary` system tables.

Which TiDB version(s) do your changes apply to? (Required)

If you select two or more versions from above, to trigger the bot to cherry-pick this PR to your desired release version branch(es), you must add corresponding labels such as needs-cherry-pick-4.0, needs-cherry-pick-3.1, needs-cherry-pick-3.0, and needs-cherry-pick-2.1.

What is the related PR or file link(s)?