
infoschema: add TIDB_CLUSTER_CONFIG virtual table to retrieve all instance config #13063

Merged
merged 10 commits into from Nov 7, 2019
Conversation

@lonng (Contributor) commented Oct 31, 2019

Signed-off-by: Lonng heng@lonng.org

What problem does this PR solve?

In the current version, there is no simple way to get cluster configuration information: we have to use the different HTTP interfaces of the individual components to retrieve it, e.g.:

  • PD: /pd/api/v1/config
  • TiDB/TiKV: /config

Furthermore, there is no convenient way to filter configuration items or aggregate them.

What is changed and how it works?

This PR introduces the information_schema.tidb_cluster_config virtual table, which lets users retrieve the cluster configuration easily via select * from information_schema.tidb_cluster_config.
It also makes filtering easy: select * from information_schema.tidb_cluster_config where type='tikv' and `key` like 'raft%'
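The dotted KEY values in the table (e.g. raftstore.sync-log) suggest that each instance's nested configuration is flattened into path-style keys. A minimal Go sketch of such flattening — purely illustrative, not code from this PR's diff:

```go
package main

import (
	"fmt"
	"sort"
)

// flatten expands a nested configuration map into dotted key/value pairs,
// e.g. {"raftstore": {"sync-log": true}} -> "raftstore.sync-log" = "true".
func flatten(prefix string, in map[string]interface{}, out map[string]string) {
	for k, v := range in {
		key := k
		if prefix != "" {
			key = prefix + "." + k
		}
		if nested, ok := v.(map[string]interface{}); ok {
			flatten(key, nested, out)
			continue
		}
		out[key] = fmt.Sprintf("%v", v)
	}
}

func main() {
	cfg := map[string]interface{}{
		"raftstore": map[string]interface{}{
			"sync-log":        true,
			"store-pool-size": 2,
		},
	}
	out := map[string]string{}
	flatten("", cfg, out)

	// Print keys in sorted order for deterministic output.
	keys := make([]string, 0, len(out))
	for k := range out {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%s = %s\n", k, out[k])
	}
}
```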

Check List

Tests

  • Unit test
  • Manual test
mysql> select * from information_schema.tidb_cluster_config where `key` like 'raftstore%';
+------+------+--------+-----------------+------------------------------------------------+---------------------------------------------------------+
| ID   | TYPE | NAME   | ADDRESS         | KEY                                            | VALUE                                                   |
+------+------+--------+-----------------+------------------------------------------------+---------------------------------------------------------+
|  298 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.abnormal-leader-missing-duration     | 10m                                                     |
|  299 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.allow-remove-leader                  | false                                                   |
|  300 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.apply-max-batch-size                 | 1024                                                    |
|  301 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.apply-pool-size                      | 2                                                       |
|  302 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.capacity                             | 0KiB                                                    |
|  303 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.clean-stale-peer-delay               | 11m                                                     |
|  304 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.cleanup-import-sst-interval          | 10m                                                     |
|  305 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.consistency-check-interval           | 0s                                                      |
|  306 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.future-poll-size                     | 1                                                       |
|  307 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.hibernate-regions                    | true                                                    |
|  308 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.leader-transfer-max-log-lag          | 10                                                      |
|  309 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.local-read-batch-size                | 1024                                                    |
|  310 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.lock-cf-compact-bytes-threshold      | 256MiB                                                  |
|  311 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.lock-cf-compact-interval             | 10m                                                     |
|  312 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.max-leader-missing-duration          | 2h                                                      |
|  313 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.max-peer-down-duration               | 5m                                                      |
|  314 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.merge-check-tick-interval            | 10s                                                     |
|  315 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.merge-max-log-gap                    | 10                                                      |
|  316 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.messages-per-tick                    | 4096                                                    |
|  317 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.notify-capacity                      | 40960                                                   |
|  318 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.pd-heartbeat-tick-interval           | 1m                                                      |
|  319 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.pd-store-heartbeat-tick-interval     | 10s                                                     |
|  320 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.peer-stale-state-check-interval      | 5m                                                      |
|  321 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.prevote                              | true                                                    |
|  322 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-base-tick-interval              | 1s                                                      |
|  323 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-election-timeout-ticks          | 10                                                      |
|  324 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-entry-cache-life-time           | 30s                                                     |
|  325 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-entry-max-size                  | 8MiB                                                    |
|  326 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-heartbeat-ticks                 | 2                                                       |
|  327 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-log-gc-count-limit              | 73728                                                   |
|  328 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-log-gc-size-limit               | 72MiB                                                   |
|  329 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-log-gc-threshold                | 50                                                      |
|  330 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-log-gc-tick-interval            | 10s                                                     |
|  331 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-max-election-timeout-ticks      | 20                                                      |
|  332 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-max-inflight-msgs               | 256                                                     |
|  333 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-max-size-per-msg                | 1MiB                                                    |
|  334 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-min-election-timeout-ticks      | 10                                                      |
|  335 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-reject-transfer-leader-duration | 3s                                                      |
|  336 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raft-store-max-leader-lease          | 9s                                                      |
|  337 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.raftdb-path                          | /Users/lonng/devel/testkit/debug-cluster/data/tikv/raft |
|  338 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.region-compact-check-interval        | 5m                                                      |
|  339 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.region-compact-check-step            | 100                                                     |
|  340 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.region-compact-min-tombstones        | 10000                                                   |
|  341 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.region-compact-tombstones-percent    | 30                                                      |
|  342 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.region-split-check-diff              | 6MiB                                                    |
|  343 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.report-region-flow-interval          | 1m                                                      |
|  344 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.right-derive-when-split              | true                                                    |
|  345 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.snap-apply-batch-size                | 10MiB                                                   |
|  346 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.snap-gc-timeout                      | 4h                                                      |
|  347 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.snap-mgr-gc-tick-interval            | 1m                                                      |
|  348 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.split-region-check-tick-interval     | 10s                                                     |
|  349 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.store-max-batch-size                 | 1024                                                    |
|  350 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.store-pool-size                      | 2                                                       |
|  351 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.sync-log                             | true                                                    |
|  352 | tikv | tikv-0 | 127.0.0.1:20160 | raftstore.use-delete-range                     | false                                                   |

Release note

  • [feature] Add support for retrieving cluster configuration via select * from information_schema.tidb_cluster_config

codecov bot commented Nov 4, 2019

Codecov Report

Merging #13063 into master will increase coverage by 0.0736%.
The diff coverage is n/a.

@@               Coverage Diff                @@
##             master     #13063        +/-   ##
================================================
+ Coverage   80.1498%   80.2235%   +0.0736%     
================================================
  Files           469        469                
  Lines        111823     112012       +189     
================================================
+ Hits          89626      89860       +234     
+ Misses        15308      15287        -21     
+ Partials       6889       6865        -24

…tances config

Signed-off-by: Lonng <heng@lonng.org>
@djshow832 (Contributor) left a comment

LGTM

@djshow832 djshow832 added the status/LGT1 Indicates that a PR has LGTM 1. label Nov 6, 2019
@@ -1971,6 +1985,145 @@ func dataForTiDBClusterInfo(ctx sessionctx.Context) ([][]types.Datum, error) {
return rows, nil
}

func dataForClusterConfig(ctx sessionctx.Context) ([][]types.Datum, error) {
sql := "SELECT type, name, address, status_address FROM INFORMATION_SCHEMA.TIDB_CLUSTER_INFO ORDER BY type"
rows, _, err := ctx.(sqlexec.RestrictedSQLExecutor).ExecRestrictedSQL(sql)
A reviewer (Contributor) commented:

How about use dataForTiDBClusterInfo directly?

@lonng (Contributor, Author) replied:

Callers cannot assume any ordering of the dataForTiDBClusterInfo results, so I prefer to use an ORDER BY clause instead of sorting the results manually.

@crazycs520 (Contributor) left a comment

LGTM

@lonng (Contributor, Author) commented Nov 7, 2019

/merge

@sre-bot sre-bot added the status/can-merge Indicates a PR has been approved by a committer. label Nov 7, 2019
@sre-bot (Contributor) commented Nov 7, 2019

/run-all-tests

@sre-bot sre-bot merged commit 3696bc5 into pingcap:master Nov 7, 2019
@lonng lonng removed the status/LGT1 Indicates that a PR has LGTM 1. label Nov 7, 2019
@lonng lonng deleted the cluster-config branch November 7, 2019 02:57
XiaTianliang pushed a commit to XiaTianliang/tidb that referenced this pull request Dec 21, 2019
Labels: component/server, status/can-merge