add tiflash label settings example (#10699) (#10717)
ti-chi-bot committed Oct 10, 2022
1 parent c9ccbca commit 627b37b
Showing 5 changed files with 128 additions and 59 deletions.
9 changes: 8 additions & 1 deletion config-templates/complex-tiflash.yaml
@@ -92,7 +92,10 @@ tikv_servers:
    # # The following configs are used to overwrite the `server_configs.tikv` values.
    # config:
    #   server.grpc-concurrency: 4
-   #   server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
+   #   server.labels:
+   #     zone: "zone1"
+   #     dc: "dc1"
+   #     host: "host1"
  - host: 10.0.1.2
  - host: 10.0.1.3

@@ -124,6 +127,10 @@ tiflash_servers:
    # # storage.latest.capacity: [ 161061273600 ]
    # learner_config:
    #   log-level: "info"
    #   server.labels:
    #     zone: "zone2"
    #     dc: "dc2"
    #     host: "host2"
  # - host: 10.0.1.12
  # - host: 10.0.1.13

56 changes: 45 additions & 11 deletions schedule-replicas-by-topology-labels.md
@@ -17,25 +17,25 @@ To make this mechanism effective, you need to properly configure TiKV and PD so

### Configure `labels` for TiKV

You can use the command-line flag or set the TiKV configuration file to bind some attributes in the form of key-value pairs. These attributes are called `labels`. After TiKV is started, it reports its `labels` to PD so users can identify the location of TiKV nodes.
You can use the command-line flag or set the TiKV or TiFlash configuration file to bind some attributes in the form of key-value pairs. These attributes are called `labels`. After TiKV and TiFlash are started, they report their `labels` to PD so users can identify the location of TiKV and TiFlash nodes.

Assume that the topology has three layers: zone > rack > host, and you can use these labels (zone, rack, host) to set the TiKV location in one of the following methods:
Assume that the topology has four layers: zone > data center (dc) > rack > host. You can use these labels (`zone`, `dc`, `rack`, `host`) to set the location of TiKV and TiFlash nodes. To set labels for TiKV and TiFlash, use one of the following methods:

+ Use the command-line flag to start a TiKV instance:

{{< copyable "" >}}

```shell
-tikv-server --labels zone=<zone>,rack=<rack>,host=<host>
+tikv-server --labels zone=<zone>,dc=<dc>,rack=<rack>,host=<host>
```

+ Configure in the TiKV configuration file:

{{< copyable "" >}}

```toml
-[server]
-labels = "zone=<zone>,rack=<rack>,host=<host>"
+[server.labels]
+zone = "<zone>"
+dc = "<dc>"
+rack = "<rack>"
+host = "<host>"
```
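Whichever method you use, after the instance restarts you can confirm that the labels were actually reported to PD. The following is a minimal check with pd-ctl; the PD endpoint `127.0.0.1:2379` and the control component version are placeholders to adjust for your cluster:

```shell
# List all stores known to PD; each store entry includes the labels it reported.
tiup ctl:v6.3.0 pd -u http://127.0.0.1:2379 store
```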

### Configure `location-labels` for PD
@@ -99,9 +99,9 @@ The `location-labels` configuration is an array of strings, which needs to correspond
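For reference, the PD side of this pairing might look as follows in the PD configuration file. This is a sketch assuming the four-layer topology above; the key names must match the keys used in the TiKV and TiFlash `labels`:

```toml
# PD configuration file: declare the label hierarchy from the outermost
# level (zone) to the innermost (host).
[replication]
location-labels = ["zone", "dc", "rack", "host"]
```
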
### Configure a cluster using TiUP (recommended)

When using TiUP to deploy a cluster, you can configure the TiKV location in the [initialization configuration file](/production-deployment-using-tiup.md#step-3-initialize-cluster-topology-file). TiUP will generate the corresponding TiKV and PD configuration files during deployment.
When using TiUP to deploy a cluster, you can configure the TiKV location in the [initialization configuration file](/production-deployment-using-tiup.md#step-3-initialize-cluster-topology-file). TiUP will generate the corresponding configuration files for TiKV, PD, and TiFlash during deployment.

In the following example, a two-layer topology of `zone/host` is defined. The TiKV nodes of the cluster are distributed among three zones, each zone with two hosts. In z1, two TiKV instances are deployed per host. In z2 and z3, one TiKV instance is deployed per host. In the following example, `tikv-n` represents the IP address of the `n`th TiKV node.
In the following example, a two-layer topology of `zone/host` is defined. The TiKV and TiFlash nodes of the cluster are distributed among three zones, z1, z2, and z3, with each zone having four hosts, h1, h2, h3, and h4. In z1, four TiKV instances are deployed on two hosts: `tikv-1` and `tikv-2` on h1, and `tikv-3` and `tikv-4` on h2. Two TiFlash instances are deployed on the other two hosts: `tiflash-1` on h3 and `tiflash-2` on h4. In z2 and z3, two TiKV instances are deployed on two hosts, and two TiFlash instances are deployed on the other two hosts. In this example, `tikv-n` represents the IP address of the `n`th TiKV node, and `tiflash-n` represents the IP address of the `n`th TiFlash node.

```
server_configs:
@@ -152,6 +152,40 @@ tikv_servers:
      server.labels:
        zone: z3
        host: h2
tiflash_servers:
  # z1
  - host: tiflash-1
    learner_config:
      server.labels:
        zone: z1
        host: h3
  - host: tiflash-2
    learner_config:
      server.labels:
        zone: z1
        host: h4
  # z2
  - host: tiflash-3
    learner_config:
      server.labels:
        zone: z2
        host: h3
  - host: tiflash-4
    learner_config:
      server.labels:
        zone: z2
        host: h4
  # z3
  - host: tiflash-5
    learner_config:
      server.labels:
        zone: z3
        host: h3
  - host: tiflash-6
    learner_config:
      server.labels:
        zone: z3
        host: h4
```

For details, see [Geo-distributed Deployment topology](/geo-distributed-deployment-topology.md).
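After deploying such a topology, one way to verify that every store carries the expected labels is to query the `INFORMATION_SCHEMA.TIKV_STORE_STATUS` table. This is an illustrative query only; the exact columns, and whether TiFlash stores appear there, can vary between versions:

```sql
SELECT STORE_ID, ADDRESS, STORE_STATE_NAME, LABEL
FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS;
```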
18 changes: 14 additions & 4 deletions tiflash/create-tiflash-replicas.md
@@ -148,16 +148,26 @@ When configuring replicas, if you need to distribute TiFlash replicas to multipl
```
tiflash_servers:
  - host: 172.16.5.81
    config:
-     flash.proxy.labels: zone=z1
      logger.level: "info"
+   learner_config:
+     server.labels:
+       zone: "z1"
  - host: 172.16.5.82
    config:
-     flash.proxy.labels: zone=z1
      logger.level: "info"
+   learner_config:
+     server.labels:
+       zone: "z1"
  - host: 172.16.5.85
    config:
-     flash.proxy.labels: zone=z2
      logger.level: "info"
+   learner_config:
+     server.labels:
+       zone: "z2"
```

Note that the `flash.proxy.labels` configuration in earlier versions cannot handle special characters in the available zone name correctly. It is recommended to use `server.labels` in `learner_config` to configure the name of an available zone.

2. After starting a cluster, specify the labels when creating replicas.

{{< copyable "sql" >}}
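A statement of this shape creates zone-aware TiFlash replicas; the table name `t` and the replica count `2` below are placeholder values:

```sql
-- "zone" must be the label key configured in `learner_config` above.
ALTER TABLE `t` SET TIFLASH REPLICA 2 LOCATION LABELS "zone";
```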
4 changes: 4 additions & 0 deletions tiflash/tiflash-configuration.md
@@ -248,6 +248,10 @@ delta_index_cache_size = 0

In addition to the items above, other parameters are the same as those of TiKV. Note that the `label` whose key is `engine` is reserved and cannot be configured manually.
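For example, a label section in the TiFlash learner (proxy) configuration can mirror the TiKV form. The following is a sketch with placeholder values; the reserved `engine` key is managed by the system and must not be set here:

```toml
# tiflash-learner configuration (placeholder values)
[server.labels]
zone = "z1"
host = "h3"
```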

### Schedule replicas by topology labels

See [Set available zones](/tiflash/create-tiflash-replicas.md#set-available-zones).

### Multi-disk deployment

TiFlash supports multi-disk deployment. If there are multiple disks in your TiFlash node, you can make full use of those disks by configuring the parameters described in the following sections. For TiFlash's configuration template to be used for TiUP, see [The complex template for the TiFlash topology](https://github.com/pingcap/docs/blob/master/config-templates/complex-tiflash.yaml).
100 changes: 57 additions & 43 deletions tiup/tiup-cluster.md
@@ -90,6 +90,11 @@ tikv_servers:
  - host: 172.16.5.139
  - host: 172.16.5.140

tiflash_servers:
  - host: 172.16.5.141
  - host: 172.16.5.142
  - host: 172.16.5.143

grafana_servers:
  - host: 172.16.5.134

@@ -126,18 +131,21 @@ During the execution, TiUP asks you to confirm your topology again and requires
```bash
Please confirm your topology:
TiDB Cluster: prod-cluster
TiDB Version: v6.1.0
Type Host Ports Directories
---- ---- ----- -----------
pd 172.16.5.134 2379/2380 deploy/pd-2379,data/pd-2379
pd 172.16.5.139 2379/2380 deploy/pd-2379,data/pd-2379
pd 172.16.5.140 2379/2380 deploy/pd-2379,data/pd-2379
tikv 172.16.5.134 20160/20180 deploy/tikv-20160,data/tikv-20160
tikv 172.16.5.139 20160/20180 deploy/tikv-20160,data/tikv-20160
tikv 172.16.5.140 20160/20180 deploy/tikv-20160,data/tikv-20160
tidb 172.16.5.134 4000/10080 deploy/tidb-4000
tidb 172.16.5.139 4000/10080 deploy/tidb-4000
tidb 172.16.5.140 4000/10080 deploy/tidb-4000
TiDB Version: v6.3.0
Type Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 172.16.5.134 2379/2380 linux/x86_64 deploy/pd-2379,data/pd-2379
pd 172.16.5.139 2379/2380 linux/x86_64 deploy/pd-2379,data/pd-2379
pd 172.16.5.140 2379/2380 linux/x86_64 deploy/pd-2379,data/pd-2379
tikv 172.16.5.134 20160/20180 linux/x86_64 deploy/tikv-20160,data/tikv-20160
tikv 172.16.5.139 20160/20180 linux/x86_64 deploy/tikv-20160,data/tikv-20160
tikv 172.16.5.140 20160/20180 linux/x86_64 deploy/tikv-20160,data/tikv-20160
tidb 172.16.5.134 4000/10080 linux/x86_64 deploy/tidb-4000
tidb 172.16.5.139 4000/10080 linux/x86_64 deploy/tidb-4000
tidb 172.16.5.140 4000/10080 linux/x86_64 deploy/tidb-4000
tiflash 172.16.5.141 9000/8123/3930/20170/20292/8234 linux/x86_64 deploy/tiflash-9000,data/tiflash-9000
tiflash 172.16.5.142 9000/8123/3930/20170/20292/8234 linux/x86_64 deploy/tiflash-9000,data/tiflash-9000
tiflash 172.16.5.143 9000/8123/3930/20170/20292/8234 linux/x86_64 deploy/tiflash-9000,data/tiflash-9000
prometheus 172.16.5.134 9090 deploy/prometheus-9090,data/prometheus-9090
grafana 172.16.5.134 3000 deploy/grafana-3000
Attention:
@@ -196,20 +204,23 @@ tiup cluster display prod-cluster
```
Starting /root/.tiup/components/cluster/v1.10.0/cluster display prod-cluster
TiDB Cluster: prod-cluster
TiDB Version: v6.1.0
ID Role Host Ports Status Data Dir Deploy Dir
-- ---- ---- ----- ------ -------- ----------
172.16.5.134:3000 grafana 172.16.5.134 3000 Up - deploy/grafana-3000
172.16.5.134:2379 pd 172.16.5.134 2379/2380 Up|L data/pd-2379 deploy/pd-2379
172.16.5.139:2379 pd 172.16.5.139 2379/2380 Up|UI data/pd-2379 deploy/pd-2379
172.16.5.140:2379 pd 172.16.5.140 2379/2380 Up data/pd-2379 deploy/pd-2379
172.16.5.134:9090 prometheus 172.16.5.134 9090 Up data/prometheus-9090 deploy/prometheus-9090
172.16.5.134:4000 tidb 172.16.5.134 4000/10080 Up - deploy/tidb-4000
172.16.5.139:4000 tidb 172.16.5.139 4000/10080 Up - deploy/tidb-4000
172.16.5.140:4000 tidb 172.16.5.140 4000/10080 Up - deploy/tidb-4000
172.16.5.134:20160 tikv 172.16.5.134 20160/20180 Up data/tikv-20160 deploy/tikv-20160
172.16.5.139:20160 tikv 172.16.5.139 20160/20180 Up data/tikv-20160 deploy/tikv-20160
172.16.5.140:20160 tikv 172.16.5.140 20160/20180 Up data/tikv-20160 deploy/tikv-20160
TiDB Version: v6.3.0
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
172.16.5.134:3000 grafana 172.16.5.134 3000 linux/x86_64 Up - deploy/grafana-3000
172.16.5.134:2379 pd 172.16.5.134 2379/2380 linux/x86_64 Up|L data/pd-2379 deploy/pd-2379
172.16.5.139:2379 pd 172.16.5.139 2379/2380 linux/x86_64 Up|UI data/pd-2379 deploy/pd-2379
172.16.5.140:2379 pd 172.16.5.140 2379/2380 linux/x86_64 Up data/pd-2379 deploy/pd-2379
172.16.5.134:9090 prometheus 172.16.5.134 9090 linux/x86_64 Up data/prometheus-9090 deploy/prometheus-9090
172.16.5.134:4000 tidb 172.16.5.134 4000/10080 linux/x86_64 Up - deploy/tidb-4000
172.16.5.139:4000 tidb 172.16.5.139 4000/10080 linux/x86_64 Up - deploy/tidb-4000
172.16.5.140:4000 tidb 172.16.5.140 4000/10080 linux/x86_64 Up - deploy/tidb-4000
172.16.5.141:9000 tiflash 172.16.5.141 9000/8123/3930/20170/20292/8234 linux/x86_64 Up data/tiflash-9000 deploy/tiflash-9000
172.16.5.142:9000 tiflash 172.16.5.142 9000/8123/3930/20170/20292/8234 linux/x86_64 Up data/tiflash-9000 deploy/tiflash-9000
172.16.5.143:9000 tiflash 172.16.5.143 9000/8123/3930/20170/20292/8234 linux/x86_64 Up data/tiflash-9000 deploy/tiflash-9000
172.16.5.134:20160 tikv 172.16.5.134 20160/20180 linux/x86_64 Up data/tikv-20160 deploy/tikv-20160
172.16.5.139:20160 tikv 172.16.5.139 20160/20180 linux/x86_64 Up data/tikv-20160 deploy/tikv-20160
172.16.5.140:20160 tikv 172.16.5.140 20160/20180 linux/x86_64 Up data/tikv-20160 deploy/tikv-20160
```

The `Status` column uses `Up` or `Down` to indicate whether the service is running normally.
@@ -224,12 +235,12 @@ For the PD component, `|L` or `|UI` might be appended to `Up` or `Down`. `|L` in
Scaling in a cluster means making some node(s) offline. This operation removes the specific node(s) from the cluster and deletes the remaining files.
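For example, to take a single TiKV store offline, a scale-in command looks like the following, where the cluster name and node ID match the examples in this section:

```shell
# The node ID is <host>:<port>, as shown in `tiup cluster display`.
tiup cluster scale-in prod-cluster --node 172.16.5.140:20160
```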

Because the offline process of the TiKV and TiDB Binlog components is asynchronous (which requires removing the node through API), and the process takes a long time (which requires continuous observation on whether the node is successfully taken offline), special treatment is given to the TiKV and TiDB Binlog components.
Because the offline process of the TiKV, TiFlash, and TiDB Binlog components is asynchronous (which requires removing the node through API), and the process takes a long time (which requires continuous observation on whether the node is successfully taken offline), special treatment is given to the TiKV, TiFlash, and TiDB Binlog components.

- For TiKV and Binlog:
- For TiKV, TiFlash, and Binlog:

- TiUP cluster takes the node offline through API and directly exits without waiting for the process to be completed.
- Afterwards, when a command related to the cluster operation is executed, TiUP cluster examines whether there is a TiKV/Binlog node that has been taken offline. If not, TiUP cluster continues with the specified operation; If there is, TiUP cluster takes the following steps:
- Afterwards, when a command related to the cluster operation is executed, TiUP cluster examines whether there is a TiKV, TiFlash, or Binlog node that has been taken offline. If not, TiUP cluster continues with the specified operation; if there is, TiUP cluster takes the following steps:

1. Stop the service of the node that has been taken offline.
2. Clean up the data files related to the node.
@@ -267,20 +278,23 @@ tiup cluster display prod-cluster
```
Starting /root/.tiup/components/cluster/v1.10.0/cluster display prod-cluster
TiDB Cluster: prod-cluster
TiDB Version: v6.1.0
ID Role Host Ports Status Data Dir Deploy Dir
-- ---- ---- ----- ------ -------- ----------
172.16.5.134:3000 grafana 172.16.5.134 3000 Up - deploy/grafana-3000
172.16.5.134:2379 pd 172.16.5.134 2379/2380 Up|L data/pd-2379 deploy/pd-2379
172.16.5.139:2379 pd 172.16.5.139 2379/2380 Up|UI data/pd-2379 deploy/pd-2379
172.16.5.140:2379 pd 172.16.5.140 2379/2380 Up data/pd-2379 deploy/pd-2379
172.16.5.134:9090 prometheus 172.16.5.134 9090 Up data/prometheus-9090 deploy/prometheus-9090
172.16.5.134:4000 tidb 172.16.5.134 4000/10080 Up - deploy/tidb-4000
172.16.5.139:4000 tidb 172.16.5.139 4000/10080 Up - deploy/tidb-4000
172.16.5.140:4000 tidb 172.16.5.140 4000/10080 Up - deploy/tidb-4000
172.16.5.134:20160 tikv 172.16.5.134 20160/20180 Up data/tikv-20160 deploy/tikv-20160
172.16.5.139:20160 tikv 172.16.5.139 20160/20180 Up data/tikv-20160 deploy/tikv-20160
172.16.5.140:20160 tikv 172.16.5.140 20160/20180 Offline data/tikv-20160 deploy/tikv-20160
TiDB Version: v6.3.0
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
172.16.5.134:3000 grafana 172.16.5.134 3000 linux/x86_64 Up - deploy/grafana-3000
172.16.5.134:2379 pd 172.16.5.134 2379/2380 linux/x86_64 Up|L data/pd-2379 deploy/pd-2379
172.16.5.139:2379 pd 172.16.5.139 2379/2380 linux/x86_64 Up|UI data/pd-2379 deploy/pd-2379
172.16.5.140:2379 pd 172.16.5.140 2379/2380 linux/x86_64 Up data/pd-2379 deploy/pd-2379
172.16.5.134:9090 prometheus 172.16.5.134 9090 linux/x86_64 Up data/prometheus-9090 deploy/prometheus-9090
172.16.5.134:4000 tidb 172.16.5.134 4000/10080 linux/x86_64 Up - deploy/tidb-4000
172.16.5.139:4000 tidb 172.16.5.139 4000/10080 linux/x86_64 Up - deploy/tidb-4000
172.16.5.140:4000 tidb 172.16.5.140 4000/10080 linux/x86_64 Up - deploy/tidb-4000
172.16.5.141:9000 tiflash 172.16.5.141 9000/8123/3930/20170/20292/8234 linux/x86_64 Up data/tiflash-9000 deploy/tiflash-9000
172.16.5.142:9000 tiflash 172.16.5.142 9000/8123/3930/20170/20292/8234 linux/x86_64 Up data/tiflash-9000 deploy/tiflash-9000
172.16.5.143:9000 tiflash 172.16.5.143 9000/8123/3930/20170/20292/8234 linux/x86_64 Up data/tiflash-9000 deploy/tiflash-9000
172.16.5.134:20160 tikv 172.16.5.134 20160/20180 linux/x86_64 Up data/tikv-20160 deploy/tikv-20160
172.16.5.139:20160 tikv 172.16.5.139 20160/20180 linux/x86_64 Up data/tikv-20160 deploy/tikv-20160
172.16.5.140:20160 tikv 172.16.5.140 20160/20180 linux/x86_64 Offline data/tikv-20160 deploy/tikv-20160
```

After PD schedules the data on the node to other TiKV nodes, this node will be deleted automatically.
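If the offline node stays in the `Tombstone` state and you want to clean it up immediately instead of waiting, a prune run can remove it. This is a supplementary sketch, so check the behavior of your TiUP version first:

```shell
# Remove instances that have finished going offline (Tombstone state).
tiup cluster prune prod-cluster
```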
