tools: highlight snippets missing the INI, TOML etc tags (#762)

kennytm authored and lilin90 committed Nov 26, 2018
1 parent 1869e1f commit 2937beb731ea45daae8d36aa4d0f6d2ec91805da
@@ -282,15 +282,15 @@ Edit the `deploy_dir` variable to configure the deployment directory.
The global variable is set to `/home/tidb/deploy` by default, and it applies to all services. If the data disk is mounted on the `/data1` directory, you can set it to `/data1/dm`. For example:
-```bash
+```ini
## Global variables
[all:vars]
deploy_dir = /data1/dm
```
If you need to set a separate deployment directory for a service, you can configure the host variable when you configure the service host list in the `inventory.ini` file. You must add an alias in the first column to avoid confusion when multiple services are deployed together.
-```bash
+```ini
dm-master ansible_host=172.16.10.71 deploy_dir=/data1/deploy
```
@@ -138,7 +138,7 @@ To combine tables, start the `route-rules` parameter in the configuration file o
- To use the table combination function, you must fill in `pattern-schema` and `target-schema`.
- If `pattern-table` and `target-table` are NULL, the table name is neither combined nor converted.
-```
+```toml
[[route-rules]]
pattern-schema = "example_db"
pattern-table = "table_*"
@@ -48,7 +48,7 @@ Usage of Reparo:
### Description of the configuration file
-```
+```toml
# The storage directory for the binlog file in the protobuf format that Drainer outputs
data-dir = "./data.drainer"
@@ -432,7 +432,7 @@ target-table = "order_2017"
Check the binlog information using the following statement:
-```
+```sql
show binlog events in 'mysql-bin.000023' from 136676560 limit 10;
```
@@ -108,14 +108,14 @@ It is recommended to deploy TiDB-Binlog using TiDB-Ansible. If you just want to
1. Set `enable_binlog = True` to start `binlog` of the TiDB cluster.
-```
+```ini
## binlog trigger
enable_binlog = True
```
2. Add the deployment machine IPs for `pump_servers`.
-```
+```ini
## Binlog Part
[pump_servers]
172.16.10.72
@@ -125,7 +125,7 @@ It is recommended to deploy TiDB-Binlog using TiDB-Ansible. If you just want to
Pump retains the data of the latest 5 days by default. To change this, modify the value of the `gc` variable in the `tidb-ansible/conf/pump.yml` file and remove the related comments. For example, to set the value to 7:
-```
+```yaml
global:
# an integer value to control the expiry date of the binlog data, which indicates for how long (in days) the binlog data would be stored
# must be bigger than 0
@@ -134,7 +134,7 @@ It is recommended to deploy TiDB-Binlog using TiDB-Ansible. If you just want to
Make sure the deployment directory has enough space to store the binlog. For more details, see [Configure the deployment directory](../op-guide/ansible-deployment.md#configure-the-deployment-directory). You can also set a separate deployment directory for Pump.
-```
+```ini
## Binlog Part
[pump_servers]
pump1 ansible_host=172.16.10.72 deploy_dir=/data1/pump
@@ -187,14 +187,14 @@ It is recommended to deploy TiDB-Binlog using TiDB-Ansible. If you just want to
- Assume that the downstream is MySQL with the alias `drainer_mysql`:
-```
+```ini
[drainer_servers]
drainer_mysql ansible_host=172.16.10.71 initial_commit_ts="402899541671542785"
```
- Assume that the downstream is `pb` with the alias `drainer_pb`:
-```
+```ini
[drainer_servers]
drainer_pb ansible_host=172.16.10.71 initial_commit_ts="402899541671542785"
```
@@ -213,7 +213,7 @@ It is recommended to deploy TiDB-Binlog using TiDB-Ansible. If you just want to
Set `db-type` to `mysql` and configure the downstream MySQL information:
-```
+```toml
# downstream storage, equal to --dest-db-type
# Valid values are "mysql", "pb", "tidb", "flash", and "kafka".
db-type = "mysql"
@@ -239,7 +239,7 @@ It is recommended to deploy TiDB-Binlog using TiDB-Ansible. If you just want to
Set `db-type` to `pb`.
-```
+```toml
# downstream storage, equal to --dest-db-type
# Valid values are "mysql", "pb", "tidb", "flash", and "kafka".
db-type = "pb"
@@ -339,14 +339,14 @@ The following part shows how to use Pump and Drainer based on the nodes above.
- Taking deploying Pump on "192.168.0.11" as an example, the Pump configuration file is as follows:
-```
+```toml
# Pump Configuration
# the bound address of Pump
addr = "192.168.0.11:8250"
# the address through which Pump provides the service
-advertise-addr = 192.168.0.11:8250"
+advertise-addr = "192.168.0.11:8250"
# the number of days to retain the data in Pump (7 by default)
gc = 7
@@ -363,7 +363,7 @@ The following part shows how to use Pump and Drainer based on the nodes above.
- The example of starting Pump:
-```
+```bash
./bin/pump -config pump.toml
```
@@ -423,7 +423,7 @@ The following part shows how to use Pump and Drainer based on the nodes above.
- Taking deploying Drainer on "192.168.0.13" as an example, the Drainer configuration file is as follows:
-```
+```toml
# Drainer Configuration.
# the address through which Drainer provides the service ("192.168.0.13:8249")
@@ -85,7 +85,7 @@ cd tidb-binlog-kafka-linux-amd64
- If the drainer outputs `pb`, you need to set the following parameters in the configuration file:
-```
+```toml
[syncer]
db-type = "pb"
disable-dispatch = true
@@ -96,7 +96,7 @@ cd tidb-binlog-kafka-linux-amd64
- If the drainer outputs `kafka`, you need to set the following parameters in the configuration file:
-```
+```toml
[syncer]
db-type = "kafka"
@@ -134,7 +134,7 @@ cd tidb-binlog-kafka-linux-amd64
Configuration example:
-```
+```ini
# binlog trigger
enable_binlog = True
# ZooKeeper address of the Kafka cluster. Example:
@@ -150,7 +150,7 @@ A usage example:
Assume that we have three PDs, three ZooKeepers, and one TiDB. The information of each node is as follows:
-```
+```ini
TiDB="192.168.0.10"
PD1="192.168.0.16"
PD2="192.168.0.15"
@@ -88,21 +88,21 @@ cd tidb-binlog-latest-linux-amd64
- Run Drainer in the `gen-savepoint` mode and generate the Drainer savepoint file:
-```
+```bash
bin/drainer -gen-savepoint --data-dir=${drainer_savepoint_dir} --pd-urls=${pd_urls}
```
- Do a full backup. For example, back up TiDB using mydumper.
- Import the full backup to the target system.
- Set the file path of the savepoint and start Drainer:
-```
+```bash
bin/drainer --config=conf/drainer.toml --data-dir=${drainer_savepoint_dir}
```
- If the drainer outputs `pb`, you need to set the following parameters in the configuration file:
-```
+```toml
[syncer]
db-type = "pb"
disable-dispatch = true
@@ -234,7 +234,7 @@ Usage of Drainer:
Configuration file
-```
+```toml
# Drainer Configuration
# addr (i.e. 'host:port') to listen on for Drainer connections
@@ -70,7 +70,7 @@ tidb-ctl schema in {database name}
For example, `tidb-ctl schema in mysql` returns the following result:
-```text
+```json
[
{
"id": 13,
@@ -92,7 +92,7 @@ The result is long and displayed in JSON. The above result is a truncated one.
For example, `tidb-ctl schema in mysql -n db` returns the table schema of the `db` table in the `mysql` database:
-```text
+```json
{
"id": 9,
"name": {
