*: update links for SEO (#681)

lilin90 authored and xuechunL committed Oct 19, 2018
1 parent 9a4fac4 commit 00da68d85fab5c3a3608f3ad508fd01f5687be1d
Showing with 158 additions and 146 deletions.
  1. +2 −2 FAQ.md
  2. +1 −2 bikeshare-example-database.md
  3. +1 −1 dev-guide/deployment.md
  4. +1 −1 dev-guide/development.md
  5. +1 −1 op-guide/ansible-deployment-rolling-update.md
  6. +10 −12 op-guide/ansible-deployment.md
  7. +1 −1 op-guide/configuration.md
  8. +1 −1 op-guide/dashboard-overview-info.md
  9. +1 −1 op-guide/docker-deployment.md
  10. +1 −1 op-guide/history-read.md
  11. +1 −1 op-guide/horizontal-scale.md
  12. +1 −1 op-guide/location-awareness.md
  13. +4 −3 op-guide/migration-overview.md
  14. +12 −12 op-guide/migration.md
  15. +10 −10 op-guide/monitor.md
  16. +9 −9 op-guide/offline-ansible-deployment.md
  17. +1 −1 op-guide/security.md
  18. +1 −1 op-guide/tidb-config-file.md
  19. +4 −4 op-guide/tidb-v2-upgrade-guide.md
  20. +2 −3 op-guide/tune-tikv.md
  21. +5 −5 sql/admin.md
  22. +5 −0 sql/aggregate-group-by-functions.md
  23. +1 −1 sql/character-set-configuration.md
  24. +4 −4 sql/comment-syntax.md
  25. +2 −3 sql/datatype.md
  26. +4 −4 sql/ddl.md
  27. +7 −5 sql/dml.md
  28. +4 −4 sql/encrypted-connections.md
  29. +1 −1 sql/generated-columns.md
  30. +3 −3 sql/keywords-and-reserved-words.md
  31. +3 −3 sql/literal-values.md
  32. +1 −1 sql/mysql-compatibility.md
  33. +1 −1 sql/operators.md
  34. +2 −1 sql/precision-math.md
  35. +7 −6 sql/privilege.md
  36. +2 −2 sql/schema-object-names.md
  37. +1 −1 sql/system-database.md
  38. +4 −4 sql/tidb-server.md
  39. +11 −5 sql/tidb-specific.md
  40. +1 −1 sql/time-zone.md
  41. +2 −2 sql/understanding-the-query-execution-plan.md
  42. +1 −0 sql/user-defined-variables.md
  43. +1 −1 sql/util.md
  44. +2 −2 sql/variable.md
  45. +1 −1 tikv/deploy-tikv-docker-compose.md
  46. +1 −1 tikv/deploy-tikv-using-ansible.md
  47. +1 −1 tikv/deploy-tikv-using-docker.md
  48. +2 −2 tikv/go-client-api.md
  49. +2 −2 tikv/tikv-overview.md
  50. +1 −1 tispark/tispark-quick-start-guide.md
  51. +5 −5 tools/syncer.md
  52. +3 −3 tools/tidb-binlog-kafka.md
  53. +1 −1 tools/tidb-binlog.md
  54. +1 −1 trouble-shooting.md
FAQ.md (4 changes: +2 −2)
@@ -541,7 +541,7 @@ You can combine the above two parameters with the DML of TiDB to use them. For u

1. Adjust the priority by writing SQL statements in the database:

```
```sql
select HIGH_PRIORITY | LOW_PRIORITY count(*) from table_name;
insert HIGH_PRIORITY | LOW_PRIORITY into table_name insert_values;
delete HIGH_PRIORITY | LOW_PRIORITY from table_name;
@@ -561,7 +561,7 @@ When the modified number or the current total row number is larger than `tidb_au

Its usage is similar to MySQL:

```
```sql
select column_name from table_name use index(index_name)where where_condition;
```

@@ -6,8 +6,7 @@ category: user guide

# Bikeshare Example Database

Examples used in the TiDB manual use [System Data](https://www.capitalbikeshare.com/system-data) from
Capital Bikeshare, released under the [Capital Bikeshare Data License Agreement](https://www.capitalbikeshare.com/data-license-agreement).
Examples used in the TiDB manual use [System Data](https://www.capitalbikeshare.com/system-data) from Capital Bikeshare, released under the [Capital Bikeshare Data License Agreement](https://www.capitalbikeshare.com/data-license-agreement).

## Download all data files

@@ -4,7 +4,7 @@

Note: **The easiest way to deploy TiDB is to use TiDB Ansible, see [Ansible Deployment](../op-guide/ansible-deployment.md).**

Before you start, check the [supported platforms](./requirements.md#supported-platforms) and [prerequisites](./requirements.md#prerequisites) first.
Before you start, check the [supported platforms](../dev-guide/requirements.md#supported-platforms) and [prerequisites](../dev-guide/requirements.md#prerequisites) first.

## Building and installing TiDB components

@@ -4,7 +4,7 @@

If you want to develop the TiDB project, you can follow this guide.

Before you begin, check the [supported platforms](./requirements.md#supported-platforms) and [prerequisites](./requirements.md#prerequisites) first.
Before you begin, check the [supported platforms](../dev-guide/requirements.md#supported-platforms) and [prerequisites](../dev-guide/requirements.md#prerequisites) first.

## Build TiKV

@@ -12,7 +12,7 @@ When you perform a rolling update for a TiDB cluster, the service is shut down s
## Upgrade the component version

- To upgrade between large versions, you need to upgrade [`tidb-ansible`](https://github.com/pingcap/tidb-ansible). If you want to upgrade the version of TiDB from 1.0 to 2.0, see [TiDB 2.0 Upgrade Guide](tidb-v2-upgrade-guide.md).
- To upgrade between large versions, you need to upgrade [`tidb-ansible`](https://github.com/pingcap/tidb-ansible). If you want to upgrade the version of TiDB from 1.0 to 2.0, see [TiDB 2.0 Upgrade Guide](../op-guide/tidb-v2-upgrade-guide.md).

- For a minor upgrade, it is also recommended to update `tidb-ansible` for the latest configuration file templates, features, and bug fixes.

@@ -18,13 +18,13 @@ You can use the TiDB-Ansible configuration file to set up the cluster topology a

- Initialize operating system parameters
- Deploy the whole TiDB cluster
- [Start the TiDB cluster](ansible-operation.md#start-a-cluster)
- [Stop the TiDB cluster](ansible-operation.md#stop-a-cluster)
- [Modify component configuration](ansible-deployment-rolling-update.md#modify-component-configuration)
- [Scale the TiDB cluster](ansible-deployment-scale.md)
- [Upgrade the component version](ansible-deployment-rolling-update.md#upgrade-the-component-version)
- [Clean up data of the TiDB cluster](ansible-operation.md#clean-up-cluster-data)
- [Destroy the TiDB cluster](ansible-operation.md#destroy-a-cluster)
- [Start the TiDB cluster](../op-guide/ansible-operation.md#start-a-cluster)
- [Stop the TiDB cluster](../op-guide/ansible-operation.md#stop-a-cluster)
- [Modify component configuration](../op-guide/ansible-deployment-rolling-update.md#modify-component-configuration)
- [Scale the TiDB cluster](../op-guide/ansible-deployment-scale.md)
- [Upgrade the component version](../op-guide/ansible-deployment-rolling-update.md#upgrade-the-component-version)
- [Clean up data of the TiDB cluster](../op-guide/ansible-operation.md#clean-up-cluster-data)
- [Destroy the TiDB cluster](../op-guide/ansible-operation.md#destroy-a-cluster)

## Prepare

@@ -34,12 +34,12 @@ Before you start, make sure you have:

- 4 or more machines

A standard TiDB cluster contains 6 machines. You can use 4 machines for testing. For more details, see [Software and Hardware Requirements](recommendation.md).
A standard TiDB cluster contains 6 machines. You can use 4 machines for testing. For more details, see [Software and Hardware Requirements](../op-guide/recommendation.md).

- CentOS 7.3 (64 bit) or later, x86_64 architecture (AMD64)
- Network between machines

> **Note:** When you deploy TiDB using Ansible, **use SSD disks for the data directory of TiKV and PD nodes**. Otherwise, it cannot pass the check. If you only want to try TiDB out and explore the features, it is recommended to [deploy TiDB using Docker Compose](docker-compose.md) on a single machine.
> **Note:** When you deploy TiDB using Ansible, **use SSD disks for the data directory of TiKV and PD nodes**. Otherwise, it cannot pass the check. If you only want to try TiDB out and explore the features, it is recommended to [deploy TiDB using Docker Compose](../op-guide/docker-compose.md) on a single machine.

2. A Control Machine that meets the following requirements:

@@ -508,7 +508,7 @@ To enable the following control variables, use the capitalized `True`. To disabl
| tidb_version | the version of TiDB, configured by default in TiDB-Ansible branches |
| process_supervision | the supervision way of processes, systemd by default, supervise optional |
| timezone | the global default time zone configured when a new TiDB cluster bootstrap is initialized; you can edit it later using the global `time_zone` system variable and the session `time_zone` system variable as described in [Time Zone Support](../sql/time-zone.md); the default value is `Asia/Shanghai` and see [the list of time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) for more optional values |
| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](recommendation.md#network-requirements) to the white list |
| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](../op-guide/recommendation.md#network-requirements) to the white list |
| enable_ntpd | to monitor the NTP service of the managed node, True by default; do not close it |
| set_hostname | to edit the hostname of the managed node based on the IP, False by default |
| enable_binlog | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the `zookeeper_addrs` variable |
@@ -580,8 +580,6 @@ The following example uses `tidb` as the user who runs the service.
ansible-playbook start.yml
```

> **Note:** If you want to deploy TiDB using the root user account, see [Ansible Deployment Using the Root User Account](root-ansible-deployment.md).

## Test the TiDB cluster

Because TiDB is compatible with MySQL, you must use the MySQL client to connect to TiDB directly. It is recommended to configure load balancing to provide uniform SQL interface.
@@ -28,7 +28,7 @@ The default TiDB ports are 4000 for client requests and 10080 for status report.

- The configuration file
- Default: ""
- If you have specified the configuration file, TiDB reads the configuration file. If the corresponding configuration also exists in the command line flags, TiDB uses the configuration in the command line flags to overwrite that in the configuration file. For detailed configuration information, see [TiDB Configuration File Description](tidb-config-file.md)
- If you have specified the configuration file, TiDB reads the configuration file. If the corresponding configuration also exists in the command line flags, TiDB uses the configuration in the command line flags to overwrite that in the configuration file. For detailed configuration information, see [TiDB Configuration File Description](../op-guide/tidb-config-file.md)

### `--host`

@@ -6,7 +6,7 @@ category: operations

# Key Metrics

If you use Ansible to deploy the TiDB cluster, the monitoring system is deployed at the same time. For more information, see [Overview of the Monitoring Framework](monitor-overview.md) .
If you use Ansible to deploy the TiDB cluster, the monitoring system is deployed at the same time. For more information, see [Overview of the Monitoring Framework](../op-guide/monitor-overview.md).

The Grafana dashboard is divided into a series of sub dashboards which include Overview, PD, TiDB, TiKV, Node\_exporter, Disk Performance, and so on. A lot of metrics are there to help you diagnose.

@@ -8,7 +8,7 @@ category: operations

This page shows you how to manually deploy a multi-node TiDB cluster on multiple machines using Docker.

To learn more, see [TiDB architecture](../overview.md#tidb-architecture) and [Software and Hardware Requirements](recommendation.md).
To learn more, see [TiDB architecture](../overview.md#tidb-architecture) and [Software and Hardware Requirements](../op-guide/recommendation.md).

## Preparation

@@ -33,7 +33,7 @@ After reading data from history versions, you can read data from the latest vers

TiDB implements Multi-Version Concurrency Control (MVCC) to manage data versions. The history versions of data are kept because each update/removal creates a new version of the data object instead of updating/removing the data object in-place. But not all the versions are kept. If the versions are older than a specific time, they will be removed completely to reduce the storage occupancy and the performance overhead caused by too many history versions.

In TiDB, Garbage Collection (GC) runs periodically to remove the obsolete data versions. For GC details, see [TiDB Garbage Collection (GC)](gc.md)
In TiDB, Garbage Collection (GC) runs periodically to remove the obsolete data versions. For GC details, see [TiDB Garbage Collection (GC)](../op-guide/gc.md)

Pay special attention to the following two variables:

@@ -10,7 +10,7 @@ category: operations

The capacity of a TiDB cluster can be increased or reduced without affecting online services.

> **Note:** If your TiDB cluster is deployed using Ansible, see [Scale the TiDB Cluster Using TiDB-Ansible](ansible-deployment-scale.md).
> **Note:** If your TiDB cluster is deployed using Ansible, see [Scale the TiDB Cluster Using TiDB-Ansible](../op-guide/ansible-deployment-scale.md).

The following part shows you how to add or delete PD, TiKV or TiDB nodes.

@@ -10,7 +10,7 @@ category: operations

PD schedules according to the topology of the TiKV cluster to maximize the TiKV's capability for disaster recovery.

Before you begin, see [Deploy TiDB Using Ansible (Recommended)](ansible-deployment.md) and [Deploy TiDB Using Docker](docker-deployment.md).
Before you begin, see [Deploy TiDB Using Ansible (Recommended)](../op-guide/ansible-deployment.md) and [Deploy TiDB Using Docker](../op-guide/docker-deployment.md).

## TiKV reports the topological information

@@ -20,12 +20,12 @@ See the following for the assumed MySQL and TiDB server information:
## Scenarios

+ To import all the history data. This needs the following tools:
- `Checker`: to check if the shema is compatible with TiDB.
- `Checker`: to check if the schema is compatible with TiDB.
- `Mydumper`: to export data from MySQL.
- `Loader`: to import data to TiDB.

+ To incrementally synchronise data after all the history data is imported. This needs the following tools:
- `Checker`: to check if the shema is compatible with TiDB.
+ To incrementally synchronize data after all the history data is imported. This needs the following tools:
- `Checker`: to check if the schema is compatible with TiDB.
- `Mydumper`: to export data from MySQL.
- `Loader`: to import data to TiDB.
- `Syncer`: to incrementally synchronize data from MySQL to TiDB.
@@ -35,6 +35,7 @@ See the following for the assumed MySQL and TiDB server information:
### Enable binary logging (binlog) in MySQL

Before using the `syncer` tool, make sure:

+ Binlog is enabled in MySQL. See [Setting the Replication Master Configuration](http://dev.mysql.com/doc/refman/5.7/en/replication-howto-masterbaseconfig.html).

+ Binlog must use the `row` format which is the recommended binlog format in MySQL 5.7. It can be configured using the following statement:
@@ -6,7 +6,7 @@ category: operations

# Migrate Data from MySQL to TiDB

## Use the `mydumper` / `loader` tool to export and import all the data
## Use the `mydumper`/`loader` tool to export and import all the data

You can use `mydumper` to export data from MySQL and `loader` to import the data into TiDB.

@@ -31,7 +31,7 @@ In this command,
### Import data to TiDB

Use `loader` to import the data from MySQL to TiDB. See [Loader instructions](./tools/loader.md) for more information.
Use `loader` to import the data from MySQL to TiDB. See [Loader instructions](../tools/loader.md) for more information.

```bash
./bin/loader -h 127.0.0.1 -u root -P 4000 -t 32 -d ./var/test
@@ -116,9 +116,9 @@ tar -xzf tidb-enterprise-tools-latest-linux-amd64.tar.gz
cd tidb-enterprise-tools-latest-linux-amd64
```

Assuming the data from `t1` and `t2` is already imported to TiDB using `mydumper`/`loader`. Now we hope that any updates to these two tables are synchronised to TiDB in real time.
Assuming the data from `t1` and `t2` is already imported to TiDB using `mydumper`/`loader`. Now we hope that any updates to these two tables are synchronized to TiDB in real time.

### Obtain the position to synchronise
### Obtain the position to synchronize

The data exported from MySQL contains a metadata file which includes the position information. Take the following metadata information as an example:
```
@@ -139,7 +139,7 @@ binlog-name = "mysql-bin.000003"
binlog-pos = 930143241
```

> **Note:** The `syncer.meta` file only needs to be configured once when it is first used. The position will be automatically updated when binlog is synchronised.
> **Note:** The `syncer.meta` file only needs to be configured once when it is first used. The position will be automatically updated when binlog is synchronized.

### Start `syncer`

@@ -160,22 +160,22 @@ status-addr = ":10081"
skip-sqls = ["ALTER USER", "CREATE USER"]
# Support whitelist filter. You can specify the database and table to be synchronised. For example:
# Synchronise all the tables of db1 and db2:
# Support whitelist filter. You can specify the database and table to be synchronized. For example:
# Synchronize all the tables of db1 and db2:
replicate-do-db = ["db1","db2"]
# Synchronise db1.table1.
# Synchronize db1.table1.
[[replicate-do-table]]
db-name ="db1"
tbl-name = "table1"
# Synchronise db3.table2.
# Synchronize db3.table2.
[[replicate-do-table]]
db-name ="db3"
tbl-name = "table2"
# Support regular expressions. Start with '~' to use regular expressions.
# To synchronise all the databases that start with `test`:
# To synchronize all the databases that start with `test`:
replicate-do-db = ["~^test.*"]
# The sharding synchronising rules support wildcharacter.
@@ -241,7 +241,7 @@ mysql> select * from t1;
+----+------+
```

`syncer` outputs the current synchronised data statistics every 30 seconds:
`syncer` outputs the current synchronized data statistics every 30 seconds:

```bash
2017/06/08 01:18:51 syncer.go:934: [info] [syncer]total events = 15, total tps = 130, recent tps = 4,
@@ -252,4 +252,4 @@ master-binlog = (ON.000001, 11992), master-binlog-gtid=53ea0ed1-9bf8-11e6-8bea-6
syncer-binlog = (ON.000001, 2504), syncer-binlog-gtid = 53ea0ed1-9bf8-11e6-8bea-64006a897c73:1-35
```

You can see that by using `syncer`, the updates in MySQL are automatically synchronised in TiDB.
You can see that by using `syncer`, the updates in MySQL are automatically synchronized in TiDB.
@@ -140,7 +140,7 @@ See the following diagram for the deployment architecture:

See the following links for your reference:

- Prometheus Push Gateway: [https://github.com/prometheus/pushgateway](https://github.com/prometheus/pushgateway)
- Prometheus Pushgateway: [https://github.com/prometheus/pushgateway](https://github.com/prometheus/pushgateway)

- Prometheus Server: [https://github.com/prometheus/prometheus#install](https://github.com/prometheus/prometheus#install)

@@ -152,26 +152,26 @@ See the following links for your reference:

+ TiDB: Set the two parameters: `--metrics-addr` and `--metrics-interval`.

- Set the Push Gateway address as the `--metrics-addr` parameter.
- Set the Pushgateway address as the `--metrics-addr` parameter.
- Set the push frequency as the `--metrics-interval` parameter. The unit is s, and the default value is 15.

+ PD: update the toml configuration file with the Push Gateway address and the push frequency:
+ PD: update the toml configuration file with the Pushgateway address and the push frequency:

```toml
[metric]
# prometheus client push interval, set "0s" to disable prometheus.
# Prometheus client push interval, set "0s" to disable prometheus.
interval = "15s"
# prometheus pushgateway address, leaves it empty will disable prometheus.
# Prometheus Pushgateway address, leaves it empty will disable prometheus.
address = "host:port"
```

+ TiKV: update the toml configuration file with the Push Gateway address and the the push frequency. Set the job field as "tikv".
+ TiKV: update the toml configuration file with the Pushgateway address and the the push frequency. Set the job field as "tikv".

```toml
[metric]
# the Prometheus client push interval. Setting the value to 0s stops Prometheus client from pushing.
interval = "15s"
# the Prometheus pushgateway address. Leaving it empty stops Prometheus client from pushing.
# the Prometheus Pushgateway address. Leaving it empty stops Prometheus client from pushing.
address = "host:port"
# the Prometheus client push job name. Note: A node id will automatically append, e.g., "tikv_1".
job = "tikv"
@@ -183,7 +183,7 @@ Generally, it does not need to be configured. You can use the default port: 9091

### Configure Prometheus

Add the Push Gateway address to the yaml configuration file:
Add the Pushgateway address to the yaml configuration file:

```yaml
scrape_configs:
@@ -196,7 +196,7 @@ Add the Push Gateway address to the yaml configuration file:
honor_labels: true
static_configs:
- targets: ['host:port'] # use the Push Gateway address
- targets: ['host:port'] # use the Pushgateway address
labels:
group: 'production'
```
@@ -237,7 +237,7 @@ labels:

2. On the sidebar menu, click "Dashboards" -> "Import" to open the "Import Dashboard" window.

3. Click "Upload .json File" to upload a JSON file ( Download [TiDB Grafana Config](https://grafana.com/tidb) ).
3. Click "Upload .json File" to upload a JSON file (Download [TiDB Grafana Config](https://grafana.com/tidb)).

4. Click "Save & Open".
