dev, v2.1, v3.0: fix broken links (#1308)
* dev, v2.1, v3.0: fix broken links

* Add GRANT back

* Add GRANT back

* Add GRANT back

* fix links

* fix links
CaitinChen authored and lilin90 committed Jul 5, 2019
1 parent 915adc4 commit 3a19a31
Showing 42 changed files with 53 additions and 54 deletions.
4 changes: 2 additions & 2 deletions dev/faq/tidb.md
@@ -912,7 +912,7 @@ If the amount of data that needs to be deleted at a time is very large, this loo

#### How to improve the data loading speed in TiDB?

-- The [Lightning](/reference/tools/lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data).
+- The [Lightning](/reference/tools/tidb-lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data).
- Data loading in TiDB is related to the status of disks and the whole cluster. When loading data, pay attention to metrics like the disk usage rate of the host, TiClient Error, Backoff, Thread CPU and so on. You can analyze the bottlenecks using these metrics.

#### What should I do if it is slow to reclaim storage space after deleting data?
@@ -1011,7 +1011,7 @@ See [Overview of the Monitoring Framework](/how-to/monitor/overview.md).

### Key metrics of monitoring

-See [Key Metrics](/reference/key-monitoring-metrics/overview.md).
+See [Key Metrics](/reference/key-monitoring-metrics/overview-dashboard.md).

#### Is there a better way of monitoring the key metrics?

2 changes: 1 addition & 1 deletion dev/reference/sql/statements/drop-database.md
@@ -62,4 +62,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit

* [CREATE DATABASE](/reference/sql/statements/create-database.md)
* [ALTER DATABASE](/reference/sql/statements/alter-database.md)
* [SHOW CREATE DATABASE](/reference/sql/statements/show-create-database.md)

2 changes: 1 addition & 1 deletion dev/reference/sql/statements/show-grants.md
@@ -54,4 +54,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit
## See also

* [SHOW CREATE USER](/reference/sql/statements/show-create-user.md)
-* [GRANT](/reference/sql/statements/grant.md)
+* [GRANT](/reference/sql/statements/grant-privileges.md)
2 changes: 1 addition & 1 deletion dev/reference/tools/data-migration/deploy.md
@@ -113,7 +113,7 @@ To detect possible errors of data replication configuration in advance, DM provi
- DM automatically checks the corresponding privileges and configuration while starting the data replication task.
- You can also use the `check-task` command to manually precheck whether the upstream MySQL instance configuration satisfies the DM requirements.
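The manual precheck above can be sketched as follows (the dmctl binary path, the master address, and the task configuration file name are assumptions; adjust them to your deployment):

```
# Start an interactive dmctl session (address is an assumption)
./dmctl -master-addr 172.16.10.71:8261

# At the dmctl prompt, precheck the task configuration before starting it
» check-task ./task.yaml
```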

-For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/tools/dm/precheck.md).
+For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/reference/tools/data-migration/precheck.md).

> **Note:**
>
@@ -147,7 +147,7 @@ break-ddl-lock <--worker=127.0.0.1:8262> [--remove-id] [--exec] [--skip] <task-n
+ `task-name`:

- Non-flag; string; required
-    - It specifies the name of the task containing the lock that is going to execute the breaking operation (you can check whether a task contains the lock via [query-status](/tools/dm/query-status.md)).
+    - It specifies the name of the task containing the lock that is going to execute the breaking operation (you can check whether a task contains the lock via [query-status](/reference/tools/data-migration/query-status.md)).
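An invocation matching the syntax above might look like the following (the DM-worker address and the task name are assumptions for illustration):

```
# At the dmctl prompt: break the DDL lock held by task "test" on one
# DM-worker, skipping the DDL statement that is waiting to be executed
» break-ddl-lock --worker=127.0.0.1:8262 --skip test
```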

#### Example of results

6 changes: 3 additions & 3 deletions dev/reference/tools/data-migration/features/overview.md
@@ -33,7 +33,7 @@ routes:

### Parameter explanation

-DM replicates the upstream MySQL or MariaDB instance table that matches the [`schema-pattern`/`table-pattern` rule provided by Table selector](/tools/dm/table-selector.md) to the downstream `target-schema`/`target-table`.
+DM replicates the upstream MySQL or MariaDB instance table that matches the [`schema-pattern`/`table-pattern` rule provided by Table selector](/reference/tools/data-migration/table-selector.md) to the downstream `target-schema`/`target-table`.

### Usage examples

@@ -225,7 +225,7 @@ filters:

### Parameter explanation

-- [`schema-pattern`/`table-pattern`](/tools/dm/table-selector.md): the binlog events or DDL SQL statements of upstream MySQL or MariaDB instance tables that match `schema-pattern`/`table-pattern` are filtered by the rules below.
+- [`schema-pattern`/`table-pattern`](/reference/tools/data-migration/table-selector.md): the binlog events or DDL SQL statements of upstream MySQL or MariaDB instance tables that match `schema-pattern`/`table-pattern` are filtered by the rules below.

- `events`: the binlog event array.

@@ -373,7 +373,7 @@ column-mappings:

### Parameter explanation

-- [`schema-pattern`/`table-pattern`](/tools/dm/table-selector.md): to execute column value modifying operations on the upstream MySQL or MariaDB instance tables that match the `schema-pattern`/`table-pattern` filtering rule.
+- [`schema-pattern`/`table-pattern`](/reference/tools/data-migration/table-selector.md): to execute column value modifying operations on the upstream MySQL or MariaDB instance tables that match the `schema-pattern`/`table-pattern` filtering rule.
- `source-column`, `target-column`: to modify the value of the `source-column` column according to specified `expression` and assign the new value to `target-column`.
- `expression`: the expression used to modify data. Currently, only the `partition id` built-in expression is supported.
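Put together, a column mapping rule using the parameters above might be sketched as follows (all pattern names, column names, and argument values are hypothetical):

```yaml
column-mappings:
  rule-1:
    schema-pattern: "user_*"         # hypothetical upstream schema pattern
    table-pattern: "t_*"             # hypothetical upstream table pattern
    expression: "partition id"       # the only supported built-in expression
    source-column: "id"              # column whose value is read and modified
    target-column: "id"              # column that receives the modified value
    arguments: ["1", "user_", "t_"]  # assumed instance id and pattern prefixes
```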

4 changes: 2 additions & 2 deletions dev/reference/tools/data-migration/overview.md
Expand Up @@ -33,7 +33,7 @@ DM-worker executes specific data replication tasks.
- Orchestrating the operation of the data replication subtasks
- Monitoring the running state of the data replication subtasks

After DM-worker is started, it automatically replicates the upstream binlog to the local configuration directory (the default replication directory is `<deploy_dir>/relay_log` if DM is deployed using `DM-Ansible`). For details about DM-worker, see [DM-worker Introduction](/tools/dm/dm-worker-intro.md). For details about the relay log, see [Relay Log](/tools/dm/relay-log.md).
After DM-worker is started, it automatically replicates the upstream binlog to the local configuration directory (the default replication directory is `<deploy_dir>/relay_log` if DM is deployed using `DM-Ansible`). For details about DM-worker, see [DM-worker Introduction](/reference/tools/data-migration/dm-worker-intro.md). For details about the relay log, see [Relay Log](/reference/tools/data-migration/relay-log.md).

### dmctl

@@ -84,7 +84,7 @@ Before using the DM tool, note the following restrictions:
> - 5.7.1 < MySQL version < 5.8
> - MariaDB version >= 10.1.3
-Data Migration [prechecks the corresponding privileges and configuration automatically](/tools/dm/precheck.md) while starting the data replication task using dmctl.
+Data Migration [prechecks the corresponding privileges and configuration automatically](/reference/tools/data-migration/precheck.md) while starting the data replication task using dmctl.

+ DDL syntax

2 changes: 1 addition & 1 deletion dev/reference/tools/data-migration/query-status.md
@@ -154,7 +154,7 @@ This document introduces the query result and subtask status of Data Migration (

For the status description and status switch relationship of "stage" of "subTaskStatus" of "workers", see [Subtask status](#subtask-status).

-For operation details of "unresolvedDDLLockID" of "subTaskStatus" of "workers", see [Handle Sharding DDL Locks Manually](/tools/dm/manually-handling-sharding-ddl-locks.md).
+For operation details of "unresolvedDDLLockID" of "subTaskStatus" of "workers", see [Handle Sharding DDL Locks Manually](/reference/tools/data-migration/manually-handling-sharding-ddl-locks.md).

## Subtask status

2 changes: 1 addition & 1 deletion dev/reference/tools/data-migration/skip-replace-sqls.md
@@ -128,7 +128,7 @@ When you use dmctl to manually handle the SQL statements unsupported by TiDB, th

#### query-status

-`query-status` allows you to query the current status of items such as the subtask and the relay unit in each DM-worker. For details, see [query status](/tools/dm/query-status.md).
+`query-status` allows you to query the current status of items such as the subtask and the relay unit in each DM-worker. For details, see [query status](/reference/tools/data-migration/query-status.md).

#### query-error

@@ -98,7 +98,7 @@ Assume that the downstream schema after replication is as follows:
>
> The replication Requirements #4, #5 and #7 indicate that all the deletion operations in the `user` schema are filtered out, so a schema level filtering rule is configured here. However, the deletion operations of future tables in the `user` schema will also be filtered out.
-- To satisfy the replication Requirement #6, configure the [binlog event filter rule](/tools/dm/data-synchronization-features.md#binlog-event-filter) as follows:
+- To satisfy the replication Requirement #6, configure the [binlog event filter rule](/reference/tools/data-migration/features/overview.md#binlog-event-filter) as follows:

```yaml
filters:
2 changes: 1 addition & 1 deletion dev/reference/tools/syncer.md
@@ -512,7 +512,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain

2. Import the configuration file of Grafana dashboard.

-    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source.

### Description of Grafana Syncer metrics

2 changes: 1 addition & 1 deletion dev/tispark/tispark-quick-start-guide_v1.x.md
@@ -6,7 +6,7 @@ category: User Guide

# TiSpark Quick Start Guide

-To make it easy to [try TiSpark](/tispark/tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.
+To make it easy to [try TiSpark](/reference/tispark.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.

## Deployment information

2 changes: 1 addition & 1 deletion v1.0/FAQ.md
@@ -616,7 +616,7 @@ See [Syncer User Guide](docs/tools/syncer.md).

##### How to configure to monitor Syncer status?

-Download and import [Syncer Json](https://github.com/pingcap/docs/blob/master/etc/Syncer.json) to Grafana. Edit the Prometheus configuration file and add the following content:
+Download and import [Syncer Json](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) to Grafana. Edit the Prometheus configuration file and add the following content:

```
- job_name: ‘syncer_ops’ // task name
2 changes: 1 addition & 1 deletion v1.0/tools/syncer.md
@@ -483,7 +483,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain

2. Import the configuration file of Grafana dashboard.

-    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source.

### Description of Grafana Syncer metrics

2 changes: 1 addition & 1 deletion v2.0/tools/syncer.md
@@ -484,7 +484,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain

2. Import the configuration file of Grafana dashboard.

-    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source.

### Description of Grafana Syncer metrics

2 changes: 1 addition & 1 deletion v2.1-legacy/tispark/tispark-quick-start-guide.md
@@ -6,7 +6,7 @@ category: User Guide

# TiSpark Quick Start Guide

-To make it easy to [try TiSpark](../tispark/tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.
+To make it easy to [try TiSpark](/reference/tispark.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.

## Deployment information

2 changes: 1 addition & 1 deletion v2.1-legacy/tools/data-migration-overview.md
@@ -33,7 +33,7 @@ DM-worker executes specific data replication tasks.
- Orchestrating the operation of the data replication subtasks
- Monitoring the running state of the data replication subtasks

-For details about DM-worker, see [DM-worker Introduction](../tools/dm-worker-intro.md).
+For details about DM-worker, see [DM-worker Introduction](/reference/tools/data-migration/dm-worker-intro.md).

### dmctl

2 changes: 1 addition & 1 deletion v2.1-legacy/tools/syncer.md
@@ -484,7 +484,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain

2. Import the configuration file of Grafana dashboard.

-    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source.

### Description of Grafana Syncer metrics

4 changes: 2 additions & 2 deletions v2.1/faq/tidb.md
@@ -914,7 +914,7 @@ If the amount of data that needs to be deleted at a time is very large, this loo

#### How to improve the data loading speed in TiDB?

-- The [Lightning](/reference/tools/lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data).
+- The [Lightning](/reference/tools/tidb-lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data).
- Data loading in TiDB is related to the status of disks and the whole cluster. When loading data, pay attention to metrics like the disk usage rate of the host, TiClient Error, Backoff, Thread CPU and so on. You can analyze the bottlenecks using these metrics.

#### What should I do if it is slow to reclaim storage space after deleting data?
@@ -1013,7 +1013,7 @@ See [Overview of the Monitoring Framework](/how-to/monitor/overview.md).

### Key metrics of monitoring

-See [Key Metrics](/reference/key-monitoring-metrics/overview.md).
+See [Key Metrics](/reference/key-monitoring-metrics/overview-dashboard.md).

#### Is there a better way of monitoring the key metrics?

2 changes: 1 addition & 1 deletion v2.1/reference/sql/statements/drop-database.md
@@ -62,4 +62,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit

* [CREATE DATABASE](/reference/sql/statements/create-database.md)
* [ALTER DATABASE](/reference/sql/statements/alter-database.md)
* [SHOW CREATE DATABASE](/reference/sql/statements/show-create-database.md)

2 changes: 1 addition & 1 deletion v2.1/reference/sql/statements/show-grants.md
@@ -54,4 +54,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit
## See also

* [SHOW CREATE USER](/reference/sql/statements/show-create-user.md)
-* [GRANT](/reference/sql/statements/grant.md)
+* [GRANT](/reference/sql/statements/grant-privileges.md)
2 changes: 1 addition & 1 deletion v2.1/reference/tools/data-migration/deploy.md
@@ -113,7 +113,7 @@ To detect possible errors of data replication configuration in advance, DM provi
- DM automatically checks the corresponding privileges and configuration while starting the data replication task.
- You can also use the `check-task` command to manually precheck whether the upstream MySQL instance configuration satisfies the DM requirements.

-For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/tools/dm/precheck.md).
+For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/reference/tools/data-migration/precheck.md).

> **Note:**
>
@@ -147,7 +147,7 @@ break-ddl-lock <--worker=127.0.0.1:8262> [--remove-id] [--exec] [--skip] <task-n
+ `task-name`:

- Non-flag; string; required
-    - It specifies the name of the task containing the lock that is going to execute the breaking operation (you can check whether a task contains the lock via [query-status](/tools/dm/query-status.md)).
+    - It specifies the name of the task containing the lock that is going to execute the breaking operation (you can check whether a task contains the lock via [query-status](/reference/tools/data-migration/query-status.md)).

#### Example of results

6 changes: 3 additions & 3 deletions v2.1/reference/tools/data-migration/features/overview.md
@@ -33,7 +33,7 @@ routes:

### Parameter explanation

-DM replicates the upstream MySQL or MariaDB instance table that matches the [`schema-pattern`/`table-pattern` rule provided by Table selector](/tools/dm/table-selector.md) to the downstream `target-schema`/`target-table`.
+DM replicates the upstream MySQL or MariaDB instance table that matches the [`schema-pattern`/`table-pattern` rule provided by Table selector](/reference/tools/data-migration/table-selector.md) to the downstream `target-schema`/`target-table`.

### Usage examples

@@ -225,7 +225,7 @@ filters:

### Parameter explanation

-- [`schema-pattern`/`table-pattern`](/tools/dm/table-selector.md): the binlog events or DDL SQL statements of upstream MySQL or MariaDB instance tables that match `schema-pattern`/`table-pattern` are filtered by the rules below.
+- [`schema-pattern`/`table-pattern`](/reference/tools/data-migration/table-selector.md): the binlog events or DDL SQL statements of upstream MySQL or MariaDB instance tables that match `schema-pattern`/`table-pattern` are filtered by the rules below.

- `events`: the binlog event array.

@@ -373,7 +373,7 @@ column-mappings:

### Parameter explanation

-- [`schema-pattern`/`table-pattern`](/tools/dm/table-selector.md): to execute column value modifying operations on the upstream MySQL or MariaDB instance tables that match the `schema-pattern`/`table-pattern` filtering rule.
+- [`schema-pattern`/`table-pattern`](/reference/tools/data-migration/table-selector.md): to execute column value modifying operations on the upstream MySQL or MariaDB instance tables that match the `schema-pattern`/`table-pattern` filtering rule.
- `source-column`, `target-column`: to modify the value of the `source-column` column according to specified `expression` and assign the new value to `target-column`.
- `expression`: the expression used to modify data. Currently, only the `partition id` built-in expression is supported.

