
dev, v2.1, v3.0: fix broken links (#1308)

* dev, v2.1, v3.0: fix broken links

* Add GRANT back

* Add GRANT back

* Add GRANT back

* fix links

* fix links
CaitinChen authored and lilin90 committed Jul 5, 2019
1 parent 915adc4 commit 3a19a3162b75efb69ce28acb4ff40dd8a6ffec18
Showing with 53 additions and 54 deletions.
  1. +2 −2 dev/faq/tidb.md
  2. +1 −1 dev/reference/sql/statements/drop-database.md
  3. +1 −1 dev/reference/sql/statements/show-grants.md
  4. +1 −1 dev/reference/tools/data-migration/deploy.md
  5. +1 −1 dev/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md
  6. +3 −3 dev/reference/tools/data-migration/features/overview.md
  7. +2 −2 dev/reference/tools/data-migration/overview.md
  8. +1 −1 dev/reference/tools/data-migration/query-status.md
  9. +1 −1 dev/reference/tools/data-migration/skip-replace-sqls.md
  10. +1 −1 dev/reference/tools/data-migration/usage-scenarios/shard-merge.md
  11. +1 −1 dev/reference/tools/syncer.md
  12. +1 −1 dev/tispark/tispark-quick-start-guide_v1.x.md
  13. +1 −1 v1.0/FAQ.md
  14. +1 −1 v1.0/tools/syncer.md
  15. +1 −1 v2.0/tools/syncer.md
  16. +1 −1 v2.1-legacy/tispark/tispark-quick-start-guide.md
  17. +1 −1 v2.1-legacy/tools/data-migration-overview.md
  18. +1 −1 v2.1-legacy/tools/syncer.md
  19. +2 −2 v2.1/faq/tidb.md
  20. +1 −1 v2.1/reference/sql/statements/drop-database.md
  21. +1 −1 v2.1/reference/sql/statements/show-grants.md
  22. +1 −1 v2.1/reference/tools/data-migration/deploy.md
  23. +1 −1 v2.1/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md
  24. +3 −3 v2.1/reference/tools/data-migration/features/overview.md
  25. +2 −2 v2.1/reference/tools/data-migration/overview.md
  26. +1 −1 v2.1/reference/tools/data-migration/query-status.md
  27. +1 −1 v2.1/reference/tools/data-migration/skip-replace-sqls.md
  28. +1 −1 v2.1/reference/tools/data-migration/usage-scenarios/shard-merge.md
  29. +1 −1 v2.1/reference/tools/syncer.md
  30. +1 −1 v2.1/tispark/tispark-quick-start-guide_v1.x.md
  31. +2 −2 v3.0/faq/tidb.md
  32. +0 −1 v3.0/reference/sql/statements/drop-database.md
  33. +1 −1 v3.0/reference/sql/statements/show-grants.md
  34. +1 −1 v3.0/reference/tools/data-migration/deploy.md
  35. +1 −1 v3.0/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md
  36. +3 −3 v3.0/reference/tools/data-migration/features/overview.md
  37. +2 −2 v3.0/reference/tools/data-migration/overview.md
  38. +1 −1 v3.0/reference/tools/data-migration/query-status.md
  39. +1 −1 v3.0/reference/tools/data-migration/skip-replace-sqls.md
  40. +1 −1 v3.0/reference/tools/data-migration/usage-scenarios/shard-merge.md
  41. +1 −1 v3.0/reference/tools/syncer.md
  42. +1 −1 v3.0/tispark/tispark-quick-start-guide_v1.x.md
@@ -912,7 +912,7 @@ If the amount of data that needs to be deleted at a time is very large, this loo

#### How to improve the data loading speed in TiDB?

-- The [Lightning](/reference/tools/lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data).
+- The [Lightning](/reference/tools/tidb-lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data).
- Data loading in TiDB is related to the status of disks and the whole cluster. When loading data, pay attention to metrics like the disk usage rate of the host, TiClient Error, Backoff, Thread CPU and so on. You can analyze the bottlenecks using these metrics.

#### What should I do if it is slow to reclaim storage space after deleting data?
@@ -1011,7 +1011,7 @@ See [Overview of the Monitoring Framework](/how-to/monitor/overview.md).

### Key metrics of monitoring

-See [Key Metrics](/reference/key-monitoring-metrics/overview.md).
+See [Key Metrics](/reference/key-monitoring-metrics/overview-dashboard.md).

#### Is there a better way of monitoring the key metrics?

@@ -62,4 +62,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit

* [CREATE DATABASE](/reference/sql/statements/create-database.md)
* [ALTER DATABASE](/reference/sql/statements/alter-database.md)
* [SHOW CREATE DATABASE](/reference/sql/statements/show-create-database.md)

@@ -54,4 +54,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit
## See also

* [SHOW CREATE USER](/reference/sql/statements/show-create-user.md)
-* [GRANT](/reference/sql/statements/grant.md)
+* [GRANT](/reference/sql/statements/grant-privileges.md)
@@ -113,7 +113,7 @@ To detect possible errors of data replication configuration in advance, DM provi
- DM automatically checks the corresponding privileges and configuration while starting the data replication task.
- You can also use the `check-task` command to manually precheck whether the upstream MySQL instance configuration satisfies the DM requirements.

-For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/tools/dm/precheck.md).
+For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/reference/tools/data-migration/precheck.md).

> **Note:**
>
@@ -147,7 +147,7 @@ break-ddl-lock <--worker=127.0.0.1:8262> [--remove-id] [--exec] [--skip] <task-n
+ `task-name`:

- Non-flag; string; required
-- It specifies the name of the task containing the lock that is going to execute the breaking operation (you can check whether a task contains the lock via [query-status](/tools/dm/query-status.md)).
+- It specifies the name of the task containing the lock that is going to execute the breaking operation (you can check whether a task contains the lock via [query-status](/reference/tools/data-migration/query-status.md)).

#### Example of results

@@ -33,7 +33,7 @@ routes:

### Parameter explanation

-DM replicates the upstream MySQL or MariaDB instance table that matches the [`schema-pattern`/`table-pattern` rule provided by Table selector](/tools/dm/table-selector.md) to the downstream `target-schema`/`target-table`.
+DM replicates the upstream MySQL or MariaDB instance table that matches the [`schema-pattern`/`table-pattern` rule provided by Table selector](/reference/tools/data-migration/table-selector.md) to the downstream `target-schema`/`target-table`.

### Usage examples

@@ -225,7 +225,7 @@ filters:

### Parameter explanation

-- [`schema-pattern`/`table-pattern`](/tools/dm/table-selector.md): the binlog events or DDL SQL statements of upstream MySQL or MariaDB instance tables that match `schema-pattern`/`table-pattern` are filtered by the rules below.
+- [`schema-pattern`/`table-pattern`](/reference/tools/data-migration/table-selector.md): the binlog events or DDL SQL statements of upstream MySQL or MariaDB instance tables that match `schema-pattern`/`table-pattern` are filtered by the rules below.

- `events`: the binlog event array.

@@ -373,7 +373,7 @@ column-mappings:

### Parameter explanation

-- [`schema-pattern`/`table-pattern`](/tools/dm/table-selector.md): to execute column value modifying operations on the upstream MySQL or MariaDB instance tables that match the `schema-pattern`/`table-pattern` filtering rule.
+- [`schema-pattern`/`table-pattern`](/reference/tools/data-migration/table-selector.md): to execute column value modifying operations on the upstream MySQL or MariaDB instance tables that match the `schema-pattern`/`table-pattern` filtering rule.
- `source-column`, `target-column`: to modify the value of the `source-column` column according to specified `expression` and assign the new value to `target-column`.
- `expression`: the expression used to modify data. Currently, only the `partition id` built-in expression is supported.

@@ -33,7 +33,7 @@ DM-worker executes specific data replication tasks.
- Orchestrating the operation of the data replication subtasks
- Monitoring the running state of the data replication subtasks

-After DM-worker is started, it automatically replicates the upstream binlog to the local configuration directory (the default replication directory is `<deploy_dir>/relay_log` if DM is deployed using `DM-Ansible`). For details about DM-worker, see [DM-worker Introduction](/tools/dm/dm-worker-intro.md). For details about the relay log, see [Relay Log](/tools/dm/relay-log.md).
+After DM-worker is started, it automatically replicates the upstream binlog to the local configuration directory (the default replication directory is `<deploy_dir>/relay_log` if DM is deployed using `DM-Ansible`). For details about DM-worker, see [DM-worker Introduction](/reference/tools/data-migration/dm-worker-intro.md). For details about the relay log, see [Relay Log](/reference/tools/data-migration/relay-log.md).

### dmctl

@@ -84,7 +84,7 @@ Before using the DM tool, note the following restrictions:
> - 5.7.1 < MySQL version < 5.8
> - MariaDB version >= 10.1.3
-Data Migration [prechecks the corresponding privileges and configuration automatically](/tools/dm/precheck.md) while starting the data replication task using dmctl.
+Data Migration [prechecks the corresponding privileges and configuration automatically](/reference/tools/data-migration/precheck.md) while starting the data replication task using dmctl.

+ DDL syntax

@@ -154,7 +154,7 @@ This document introduces the query result and subtask status of Data Migration (

For the status description and status switch relationship of "stage" of "subTaskStatus" of "workers", see [Subtask status](#subtask-status).

-For operation details of "unresolvedDDLLockID" of "subTaskStatus" of "workers", see [Handle Sharding DDL Locks Manually](/tools/dm/manually-handling-sharding-ddl-locks.md).
+For operation details of "unresolvedDDLLockID" of "subTaskStatus" of "workers", see [Handle Sharding DDL Locks Manually](/reference/tools/data-migration/manually-handling-sharding-ddl-locks.md).

## Subtask status

@@ -128,7 +128,7 @@ When you use dmctl to manually handle the SQL statements unsupported by TiDB, th

#### query-status

-`query-status` allows you to query the current status of items such as the subtask and the relay unit in each DM-worker. For details, see [query status](/tools/dm/query-status.md).
+`query-status` allows you to query the current status of items such as the subtask and the relay unit in each DM-worker. For details, see [query status](/reference/tools/data-migration/query-status.md).

#### query-error

@@ -98,7 +98,7 @@ Assume that the downstream schema after replication is as follows:
>
> The replication Requirements #4, #5 and #7 indicate that all the deletion operations in the `user` schema are filtered out, so a schema level filtering rule is configured here. However, the deletion operations of future tables in the `user` schema will also be filtered out.
-- To satisfy the replication Requirement #6, configure the [binlog event filter rule](/tools/dm/data-synchronization-features.md#binlog-event-filter) as follows:
+- To satisfy the replication Requirement #6, configure the [binlog event filter rule](/reference/tools/data-migration/features/overview.md#binlog-event-filter) as follows:

```yaml
filters:
@@ -512,7 +512,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain

2. Import the configuration file of Grafana dashboard.

-Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source.

### Description of Grafana Syncer metrics

@@ -6,7 +6,7 @@ category: User Guide

# TiSpark Quick Start Guide

-To make it easy to [try TiSpark](/tispark/tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.
+To make it easy to [try TiSpark](/reference/tispark.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.

## Deployment information

@@ -616,7 +616,7 @@ See [Syncer User Guide](docs/tools/syncer.md).

##### How to configure to monitor Syncer status?

-Download and import [Syncer Json](https://github.com/pingcap/docs/blob/master/etc/Syncer.json) to Grafana. Edit the Prometheus configuration file and add the following content:
+Download and import [Syncer Json](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) to Grafana. Edit the Prometheus configuration file and add the following content:

```
- job_name: 'syncer_ops' # task name
```
@@ -483,7 +483,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain

2. Import the configuration file of Grafana dashboard.

-Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source.

### Description of Grafana Syncer metrics

@@ -484,7 +484,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain

2. Import the configuration file of Grafana dashboard.

-Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source.

### Description of Grafana Syncer metrics

@@ -6,7 +6,7 @@ category: User Guide

# TiSpark Quick Start Guide

-To make it easy to [try TiSpark](../tispark/tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.
+To make it easy to [try TiSpark](/reference/tispark.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.

## Deployment information

@@ -33,7 +33,7 @@ DM-worker executes specific data replication tasks.
- Orchestrating the operation of the data replication subtasks
- Monitoring the running state of the data replication subtasks

-For details about DM-worker, see [DM-worker Introduction](../tools/dm-worker-intro.md).
+For details about DM-worker, see [DM-worker Introduction](/reference/tools/data-migration/dm-worker-intro.md).

### dmctl

@@ -484,7 +484,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain

2. Import the configuration file of Grafana dashboard.

-Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source.

### Description of Grafana Syncer metrics

@@ -914,7 +914,7 @@ If the amount of data that needs to be deleted at a time is very large, this loo

#### How to improve the data loading speed in TiDB?

-- The [Lightning](/reference/tools/lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data).
+- The [Lightning](/reference/tools/tidb-lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data).
- Data loading in TiDB is related to the status of disks and the whole cluster. When loading data, pay attention to metrics like the disk usage rate of the host, TiClient Error, Backoff, Thread CPU and so on. You can analyze the bottlenecks using these metrics.

#### What should I do if it is slow to reclaim storage space after deleting data?
@@ -1013,7 +1013,7 @@ See [Overview of the Monitoring Framework](/how-to/monitor/overview.md).

### Key metrics of monitoring

-See [Key Metrics](/reference/key-monitoring-metrics/overview.md).
+See [Key Metrics](/reference/key-monitoring-metrics/overview-dashboard.md).

#### Is there a better way of monitoring the key metrics?

@@ -62,4 +62,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit

* [CREATE DATABASE](/reference/sql/statements/create-database.md)
* [ALTER DATABASE](/reference/sql/statements/alter-database.md)
* [SHOW CREATE DATABASE](/reference/sql/statements/show-create-database.md)

@@ -54,4 +54,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit
## See also

* [SHOW CREATE USER](/reference/sql/statements/show-create-user.md)
-* [GRANT](/reference/sql/statements/grant.md)
+* [GRANT](/reference/sql/statements/grant-privileges.md)
@@ -113,7 +113,7 @@ To detect possible errors of data replication configuration in advance, DM provi
- DM automatically checks the corresponding privileges and configuration while starting the data replication task.
- You can also use the `check-task` command to manually precheck whether the upstream MySQL instance configuration satisfies the DM requirements.

-For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/tools/dm/precheck.md).
+For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/reference/tools/data-migration/precheck.md).

> **Note:**
>
@@ -147,7 +147,7 @@ break-ddl-lock <--worker=127.0.0.1:8262> [--remove-id] [--exec] [--skip] <task-n
+ `task-name`:

- Non-flag; string; required
-- It specifies the name of the task containing the lock that is going to execute the breaking operation (you can check whether a task contains the lock via [query-status](/tools/dm/query-status.md)).
+- It specifies the name of the task containing the lock that is going to execute the breaking operation (you can check whether a task contains the lock via [query-status](/reference/tools/data-migration/query-status.md)).

#### Example of results

@@ -33,7 +33,7 @@ routes:

### Parameter explanation

-DM replicates the upstream MySQL or MariaDB instance table that matches the [`schema-pattern`/`table-pattern` rule provided by Table selector](/tools/dm/table-selector.md) to the downstream `target-schema`/`target-table`.
+DM replicates the upstream MySQL or MariaDB instance table that matches the [`schema-pattern`/`table-pattern` rule provided by Table selector](/reference/tools/data-migration/table-selector.md) to the downstream `target-schema`/`target-table`.

### Usage examples

@@ -225,7 +225,7 @@ filters:

### Parameter explanation

-- [`schema-pattern`/`table-pattern`](/tools/dm/table-selector.md): the binlog events or DDL SQL statements of upstream MySQL or MariaDB instance tables that match `schema-pattern`/`table-pattern` are filtered by the rules below.
+- [`schema-pattern`/`table-pattern`](/reference/tools/data-migration/table-selector.md): the binlog events or DDL SQL statements of upstream MySQL or MariaDB instance tables that match `schema-pattern`/`table-pattern` are filtered by the rules below.

- `events`: the binlog event array.

@@ -373,7 +373,7 @@ column-mappings:

### Parameter explanation

-- [`schema-pattern`/`table-pattern`](/tools/dm/table-selector.md): to execute column value modifying operations on the upstream MySQL or MariaDB instance tables that match the `schema-pattern`/`table-pattern` filtering rule.
+- [`schema-pattern`/`table-pattern`](/reference/tools/data-migration/table-selector.md): to execute column value modifying operations on the upstream MySQL or MariaDB instance tables that match the `schema-pattern`/`table-pattern` filtering rule.
- `source-column`, `target-column`: to modify the value of the `source-column` column according to specified `expression` and assign the new value to `target-column`.
- `expression`: the expression used to modify data. Currently, only the `partition id` built-in expression is supported.
