From 3a19a3162b75efb69ce28acb4ff40dd8a6ffec18 Mon Sep 17 00:00:00 2001
From: Caitin <34535727+CaitinChen@users.noreply.github.com>
Date: Fri, 5 Jul 2019 10:06:41 +0800
Subject: [PATCH] dev, v2.1, v3.0: fix broken links (#1308)

* dev, v2.1, v3.0: fix broken links

* Add GRANT back

* Add GRANT back

* Add GRANT back

* fix links

* fix links
---
 dev/faq/tidb.md | 4 ++--
 dev/reference/sql/statements/drop-database.md | 2 +-
 dev/reference/sql/statements/show-grants.md | 2 +-
 dev/reference/tools/data-migration/deploy.md | 2 +-
 .../features/manually-handling-sharding-ddl-locks.md | 2 +-
 dev/reference/tools/data-migration/features/overview.md | 6 +++---
 dev/reference/tools/data-migration/overview.md | 4 ++--
 dev/reference/tools/data-migration/query-status.md | 2 +-
 dev/reference/tools/data-migration/skip-replace-sqls.md | 2 +-
 .../tools/data-migration/usage-scenarios/shard-merge.md | 2 +-
 dev/reference/tools/syncer.md | 2 +-
 dev/tispark/tispark-quick-start-guide_v1.x.md | 2 +-
 v1.0/FAQ.md | 2 +-
 v1.0/tools/syncer.md | 2 +-
 v2.0/tools/syncer.md | 2 +-
 v2.1-legacy/tispark/tispark-quick-start-guide.md | 2 +-
 v2.1-legacy/tools/data-migration-overview.md | 2 +-
 v2.1-legacy/tools/syncer.md | 2 +-
 v2.1/faq/tidb.md | 4 ++--
 v2.1/reference/sql/statements/drop-database.md | 2 +-
 v2.1/reference/sql/statements/show-grants.md | 2 +-
 v2.1/reference/tools/data-migration/deploy.md | 2 +-
 .../features/manually-handling-sharding-ddl-locks.md | 2 +-
 v2.1/reference/tools/data-migration/features/overview.md | 6 +++---
 v2.1/reference/tools/data-migration/overview.md | 4 ++--
 v2.1/reference/tools/data-migration/query-status.md | 2 +-
 v2.1/reference/tools/data-migration/skip-replace-sqls.md | 2 +-
 .../tools/data-migration/usage-scenarios/shard-merge.md | 2 +-
 v2.1/reference/tools/syncer.md | 2 +-
 v2.1/tispark/tispark-quick-start-guide_v1.x.md | 2 +-
 v3.0/faq/tidb.md | 4 ++--
 v3.0/reference/sql/statements/drop-database.md | 1 -
 v3.0/reference/sql/statements/show-grants.md | 2 +-
 v3.0/reference/tools/data-migration/deploy.md | 2 +-
 .../features/manually-handling-sharding-ddl-locks.md | 2 +-
 v3.0/reference/tools/data-migration/features/overview.md | 6 +++---
 v3.0/reference/tools/data-migration/overview.md | 4 ++--
 v3.0/reference/tools/data-migration/query-status.md | 2 +-
 v3.0/reference/tools/data-migration/skip-replace-sqls.md | 2 +-
 .../tools/data-migration/usage-scenarios/shard-merge.md | 2 +-
 v3.0/reference/tools/syncer.md | 2 +-
 v3.0/tispark/tispark-quick-start-guide_v1.x.md | 2 +-
 42 files changed, 53 insertions(+), 54 deletions(-)

diff --git a/dev/faq/tidb.md b/dev/faq/tidb.md
index 64656d34ff65..9fb82f6bc0a7 100644
--- a/dev/faq/tidb.md
+++ b/dev/faq/tidb.md
@@ -912,7 +912,7 @@ If the amount of data that needs to be deleted at a time is very large, this loo
 
 #### How to improve the data loading speed in TiDB?
 
-- The [Lightning](/reference/tools/lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data).
+- The [Lightning](/reference/tools/tidb-lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends.
Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data). - Data loading in TiDB is related to the status of disks and the whole cluster. When loading data, pay attention to metrics like the disk usage rate of the host, TiClient Error, Backoff, Thread CPU and so on. You can analyze the bottlenecks using these metrics. #### What should I do if it is slow to reclaim storage space after deleting data? @@ -1011,7 +1011,7 @@ See [Overview of the Monitoring Framework](/how-to/monitor/overview.md). ### Key metrics of monitoring -See [Key Metrics](/reference/key-monitoring-metrics/overview.md). +See [Key Metrics](/reference/key-monitoring-metrics/overview-dashboard.md). #### Is there a better way of monitoring the key metrics? diff --git a/dev/reference/sql/statements/drop-database.md b/dev/reference/sql/statements/drop-database.md index 4e19edf15352..4b238071713e 100644 --- a/dev/reference/sql/statements/drop-database.md +++ b/dev/reference/sql/statements/drop-database.md @@ -62,4 +62,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit * [CREATE DATABASE](/reference/sql/statements/create-database.md) * [ALTER DATABASE](/reference/sql/statements/alter-database.md) -* [SHOW CREATE DATABASE](/reference/sql/statements/show-create-database.md) + diff --git a/dev/reference/sql/statements/show-grants.md b/dev/reference/sql/statements/show-grants.md index 41a460ec376f..fdd88e55ac30 100644 --- a/dev/reference/sql/statements/show-grants.md +++ b/dev/reference/sql/statements/show-grants.md @@ -54,4 +54,4 @@ This statement is understood to be fully compatible with MySQL. 
Any compatibilit ## See also * [SHOW CREATE USER](/reference/sql/statements/show-create-user.md) -* [GRANT](/reference/sql/statements/grant.md) +* [GRANT](/reference/sql/statements/grant-privileges.md) diff --git a/dev/reference/tools/data-migration/deploy.md b/dev/reference/tools/data-migration/deploy.md index e83c840f7fe0..2b46b4f68b0f 100644 --- a/dev/reference/tools/data-migration/deploy.md +++ b/dev/reference/tools/data-migration/deploy.md @@ -113,7 +113,7 @@ To detect possible errors of data replication configuration in advance, DM provi - DM automatically checks the corresponding privileges and configuration while starting the data replication task. - You can also use the `check-task` command to manually precheck whether the upstream MySQL instance configuration satisfies the DM requirements. -For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/tools/dm/precheck.md). +For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/reference/tools/data-migration/precheck.md). > **Note:** > diff --git a/dev/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md b/dev/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md index f9b9258d8ff2..074e77abd69b 100644 --- a/dev/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md +++ b/dev/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md @@ -147,7 +147,7 @@ break-ddl-lock <--worker=127.0.0.1:8262> [--remove-id] [--exec] [--skip] /relay_log` if DM is deployed using `DM-Ansible`). For details about DM-worker, see [DM-worker Introduction](/tools/dm/dm-worker-intro.md). For details about the relay log, see [Relay Log](/tools/dm/relay-log.md). 
+After DM-worker is started, it automatically replicates the upstream binlog to the local configuration directory (the default replication directory is `/relay_log` if DM is deployed using `DM-Ansible`). For details about DM-worker, see [DM-worker Introduction](/reference/tools/data-migration/dm-worker-intro.md). For details about the relay log, see [Relay Log](/reference/tools/data-migration/relay-log.md). ### dmctl @@ -84,7 +84,7 @@ Before using the DM tool, note the following restrictions: > - 5.7.1 < MySQL version < 5.8 > - MariaDB version >= 10.1.3 - Data Migration [prechecks the corresponding privileges and configuration automatically](/tools/dm/precheck.md) while starting the data replication task using dmctl. + Data Migration [prechecks the corresponding privileges and configuration automatically](/reference/tools/data-migration/precheck.md) while starting the data replication task using dmctl. + DDL syntax diff --git a/dev/reference/tools/data-migration/query-status.md b/dev/reference/tools/data-migration/query-status.md index 1024d818fb9d..84ceab59bdea 100644 --- a/dev/reference/tools/data-migration/query-status.md +++ b/dev/reference/tools/data-migration/query-status.md @@ -154,7 +154,7 @@ This document introduces the query result and subtask status of Data Migration ( For the status description and status switch relationship of "stage" of "subTaskStatus" of "workers", see [Subtask status](#subtask-status). -For operation details of "unresolvedDDLLockID" of "subTaskStatus" of "workers", see [Handle Sharding DDL Locks Manually](/tools/dm/manually-handling-sharding-ddl-locks.md). +For operation details of "unresolvedDDLLockID" of "subTaskStatus" of "workers", see [Handle Sharding DDL Locks Manually](/reference/tools/data-migration/manually-handling-sharding-ddl-locks.md). 
## Subtask status diff --git a/dev/reference/tools/data-migration/skip-replace-sqls.md b/dev/reference/tools/data-migration/skip-replace-sqls.md index b11237525cd8..0d5e2af29459 100644 --- a/dev/reference/tools/data-migration/skip-replace-sqls.md +++ b/dev/reference/tools/data-migration/skip-replace-sqls.md @@ -128,7 +128,7 @@ When you use dmctl to manually handle the SQL statements unsupported by TiDB, th #### query-status -`query-status` allows you to query the current status of items such as the subtask and the relay unit in each DM-worker. For details, see [query status](/tools/dm/query-status.md). +`query-status` allows you to query the current status of items such as the subtask and the relay unit in each DM-worker. For details, see [query status](/reference/tools/data-migration/query-status.md). #### query-error diff --git a/dev/reference/tools/data-migration/usage-scenarios/shard-merge.md b/dev/reference/tools/data-migration/usage-scenarios/shard-merge.md index 5d8e0b36ab01..0f3c8378060b 100644 --- a/dev/reference/tools/data-migration/usage-scenarios/shard-merge.md +++ b/dev/reference/tools/data-migration/usage-scenarios/shard-merge.md @@ -98,7 +98,7 @@ Assume that the downstream schema after replication is as follows: > > The replication Requirements #4, #5 and #7 indicate that all the deletion operations in the `user` schema are filtered out, so a schema level filtering rule is configured here. However, the deletion operations of future tables in the `user` schema will also be filtered out. 
-- To satisfy the replication Requirement #6, configure the [binlog event filter rule](/tools/dm/data-synchronization-features.md#binlog-event-filter) as follows: +- To satisfy the replication Requirement #6, configure the [binlog event filter rule](/reference/tools/data-migration/features/overview.md#binlog-event-filter) as follows: ```yaml filters: diff --git a/dev/reference/tools/syncer.md b/dev/reference/tools/syncer.md index 7a592cf6ecf0..7c6d68318350 100644 --- a/dev/reference/tools/syncer.md +++ b/dev/reference/tools/syncer.md @@ -512,7 +512,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain 2. Import the configuration file of Grafana dashboard. - Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source. + Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source. ### Description of Grafana Syncer metrics diff --git a/dev/tispark/tispark-quick-start-guide_v1.x.md b/dev/tispark/tispark-quick-start-guide_v1.x.md index d213dd780e48..f6aa87f6efc7 100644 --- a/dev/tispark/tispark-quick-start-guide_v1.x.md +++ b/dev/tispark/tispark-quick-start-guide_v1.x.md @@ -6,7 +6,7 @@ category: User Guide # TiSpark Quick Start Guide -To make it easy to [try TiSpark](/tispark/tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default. +To make it easy to [try TiSpark](/reference/tispark.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default. 
## Deployment information diff --git a/v1.0/FAQ.md b/v1.0/FAQ.md index ef00f830a9e1..3e5d254c6372 100755 --- a/v1.0/FAQ.md +++ b/v1.0/FAQ.md @@ -616,7 +616,7 @@ See [Syncer User Guide](docs/tools/syncer.md). ##### How to configure to monitor Syncer status? -Download and import [Syncer Json](https://github.com/pingcap/docs/blob/master/etc/Syncer.json) to Grafana. Edit the Prometheus configuration file and add the following content: +Download and import [Syncer Json](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) to Grafana. Edit the Prometheus configuration file and add the following content: ``` - job_name: ‘syncer_ops’ // task name diff --git a/v1.0/tools/syncer.md b/v1.0/tools/syncer.md index 6d205f1777d9..dd50093aecad 100755 --- a/v1.0/tools/syncer.md +++ b/v1.0/tools/syncer.md @@ -483,7 +483,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain 2. Import the configuration file of Grafana dashboard. - Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source. + Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source. ### Description of Grafana Syncer metrics diff --git a/v2.0/tools/syncer.md b/v2.0/tools/syncer.md index 78f6ae06d379..cb43cd15079b 100755 --- a/v2.0/tools/syncer.md +++ b/v2.0/tools/syncer.md @@ -484,7 +484,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain 2. Import the configuration file of Grafana dashboard. - Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source. 
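The v1.0 FAQ hunk above quotes a Prometheus snippet for monitoring Syncer (`- job_name: ‘syncer_ops’ // task name`) that is cut off in this patch view. A minimal scrape-job sketch of what that configuration could look like — the target address is a placeholder assumption, and port 10086 is Syncer's default status port per its configuration:

```yaml
scrape_configs:
  - job_name: 'syncer_ops'        # task name
    static_configs:
      # Placeholder; replace with the host:port where Syncer
      # exposes its metrics (assumed default status port 10086).
      - targets: ['127.0.0.1:10086']
```

After reloading Prometheus with this job, the Grafana dashboard imported in the step above can read the Syncer metrics from it as its data source.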
+ Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source. ### Description of Grafana Syncer metrics diff --git a/v2.1-legacy/tispark/tispark-quick-start-guide.md b/v2.1-legacy/tispark/tispark-quick-start-guide.md index 04d200aa4e77..f6aa87f6efc7 100755 --- a/v2.1-legacy/tispark/tispark-quick-start-guide.md +++ b/v2.1-legacy/tispark/tispark-quick-start-guide.md @@ -6,7 +6,7 @@ category: User Guide # TiSpark Quick Start Guide -To make it easy to [try TiSpark](../tispark/tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default. +To make it easy to [try TiSpark](/reference/tispark.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default. ## Deployment information diff --git a/v2.1-legacy/tools/data-migration-overview.md b/v2.1-legacy/tools/data-migration-overview.md index 816a7e1c1b7e..5d36473e580b 100755 --- a/v2.1-legacy/tools/data-migration-overview.md +++ b/v2.1-legacy/tools/data-migration-overview.md @@ -33,7 +33,7 @@ DM-worker executes specific data replication tasks. - Orchestrating the operation of the data replication subtasks - Monitoring the running state of the data replication subtasks -For details about DM-worker, see [DM-worker Introduction](../tools/dm-worker-intro.md). +For details about DM-worker, see [DM-worker Introduction](/reference/tools/data-migration/dm-worker-intro.md). ### dmctl diff --git a/v2.1-legacy/tools/syncer.md b/v2.1-legacy/tools/syncer.md index b58f485978de..82c8b7fc31bb 100755 --- a/v2.1-legacy/tools/syncer.md +++ b/v2.1-legacy/tools/syncer.md @@ -484,7 +484,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain 2. Import the configuration file of Grafana dashboard. 
- Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source. + Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source. ### Description of Grafana Syncer metrics diff --git a/v2.1/faq/tidb.md b/v2.1/faq/tidb.md index 5d6b1845a01f..789431928ec8 100644 --- a/v2.1/faq/tidb.md +++ b/v2.1/faq/tidb.md @@ -914,7 +914,7 @@ If the amount of data that needs to be deleted at a time is very large, this loo #### How to improve the data loading speed in TiDB? -- The [Lightning](/reference/tools/lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data). +- The [Lightning](/reference/tools/tidb-lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. 
Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data). - Data loading in TiDB is related to the status of disks and the whole cluster. When loading data, pay attention to metrics like the disk usage rate of the host, TiClient Error, Backoff, Thread CPU and so on. You can analyze the bottlenecks using these metrics. #### What should I do if it is slow to reclaim storage space after deleting data? @@ -1013,7 +1013,7 @@ See [Overview of the Monitoring Framework](/how-to/monitor/overview.md). ### Key metrics of monitoring -See [Key Metrics](/reference/key-monitoring-metrics/overview.md). +See [Key Metrics](/reference/key-monitoring-metrics/overview-dashboard.md). #### Is there a better way of monitoring the key metrics? diff --git a/v2.1/reference/sql/statements/drop-database.md b/v2.1/reference/sql/statements/drop-database.md index aeac968258a2..f90428071df7 100644 --- a/v2.1/reference/sql/statements/drop-database.md +++ b/v2.1/reference/sql/statements/drop-database.md @@ -62,4 +62,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit * [CREATE DATABASE](/reference/sql/statements/create-database.md) * [ALTER DATABASE](/reference/sql/statements/alter-database.md) -* [SHOW CREATE DATABASE](/reference/sql/statements/show-create-database.md) + diff --git a/v2.1/reference/sql/statements/show-grants.md b/v2.1/reference/sql/statements/show-grants.md index 3a51be927710..d3669891bd58 100644 --- a/v2.1/reference/sql/statements/show-grants.md +++ b/v2.1/reference/sql/statements/show-grants.md @@ -54,4 +54,4 @@ This statement is understood to be fully compatible with MySQL. 
Any compatibilit ## See also * [SHOW CREATE USER](/reference/sql/statements/show-create-user.md) -* [GRANT](/reference/sql/statements/grant.md) +* [GRANT](/reference/sql/statements/grant-privileges.md) diff --git a/v2.1/reference/tools/data-migration/deploy.md b/v2.1/reference/tools/data-migration/deploy.md index e83c840f7fe0..2b46b4f68b0f 100644 --- a/v2.1/reference/tools/data-migration/deploy.md +++ b/v2.1/reference/tools/data-migration/deploy.md @@ -113,7 +113,7 @@ To detect possible errors of data replication configuration in advance, DM provi - DM automatically checks the corresponding privileges and configuration while starting the data replication task. - You can also use the `check-task` command to manually precheck whether the upstream MySQL instance configuration satisfies the DM requirements. -For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/tools/dm/precheck.md). +For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/reference/tools/data-migration/precheck.md). > **Note:** > diff --git a/v2.1/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md b/v2.1/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md index f9b9258d8ff2..074e77abd69b 100644 --- a/v2.1/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md +++ b/v2.1/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md @@ -147,7 +147,7 @@ break-ddl-lock <--worker=127.0.0.1:8262> [--remove-id] [--exec] [--skip] /relay_log` if DM is deployed using `DM-Ansible`). For details about DM-worker, see [DM-worker Introduction](/tools/dm/dm-worker-intro.md). For details about the relay log, see [Relay Log](/tools/dm/relay-log.md). 
+After DM-worker is started, it automatically replicates the upstream binlog to the local configuration directory (the default replication directory is `/relay_log` if DM is deployed using `DM-Ansible`). For details about DM-worker, see [DM-worker Introduction](/reference/tools/data-migration/dm-worker-intro.md). For details about the relay log, see [Relay Log](/reference/tools/data-migration/relay-log.md). ### dmctl @@ -84,7 +84,7 @@ Before using the DM tool, note the following restrictions: > - 5.7.1 < MySQL version < 5.8 > - MariaDB version >= 10.1.3 - Data Migration [prechecks the corresponding privileges and configuration automatically](/tools/dm/precheck.md) while starting the data replication task using dmctl. + Data Migration [prechecks the corresponding privileges and configuration automatically](/reference/tools/data-migration/precheck.md) while starting the data replication task using dmctl. + DDL syntax diff --git a/v2.1/reference/tools/data-migration/query-status.md b/v2.1/reference/tools/data-migration/query-status.md index 1024d818fb9d..84ceab59bdea 100644 --- a/v2.1/reference/tools/data-migration/query-status.md +++ b/v2.1/reference/tools/data-migration/query-status.md @@ -154,7 +154,7 @@ This document introduces the query result and subtask status of Data Migration ( For the status description and status switch relationship of "stage" of "subTaskStatus" of "workers", see [Subtask status](#subtask-status). -For operation details of "unresolvedDDLLockID" of "subTaskStatus" of "workers", see [Handle Sharding DDL Locks Manually](/tools/dm/manually-handling-sharding-ddl-locks.md). +For operation details of "unresolvedDDLLockID" of "subTaskStatus" of "workers", see [Handle Sharding DDL Locks Manually](/reference/tools/data-migration/manually-handling-sharding-ddl-locks.md). 
## Subtask status diff --git a/v2.1/reference/tools/data-migration/skip-replace-sqls.md b/v2.1/reference/tools/data-migration/skip-replace-sqls.md index b11237525cd8..0d5e2af29459 100644 --- a/v2.1/reference/tools/data-migration/skip-replace-sqls.md +++ b/v2.1/reference/tools/data-migration/skip-replace-sqls.md @@ -128,7 +128,7 @@ When you use dmctl to manually handle the SQL statements unsupported by TiDB, th #### query-status -`query-status` allows you to query the current status of items such as the subtask and the relay unit in each DM-worker. For details, see [query status](/tools/dm/query-status.md). +`query-status` allows you to query the current status of items such as the subtask and the relay unit in each DM-worker. For details, see [query status](/reference/tools/data-migration/query-status.md). #### query-error diff --git a/v2.1/reference/tools/data-migration/usage-scenarios/shard-merge.md b/v2.1/reference/tools/data-migration/usage-scenarios/shard-merge.md index 5d8e0b36ab01..0f3c8378060b 100644 --- a/v2.1/reference/tools/data-migration/usage-scenarios/shard-merge.md +++ b/v2.1/reference/tools/data-migration/usage-scenarios/shard-merge.md @@ -98,7 +98,7 @@ Assume that the downstream schema after replication is as follows: > > The replication Requirements #4, #5 and #7 indicate that all the deletion operations in the `user` schema are filtered out, so a schema level filtering rule is configured here. However, the deletion operations of future tables in the `user` schema will also be filtered out. 
-- To satisfy the replication Requirement #6, configure the [binlog event filter rule](/tools/dm/data-synchronization-features.md#binlog-event-filter) as follows: +- To satisfy the replication Requirement #6, configure the [binlog event filter rule](/reference/tools/data-migration/features/overview.md#binlog-event-filter) as follows: ```yaml filters: diff --git a/v2.1/reference/tools/syncer.md b/v2.1/reference/tools/syncer.md index 7a592cf6ecf0..7c6d68318350 100644 --- a/v2.1/reference/tools/syncer.md +++ b/v2.1/reference/tools/syncer.md @@ -512,7 +512,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain 2. Import the configuration file of Grafana dashboard. - Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source. + Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source. ### Description of Grafana Syncer metrics diff --git a/v2.1/tispark/tispark-quick-start-guide_v1.x.md b/v2.1/tispark/tispark-quick-start-guide_v1.x.md index d213dd780e48..f6aa87f6efc7 100644 --- a/v2.1/tispark/tispark-quick-start-guide_v1.x.md +++ b/v2.1/tispark/tispark-quick-start-guide_v1.x.md @@ -6,7 +6,7 @@ category: User Guide # TiSpark Quick Start Guide -To make it easy to [try TiSpark](/tispark/tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default. +To make it easy to [try TiSpark](/reference/tispark.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default. 
## Deployment information diff --git a/v3.0/faq/tidb.md b/v3.0/faq/tidb.md index 81eebd48b6ea..41bd56f43242 100644 --- a/v3.0/faq/tidb.md +++ b/v3.0/faq/tidb.md @@ -913,7 +913,7 @@ If the amount of data that needs to be deleted at a time is very large, this loo #### How to improve the data loading speed in TiDB? -- The [Lightning](/reference/tools/lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data). +- The [Lightning](/reference/tools/tidb-lightning/overview.md) tool is developed for distributed data import. It should be noted that the data import process does not perform a complete transaction process for performance reasons. Therefore, the ACID constraint of the data being imported during the import process cannot be guaranteed. The ACID constraint of the imported data can only be guaranteed after the entire import process ends. Therefore, the applicable scenarios mainly include importing new data (such as a new table or a new index) or the full backup and restoring (truncate the original table and then import data). - Data loading in TiDB is related to the status of disks and the whole cluster. When loading data, pay attention to metrics like the disk usage rate of the host, TiClient Error, Backoff, Thread CPU and so on. You can analyze the bottlenecks using these metrics. #### What should I do if it is slow to reclaim storage space after deleting data? 
@@ -1012,7 +1012,7 @@ See [Overview of the Monitoring Framework](/how-to/monitor/overview.md). ### Key metrics of monitoring -See [Key Metrics](/reference/key-monitoring-metrics/overview.md). +See [Key Metrics](/reference/key-monitoring-metrics/overview-dashboard.md). #### Is there a better way of monitoring the key metrics? diff --git a/v3.0/reference/sql/statements/drop-database.md b/v3.0/reference/sql/statements/drop-database.md index a1b9d081664a..9e7016f63ac9 100644 --- a/v3.0/reference/sql/statements/drop-database.md +++ b/v3.0/reference/sql/statements/drop-database.md @@ -62,4 +62,3 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit * [CREATE DATABASE](/reference/sql/statements/create-database.md) * [ALTER DATABASE](/reference/sql/statements/alter-database.md) -* [SHOW CREATE DATABASE](/reference/sql/statements/show-create-database.md) diff --git a/v3.0/reference/sql/statements/show-grants.md b/v3.0/reference/sql/statements/show-grants.md index ec3879fee443..edbc580f065e 100644 --- a/v3.0/reference/sql/statements/show-grants.md +++ b/v3.0/reference/sql/statements/show-grants.md @@ -54,4 +54,4 @@ This statement is understood to be fully compatible with MySQL. Any compatibilit ## See also * [SHOW CREATE USER](/reference/sql/statements/show-create-user.md) -* [GRANT](/reference/sql/statements/grant.md) +* [GRANT](/reference/sql/statements/grant-privileges.md) diff --git a/v3.0/reference/tools/data-migration/deploy.md b/v3.0/reference/tools/data-migration/deploy.md index be21bcaa6015..c0d985005b28 100644 --- a/v3.0/reference/tools/data-migration/deploy.md +++ b/v3.0/reference/tools/data-migration/deploy.md @@ -114,7 +114,7 @@ To detect possible errors of data replication configuration in advance, DM provi - DM automatically checks the corresponding privileges and configuration while starting the data replication task. 
- You can also use the `check-task` command to manually precheck whether the upstream MySQL instance configuration satisfies the DM requirements. -For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/tools/dm/precheck.md). +For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](/reference/tools/data-migration/precheck.md). > **Note:** > diff --git a/v3.0/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md b/v3.0/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md index e9d6392a5a0b..1a1e3e090c8f 100644 --- a/v3.0/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md +++ b/v3.0/reference/tools/data-migration/features/manually-handling-sharding-ddl-locks.md @@ -148,7 +148,7 @@ break-ddl-lock <--worker=127.0.0.1:8262> [--remove-id] [--exec] [--skip] /relay_log` if DM is deployed using `DM-Ansible`). For details about DM-worker, see [DM-worker Introduction](/tools/dm/dm-worker-intro.md). For details about the relay log, see [Relay Log](/tools/dm/relay-log.md). +After DM-worker is started, it automatically replicates the upstream binlog to the local configuration directory (the default replication directory is `/relay_log` if DM is deployed using `DM-Ansible`). For details about DM-worker, see [DM-worker Introduction](/reference/tools/data-migration/dm-worker-intro.md). For details about the relay log, see [Relay Log](/tools/dm/relay-log.md). ### dmctl @@ -85,7 +85,7 @@ Before using the DM tool, note the following restrictions: > - 5.7.1 < MySQL version < 5.8 > - MariaDB version >= 10.1.3 - Data Migration [prechecks the corresponding privileges and configuration automatically](/tools/dm/precheck.md) while starting the data replication task using dmctl. 
+    Data Migration [prechecks the corresponding privileges and configuration automatically](/reference/tools/data-migration/precheck.md) while starting the data replication task using dmctl.
 
 + DDL syntax
 
diff --git a/v3.0/reference/tools/data-migration/query-status.md b/v3.0/reference/tools/data-migration/query-status.md
index 260bc55bf585..460495abe4e3 100644
--- a/v3.0/reference/tools/data-migration/query-status.md
+++ b/v3.0/reference/tools/data-migration/query-status.md
@@ -155,7 +155,7 @@ This document introduces the query result and subtask status of Data Migration (
 
 For the status description and status switch relationship of "stage" of "subTaskStatus" of "workers", see [Subtask status](#subtask-status).
 
-For operation details of "unresolvedDDLLockID" of "subTaskStatus" of "workers", see [Handle Sharding DDL Locks Manually](/tools/dm/manually-handling-sharding-ddl-locks.md).
+For operation details of "unresolvedDDLLockID" of "subTaskStatus" of "workers", see [Handle Sharding DDL Locks Manually](/reference/tools/data-migration/manually-handling-sharding-ddl-locks.md).
 
 ## Subtask status
 
diff --git a/v3.0/reference/tools/data-migration/skip-replace-sqls.md b/v3.0/reference/tools/data-migration/skip-replace-sqls.md
index 1161ecc3e343..77b9f89601a7 100644
--- a/v3.0/reference/tools/data-migration/skip-replace-sqls.md
+++ b/v3.0/reference/tools/data-migration/skip-replace-sqls.md
@@ -129,7 +129,7 @@ When you use dmctl to manually handle the SQL statements unsupported by TiDB, th
 
 #### query-status
 
-`query-status` allows you to query the current status of items such as the subtask and the relay unit in each DM-worker. For details, see [query status](/tools/dm/query-status.md).
+`query-status` allows you to query the current status of items such as the subtask and the relay unit in each DM-worker. For details, see [query status](/reference/tools/data-migration/query-status.md).
 #### query-error
 
diff --git a/v3.0/reference/tools/data-migration/usage-scenarios/shard-merge.md b/v3.0/reference/tools/data-migration/usage-scenarios/shard-merge.md
index 33687955d5d5..a9483cc692aa 100644
--- a/v3.0/reference/tools/data-migration/usage-scenarios/shard-merge.md
+++ b/v3.0/reference/tools/data-migration/usage-scenarios/shard-merge.md
@@ -99,7 +99,7 @@ Assume that the downstream schema after replication is as follows:
 >
 > The replication Requirements #4, #5 and #7 indicate that all the deletion operations in the `user` schema are filtered out, so a schema level filtering rule is configured here. However, the deletion operations of future tables in the `user` schema will also be filtered out.
 
-- To satisfy the replication Requirement #6, configure the [binlog event filter rule](/tools/dm/data-synchronization-features.md#binlog-event-filter) as follows:
+- To satisfy the replication Requirement #6, configure the [binlog event filter rule](/reference/tools/data-migration/features/overview.md#binlog-event-filter) as follows:
 
     ```yaml
     filters:
diff --git a/v3.0/reference/tools/syncer.md b/v3.0/reference/tools/syncer.md
index 344a31f5e660..08855e129cfd 100644
--- a/v3.0/reference/tools/syncer.md
+++ b/v3.0/reference/tools/syncer.md
@@ -513,7 +513,7 @@ Syncer provides the metric interface, and requires Prometheus to actively obtain
 
 2. Import the configuration file of Grafana dashboard.
 
-    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/docs/tree/master/etc) -> choose the corresponding data source.
+    Click the Grafana Logo -> click Dashboards -> click Import -> choose and import the dashboard [configuration file](https://github.com/pingcap/tidb-ansible/blob/master/scripts/syncer.json) -> choose the corresponding data source.
 ### Description of Grafana Syncer metrics
 
diff --git a/v3.0/tispark/tispark-quick-start-guide_v1.x.md b/v3.0/tispark/tispark-quick-start-guide_v1.x.md
index d213dd780e48..f6aa87f6efc7 100644
--- a/v3.0/tispark/tispark-quick-start-guide_v1.x.md
+++ b/v3.0/tispark/tispark-quick-start-guide_v1.x.md
@@ -6,7 +6,7 @@ category: User Guide
 
 # TiSpark Quick Start Guide
 
-To make it easy to [try TiSpark](/tispark/tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.
+To make it easy to [try TiSpark](/reference/tispark.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, TiSpark jar package and TiSpark sample data by default.
 
 ## Deployment information
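
The hunks above all apply the same mechanical substitution: links under legacy prefixes such as `/tools/dm/` are rewritten to their new `/reference/tools/data-migration/` locations. A minimal sketch of how such stale links could be found automatically is below; the prefix map is an assumption derived only from the substitutions visible in this patch, not an exhaustive list, and `find_legacy_links` is a hypothetical helper, not part of any pingcap tooling.

```python
import re

# Legacy link prefixes mapped to their new locations. These two entries are
# taken from the substitutions in this patch; a real checker would need the
# full mapping (ASSUMPTION: the list here is illustrative only).
LEGACY_PREFIXES = {
    "/tools/dm/": "/reference/tools/data-migration/",
    "/tispark/tispark-user-guide.md": "/reference/tispark.md",
}

# Matches the target inside a Markdown inline link: [text](target)
MD_LINK = re.compile(r"\[[^\]]*\]\(([^)]+)\)")

def find_legacy_links(markdown: str):
    """Return (old_target, suggested_target) pairs for links that still
    point at a legacy path."""
    hits = []
    for target in MD_LINK.findall(markdown):
        for old, new in LEGACY_PREFIXES.items():
            if target.startswith(old):
                hits.append((target, new + target[len(old):]))
    return hits
```

Run over each `.md` file, this flags exactly the kind of link this PR rewrites, e.g. `/tools/dm/query-status.md` with the suggestion `/reference/tools/data-migration/query-status.md`.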