
*: update all anchor links to github style (#1515)

* *: update all anchor links to github style

* fix a link
TomShawn authored and YiniXu9506 committed Sep 17, 2019
1 parent 0697ea2 commit 041e6004171f819912997d199461655319f363c2
Showing with 170 additions and 170 deletions.
  1. +4 −4 dev/faq/tidb.md
  2. +2 −2 dev/how-to/deploy/orchestrated/offline-ansible.md
  3. +3 −3 dev/how-to/secure/enable-tls-clients.md
  4. +10 −10 dev/reference/alert-rules.md
  5. +1 −1 dev/reference/configuration/tidb-server/tidb-specific-variables.md
  6. +2 −2 dev/reference/garbage-collection/configuration.md
  7. +1 −1 dev/reference/garbage-collection/overview.md
  8. +3 −3 dev/reference/mysql-compatibility.md
  9. +6 −6 dev/reference/performance/check-cluster-status-using-sql-statements.md
  10. +2 −2 dev/reference/performance/understanding-the-query-execution-plan.md
  11. +1 −1 dev/reference/sql/statements/create-index.md
  12. +1 −1 dev/reference/sql/statements/create-table.md
  13. +1 −1 dev/reference/system-databases/information-schema.md
  14. +1 −1 dev/reference/tispark.md
  15. +1 −1 dev/reference/tools/data-migration/configure/overview.md
  16. +1 −1 dev/reference/tools/data-migration/features/overview.md
  17. +1 −1 dev/reference/tools/data-migration/manage-tasks.md
  18. +2 −2 dev/reference/tools/data-migration/overview.md
  19. +1 −1 dev/reference/tools/data-migration/skip-replace-sqls.md
  20. +1 −1 dev/reference/tools/data-migration/usage-scenarios/shard-merge.md
  21. +2 −2 dev/reference/tools/data-migration/usage-scenarios/simple-replication.md
  22. +1 −1 dev/reference/tools/syncer.md
  23. +1 −1 dev/reference/tools/tidb-lightning/table-filter.md
  24. +5 −5 dev/releases/2.1ga.md
  25. +1 −1 dev/tidb-in-kubernetes/deploy/alibaba-cloud.md
  26. +1 −1 dev/tidb-in-kubernetes/deploy/aws-eks.md
  27. +1 −1 dev/tidb-in-kubernetes/deploy/gcp-gke.md
  28. +1 −1 dev/tidb-in-kubernetes/faq.md
  29. +4 −4 dev/tidb-in-kubernetes/reference/configuration/tidb-cluster.md
  30. +1 −1 dev/tidb-in-kubernetes/reference/configuration/tidb-drainer.md
  31. +3 −3 v2.1/faq/tidb.md
  32. +2 −2 v2.1/how-to/deploy/orchestrated/offline-ansible.md
  33. +3 −3 v2.1/how-to/secure/enable-tls-clients.md
  34. +10 −10 v2.1/reference/alert-rules.md
  35. +1 −1 v2.1/reference/configuration/tidb-server/tidb-specific-variables.md
  36. +3 −3 v2.1/reference/mysql-compatibility.md
  37. +2 −2 v2.1/reference/performance/understanding-the-query-execution-plan.md
  38. +1 −1 v2.1/reference/sql/statements/create-index.md
  39. +1 −1 v2.1/reference/sql/statements/create-table.md
  40. +1 −1 v2.1/reference/tispark.md
  41. +1 −1 v2.1/reference/tools/data-migration/configure/overview.md
  42. +1 −1 v2.1/reference/tools/data-migration/features/overview.md
  43. +1 −1 v2.1/reference/tools/data-migration/manage-tasks.md
  44. +1 −1 v2.1/reference/tools/data-migration/overview.md
  45. +1 −1 v2.1/reference/tools/data-migration/skip-replace-sqls.md
  46. +1 −1 v2.1/reference/tools/data-migration/usage-scenarios/shard-merge.md
  47. +2 −2 v2.1/reference/tools/data-migration/usage-scenarios/simple-replication.md
  48. +1 −1 v2.1/reference/tools/syncer.md
  49. +1 −1 v2.1/reference/tools/tidb-lightning/table-filter.md
  50. +5 −5 v2.1/releases/2.1ga.md
  51. +1 −1 v2.1/tispark/tispark-user-guide_v1.x.md
  52. +4 −4 v3.0/faq/tidb.md
  53. +2 −2 v3.0/how-to/deploy/orchestrated/offline-ansible.md
  54. +3 −3 v3.0/how-to/secure/enable-tls-clients.md
  55. +10 −10 v3.0/reference/alert-rules.md
  56. +1 −1 v3.0/reference/configuration/tidb-server/tidb-specific-variables.md
  57. +2 −2 v3.0/reference/garbage-collection/configuration.md
  58. +1 −1 v3.0/reference/garbage-collection/overview.md
  59. +3 −3 v3.0/reference/mysql-compatibility.md
  60. +7 −7 v3.0/reference/performance/check-cluster-status-using-sql-statements.md
  61. +2 −2 v3.0/reference/performance/understanding-the-query-execution-plan.md
  62. +1 −1 v3.0/reference/sql/statements/create-index.md
  63. +1 −1 v3.0/reference/sql/statements/create-table.md
  64. +1 −1 v3.0/reference/system-databases/information-schema.md
  65. +1 −1 v3.0/reference/tispark.md
  66. +1 −1 v3.0/reference/tools/data-migration/configure/overview.md
  67. +1 −1 v3.0/reference/tools/data-migration/features/overview.md
  68. +1 −1 v3.0/reference/tools/data-migration/manage-tasks.md
  69. +2 −2 v3.0/reference/tools/data-migration/overview.md
  70. +1 −1 v3.0/reference/tools/data-migration/skip-replace-sqls.md
  71. +1 −1 v3.0/reference/tools/data-migration/usage-scenarios/shard-merge.md
  72. +2 −2 v3.0/reference/tools/data-migration/usage-scenarios/simple-replication.md
  73. +1 −1 v3.0/reference/tools/syncer.md
  74. +1 −1 v3.0/reference/tools/tidb-lightning/table-filter.md
  75. +5 −5 v3.0/releases/2.1ga.md
  76. +1 −1 v3.0/tidb-in-kubernetes/deploy/alibaba-cloud.md
  77. +1 −1 v3.0/tidb-in-kubernetes/deploy/aws-eks.md
  78. +1 −1 v3.0/tidb-in-kubernetes/deploy/gcp-gke.md
  79. +1 −1 v3.0/tidb-in-kubernetes/faq.md
  80. +4 −4 v3.0/tidb-in-kubernetes/reference/configuration/tidb-cluster.md
  81. +1 −1 v3.0/tidb-in-kubernetes/reference/configuration/tidb-drainer.md
@@ -369,7 +369,7 @@ Similar to MySQL, TiDB includes static and solid parameters. You can directly mo

#### Where and what are the data directories in TiDB (TiKV)?

-TiKV data is located in the [`--data-dir`](/dev/reference/configuration/tikv-server/configuration.md#data-dir), which includes four directories (backup, db, raft, and snap) that store backups, data, Raft data, and snapshot (mirror) data respectively.
+TiKV data is located in the [`--data-dir`](/dev/reference/configuration/tikv-server/configuration.md#--data-dir), which includes four directories (backup, db, raft, and snap) that store backups, data, Raft data, and snapshot (mirror) data respectively.

#### What are the system tables in TiDB?

@@ -544,7 +544,7 @@ When TiDB is executing a SQL statement, the query will be `EXPENSIVE_QUERY` if e

#### How to control or change the execution priority of SQL commits?

-TiDB supports changing the priority on a [per-session](/dev/reference/configuration/tidb-server/tidb-specific-variables.md#tidb-force-priority), [global](/dev/reference/configuration/tidb-server/server-command-option.md#force-priority) or individual statement basis. Priority has the following meaning:
+TiDB supports changing the priority on a [per-session](/dev/reference/configuration/tidb-server/tidb-specific-variables.md#tidb_force_priority), [global](/dev/reference/configuration/tidb-server/server-command-option.md#force-priority) or individual statement basis. Priority has the following meaning:

- `HIGH_PRIORITY`: this statement has a high priority, that is, TiDB gives priority to this statement and executes it first.
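
For illustration, the per-session and per-statement mechanisms might be used as follows (a sketch; `t` and its `id` column are hypothetical):

```sql
-- Per-session: give all statements in this session high priority
-- via the tidb_force_priority variable linked above.
SET @@session.tidb_force_priority = 'HIGH_PRIORITY';

-- Per-statement: raise the priority of a single query.
SELECT HIGH_PRIORITY * FROM t WHERE id = 1;
```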

@@ -961,7 +961,7 @@ See [Introduction to Statistics](/dev/reference/performance/statistics.md).

#### How to optimize `select count(1)`?

-The `count(1)` statement counts the total number of rows in a table. Improving the degree of concurrency can significantly improve the speed. To modify the concurrency, refer to the [document](/dev/reference/configuration/tidb-server/tidb-specific-variables.md#tidb-distsql-scan-concurrency). But it also depends on the CPU and I/O resources. TiDB accesses TiKV in every query. When the amount of data is small, all of MySQL's data is in memory, while TiDB still needs to perform a network access.
+The `count(1)` statement counts the total number of rows in a table. Improving the degree of concurrency can significantly improve the speed. To modify the concurrency, refer to the [document](/dev/reference/configuration/tidb-server/tidb-specific-variables.md#tidb_distsql_scan_concurrency). But it also depends on the CPU and I/O resources. TiDB accesses TiKV in every query. When the amount of data is small, all of MySQL's data is in memory, while TiDB still needs to perform a network access.
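
For example, a minimal sketch of adjusting the concurrency before the count (the value 30 is illustrative; `t` is a hypothetical table):

```sql
-- Raise distsql scan concurrency for this session, then run the count.
SET @@session.tidb_distsql_scan_concurrency = 30;
SELECT COUNT(1) FROM t;
```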

Recommendations:

@@ -1026,7 +1026,7 @@ See [The TiDB Command Options](/dev/reference/configuration/tidb-server/configur

#### How to scatter the hotspots?

-In TiDB, data is divided into Regions for management. Generally, the TiDB hotspot means the Read/Write hotspot in a Region. In TiDB, for the table whose primary key (PK) is not an integer or which has no PK, you can properly break Regions by configuring `SHARD_ROW_ID_BITS` to scatter the Region hotspots. For details, see the introduction of `SHARD_ROW_ID_BITS` in [TiDB Specific System Variables and Syntax](/dev/reference/configuration/tidb-server/tidb-specific-variables.md#shard-row-id-bits).
+In TiDB, data is divided into Regions for management. Generally, the TiDB hotspot means the Read/Write hotspot in a Region. In TiDB, for the table whose primary key (PK) is not an integer or which has no PK, you can properly break Regions by configuring `SHARD_ROW_ID_BITS` to scatter the Region hotspots. For details, see the introduction of `SHARD_ROW_ID_BITS` in [TiDB Specific System Variables and Syntax](/dev/reference/configuration/tidb-server/tidb-specific-variables.md#shard_row_id_bits).
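
For example, a minimal sketch of such a table definition (the shard value 4, giving 2^4 = 16 shards, is illustrative; `t` is a hypothetical table):

```sql
-- Scatter implicit row IDs for a table without an integer primary key.
CREATE TABLE t (a VARCHAR(16), b INT) SHARD_ROW_ID_BITS = 4;
```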

### TiKV

@@ -145,7 +145,7 @@ See [Configure the SSH mutual trust and sudo rules on the Control Machine](/dev/

See [Install the NTP service on the target machines](/dev/how-to/deploy/orchestrated/ansible.md#step-6-install-the-ntp-service-on-the-target-machines).

-> **Note:** If the time and time zone of all your target machines are the same, and the NTP service is on and normally synchronizing time, you can skip this step. See [How to check whether the NTP service is normal](#how-to-check-whether-the-ntp-service-is-normal).
+> **Note:** If the time and time zone of all your target machines are the same, and the NTP service is on and normally synchronizing time, you can skip this step. See [How to check whether the NTP service is normal](/dev/how-to/deploy/orchestrated/ansible.md#how-to-check-whether-the-ntp-service-is-normal).

## Step 7: Configure the CPUfreq governor mode on the target machine

@@ -157,7 +157,7 @@ See [Mount the data disk ext4 filesystem with options on the target machines](/d

## Step 9: Edit the `inventory.ini` file to orchestrate the TiDB cluster

-See [Edit the `inventory.ini` file to orchestrate the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible.md#step-9-edit-the-inventory-ini-file-to-orchestrate-the-tidb-cluster).
+See [Edit the `inventory.ini` file to orchestrate the TiDB cluster](/dev/how-to/deploy/orchestrated/ansible.md#step-9-edit-the-inventoryini-file-to-orchestrate-the-tidb-cluster).

## Step 10: Deploy the TiDB cluster

@@ -27,9 +27,9 @@ In short, to use encrypted connections, both of the following conditions must be

See the following descriptions of the parameters related to enabling encrypted connections:

-- [`ssl-cert`](/dev/reference/configuration/tidb-server/configuration.md#ssl-cert): specifies the file path of the SSL certificate
-- [`ssl-key`](/dev/reference/configuration/tidb-server/configuration.md#ssl-key): specifies the private key that matches the certificate
-- [`ssl-ca`](/dev/reference/configuration/tidb-server/configuration.md#ssl-ca): (optional) specifies the file path of the trusted CA certificate
+- [`ssl-cert`](/dev/reference/configuration/tidb-server/configuration-file.md#ssl-cert): specifies the file path of the SSL certificate
+- [`ssl-key`](/dev/reference/configuration/tidb-server/configuration-file.md#ssl-key): specifies the private key that matches the certificate
+- [`ssl-ca`](/dev/reference/configuration/tidb-server/configuration-file.md#ssl-ca): (optional) specifies the file path of the trusted CA certificate

To enable encrypted connections in the TiDB server, you must specify both of the `ssl-cert` and `ssl-key` parameters in the configuration file when you start the TiDB server. You can also specify the `ssl-ca` parameter for client authentication (see [Enable authentication](#enable-authentication)).
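
Once the server is restarted with these parameters, a connected client can check whether its current connection is actually encrypted; a sketch, where an empty `Ssl_cipher` value indicates an unencrypted connection:

```sql
-- Run from a client session after connecting to tidb-server.
SHOW STATUS LIKE 'Ssl_cipher';
```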

@@ -60,7 +60,7 @@ Emergency-level alerts are often caused by a service or node failure. Manual int

* Solution:

-Same as [`TiDB_schema_error`](#tidb-schema-error).
+Same as [`TiDB_schema_error`](#tidb_schema_error).

#### `TiDB_monitor_keep_alive`

@@ -463,7 +463,7 @@ For the critical-level alerts, a close watch on the abnormal metrics is required

1. Check whether the network is clear.
2. Check whether the remote TiKV is down.
-3. If the remote TiKV is not down, check whether the pressure is too high. Refer to the solution in [`TiKV_channel_full_total`](#tikv-channel-full-total).
+3. If the remote TiKV is not down, check whether the pressure is too high. Refer to the solution in [`TiKV_channel_full_total`](#tikv_channel_full_total).

#### `TiKV_channel_full_total`

@@ -519,7 +519,7 @@ For the critical-level alerts, a close watch on the abnormal metrics is required

* Solution:

-Refer to the solution in [`TiKV_channel_full_total`](#tikv-channel-full-total).
+Refer to the solution in [`TiKV_channel_full_total`](#tikv_channel_full_total).

#### `TiKV_async_request_write_duration_seconds`

@@ -533,7 +533,7 @@ For the critical-level alerts, a close watch on the abnormal metrics is required

* Solution:

-1. Check the pressure on Raftstore. See the solution in [`TiKV_channel_full_total`](#tikv-channel-full-total).
+1. Check the pressure on Raftstore. See the solution in [`TiKV_channel_full_total`](#tikv_channel_full_total).
2. Check the pressure on the apply worker thread.

#### `TiKV_coprocessor_request_wait_seconds`
@@ -564,7 +564,7 @@ For the critical-level alerts, a close watch on the abnormal metrics is required

* Solution:

-Refer to the solution in [`TiKV_channel_full_total`](#tikv-channel-full-total).
+Refer to the solution in [`TiKV_channel_full_total`](#tikv_channel_full_total).

#### `TiKV_raft_append_log_duration_secs`

@@ -643,7 +643,7 @@ Warning-level alerts are a reminder for an issue or error.

* Solution:

-1. Refer to [`TiKV_channel_full_total`](#tikv-channel-full-total).
+1. Refer to [`TiKV_channel_full_total`](#tikv_channel_full_total).
2. If there is low pressure on TiKV, consider whether PD scheduling is too frequent. You can view the Operator Create panel on the PD page and check the types and number of PD scheduling operations.

#### `TiKV_raft_process_ready_duration_secs`
@@ -683,7 +683,7 @@ Warning-level alerts are a reminder for an issue or error.

* Solution:

-Refer to [`TiKV_scheduler_latch_wait_duration_seconds`](#tikv-scheduler-latch-wait-duration-seconds).
+Refer to [`TiKV_scheduler_latch_wait_duration_seconds`](#tikv_scheduler_latch_wait_duration_seconds).

#### `TiKV_scheduler_command_duration_seconds`

@@ -697,7 +697,7 @@ Warning-level alerts are a reminder for an issue or error.

* Solution:

-Refer to [`TiKV_scheduler_latch_wait_duration_seconds`](#tikv-scheduler-latch-wait-duration-seconds).
+Refer to [`TiKV_scheduler_latch_wait_duration_seconds`](#tikv_scheduler_latch_wait_duration_seconds).

#### `TiKV_coprocessor_outdated_request_wait_seconds`

@@ -711,7 +711,7 @@ Warning-level alerts are a reminder for an issue or error.

* Solution:

-Refer to [`TiKV_coprocessor_request_wait_seconds`](#tikv-coprocessor-request-wait-seconds).
+Refer to [`TiKV_coprocessor_request_wait_seconds`](#tikv_coprocessor_request_wait_seconds).

#### `TiKV_coprocessor_request_error`

@@ -753,7 +753,7 @@ Warning-level alerts are a reminder for an issue or error.

* Solution:

-Refer to [`TiKV_coprocessor_request_wait_seconds`](#tikv-coprocessor-request-wait-seconds).
+Refer to [`TiKV_coprocessor_request_wait_seconds`](#tikv_coprocessor_request_wait_seconds).

#### `TiKV_batch_request_snapshot_nums`

@@ -283,7 +283,7 @@ set @@global.tidb_distsql_scan_concurrency = 10

This variable does not affect automatically committed implicit transactions and internally executed transactions in TiDB. The maximum retry count of these transactions is determined by the value of `tidb_retry_limit`.

-To decide whether you can enable automatic retry, see [description of optimistic transactions](/dev/reference/transactions/transaction-isolation.md#description-of-optimistic-transactions).
+To decide whether you can enable automatic retry, see [description of optimistic transactions](/dev/reference/transactions/transaction-isolation.md#transaction-retry).
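
For example, a sketch of adjusting that retry limit for the current session (the value 20 is illustrative):

```sql
-- Cap automatic retries of optimistic transactions in this session.
SET @@session.tidb_retry_limit = 20;
```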

### tidb_backoff_weight

@@ -81,9 +81,9 @@ update mysql.tidb set VARIABLE_VALUE="24h" where VARIABLE_NAME="tikv_gc_life_tim
When `tikv_gc_mode` is set to `"distributed"`, GC concurrency works in the [Resolve Locks](/dev/reference/garbage-collection/overview.md#resolve-locks) step. When `tikv_gc_mode` is set to `"central"`, it is applied to both the Resolve Locks and [Do GC](/dev/reference/garbage-collection/overview.md#do-gc) steps.

- `true`(default): Automatically use the number of TiKV nodes in the cluster as the GC concurrency
-- `false`: Use the value of [`tikv_gc_concurrency`](#tikv-gc-concurrency) as the GC concurrency
+- `false`: Use the value of [`tikv_gc_concurrency`](#tikv_gc_concurrency) as the GC concurrency

## `tikv_gc_concurrency`

-- Specifies the GC concurrency manually. This parameter works only when you set [`tikv_gc_auto_concurrency`](#tikv-gc-auto-concurrency) to `false`.
+- Specifies the GC concurrency manually. This parameter works only when you set [`tikv_gc_auto_concurrency`](#tikv_gc_auto_concurrency) to `false`.
- Default: 2
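
For example, a sketch of setting a manual concurrency, following the `mysql.tidb` update pattern shown in the hunk header above (the value 4 is illustrative):

```sql
-- Disable automatic concurrency, then pin the GC concurrency to 4.
UPDATE mysql.tidb SET VARIABLE_VALUE = 'false' WHERE VARIABLE_NAME = 'tikv_gc_auto_concurrency';
UPDATE mysql.tidb SET VARIABLE_VALUE = '4' WHERE VARIABLE_NAME = 'tikv_gc_concurrency';
```
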
@@ -25,7 +25,7 @@ The TiDB transaction model is implemented based on [Google's Percolator](https:/
The Resolve Locks step rolls back or commits the locks before the safe point, depending on whether their primary key has been committed or not. If the primary key is also retained, the transaction times out and is rolled back.
This step is required: once GC has cleared the write record of the primary lock, you can never know whether the transaction was successful, so if the transaction contains retained secondary keys, there is no way to decide whether they should be rolled back or committed, and data consistency can no longer be guaranteed.

-In the Resolve Lock step, the GC leader processes requests from all Regions. From TiDB 3.0, this process runs concurrently by default, with the default concurrency identical to the number of TiKV nodes in the cluster. For more details on how to configure, see [GC Configuration](/dev/reference/garbage-collection/configuration.md#tikv-gc-auto-concurrency).
+In the Resolve Lock step, the GC leader processes requests from all Regions. From TiDB 3.0, this process runs concurrently by default, with the default concurrency identical to the number of TiKV nodes in the cluster. For more details on how to configure, see [GC Configuration](/dev/reference/garbage-collection/configuration.md#tikv_gc_auto_concurrency).

### Delete Ranges

@@ -69,15 +69,15 @@ The operations are executed as follows:

### Performance schema

-Performance schema tables return empty results in TiDB. TiDB uses a combination of [Prometheus and Grafana](/dev/how-to/monitor/monitor-a-cluster.md#use-prometheus-and-grafana) for performance metrics instead.
+Performance schema tables return empty results in TiDB. TiDB uses a combination of [Prometheus and Grafana](/dev/how-to/monitor/monitor-a-cluster.md) for performance metrics instead.

### Query Execution Plan

The output format of the query execution plan (`EXPLAIN`/`EXPLAIN FOR`) in TiDB differs greatly from that in MySQL. In addition, the output content and the privilege settings of `EXPLAIN FOR` are not the same as in MySQL. See [Understand the Query Execution Plan](/dev/reference/performance/understanding-the-query-execution-plan.md) for more details.

### Built-in functions

-TiDB supports most of the MySQL built-in functions, but not all. See [TiDB SQL Grammar](https://pingcap.github.io/sqlgram/#FunctionCallKeyword) for the supported functions.
+TiDB supports most of the MySQL built-in functions, but not all. See [TiDB SQL Grammar](https://pingcap.github.io/sqlgram/#functioncallkeyword) for the supported functions.
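
For example, a few common MySQL built-ins that work unchanged (a sketch):

```sql
-- String, concatenation, and date-time built-ins behave as in MySQL.
SELECT UPPER('tidb'), CONCAT('Ti', 'KV'), NOW();
```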

### DDL

@@ -125,7 +125,7 @@ Create Table: CREATE TABLE `t1` (
1 row in set (0.00 sec)
```

-Architecturally, TiDB does support a similar storage engine abstraction to MySQL, and user tables are created in the engine specified by the [`--store`](/dev/reference/configuration/tidb-server/configuration.md#store) option used when you start tidb-server (typically `tikv`).
+Architecturally, TiDB does support a similar storage engine abstraction to MySQL, and user tables are created in the engine specified by the [`--store`](/dev/reference/configuration/tidb-server/configuration.md#--store) option used when you start tidb-server (typically `tikv`).

### SQL modes

@@ -11,12 +11,12 @@ TiDB offers some SQL statements and system tables to check the TiDB cluster stat
The `INFORMATION_SCHEMA` system database offers system tables as follows to query the cluster status and diagnose common cluster issues:

- [`TABLES`](/dev/reference/system-databases/information-schema.md#tables-table)
-- [`TIDB_INDEXES`](/dev/reference/system-databases/information-schema.md#tidb-indexes-table)
-- [`ANALYZE_STATUS`](/dev/reference/system-databases/information-schema.md#analyze-status-table)
-- [`TIDB_HOT_REGIONS`](/dev/reference/system-databases/information-schema.md#tidb-hot-regions-table)
-- [`TIKV_STORE_STATUS`](/dev/reference/system-databases/information-schema.md#tikv-store-status-table)
-- [`TIKV_REGION_STATUS`](/dev/reference/system-databases/information-schema.md#tikv-region-status-table)
-- [`TIKV_REGION_PEERS`](/dev/reference/system-databases/information-schema.md#tikv-region-peers-table)
+- [`TIDB_INDEXES`](/dev/reference/system-databases/information-schema.md#tidb_indexes-table)
+- [`ANALYZE_STATUS`](/dev/reference/system-databases/information-schema.md#analyze_status-table)
+- [`TIDB_HOT_REGIONS`](/dev/reference/system-databases/information-schema.md#tidb_hot_regions-table)
+- [`TIKV_STORE_STATUS`](/dev/reference/system-databases/information-schema.md#tikv_store_status-table)
+- [`TIKV_REGION_STATUS`](/dev/reference/system-databases/information-schema.md#tikv_region_status-table)
+- [`TIKV_REGION_PEERS`](/dev/reference/system-databases/information-schema.md#tikv_region_peers-table)

You can also use the following statements to obtain some useful information for troubleshooting and querying the TiDB cluster status.
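
For example, a sketch of querying two of the system tables listed above:

```sql
-- Inspect TiKV store status and the current hot Regions.
SELECT * FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS;
SELECT * FROM INFORMATION_SCHEMA.TIDB_HOT_REGIONS;
```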

@@ -18,7 +18,7 @@ The result of the `EXPLAIN` statement provides information about how TiDB execut

The results of `EXPLAIN` shed light on how to index the data tables so that the execution plan can use the index to speed up the execution of SQL statements. You can also use `EXPLAIN` to check if the optimizer chooses the optimal order to join tables.

-## <span id="explain-output-format">`EXPLAIN` output format</span>
+## `EXPLAIN` output format

Currently, the `EXPLAIN` statement returns four columns: id, count, task, and operator info. Each operator in the execution plan is described by these four properties, and each row in the results returned by `EXPLAIN` describes one operator. See the following table for details:

@@ -68,7 +68,7 @@ mysql> EXPLAIN SELECT count(*) FROM trips WHERE start_date BETWEEN '2017-07-01 0

In the revisited `EXPLAIN`, you can see that the count of rows scanned has been reduced by using an index. On a reference system, the query execution time dropped from 50.41 seconds to 0.00 seconds!
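
For reference, a sketch of the statement being revisited; the date bounds are illustrative, since the original values are truncated in the hunk header above:

```sql
-- With an index on start_date, this count scans far fewer rows.
EXPLAIN SELECT COUNT(*) FROM trips
WHERE start_date BETWEEN '2017-07-01 00:00:00' AND '2017-07-01 23:59:59';
```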

-## <span id="explain-analyze-output-format">`EXPLAIN ANALYZE` output format</span>
+## `EXPLAIN ANALYZE` output format

As an extension to `EXPLAIN`, `EXPLAIN ANALYZE` will execute the query and provide additional execution statistics in the `execution info` column as follows:
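
For example, a sketch of invoking it on the `trips` table used above (the predicate is illustrative):

```sql
-- Unlike EXPLAIN, this actually executes the query and reports
-- per-operator runtime statistics in the execution info column.
EXPLAIN ANALYZE SELECT COUNT(*) FROM trips WHERE start_date >= '2017-07-01 00:00:00';
```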

@@ -83,7 +83,7 @@ Query OK, 0 rows affected (0.31 sec)

## Associated session variables

-The global variables associated with the `CREATE INDEX` statement are `tidb_ddl_reorg_worker_cnt`, `tidb_ddl_reorg_batch_size` and `tidb_ddl_reorg_priority`. Refer to [TiDB-specific system variables](/dev/reference/configuration/tidb-server/tidb-specific-variables.md#tidb-ddl-reorg-worker-cnt) for details.
+The global variables associated with the `CREATE INDEX` statement are `tidb_ddl_reorg_worker_cnt`, `tidb_ddl_reorg_batch_size` and `tidb_ddl_reorg_priority`. Refer to [TiDB-specific system variables](/dev/reference/configuration/tidb-server/tidb-specific-variables.md#tidb_ddl_reorg_worker_cnt) for details.
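
For example, a sketch of adjusting these variables globally; all values are illustrative, including the `PRIORITY_HIGH` setting, which is assumed here:

```sql
-- Tune the concurrency, batch size, and priority of online index builds.
SET @@global.tidb_ddl_reorg_worker_cnt = 8;
SET @@global.tidb_ddl_reorg_batch_size = 1024;
SET @@global.tidb_ddl_reorg_priority = 'PRIORITY_HIGH';
```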

## MySQL compatibility

@@ -206,7 +206,7 @@ table_option:
| STATS_PERSISTENT [=] {DEFAULT|0|1}
```

-The `table_option` currently only supports `AUTO_INCREMENT`, `SHARD_ROW_ID_BITS` (see [TiDB Specific System Variables](/dev/reference/configuration/tidb-server/tidb-specific-variables.md#shard-row-id-bits) for details), `PRE_SPLIT_REGIONS`, `CHARACTER SET`, `COLLATE`, and `COMMENT`, while the others are only supported in syntax. The clauses are separated by a comma `,`. See the following table for details:
+The `table_option` currently only supports `AUTO_INCREMENT`, `SHARD_ROW_ID_BITS` (see [TiDB Specific System Variables](/dev/reference/configuration/tidb-server/tidb-specific-variables.md#shard_row_id_bits) for details), `PRE_SPLIT_REGIONS`, `CHARACTER SET`, `COLLATE`, and `COMMENT`, while the others are only supported in syntax. The clauses are separated by a comma `,`. See the following table for details:

| Parameters | Description | Example |
| ---------- | ---------- | ------- |
