From b1b4b9a17e9e09dcb1ac6c1151c48604db7b3583 Mon Sep 17 00:00:00 2001 From: Ran Date: Fri, 31 Jul 2020 15:45:03 +0800 Subject: [PATCH 1/2] cherry pick #3489 to release-2.1 Signed-off-by: ti-srebot --- configure-time-zone.md | 2 +- faq/tidb-faq.md | 6 +- .../expressions-pushed-down.md | 147 ++++++++++++++++++ geo-redundancy-deployment.md | 10 +- key-features.md | 2 +- migrate-from-aurora-mysql-database.md | 12 +- online-deployment-using-ansible.md | 2 +- releases/release-3.0-ga.md | 2 +- releases/release-3.0.0-rc.3.md | 2 +- releases/release-3.0.4.md | 128 +++++++++++++++ sql-mode.md | 59 +++++++ sql-statements/sql-statement-recover-table.md | 107 +++++++++++++ syncer-overview.md | 4 +- tidb-binlog/tidb-binlog-faq.md | 4 +- 14 files changed, 464 insertions(+), 23 deletions(-) create mode 100644 functions-and-operators/expressions-pushed-down.md create mode 100644 releases/release-3.0.4.md create mode 100644 sql-mode.md create mode 100644 sql-statements/sql-statement-recover-table.md diff --git a/configure-time-zone.md b/configure-time-zone.md index 7803bef14fa1c..5637a0fbbbb6c 100644 --- a/configure-time-zone.md +++ b/configure-time-zone.md @@ -69,4 +69,4 @@ In this example, no matter how you adjust the value of the time zone, the value > **Note:** > > - Time zone is involved during the conversion of the value of Timestamp and Datetime, which is handled based on the current `time_zone` of the session. -> - For data migration, you need to pay special attention to the time zone setting of the master database and the slave database. +> - For data migration, you need to pay special attention to the time zone setting of the primary database and the secondary database. 
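The session-level conversion described in the note above can be illustrated with a short sketch (a hypothetical session; the displayed values assume the row was inserted while the session time zone was `+8:00`):

```sql
/* TIMESTAMP values are stored internally in UTC and converted for display
   based on the session time_zone, so the same stored value is rendered
   differently after the time zone changes. */
create table ts_demo (ts timestamp);
set time_zone = '+8:00';
insert into ts_demo values ('2020-07-31 15:45:03');
select ts from ts_demo;   -- 2020-07-31 15:45:03
set time_zone = '+0:00';
select ts from ts_demo;   -- 2020-07-31 07:45:03
```

This is why the primary and secondary databases in a migration must agree on the time zone setting: the same `TIMESTAMP` data is interpreted through each session's `time_zone` on both sides.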
diff --git a/faq/tidb-faq.md b/faq/tidb-faq.md index 2447f7b8f5edf..e7d372009ef13 100644 --- a/faq/tidb-faq.md +++ b/faq/tidb-faq.md @@ -641,9 +641,9 @@ This is because the address in the startup parameter has been registered in the To solve this problem, use the [`store delete`](https://github.com/pingcap/pd/tree/55db505e8f35e8ab4e00efd202beb27a8ecc40fb/tools/pd-ctl#store-delete--label--weight-store_id----jqquery-string) function to delete the previous store and then restart TiKV. -#### TiKV master and slave use the same compression algorithm, why the results are different? +#### TiKV leader replicas and follower replicas use the same compression algorithm. Why is the amount of disk space they occupy different? -Currently, some files of TiKV master have a higher compression rate, which depends on the underlying data distribution and RocksDB implementation. It is normal that the data size fluctuates occasionally. The underlying storage engine adjusts data as needed. +TiKV stores data in an LSM tree, in which each layer uses a different compression algorithm. If two replicas of the same data are located in different layers on two TiKV nodes, the two replicas might occupy different amounts of space. #### What are the features of TiKV block cache? @@ -747,7 +747,7 @@ At the beginning, many users tend to do a benchmark test or a comparison test be #### What's the relationship between the TiDB cluster capacity (QPS) and the number of nodes? How does TiDB compare to MySQL? - Within 10 nodes, the relationship between TiDB write capacity (Insert TPS) and the number of nodes is roughly 40% linear increase. Because MySQL uses single-node write, its write capacity cannot be scaled. -- In MySQL, the read capacity can be increased by adding slave, but the write capacity cannot be increased except using sharding, which has many problems. 
+- In MySQL, the read capacity can be increased by adding replicas, but the write capacity cannot be increased except by sharding, which has many problems. - In TiDB, both the read and write capacity can be easily increased by adding more nodes. #### The performance test of MySQL and TiDB by our DBA shows that the performance of a standalone TiDB is not as good as MySQL diff --git a/functions-and-operators/expressions-pushed-down.md b/functions-and-operators/expressions-pushed-down.md new file mode 100644 index 0000000000000..c46ae713d3e95 --- /dev/null +++ b/functions-and-operators/expressions-pushed-down.md @@ -0,0 +1,147 @@ +--- +title: List of Expressions for Pushdown +summary: Learn a list of expressions that can be pushed down to TiKV and the related operations. +aliases: ['/docs/v3.1/functions-and-operators/expressions-pushed-down/','/docs/v3.1/reference/sql/functions-and-operators/expressions-pushed-down/'] +--- + +# List of Expressions for Pushdown + +When TiDB reads data from TiKV, TiDB tries to push down some expressions (including calculations of functions or operators) to TiKV for processing. This reduces the amount of transferred data and offloads processing from a single TiDB node. This document introduces the expressions that TiDB already supports pushing down and how to prohibit specific expressions from being pushed down using the blocklist. + +## Supported expressions for pushdown + +| Expression Type | Operations | +| :-------------- | :------------------------------------- | +| [Logical operators](/functions-and-operators/operators.md#logical-operators) | AND (&&), OR (||), NOT (!) 
| +| [Comparison functions and operators](/functions-and-operators/operators.md#comparison-functions-and-operators) | `<`, `<=`, `=`, `!=` (`<>`), `>`, `>=`, [`<=>`](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_equal-to), [`IN()`](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_in), IS NULL, LIKE, IS TRUE, IS FALSE, [`COALESCE()`](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_coalesce) | +| [Numeric functions and operators](/functions-and-operators/numeric-functions-and-operators.md) | +, -, *, /, [`ABS()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_abs), [`CEIL()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_ceil), [`CEILING()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_ceiling), [`FLOOR()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_floor) | +| [Control flow functions](/functions-and-operators/control-flow-functions.md) | [`CASE`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#operator_case), [`IF()`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_if), [`IFNULL()`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_ifnull) | +| [JSON functions](/functions-and-operators/json-functions.md) | [JSON_TYPE(json_val)][json_type],
[JSON_EXTRACT(json_doc, path[, path] ...)][json_extract],
[JSON_UNQUOTE(json_val)][json_unquote],
[JSON_OBJECT(key, val[, key, val] ...)][json_object],
[JSON_ARRAY([val[, val] ...])][json_array],
[JSON_MERGE(json_doc, json_doc[, json_doc] ...)][json_merge],
[JSON_SET(json_doc, path, val[, path, val] ...)][json_set],
[JSON_INSERT(json_doc, path, val[, path, val] ...)][json_insert],
[JSON_REPLACE(json_doc, path, val[, path, val] ...)][json_replace],
[JSON_REMOVE(json_doc, path[, path] ...)][json_remove] | +| [Date and time functions](/functions-and-operators/date-and-time-functions.md) | [`DATE_FORMAT()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-format) | + +## Blocklist specific expressions + +If the pushdown of a function causes unexpected behavior during its calculation, you can quickly restore the application by blocklisting that function. Specifically, you can prohibit an expression from being pushed down by adding the corresponding function or operator to the blocklist `mysql.expr_pushdown_blacklist`. + +### Add to the blocklist + +To add one or more functions or operators to the blocklist, perform the following steps: + +1. Insert the function or operator name into `mysql.expr_pushdown_blacklist`. + +2. Execute the `admin reload expr_pushdown_blacklist;` command. + +### Remove from the blocklist + +To remove one or more functions or operators from the blocklist, perform the following steps: + +1. Delete the function or operator name from `mysql.expr_pushdown_blacklist`. + +2. Execute the `admin reload expr_pushdown_blacklist;` command. + +### Blocklist usage examples + +The following example demonstrates how to add the `<` and `>` operators to the blocklist, and then remove `>` from the blocklist. + +You can see whether the blocklist takes effect by checking the results returned by the `EXPLAIN` statement (see [Understanding `EXPLAIN` results](/query-execution-plan.md)). 
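Before or after reloading, you can also inspect what is currently blocklisted by querying the table directly (a minimal sketch; the exact columns of `mysql.expr_pushdown_blacklist` depend on the TiDB version):

```sql
tidb> select * from mysql.expr_pushdown_blacklist;
```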
+ +```sql +tidb> create table t(a int); +Query OK, 0 rows affected (0.01 sec) + +tidb> explain select * from t where a < 2 and a > 2; ++---------------------+----------+------+------------------------------------------------------------+ +| id | count | task | operator info | ++---------------------+----------+------+------------------------------------------------------------+ +| TableReader_7 | 0.00 | root | data:Selection_6 | +| └─Selection_6 | 0.00 | cop | gt(test.t.a, 2), lt(test.t.a, 2) | +| └─TableScan_5 | 10000.00 | cop | table:t, range:[-inf,+inf], keep order:false, stats:pseudo | ++---------------------+----------+------+------------------------------------------------------------+ +3 rows in set (0.00 sec) + +tidb> insert into mysql.expr_pushdown_blacklist values('<'), ('>'); +Query OK, 2 rows affected (0.00 sec) +Records: 2 Duplicates: 0 Warnings: 0 + +tidb> admin reload expr_pushdown_blacklist; +Query OK, 0 rows affected (0.00 sec) + +tidb> explain select * from t where a < 2 and a > 2; ++---------------------+----------+------+------------------------------------------------------------+ +| id | count | task | operator info | ++---------------------+----------+------+------------------------------------------------------------+ +| Selection_5 | 8000.00 | root | gt(test.t.a, 2), lt(test.t.a, 2) | +| └─TableReader_7 | 10000.00 | root | data:TableScan_6 | +| └─TableScan_6 | 10000.00 | cop | table:t, range:[-inf,+inf], keep order:false, stats:pseudo | ++---------------------+----------+------+------------------------------------------------------------+ +3 rows in set (0.00 sec) + +tidb> delete from mysql.expr_pushdown_blacklist where name = '>'; +Query OK, 1 row affected (0.00 sec) + +tidb> admin reload expr_pushdown_blacklist; +Query OK, 0 rows affected (0.00 sec) + +tidb> explain select * from t where a < 2 and a > 2; ++-----------------------+----------+------+------------------------------------------------------------+ +| id | count | task | 
operator info | ++-----------------------+----------+------+------------------------------------------------------------+ +| Selection_5 | 2666.67 | root | lt(test.t.a, 2) | +| └─TableReader_8 | 3333.33 | root | data:Selection_7 | +| └─Selection_7 | 3333.33 | cop | gt(test.t.a, 2) | +| └─TableScan_6 | 10000.00 | cop | table:t, range:[-inf,+inf], keep order:false, stats:pseudo | ++-----------------------+----------+------+------------------------------------------------------------+ +4 rows in set (0.00 sec) +``` + +> **Note:** +> +> - `admin reload expr_pushdown_blacklist` only takes effect on the TiDB server that executes this SQL statement. To make it apply to all TiDB servers, execute the SQL statement on each TiDB server. +> - The feature of blocklisting specific expressions is supported in TiDB 3.0.0 or later versions. +> - TiDB 3.0.3 or earlier versions do not support adding some of the operators (such as ">", "+", "is null") to the blocklist by using their original names. You need to use their aliases (case-sensitive) instead, as shown in the following table: + +| Operator Name | Aliases | +| :-------- | :---------- | +| `<` | lt | +| `>` | gt | +| `<=` | le | +| `>=` | ge | +| `=` | eq | +| `!=` | ne | +| `<>` | ne | +| `<=>` | nulleq | +| \| | bitor | +| && | bitand | +| \|\| | or | +| ! 
| not | +| in | in | +| + | plus| +| - | minus | +| * | mul | +| / | div | +| DIV | intdiv| +| IS NULL | isnull | +| IS TRUE | istrue | +| IS FALSE | isfalse | + +[json_extract]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-extract +[json_short_extract]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#operator_json-column-path +[json_short_extract_unquote]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#operator_json-inline-path +[json_unquote]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-unquote +[json_type]: https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-type +[json_set]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-set +[json_insert]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-insert +[json_replace]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-replace +[json_remove]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-remove +[json_merge]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-merge +[json_merge_preserve]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-merge-preserve +[json_object]: https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-object +[json_array]: https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-array +[json_keys]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-keys +[json_length]: https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-length +[json_valid]: https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-valid +[json_quote]: 
https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-quote +[json_contains]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-contains +[json_contains_path]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-contains-path +[json_arrayagg]: https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_json-arrayagg +[json_depth]: https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-depth diff --git a/geo-redundancy-deployment.md b/geo-redundancy-deployment.md index 4c757cfccc147..60905f49bfe78 100644 --- a/geo-redundancy-deployment.md +++ b/geo-redundancy-deployment.md @@ -49,23 +49,23 @@ However, the disadvantage is that if the 2 DCs within the same city goes down, w ## 2-DC + Binlog replication deployment solution -The 2-DC + Binlog replication is similar to the MySQL Master-Slave solution. 2 complete sets of TiDB clusters (each complete set of the TiDB cluster includes TiDB, PD and TiKV) are deployed in 2 DCs, one acts as the Master and one as the Slave. Under normal circumstances, the Master DC handles all the requests and the data written to the Master DC is asynchronously written to the Slave DC via Binlog. +The 2-DC + Binlog replication is similar to the MySQL Source-Replica solution. 2 complete sets of TiDB clusters (each complete set of the TiDB cluster includes TiDB, PD and TiKV) are deployed in 2 DCs, one acts as the primary and one as the secondary. Under normal circumstances, the primary DC handles all the requests and the data written to the primary DC is asynchronously written to the secondary DC via Binlog. ![Data Replication in 2-DC in 2 Cities Deployment](/media/deploy-binlog.png) -If the Master DC goes down, the requests can be switched to the slave cluster. Similar to MySQL, some data might be lost. 
But different from MySQL, this solution can ensure the high availability within the same DC: if some nodes within the DC are down, the online workloads won’t be impacted and no manual efforts are needed because the cluster will automatically re-elect leaders to provide services. +If the primary DC goes down, the requests can be switched to the secondary cluster. Similar to MySQL, some data might be lost. But different from MySQL, this solution can ensure the high availability within the same DC: if some nodes within the DC are down, the online workloads won’t be impacted and no manual efforts are needed because the cluster will automatically re-elect leaders to provide services. ![2-DC as a Mutual Backup Deployment](/media/deploy-backup.png) Some of our production users also adopt the 2-DC multi-active solution, which means: 1. The application requests are separated and dispatched into 2 DCs. -2. Each DC has 1 cluster and each cluster has two databases: A Master database to serve part of the application requests and a Slave database to act as the backup of the other DC’s Master database. Data written into the Master database is replicated via Binlog to the Slave database in the other DC, forming a loop of backup. +2. Each DC has 1 cluster and each cluster has two databases: A primary database to serve part of the application requests and a secondary database to act as the backup of the other DC’s primary database. Data written into the primary database is replicated via Binlog to the secondary database in the other DC, forming a loop of backup. -Please be noted that for the 2-DC + Binlog replication solution, data is asynchronously replicated via Binlog. If the network latency between 2 DCs is too high, the data in the Slave cluster will fall much behind of the Master cluster. If the Master cluster goes down, some data will be lost and it cannot be guaranteed the lost data is within 5 minutes. 
+Please note that for the 2-DC + Binlog replication solution, data is asynchronously replicated via Binlog. If the network latency between 2 DCs is too high, the data in the secondary cluster will fall far behind the primary cluster. If the primary cluster goes down, some data will be lost and it cannot be guaranteed the lost data is within 5 minutes. ## Overall analysis for HA and DR For the 3-DC deployment solution and 3-DC in 2 cities solution, we can guarantee that the cluster will automatically recover, no human interference is needed and that the data is strongly consistent even if any one of the 3 DCs goes down. All the scheduling policies are to tune the performance, but availability is the top 1 priority instead of performance in case of an outage. -For 2-DC + Binlog replication solution, we can guarantee that the cluster will automatically recover, no human interference is needed and that the data is strongly consistent even if any some of the nodes within the Master cluster go down. When the entire Master cluster goes down, manual efforts will be needed to switch to the Slave and some data will be lost. The amount of the lost data depends on the network latency and is decided by the network condition. +For 2-DC + Binlog replication solution, we can guarantee that the cluster will automatically recover, no human interference is needed and that the data is strongly consistent even if some of the nodes within the primary cluster go down. When the entire primary cluster goes down, manual efforts will be needed to switch to the secondary and some data will be lost. The amount of the lost data depends on the network latency and is decided by the network condition. diff --git a/key-features.md index 67c62e06ad7a1..e02bdd5d122ce 100644 --- a/key-features.md +++ b/key-features.md @@ -52,7 +52,7 @@ Failure and self-healing operations are also transparent to applications. 
TiDB s The storage in TiKV is automatically rebalanced to match changes in your workload. For example, if part of your data is more frequently accessed, this hotspot will be detected and may trigger the data to be rebalanced among other TiKV servers. Chunks of data ("Regions" in TiDB terminology) will automatically be split or merged as needed. -This helps remove some of the headaches associated with maintaining a large database cluster and also leads to better utilization over traditional master-slave read-write splitting that is commonly used with MySQL deployments. +This helps remove some of the headaches associated with maintaining a large database cluster and also leads to better utilization over traditional source-replica read-write splitting that is commonly used with MySQL deployments. ## Deployment and orchestration with Ansible, Kubernetes, Docker diff --git a/migrate-from-aurora-mysql-database.md b/migrate-from-aurora-mysql-database.md index 90451b2491ab1..dd5f1769c16b8 100644 --- a/migrate-from-aurora-mysql-database.md +++ b/migrate-from-aurora-mysql-database.md @@ -106,20 +106,20 @@ mysql-instances: - # ID of the upstream instance or the replication group. Refer to the configuration of `source_id` in the `inventory.ini` file or configuration of `source-id` in the `dm-master.toml` file. source-id: "mysql-replica-01" - # The configuration item name of the black and white lists of the schema or table to be replicated, used to quote the global black and white lists configuration. For global configuration, see the `black-white-list` below. - black-white-list: "global" + # The configuration item name of the block and allow lists of the schema or table to be replicated, used to quote the global block and allow lists configuration. For global configuration, see the `block-allow-list` below. + block-allow-list: "global" # The configuration item name of Mydumper, used to quote the global Mydumper configuration. 
mydumper-config-name: "global" - source-id: "mysql-replica-02" - black-white-list: "global" + block-allow-list: "global" mydumper-config-name: "global" -# The global configuration of black and white lists. Each instance can quote it by the configuration item name. -black-white-list: +# The global configuration of block and allow lists. Each instance can quote it by the configuration item name. +block-allow-list: global: - do-tables: # The white list of the upstream table to be replicated + do-tables: # The allow list of the upstream table to be replicated - db-name: "test_db" # The database name of the table to be replicated tbl-name: "test_table" # The name of the table to be replicated diff --git a/online-deployment-using-ansible.md b/online-deployment-using-ansible.md index ecb1c72ccdf14..0f70f59405ef9 100644 --- a/online-deployment-using-ansible.md +++ b/online-deployment-using-ansible.md @@ -629,7 +629,7 @@ To enable the following control variables, use the capitalized `True`. To disabl | tidb_version | the version of TiDB, configured by default in TiDB Ansible branches | | process_supervision | the supervision way of processes, systemd by default, supervise optional | | timezone | the global default time zone configured when a new TiDB cluster bootstrap is initialized; you can edit it later using the global `time_zone` system variable and the session `time_zone` system variable as described in [Time Zone Support](/configure-time-zone.md); the default value is `Asia/Shanghai` and see [the list of time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) for more optional values | -| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network requirements](/hardware-and-software-requirements.md#network-requirements) to the white list | +| enable_firewalld | to enable the firewall, closed by default; to enable it, add the ports in [network 
requirements](/hardware-and-software-requirements.md#network-requirements) to the allowlist | | enable_ntpd | to monitor the NTP service of the managed node, True by default; do not close it | | set_hostname | to edit the hostname of the managed node based on the IP, False by default | | enable_binlog | whether to deploy Pump and enable the binlog, False by default, dependent on the Kafka cluster; see the `zookeeper_addrs` variable | diff --git a/releases/release-3.0-ga.md b/releases/release-3.0-ga.md index c6c0c612c9282..6d1d8e0de6d87 100644 --- a/releases/release-3.0-ga.md +++ b/releases/release-3.0-ga.md @@ -64,7 +64,7 @@ On June 28, 2019, TiDB 3.0 GA is released. The corresponding TiDB Ansible versio - Improve the performance of `admin show ddl jobs` by supporting scanning data in reverse order - Add the `split table region` statement to manually split the table Region to alleviate hotspot issues - Add the `split index region` statement to manually split the index Region to alleviate hotspot issues - - Add a blacklist to prohibit pushing down expressions to Coprocessor + - Add a blocklist to prohibit pushing down expressions to Coprocessor - Optimize the `Expensive Query` log to print the SQL query in the log when it exceeds the configured limit of execution time or memory + DDL - Support migrating from character set `utf8` to `utf8mb4` diff --git a/releases/release-3.0.0-rc.3.md b/releases/release-3.0.0-rc.3.md index f15ed6517f146..b81e92265b6e7 100644 --- a/releases/release-3.0.0-rc.3.md +++ b/releases/release-3.0.0-rc.3.md @@ -36,7 +36,7 @@ On June 21, 2019, TiDB 3.0.0-rc.3 is released. 
The corresponding TiDB Ansible ve - Add the `split table region` statement to manually split the table Region to alleviate the hotspot issue [#10765](https://github.com/pingcap/tidb/pull/10765) - Add the `split index region` statement to manually split the index Region to alleviate the hotspot issue [#10764](https://github.com/pingcap/tidb/pull/10764) - Fix the incorrect execution issue when you execute multiple statements such as `create user`, `grant`, or `revoke` consecutively [#10737](https://github.com/pingcap/tidb/pull/10737) - - Add a blacklist to prohibit pushing down expressions to Coprocessor [#10791](https://github.com/pingcap/tidb/pull/10791) + - Add a blocklist to prohibit pushing down expressions to Coprocessor [#10791](https://github.com/pingcap/tidb/pull/10791) - Add the feature of printing the `expensive query` log when a query exceeds the memory configuration limit [#10849](https://github.com/pingcap/tidb/pull/10849) - Add the `bind-info-lease` configuration item to control the update time of the modified binding execution plan [#10727](https://github.com/pingcap/tidb/pull/10727) - Fix the OOM issue in high concurrent scenarios caused by the failure to quickly release Coprocessor resources, resulted from the `execdetails.ExecDetails` pointer [#10832](https://github.com/pingcap/tidb/pull/10832) diff --git a/releases/release-3.0.4.md b/releases/release-3.0.4.md new file mode 100644 index 0000000000000..39354f1f80afb --- /dev/null +++ b/releases/release-3.0.4.md @@ -0,0 +1,128 @@ +--- +title: TiDB 3.0.4 Release Notes +aliases: ['/docs/v3.1/releases/release-3.0.4/','/docs/v3.1/releases/3.0.4/'] +--- + +# TiDB 3.0.4 Release Notes + +Release date: October 8, 2019 + +TiDB version: 3.0.4 + +TiDB Ansible version: 3.0.4 + +- New features + - Add the `performance_schema.events_statements_summary_by_digest` system table to troubleshoot performance issues at the SQL level + - Add the `WHERE` clause in TiDB’s `SHOW TABLE REGIONS` syntax + - Add the 
`worker-count` and `txn-batch` configuration items in Reparo to control the recovery speed +- Improvements + - Support batch Region split command and empty split command in TiKV to improve split performance + - Support double linked list for RocksDB in TiKV to improve performance of reverse scan + - Add two perf tools `iosnoop` and `funcslower` in TiDB Ansible to better diagnose the cluster state + - Optimize the output of slow query logs in TiDB by deleting redundant fields +- Changed behaviors + - Update the default value of `txn-local-latches.enable` to `false` to disable the default behavior of checking conflicts of local transactions in TiDB + - Add the `tidb_txn_mode` system variable of global scope in TiDB and allow using the pessimistic lock; note that TiDB still adopts the optimistic lock by default + - Replace the `Index_ids` field in TiDB slow query logs with `Index_names` to improve the usability of slow query logs + - Add the `split-region-max-num` parameter in the TiDB configuration file to modify the maximum number of Regions allowed in the `SPLIT TABLE` syntax + - Return the `Out Of Memory Quota` error instead of disconnecting the link when a SQL execution exceeds the memory limit + - Disallow dropping the `AUTO_INCREMENT` attribute of columns in TiDB to avoid misoperations. 
To drop this attribute, set the `tidb_allow_remove_auto_inc` system variable +- Fixed issues + - Fix the issue that the uncommented TiDB-specific syntax `PRE_SPLIT_REGIONS` might cause errors in the downstream database during data replication + - Fix the issue in TiDB that the slow query logs are incorrect when getting the result of `PREPARE` + `EXECUTE` by using the cursor + - Fix the issue in PD that adjacent small Regions cannot be merged + - Fix the issue in TiKV that a file descriptor leak in idle clusters might cause TiKV processes to exit abnormally when the processes run for a long time +- Contributors + + Our thanks go to the following contributors from the community for helping this release: + - [sduzh](https://github.com/sduzh) + - [lizhenda](https://github.com/lizhenda) + +## TiDB + +- SQL Optimizer + - Fix the issue that invalid query ranges might result when ranges are split by feedback [#12170](https://github.com/pingcap/tidb/pull/12170) + - Display the returned error of the `SHOW STATS_BUCKETS` statement in hexadecimal rather than return errors when the result contains invalid Keys [#12094](https://github.com/pingcap/tidb/pull/12094) + - Fix the issue that when a query contains the `SLEEP` function (for example, `select 1 from (select sleep(1)) t;`), column pruning causes invalid `sleep(1)` during query [#11953](https://github.com/pingcap/tidb/pull/11953) + - Use index scan to lower IO when a query only concerns the number of columns rather than the table data [#12112](https://github.com/pingcap/tidb/pull/12112) + - Do not use any index when no index is specified in `use index()` to be compatible with MySQL [#12100](https://github.com/pingcap/tidb/pull/12100) + - Strictly limit the number of `TopN` records in the `CMSketch` statistics to fix the issue that the `ANALYZE` statement fails because the statement count exceeds TiDB’s limit on the size of a transaction [#11914](https://github.com/pingcap/tidb/pull/11914) + - Fix the error that occurred when 
converting the subqueries contained in the `Update` statement [#12483](https://github.com/pingcap/tidb/pull/12483) + - Optimize execution performance of the `select ... limit ... offset ...` statement by pushing the Limit operator down to the `IndexLookUpReader` execution logic [#12378](https://github.com/pingcap/tidb/pull/12378) +- SQL Execution Engine + - Print the SQL statement in the log when the `PREPARED` statement is incorrectly executed [#12191](https://github.com/pingcap/tidb/pull/12191) + - Support partition pruning when the `UNIX_TIMESTAMP` function is used to implement partitioning [#12169](https://github.com/pingcap/tidb/pull/12169) + - Fix the issue that no error is reported when `AUTO_INCREMENT` incorrectly allocates `MAX int64` and `MAX uint64` [#12162](https://github.com/pingcap/tidb/pull/12162) + - Add the `WHERE` clause in the `SHOW TABLE … REGIONS` and `SHOW TABLE .. INDEX … REGIONS` syntaxes [#12123](https://github.com/pingcap/tidb/pull/12123) + - Return the `Out Of Memory Quota` error instead of disconnecting the link when a SQL execution exceeds the memory limit [#12127](https://github.com/pingcap/tidb/pull/12127) + - Fix the issue that an incorrect result is returned when the `JSON_UNQUOTE` function handles JSON text [#11955](https://github.com/pingcap/tidb/pull/11955) + - Fix the issue that `LAST INSERT ID` is incorrect when assigning values to the `AUTO_INCREMENT` column in the first row (for example, `insert into t (pk, c) values (1, 2), (NULL, 3)`) [#12002](https://github.com/pingcap/tidb/pull/12002) + - Fix the issue that the `GROUPBY` parsing rule is incorrect in the `PREPARE` statement [#12351](https://github.com/pingcap/tidb/pull/12351) + - Fix the issue that the privilege check is incorrect in point queries [#12340](https://github.com/pingcap/tidb/pull/12340) + - Fix the issue that the duration by `sql_type` for the `PREPARE` statement is not shown in the monitoring record [#12331](https://github.com/pingcap/tidb/pull/12331) + - 
Support using aliases for tables in the point queries (for example, `select * from t tmp where a = "aa"`) [#12282](https://github.com/pingcap/tidb/pull/12282) + - Fix the error that occurs because negative numbers inserted into `BIT` type columns are not handled as unsigned values [#12423](https://github.com/pingcap/tidb/pull/12423) + - Fix the incorrect rounding of time (for example, `2019-09-11 11:17:47.999999666` should be rounded to `2019-09-11 11:17:48`.) [#12258](https://github.com/pingcap/tidb/pull/12258) + - Refine the usage of the expression blocklist (for example, `<` is equivalent to `lt`.) [#11975](https://github.com/pingcap/tidb/pull/11975) + - Add the database prefix to the error message for non-existent functions (for example, `[expression:1305]FUNCTION test.std_samp does not exist`) [#12111](https://github.com/pingcap/tidb/pull/12111) +- Server + - Add the `Prev_stmt` field in slow query logs to output the previous statement when the last statement is `COMMIT` [#12180](https://github.com/pingcap/tidb/pull/12180) + - Optimize the output of slow query logs by deleting redundant fields [#12144](https://github.com/pingcap/tidb/pull/12144) + - Update the default value of `txn-local-latches.enable` to `false` to disable the default behavior of checking conflicts of local transactions in TiDB [#12095](https://github.com/pingcap/tidb/pull/12095) + - Replace the `Index_ids` field in TiDB slow query logs with `Index_names` to improve the usability of slow query logs [#12061](https://github.com/pingcap/tidb/pull/12061) + - Add the `tidb_txn_mode` system variable of global scope in TiDB and allow using the pessimistic lock [#12049](https://github.com/pingcap/tidb/pull/12049) + - Add the `Backoff` field in the slow query logs to record the Backoff information in the commit phase of 2PC [#12335](https://github.com/pingcap/tidb/pull/12335) + - Fix the issue that the slow query logs are incorrect when getting the result of `PREPARE` + `EXECUTE` by using the cursor (for 
example, `PREPARE stmt1 FROM SELECT * FROM t WHERE a > ?; EXECUTE stmt1 USING @variable`) [#12392](https://github.com/pingcap/tidb/pull/12392) + - Support `tidb_enable_stmt_summary`. When this feature is enabled, TiDB counts the SQL statements and the result can be queried by using the system table `performance_schema.events_statements_summary_by_digest` [#12308](https://github.com/pingcap/tidb/pull/12308) + - Adjust the level of some logs in tikv-client (for example, change the log level of `batchRecvLoop fails` from `ERROR` to `INFO`) [#12383](https://github.com/pingcap/tidb/pull/12383) +- DDL + - Add the `tidb_allow_remove_auto_inc` variable. Dropping the `AUTO_INCREMENT` attribute of the column is disabled by default [#12145](https://github.com/pingcap/tidb/pull/12145) + - Fix the issue that the uncommented TiDB-specific syntax `PRE_SPLIT_REGIONS` might cause errors in the downstream database during data replication [#12120](https://github.com/pingcap/tidb/pull/12120) + - Add the `split-region-max-num` variable in the configuration file so that the maximum allowable number of Regions is adjustable [#12097](https://github.com/pingcap/tidb/pull/12079) + - Support splitting a Region into multiple Regions and fix the timeout issue during Region scattering [#12343](https://github.com/pingcap/tidb/pull/12343) + - Fix the issue that the `drop index` statement fails when the index to drop contains an `AUTO_INCREMENT` column that is referenced by two indexes [#12344](https://github.com/pingcap/tidb/pull/12344) +- Monitor + - Add the `connection_transient_failure_count` monitoring metric to count the number of gRPC connection errors in `tikvclient` [#12093](https://github.com/pingcap/tidb/pull/12093) + +## TiKV + +- Raftstore + - Fix the issue that Raftstore inaccurately counts the number of keys in empty Regions [#5414](https://github.com/tikv/tikv/pull/5414) + - Support the doubly-linked list feature of RocksDB to improve the performance of reverse scan 
[#5368](https://github.com/tikv/tikv/pull/5368) + - Support batch Region split command and empty split command to improve split performance [#5470](https://github.com/tikv/tikv/pull/5470) +- Server + - Fix the issue that the output format of the `-V` command is not consistent with the format of 2.X [#5501](https://github.com/tikv/tikv/pull/5501) + - Upgrade Titan to the latest version in the 3.0 branch [#5517](https://github.com/tikv/tikv/pull/5517) + - Upgrade grpcio to v0.4.5 [#5523](https://github.com/tikv/tikv/pull/5523) + - Fix the issue of gRPC coredump and support shared memory to avoid OOM [#5524](https://github.com/tikv/tikv/pull/5524) + - Fix the issue in TiKV that file descriptor leak in idle clusters might cause TiKV processes to exit abnormally when the processes run for a long time [#5567](https://github.com/tikv/tikv/pull/5567) +- Storage + - Support the `txn_heart_beat` API to make the pessimistic lock in TiDB consistent with that in MySQL as much as possible [#5507](https://github.com/tikv/tikv/pull/5507) + - Fix the issue that the performance of point queries is low in some situations [#5495](https://github.com/tikv/tikv/pull/5495) [#5463](https://github.com/tikv/tikv/pull/5463) + +## PD + +- Fix the issue that adjacent small Regions cannot be merged [#1726](https://github.com/pingcap/pd/pull/1726) +- Fix the issue that the TLS enabling parameter in `pd-ctl` is invalid [#1738](https://github.com/pingcap/pd/pull/1738) +- Fix the thread-safety issue that the PD operator is accidentally removed [#1734](https://github.com/pingcap/pd/pull/1734) +- Support TLS for Region syncer [#1739](https://github.com/pingcap/pd/pull/1739) + +## Tools + +- TiDB Binlog + - Add the `worker-count` and `txn-batch` configuration items in Reparo to control the recovery speed [#746](https://github.com/pingcap/tidb-binlog/pull/746) + - Optimize the memory usage of Drainer to enhance the efficiency of simultaneous execution 
[#737](https://github.com/pingcap/tidb-binlog/pull/737) +- TiDB Lightning + - Fix the issue that re-importing data from checkpoint might cause TiDB Lightning to panic [#237](https://github.com/pingcap/tidb-lightning/pull/237) + - Optimize the algorithm of `AUTO_INCREMENT` to reduce the risk of overflowing `AUTO_INCREMENT` columns [#227](https://github.com/pingcap/tidb-lightning/pull/227) + +## TiDB Ansible + +- Upgrade TiSpark to v2.2.0 [#926](https://github.com/pingcap/tidb-ansible/pull/926) +- Update the default value of the TiDB configuration item `pessimistic_txn` to `true` [#933](https://github.com/pingcap/tidb-ansible/pull/933) +- Add more system-level monitoring metrics to `node_exporter` [#938](https://github.com/pingcap/tidb-ansible/pull/938) +- Add two perf tools `iosnoop` and `funcslower` in TiDB Ansible to better diagnose the cluster state [#946](https://github.com/pingcap/tidb-ansible/pull/946) +- Replace the raw module with the shell module to avoid long waits in situations such as password expiration [#949](https://github.com/pingcap/tidb-ansible/pull/949) +- Update the default value of the TiDB configuration item `txn_local_latches` to `false` +- Optimize the monitoring metrics and alert rules of the Grafana dashboard [#962](https://github.com/pingcap/tidb-ansible/pull/962) [#963](https://github.com/pingcap/tidb-ansible/pull/963) [#969](https://github.com/pingcap/tidb-ansible/pull/963) +- Check the configuration file before the deployment and upgrade [#934](https://github.com/pingcap/tidb-ansible/pull/934) [#972](https://github.com/pingcap/tidb-ansible/pull/972) diff --git a/sql-mode.md b/sql-mode.md new file mode 100644 index 0000000000000..399d4f9b16d55 --- /dev/null +++ b/sql-mode.md @@ -0,0 +1,59 @@ +--- +title: SQL Mode +summary: Learn SQL mode. +aliases: ['/docs/v3.1/sql-mode/','/docs/v3.1/reference/sql/sql-mode/'] +--- + +# SQL Mode + +TiDB servers operate in different SQL modes and apply these modes differently for different clients. 
SQL mode defines the SQL syntaxes that TiDB supports and the type of data validation check to perform, as described below: + ++ Before TiDB is started, modify the `--sql-mode="modes"` configuration item to set SQL mode. + ++ After TiDB is started, execute `SET [ SESSION | GLOBAL ] sql_mode='modes'` to set SQL mode. + +Ensure that you have the `SUPER` privilege when setting SQL mode at the `GLOBAL` level, and your setting at this level only affects the connections established afterwards. Changes to SQL mode at the `SESSION` level only affect the current client. + +`Modes` are a series of different modes separated by commas (','). You can use the `SELECT @@sql_mode` statement to check the current SQL mode. The default SQL mode is `ONLY_FULL_GROUP_BY, STRICT_TRANS_TABLES, NO_ZERO_IN_DATE, NO_ZERO_DATE, ERROR_FOR_DIVISION_BY_ZERO, NO_AUTO_CREATE_USER, NO_ENGINE_SUBSTITUTION`. + +## Important `sql_mode` values + +* `ANSI`: This mode complies with standard SQL. In this mode, data is checked. If data does not comply with the defined type or length, the data type is adjusted or trimmed and a `warning` is returned. +* `STRICT_TRANS_TABLES`: Strict mode, where data is strictly checked. When any incorrect data is inserted into a table, an error is returned. +* `TRADITIONAL`: In this mode, TiDB behaves like a "traditional" SQL database system. An error instead of a warning is returned when any incorrect value is inserted into a column. Then, the `INSERT` or `UPDATE` statement is immediately stopped. + +## SQL mode table + +| Name | Description | +| :--- | :--- | +| `PIPES_AS_CONCAT` | Treats "\|\|" as a string concatenation operator (`+`) (the same as `CONCAT()`), not as an `OR` (full support) | +| `ANSI_QUOTES` | Treats `"` as an identifier. If `ANSI_QUOTES` is enabled, only single quotes are treated as string literals, and double quotes are treated as identifiers. Therefore, double quotes cannot be used to quote strings. 
(full support)| +| `IGNORE_SPACE` | If this mode is enabled, the system ignores space. For example: "user" and "user " are the same. (full support)| +| `ONLY_FULL_GROUP_BY` | If a non-aggregated column that is referred to in `SELECT`, `HAVING`, or `ORDER BY` is absent in `GROUP BY`, this SQL statement is invalid, because it is abnormal for a column to be absent in `GROUP BY` but displayed by query. (full support) | +| `NO_UNSIGNED_SUBTRACTION` | Does not mark the result as `UNSIGNED` if an operand has no symbol in subtraction. (full support)| +| `NO_DIR_IN_CREATE` | Ignores all `INDEX DIRECTORY` and `DATA DIRECTORY` directives when a table is created. This option is only useful for secondary replication servers (syntax support only) | +| `NO_KEY_OPTIONS` | When you use the `SHOW CREATE TABLE` statement, MySQL-specific syntaxes such as `ENGINE` are not exported. Consider this option when migrating across DB types using mysqldump. (syntax support only)| +| `NO_FIELD_OPTIONS` | When you use the `SHOW CREATE TABLE` statement, MySQL-specific syntaxes such as `ENGINE` are not exported. Consider this option when migrating across DB types using mysqldump. (syntax support only) | +| `NO_TABLE_OPTIONS` | When you use the `SHOW CREATE TABLE` statement, MySQL-specific syntaxes such as `ENGINE` are not exported. Consider this option when migrating across DB types using mysqldump. (syntax support only)| +| `NO_AUTO_VALUE_ON_ZERO` | If this mode is enabled, when the value passed in the `AUTO_INCREMENT` column is `0` or a specific value, the system directly writes this value to this column. When `NULL` is passed, the system automatically generates the next serial number. (full support)| +| `NO_BACKSLASH_ESCAPES` | If this mode is enabled, the `\` backslash symbol only stands for itself. (full support)| +| `STRICT_TRANS_TABLES` | Enables the strict mode for the transaction storage engine and rolls back the entire statement after an illegal value is inserted. 
(full support) | +| `STRICT_ALL_TABLES` | For transactional tables, rolls back the entire transaction statement after an illegal value is inserted. (full support) | +| `NO_ZERO_IN_DATE` | Strict mode, where dates with a month or day part of `0` are not accepted. If you use the `IGNORE` option, TiDB inserts '0000-00-00' for a similar date. In non-strict mode, this date is accepted but a warning is returned. (full support) +| `NO_ZERO_DATE` | Does not use '0000-00-00' as a legal date in strict mode. You can still insert a zero date with the `IGNORE` option. In non-strict mode, this date is accepted but a warning is returned. (full support)| +| `ALLOW_INVALID_DATES` | In this mode, the system does not check the validity of all dates. It only checks the month value ranging from `1` to `12` and the day value ranging from `1` to `31`. The mode only applies to `DATE` and `DATETIME` columns. All `TIMESTAMP` columns need a full validity check. (full support) | +| `ERROR_FOR_DIVISION_BY_ZERO` | If this mode is enabled, the system returns an error when handling division by `0` in data-change operations (`INSERT` or `UPDATE`).
If this mode is not enabled, the system returns a warning and `NULL` is used instead. (full support) | +| `NO_AUTO_CREATE_USER` | Prevents `GRANT` from automatically creating new users unless a password is also specified (full support)| +| `HIGH_NOT_PRECEDENCE` | By default, the precedence of the `NOT` operator is such that expressions such as `NOT a BETWEEN b AND c` are parsed as `NOT (a BETWEEN b AND c)`. Enabling this mode restores the higher-precedence behavior of some older versions of MySQL, in which this expression is parsed as `(NOT a) BETWEEN b AND c`. (full support) | +| `NO_ENGINE_SUBSTITUTION` | Prevents the automatic replacement of storage engines if the required storage engine is disabled or not compiled. (syntax support only)| +| `PAD_CHAR_TO_FULL_LENGTH` | If this mode is enabled, the system does not trim the trailing spaces for `CHAR` types. (full support) | +| `REAL_AS_FLOAT` | Treats `REAL` as the synonym of `FLOAT`, not the synonym of `DOUBLE` (full support)| +| `POSTGRESQL` | Equivalent to `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE`, `NO_KEY_OPTIONS`, `NO_TABLE_OPTIONS`, `NO_FIELD_OPTIONS` (syntax support only)| +| `MSSQL` | Equivalent to `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE`, `NO_KEY_OPTIONS`, `NO_TABLE_OPTIONS`, `NO_FIELD_OPTIONS` (syntax support only)| +| `DB2` | Equivalent to `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE`, `NO_KEY_OPTIONS`, `NO_TABLE_OPTIONS`, `NO_FIELD_OPTIONS` (syntax support only)| +| `MAXDB` | Equivalent to `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE`, `NO_KEY_OPTIONS`, `NO_TABLE_OPTIONS`, `NO_FIELD_OPTIONS`, `NO_AUTO_CREATE_USER` (full support)| +| `MySQL323` | Equivalent to `NO_FIELD_OPTIONS`, `HIGH_NOT_PRECEDENCE` (syntax support only)| +| `MYSQL40` | Equivalent to `NO_FIELD_OPTIONS`, `HIGH_NOT_PRECEDENCE` (syntax support only)| +| `ANSI` | Equivalent to `REAL_AS_FLOAT`, `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE` (syntax support only)| +| `TRADITIONAL` | Equivalent to `STRICT_TRANS_TABLES`, `STRICT_ALL_TABLES`, `NO_ZERO_IN_DATE`, `NO_ZERO_DATE`, 
`ERROR_FOR_DIVISION_BY_ZERO`, `NO_AUTO_CREATE_USER` (syntax support only) | +| `ORACLE` | Equivalent to `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE`, `NO_KEY_OPTIONS`, `NO_TABLE_OPTIONS`, `NO_FIELD_OPTIONS`, `NO_AUTO_CREATE_USER` (syntax support only)| diff --git a/sql-statements/sql-statement-recover-table.md b/sql-statements/sql-statement-recover-table.md new file mode 100644 index 0000000000000..07de3bb90e0be --- /dev/null +++ b/sql-statements/sql-statement-recover-table.md @@ -0,0 +1,107 @@ +--- +title: RECOVER TABLE +summary: An overview of the usage of RECOVER TABLE for the TiDB database. +aliases: ['/docs/v3.1/sql-statements/sql-statement-recover-table/','/docs/v3.1/reference/sql/statements/recover-table/'] +--- + +# RECOVER TABLE + +`RECOVER TABLE` is used to recover a deleted table and the data on it within the GC (Garbage Collection) life time after the `DROP TABLE` statement is executed. + +## Syntax + +{{< copyable "sql" >}} + +```sql +RECOVER TABLE table_name +``` + +{{< copyable "sql" >}} + +```sql +RECOVER TABLE BY JOB ddl_job_id +``` + +> **Note:** +> +> + If a table is deleted and the GC life time has elapsed, the table cannot be recovered with `RECOVER TABLE`. Execution of `RECOVER TABLE` in this scenario returns an error like: `snapshot is older than GC safe point 2019-07-10 13:45:57 +0800 CST`. +> +> + If the TiDB version is 3.0.0 or later, it is not recommended to use `RECOVER TABLE` when TiDB Binlog is used. +> +> + `RECOVER TABLE` is supported since TiDB Binlog version 3.0.1, so you can use `RECOVER TABLE` in the following situations: +> +> - Binlog version is 3.0.1 or later. +> - TiDB 3.0 is used both in the upstream cluster and the downstream cluster. +> - The GC life time of the secondary cluster must be longer than that of the primary cluster. However, as latency occurs during data replication between upstream and downstream databases, data recovery might fail in the downstream. 
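+>
+>   For example, you can compare the GC life time of the two clusters by running the following statement on each of them (a sketch: in TiDB 3.0, the GC life time is stored as the `tikv_gc_life_time` variable in the `mysql.tidb` system table):
+>
+>   ```sql
+>   SELECT VARIABLE_VALUE FROM mysql.tidb WHERE VARIABLE_NAME = 'tikv_gc_life_time';
+>   ```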
+ +### Troubleshoot errors during TiDB Binlog replication + +When you use `RECOVER TABLE` in the upstream TiDB during TiDB Binlog replication, TiDB Binlog might be interrupted in the following three situations: + ++ The downstream database does not support the `RECOVER TABLE` statement. An error instance: `check the manual that corresponds to your MySQL server version for the right syntax to use near 'RECOVER TABLE table_name'`. + ++ The GC life time is not consistent between the upstream database and the downstream database. An error instance: `snapshot is older than GC safe point 2019-07-10 13:45:57 +0800 CST`. + ++ Latency occurs during replication between upstream and downstream databases. An error instance: `snapshot is older than GC safe point 2019-07-10 13:45:57 +0800 CST`. + +For the above three situations, you can resume data replication from TiDB Binlog with a [full import of the deleted table](/ecosystem-tool-user-guide.md#backup-and-restore). + +## Examples + ++ Recover the deleted table according to the table name. + + {{< copyable "sql" >}} + + ```sql + DROP TABLE t; + ``` + + {{< copyable "sql" >}} + + ```sql + RECOVER TABLE t; + ``` + + This method searches the recent DDL job history, locates the first DDL operation of the `DROP TABLE` type, and then recovers the deleted table whose name is identical to the table name specified in the `RECOVER TABLE` statement. + ++ Recover the deleted table by its `DDL JOB ID`. + + Suppose that you deleted the table `t`, created another table `t`, and then deleted the newly created `t`. If you want to recover the `t` that was deleted first, you must use the method that specifies the `DDL JOB ID`. + + {{< copyable "sql" >}} + + ```sql + DROP TABLE t; + ``` + + {{< copyable "sql" >}} + + ```sql + ADMIN SHOW DDL JOBS 1; + ``` + + The second statement searches the DDL job history for the `DDL JOB ID` of the job that dropped `t`. In the following example, the ID is `53`. 
+ + ``` + +--------+---------+------------+------------+--------------+-----------+----------+-----------+-----------------------------------+--------+ + | JOB_ID | DB_NAME | TABLE_NAME | JOB_TYPE | SCHEMA_STATE | SCHEMA_ID | TABLE_ID | ROW_COUNT | START_TIME | STATE | + +--------+---------+------------+------------+--------------+-----------+----------+-----------+-----------------------------------+--------+ + | 53 | test | | drop table | none | 1 | 41 | 0 | 2019-07-10 13:23:18.277 +0800 CST | synced | + +--------+---------+------------+------------+--------------+-----------+----------+-----------+-----------------------------------+--------+ + ``` + + {{< copyable "sql" >}} + + ```sql + RECOVER TABLE BY JOB 53; + ``` + + This method recovers the deleted table via the `DDL JOB ID`. If the corresponding DDL job is not of the `DROP TABLE` type, an error occurs. + +## Implementation principle + +When deleting a table, TiDB only deletes the table metadata, and writes the table data (row data and index data) to be deleted to the `mysql.gc_delete_range` table. The GC Worker in the TiDB background periodically removes from the `mysql.gc_delete_range` table the keys that exceed the GC life time. + +Therefore, to recover a table, you only need to recover the table metadata and delete the corresponding row record in the `mysql.gc_delete_range` table before the GC Worker deletes the table data. You can use a snapshot read of TiDB to recover the table metadata. Refer to [Read Historical Data](/read-historical-data.md) for details. + +Table recovery is done by TiDB obtaining the table metadata through snapshot read, and then going through the process of table creation similar to `CREATE TABLE`. Therefore, `RECOVER TABLE` itself is, in essence, a kind of DDL operation. 
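+
+For example, before the GC Worker clears the data, you can check whether the dropped table's data ranges are still pending deletion. This is a sketch: it assumes the `element_id` column of `mysql.gc_delete_range` stores the `TABLE_ID` of the dropped table (`41` in the `ADMIN SHOW DDL JOBS` output above):
+
+{{< copyable "sql" >}}
+
+```sql
+SELECT * FROM mysql.gc_delete_range WHERE element_id = 41;
+```
+
+If this query returns a row, the table data has not been physically removed yet, and `RECOVER TABLE` can still succeed within the GC life time.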
diff --git a/syncer-overview.md b/syncer-overview.md index f34600e192f61..1ce1eadc5ffb5 100644 --- a/syncer-overview.md +++ b/syncer-overview.md @@ -74,7 +74,7 @@ Usage of syncer: -safe-mode to specify and enable the safe mode to make Syncer reentrant -server-id int - to specify MySQL slave sever-id (default 101) + to specify MySQL replica server-id (default 101) -status-addr string to specify Syncer metrics (default :8271), such as `--status-addr 127:0.0.1:8271` -timezone string @@ -357,7 +357,7 @@ Before replicating data using Syncer, check the following items: > **Note:** > - > If there is a master-slave replication structure between the upstream MySQL/MariaDB servers, then choose the following version. + > If there is a source/replica replication structure between the upstream MySQL/MariaDB servers, then choose the following version. > > - 5.7.1 < MySQL version < 8.0 > - MariaDB version >= 10.1.3 diff --git a/tidb-binlog/tidb-binlog-faq.md b/tidb-binlog/tidb-binlog-faq.md index 1a474080cc6c1..440c83030f56c 100644 --- a/tidb-binlog/tidb-binlog-faq.md +++ b/tidb-binlog/tidb-binlog-faq.md @@ -122,9 +122,9 @@ If the data in the downstream is not affected, you can redeploy Drainer on the n 2. To restore the latest data of the backup file, use Reparo to set `start-tso` = {snapshot timestamp of the full backup + 1} and `end-ts` = 0 (or you can specify a point in time). -## How to redeploy Drainer when enabling `ignore-error` in Master-Slave replication triggers a critical error? +## How to redeploy Drainer when enabling `ignore-error` in Primary-Secondary replication triggers a critical error? -If a critical error is trigged when TiDB fails to write binlog after enabling `ignore-error`, TiDB stops writing binlog and binlog data loss occurs. +If a critical error is triggered when TiDB fails to write binlog after enabling `ignore-error`, TiDB stops writing binlog and binlog data loss occurs. 
To resume replication, perform the following steps: 1. Stop the Drainer instance. From c7cc7893941273617e7322f9211833df894dbe38 Mon Sep 17 00:00:00 2001 From: Ran Date: Fri, 31 Jul 2020 17:34:46 +0800 Subject: [PATCH 2/2] update version-specific changes Signed-off-by: Ran --- .../expressions-pushed-down.md | 147 ------------------ key-features.md | 2 +- releases/release-3.0.4.md | 128 --------------- sql-mode.md | 59 ------- sql-statements/sql-statement-recover-table.md | 107 ------------- 5 files changed, 1 insertion(+), 442 deletions(-) delete mode 100644 functions-and-operators/expressions-pushed-down.md delete mode 100644 releases/release-3.0.4.md delete mode 100644 sql-mode.md delete mode 100644 sql-statements/sql-statement-recover-table.md diff --git a/functions-and-operators/expressions-pushed-down.md b/functions-and-operators/expressions-pushed-down.md deleted file mode 100644 index c46ae713d3e95..0000000000000 --- a/functions-and-operators/expressions-pushed-down.md +++ /dev/null @@ -1,147 +0,0 @@ ---- -title: List of Expressions for Pushdown -summary: Learn a list of expressions that can be pushed down to TiKV and the related operations. -aliases: ['/docs/v3.1/functions-and-operators/expressions-pushed-down/','/docs/v3.1/reference/sql/functions-and-operators/expressions-pushed-down/'] ---- - -# List of Expressions for Pushdown - -When TiDB reads data from TiKV, TiDB tries to push down some expressions (including calculations of functions or operators) to be processed to TiKV. This reduces the amount of transferred data and offloads processing from a single TiDB node. This document introduces the expressions that TiDB already supports pushing down and how to prohibit specific expressions from being pushed down using blocklist. 
- -## Supported expressions for pushdown - -| Expression Type | Operations | -| :-------------- | :------------------------------------- | -| [Logical operators](/functions-and-operators/operators.md#logical-operators) | AND (&&), OR (||), NOT (!) | -| [Comparison functions and operators](/functions-and-operators/operators.md#comparison-functions-and-operators) | `<`, `<=`, `=`, `!=` (`<>`), `>`, `>=`, [`<=>`](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#operator_equal-to), [`IN()`](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_in), IS NULL, LIKE, IS TRUE, IS FALSE, [`COALESCE()`](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_coalesce) | -| [Numeric functions and operators](/functions-and-operators/numeric-functions-and-operators.md) | +, -, *, /, [`ABS()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_abs), [`CEIL()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_ceil), [`CEILING()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_ceiling), [`FLOOR()`](https://dev.mysql.com/doc/refman/5.7/en/mathematical-functions.html#function_floor) | -| [Control flow functions](/functions-and-operators/control-flow-functions.md) | [`CASE`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#operator_case), [`IF()`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_if), [`IFNULL()`](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#function_ifnull) | -| [JSON functions](/functions-and-operators/json-functions.md) | [JSON_TYPE(json_val)][json_type],
[JSON_EXTRACT(json_doc, path[, path] ...)][json_extract],
[JSON_UNQUOTE(json_val)][json_unquote],
[JSON_OBJECT(key, val[, key, val] ...)][json_object],
[JSON_ARRAY([val[, val] ...])][json_array],
[JSON_MERGE(json_doc, json_doc[, json_doc] ...)][json_merge],
[JSON_SET(json_doc, path, val[, path, val] ...)][json_set],
[JSON_INSERT(json_doc, path, val[, path, val] ...)][json_insert],
[JSON_REPLACE(json_doc, path, val[, path, val] ...)][json_replace],
[JSON_REMOVE(json_doc, path[, path] ...)][json_remove] | -| [Date and time functions](/functions-and-operators/date-and-time-functions.md) | [`DATE_FORMAT()`](https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_date-format) | - -## Blocklist specific expressions - -If unexpected behavior occurs during the calculation of a function caused by its pushdown, you can quickly restore the application by blocklisting that function. Specifically, you can prohibit an expression from being pushed down by adding the corresponding functions or operator to the blocklist `mysql.expr_pushdown_blacklist`. - -### Add to the blocklist - -To add one or more functions or operators to the blocklist, perform the following steps: - -1. Insert the function or operator name to `mysql.expr_pushdown_blacklist`. - -2. Execute the `admin reload expr_pushdown_blacklist;` command. - -### Remove from the blocklist - -To remove one or more functions or operators from the blocklist, perform the following steps: - -1. Delete the function or operator name in `mysql.expr_pushdown_blacklist`. - -2. Execute the `admin reload expr_pushdown_blacklist;` command. - -### blocklist usage examples - -The following example demonstrates how to add the `<` and `>` operators to the blocklist, then remove `>` from the blocklist. - -You can see whether the blocklist takes effect by checking the results returned by `EXPLAIN` statement (See [Understanding `EXPLAIN` results](/query-execution-plan.md)). 
- -```sql -tidb> create table t(a int); -Query OK, 0 rows affected (0.01 sec) - -tidb> explain select * from t where a < 2 and a > 2; -+---------------------+----------+------+------------------------------------------------------------+ -| id | count | task | operator info | -+---------------------+----------+------+------------------------------------------------------------+ -| TableReader_7 | 0.00 | root | data:Selection_6 | -| └─Selection_6 | 0.00 | cop | gt(test.t.a, 2), lt(test.t.a, 2) | -| └─TableScan_5 | 10000.00 | cop | table:t, range:[-inf,+inf], keep order:false, stats:pseudo | -+---------------------+----------+------+------------------------------------------------------------+ -3 rows in set (0.00 sec) - -tidb> insert into mysql.expr_pushdown_blacklist values('<'), ('>'); -Query OK, 2 rows affected (0.00 sec) -Records: 2 Duplicates: 0 Warnings: 0 - -tidb> admin reload expr_pushdown_blacklist; -Query OK, 0 rows affected (0.00 sec) - -tidb> explain select * from t where a < 2 and a > 2; -+---------------------+----------+------+------------------------------------------------------------+ -| id | count | task | operator info | -+---------------------+----------+------+------------------------------------------------------------+ -| Selection_5 | 8000.00 | root | gt(test.t.a, 2), lt(test.t.a, 2) | -| └─TableReader_7 | 10000.00 | root | data:TableScan_6 | -| └─TableScan_6 | 10000.00 | cop | table:t, range:[-inf,+inf], keep order:false, stats:pseudo | -+---------------------+----------+------+------------------------------------------------------------+ -3 rows in set (0.00 sec) - -tidb> delete from mysql.expr_pushdown_blacklist where name = '>'; -Query OK, 1 row affected (0.00 sec) - -tidb> admin reload expr_pushdown_blacklist; -Query OK, 0 rows affected (0.00 sec) - -tidb> explain select * from t where a < 2 and a > 2; -+-----------------------+----------+------+------------------------------------------------------------+ -| id | count | task | 
operator info | -+-----------------------+----------+------+------------------------------------------------------------+ -| Selection_5 | 2666.67 | root | lt(test.t.a, 2) | -| └─TableReader_8 | 3333.33 | root | data:Selection_7 | -| └─Selection_7 | 3333.33 | cop | gt(test.t.a, 2) | -| └─TableScan_6 | 10000.00 | cop | table:t, range:[-inf,+inf], keep order:false, stats:pseudo | -+-----------------------+----------+------+------------------------------------------------------------+ -4 rows in set (0.00 sec) -``` - -> **Note:** -> -> - `admin reload expr_pushdown_blacklist` only takes effect on the TiDB server that executes this SQL statement. To make it apply to all TiDB servers, execute the SQL statement on each TiDB server. -> - The feature of blacklisting specific expressions is supported in TiDB 3.0.0 or later versions. -> - TiDB 3.0.3 or earlier versions does not support adding some of the operators (such as ">", "+", "is null") to the blocklist by using their original names. You need to use their aliases (case-sensitive) instead, as shown in the following table: - -| Operator Name | Aliases | -| :-------- | :---------- | -| `<` | lt | -| `>` | gt | -| `<=` | le | -| `>=` | ge | -| `=` | eq | -| `!=` | ne | -| `<>` | ne | -| `<=>` | nulleq | -| | | bitor | -| && | bitand| -| || | or | -| ! 
| not | -| in | in | -| + | plus| -| - | minus | -| * | mul | -| / | div | -| DIV | intdiv| -| IS NULL | isnull | -| IS TRUE | istrue | -| IS FALSE | isfalse | - -[json_extract]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-extract -[json_short_extract]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#operator_json-column-path -[json_short_extract_unquote]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#operator_json-inline-path -[json_unquote]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-unquote -[json_type]: https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-type -[json_set]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-set -[json_insert]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-insert -[json_replace]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-replace -[json_remove]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-remove -[json_merge]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-merge -[json_merge_preserve]: https://dev.mysql.com/doc/refman/5.7/en/json-modification-functions.html#function_json-merge-preserve -[json_object]: https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-object -[json_array]: https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-array -[json_keys]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-keys -[json_length]: https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-length -[json_valid]: https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-valid -[json_quote]: 
https://dev.mysql.com/doc/refman/5.7/en/json-creation-functions.html#function_json-quote -[json_contains]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-contains -[json_contains_path]: https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#function_json-contains-path -[json_arrayagg]: https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_json-arrayagg -[json_depth]: https://dev.mysql.com/doc/refman/5.7/en/json-attribute-functions.html#function_json-depth diff --git a/key-features.md b/key-features.md index e02bdd5d122ce..bc6a5a8e4862d 100644 --- a/key-features.md +++ b/key-features.md @@ -96,4 +96,4 @@ TiDB has been released under the Apache 2.0 license since its initial launch in TiDB implements the _Online, Asynchronous Schema Change_ algorithm first described in [Google's F1 paper](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41376.pdf). -In simplified terms, this means that TiDB is able to make changes to the schema across its distributed architecture without blocking either read or write operations. There is no need to use an external schema change tool or flip between masters and slaves as is common in large MySQL deployments. +In simplified terms, this means that TiDB is able to make changes to the schema across its distributed architecture without blocking either read or write operations. There is no need to use an external schema change tool or flip between sources and replicas as is common in large MySQL deployments. 
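To make the operator aliases listed earlier concrete, here is a minimal sketch of blocklisting the `>` operator on TiDB 3.0.3 or earlier via its alias `gt`. This assumes the `mysql.expr_pushdown_blacklist` table with its single `name` column, as used by the expression pushdown feature; verify the schema on your cluster before relying on it:

```sql
-- Block pushdown of the ">" operator; on TiDB 3.0.3 or earlier
-- the alias "gt" must be used instead of the original name.
INSERT INTO mysql.expr_pushdown_blacklist (name) VALUES ('gt');
-- Reload the blocklist; this only takes effect on the TiDB server that runs it,
-- so repeat the reload on every TiDB server.
admin reload expr_pushdown_blacklist;
```

To restore pushdown later, delete the corresponding row from `mysql.expr_pushdown_blacklist` and run the reload statement again.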
diff --git a/releases/release-3.0.4.md b/releases/release-3.0.4.md deleted file mode 100644 index 39354f1f80afb..0000000000000 --- a/releases/release-3.0.4.md +++ /dev/null @@ -1,128 +0,0 @@ ---- -title: TiDB 3.0.4 Release Notes -aliases: ['/docs/v3.1/releases/release-3.0.4/','/docs/v3.1/releases/3.0.4/'] ---- - -# TiDB 3.0.4 Release Notes - -Release date: October 8, 2019 - -TiDB version: 3.0.4 - -TiDB Ansible version: 3.0.4 - -- New features - - Add the `performance_schema.events_statements_summary_by_digest` system table to troubleshoot performance issues at the SQL level - - Add the `WHERE` clause in TiDB’s `SHOW TABLE REGIONS` syntax - - Add the `worker-count` and `txn-batch` configuration items in Reparo to control the recovery speed -- Improvements - - Support batch Region split command and empty split command in TiKV to improve split performance - - Support double linked list for RocksDB in TiKV to improve performance of reverse scan - - Add two perf tools `iosnoop` and `funcslower` in TiDB Ansible to better diagnose the cluster state - - Optimize the output of slow query logs in TiDB by deleting redundant fields -- Changed behaviors - - Update the default value of `txn-local-latches.enable` to `false` to disable the default behavior of checking conflicts of local transactions in TiDB - - Add the `tidb_txn_mode` system variable of global scope in TiDB and allow using the pessimistic lock; note that TiDB still adopts the optimistic lock by default - - Replace the `Index_ids` field in TiDB slow query logs with `Index_names` to improve the usability of slow query logs - - Add the `split-region-max-num` parameter in the TiDB configuration file to modify the maximum number of Regions allowed in the `SPLIT TABLE` syntax - - Return the `Out Of Memory Quota` error instead of disconnecting the link when a SQL execution exceeds the memory limit - - Disallow dropping the `AUTO_INCREMENT` attribute of columns in TiDB to avoid misoperations. 
To drop this attribute, change the `tidb_allow_remove_auto_inc` system variable -- Fixed issues - - Fix the issue that the uncommented TiDB-specific syntax `PRE_SPLIT_REGIONS` might cause errors in the downstream database during data replication - - Fix the issue in TiDB that the slow query logs are incorrect when getting the result of `PREPARE` + `EXECUTE` by using the cursor - - Fix the issue in PD that adjacent small Regions cannot be merged - - Fix the issue in TiKV that a file descriptor leak in idle clusters might cause TiKV processes to exit abnormally when the processes run for a long time -- Contributors - - Our thanks go to the following contributors from the community for helping this release: - - [sduzh](https://github.com/sduzh) - - [lizhenda](https://github.com/lizhenda) - -## TiDB - -- SQL Optimizer - - Fix the issue that invalid query ranges might result when ranges are split by feedback [#12170](https://github.com/pingcap/tidb/pull/12170) - - Display the returned error of the `SHOW STATS_BUCKETS` statement in hexadecimal rather than returning errors when the result contains invalid Keys [#12094](https://github.com/pingcap/tidb/pull/12094) - - Fix the issue that when a query contains the `SLEEP` function (for example, `select 1 from (select sleep(1)) t;`), column pruning causes an invalid `sleep(1)` during the query [#11953](https://github.com/pingcap/tidb/pull/11953) - - Use index scan to lower IO when a query only concerns the number of columns rather than the table data [#12112](https://github.com/pingcap/tidb/pull/12112) - - Do not use any index when no index is specified in `use index()` to be compatible with MySQL [#12100](https://github.com/pingcap/tidb/pull/12100) - - Strictly limit the number of `TopN` records in the `CMSketch` statistics to fix the issue that the `ANALYZE` statement fails because the statement count exceeds TiDB’s limit on the size of a transaction [#11914](https://github.com/pingcap/tidb/pull/11914) - - Fix the error that occurred when
converting the subqueries contained in the `Update` statement [#12483](https://github.com/pingcap/tidb/pull/12483) - - Optimize execution performance of the `select ... limit ... offset ...` statement by pushing the Limit operator down to the `IndexLookUpReader` execution logic [#12378](https://github.com/pingcap/tidb/pull/12378) -- SQL Execution Engine - - Print the SQL statement in the log when the `PREPARED` statement is incorrectly executed [#12191](https://github.com/pingcap/tidb/pull/12191) - - Support partition pruning when the `UNIX_TIMESTAMP` function is used to implement partitioning [#12169](https://github.com/pingcap/tidb/pull/12169) - - Fix the issue that no error is reported when `AUTO_INCREMENT` incorrectly allocates `MAX int64` and `MAX uint64` [#12162](https://github.com/pingcap/tidb/pull/12162) - - Add the `WHERE` clause in the `SHOW TABLE … REGIONS` and `SHOW TABLE .. INDEX … REGIONS` syntaxes [#12123](https://github.com/pingcap/tidb/pull/12123) - - Return the `Out Of Memory Quota` error instead of disconnecting the link when a SQL execution exceeds the memory limit [#12127](https://github.com/pingcap/tidb/pull/12127) - - Fix the issue that an incorrect result is returned when the `JSON_UNQUOTE` function handles JSON text [#11955](https://github.com/pingcap/tidb/pull/11955) - - Fix the issue that `LAST_INSERT_ID` is incorrect when assigning values to the `AUTO_INCREMENT` column in the first row (for example, `insert into t (pk, c) values (1, 2), (NULL, 3)`) [#12002](https://github.com/pingcap/tidb/pull/12002) - - Fix the issue that the `GROUP BY` parsing rule is incorrect in the `PREPARE` statement [#12351](https://github.com/pingcap/tidb/pull/12351) - - Fix the issue that the privilege check is incorrect in point queries [#12340](https://github.com/pingcap/tidb/pull/12340) - - Fix the issue that the duration by `sql_type` for the `PREPARE` statement is not shown in the monitoring record [#12331](https://github.com/pingcap/tidb/pull/12331) - -
Support using aliases for tables in point queries (for example, `select * from t tmp where a = "aa"`) [#12282](https://github.com/pingcap/tidb/pull/12282) - - Fix the error that occurred because negative numbers inserted into BIT type columns were not handled as unsigned values [#12423](https://github.com/pingcap/tidb/pull/12423) - - Fix the incorrect rounding of time (for example, `2019-09-11 11:17:47.999999666` should be rounded to `2019-09-11 11:17:48`.) [#12258](https://github.com/pingcap/tidb/pull/12258) - - Refine the usage of the expression blocklist (for example, `<` is equivalent to `lt`.) [#11975](https://github.com/pingcap/tidb/pull/11975) - - Add the database prefix to the error message of a non-existing function (for example, `[expression:1305]FUNCTION test.std_samp does not exist`) [#12111](https://github.com/pingcap/tidb/pull/12111) -- Server - - Add the `Prev_stmt` field in slow query logs to output the previous statement when the last statement is `COMMIT` [#12180](https://github.com/pingcap/tidb/pull/12180) - - Optimize the output of slow query logs by deleting redundant fields [#12144](https://github.com/pingcap/tidb/pull/12144) - - Update the default value of `txn-local-latches.enable` to `false` to disable the default behavior of checking conflicts of local transactions in TiDB [#12095](https://github.com/pingcap/tidb/pull/12095) - - Replace the `Index_ids` field in TiDB slow query logs with `Index_names` to improve the usability of slow query logs [#12061](https://github.com/pingcap/tidb/pull/12061) - - Add the `tidb_txn_mode` system variable of global scope in TiDB and allow using pessimistic lock [#12049](https://github.com/pingcap/tidb/pull/12049) - - Add the `Backoff` field in the slow query logs to record the Backoff information in the commit phase of 2PC [#12335](https://github.com/pingcap/tidb/pull/12335) - - Fix the issue that the slow query logs are incorrect when getting the result of `PREPARE` + `EXECUTE` by using the cursor (for
example, `PREPARE stmt1 FROM 'SELECT * FROM t WHERE a > ?'; EXECUTE stmt1 USING @variable`) [#12392](https://github.com/pingcap/tidb/pull/12392) - - Support `tidb_enable_stmt_summary`. When this feature is enabled, TiDB counts the SQL statements and the result can be queried by using the system table `performance_schema.events_statements_summary_by_digest` [#12308](https://github.com/pingcap/tidb/pull/12308) - - Adjust the level of some logs in tikv-client (for example, change the log level of `batchRecvLoop fails` from `ERROR` to `INFO`) [#12383](https://github.com/pingcap/tidb/pull/12383) -- DDL - - Add the `tidb_allow_remove_auto_inc` variable. Dropping the `AUTO_INCREMENT` attribute of the column is disabled by default [#12145](https://github.com/pingcap/tidb/pull/12145) - - Fix the issue that the uncommented TiDB-specific syntax `PRE_SPLIT_REGIONS` might cause errors in the downstream database during data replication [#12120](https://github.com/pingcap/tidb/pull/12120) - - Add the `split-region-max-num` variable in the configuration file so that the maximum allowable number of Regions is adjustable [#12097](https://github.com/pingcap/tidb/pull/12079) - - Support splitting a Region into multiple Regions and fix the timeout issue during Region scattering [#12343](https://github.com/pingcap/tidb/pull/12343) - - Fix the issue that the `drop index` statement fails when the index contains an `AUTO_INCREMENT` column that is referenced by two indexes [#12344](https://github.com/pingcap/tidb/pull/12344) -- Monitor - - Add the `connection_transient_failure_count` monitoring metric to count the number of gRPC connection errors in `tikvclient` [#12093](https://github.com/pingcap/tidb/pull/12093) - -## TiKV - -- Raftstore - - Fix the issue that Raftstore inaccurately counts the number of keys in empty Regions [#5414](https://github.com/tikv/tikv/pull/5414) - - Support double linked list for RocksDB to improve the performance of reverse scan
[#5368](https://github.com/tikv/tikv/pull/5368) - - Support batch Region split command and empty split command to improve split performance [#5470](https://github.com/tikv/tikv/pull/5470) -- Server - - Fix the issue that the output format of the `-V` command is not consistent with the format of 2.X [#5501](https://github.com/tikv/tikv/pull/5501) - - Upgrade Titan to the latest version in the 3.0 branch [#5517](https://github.com/tikv/tikv/pull/5517) - - Upgrade grpcio to v0.4.5 [#5523](https://github.com/tikv/tikv/pull/5523) - - Fix the issue of gRPC coredump and support shared memory to avoid OOM [#5524](https://github.com/tikv/tikv/pull/5524) - - Fix the issue in TiKV that file descriptor leak in idle clusters might cause TiKV processes to exit abnormally when the processes run for a long time [#5567](https://github.com/tikv/tikv/pull/5567) -- Storage - - Support the `txn_heart_beat` API to make the pessimistic lock in TiDB consistent with that in MySQL as much as possible [#5507](https://github.com/tikv/tikv/pull/5507) - - Fix the issue that the performance of point queries is low in some situations [#5495](https://github.com/tikv/tikv/pull/5495) [#5463](https://github.com/tikv/tikv/pull/5463) - -## PD - -- Fix the issue that adjacent small Regions cannot be merged [#1726](https://github.com/pingcap/pd/pull/1726) -- Fix the issue that the TLS enabling parameter in `pd-ctl` is invalid [#1738](https://github.com/pingcap/pd/pull/1738) -- Fix the thread-safety issue that the PD operator is accidentally removed [#1734](https://github.com/pingcap/pd/pull/1734) -- Support TLS for Region syncer [#1739](https://github.com/pingcap/pd/pull/1739) - -## Tools - -- TiDB Binlog - - Add the `worker-count` and `txn-batch` configuration items in Reparo to control the recovery speed [#746](https://github.com/pingcap/tidb-binlog/pull/746) - - Optimize the memory usage of Drainer to enhance the efficiency of simultaneous execution 
[#737](https://github.com/pingcap/tidb-binlog/pull/737) -- TiDB Lightning - - Fix the issue that re-importing data from checkpoint might cause TiDB Lightning to panic [#237](https://github.com/pingcap/tidb-lightning/pull/237) - - Optimize the algorithm of `AUTO_INCREMENT` to reduce the risk of overflowing `AUTO_INCREMENT` columns [#227](https://github.com/pingcap/tidb-lightning/pull/227) - -## TiDB Ansible - -- Upgrade TiSpark to v2.2.0 [#926](https://github.com/pingcap/tidb-ansible/pull/926) -- Update the default value of the TiDB configuration item `pessimistic_txn` to `true` [#933](https://github.com/pingcap/tidb-ansible/pull/933) -- Add more system-level monitoring metrics to `node_exporter` [#938](https://github.com/pingcap/tidb-ansible/pull/938) -- Add two perf tools `iosnoop` and `funcslower` in TiDB Ansible to better diagnose the cluster state [#946](https://github.com/pingcap/tidb-ansible/pull/946) -- Replace the raw module with the shell module to address the long waiting time in situations such as password expiration [#949](https://github.com/pingcap/tidb-ansible/pull/949) -- Update the default value of the TiDB configuration item `txn_local_latches` to `false` -- Optimize the monitoring metrics and alert rules of Grafana dashboard [#962](https://github.com/pingcap/tidb-ansible/pull/962) [#963](https://github.com/pingcap/tidb-ansible/pull/963) [#969](https://github.com/pingcap/tidb-ansible/pull/963) -- Check the configuration file before the deployment and upgrade [#934](https://github.com/pingcap/tidb-ansible/pull/934) [#972](https://github.com/pingcap/tidb-ansible/pull/972) diff --git a/sql-mode.md b/sql-mode.md deleted file mode 100644 index 399d4f9b16d55..0000000000000 --- a/sql-mode.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: SQL Mode -summary: Learn SQL mode. -aliases: ['/docs/v3.1/sql-mode/','/docs/v3.1/reference/sql/sql-mode/'] ---- - -# SQL Mode - -TiDB servers operate in different SQL modes and apply these modes differently for different clients.
SQL mode defines the SQL syntaxes that TiDB supports and the type of data validation check to perform, as described below: - -+ Before TiDB is started, modify the `--sql-mode="modes"` configuration item to set SQL mode. - -+ After TiDB is started, modify `SET [ SESSION | GLOBAL ] sql_mode='modes'` to set SQL mode. - -Ensure that you have `SUPER` privilege when setting SQL mode at `GLOBAL` level, and your setting at this level only affects the connections established afterwards. Changes to SQL mode at `SESSION` level only affect the current client. - -`Modes` are a series of different modes separated by commas (','). You can use the `SELECT @@sql_mode` statement to check the current SQL mode. The default value of SQL mode: `ONLY_FULL_GROUP_BY, STRICT_TRANS_TABLES, NO_ZERO_IN_DATE, NO_ZERO_DATE, ERROR_FOR_DIVISION_BY_ZERO, NO_AUTO_CREATE_USER, NO_ENGINE_SUBSTITUTION`. - -## Important `sql_mode` values - -* `ANSI`: This mode complies with standard SQL. In this mode, data is checked. If data does not comply with the defined type or length, the data type is adjusted or trimmed and a `warning` is returned. -* `STRICT_TRANS_TABLES`: Strict mode, where data is strictly checked. When any incorrect data is inserted into a table, an error is returned. -* `TRADITIONAL`: In this mode, TiDB behaves like a "traditional" SQL database system. An error instead of a warning is returned when any incorrect value is inserted into a column. Then, the `INSERT` or `UPDATE` statement is immediately stopped. - -## SQL mode table - -| Name | Description | -| :--- | :--- | -| `PIPES_AS_CONCAT` | Treats "\|\|" as a string concatenation operator (`+`) (the same as `CONCAT()`), not as an `OR` (full support) | -| `ANSI_QUOTES` | Treats `"` as an identifier. If `ANSI_QUOTES` is enabled, only single quotes are treated as string literals, and double quotes are treated as identifiers. Therefore, double quotes cannot be used to quote strings. 
(full support)| -| `IGNORE_SPACE` | If this mode is enabled, the system ignores the spaces between a function name and the `(` character. For example, `COUNT (*)` is treated the same as `COUNT(*)`. (full support)| -| `ONLY_FULL_GROUP_BY` | If a non-aggregated column that is referred to in `SELECT`, `HAVING`, or `ORDER BY` is absent in `GROUP BY`, this SQL statement is invalid, because such a column has no single determined value for each group. (full support) | -| `NO_UNSIGNED_SUBTRACTION` | Does not mark the result as `UNSIGNED` when one operand of a subtraction is unsigned. (full support)| -| `NO_DIR_IN_CREATE` | Ignores all `INDEX DIRECTORY` and `DATA DIRECTORY` directives when a table is created. This option is only useful for secondary replication servers (syntax support only) | -| `NO_KEY_OPTIONS` | When you use the `SHOW CREATE TABLE` statement, MySQL-specific syntaxes such as `ENGINE` are not exported. Consider this option when migrating across DB types using mysqldump. (syntax support only)| -| `NO_FIELD_OPTIONS` | When you use the `SHOW CREATE TABLE` statement, MySQL-specific syntaxes such as `ENGINE` are not exported. Consider this option when migrating across DB types using mysqldump. (syntax support only) | -| `NO_TABLE_OPTIONS` | When you use the `SHOW CREATE TABLE` statement, MySQL-specific syntaxes such as `ENGINE` are not exported. Consider this option when migrating across DB types using mysqldump. (syntax support only)| -| `NO_AUTO_VALUE_ON_ZERO` | If this mode is enabled, when the value passed in the `AUTO_INCREMENT` column is `0` or a specific value, the system directly writes this value to this column. When `NULL` is passed, the system automatically generates the next serial number. (full support)| -| `NO_BACKSLASH_ESCAPES` | If this mode is enabled, the `\` backslash symbol only stands for itself. (full support)| -| `STRICT_TRANS_TABLES` | Enables the strict mode for the transaction storage engine and rolls back the entire statement after an illegal value is inserted.
(full support) | -| `STRICT_ALL_TABLES` | For transactional tables, rolls back the entire statement after an illegal value is inserted. (full support) | -| `NO_ZERO_IN_DATE` | Strict mode, where dates with a month or day part of `0` are not accepted. If you use the `IGNORE` option, TiDB inserts '0000-00-00' for a similar date. In non-strict mode, this date is accepted but a warning is returned. (full support) | -| `NO_ZERO_DATE` | Does not use '0000-00-00' as a legal date in strict mode. You can still insert a zero date with the `IGNORE` option. In non-strict mode, this date is accepted but a warning is returned. (full support)| -| `ALLOW_INVALID_DATES` | In this mode, the system does not check the validity of all dates. It only checks the month value ranging from `1` to `12` and the day value ranging from `1` to `31`. The mode only applies to `DATE` and `DATETIME` columns. All `TIMESTAMP` columns need a full validity check. (full support) | -| `ERROR_FOR_DIVISION_BY_ZERO` | If this mode is enabled, the system returns an error when handling division by `0` in data-change operations (`INSERT` or `UPDATE`).
If this mode is not enabled, the system returns a warning and `NULL` is used instead. (full support) | -| `NO_AUTO_CREATE_USER` | Prevents `GRANT` from automatically creating new users unless a password is also specified (full support)| -| `HIGH_NOT_PRECEDENCE` | By default, expressions such as `NOT a BETWEEN b AND c` are parsed as `NOT (a BETWEEN b AND c)`. If this mode is enabled, the `NOT` operator has a higher precedence and the expression is parsed as `(NOT a) BETWEEN b AND c`, as in some older versions of MySQL. (full support) | -| `NO_ENGINE_SUBSTITUTION` | Prevents the automatic replacement of storage engines if the required storage engine is disabled or not compiled. (syntax support only)| -| `PAD_CHAR_TO_FULL_LENGTH` | If this mode is enabled, the system does not trim the trailing spaces for `CHAR` types. (full support) | -| `REAL_AS_FLOAT` | Treats `REAL` as a synonym of `FLOAT`, not of `DOUBLE` (full support)| -| `POSTGRESQL` | Equivalent to `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE`, `NO_KEY_OPTIONS`, `NO_TABLE_OPTIONS`, `NO_FIELD_OPTIONS` (syntax support only)| -| `MSSQL` | Equivalent to `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE`, `NO_KEY_OPTIONS`, `NO_TABLE_OPTIONS`, `NO_FIELD_OPTIONS` (syntax support only)| -| `DB2` | Equivalent to `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE`, `NO_KEY_OPTIONS`, `NO_TABLE_OPTIONS`, `NO_FIELD_OPTIONS` (syntax support only)| -| `MAXDB` | Equivalent to `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE`, `NO_KEY_OPTIONS`, `NO_TABLE_OPTIONS`, `NO_FIELD_OPTIONS`, `NO_AUTO_CREATE_USER` (full support)| -| `MySQL323` | Equivalent to `NO_FIELD_OPTIONS`, `HIGH_NOT_PRECEDENCE` (syntax support only)| -| `MYSQL40` | Equivalent to `NO_FIELD_OPTIONS`, `HIGH_NOT_PRECEDENCE` (syntax support only)| -| `ANSI` | Equivalent to `REAL_AS_FLOAT`, `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE` (syntax support only)| -| `TRADITIONAL` | Equivalent to `STRICT_TRANS_TABLES`, `STRICT_ALL_TABLES`, `NO_ZERO_IN_DATE`, `NO_ZERO_DATE`,
`ERROR_FOR_DIVISION_BY_ZERO`, `NO_AUTO_CREATE_USER` (syntax support only) | -| `ORACLE` | Equivalent to `PIPES_AS_CONCAT`, `ANSI_QUOTES`, `IGNORE_SPACE`, `NO_KEY_OPTIONS`, `NO_TABLE_OPTIONS`, `NO_FIELD_OPTIONS`, `NO_AUTO_CREATE_USER` (syntax support only)| diff --git a/sql-statements/sql-statement-recover-table.md b/sql-statements/sql-statement-recover-table.md deleted file mode 100644 index 07de3bb90e0be..0000000000000 --- a/sql-statements/sql-statement-recover-table.md +++ /dev/null @@ -1,107 +0,0 @@ ---- -title: RECOVER TABLE -summary: An overview of the usage of RECOVER TABLE for the TiDB database. -aliases: ['/docs/v3.1/sql-statements/sql-statement-recover-table/','/docs/v3.1/reference/sql/statements/recover-table/'] ---- - -# RECOVER TABLE - -`RECOVER TABLE` is used to recover a deleted table and the data on it within the GC (Garbage Collection) life time after the `DROP TABLE` statement is executed. - -## Syntax - -{{< copyable "sql" >}} - -```sql -RECOVER TABLE table_name -``` - -{{< copyable "sql" >}} - -```sql -RECOVER TABLE BY JOB ddl_job_id -``` - -> **Note:** -> -> + If a table is deleted and the GC life time has elapsed, the table cannot be recovered with `RECOVER TABLE`. Execution of `RECOVER TABLE` in this scenario returns an error like: `snapshot is older than GC safe point 2019-07-10 13:45:57 +0800 CST`. -> -> + If the TiDB version is 3.0.0 or later, it is not recommended for you to use `RECOVER TABLE` when TiDB Binlog is used. -> -> + `RECOVER TABLE` is supported since TiDB Binlog 3.0.1, so you can use `RECOVER TABLE` only when the following three conditions are met: -> -> - The TiDB Binlog version is 3.0.1 or later. -> - TiDB 3.0 is used both in the upstream cluster and the downstream cluster. -> - The GC life time of the secondary cluster must be longer than that of the primary cluster. However, as latency occurs during data replication between upstream and downstream databases, data recovery might fail in the downstream.
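Before running `RECOVER TABLE`, it can help to confirm how much of the GC window remains. A minimal sketch, assuming the `tikv_gc_life_time` and `tikv_gc_safe_point` rows that TiDB keeps in the `mysql.tidb` table (verify these names on your cluster):

```sql
-- Inspect the GC retention window and the current GC safe point.
SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM mysql.tidb
WHERE VARIABLE_NAME IN ('tikv_gc_life_time', 'tikv_gc_safe_point');
```

If the `DROP TABLE` happened before the reported safe point, `RECOVER TABLE` returns the `snapshot is older than GC safe point` error described above.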
- -### Troubleshoot errors during TiDB Binlog replication - -When you use `RECOVER TABLE` in the upstream TiDB during TiDB Binlog replication, TiDB Binlog might be interrupted in the following three situations: - -+ The downstream database does not support the `RECOVER TABLE` statement. An error instance: `check the manual that corresponds to your MySQL server version for the right syntax to use near 'RECOVER TABLE table_name'`. - -+ The GC life time is not consistent between the upstream database and the downstream database. An error instance: `snapshot is older than GC safe point 2019-07-10 13:45:57 +0800 CST`. - -+ Latency occurs during replication between upstream and downstream databases. An error instance: `snapshot is older than GC safe point 2019-07-10 13:45:57 +0800 CST`. - -For the above three situations, you can resume data replication from TiDB Binlog with a [full import of the deleted table](/ecosystem-tool-user-guide.md#backup-and-restore). - -## Examples - -+ Recover the deleted table according to the table name. - - {{< copyable "sql" >}} - - ```sql - DROP TABLE t; - ``` - - {{< copyable "sql" >}} - - ```sql - RECOVER TABLE t; - ``` - - This method searches the recent DDL job history, locates the first DDL operation of the `DROP TABLE` type, and then recovers the deleted table whose name is identical to the table name specified in the `RECOVER TABLE` statement. - -+ Recover the deleted table according to the table's `DDL JOB ID`. - - Suppose that you deleted the table `t`, created another table `t`, and then deleted the newly created `t`. In this case, to recover the `t` that was deleted first, you must specify its `DDL JOB ID`. - - {{< copyable "sql" >}} - - ```sql - DROP TABLE t; - ``` - - {{< copyable "sql" >}} - - ```sql - ADMIN SHOW DDL JOBS 1; - ``` - - The second statement above searches the DDL job history for the `DDL JOB ID` of the operation that dropped `t`. In the following example, the ID is `53`.
- - ``` - +--------+---------+------------+------------+--------------+-----------+----------+-----------+-----------------------------------+--------+ - | JOB_ID | DB_NAME | TABLE_NAME | JOB_TYPE | SCHEMA_STATE | SCHEMA_ID | TABLE_ID | ROW_COUNT | START_TIME | STATE | - +--------+---------+------------+------------+--------------+-----------+----------+-----------+-----------------------------------+--------+ - | 53 | test | | drop table | none | 1 | 41 | 0 | 2019-07-10 13:23:18.277 +0800 CST | synced | - +--------+---------+------------+------------+--------------+-----------+----------+-----------+-----------------------------------+--------+ - ``` - - {{< copyable "sql" >}} - - ```sql - RECOVER TABLE BY JOB 53; - ``` - - This method recovers the deleted table via the `DDL JOB ID`. If the corresponding DDL job is not of the `DROP TABLE` type, an error occurs. - -## Implementation principle - -When deleting a table, TiDB only deletes the table metadata, and writes the table data (row data and index data) to be deleted to the `mysql.gc_delete_range` table. The GC Worker in the TiDB background periodically removes from the `mysql.gc_delete_range` table the keys that exceed the GC life time. - -Therefore, to recover a table, you only need to recover the table metadata and delete the corresponding row record in the `mysql.gc_delete_range` table before the GC Worker deletes the table data. You can use a snapshot read of TiDB to recover the table metadata. Refer to [Read Historical Data](/read-historical-data.md) for details. - -Table recovery is done by TiDB obtaining the table metadata through snapshot read, and then going through the process of table creation similar to `CREATE TABLE`. Therefore, `RECOVER TABLE` itself is, in essence, a kind of DDL operation.
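The snapshot-read mechanism described above can be observed directly with the `tidb_snapshot` session variable. A minimal sketch (the timestamp is illustrative; it must be later than the current GC safe point and earlier than the `DROP TABLE`):

```sql
-- Read historical metadata at a point in time before the DROP TABLE.
SET @@tidb_snapshot = '2019-07-10 13:20:00';
SHOW CREATE TABLE t;
-- Return to reading the latest data.
SET @@tidb_snapshot = '';
```

`RECOVER TABLE` automates this read of historical metadata and then re-creates the table, which is why it is itself a DDL operation.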