From 263db1b16586d73cef68f275b58702ffb6a22a48 Mon Sep 17 00:00:00 2001 From: Ti Chi Robot Date: Fri, 2 Jun 2023 14:12:43 +0800 Subject: [PATCH 1/6] This is an automated cherry-pick of #13763 Signed-off-by: ti-chi-bot --- TOC.md | 2 +- _docHome.md | 158 ++++++++++ br/backup-and-restore-overview.md | 33 ++ br/br-pitr-guide.md | 137 +++++++++ clinic/clinic-introduction.md | 2 +- clinic/clinic-user-guide-for-tiup.md | 2 +- develop/dev-guide-aws-appflow-integration.md | 10 +- develop/dev-guide-build-cluster-in-cloud.md | 30 +- develop/dev-guide-create-database.md | 2 +- develop/dev-guide-create-secondary-indexes.md | 2 +- develop/dev-guide-create-table.md | 4 +- develop/dev-guide-delete-data.md | 2 +- develop/dev-guide-insert-data.md | 2 +- develop/dev-guide-outdated-for-django.md | 2 +- develop/dev-guide-proxysql-integration.md | 24 +- .../dev-guide-sample-application-golang.md | 10 +- develop/dev-guide-sample-application-java.md | 16 +- .../dev-guide-sample-application-python.md | 18 +- ...ev-guide-sample-application-spring-boot.md | 8 +- develop/dev-guide-tidb-crud-sql.md | 2 +- develop/dev-guide-update-data.md | 2 +- encryption-at-rest.md | 2 +- explore-htap.md | 2 +- garbage-collection-configuration.md | 4 +- .../information-schema-resource-groups.md | 63 ++++ .../information-schema-slow-query.md | 11 + quick-start-with-tidb.md | 2 +- releases/release-5.2.0.md | 6 +- releases/release-6.0.0-dmr.md | 4 +- .../sql-statement-alter-resource-group.md | 91 ++++++ .../sql-statement-create-resource-group.md | 87 ++++++ .../sql-statement-drop-resource-group.md | 67 +++++ .../sql-statement-flashback-to-timestamp.md | 129 ++++++++ ...ql-statement-show-create-resource-group.md | 59 ++++ statement-summary-tables.md | 66 ++++ statistics.md | 5 + system-variables.md | 83 +++++ tidb-resource-control.md | 197 ++++++++++++ time-to-live.md | 284 ++++++++++++++++++ 39 files changed, 1557 insertions(+), 73 deletions(-) create mode 100644 _docHome.md create mode 100644 br/br-pitr-guide.md create mode 100644 information-schema/information-schema-resource-groups.md create mode 100644 sql-statements/sql-statement-alter-resource-group.md create mode 100644 sql-statements/sql-statement-create-resource-group.md create mode 100644 sql-statements/sql-statement-drop-resource-group.md create mode 100644 sql-statements/sql-statement-flashback-to-timestamp.md create mode 100644 sql-statements/sql-statement-show-create-resource-group.md create mode 100644 tidb-resource-control.md create mode 100644 time-to-live.md diff --git a/TOC.md b/TOC.md index 4a76dd8f18c86..3d07a65bf5da4 100644 --- a/TOC.md +++ b/TOC.md @@ -23,7 +23,7 @@ - Develop - [Overview](/develop/dev-guide-overview.md) - Quick Start - - [Build a TiDB Cluster in TiDB Cloud (Serverless Tier)](/develop/dev-guide-build-cluster-in-cloud.md) + - [Build a TiDB Serverless Cluster](/develop/dev-guide-build-cluster-in-cloud.md) - [CRUD SQL in TiDB](/develop/dev-guide-tidb-crud-sql.md) - Example Applications - [Golang](/develop/dev-guide-sample-application-golang.md) diff --git a/_docHome.md b/_docHome.md new file mode 100644 index 0000000000000..665d6764677c5 --- /dev/null +++ b/_docHome.md @@ -0,0 +1,158 @@ +--- +title: PingCAP Documentation +hide_sidebar: true +hide_commit: true +hide_leftNav: true +--- + + + + + +TiDB Cloud is a fully-managed Database-as-a-Service (DBaaS) that brings everything great about TiDB to your cloud, and lets you focus on your applications, not the complexities of your database. 
+ + + + + +See the documentation of TiDB Cloud + + + + + +Guides you through an easy way to get started with TiDB Cloud + + + + + +Helps you quickly complete a Proof of Concept (PoC) of TiDB Cloud + + + + + +Get the power of a cloud-native, distributed SQL database built for real-time analytics in a fully-managed service. + +Try Free + + + + + + + +TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability. You can deploy TiDB in a self-hosted environment or in the cloud. + + + + + +See the documentation of TiDB + + + + + +Walks you through the quickest way to get started with TiDB + + + + + +Learn how to deploy TiDB locally in production + + + + + +The open-source TiDB platform is released under the Apache 2.0 license, and supported by the community. + +Download + + + + + + + + + +Documentation for TiDB application developers + + + + + +Documentation for TiDB Cloud application developers + + + + + + + + + + + + + +Learn TiDB and TiDB Cloud through well-designed online courses and instructor-led training + + + + + +Join us on Slack or become a contributor + + + + + +Learn great articles about TiDB and TiDB Cloud + + + + + +See a compilation of short videos describing TiDB and a variety of use cases + + + + + +Learn events about PingCAP and the community + + + + + +Download eBooks and papers + + + + + +A powerful insight tool that analyzes in depth any GitHub repository, powered by TiDB Cloud + + + + + +Let’s work together to make the documentation better! + + + + + + + + diff --git a/br/backup-and-restore-overview.md b/br/backup-and-restore-overview.md index d4fab28e69514..4e79f3c846d88 100644 --- a/br/backup-and-restore-overview.md +++ b/br/backup-and-restore-overview.md @@ -18,7 +18,40 @@ Each TiKV node has a path in which the backup files generated in the backup oper ![br-arch](/media/br-arch.png) +<<<<<<< HEAD For detailed information about the BR design, see [BR Design Principles](/br/backup-and-restore-design.md). +======= +- PITR only supports restoring data to **an empty cluster**. +- PITR only supports cluster-level restore and does not support database-level or table-level restore. +- PITR does not support restoring the data of user tables or privilege tables from system tables. +- BR does not support running multiple backup tasks on a cluster **at the same time**. +- When a PITR is running, you cannot run a log backup task or use TiCDC to replicate data to a downstream cluster. + +### Some tips + +Snapshot backup: + +- It is recommended that you perform the backup operation during off-peak hours to minimize the impact on applications. +- It is recommended that you execute multiple backup or restore tasks one by one. Running multiple backup tasks in parallel leads to low performance. Worse still, a lack of collaboration between multiple tasks might result in task failures and affect cluster performance. + +Snapshot restore: + +- BR uses resources of the target cluster as much as possible. Therefore, it is recommended that you restore data to a new cluster or an offline cluster. Avoid restoring data to a production cluster. Otherwise, your application will be affected inevitably. + +Backup storage and network configuration: + +- It is recommended that you store backup data to a storage system that is compatible with Amazon S3, GCS, or Azure Blob Storage. 
+- You need to ensure that BR, TiKV, and the backup storage system have enough network bandwidth, and that the backup storage system can provide sufficient read and write performance (IOPS). Otherwise, they might become a performance bottleneck during backup and restore. + +## Use backup and restore + +The way to use BR varies with the deployment method of TiDB. This document introduces how to use the br command-line tool to back up and restore TiDB cluster data in a self-hosted deployment. + +For information about how to use this feature in other deployment scenarios, see the following documents: + +- [Back Up and Restore TiDB Deployed on TiDB Cloud](https://docs.pingcap.com/tidbcloud/backup-and-restore): It is recommended that you create TiDB clusters on [TiDB Cloud](https://www.pingcap.com/tidb-cloud/?from=en). TiDB Cloud offers fully managed databases to let you focus on your applications. +- [Back Up and Restore Data Using TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/stable/backup-restore-overview): If you deploy a TiDB cluster using TiDB Operator on Kubernetes, it is recommended to back up and restore data using Kubernetes CustomResourceDefinition (CRD). +>>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) ## BR features diff --git a/br/br-pitr-guide.md b/br/br-pitr-guide.md new file mode 100644 index 0000000000000..903f2ff625723 --- /dev/null +++ b/br/br-pitr-guide.md @@ -0,0 +1,137 @@ +--- +title: TiDB Log Backup and PITR Guide +summary: Learns about how to perform log backup and PITR in TiDB. +--- + +# TiDB Log Backup and PITR Guide + +A full backup (snapshot backup) contains the full cluster data at a certain point, while TiDB log backup can back up data written by applications to a specified storage in a timely manner. If you want to choose the restore point as required, that is, to perform point-in-time recovery (PITR), you can [start log backup](#start-log-backup) and [run full backup regularly](#run-full-backup-regularly). + +Before you back up or restore data using the br command-line tool (hereinafter referred to as `br`), you need to [install `br`](/br/br-use-overview.md#deploy-and-use-br) first. + +## Back up TiDB cluster + +### Start log backup + +> **Note:** +> +> - The following examples assume that Amazon S3 access keys and secret keys are used to authorize permissions. If IAM roles are used to authorize permissions, you need to set `--send-credentials-to-tikv` to `false`. +> - If other storage systems or authorization methods are used to authorize permissions, adjust the parameter settings according to [Backup Storages](/br/backup-and-restore-storages.md). + +To start a log backup, run `br log start`. A cluster can only run one log backup task each time. + +```shell +tiup br log start --task-name=pitr --pd "${PD_IP}:2379" \ +--storage 's3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}"' +``` + +After the log backup task starts, it runs in the background of the TiDB cluster until you stop it manually. During this process, the TiDB change logs are regularly backed up to the specified storage in small batches. To query the status of the log backup task, run the following command: + +```shell +tiup br log status --task-name=pitr --pd "${PD_IP}:2379" +``` + +Expected output: + +``` +● Total 1 Tasks. 
+> #1 < + name: pitr + status: ● NORMAL + start: 2022-05-13 11:09:40.7 +0800 + end: 2035-01-01 00:00:00 +0800 + storage: s3://backup-101/log-backup + speed(est.): 0.00 ops/s +checkpoint[global]: 2022-05-13 11:31:47.2 +0800; gap=4m53s +``` + +### Run full backup regularly + +The snapshot backup can be used as a method of full backup. You can run `br backup full` to back up the cluster snapshot to the backup storage according to a fixed schedule (for example, every 2 days). + +```shell +tiup br backup full --pd "${PD_IP}:2379" \ +--storage 's3://backup-101/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}"' +``` + +## Run PITR + +To restore the cluster to any point in time within the backup retention period, you can use `br restore point`. When you run this command, you need to specify the **time point you want to restore**, **the latest snapshot backup data before the time point**, and the **log backup data**. BR will automatically determine and read data needed for the restore, and then restore these data to the specified cluster in order. + +```shell +br restore point --pd "${PD_IP}:2379" \ +--storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}"' \ +--full-backup-storage='s3://backup-101/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}"' \ +--restored-ts '2022-05-15 18:00:00+0800' +``` + +During data restore, you can view the progress through the progress bar in the terminal. The restore is divided into two phases, full restore and log restore (restore meta files and restore KV files). After each phase is completed, `br` outputs information such as restore time and data size. + +```shell +Full Restore <--------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00% +*** ["Full Restore success summary"] ****** [total-take=xxx.xxxs] [restore-data-size(after-compressed)=xxx.xxx] [Size=xxxx] [BackupTS={TS}] [total-kv=xxx] [total-kv-size=xxx] [average-speed=xxx] +Restore Meta Files <--------------------------------------------------------------------------------------------------------------------------------------------------> 100.00% +Restore KV Files <----------------------------------------------------------------------------------------------------------------------------------------------------> 100.00% +*** ["restore log success summary"] [total-take=xxx.xx] [restore-from={TS}] [restore-to={TS}] [total-kv-count=xxx] [total-size=xxx] +``` + +## Clean up outdated data + +As described in the [Usage Overview of TiDB Backup and Restore](/br/br-use-overview.md): + +To perform PITR, you need to restore the full backup before the restore point, and the log backup between the full backup point and the restore point. Therefore, for log backups that exceed the backup retention period, you can use `br log truncate` to delete the backup before the specified time point. **It is recommended to only delete the log backup before the full snapshot**. + +The following steps describe how to clean up backup data that exceeds the backup retention period: + +1. Get the **last full backup** outside the backup retention period. +2. Use the `validate` command to get the time point corresponding to the backup. Assume that the backup data before 2022/09/01 needs to be cleaned, you should look for the last full backup before this time point and ensure that it will not be cleaned. 
+ + ```shell + FULL_BACKUP_TS=`tiup br validate decode --field="end-version" --storage "s3://backup-101/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}"| tail -n1` + ``` + +3. Delete log backup data earlier than the snapshot backup `FULL_BACKUP_TS`: + + ```shell + tiup br log truncate --until=${FULL_BACKUP_TS} --storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}"' + ``` + +4. Delete snapshot data earlier than the snapshot backup `FULL_BACKUP_TS`: + + ```shell + rm -rf s3://backup-101/snapshot-${date} + ``` + +## Performance and impact of PITR + +### Capabilities + +- On each TiKV node, PITR can restore snapshot data at a speed of 280 GB/h and log data 30 GB/h. +- BR deletes outdated log backup data at a speed of 600 GB/h. + +> **Note:** +> +> The preceding specifications are based on test results from the following two testing scenarios. The actual data might be different. +> +> - Snapshot data restore speed = Snapshot data size / (duration * the number of TiKV nodes) +> - Log data restore speed = Restored log data size / (duration * the number of TiKV nodes) + +Testing scenario 1 (on [TiDB Cloud](https://tidbcloud.com)): + +- The number of TiKV nodes (8 core, 16 GB memory): 21 +- The number of Regions: 183,000 +- New log data created in the cluster: 10 GB/h +- Write (INSERT/UPDATE/DELETE) QPS: 10,000 + +Testing scenario 2 (on TiDB Self-Hosted): + +- The number of TiKV nodes (8 core, 64 GB memory): 6 +- The number of Regions: 50,000 +- New log data created in the cluster: 10 GB/h +- Write (INSERT/UPDATE/DELETE) QPS: 10,000 + +## See also + +* [TiDB Backup and Restore Use Cases](/br/backup-and-restore-use-cases.md) +* [br Command-line Manual](/br/use-br-command-line-tool.md) +* [Log Backup and PITR Architecture](/br/br-log-architecture.md) diff --git a/clinic/clinic-introduction.md b/clinic/clinic-introduction.md index 664cced7f44a6..261f986447051 100644 --- a/clinic/clinic-introduction.md +++ b/clinic/clinic-introduction.md @@ -74,7 +74,7 @@ First, Diag gets cluster topology information from the deployment tool TiUP (tiu ## Next step -- Use PingCAP Clinic in an on-premise environment +- Use PingCAP Clinic in a self-hosted environment - [Quick Start with PingCAP Clinic](/clinic/quick-start-with-clinic.md) - [Troubleshoot Clusters using PingCAP Clinic](/clinic/clinic-user-guide-for-tiup.md) - [PingCAP Clinic Diagnostic Data](/clinic/clinic-data-instruction-for-tiup.md) diff --git a/clinic/clinic-user-guide-for-tiup.md b/clinic/clinic-user-guide-for-tiup.md index ffca3e01b9ec6..1493fbd30fbb8 100644 --- a/clinic/clinic-user-guide-for-tiup.md +++ b/clinic/clinic-user-guide-for-tiup.md @@ -9,7 +9,7 @@ For TiDB clusters and DM clusters deployed using TiUP, you can use PingCAP Clini > **Note:** > -> - This document **only** applies to clusters deployed using TiUP in an on-premises environment. For clusters deployed using TiDB Operator on Kubernetes, see [PingCAP Clinic for TiDB Operator environments](https://docs.pingcap.com/tidb-in-kubernetes/stable/clinic-user-guide). +> - This document **only** applies to clusters deployed using TiUP in a self-hosted environment. For clusters deployed using TiDB Operator on Kubernetes, see [PingCAP Clinic for TiDB Operator environments](https://docs.pingcap.com/tidb-in-kubernetes/stable/clinic-user-guide). > > - PingCAP Clinic **does not support** collecting data from clusters deployed using TiDB Ansible. 
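
For a TiUP-managed cluster, collecting diagnostic data with Diag usually comes down to a single `tiup diag collect` invocation. The sketch below is illustrative only: the cluster name `tidb-test` and the time window are assumptions, and the available flags can differ between Diag versions, so check `tiup diag collect --help` on your control machine first.

```shell
# Collect diagnostic data from a TiUP-deployed cluster named `tidb-test` (assumed name).
# -f and -t narrow collection to a time window; omit them to use the default range.
tiup diag collect tidb-test -f="-4h" -t="-2h"
```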
diff --git a/develop/dev-guide-aws-appflow-integration.md b/develop/dev-guide-aws-appflow-integration.md index 608a3c30781ba..1b2ff2a604b4c 100644 --- a/develop/dev-guide-aws-appflow-integration.md +++ b/develop/dev-guide-aws-appflow-integration.md @@ -7,9 +7,9 @@ summary: Introduce how to integrate TiDB with Amazon AppFlow step by step. [Amazon AppFlow](https://aws.amazon.com/appflow/) is a fully managed API integration service that you use to connect your software as a service (SaaS) applications to AWS services, and securely transfer data. With Amazon AppFlow, you can import and export data from and to TiDB into many types of data providers, such as Salesforce, Amazon S3, LinkedIn, and GitHub. For more information, see [Supported source and destination applications](https://docs.aws.amazon.com/appflow/latest/userguide/app-specific.html) in AWS documentation. -This document describes how to integrate TiDB with Amazon AppFlow and takes integrating a TiDB Cloud Serverless Tier cluster as an example. +This document describes how to integrate TiDB with Amazon AppFlow and takes integrating a TiDB Serverless cluster as an example. -If you do not have a TiDB cluster, you can create a [Serverless Tier](https://tidbcloud.com/console/clusters) cluster, which is free and can be created in approximately 30 seconds. +If you do not have a TiDB cluster, you can create a [TiDB Serverless](https://tidbcloud.com/console/clusters) cluster, which is free and can be created in approximately 30 seconds. ## Prerequisites @@ -66,7 +66,7 @@ git clone https://github.com/pingcap-inc/tidb-appflow-integration > > - The `--guided` option uses prompts to guide you through the deployment. Your input will be stored in a configuration file, which is `samconfig.toml` by default. > - `stack_name` specifies the name of AWS Lambda that you are deploying. - > - This prompted guide uses AWS as the cloud provider of TiDB Cloud Serverless Tier. To use Amazon S3 as the source or destination, you need to set the `region` of AWS Lambda as the same as that of Amazon S3. + > - This prompted guide uses AWS as the cloud provider of TiDB Serverless. To use Amazon S3 as the source or destination, you need to set the `region` of AWS Lambda as the same as that of Amazon S3. > - If you have already run `sam deploy --guided` before, you can just run `sam deploy` instead, and SAM CLI will use the configuration file `samconfig.toml` to simplify the interaction. If you see a similar output as follows, this Lambda is successfully deployed. @@ -148,7 +148,7 @@ Choose the **Source details** and **Destination details**. TiDB connector can be ``` 5. After the `sf_account` table is created, click **Connect**. A connection dialog is displayed. -6. In the **Connect to TiDB-Connector** dialog, enter the connection properties of the TiDB cluster. If you use a TiDB Cloud Serverless Tier cluster, you need to set the **TLS** option to `Yes`, which lets the TiDB connector use the TLS connection. Then, click **Connect**. +6. In the **Connect to TiDB-Connector** dialog, enter the connection properties of the TiDB cluster. If you use a TiDB Serverless cluster, you need to set the **TLS** option to `Yes`, which lets the TiDB connector use the TLS connection. Then, click **Connect**. 
![tidb connection message](/media/develop/aws-appflow-step-tidb-connection-message.png) @@ -244,5 +244,5 @@ test> SELECT * FROM sf_account; - If anything goes wrong, you can navigate to the [CloudWatch](https://console.aws.amazon.com/cloudwatch/home) page on the AWS Management Console to get logs. - The steps in this document are based on [Building custom connectors using the Amazon AppFlow Custom Connector SDK](https://aws.amazon.com/blogs/compute/building-custom-connectors-using-the-amazon-appflow-custom-connector-sdk/). -- [TiDB Cloud Serverless Tier](https://docs.pingcap.com/tidbcloud/select-cluster-tier#serverless-tier-beta) is **NOT** a production environment. +- [TiDB Serverless](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta) is **NOT** a production environment. - To prevent excessive length, the examples in this document only show the `Insert` strategy, but `Update` and `Upsert` strategies are also tested and can be used. \ No newline at end of file diff --git a/develop/dev-guide-build-cluster-in-cloud.md b/develop/dev-guide-build-cluster-in-cloud.md index 40b1865b6c27f..939e55a034e4c 100644 --- a/develop/dev-guide-build-cluster-in-cloud.md +++ b/develop/dev-guide-build-cluster-in-cloud.md @@ -1,15 +1,15 @@ --- -title: Build a TiDB Cluster in TiDB Cloud (Serverless Tier) -summary: Learn how to build a TiDB cluster in TiDB Cloud (Serverless Tier) and connect to a TiDB Cloud cluster. +title: Build a TiDB Serverless Cluster +summary: Learn how to build a TiDB Serverless cluster in TiDB Cloud and connect to it. --- -# Build a TiDB Cluster in TiDB Cloud (Serverless Tier) +# Build a TiDB Serverless Cluster -This document walks you through the quickest way to get started with TiDB. You will use [TiDB Cloud](https://en.pingcap.com/tidb-cloud) to create a Serverless Tier cluster, connect to it, and run a sample application on it. +This document walks you through the quickest way to get started with TiDB. You will use [TiDB Cloud](https://en.pingcap.com/tidb-cloud) to create a TiDB Serverless cluster, connect to it, and run a sample application on it. If you need to run TiDB on your local machine, see [Starting TiDB Locally](/quick-start-with-tidb.md). @@ -21,7 +21,7 @@ This document walks you through the quickest way to get started with TiDB Cloud. -## Step 1. Create a Serverless Tier cluster +## Step 1. Create a TiDB Serverless cluster 1. If you do not have a TiDB Cloud account, click [here](https://tidbcloud.com/free-trial) to sign up for an account. @@ -29,9 +29,15 @@ This document walks you through the quickest way to get started with TiDB Cloud. The [**Clusters**](https://tidbcloud.com/console/clusters) list page is displayed by default. +<<<<<<< HEAD 3. For new sign-up users, TiDB Cloud creates a default Serverless Tier cluster `Cluster0` for you automatically. You can either use this default cluster for the subsequent steps or create a new Serverless Tier cluster on your own. To create a new Serverless Tier cluster on your own, take the following operations: +======= +4. On the **Create Cluster** page, **Serverless** is selected by default. Update the default cluster name if necessary, and then select the region where you want to create your cluster. + +5. Click **Create** to create a TiDB Serverless cluster. +>>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) 1. Click **Create Cluster**. 2. On the **Create Cluster** page, **Serverless Tier** is selected by default. 
Update the default cluster name if necessary, select a target region of your cluster, and then click **Create**. Your Serverless Tier cluster will be created in approximately 30 seconds. @@ -46,7 +52,7 @@ This document walks you through the quickest way to get started with TiDB Cloud. > **Note:** > -> For [Serverless Tier clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#serverless-tier), when you connect to your cluster, you must include the prefix for your cluster in the user name and wrap the name with quotation marks. For more information, see [User name prefix](https://docs.pingcap.com/tidbcloud/select-cluster-tier#user-name-prefix). +> For [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta), when you connect to your cluster, you must include the prefix for your cluster in the user name and wrap the name with quotation marks. For more information, see [User name prefix](https://docs.pingcap.com/tidbcloud/select-cluster-tier#user-name-prefix). @@ -54,7 +60,7 @@ This document walks you through the quickest way to get started with TiDB Cloud. > **Note:** > -> For [Serverless Tier clusters](/tidb-cloud/select-cluster-tier.md#serverless-tier-beta), when you connect to your cluster, you must include the prefix for your cluster in the user name and wrap the name with quotation marks. For more information, see [User name prefix](/tidb-cloud/select-cluster-tier.md#user-name-prefix). +> For [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta), when you connect to your cluster, you must include the prefix for your cluster in the user name and wrap the name with quotation marks. For more information, see [User name prefix](/tidb-cloud/select-cluster-tier.md#user-name-prefix). @@ -131,7 +137,7 @@ mysql Ver 15.1 Distrib 5.5.68-MariaDB, for Linux (x86_64) using readline 5.1 -2. Run the connection string obtained in [Step 1](#step-1-create-a-serverless-tier-cluster). +2. Run the connection string obtained in [Step 1](#step-1-create-a-tidb-serverless-cluster). {{< copyable "shell-regular" >}} @@ -143,8 +149,8 @@ mysql Ver 15.1 Distrib 5.5.68-MariaDB, for Linux (x86_64) using readline 5.1 > **Note:** > -> - When you connect to a Serverless Tier cluster, you must [use the TLS connection](https://docs.pingcap.com/tidbcloud/secure-connections-to-serverless-tier-clusters). -> - If you encounter problems when connecting to a Serverless Tier cluster, you can read [Secure Connections to Serverless Tier Clusters](https://docs.pingcap.com/tidbcloud/secure-connections-to-serverless-tier-clusters) for more information. +> - When you connect to a TiDB Serverless cluster, you must [use the TLS connection](https://docs.pingcap.com/tidbcloud/secure-connections-to-serverless-tier-clusters). +> - If you encounter problems when connecting to a TiDB Serverless cluster, you can read [Secure Connections to TiDB Serverless Clusters](https://docs.pingcap.com/tidbcloud/secure-connections-to-serverless-tier-clusters) for more information. @@ -152,8 +158,8 @@ mysql Ver 15.1 Distrib 5.5.68-MariaDB, for Linux (x86_64) using readline 5.1 > **Note:** > -> - When you connect to a Serverless Tier cluster, you must [use the TLS connection](/tidb-cloud/secure-connections-to-serverless-tier-clusters.md). 
-> - If you encounter problems when connecting to a Serverless Tier cluster, you can read [Secure Connections to Serverless Tier Clusters](/tidb-cloud/secure-connections-to-serverless-tier-clusters.md) for more information. +> - When you connect to a TiDB Serverless cluster, you must [use the TLS connection](/tidb-cloud/secure-connections-to-serverless-tier-clusters.md). +> - If you encounter problems when connecting to a TiDB Serverless cluster, you can read [Secure Connections to TiDB Serverless Clusters](/tidb-cloud/secure-connections-to-serverless-tier-clusters.md) for more information. diff --git a/develop/dev-guide-create-database.md b/develop/dev-guide-create-database.md index d79fbbb44462f..c9ef93d081af3 100644 --- a/develop/dev-guide-create-database.md +++ b/develop/dev-guide-create-database.md @@ -11,7 +11,7 @@ This document describes how to create a database using SQL and various programmi Before creating a database, do the following: -- [Build a TiDB Cluster in TiDB Cloud (Serverless Tier)](/develop/dev-guide-build-cluster-in-cloud.md). +- [Build a TiDB Serverless Cluster](/develop/dev-guide-build-cluster-in-cloud.md). - Read [Schema Design Overview](/develop/dev-guide-schema-design-overview.md). ## What is database diff --git a/develop/dev-guide-create-secondary-indexes.md b/develop/dev-guide-create-secondary-indexes.md index 486cc168af422..0e57fd3c6f4a9 100644 --- a/develop/dev-guide-create-secondary-indexes.md +++ b/develop/dev-guide-create-secondary-indexes.md @@ -11,7 +11,7 @@ This document describes how to create a secondary index using SQL and various pr Before creating a secondary index, do the following: -- [Build a TiDB Cluster in TiDB Cloud (Serverless Tier)](/develop/dev-guide-build-cluster-in-cloud.md). +- [Build a TiDB Serverless Cluster](/develop/dev-guide-build-cluster-in-cloud.md). - Read [Schema Design Overview](/develop/dev-guide-schema-design-overview.md). - [Create a Database](/develop/dev-guide-create-database.md). - [Create a Table](/develop/dev-guide-create-table.md). diff --git a/develop/dev-guide-create-table.md b/develop/dev-guide-create-table.md index 75af52d0a6273..14a7e10295c29 100644 --- a/develop/dev-guide-create-table.md +++ b/develop/dev-guide-create-table.md @@ -11,7 +11,7 @@ This document introduces how to create tables using the SQL statement and the re Before reading this document, make sure that the following tasks are completed: -- [Build a TiDB Cluster in TiDB Cloud (Serverless Tier)](/develop/dev-guide-build-cluster-in-cloud.md). +- [Build a TiDB Serverless Cluster](/develop/dev-guide-build-cluster-in-cloud.md). - Read [Schema Design Overview](/develop/dev-guide-schema-design-overview.md). - [Create a Database](/develop/dev-guide-create-database.md). @@ -290,7 +290,7 @@ ALTER TABLE `bookshop`.`ratings` SET TIFLASH REPLICA 1; > **Note:** > -> If your cluster does not contain **TiFlash** nodes, this SQL statement will report an error: `1105 - the tiflash replica count: 1 should be less than the total tiflash server count: 0`. You can use [Build a TiDB Cluster in TiDB Cloud (Serverless Tier)](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-serverless-tier-cluster) to create a Serverless Tier cluster that includes **TiFlash**. +> If your cluster does not contain **TiFlash** nodes, this SQL statement will report an error: `1105 - the tiflash replica count: 1 should be less than the total tiflash server count: 0`. 
You can use [Build a TiDB Serverless Cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-tidb-serverless-cluster) to create a TiDB Serverless cluster that includes **TiFlash**. Then you can go on to perform the following query: diff --git a/develop/dev-guide-delete-data.md b/develop/dev-guide-delete-data.md index 4c8c07773d03f..e6d1d64c52e2c 100644 --- a/develop/dev-guide-delete-data.md +++ b/develop/dev-guide-delete-data.md @@ -11,7 +11,7 @@ This document describes how to use the [DELETE](/sql-statements/sql-statement-de Before reading this document, you need to prepare the following: -- [Build a TiDB Cluster in TiDB Cloud (Serverless Tier)](/develop/dev-guide-build-cluster-in-cloud.md) +- [Build a TiDB Serverless Cluster](/develop/dev-guide-build-cluster-in-cloud.md) - Read [Schema Design Overview](/develop/dev-guide-schema-design-overview.md), [Create a Database](/develop/dev-guide-create-database.md), [Create a Table](/develop/dev-guide-create-table.md), and [Create Secondary Indexes](/develop/dev-guide-create-secondary-indexes.md) - [Insert Data](/develop/dev-guide-insert-data.md) diff --git a/develop/dev-guide-insert-data.md b/develop/dev-guide-insert-data.md index abc4b106a3659..b05aec3a5a85f 100644 --- a/develop/dev-guide-insert-data.md +++ b/develop/dev-guide-insert-data.md @@ -13,7 +13,7 @@ This document describes how to insert data into TiDB by using the SQL language w Before reading this document, you need to prepare the following: -- [Build a TiDB Cluster in TiDB Cloud (Serverless Tier)](/develop/dev-guide-build-cluster-in-cloud.md). +- [Build a TiDB Serverless Cluster](/develop/dev-guide-build-cluster-in-cloud.md). - Read [Schema Design Overview](/develop/dev-guide-schema-design-overview.md), [Create a Database](/develop/dev-guide-create-database.md), [Create a Table](/develop/dev-guide-create-table.md), and [Create Secondary Indexes](/develop/dev-guide-create-secondary-indexes.md) ## Insert rows diff --git a/develop/dev-guide-outdated-for-django.md b/develop/dev-guide-outdated-for-django.md index 031ea6b5a4d32..969d8475dbd77 100644 --- a/develop/dev-guide-outdated-for-django.md +++ b/develop/dev-guide-outdated-for-django.md @@ -26,7 +26,7 @@ The above command starts a temporary and single-node cluster with mock TiKV. The > > To deploy a "real" TiDB cluster for production, see the following guides: > -> + [Deploy TiDB using TiUP for On-Premises](https://docs.pingcap.com/tidb/v5.1/production-deployment-using-tiup) +> + [Deploy TiDB using TiUP for Self-Hosted Environment](https://docs.pingcap.com/tidb/v5.1/production-deployment-using-tiup) > + [Deploy TiDB on Kubernetes](https://docs.pingcap.com/tidb-in-kubernetes/stable) > > You can also [use TiDB Cloud](https://pingcap.com/products/tidbcloud/), a fully-managed Database-as-a-Service (DBaaS) of TiDB. diff --git a/develop/dev-guide-proxysql-integration.md b/develop/dev-guide-proxysql-integration.md index 18d1b729f606e..cbd15c6103e73 100644 --- a/develop/dev-guide-proxysql-integration.md +++ b/develop/dev-guide-proxysql-integration.md @@ -119,13 +119,21 @@ systemctl start docker ### Option 1: Integrate TiDB Cloud Serverless Tier with ProxySQL -For this integration, you will be using the [ProxySQL Docker image](https://hub.docker.com/r/proxysql/proxysql) along with a TiDB Serverless Tier cluster. The following steps will set up ProxySQL on port `16033`, so make sure this port is available. 
+For this integration, you will be using the [ProxySQL Docker image](https://hub.docker.com/r/proxysql/proxysql) along with a TiDB Serverless cluster. The following steps will set up ProxySQL on port `16033`, so make sure this port is available. -#### Step 1. Create a TiDB Cloud Serverless Tier cluster +#### Step 1. Create a TiDB Serverless cluster +<<<<<<< HEAD 1. [Create a free TiDB Serverless Tier cluster](https://docs.pingcap.com/tidbcloud/tidb-cloud-quickstart#step-1-create-a-tidb-cluster). 2. Follow the steps in [Connect via Standard Connection](https://docs.pingcap.com/tidbcloud/connect-via-standard-connection#serverless-tier) to get the connection string and set a password for your cluster. 3. In the connection string, locate your cluster endpoint after `-h`, your user name after `-u`, and your cluster port after `-P`. +======= +1. [Create a free TiDB Serverless cluster](https://docs.pingcap.com/tidbcloud/tidb-cloud-quickstart#step-1-create-a-tidb-cluster). Remember the root password that you set for your cluster. +2. Get your cluster hostname, port, and username for later use. + + 1. On the [Clusters](https://tidbcloud.com/console/clusters) page, click your cluster name to go to the cluster overview page. + 2. On the cluster overview page, locate the **Connection** pane, and then copy the `Endpoint`, `Port`, and `User` fields, where the `Endpoint` is your cluster hostname. +>>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) #### Step 2. Generate ProxySQL configuration files @@ -325,12 +333,12 @@ For this integration, you will be using the [ProxySQL Docker image](https://hub. > > 1. Adds a user using the username and password of your cluster. > 2. Assigns the user to the monitoring account. - > 3. Adds your TiDB Serverless Tier cluster to the list of hosts. - > 4. Enables a secure connection between ProxySQL and the TiDB Serverless Tier cluster. + > 3. Adds your TiDB Serverless cluster to the list of hosts. + > 4. Enables a secure connection between ProxySQL and the TiDB Serverless cluster. > > To have a better understanding, it is strongly recommended that you check the `proxysql-prepare.sql` file. To learn more about ProxySQL configuration, see [ProxySQL documentation](https://proxysql.com/documentation/proxysql-configuration/). - The following is an example output. You will see that the hostname of your cluster is shown in the output, which means that the connectivity between ProxySQL and the TiDB Serverless Tier cluster is established. + The following is an example output. You will see that the hostname of your cluster is shown in the output, which means that the connectivity between ProxySQL and the TiDB Serverless cluster is established. ``` *************************** 1. row *************************** @@ -386,7 +394,7 @@ For this integration, you will be using the [ProxySQL Docker image](https://hub. SELECT VERSION(); ``` - If the TiDB version is displayed, you are successfully connected to your TiDB Serverless Tier cluster through ProxySQL. To exit from the MySQL client anytime, enter `quit` and press enter. + If the TiDB version is displayed, you are successfully connected to your TiDB Serverless cluster through ProxySQL. To exit from the MySQL client anytime, enter `quit` and press enter. > **Note:** > @@ -634,7 +642,7 @@ ProxySQL can be installed on many different platforms. 
The following takes CentO For a full list of supported platforms and the corresponding version requirements, see [ProxySQL documentation](https://proxysql.com/documentation/installing-proxysql/). -#### Step 1. Create a TiDB Cloud Dedicated Tier cluster +#### Step 1. Create a TiDB Dedicated cluster For detailed steps, see [Create a TiDB Cluster](https://docs.pingcap.com/tidbcloud/create-tidb-cluster). @@ -685,7 +693,7 @@ To use ProxySQL as a proxy for TiDB, you need to configure ProxySQL. To do so, y The above step will take you to the ProxySQL admin prompt. -2. Configure the TiDB clusters to be used, where you can add one or multiple TiDB clusters to ProxySQL. The following statement will add one TiDB Cloud Dedicated Tier cluster for example. You need to replace `` and `` with your TiDB Cloud endpoint and port (the default port is `4000`). +2. Configure the TiDB clusters to be used, where you can add one or multiple TiDB clusters to ProxySQL. The following statement will add one TiDB Dedicated cluster for example. You need to replace `` and `` with your TiDB Cloud endpoint and port (the default port is `4000`). ```sql INSERT INTO mysql_servers(hostgroup_id, hostname, port) diff --git a/develop/dev-guide-sample-application-golang.md b/develop/dev-guide-sample-application-golang.md index aac48b3bd4e57..55984c435dbd4 100644 --- a/develop/dev-guide-sample-application-golang.md +++ b/develop/dev-guide-sample-application-golang.md @@ -21,9 +21,9 @@ This document describes how to use TiDB and Golang to build a simple CRUD applic The following introduces how to start a TiDB cluster. -**Use a TiDB Cloud Serverless Tier cluster** +**Use a TiDB Serverless cluster** -For detailed steps, see [Create a Serverless Tier cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-serverless-tier-cluster). +For detailed steps, see [Create a TiDB Serverless cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-tidb-serverless-cluster). **Use a local cluster** @@ -33,7 +33,7 @@ For detailed steps, see [Deploy a local test cluster](/quick-start-with-tidb.md# -See [Create a Serverless Tier cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-serverless-tier-cluster). +See [Create a TiDB Serverless cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-tidb-serverless-cluster). @@ -769,7 +769,7 @@ When using go-sql-driver/mysql, you need to connect to your cluster and run the
-If you are using a TiDB Cloud Serverless Tier cluster, modify the value of the `dsn` in `gorm.go`: +If you are using a TiDB Serverless cluster, modify the value of the `dsn` in `gorm.go`: ```go dsn := "root:@tcp(127.0.0.1:4000)/test?charset=utf8mb4" @@ -796,7 +796,7 @@ dsn := "2aEp24QWEDLqRFs.root:123456@tcp(xxx.tidbcloud.com:4000)/test?charset=utf
-If you are using a TiDB Cloud Serverless Tier cluster, modify the value of the `dsn` in `sqldriver.go`: +If you are using a TiDB Serverless cluster, modify the value of the `dsn` in `sqldriver.go`: ```go dsn := "root:@tcp(127.0.0.1:4000)/test?charset=utf8mb4" diff --git a/develop/dev-guide-sample-application-java.md b/develop/dev-guide-sample-application-java.md index 3ecf39a9e8312..8e3d34c858927 100644 --- a/develop/dev-guide-sample-application-java.md +++ b/develop/dev-guide-sample-application-java.md @@ -23,9 +23,9 @@ This document describes how to use TiDB and Java to build a simple CRUD applicat The following introduces how to start a TiDB cluster. -**Use a TiDB Cloud Serverless Tier cluster** +**Use a TiDB Serverless cluster** -For detailed steps, see [Create a Serverless Tier cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-serverless-tier-cluster). +For detailed steps, see [Create a TiDB Serverless cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-tidb-serverless-cluster). **Use a local cluster** @@ -35,7 +35,7 @@ For detailed steps, see [Deploy a local test cluster](/quick-start-with-tidb.md# -See [Create a Serverless Tier cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-serverless-tier-cluster). +See [Create a TiDB Serverless cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-tidb-serverless-cluster). @@ -294,7 +294,7 @@ public interface PlayerMapper { id, coins, goods - select + select from player where `id` = #{id,jdbcType=VARCHAR} @@ -1449,7 +1449,7 @@ When using JDBC, you need to connect to your cluster and run the statement in th
-If you are using a TiDB Cloud Serverless Tier cluster, modify the `dataSource.url`, `dataSource.username`, `dataSource.password` in `mybatis-config.xml`. +If you are using a TiDB Serverless cluster, modify the `dataSource.url`, `dataSource.username`, `dataSource.password` in `mybatis-config.xml`. ```xml @@ -1524,7 +1524,7 @@ In this case, you can modify the parameters in `dataSource` node as follows:
-If you are using a TiDB Cloud Serverless Tier cluster, modify the `hibernate.connection.url`, `hibernate.connection.username`, `hibernate.connection.password` in `hibernate.cfg.xml`. +If you are using a TiDB Serverless cluster, modify the `hibernate.connection.url`, `hibernate.connection.username`, `hibernate.connection.password` in `hibernate.cfg.xml`. ```xml @@ -1590,7 +1590,7 @@ In this case, you can modify the parameters as follows:
-If you are using a TiDB Cloud Serverless Tier cluster, modify the parameters of the host, port, user, and password in `JDBCExample.java`: +If you are using a TiDB Serverless cluster, modify the parameters of the host, port, user, and password in `JDBCExample.java`: ```java mysqlDataSource.setServerName("localhost"); diff --git a/develop/dev-guide-sample-application-python.md b/develop/dev-guide-sample-application-python.md index acc77a2c0dbfe..b0244bfbe5e0b 100644 --- a/develop/dev-guide-sample-application-python.md +++ b/develop/dev-guide-sample-application-python.md @@ -21,9 +21,9 @@ This document describes how to use TiDB and Python to build a simple CRUD applic The following introduces how to start a TiDB cluster. -**Use a TiDB Cloud Serverless Tier cluster** +**Use a TiDB Serverless cluster** -For detailed steps, see [Create a Serverless Tier cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-serverless-tier-cluster). +For detailed steps, see [Create a TiDB Serverless cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-tidb-serverless-cluster). **Use a local cluster** @@ -33,7 +33,7 @@ For detailed steps, see [Deploy a local test cluster](/quick-start-with-tidb.md# -See [Create a Serverless Tier cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-serverless-tier-cluster). +See [Create a TiDB Serverless cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-tidb-serverless-cluster). @@ -823,13 +823,13 @@ If you are not using a local cluster, or have not installed a MySQL client, conn ### Step 3.2 Modify parameters for TiDB Cloud -If you are using a TiDB Cloud Serverless Tier cluster, you need to provide your CA root path and replace `` in the following examples with your CA path. To get the CA root path on your system, refer to [Where is the CA root path on my system?](https://docs.pingcap.com/tidbcloud/secure-connections-to-serverless-tier-clusters#where-is-the-ca-root-path-on-my-system). +If you are using a TiDB Serverless cluster, you need to provide your CA root path and replace `` in the following examples with your CA path. To get the CA root path on your system, refer to [Where is the CA root path on my system?](https://docs.pingcap.com/tidbcloud/secure-connections-to-serverless-tier-clusters#where-is-the-ca-root-path-on-my-system).
-If you are using a TiDB Cloud Serverless Tier cluster, modify the parameters of the `create_engine` function in `sqlalchemy_example.py`: +If you are using a TiDB Serverless cluster, modify the parameters of the `create_engine` function in `sqlalchemy_example.py`: ```python engine = create_engine('mysql://root:@127.0.0.1:4000/test') @@ -856,7 +856,7 @@ engine = create_engine('mysql://2aEp24QWEDLqRFs.root:123456@xxx.tidbcloud.com:40
-If you are using a TiDB Cloud Serverless Tier cluster, modify the parameters of the `create_engine` function in `sqlalchemy_example.py`: +If you are using a TiDB Serverless cluster, modify the parameters of the `create_engine` function in `sqlalchemy_example.py`: ```python db = connect('mysql://root:@127.0.0.1:4000/test') @@ -890,7 +890,7 @@ Because peewee will pass parameters to the driver, you need to pay attention to
-If you are using a TiDB Cloud Serverless Tier cluster, change the `get_connection` function in `mysqlclient_example.py`: +If you are using a TiDB Serverless cluster, change the `get_connection` function in `mysqlclient_example.py`: ```python def get_connection(autocommit: bool = True) -> MySQLdb.Connection: @@ -932,7 +932,7 @@ def get_connection(autocommit: bool = True) -> MySQLdb.Connection:
-If you are using a TiDB Cloud Serverless Tier cluster, change the `get_connection` function in `pymysql_example.py`: +If you are using a TiDB Serverless cluster, change the `get_connection` function in `pymysql_example.py`: ```python def get_connection(autocommit: bool = False) -> Connection: @@ -971,7 +971,7 @@ def get_connection(autocommit: bool = False) -> Connection:
-If you are using a TiDB Cloud Serverless Tier cluster, change the `get_connection` function in `mysql_connector_python_example.py`: +If you are using a TiDB Serverless cluster, change the `get_connection` function in `mysql_connector_python_example.py`: ```python def get_connection(autocommit: bool = True) -> MySQLConnection: diff --git a/develop/dev-guide-sample-application-spring-boot.md b/develop/dev-guide-sample-application-spring-boot.md index e6c5b95f82faa..543842a1e6e4e 100644 --- a/develop/dev-guide-sample-application-spring-boot.md +++ b/develop/dev-guide-sample-application-spring-boot.md @@ -21,9 +21,9 @@ You can build your own application based on this example. The following introduces how to start a TiDB cluster. -**Use a TiDB Cloud Serverless Tier cluster** +**Use a TiDB Serverless cluster** -For detailed steps, see [Create a Serverless Tier cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-serverless-tier-cluster). +For detailed steps, see [Create a TiDB Serverless cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-tidb-serverless-cluster). **Use a local cluster** @@ -33,7 +33,7 @@ For detailed steps, see [Deploy a local test cluster](/quick-start-with-tidb.md# -See [Create a Serverless Tier cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-serverless-tier-cluster). +See [Create a TiDB Serverless cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-tidb-serverless-cluster). @@ -97,7 +97,7 @@ If you want to learn more about the code of this application, refer to [Implemen ### Step 5.1 Change parameters -If you are using a TiDB Cloud Serverless Tier cluster, change the `spring.datasource.url`, `spring.datasource.username`, `spring.datasource.password` parameters in the `application.yml` (located in `src/main/resources`). +If you are using a TiDB Serverless cluster, change the `spring.datasource.url`, `spring.datasource.username`, `spring.datasource.password` parameters in the `application.yml` (located in `src/main/resources`). ```yaml spring: diff --git a/develop/dev-guide-tidb-crud-sql.md b/develop/dev-guide-tidb-crud-sql.md index cbf26a3b34549..92233d7e3a50f 100644 --- a/develop/dev-guide-tidb-crud-sql.md +++ b/develop/dev-guide-tidb-crud-sql.md @@ -9,7 +9,7 @@ This document briefly introduces how to use TiDB's CURD SQL. ## Before you start -Please make sure you are connected to a TiDB cluster. If not, refer to [Build a TiDB Cluster in TiDB Cloud (Serverless Tier)](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-serverless-tier-cluster) to create a Serverless Tier cluster. +Please make sure you are connected to a TiDB cluster. If not, refer to [Build a TiDB Serverless Cluster](/develop/dev-guide-build-cluster-in-cloud.md#step-1-create-a-tidb-serverless-cluster) to create a TiDB Serverless cluster. ## Explore SQL with TiDB diff --git a/develop/dev-guide-update-data.md b/develop/dev-guide-update-data.md index b0a96d7f84c5d..2dcb69a80ccbb 100644 --- a/develop/dev-guide-update-data.md +++ b/develop/dev-guide-update-data.md @@ -14,7 +14,7 @@ This document describes how to use the following SQL statements to update the da Before reading this document, you need to prepare the following: -- [Build a TiDB Cluster in TiDB Cloud (Serverless Tier)](/develop/dev-guide-build-cluster-in-cloud.md). +- [Build a TiDB Serverless Cluster](/develop/dev-guide-build-cluster-in-cloud.md). 
- Read [Schema Design Overview](/develop/dev-guide-schema-design-overview.md), [Create a Database](/develop/dev-guide-create-database.md), [Create a Table](/develop/dev-guide-create-table.md), and [Create Secondary Indexes](/develop/dev-guide-create-secondary-indexes.md). - If you want to `UPDATE` data, you need to [insert data](/develop/dev-guide-insert-data.md) first. diff --git a/encryption-at-rest.md b/encryption-at-rest.md index 4fc25a549aa1d..31b3d51f03056 100644 --- a/encryption-at-rest.md +++ b/encryption-at-rest.md @@ -21,7 +21,7 @@ When a TiDB cluster is deployed, the majority of user data is stored on TiKV and TiKV supports encryption at rest. This feature allows TiKV to transparently encrypt data files using [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in [CTR](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation) mode. To enable encryption at rest, an encryption key must be provided by the user and this key is called master key. TiKV automatically rotates data keys that it used to encrypt actual data files. Manually rotating the master key can be done occasionally. Note that encryption at rest only encrypts data at rest (namely, on disk) and not while data is transferred over network. It is advised to use TLS together with encryption at rest. -Optionally, you can use AWS KMS for both cloud and on-premises deployments. You can also supply the plaintext master key in a file. +Optionally, you can use AWS KMS for both cloud and self-hosted deployments. You can also supply the plaintext master key in a file. TiKV currently does not exclude encryption keys and user data from core dumps. It is advised to disable core dumps for the TiKV process when using encryption at rest. This is not currently handled by TiKV itself. diff --git a/explore-htap.md b/explore-htap.md index 5d32d5cb3d9a2..ec62e5e4be6f4 100644 --- a/explore-htap.md +++ b/explore-htap.md @@ -13,7 +13,7 @@ This guide describes how to explore and use the features of TiDB Hybrid Transact ## Use cases -TiDB HTAP can handle the massive data that increases rapidly, reduce the cost of DevOps, and be deployed in either on-premises or cloud environments easily, which brings the value of data assets in real time. +TiDB HTAP can handle the massive data that increases rapidly, reduce the cost of DevOps, and be deployed in either self-hosted or cloud environments easily, which brings the value of data assets in real time. The following are the typical use cases of HTAP: diff --git a/garbage-collection-configuration.md b/garbage-collection-configuration.md index f2857d6898d57..89e8beaacfcaf 100644 --- a/garbage-collection-configuration.md +++ b/garbage-collection-configuration.md @@ -20,7 +20,7 @@ Garbage collection is configured via the following system variables: > **Note:** > -> This section is only applicable to on-premises TiDB. TiDB Cloud does not have a GC I/O limit by default. +> This section is only applicable to TiDB Self-Hosted. TiDB Cloud does not have a GC I/O limit by default. @@ -58,7 +58,7 @@ Based on the `DISTRIBUTED` GC mode, the mechanism of GC in Compaction Filter use > **Note:** > -> The following examples of modifying TiKV configurations are only applicable to on-premises TiDB. For TiDB Cloud, the mechanism of GC in Compaction Filter is enabled by default. +> The following examples of modifying TiKV configurations are only applicable to TiDB Self-Hosted. For TiDB Cloud, the mechanism of GC in Compaction Filter is enabled by default. 
diff --git a/information-schema/information-schema-resource-groups.md b/information-schema/information-schema-resource-groups.md new file mode 100644 index 0000000000000..20618008182b5 --- /dev/null +++ b/information-schema/information-schema-resource-groups.md @@ -0,0 +1,63 @@ +--- +title: RESOURCE_GROUPS +summary: Learn the `RESOURCE_GROUPS` information_schema table. +--- + +# RESOURCE_GROUPS + +> **Warning:** +> +> This feature is experimental and its form and usage might change in subsequent versions. + + + +> **Note:** +> +> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). + + + +The `RESOURCE_GROUPS` table shows the information about all resource groups. For more information, see [Use Resource Control to Achieve Resource Isolation](/tidb-resource-control.md). + +```sql +USE information_schema; +DESC resource_groups; +``` + +```sql ++------------+-------------+------+------+---------+-------+ +| Field | Type | Null | Key | Default | Extra | ++------------+-------------+------+------+---------+-------+ +| NAME | varchar(32) | NO | | NULL | | +| RU_PER_SEC | bigint(21) | YES | | NULL | | +| BURSTABLE | varchar(3) | YES | | NULL | | ++------------+-------------+------+------+---------+-------+ +3 rows in set (0.00 sec) +``` + +## Examples + +```sql +mysql> CREATE RESOURCE GROUP rg1 RU_PER_SEC=1000; -- Create the resource group rg1 +Query OK, 0 rows affected (0.34 sec) +mysql> SHOW CREATE RESOURCE GROUP rg1; -- Display the definition of the rg1 resource group ++----------------+---------------------------------------------+ +| Resource_Group | Create Resource Group | ++----------------+---------------------------------------------+ +| rg1 | CREATE RESOURCE GROUP `rg1` RU_PER_SEC=1000 | ++----------------+---------------------------------------------+ +1 row in set (0.00 sec) +mysql> SELECT * FROM information_schema.resource_groups WHERE NAME = 'rg1'; ++------+------------+-----------+ +| NAME | RU_PER_SEC | BURSTABLE | ++------+------------+-----------+ +| rg1 | 1000 | NO | ++------+------------+-----------+ +1 row in set (0.00 sec) +``` + +The descriptions of the columns in the `RESOURCE_GROUPS` table are as follows: + +* `NAME`: the name of the resource group. +* `RU_PER_SEC`:the backfilling speed of the resource group. The unit is RU/second, in which RU means [Request Unit](/tidb-resource-control.md#what-is-request-unit-ru). +* `BURSTABLE`: whether to allow the resource group to overuse the available system resources. diff --git a/information-schema/information-schema-slow-query.md b/information-schema/information-schema-slow-query.md index 56d5d3c5bf121..1f70505eba0e9 100644 --- a/information-schema/information-schema-slow-query.md +++ b/information-schema/information-schema-slow-query.md @@ -7,6 +7,17 @@ summary: Learn the `SLOW_QUERY` information_schema table. The `SLOW_QUERY` table provides the slow query information of the current node, which is the parsing result of the TiDB slow log file. The column names in the table are corresponding to the field names in the slow log. +<<<<<<< HEAD +======= + + +> **Note:** +> +> The `SLOW_QUERY` table is unavailable for [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). + + + +>>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) For how to use this table to identify problematic statements and improve query performance, see [Slow Query Log Document](/identify-slow-queries.md). 
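
When you only need a quick look at the heaviest statements on the current node, querying this table directly is often enough. The following is a minimal sketch, assuming the standard `query_time`, `query`, and `is_internal` fields described in the slow log documentation; adjust the filter and limit to your needs.

```sql
-- List the two user statements with the longest execution time recorded on this node,
-- excluding TiDB-internal queries.
SELECT query_time, query
FROM information_schema.slow_query
WHERE is_internal = false
ORDER BY query_time DESC
LIMIT 2;
```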
diff --git a/quick-start-with-tidb.md b/quick-start-with-tidb.md index bc8cc211e691c..b1d29280d610e 100644 --- a/quick-start-with-tidb.md +++ b/quick-start-with-tidb.md @@ -14,7 +14,7 @@ This guide walks you through the quickest way to get started with TiDB. For non- > > The deployment method provided in this guide is **ONLY FOR** quick start, **NOT FOR** production. > -> - To deploy an on-premises production cluster, see [production installation guide](/production-deployment-using-tiup.md). +> - To deploy a self-hosted production cluster, see [production installation guide](/production-deployment-using-tiup.md). > - To deploy TiDB on Kubernetes, see [Get Started with TiDB on Kubernetes](https://docs.pingcap.com/tidb-in-kubernetes/stable/get-started). > - To manage TiDB in the cloud, see [TiDB Cloud Quick Start](https://docs.pingcap.com/tidbcloud/tidb-cloud-quickstart). diff --git a/releases/release-5.2.0.md b/releases/release-5.2.0.md index 79f428d01dc80..3bca214609478 100644 --- a/releases/release-5.2.0.md +++ b/releases/release-5.2.0.md @@ -20,7 +20,7 @@ In v5.2, the key new features and improvements are as follows: - Add the TiFlash I/O traffic limit feature to improve the stability of read and write for TiFlash - TiKV introduces a new flow control mechanism to replace the previous RocksDB write stall mechanism to improve the stability of TiKV flow control - Simplify the operation and maintenance of Data Migration (DM) to reduce the management cost. -- TiCDC supports HTTP protocol OpenAPI to manage TiCDC tasks. It provides a more user-friendly operation method for both Kubernetes and on-premises environments. (Experimental feature) +- TiCDC supports HTTP protocol OpenAPI to manage TiCDC tasks. It provides a more user-friendly operation method for both Kubernetes and self-hosted environments. (Experimental feature) ## Compatibility changes @@ -165,7 +165,7 @@ In v5.2, the key new features and improvements are as follows: ### TiDB data share subscription -TiCDC supports using the HTTP protocol (OpenAPI) to manage TiCDC tasks, which is a more user-friendly operation method for both Kubernetes and on-premises environments. (Experimental feature) +TiCDC supports using the HTTP protocol (OpenAPI) to manage TiCDC tasks, which is a more user-friendly operation method for both Kubernetes and self-hosted environments. 
(Experimental feature) [#2411](https://github.com/pingcap/tiflow/issues/2411) @@ -210,7 +210,7 @@ Support running the `tiup playground` command on Mac computers with Apple M1 chi - Support completing the garbage collection automatically for the bindings in the "deleted" status [#26206](https://github.com/pingcap/tidb/pull/26206) - Support showing whether a binding is used for query optimization in the result of `EXPLAIN VERBOSE` [#26930](https://github.com/pingcap/tidb/pull/26930) - Add a new status variation `last_plan_binding_update_time` to view the timestamp corresponding to the binding cache in the current TiDB instance [#26340](https://github.com/pingcap/tidb/pull/26340) - - Support reporting an error when starting binding evolution or running `admin evolve bindings` to ban the baseline evolution (currently disabled in the on-premises TiDB version because it is an experimental feature) affecting other features [#26333](https://github.com/pingcap/tidb/pull/26333) + - Support reporting an error when starting binding evolution or running `admin evolve bindings` to ban the baseline evolution (currently disabled in the TiDB Self-Hosted version because it is an experimental feature) affecting other features [#26333](https://github.com/pingcap/tidb/pull/26333) + PD diff --git a/releases/release-6.0.0-dmr.md b/releases/release-6.0.0-dmr.md index 09526e32a9742..d2d3b5f286792 100644 --- a/releases/release-6.0.0-dmr.md +++ b/releases/release-6.0.0-dmr.md @@ -41,7 +41,7 @@ Starting from TiDB v6.0.0, TiDB provides two types of releases: - Development Milestone Releases - Development Milestone Releases (DMR) are released approximately every two months. A DMR introduces new features and improvements, but does not accept patch releases. It is not recommended for on-premises users to use DMR in production environments. For example, v6.0.0-DMR is a DMR. + Development Milestone Releases (DMR) are released approximately every two months. A DMR introduces new features and improvements, but does not accept patch releases. It is not recommended for users to use DMR in production environments. For example, v6.0.0-DMR is a DMR. TiDB v6.0.0 is a DMR, and its version is 6.0.0-DMR. @@ -266,7 +266,7 @@ TiDB v6.0.0 is a DMR, and its version is 6.0.0-DMR. - An enterprise-level database management platform, TiDB Enterprise Manager - TiDB Enterprise Manager (TiEM) is an enterprise-level database management platform based on the TiDB database, which aims to help users manage TiDB clusters in on-premises or public cloud environments. + TiDB Enterprise Manager (TiEM) is an enterprise-level database management platform based on the TiDB database, which aims to help users manage TiDB clusters in self-hosted or public cloud environments. TiEM not only provides full lifecycle visual management for TiDB clusters, but also provides one-stop services: parameter management, version upgrades, cluster clone, active-standby cluster switching, data import and export, data replication, and data backup and restore services. TiEM can improve the efficiency of DevOps on TiDB and reduce the DevOps cost for enterprises. diff --git a/sql-statements/sql-statement-alter-resource-group.md b/sql-statements/sql-statement-alter-resource-group.md new file mode 100644 index 0000000000000..017c26d117f86 --- /dev/null +++ b/sql-statements/sql-statement-alter-resource-group.md @@ -0,0 +1,91 @@ +--- +title: ALTER RESOURCE GROUP +summary: Learn the usage of ALTER RESOURCE GROUP in TiDB. 
+--- + +# ALTER RESOURCE GROUP + + + +> **Note:** +> +> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). + + + +The `ALTER RESOURCE GROUP` statement is used to modify a resource group in a database. + +## Synopsis + +```ebnf+diagram +AlterResourceGroupStmt: + "ALTER" "RESOURCE" "GROUP" IfExists ResourceGroupName ResourceGroupOptionList + +IfExists ::= + ('IF' 'EXISTS')? + +ResourceGroupName: + Identifier + +ResourceGroupOptionList: + DirectResourceGroupOption +| ResourceGroupOptionList DirectResourceGroupOption +| ResourceGroupOptionList ',' DirectResourceGroupOption + +DirectResourceGroupOption: + "RU_PER_SEC" EqOpt stringLit +| "BURSTABLE" + +``` + +TiDB supports the following `DirectResourceGroupOption`, where [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) is a unified abstraction unit in TiDB for CPU, IO, and other system resources. + +| Option | Description | Example | +|---------------|-------------------------------------|------------------------| +| `RU_PER_SEC` | Rate of RU backfilling per second | `RU_PER_SEC = 500` indicates that this resource group is backfilled with 500 RUs per second | + +If the `BURSTABLE` attribute is set, TiDB allows the corresponding resource group to use the available system resources when the quota is exceeded. + +> **Note:** +> +> The `ALTER RESOURCE GROUP` statement can only be executed when the global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) is set to `ON`. + +## Examples + +Create a resource group named `rg1` and modify its properties. + +```sql +mysql> DROP RESOURCE GROUP IF EXISTS rg1; +Query OK, 0 rows affected (0.22 sec) +mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg1 + -> RU_PER_SEC = 100 + -> BURSTABLE; +Query OK, 0 rows affected (0.08 sec) +mysql> SELECT * FROM information_schema.resource_groups WHERE NAME ='rg1'; ++------+------------+-----------+ +| NAME | RU_PER_SEC | BURSTABLE | ++------+------------+-----------+ +| rg1 | 100 | YES | ++------+------------+-----------+ +1 rows in set (1.30 sec) +mysql> ALTER RESOURCE GROUP rg1 + -> RU_PER_SEC = 200; +Query OK, 0 rows affected (0.08 sec) +mysql> SELECT * FROM information_schema.resource_groups WHERE NAME ='rg1'; ++------+------------+-----------+ +| NAME | RU_PER_SEC | BURSTABLE | ++------+------------+-----------+ +| rg1 | 200 | NO | ++------+------------+-----------+ +1 rows in set (1.30 sec) +``` + +## MySQL compatibility + +MySQL also supports [ALTER RESOURCE GROUP](https://dev.mysql.com/doc/refman/8.0/en/alter-resource-group.html). However, the acceptable parameters are different from that of TiDB so that they are not compatible. + +## See also + +* [DROP RESOURCE GROUP](/sql-statements/sql-statement-drop-resource-group.md) +* [CREATE RESOURCE GROUP](/sql-statements/sql-statement-create-resource-group.md) +* [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) diff --git a/sql-statements/sql-statement-create-resource-group.md b/sql-statements/sql-statement-create-resource-group.md new file mode 100644 index 0000000000000..ebc0ad761f252 --- /dev/null +++ b/sql-statements/sql-statement-create-resource-group.md @@ -0,0 +1,87 @@ +--- +title: CREATE RESOURCE GROUP +summary: Learn the usage of CREATE RESOURCE GROUP in TiDB. 
+--- + +# CREATE RESOURCE GROUP + + + +> **Note:** +> +> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). + + + +You can use the `CREATE RESOURCE GROUP` statement to create a resource group. + +## Synopsis + +```ebnf+diagram +CreateResourceGroupStmt: + "CREATE" "RESOURCE" "GROUP" IfNotExists ResourceGroupName ResourceGroupOptionList + +IfNotExists ::= + ('IF' 'NOT' 'EXISTS')? + +ResourceGroupName: + Identifier + +ResourceGroupOptionList: + DirectResourceGroupOption +| ResourceGroupOptionList DirectResourceGroupOption +| ResourceGroupOptionList ',' DirectResourceGroupOption + +DirectResourceGroupOption: + "RU_PER_SEC" EqOpt stringLit +| "BURSTABLE" + +``` + +The resource group name parameter (`ResourceGroupName`) must be globally unique. + +TiDB supports the following `DirectResourceGroupOption`, where [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) is a unified abstraction unit in TiDB for CPU, IO, and other system resources. + +| Option | Description | Example | +|---------------|-------------------------------------|------------------------| +| `RU_PER_SEC` | Rate of RU backfilling per second | `RU_PER_SEC = 500` indicates that this resource group is backfilled with 500 RUs per second | + +If the `BURSTABLE` attribute is set, TiDB allows the corresponding resource group to use the available system resources when the quota is exceeded. + +> **Note:** +> +> The `CREATE RESOURCE GROUP` statement can only be executed when the global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) is set to `ON`. + +## Examples + +Create two resource groups `rg1` and `rg2`. + +```sql +mysql> DROP RESOURCE GROUP IF EXISTS rg1; +Query OK, 0 rows affected (0.22 sec) +mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg1 + -> RU_PER_SEC = 100 + -> BURSTABLE; +Query OK, 0 rows affected (0.08 sec) +mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg2 + -> RU_PER_SEC = 200; +Query OK, 0 rows affected (0.08 sec) +mysql> SELECT * FROM information_schema.resource_groups WHERE NAME ='rg1' or NAME = 'rg2'; ++------+-------------+-----------+ +| NAME | RU_PER_SEC | BURSTABLE | ++------+-------------+-----------+ +| rg1 | 100 | YES | +| rg2 | 200 | NO | ++------+-------------+-----------+ +2 rows in set (1.30 sec) +``` + +## MySQL compatibility + +MySQL also supports [CREATE RESOURCE GROUP](https://dev.mysql.com/doc/refman/8.0/en/create-resource-group.html). However, the acceptable parameters are different from that of TiDB so that they are not compatible. + +## See also + +* [DROP RESOURCE GROUP](/sql-statements/sql-statement-drop-resource-group.md) +* [ALTER RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md) +* [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) diff --git a/sql-statements/sql-statement-drop-resource-group.md b/sql-statements/sql-statement-drop-resource-group.md new file mode 100644 index 0000000000000..669d23b379e52 --- /dev/null +++ b/sql-statements/sql-statement-drop-resource-group.md @@ -0,0 +1,67 @@ +--- +title: DROP RESOURCE GROUP +summary: Learn the usage of DROP RESOURCE GROUP in TiDB. +--- + +# DROP RESOURCE GROUP + + + +> **Note:** +> +> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). + + + +You can use the `DROP RESOURCE GROUP` statement to drop a resource group. 
+ +## Synopsis + +```ebnf+diagram +DropResourceGroupStmt: + "DROP" "RESOURCE" "GROUP" IfExists ResourceGroupName + +IfExists ::= + ('IF' 'EXISTS')? + +ResourceGroupName: + Identifier +``` + +> **Note:** +> +> The `DROP RESOURCE GROUP` statement can only be executed when the global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) is set to `ON`. + +## Examples + +Drop a resource group named `rg1`. + +```sql +mysql> DROP RESOURCE GROUP IF EXISTS rg1; +Query OK, 0 rows affected (0.22 sec) +mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg1 RU_PER_SEC = 500 BURSTABLE; +Query OK, 0 rows affected (0.08 sec) +mysql> SELECT * FROM information_schema.resource_groups WHERE NAME ='rg1'; ++------+------------+-----------+ +| NAME | RU_PER_SEC | BURSTABLE | ++------+------------+-----------+ +| rg1 | 500 | YES | ++------+------------+-----------+ +1 row in set (0.01 sec) + +mysql> DROP RESOURCE GROUP IF EXISTS rg1; +Query OK, 1 rows affected (0.09 sec) + +mysql> SELECT * FROM information_schema.resource_groups WHERE NAME ='rg1'; +Empty set (0.00 sec) +``` + +## MySQL compatibility + +MySQL also supports [DROP RESOURCE GROUP](https://dev.mysql.com/doc/refman/8.0/en/drop-resource-group.html), but TiDB does not support the `FORCE` parameter. + +## See also + +* [ALTER RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md) +* [CREATE RESOURCE GROUP](/sql-statements/sql-statement-create-resource-group.md) +* [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) \ No newline at end of file diff --git a/sql-statements/sql-statement-flashback-to-timestamp.md b/sql-statements/sql-statement-flashback-to-timestamp.md new file mode 100644 index 0000000000000..c8159cbe17a18 --- /dev/null +++ b/sql-statements/sql-statement-flashback-to-timestamp.md @@ -0,0 +1,129 @@ +--- +title: FLASHBACK CLUSTER TO TIMESTAMP +summary: Learn the usage of FLASHBACK CLUSTER TO TIMESTAMP in TiDB databases. +--- + +# FLASHBACK CLUSTER TO TIMESTAMP + +TiDB v6.4.0 introduces the `FLASHBACK CLUSTER TO TIMESTAMP` syntax. You can use it to restore a cluster to a specific point in time. + + + +> **Warning:** +> +> The `FLASHBACK CLUSTER TO TIMESTAMP` syntax is not applicable to [TiDB Serverless](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta) clusters. Do not execute this statement on TiDB Serverless clusters to avoid unexpected results. + + + +> **Note:** +> +> The working principle of `FLASHBACK CLUSTER TO TIMESTAMP` is to write the old data of a specific point in time with the latest timestamp, and will not delete the current data. So before using this feature, you need to ensure that there is enough storage space for the old data and the current data. + +## Syntax + +```sql +FLASHBACK CLUSTER TO TIMESTAMP '2022-09-21 16:02:50'; +``` + +### Synopsis + +```ebnf+diagram +FlashbackToTimestampStmt ::= + "FLASHBACK" "CLUSTER" "TO" "TIMESTAMP" stringLit +``` + +## Notes + +* The time specified in the `FLASHBACK` statement must be within the Garbage Collection (GC) lifetime. The system variable [`tidb_gc_life_time`](/system-variables.md#tidb_gc_life_time-new-in-v50) (default: `10m0s`) defines the retention time of earlier versions of rows. 
The current `safePoint` of where garbage collection has been performed up to can be obtained with the following query: + + ```sql + SELECT * FROM mysql.tidb WHERE variable_name = 'tikv_gc_safe_point'; + ``` + + + +* Only a user with the `SUPER` privilege can execute the `FLASHBACK CLUSTER` SQL statement. +* `FLASHBACK CLUSTER` does not support rolling back DDL statements that modify PD-related information, such as `ALTER TABLE ATTRIBUTE`, `ALTER TABLE REPLICA`, and `CREATE PLACEMENT POLICY`. +* At the time specified in the `FLASHBACK` statement, there cannot be a DDL statement that is not completely executed. If such a DDL exists, TiDB will reject it. +* Before executing `FLASHBACK CLUSTER TO TIMESTAMP`, TiDB disconnects all related connections and prohibits read and write operations on these tables until the `FLASHBACK CLUSTER` statement is completed. +* The `FLASHBACK CLUSTER TO TIMESTAMP` statement cannot be canceled after being executed. TiDB will keep retrying until it succeeds. +* During the execution of `FLASHBACK CLUSTER`, if you need to back up data, you can only use [Backup & Restore](/br/br-snapshot-guide.md) and specify a `BackupTS` that is earlier than the start time of `FLASHBACK CLUSTER`. In addition, during the execution of `FLASHBACK CLUSTER`, enabling [log backup](/br/br-pitr-guide.md) will fail. Therefore, try to enable log backup after `FLASHBACK CLUSTER` is completed. +* If the `FLASHBACK CLUSTER` statement causes the rollback of metadata (table structure, database structure), the related modifications will **not** be replicated by TiCDC. Therefore, you need to pause the task manually, wait for the completion of `FLASHBACK CLUSTER`, and manually replicate the schema definitions of the upstream and downstream to make sure that they are consistent. After that, you need to recreate the TiCDC changefeed. + + + + + +* Only a user with the `SUPER` privilege can execute the `FLASHBACK CLUSTER` SQL statement. +* `FLASHBACK CLUSTER` does not support rolling back DDL statements that modify PD-related information, such as `ALTER TABLE ATTRIBUTE`, `ALTER TABLE REPLICA`, and `CREATE PLACEMENT POLICY`. +* At the time specified in the `FLASHBACK` statement, there cannot be a DDL statement that is not completely executed. If such a DDL exists, TiDB will reject it. +* Before executing `FLASHBACK CLUSTER TO TIMESTAMP`, TiDB disconnects all related connections and prohibits read and write operations on these tables until the `FLASHBACK CLUSTER` statement is completed. +* The `FLASHBACK CLUSTER TO TIMESTAMP` statement cannot be canceled after being executed. TiDB will keep retrying until it succeeds. +* If the `FLASHBACK CLUSTER` statement causes the rollback of metadata (table structure, database structure), the related modifications will **not** be replicated by TiCDC. Therefore, you need to pause the task manually, wait for the completion of `FLASHBACK CLUSTER`, and manually replicate the schema definitions of the upstream and downstream to make sure that they are consistent. After that, you need to recreate the TiCDC changefeed. 
+ + + +## Example + +The following example shows how to restore the newly inserted data: + +```sql +mysql> CREATE TABLE t(a INT); +Query OK, 0 rows affected (0.09 sec) + +mysql> SELECT * FROM t; +Empty set (0.01 sec) + +mysql> SELECT now(); ++---------------------+ +| now() | ++---------------------+ +| 2022-09-28 17:24:16 | ++---------------------+ +1 row in set (0.02 sec) + +mysql> INSERT INTO t VALUES (1); +Query OK, 1 row affected (0.02 sec) + +mysql> SELECT * FROM t; ++------+ +| a | ++------+ +| 1 | ++------+ +1 row in set (0.01 sec) + +mysql> FLASHBACK CLUSTER TO TIMESTAMP '2022-09-28 17:24:16'; +Query OK, 0 rows affected (0.20 sec) + +mysql> SELECT * FROM t; +Empty set (0.00 sec) +``` + +If there is a DDL statement that is not completely executed at the time specified in the `FLASHBACK` statement, the `FLASHBACK` statement fails: + +```sql +mysql> ALTER TABLE t ADD INDEX k(a); +Query OK, 0 rows affected (0.56 sec) + +mysql> ADMIN SHOW DDL JOBS 1; ++--------+---------+-----------------------+------------------------+--------------+-----------+----------+-----------+---------------------+---------------------+---------------------+--------+ +| JOB_ID | DB_NAME | TABLE_NAME | JOB_TYPE | SCHEMA_STATE | SCHEMA_ID | TABLE_ID | ROW_COUNT | CREATE_TIME | START_TIME | END_TIME | STATE | ++--------+---------+-----------------------+------------------------+--------------+-----------+----------+-----------+---------------------+---------------------+---------------------+--------+ +| 84 | test | t | add index /* ingest */ | public | 2 | 82 | 0 | 2023-01-29 14:33:11 | 2023-01-29 14:33:11 | 2023-01-29 14:33:12 | synced | ++--------+---------+-----------------------+------------------------+--------------+-----------+----------+-----------+---------------------+---------------------+---------------------+--------+ +1 rows in set (0.01 sec) + +mysql> FLASHBACK CLUSTER TO TIMESTAMP '2023-01-29 14:33:12'; +ERROR 1105 (HY000): Detected another DDL job at 2023-01-29 14:33:12 +0800 CST, can't do flashback +``` + +Through the log, you can obtain the execution progress of `FLASHBACK`. The following is an example: + +``` +[2022/10/09 17:25:59.316 +08:00] [INFO] [cluster.go:463] ["flashback cluster stats"] ["complete regions"=9] ["total regions"=10] [] +``` + +## MySQL compatibility + +This statement is a TiDB extension to MySQL syntax. diff --git a/sql-statements/sql-statement-show-create-resource-group.md b/sql-statements/sql-statement-show-create-resource-group.md new file mode 100644 index 0000000000000..2c49734a91609 --- /dev/null +++ b/sql-statements/sql-statement-show-create-resource-group.md @@ -0,0 +1,59 @@ +--- +title: SHOW CREATE RESOURCE GROUP +summary: Learn the usage of SHOW CREATE RESOURCE GROUP in TiDB. +--- + +# SHOW CREATE RESOURCE GROUP + + + +> **Note:** +> +> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). + + + +You can use the `SHOW CREATE RESOURCE GROUP` statement to view the current definition of a resource group. + +## Synopsis + +```ebnf+diagram +ShowCreateResourceGroupStmt ::= + "SHOW" "CREATE" "RESOURCE" "GROUP" ResourceGroupName + +ResourceGroupName ::= + Identifier +``` + +## Examples + +Create a resource group `rg1`. + +```sql +CREATE RESOURCE GROUP rg1 RU_PER_SEC=100; +Query OK, 0 rows affected (0.10 sec) +``` + +View the definition of `rg1`. + +```sql +SHOW CREATE RESOURCE GROUP rg1; +***************************[ 1. 
row ]*************************** ++----------------+--------------------------------------------+ +| Resource_Group | Create Resource Group | ++----------------+--------------------------------------------+ +| rg1 | CREATE RESOURCE GROUP `rg1` RU_PER_SEC=100 | ++----------------+--------------------------------------------+ +1 row in set (0.01 sec) +``` + +## MySQL compatibility + +This statement is a TiDB extension for MySQL. + +## See also + +* [TiDB RESOURCE CONTROL](/tidb-resource-control.md) +* [CREATE RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md) +* [ALTER RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md) +* [DROP RESOURCE GROUP](/sql-statements/sql-statement-drop-resource-group.md) diff --git a/statement-summary-tables.md b/statement-summary-tables.md index 8f1574f8ed4e4..68d93b33226a7 100644 --- a/statement-summary-tables.md +++ b/statement-summary-tables.md @@ -15,6 +15,17 @@ Therefore, starting from v4.0.0-rc.1, TiDB provides system tables in `informatio - [`cluster_statements_summary_history`](#statements_summary_evicted) - [`statements_summary_evicted`](#statements_summary_evicted) +<<<<<<< HEAD +======= + + +> **Note:** +> +> The following tables are unavailable for [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta): `statements_summary`, `statements_summary_history`, `cluster_statements_summary`, and `cluster_statements_summary_history`. + + + +>>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) This document details these tables and introduces how to use them to troubleshoot SQL performance issues. ## `statements_summary` @@ -182,7 +193,62 @@ From the result above, you can see that a maximum of 59 SQL categories are evict The statement summary tables have the following limitation: +<<<<<<< HEAD All data of the statement summary tables above will be lost when the TiDB server is restarted. This is because statement summary tables are all memory tables, and the data is cached in memory instead of being persisted on storage. +======= + + +To address this issue, TiDB v6.6.0 experimentally introduces the [statement summary persistence](#persist-statements-summary) feature, which is disabled by default. After this feature is enabled, the history data is no longer saved in memory, but directly written to disks. In this way, the history data is still available if a TiDB server restarts. + + + +## Persist statements summary + + + +This section is only applicable to TiDB Self-Hosted. For TiDB Cloud, the value of the `tidb_stmt_summary_enable_persistent` parameter is `false` by default and does not support dynamic modification. + + + +> **Warning:** +> +> Statements summary persistence is an experimental feature. It is not recommended that you use it in the production environment. This feature might be changed or removed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub. + + + +As described in the [Limitation](#limitation) section, statements summary tables are saved in memory by default. Once a TiDB server restarts, all the statements summary will be lost. Starting from v6.6.0, TiDB experimentally provides the configuration item [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660) to allow users to enable or disable statements summary persistence. 
+ + + + + +As described in the [Limitation](#limitation) section, statements summary tables are saved in memory by default. Once a TiDB server restarts, all the statements summary will be lost. Starting from v6.6.0, TiDB experimentally provides the configuration item `tidb_stmt_summary_enable_persistent` to allow users to enable or disable statements summary persistence. + + + +To enable statements summary persistence, you can add the following configuration items to the TiDB configuration file: + +```toml +[instance] +tidb_stmt_summary_enable_persistent = true +# The following entries use the default values, which can be modified as needed. +# tidb_stmt_summary_filename = "tidb-statements.log" +# tidb_stmt_summary_file_max_days = 3 +# tidb_stmt_summary_file_max_size = 64 # MiB +# tidb_stmt_summary_file_max_backups = 0 +``` + +After statements summary persistence is enabled, the memory keeps only the current real-time data and no history data. Once the real-time data is refreshed as history data, the history data is written to the disk at an interval of `tidb_stmt_summary_refresh_interval` described in the [Parameter configuration](#parameter-configuration) section. Queries on the `statements_summary_history` or `cluster_statements_summary_history` table will return results combining both in-memory and on-disk data. + + + +> **Note:** +> +> - When statements summary persistence is enabled, the `tidb_stmt_summary_history_size` configuration described in the [Parameter configuration](#parameter-configuration) section will no longer take effect because the memory does not keep the history data. Instead, the following three configurations will be used to control the retention period and size of history data for persistence: [`tidb_stmt_summary_file_max_days`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_days-new-in-v660), [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660), and [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660). +> - The smaller the value of `tidb_stmt_summary_refresh_interval`, the more immediate data is written to the disk. However, this also means more redundant data is written to the disk. + + +>>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) ## Troubleshooting examples diff --git a/statistics.md b/statistics.md index 3d61d68035d6b..176509c24f0ce 100644 --- a/statistics.md +++ b/statistics.md @@ -11,7 +11,12 @@ TiDB uses statistics to decide [which index to choose](/choose-index.md). The `t In versions earlier than v5.1.0, the default value of this variable is `1`. In v5.3.0 and later versions, the default value of this variable is `2`. If your cluster is upgraded from a version earlier than v5.3.0 to v5.3.0 or later, the default value of `tidb_analyze_version` does not change. +<<<<<<< HEAD +======= +- For TiDB Self-Hosted, the default value of this variable is `1` before v5.1.0. In v5.3.0 and later versions, the default value of this variable is `2`. If your cluster is upgraded from a version earlier than v5.3.0 to v5.3.0 or later, the default value of `tidb_analyze_version` does not change. +- For TiDB Cloud, the default value of this variable is `1`. 
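As a hedged sketch of how the behavior above can be checked in practice, the following statements show the statistics collection version in effect for the current session and trigger a fresh collection under that version. The table name `test.t` is hypothetical.

```sql
-- Check which statistics collection version is in effect.
SHOW VARIABLES LIKE 'tidb_analyze_version';

-- Statistics collected after this point follow the configured version.
ANALYZE TABLE test.t;
```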
+>>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) diff --git a/system-variables.md b/system-variables.md index 810a50114e016..d5792969c24ad 100644 --- a/system-variables.md +++ b/system-variables.md @@ -658,6 +658,7 @@ MPP is a distributed computing framework provided by the TiFlash engine, which a - Scope: SESSION | GLOBAL - Persists to cluster: Yes - Type: Integer +<<<<<<< HEAD @@ -671,6 +672,9 @@ MPP is a distributed computing framework provided by the TiFlash engine, which a +======= +- Default value: `2` for TiDB Self-Hosted and `1` for TiDB Cloud +>>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) - Range: `[1, 2]` - Controls how TiDB collects statistics. @@ -915,6 +919,85 @@ Constraint checking is always performed in place for pessimistic transactions (d - Default value: `0` - This variable is read-only. It is used to obtain the timestamp of the current transaction. +<<<<<<< HEAD +======= +### tidb_ddl_disk_quota New in v6.3.0 + + + +> **Note:** +> +> This TiDB variable is not applicable to TiDB Cloud. Do not change the default value of this variable for TiDB Cloud. + + + +- Scope: GLOBAL +- Persists to cluster: Yes +- Type: Integer +- Default value: `107374182400` (100 GiB) +- Range: `[107374182400, 1125899906842624]` ([100 GiB, 1 PiB]) +- Unit: Bytes +- This variable only takes effect when [`tidb_ddl_enable_fast_reorg`](#tidb_ddl_enable_fast_reorg-new-in-v630) is enabled. It sets the usage limit of local storage during backfilling when creating an index. + +### tidb_ddl_enable_fast_reorg New in v6.3.0 + + + +> **Note:** +> +> To improve the speed for index creation using this variable, make sure that your TiDB cluster is hosted on AWS and your TiDB node size is at least 8 vCPU. For [TiDB Serverless](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta) clusters, this feature is unavailable. + + + +- Scope: GLOBAL +- Persists to cluster: Yes +- Type: Boolean +- Default value: `ON` +- This variable controls whether to enable the acceleration of `ADD INDEX` and `CREATE INDEX` to improve the speed of backfilling for index creation. Setting this variable value to `ON` can bring performance improvement for index creation on tables with a large amount of data. +- To verify whether a completed `ADD INDEX` operation is accelerated, you can execute the [`ADMIN SHOW DDL JOBS`](/sql-statements/sql-statement-admin-show-ddl.md#admin-show-ddl-jobs) statement to see whether `ingest` is displayed in the `JOB_TYPE` column. + + + +> **Warning:** +> +> Currently, this feature is not fully compatible with adding a unique index. When adding a unique index, it is recommended to disable the index acceleration feature (set `tidb_ddl_enable_fast_reorg` to `OFF`). +> +> When [PITR (Point-in-time recovery)](/br/backup-and-restore-overview.md) is disabled, the speed of adding indexes is expected to be about 10 times that in v6.1.0. However, there is no performance improvement when both PITR and index acceleration are enabled. To optimize performance, it is recommended that you disable PITR, add indexes in a quick way, then enable PITR and perform a full backup. Otherwise, the following behaviors might occur: +> +> - When PITR starts working first, the index adding job automatically falls back to the legacy mode by default, even if the configuration is set to `ON`. The index is added slowly. +> - When the index adding job starts first, it prevents the log backup job of PITR from starting by throwing an error, which does not affect the index adding job in progress. 
After the index adding job is completed, you need to restart the log backup job and perform a full backup manually. +> - When a log backup job of PITR and an index adding job start at the same time, no error is prompted because the two jobs are unable to detect each other. PITR does not back up the newly added index. After the index adding job is completed, you still need to restart the log backup job and perform a full backup manually. + + + + + +> **Warning:** +> +> Currently, this feature is not fully compatible with [altering multiple columns or indexes in a single `ALTER TABLE` statement](/sql-statements/sql-statement-alter-table.md). When adding a unique index with the index acceleration, you need to avoid altering other columns or indexes in the same statement. +> +> When [PITR (Point-in-time recovery)](/tidb-cloud/backup-and-restore.md) is disabled, the speed of adding indexes is expected to be about 10 times that in v6.1.0. However, there is no performance improvement when both PITR and index acceleration are enabled. To optimize performance, it is recommended that you disable PITR, add indexes in a quick way, then enable PITR and perform a full backup. Otherwise, the following expected behaviors might occur: +> +> - When PITR starts working first, the index adding job automatically falls back to the legacy mode by default, even if the configuration is set to `ON`. The index is added slowly. +> - When the index adding job starts first, it prevents the log backup job of PITR from starting by throwing an error, which does not affect the index adding job in progress. After the index adding job is completed, you need to restart the log backup job and perform a full backup manually. +> - When a log backup job of PITR and an index adding job start at the same time, no error is prompted because the two jobs are unable to detect each other. PITR does not back up the newly added index. After the index adding job is completed, you still need to restart the log backup job and perform a full backup manually. + + + +### tidb_ddl_distribute_reorg New in v6.6.0 + +> **Warning:** +> +> - This feature is still in the experimental stage. It is not recommended to enable this feature in production environments. +> - When this feature is enabled, TiDB only performs simple retries when an exception occurs during the DDL reorg phase. There is currently no retry method that is compatible with DDL operations. That is, you cannot control the number of retries using [`tidb_ddl_error_count_limit`](#tidb_ddl_error_count_limit). + +- Scope: GLOBAL +- Persists to cluster: Yes +- Default value: `OFF` +- This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. Currently, this variable is only valid for the `ADD INDEX` statement. Enabling this variable improves the performance of large tables. Distributed DDL execution can control the CPU usage of DDL through dynamic DDL resource management to prevent DDL from affecting the online application. +- To verify whether a completed `ADD INDEX` operation is accelerated by this feature, you can check whether a corresponding task is in the `mysql.tidb_background_subtask_history` table. 
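The following is a minimal, hedged sketch of the verification flow described in the bullets above, combining the index acceleration and distributed reorg settings introduced in this section. The table, column, and index names are hypothetical, and the exact columns of `mysql.tidb_background_subtask_history` can differ by version.

```sql
-- Enable distributed execution of the DDL reorg phase (experimental).
SET GLOBAL tidb_ddl_distribute_reorg = ON;

-- Add an index on a hypothetical table.
ALTER TABLE test.t ADD INDEX idx_c1 (c1);

-- For the accelerated path, the JOB_TYPE column contains "ingest".
ADMIN SHOW DDL JOBS 5;

-- For the distributed path, a matching subtask record should appear here.
SELECT * FROM mysql.tidb_background_subtask_history LIMIT 5;
```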
+ +>>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) ### tidb_ddl_error_count_limit - Scope: GLOBAL diff --git a/tidb-resource-control.md b/tidb-resource-control.md new file mode 100644 index 0000000000000..91b262bbe0246 --- /dev/null +++ b/tidb-resource-control.md @@ -0,0 +1,197 @@ +--- +title: Use Resource Control to Achieve Resource Isolation +summary: Learn how to use the resource control feature to control and schedule application resources. +--- + +# Use Resource Control to Achieve Resource Isolation + +> **Warning:** +> +> This feature is experimental and its form and usage might change in subsequent versions. + + + +> **Note:** +> +> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). + + + +As a cluster administrator, you can use the resource control feature to create resource groups, set read and write quotas for resource groups, and bind users to those groups. This allows the TiDB layer to control the flow of user read and write requests based on the quotas set for the resource groups, and allows the TiKV layer to schedule the requests based on the priority mapped to the read and write quota. By doing this, you can ensure resource isolation for your applications and meet quality of service (QoS) requirements. + +The TiDB resource control feature provides two layers of resource management capabilities: the flow control capability at the TiDB layer and the priority scheduling capability at the TiKV layer. The two capabilities can be enabled separately or simultaneously. See the [Parameters for resource control](#parameters-for-resource-control) for details. + +- TiDB flow control: TiDB flow control uses the [token bucket algorithm](https://en.wikipedia.org/wiki/Token_bucket). If there are not enough tokens in a bucket, and the resource group does not specify the `BURSTABLE` option, the requests to the resource group will wait for the token bucket to backfill the tokens and retry. The retry might fail due to timeout. + + + +- TiKV scheduling: if [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) is enabled, TiKV uses the value of `RU_PER_SEC` of each resource group to determine the priority of the read and write requests for each resource group. Based on the priorities, the storage layer uses the priority queue to schedule and process requests. + + + + + +- TiKV scheduling: for TiDB Self-Hosted, if the `resource-control.enabled` parameter is enabled, TiKV uses the value of `RU_PER_SEC` of each resource group to determine the priority of the read and write requests for each resource group. Based on the priorities, the storage layer uses the priority queue to schedule and process requests. For TiDB Cloud, the value of the `resource-control.enabled` parameter is `false` by default and does not support dynamic modification. If you need to enable it for TiDB Dedicated clusters, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). + + + +## Scenarios for resource control + +The introduction of the resource control feature is a milestone for TiDB. It can divide a distributed database cluster into multiple logical units. Even if an individual unit overuses resources, it does not crowd out the resources needed by other units. + +With this feature, you can: + +- Combine multiple small and medium-sized applications from different systems into a single TiDB cluster. 
When the workload of an application grows larger, it does not affect the normal operation of other applications. When the system workload is low, busy applications can still be allocated the required system resources even if they exceed the set read and write quotas, so as to achieve the maximum utilization of resources. +- Choose to combine all test environments into a single TiDB cluster, or group the batch tasks that consume more resources into a single resource group. It can improve hardware utilization and reduce operating costs while ensuring that critical applications can always get the necessary resources. + +In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs. + +## What is Request Unit (RU) + +Request Unit (RU) is a unified abstraction unit in TiDB for system resources, which currently includes CPU, IOPS, and IO bandwidth metrics. The consumption of these three metrics is represented by RU according to a certain ratio. + +The following table shows the consumption of TiKV storage layer CPU and IO resources by user requests and the corresponding RU weights. + +| Resource | RU Weight | +|:----------------|:-----------------| +| CPU | 1/3 RU per millisecond | +| Read IO | 1/64 RU per KB | +| Write IO | 1 RU/KB | +| Basic overhead of a read request | 0.25 RU | +| Basic overhead of a write request | 1.5 RU | + +Based on the above table, assuming that the TiKV time consumed by a resource group is `c` milliseconds, `r1` times of requests read `r2` KB data, `w1` times of write requests write `w2` KB data, and the number of non-witness TiKV nodes in the cluster is `n`. Then, the formula for the total RUs consumed by the resource group is as follows: + +`c`\* 1/3 + (`r1` \* 0.25 + `r2` \* 1/64) + (1.5 \* `w1` + `w2` \* 1 \* `n`) + +## Parameters for resource control + +The resource control feature introduces two new global variables. + +* TiDB: you can use the [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) system variable to control whether to enable flow control for resource groups. + + + +* TiKV: you can use the [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) parameter to control whether to use request scheduling based on resource groups. + + + + + +* TiKV: For TiDB Self-Hosted, you can use the `resource-control.enabled` parameter to control whether to use request scheduling based on resource group quotas. For TiDB Cloud, the value of the `resource-control.enabled` parameter is `false` by default and does not support dynamic modification. If you need to enable it for TiDB Dedicated clusters, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). + + + +The results of the combinations of these two parameters are shown in the following table. + +| `resource-control.enabled` | `tidb_enable_resource_control`= ON | `tidb_enable_resource_control`= OFF | +|:----------------------------|:-------------------------------------|:-------------------------------------| +| `resource-control.enabled`= true | Flow control and scheduling (recommended) | Invalid combination | +| `resource-control.enabled`= false | Only flow control (not recommended) | The feature is disabled. | + +For more information about the resource control mechanism and parameters, see [RFC: Global Resource Control in TiDB](https://github.com/pingcap/tidb/blob/master/docs/design/2022-11-25-global-resource-control.md). 
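To make the RU formula shown earlier concrete, consider a purely hypothetical workload in which a resource group consumes `c` = 300 milliseconds of TiKV CPU time, issues `r1` = 100 read requests that read `r2` = 640 KB in total, issues `w1` = 10 write requests that write `w2` = 100 KB in total, and the cluster has `n` = 3 non-witness TiKV nodes. Plugging these numbers into the formula as written gives 300 × 1/3 + (100 × 0.25 + 640 × 1/64) + (1.5 × 10 + 100 × 1 × 3) = 100 + 35 + 315 = 450 RUs. The numbers are illustrative only; actual consumption depends on the workload and hardware.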
+ +## How to use resource control + +To create, modify, or delete a resource group, you need to have the `SUPER` or `RESOURCE_GROUP_ADMIN` privilege. + +You can create a resource group in the cluster by using [`CREATE RESOURCE GROUP`](/sql-statements/sql-statement-create-resource-group.md), and then bind users to a specific resource group by using [`CREATE USER`](/sql-statements/sql-statement-create-user.md) or [`ALTER USER`](/sql-statements/sql-statement-alter-user.md). + +For an existing resource group, you can modify the `RU_PER_SEC` option (the rate of RU backfilling per second) of the resource group by using [`ALTER RESOURCE GROUP`](/sql-statements/sql-statement-alter-resource-group.md). The changes to the resource group take effect immediately. + +You can delete a resource group by using [`DROP RESOURCE GROUP`](/sql-statements/sql-statement-drop-resource-group.md). + +> **Note:** +> +> - When you bind a user to a resource group by using `CREATE USER` or `ALTER USER`, it will not take effect for the user's existing sessions, but only for the user's new sessions. +> - If a user is not bound to a resource group or is bound to a `default` resource group, the user's requests are not subject to the flow control restrictions of TiDB. The `default` resource group is currently not visible to the user and cannot be created or modified. You cannot view it with `SHOW CREATE RESOURCE GROUP` or `SELECT * FROM information_schema.resource_groups`. But you can view it through the `mysql.user` table. + +### Step 1. Enable the resource control feature + +Enable the resource control feature. + +```sql +SET GLOBAL tidb_enable_resource_control = 'ON'; +``` + + + +Set the TiKV [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) parameter to `true`. + + + + + +For TiDB Self-Hosted, set the TiKV `resource-control.enabled` parameter to `true`. For TiDB Cloud, the value of the `resource-control.enabled` parameter is `false` by default and does not support dynamic modification. If you need to enable it for TiDB Dedicated clusters, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). + + + +### Step 2. Create a resource group, and then bind users to it + +The following is an example of how to create a resource group and bind users to it. + +1. Create a resource group `rg1`. The RU backfill rate is 500 RUs per second and allows applications in this resource group to overrun resources. + + ```sql + CREATE RESOURCE GROUP IF NOT EXISTS rg1 RU_PER_SEC = 500 BURSTABLE; + ``` + +2. Create a resource group `rg2`. The RU backfill rate is 600 RUs per second and does not allow applications in this resource group to overrun resources. + + ```sql + CREATE RESOURCE GROUP IF NOT EXISTS rg2 RU_PER_SEC = 600; + ``` + +3. Bind users `usr1` and `usr2` to resource groups `rg1` and `rg2` respectively. + + ```sql + ALTER USER usr1 RESOURCE GROUP rg1; + ``` + + ```sql + ALTER USER usr2 RESOURCE GROUP rg2; + ``` + +After you complete the above operations of creating resource groups and binding users, the resource consumption of newly created sessions will be controlled by the specified quota. If the system workload is relatively high and there is no spare capacity, the resource consumption rate of `usr2` will be strictly controlled not to exceed the quota. Because `usr1` is bound by `rg1` with `BURSTABLE` configured, the consumption rate of `usr1` is allowed to exceed the quota. 
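As a hedged complement to the `ALTER USER` statements above, a new account can also be bound to a resource group at creation time, so that all of its sessions are governed by the group from the first login. The user name and password below are hypothetical.

```sql
-- Create a new user that is bound to rg1 from the start.
CREATE USER 'usr3'@'%' IDENTIFIED BY 'usr3_password' RESOURCE GROUP rg1;

-- Confirm the quotas of the resource groups created in this step.
SELECT * FROM information_schema.resource_groups WHERE NAME IN ('rg1', 'rg2');
```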
+ +If there are too many requests that result in insufficient resources for the resource group, the client's requests will wait. If the wait time is too long, the requests will report an error. + +## Monitoring metrics and charts + + + +TiDB regularly collects runtime information about resource control and provides visual charts of the metrics in Grafana's **TiDB** > **Resource Control** dashboard. The metrics are detailed in the **Resource Control** section of [TiDB Important Monitoring Metrics](/grafana-tidb-dashboard.md). + +TiKV also records the request QPS from different resource groups. For more details, see [TiKV Monitoring Metrics Detail](/grafana-tikv-dashboard.md#grpc). + + + + + +> **Note:** +> +> This section is only applicable to TiDB Self-Hosted. Currently, TiDB Cloud does not provide resource control metrics. + +TiDB regularly collects runtime information about resource control and provides visual charts of the metrics in Grafana's **TiDB** > **Resource Control** dashboard. + +TiKV also records the request QPS from different resource groups in Grafana's **TiKV** dashboard. + + + +## Tool compatibility + +The resource control feature is still in its experimental stage and does not impact the regular usage of data import, export, and other replication tools. BR, TiDB Lightning, and TiCDC do not currently support processing DDL operations related to resource control, and their resource consumption is not limited by resource control. + +## Limitations + +Currently, the resource control feature has the following limitations: + +* This feature only supports flow control and scheduling of read and write requests initiated by foreground clients. It does not support flow control and scheduling of background tasks such as DDL operations and auto analyze. +* Resource control incurs additional scheduling overhead. Therefore, there might be a slight performance degradation when this feature is enabled. + +## See also + +* [CREATE RESOURCE GROUP](/sql-statements/sql-statement-create-resource-group.md) +* [ALTER RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md) +* [DROP RESOURCE GROUP](/sql-statements/sql-statement-drop-resource-group.md) +* [RESOURCE GROUP RFC](https://github.com/pingcap/tidb/blob/master/docs/design/2022-11-25-global-resource-control.md) diff --git a/time-to-live.md b/time-to-live.md new file mode 100644 index 0000000000000..e595965e8b9fe --- /dev/null +++ b/time-to-live.md @@ -0,0 +1,284 @@ +--- +title: Periodically Delete Data Using TTL (Time to Live) +summary: Time to live (TTL) is a feature that allows you to manage TiDB data lifetime at the row level. In this document, you can learn how to use TTL to automatically expire and delete old data. +--- + +# Periodically Delete Expired Data Using TTL (Time to Live) + +Time to live (TTL) is a feature that allows you to manage TiDB data lifetime at the row level. For a table with the TTL attribute, TiDB automatically checks data lifetime and deletes expired data at the row level. This feature can effectively save storage space and enhance performance in some scenarios. + +The following are some common scenarios for TTL: + +* Regularly delete verification codes and short URLs. +* Regularly delete unnecessary historical orders. +* Automatically delete intermediate results of calculations. + +TTL is designed to help users clean up unnecessary data periodically and in a timely manner without affecting the online read and write workloads. 
TTL concurrently dispatches different jobs to different TiDB nodes to delete data in parallel in the unit of table. TTL does not guarantee that all expired data is deleted immediately, which means that even if some data is expired, the client might still read that data some time after the expiration time until that data is deleted by the background TTL job. + +> **Warning:** +> +> This is an experimental feature. It is not recommended that you use it in a production environment. +> TTL is not available for [TiDB Serverless](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). + +## Syntax + +You can configure the TTL attribute of a table using the [`CREATE TABLE`](/sql-statements/sql-statement-create-table.md) or [`ALTER TABLE`](/sql-statements/sql-statement-alter-table.md) statement. + +### Create a table with a TTL attribute + +- Create a table with a TTL attribute: + + ```sql + CREATE TABLE t1 ( + id int PRIMARY KEY, + created_at TIMESTAMP + ) TTL = `created_at` + INTERVAL 3 MONTH; + ``` + + The preceding example creates a table `t1` and specifies `created_at` as the TTL timestamp column, which indicates the creation time of the data. The example also sets the longest time that a row is allowed to live in the table to 3 months through `INTERVAL 3 MONTH`. Data that lives longer than this value will be deleted later. + +- Set the `TTL_ENABLE` attribute to enable or disable the feature of cleaning up expired data: + + ```sql + CREATE TABLE t1 ( + id int PRIMARY KEY, + created_at TIMESTAMP + ) TTL = `created_at` + INTERVAL 3 MONTH TTL_ENABLE = 'OFF'; + ``` + + If `TTL_ENABLE` is set to `OFF`, even if other TTL options are set, TiDB does not automatically clean up expired data in this table. For a table with the TTL attribute, `TTL_ENABLE` is `ON` by default. + +- To be compatible with MySQL, you can set a TTL attribute using a comment: + + ```sql + CREATE TABLE t1 ( + id int PRIMARY KEY, + created_at TIMESTAMP + ) /*T![ttl] TTL = `created_at` + INTERVAL 3 MONTH TTL_ENABLE = 'OFF'*/; + ``` + + In TiDB, using the table TTL attribute or using comments to configure TTL is equivalent. In MySQL, the comment is ignored and an ordinary table is created. + +### Modify the TTL attribute of a table + +- Modify the TTL attribute of a table: + + ```sql + ALTER TABLE t1 TTL = `created_at` + INTERVAL 1 MONTH; + ``` + + You can use the preceding statement to modify a table with an existing TTL attribute or to add a TTL attribute to a table without a TTL attribute. + +- Modify the value of `TTL_ENABLE` for a table with the TTL attribute: + + ```sql + ALTER TABLE t1 TTL_ENABLE = 'OFF'; + ``` + +- To remove all TTL attributes of a table: + + ```sql + ALTER TABLE t1 REMOVE TTL; + ``` + +### TTL and the default values of data types + +You can use TTL together with [default values of the data types](/data-type-default-values.md). The following are two common usage examples: + +* Use `DEFAULT CURRENT_TIMESTAMP` to specify the default value of a column as the current creation time and use this column as the TTL timestamp column. Records that were created 3 months ago are expired: + + ```sql + CREATE TABLE t1 ( + id int PRIMARY KEY, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP + ) TTL = `created_at` + INTERVAL 3 MONTH; + ``` + +* Specify the default value of a column as the creation time or the latest update time and use this column as the TTL timestamp column. 
Records that have not been updated for 3 months are expired: + + ```sql + CREATE TABLE t1 ( + id int PRIMARY KEY, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP + ) TTL = `created_at` + INTERVAL 3 MONTH; + ``` + +### TTL and generated columns + +You can use TTL together with [generated columns](/generated-columns.md) (experimental feature) to configure complex expiration rules. For example: + +```sql +CREATE TABLE message ( + id int PRIMARY KEY, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, + image bool, + expire_at TIMESTAMP AS (IF(image, + created_at + INTERVAL 5 DAY, + created_at + INTERVAL 30 DAY + )) +) TTL = `expire_at` + INTERVAL 0 DAY; +``` + +The preceding statement uses the `expire_at` column as the TTL timestamp column and sets the expiration time according to the message type. If the message is an image, it expires in 5 days. Otherwise, it expires in 30 days. + +You can use TTL together with the [JSON type](/data-type-json.md). For example: + +```sql +CREATE TABLE orders ( + id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, + order_info JSON, + created_at DATE AS (JSON_EXTRACT(order_info, '$.created_at')) VIRTUAL +) TTL = `created_at` + INTERVAL 3 month; +``` + +## TTL job + +For each table with a TTL attribute, TiDB internally schedules a background job to clean up expired data. You can customize the execution period of these jobs by setting the `TTL_JOB_INTERVAL` attribute for the table. The following example sets the background cleanup jobs for the table `orders` to run once every 24 hours: + +```sql +ALTER TABLE orders TTL_JOB_INTERVAL = '24h'; +``` + +`TTL_JOB_INTERVAL` is set to `1h` by default. + +To disable the execution of TTL jobs, in addition to setting the `TTL_ENABLE='OFF'` table option, you can also disable the execution of TTL jobs in the entire cluster by setting the [`tidb_ttl_job_enable`](/system-variables.md#tidb_ttl_job_enable-new-in-v650) global variable: + +```sql +SET @@global.tidb_ttl_job_enable = OFF; +``` + +In some scenarios, you might want to allow TTL jobs to run only in a certain time window. In this case, you can set the [`tidb_ttl_job_schedule_window_start_time`](/system-variables.md#tidb_ttl_job_schedule_window_start_time-new-in-v650) and [`tidb_ttl_job_schedule_window_end_time`](/system-variables.md#tidb_ttl_job_schedule_window_end_time-new-in-v650) global variables to specify the time window. For example: + +```sql +SET @@global.tidb_ttl_job_schedule_window_start_time = '01:00 +0000'; +SET @@global.tidb_ttl_job_schedule_window_end_time = '05:00 +0000'; +``` + +The preceding statement allows TTL jobs to be scheduled only between 1:00 and 5:00 UTC. By default, the time window is set to `00:00 +0000` to `23:59 +0000`, which allows the jobs to be scheduled at any time. + +## Observability + + + +> **Note:** +> +> This section is only applicable to TiDB Self-Hosted. Currently, TiDB Cloud does not provide TTL metrics. + + + +TiDB collects runtime information about TTL periodically and provides visualized charts of these metrics in Grafana. You can see these metrics in the TiDB -> TTL panel in Grafana. + + + +For details of the metrics, see the TTL section in [TiDB Monitoring Metrics](/grafana-tidb-dashboard.md). 
+ + + +In addition, TiDB provides three tables to obtain more information about TTL jobs: + ++ The `mysql.tidb_ttl_table_status` table contains information about the previously executed TTL job and ongoing TTL job for all TTL tables + + ```sql + MySQL [(none)]> SELECT * FROM mysql.tidb_ttl_table_status LIMIT 1\G; + *************************** 1. row *************************** + table_id: 85 + parent_table_id: 85 + table_statistics: NULL + last_job_id: 0b4a6d50-3041-4664-9516-5525ee6d9f90 + last_job_start_time: 2023-02-15 20:43:46 + last_job_finish_time: 2023-02-15 20:44:46 + last_job_ttl_expire: 2023-02-15 19:43:46 + last_job_summary: {"total_rows":4369519,"success_rows":4369519,"error_rows":0,"total_scan_task":64,"scheduled_scan_task":64,"finished_scan_task":64} + current_job_id: NULL + current_job_owner_id: NULL + current_job_owner_addr: NULL + current_job_owner_hb_time: NULL + current_job_start_time: NULL + current_job_ttl_expire: NULL + current_job_state: NULL + current_job_status: NULL + current_job_status_update_time: NULL + 1 row in set (0.040 sec) + ``` + + The column `table_id` is the ID of the partitioned table, and the `parent_table_id` is the ID of the table, corresponding with the ID in `infomation_schema.tables`. If the table is not a partitioned table, the two IDs are the same. + + The columns `{last, current}_job_{start_time, finish_time, ttl_expire}` describe respectively the start time, finish time, and expiration time used by the TTL job of the last or current execution. The `last_job_summary` column describes the execution status of the last TTL task, including the total number of rows, the number of successful rows, and the number of failed rows. + ++ The `mysql.tidb_ttl_task` table contains information about the ongoing TTL subtasks. A TTL job is split into many subtasks, and this table records the subtasks that are currently being executed. ++ The `mysql.tidb_ttl_job_history` table contains information about the TTL jobs that have been executed. The record of TTL job history is kept for 90 days. + + ```sql + MySQL [(none)]> SELECT * FROM mysql.tidb_ttl_job_history LIMIT 1\G; + *************************** 1. row *************************** + job_id: f221620c-ab84-4a28-9d24-b47ca2b5a301 + table_id: 85 + parent_table_id: 85 + table_schema: test_schema + table_name: TestTable + partition_name: NULL + create_time: 2023-02-15 17:43:46 + finish_time: 2023-02-15 17:45:46 + ttl_expire: 2023-02-15 16:43:46 + summary_text: {"total_rows":9588419,"success_rows":9588419,"error_rows":0,"total_scan_task":63,"scheduled_scan_task":63,"finished_scan_task":63} + expired_rows: 9588419 + deleted_rows: 9588419 + error_delete_rows: 0 + status: finished + ``` + + The column `table_id` is the ID of the partitioned table, and the `parent_table_id` is the ID of the table, corresponding with the ID in `infomation_schema.tables`. `table_schema`, `table_name`, and `partition_name` correspond to the database, table name, and partition name. `create_time`, `finish_time`, and `ttl_expire` indicate the creation time, end time, and expiration time of the TTL task. `expired_rows` and `deleted_rows` indicate the number of expired rows and the number of rows deleted successfully. + +## Compatibility with TiDB tools + +As an experimental feature, the TTL feature is not compatible with data import and export tools, including BR, TiDB Lightning, and TiCDC. 
+ +## Limitations + +Currently, the TTL feature has the following limitations: + +* The TTL attribute cannot be set on temporary tables, including local temporary tables and global temporary tables. +* A table with the TTL attribute does not support being referenced by other tables as the primary table in a foreign key constraint. +* It is not guaranteed that all expired data is deleted immediately. The time when expired data is deleted depends on the scheduling interval and scheduling window of the background cleanup job. + +## FAQs + + + +- How can I determine whether the deletion is fast enough to keep the data size relatively stable? + + In the [Grafana `TiDB` dashboard](/grafana-tidb-dashboard.md), the panel `TTL Insert Rows Per Hour` records the total number of rows inserted in the previous hour. The corresponding `TTL Delete Rows Per Hour` records the total number of rows deleted by the TTL task in the previous hour. If `TTL Insert Rows Per Hour` is higher than `TTL Delete Rows Per Hour` for a long time, it means that the rate of insertion is higher than the rate of deletion and the total amount of data will increase. For example: + + ![insert fast example](/media/ttl/insert-fast.png) + + It is worth noting that since TTL does not guarantee that the expired rows will be deleted immediately, and the rows currently inserted will be deleted in a future TTL task, even if the speed of TTL deletion is lower than the speed of insertion in a short period of time, it does not necessarily mean that the speed of TTL is too slow. You need to consider the situation in its context. + +- How can I determine whether the bottleneck of a TTL task is in scanning or deleting? + + Look at the `TTL Scan Worker Time By Phase` and `TTL Delete Worker Time By Phase` panels. If the scan worker is in the `dispatch` phase for a large percentage of time and the delete worker is rarely in the `idle` phase, then the scan worker is waiting for the delete worker to finish the deletion. If the cluster resources are still free at this point, you can consider increasing `tidb_ttl_ delete_worker_count` to increase the number of delete workers. For example: + + ![scan fast example](/media/ttl/scan-fast.png) + + In contrast, if the scan worker is rarely in the `dispatch` phase and the delete worker is in the `idle` phase for a long time, then the scan worker is relatively busy. For example: + + ![delete fast example](/media/ttl/delete-fast.png) + + The percentage of scan and delete in TTL jobs is related to the machine configuration and data distribution, so the monitoring data at each moment is only representative of the TTL Jobs being executed. You can read the table `mysql.tidb_ttl_job_history` to determine which TTL job is running at a certain moment and the corresponding table of the job. + +- How to configure `tidb_ttl_scan_worker_count` and `tidb_ttl_delete_worker_count` properly? + + 1. Refer to the question "How to determine whether the bottleneck of TTL tasks is in scanning or deleting?" to consider whether to increase the value of `tidb_ttl_scan_worker_count` or `tidb_ttl_delete_worker_count`. + 2. If the number of TiKV nodes is high, increase the value of `tidb_ttl_scan_worker_count` can make the TTL task workload more balanced. + + Since too many TTL workers will cause a lot of pressure, you need to evaluate the CPU level of TiDB and the disk and CPU usage of TiKV together. 
Depending on different scenarios and needs (whether you need to speed up TTL as much as possible, or to reduce the impact of TTL on other queries), you can adjust the value of `tidb_ttl_scan_worker_count` and `tidb_ttl_delete_worker_count` to improve the speed of TTL scanning and deleting or reduce the performance impact brought by TTL tasks. + + + + +- How to configure `tidb_ttl_scan_worker_count` and `tidb_ttl_delete_worker_count` properly? + + If the number of TiKV nodes is high, increase the value of `tidb_ttl_scan_worker_count` can make the TTL task workload more balanced. + + But too many TTL workers will cause a lot of pressure, you need to evaluate the CPU level of TiDB and the disk and CPU usage of TiKV together. Depending on different scenarios and needs (whether you need to speed up TTL as much as possible, or to reduce the impact of TTL on other queries), you can adjust the value of `tidb_ttl_scan_worker_count` and `tidb_ttl_delete_worker_count` to improve the speed of TTL scanning and deleting or reduce the performance impact brought by TTL tasks. + + From 84135c8f68a00a46eb311ec41147d1880b7b0b44 Mon Sep 17 00:00:00 2001 From: qiancai Date: Fri, 2 Jun 2023 14:33:15 +0800 Subject: [PATCH 2/6] remove unnecessary files from cherry-pick --- _docHome.md | 158 ---------- br/br-pitr-guide.md | 137 --------- .../sql-statement-alter-resource-group.md | 91 ------ .../sql-statement-create-resource-group.md | 87 ------ .../sql-statement-drop-resource-group.md | 67 ----- .../sql-statement-flashback-to-timestamp.md | 129 -------- ...ql-statement-show-create-resource-group.md | 59 ---- tidb-resource-control.md | 197 ------------ time-to-live.md | 284 ------------------ 9 files changed, 1209 deletions(-) delete mode 100644 _docHome.md delete mode 100644 br/br-pitr-guide.md delete mode 100644 sql-statements/sql-statement-alter-resource-group.md delete mode 100644 sql-statements/sql-statement-create-resource-group.md delete mode 100644 sql-statements/sql-statement-drop-resource-group.md delete mode 100644 sql-statements/sql-statement-flashback-to-timestamp.md delete mode 100644 sql-statements/sql-statement-show-create-resource-group.md delete mode 100644 tidb-resource-control.md delete mode 100644 time-to-live.md diff --git a/_docHome.md b/_docHome.md deleted file mode 100644 index 665d6764677c5..0000000000000 --- a/_docHome.md +++ /dev/null @@ -1,158 +0,0 @@ ---- -title: PingCAP Documentation -hide_sidebar: true -hide_commit: true -hide_leftNav: true ---- - - - - - -TiDB Cloud is a fully-managed Database-as-a-Service (DBaaS) that brings everything great about TiDB to your cloud, and lets you focus on your applications, not the complexities of your database. - - - - - -See the documentation of TiDB Cloud - - - - - -Guides you through an easy way to get started with TiDB Cloud - - - - - -Helps you quickly complete a Proof of Concept (PoC) of TiDB Cloud - - - - - -Get the power of a cloud-native, distributed SQL database built for real-time analytics in a fully-managed service. - -Try Free - - - - - - - -TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability. You can deploy TiDB in a self-hosted environment or in the cloud. 
- - - - - -See the documentation of TiDB - - - - - -Walks you through the quickest way to get started with TiDB - - - - - -Learn how to deploy TiDB locally in production - - - - - -The open-source TiDB platform is released under the Apache 2.0 license, and supported by the community. - -Download - - - - - - - - - -Documentation for TiDB application developers - - - - - -Documentation for TiDB Cloud application developers - - - - - - - - - - - - - -Learn TiDB and TiDB Cloud through well-designed online courses and instructor-led training - - - - - -Join us on Slack or become a contributor - - - - - -Learn great articles about TiDB and TiDB Cloud - - - - - -See a compilation of short videos describing TiDB and a variety of use cases - - - - - -Learn events about PingCAP and the community - - - - - -Download eBooks and papers - - - - - -A powerful insight tool that analyzes in depth any GitHub repository, powered by TiDB Cloud - - - - - -Let’s work together to make the documentation better! - - - - - - - - diff --git a/br/br-pitr-guide.md b/br/br-pitr-guide.md deleted file mode 100644 index 903f2ff625723..0000000000000 --- a/br/br-pitr-guide.md +++ /dev/null @@ -1,137 +0,0 @@ ---- -title: TiDB Log Backup and PITR Guide -summary: Learns about how to perform log backup and PITR in TiDB. ---- - -# TiDB Log Backup and PITR Guide - -A full backup (snapshot backup) contains the full cluster data at a certain point, while TiDB log backup can back up data written by applications to a specified storage in a timely manner. If you want to choose the restore point as required, that is, to perform point-in-time recovery (PITR), you can [start log backup](#start-log-backup) and [run full backup regularly](#run-full-backup-regularly). - -Before you back up or restore data using the br command-line tool (hereinafter referred to as `br`), you need to [install `br`](/br/br-use-overview.md#deploy-and-use-br) first. - -## Back up TiDB cluster - -### Start log backup - -> **Note:** -> -> - The following examples assume that Amazon S3 access keys and secret keys are used to authorize permissions. If IAM roles are used to authorize permissions, you need to set `--send-credentials-to-tikv` to `false`. -> - If other storage systems or authorization methods are used to authorize permissions, adjust the parameter settings according to [Backup Storages](/br/backup-and-restore-storages.md). - -To start a log backup, run `br log start`. A cluster can only run one log backup task each time. - -```shell -tiup br log start --task-name=pitr --pd "${PD_IP}:2379" \ ---storage 's3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}"' -``` - -After the log backup task starts, it runs in the background of the TiDB cluster until you stop it manually. During this process, the TiDB change logs are regularly backed up to the specified storage in small batches. To query the status of the log backup task, run the following command: - -```shell -tiup br log status --task-name=pitr --pd "${PD_IP}:2379" -``` - -Expected output: - -``` -● Total 1 Tasks. -> #1 < - name: pitr - status: ● NORMAL - start: 2022-05-13 11:09:40.7 +0800 - end: 2035-01-01 00:00:00 +0800 - storage: s3://backup-101/log-backup - speed(est.): 0.00 ops/s -checkpoint[global]: 2022-05-13 11:31:47.2 +0800; gap=4m53s -``` - -### Run full backup regularly - -The snapshot backup can be used as a method of full backup. 
You can run `br backup full` to back up the cluster snapshot to the backup storage according to a fixed schedule (for example, every 2 days). - -```shell -tiup br backup full --pd "${PD_IP}:2379" \ ---storage 's3://backup-101/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}"' -``` - -## Run PITR - -To restore the cluster to any point in time within the backup retention period, you can use `br restore point`. When you run this command, you need to specify the **time point you want to restore**, **the latest snapshot backup data before the time point**, and the **log backup data**. BR will automatically determine and read data needed for the restore, and then restore these data to the specified cluster in order. - -```shell -br restore point --pd "${PD_IP}:2379" \ ---storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}"' \ ---full-backup-storage='s3://backup-101/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}"' \ ---restored-ts '2022-05-15 18:00:00+0800' -``` - -During data restore, you can view the progress through the progress bar in the terminal. The restore is divided into two phases, full restore and log restore (restore meta files and restore KV files). After each phase is completed, `br` outputs information such as restore time and data size. - -```shell -Full Restore <--------------------------------------------------------------------------------------------------------------------------------------------------------> 100.00% -*** ["Full Restore success summary"] ****** [total-take=xxx.xxxs] [restore-data-size(after-compressed)=xxx.xxx] [Size=xxxx] [BackupTS={TS}] [total-kv=xxx] [total-kv-size=xxx] [average-speed=xxx] -Restore Meta Files <--------------------------------------------------------------------------------------------------------------------------------------------------> 100.00% -Restore KV Files <----------------------------------------------------------------------------------------------------------------------------------------------------> 100.00% -*** ["restore log success summary"] [total-take=xxx.xx] [restore-from={TS}] [restore-to={TS}] [total-kv-count=xxx] [total-size=xxx] -``` - -## Clean up outdated data - -As described in the [Usage Overview of TiDB Backup and Restore](/br/br-use-overview.md): - -To perform PITR, you need to restore the full backup before the restore point, and the log backup between the full backup point and the restore point. Therefore, for log backups that exceed the backup retention period, you can use `br log truncate` to delete the backup before the specified time point. **It is recommended to only delete the log backup before the full snapshot**. - -The following steps describe how to clean up backup data that exceeds the backup retention period: - -1. Get the **last full backup** outside the backup retention period. -2. Use the `validate` command to get the time point corresponding to the backup. Assume that the backup data before 2022/09/01 needs to be cleaned, you should look for the last full backup before this time point and ensure that it will not be cleaned. - - ```shell - FULL_BACKUP_TS=`tiup br validate decode --field="end-version" --storage "s3://backup-101/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}"| tail -n1` - ``` - -3. 
Delete log backup data earlier than the snapshot backup `FULL_BACKUP_TS`: - - ```shell - tiup br log truncate --until=${FULL_BACKUP_TS} --storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}"' - ``` - -4. Delete snapshot data earlier than the snapshot backup `FULL_BACKUP_TS`: - - ```shell - rm -rf s3://backup-101/snapshot-${date} - ``` - -## Performance and impact of PITR - -### Capabilities - -- On each TiKV node, PITR can restore snapshot data at a speed of 280 GB/h and log data 30 GB/h. -- BR deletes outdated log backup data at a speed of 600 GB/h. - -> **Note:** -> -> The preceding specifications are based on test results from the following two testing scenarios. The actual data might be different. -> -> - Snapshot data restore speed = Snapshot data size / (duration * the number of TiKV nodes) -> - Log data restore speed = Restored log data size / (duration * the number of TiKV nodes) - -Testing scenario 1 (on [TiDB Cloud](https://tidbcloud.com)): - -- The number of TiKV nodes (8 core, 16 GB memory): 21 -- The number of Regions: 183,000 -- New log data created in the cluster: 10 GB/h -- Write (INSERT/UPDATE/DELETE) QPS: 10,000 - -Testing scenario 2 (on TiDB Self-Hosted): - -- The number of TiKV nodes (8 core, 64 GB memory): 6 -- The number of Regions: 50,000 -- New log data created in the cluster: 10 GB/h -- Write (INSERT/UPDATE/DELETE) QPS: 10,000 - -## See also - -* [TiDB Backup and Restore Use Cases](/br/backup-and-restore-use-cases.md) -* [br Command-line Manual](/br/use-br-command-line-tool.md) -* [Log Backup and PITR Architecture](/br/br-log-architecture.md) diff --git a/sql-statements/sql-statement-alter-resource-group.md b/sql-statements/sql-statement-alter-resource-group.md deleted file mode 100644 index 017c26d117f86..0000000000000 --- a/sql-statements/sql-statement-alter-resource-group.md +++ /dev/null @@ -1,91 +0,0 @@ ---- -title: ALTER RESOURCE GROUP -summary: Learn the usage of ALTER RESOURCE GROUP in TiDB. ---- - -# ALTER RESOURCE GROUP - - - -> **Note:** -> -> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). - - - -The `ALTER RESOURCE GROUP` statement is used to modify a resource group in a database. - -## Synopsis - -```ebnf+diagram -AlterResourceGroupStmt: - "ALTER" "RESOURCE" "GROUP" IfExists ResourceGroupName ResourceGroupOptionList - -IfExists ::= - ('IF' 'EXISTS')? - -ResourceGroupName: - Identifier - -ResourceGroupOptionList: - DirectResourceGroupOption -| ResourceGroupOptionList DirectResourceGroupOption -| ResourceGroupOptionList ',' DirectResourceGroupOption - -DirectResourceGroupOption: - "RU_PER_SEC" EqOpt stringLit -| "BURSTABLE" - -``` - -TiDB supports the following `DirectResourceGroupOption`, where [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) is a unified abstraction unit in TiDB for CPU, IO, and other system resources. - -| Option | Description | Example | -|---------------|-------------------------------------|------------------------| -| `RU_PER_SEC` | Rate of RU backfilling per second | `RU_PER_SEC = 500` indicates that this resource group is backfilled with 500 RUs per second | - -If the `BURSTABLE` attribute is set, TiDB allows the corresponding resource group to use the available system resources when the quota is exceeded. 
- -> **Note:** -> -> The `ALTER RESOURCE GROUP` statement can only be executed when the global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) is set to `ON`. - -## Examples - -Create a resource group named `rg1` and modify its properties. - -```sql -mysql> DROP RESOURCE GROUP IF EXISTS rg1; -Query OK, 0 rows affected (0.22 sec) -mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg1 - -> RU_PER_SEC = 100 - -> BURSTABLE; -Query OK, 0 rows affected (0.08 sec) -mysql> SELECT * FROM information_schema.resource_groups WHERE NAME ='rg1'; -+------+------------+-----------+ -| NAME | RU_PER_SEC | BURSTABLE | -+------+------------+-----------+ -| rg1 | 100 | YES | -+------+------------+-----------+ -1 rows in set (1.30 sec) -mysql> ALTER RESOURCE GROUP rg1 - -> RU_PER_SEC = 200; -Query OK, 0 rows affected (0.08 sec) -mysql> SELECT * FROM information_schema.resource_groups WHERE NAME ='rg1'; -+------+------------+-----------+ -| NAME | RU_PER_SEC | BURSTABLE | -+------+------------+-----------+ -| rg1 | 200 | NO | -+------+------------+-----------+ -1 rows in set (1.30 sec) -``` - -## MySQL compatibility - -MySQL also supports [ALTER RESOURCE GROUP](https://dev.mysql.com/doc/refman/8.0/en/alter-resource-group.html). However, the acceptable parameters are different from that of TiDB so that they are not compatible. - -## See also - -* [DROP RESOURCE GROUP](/sql-statements/sql-statement-drop-resource-group.md) -* [CREATE RESOURCE GROUP](/sql-statements/sql-statement-create-resource-group.md) -* [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) diff --git a/sql-statements/sql-statement-create-resource-group.md b/sql-statements/sql-statement-create-resource-group.md deleted file mode 100644 index ebc0ad761f252..0000000000000 --- a/sql-statements/sql-statement-create-resource-group.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: CREATE RESOURCE GROUP -summary: Learn the usage of CREATE RESOURCE GROUP in TiDB. ---- - -# CREATE RESOURCE GROUP - - - -> **Note:** -> -> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). - - - -You can use the `CREATE RESOURCE GROUP` statement to create a resource group. - -## Synopsis - -```ebnf+diagram -CreateResourceGroupStmt: - "CREATE" "RESOURCE" "GROUP" IfNotExists ResourceGroupName ResourceGroupOptionList - -IfNotExists ::= - ('IF' 'NOT' 'EXISTS')? - -ResourceGroupName: - Identifier - -ResourceGroupOptionList: - DirectResourceGroupOption -| ResourceGroupOptionList DirectResourceGroupOption -| ResourceGroupOptionList ',' DirectResourceGroupOption - -DirectResourceGroupOption: - "RU_PER_SEC" EqOpt stringLit -| "BURSTABLE" - -``` - -The resource group name parameter (`ResourceGroupName`) must be globally unique. - -TiDB supports the following `DirectResourceGroupOption`, where [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) is a unified abstraction unit in TiDB for CPU, IO, and other system resources. - -| Option | Description | Example | -|---------------|-------------------------------------|------------------------| -| `RU_PER_SEC` | Rate of RU backfilling per second | `RU_PER_SEC = 500` indicates that this resource group is backfilled with 500 RUs per second | - -If the `BURSTABLE` attribute is set, TiDB allows the corresponding resource group to use the available system resources when the quota is exceeded. 
- -> **Note:** -> -> The `CREATE RESOURCE GROUP` statement can only be executed when the global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) is set to `ON`. - -## Examples - -Create two resource groups `rg1` and `rg2`. - -```sql -mysql> DROP RESOURCE GROUP IF EXISTS rg1; -Query OK, 0 rows affected (0.22 sec) -mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg1 - -> RU_PER_SEC = 100 - -> BURSTABLE; -Query OK, 0 rows affected (0.08 sec) -mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg2 - -> RU_PER_SEC = 200; -Query OK, 0 rows affected (0.08 sec) -mysql> SELECT * FROM information_schema.resource_groups WHERE NAME ='rg1' or NAME = 'rg2'; -+------+-------------+-----------+ -| NAME | RU_PER_SEC | BURSTABLE | -+------+-------------+-----------+ -| rg1 | 100 | YES | -| rg2 | 200 | NO | -+------+-------------+-----------+ -2 rows in set (1.30 sec) -``` - -## MySQL compatibility - -MySQL also supports [CREATE RESOURCE GROUP](https://dev.mysql.com/doc/refman/8.0/en/create-resource-group.html). However, the acceptable parameters are different from that of TiDB so that they are not compatible. - -## See also - -* [DROP RESOURCE GROUP](/sql-statements/sql-statement-drop-resource-group.md) -* [ALTER RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md) -* [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) diff --git a/sql-statements/sql-statement-drop-resource-group.md b/sql-statements/sql-statement-drop-resource-group.md deleted file mode 100644 index 669d23b379e52..0000000000000 --- a/sql-statements/sql-statement-drop-resource-group.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -title: DROP RESOURCE GROUP -summary: Learn the usage of DROP RESOURCE GROUP in TiDB. ---- - -# DROP RESOURCE GROUP - - - -> **Note:** -> -> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). - - - -You can use the `DROP RESOURCE GROUP` statement to drop a resource group. - -## Synopsis - -```ebnf+diagram -DropResourceGroupStmt: - "DROP" "RESOURCE" "GROUP" IfExists ResourceGroupName - -IfExists ::= - ('IF' 'EXISTS')? - -ResourceGroupName: - Identifier -``` - -> **Note:** -> -> The `DROP RESOURCE GROUP` statement can only be executed when the global variable [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) is set to `ON`. - -## Examples - -Drop a resource group named `rg1`. - -```sql -mysql> DROP RESOURCE GROUP IF EXISTS rg1; -Query OK, 0 rows affected (0.22 sec) -mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg1 RU_PER_SEC = 500 BURSTABLE; -Query OK, 0 rows affected (0.08 sec) -mysql> SELECT * FROM information_schema.resource_groups WHERE NAME ='rg1'; -+------+------------+-----------+ -| NAME | RU_PER_SEC | BURSTABLE | -+------+------------+-----------+ -| rg1 | 500 | YES | -+------+------------+-----------+ -1 row in set (0.01 sec) - -mysql> DROP RESOURCE GROUP IF EXISTS rg1; -Query OK, 1 rows affected (0.09 sec) - -mysql> SELECT * FROM information_schema.resource_groups WHERE NAME ='rg1'; -Empty set (0.00 sec) -``` - -## MySQL compatibility - -MySQL also supports [DROP RESOURCE GROUP](https://dev.mysql.com/doc/refman/8.0/en/drop-resource-group.html), but TiDB does not support the `FORCE` parameter. 
- -## See also - -* [ALTER RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md) -* [CREATE RESOURCE GROUP](/sql-statements/sql-statement-create-resource-group.md) -* [Request Unit (RU)](/tidb-resource-control.md#what-is-request-unit-ru) \ No newline at end of file diff --git a/sql-statements/sql-statement-flashback-to-timestamp.md b/sql-statements/sql-statement-flashback-to-timestamp.md deleted file mode 100644 index c8159cbe17a18..0000000000000 --- a/sql-statements/sql-statement-flashback-to-timestamp.md +++ /dev/null @@ -1,129 +0,0 @@ ---- -title: FLASHBACK CLUSTER TO TIMESTAMP -summary: Learn the usage of FLASHBACK CLUSTER TO TIMESTAMP in TiDB databases. ---- - -# FLASHBACK CLUSTER TO TIMESTAMP - -TiDB v6.4.0 introduces the `FLASHBACK CLUSTER TO TIMESTAMP` syntax. You can use it to restore a cluster to a specific point in time. - - - -> **Warning:** -> -> The `FLASHBACK CLUSTER TO TIMESTAMP` syntax is not applicable to [TiDB Serverless](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta) clusters. Do not execute this statement on TiDB Serverless clusters to avoid unexpected results. - - - -> **Note:** -> -> The working principle of `FLASHBACK CLUSTER TO TIMESTAMP` is to write the old data of a specific point in time with the latest timestamp, and will not delete the current data. So before using this feature, you need to ensure that there is enough storage space for the old data and the current data. - -## Syntax - -```sql -FLASHBACK CLUSTER TO TIMESTAMP '2022-09-21 16:02:50'; -``` - -### Synopsis - -```ebnf+diagram -FlashbackToTimestampStmt ::= - "FLASHBACK" "CLUSTER" "TO" "TIMESTAMP" stringLit -``` - -## Notes - -* The time specified in the `FLASHBACK` statement must be within the Garbage Collection (GC) lifetime. The system variable [`tidb_gc_life_time`](/system-variables.md#tidb_gc_life_time-new-in-v50) (default: `10m0s`) defines the retention time of earlier versions of rows. The current `safePoint` of where garbage collection has been performed up to can be obtained with the following query: - - ```sql - SELECT * FROM mysql.tidb WHERE variable_name = 'tikv_gc_safe_point'; - ``` - - - -* Only a user with the `SUPER` privilege can execute the `FLASHBACK CLUSTER` SQL statement. -* `FLASHBACK CLUSTER` does not support rolling back DDL statements that modify PD-related information, such as `ALTER TABLE ATTRIBUTE`, `ALTER TABLE REPLICA`, and `CREATE PLACEMENT POLICY`. -* At the time specified in the `FLASHBACK` statement, there cannot be a DDL statement that is not completely executed. If such a DDL exists, TiDB will reject it. -* Before executing `FLASHBACK CLUSTER TO TIMESTAMP`, TiDB disconnects all related connections and prohibits read and write operations on these tables until the `FLASHBACK CLUSTER` statement is completed. -* The `FLASHBACK CLUSTER TO TIMESTAMP` statement cannot be canceled after being executed. TiDB will keep retrying until it succeeds. -* During the execution of `FLASHBACK CLUSTER`, if you need to back up data, you can only use [Backup & Restore](/br/br-snapshot-guide.md) and specify a `BackupTS` that is earlier than the start time of `FLASHBACK CLUSTER`. In addition, during the execution of `FLASHBACK CLUSTER`, enabling [log backup](/br/br-pitr-guide.md) will fail. Therefore, try to enable log backup after `FLASHBACK CLUSTER` is completed. -* If the `FLASHBACK CLUSTER` statement causes the rollback of metadata (table structure, database structure), the related modifications will **not** be replicated by TiCDC. 
Therefore, you need to pause the task manually, wait for the completion of `FLASHBACK CLUSTER`, and manually replicate the schema definitions of the upstream and downstream to make sure that they are consistent. After that, you need to recreate the TiCDC changefeed. - - - - - -* Only a user with the `SUPER` privilege can execute the `FLASHBACK CLUSTER` SQL statement. -* `FLASHBACK CLUSTER` does not support rolling back DDL statements that modify PD-related information, such as `ALTER TABLE ATTRIBUTE`, `ALTER TABLE REPLICA`, and `CREATE PLACEMENT POLICY`. -* At the time specified in the `FLASHBACK` statement, there cannot be a DDL statement that is not completely executed. If such a DDL exists, TiDB will reject it. -* Before executing `FLASHBACK CLUSTER TO TIMESTAMP`, TiDB disconnects all related connections and prohibits read and write operations on these tables until the `FLASHBACK CLUSTER` statement is completed. -* The `FLASHBACK CLUSTER TO TIMESTAMP` statement cannot be canceled after being executed. TiDB will keep retrying until it succeeds. -* If the `FLASHBACK CLUSTER` statement causes the rollback of metadata (table structure, database structure), the related modifications will **not** be replicated by TiCDC. Therefore, you need to pause the task manually, wait for the completion of `FLASHBACK CLUSTER`, and manually replicate the schema definitions of the upstream and downstream to make sure that they are consistent. After that, you need to recreate the TiCDC changefeed. - - - -## Example - -The following example shows how to restore the newly inserted data: - -```sql -mysql> CREATE TABLE t(a INT); -Query OK, 0 rows affected (0.09 sec) - -mysql> SELECT * FROM t; -Empty set (0.01 sec) - -mysql> SELECT now(); -+---------------------+ -| now() | -+---------------------+ -| 2022-09-28 17:24:16 | -+---------------------+ -1 row in set (0.02 sec) - -mysql> INSERT INTO t VALUES (1); -Query OK, 1 row affected (0.02 sec) - -mysql> SELECT * FROM t; -+------+ -| a | -+------+ -| 1 | -+------+ -1 row in set (0.01 sec) - -mysql> FLASHBACK CLUSTER TO TIMESTAMP '2022-09-28 17:24:16'; -Query OK, 0 rows affected (0.20 sec) - -mysql> SELECT * FROM t; -Empty set (0.00 sec) -``` - -If there is a DDL statement that is not completely executed at the time specified in the `FLASHBACK` statement, the `FLASHBACK` statement fails: - -```sql -mysql> ALTER TABLE t ADD INDEX k(a); -Query OK, 0 rows affected (0.56 sec) - -mysql> ADMIN SHOW DDL JOBS 1; -+--------+---------+-----------------------+------------------------+--------------+-----------+----------+-----------+---------------------+---------------------+---------------------+--------+ -| JOB_ID | DB_NAME | TABLE_NAME | JOB_TYPE | SCHEMA_STATE | SCHEMA_ID | TABLE_ID | ROW_COUNT | CREATE_TIME | START_TIME | END_TIME | STATE | -+--------+---------+-----------------------+------------------------+--------------+-----------+----------+-----------+---------------------+---------------------+---------------------+--------+ -| 84 | test | t | add index /* ingest */ | public | 2 | 82 | 0 | 2023-01-29 14:33:11 | 2023-01-29 14:33:11 | 2023-01-29 14:33:12 | synced | -+--------+---------+-----------------------+------------------------+--------------+-----------+----------+-----------+---------------------+---------------------+---------------------+--------+ -1 rows in set (0.01 sec) - -mysql> FLASHBACK CLUSTER TO TIMESTAMP '2023-01-29 14:33:12'; -ERROR 1105 (HY000): Detected another DDL job at 2023-01-29 14:33:12 +0800 CST, can't do flashback -``` - -Through the 
log, you can obtain the execution progress of `FLASHBACK`. The following is an example: - -``` -[2022/10/09 17:25:59.316 +08:00] [INFO] [cluster.go:463] ["flashback cluster stats"] ["complete regions"=9] ["total regions"=10] [] -``` - -## MySQL compatibility - -This statement is a TiDB extension to MySQL syntax. diff --git a/sql-statements/sql-statement-show-create-resource-group.md b/sql-statements/sql-statement-show-create-resource-group.md deleted file mode 100644 index 2c49734a91609..0000000000000 --- a/sql-statements/sql-statement-show-create-resource-group.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: SHOW CREATE RESOURCE GROUP -summary: Learn the usage of SHOW CREATE RESOURCE GROUP in TiDB. ---- - -# SHOW CREATE RESOURCE GROUP - - - -> **Note:** -> -> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). - - - -You can use the `SHOW CREATE RESOURCE GROUP` statement to view the current definition of a resource group. - -## Synopsis - -```ebnf+diagram -ShowCreateResourceGroupStmt ::= - "SHOW" "CREATE" "RESOURCE" "GROUP" ResourceGroupName - -ResourceGroupName ::= - Identifier -``` - -## Examples - -Create a resource group `rg1`. - -```sql -CREATE RESOURCE GROUP rg1 RU_PER_SEC=100; -Query OK, 0 rows affected (0.10 sec) -``` - -View the definition of `rg1`. - -```sql -SHOW CREATE RESOURCE GROUP rg1; -***************************[ 1. row ]*************************** -+----------------+--------------------------------------------+ -| Resource_Group | Create Resource Group | -+----------------+--------------------------------------------+ -| rg1 | CREATE RESOURCE GROUP `rg1` RU_PER_SEC=100 | -+----------------+--------------------------------------------+ -1 row in set (0.01 sec) -``` - -## MySQL compatibility - -This statement is a TiDB extension for MySQL. - -## See also - -* [TiDB RESOURCE CONTROL](/tidb-resource-control.md) -* [CREATE RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md) -* [ALTER RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md) -* [DROP RESOURCE GROUP](/sql-statements/sql-statement-drop-resource-group.md) diff --git a/tidb-resource-control.md b/tidb-resource-control.md deleted file mode 100644 index 91b262bbe0246..0000000000000 --- a/tidb-resource-control.md +++ /dev/null @@ -1,197 +0,0 @@ ---- -title: Use Resource Control to Achieve Resource Isolation -summary: Learn how to use the resource control feature to control and schedule application resources. ---- - -# Use Resource Control to Achieve Resource Isolation - -> **Warning:** -> -> This feature is experimental and its form and usage might change in subsequent versions. - - - -> **Note:** -> -> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). - - - -As a cluster administrator, you can use the resource control feature to create resource groups, set read and write quotas for resource groups, and bind users to those groups. This allows the TiDB layer to control the flow of user read and write requests based on the quotas set for the resource groups, and allows the TiKV layer to schedule the requests based on the priority mapped to the read and write quota. By doing this, you can ensure resource isolation for your applications and meet quality of service (QoS) requirements. 
- -The TiDB resource control feature provides two layers of resource management capabilities: the flow control capability at the TiDB layer and the priority scheduling capability at the TiKV layer. The two capabilities can be enabled separately or simultaneously. See the [Parameters for resource control](#parameters-for-resource-control) for details. - -- TiDB flow control: TiDB flow control uses the [token bucket algorithm](https://en.wikipedia.org/wiki/Token_bucket). If there are not enough tokens in a bucket, and the resource group does not specify the `BURSTABLE` option, the requests to the resource group will wait for the token bucket to backfill the tokens and retry. The retry might fail due to timeout. - - - -- TiKV scheduling: if [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) is enabled, TiKV uses the value of `RU_PER_SEC` of each resource group to determine the priority of the read and write requests for each resource group. Based on the priorities, the storage layer uses the priority queue to schedule and process requests. - - - - - -- TiKV scheduling: for TiDB Self-Hosted, if the `resource-control.enabled` parameter is enabled, TiKV uses the value of `RU_PER_SEC` of each resource group to determine the priority of the read and write requests for each resource group. Based on the priorities, the storage layer uses the priority queue to schedule and process requests. For TiDB Cloud, the value of the `resource-control.enabled` parameter is `false` by default and does not support dynamic modification. If you need to enable it for TiDB Dedicated clusters, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). - - - -## Scenarios for resource control - -The introduction of the resource control feature is a milestone for TiDB. It can divide a distributed database cluster into multiple logical units. Even if an individual unit overuses resources, it does not crowd out the resources needed by other units. - -With this feature, you can: - -- Combine multiple small and medium-sized applications from different systems into a single TiDB cluster. When the workload of an application grows larger, it does not affect the normal operation of other applications. When the system workload is low, busy applications can still be allocated the required system resources even if they exceed the set read and write quotas, so as to achieve the maximum utilization of resources. -- Choose to combine all test environments into a single TiDB cluster, or group the batch tasks that consume more resources into a single resource group. It can improve hardware utilization and reduce operating costs while ensuring that critical applications can always get the necessary resources. - -In addition, the rational use of the resource control feature can reduce the number of clusters, ease the difficulty of operation and maintenance, and save management costs. - -## What is Request Unit (RU) - -Request Unit (RU) is a unified abstraction unit in TiDB for system resources, which currently includes CPU, IOPS, and IO bandwidth metrics. The consumption of these three metrics is represented by RU according to a certain ratio. - -The following table shows the consumption of TiKV storage layer CPU and IO resources by user requests and the corresponding RU weights. 
- -| Resource | RU Weight | -|:----------------|:-----------------| -| CPU | 1/3 RU per millisecond | -| Read IO | 1/64 RU per KB | -| Write IO | 1 RU/KB | -| Basic overhead of a read request | 0.25 RU | -| Basic overhead of a write request | 1.5 RU | - -Based on the above table, assuming that the TiKV time consumed by a resource group is `c` milliseconds, `r1` times of requests read `r2` KB data, `w1` times of write requests write `w2` KB data, and the number of non-witness TiKV nodes in the cluster is `n`. Then, the formula for the total RUs consumed by the resource group is as follows: - -`c`\* 1/3 + (`r1` \* 0.25 + `r2` \* 1/64) + (1.5 \* `w1` + `w2` \* 1 \* `n`) - -## Parameters for resource control - -The resource control feature introduces two new global variables. - -* TiDB: you can use the [`tidb_enable_resource_control`](/system-variables.md#tidb_enable_resource_control-new-in-v660) system variable to control whether to enable flow control for resource groups. - - - -* TiKV: you can use the [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) parameter to control whether to use request scheduling based on resource groups. - - - - - -* TiKV: For TiDB Self-Hosted, you can use the `resource-control.enabled` parameter to control whether to use request scheduling based on resource group quotas. For TiDB Cloud, the value of the `resource-control.enabled` parameter is `false` by default and does not support dynamic modification. If you need to enable it for TiDB Dedicated clusters, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). - - - -The results of the combinations of these two parameters are shown in the following table. - -| `resource-control.enabled` | `tidb_enable_resource_control`= ON | `tidb_enable_resource_control`= OFF | -|:----------------------------|:-------------------------------------|:-------------------------------------| -| `resource-control.enabled`= true | Flow control and scheduling (recommended) | Invalid combination | -| `resource-control.enabled`= false | Only flow control (not recommended) | The feature is disabled. | - -For more information about the resource control mechanism and parameters, see [RFC: Global Resource Control in TiDB](https://github.com/pingcap/tidb/blob/master/docs/design/2022-11-25-global-resource-control.md). - -## How to use resource control - -To create, modify, or delete a resource group, you need to have the `SUPER` or `RESOURCE_GROUP_ADMIN` privilege. - -You can create a resource group in the cluster by using [`CREATE RESOURCE GROUP`](/sql-statements/sql-statement-create-resource-group.md), and then bind users to a specific resource group by using [`CREATE USER`](/sql-statements/sql-statement-create-user.md) or [`ALTER USER`](/sql-statements/sql-statement-alter-user.md). - -For an existing resource group, you can modify the `RU_PER_SEC` option (the rate of RU backfilling per second) of the resource group by using [`ALTER RESOURCE GROUP`](/sql-statements/sql-statement-alter-resource-group.md). The changes to the resource group take effect immediately. - -You can delete a resource group by using [`DROP RESOURCE GROUP`](/sql-statements/sql-statement-drop-resource-group.md). - -> **Note:** -> -> - When you bind a user to a resource group by using `CREATE USER` or `ALTER USER`, it will not take effect for the user's existing sessions, but only for the user's new sessions. 
-> - If a user is not bound to a resource group or is bound to a `default` resource group, the user's requests are not subject to the flow control restrictions of TiDB. The `default` resource group is currently not visible to the user and cannot be created or modified. You cannot view it with `SHOW CREATE RESOURCE GROUP` or `SELECT * FROM information_schema.resource_groups`. But you can view it through the `mysql.user` table. - -### Step 1. Enable the resource control feature - -Enable the resource control feature. - -```sql -SET GLOBAL tidb_enable_resource_control = 'ON'; -``` - - - -Set the TiKV [`resource-control.enabled`](/tikv-configuration-file.md#resource-control) parameter to `true`. - - - - - -For TiDB Self-Hosted, set the TiKV `resource-control.enabled` parameter to `true`. For TiDB Cloud, the value of the `resource-control.enabled` parameter is `false` by default and does not support dynamic modification. If you need to enable it for TiDB Dedicated clusters, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). - - - -### Step 2. Create a resource group, and then bind users to it - -The following is an example of how to create a resource group and bind users to it. - -1. Create a resource group `rg1`. The RU backfill rate is 500 RUs per second and allows applications in this resource group to overrun resources. - - ```sql - CREATE RESOURCE GROUP IF NOT EXISTS rg1 RU_PER_SEC = 500 BURSTABLE; - ``` - -2. Create a resource group `rg2`. The RU backfill rate is 600 RUs per second and does not allow applications in this resource group to overrun resources. - - ```sql - CREATE RESOURCE GROUP IF NOT EXISTS rg2 RU_PER_SEC = 600; - ``` - -3. Bind users `usr1` and `usr2` to resource groups `rg1` and `rg2` respectively. - - ```sql - ALTER USER usr1 RESOURCE GROUP rg1; - ``` - - ```sql - ALTER USER usr2 RESOURCE GROUP rg2; - ``` - -After you complete the above operations of creating resource groups and binding users, the resource consumption of newly created sessions will be controlled by the specified quota. If the system workload is relatively high and there is no spare capacity, the resource consumption rate of `usr2` will be strictly controlled not to exceed the quota. Because `usr1` is bound by `rg1` with `BURSTABLE` configured, the consumption rate of `usr1` is allowed to exceed the quota. - -If there are too many requests that result in insufficient resources for the resource group, the client's requests will wait. If the wait time is too long, the requests will report an error. - -## Monitoring metrics and charts - - - -TiDB regularly collects runtime information about resource control and provides visual charts of the metrics in Grafana's **TiDB** > **Resource Control** dashboard. The metrics are detailed in the **Resource Control** section of [TiDB Important Monitoring Metrics](/grafana-tidb-dashboard.md). - -TiKV also records the request QPS from different resource groups. For more details, see [TiKV Monitoring Metrics Detail](/grafana-tikv-dashboard.md#grpc). - - - - - -> **Note:** -> -> This section is only applicable to TiDB Self-Hosted. Currently, TiDB Cloud does not provide resource control metrics. - -TiDB regularly collects runtime information about resource control and provides visual charts of the metrics in Grafana's **TiDB** > **Resource Control** dashboard. - -TiKV also records the request QPS from different resource groups in Grafana's **TiKV** dashboard. 
- - - -## Tool compatibility - -The resource control feature is still in its experimental stage and does not impact the regular usage of data import, export, and other replication tools. BR, TiDB Lightning, and TiCDC do not currently support processing DDL operations related to resource control, and their resource consumption is not limited by resource control. - -## Limitations - -Currently, the resource control feature has the following limitations: - -* This feature only supports flow control and scheduling of read and write requests initiated by foreground clients. It does not support flow control and scheduling of background tasks such as DDL operations and auto analyze. -* Resource control incurs additional scheduling overhead. Therefore, there might be a slight performance degradation when this feature is enabled. - -## See also - -* [CREATE RESOURCE GROUP](/sql-statements/sql-statement-create-resource-group.md) -* [ALTER RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md) -* [DROP RESOURCE GROUP](/sql-statements/sql-statement-drop-resource-group.md) -* [RESOURCE GROUP RFC](https://github.com/pingcap/tidb/blob/master/docs/design/2022-11-25-global-resource-control.md) diff --git a/time-to-live.md b/time-to-live.md deleted file mode 100644 index e595965e8b9fe..0000000000000 --- a/time-to-live.md +++ /dev/null @@ -1,284 +0,0 @@ ---- -title: Periodically Delete Data Using TTL (Time to Live) -summary: Time to live (TTL) is a feature that allows you to manage TiDB data lifetime at the row level. In this document, you can learn how to use TTL to automatically expire and delete old data. ---- - -# Periodically Delete Expired Data Using TTL (Time to Live) - -Time to live (TTL) is a feature that allows you to manage TiDB data lifetime at the row level. For a table with the TTL attribute, TiDB automatically checks data lifetime and deletes expired data at the row level. This feature can effectively save storage space and enhance performance in some scenarios. - -The following are some common scenarios for TTL: - -* Regularly delete verification codes and short URLs. -* Regularly delete unnecessary historical orders. -* Automatically delete intermediate results of calculations. - -TTL is designed to help users clean up unnecessary data periodically and in a timely manner without affecting the online read and write workloads. TTL concurrently dispatches different jobs to different TiDB nodes to delete data in parallel in the unit of table. TTL does not guarantee that all expired data is deleted immediately, which means that even if some data is expired, the client might still read that data some time after the expiration time until that data is deleted by the background TTL job. - -> **Warning:** -> -> This is an experimental feature. It is not recommended that you use it in a production environment. -> TTL is not available for [TiDB Serverless](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). - -## Syntax - -You can configure the TTL attribute of a table using the [`CREATE TABLE`](/sql-statements/sql-statement-create-table.md) or [`ALTER TABLE`](/sql-statements/sql-statement-alter-table.md) statement. 
- -### Create a table with a TTL attribute - -- Create a table with a TTL attribute: - - ```sql - CREATE TABLE t1 ( - id int PRIMARY KEY, - created_at TIMESTAMP - ) TTL = `created_at` + INTERVAL 3 MONTH; - ``` - - The preceding example creates a table `t1` and specifies `created_at` as the TTL timestamp column, which indicates the creation time of the data. The example also sets the longest time that a row is allowed to live in the table to 3 months through `INTERVAL 3 MONTH`. Data that lives longer than this value will be deleted later. - -- Set the `TTL_ENABLE` attribute to enable or disable the feature of cleaning up expired data: - - ```sql - CREATE TABLE t1 ( - id int PRIMARY KEY, - created_at TIMESTAMP - ) TTL = `created_at` + INTERVAL 3 MONTH TTL_ENABLE = 'OFF'; - ``` - - If `TTL_ENABLE` is set to `OFF`, even if other TTL options are set, TiDB does not automatically clean up expired data in this table. For a table with the TTL attribute, `TTL_ENABLE` is `ON` by default. - -- To be compatible with MySQL, you can set a TTL attribute using a comment: - - ```sql - CREATE TABLE t1 ( - id int PRIMARY KEY, - created_at TIMESTAMP - ) /*T![ttl] TTL = `created_at` + INTERVAL 3 MONTH TTL_ENABLE = 'OFF'*/; - ``` - - In TiDB, using the table TTL attribute or using comments to configure TTL is equivalent. In MySQL, the comment is ignored and an ordinary table is created. - -### Modify the TTL attribute of a table - -- Modify the TTL attribute of a table: - - ```sql - ALTER TABLE t1 TTL = `created_at` + INTERVAL 1 MONTH; - ``` - - You can use the preceding statement to modify a table with an existing TTL attribute or to add a TTL attribute to a table without a TTL attribute. - -- Modify the value of `TTL_ENABLE` for a table with the TTL attribute: - - ```sql - ALTER TABLE t1 TTL_ENABLE = 'OFF'; - ``` - -- To remove all TTL attributes of a table: - - ```sql - ALTER TABLE t1 REMOVE TTL; - ``` - -### TTL and the default values of data types - -You can use TTL together with [default values of the data types](/data-type-default-values.md). The following are two common usage examples: - -* Use `DEFAULT CURRENT_TIMESTAMP` to specify the default value of a column as the current creation time and use this column as the TTL timestamp column. Records that were created 3 months ago are expired: - - ```sql - CREATE TABLE t1 ( - id int PRIMARY KEY, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP - ) TTL = `created_at` + INTERVAL 3 MONTH; - ``` - -* Specify the default value of a column as the creation time or the latest update time and use this column as the TTL timestamp column. Records that have not been updated for 3 months are expired: - - ```sql - CREATE TABLE t1 ( - id int PRIMARY KEY, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP - ) TTL = `created_at` + INTERVAL 3 MONTH; - ``` - -### TTL and generated columns - -You can use TTL together with [generated columns](/generated-columns.md) (experimental feature) to configure complex expiration rules. For example: - -```sql -CREATE TABLE message ( - id int PRIMARY KEY, - created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, - image bool, - expire_at TIMESTAMP AS (IF(image, - created_at + INTERVAL 5 DAY, - created_at + INTERVAL 30 DAY - )) -) TTL = `expire_at` + INTERVAL 0 DAY; -``` - -The preceding statement uses the `expire_at` column as the TTL timestamp column and sets the expiration time according to the message type. If the message is an image, it expires in 5 days. Otherwise, it expires in 30 days. 
- -You can use TTL together with the [JSON type](/data-type-json.md). For example: - -```sql -CREATE TABLE orders ( - id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, - order_info JSON, - created_at DATE AS (JSON_EXTRACT(order_info, '$.created_at')) VIRTUAL -) TTL = `created_at` + INTERVAL 3 month; -``` - -## TTL job - -For each table with a TTL attribute, TiDB internally schedules a background job to clean up expired data. You can customize the execution period of these jobs by setting the `TTL_JOB_INTERVAL` attribute for the table. The following example sets the background cleanup jobs for the table `orders` to run once every 24 hours: - -```sql -ALTER TABLE orders TTL_JOB_INTERVAL = '24h'; -``` - -`TTL_JOB_INTERVAL` is set to `1h` by default. - -To disable the execution of TTL jobs, in addition to setting the `TTL_ENABLE='OFF'` table option, you can also disable the execution of TTL jobs in the entire cluster by setting the [`tidb_ttl_job_enable`](/system-variables.md#tidb_ttl_job_enable-new-in-v650) global variable: - -```sql -SET @@global.tidb_ttl_job_enable = OFF; -``` - -In some scenarios, you might want to allow TTL jobs to run only in a certain time window. In this case, you can set the [`tidb_ttl_job_schedule_window_start_time`](/system-variables.md#tidb_ttl_job_schedule_window_start_time-new-in-v650) and [`tidb_ttl_job_schedule_window_end_time`](/system-variables.md#tidb_ttl_job_schedule_window_end_time-new-in-v650) global variables to specify the time window. For example: - -```sql -SET @@global.tidb_ttl_job_schedule_window_start_time = '01:00 +0000'; -SET @@global.tidb_ttl_job_schedule_window_end_time = '05:00 +0000'; -``` - -The preceding statement allows TTL jobs to be scheduled only between 1:00 and 5:00 UTC. By default, the time window is set to `00:00 +0000` to `23:59 +0000`, which allows the jobs to be scheduled at any time. - -## Observability - - - -> **Note:** -> -> This section is only applicable to TiDB Self-Hosted. Currently, TiDB Cloud does not provide TTL metrics. - - - -TiDB collects runtime information about TTL periodically and provides visualized charts of these metrics in Grafana. You can see these metrics in the TiDB -> TTL panel in Grafana. - - - -For details of the metrics, see the TTL section in [TiDB Monitoring Metrics](/grafana-tidb-dashboard.md). - - - -In addition, TiDB provides three tables to obtain more information about TTL jobs: - -+ The `mysql.tidb_ttl_table_status` table contains information about the previously executed TTL job and ongoing TTL job for all TTL tables - - ```sql - MySQL [(none)]> SELECT * FROM mysql.tidb_ttl_table_status LIMIT 1\G; - *************************** 1. 
row *************************** - table_id: 85 - parent_table_id: 85 - table_statistics: NULL - last_job_id: 0b4a6d50-3041-4664-9516-5525ee6d9f90 - last_job_start_time: 2023-02-15 20:43:46 - last_job_finish_time: 2023-02-15 20:44:46 - last_job_ttl_expire: 2023-02-15 19:43:46 - last_job_summary: {"total_rows":4369519,"success_rows":4369519,"error_rows":0,"total_scan_task":64,"scheduled_scan_task":64,"finished_scan_task":64} - current_job_id: NULL - current_job_owner_id: NULL - current_job_owner_addr: NULL - current_job_owner_hb_time: NULL - current_job_start_time: NULL - current_job_ttl_expire: NULL - current_job_state: NULL - current_job_status: NULL - current_job_status_update_time: NULL - 1 row in set (0.040 sec) - ``` - - The column `table_id` is the ID of the partitioned table, and the `parent_table_id` is the ID of the table, corresponding with the ID in `infomation_schema.tables`. If the table is not a partitioned table, the two IDs are the same. - - The columns `{last, current}_job_{start_time, finish_time, ttl_expire}` describe respectively the start time, finish time, and expiration time used by the TTL job of the last or current execution. The `last_job_summary` column describes the execution status of the last TTL task, including the total number of rows, the number of successful rows, and the number of failed rows. - -+ The `mysql.tidb_ttl_task` table contains information about the ongoing TTL subtasks. A TTL job is split into many subtasks, and this table records the subtasks that are currently being executed. -+ The `mysql.tidb_ttl_job_history` table contains information about the TTL jobs that have been executed. The record of TTL job history is kept for 90 days. - - ```sql - MySQL [(none)]> SELECT * FROM mysql.tidb_ttl_job_history LIMIT 1\G; - *************************** 1. row *************************** - job_id: f221620c-ab84-4a28-9d24-b47ca2b5a301 - table_id: 85 - parent_table_id: 85 - table_schema: test_schema - table_name: TestTable - partition_name: NULL - create_time: 2023-02-15 17:43:46 - finish_time: 2023-02-15 17:45:46 - ttl_expire: 2023-02-15 16:43:46 - summary_text: {"total_rows":9588419,"success_rows":9588419,"error_rows":0,"total_scan_task":63,"scheduled_scan_task":63,"finished_scan_task":63} - expired_rows: 9588419 - deleted_rows: 9588419 - error_delete_rows: 0 - status: finished - ``` - - The column `table_id` is the ID of the partitioned table, and the `parent_table_id` is the ID of the table, corresponding with the ID in `infomation_schema.tables`. `table_schema`, `table_name`, and `partition_name` correspond to the database, table name, and partition name. `create_time`, `finish_time`, and `ttl_expire` indicate the creation time, end time, and expiration time of the TTL task. `expired_rows` and `deleted_rows` indicate the number of expired rows and the number of rows deleted successfully. - -## Compatibility with TiDB tools - -As an experimental feature, the TTL feature is not compatible with data import and export tools, including BR, TiDB Lightning, and TiCDC. - -## Limitations - -Currently, the TTL feature has the following limitations: - -* The TTL attribute cannot be set on temporary tables, including local temporary tables and global temporary tables. -* A table with the TTL attribute does not support being referenced by other tables as the primary table in a foreign key constraint. -* It is not guaranteed that all expired data is deleted immediately. 
-
-## FAQs
-
-
-
-- How can I determine whether the deletion is fast enough to keep the data size relatively stable?
-
-    In the [Grafana `TiDB` dashboard](/grafana-tidb-dashboard.md), the panel `TTL Insert Rows Per Hour` records the total number of rows inserted in the previous hour. The corresponding panel `TTL Delete Rows Per Hour` records the total number of rows deleted by TTL tasks in the previous hour. If `TTL Insert Rows Per Hour` is higher than `TTL Delete Rows Per Hour` for a long time, it means that the rate of insertion is higher than the rate of deletion and the total amount of data will increase. For example:
-
-    ![insert fast example](/media/ttl/insert-fast.png)
-
-    It is worth noting that because TTL does not guarantee that expired rows are deleted immediately, and because the rows currently inserted will be deleted in a future TTL task, a deletion speed that is lower than the insertion speed for a short period of time does not necessarily mean that TTL is too slow. You need to consider the situation in its context.
-
-- How can I determine whether the bottleneck of a TTL task is in scanning or deleting?
-
-    Look at the `TTL Scan Worker Time By Phase` and `TTL Delete Worker Time By Phase` panels. If the scan worker is in the `dispatch` phase for a large percentage of time and the delete worker is rarely in the `idle` phase, the scan worker is waiting for the delete worker to finish the deletion. If the cluster resources are still free at this point, you can consider increasing `tidb_ttl_delete_worker_count` to increase the number of delete workers. For example:
-
-    ![scan fast example](/media/ttl/scan-fast.png)
-
-    In contrast, if the scan worker is rarely in the `dispatch` phase and the delete worker is in the `idle` phase for a long time, the scan worker is relatively busy. For example:
-
-    ![delete fast example](/media/ttl/delete-fast.png)
-
-    The percentage of time spent on scanning and deleting in TTL jobs is related to the machine configuration and data distribution, so the monitoring data at each moment is only representative of the TTL jobs being executed. You can read the table `mysql.tidb_ttl_job_history` to determine which TTL job is running at a certain moment and the corresponding table of the job.
-
-- How to configure `tidb_ttl_scan_worker_count` and `tidb_ttl_delete_worker_count` properly?
-
-    1. Refer to the question "How can I determine whether the bottleneck of a TTL task is in scanning or deleting?" to consider whether to increase the value of `tidb_ttl_scan_worker_count` or `tidb_ttl_delete_worker_count`.
-    2. If the number of TiKV nodes is high, increasing the value of `tidb_ttl_scan_worker_count` can make the TTL task workload more balanced.
-
-    Because too many TTL workers put a lot of pressure on the cluster, you need to evaluate the CPU level of TiDB and the disk and CPU usage of TiKV together. Depending on different scenarios and needs (whether you need to speed up TTL as much as possible, or to reduce the impact of TTL on other queries), you can adjust the values of `tidb_ttl_scan_worker_count` and `tidb_ttl_delete_worker_count` to improve the speed of TTL scanning and deleting or to reduce the performance impact brought by TTL tasks.
-
-
-
-
-
-- How to configure `tidb_ttl_scan_worker_count` and `tidb_ttl_delete_worker_count` properly?
- - If the number of TiKV nodes is high, increase the value of `tidb_ttl_scan_worker_count` can make the TTL task workload more balanced. - - But too many TTL workers will cause a lot of pressure, you need to evaluate the CPU level of TiDB and the disk and CPU usage of TiKV together. Depending on different scenarios and needs (whether you need to speed up TTL as much as possible, or to reduce the impact of TTL on other queries), you can adjust the value of `tidb_ttl_scan_worker_count` and `tidb_ttl_delete_worker_count` to improve the speed of TTL scanning and deleting or reduce the performance impact brought by TTL tasks. - - From 56559e7a3ea472c1d8c5c714dc36df1600ee3357 Mon Sep 17 00:00:00 2001 From: qiancai Date: Fri, 2 Jun 2023 14:43:48 +0800 Subject: [PATCH 3/6] resolve conflicts --- br/backup-and-restore-overview.md | 33 -------- develop/dev-guide-build-cluster-in-cloud.md | 12 +-- develop/dev-guide-proxysql-integration.md | 14 +--- .../information-schema-slow-query.md | 11 --- statement-summary-tables.md | 66 --------------- statistics.md | 5 -- system-variables.md | 83 ------------------- 7 files changed, 6 insertions(+), 218 deletions(-) diff --git a/br/backup-and-restore-overview.md b/br/backup-and-restore-overview.md index 4e79f3c846d88..d4fab28e69514 100644 --- a/br/backup-and-restore-overview.md +++ b/br/backup-and-restore-overview.md @@ -18,40 +18,7 @@ Each TiKV node has a path in which the backup files generated in the backup oper ![br-arch](/media/br-arch.png) -<<<<<<< HEAD For detailed information about the BR design, see [BR Design Principles](/br/backup-and-restore-design.md). -======= -- PITR only supports restoring data to **an empty cluster**. -- PITR only supports cluster-level restore and does not support database-level or table-level restore. -- PITR does not support restoring the data of user tables or privilege tables from system tables. -- BR does not support running multiple backup tasks on a cluster **at the same time**. -- When a PITR is running, you cannot run a log backup task or use TiCDC to replicate data to a downstream cluster. - -### Some tips - -Snapshot backup: - -- It is recommended that you perform the backup operation during off-peak hours to minimize the impact on applications. -- It is recommended that you execute multiple backup or restore tasks one by one. Running multiple backup tasks in parallel leads to low performance. Worse still, a lack of collaboration between multiple tasks might result in task failures and affect cluster performance. - -Snapshot restore: - -- BR uses resources of the target cluster as much as possible. Therefore, it is recommended that you restore data to a new cluster or an offline cluster. Avoid restoring data to a production cluster. Otherwise, your application will be affected inevitably. - -Backup storage and network configuration: - -- It is recommended that you store backup data to a storage system that is compatible with Amazon S3, GCS, or Azure Blob Storage. -- You need to ensure that BR, TiKV, and the backup storage system have enough network bandwidth, and that the backup storage system can provide sufficient read and write performance (IOPS). Otherwise, they might become a performance bottleneck during backup and restore. - -## Use backup and restore - -The way to use BR varies with the deployment method of TiDB. This document introduces how to use the br command-line tool to back up and restore TiDB cluster data in a self-hosted deployment. 
- -For information about how to use this feature in other deployment scenarios, see the following documents: - -- [Back Up and Restore TiDB Deployed on TiDB Cloud](https://docs.pingcap.com/tidbcloud/backup-and-restore): It is recommended that you create TiDB clusters on [TiDB Cloud](https://www.pingcap.com/tidb-cloud/?from=en). TiDB Cloud offers fully managed databases to let you focus on your applications. -- [Back Up and Restore Data Using TiDB Operator](https://docs.pingcap.com/tidb-in-kubernetes/stable/backup-restore-overview): If you deploy a TiDB cluster using TiDB Operator on Kubernetes, it is recommended to back up and restore data using Kubernetes CustomResourceDefinition (CRD). ->>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) ## BR features diff --git a/develop/dev-guide-build-cluster-in-cloud.md b/develop/dev-guide-build-cluster-in-cloud.md index 939e55a034e4c..6fab1a152481f 100644 --- a/develop/dev-guide-build-cluster-in-cloud.md +++ b/develop/dev-guide-build-cluster-in-cloud.md @@ -29,18 +29,12 @@ This document walks you through the quickest way to get started with TiDB Cloud. The [**Clusters**](https://tidbcloud.com/console/clusters) list page is displayed by default. -<<<<<<< HEAD -3. For new sign-up users, TiDB Cloud creates a default Serverless Tier cluster `Cluster0` for you automatically. You can either use this default cluster for the subsequent steps or create a new Serverless Tier cluster on your own. +3. For new sign-up users, TiDB Cloud creates a default TiDB Serverless cluster `Cluster0` for you automatically. You can either use this default cluster for the subsequent steps or create a new TiDB Serverless cluster on your own. - To create a new Serverless Tier cluster on your own, take the following operations: -======= -4. On the **Create Cluster** page, **Serverless** is selected by default. Update the default cluster name if necessary, and then select the region where you want to create your cluster. - -5. Click **Create** to create a TiDB Serverless cluster. ->>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) + To create a new TiDB Serverless cluster on your own, take the following operations: 1. Click **Create Cluster**. - 2. On the **Create Cluster** page, **Serverless Tier** is selected by default. Update the default cluster name if necessary, select a target region of your cluster, and then click **Create**. Your Serverless Tier cluster will be created in approximately 30 seconds. + 2. On the **Create Cluster** page, **Serverless** is selected by default. Update the default cluster name if necessary, select a target region of your cluster, and then click **Create**. Your TiDB Serverless cluster will be created in approximately 30 seconds. 4. Click the target cluster name to go to its overview page, and then click **Connect** in the upper-right corner. A connection dialog box is displayed. diff --git a/develop/dev-guide-proxysql-integration.md b/develop/dev-guide-proxysql-integration.md index cbd15c6103e73..3c391d2ec2eb4 100644 --- a/develop/dev-guide-proxysql-integration.md +++ b/develop/dev-guide-proxysql-integration.md @@ -39,7 +39,7 @@ The most obvious way to deploy ProxySQL with TiDB is to add ProxySQL as a standa This section describes how to integrate TiDB with ProxySQL in a development environment. To get started with the ProxySQL integration, you can choose either of the following options depending on your TiDB cluster type after you have all the [prerequisites](#prerequisite) in place. 
-- Option 1: [Integrate TiDB Cloud Serverless Tier with ProxySQL](#option-1-integrate-tidb-cloud-serverless-tier-with-proxysql) +- Option 1: [Integrate TiDB Serverless with ProxySQL](#option-1-integrate-tidb-cloud-serverless-tier-with-proxysql) - Option 2: [Integrate TiDB (self-hosted) with ProxySQL](#option-2-integrate-tidb-self-hosted-with-proxysql) ### Prerequisites @@ -117,23 +117,15 @@ systemctl start docker -### Option 1: Integrate TiDB Cloud Serverless Tier with ProxySQL +### Option 1: Integrate TiDB Serverless with ProxySQL For this integration, you will be using the [ProxySQL Docker image](https://hub.docker.com/r/proxysql/proxysql) along with a TiDB Serverless cluster. The following steps will set up ProxySQL on port `16033`, so make sure this port is available. #### Step 1. Create a TiDB Serverless cluster -<<<<<<< HEAD -1. [Create a free TiDB Serverless Tier cluster](https://docs.pingcap.com/tidbcloud/tidb-cloud-quickstart#step-1-create-a-tidb-cluster). +1. [Create a free TiDB Serverless cluster](https://docs.pingcap.com/tidbcloud/tidb-cloud-quickstart#step-1-create-a-tidb-cluster). 2. Follow the steps in [Connect via Standard Connection](https://docs.pingcap.com/tidbcloud/connect-via-standard-connection#serverless-tier) to get the connection string and set a password for your cluster. 3. In the connection string, locate your cluster endpoint after `-h`, your user name after `-u`, and your cluster port after `-P`. -======= -1. [Create a free TiDB Serverless cluster](https://docs.pingcap.com/tidbcloud/tidb-cloud-quickstart#step-1-create-a-tidb-cluster). Remember the root password that you set for your cluster. -2. Get your cluster hostname, port, and username for later use. - - 1. On the [Clusters](https://tidbcloud.com/console/clusters) page, click your cluster name to go to the cluster overview page. - 2. On the cluster overview page, locate the **Connection** pane, and then copy the `Endpoint`, `Port`, and `User` fields, where the `Endpoint` is your cluster hostname. ->>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) #### Step 2. Generate ProxySQL configuration files diff --git a/information-schema/information-schema-slow-query.md b/information-schema/information-schema-slow-query.md index 1f70505eba0e9..56d5d3c5bf121 100644 --- a/information-schema/information-schema-slow-query.md +++ b/information-schema/information-schema-slow-query.md @@ -7,17 +7,6 @@ summary: Learn the `SLOW_QUERY` information_schema table. The `SLOW_QUERY` table provides the slow query information of the current node, which is the parsing result of the TiDB slow log file. The column names in the table are corresponding to the field names in the slow log. -<<<<<<< HEAD -======= - - -> **Note:** -> -> The `SLOW_QUERY` table is unavailable for [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). - - - ->>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) For how to use this table to identify problematic statements and improve query performance, see [Slow Query Log Document](/identify-slow-queries.md). 
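For quick reference, the `SLOW_QUERY` table described above is typically queried as in the following sketch. The column names (`query_time`, `db`, `query`, `is_internal`) are assumed from the slow log fields, and the `LIMIT` value is only illustrative:

```sql
-- List the ten slowest non-internal statements captured on the current TiDB node.
SELECT query_time, db, query
FROM information_schema.slow_query
WHERE is_internal = false
ORDER BY query_time DESC
LIMIT 10;
```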
diff --git a/statement-summary-tables.md b/statement-summary-tables.md index 68d93b33226a7..8f1574f8ed4e4 100644 --- a/statement-summary-tables.md +++ b/statement-summary-tables.md @@ -15,17 +15,6 @@ Therefore, starting from v4.0.0-rc.1, TiDB provides system tables in `informatio - [`cluster_statements_summary_history`](#statements_summary_evicted) - [`statements_summary_evicted`](#statements_summary_evicted) -<<<<<<< HEAD -======= - - -> **Note:** -> -> The following tables are unavailable for [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta): `statements_summary`, `statements_summary_history`, `cluster_statements_summary`, and `cluster_statements_summary_history`. - - - ->>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) This document details these tables and introduces how to use them to troubleshoot SQL performance issues. ## `statements_summary` @@ -193,62 +182,7 @@ From the result above, you can see that a maximum of 59 SQL categories are evict The statement summary tables have the following limitation: -<<<<<<< HEAD All data of the statement summary tables above will be lost when the TiDB server is restarted. This is because statement summary tables are all memory tables, and the data is cached in memory instead of being persisted on storage. -======= - - -To address this issue, TiDB v6.6.0 experimentally introduces the [statement summary persistence](#persist-statements-summary) feature, which is disabled by default. After this feature is enabled, the history data is no longer saved in memory, but directly written to disks. In this way, the history data is still available if a TiDB server restarts. - - - -## Persist statements summary - - - -This section is only applicable to TiDB Self-Hosted. For TiDB Cloud, the value of the `tidb_stmt_summary_enable_persistent` parameter is `false` by default and does not support dynamic modification. - - - -> **Warning:** -> -> Statements summary persistence is an experimental feature. It is not recommended that you use it in the production environment. This feature might be changed or removed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub. - - - -As described in the [Limitation](#limitation) section, statements summary tables are saved in memory by default. Once a TiDB server restarts, all the statements summary will be lost. Starting from v6.6.0, TiDB experimentally provides the configuration item [`tidb_stmt_summary_enable_persistent`](/tidb-configuration-file.md#tidb_stmt_summary_enable_persistent-new-in-v660) to allow users to enable or disable statements summary persistence. - - - - - -As described in the [Limitation](#limitation) section, statements summary tables are saved in memory by default. Once a TiDB server restarts, all the statements summary will be lost. Starting from v6.6.0, TiDB experimentally provides the configuration item `tidb_stmt_summary_enable_persistent` to allow users to enable or disable statements summary persistence. - - - -To enable statements summary persistence, you can add the following configuration items to the TiDB configuration file: - -```toml -[instance] -tidb_stmt_summary_enable_persistent = true -# The following entries use the default values, which can be modified as needed. 
-# tidb_stmt_summary_filename = "tidb-statements.log" -# tidb_stmt_summary_file_max_days = 3 -# tidb_stmt_summary_file_max_size = 64 # MiB -# tidb_stmt_summary_file_max_backups = 0 -``` - -After statements summary persistence is enabled, the memory keeps only the current real-time data and no history data. Once the real-time data is refreshed as history data, the history data is written to the disk at an interval of `tidb_stmt_summary_refresh_interval` described in the [Parameter configuration](#parameter-configuration) section. Queries on the `statements_summary_history` or `cluster_statements_summary_history` table will return results combining both in-memory and on-disk data. - - - -> **Note:** -> -> - When statements summary persistence is enabled, the `tidb_stmt_summary_history_size` configuration described in the [Parameter configuration](#parameter-configuration) section will no longer take effect because the memory does not keep the history data. Instead, the following three configurations will be used to control the retention period and size of history data for persistence: [`tidb_stmt_summary_file_max_days`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_days-new-in-v660), [`tidb_stmt_summary_file_max_size`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_size-new-in-v660), and [`tidb_stmt_summary_file_max_backups`](/tidb-configuration-file.md#tidb_stmt_summary_file_max_backups-new-in-v660). -> - The smaller the value of `tidb_stmt_summary_refresh_interval`, the more immediate data is written to the disk. However, this also means more redundant data is written to the disk. - - ->>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) ## Troubleshooting examples diff --git a/statistics.md b/statistics.md index 176509c24f0ce..3d61d68035d6b 100644 --- a/statistics.md +++ b/statistics.md @@ -11,12 +11,7 @@ TiDB uses statistics to decide [which index to choose](/choose-index.md). The `t In versions earlier than v5.1.0, the default value of this variable is `1`. In v5.3.0 and later versions, the default value of this variable is `2`. If your cluster is upgraded from a version earlier than v5.3.0 to v5.3.0 or later, the default value of `tidb_analyze_version` does not change. -<<<<<<< HEAD -======= -- For TiDB Self-Hosted, the default value of this variable is `1` before v5.1.0. In v5.3.0 and later versions, the default value of this variable is `2`. If your cluster is upgraded from a version earlier than v5.3.0 to v5.3.0 or later, the default value of `tidb_analyze_version` does not change. -- For TiDB Cloud, the default value of this variable is `1`. ->>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) diff --git a/system-variables.md b/system-variables.md index d5792969c24ad..810a50114e016 100644 --- a/system-variables.md +++ b/system-variables.md @@ -658,7 +658,6 @@ MPP is a distributed computing framework provided by the TiFlash engine, which a - Scope: SESSION | GLOBAL - Persists to cluster: Yes - Type: Integer -<<<<<<< HEAD @@ -672,9 +671,6 @@ MPP is a distributed computing framework provided by the TiFlash engine, which a -======= -- Default value: `2` for TiDB Self-Hosted and `1` for TiDB Cloud ->>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) - Range: `[1, 2]` - Controls how TiDB collects statistics. @@ -919,85 +915,6 @@ Constraint checking is always performed in place for pessimistic transactions (d - Default value: `0` - This variable is read-only. It is used to obtain the timestamp of the current transaction. 
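The read-only variable described above (assumed here to be `tidb_current_ts`, inferred from its description) is usually inspected inside an explicit transaction, for example:

```sql
-- Sketch only: the variable returns 0 outside a transaction
-- and the current transaction's timestamp inside one.
BEGIN;
SELECT @@tidb_current_ts;
COMMIT;
```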
-<<<<<<< HEAD -======= -### tidb_ddl_disk_quota New in v6.3.0 - - - -> **Note:** -> -> This TiDB variable is not applicable to TiDB Cloud. Do not change the default value of this variable for TiDB Cloud. - - - -- Scope: GLOBAL -- Persists to cluster: Yes -- Type: Integer -- Default value: `107374182400` (100 GiB) -- Range: `[107374182400, 1125899906842624]` ([100 GiB, 1 PiB]) -- Unit: Bytes -- This variable only takes effect when [`tidb_ddl_enable_fast_reorg`](#tidb_ddl_enable_fast_reorg-new-in-v630) is enabled. It sets the usage limit of local storage during backfilling when creating an index. - -### tidb_ddl_enable_fast_reorg New in v6.3.0 - - - -> **Note:** -> -> To improve the speed for index creation using this variable, make sure that your TiDB cluster is hosted on AWS and your TiDB node size is at least 8 vCPU. For [TiDB Serverless](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta) clusters, this feature is unavailable. - - - -- Scope: GLOBAL -- Persists to cluster: Yes -- Type: Boolean -- Default value: `ON` -- This variable controls whether to enable the acceleration of `ADD INDEX` and `CREATE INDEX` to improve the speed of backfilling for index creation. Setting this variable value to `ON` can bring performance improvement for index creation on tables with a large amount of data. -- To verify whether a completed `ADD INDEX` operation is accelerated, you can execute the [`ADMIN SHOW DDL JOBS`](/sql-statements/sql-statement-admin-show-ddl.md#admin-show-ddl-jobs) statement to see whether `ingest` is displayed in the `JOB_TYPE` column. - - - -> **Warning:** -> -> Currently, this feature is not fully compatible with adding a unique index. When adding a unique index, it is recommended to disable the index acceleration feature (set `tidb_ddl_enable_fast_reorg` to `OFF`). -> -> When [PITR (Point-in-time recovery)](/br/backup-and-restore-overview.md) is disabled, the speed of adding indexes is expected to be about 10 times that in v6.1.0. However, there is no performance improvement when both PITR and index acceleration are enabled. To optimize performance, it is recommended that you disable PITR, add indexes in a quick way, then enable PITR and perform a full backup. Otherwise, the following behaviors might occur: -> -> - When PITR starts working first, the index adding job automatically falls back to the legacy mode by default, even if the configuration is set to `ON`. The index is added slowly. -> - When the index adding job starts first, it prevents the log backup job of PITR from starting by throwing an error, which does not affect the index adding job in progress. After the index adding job is completed, you need to restart the log backup job and perform a full backup manually. -> - When a log backup job of PITR and an index adding job start at the same time, no error is prompted because the two jobs are unable to detect each other. PITR does not back up the newly added index. After the index adding job is completed, you still need to restart the log backup job and perform a full backup manually. - - - - - -> **Warning:** -> -> Currently, this feature is not fully compatible with [altering multiple columns or indexes in a single `ALTER TABLE` statement](/sql-statements/sql-statement-alter-table.md). When adding a unique index with the index acceleration, you need to avoid altering other columns or indexes in the same statement. 
-> -> When [PITR (Point-in-time recovery)](/tidb-cloud/backup-and-restore.md) is disabled, the speed of adding indexes is expected to be about 10 times that in v6.1.0. However, there is no performance improvement when both PITR and index acceleration are enabled. To optimize performance, it is recommended that you disable PITR, add indexes in a quick way, then enable PITR and perform a full backup. Otherwise, the following expected behaviors might occur: -> -> - When PITR starts working first, the index adding job automatically falls back to the legacy mode by default, even if the configuration is set to `ON`. The index is added slowly. -> - When the index adding job starts first, it prevents the log backup job of PITR from starting by throwing an error, which does not affect the index adding job in progress. After the index adding job is completed, you need to restart the log backup job and perform a full backup manually. -> - When a log backup job of PITR and an index adding job start at the same time, no error is prompted because the two jobs are unable to detect each other. PITR does not back up the newly added index. After the index adding job is completed, you still need to restart the log backup job and perform a full backup manually. - - - -### tidb_ddl_distribute_reorg New in v6.6.0 - -> **Warning:** -> -> - This feature is still in the experimental stage. It is not recommended to enable this feature in production environments. -> - When this feature is enabled, TiDB only performs simple retries when an exception occurs during the DDL reorg phase. There is currently no retry method that is compatible with DDL operations. That is, you cannot control the number of retries using [`tidb_ddl_error_count_limit`](#tidb_ddl_error_count_limit). - -- Scope: GLOBAL -- Persists to cluster: Yes -- Default value: `OFF` -- This variable is used to control whether to enable distributed execution of the DDL reorg phase to improve the speed of this phase. Currently, this variable is only valid for the `ADD INDEX` statement. Enabling this variable improves the performance of large tables. Distributed DDL execution can control the CPU usage of DDL through dynamic DDL resource management to prevent DDL from affecting the online application. -- To verify whether a completed `ADD INDEX` operation is accelerated by this feature, you can check whether a corresponding task is in the `mysql.tidb_background_subtask_history` table. - ->>>>>>> 8eee4b162 (tidb: rename products (#13692) (#13763)) ### tidb_ddl_error_count_limit - Scope: GLOBAL From bafbaabea9678ba5216ded7d7c140a8036f014b5 Mon Sep 17 00:00:00 2001 From: qiancai Date: Fri, 2 Jun 2023 14:47:53 +0800 Subject: [PATCH 4/6] Update dev-guide-proxysql-integration.md --- develop/dev-guide-proxysql-integration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/develop/dev-guide-proxysql-integration.md b/develop/dev-guide-proxysql-integration.md index 3c391d2ec2eb4..0cb466d77a51f 100644 --- a/develop/dev-guide-proxysql-integration.md +++ b/develop/dev-guide-proxysql-integration.md @@ -123,7 +123,7 @@ For this integration, you will be using the [ProxySQL Docker image](https://hub. #### Step 1. Create a TiDB Serverless cluster -1. [Create a free TiDB Serverless cluster](https://docs.pingcap.com/tidbcloud/tidb-cloud-quickstart#step-1-create-a-tidb-cluster). +1. [Create a TiDB Serverless cluster](https://docs.pingcap.com/tidbcloud/tidb-cloud-quickstart#step-1-create-a-tidb-cluster). 2. 
Follow the steps in [Connect via Standard Connection](https://docs.pingcap.com/tidbcloud/connect-via-standard-connection#serverless-tier) to get the connection string and set a password for your cluster. 3. In the connection string, locate your cluster endpoint after `-h`, your user name after `-u`, and your cluster port after `-P`. From 1301fe0ee4a2c979666b484a9ab3970969661692 Mon Sep 17 00:00:00 2001 From: qiancai Date: Fri, 2 Jun 2023 15:09:20 +0800 Subject: [PATCH 5/6] Update dev-guide-proxysql-integration.md --- develop/dev-guide-proxysql-integration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/develop/dev-guide-proxysql-integration.md b/develop/dev-guide-proxysql-integration.md index 0cb466d77a51f..fd2c53aef25d9 100644 --- a/develop/dev-guide-proxysql-integration.md +++ b/develop/dev-guide-proxysql-integration.md @@ -39,7 +39,7 @@ The most obvious way to deploy ProxySQL with TiDB is to add ProxySQL as a standa This section describes how to integrate TiDB with ProxySQL in a development environment. To get started with the ProxySQL integration, you can choose either of the following options depending on your TiDB cluster type after you have all the [prerequisites](#prerequisite) in place. -- Option 1: [Integrate TiDB Serverless with ProxySQL](#option-1-integrate-tidb-cloud-serverless-tier-with-proxysql) +- Option 1: [Integrate TiDB Serverless with ProxySQL](#option-1-integrate-tidb-serverless-with-proxysql) - Option 2: [Integrate TiDB (self-hosted) with ProxySQL](#option-2-integrate-tidb-self-hosted-with-proxysql) ### Prerequisites From c67d5a63df4e7cd9d9e009bdb77be619b7e3bf48 Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Fri, 2 Jun 2023 15:20:52 +0800 Subject: [PATCH 6/6] Delete information-schema-resource-groups.md --- .../information-schema-resource-groups.md | 63 ------------------- 1 file changed, 63 deletions(-) delete mode 100644 information-schema/information-schema-resource-groups.md diff --git a/information-schema/information-schema-resource-groups.md b/information-schema/information-schema-resource-groups.md deleted file mode 100644 index 20618008182b5..0000000000000 --- a/information-schema/information-schema-resource-groups.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -title: RESOURCE_GROUPS -summary: Learn the `RESOURCE_GROUPS` information_schema table. ---- - -# RESOURCE_GROUPS - -> **Warning:** -> -> This feature is experimental and its form and usage might change in subsequent versions. - - - -> **Note:** -> -> This feature is not available on [TiDB Serverless clusters](https://docs.pingcap.com/tidbcloud/select-cluster-tier#tidb-serverless-beta). - - - -The `RESOURCE_GROUPS` table shows the information about all resource groups. For more information, see [Use Resource Control to Achieve Resource Isolation](/tidb-resource-control.md). 
- -```sql -USE information_schema; -DESC resource_groups; -``` - -```sql -+------------+-------------+------+------+---------+-------+ -| Field | Type | Null | Key | Default | Extra | -+------------+-------------+------+------+---------+-------+ -| NAME | varchar(32) | NO | | NULL | | -| RU_PER_SEC | bigint(21) | YES | | NULL | | -| BURSTABLE | varchar(3) | YES | | NULL | | -+------------+-------------+------+------+---------+-------+ -3 rows in set (0.00 sec) -``` - -## Examples - -```sql -mysql> CREATE RESOURCE GROUP rg1 RU_PER_SEC=1000; -- Create the resource group rg1 -Query OK, 0 rows affected (0.34 sec) -mysql> SHOW CREATE RESOURCE GROUP rg1; -- Display the definition of the rg1 resource group -+----------------+---------------------------------------------+ -| Resource_Group | Create Resource Group | -+----------------+---------------------------------------------+ -| rg1 | CREATE RESOURCE GROUP `rg1` RU_PER_SEC=1000 | -+----------------+---------------------------------------------+ -1 row in set (0.00 sec) -mysql> SELECT * FROM information_schema.resource_groups WHERE NAME = 'rg1'; -+------+------------+-----------+ -| NAME | RU_PER_SEC | BURSTABLE | -+------+------------+-----------+ -| rg1 | 1000 | NO | -+------+------------+-----------+ -1 row in set (0.00 sec) -``` - -The descriptions of the columns in the `RESOURCE_GROUPS` table are as follows: - -* `NAME`: the name of the resource group. -* `RU_PER_SEC`:the backfilling speed of the resource group. The unit is RU/second, in which RU means [Request Unit](/tidb-resource-control.md#what-is-request-unit-ru). -* `BURSTABLE`: whether to allow the resource group to overuse the available system resources.
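For context on the resource group shown above, the group can later be adjusted or removed with the corresponding `ALTER` and `DROP` statements. A minimal sketch, reusing the `rg1` group from the example with an illustrative RU quota:

```sql
-- Raise the quota of rg1 and allow it to exceed the quota when system resources are idle.
ALTER RESOURCE GROUP rg1 RU_PER_SEC = 2000 BURSTABLE;

-- Confirm the change.
SELECT * FROM information_schema.resource_groups WHERE NAME = 'rg1';

-- Remove the group once it is no longer needed.
DROP RESOURCE GROUP rg1;
```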