From e445c00934d4e10772349dcbab19183c841e432b Mon Sep 17 00:00:00 2001
From: Airton Lastori <6343615+alastori@users.noreply.github.com>
Date: Tue, 28 Apr 2026 00:19:06 -0400
Subject: [PATCH 1/9] cloud: Premium supports data migration
Add a new Public Preview guide for using the Data Migration feature
on TiDB Cloud Premium, plus the corresponding entry in the Premium
TOC. Mirrors the structure of premium-export.md.
---
TOC-tidb-cloud-premium.md | 1 +
tidb-cloud/premium/premium-data-migration.md | 141 +++++++++++++++++++
2 files changed, 142 insertions(+)
create mode 100644 tidb-cloud/premium/premium-data-migration.md
diff --git a/TOC-tidb-cloud-premium.md b/TOC-tidb-cloud-premium.md
index 64bae50b8cb26..8c5cd93c6fced 100644
--- a/TOC-tidb-cloud-premium.md
+++ b/TOC-tidb-cloud-premium.md
@@ -134,6 +134,7 @@
- [Connect via Private Endpoint with Alibaba Cloud](/tidb-cloud/premium/connect-to-premium-via-alibaba-cloud-private-endpoint.md)
- [Back Up and Restore TiDB Cloud Data](/tidb-cloud/premium/backup-and-restore-premium.md)
- [Export Data from {{{ .premium }}}](/tidb-cloud/premium/premium-export.md)
+ - [Migrate Data to {{{ .premium }}} Using Data Migration](/tidb-cloud/premium/premium-data-migration.md)
- Use TiFlash for HTAP
- [TiFlash Overview](/tiflash/tiflash-overview.md)
- [Create TiFlash Replicas](/tiflash/create-tiflash-replicas.md)
diff --git a/tidb-cloud/premium/premium-data-migration.md b/tidb-cloud/premium/premium-data-migration.md
new file mode 100644
index 0000000000000..e858d21e792f1
--- /dev/null
+++ b/tidb-cloud/premium/premium-data-migration.md
@@ -0,0 +1,141 @@
+---
+title: Migrate Data to {{{ .premium }}} Using Data Migration
+summary: Learn how to migrate data from MySQL-compatible databases to {{{ .premium }}} instances using the Data Migration feature in the TiDB Cloud console.
+---
+
+# Migrate Data to {{{ .premium }}} Using Data Migration
+
+This document describes how to migrate data from a MySQL-compatible database to a {{{ .premium }}} instance using the Data Migration feature in the [TiDB Cloud console](https://tidbcloud.com/).
+
+The Data Migration feature enables you to migrate existing MySQL data and continuously replicate ongoing changes (binlog) from your MySQL-compatible source database directly to a {{{ .premium }}} instance, reducing downtime and simplifying your migration to TiDB.
+
+> **Note:**
+>
+> The Data Migration feature for {{{ .premium }}} is currently in Public Preview. During Public Preview, the source database must be reachable over a public network endpoint, and the source connection cannot be reused across migration jobs. For details, see [Limitations](#limitations).
+
+## Supported source databases
+
+The Data Migration feature supports any MySQL-compatible database with binary log replication enabled. The wizard exposes a single source-engine option (**MySQL**); to migrate from a managed MySQL service such as Amazon Aurora MySQL, Amazon RDS MySQL, Azure Database for MySQL - Flexible Server, Google Cloud SQL for MySQL, or Alibaba Cloud RDS MySQL, connect via the public endpoint of the managed instance.
+
+Supported MySQL versions: 5.7 and 8.0.
+
+## Migration modes
+
+When you create a migration job, you choose one of the following modes:
+
+- **Full + Incremental**: migrates existing data from the source database first, and then continuously replicates ongoing changes (binlog) to the target {{{ .premium }}} instance.
+- **Incremental Data Only**: continuously replicates ongoing changes (binlog) from the source database to the target {{{ .premium }}} instance, starting from the current binlog position.
+
+## Limitations
+
+### Public Preview limitations
+
+- Connectivity to the source database is currently public-only. Private Link connectivity to the source database is in development and not yet available.
+- Source connection details cannot be saved or reused across migration jobs. Each migration job requires the source connection to be entered from scratch.
+- Migration jobs created during Public Preview might be subject to additional restrictions as the feature matures. For up-to-date information, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
+
+### General limitations
+
+- The system databases `mysql`, `information_schema`, `performance_schema`, and `sys` are filtered out and not migrated, even if you select all databases.
+- During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, the rows with duplicate keys are replaced.
+- During incremental data migration, if a migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, `INSERT` statements are migrated as `REPLACE`, and `UPDATE` statements as `DELETE` and `REPLACE`. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows in the target instance.
+
+For a complete list of Data Migration limitations across TiDB Cloud, see [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#limitations).
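
To make the safe-mode limitation above concrete, here is a sketch of how an event is rewritten during safe mode (illustrative SQL on a hypothetical table `t`, not the exact statements Data Migration generates):

```sql
-- Upstream event captured from the binlog during safe mode:
INSERT INTO t (id, v) VALUES (1, 'a');

-- Replayed on the target as an idempotent REPLACE:
REPLACE INTO t (id, v) VALUES (1, 'a');

-- An upstream UPDATE is replayed as DELETE followed by REPLACE.
-- Without a primary key or non-null unique index on t, REPLACE
-- cannot match an existing row, so re-applied events can leave
-- duplicated rows in the target instance.
```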
+
+## Prerequisites
+
+Before creating a migration job, make sure the following prerequisites are met.
+
+### Enable binary logs on the source database
+
+To replicate incremental changes from the source MySQL-compatible database to the target {{{ .premium }}} instance, configure the source database with the following settings:
+
+| Configuration | Required value | Purpose |
+|:---------------------------------|:---------------|:--------|
+| `log_bin` | `ON` | Enables binary logging, which Data Migration uses to replicate changes to TiDB. |
+| `binlog_format` | `ROW` | Captures all data changes accurately. |
+| `binlog_row_image` | `FULL` | Includes all column values in events for safe conflict resolution. |
+| `binlog_expire_logs_seconds` | ≥ `86400` (1 day); `604800` (7 days) recommended | Ensures Data Migration can access consecutive logs during migration. |
+| `binlog_transaction_compression` | `OFF` | Data Migration does not support transaction compression. |
+
+For detailed configuration steps for self-managed MySQL, AWS RDS, Aurora, Azure Database for MySQL, Google Cloud SQL, and Alibaba Cloud RDS, see [Enable binary logs in the source MySQL-compatible database for replication](/tidb-cloud/migrate-from-mysql-using-data-migration.md#enable-binary-logs-in-the-source-mysql-compatible-database-for-replication).
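
Before creating the job, you can sanity-check these settings from a MySQL client. This is a sketch; note that MySQL 5.7 uses `expire_logs_days` instead of `binlog_expire_logs_seconds`, and `binlog_transaction_compression` exists only in MySQL 8.0.20 and later:

```sql
-- Confirm the source is a supported version (5.7 or 8.0).
SELECT VERSION();

-- Check the binlog-related settings required by Data Migration.
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('log_bin', 'binlog_format', 'binlog_row_image',
   'binlog_expire_logs_seconds', 'binlog_transaction_compression');
```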
+
+### Ensure network connectivity
+
+The {{{ .premium }}} instance connects to the source database over the public internet during Public Preview. Make sure that:
+
+- The source database accepts inbound connections from the public IP ranges used by the {{{ .premium }}} region.
+- Any firewall, security group, or network ACL between the {{{ .premium }}} instance and the source database allows traffic on the source database port (typically `3306`).
+
+The target {{{ .premium }}} instance must also be reachable. If the target cluster's public endpoint is disabled, enable it under **Settings** > **Networking** before creating the migration job. For more information, see [Connect via Public Endpoint](/tidb-cloud/premium/connect-to-premium-via-public-connection.md).
+
+### Grant required privileges
+
+The migration user on the source database must have privileges sufficient to read schema and data and to read the binary log, including (but not limited to) `SELECT`, `RELOAD`, `REPLICATION SLAVE`, and `REPLICATION CLIENT`. For managed MySQL services such as AWS RDS, Aurora, Azure Database for MySQL, Google Cloud SQL, and Alibaba Cloud RDS, additional service-specific permissions might be required. For details, see [Grant required privileges to the migration user in the source database](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-to-the-migration-user-in-the-source-database).
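
For a self-managed MySQL source, a minimal grant might look like the following. The user name `dm_user`, host pattern, and password are placeholders, and the linked reference is authoritative for the full privilege list:

```sql
-- Placeholder user; restrict the host pattern to your environment.
CREATE USER 'dm_user'@'%' IDENTIFIED BY '<password>';

-- RELOAD, REPLICATION SLAVE, and REPLICATION CLIENT are global
-- privileges, so they must be granted ON *.*.
GRANT SELECT, RELOAD, REPLICATION SLAVE, REPLICATION CLIENT
  ON *.* TO 'dm_user'@'%';
```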
+
+On the target {{{ .premium }}} instance, the migration user must have privileges sufficient to create databases, create tables, and write data in the target schemas. For details, see [Grant required privileges to the migration user in the target database](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-to-the-migration-user-in-the-target-database).
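
On the target side, a matching grant might be sketched as follows. The user name `dm_target` is a placeholder, and the privilege list here is illustrative; see the linked reference for the exact requirements:

```sql
-- Placeholder user for the target instance.
CREATE USER 'dm_target'@'%' IDENTIFIED BY '<password>';

-- Enough to create schemas and tables and write migrated data;
-- scope the grant to specific databases instead of *.* where possible.
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE, ALTER, DROP, INDEX
  ON *.* TO 'dm_target'@'%';
```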
+
+## Create a migration job
+
+To create a migration job from a MySQL-compatible source database to a {{{ .premium }}} instance, take the following steps.
+
+### Step 1: Configure source and target connection
+
+1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
+
+ > **Tip:**
+ >
    > If you belong to multiple organizations, use the combo box in the upper-left corner to switch to your target organization first.
+
+2. Click the name of your target {{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
+
+3. On the **Data Migration** page, click **Create Migration Job** in the upper-right corner.
+
+4. On the **Configure source and target connection** step, enter the following information:
+
+ - **Job Name**: a name for the migration job. The default value is `migration_job_{timestamp}`. The name must start with a letter and can contain letters, numbers, underscores (`_`), and hyphens (`-`), with a maximum length of 60 characters.
+ - **Source Connection Profile**:
+ - **Data Source**: select **MySQL**.
+ - **Connectivity Method**: select **Public**.
+ - **Hostname or IP address**: enter the hostname or public IP address of the source database.
+ - **Port**: enter the source database port. The default is `3306`.
+ - **User Name** and **Password**: enter the credentials for the migration user. This user must have the privileges listed in [Grant required privileges](#grant-required-privileges).
+ - **SSL/TLS**: enabled by default. If your source database requires encrypted connections, upload the **CA Certificate**, **Client Certificate**, and **Client private key** as needed. If your source database does not require encrypted connections, turn off the **SSL/TLS** toggle.
+ - **Target Connection Profile**: the **Region**, **Cluster ID**, and **Cluster Name** fields are auto-populated from the current {{{ .premium }}} instance. Enter the **User Name** and **Password** for a TiDB user that has sufficient privileges in the target instance.
+
+5. Click **Validate Connection and Next**. The console validates both source and target connections. If validation fails, the wizard displays an error and remains on this step. Resolve the issue and try again.
+
+### Step 2: Choose objects to be migrated
+
+1. Select the **Migration Type**: **Full + Incremental** (default) or **Incremental Data Only**.
+2. The wizard scans the source database and displays the available databases and tables. Select the databases and tables you want to migrate. The system databases (`mysql`, `information_schema`, `performance_schema`, `sys`) are filtered out automatically.
+3. Click **Next**.
+
+### Step 3: Review and start migration
+
+Review the configuration summary. When you are ready, click **Create Job and Start** to create the migration job. The console redirects to the job detail page, where the job status starts in **Creating** and transitions to **Running** when the migration begins.
+
+## Manage a migration job
+
+After a migration job is created, you can monitor and manage it from the **Data Migration** page of your {{{ .premium }}} instance.
+
+### View job status and progress
+
+The migration job list shows the **Name**, **Status**, **Mode**, **Target User**, and **Creation Time** for each job. Click a migration job to open its detail page, which shows:
+
+- A **Summary** panel with the job name, ID, status, mode, data source, data target, migration objects, and creation time.
+- A **Progress** panel that shows the migration progress once the job starts running.
+
+### Pause, resume, or delete a migration job
+
+From the migration job detail page or from the actions menu in the job list, you can take the following actions:
+
+- **Pause**: temporarily pause a running migration job. You can resume it later from the same position.
+- **Resume**: resume a paused migration job.
+- **Delete**: delete the migration job and its metadata. This action does not affect data already migrated to the target instance.
+
+## See also
+
+- [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md): the canonical Data Migration reference, including detailed prerequisites, source-specific configuration, and troubleshooting.
+- [Migrate Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md): a focused guide for incremental-only migration scenarios.
+- [Connect to Your {{{ .premium }}} Instance](/tidb-cloud/premium/connect-to-tidb-instance.md): network and connectivity options for {{{ .premium }}} instances.
From 79e726a6f4c2dbb9dffd69489f76bbaee5796c54 Mon Sep 17 00:00:00 2001
From: Airton Lastori <6343615+alastori@users.noreply.github.com>
Date: Tue, 28 Apr 2026 00:40:33 -0400
Subject: [PATCH 2/9] cloud: align Premium DM doc with verified wizard state
- Update wizard structure to 4 steps (add Precheck as Step 3)
- Tighten Job Name constraints language to match wizard helper text
- Note that Private Link is in development and not yet generally available
Verified against the Premium DM proto enums and the dev wizard text;
the prod release tag does not yet include Private Link backend
support, so the doc deliberately documents Public-only connectivity.
---
tidb-cloud/premium/premium-data-migration.md | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/tidb-cloud/premium/premium-data-migration.md b/tidb-cloud/premium/premium-data-migration.md
index e858d21e792f1..b8b6ae7bbe98d 100644
--- a/tidb-cloud/premium/premium-data-migration.md
+++ b/tidb-cloud/premium/premium-data-migration.md
@@ -30,7 +30,7 @@ When you create a migration job, you choose one of the following modes:
### Public Preview limitations
-- Connectivity to the source database is currently public-only. Private Link connectivity to the source database is in development and not yet available.
+- Connectivity to the source database is currently public-only. Private Link connectivity to the source database is in development and not yet generally available.
- Source connection details cannot be saved or reused across migration jobs. Each migration job requires the source connection to be entered from scratch.
- Migration jobs created during Public Preview might be subject to additional restrictions as the feature matures. For up-to-date information, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
@@ -93,11 +93,11 @@ To create a migration job from a MySQL-compatible source database to a {{{ .prem
4. On the **Configure source and target connection** step, enter the following information:
- - **Job Name**: a name for the migration job. The default value is `migration_job_{timestamp}`. The name must start with a letter and can contain letters, numbers, underscores (`_`), and hyphens (`-`), with a maximum length of 60 characters.
+ - **Job Name**: a name for the migration job. The default value is `migration_job_{timestamp}`. The name must start with a letter, can contain letters, numbers, underscores (`_`), and hyphens (`-`), and must be less than 60 characters.
- **Source Connection Profile**:
- **Data Source**: select **MySQL**.
- **Connectivity Method**: select **Public**.
- - **Hostname or IP address**: enter the hostname or public IP address of the source database.
+ - **Hostname or IP address**: enter the hostname or IP address of the source database.
- **Port**: enter the source database port. The default is `3306`.
- **User Name** and **Password**: enter the credentials for the migration user. This user must have the privileges listed in [Grant required privileges](#grant-required-privileges).
- **SSL/TLS**: enabled by default. If your source database requires encrypted connections, upload the **CA Certificate**, **Client Certificate**, and **Client private key** as needed. If your source database does not require encrypted connections, turn off the **SSL/TLS** toggle.
@@ -111,7 +111,13 @@ To create a migration job from a MySQL-compatible source database to a {{{ .prem
2. The wizard scans the source database and displays the available databases and tables. Select the databases and tables you want to migrate. The system databases (`mysql`, `information_schema`, `performance_schema`, `sys`) are filtered out automatically.
3. Click **Next**.
-### Step 3: Review and start migration
+### Step 3: Precheck
+
+The console runs prechecks against the source database, network connectivity, and the target {{{ .premium }}} instance. If any precheck fails, follow the displayed error messages to fix the issue, and then click **Recheck**. For common precheck errors and remediation, see [Precheck errors and solutions](/tidb-cloud/migrate-from-mysql-using-data-migration.md#precheck-errors-and-solutions).
+
+When all prechecks pass, click **Next**.
+
+### Step 4: Review and start migration
Review the configuration summary. When you are ready, click **Create Job and Start** to create the migration job. The console redirects to the job detail page, where the job status starts in **Creating** and transitions to **Running** when the migration begins.
From ac7a07cd72f4beca0ea96756d058c451d21e20f2 Mon Sep 17 00:00:00 2001
From: Airton Lastori <6343615+alastori@users.noreply.github.com>
Date: Tue, 28 Apr 2026 00:42:54 -0400
Subject: [PATCH 3/9] cloud: remove safe-mode limitation note from Premium DM
doc
The 60-second safe-mode behavior is implemented in the legacy DM
stack (used by Dedicated and Essential) and does not apply to the
Premium DM service. Verified via dataflow-service-ng/app/models/
premium_dm/ which contains no safe-mode references.
---
tidb-cloud/premium/premium-data-migration.md | 1 -
1 file changed, 1 deletion(-)
diff --git a/tidb-cloud/premium/premium-data-migration.md b/tidb-cloud/premium/premium-data-migration.md
index b8b6ae7bbe98d..0635434e6e7ce 100644
--- a/tidb-cloud/premium/premium-data-migration.md
+++ b/tidb-cloud/premium/premium-data-migration.md
@@ -38,7 +38,6 @@ When you create a migration job, you choose one of the following modes:
- The system databases `mysql`, `information_schema`, `performance_schema`, and `sys` are filtered out and not migrated, even if you select all databases.
- During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, the rows with duplicate keys are replaced.
-- During incremental data migration, if a migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, `INSERT` statements are migrated as `REPLACE`, and `UPDATE` statements as `DELETE` and `REPLACE`. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows in the target instance.
For a complete list of Data Migration limitations across TiDB Cloud, see [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#limitations).
From 8aacebad6c013ff23a2a8b138a57a327ca7d4612 Mon Sep 17 00:00:00 2001
From: Airton Lastori <6343615+alastori@users.noreply.github.com>
Date: Tue, 28 Apr 2026 00:54:05 -0400
Subject: [PATCH 4/9] cloud: align Premium DM doc with end-to-end wizard
verification
Verified the complete wizard flow against the dev environment with a
real MySQL source connection. Several corrections:
- Step 2 has two controls under Migration Type: "Migration process"
(Full + Incremental / Incremental only) and "Existing data migration
mode" (Logical default / Physical). Document both.
- Object selection is an All / Customize toggle, with Customize
revealing a transfer-list pattern between source and selected.
- Step 3 is named "Pre-check" (hyphenated) in the UI; "Check Again"
re-runs; warnings can be ignored via a confirmation dialog.
- Mode label is "Incremental only", not "Incremental Data Only".
- Step 4 review shows three sections: Job Configuration, Source
Connection Profile, Target Connection Profile.
- PROCESS privilege is also recommended; pre-check warns when missing.
---
tidb-cloud/premium/premium-data-migration.md | 52 ++++++++++++++++----
1 file changed, 42 insertions(+), 10 deletions(-)
diff --git a/tidb-cloud/premium/premium-data-migration.md b/tidb-cloud/premium/premium-data-migration.md
index 0635434e6e7ce..1ed926b628805 100644
--- a/tidb-cloud/premium/premium-data-migration.md
+++ b/tidb-cloud/premium/premium-data-migration.md
@@ -21,10 +21,19 @@ Supported MySQL versions: 5.7 and 8.0.
## Migration modes
-When you create a migration job, you choose one of the following modes:
+When you create a migration job, you choose a **Migration process** and an **Existing data migration mode**.
+
+The **Migration process** determines what data is migrated:
- **Full + Incremental**: migrates existing data from the source database first, and then continuously replicates ongoing changes (binlog) to the target {{{ .premium }}} instance.
-- **Incremental Data Only**: continuously replicates ongoing changes (binlog) from the source database to the target {{{ .premium }}} instance, starting from the current binlog position.
+- **Incremental only**: continuously replicates ongoing changes (binlog) from the source database to the target {{{ .premium }}} instance, starting from the current binlog position.
+
+The **Existing data migration mode** determines how the existing data load is performed when **Full + Incremental** is selected:
+
+- **Logical** (default): exports rows from the source database and replays them as SQL `INSERT` statements on the target instance. The existing data load completes before any incremental replication starts. This mode consumes Request Capacity Units (RCUs) on the target instance during the data load.
+- **Physical**: uses `IMPORT INTO` on the target instance to import data without RCU charges during the load. Use this mode for large datasets where load throughput and cost are priorities.
+
+The **Existing data migration mode** does not apply to **Incremental only** migrations.
## Limitations
@@ -70,7 +79,9 @@ The target {{{ .premium }}} instance must also be reachable. If the target clust
### Grant required privileges
-The migration user on the source database must have privileges sufficient to read schema and data and to read the binary log, including (but not limited to) `SELECT`, `RELOAD`, `REPLICATION SLAVE`, and `REPLICATION CLIENT`. For managed MySQL services such as AWS RDS, Aurora, Azure Database for MySQL, Google Cloud SQL, and Alibaba Cloud RDS, additional service-specific permissions might be required. For details, see [Grant required privileges to the migration user in the source database](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-to-the-migration-user-in-the-source-database).
+The migration user on the source database must have privileges sufficient to read schema and data and to read the binary log, including (but not limited to) `SELECT`, `RELOAD`, `REPLICATION SLAVE`, `REPLICATION CLIENT`, and `PROCESS`. The pre-check step warns if the `PROCESS` privilege is missing, because Data Migration uses it to verify that the migration user does not exceed the source database's connection-concurrency limit.
+
+For managed MySQL services such as AWS RDS, Aurora, Azure Database for MySQL, Google Cloud SQL, and Alibaba Cloud RDS, additional service-specific permissions might be required. For details, see [Grant required privileges to the migration user in the source database](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-to-the-migration-user-in-the-source-database).
On the target {{{ .premium }}} instance, the migration user must have privileges sufficient to create databases, create tables, and write data in the target schemas. For details, see [Grant required privileges to the migration user in the target database](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-to-the-migration-user-in-the-target-database).
@@ -106,19 +117,40 @@ To create a migration job from a MySQL-compatible source database to a {{{ .prem
### Step 2: Choose objects to be migrated
-1. Select the **Migration Type**: **Full + Incremental** (default) or **Incremental Data Only**.
-2. The wizard scans the source database and displays the available databases and tables. Select the databases and tables you want to migrate. The system databases (`mysql`, `information_schema`, `performance_schema`, `sys`) are filtered out automatically.
-3. Click **Next**.
+In the **Migration Type** section, configure how data is migrated:
+
+- **Migration process**: select **Full + Incremental** (default) or **Incremental only**.
+- **Existing data migration mode** (only applies to **Full + Incremental**): select **Logical** (default) or **Physical**. For details, see [Migration modes](#migration-modes).
+
+In the **Select Objects to Migrate** section, choose:
-### Step 3: Precheck
+- **All** (default): migrate every database and table on the source. The system databases (`mysql`, `information_schema`, `performance_schema`, `sys`) are excluded automatically.
+- **Customize**: pick specific databases and tables. The wizard fetches the source schema and shows two panels, **Source Database** and **Selected Objects**. Use the arrow buttons between the panels to move databases or tables into the **Selected Objects** list.
-The console runs prechecks against the source database, network connectivity, and the target {{{ .premium }}} instance. If any precheck fails, follow the displayed error messages to fix the issue, and then click **Recheck**. For common precheck errors and remediation, see [Precheck errors and solutions](/tidb-cloud/migrate-from-mysql-using-data-migration.md#precheck-errors-and-solutions).
+Click **Next**.
-When all prechecks pass, click **Next**.
+### Step 3: Pre-check
+
+The console runs the pre-check against the source database, network connectivity, and the target {{{ .premium }}} instance. The progress bar shows **Running {percentage}%** while checks execute, and **Finished 100%** when complete. The summary line reports the total number of check items and how many of them are completed, passed, passed with warnings, or failed.
+
+The **Pre-check Result** table lists every item that did not pass, along with its reason and a suggested solution. To re-run the pre-check after fixing an item, click **Check Again**. To proceed without addressing a warning, you can dismiss it by selecting **Ignore** on the row.
+
+If the pre-check completes with at least one warning and you click **Next**, the console shows a confirmation dialog with two options:
+
+- **Check Again**: return to the **Pre-check Result** table and address the warnings.
+- **Ignore warnings**: advance to the next step. Note that ignoring warnings may result in job failures or data inconsistencies.
+
+When all checks pass (or you choose to ignore the remaining warnings), click **Next**.
### Step 4: Review and start migration
-Review the configuration summary. When you are ready, click **Create Job and Start** to create the migration job. The console redirects to the job detail page, where the job status starts in **Creating** and transitions to **Running** when the migration begins.
+The review page shows three sections summarizing the migration job:
+
+- **Job Configuration**: job name and migration type.
+- **Source Connection Profile**: data source, host, port, connectivity method, username, SSL/TLS status, selected objects, and import mode.
+- **Target Connection Profile**: region, cluster ID, cluster name, and target username.
+
+Click **Previous** to revise any setting, or click **Create Job and Start** to create the migration job. The console redirects to the job detail page, where the job status starts in **Creating** and transitions to **Running** when the migration begins.
## Manage a migration job
From 1744a9ebcd50c2759443005ef2538943200f99dc Mon Sep 17 00:00:00 2001
From: Airton Lastori <6343615+alastori@users.noreply.github.com>
Date: Tue, 28 Apr 2026 00:57:34 -0400
Subject: [PATCH 5/9] cloud: restore safe-mode limitation for Premium DM
Safe mode is implemented in the tiflow DM kernel (used by Premium DM
via the agent layer), not in the cloud control plane. The earlier
removal was based on a search of the dataflow-service repo only,
which is incomplete. Restoring the 60-second safe-mode note so the
Premium doc matches the underlying replication engine behavior.
---
tidb-cloud/premium/premium-data-migration.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/tidb-cloud/premium/premium-data-migration.md b/tidb-cloud/premium/premium-data-migration.md
index 1ed926b628805..6ac08eb057e4f 100644
--- a/tidb-cloud/premium/premium-data-migration.md
+++ b/tidb-cloud/premium/premium-data-migration.md
@@ -47,6 +47,7 @@ The **Existing data migration mode** does not apply to **Incremental only** migr
- The system databases `mysql`, `information_schema`, `performance_schema`, and `sys` are filtered out and not migrated, even if you select all databases.
- During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, the rows with duplicate keys are replaced.
+- During incremental data migration, if a migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, `INSERT` statements are migrated as `REPLACE`, and `UPDATE` statements as `DELETE` and `REPLACE`. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows in the target instance.
For a complete list of Data Migration limitations across TiDB Cloud, see [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#limitations).
From e8f3626a80e98e43ab83bed1dad3db5ea54f2d6e Mon Sep 17 00:00:00 2001
From: Airton Lastori <6343615+alastori@users.noreply.github.com>
Date: Tue, 28 Apr 2026 01:09:07 -0400
Subject: [PATCH 6/9] cloud: extend canonical Data Migration docs to render for
Premium
Customers reading the new Premium DM guide cross-reference the
canonical Cloud DM doc for binary-log setup, privileges, and
limitations. Without Premium variants in the canonical doc, those
links would either render Dedicated-default content or leave tier
placeholders blank.
Changes:
- TOC-tidb-cloud-premium.md: add the canonical and incremental-only
Cloud DM docs as siblings of premium-data-migration.md so Premium
customers can navigate to them.
- tidb-cloud/migrate-from-mysql-using-data-migration.md: add Premium
tier to all inline tier-name placeholders, plus three new Premium
variant blocks: Public Preview note, supported sources matrix, and
the Physical / Logical mode discussion (including PITR /
changefeed and concurrent-job caveats for physical mode).
- tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md:
add Premium tier to all inline tier-name placeholders.
- tidb-cloud/premium/premium-data-migration.md: add the two
physical-mode caveats (PITR / changefeed; concurrent-job limit)
inline so they are visible in the Premium-tier overview without
requiring readers to click through.
The Dedicated and Essential renderings of all three docs are
unchanged.
---
TOC-tidb-cloud-premium.md | 2 +
...migrate-from-mysql-using-data-migration.md | 63 ++++++++++++++-----
...al-data-from-mysql-using-data-migration.md | 6 +-
tidb-cloud/premium/premium-data-migration.md | 5 ++
4 files changed, 57 insertions(+), 19 deletions(-)
diff --git a/TOC-tidb-cloud-premium.md b/TOC-tidb-cloud-premium.md
index 8c5cd93c6fced..762fefbc19f95 100644
--- a/TOC-tidb-cloud-premium.md
+++ b/TOC-tidb-cloud-premium.md
@@ -135,6 +135,8 @@
- [Back Up and Restore TiDB Cloud Data](/tidb-cloud/premium/backup-and-restore-premium.md)
- [Export Data from {{{ .premium }}}](/tidb-cloud/premium/premium-export.md)
- [Migrate Data to {{{ .premium }}} Using Data Migration](/tidb-cloud/premium/premium-data-migration.md)
+ - [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md)
+ - [Migrate Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md)
- Use TiFlash for HTAP
- [TiFlash Overview](/tiflash/tiflash-overview.md)
- [Create TiFlash Replicas](/tiflash/create-tiflash-replicas.md)
diff --git a/tidb-cloud/migrate-from-mysql-using-data-migration.md b/tidb-cloud/migrate-from-mysql-using-data-migration.md
index 3ad766aca2a85..7b7ae0777505e 100644
--- a/tidb-cloud/migrate-from-mysql-using-data-migration.md
+++ b/tidb-cloud/migrate-from-mysql-using-data-migration.md
@@ -6,7 +6,7 @@ aliases: ['/tidbcloud/migrate-data-into-tidb','/tidbcloud/migrate-incremental-da
# Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration
-This document guides you through migrating your MySQL databases from Amazon Aurora MySQL, Amazon RDS, Azure Database for MySQL - Flexible Server, Google Cloud SQL for MySQL, or self-managed MySQL instances to {{{ .dedicated }}}{{{ .essential }}} using the Data Migration feature in the [TiDB Cloud console](https://tidbcloud.com/).
+This document guides you through migrating your MySQL databases from Amazon Aurora MySQL, Amazon RDS, Azure Database for MySQL - Flexible Server, Google Cloud SQL for MySQL, or self-managed MySQL instances to {{{ .dedicated }}}{{{ .essential }}}{{{ .premium }}} using the Data Migration feature in the [TiDB Cloud console](https://tidbcloud.com/).
@@ -16,6 +16,14 @@ This document guides you through migrating your MySQL databases from Amazon Auro
+
+
+> **Note:**
+>
+> Currently, the Data Migration feature is in Public Preview for {{{ .premium }}}. For a {{{ .premium }}}-focused overview, see [Migrate Data to {{{ .premium }}} Using Data Migration](/tidb-cloud/premium/premium-data-migration.md).
+
+
+
This feature enables you to migrate your existing MySQL data and continuously replicate ongoing changes (binlog) from your MySQL-compatible source databases directly to TiDB Cloud, maintaining data consistency whether in the same region or across different regions. The streamlined process eliminates the need for separate dump and load operations, reducing downtime and simplifying your migration from MySQL to a more scalable platform.
If you only want to replicate ongoing binlog changes from your MySQL-compatible database to TiDB Cloud, see [Migrate Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md).
@@ -86,9 +94,17 @@ To prevent this, create the target tables in the downstream database before star
+
+
+- For {{{ .premium }}}, both logical mode (default) and physical mode are supported. Logical mode exports rows as SQL statements and replays them on the target instance, consuming Request Capacity Units (RCUs) on the target during the load. Physical mode uses `IMPORT INTO` on the target instance and is recommended for large datasets where load throughput and cost are priorities.
+- When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
+- When you use physical mode, you cannot create a second migration job or import task for the {{{ .premium }}} instance before the existing data migration is completed.
+
+
+
### Limitations of incremental data migration
-- During incremental data migration, if the table to be migrated already exists in the target database with duplicate keys, an error is reported and the migration is interrupted. In this situation, you need to verify that the MySQL source data is accurate. If it is accurate, click the **Restart** button of the migration job, and the migration job will replace the conflicting records in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance with the MySQL source records.
+- During incremental data migration, if the table to be migrated already exists in the target database with duplicate keys, an error is reported and the migration is interrupted. In this situation, you need to verify that the MySQL source data is accurate. If it is accurate, click the **Restart** button of the migration job, and the migration job will replace the conflicting records in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance with the MySQL source records.
@@ -108,7 +124,7 @@ To prevent this, create the target tables in the downstream database before star
## Prerequisites
-Before migrating, check whether your data source is supported, enable binary logging in your MySQL-compatible database, ensure network connectivity, and grant required privileges for both the source database and the target {{{ .dedicated }}} cluster{{{ .essential }}} instance database.
+Before migrating, check whether your data source is supported, enable binary logging in your MySQL-compatible database, ensure network connectivity, and grant required privileges for both the source database and the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance database.
### Make sure your data source and version are supported
@@ -141,9 +157,24 @@ For {{{ .essential }}}, the Data Migration feature supports the following data s
+
+
+For {{{ .premium }}}, the Data Migration feature supports any MySQL-compatible source database. The wizard exposes a single source-engine option (**MySQL**); to migrate from a managed MySQL service, connect via the public endpoint of the managed instance.
+
+| Data source | Supported versions |
+|:-------------------------------------------------|:-------------------|
+| Self-managed MySQL (on-premises or public cloud) | 8.0, 5.7 |
+| Amazon Aurora MySQL | 8.0, 5.7 |
+| Amazon RDS MySQL | 8.0, 5.7 |
+| Azure Database for MySQL - Flexible Server | 8.0, 5.7 |
+| Google Cloud SQL for MySQL | 8.0, 5.7 |
+| Alibaba Cloud RDS MySQL | 8.0, 5.7 |
+
+
+
### Enable binary logs in the source MySQL-compatible database for replication
-To continuously replicate incremental changes from the source MySQL-compatible database to the target {{{ .dedicated }}} cluster{{{ .essential }}} instance using DM, you need the following configurations to enable binary logs in the source database:
+To continuously replicate incremental changes from the source MySQL-compatible database to the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance using DM, you need the following configurations to enable binary logs in the source database:
| Configuration | Required value | Why |
|:---------------------------------|:---------------|:----|
@@ -255,7 +286,7 @@ For more information, see [Set instance parameters](https://www.alibabacloud.com
### Ensure network connectivity
-Before creating a migration job, you need to plan and set up proper network connectivity between your source MySQL instance, the TiDB Cloud Data Migration (DM) service, and your target {{{ .dedicated }}} cluster{{{ .essential }}} instance.
+Before creating a migration job, you need to plan and set up proper network connectivity between your source MySQL instance, the TiDB Cloud Data Migration (DM) service, and your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance.
@@ -399,7 +430,7 @@ If you use AWS VPC peering or Google Cloud VPC network peering, see the followin
If your MySQL service is in an AWS VPC, take the following steps:
-1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your {{{ .dedicated }}} cluster{{{ .essential }}} instance.
+1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance.
2. Modify the inbound rules of the security group that the MySQL service is associated with.
@@ -451,7 +482,7 @@ If your MySQL service is in a Google Cloud VPC, take the following steps:
### Grant required privileges for migration
-Before starting migration, you need to set up appropriate database users with the required privileges on both the source and target databases. These privileges enable TiDB Cloud DM to read data from MySQL, replicate changes, and write to your {{{ .dedicated }}} cluster{{{ .essential }}} instance securely. Because the migration involves both full data dumps for existing data and binlog replication for incremental changes, your migration user requires specific permissions beyond basic read access.
+Before starting migration, you need to set up appropriate database users with the required privileges on both the source and target databases. These privileges enable TiDB Cloud DM to read data from MySQL, replicate changes, and write to your {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance securely. Because the migration involves both full data dumps for existing data and binlog replication for incremental changes, your migration user requires specific permissions beyond basic read access.
#### Grant required privileges to the migration user in the source MySQL database
@@ -477,11 +508,11 @@ GRANT SELECT, RELOAD, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dm_source
GRANT SELECT, RELOAD, LOCK TABLES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dm_source_user'@'%';
```
-#### Grant required privileges in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance
+#### Grant required privileges in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance
-For testing purposes, you can use the `root` account of your {{{ .dedicated }}} cluster{{{ .essential }}} instance.
+For testing purposes, you can use the `root` account of your {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance.
-For production workloads, it is recommended to have a dedicated user for replication in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance and grant only the necessary privileges:
+For production workloads, it is recommended to have a dedicated user for replication in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance and grant only the necessary privileges:
| Privilege | Scope | Purpose |
|:----------|:------|:--------|
@@ -495,7 +526,7 @@ For production workloads, it is recommended to have a dedicated user for replica
| `INDEX` | Tables | Creates and modifies indexes |
| `CREATE VIEW` | Views | Creates views used by migration |
-For example, you can execute the following `GRANT` statement in your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to grant corresponding privileges:
+For example, you can execute the following `GRANT` statement in your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to grant corresponding privileges:
```sql
GRANT CREATE, SELECT, INSERT, UPDATE, DELETE, ALTER, DROP, INDEX ON *.* TO 'dm_target_user'@'%';
@@ -505,7 +536,7 @@ GRANT CREATE, SELECT, INSERT, UPDATE, DELETE, ALTER, DROP, INDEX ON *.* TO 'dm_t
1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page.
-2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
+2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
3. On the **Data Migration** page, click **Create Migration Job** in the upper-right corner. The **Create Migration Job** page is displayed.
@@ -589,7 +620,7 @@ On the **Create Migration Job** page, configure the source and target connection
3. Fill in the target connection profile.
- - **User Name**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance in TiDB Cloud.
+ - **User Name**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance in TiDB Cloud.
- **Password**: enter the password of the TiDB Cloud username.
4. Click **Validate Connection and Next** to validate the information you have entered.
@@ -638,8 +669,8 @@ You can use **physical mode** or **logical mode** to migrate **existing data** a
> **Note:**
>
-> - When you use physical mode, you cannot create a second migration job or import task for the {{{ .dedicated }}} cluster{{{ .essential }}} instance before the existing data migration is completed.
-> - When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .dedicated }}} cluster{{{ .essential }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
+> - When you use physical mode, you cannot create a second migration job or import task for the {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance before the existing data migration is completed.
+> - When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
Physical mode exports the MySQL source data as fast as possible, so [different specifications](/tidb-cloud/tidb-cloud-billing-dm.md#specifications-for-data-migration) have different performance impacts on QPS and TPS of the MySQL source database during data export. The following table shows the performance regression of each specification.
@@ -755,7 +786,7 @@ When scaling a migration job specification, note the following:
1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page.
-2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
+2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
3. On the **Data Migration** page, locate the migration job you want to scale. In the **Action** column, click **...** > **Scale Up/Down**.
diff --git a/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md b/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md
index 5d0a3d1d10d19..8b7664fcbfdd5 100644
--- a/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md
+++ b/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md
@@ -5,7 +5,7 @@ summary: Learn how to migrate incremental data from MySQL-compatible databases h
# Migrate Only Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration
-This document describes how to migrate incremental data from a MySQL-compatible database on a cloud provider (Amazon Aurora MySQL, Amazon Relational Database Service (RDS), Google Cloud SQL for MySQL, Azure Database for MySQL, or Alibaba Cloud RDS) or self-hosted source database to {{{ .dedicated }}}{{{ .essential }}} using the Data Migration feature of the TiDB Cloud console.
+This document describes how to migrate incremental data from a MySQL-compatible database on a cloud provider (Amazon Aurora MySQL, Amazon Relational Database Service (RDS), Google Cloud SQL for MySQL, Azure Database for MySQL, or Alibaba Cloud RDS) or self-hosted source database to {{{ .dedicated }}}{{{ .essential }}}{{{ .premium }}} using the Data Migration feature of the TiDB Cloud console.
@@ -148,7 +148,7 @@ To enable the GTID mode for a self-hosted MySQL instance, follow these steps:
>
> If you are in multiple organizations, use the combo box in the upper-left corner to switch to your target organization first.
-2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
+2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane.
3. On the **Data Migration** page, click **Create Migration Job** in the upper-right corner. The **Create Migration Job** page is displayed.
@@ -187,7 +187,7 @@ On the **Create Migration Job** page, configure the source and target connection
3. Fill in the target connection profile.
- - **Username**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance.
+ - **Username**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance.
- **Password**: enter the password of the TiDB Cloud username.
4. Click **Validate Connection and Next** to validate the information you have entered.
diff --git a/tidb-cloud/premium/premium-data-migration.md b/tidb-cloud/premium/premium-data-migration.md
index 6ac08eb057e4f..359b8e209ebb0 100644
--- a/tidb-cloud/premium/premium-data-migration.md
+++ b/tidb-cloud/premium/premium-data-migration.md
@@ -35,6 +35,11 @@ The **Existing data migration mode** determines how the existing data load is pe
The **Existing data migration mode** does not apply to **Incremental only** migrations.
+When you use physical mode, the following limitations apply:
+
+- After the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead.
+- You cannot create a second migration job or import task for the {{{ .premium }}} instance before the existing data migration is completed.
+
## Limitations
### Public Preview limitations
From 6dee520993b4fbfd43ae821bb86361ad657a2f2f Mon Sep 17 00:00:00 2001
From: Airton Lastori <6343615+alastori@users.noreply.github.com>
Date: Tue, 28 Apr 2026 01:19:24 -0400
Subject: [PATCH 7/9] cloud: fix dead anchors in Premium DM doc cross-refs
The canonical Cloud DM doc anchors are:
- "grant-required-privileges-to-the-migration-user-in-the-source-mysql-database"
(note "source-mysql", not just "source")
- "grant-required-privileges-for-migration" (parent ### section; the
target-side `####` heading uses CustomContent variants and the
rendered anchor is not stable, so link to the parent instead)
Detected by the internal-links-anchors CI job on PR #22821.
---
tidb-cloud/premium/premium-data-migration.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tidb-cloud/premium/premium-data-migration.md b/tidb-cloud/premium/premium-data-migration.md
index 359b8e209ebb0..d4aea5fdd7a9f 100644
--- a/tidb-cloud/premium/premium-data-migration.md
+++ b/tidb-cloud/premium/premium-data-migration.md
@@ -87,9 +87,9 @@ The target {{{ .premium }}} instance must also be reachable. If the target clust
The migration user on the source database must have privileges sufficient to read schema and data and to read the binary log, including (but not limited to) `SELECT`, `RELOAD`, `REPLICATION SLAVE`, `REPLICATION CLIENT`, and `PROCESS`. The pre-check step warns if the `PROCESS` privilege is missing, because Data Migration uses it to verify that the migration user does not exceed the source database's connection-concurrency limit.
-For managed MySQL services such as AWS RDS, Aurora, Azure Database for MySQL, Google Cloud SQL, and Alibaba Cloud RDS, additional service-specific permissions might be required. For details, see [Grant required privileges to the migration user in the source database](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-to-the-migration-user-in-the-source-database).
+For managed MySQL services such as AWS RDS, Aurora, Azure Database for MySQL, Google Cloud SQL, and Alibaba Cloud RDS, additional service-specific permissions might be required. For details, see [Grant required privileges to the migration user in the source MySQL database](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-to-the-migration-user-in-the-source-mysql-database).
-On the target {{{ .premium }}} instance, the migration user must have privileges sufficient to create databases, create tables, and write data in the target schemas. For details, see [Grant required privileges to the migration user in the target database](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-to-the-migration-user-in-the-target-database).
+On the target {{{ .premium }}} instance, the migration user must have privileges sufficient to create databases, create tables, and write data in the target schemas. For details, see [Grant required privileges for migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-for-migration).
## Create a migration job
From 8f94e41e9f3a83c7f361d6a35a93a50474233fab Mon Sep 17 00:00:00 2001
From: Airton Lastori <6343615+alastori@users.noreply.github.com>
Date: Tue, 28 Apr 2026 01:25:47 -0400
Subject: [PATCH 8/9] cloud: address Gemini code-review style suggestions
Apply 7 of 9 Gemini suggestions on PR #22821, all marked low
priority and aligned with pingcap/docs styleguide:
- Active voice: replace "the source connection cannot be reused"
with "you cannot reuse the source connection".
- Active voice: replace "rows ... are replaced" with "TiDB Cloud
replaces the rows" in existing-data limitation.
- Active voice + subject clarity: replace "INSERT statements are
migrated as ..." with "TiDB Cloud migrates INSERT statements
as ...".
- Active voice: replace "the migration job will be stuck" with
"the migration job stops" (Premium DM doc + canonical Cloud DM
doc).
- Active voice + subject clarity: replace "system databases ... are
excluded automatically" with "TiDB Cloud automatically excludes
the system databases".
- Grammar: "with warning" -> "with warnings"; rephrase pre-check
summary line for clarity.
- Terminology consistency: in Step 4 review section, replace
"import mode" with "the existing data migration mode (shown as
Import Mode on the review page)" to bridge the wizard's two
labels for the same concept.
Skipped: the suggestion to use "fewer than 60 characters" /
"contains letters" instead of "less than 60 characters" / "can
contain letters" is intentionally rejected; the current wording
mirrors the wizard's helper text verbatim.
---
.../migrate-from-mysql-using-data-migration.md | 2 +-
tidb-cloud/premium/premium-data-migration.md | 14 +++++++-------
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/tidb-cloud/migrate-from-mysql-using-data-migration.md b/tidb-cloud/migrate-from-mysql-using-data-migration.md
index 7b7ae0777505e..6663676ec4feb 100644
--- a/tidb-cloud/migrate-from-mysql-using-data-migration.md
+++ b/tidb-cloud/migrate-from-mysql-using-data-migration.md
@@ -97,7 +97,7 @@ To prevent this, create the target tables in the downstream database before star
- For {{{ .premium }}}, both logical mode (default) and physical mode are supported. Logical mode exports rows as SQL statements and replays them on the target instance, consuming Request Capacity Units (RCUs) on the target during the load. Physical mode uses `IMPORT INTO` on the target instance and is recommended for large datasets where load throughput and cost are priorities.
-- When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
+- When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data.
- When you use physical mode, you cannot create a second migration job or import task for the {{{ .premium }}} instance before the existing data migration is completed.
diff --git a/tidb-cloud/premium/premium-data-migration.md b/tidb-cloud/premium/premium-data-migration.md
index d4aea5fdd7a9f..b862904b4a328 100644
--- a/tidb-cloud/premium/premium-data-migration.md
+++ b/tidb-cloud/premium/premium-data-migration.md
@@ -11,7 +11,7 @@ The Data Migration feature enables you to migrate existing MySQL data and contin
> **Note:**
>
-> The Data Migration feature for {{{ .premium }}} is currently in Public Preview. During Public Preview, the source database must be reachable over a public network endpoint, and the source connection cannot be reused across migration jobs. For details, see [Limitations](#limitations).
+> The Data Migration feature for {{{ .premium }}} is currently in Public Preview. During Public Preview, the source database must be reachable over a public network endpoint, and you cannot reuse the source connection across migration jobs. For details, see [Limitations](#limitations).
## Supported source databases
@@ -37,7 +37,7 @@ The **Existing data migration mode** does not apply to **Incremental only** migr
When you use physical mode, the following limitations apply:
-- After the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead.
+- After the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead.
- You cannot create a second migration job or import task for the {{{ .premium }}} instance before the existing data migration is completed.
## Limitations
@@ -51,8 +51,8 @@ When you use physical mode, the following limitations apply:
### General limitations
- The system databases `mysql`, `information_schema`, `performance_schema`, and `sys` are filtered out and not migrated, even if you select all databases.
-- During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, the rows with duplicate keys are replaced.
-- During incremental data migration, if a migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, `INSERT` statements are migrated as `REPLACE`, and `UPDATE` statements as `DELETE` and `REPLACE`. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows in the target instance.
+- During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, TiDB Cloud replaces the rows with duplicate keys.
+- During incremental data migration, if a migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, TiDB Cloud migrates `INSERT` statements as `REPLACE`, and `UPDATE` statements as `DELETE` and `REPLACE`. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows in the target instance.
For a complete list of Data Migration limitations across TiDB Cloud, see [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#limitations).
@@ -130,14 +130,14 @@ In the **Migration Type** section, configure how data is migrated:
In the **Select Objects to Migrate** section, choose:
-- **All** (default): migrate every database and table on the source. The system databases (`mysql`, `information_schema`, `performance_schema`, `sys`) are excluded automatically.
+- **All** (default): migrate every database and table on the source. TiDB Cloud automatically excludes the system databases (`mysql`, `information_schema`, `performance_schema`, `sys`).
- **Customize**: pick specific databases and tables. The wizard fetches the source schema and shows two panels, **Source Database** and **Selected Objects**. Use the arrow buttons between the panels to move databases or tables into the **Selected Objects** list.
Click **Next**.
### Step 3: Pre-check
-The console runs the pre-check against the source database, network connectivity, and the target {{{ .premium }}} instance. The progress bar shows **Running {percentage}%** while checks execute, and **Finished 100%** when complete. The summary line reports total items, completed, passed, with warning, and failed.
+The console runs the pre-check against the source database, network connectivity, and the target {{{ .premium }}} instance. The progress bar shows **Running {percentage}%** while checks execute, and **Finished 100%** when complete. The summary line reports the total number of items, including those that are completed, passed, with warnings, or failed.
The **Pre-check Result** table lists every item that did not pass, along with its reason and a suggested solution. To re-run the pre-check after fixing an item, click **Check Again**. To proceed without addressing a warning, you can dismiss it by selecting **Ignore** on the row.
@@ -153,7 +153,7 @@ When all checks pass (or you choose to ignore the remaining warnings), click **N
The review page shows three sections summarizing the migration job:
- **Job Configuration**: job name and migration type.
-- **Source Connection Profile**: data source, host, port, connectivity method, username, SSL/TLS status, selected objects, and import mode.
+- **Source Connection Profile**: data source, host, port, connectivity method, username, SSL/TLS status, selected objects, and the existing data migration mode (shown as **Import Mode** on the review page).
- **Target Connection Profile**: region, cluster ID, cluster name, and target username.
Click **Previous** to revise any setting, or click **Create Job and Start** to create the migration job. The console redirects to the job detail page, where the job status starts in **Creating** and transitions to **Running** when the migration begins.
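The safe-mode behavior described in the limitations hunk above (migrating `INSERT` as `REPLACE`, and `UPDATE` as `DELETE` plus `REPLACE`) can be sketched as follows; table `t` and its values are hypothetical and shown only to illustrate the rewriting:

```sql
-- Source statement during the 60-second safe-mode window:
--   INSERT INTO t (id, v) VALUES (1, 'a');
-- is replayed on the target as:
REPLACE INTO t (id, v) VALUES (1, 'a');

-- Source statement:
--   UPDATE t SET v = 'b' WHERE id = 1;
-- is replayed on the target as a DELETE followed by a REPLACE:
DELETE FROM t WHERE id = 1;
REPLACE INTO t (id, v) VALUES (1, 'b');
```

Because `REPLACE` relies on a primary key or non-null unique index to detect conflicts, source tables without either can end up with duplicated rows on the target, which is exactly the caveat the limitations hunk calls out.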
From 3d8ac1e90553f5d20111880d3f164f34311a43f3 Mon Sep 17 00:00:00 2001
From: Airton Lastori <6343615+alastori@users.noreply.github.com>
Date: Tue, 28 Apr 2026 01:38:32 -0400
Subject: [PATCH 9/9] cloud: clarify Premium DM job actions are
status-dependent
End-to-end wizard verification on the dev cluster created a real
migration job (id dmtskc3frek3p5fhy7ixu6wpj7cy2r4) and inspected
the post-creation experience:
- The Job Detail page does not expose action buttons; it shows only the
Summary and Progress panels.
- The list-page actions menu (the "..." button at the end of each
row) shows different items based on job status. While the job is
in Creating state, only View and Delete are visible. Pause and
Resume become available once the job reaches a running or paused
state.
Doc previously implied Pause/Resume/Delete were always available
from the detail page or the list. Replaced with status-aware
phrasing and noted the Creating-state subset explicitly.
The dev cluster job remained in Creating for 9+ minutes without
transitioning, matching the March AS-IS report KI-5 (dev
infrastructure issue, not a feature gap), so Pause/Resume
behavior was confirmed via API surface (PausePremiumMigration /
ResumePremiumMigration RPCs in proto) rather than the UI.
---
tidb-cloud/premium/premium-data-migration.md | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/tidb-cloud/premium/premium-data-migration.md b/tidb-cloud/premium/premium-data-migration.md
index b862904b4a328..26abab53db0b4 100644
--- a/tidb-cloud/premium/premium-data-migration.md
+++ b/tidb-cloud/premium/premium-data-migration.md
@@ -169,14 +169,17 @@ The migration job list shows the **Name**, **Status**, **Mode**, **Target User**
- A **Summary** panel with the job name, ID, status, mode, data source, data target, migration objects, and creation time.
- A **Progress** panel that shows the migration progress once the job starts running.
-### Pause, resume, or delete a migration job
+### Manage a migration job from the job list
-From the migration job detail page or from the actions menu in the job list, you can take the following actions:
+To manage a migration job, click the `...` (more) button at the end of the migration job row on the **Data Migration** page. The actions menu shows different options depending on the job status:
+- **View**: navigate to the job detail page.
- **Pause**: temporarily pause a running migration job. You can resume it later from the same position.
- **Resume**: resume a paused migration job.
- **Delete**: delete the migration job and its metadata. This action does not affect data already migrated to the target instance.
**Pause** and **Resume** appear only when the job is in a running or paused state. While the job is in the **Creating** state, only **View** and **Delete** are available.
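The status-dependent menu above can be modeled as a small lookup. This is a sketch under the assumption that the status names match the documented states; it is not the console's actual implementation:

```python
# Actions menu contents keyed by job status. Status names are taken from
# the documented states; the console's internal names may differ.
ACTIONS_BY_STATUS = {
    "Creating": ["View", "Delete"],
    "Running":  ["View", "Pause", "Delete"],
    "Paused":   ["View", "Resume", "Delete"],
}

def available_actions(status: str) -> list:
    # View and Delete are shown in every documented state;
    # Pause and Resume require the job to be running or paused.
    return ACTIONS_BY_STATUS.get(status, ["View", "Delete"])
```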
+
## See also
- [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md): the canonical Data Migration reference, including detailed prerequisites, source-specific configuration, and troubleshooting.