diff --git a/TOC-tidb-cloud-premium.md b/TOC-tidb-cloud-premium.md index 64bae50b8cb26..762fefbc19f95 100644 --- a/TOC-tidb-cloud-premium.md +++ b/TOC-tidb-cloud-premium.md @@ -134,6 +134,9 @@ - [Connect via Private Endpoint with Alibaba Cloud](/tidb-cloud/premium/connect-to-premium-via-alibaba-cloud-private-endpoint.md) - [Back Up and Restore TiDB Cloud Data](/tidb-cloud/premium/backup-and-restore-premium.md) - [Export Data from {{{ .premium }}}](/tidb-cloud/premium/premium-export.md) + - [Migrate Data to {{{ .premium }}} Using Data Migration](/tidb-cloud/premium/premium-data-migration.md) + - [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md) + - [Migrate Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md) - Use TiFlash for HTAP - [TiFlash Overview](/tiflash/tiflash-overview.md) - [Create TiFlash Replicas](/tiflash/create-tiflash-replicas.md) diff --git a/tidb-cloud/migrate-from-mysql-using-data-migration.md b/tidb-cloud/migrate-from-mysql-using-data-migration.md index 3ad766aca2a85..6663676ec4feb 100644 --- a/tidb-cloud/migrate-from-mysql-using-data-migration.md +++ b/tidb-cloud/migrate-from-mysql-using-data-migration.md @@ -6,7 +6,7 @@ aliases: ['/tidbcloud/migrate-data-into-tidb','/tidbcloud/migrate-incremental-da # Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration -This document guides you through migrating your MySQL databases from Amazon Aurora MySQL, Amazon RDS, Azure Database for MySQL - Flexible Server, Google Cloud SQL for MySQL, or self-managed MySQL instances to {{{ .dedicated }}}{{{ .essential }}} using the Data Migration feature in the [TiDB Cloud console](https://tidbcloud.com/). 
+This document guides you through migrating your MySQL databases from Amazon Aurora MySQL, Amazon RDS, Azure Database for MySQL - Flexible Server, Google Cloud SQL for MySQL, or self-managed MySQL instances to {{{ .dedicated }}}{{{ .essential }}}{{{ .premium }}} using the Data Migration feature in the [TiDB Cloud console](https://tidbcloud.com/). @@ -16,6 +16,14 @@ This document guides you through migrating your MySQL databases from Amazon Auro + + +> **Note:** +> +> Currently, the Data Migration feature is in Public Preview for {{{ .premium }}}. For a {{{ .premium }}}-focused overview, see [Migrate Data to {{{ .premium }}} Using Data Migration](/tidb-cloud/premium/premium-data-migration.md). + + + This feature enables you to migrate your existing MySQL data and continuously replicate ongoing changes (binlog) from your MySQL-compatible source databases directly to TiDB Cloud, maintaining data consistency whether in the same region or across different regions. The streamlined process eliminates the need for separate dump and load operations, reducing downtime and simplifying your migration from MySQL to a more scalable platform. If you only want to replicate ongoing binlog changes from your MySQL-compatible database to TiDB Cloud, see [Migrate Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md). @@ -86,9 +94,17 @@ To prevent this, create the target tables in the downstream database before star + + +- For {{{ .premium }}}, both logical mode (default) and physical mode are supported. Logical mode exports rows as SQL statements and replays them on the target instance, consuming Request Capacity Units (RCUs) on the target during the load. Physical mode uses `IMPORT INTO` on the target instance and is recommended for large datasets where load throughput and cost are priorities. 
+- When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance. Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data. +- When you use physical mode, you cannot create a second migration job or import task for the {{{ .premium }}} instance before the existing data migration is completed. + + + ### Limitations of incremental data migration -- During incremental data migration, if the table to be migrated already exists in the target database with duplicate keys, an error is reported and the migration is interrupted. In this situation, you need to verify that the MySQL source data is accurate. If it is accurate, click the **Restart** button of the migration job, and the migration job will replace the conflicting records in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance with the MySQL source records. +- During incremental data migration, if the table to be migrated already exists in the target database with duplicate keys, an error is reported and the migration is interrupted. In this situation, you need to verify that the MySQL source data is accurate. If it is accurate, click the **Restart** button of the migration job, and the migration job will replace the conflicting records in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance with the MySQL source records. @@ -108,7 +124,7 @@ To prevent this, create the target tables in the downstream database before star ## Prerequisites -Before migrating, check whether your data source is supported, enable binary logging in your MySQL-compatible database, ensure network connectivity, and grant required privileges for both the source database and the target {{{ .dedicated }}} cluster{{{ .essential }}} instance database. 
+Before migrating, check whether your data source is supported, enable binary logging in your MySQL-compatible database, ensure network connectivity, and grant required privileges for both the source database and the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance database. ### Make sure your data source and version are supported @@ -141,9 +157,24 @@ For {{{ .essential }}}, the Data Migration feature supports the following data s + + +For {{{ .premium }}}, the Data Migration feature supports any MySQL-compatible source database. The wizard exposes a single source-engine option (**MySQL**); to migrate from a managed MySQL service, connect via the public endpoint of the managed instance. + +| Data source | Supported versions | +|:-------------------------------------------------|:-------------------| +| Self-managed MySQL (on-premises or public cloud) | 8.0, 5.7 | +| Amazon Aurora MySQL | 8.0, 5.7 | +| Amazon RDS MySQL | 8.0, 5.7 | +| Azure Database for MySQL - Flexible Server | 8.0, 5.7 | +| Google Cloud SQL for MySQL | 8.0, 5.7 | +| Alibaba Cloud RDS MySQL | 8.0, 5.7 | + + + ### Enable binary logs in the source MySQL-compatible database for replication -To continuously replicate incremental changes from the source MySQL-compatible database to the target {{{ .dedicated }}} cluster{{{ .essential }}} instance using DM, you need the following configurations to enable binary logs in the source database: +To continuously replicate incremental changes from the source MySQL-compatible database to the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance using DM, you need the following configurations to enable binary logs in the source database: | Configuration | Required value | Why | |:---------------------------------|:---------------|:----| @@ -255,7 +286,7 @@ For more information, see [Set instance parameters](https://www.alibabacloud.com ### Ensure network connectivity -Before creating a migration 
job, you need to plan and set up proper network connectivity between your source MySQL instance, the TiDB Cloud Data Migration (DM) service, and your target {{{ .dedicated }}} cluster{{{ .essential }}} instance. +Before creating a migration job, you need to plan and set up proper network connectivity between your source MySQL instance, the TiDB Cloud Data Migration (DM) service, and your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance. @@ -399,7 +430,7 @@ If you use AWS VPC peering or Google Cloud VPC network peering, see the followin If your MySQL service is in an AWS VPC, take the following steps: -1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your {{{ .dedicated }}} cluster{{{ .essential }}} instance. +1. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance. 2. Modify the inbound rules of the security group that the MySQL service is associated with. @@ -451,7 +482,7 @@ If your MySQL service is in a Google Cloud VPC, take the following steps: ### Grant required privileges for migration -Before starting migration, you need to set up appropriate database users with the required privileges on both the source and target databases. These privileges enable TiDB Cloud DM to read data from MySQL, replicate changes, and write to your {{{ .dedicated }}} cluster{{{ .essential }}} instance securely. Because the migration involves both full data dumps for existing data and binlog replication for incremental changes, your migration user requires specific permissions beyond basic read access. +Before starting migration, you need to set up appropriate database users with the required privileges on both the source and target databases. 
These privileges enable TiDB Cloud DM to read data from MySQL, replicate changes, and write to your {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance securely. Because the migration involves both full data dumps for existing data and binlog replication for incremental changes, your migration user requires specific permissions beyond basic read access. #### Grant required privileges to the migration user in the source MySQL database @@ -477,11 +508,11 @@ GRANT SELECT, RELOAD, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dm_source GRANT SELECT, RELOAD, LOCK TABLES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dm_source_user'@'%'; ``` -#### Grant required privileges in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance +#### Grant required privileges in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance -For testing purposes, you can use the `root` account of your {{{ .dedicated }}} cluster{{{ .essential }}} instance. +For testing purposes, you can use the `root` account of your {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance. 
-For production workloads, it is recommended to have a dedicated user for replication in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance and grant only the necessary privileges: +For production workloads, it is recommended to have a dedicated user for replication in the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance and grant only the necessary privileges: | Privilege | Scope | Purpose | |:----------|:------|:--------| @@ -495,7 +526,7 @@ For production workloads, it is recommended to have a dedicated user for replica | `INDEX` | Tables | Creates and modifies indexes | | `CREATE VIEW` | Views | Creates views used by migration | -For example, you can execute the following `GRANT` statement in your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to grant corresponding privileges: +For example, you can execute the following `GRANT` statement in your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to grant corresponding privileges: ```sql GRANT CREATE, SELECT, INSERT, UPDATE, DELETE, ALTER, DROP, INDEX ON *.* TO 'dm_target_user'@'%'; @@ -505,7 +536,7 @@ GRANT CREATE, SELECT, INSERT, UPDATE, DELETE, ALTER, DROP, INDEX ON *.* TO 'dm_t 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page. -2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane. +2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane. 3. On the **Data Migration** page, click **Create Migration Job** in the upper-right corner. The **Create Migration Job** page is displayed. 
@@ -589,7 +620,7 @@ On the **Create Migration Job** page, configure the source and target connection 3. Fill in the target connection profile. - - **User Name**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance in TiDB Cloud. + - **User Name**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance in TiDB Cloud. - **Password**: enter the password of the TiDB Cloud username. 4. Click **Validate Connection and Next** to validate the information you have entered. @@ -638,8 +669,8 @@ You can use **physical mode** or **logical mode** to migrate **existing data** a > **Note:** > -> - When you use physical mode, you cannot create a second migration job or import task for the {{{ .dedicated }}} cluster{{{ .essential }}} instance before the existing data migration is completed. -> - When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .dedicated }}} cluster{{{ .essential }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data. +> - When you use physical mode, you cannot create a second migration job or import task for the {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance before the existing data migration is completed. +> - When you use physical mode and the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance. Otherwise, the migration job will be stuck. If you need to enable PITR or have any changefeed, use logical mode instead to migrate data. 
Physical mode exports the MySQL source data as fast as possible, so [different specifications](/tidb-cloud/tidb-cloud-billing-dm.md#specifications-for-data-migration) have different performance impacts on QPS and TPS of the MySQL source database during data export. The following table shows the performance regression of each specification. @@ -755,7 +786,7 @@ When scaling a migration job specification, note the following: 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**My TiDB**](https://tidbcloud.com/tidbs) page. -2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane. +2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane. 3. On the **Data Migration** page, locate the migration job you want to scale. In the **Action** column, click **...** > **Scale Up/Down**. 
diff --git a/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md b/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md index 5d0a3d1d10d19..8b7664fcbfdd5 100644 --- a/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md +++ b/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md @@ -5,7 +5,7 @@ summary: Learn how to migrate incremental data from MySQL-compatible databases h # Migrate Only Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration -This document describes how to migrate incremental data from a MySQL-compatible database on a cloud provider (Amazon Aurora MySQL, Amazon Relational Database Service (RDS), Google Cloud SQL for MySQL, Azure Database for MySQL, or Alibaba Cloud RDS) or self-hosted source database to {{{ .dedicated }}}{{{ .essential }}} using the Data Migration feature of the TiDB Cloud console. +This document describes how to migrate incremental data from a MySQL-compatible database on a cloud provider (Amazon Aurora MySQL, Amazon Relational Database Service (RDS), Google Cloud SQL for MySQL, Azure Database for MySQL, or Alibaba Cloud RDS) or self-hosted source database to {{{ .dedicated }}}{{{ .essential }}}{{{ .premium }}} using the Data Migration feature of the TiDB Cloud console. @@ -148,7 +148,7 @@ To enable the GTID mode for a self-hosted MySQL instance, follow these steps: > > If you are in multiple organizations, use the combo box in the upper-left corner to switch to your target organization first. -2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane. +2. Click the name of your target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane. 3. 
On the **Data Migration** page, click **Create Migration Job** in the upper-right corner. The **Create Migration Job** page is displayed. @@ -187,7 +187,7 @@ On the **Create Migration Job** page, configure the source and target connection 3. Fill in the target connection profile. - - **Username**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance. + - **Username**: enter the username of the target {{{ .dedicated }}} cluster{{{ .essential }}} instance{{{ .premium }}} instance. - **Password**: enter the password of the TiDB Cloud username. 4. Click **Validate Connection and Next** to validate the information you have entered. diff --git a/tidb-cloud/premium/premium-data-migration.md b/tidb-cloud/premium/premium-data-migration.md new file mode 100644 index 0000000000000..26abab53db0b4 --- /dev/null +++ b/tidb-cloud/premium/premium-data-migration.md @@ -0,0 +1,187 @@ +--- +title: Migrate Data to {{{ .premium }}} Using Data Migration +summary: Learn how to migrate data from MySQL-compatible databases to {{{ .premium }}} instances using the Data Migration feature in the TiDB Cloud console. +--- + +# Migrate Data to {{{ .premium }}} Using Data Migration + +This document describes how to migrate data from a MySQL-compatible database to a {{{ .premium }}} instance using the Data Migration feature in the [TiDB Cloud console](https://tidbcloud.com/). + +The Data Migration feature enables you to migrate existing MySQL data and continuously replicate ongoing changes (binlog) from your MySQL-compatible source database directly to a {{{ .premium }}} instance, reducing downtime and simplifying your migration to TiDB. + +> **Note:** +> +> The Data Migration feature for {{{ .premium }}} is currently in Public Preview. During Public Preview, the source database must be reachable over a public network endpoint, and you cannot reuse the source connection across migration jobs. For details, see [Limitations](#limitations). 
+ +## Supported source databases + +The Data Migration feature supports any MySQL-compatible database with binary log replication enabled. The wizard exposes a single source-engine option (**MySQL**); to migrate from a managed MySQL service such as Amazon Aurora MySQL, Amazon RDS MySQL, Azure Database for MySQL - Flexible Server, Google Cloud SQL for MySQL, or Alibaba Cloud RDS MySQL, connect via the public endpoint of the managed instance. + +Supported MySQL versions: 5.7 and 8.0. + +## Migration modes + +When you create a migration job, you choose a **Migration process** and an **Existing data migration mode**. + +The **Migration process** determines what data is migrated: + +- **Full + Incremental**: migrates existing data from the source database first, and then continuously replicates ongoing changes (binlog) to the target {{{ .premium }}} instance. +- **Incremental only**: continuously replicates ongoing changes (binlog) from the source database to the target {{{ .premium }}} instance, starting from the current binlog position. + +The **Existing data migration mode** determines how the existing data load is performed when **Full + Incremental** is selected: + +- **Logical** (default): exports rows from the source database and replays them as SQL `INSERT` statements on the target instance, completing the load before incremental replication starts. This mode consumes Request Capacity Units (RCUs) on the target instance during the data load. +- **Physical**: uses `IMPORT INTO` on the target instance to import data without RCU charges during the load. Use this mode for large datasets where load throughput and cost are priorities. + +The **Existing data migration mode** does not apply to **Incremental only** migrations. + +When you use physical mode, the following limitations apply: + +- After the migration job has started, do **NOT** enable PITR (Point-in-time Recovery) or have any changefeed on the {{{ .premium }}} instance.
Otherwise, the migration job stops. If you need to enable PITR or have any changefeed, use logical mode instead. +- You cannot create a second migration job or import task for the {{{ .premium }}} instance before the existing data migration is completed. + +## Limitations + +### Public Preview limitations + +- Connectivity to the source database is currently public-only. Private Link connectivity to the source database is in development and not yet generally available. +- Source connection details cannot be saved or reused across migration jobs. Each migration job requires the source connection to be entered from scratch. +- Migration jobs created during Public Preview might be subject to additional restrictions as the feature matures. For up-to-date information, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md). + +### General limitations + +- The system databases `mysql`, `information_schema`, `performance_schema`, and `sys` are filtered out and not migrated, even if you select all databases. +- During existing data migration, if the target database already contains the table to be migrated and there are duplicate keys, TiDB Cloud replaces the rows with duplicate keys. +- During incremental data migration, if a migration job recovers from an abrupt error, it might enter safe mode for 60 seconds. During safe mode, TiDB Cloud migrates `INSERT` statements as `REPLACE`, and `UPDATE` statements as `DELETE` and `REPLACE`. For source tables without primary keys or non-null unique indexes, this can result in duplicated rows in the target instance. + +For a complete list of Data Migration limitations across TiDB Cloud, see [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#limitations). + +## Prerequisites + +Before creating a migration job, make sure the following prerequisites are met. 
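+Each binlog requirement in the next section can be verified up front with a single query. The following is a sketch, not part of the migration wizard; variable names follow MySQL 8.0 (on MySQL 5.7, binlog retention is controlled by `expire_logs_days` instead, and `binlog_transaction_compression` does not exist):
+
+```sql
+-- Run on the source database with the migration user.
+-- Expected values: log_bin = ON, binlog_format = ROW,
+-- binlog_row_image = FULL, binlog_expire_logs_seconds >= 86400,
+-- binlog_transaction_compression = OFF.
+SHOW GLOBAL VARIABLES
+WHERE Variable_name IN
+  ('log_bin', 'binlog_format', 'binlog_row_image',
+   'binlog_expire_logs_seconds', 'binlog_transaction_compression');
+```
+
+Any variable that does not match the expected value must be corrected before you create the migration job.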
+ +### Enable binary logs on the source database + +To replicate incremental changes from the source MySQL-compatible database to the target {{{ .premium }}} instance, configure the source database with the following settings: + +| Configuration | Required value | Purpose | +|:---------------------------------|:---------------|:--------| +| `log_bin` | `ON` | Enables binary logging, which Data Migration uses to replicate changes to TiDB. | +| `binlog_format` | `ROW` | Captures all data changes accurately. | +| `binlog_row_image` | `FULL` | Includes all column values in events for safe conflict resolution. | +| `binlog_expire_logs_seconds` | ≥ `86400` (1 day); `604800` (7 days) recommended | Ensures Data Migration can access consecutive logs during migration. | +| `binlog_transaction_compression` | `OFF` | Data Migration does not support transaction compression. | + +For detailed configuration steps for self-managed MySQL, AWS RDS, Aurora, Azure Database for MySQL, Google Cloud SQL, and Alibaba Cloud RDS, see [Enable binary logs in the source MySQL-compatible database for replication](/tidb-cloud/migrate-from-mysql-using-data-migration.md#enable-binary-logs-in-the-source-mysql-compatible-database-for-replication). + +### Ensure network connectivity + +The {{{ .premium }}} instance connects to the source database over the public internet during Public Preview. Make sure that: + +- The source database accepts inbound connections from the public IP ranges used by the {{{ .premium }}} region. +- Any firewall, security group, or network ACL between the {{{ .premium }}} instance and the source database allows traffic on the source database port (typically `3306`). + +The target {{{ .premium }}} instance must also be reachable. If the target instance's public endpoint is disabled, enable it under **Settings** > **Networking** before creating the migration job.
For more information, see [Connect via Public Endpoint](/tidb-cloud/premium/connect-to-premium-via-public-connection.md). + +### Grant required privileges + +The migration user on the source database must have privileges sufficient to read schema and data and to read the binary log, including (but not limited to) `SELECT`, `RELOAD`, `REPLICATION SLAVE`, `REPLICATION CLIENT`, and `PROCESS`. The pre-check step warns if the `PROCESS` privilege is missing, because Data Migration uses it to verify that the migration user does not exceed the source database's connection-concurrency limit. + +For managed MySQL services such as AWS RDS, Aurora, Azure Database for MySQL, Google Cloud SQL, and Alibaba Cloud RDS, additional service-specific permissions might be required. For details, see [Grant required privileges to the migration user in the source MySQL database](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-to-the-migration-user-in-the-source-mysql-database). + +On the target {{{ .premium }}} instance, the migration user must have privileges sufficient to create databases, create tables, and write data in the target schemas. For details, see [Grant required privileges for migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md#grant-required-privileges-for-migration). + +## Create a migration job + +To create a migration job from a MySQL-compatible source database to a {{{ .premium }}} instance, take the following steps. + +### Step 1: Configure source and target connection + +1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page. + + > **Tip:** + > + > If you are in multiple organizations, use the combo box in the upper-left corner to switch to your target organization first. + +2. Click the name of your target {{{ .premium }}} instance to go to its overview page, and then click **Data** > **Data Migration** in the left navigation pane. + +3. 
On the **Data Migration** page, click **Create Migration Job** in the upper-right corner. + +4. On the **Configure source and target connection** step, enter the following information: + + - **Job Name**: a name for the migration job. The default value is `migration_job_{timestamp}`. The name must start with a letter, can contain letters, numbers, underscores (`_`), and hyphens (`-`), and must be less than 60 characters. + - **Source Connection Profile**: + - **Data Source**: select **MySQL**. + - **Connectivity Method**: select **Public**. + - **Hostname or IP address**: enter the hostname or IP address of the source database. + - **Port**: enter the source database port. The default is `3306`. + - **User Name** and **Password**: enter the credentials for the migration user. This user must have the privileges listed in [Grant required privileges](#grant-required-privileges). + - **SSL/TLS**: enabled by default. If your source database requires encrypted connections, upload the **CA Certificate**, **Client Certificate**, and **Client private key** as needed. If your source database does not require encrypted connections, turn off the **SSL/TLS** toggle. + - **Target Connection Profile**: the **Region**, **Cluster ID**, and **Cluster Name** fields are auto-populated from the current {{{ .premium }}} instance. Enter the **User Name** and **Password** for a TiDB user that has sufficient privileges in the target instance. + +5. Click **Validate Connection and Next**. The console validates both source and target connections. If validation fails, the wizard displays an error and remains on this step. Resolve the issue and try again. + +### Step 2: Choose objects to be migrated + +In the **Migration Type** section, configure how data is migrated: + +- **Migration process**: select **Full + Incremental** (default) or **Incremental only**. +- **Existing data migration mode** (only applies to **Full + Incremental**): select **Logical** (default) or **Physical**. 
For details, see [Migration modes](#migration-modes). + +In the **Select Objects to Migrate** section, choose: + +- **All** (default): migrate every database and table on the source. TiDB Cloud automatically excludes the system databases (`mysql`, `information_schema`, `performance_schema`, `sys`). +- **Customize**: pick specific databases and tables. The wizard fetches the source schema and shows two panels, **Source Database** and **Selected Objects**. Use the arrow buttons between the panels to move databases or tables into the **Selected Objects** list. + +Click **Next**. + +### Step 3: Pre-check + +The console runs the pre-check against the source database, network connectivity, and the target {{{ .premium }}} instance. The progress bar shows **Running {percentage}%** while checks execute, and **Finished 100%** when complete. The summary line reports the total number of items, including those that are completed, passed, with warnings, or failed. + +The **Pre-check Result** table lists every item that did not pass, along with its reason and a suggested solution. To re-run the pre-check after fixing an item, click **Check Again**. To proceed without addressing a warning, you can dismiss it by selecting **Ignore** on the row. + +If the pre-check completes with at least one warning and you click **Next**, the console shows a confirmation dialog with two options: + +- **Check Again**: return to the **Pre-check Result** table and address the warnings. +- **Ignore warnings**: advance to the next step. Note that ignoring warnings may result in job failures or data inconsistencies. + +When all checks pass (or you choose to ignore the remaining warnings), click **Next**. + +### Step 4: Review and start migration + +The review page shows three sections summarizing the migration job: + +- **Job Configuration**: job name and migration type. 
+- **Source Connection Profile**: data source, host, port, connectivity method, username, SSL/TLS status, selected objects, and the existing data migration mode (shown as **Import Mode** on the review page). +- **Target Connection Profile**: region, cluster ID, cluster name, and target username. + +Click **Previous** to revise any setting, or click **Create Job and Start** to create the migration job. The console redirects to the job detail page, where the job status starts in **Creating** and transitions to **Running** when the migration begins. + +## Manage a migration job + +After a migration job is created, you can monitor and manage it from the **Data Migration** page of your {{{ .premium }}} instance. + +### View job status and progress + +The migration job list shows the **Name**, **Status**, **Mode**, **Target User**, and **Creation Time** for each job. Click a migration job to open its detail page, which shows: + +- A **Summary** panel with the job name, ID, status, mode, data source, data target, migration objects, and creation time. +- A **Progress** panel that shows the migration progress once the job starts running. + +### Manage a migration job from the job list + +To manage a migration job, click the `...` (more) button at the end of the migration job row on the **Data Migration** page. The actions menu shows different options depending on the job status: + +- **View**: navigate to the job detail page. +- **Pause**: temporarily pause a running migration job. You can resume it later from the same position. +- **Resume**: resume a paused migration job. +- **Delete**: delete the migration job and its metadata. This action does not affect data already migrated to the target instance. + +**Pause** and **Resume** are only available when the job is in a running or paused state. While the job is in the **Creating** state, only **View** and **Delete** are available. 
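+As a quick post-migration sanity check, you can compare exact row counts between the source database and the target instance once the existing data load completes (and while the source is not receiving writes). The table name below is hypothetical; substitute one of your migrated tables and run the same statement on both sides:
+
+```sql
+-- Hypothetical example: `app_db.orders` stands in for a migrated table.
+-- COUNT(*) is exact, unlike information_schema.TABLES.TABLE_ROWS,
+-- which is only an estimate for InnoDB tables.
+SELECT COUNT(*) AS orders_rows FROM app_db.orders;
+```
+
+Matching counts are a coarse signal only; they do not prove that the row contents are identical.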
+ +## See also + +- [Migrate MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-from-mysql-using-data-migration.md): the canonical Data Migration reference, including detailed prerequisites, source-specific configuration, and troubleshooting. +- [Migrate Incremental Data from MySQL-Compatible Databases to TiDB Cloud Using Data Migration](/tidb-cloud/migrate-incremental-data-from-mysql-using-data-migration.md): a focused guide for incremental-only migration scenarios. +- [Connect to Your {{{ .premium }}} Instance](/tidb-cloud/premium/connect-to-tidb-instance.md): network and connectivity options for {{{ .premium }}} instances.