diff --git a/TOC-tidb-cloud-premium.md b/TOC-tidb-cloud-premium.md index 63aa93ff2697c..c01c2758d8609 100644 --- a/TOC-tidb-cloud-premium.md +++ b/TOC-tidb-cloud-premium.md @@ -221,6 +221,13 @@ - [CSV Configurations for Importing Data](/tidb-cloud/csv-config-for-import-data.md) - [Troubleshoot Access Denied Errors during Data Import from Amazon S3](/tidb-cloud/troubleshoot-import-access-denied-error.md) - [Connect AWS DMS to TiDB Cloud clusters](/tidb-cloud/tidb-cloud-connect-aws-dms.md) +- Stream Data + - [Changefeed Overview](/tidb-cloud/changefeed-overview.md) + - [To MySQL Sink](/tidb-cloud/changefeed-sink-to-mysql.md) + - [To Kafka Sink](/tidb-cloud/changefeed-sink-to-apache-kafka.md) + - Reference + - [Set Up Self-Hosted Kafka Private Link Service in AWS](/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md) + - [Set Up Private Endpoint for Changefeeds](/tidb-cloud/premium/set-up-sink-private-endpoint-premium.md) - Security - [Security Overview](/tidb-cloud/security-overview.md) - Identity Access Control @@ -243,6 +250,7 @@ - [Credits](/tidb-cloud/tidb-cloud-billing.md#credits) - [Payment Method Setting](/tidb-cloud/tidb-cloud-billing.md#payment-method) - [Billing from Cloud Provider Marketplace](/tidb-cloud/tidb-cloud-billing.md#billing-from-cloud-provider-marketplace) + - [Billing for Changefeed](/tidb-cloud/premium/tidb-cloud-billing-ticdc-ccu.md) - [Manage Budgets](/tidb-cloud/tidb-cloud-budget.md) - Integrations - [Airbyte](/tidb-cloud/integrate-tidbcloud-with-airbyte.md) diff --git a/tidb-cloud/changefeed-overview.md b/tidb-cloud/changefeed-overview.md index d7343c97464d2..18bf45a1b7207 100644 --- a/tidb-cloud/changefeed-overview.md +++ b/tidb-cloud/changefeed-overview.md @@ -9,7 +9,7 @@ TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data servic > **Note:** > -> - Currently, TiDB Cloud only allows up to 100 changefeeds per cluster. +> - Currently, TiDB Cloud only allows up to 100 changefeeds per cluster or instance.
> - Currently, TiDB Cloud only allows up to 100 table filter rules per changefeed. > - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) and [{{{ .essential }}}](/tidb-cloud/select-cluster-tier.md#essential) clusters, the changefeed feature is unavailable. @@ -17,13 +17,13 @@ TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data servic To access the changefeed feature, take the following steps: -1. In the [TiDB Cloud console](https://tidbcloud.com), navigate to the [**Clusters**](https://tidbcloud.com/project/clusters) page of your project. +1. In the [TiDB Cloud console](https://tidbcloud.com), navigate to the [**Clusters**](https://tidbcloud.com/project/clusters) page of your project, or navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page. > **Tip:** > > You can use the combo box in the upper-left corner to switch between organizations, projects, and clusters. -2. Click the name of your target cluster to go to its overview page, and then click **Data** > **Changefeed** in the left navigation pane. The changefeed page is displayed. +2. Click the name of your target cluster or instance to go to its overview page, and then click **Data** > **Changefeed** in the left navigation pane. The changefeed page is displayed. On the **Changefeed** page, you can create a changefeed, view a list of existing changefeeds, and operate the existing changefeeds (such as scaling, pausing, resuming, editing, and deleting a changefeed). @@ -36,14 +36,31 @@ To create a changefeed, refer to the tutorials: - [Sink to TiDB Cloud](/tidb-cloud/changefeed-sink-to-tidb-cloud.md) - [Sink to cloud storage](/tidb-cloud/changefeed-sink-to-cloud-storage.md) -## Query Changefeed RCUs +## Query changefeed capacity + + + +For TiDB Cloud Dedicated, you can query the TiCDC Replication Capacity Units (RCUs) of a changefeed. 1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster. 2. 
Locate the corresponding changefeed you want to query, and click **...** > **View** in the **Action** column. 3. You can see the current TiCDC Replication Capacity Units (RCUs) in the **Specification** area of the page. + + + +For {{{ .premium }}}, you can query the TiCDC Changefeed Capacity Units (CCUs) of a changefeed. + +1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB instance. +2. Locate the corresponding changefeed you want to query, and click **...** > **View** in the **Action** column. +3. You can see the current TiCDC Changefeed Capacity Units (CCUs) in the **Specification** area of the page. + + + ## Scale a changefeed + + You can change the TiCDC Replication Capacity Units (RCUs) of a changefeed by scaling up or down the changefeed. > **Note:** @@ -51,7 +68,14 @@ You can change the TiCDC Replication Capacity Units (RCUs) of a changefeed by sc > - To scale a changefeed for a cluster, make sure that all changefeeds for this cluster are created after March 28, 2023. > - If a cluster has changefeeds created before March 28, 2023, neither the existing changefeeds nor newly created changefeeds for this cluster support scaling up or down. -1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster. + + + +You can change the TiCDC Changefeed Capacity Units (CCUs) of a changefeed by scaling up or down the changefeed. + + + +1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster or instance. 2. Locate the corresponding changefeed you want to scale, and click **...** > **Scale Up/Down** in the **Action** column. 3. Select a new specification. 4. Click **Submit**. @@ -60,7 +84,7 @@ It takes about 10 minutes to complete the scaling process (during which the chan ## Pause or resume a changefeed -1. 
Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster or instance. 2. Locate the corresponding changefeed you want to pause or resume, and click **...** > **Pause/Resume** in the **Action** column. ## Edit a changefeed @@ -69,7 +93,7 @@ It takes about 10 minutes to complete the scaling process (during which the chan > > TiDB Cloud currently only allows editing changefeeds in the paused status. -1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster. +1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster or instance. 2. Locate the changefeed you want to pause, and click **...** > **Pause** in the **Action** column. 3. When the changefeed status changes to `Paused`, click **...** > **Edit** to edit the corresponding changefeed. @@ -84,7 +108,7 @@ It takes about 10 minutes to complete the scaling process (during which the chan ## Delete a changefeed -1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster. +1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster or instance. 2. Locate the corresponding changefeed you want to delete, and click **...** > **Delete** in the **Action** column. ## Changefeed billing diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md index 0bdb4b2bb64ac..9bb1195bb029a 100644 --- a/tidb-cloud/changefeed-sink-to-apache-kafka.md +++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md @@ -7,17 +7,31 @@ summary: This document explains how to create a changefeed to stream data from T This document describes how to create a changefeed to stream data from TiDB Cloud to Apache Kafka. + + > **Note:** > > - To use the changefeed feature, make sure that your TiDB Cloud Dedicated cluster version is v6.1.3 or later. 
> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) and [{{{ .essential }}}](/tidb-cloud/select-cluster-tier.md#essential) clusters, the changefeed feature is unavailable. + + + +> **Note:** +> +> For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) and [{{{ .essential }}}](/tidb-cloud/select-cluster-tier.md#essential) clusters, the changefeed feature is unavailable. + + + ## Restrictions -- For each TiDB Cloud cluster, you can create up to 100 changefeeds. +- For each TiDB Cloud cluster or instance, you can create up to 100 changefeeds. - Currently, TiDB Cloud does not support uploading self-signed TLS certificates to connect to Kafka brokers. - Because TiDB Cloud uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios). - If the table to be replicated does not have a primary key or a non-null unique index, the absence of a unique constraint during replication could result in duplicated data being inserted downstream in some retry scenarios. + + + - If you choose Private Link or Private Service Connect as the network connectivity method, ensure that your TiDB cluster version meets the following requirements: - For v6.5.x: version v6.5.9 or later @@ -30,6 +44,8 @@ This document describes how to create a changefeed to stream data to Apache Kafka - If you want to distribute changelogs by primary key or index value to Kafka partition with a specified index name, make sure the version of your TiDB cluster is v7.5.0 or later. - If you want to distribute changelogs by column value to Kafka partition, make sure the version of your TiDB cluster is v7.5.0 or later. 
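The restrictions above mention distributing changelogs to Kafka partitions by primary key, index value, or column value. As a simplified sketch of why key-based dispatch preserves per-key ordering — this is only an illustration of the idea, not TiCDC's actual dispatcher (the function name and the `cksum`-based hashing are assumptions):

```shell
partition_for_key() {
    # Illustration only (not TiCDC's real dispatcher): hash the key value and
    # take it modulo the partition count, so every change for the same key
    # lands in the same partition and keeps its order within that partition.
    key_hash=$(printf '%s' "$1" | cksum | cut -d ' ' -f 1)
    echo $(( key_hash % $2 ))
}

# The same primary-key value always maps to the same partition.
partition_for_key "user:42" 6
```

Because all changes for one row hash to the same partition, a downstream consumer reading a single partition sees that row's changes in order.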
+ + ## Prerequisites Before creating a changefeed to stream data to Apache Kafka, you need to complete the following prerequisites: @@ -39,12 +55,14 @@ Before creating a changefeed to stream data to Apache Kafka, you need to complet ### Network -Ensure that your TiDB cluster can connect to the Apache Kafka service. You can choose one of the following connection methods: +Ensure that your TiDB cluster or instance can connect to the Apache Kafka service. You can choose one of the following connection methods: - Private Connect: ideal for avoiding VPC CIDR conflicts and meeting security compliance, but incurs additional [Private Data Link Cost](/tidb-cloud/tidb-cloud-billing-ticdc-rcu.md#private-data-link-cost). - VPC Peering: suitable as a cost-effective option, but requires managing potential VPC CIDR conflicts and security considerations. - Public IP: suitable for a quick setup. + +
@@ -87,6 +105,35 @@ It is **NOT** recommended to use Public IP in a production environment.
+
+ + + + +
+ +Private Connect leverages **Private Link** or **Private Service Connect** technologies from cloud providers to enable resources in your VPC to connect to services in other VPCs using private IP addresses, as if those services were hosted directly within your VPC. + +TiDB Cloud currently supports Private Connect only for self-hosted Kafka. It does not support direct integration with MSK, Confluent Kafka, or other Kafka SaaS services. To connect to these Kafka SaaS services via Private Connect, you can deploy a [kafka-proxy](https://github.com/grepplabs/kafka-proxy) as an intermediary, effectively exposing the Kafka service as self-hosted Kafka. + +If your Apache Kafka service is hosted on AWS, follow [Set Up Self-Hosted Kafka Private Link Service in AWS](/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md) to configure the network connection and obtain the **Bootstrap Ports** information, and then follow [Set Up Private Endpoint for Changefeeds](/tidb-cloud/premium/set-up-sink-private-endpoint-premium.md) to create a private endpoint. + +
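As a sketch of the kafka-proxy workaround described above, each remote broker of the Kafka SaaS service is mapped to a local listener on the proxy host (all broker hostnames and ports below are placeholders, not real endpoints):

```shell
# Sketch only: run kafka-proxy so a managed Kafka service (for example, MSK)
# can be exposed behind the Private Link service as if it were self-hosted.
# Every broker hostname and port below is a placeholder.
kafka-proxy server \
    --bootstrap-server-mapping "b-1.example.kafka.us-west-2.amazonaws.com:9092,0.0.0.0:9092" \
    --bootstrap-server-mapping "b-2.example.kafka.us-west-2.amazonaws.com:9092,0.0.0.0:9093"
```

Each `--bootstrap-server-mapping` pairs one remote broker with one local listener; you then register the proxy's local listeners behind the load balancer of the Private Link service.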
+
+ +If you want to provide Public IP access to your Apache Kafka service, assign Public IP addresses to all your Kafka brokers. + +It is **NOT** recommended to use Public IP in a production environment. + +
+ +
+ +Currently, the VPC Peering feature for {{{ .premium }}} instances is only available upon request. To request this feature, click **?** in the lower-right corner of the [TiDB Cloud console](https://tidbcloud.com) and click **Request Support**. Then, fill in "Apply for VPC Peering for {{{ .premium }}} instance" in the **Description** field and click **Submit**. + +
+
+
### Kafka ACL authorization @@ -100,7 +147,7 @@ For example, if your Kafka cluster is in Confluent Cloud, you can see [Resources ## Step 1. Open the Changefeed page for Apache Kafka 1. Log in to the [TiDB Cloud console](https://tidbcloud.com). -2. Navigate to the cluster overview page of the target TiDB cluster, and then click **Data** > **Changefeed** in the left navigation pane. +2. Navigate to the overview page of the target TiDB cluster or instance, and then click **Data** > **Changefeed** in the left navigation pane. 3. Click **Create Changefeed**, and select **Kafka** as **Destination**. ## Step 2. Configure the changefeed target @@ -140,6 +187,8 @@ The steps vary depending on the connectivity method you select. 11. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds. + +
1. In **Connectivity Method**, select **Private Service Connect**. @@ -158,6 +207,9 @@ The steps vary depending on the connectivity method you select. 11. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds.
+
+ +
1. In **Connectivity Method**, select **Private Link**. @@ -176,6 +228,7 @@ The steps vary depending on the connectivity method you select. 11. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds.
+
## Step 3. Set the changefeed @@ -219,7 +272,7 @@ The steps vary depending on the connectivity method you select. 6. If you select **Avro** as your data format, you will see some Avro-specific configurations on the page. You can fill in these configurations as follows: - In the **Decimal** and **Unsigned BigInt** configurations, specify how TiDB Cloud handles the decimal and unsigned bigint data types in Kafka messages. - - In the **Schema Registry** area, fill in your schema registry endpoint. If you enable **HTTP Authentication**, the fields for user name and password are displayed and automatically filled in with your TiDB cluster endpoint and password. + - In the **Schema Registry** area, fill in your schema registry endpoint. If you enable **HTTP Authentication**, the fields for user name and password are displayed and automatically filled in with your TiDB cluster or instance endpoint and password. 7. In the **Topic Distribution** area, select a distribution mode, and then fill in the topic name configurations according to the mode. @@ -272,7 +325,7 @@ The steps vary depending on the connectivity method you select. ## Step 4. Configure your changefeed specification -1. In the **Changefeed Specification** area, specify the number of Replication Capacity Units (RCUs) to be used by the changefeed. +1. In the **Changefeed Specification** area, specify the number of Replication Capacity Units (RCUs) or Changefeed Capacity Units (CCUs) to be used by the changefeed. 2. In the **Changefeed Name** area, specify a name for the changefeed. 3. Click **Next** to check the configurations you set and go to the next page. 
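As a rough aid for choosing a specification, the changefeed billing pages list about 5,000 rows/s for the smallest 2-unit specification, scaling roughly linearly. The helper below is an illustrative sketch built on that linear-scaling assumption (the function name is mine; this is not an official sizing tool):

```shell
required_units() {
    # $1: target row changes per second.
    # Assumes ~2,500 rows/s per capacity unit (RCU or CCU), as implied by the
    # published specification tables; real throughput varies by workload.
    for units in 2 4 8 16 24 32 40 64 96 128 192 256 320 384; do
        if [ $(( units * 2500 )) -ge "$1" ]; then
            echo "$units"
            return 0
        fi
    done
    echo 384
}

required_units 9000   # prints 4: the smallest specification covering 9,000 rows/s
```

Treat the result only as a starting point, and benchmark with a real workload before relying on it in production.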
diff --git a/tidb-cloud/changefeed-sink-to-mysql.md b/tidb-cloud/changefeed-sink-to-mysql.md index 136fed436889e..676bf3d62dafa 100644 --- a/tidb-cloud/changefeed-sink-to-mysql.md +++ b/tidb-cloud/changefeed-sink-to-mysql.md @@ -7,14 +7,25 @@ summary: This document explains how to stream data from TiDB Cloud to MySQL usin This document describes how to stream data from TiDB Cloud to MySQL using the **Sink to MySQL** changefeed. + + > **Note:** > > - To use the changefeed feature, make sure that your TiDB Cloud Dedicated cluster version is v6.1.3 or later. > - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) and [{{{ .essential }}}](/tidb-cloud/select-cluster-tier.md#essential) clusters, the changefeed feature is unavailable. + + + +> **Note:** +> +> For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) and [{{{ .essential }}}](/tidb-cloud/select-cluster-tier.md#essential) clusters, the changefeed feature is unavailable. + + + ## Restrictions -- For each TiDB Cloud cluster, you can create up to 100 changefeeds. +- For each TiDB Cloud cluster or instance, you can create up to 100 changefeeds. - Because TiDB Cloud uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios). - If the table to be replicated does not have a primary key or a non-null unique index, the absence of a unique constraint during replication could result in duplicated data being inserted downstream in some retry scenarios. @@ -28,6 +39,8 @@ Before creating a changefeed, you need to complete the following prerequisites: ### Network + + Make sure that your TiDB Cloud cluster can connect to the MySQL service. @@ -65,10 +78,35 @@ You can connect your TiDB Cloud cluster to your MySQL service securely through a + + + + +Make sure that your TiDB Cloud instance can connect to the MySQL service. 
+ +> **Note:** +> +> Currently, the VPC Peering feature for {{{ .premium }}} instances is only available upon request. To request this feature, click **?** in the lower-right corner of the [TiDB Cloud console](https://tidbcloud.com) and click **Request Support**. Then, fill in "Apply for VPC Peering for {{{ .premium }}} instance" in the **Description** field and click **Submit**. + +Private endpoints leverage **Private Link** or **Private Service Connect** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC. + +You can connect your TiDB Cloud instance to your MySQL service securely through a private endpoint. If the private endpoint is not available for your MySQL service, follow [Set Up Private Endpoint for Changefeeds](/tidb-cloud/premium/set-up-sink-private-endpoint-premium.md) to create one. + + + ### Load existing data (optional) + + The **Sink to MySQL** connector can only sink incremental data from your TiDB cluster to MySQL after a certain timestamp. If you already have data in your TiDB cluster, you can export and load the existing data of your TiDB cluster into MySQL before enabling **Sink to MySQL**. + + + +The **Sink to MySQL** connector can only sink incremental data from your TiDB instance to MySQL after a certain timestamp. If you already have data in your TiDB instance, you can export and load the existing data of your TiDB instance into MySQL before enabling **Sink to MySQL**. + + + To load the existing data: 1. Extend the [tidb_gc_life_time](https://docs.pingcap.com/tidb/stable/system-variables#tidb_gc_life_time-new-in-v50) to be longer than the total time of the following two operations, so that historical data during the time is not garbage collected by TiDB. @@ -84,7 +122,7 @@ To load the existing data: SET GLOBAL tidb_gc_life_time = '720h'; ``` -2. 
Use [Dumpling](https://docs.pingcap.com/tidb/stable/dumpling-overview) to export data from your TiDB cluster, then use community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html) to load data to the MySQL service. +2. Use [Dumpling](https://docs.pingcap.com/tidb/stable/dumpling-overview) to export data from your TiDB cluster or instance, then use community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html) to load data to the MySQL service. 3. From the [exported files of Dumpling](https://docs.pingcap.com/tidb/stable/dumpling-overview#format-of-exported-files), get the start position of MySQL sink from the metadata file: @@ -106,7 +144,7 @@ If you do not load the existing data, you need to create corresponding target ta After completing the prerequisites, you can sink your data to MySQL. -1. Navigate to the cluster overview page of the target TiDB cluster, and then click **Data** > **Changefeed** in the left navigation pane. +1. Navigate to the overview page of the target TiDB cluster or instance, and then click **Data** > **Changefeed** in the left navigation pane. 2. Click **Create Changefeed**, and select **MySQL** as **Destination**. @@ -143,12 +181,12 @@ After completing the prerequisites, you can sink your data to MySQL. 8. In **Start Replication Position**, configure the starting position for your MySQL sink. - If you have [loaded the existing data](#load-existing-data-optional) using Dumpling, select **Start replication from a specific TSO** and fill in the TSO that you get from Dumpling exported metadata files. - - If you do not have any data in the upstream TiDB cluster, select **Start replication from now on**. + - If you do not have any data in the upstream TiDB cluster or instance, select **Start replication from now on**. - Otherwise, you can customize the start time point by choosing **Start replication from a specific time**. 9. Click **Next** to configure your changefeed specification. 
- - In the **Changefeed Specification** area, specify the number of Replication Capacity Units (RCUs) to be used by the changefeed. + - In the **Changefeed Specification** area, specify the number of Replication Capacity Units (RCUs) or Changefeed Capacity Units (CCUs) to be used by the changefeed. - In the **Changefeed Name** area, specify a name for the changefeed. 10. Click **Next** to review the changefeed configuration. diff --git a/tidb-cloud/premium/set-up-sink-private-endpoint-premium.md b/tidb-cloud/premium/set-up-sink-private-endpoint-premium.md new file mode 100644 index 0000000000000..3c9a6ecaab6a5 --- /dev/null +++ b/tidb-cloud/premium/set-up-sink-private-endpoint-premium.md @@ -0,0 +1,108 @@ +--- +title: Set Up Private Endpoint for Changefeeds +summary: Learn how to set up a private endpoint for changefeeds. +--- + +# Set Up Private Endpoint for Changefeeds + +This document describes how to create a private endpoint for changefeeds in your {{{ .premium }}} instances, enabling you to securely stream data to self-hosted Kafka or MySQL through private connectivity. + +## Prerequisites + +- Check permissions for private endpoint creation +- Set up your network connection + +### Permissions + +Only users with any of the following roles in your organization can create private endpoints for changefeeds: + +- `Organization Owner` +- `Instance Administrator` for the corresponding instance + +For more information about roles in TiDB Cloud, see [User roles](/tidb-cloud/premium/manage-user-access-premium.md#user-roles). + +### Network + +Private endpoints leverage the **Private Link** technology from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC. + +
+ +If your changefeed downstream service is hosted on AWS, collect the following information: + +- The name of the Private Endpoint Service for your downstream service +- The availability zones (AZs) where your downstream service is deployed + +If the Private Endpoint Service is not available for your downstream service, follow [Step 2. Expose the Kafka cluster as Private Link Service](/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md#step-2-expose-the-kafka-cluster-as-private-link-service) to set up the load balancer and the Private Link Service. + +
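Before entering these values in the console, you can double-check the collected information with the AWS CLI (the endpoint service name below is a placeholder):

```shell
# Sketch only: confirm the Private Endpoint Service exists and list the
# availability zones it serves. The service name is a placeholder.
aws ec2 describe-vpc-endpoint-services \
    --service-names "com.amazonaws.vpce.us-west-2.vpce-svc-0123456789abcdef0" \
    --query "ServiceDetails[0].AvailabilityZones"
```

The reported availability zones should match the AZs you record for your downstream deployment.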
+ + + +
+ +If your changefeed downstream service is hosted on Alibaba Cloud, collect the following information: + +- The name of the Private Endpoint Service for your downstream service +- The availability zones (AZs) where your downstream service is deployed + +
+
+ +
+ +## Step 1. Open the Networking page for your instance + +1. Log in to the [TiDB Cloud console](https://tidbcloud.com/). + +2. On the [**TiDB Instances**](https://tidbcloud.com/tidbs) page, click the name of your target instance to go to its overview page. + + > **Tip:** + > + > You can use the combo box in the upper-left corner to switch between organizations and instances. + +3. In the left navigation pane, click **Settings** > **Networking**. + +## Step 2. Configure the private endpoint for changefeeds + +The configuration steps vary depending on the cloud provider where your instance is deployed. + + +
+ +1. On the **Networking** page, click **Create Private Endpoint** in the **AWS Private Endpoint for Changefeed** section. +2. In the **Create Private Endpoint for Changefeed** dialog, enter a name for the private endpoint. +3. Follow the reminder to authorize the [AWS Principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-accounts) of TiDB Cloud to create an endpoint. +4. Enter the **Endpoint Service Name** that you collected in the [Network](#network) section. +5. Select the **Number of AZs**. Ensure that the number of AZs and the AZ IDs match your Kafka deployment. +6. If this private endpoint is created for Apache Kafka, enable the **Advertised Listener for Kafka** option. +7. Configure the advertised listener for Kafka using either the **TiDB Managed** domain or the **Custom** domain. + + - To use the **TiDB Managed** domain for advertised listeners, enter a unique string in the **Domain Pattern** field, and then click **Generate**. TiDB will generate broker addresses with subdomains for each availability zone. + - To use your own **Custom** domain for advertised listeners, switch the domain type to **Custom**, enter the root domain in the **Custom Domain** field, click **Check**, and then specify the broker subdomains for each availability zone. + +8. Click **Create** to validate the configurations and create the private endpoint. + +
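To make step 7 above concrete, the sketch below shows the general shape of per-AZ broker addresses derived from a domain pattern. The `b<N>.<az>` naming scheme here is a hypothetical example — the console displays the actual generated addresses after you click **Generate**:

```shell
advertised_listeners() {
    # $1: generated domain pattern; $2: broker port; remaining args: AZ IDs.
    # Hypothetical naming: one broker subdomain per availability zone.
    pattern=$1; port=$2; shift 2
    i=1
    for az in "$@"; do
        echo "b${i}.${az}.${pattern}:${port}"
        i=$(( i + 1 ))
    done
}

advertised_listeners "abc123.cfd.example.internal" 9092 usw2-az1 usw2-az2
```

Whatever the exact scheme, each availability zone gets its own broker subdomain so that clients are routed to a broker in the matching zone.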
+ + + +
+ +1. On the **Networking** page, click **Create Private Endpoint** in the **Alibaba Cloud Private Endpoint for Changefeed** section. +2. In the **Create Private Endpoint for Changefeed** dialog, enter a name for the private endpoint. +3. Follow the reminder to whitelist TiDB Cloud's Alibaba Cloud account ID for your endpoint service to grant the TiDB Cloud VPC access. + +4. Enter the **Endpoint Service Name** that you collected in the [Network](#network) section. +5. Select the **Number of AZs**. Ensure that the number of AZs and the AZ IDs match your Kafka deployment. +6. If this private endpoint is created for Apache Kafka, enable the **Advertised Listener for Kafka** option. +7. Configure the advertised listener for Kafka using either the **TiDB Managed** domain or the **Custom** domain. + + - To use the **TiDB Managed** domain for advertised listeners, enter a unique string in the **Domain Pattern** field, and then click **Generate**. TiDB will generate broker addresses with subdomains for each availability zone. + - To use your own **Custom** domain for advertised listeners, switch the domain type to **Custom**, enter the root domain in the **Custom Domain** field, click **Check**, and then specify the broker subdomains for each availability zone. + +8. Click **Create** to validate the configurations and create the private endpoint. + +
+
+
diff --git a/tidb-cloud/premium/tidb-cloud-billing-ticdc-ccu.md b/tidb-cloud/premium/tidb-cloud-billing-ticdc-ccu.md new file mode 100644 index 0000000000000..70be0ec5bd7c6 --- /dev/null +++ b/tidb-cloud/premium/tidb-cloud-billing-ticdc-ccu.md @@ -0,0 +1,47 @@ +--- +title: Changefeed Billing for {{{ .premium }}} +summary: Learn about billing for changefeeds in {{{ .premium }}}. +--- + +# Changefeed Billing for {{{ .premium }}} + +This document describes the billing details for changefeeds in {{{ .premium }}}. + +## CCU cost + +{{{ .premium }}} measures the capacity of [changefeeds](/tidb-cloud/changefeed-overview.md) in TiCDC Changefeed Capacity Units (CCUs). When you [create a changefeed](/tidb-cloud/changefeed-overview.md#create-a-changefeed) for an instance, you can select an appropriate specification. The higher the CCU, the better the replication performance. You will be charged for these TiCDC CCUs. + +### Number of TiCDC CCUs + +The following table lists the specifications and corresponding replication performances for changefeeds: + +| Specification | Maximum replication performance | +|---------------|---------------------------------| +| 2 CCUs | 5,000 rows/s | +| 4 CCUs | 10,000 rows/s | +| 8 CCUs | 20,000 rows/s | +| 16 CCUs | 40,000 rows/s | +| 24 CCUs | 60,000 rows/s | +| 32 CCUs | 80,000 rows/s | +| 40 CCUs | 100,000 rows/s | +| 64 CCUs | 160,000 rows/s | +| 96 CCUs | 240,000 rows/s | +| 128 CCUs | 320,000 rows/s | +| 192 CCUs | 480,000 rows/s | +| 256 CCUs | 640,000 rows/s | +| 320 CCUs | 800,000 rows/s | +| 384 CCUs | 960,000 rows/s | + +> **Note:** +> +> The preceding performance data is for reference only and might vary in different scenarios. It is strongly recommended that you conduct a real workload test before using the changefeed feature in a production environment. For further assistance, contact [TiDB Cloud support](/tidb-cloud/tidb-cloud-support.md). + +### Price + +Currently, {{{ .premium }}} is in private preview. 
You can [contact our sales](https://www.pingcap.com/contact-us/) for pricing details. + +## Private Data Link cost + +If you choose the **Private Link** or **Private Service Connect** network connectivity method, additional **Private Data Link** costs will be incurred. These charges fall under the [Data Transfer Cost](https://www.pingcap.com/tidb-dedicated-pricing-details/#data-transfer-cost) category. + +The price of **Private Data Link** is **$0.01/GiB**, the same as **Data Processed** of [AWS Interface Endpoint pricing](https://aws.amazon.com/privatelink/pricing/#Interface_Endpoint_pricing), **Consumer data processing** of [Google Cloud Private Service Connect pricing](https://cloud.google.com/vpc/pricing#psc-forwarding-rules), and **Inbound/Outbound Data Processed** of [Azure Private Link pricing](https://azure.microsoft.com/en-us/pricing/details/private-link/). diff --git a/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md b/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md index e183587662098..9221c6c4036fb 100644 --- a/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md +++ b/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md @@ -23,7 +23,9 @@ The document provides an example of connecting to a Kafka Private Link service d ## Prerequisites -1. Ensure that you have the following authorization to set up a Kafka Private Link service in your own AWS account. + + +1. Ensure that you have the following authorization to set up a Kafka Private Link service in your own AWS account. - Manage EC2 nodes - Manage VPC @@ -48,6 +50,31 @@ The document provides an example of connecting to a Kafka Private Link service d 1. Input a unique random string. It can only include numbers or lowercase letters. You will use it to generate **Kafka Advertised Listener Pattern** later. 2. 
Click **Check usage and generate** to check if the random string is unique and generate **Kafka Advertised Listener Pattern** that will be used to assemble the EXTERNAL advertised listener for Kafka brokers. + + + +1. Ensure that you have the following authorization to set up a Kafka Private Link service in your own AWS account. + + - Manage EC2 nodes + - Manage VPC + - Manage subnets + - Manage security groups + - Manage load balancer + - Manage endpoint services + - Connect to EC2 nodes to configure Kafka nodes + +2. [Create a {{{ .premium }}} instance](/tidb-cloud/premium/create-tidb-instance-premium.md) if you do not have one. + +3. Get the Kafka deployment information from your {{{ .premium }}} instance. + + 1. In the [TiDB Cloud console](https://tidbcloud.com), navigate to the instance overview page of the TiDB instance, and then click **Data** > **Changefeed** in the left navigation pane. + 2. On the overview page, find the region of the TiDB instance. Ensure that your Kafka cluster will be deployed to the same region. + 3. To create a changefeed, refer to the tutorials: + + - [Sink to Apache Kafka](/tidb-cloud/changefeed-sink-to-apache-kafka.md) + + + Note down all the deployment information. You need to use it to configure your Kafka Private Link service later. The following table shows an example of the deployment information. @@ -523,8 +550,17 @@ LOG_DIR=$KAFKA_LOG_DIR nohup $KAFKA_START_CMD "$KAFKA_CONFIG_DIR/server.properti ### Reconfigure a running Kafka cluster + + Ensure that your Kafka cluster is deployed in the same region and AZs as the TiDB cluster. If any brokers are in different AZs, move them to the correct ones. + + + +Ensure that your Kafka cluster is deployed in the same region and AZs as the TiDB instance. If any brokers are in different AZs, move them to the correct ones. + + + #### 1. Configure the EXTERNAL listener for brokers The following configuration applies to a Kafka KRaft cluster. The ZK mode configuration is similar. 
@@ -729,7 +765,7 @@ Do the following to set up the load balancer: ## Step 3. Connect from TiDB Cloud -1. Return to the [TiDB Cloud console](https://tidbcloud.com) to create a changefeed for the cluster to connect to the Kafka cluster by **Private Link**. For more information, see [Sink to Apache Kafka](/tidb-cloud/changefeed-sink-to-apache-kafka.md). +1. Return to the [TiDB Cloud console](https://tidbcloud.com) to create a changefeed for the cluster or instance to connect to the Kafka cluster by **Private Link**. For more information, see [Sink to Apache Kafka](/tidb-cloud/changefeed-sink-to-apache-kafka.md). 2. When you proceed to **Configure the changefeed target > Connectivity Method > Private Link**, fill in the following fields with corresponding values and other fields as needed. diff --git a/tidb-cloud/tidb-cloud-billing-ticdc-rcu.md b/tidb-cloud/tidb-cloud-billing-ticdc-rcu.md index 270d6aada47ed..01a4a40c4b8ea 100644 --- a/tidb-cloud/tidb-cloud-billing-ticdc-rcu.md +++ b/tidb-cloud/tidb-cloud-billing-ticdc-rcu.md @@ -1,14 +1,16 @@ --- -title: Changefeed Billing +title: Changefeed Billing for TiDB Cloud Dedicated summary: Learn about billing for changefeeds in TiDB Cloud. aliases: ['/tidbcloud/tidb-cloud-billing-tcu'] --- -# Changefeed Billing +# Changefeed Billing for TiDB Cloud Dedicated + +This document describes the billing details for changefeeds in TiDB Cloud Dedicated. ## RCU cost -TiDB Cloud measures the capacity of [changefeeds](/tidb-cloud/changefeed-overview.md) in TiCDC Replication Capacity Units (RCUs). When you [create a changefeed](/tidb-cloud/changefeed-overview.md#create-a-changefeed) for a cluster, you can select an appropriate specification. The higher the RCU, the better the replication performance. You will be charged for these TiCDC changefeed RCUs. +TiDB Cloud Dedicated measures the capacity of [changefeeds](/tidb-cloud/changefeed-overview.md) in TiCDC Replication Capacity Units (RCUs). 
When you [create a changefeed](/tidb-cloud/changefeed-overview.md#create-a-changefeed) for a cluster, you can select an appropriate specification. The higher the RCU, the better the replication performance. You will be charged for these TiCDC changefeed RCUs. ### Number of TiCDC RCUs