diff --git a/TOC-tidb-cloud.md b/TOC-tidb-cloud.md
index 1c02cb2cfb5d5..5733a733d5795 100644
--- a/TOC-tidb-cloud.md
+++ b/TOC-tidb-cloud.md
@@ -309,6 +309,7 @@
- [Set Up Self-Hosted Kafka Private Link Service in AWS](/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md)
- [Set Up Self-Hosted Kafka Private Link Service in Azure](/tidb-cloud/setup-azure-self-hosted-kafka-private-link-service.md)
- [Set Up Self-Hosted Kafka Private Service Connect in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-private-service-connect.md)
+ - [Set Up Private Endpoint for Changefeeds](/tidb-cloud/set-up-sink-private-endpoint.md)
- Disaster Recovery
- [Recovery Group Overview](/tidb-cloud/recovery-group-overview.md)
- [Get Started](/tidb-cloud/recovery-group-get-started.md)
diff --git a/tidb-cloud/changefeed-sink-to-apache-kafka.md b/tidb-cloud/changefeed-sink-to-apache-kafka.md
index 6dd993e00e521..0bdb4b2bb64ac 100644
--- a/tidb-cloud/changefeed-sink-to-apache-kafka.md
+++ b/tidb-cloud/changefeed-sink-to-apache-kafka.md
@@ -52,23 +52,9 @@ Private Connect leverages **Private Link** or **Private Service Connect** techno
TiDB Cloud currently supports Private Connect only for self-hosted Kafka. It does not support direct integration with MSK, Confluent Kafka, or other Kafka SaaS services. To connect to these Kafka SaaS services via Private Connect, you can deploy a [kafka-proxy](https://github.com/grepplabs/kafka-proxy) as an intermediary, effectively exposing the Kafka service as self-hosted Kafka. For a detailed example, see [Set Up Self-Hosted Kafka Private Service Connect by Kafka-proxy in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-private-service-connect.md#set-up-self-hosted-kafka-private-service-connect-by-kafka-proxy). This setup is similar across all Kafka SaaS services.
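+
+The kafka-proxy approach above can be sketched as follows. This is a minimal, hypothetical invocation (flag names are from the kafka-proxy project; the broker addresses are placeholders for your Kafka SaaS bootstrap servers):
+
+```shell
+# Front each remote Kafka broker with a local listener so that the proxy
+# can be exposed behind Private Link or Private Service Connect as if it
+# were self-hosted Kafka.
+# Each --bootstrap-server-mapping is "<remote-broker>,<local-listen-address>".
+kafka-proxy server \
+    --bootstrap-server-mapping "broker1.example.com:9092,0.0.0.0:9092" \
+    --bootstrap-server-mapping "broker2.example.com:9092,0.0.0.0:9093"
+```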
-- If your Apache Kafka service is hosted in AWS, follow [Set Up Self-Hosted Kafka Private Link Service in AWS](/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md) to ensure that the network connection is properly configured. After setup, provide the following information in the TiDB Cloud console to create the changefeed:
-
- - The ID in Kafka Advertised Listener Pattern
- - The Endpoint Service Name
- - The Bootstrap Ports
-
-- If your Apache Kafka service is hosted in Google Cloud, follow [Set Up Self-Hosted Kafka Private Service Connect in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-private-service-connect.md) to ensure that the network connection is properly configured. After setup, provide the following information in the TiDB Cloud console to create the changefeed:
-
- - The ID in Kafka Advertised Listener Pattern
- - The Service Attachment
- - The Bootstrap Ports
-
-- If your Apache Kafka service is hosted in Azure, follow [Set Up Self-Hosted Kafka Private Link Service in Azure](/tidb-cloud/setup-azure-self-hosted-kafka-private-link-service.md) to ensure that the network connection is properly configured. After setup, provide the following information in the TiDB Cloud console to create the changefeed:
-
- - The ID in Kafka Advertised Listener Pattern
- - The Alias of Private Link Service
- - The Bootstrap Ports
+- If your Apache Kafka service is hosted on AWS, follow [Set Up Self-Hosted Kafka Private Link Service in AWS](/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md) to configure the network connection and obtain the **Bootstrap Ports** information, and then follow [Set Up Private Endpoint for Changefeeds](/tidb-cloud/set-up-sink-private-endpoint.md) to create a private endpoint.
+- If your Apache Kafka service is hosted on Google Cloud, follow [Set Up Self-Hosted Kafka Private Service Connect in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-private-service-connect.md) to configure the network connection and obtain the **Bootstrap Ports** information, and then follow [Set Up Private Endpoint for Changefeeds](/tidb-cloud/set-up-sink-private-endpoint.md) to create a private endpoint.
+- If your Apache Kafka service is hosted on Azure, follow [Set Up Self-Hosted Kafka Private Link Service in Azure](/tidb-cloud/setup-azure-self-hosted-kafka-private-link-service.md) to configure the network connection and obtain the **Bootstrap Ports** information, and then follow [Set Up Private Endpoint for Changefeeds](/tidb-cloud/set-up-sink-private-endpoint.md) to create a private endpoint.
@@ -139,63 +125,55 @@ The steps vary depending on the connectivity method you select.
1. In **Connectivity Method**, select **Private Link**.
-2. Authorize the [AWS Principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-accounts) of TiDB Cloud to create an endpoint for your endpoint service. The AWS Principal is provided in the tip on the web page.
-3. Make sure you select the same **Number of AZs** and **AZ IDs of Kafka Deployment**, and fill the same unique ID in **Kafka Advertised Listener Pattern** when you [Set Up Self-Hosted Kafka Private Link Service in AWS](/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md) in the **Network** section.
-4. Fill in the **Endpoint Service Name** which is configured in [Set Up Self-Hosted Kafka Private Link Service in AWS](/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md).
-5. Fill in the **Bootstrap Ports**. It is recommended that you set at least one port for one AZ. You can use commas `,` to separate multiple ports.
-6. Select an **Authentication** option according to your Kafka authentication configuration.
+2. In **Private Endpoint**, select the private endpoint that you created in the [Network](#network) section. Make sure the AZs of the private endpoint match the AZs of the Kafka deployment.
+3. Fill in the **Bootstrap Ports** that you obtained from the [Network](#network) section. It is recommended that you set at least one port for each AZ. You can use commas `,` to separate multiple ports.
+4. Select an **Authentication** option according to your Kafka authentication configuration.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-
-7. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
-8. Select a **Compression** type for the data in this changefeed.
-9. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
-10. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
-11. TiDB Cloud creates the endpoint for **Private Link**, which might take several minutes.
-12. Once the endpoint is created, log in to your cloud provider console and accept the connection request.
-13. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds.
+5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
+6. Select a **Compression** type for the data in this changefeed.
+7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
+8. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
+9. TiDB Cloud creates the endpoint for **Private Link**, which might take several minutes.
+10. Once the endpoint is created, log in to your cloud provider console and accept the connection request.
+11. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds.
1. In **Connectivity Method**, select **Private Service Connect**.
-2. Ensure that you fill in the same unique ID in **Kafka Advertised Listener Pattern** when you [Set Up Self-Hosted Kafka Private Service Connect in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-private-service-connect.md) in the **Network** section.
-3. Fill in the **Service Attachment** that you have configured in [Setup Self Hosted Kafka Private Service Connect in Google Cloud](/tidb-cloud/setup-self-hosted-kafka-private-service-connect.md)
-4. Fill in the **Bootstrap Ports**. It is recommended that you provide more than one port. You can use commas `,` to separate multiple ports.
-5. Select an **Authentication** option according to your Kafka authentication configuration.
+2. In **Private Endpoint**, select the private endpoint that you created in the [Network](#network) section.
+3. Fill in the **Bootstrap Ports** that you obtained from the [Network](#network) section. It is recommended that you provide more than one port. You can use commas `,` to separate multiple ports.
+4. Select an **Authentication** option according to your Kafka authentication configuration.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-
-6. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
-7. Select a **Compression** type for the data in this changefeed.
-8. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
-9. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
-10. TiDB Cloud creates the endpoint for **Private Service Connect**, which might take several minutes.
-11. Once the endpoint is created, log in to your cloud provider console and accept the connection request.
-12. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds.
+5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
+6. Select a **Compression** type for the data in this changefeed.
+7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
+8. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
+9. TiDB Cloud creates the endpoint for **Private Service Connect**, which might take several minutes.
+10. Once the endpoint is created, log in to your cloud provider console and accept the connection request.
+11. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds.
1. In **Connectivity Method**, select **Private Link**.
-2. Authorize the Azure subscription of TiDB Cloud or allow anyone with your alias to access your Private Link service before creating the changefeed. The Azure subscription is provided in the **Reminders before proceeding** tip on the web page. For more information about the visibility of Private Link service, see [Control service exposure](https://learn.microsoft.com/en-us/azure/private-link/private-link-service-overview#control-service-exposure) in Azure documentation.
-3. Make sure you fill in the same unique ID in **Kafka Advertised Listener Pattern** when you [Set Up Self-Hosted Kafka Private Link Service in Azure](/tidb-cloud/setup-azure-self-hosted-kafka-private-link-service.md) in the **Network** section.
-4. Fill in the **Alias of Private Link Service** which is configured in [Set Up Self-Hosted Kafka Private Link Service in Azure](/tidb-cloud/setup-azure-self-hosted-kafka-private-link-service.md).
-5. Fill in the **Bootstrap Ports**. It is recommended that you set at least one port for one AZ. You can use commas `,` to separate multiple ports.
-6. Select an **Authentication** option according to your Kafka authentication configuration.
+2. In **Private Endpoint**, select the private endpoint that you created in the [Network](#network) section.
+3. Fill in the **Bootstrap Ports** that you obtained from the [Network](#network) section. It is recommended that you set at least one port for each AZ. You can use commas `,` to separate multiple ports.
+4. Select an **Authentication** option according to your Kafka authentication configuration.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-
-7. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
-8. Select a **Compression** type for the data in this changefeed.
-9. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
-10. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
-11. TiDB Cloud creates the endpoint for **Private Link**, which might take several minutes.
-12. Once the endpoint is created, log in to the [Azure portal](https://portal.azure.com/) and accept the connection request.
-13. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds.
+5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
+6. Select a **Compression** type for the data in this changefeed.
+7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
+8. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
+9. TiDB Cloud creates the endpoint for **Private Link**, which might take several minutes.
+10. Once the endpoint is created, log in to the [Azure portal](https://portal.azure.com/) and accept the connection request.
+11. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds.
diff --git a/tidb-cloud/changefeed-sink-to-mysql.md b/tidb-cloud/changefeed-sink-to-mysql.md
index cc941a6cb8c8b..136fed436889e 100644
--- a/tidb-cloud/changefeed-sink-to-mysql.md
+++ b/tidb-cloud/changefeed-sink-to-mysql.md
@@ -28,7 +28,10 @@ Before creating a changefeed, you need to complete the following prerequisites:
### Network
-Make sure that your TiDB Cluster can connect to the MySQL service.
+Make sure that your TiDB Cloud cluster can connect to the MySQL service.
+
+
+
If your MySQL service is in an AWS VPC that has no public internet access, take the following steps:
@@ -48,7 +51,19 @@ If your MySQL service is in a Google Cloud VPC that has no public internet acces
2. [Set up a VPC peering connection](/tidb-cloud/set-up-vpc-peering-connections.md) between the VPC of the MySQL service and your TiDB cluster.
3. Modify the ingress firewall rules of the VPC where MySQL is located.
- You must add [the CIDR of the region where your TiDB Cloud cluster is located](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-cidr-for-a-region) to the ingress firewall rules. Doing so allows the traffic to flow from your TiDB Cluster to the MySQL endpoint.
+ You must add [the CIDR of the region where your TiDB Cloud cluster is located](/tidb-cloud/set-up-vpc-peering-connections.md#prerequisite-set-a-cidr-for-a-region) to the ingress firewall rules. Doing so allows the traffic to flow from your TiDB Cloud cluster to the MySQL endpoint.
+
+
+
+
+
+Private endpoints leverage **Private Link** or **Private Service Connect** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
+
+You can connect your TiDB Cloud cluster to your MySQL service securely through a private endpoint. If the private endpoint is not available for your MySQL service, follow [Set Up Private Endpoint for Changefeeds](/tidb-cloud/set-up-sink-private-endpoint.md) to create one.
+
+
+
+
### Load existing data (optional)
@@ -95,21 +110,26 @@ After completing the prerequisites, you can sink your data to MySQL.
2. Click **Create Changefeed**, and select **MySQL** as **Destination**.
-3. Fill in the MySQL endpoints, user name, and password in **MySQL Connection**.
+3. In **Connectivity Method**, choose the method to connect to your MySQL service.
+
+ - If you choose **VPC Peering** or **Public IP**, fill in your MySQL endpoint.
+    - If you choose **Private Link**, select the private endpoint that you created in the [Network](#network) section, and then fill in the port of your MySQL service.
+
+4. In **Authentication**, fill in the MySQL user name and password of your MySQL service.
-4. Click **Next** to test whether TiDB can connect to MySQL successfully:
+5. Click **Next** to test whether TiDB can connect to MySQL successfully:
- If yes, you are directed to the next step of configuration.
- If not, a connectivity error is displayed, and you need to handle the error. After the error is resolved, click **Next** again.
-5. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](/table-filter.md).
+6. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](/table-filter.md).
- **Case Sensitive**: you can set whether the matching of database and table names in filter rules is case-sensitive. By default, matching is case-insensitive.
- **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules in the box on the right. You can add up to 100 filter rules.
- **Tables with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
- **Tables without valid keys**: this column shows tables that lack primary keys or unique keys. These tables present a challenge during replication because the absence of a unique identifier can result in inconsistent data when the downstream handles duplicate events. To ensure data consistency, it is recommended to add unique keys or primary keys to these tables before initiating the replication. Alternatively, you can add filter rules to exclude these tables. For example, you can exclude the table `test.tbl1` by using the rule `"!test.tbl1"`.
-6. Customize **Event Filter** to filter the events that you want to replicate.
+7. Customize **Event Filter** to filter the events that you want to replicate.
- **Tables matching**: you can set which tables the event filter will be applied to in this column. The rule syntax is the same as that used for the preceding **Table Filter** area. You can add up to 10 event filter rules per changefeed.
- **Event Filter**: you can use the following event filters to exclude specific events from the changefeed:
@@ -120,28 +140,28 @@ After completing the prerequisites, you can sink your data to MySQL.
- **Ignore update old value expression**: excludes `UPDATE` statements where the old value matches a specified condition. For example, `age < 18` excludes updates where the old value of `age` is less than 18.
- **Ignore delete value expression**: excludes `DELETE` statements that meet a specified condition. For example, `name = 'john'` excludes `DELETE` statements where `name` is `'john'`.
-7. In **Start Replication Position**, configure the starting position for your MySQL sink.
+8. In **Start Replication Position**, configure the starting position for your MySQL sink.
- If you have [loaded the existing data](#load-existing-data-optional) using Dumpling, select **Start replication from a specific TSO** and fill in the TSO that you get from Dumpling exported metadata files.
- If you do not have any data in the upstream TiDB cluster, select **Start replication from now on**.
- Otherwise, you can customize the start time point by choosing **Start replication from a specific time**.
-8. Click **Next** to configure your changefeed specification.
+9. Click **Next** to configure your changefeed specification.
- In the **Changefeed Specification** area, specify the number of Replication Capacity Units (RCUs) to be used by the changefeed.
- In the **Changefeed Name** area, specify a name for the changefeed.
-9. Click **Next** to review the changefeed configuration.
+10. Click **Next** to review the changefeed configuration.
If you confirm that all configurations are correct, check the compliance of cross-region replication, and click **Create**.
If you want to modify some configurations, click **Previous** to go back to the previous configuration page.
-10. The sink starts soon, and you can see the status of the sink changes from **Creating** to **Running**.
+11. The sink starts soon, and you can see the status of the sink changes from **Creating** to **Running**.
Click the changefeed name, and you can see more details about the changefeed, such as the checkpoint, replication latency, and other metrics.
-11. If you have [loaded the existing data](#load-existing-data-optional) using Dumpling, you need to restore the GC time to its original value (the default value is `10m`) after the sink is created:
+12. If you have [loaded the existing data](#load-existing-data-optional) using Dumpling, you need to restore the GC time to its original value (the default value is `10m`) after the sink is created:
{{< copyable "sql" >}}
diff --git a/tidb-cloud/set-up-sink-private-endpoint.md b/tidb-cloud/set-up-sink-private-endpoint.md
new file mode 100644
index 0000000000000..b70b45da96ce0
--- /dev/null
+++ b/tidb-cloud/set-up-sink-private-endpoint.md
@@ -0,0 +1,127 @@
+---
+title: Set Up Private Endpoint for Changefeeds
+summary: Learn how to set up a private endpoint for changefeeds.
+---
+
+# Set Up Private Endpoint for Changefeeds
+
+This document describes how to create a private endpoint for changefeeds in your [TiDB Cloud Dedicated](/tidb-cloud/select-cluster-tier.md#tidb-cloud-dedicated) clusters, enabling you to securely stream data to self-hosted Kafka or MySQL through private connectivity.
+
+## Restrictions
+
+Within the same VPC, each Private Endpoint Service in AWS, Service Attachment in Google Cloud, or Private Link Service in Azure can have up to 5 private endpoints. If you reach this limit, remove unused private endpoints before creating new ones.
+
+## Prerequisites
+
+- Check permissions for private endpoint creation
+- Set up your network connection
+
+### Permissions
+
+Only users with any of the following roles in your organization can create private endpoints for changefeeds:
+
+- `Organization Owner`
+- `Project Owner`
+- `Project Data Access Read-Write`
+
+For more information about roles in TiDB Cloud, see [User roles](/tidb-cloud/manage-user-access.md#user-roles).
+
+### Network
+
+Private endpoints leverage **Private Link** or **Private Service Connect** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
+
+
+
+
+If your changefeed downstream service is hosted on AWS, collect the following information:
+
+- The name of the Private Endpoint Service for your downstream service
+- The availability zones (AZs) where your downstream service is deployed
+
+If the Private Endpoint Service is not available for your downstream service, follow [Step 2. Expose the Kafka cluster as Private Link Service](/tidb-cloud/setup-aws-self-hosted-kafka-private-link-service.md#step-2-expose-the-kafka-cluster-as-private-link-service) to set up the load balancer and the Private Link Service.
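+
+If you have already created the Private Link Service, a hypothetical way to look up its service name with the AWS CLI (assuming your credentials and region are configured) is:
+
+```shell
+# List the endpoint service configurations in the current region and print
+# their service names, such as com.amazonaws.vpce.<region>.vpce-svc-xxxx.
+aws ec2 describe-vpc-endpoint-service-configurations \
+    --query "ServiceConfigurations[].ServiceName"
+```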
+
+
+
+
+
+If your changefeed downstream service is hosted on Google Cloud, collect the Service Attachment information of your downstream service.
+
+If the Service Attachment is not available for your downstream service, follow [Step 2. Expose Kafka-proxy as Private Service Connect Service](/tidb-cloud/setup-self-hosted-kafka-private-service-connect.md#step-2-expose-kafka-proxy-as-private-service-connect-service) to get the Service Attachment information.
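+
+If you have already created the Service Attachment, a hypothetical way to look it up with the gcloud CLI (the region is a placeholder) is:
+
+```shell
+# List service attachments in the region where your downstream service is
+# deployed. Collect the attachment that exposes your Kafka-proxy service.
+gcloud compute service-attachments list --regions=us-west1
+```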
+
+
+
+
+
+If your changefeed downstream service is hosted on Azure, collect the alias of the Private Link Service of your downstream service.
+
+If the Private Link Service is not available for your downstream service, follow [Step 2. Expose the Kafka cluster as Private Link Service](/tidb-cloud/setup-azure-self-hosted-kafka-private-link-service.md#step-2-expose-the-kafka-cluster-as-private-link-service) to set up the load balancer and the Private Link Service.
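+
+If you have already created the Private Link Service, a hypothetical way to retrieve its alias with the Azure CLI (the resource names are placeholders) is:
+
+```shell
+# Show the alias of an existing Private Link service.
+az network private-link-service show \
+    --resource-group my-resource-group \
+    --name my-private-link-service \
+    --query alias --output tsv
+```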
+
+
+
+
+## Step 1. Open the Networking page for your cluster
+
+1. Log in to the [TiDB Cloud console](https://tidbcloud.com/).
+
+2. On the [**Clusters**](https://tidbcloud.com/project/clusters) page, click the name of your target cluster to go to its overview page.
+
+ > **Tip:**
+ >
+ > You can use the combo box in the upper-left corner to switch between organizations, projects, and clusters.
+
+3. In the left navigation pane, click **Settings** > **Networking**.
+
+## Step 2. Configure the private endpoint for changefeeds
+
+The configuration steps vary depending on the cloud provider where your cluster is deployed.
+
+
+
+
+1. On the **Networking** page, click **Create Private Endpoint** in the **AWS Private Endpoint for Changefeed** section.
+2. In the **Create Private Endpoint for Changefeed** dialog, enter a name for the private endpoint.
+3. Follow the reminder to authorize the [AWS Principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-accounts) of TiDB Cloud to create an endpoint.
+4. Enter the **Endpoint Service Name** that you collected in the [Network](#network) section.
+5. Select the **Number of AZs**. Ensure that the number of AZs and the AZ IDs match your Kafka deployment.
+6. If this private endpoint is created for Apache Kafka, enable the **Advertised Listener for Kafka** option.
+7. Configure the advertised listener for Kafka using either the **TiDB Managed** domain or the **Custom** domain.
+
+    - To use the **TiDB Managed** domain for advertised listeners, enter a unique string in the **Domain Pattern** field, and then click **Generate**. TiDB Cloud will generate broker addresses with subdomains for each availability zone.
+ - To use your own **Custom** domain for advertised listeners, switch the domain type to **Custom**, enter the root domain in the **Custom Domain** field, click **Check**, and then specify the broker subdomains for each availability zone.
+
+8. Click **Create** to validate the configurations and create the private endpoint.
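+
+For reference, with a hypothetical **TiDB Managed** domain pattern such as `abc`, the generated broker addresses typically embed the AZ ID, for example `b1.usw2-az1.abc.clusters.tidb-cloud.com` for broker 1 in `usw2-az1`. Your Kafka brokers must advertise these addresses, for example in `server.properties` (the domain and port below are placeholders):
+
+```properties
+# Hypothetical advertised listener for broker 1 deployed in usw2-az1.
+# The domain must match the pattern generated or checked in the console.
+advertised.listeners=EXTERNAL://b1.usw2-az1.abc.clusters.tidb-cloud.com:9093
+```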
+
+
+
+
+
+1. On the **Networking** page, click **Create Private Endpoint** in the **Google Cloud Private Endpoint for Changefeed** section.
+2. In the **Create Private Endpoint for Changefeed** dialog, enter a name for the private endpoint.
+3. Follow the reminder to authorize the [Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects) of TiDB Cloud to pre-approve endpoint creation, or manually approve the endpoint connection request when you receive it.
+4. Enter the **Service Attachment** that you collected in the [Network](#network) section.
+5. If this private endpoint is created for Apache Kafka, enable the **Advertised Listener for Kafka** option.
+6. Configure the advertised listener for Kafka using either the **TiDB Managed** domain or the **Custom** domain.
+
+    - To use the **TiDB Managed** domain for advertised listeners, enter a unique string in the **Domain Pattern** field, and then click **Generate**. TiDB Cloud will generate broker addresses with subdomains for each availability zone.
+ - To use your own **Custom** domain for advertised listeners, switch the domain type to **Custom**, enter the root domain in the **Custom Domain** field, click **Check**, and then specify the broker subdomains for each availability zone.
+
+7. Click **Create** to validate the configurations and create the private endpoint.
+
+
+
+
+
+1. On the **Networking** page, click **Create Private Endpoint** in the **Azure Private Endpoint for Changefeed** section.
+2. In the **Create Private Endpoint for Changefeed** dialog, enter a name for the private endpoint.
+3. Follow the reminder to authorize the Azure subscription of TiDB Cloud or allow anyone with your alias to access your Private Link service before creating the private endpoint. For more information about Private Link service visibility, see [Control service exposure](https://learn.microsoft.com/en-us/azure/private-link/private-link-service-overview#control-service-exposure) in Azure documentation.
+4. Enter the **Alias of Private Link Service** that you collected in the [Network](#network) section.
+5. If this private endpoint is created for Apache Kafka, enable the **Advertised Listener for Kafka** option.
+6. Configure the advertised listener for Kafka using either the **TiDB Managed** domain or the **Custom** domain.
+
+    - To use the **TiDB Managed** domain for advertised listeners, enter a unique string in the **Domain Pattern** field, and then click **Generate**. TiDB Cloud will generate broker addresses with subdomains for each availability zone.
+ - To use your own **Custom** domain for advertised listeners, switch the domain type to **Custom**, enter the root domain in the **Custom Domain** field, click **Check**, and then specify the broker subdomains for each availability zone.
+
+7. Click **Create** to validate the configurations and create the private endpoint.
+
+
+