From 106e1960d2dacd45b2da3b5c6fd7d3912ac77fcb Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Mon, 1 Sep 2025 16:57:59 +0200
Subject: [PATCH] restructure
---
.../03_billing/01_billing_overview.md | 262 +----------------
.../03_billing/03_clickpipes_billing.md | 268 ++++++++++++++++++
...thresholds.md => 04_payment-thresholds.md} | 0
...nsfer.mdx => 05_network-data-transfer.mdx} | 0
...compliance.md => 06_billing_compliance.md} | 0
.../data-ingestion/clickpipes/index.md | 2 +-
.../data-ingestion/clickpipes/postgres/faq.md | 2 +-
.../clickpipes/postgres/scaling.md | 2 +-
.../clickstack/ingesting-data/collector.md | 1 +
9 files changed, 273 insertions(+), 264 deletions(-)
create mode 100644 docs/cloud/reference/03_billing/03_clickpipes_billing.md
rename docs/cloud/reference/03_billing/{03_payment-thresholds.md => 04_payment-thresholds.md} (100%)
rename docs/cloud/reference/03_billing/{04_network-data-transfer.mdx => 05_network-data-transfer.mdx} (100%)
rename docs/cloud/reference/03_billing/{05_billing_compliance.md => 06_billing_compliance.md} (100%)
diff --git a/docs/cloud/reference/03_billing/01_billing_overview.md b/docs/cloud/reference/03_billing/01_billing_overview.md
index cdbe6c40355..770cdb4f956 100644
--- a/docs/cloud/reference/03_billing/01_billing_overview.md
+++ b/docs/cloud/reference/03_billing/01_billing_overview.md
@@ -5,8 +5,6 @@ title: 'Pricing'
description: 'Overview page for ClickHouse Cloud pricing'
---
-import ClickPipesFAQ from '../../_snippets/_clickpipes_faq.md'
-
For pricing information, see the [ClickHouse Cloud Pricing](https://clickhouse.com/pricing#pricing-calculator) page.
ClickHouse Cloud bills based on the usage of compute, storage, [data transfer](/cloud/manage/network-data-transfer) (egress over the internet and cross-region), and [ClickPipes](/integrations/clickpipes).
To understand what can affect your bill, and ways that you can manage your spend, keep reading.
@@ -360,262 +358,4 @@ However, combining two services in a warehouse and idling one of them helps you
## ClickPipes pricing {#clickpipes-pricing}
-### ClickPipes for Postgres CDC {#clickpipes-for-postgres-cdc}
-
-This section outlines the pricing model for our Postgres Change Data Capture (CDC)
-connector in ClickPipes. In designing this model, our goal was to keep pricing
-highly competitive while staying true to our core vision:
-
-> Making it seamless and affordable for customers to move data from Postgres
-> to ClickHouse for real-time analytics.
-
-The connector is over **5x more cost-effective** than external
-ETL tools and similar features in other database platforms.
-
-:::note
-Pricing is metered on monthly bills beginning **September 1st, 2025** for all
-customers (both existing and new) using Postgres CDC ClickPipes. Until then,
-usage is free. Customers have a 3-month window starting May 29 (the GA
-announcement) to review and optimize their costs if needed, although we expect
-most will not need to make any changes.
-:::
-
-#### Pricing dimensions {#pricing-dimensions}
-
-There are two main dimensions to pricing:
-
-1. **Ingested Data**: The raw, uncompressed bytes coming from Postgres and
- ingested into ClickHouse.
-2. **Compute**: Compute units are provisioned per service to manage multiple
-   Postgres CDC ClickPipes, and are separate from the compute units used by the
-   ClickHouse Cloud service itself. This additional compute is dedicated
-   specifically to Postgres CDC ClickPipes. It is billed at the service level,
-   not per individual pipe. Each compute unit includes 2 vCPUs and 8 GB of RAM.
-
-#### Ingested data {#ingested-data}
-
-The Postgres CDC connector operates in two main phases:
-
-- **Initial load / resync**: This captures a full snapshot of Postgres tables
- and occurs when a pipe is first created or re-synced.
-- **Continuous Replication (CDC)**: Ongoing replication of changes—such as inserts,
- updates, deletes, and schema changes—from Postgres to ClickHouse.
-
-In most use cases, continuous replication accounts for over 90% of a ClickPipe's
-lifecycle. Because initial loads involve transferring a large volume of data all
-at once, we offer a lower rate for that phase.
-
-| Phase | Cost |
-|----------------------------------|--------------|
-| **Initial load / resync** | $0.10 per GB |
-| **Continuous Replication (CDC)** | $0.20 per GB |
-
-#### Compute {#compute}
-
-This dimension covers the compute units provisioned per service just for Postgres
-ClickPipes. Compute is shared across all Postgres pipes within a service. **It
-is provisioned when the first Postgres pipe is created and deallocated when no
-Postgres CDC pipes remain**. The amount of compute provisioned depends on your
-organization's tier:
-
-| Tier | Cost |
-|------------------------------|-----------------------------------------------|
-| **Basic Tier** | 0.5 compute unit per service — $0.10 per hour |
-| **Scale or Enterprise Tier** | 1 compute unit per service — $0.20 per hour |
-
-#### Example {#example}
-
-Let's say your service is in the Scale tier and has the following setup:
-
-- 2 Postgres ClickPipes running continuous replication
-- Each pipe ingests 500 GB of data changes (CDC) per month
-- When the first pipe is created, the service provisions **1 compute unit under the Scale Tier** for Postgres CDC
-
-##### Monthly cost breakdown {#cost-breakdown}
-
-**Ingested Data (CDC)**:
-
-$$ 2 \text{ pipes} \times 500 \text{ GB} = 1,000 \text{ GB per month} $$
-
-$$ 1,000 \text{ GB} \times \$0.20/\text{GB} = \$200 $$
-
-**Compute**:
-
-$$1 \text{ compute unit} \times \$0.20/\text{hr} \times 730 \text{ hours (approximate month)} = \$146$$
-
-:::note
-Compute is shared across both pipes.
-:::
-
-**Total Monthly Cost**:
-
-$$\$200 \text{ (ingest)} + \$146 \text{ (compute)} = \$346$$
-
-### ClickPipes for streaming and object storage {#clickpipes-for-streaming-object-storage}
-
-This section outlines the pricing model of ClickPipes for streaming and object storage.
-
-#### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
-
-It consists of two dimensions:
-
-- **Compute**: Price **per unit per hour**.
- Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
- It applies to all ClickPipes types.
-- **Ingested data**: Price **per GB**.
- The ingested data rate applies to all streaming ClickPipes
- (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs)
- for the data transferred via the replica pods. The ingested data size (GB) is charged based on the bytes received from the source, whether compressed or uncompressed.
-
-#### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
-
-ClickPipes ingests data from remote data sources via a dedicated infrastructure
-that runs and scales independently of the ClickHouse Cloud service.
-For this reason, it uses dedicated compute replicas.
-
-#### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
-
-Each ClickPipe defaults to 1 replica that is provided with 512 MiB of RAM and 0.125 vCPU (XS).
-This corresponds to **0.0625** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
-
-#### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
-
-- Compute: \$0.20 per unit per hour (\$0.0125 per replica per hour for the default replica size)
-- Ingested data: \$0.04 per GB
-
-The price for the Compute dimension depends on the **number** and **size** of replica(s) in a ClickPipe. The default replica size can be adjusted using vertical scaling, and each replica size is priced as follows:
-
-| Replica Size | Compute Units | RAM | vCPU | Price per Hour |
-|----------------------------|---------------|---------|--------|----------------|
-| Extra Small (XS) (default) | 0.0625        | 512 MiB | 0.125  | $0.0125        |
-| Small (S) | 0.125 | 1 GiB | 0.25 | $0.025 |
-| Medium (M) | 0.25 | 2 GiB | 0.5 | $0.05 |
-| Large (L) | 0.5 | 4 GiB | 1.0 | $0.10 |
-| Extra Large (XL) | 1.0 | 8 GiB | 2.0 | $0.20 |
-
-#### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
-
-The following examples assume a single M-sized replica, unless explicitly mentioned.
-
-|                               | 100 GB over 24h                            | 1 TB over 24h                                | 10 TB over 24h                                                      |
-|-------------------------------|--------------------------------------------|----------------------------------------------|---------------------------------------------------------------------|
-| Streaming ClickPipe           | (0.25 x 0.20 x 24) + (0.04 x 100) = \$5.20 | (0.25 x 0.20 x 24) + (0.04 x 1000) = \$41.20 | With 4 replicas: (0.25 x 0.20 x 24 x 4) + (0.04 x 10000) = \$404.80 |
-| Object Storage ClickPipe $^1$ | (0.25 x 0.20 x 24) = \$1.20                | (0.25 x 0.20 x 24) = \$1.20                  | (0.25 x 0.20 x 24) = \$1.20                                         |
-
-$^1$ _Only ClickPipes compute for orchestration;
-effective data transfer is assumed by the underlying ClickHouse service._
-
-## ClickPipes pricing FAQ {#clickpipes-pricing-faq}
-
-Below, you will find frequently asked questions about Postgres CDC ClickPipes
-and about streaming and object storage ClickPipes.
-
-### FAQ for Postgres CDC ClickPipes {#faq-postgres-cdc-clickpipe}
-
-<details>
-
-<summary>Is the ingested data measured in pricing based on compressed or uncompressed size?</summary>
-
-The ingested data is measured as _uncompressed data_ coming from Postgres—both
-during the initial load and CDC (via the replication slot). Postgres does not
-compress data during transit by default, and ClickPipe processes the raw,
-uncompressed bytes.
-
-</details>
-
-<details>
-
-<summary>When will Postgres CDC pricing start appearing on my bills?</summary>
-
-Postgres CDC ClickPipes pricing begins appearing on monthly bills starting
-**September 1st, 2025**, for all customers—both existing and new. Until then,
-usage is free. Customers have a **3-month window** starting from **May 29**
-(the GA announcement date) to review and optimize their usage if needed, although
-we expect most won't need to make any changes.
-
-</details>
-
-<details>
-
-<summary>Will I be charged if I pause my pipes?</summary>
-
-No data ingestion charges apply while a pipe is paused, since no data is moved.
-However, compute charges still apply—either 0.5 or 1 compute unit—based on your
-organization's tier. This is a fixed service-level cost and applies across all
-pipes within that service.
-
-</details>
-
-<details>
-
-<summary>How can I estimate my pricing?</summary>
-
-The Overview page in ClickPipes provides metrics for both initial load/resync and
-CDC data volumes. You can estimate your Postgres CDC costs using these metrics
-in conjunction with the ClickPipes pricing.
-
-</details>
-
-<details>
-
-<summary>Can I scale the compute allocated for Postgres CDC in my service?</summary>
-
-By default, compute scaling is not user-configurable. The provisioned resources
-are optimized to handle most customer workloads. If your use case requires more
-or less compute, please open a support ticket so that we can evaluate your
-request.
-
-</details>
-
-<details>
-
-<summary>What is the pricing granularity?</summary>
-
-- **Compute**: Billed per hour. Partial hours are rounded up to the next hour.
-- **Ingested Data**: Measured and billed per gigabyte (GB) of uncompressed data.
-
-</details>
-
-<details>
-
-<summary>Can I use my ClickHouse Cloud credits for Postgres CDC via ClickPipes?</summary>
-
-Yes. ClickPipes pricing is part of the unified ClickHouse Cloud pricing. Any
-platform credits you have will automatically apply to ClickPipes usage as well.
-
-</details>
-
-<details>
-
-<summary>How much additional cost should I expect from Postgres CDC ClickPipes in my existing monthly ClickHouse Cloud spend?</summary>
-
-The cost varies based on your use case, data volume, and organization tier.
-That said, most existing customers see an increase of **0–15%** relative to their
-existing monthly ClickHouse Cloud spend after the trial period. Actual costs may
-vary depending on your workload—some workloads involve high data volumes with
-less processing, while others require more processing with less data.
-
-</details>
-
-### FAQ for streaming and object storage ClickPipes {#faq-streaming-and-object-storage}
-
-<ClickPipesFAQ/>
+For information on ClickPipes billing, see the ["ClickPipes billing"](/cloud/reference/billing/clickpipes) page.
diff --git a/docs/cloud/reference/03_billing/03_clickpipes_billing.md b/docs/cloud/reference/03_billing/03_clickpipes_billing.md
new file mode 100644
index 00000000000..cd8390fd5d2
--- /dev/null
+++ b/docs/cloud/reference/03_billing/03_clickpipes_billing.md
@@ -0,0 +1,268 @@
+---
+sidebar_label: 'ClickPipes'
+slug: /cloud/reference/billing/clickpipes
+title: 'ClickPipes billing'
+description: 'Overview of ClickPipes billing'
+---
+
+import ClickPipesFAQ from '../../_snippets/_clickpipes_faq.md'
+
+## ClickPipes for streaming and object storage {#clickpipes-for-streaming-object-storage}
+
+This section outlines the pricing model of ClickPipes for streaming and object storage.
+
+### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
+
+It consists of two dimensions:
+
+- **Compute**: Price **per unit per hour**.
+ Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
+ It applies to all ClickPipes types.
+- **Ingested data**: Price **per GB**.
+ The ingested data rate applies to all streaming ClickPipes
+ (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs)
+ for the data transferred via the replica pods. The ingested data size (GB) is charged based on the bytes received from the source, whether compressed or uncompressed.
+
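+Taken together, a pipe's monthly bill is approximately the sum of these two
+dimensions (the ingested-data term applies to streaming ClickPipes only):
+
+$$\text{monthly cost} \approx (\text{compute units} \times \text{hourly rate} \times \text{hours run}) + (\text{GB ingested} \times \text{per-GB rate})$$
+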
+### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
+
+ClickPipes ingests data from remote data sources via a dedicated infrastructure
+that runs and scales independently of the ClickHouse Cloud service.
+For this reason, it uses dedicated compute replicas.
+
+### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
+
+Each ClickPipe defaults to 1 replica that is provided with 512 MiB of RAM and 0.125 vCPU (XS).
+This corresponds to **0.0625** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
+
+### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
+
+- Compute: \$0.20 per unit per hour (\$0.0125 per replica per hour for the default replica size)
+- Ingested data: \$0.04 per GB
+
+The price for the Compute dimension depends on the **number** and **size** of replica(s) in a ClickPipe. The default replica size can be adjusted using vertical scaling, and each replica size is priced as follows:
+
+| Replica Size | Compute Units | RAM | vCPU | Price per Hour |
+|----------------------------|---------------|---------|--------|----------------|
+| Extra Small (XS) (default) | 0.0625        | 512 MiB | 0.125  | $0.0125        |
+| Small (S) | 0.125 | 1 GiB | 0.25 | $0.025 |
+| Medium (M) | 0.25 | 2 GiB | 0.5 | $0.05 |
+| Large (L) | 0.5 | 4 GiB | 1.0 | $0.10 |
+| Extra Large (XL) | 1.0 | 8 GiB | 2.0 | $0.20 |
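+
+As a rough, hypothetical sketch (not an official calculator), the hourly compute
+cost follows directly from this table; the function and constants below are
+illustrative, not part of any ClickHouse API:
+
+```python
+# Illustrative only: unit price and replica sizes as listed in the table above.
+COMPUTE_UNIT_PRICE = 0.20  # $ per compute unit per hour
+
+REPLICA_UNITS = {"XS": 0.0625, "S": 0.125, "M": 0.25, "L": 0.5, "XL": 1.0}
+
+def compute_cost_per_hour(size: str, replicas: int = 1) -> float:
+    """Hourly compute cost for a ClickPipe running `replicas` replicas of `size`."""
+    return REPLICA_UNITS[size] * COMPUTE_UNIT_PRICE * replicas
+
+print(compute_cost_per_hour("XS"))    # 0.0125 -- the default replica size
+print(compute_cost_per_hour("M", 4))  # 0.2    -- four M-sized replicas
+```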
+
+### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
+
+The following examples assume a single M-sized replica, unless explicitly mentioned.
+
+|                               | 100 GB over 24h                            | 1 TB over 24h                                | 10 TB over 24h                                                      |
+|-------------------------------|--------------------------------------------|----------------------------------------------|---------------------------------------------------------------------|
+| Streaming ClickPipe           | (0.25 x 0.20 x 24) + (0.04 x 100) = \$5.20 | (0.25 x 0.20 x 24) + (0.04 x 1000) = \$41.20 | With 4 replicas: (0.25 x 0.20 x 24 x 4) + (0.04 x 10000) = \$404.80 |
+| Object Storage ClickPipe $^1$ | (0.25 x 0.20 x 24) = \$1.20                | (0.25 x 0.20 x 24) = \$1.20                  | (0.25 x 0.20 x 24) = \$1.20                                         |
+
+$^1$ _Only ClickPipes compute for orchestration;
+effective data transfer is assumed by the underlying ClickHouse service._
+
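+The figures above can be reproduced with a self-contained, illustrative sketch
+(M-sized replicas; the \$0.04/GB ingest rate applies to the streaming pipe only):
+
+```python
+COMPUTE_UNIT_PRICE = 0.20  # $ per compute unit per hour
+M_REPLICA_UNITS = 0.25     # an M-sized replica is 0.25 compute units
+INGEST_PRICE = 0.04        # $ per GB, streaming ClickPipes only
+
+def streaming_cost_24h(gb_ingested: float, replicas: int = 1) -> float:
+    """24-hour cost of a streaming ClickPipe: compute plus ingested data."""
+    return M_REPLICA_UNITS * COMPUTE_UNIT_PRICE * 24 * replicas + INGEST_PRICE * gb_ingested
+
+print(round(streaming_cost_24h(100), 2))                    # 5.2
+print(round(streaming_cost_24h(1_000), 2))                  # 41.2
+print(round(streaming_cost_24h(10_000, replicas=4), 2))     # 404.8
+print(round(M_REPLICA_UNITS * COMPUTE_UNIT_PRICE * 24, 2))  # 1.2 -- object storage pipe
+```
+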
+## ClickPipes for PostgreSQL CDC {#clickpipes-for-postgresql-cdc}
+
+This section outlines the pricing model for our Postgres Change Data Capture (CDC)
+connector in ClickPipes. In designing this model, our goal was to keep pricing
+highly competitive while staying true to our core vision:
+
+> Making it seamless and affordable for customers to move data from Postgres
+> to ClickHouse for real-time analytics.
+
+The connector is over **5x more cost-effective** than external
+ETL tools and similar features in other database platforms.
+
+:::note
+Pricing is metered on monthly bills beginning **September 1st, 2025** for all
+customers (both existing and new) using Postgres CDC ClickPipes. Until then,
+usage is free. Customers have a 3-month window starting May 29 (the GA
+announcement) to review and optimize their costs if needed, although we expect
+most will not need to make any changes.
+:::
+
+### Pricing dimensions {#pricing-dimensions}
+
+There are two main dimensions to pricing:
+
+1. **Ingested Data**: The raw, uncompressed bytes coming from Postgres and
+ ingested into ClickHouse.
+2. **Compute**: Compute units are provisioned per service to manage multiple
+   Postgres CDC ClickPipes, and are separate from the compute units used by the
+   ClickHouse Cloud service itself. This additional compute is dedicated
+   specifically to Postgres CDC ClickPipes. It is billed at the service level,
+   not per individual pipe. Each compute unit includes 2 vCPUs and 8 GB of RAM.
+
+### Ingested data {#ingested-data}
+
+The Postgres CDC connector operates in two main phases:
+
+- **Initial load / resync**: This captures a full snapshot of Postgres tables
+ and occurs when a pipe is first created or re-synced.
+- **Continuous Replication (CDC)**: Ongoing replication of changes—such as inserts,
+ updates, deletes, and schema changes—from Postgres to ClickHouse.
+
+In most use cases, continuous replication accounts for over 90% of a ClickPipe's
+lifecycle. Because initial loads involve transferring a large volume of data all
+at once, we offer a lower rate for that phase.
+
+| Phase | Cost |
+|----------------------------------|--------------|
+| **Initial load / resync** | $0.10 per GB |
+| **Continuous Replication (CDC)** | $0.20 per GB |
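+
+For example, a 100 GB initial snapshot is billed at \$10, while the same
+100 GB of changes replicated continuously is billed at \$20.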
+
+### Compute {#compute}
+
+This dimension covers the compute units provisioned per service just for Postgres
+ClickPipes. Compute is shared across all Postgres pipes within a service. **It
+is provisioned when the first Postgres pipe is created and deallocated when no
+Postgres CDC pipes remain**. The amount of compute provisioned depends on your
+organization's tier:
+
+| Tier | Cost |
+|------------------------------|-----------------------------------------------|
+| **Basic Tier** | 0.5 compute unit per service — $0.10 per hour |
+| **Scale or Enterprise Tier** | 1 compute unit per service — $0.20 per hour |
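+
+For example, a Basic tier service pays \$0.10 per hour (roughly \$73 for a
+730-hour month) for Postgres CDC compute, regardless of how many Postgres
+pipes it runs.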
+
+### Example {#example}
+
+Let's say your service is in the Scale tier and has the following setup:
+
+- 2 Postgres ClickPipes running continuous replication
+- Each pipe ingests 500 GB of data changes (CDC) per month
+- When the first pipe is created, the service provisions **1 compute unit under the Scale Tier** for Postgres CDC
+
+#### Monthly cost breakdown {#cost-breakdown}
+
+**Ingested Data (CDC)**:
+
+$$ 2 \text{ pipes} \times 500 \text{ GB} = 1,000 \text{ GB per month} $$
+
+$$ 1,000 \text{ GB} \times \$0.20/\text{GB} = \$200 $$
+
+**Compute**:
+
+$$1 \text{ compute unit} \times \$0.20/\text{hr} \times 730 \text{ hours (approximate month)} = \$146$$
+
+:::note
+Compute is shared across both pipes.
+:::
+
+**Total Monthly Cost**:
+
+$$\$200 \text{ (ingest)} + \$146 \text{ (compute)} = \$346$$
+
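+The same arithmetic as a short, illustrative sketch (rates taken from the
+tables above; the variable names are ours, not an official API):
+
+```python
+CDC_PRICE_PER_GB = 0.20   # continuous replication rate, $ per GB
+SCALE_TIER_HOURLY = 0.20  # 1 compute unit per service on Scale/Enterprise
+HOURS_PER_MONTH = 730     # approximate month
+
+ingest = 2 * 500 * CDC_PRICE_PER_GB                # 2 pipes x 500 GB -> $200
+compute = 1 * SCALE_TIER_HOURLY * HOURS_PER_MONTH  # shared across pipes -> $146
+print(round(ingest + compute, 2))                  # 346.0
+```
+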
+## ClickPipes pricing FAQ {#clickpipes-pricing-faq}
+
+Below, you will find frequently asked questions about Postgres CDC ClickPipes
+and about streaming and object storage ClickPipes.
+
+### FAQ for Postgres CDC ClickPipes {#faq-postgres-cdc-clickpipe}
+
+<details>
+
+<summary>Is the ingested data measured in pricing based on compressed or uncompressed size?</summary>
+
+The ingested data is measured as _uncompressed data_ coming from Postgres—both
+during the initial load and CDC (via the replication slot). Postgres does not
+compress data during transit by default, and ClickPipe processes the raw,
+uncompressed bytes.
+
+</details>
+
+<details>
+
+<summary>When will Postgres CDC pricing start appearing on my bills?</summary>
+
+Postgres CDC ClickPipes pricing begins appearing on monthly bills starting
+**September 1st, 2025**, for all customers—both existing and new. Until then,
+usage is free. Customers have a **3-month window** starting from **May 29**
+(the GA announcement date) to review and optimize their usage if needed, although
+we expect most won't need to make any changes.
+
+</details>
+
+<details>
+
+<summary>Will I be charged if I pause my pipes?</summary>
+
+No data ingestion charges apply while a pipe is paused, since no data is moved.
+However, compute charges still apply—either 0.5 or 1 compute unit—based on your
+organization's tier. This is a fixed service-level cost and applies across all
+pipes within that service.
+
+</details>
+
+<details>
+
+<summary>How can I estimate my pricing?</summary>
+
+The Overview page in ClickPipes provides metrics for both initial load/resync and
+CDC data volumes. You can estimate your Postgres CDC costs using these metrics
+in conjunction with the ClickPipes pricing.
+
+</details>
+
+<details>
+
+<summary>Can I scale the compute allocated for Postgres CDC in my service?</summary>
+
+By default, compute scaling is not user-configurable. The provisioned resources
+are optimized to handle most customer workloads. If your use case requires more
+or less compute, please open a support ticket so that we can evaluate your
+request.
+
+</details>
+
+<details>
+
+<summary>What is the pricing granularity?</summary>
+
+- **Compute**: Billed per hour. Partial hours are rounded up to the next hour.
+- **Ingested Data**: Measured and billed per gigabyte (GB) of uncompressed data.
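+
+For example, CDC compute that runs for 100.5 hours in a billing month is
+billed as 101 compute hours.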
+
+</details>
+
+<details>
+
+<summary>Can I use my ClickHouse Cloud credits for Postgres CDC via ClickPipes?</summary>
+
+Yes. ClickPipes pricing is part of the unified ClickHouse Cloud pricing. Any
+platform credits you have will automatically apply to ClickPipes usage as well.
+
+</details>
+
+<details>
+
+<summary>How much additional cost should I expect from Postgres CDC ClickPipes in my existing monthly ClickHouse Cloud spend?</summary>
+
+The cost varies based on your use case, data volume, and organization tier.
+That said, most existing customers see an increase of **0–15%** relative to their
+existing monthly ClickHouse Cloud spend after the trial period. Actual costs may
+vary depending on your workload—some workloads involve high data volumes with
+less processing, while others require more processing with less data.
+
+</details>
+
+### FAQ for streaming and object storage ClickPipes {#faq-streaming-and-object-storage}
+
+<ClickPipesFAQ/>
\ No newline at end of file
diff --git a/docs/cloud/reference/03_billing/03_payment-thresholds.md b/docs/cloud/reference/03_billing/04_payment-thresholds.md
similarity index 100%
rename from docs/cloud/reference/03_billing/03_payment-thresholds.md
rename to docs/cloud/reference/03_billing/04_payment-thresholds.md
diff --git a/docs/cloud/reference/03_billing/04_network-data-transfer.mdx b/docs/cloud/reference/03_billing/05_network-data-transfer.mdx
similarity index 100%
rename from docs/cloud/reference/03_billing/04_network-data-transfer.mdx
rename to docs/cloud/reference/03_billing/05_network-data-transfer.mdx
diff --git a/docs/cloud/reference/03_billing/05_billing_compliance.md b/docs/cloud/reference/03_billing/06_billing_compliance.md
similarity index 100%
rename from docs/cloud/reference/03_billing/05_billing_compliance.md
rename to docs/cloud/reference/03_billing/06_billing_compliance.md
diff --git a/docs/integrations/data-ingestion/clickpipes/index.md b/docs/integrations/data-ingestion/clickpipes/index.md
index 1246c593627..ee9e0e91c2d 100644
--- a/docs/integrations/data-ingestion/clickpipes/index.md
+++ b/docs/integrations/data-ingestion/clickpipes/index.md
@@ -101,7 +101,7 @@ If ClickPipes cannot connect to a data source after 15 min or to a destination a
- **Does using ClickPipes incur an additional cost?**
- ClickPipes is billed on two dimensions: Ingested Data and Compute. The full details of the pricing are available on [this page](/cloud/manage/billing/overview#clickpipes-for-streaming-object-storage). Running ClickPipes might also generate an indirect compute and storage cost on the destination ClickHouse Cloud service similar to any ingest workload.
+ ClickPipes is billed on two dimensions: Ingested Data and Compute. The full details of the pricing are available on [this page](/cloud/reference/billing/clickpipes). Running ClickPipes might also generate an indirect compute and storage cost on the destination ClickHouse Cloud service similar to any ingest workload.
- **Is there a way to handle errors or failures when using ClickPipes for Kafka?**
diff --git a/docs/integrations/data-ingestion/clickpipes/postgres/faq.md b/docs/integrations/data-ingestion/clickpipes/postgres/faq.md
index 0709df6f740..27bdf551db4 100644
--- a/docs/integrations/data-ingestion/clickpipes/postgres/faq.md
+++ b/docs/integrations/data-ingestion/clickpipes/postgres/faq.md
@@ -78,7 +78,7 @@ Please refer to the [ClickPipes for Postgres: Schema Changes Propagation Support
### What are the costs for ClickPipes for Postgres CDC? {#what-are-the-costs-for-clickpipes-for-postgres-cdc}
-For detailed pricing information, please refer to the [ClickPipes for Postgres CDC pricing section on our main billing overview page](/cloud/manage/billing/overview#clickpipes-for-postgres-cdc).
+For detailed pricing information, please refer to the [ClickPipes for Postgres CDC pricing section on our main billing overview page](/cloud/reference/billing/clickpipes).
### My replication slot size is growing or not decreasing; what might be the issue? {#my-replication-slot-size-is-growing-or-not-decreasing-what-might-be-the-issue}
diff --git a/docs/integrations/data-ingestion/clickpipes/postgres/scaling.md b/docs/integrations/data-ingestion/clickpipes/postgres/scaling.md
index ef924002613..2d5e3019dc4 100644
--- a/docs/integrations/data-ingestion/clickpipes/postgres/scaling.md
+++ b/docs/integrations/data-ingestion/clickpipes/postgres/scaling.md
@@ -19,7 +19,7 @@ Before attempting to scale up, consider:
- First adjusting [initial load parallelism and partitioning](/integrations/clickpipes/postgres/parallel_initial_load) when creating a ClickPipe
- Checking for [long-running transactions](/integrations/clickpipes/postgres/sync_control#transactions) on the source that could be causing CDC delays
-**Increasing the scale will proportionally increase your ClickPipes compute costs.** If you're scaling up just for the initial loads, it's important to scale down after the snapshot is finished to avoid unexpected charges. For more details on pricing, see [Postgres CDC Pricing](/cloud/manage/billing/overview#clickpipes-for-postgres-cdc).
+**Increasing the scale will proportionally increase your ClickPipes compute costs.** If you're scaling up just for the initial loads, it's important to scale down after the snapshot is finished to avoid unexpected charges. For more details on pricing, see [Postgres CDC Pricing](/cloud/reference/billing/clickpipes).
## Prerequisites for this process {#prerequisites}
diff --git a/docs/use-cases/observability/clickstack/ingesting-data/collector.md b/docs/use-cases/observability/clickstack/ingesting-data/collector.md
index e55460191a0..26cb7c14a85 100644
--- a/docs/use-cases/observability/clickstack/ingesting-data/collector.md
+++ b/docs/use-cases/observability/clickstack/ingesting-data/collector.md
@@ -205,6 +205,7 @@ service:
receivers: [filelog]
processors: [batch]
exporters: [otlphttp/hdx]
+
```
Note the need to include an [authorization header containing your ingestion API key](#securing-the-collector) in any OTLP communication.