From 90158ace59b86c5a91b60ede7023e6499ac6ab6c Mon Sep 17 00:00:00 2001
From: Dominic Tran
Date: Tue, 16 Sep 2025 08:36:11 -0500
Subject: [PATCH 1/2] only concurrency limits are per replica

---
 docs/cloud/guides/best_practices/usagelimits.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/cloud/guides/best_practices/usagelimits.md b/docs/cloud/guides/best_practices/usagelimits.md
index 1dfe21650ba..532286a6b8c 100644
--- a/docs/cloud/guides/best_practices/usagelimits.md
+++ b/docs/cloud/guides/best_practices/usagelimits.md
@@ -8,8 +8,8 @@ description: 'Describes the recommended usage limits in ClickHouse Cloud'
 While ClickHouse is known for its speed and reliability, optimal performance is
 achieved within certain operating parameters. For example, having too many tables,
 databases, or parts can negatively impact performance. To prevent this, ClickHouse
-Cloud enforces per-replica limits across several operational dimensions.
-The details of these guardrails are listed below.
+Cloud enforces limits across several operational dimensions.
+The details of these guardrails are listed below. Note that query concurrency limits are per replica.
 
 :::tip
 If you've run up against one of these guardrails, it's possible that you are
@@ -18,7 +18,7 @@ we will gladly help you refine your use case to avoid exceeding the guardrails
 or look together at how we can increase them in a controlled manner.
 :::
 
-| Dimension                     | Limit (Per Replica)                                          |
+| Dimension                     | Limit                                                        |
 |-------------------------------|------------------------------------------------------------|
 | **Databases**                 | 1000                                                         |
 | **Tables**                    | 5000                                                         |
@@ -30,7 +30,7 @@ or look together at how we can increase them in a controlled manner.
 | **Services per warehouse**    | 5 (soft)                                                     |
 | **Low cardinality**           | 10k or less                                                  |
 | **Primary keys in a table**   | 4-5 that sufficiently filter down the data                   |
-| **Query concurrency**         | 1000                                                         |
+| **Query concurrency**         | 1000 (per replica)                                           |
 | **Batch ingest**              | anything > 1M will be split by the system in 1M row blocks   |
 
 :::note

From 2616904609295fc7bd4c84af409ac2a3f45cbfb0 Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Tue, 16 Sep 2025 16:12:03 +0200
Subject: [PATCH 2/2] Update docs/cloud/guides/best_practices/usagelimits.md

---
 docs/cloud/guides/best_practices/usagelimits.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/cloud/guides/best_practices/usagelimits.md b/docs/cloud/guides/best_practices/usagelimits.md
index 532286a6b8c..18f42bb97ed 100644
--- a/docs/cloud/guides/best_practices/usagelimits.md
+++ b/docs/cloud/guides/best_practices/usagelimits.md
@@ -9,7 +9,7 @@ While ClickHouse is known for its speed and reliability, optimal performance is
 achieved within certain operating parameters. For example, having too many tables,
 databases, or parts can negatively impact performance. To prevent this, ClickHouse
 Cloud enforces limits across several operational dimensions.
-The details of these guardrails are listed below. Note that query concurrency limits are per replica.
+The details of these guardrails are listed below.
 
 :::tip
 If you've run up against one of these guardrails, it's possible that you are