From 6448acb2f4c051853dfe80e44aa990f805df1d55 Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Mon, 3 Feb 2025 10:06:42 +0100
Subject: [PATCH 1/3] Update usagelimits.md

---
 docs/en/cloud/bestpractices/usagelimits.md | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/docs/en/cloud/bestpractices/usagelimits.md b/docs/en/cloud/bestpractices/usagelimits.md
index 21fcf74a325..c02850d3c56 100644
--- a/docs/en/cloud/bestpractices/usagelimits.md
+++ b/docs/en/cloud/bestpractices/usagelimits.md
@@ -4,22 +4,26 @@ sidebar_label: Usage Limits
 title: Usage Limits
 ---
 
-
-## Database Limits
-Clickhouse is very fast and reliable, but any database has its limits. For example, having too many tables or databases could negatively affect performance. To avoid that, Clickhouse Cloud has guardrails for several types of items.
+While ClickHouse is known for its speed and reliability, optimal performance is achieved within certain operating parameters. For example, having too many tables, databases or parts could negatively impact performance. To avoid this, Clickhouse Cloud has guardrails set up for several types of items. You can find details of these guardrails below.
 
 :::tip
-If you've reached one of those limits, it may mean that you are implementing your use case in an unoptimized way. You can contact our support so we can help you refine your use case to avoid going through the limits or to increase the limits in a guided way.
+If you've run up against one of these guardrails, it's possible that you are implementing your use case in an unoptimized way. Contact our support and we will gladly help you refine your use case to avoid exceeding the guardrails or look together at how we can increase them in a controlled manner.
 :::
 
-# Partitions
-Clickhouse Cloud have a limit of **50000** [partitions](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/custom-partitioning-key) per instance
-
-# Parts
-Clickhouse Cloud have a limit of **100000** [parts](https://clickhouse.com/docs/en/operations/system-tables/parts) per instance
+- **Databases**: 1000
+- **Tables**: 5000-10k
+- **Columns**: ∼1000 (wide format is preferred to compact)
+- **Partitions**: 50k
+- **Parts**: 100k across the entire instance
+- **Part size**: 150gb
+- **Services**: 20 (soft)
+- **Low cardinality**: 10k or less
+- **Primary keys in a table**: 4-5 that sufficiently filter down the data
+- **Concurrency**: default 100, can be increaseed to 1000 per node
+- **Batch ingest**: anything > 1M will be split by the system in 1M row blocks
 
 :::note
-For Single Replica Services, the maximum number of Databases is restricted to 100, and the maximum number of Tables is restricted to 500. In addition, Storage for Basic Tier Services is limited to 1 TB.
+For Single Replica Services, the maximum number of databases is restricted to 100, and the maximum number of tables is restricted to 500. In addition, storage for Basic Tier Services is limited to 1 TB.
 :::
 
 

From ed56da2785a99e6d41135bfeb52f55c32505cc84 Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Mon, 3 Feb 2025 10:08:13 +0100
Subject: [PATCH 2/3] Update usagelimits.md

---
 docs/en/cloud/bestpractices/usagelimits.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/cloud/bestpractices/usagelimits.md b/docs/en/cloud/bestpractices/usagelimits.md
index c02850d3c56..f7160db8268 100644
--- a/docs/en/cloud/bestpractices/usagelimits.md
+++ b/docs/en/cloud/bestpractices/usagelimits.md
@@ -7,7 +7,7 @@ title: Usage Limits
 While ClickHouse is known for its speed and reliability, optimal performance is achieved within certain operating parameters. For example, having too many tables, databases or parts could negatively impact performance. To avoid this, Clickhouse Cloud has guardrails set up for several types of items. You can find details of these guardrails below.
 
 :::tip
-If you've run up against one of these guardrails, it's possible that you are implementing your use case in an unoptimized way. Contact our support and we will gladly help you refine your use case to avoid exceeding the guardrails or look together at how we can increase them in a controlled manner.
+If you've run up against one of these guardrails, it's possible that you are implementing your use case in an unoptimized way. Contact our support team and we will gladly help you refine your use case to avoid exceeding the guardrails or look together at how we can increase them in a controlled manner.
 :::
 
 - **Databases**: 1000

From e837841dbb137cc7c33aa8698d44e7b5c1dee1fc Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Mon, 3 Feb 2025 10:14:02 +0100
Subject: [PATCH 3/3] Update usagelimits.md

---
 docs/en/cloud/bestpractices/usagelimits.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/cloud/bestpractices/usagelimits.md b/docs/en/cloud/bestpractices/usagelimits.md
index f7160db8268..00dd8f24e55 100644
--- a/docs/en/cloud/bestpractices/usagelimits.md
+++ b/docs/en/cloud/bestpractices/usagelimits.md
@@ -19,7 +19,7 @@ If you've run up against one of these guardrails, it's possible that you are imp
 - **Services**: 20 (soft)
 - **Low cardinality**: 10k or less
 - **Primary keys in a table**: 4-5 that sufficiently filter down the data
-- **Concurrency**: default 100, can be increaseed to 1000 per node
+- **Concurrency**: default 100, can be increased to 1000 per node
 - **Batch ingest**: anything > 1M will be split by the system in 1M row blocks
 
 :::note
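
As a companion to the guardrail list this series introduces, several of the limits can be spot-checked on a live service through ClickHouse's standard system tables (`system.parts` and `system.tables`, the same tables the pre-patch docs linked to). The following is a minimal sketch, not part of the patch itself; the queries use only standard system-table columns, and the thresholds in the comments are the ones quoted above:

```sql
-- Active parts across the whole instance (guardrail above: 100k)
SELECT count() AS active_parts
FROM system.parts
WHERE active;

-- Distinct partitions per table (instance-wide guardrail above: 50k)
SELECT database, table, uniqExact(partition) AS partitions
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY partitions DESC
LIMIT 10;

-- User table count (guardrail above: 5000-10k)
SELECT count() AS user_tables
FROM system.tables
WHERE database NOT IN ('system', 'information_schema', 'INFORMATION_SCHEMA');
```

Queries along these lines make it easy to see which tables are drifting toward the parts or partitions guardrails before inserts start being rejected with "too many parts" errors.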