From 80c8d458acd5000cf8743d6bc170a38c33ae1bba Mon Sep 17 00:00:00 2001
From: UnamedRus
Date: Sat, 25 Sep 2021 23:54:51 +0300
Subject: [PATCH] fixes

---
 ...ltinity-kb-possible-issues-with-running-clickhouse-in-k8s.md | 2 +-
 .../altinity-kb-shutting-down-a-node.md | 2 +-
 .../schema-migration-tools/golang-migrate.md | 2 +-
 .../altinity-kb-number-of-active-parts-in-a-partition.md | 2 ++
 4 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/content/en/altinity-kb-kubernetes/altinity-kb-possible-issues-with-running-clickhouse-in-k8s.md b/content/en/altinity-kb-kubernetes/altinity-kb-possible-issues-with-running-clickhouse-in-k8s.md
index 64b81b34b5..de1c5240ed 100644
--- a/content/en/altinity-kb-kubernetes/altinity-kb-possible-issues-with-running-clickhouse-in-k8s.md
+++ b/content/en/altinity-kb-kubernetes/altinity-kb-possible-issues-with-running-clickhouse-in-k8s.md
@@ -41,4 +41,4 @@ Q. ClickHouse is caching the Kafka pod's IP and trying to connect to the same IP even after the pod is rescheduled
 ### ClickHouse init process failed
 
 It's due to a low value of the `CLICKHOUSE_INIT_TIMEOUT` env variable. Consider increasing it to 1 min.
-[https://github.com/ClickHouse/ClickHouse/blob/9f5cd35a6963cc556a51218b46b0754dcac7306a/docker/server/entrypoint.sh\#L120](https://github.com/ClickHouse/ClickHouse/blob/9f5cd35a6963cc556a51218b46b0754dcac7306a/docker/server/entrypoint.sh#L120)
+[https://github.com/ClickHouse/ClickHouse/blob/9f5cd35a6963cc556a51218b46b0754dcac7306a/docker/server/entrypoint.sh\#L120](https://github.com/ClickHouse/ClickHouse/blob/9f5cd35a6963cc556a51218b46b0754dcac7306a/docker/server/entrypoint.sh#L120)
\ No newline at end of file
diff --git a/content/en/altinity-kb-setup-and-maintenance/altinity-kb-shutting-down-a-node.md b/content/en/altinity-kb-setup-and-maintenance/altinity-kb-shutting-down-a-node.md
index a64763630d..3139a01db4 100644
--- a/content/en/altinity-kb-setup-and-maintenance/altinity-kb-shutting-down-a-node.md
+++ b/content/en/altinity-kb-setup-and-maintenance/altinity-kb-shutting-down-a-node.md
@@ -25,6 +25,6 @@ A safer way:
 
 * Shut down the server.
 
-`SYSTEM SHUTDOWN` query doesn’t wait until query completion and tries to kill all queries immediately after receiving signal, even if there is setting `shutdown_wait_unfinished`.
+The `SYSTEM SHUTDOWN` query doesn’t wait for query completion; it tries to kill all queries immediately after receiving the signal, even if the `shutdown_wait_unfinished` setting is used.
 
 [https://github.com/ClickHouse/ClickHouse/blob/master/programs/server/Server.cpp\#L1353](https://github.com/ClickHouse/ClickHouse/blob/master/programs/server/Server.cpp#L1353)
diff --git a/content/en/altinity-kb-setup-and-maintenance/schema-migration-tools/golang-migrate.md b/content/en/altinity-kb-setup-and-maintenance/schema-migration-tools/golang-migrate.md
index 3f900df813..dc3eff8811 100644
--- a/content/en/altinity-kb-setup-and-maintenance/schema-migration-tools/golang-migrate.md
+++ b/content/en/altinity-kb-setup-and-maintenance/schema-migration-tools/golang-migrate.md
@@ -113,7 +113,7 @@ It happens due to the missing tzdata package in the migrate/migrate docker image
 
 There are 2 possible solutions:
 
 1. You can build your own golang-migrate image from the official one with the tzdata package added.
-2. If you using it as part of your CI you can add installing tzdata package as one of step in ci before using golang-migrate.
+2. If you use it as part of your CI, you can add a step that installs the tzdata package before running golang-migrate (see the sketch below).
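+
+A minimal sketch of solution 1 (it assumes the official `migrate/migrate` image is Alpine-based, so `apk` is available; the `migrate-with-tzdata` tag is only an example name):
+
+```bash
+# Build a local golang-migrate image that also contains the tzdata package;
+# the Dockerfile is passed on stdin, so no build context is needed.
+docker build -t migrate-with-tzdata - <<'DOCKERFILE'
+FROM migrate/migrate
+RUN apk add --no-cache tzdata
+DOCKERFILE
+```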
 
 Related GitHub issues: [https://github.com/golang-migrate/migrate/issues/494](https://github.com/golang-migrate/migrate/issues/494)
diff --git a/content/en/altinity-kb-useful-queries/altinity-kb-number-of-active-parts-in-a-partition.md b/content/en/altinity-kb-useful-queries/altinity-kb-number-of-active-parts-in-a-partition.md
index ffc8c059ae..72193b05e6 100644
--- a/content/en/altinity-kb-useful-queries/altinity-kb-number-of-active-parts-in-a-partition.md
+++ b/content/en/altinity-kb-useful-queries/altinity-kb-number-of-active-parts-in-a-partition.md
@@ -13,3 +13,5 @@ The merge scheduler selects parts using its own algorithm, based on the current node workload
 
 The ClickHouse merge scheduler balances between keeping a large number of parts and wasting resources on merges.
 Merges are CPU and disk-I/O expensive. If ClickHouse merged every new part immediately, all resources would be spent on merges and none would remain for queries (SELECTs).
+
+ClickHouse will not merge parts with a combined size greater than 100 GB (the cap is set by the `max_bytes_to_merge_at_max_space_in_pool` MergeTree setting).
\ No newline at end of file
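
For the article touched in the last hunk, a minimal sketch for checking both facts from the `system` tables (assuming `clickhouse-client` can reach the server; `system.parts` and `system.merge_tree_settings` are standard tables, though setting defaults vary between versions):

```bash
# Count active parts per partition; a high count usually means too many small inserts.
clickhouse-client --query "
    SELECT database, table, partition, count() AS active_parts
    FROM system.parts
    WHERE active
    GROUP BY database, table, partition
    ORDER BY active_parts DESC
    LIMIT 10"

# Show the merge size cap in effect on this node.
clickhouse-client --query "
    SELECT name, value
    FROM system.merge_tree_settings
    WHERE name = 'max_bytes_to_merge_at_max_space_in_pool'"
```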