From 4c6b045b75bb39a4efa9613a57e52227c821c090 Mon Sep 17 00:00:00 2001
From: Diana Carroll
Date: Mon, 28 Apr 2025 10:10:52 -0400
Subject: [PATCH 1/2] fix grammar in compression-in-clickhouse.md

Replace "out weighted" with "outweighed".
---
 docs/data-compression/compression-in-clickhouse.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/data-compression/compression-in-clickhouse.md b/docs/data-compression/compression-in-clickhouse.md
index 4afe9599919..d0fba334d87 100644
--- a/docs/data-compression/compression-in-clickhouse.md
+++ b/docs/data-compression/compression-in-clickhouse.md
@@ -7,7 +7,7 @@ keywords: ['compression', 'codec', 'encoding']
 
 One of the secrets to ClickHouse query performance is compression.
 
-Less data on disk means less I/O and faster queries and inserts. The overhead of any compression algorithm with respect to CPU will in most cases be out weighted by the reduction in IO. Improving the compression of the data should therefore be the first focus when working on ensuring ClickHouse queries are fast.
+Less data on disk means less I/O and faster queries and inserts. The overhead of any compression algorithm with respect to CPU will in most cases be outweighed by the reduction in IO. Improving the compression of the data should therefore be the first focus when working on ensuring ClickHouse queries are fast.
 
 > For why ClickHouse compresses data so well, we recommended [this article](https://clickhouse.com/blog/optimize-clickhouse-codecs-compression-schema). In summary, as a column-oriented database, values will be written in column order. If these values are sorted, the same values will be adjacent to each other. Compression algorithms exploit contiguous patterns of data. On top of this, ClickHouse has codecs and granular data types which allow users to tune the compression techniques further.
 

From 23c9e33fecdcd26f39a953c95d03c346c18aa441 Mon Sep 17 00:00:00 2001
From: Shaun Struwig <41984034+Blargian@users.noreply.github.com>
Date: Mon, 28 Apr 2025 23:31:14 +0200
Subject: [PATCH 2/2] Update compression-in-clickhouse.md

---
 docs/data-compression/compression-in-clickhouse.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/data-compression/compression-in-clickhouse.md b/docs/data-compression/compression-in-clickhouse.md
index d0fba334d87..accecf43e45 100644
--- a/docs/data-compression/compression-in-clickhouse.md
+++ b/docs/data-compression/compression-in-clickhouse.md
@@ -7,7 +7,7 @@ keywords: ['compression', 'codec', 'encoding']
 
 One of the secrets to ClickHouse query performance is compression.
 
-Less data on disk means less I/O and faster queries and inserts. The overhead of any compression algorithm with respect to CPU will in most cases be outweighed by the reduction in IO. Improving the compression of the data should therefore be the first focus when working on ensuring ClickHouse queries are fast.
+Less data on disk means less I/O and faster queries and inserts. The overhead of any compression algorithm with respect to CPU is in most cases outweighed by the reduction in IO. Improving the compression of the data should therefore be the first focus when working on ensuring ClickHouse queries are fast.
 
 > For why ClickHouse compresses data so well, we recommended [this article](https://clickhouse.com/blog/optimize-clickhouse-codecs-compression-schema). In summary, as a column-oriented database, values will be written in column order. If these values are sorted, the same values will be adjacent to each other. Compression algorithms exploit contiguous patterns of data. On top of this, ClickHouse has codecs and granular data types which allow users to tune the compression techniques further.
 
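For context on the column-codec tuning the patched paragraph refers to: a minimal sketch, not part of either patch above, with a hypothetical table name and columns chosen only for illustration. Column-level codecs are declared per column in the CREATE TABLE statement, typically pairing a specialised encoding with a general-purpose compressor.

```sql
-- Hypothetical example table (not from the patched doc): each column pairs a
-- specialised encoding with a general-purpose compressor.
CREATE TABLE sensor_readings
(
    timestamp DateTime CODEC(Delta, ZSTD),      -- store deltas between adjacent timestamps, then compress
    sensor_id UInt32   CODEC(T64, LZ4),         -- bit-transpose small integers before compressing
    reading   Float64  CODEC(Gorilla, ZSTD(1))  -- Gorilla encoding suits slowly changing float series
)
ENGINE = MergeTree
ORDER BY (sensor_id, timestamp);
```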