diff --git a/docs/data-compression/compression-in-clickhouse.md b/docs/data-compression/compression-in-clickhouse.md
index 4afe9599919..accecf43e45 100644
--- a/docs/data-compression/compression-in-clickhouse.md
+++ b/docs/data-compression/compression-in-clickhouse.md
@@ -7,7 +7,7 @@ keywords: ['compression', 'codec', 'encoding']
 
 One of the secrets to ClickHouse query performance is compression.
 
-Less data on disk means less I/O and faster queries and inserts. The overhead of any compression algorithm with respect to CPU will in most cases be out weighted by the reduction in IO. Improving the compression of the data should therefore be the first focus when working on ensuring ClickHouse queries are fast.
+Less data on disk means less I/O and faster queries and inserts. The CPU overhead of any compression algorithm is in most cases outweighed by the reduction in I/O. Improving data compression should therefore be the first focus when ensuring ClickHouse queries are fast.
 
 > For why ClickHouse compresses data so well, we recommended [this article](https://clickhouse.com/blog/optimize-clickhouse-codecs-compression-schema). In summary, as a column-oriented database, values will be written in column order. If these values are sorted, the same values will be adjacent to each other. Compression algorithms exploit contiguous patterns of data. On top of this, ClickHouse has codecs and granular data types which allow users to tune the compression techniques further.
 
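
The claim in the edited paragraph — that sorted, column-ordered values compress better because identical values become adjacent — can be illustrated with a toy sketch. This is not ClickHouse code; it uses Python's `zlib` as a stand-in for a general-purpose codec, and the data is synthetic:

```python
import random
import zlib

# Synthetic low-cardinality column: 50,000 values drawn from 100 distinct ones,
# roughly what a dimension column (country, status, ...) looks like.
random.seed(0)
values = [random.randrange(100) for _ in range(50_000)]

# Unsorted layout: values appear in insertion order, so neighbors are unrelated.
unsorted_bytes = ",".join(map(str, values)).encode()

# Sorted layout: equal values are adjacent, forming long runs the codec exploits.
sorted_bytes = ",".join(map(str, sorted(values))).encode()

u = len(zlib.compress(unsorted_bytes))
s = len(zlib.compress(sorted_bytes))

print(f"unsorted: {u} bytes, sorted: {s} bytes")
```

Both inputs contain exactly the same values, yet the sorted layout compresses to a small fraction of the unsorted one — the same effect a well-chosen `ORDER BY` key has on a ClickHouse column.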