Add table setting to change compression codec used for Kafka table engines #89073

Merged

antaljanosbenjamin merged 7 commits into master from kakfa-add-compression-codec on Oct 30, 2025

Conversation

@antaljanosbenjamin (Member) commented Oct 27, 2025

Changelog category (leave one):

  • Improvement

Changelog entry (a user-readable short description of the changes that goes into CHANGELOG.md):

The kafka_compression_codec and kafka_compression_level settings can now be used to specify the compression for Kafka producers in both Kafka engines.
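A minimal sketch of how the new settings might be used; the broker, topic, and table names below are hypothetical, and `kafka_compression_level = 3` is only an illustrative value:

```sql
-- Hypothetical example: broker, topic, and table names are illustrative.
CREATE TABLE kafka_events_sink
(
    key String,
    value String
)
ENGINE = Kafka
SETTINGS
    kafka_broker_list = 'localhost:9092',
    kafka_topic_list = 'events',
    kafka_format = 'JSONEachRow',
    kafka_compression_codec = 'zstd',  -- new setting from this PR
    kafka_compression_level = 3;       -- new setting from this PR

-- Rows inserted into this table would be produced to Kafka with zstd compression:
INSERT INTO kafka_events_sink VALUES ('k1', 'v1');
```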

Documentation entry for user-facing changes

  • Documentation is written (mandatory for new features)

@clickhouse-gh clickhouse-gh bot (Contributor) commented Oct 27, 2025

Workflow [PR], commit [63140ed]

Summary:

@clickhouse-gh clickhouse-gh bot added the pr-improvement Pull request with some product improvements label Oct 27, 2025
@antaljanosbenjamin antaljanosbenjamin changed the title from "Add table setting to change compression codec used for Kafka table en…" to "Add table setting to change compression codec used for Kafka table engines" on Oct 27, 2025
@yariks5s yariks5s self-assigned this Oct 27, 2025
@antaljanosbenjamin
Member Author

I have to add docs too.

- `kafka_handle_error_mode` — How to handle errors for Kafka engine. Possible values: `default` (the exception will be thrown if we fail to parse a message), `stream` (the exception message and raw message will be saved in virtual columns `_error` and `_raw_message`), `dead_letter_queue` (error related data will be saved in `system.dead_letter_queue`).
- `kafka_commit_on_select` — Commit messages when select query is made. Default: `false`.
- `kafka_max_rows_per_message` — The maximum number of rows written in one Kafka message for row-based formats. Default: `1`.
- `kafka_compression_codec` — Compression codec used for producing messages. Supported: empty string, `none`, `gzip`, `snappy`, `lz4`, `zstd`. In case of empty string the compression codec is not set by the table, thus values from the config files or default value from `librdkafka` will be used. Default: empty string.
Member

It's `none` above, in the CREATE statement.

Suggested change
- `kafka_compression_codec` — Compression codec used for producing messages. Supported: empty string, `none`, `gzip`, `snappy`, `lz4`, `zstd`. In case of empty string the compression codec is not set by the table, thus values from the config files or default value from `librdkafka` will be used. Default: empty string.
- `kafka_compression_codec` — Compression codec used for producing messages. Supported: empty string, `none`, `gzip`, `snappy`, `lz4`, `zstd`. In case of empty string the compression codec is not set by the table, thus values from the config files or default value from `librdkafka` will be used. Default: `none`.

Member Author

Let's change in the create statement above instead.

@antaljanosbenjamin antaljanosbenjamin added this pull request to the merge queue Oct 30, 2025
Merged via the queue into master with commit 2ee4d48 Oct 30, 2025
124 checks passed
@antaljanosbenjamin antaljanosbenjamin deleted the kakfa-add-compression-codec branch October 30, 2025 11:09
@robot-clickhouse-ci-2 robot-clickhouse-ci-2 added the pr-synced-to-cloud The PR is synced to the cloud repo label Oct 30, 2025