-## 关于 chDB {#about-chdb}
+## 性能基准 {#performance-benchmarks}
+
+chDB 在不同场景下提供卓越的性能:
+
+- **[嵌入引擎的ClickBench](https://benchmark.clickhouse.com/#eyJzeXN0ZW0iOnsiQXRoZW5hIChwYXJ0aXRpb25lZCkiOnRydWUsIkF0aGVuYSAoc2luZ2xlKSI6dHJ1ZSwiQXVyb3JhIGZvciBNeVNRTCI6dHJ1ZSwiQXVyb3JhIGZvciBQb3N0Z3JlU1FMIjp0cnVlLCJCeXRlSG91c2UiOnRydWUsImNoREIiOnRydWUsIkNpdHVzIjp0cnVlLCJjbGlja2hvdXNlLWxvY2FsIChwYXJ0aXRpb25lZCkiOnRydWUsImNsaWNraG91c2UtbG9jYWwgKHNpbmdsZSkiOnRydWUsIkNsaWNrSG91c2UiOnRydWUsIkNsaWNrSG91c2UgKHR1bmVkKSI6dHJ1ZSwiQ2xpdGhvdXNlIHpzdGQiOnRydWUsIkNsaWNrSG91c2UgQ2xvdWQiOnRydWUsIkNsaWNrSG91c2UgKHdlYikiOnRydWUsIkNyYXRlREIiOnRydWUsIkRhdGFiZW5kIjp0cnVlLCJEYXRhRnVzaW9uIChzaW5nbGUpIjp0cnVlLCJBcGFjaGUgRG9yaXMiOnRydWUsIkRydWlkIjp0cnVlLCJEdWNrREIgKFBhcnF1ZXQpIjp0cnVlLCJEdWNrREIiOnRydWUsIkVsYXN0aWNzZWFyY2giOnRydWUsIkVsYXN0aWNzZWFyY2ggKHR1bmVkKSI6ZmFsc2UsIkdyZWVucGx1bSI6dHJ1ZSwiSGVhdnlBSSI6dHJ1ZSwiSHlkcmEiOnRydWUsIkluZm9icmlnaHQiOnRydWUsIktpbmV0aWNhIjp0cnVlLCJNYXJpYURCIENvbHVtblN0b3JlIjp0cnVlLCJNYXJpYURCIjpmYWxzZSwiTW9uZXREQiI6dHJ1ZSwiTW9uZ29EQiI6dHJ1ZSwiTXlTUUwgKE15SVNBTSkiOnRydWUsIk15U1FMIjp0cnVlLCJQaW5vdCI6dHJ1ZSwiUG9zdGdyZVNRTCI6dHJ1ZSwiUG9zdGdyZVNRTCAodHVuZWQpIjpmYWxzZSwiUXVlc3REQiAocGFydGl0aW9uZWQpIjp0cnVlLCJRdWVzdERCIjp0cnVlLCJSZWRzaGlmdCI6dHJ1ZSwiU2VsZWN0REIiOnRydWUsIlNpbmdsZVN0b3JlIjp0cnVlLCJTbm93Zmxha2UiOnRydWUsIlNRTGl0ZSI6dHJ1ZSwiU3RhclJvY2tzIjp0cnVlLCJUaW1lc2NhbGVEQiAoY29tcHJlc3Npb24pIjp0cnVlLCJUaW1lc2NhbGVEQiI6dHJ1ZX0sInR5cGUiOnsic3RhdGVsZXNzIjpmYWxzZSwibWFuYWdlZCI6ZmFsc2UsIkphdmEiOmZhbHNlLCJjb2x1bW4tb3JpZW50ZWQiOmZhbHNlLCJDKysiOmZhbHNlLCJNeVNRTCBjb21wYXRpYmxlIjpmYWxzZSwicm93LW9yaWVudGVkIjpmYWxzZSwiQyI6ZmFsc2UsIlBvc3RncmVTUUwgY29tcGF0aWJsZSI6ZmFsc2UsIkNsaWNrSG91c2UgZGVyaXZhdGl2ZSI6ZmFsc2UsImVtYmVkZGVkIjp0cnVlLCJzZXJ2ZXJsZXNzIjpmYWxzZSwiUnVzdCI6ZmFsc2UsInNlYXJjaCI6ZmFsc2UsImRvY3VtZW50IjpmYWxzZSwidGltZS1zZXJpZXMiOmZhbHNlfSwibWFjaGluZSI6eyJzZXJ2ZXJsZXNzIjp0cnVlLCIxNmFjdSI6dHJ1ZSwiTCI6dHJ1ZSwiTSI6dHJ1ZSwiUyI6dHJ1ZSwiWFMiOnRydWUsImM2YS5tZXRhbCwgNTAwZ2IgZ3AyIjp0cnVlLCJjNmEuNHhsYXJnZSwgNTAwZ2IgZ3AyIjp0cnVlLCJjNS40eGxhcmdlLCA1MDBnYiBncDIiOnRydWUsIjE2IHRocmVhZHMiOnRydWUsIjIwIHRocmVhZHMiOnRydWUsIjI0IHRocmVhZHMiOnRydWUsIjI4IHRocmVhZHMiOnRydWUsIjMwIHRocmVhZHMiOnRydWUsIjQ4IHRocmVhZHMiOnRydWUsIjYwIHRocmVhZHMiOnRydWUsIm01ZC4yNHhsYXJnZSI6dHJ1ZSwiYzVuLjR4bGFyZ2UsIDIwMGdiIGdwMiI6dHJ1ZSwiYzZhLjR4bGFyZ2UsIDE1MDBnYiBncDIiOnRydWUsImRjMi44eGxhcmdlIjp0cnVlLCJyYTMuMTZ4bGFyZ2UiOnRydWUsInJhMy40eGxhcmdlIjp0cnVlLCJyYTMueGxwbHVzIjp0cnVlLCJTMjQiOnRydWUsIlMyIjp0cnVlLCIyWEwiOnRydWUsIjNYTCI6dHJ1ZSwiNFhMIjp0cnVlLCJYTCI6dHJ1ZX0sImNsdXN0ZXJfc2l6ZSI6eyIxIjp0cnVlLCIyIjp0cnVlLCI0Ijp0cnVlLCI4Ijp0cnVlLCIxNiI6dHJ1ZSwiMzIiOnRydWUsIjY0Ijp0cnVlLCIxMjgiOnRydWUsInNlcnZlcmxlc3MiOnRydWUsInVuZGVmaW5lZCI6dHJ1ZX0sIm1ldHJpYyI6ImhvdCIsInF1ZXJpZXMiOlt0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlXX0=)** - 综合性能比较
+- **[DataFrame处理性能](https://colab.research.google.com/drive/1FogLujJ_-ds7RGurDrUnK-U0IW8a8Qd0)** - 与其他DataFrame库的比较分析
+- **[DataFrame基准测试](https://benchmark.clickhouse.com/#eyJzeXN0ZW0iOnsiQWxsb3lEQiI6dHJ1ZSwiQWxsb3lEQiAodHVuZWQpIjp0cnVlLCJBdGhlbmEgKHBhcnRpdGlvbmVkKSI6dHJ1ZSwiQXRoZW5hIChzaW5nbGUpIjp0cnVlLCJBdXJvcmEgZm9yIE15U1FMIjp0cnVlLCJBdXJvcmEgZm9yIFBvc3RncmVTUUwiOnRydWUsIkJ5Q29uaXR5Ijp0cnVlLCJCeXRlSG91c2UiOnRydWUsImNoREIgKERhdGFGcmFtZSkiOnRydWUsImNoREIgKFBhcnF1ZXQsIHBhcnRpdGlvbmVkKSI6dHJ1ZSwiY2hEQiI6dHJ1ZSwiQ2l0dXMiOnRydWUsIkNsaWNrSG91c2UgQ2xvdWQgKGF3cykiOnRydWUsIkNsaWNrSG91c2UgQ2xvdWQgKGF6dXJlKSI6dHJ1ZSwiQ2xpY2tIb3VzZSBDbG91ZCAoZ2NwKSI6dHJ1ZSwiQ2xpY2tIb3VzZSAoZGF0YSBsYWtlLCBwYXJ0aXRpb25lZCkiOnRydWUsIkNsaWNrSG91c2UgKGRhdGEgbGFrZSwgc2luZ2xlKSI6dHJ1ZSwiQ2xpY2tIb3VzZSAoUGFycXVldCwgcGFydGl0aW9uZWQpIjp0cnVlLCJDbGlja0hvdXNlIChQYXJxdWV0LCBzaW5nbGUpIjp0cnVlLCJDbGlja0hvdXNlICh3ZWIpIjp0cnVlLCJDbGlja0hvdXNlIjp0cnVlLCJDbGlja0hvdXNlICh0dW5lZCkiOnRydWUsIkNsaWNrSG91c2UgKHR1bmVkLCBtZW1vcnkpIjp0cnVlLCJDbG91ZGJlcnJ5Ijp0cnVlLCJDcmF0ZURCIjp0cnVlLCJDcnVuY2h5IEJyaWRnZSBmb3IgQW5hbHl0aWNzIChQYXJxdWV0KSI6dHJ1ZSwiRGF0YWJlbmQiOnRydWUsIkRhdGFGdXNpb24gKFBhcnF1ZXQsIHBhcnRpdGlvbmVkKSI6dHJ1ZSwiRGF0YUZ1c2lvbiAoUGFycXVldCwgc2luZ2xlKSI6dHJ1ZSwiQXBhY2hlIERvcmlzIjp0cnVlLCJEcnVpZCI6dHJ1ZSwiRHVja0RCIChEYXRhRnJhbWUpIjp0cnVlLCJEdWNrREIgKFBhcnF1ZXQsIHBhcnRpdGlvbmVkKSI6dHJ1ZSwiRHVja0RCIjp0cnVlLCJFbGFzdGljc2VhcmNoIjp0cnVlLCJFbGFzdGljc2VhcmNoICh0dW5lZCkiOmZhbHNlLCJHbGFyZURCIjp0cnVlLCJHcmVlbnBsdW0iOnRydWUsIkhlYXZ5QUkiOnRydWUsIkh5ZHJhIjp0cnVlLCJJbmZvYnJpZ2h0Ijp0cnVlLCJLaW5ldGljYSI6dHJ1ZSwiTWFyaWFEQiBDb2x1bW5TdG9yZSI6dHJ1ZSwiTWFyaWFEQiI6ZmFsc2UsIk1vbmV0REIiOnRydWUsIk1vbmdvREIiOnRydWUsIk1vdGhlcmR1Y2siOnRydWUsIk15U1FMIChNeUlTQU0pIjp0cnVlLCJNeVNRTCI6dHJ1ZSwiT3hsYSI6dHJ1ZSwiUGFuZGFzIChEYXRhRnJhbWUpIjp0cnVlLCJQYXJhZGVEQiAoUGFycXVldCwgcGFydGl0aW9uZWQpIjp0cnVlLCJQYXJhZGVEQiAoUGFycXVldCwgc2luZ2xlKSI6dHJ1ZSwiUGlub3QiOnRydWUsIlBvbGFycyAoRGF0YUZyYW1lKSI6dHJ1ZSwiUG9zdGdyZVNRTCAodHVuZWQpIjpmYWxzZSwiUG9zdGdyZVNRTCI6dHJ1ZSwiUXVlc3REQiAocGFydGl0aW9uZWQpIjp0cnVlLCJRdWVzdERCIjp0cnVlLCJSZWRzaGlmdCI6dHJ1ZSwiU2luZ2xlU3RvcmUiOnRydWUsIlNub3dmbGFrZSI6dHJ1ZSwiU1FMaXRlIjp0cnVlLCJTdGFyUm9ja3MiOnRydWUsIlRhYmxlc3BhY2UiOnRydWUsIlRlbWJvIE9MQVAgKGNvbHVtbmFyKSI6dHJ1ZSwiVGltZXNjYWxlREIgKGNvbXByZXNzaW9uKSI6dHJ1ZSwiVGltZXNjYWxlREIiOnRydWUsIlVtYnJhIjp0cnVlfSwidHlwZSI6eyJDIjpmYWxzZSwiY29sdW1uLW9yaWVudGVkIjpmYWxzZSwiUG9zdGdyZVNRTCBjb21wYXRpYmxlIjpmYWxzZSwibWFuYWdlZCI6ZmFsc2UsImdjcCI6ZmFsc2UsInN0YXRlbGVzcyI6ZmFsc2UsIkphdmEiOmZhbHNlLCJDKysiOmZhbHNlLCJNeVNRTCBjb21wYXRpYmxlIjpmYWxzZSwicm93LW9yaWVudGVkIjpmYWxzZSwiQ2xpY2tIb3VzZSBkZXJpdmF0aXZlIjpmYWxzZSwiZW1iZWRkZWQiOmZhbHNlLCJzZXJ2ZXJsZXNzIjpmYWxzZSwiZGF0YWZyYW1lIjp0cnVlLCJhd3MiOmZhbHNlLCJhenVyZSI6ZmFsc2UsImFuYWx5dGljYWwiOmZhbHNlLCJSdXN0IjpmYWxzZSwic2VhcmNoIjpmYWxzZSwiZG9jdW1lbnQiOmZhbHNlLCJzb21ld2hhdCBQb3N0Z3JlU1FMIGNvbXBhdGlibGUiOmZhbHNlLCJ0aW1lLXNlcmllcyI6ZmFsc2V9LCJtYWNoaW5lIjp7IjE2IHZDUFUgMTI4R0IiOnRydWUsIjggdkNQVSA2NEdCIjp0cnVlLCJzZXJ2ZXJsZXNzIjp0cnVlLCIxNmFjdSI6dHJ1ZSwiYzZhLjR4bGFyZ2UsIDUwMGdiIGdwMiI6dHJ1ZSwiTCI6dHJ1ZSwiTSI6dHJ1ZSwiUyI6dHJ1ZSwiWFMiOnRydWUsImM2YS5tZXRhbCwgNTAwZ2IgZ3AyIjp0cnVlLCIxOTJHQiI6dHJ1ZSwiMjRHQiI6dHJ1ZSwiMzYwR0IiOnRydWUsIjQ4R0IiOnRydWUsIjcyMEdCIjp0cnVlLCI5NkdCIjp0cnVlLCJkZXYiOnRydWUsIjcwOEdCIjp0cnVlLCJjNW4uNHhsYXJnZSwgNTAwZ2IgZ3AyIjp0cnVlLCJBbmFseXRpY3MtMjU2R0IgKDY0IHZDb3JlcywgMjU2IEdCKSI6dHJ1ZSwiYzUuNHhsYXJnZSwgNTAwZ2IgZ3AyIjp0cnVlLCJjNmEuNHhsYXJnZSwgMTUwMGdiIGdwMiI6dHJ1ZSwiY2xvdWQiOnRydWUsImRjMi44eGxhcmdlIjp0cnVlLCJyYTMuMTZ4bGFyZ2UiOnRydWUsInJhMy40eGxhcmdlIjp0cnVlLCJyYTMueGxwbHVzIjp0cnVlLCJTMiI6dHJ1ZSwiUzI0Ijp0cnVlLCIyWEwiOnRydWUsIjNYTCI6dHJ1ZSwiNFhMIjp0cnVlLCJYTCI6dHJ1ZSwiTDEgLSAxNkNQVSAzMkdCIjp0cnVlLCJjNmEuNHhsYXJnZSwgNTAwZ2IgZ3AzIjp0cnVlfSwiY2x1c3Rlcl9zaXplIjp7IjEiOnRydWUsIjIiOnRydWUsIjQiOnRydWUsIjgiOnRydWUsIjE2Ijp0cnVlLCIzMiI6dHJ1ZSwiNjQiOnRydWUsIjEyOCI6dHJ1ZSwic2VydmVybGVzcyI6dHJ1ZX0sIm1ldHJpYyI6ImhvdCIsInF1ZXJpZXMiOlt0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlLHRydWUsdHJ1ZSx0cnVlXX0=)**
+
+
-- 在 [Auxten's blog](https://clickhouse.com/blog/chdb-embedded-clickhouse-rocket-engine-on-a-bicycle) 上阅读 chDB 项目诞生的完整故事
-- 在 [官方 ClickHouse 博客](https://clickhouse.com/blog/welcome-chdb-to-clickhouse) 上阅读关于 chDB 及其使用案例的内容
-- 使用 [codapi 示例](https://antonz.org/trying-chdb/) 在浏览器中发现 chDB
+## 关于 chDB {#about-chdb}
+
+- 在[博客](https://clickhouse.com/blog/chdb-embedded-clickhouse-rocket-engine-on-a-bicycle)上阅读 chDB 项目诞生的完整故事
+- 在[官方 ClickHouse 博客](https://clickhouse.com/blog/welcome-chdb-to-clickhouse)上查看 chDB 及其使用案例
+- 参加 [chDB 按需课程](https://learn.clickhouse.com/user_catalog_class/show/1901178)
+- 使用 [codapi 示例](https://antonz.org/trying-chdb/)在浏览器中体验 chDB
+- 更多示例请参见 [examples 目录](https://github.com/chdb-io/chdb/tree/main/examples)
-## 它使用什么许可证? {#what-license-does-it-use}
+## 许可证 {#license}
-chDB 根据 Apache 许可证第 2.0 版提供。
+chDB 根据 Apache 许可证 2.0 版提供。有关更多信息,请参见[许可证](https://github.com/chdb-io/chdb/blob/main/LICENSE.txt)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/index.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/index.md.hash
index 0a5f5e3777b..1b14879e101 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/index.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/index.md.hash
@@ -1 +1 @@
-3b039ff406752922
+1b23a7330fc61f1e
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/bun.md b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/bun.md
index de6cef1ffe7..cab6644bdb2 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/bun.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/bun.md
@@ -1,59 +1,134 @@
---
-'title': '为 Bun 安装 chDB'
+'title': 'chDB for Bun'
'sidebar_label': 'Bun'
'slug': '/chdb/install/bun'
-'description': '如何为 Bun 安装 chDB'
+'description': '如何安装和使用 chDB 与 Bun 运行时'
'keywords':
- 'chdb'
-- 'embedded'
-- 'clickhouse-lite'
- 'bun'
-- 'install'
+- 'javascript'
+- 'typescript'
+- 'embedded'
+- 'clickhouse'
+- 'sql'
+- 'olap'
+'doc_type': 'guide'
---
-# Installing chDB for Bun
+# chDB for Bun
+
+chDB-bun 提供实验性的 FFI (Foreign Function Interface) 绑定,允许您在 Bun 应用中直接运行 ClickHouse 查询,而无需任何外部依赖。
+
+## 安装 {#installation}
+
+### 步骤 1: 安装系统依赖 {#install-system-dependencies}
-## Requirements {#requirements}
+首先,安装所需的系统依赖:
-安装 [libchdb](https://github.com/chdb-io/chdb):
+#### 安装 libchdb {#install-libchdb}
```bash
curl -sL https://lib.chdb.io | bash
```
-## Install {#install}
+#### 安装构建工具 {#install-build-tools}
-请参见: [chdb-bun](https://github.com/chdb-io/chdb-bun)
+您需要在系统上安装 `gcc` 或 `clang`。
-## GitHub repository {#github-repository}
+### 步骤 2: 安装 chDB-bun {#install-chdb-bun}
-您可以在 [chdb-io/chdb-bun](https://github.com/chdb-io/chdb-bun) 找到该项目的 GitHub 仓库。
+```bash
+
+# Install from the GitHub repository
+bun add github:chdb-io/chdb-bun
+
+
+# Or clone and build locally
+git clone https://github.com/chdb-io/chdb-bun.git
+cd chdb-bun
+bun install
+bun run build
+```
-## Usage {#usage}
+## 用法 {#usage}
-### Query(query, *format) (ephemeral) {#queryquery-format-ephemeral}
+chDB-bun 支持两种查询模式:用于一次性操作的临时查询和用于维护数据库状态的持久会话。
-```javascript
+### 临时查询 {#ephemeral-queries}
+
+对于不需要持久状态的简单一次性查询:
+
+```typescript
import { query } from 'chdb-bun';
-// Query (ephemeral)
-var result = query("SELECT version()", "CSV");
-console.log(result); // 23.10.1.1
+// Basic query
+const result = query("SELECT version()", "CSV");
+console.log(result); // "23.10.1.1"
+
+// Query with different output formats
+const jsonResult = query("SELECT 1 as id, 'Hello' as message", "JSON");
+console.log(jsonResult);
+
+// Query with calculations
+const mathResult = query("SELECT 2 + 2 as sum, pi() as pi_value", "Pretty");
+console.log(mathResult);
+
+// Query system information
+const systemInfo = query("SELECT * FROM system.functions LIMIT 5", "CSV");
+console.log(systemInfo);
```
-### Session.Query(query, *format) {#sessionqueryquery-format}
+### 持久会话 {#persistent-sessions}
-```javascript
-import { Session } from 'chdb-bun';
-const sess = new Session('./chdb-bun-tmp');
+对于需要在查询之间维护状态的复杂操作:
-// Query Session (persistent)
-sess.query("CREATE FUNCTION IF NOT EXISTS hello AS () -> 'Hello chDB'", "CSV");
-var result = sess.query("SELECT hello()", "CSV");
-console.log(result);
+```typescript
+import { Session } from 'chdb-bun';
-// Before cleanup, you can find the database files in `./chdb-bun-tmp`
+// Create a session with persistent storage
+const sess = new Session('./chdb-bun-tmp');
-sess.cleanup(); // cleanup session, this will delete the database
+try {
+ // Create a database and table
+ sess.query(`
+ CREATE DATABASE IF NOT EXISTS mydb;
+ CREATE TABLE IF NOT EXISTS mydb.users (
+ id UInt32,
+ name String,
+ email String
+ ) ENGINE = MergeTree() ORDER BY id
+ `, "CSV");
+
+ // Insert data
+ sess.query(`
+ INSERT INTO mydb.users VALUES
+ (1, 'Alice', 'alice@example.com'),
+ (2, 'Bob', 'bob@example.com'),
+ (3, 'Charlie', 'charlie@example.com')
+ `, "CSV");
+
+ // Query the data
+ const users = sess.query("SELECT * FROM mydb.users ORDER BY id", "JSON");
+ console.log("Users:", users);
+
+ // Create and use custom functions
+ sess.query("CREATE FUNCTION IF NOT EXISTS hello AS () -> 'Hello chDB'", "CSV");
+ const greeting = sess.query("SELECT hello() as message", "Pretty");
+ console.log(greeting);
+
+ // Aggregate queries
+ const stats = sess.query(`
+ SELECT
+ COUNT(*) as total_users,
+ MAX(id) as max_id,
+ MIN(id) as min_id
+ FROM mydb.users
+ `, "JSON");
+ console.log("Statistics:", stats);
+
+} finally {
+ // Always cleanup the session to free resources
+ sess.cleanup(); // This deletes the database files
+}
```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/bun.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/bun.md.hash
index 01682d0fb36..10a66b6464d 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/bun.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/bun.md.hash
@@ -1 +1 @@
-38ae1ccc991c9c80
+dc8ea61c3f6531fd
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/c.md b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/c.md
index eb147340b04..593014a02cd 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/c.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/c.md
@@ -1,50 +1,342 @@
---
-'title': '安装 chDB 适用于 C 和 C++'
+'title': 'chDB for C 和 C++'
'sidebar_label': 'C 和 C++'
'slug': '/chdb/install/c'
-'description': '如何安装 chDB 适用于 C 和 C++'
+'description': '如何安装和使用 chDB 与 C 和 C++'
'keywords':
- 'chdb'
+- 'c'
+- 'cpp'
- 'embedded'
-- 'clickhouse-lite'
-- 'install'
+- 'clickhouse'
+- 'sql'
+- 'olap'
+- 'api'
+'doc_type': 'guide'
---
-# Installing chDB for C and C++
+# chDB for C and C++
-## Requirements {#requirements}
+chDB 提供了一个原生的 C/C++ API,可以将 ClickHouse 功能直接嵌入到您的应用程序中。该 API 支持简单查询以及持久连接和流式查询结果等高级功能。
-安装 [libchdb](https://github.com/chdb-io/chdb):
+## Installation {#installation}
+
+### Step 1: Install libchdb {#install-libchdb}
+
+在您的系统上安装 chDB 库:
```bash
curl -sL https://lib.chdb.io | bash
```
+### Step 2: Include headers {#include-headers}
+
+在您的项目中包含 chDB 头文件:
+
+```c
+#include <chdb.h>
+```
+
+### Step 3: Link library {#link-library}
+
+编译并链接您的应用程序与 chDB:
+
+```bash
+
+# C compilation
+gcc -o myapp myapp.c -lchdb
+
+
+# C++ compilation
+g++ -o myapp myapp.cpp -lchdb
+```
+
+## C Examples {#c-examples}
+
+### Basic connection and queries {#basic-connection-queries}
+
+```c
+#include <chdb.h>
+#include <stdio.h>
+
+int main() {
+ // Create connection arguments
+ char* args[] = {"chdb", "--path", "/tmp/chdb-data"};
+ int argc = 3;
+
+ // Connect to chDB
+ chdb_connection* conn = chdb_connect(argc, args);
+ if (!conn) {
+ printf("Failed to connect to chDB\n");
+ return 1;
+ }
+
+ // Execute a query
+ chdb_result* result = chdb_query(*conn, "SELECT version()", "CSV");
+ if (!result) {
+ printf("Query execution failed\n");
+ chdb_close_conn(conn);
+ return 1;
+ }
+
+ // Check for errors
+ const char* error = chdb_result_error(result);
+ if (error) {
+ printf("Query error: %s\n", error);
+ } else {
+ // Get result data
+ char* data = chdb_result_buffer(result);
+ size_t length = chdb_result_length(result);
+ double elapsed = chdb_result_elapsed(result);
+ uint64_t rows = chdb_result_rows_read(result);
+
+ printf("Result: %.*s\n", (int)length, data);
+ printf("Elapsed: %.3f seconds\n", elapsed);
+ printf("Rows: %llu\n", rows);
+ }
+
+ // Cleanup
+ chdb_destroy_query_result(result);
+ chdb_close_conn(conn);
+ return 0;
+}
+```
+
+### Streaming queries {#streaming-queries}
+
+```c
+#include <chdb.h>
+#include <stdio.h>
+#include <stdbool.h>
+
+int main() {
+ char* args[] = {"chdb", "--path", "/tmp/chdb-stream"};
+ chdb_connection* conn = chdb_connect(3, args);
+
+ if (!conn) {
+ printf("Failed to connect\n");
+ return 1;
+ }
+
+ // Start streaming query
+ chdb_result* stream_result = chdb_stream_query(*conn,
+ "SELECT number FROM system.numbers LIMIT 1000000", "CSV");
+
+ if (!stream_result) {
+ printf("Failed to start streaming query\n");
+ chdb_close_conn(conn);
+ return 1;
+ }
+
+ uint64_t total_rows = 0;
+
+ // Process chunks
+ while (true) {
+ chdb_result* chunk = chdb_stream_fetch_result(*conn, stream_result);
+ if (!chunk) break;
+
+ // Check if we have data in this chunk
+ size_t chunk_length = chdb_result_length(chunk);
+ if (chunk_length == 0) {
+ chdb_destroy_query_result(chunk);
+ break; // End of stream
+ }
+
+ uint64_t chunk_rows = chdb_result_rows_read(chunk);
+ total_rows += chunk_rows;
+
+ printf("Processed chunk: %llu rows, %zu bytes\n", chunk_rows, chunk_length);
+
+ // Process the chunk data here
+ // char* data = chdb_result_buffer(chunk);
+
+ chdb_destroy_query_result(chunk);
-## Usage {#usage}
+ // Progress reporting
+ if (total_rows % 100000 == 0) {
+ printf("Progress: %llu rows processed\n", total_rows);
+ }
+ }
-按照 [libchdb](https://github.com/chdb-io/chdb/blob/main/bindings.md) 的说明开始使用。
+ printf("Streaming complete. Total rows: %llu\n", total_rows);
-`chdb.h`
+ // Cleanup streaming query
+ chdb_destroy_query_result(stream_result);
+ chdb_close_conn(conn);
+ return 0;
+}
+```
+
+### Working with different data formats {#data-formats}
```c
-#pragma once
-#include <cstdint>
-#include <stddef.h>
-
-extern "C" {
-struct local_result
-{
- char * buf;
- size_t len;
- void * _vec; // std::vector *, for freeing
- double elapsed;
- uint64_t rows_read;
- uint64_t bytes_read;
+#include <chdb.h>
+#include <stdio.h>
+
+int main() {
+ char* args[] = {"chdb"};
+ chdb_connection* conn = chdb_connect(1, args);
+
+ const char* query = "SELECT number, toString(number) as str FROM system.numbers LIMIT 3";
+
+ // CSV format
+ chdb_result* csv_result = chdb_query(*conn, query, "CSV");
+ printf("CSV Result:\n%.*s\n\n",
+ (int)chdb_result_length(csv_result),
+ chdb_result_buffer(csv_result));
+ chdb_destroy_query_result(csv_result);
+
+ // JSON format
+ chdb_result* json_result = chdb_query(*conn, query, "JSON");
+ printf("JSON Result:\n%.*s\n\n",
+ (int)chdb_result_length(json_result),
+ chdb_result_buffer(json_result));
+ chdb_destroy_query_result(json_result);
+
+ // Pretty format
+ chdb_result* pretty_result = chdb_query(*conn, query, "Pretty");
+ printf("Pretty Result:\n%.*s\n\n",
+ (int)chdb_result_length(pretty_result),
+ chdb_result_buffer(pretty_result));
+ chdb_destroy_query_result(pretty_result);
+
+ chdb_close_conn(conn);
+ return 0;
+}
+```
+
+## C++ example {#cpp-example}
+
+```cpp
+#include <chdb.h>
+#include <iostream>
+#include <string>
+#include <vector>
+#include <stdexcept>
+
+class ChDBConnection {
+private:
+ chdb_connection* conn;
+
+public:
+    ChDBConnection(const std::vector<std::string>& args) {
+        // Convert string vector to char* array
+        std::vector<char*> argv;
+        for (const auto& arg : args) {
+            argv.push_back(const_cast<char*>(arg.c_str()));
+ }
+
+ conn = chdb_connect(argv.size(), argv.data());
+ if (!conn) {
+ throw std::runtime_error("Failed to connect to chDB");
+ }
+ }
+
+ ~ChDBConnection() {
+ if (conn) {
+ chdb_close_conn(conn);
+ }
+ }
+
+ std::string query(const std::string& sql, const std::string& format = "CSV") {
+ chdb_result* result = chdb_query(*conn, sql.c_str(), format.c_str());
+ if (!result) {
+ throw std::runtime_error("Query execution failed");
+ }
+
+ const char* error = chdb_result_error(result);
+ if (error) {
+ std::string error_msg(error);
+ chdb_destroy_query_result(result);
+ throw std::runtime_error("Query error: " + error_msg);
+ }
+
+ std::string data(chdb_result_buffer(result), chdb_result_length(result));
+
+ // Get query statistics
+ std::cout << "Query statistics:\n";
+ std::cout << " Elapsed: " << chdb_result_elapsed(result) << " seconds\n";
+ std::cout << " Rows read: " << chdb_result_rows_read(result) << "\n";
+ std::cout << " Bytes read: " << chdb_result_bytes_read(result) << "\n";
+
+ chdb_destroy_query_result(result);
+ return data;
+ }
};
-local_result * query_stable(int argc, char ** argv);
-void free_result(local_result * result);
+int main() {
+ try {
+ // Create connection
+        ChDBConnection db({"chdb", "--path", "/tmp/chdb-cpp"});
+
+ // Create and populate table
+ db.query("CREATE TABLE test (id UInt32, value String) ENGINE = MergeTree() ORDER BY id");
+ db.query("INSERT INTO test VALUES (1, 'hello'), (2, 'world'), (3, 'chdb')");
+
+ // Query with different formats
+ std::cout << "CSV Results:\n" << db.query("SELECT * FROM test", "CSV") << "\n";
+ std::cout << "JSON Results:\n" << db.query("SELECT * FROM test", "JSON") << "\n";
+
+ // Aggregation query
+ std::cout << "Count: " << db.query("SELECT COUNT(*) FROM test") << "\n";
+
+ } catch (const std::exception& e) {
+ std::cerr << "Error: " << e.what() << std::endl;
+ return 1;
+ }
+
+ return 0;
+}
+```
+
+## Error handling best practices {#error-handling}
+
+```c
+#include <chdb.h>
+#include <stdio.h>
+
+int safe_query_example() {
+ chdb_connection* conn = NULL;
+ chdb_result* result = NULL;
+ int return_code = 0;
+
+ // Create connection
+ char* args[] = {"chdb"};
+ conn = chdb_connect(1, args);
+ if (!conn) {
+ printf("Failed to create connection\n");
+ return 1;
+ }
+
+ // Execute query
+ result = chdb_query(*conn, "SELECT invalid_syntax", "CSV");
+ if (!result) {
+ printf("Query execution failed\n");
+ return_code = 1;
+ goto cleanup;
+ }
+
+ // Check for query errors
+ const char* error = chdb_result_error(result);
+ if (error) {
+ printf("Query error: %s\n", error);
+ return_code = 1;
+ goto cleanup;
+ }
+
+ // Process successful result
+ printf("Result: %.*s\n",
+ (int)chdb_result_length(result),
+ chdb_result_buffer(result));
+
+cleanup:
+ if (result) chdb_destroy_query_result(result);
+ if (conn) chdb_close_conn(conn);
+ return return_code;
}
```
+
+## GitHub repository {#github-repository}
+
+- **Main Repository**: [chdb-io/chdb](https://github.com/chdb-io/chdb)
+- **Issues and Support**: 在 [GitHub repository](https://github.com/chdb-io/chdb/issues) 上报告问题
+- **C API Documentation**: [Bindings Documentation](https://github.com/chdb-io/chdb/blob/main/bindings.md)
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/c.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/c.md.hash
index b22edc99a88..24388c065ce 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/c.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/c.md.hash
@@ -1 +1 @@
-66d7e1632baf1f01
+410ac9448a262702
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/go.md b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/go.md
index 732d94d74b6..12fdfc40be1 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/go.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/go.md
@@ -1,36 +1,259 @@
---
-'title': '为 Go 安装 chDB'
+'title': 'chDB for Go'
'sidebar_label': 'Go'
'slug': '/chdb/install/go'
-'description': '如何为 Go 安装 chDB'
+'description': '如何安装和使用 chDB 与 Go'
'keywords':
- 'chdb'
-- 'embedded'
-- 'clickhouse-lite'
- 'go'
-- 'install'
+- 'golang'
+- 'embedded'
+- 'clickhouse'
+- 'sql'
+- 'olap'
+'doc_type': 'guide'
---
-# Installing chDB for Go
+# chDB for Go
+
+chDB-go 提供了 chDB 的 Go 绑定,使您能够在 Go 应用程序中直接运行 ClickHouse 查询,且无需任何外部依赖。
+
+## Installation {#installation}
-## Requirements {#requirements}
+### Step 1: Install libchdb {#install-libchdb}
-安装 [libchdb](https://github.com/chdb-io/chdb):
+首先,安装 chDB 库:
```bash
curl -sL https://lib.chdb.io | bash
```
-## Install {#install}
+### Step 2: Install chdb-go {#install-chdb-go}
-查看: [chdb-go](https://github.com/chdb-io/chdb-go)
+安装 Go 包:
-## GitHub repository {#github-repository}
+```bash
+go install github.com/chdb-io/chdb-go@latest
+```
-您可以在 [chdb-io/chdb-go](https://github.com/chdb-io/chdb-go) 找到该项目的 GitHub 仓库。
+或者将其添加到您的 `go.mod` 中:
+
+```bash
+go get github.com/chdb-io/chdb-go
+```
## Usage {#usage}
-- API 文档: [High Level API](https://github.com/chdb-io/chdb-go/blob/main/chdb.md)
-- 低级 API 文档: [Low Level API](https://github.com/chdb-io/chdb-go/blob/main/lowApi.md)
+### Command line interface {#cli}
+
+chDB-go 包含一个 CLI,用于快速查询:
+
+```bash
+
+# Simple query
+./chdb-go "SELECT 123"
+
+
+# Interactive mode
+./chdb-go
+
+
+# Interactive mode with persistent storage
+./chdb-go --path /tmp/chdb
+```
+
+### Go Library - quick start {#quick-start}
+
+#### Stateless queries {#stateless-queries}
+
+用于简单的一次性查询:
+
+```go
+package main
+
+import (
+ "fmt"
+ "github.com/chdb-io/chdb-go"
+)
+
+func main() {
+ // Execute a simple query
+ result, err := chdb.Query("SELECT version()", "CSV")
+ if err != nil {
+ panic(err)
+ }
+ fmt.Println(result)
+}
+```
+
+#### Stateful queries with session {#stateful-queries}
+
+用于具有持久状态的复杂查询:
+
+```go
+package main
+
+import (
+ "fmt"
+ "github.com/chdb-io/chdb-go"
+)
+
+func main() {
+ // Create a session with persistent storage
+ session, err := chdb.NewSession("/tmp/chdb-data")
+ if err != nil {
+ panic(err)
+ }
+ defer session.Cleanup()
+
+ // Create database and table
+ _, err = session.Query(`
+ CREATE DATABASE IF NOT EXISTS testdb;
+ CREATE TABLE IF NOT EXISTS testdb.test_table (
+ id UInt32,
+ name String
+ ) ENGINE = MergeTree() ORDER BY id
+ `, "")
+
+ if err != nil {
+ panic(err)
+ }
+
+ // Insert data
+ _, err = session.Query(`
+ INSERT INTO testdb.test_table VALUES
+ (1, 'Alice'), (2, 'Bob'), (3, 'Charlie')
+ `, "")
+
+ if err != nil {
+ panic(err)
+ }
+
+ // Query data
+ result, err := session.Query("SELECT * FROM testdb.test_table ORDER BY id", "Pretty")
+ if err != nil {
+ panic(err)
+ }
+
+ fmt.Println(result)
+}
+```
+
+#### SQL driver interface {#sql-driver}
+
+chDB-go 实现了 Go 的 `database/sql` 接口:
+
+```go
+package main
+
+import (
+ "database/sql"
+ "fmt"
+ _ "github.com/chdb-io/chdb-go/driver"
+)
+
+func main() {
+ // Open database connection
+ db, err := sql.Open("chdb", "")
+ if err != nil {
+ panic(err)
+ }
+ defer db.Close()
+
+ // Query with standard database/sql interface
+ rows, err := db.Query("SELECT COUNT(*) FROM url('https://datasets.clickhouse.com/hits/hits.parquet')")
+ if err != nil {
+ panic(err)
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var count int
+ err := rows.Scan(&count)
+ if err != nil {
+ panic(err)
+ }
+ fmt.Printf("Count: %d\n", count)
+ }
+}
+```
+
+#### Query streaming for large datasets {#query-streaming}
+
+对于无法完全放入内存的大型数据集,请使用流式查询:
+
+```go
+package main
+
+import (
+ "fmt"
+ "log"
+ "github.com/chdb-io/chdb-go/chdb"
+)
+
+func main() {
+ // Create a session for streaming queries
+ session, err := chdb.NewSession("/tmp/chdb-stream")
+ if err != nil {
+ log.Fatal(err)
+ }
+ defer session.Cleanup()
+
+ // Execute a streaming query for large dataset
+ streamResult, err := session.QueryStreaming(
+ "SELECT number, number * 2 as double FROM system.numbers LIMIT 1000000",
+ "CSV",
+ )
+ if err != nil {
+ log.Fatal(err)
+ }
+ defer streamResult.Free()
+
+ rowCount := 0
+
+ // Process data in chunks
+ for {
+ chunk := streamResult.GetNext()
+ if chunk == nil {
+ // No more data
+ break
+ }
+
+ // Check for streaming errors
+ if err := streamResult.Error(); err != nil {
+ log.Printf("Streaming error: %v", err)
+ break
+ }
+
+ rowsRead := chunk.RowsRead()
+ // You can process the chunk data here
+ // For example, write to file, send over network, etc.
+ fmt.Printf("Processed chunk with %d rows\n", rowsRead)
+ rowCount += int(rowsRead)
+ if rowCount%100000 == 0 {
+ fmt.Printf("Processed %d rows so far...\n", rowCount)
+ }
+ }
+
+ fmt.Printf("Total rows processed: %d\n", rowCount)
+}
+```
+
+**流式查询的好处:**
+- **内存高效** - 在不将所有数据加载到内存中的情况下处理大型数据集
+- **实时处理** - 在第一个数据块到达后立即开始处理数据
+- **取消支持** - 可以使用 `Cancel()` 取消长时间运行的查询
+- **错误处理** - 使用 `Error()` 检查流式处理期间的错误
+
+## API documentation {#api-documentation}
+
+chDB-go 提供了高层次和低层次的 API:
+
+- **[高层次 API 文档](https://github.com/chdb-io/chdb-go/blob/main/chdb.md)** - 推荐用于大多数用例
+- **[低层次 API 文档](https://github.com/chdb-io/chdb-go/blob/main/lowApi.md)** - 用于需要精细控制的高级用例
+
+## System requirements {#requirements}
+
+- Go 1.21 或更高版本
+- 兼容 Linux 和 macOS
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/go.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/go.md.hash
index 34b334f21a6..6878366f00f 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/go.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/go.md.hash
@@ -1 +1 @@
-b2b885728e769679
+f5bd98c8d05c68ae
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/index.md b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/index.md
index dca9d31a4cb..7d15eb1fab0 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/index.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/index.md
@@ -10,15 +10,16 @@
- 'Bun'
- 'C'
- 'C++'
+'doc_type': 'landing-page'
---
-关于如何设置 chDB 的说明如下,适用于以下语言和运行时:
+如何设置 chDB 的说明如下,适用于以下语言和运行时:
-| 语言 |
-|---------------------------------------|
-| [Python](/chdb/install/python) |
-| [NodeJS](/chdb/install/nodejs) |
-| [Go](/chdb/install/go) |
-| [Rust](/chdb/install/rust) |
-| [Bun](/chdb/install/bun) |
-| [C and C++](/chdb/install/c) |
+| 语言 | API 参考 |
+|----------------------------------------|-------------------------------------|
+| [Python](/chdb/install/python) | [Python API](/chdb/api/python) |
+| [NodeJS](/chdb/install/nodejs) | |
+| [Go](/chdb/install/go) | |
+| [Rust](/chdb/install/rust) | |
+| [Bun](/chdb/install/bun) | |
+| [C 和 C++](/chdb/install/c) | |
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/index.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/index.md.hash
index e253c9f8431..8bfb01f5b9b 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/index.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/index.md.hash
@@ -1 +1 @@
-76e84f59d6a713d8
+04c84539c1f70e3c
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/nodejs.md b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/nodejs.md
index 9b04edad8ca..3e0df60b836 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/nodejs.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/nodejs.md
@@ -1,71 +1,201 @@
---
-'title': '为 NodeJS 安装 chDB'
-'sidebar_label': 'NodeJS'
+'title': 'chDB for Node.js'
+'sidebar_label': 'Node.js'
'slug': '/chdb/install/nodejs'
-'description': '如何为 NodeJS 安装 chDB'
+'description': '如何安装和使用 chDB 与 Node.js'
'keywords':
- 'chdb'
+- 'nodejs'
+- 'javascript'
- 'embedded'
-- 'clickhouse-lite'
-- 'NodeJS'
-- 'install'
+- 'clickhouse'
+- 'sql'
+- 'olap'
+'doc_type': 'guide'
---
-# 安装 chDB 用于 NodeJS
+# chDB for Node.js
-## 需求 {#requirements}
+chDB-node 提供了 chDB 的 Node.js 绑定,使您能够直接在 Node.js 应用程序中运行 ClickHouse 查询,无需任何外部依赖。
-安装 [libchdb](https://github.com/chdb-io/chdb):
+## Installation {#installation}
```bash
-curl -sL https://lib.chdb.io | bash
+npm install chdb
```
-## 安装 {#install}
+## Usage {#usage}
-```bash
-npm i chdb
-```
+chDB-node 支持两种查询模式:用于简单操作的独立查询和用于维护数据库状态的会话查询。
-## GitHub 仓库 {#github-repository}
+### Standalone queries {#standalone-queries}
-您可以在 [chdb-io/chdb-node](https://github.com/chdb-io/chdb-node) 找到该项目的 GitHub 仓库。
+对于不需要持久状态的简单一次性查询:
-## 使用 {#usage}
+```javascript
+const { query } = require("chdb");
-您可以通过导入和使用 chdb-node 模块,在 NodeJS 应用程序中利用 chdb 的强大功能:
+// Basic query
+const result = query("SELECT version()", "CSV");
+console.log("ClickHouse version:", result);
-```javascript
-const { query, Session } = require("chdb");
+// Query with multiple columns
+const multiResult = query("SELECT 'Hello' as greeting, 'chDB' as engine, 42 as answer", "CSV");
+console.log("Multi-column result:", multiResult);
-var ret;
+// Mathematical operations
+const mathResult = query("SELECT 2 + 2 as sum, pi() as pi_value", "JSON");
+console.log("Math result:", mathResult);
-// Test standalone query
-ret = query("SELECT version(), 'Hello chDB', chdb()", "CSV");
-console.log("Standalone Query Result:", ret);
+// System information
+const systemInfo = query("SELECT * FROM system.functions LIMIT 5", "Pretty");
+console.log("System functions:", systemInfo);
+```
-// Test session query
-// Create a new session instance
-const session = new Session("./chdb-node-tmp");
-ret = session.query("SELECT 123", "CSV")
-console.log("Session Query Result:", ret);
-ret = session.query("CREATE DATABASE IF NOT EXISTS testdb;" +
- "CREATE TABLE IF NOT EXISTS testdb.testtable (id UInt32) ENGINE = MergeTree() ORDER BY id;");
+### Session-based queries {#session-based-queries}
-session.query("USE testdb; INSERT INTO testtable VALUES (1), (2), (3);")
+```javascript
+const { Session } = require("chdb");
+
+// Create a session with persistent storage
+const session = new Session("./chdb-node-data");
+
+try {
+ // Create database and table
+ session.query(`
+ CREATE DATABASE IF NOT EXISTS myapp;
+ CREATE TABLE IF NOT EXISTS myapp.users (
+ id UInt32,
+ name String,
+ email String,
+ created_at DateTime DEFAULT now()
+ ) ENGINE = MergeTree() ORDER BY id
+ `);
+
+ // Insert sample data
+ session.query(`
+ INSERT INTO myapp.users (id, name, email) VALUES
+ (1, 'Alice', 'alice@example.com'),
+ (2, 'Bob', 'bob@example.com'),
+ (3, 'Charlie', 'charlie@example.com')
+ `);
+
+ // Query the data with different formats
+ const csvResult = session.query("SELECT * FROM myapp.users ORDER BY id", "CSV");
+ console.log("CSV Result:", csvResult);
+
+ const jsonResult = session.query("SELECT * FROM myapp.users ORDER BY id", "JSON");
+ console.log("JSON Result:", jsonResult);
+
+ // Aggregate queries
+ const stats = session.query(`
+ SELECT
+ COUNT(*) as total_users,
+ MAX(id) as max_id,
+ MIN(created_at) as earliest_signup
+ FROM myapp.users
+ `, "Pretty");
+ console.log("User Statistics:", stats);
+
+} finally {
+ // Always cleanup the session
+ session.cleanup(); // This deletes the database files
+}
+```
-ret = session.query("SELECT * FROM testtable;")
-console.log("Session Query Result:", ret);
+### Processing external data {#processing-external-data}
-// Clean up the session
-session.cleanup();
+```javascript
+const { Session } = require("chdb");
+
+const session = new Session("./data-processing");
+
+try {
+ // Process CSV data from URL
+ const result = session.query(`
+ SELECT
+ COUNT(*) as total_records,
+ COUNT(DISTINCT "UserID") as unique_users
+ FROM url('https://datasets.clickhouse.com/hits/hits.csv', 'CSV')
+ LIMIT 1000
+ `, "JSON");
+
+ console.log("External data analysis:", result);
+
+ // Create table from external data
+ session.query(`
+ CREATE TABLE web_analytics AS
+ SELECT * FROM url('https://datasets.clickhouse.com/hits/hits.csv', 'CSV')
+ LIMIT 10000
+ `);
+
+ // Analyze the imported data
+ const analysis = session.query(`
+ SELECT
+ toDate("EventTime") as date,
+ COUNT(*) as events,
+ COUNT(DISTINCT "UserID") as unique_users
+ FROM web_analytics
+ GROUP BY date
+ ORDER BY date
+ LIMIT 10
+ `, "Pretty");
+
+ console.log("Daily analytics:", analysis);
+
+} finally {
+ session.cleanup();
+}
```
-## 从源代码构建 {#build-from-source}
+## Error handling {#error-handling}
-```bash
-npm run libchdb
-npm install
-npm run test
+Always handle errors appropriately when working with chDB:
+
+```javascript
+const { query, Session } = require("chdb");
+
+// Error handling for standalone queries
+function safeQuery(sql, format = "CSV") {
+ try {
+ const result = query(sql, format);
+ return { success: true, data: result };
+ } catch (error) {
+ console.error("Query error:", error.message);
+ return { success: false, error: error.message };
+ }
+}
+
+// Example usage
+const result = safeQuery("SELECT invalid_syntax");
+if (result.success) {
+ console.log("Query result:", result.data);
+} else {
+ console.log("Query failed:", result.error);
+}
+
+// Error handling for sessions
+function safeSessionQuery() {
+ const session = new Session("./error-test");
+
+ try {
+ // This will throw an error due to invalid syntax
+ const result = session.query("CREATE TABLE invalid syntax", "CSV");
+ console.log("Unexpected success:", result);
+ } catch (error) {
+ console.error("Session query error:", error.message);
+ } finally {
+ // Always cleanup, even if an error occurred
+ session.cleanup();
+ }
+}
+
+safeSessionQuery();
```
+
+## GitHub repository {#github-repository}
+
+- **GitHub repository**: [chdb-io/chdb-node](https://github.com/chdb-io/chdb-node)
+- **Issues and support**: Report problems on the [GitHub repository](https://github.com/chdb-io/chdb-node/issues)
+- **NPM package**: [chdb on npm](https://www.npmjs.com/package/chdb)
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/nodejs.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/nodejs.md.hash
index fe9fd326416..9865fc0349a 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/nodejs.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/nodejs.md.hash
@@ -1 +1 @@
-63a51001fc4d0ca5
+7b69f28e40cc310d
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/python.md b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/python.md
index a59d11056e3..ac6afdd6943 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/python.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/python.md
@@ -9,14 +9,13 @@
- 'clickhouse-lite'
- 'python'
- 'install'
+'doc_type': 'guide'
---
-
-# 安装 chDB for Python
-
## 要求 {#requirements}
-在 macOS 和 Linux (x86_64 和 ARM64) 上,Python 3.8+
+- Python 3.8+
+- Supported platforms: macOS and Linux (x86_64 and ARM64)
## 安装 {#install}
@@ -26,242 +25,813 @@ pip install chdb
## 使用 {#usage}
-CLI 示例:
+### Command-line interface {#command-line-interface}
-```python
-python3 -m chdb [SQL] [OutputFormat]
-```
+Run SQL queries directly from the command line:
-```python
+```bash
+
+# Basic query
python3 -m chdb "SELECT 1, 'abc'" Pretty
+
+
+# Query with formatting
+python3 -m chdb "SELECT version()" JSON
```
-Python 文件示例:
+### Basic Python usage {#basic-python-usage}
```python
import chdb
-res = chdb.query("SELECT 1, 'abc'", "CSV")
-print(res, end="")
+
+# Simple query
+result = chdb.query("SELECT 1 as id, 'Hello World' as message", "CSV")
+print(result)
+
+
+# Get query statistics
+print(f"Rows read: {result.rows_read()}")
+print(f"Bytes read: {result.bytes_read()}")
+print(f"Execution time: {result.elapsed()} seconds")
```
-查询可以使用任何 [支持的格式](/interfaces/formats) 返回数据,以及 `Dataframe` 和 `Debug`。
+### Connection-based API (recommended) {#connection-based-api}
-## GitHub 仓库 {#github-repository}
+For better resource management and performance:
-您可以在 [chdb-io/chdb](https://github.com/chdb-io/chdb) 查找该项目的 GitHub 仓库。
+```python
+import chdb
-## 数据输入 {#data-input}
-以下方法可用于访问磁盘和内存中的数据格式:
+# Create connection (in-memory by default)
+conn = chdb.connect(":memory:")
-### 对文件的查询 (Parquet, CSV, JSON, Arrow, ORC 及 60+ 格式) {#query-on-file-parquet-csv-json-arrow-orc-and-60}
+# Or use file-based: conn = chdb.connect("mydata.db")
-您可以执行 SQL 并返回所需格式的数据。
-```python
-import chdb
-res = chdb.query('select version()', 'Pretty'); print(res)
+# Create cursor for query execution
+cur = conn.cursor()
+
+
+# Execute queries
+cur.execute("SELECT number, toString(number) as str FROM system.numbers LIMIT 3")
+
+
+# Fetch results in different ways
+print(cur.fetchone()) # Single row: (0, '0')
+print(cur.fetchmany(2)) # Multiple rows: ((1, '1'), (2, '2'))
+
+
+# Get metadata
+print(cur.column_names()) # ['number', 'str']
+print(cur.column_types()) # ['UInt64', 'String']
+
+
+# Use cursor as iterator
+for row in cur:
+ print(row)
+
+
+# Always close resources
+cur.close()
+conn.close()
```
-**处理 Parquet 或 CSV**
+## Data input methods {#data-input}
+
+### File-based data sources {#file-based-data-sources}
+
+chDB supports querying files directly in more than 70 data formats:
```python
+import chdb
+
+# Prepare your data
+
+# ...
+
-# See more data type format in tests/format_output.py
-res = chdb.query('select * from file("data.parquet", Parquet)', 'JSON'); print(res)
-res = chdb.query('select * from file("data.csv", CSV)', 'CSV'); print(res)
-print(f"SQL read {res.rows_read()} rows, {res.bytes_read()} bytes, elapsed {res.elapsed()} seconds")
+# Query Parquet files
+result = chdb.query("""
+ SELECT customer_id, sum(amount) as total
+ FROM file('sales.parquet', Parquet)
+ GROUP BY customer_id
+ ORDER BY total DESC
+ LIMIT 10
+""", 'JSONEachRow')
+
+
+# Query CSV with headers
+result = chdb.query("""
+ SELECT * FROM file('data.csv', CSVWithNames)
+ WHERE column1 > 100
+""", 'DataFrame')
+
+
+# Multiple file formats
+result = chdb.query("""
+ SELECT * FROM file('logs*.jsonl', JSONEachRow)
+ WHERE timestamp > '2024-01-01'
+""", 'Pretty')
```
-**Pandas DataFrame 输出**
+### Output format examples {#output-format-examples}
+
```python
-# See more in https://clickhouse.com/docs/interfaces/formats
-chdb.query('select * from file("data.parquet", Parquet)', 'Dataframe')
+# DataFrame for analysis
+df = chdb.query('SELECT * FROM system.numbers LIMIT 5', 'DataFrame')
+print(type(df))  # <class 'pandas.core.frame.DataFrame'>
+
+
+# Arrow Table for interoperability
+arrow_table = chdb.query('SELECT * FROM system.numbers LIMIT 5', 'ArrowTable')
+print(type(arrow_table))  # <class 'pyarrow.lib.Table'>
+
+
+# JSON for APIs
+json_result = chdb.query('SELECT version()', 'JSON')
+print(json_result)
+
+
+# Pretty format for debugging
+pretty_result = chdb.query('SELECT * FROM system.numbers LIMIT 3', 'Pretty')
+print(pretty_result)
```
-### 对表的查询 (Pandas DataFrame, Parquet 文件/字节, Arrow 字节) {#query-on-table-pandas-dataframe-parquet-filebytes-arrow-bytes}
+### DataFrame operations {#dataframe-operations}
-**对 Pandas DataFrame 的查询**
+#### Legacy DataFrame API {#legacy-dataframe-api}
```python
import chdb.dataframe as cdf
import pandas as pd
-# Join 2 DataFrames
+
+# Join multiple DataFrames
df1 = pd.DataFrame({'a': [1, 2, 3], 'b': ["one", "two", "three"]})
df2 = pd.DataFrame({'c': [1, 2, 3], 'd': ["①", "②", "③"]})
-ret_tbl = cdf.query(sql="select * from __tbl1__ t1 join __tbl2__ t2 on t1.a = t2.c",
- tbl1=df1, tbl2=df2)
-print(ret_tbl)
-# Query on the DataFrame Table
-print(ret_tbl.query('select b, sum(a) from __table__ group by b'))
+result_df = cdf.query(
+ sql="SELECT * FROM __tbl1__ t1 JOIN __tbl2__ t2 ON t1.a = t2.c",
+ tbl1=df1,
+ tbl2=df2
+)
+print(result_df)
+
+
+# Query the result DataFrame
+summary = result_df.query('SELECT b, sum(a) FROM __table__ GROUP BY b')
+print(summary)
```
-### 使用有状态会话的查询 {#query-with-stateful-session}
+#### Python table engine (recommended) {#python-table-engine-recommended}
+
+```python
+import chdb
+import pandas as pd
+import pyarrow as pa
+
-会话将保持查询的状态。所有 DDL 和 DML 状态将保存在一个目录中。目录路径可以作为参数传入。如果没有传入,将创建一个临时目录。
+# Query Pandas DataFrame directly
+df = pd.DataFrame({
+ "customer_id": [1, 2, 3, 1, 2],
+ "product": ["A", "B", "A", "C", "A"],
+ "amount": [100, 200, 150, 300, 250],
+ "metadata": [
+ {'category': 'electronics', 'priority': 'high'},
+ {'category': 'books', 'priority': 'low'},
+ {'category': 'electronics', 'priority': 'medium'},
+ {'category': 'clothing', 'priority': 'high'},
+ {'category': 'books', 'priority': 'low'}
+ ]
+})
+
+
+# Direct DataFrame querying with JSON support
+chdb.query("""
+ SELECT
+ customer_id,
+ sum(amount) as total_spent,
+ toString(metadata.category) as category
+ FROM Python(df)
+ WHERE toString(metadata.priority) = 'high'
+ GROUP BY customer_id, toString(metadata.category)
+ ORDER BY total_spent DESC
+""").show()
+
+
+# Query Arrow Table
+arrow_table = pa.table({
+ "id": [1, 2, 3, 4],
+ "name": ["Alice", "Bob", "Charlie", "David"],
+ "score": [98, 89, 86, 95]
+})
+
+chdb.query("""
+ SELECT name, score
+ FROM Python(arrow_table)
+ ORDER BY score DESC
+""").show()
+```
-如果未指定路径,临时目录将在会话对象被删除时删除。否则,路径将被保留。
+### Stateful sessions {#stateful-sessions}
-请注意,默认数据库是 `_local`,默认引擎是 `Memory`,这意味着所有数据都将存储在内存中。如果您想将数据存储在磁盘上,则应创建另一个数据库。
+Sessions maintain query state across multiple operations, enabling complex workflows:
```python
-from chdb import session as chs
-
-## Create DB, Table, View in temp session, auto cleanup when session is deleted.
-sess = chs.Session()
-sess.query("CREATE DATABASE IF NOT EXISTS db_xxx ENGINE = Atomic")
-sess.query("CREATE TABLE IF NOT EXISTS db_xxx.log_table_xxx (x String, y Int) ENGINE = Log;")
-sess.query("INSERT INTO db_xxx.log_table_xxx VALUES ('a', 1), ('b', 3), ('c', 2), ('d', 5);")
-sess.query(
- "CREATE VIEW db_xxx.view_xxx AS SELECT * FROM db_xxx.log_table_xxx LIMIT 4;"
+from chdb import session
+
+
+# Temporary session (auto-cleanup)
+sess = session.Session()
+
+
+# Or persistent session with specific path
+
+# sess = session.Session("/path/to/data")
+
+
+# Create database and tables
+sess.query("CREATE DATABASE IF NOT EXISTS analytics ENGINE = Atomic")
+sess.query("USE analytics")
+
+sess.query("""
+ CREATE TABLE sales (
+ id UInt64,
+ product String,
+ amount Decimal(10,2),
+ sale_date Date
+ ) ENGINE = MergeTree()
+ ORDER BY (sale_date, id)
+""")
+
+
+# Insert data
+sess.query("""
+ INSERT INTO sales VALUES
+ (1, 'Laptop', 999.99, '2024-01-15'),
+ (2, 'Mouse', 29.99, '2024-01-16'),
+ (3, 'Keyboard', 79.99, '2024-01-17')
+""")
+
+
+# Create materialized views
+sess.query("""
+ CREATE MATERIALIZED VIEW daily_sales AS
+ SELECT
+ sale_date,
+ count() as orders,
+ sum(amount) as revenue
+ FROM sales
+ GROUP BY sale_date
+""")
+
+
+# Query the view
+result = sess.query("SELECT * FROM daily_sales ORDER BY sale_date", "Pretty")
+print(result)
+
+
+# Session automatically manages resources
+sess.close() # Optional - auto-closed when object is deleted
+```
+
+### Advanced session features {#advanced-session-features}
+
+```python
+
+# Session with custom settings
+sess = session.Session(
+ path="/tmp/analytics_db",
)
-print("Select from view:\n")
-print(sess.query("SELECT * FROM db_xxx.view_xxx", "Pretty"))
+
+
+# Query performance optimization
+result = sess.query("""
+ SELECT product, sum(amount) as total
+ FROM sales
+ GROUP BY product
+ ORDER BY total DESC
+ SETTINGS max_threads = 4
+""", "JSON")
```
-另请参见: [test_stateful.py](https://github.com/chdb-io/chdb/blob/main/tests/test_stateful.py)。
+See also: [test_stateful.py](https://github.com/chdb-io/chdb/blob/main/tests/test_stateful.py).
+
+### Python DB-API 2.0 interface {#python-db-api-20}
-### 使用 Python DB-API 2.0 的查询 {#query-with-python-db-api-20}
+A standard database interface for compatibility with existing Python applications:
```python
import chdb.dbapi as dbapi
-print("chdb driver version: {0}".format(dbapi.get_client_info()))
-
-conn1 = dbapi.connect()
-cur1 = conn1.cursor()
-cur1.execute('select version()')
-print("description: ", cur1.description)
-print("data: ", cur1.fetchone())
-cur1.close()
-conn1.close()
+
+
+# Check driver information
+print(f"chDB driver version: {dbapi.get_client_info()}")
+
+
+# Create connection
+conn = dbapi.connect()
+cursor = conn.cursor()
+
+
+# Execute queries with parameters
+cursor.execute("""
+    SELECT number, number * %s as doubled
+    FROM system.numbers
+    LIMIT %s
+""", (2, 5))
+
+
+# Get metadata
+print("Column descriptions:", cursor.description)
+print("Row count:", cursor.rowcount)
+
+
+# Fetch results
+print("First row:", cursor.fetchone())
+print("Next 2 rows:", cursor.fetchmany(2))
+
+
+# Fetch remaining rows
+for row in cursor.fetchall():
+ print("Row:", row)
+
+
+# Batch operations
+data = [(1, 'Alice'), (2, 'Bob'), (3, 'Charlie')]
+cursor.execute("""
+ CREATE TABLE temp_users (
+ id UInt64,
+ name String
+ ) ENGINE = MergeTree()
+ ORDER BY (id)
+""")
+cursor.executemany(
+    "INSERT INTO temp_users (id, name) VALUES (%s, %s)",
+    data
+)
```
-### 使用 UDF (用户定义函数) 的查询 {#query-with-udf-user-defined-functions}
+### User-defined functions (UDFs) {#user-defined-functions}
+
+Extend SQL with custom Python functions:
+
+#### Basic UDF usage {#basic-udf-usage}
```python
from chdb.udf import chdb_udf
from chdb import query
+
+# Simple mathematical function
@chdb_udf()
-def sum_udf(lhs, rhs):
- return int(lhs) + int(rhs)
+def add_numbers(a, b):
+ return int(a) + int(b)
-print(query("select sum_udf(12,22)"))
+
+# String processing function
+@chdb_udf()
+def reverse_string(text):
+ return text[::-1]
+
+
+# JSON processing function
+@chdb_udf()
+def extract_json_field(json_str, field):
+ import json
+ try:
+ data = json.loads(json_str)
+ return str(data.get(field, ''))
+ except:
+ return ''
+
+
+# Use UDFs in queries
+result = query("""
+ SELECT
+ add_numbers('10', '20') as sum_result,
+ reverse_string('hello') as reversed,
+ extract_json_field('{"name": "John", "age": 30}', 'name') as name
+""")
+print(result)
```
-关于 chDB Python UDF (用户定义函数) 装饰器的一些说明。
-1. 该函数应为无状态的。仅支持 UDF,不支持 UDAF(用户定义聚合函数)。
-2. 默认返回类型为字符串。如果您想更改返回类型,可以将返回类型作为参数传入。返回类型应为 [以下类型之一](/sql-reference/data-types)。
-3. 该函数应接受类型为字符串的参数。由于输入为制表符分隔的所有参数都是字符串。
-4. 该函数将在每一行输入时被调用。例如:
+#### Advanced UDFs with custom return types {#advanced-udf-custom-return-types}
+
```python
-def sum_udf(lhs, rhs):
- return int(lhs) + int(rhs)
-
-for line in sys.stdin:
- args = line.strip().split('\t')
- lhs = args[0]
- rhs = args[1]
- print(sum_udf(lhs, rhs))
- sys.stdout.flush()
+
+# UDF with specific return type
+@chdb_udf(return_type="Float64")
+def calculate_bmi(height_str, weight_str):
+ height = float(height_str) / 100 # Convert cm to meters
+ weight = float(weight_str)
+ return weight / (height * height)
+
+
+# UDF for data validation
+@chdb_udf(return_type="UInt8")
+def is_valid_email(email):
+ import re
+ pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
+ return 1 if re.match(pattern, email) else 0
+
+
+# Use in complex queries
+result = query("""
+ SELECT
+ name,
+ calculate_bmi(height, weight) as bmi,
+ is_valid_email(email) as has_valid_email
+ FROM (
+ SELECT
+ 'John' as name, '180' as height, '75' as weight, 'john@example.com' as email
+ UNION ALL
+ SELECT
+ 'Jane' as name, '165' as height, '60' as weight, 'invalid-email' as email
+ )
+""", "Pretty")
+print(result)
```
-5. 该函数应是一个纯 Python 函数。您应该导入所有在 **函数内部** 使用的 Python 模块。
+
+#### UDF best practices {#udf-best-practices}
+
+1. **Stateless functions**: UDFs should be pure functions without side effects
+2. **Imports inside the function**: every module a UDF uses must be imported inside the function body
+3. **String input/output**: all UDF arguments arrive as strings (TabSeparated format)
+4. **Error handling**: include try/except blocks to keep UDFs robust
+5. **Performance**: UDFs are called once per input row, so optimize accordingly
+
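The string-in/string-out convention (point 3 above) can be illustrated with a plain-Python sketch. This is a simplified picture of the calling convention, not the actual chDB internals: each input row reaches the UDF as tab-separated strings, so the function must convert types itself:

```python
def add_numbers(a, b):
    # Every argument arrives as a string; convert types explicitly
    return int(a) + int(b)

# chDB conceptually invokes the UDF once per TabSeparated input row
rows = ["10\t20", "3\t4", "100\t-1"]
results = [add_numbers(*line.split("\t")) for line in rows]
print(results)  # [30, 7, 99]
```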
```python
-def func_use_json(arg):
+
+# Well-structured UDF with error handling
+@chdb_udf(return_type="String")
+def safe_json_extract(json_str, path):
import json
- ...
+ try:
+ data = json.loads(json_str)
+ keys = path.split('.')
+ result = data
+ for key in keys:
+ if isinstance(result, dict) and key in result:
+ result = result[key]
+ else:
+ return 'null'
+ return str(result)
+ except Exception as e:
+ return f'error: {str(e)}'
+
+
+# Use with complex nested JSON
+query("""
+ SELECT safe_json_extract(
+ '{"user": {"profile": {"name": "Alice", "age": 25}}}',
+ 'user.profile.name'
+ ) as extracted_name
+""")
```
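Because a UDF body is ordinary Python, you can unit-test the logic directly before registering it with chDB. Below is the same traversal logic as `safe_json_extract` above, written as a plain function (without the `@chdb_udf` decorator) so it runs standalone:

```python
import json

def extract_path(json_str, path):
    # Same traversal logic as the safe_json_extract UDF above,
    # as a plain function so it can be tested without chDB
    try:
        data = json.loads(json_str)
        result = data
        for key in path.split('.'):
            if isinstance(result, dict) and key in result:
                result = result[key]
            else:
                return 'null'
        return str(result)
    except Exception as e:
        return f'error: {str(e)}'

print(extract_path('{"user": {"profile": {"name": "Alice"}}}', 'user.profile.name'))  # Alice
print(extract_path('{"a": 1}', 'a.b'))  # null
```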
-6. 使用的 Python 解释器与运行脚本所使用的相同。您可以通过 `sys.executable` 获取它。
-另请参见: [test_udf.py](https://github.com/chdb-io/chdb/blob/main/tests/test_udf.py)。
+### Streaming query processing {#streaming-queries}
+
+Process large datasets with constant memory usage:
+
+```python
+from chdb import session
+
+sess = session.Session()
+
+
+# Setup large dataset
+sess.query("""
+ CREATE TABLE large_data ENGINE = Memory() AS
+ SELECT number as id, toString(number) as data
+ FROM numbers(1000000)
+""")
+
+
+# Example 1: Basic streaming with context manager
+total_rows = 0
+with sess.send_query("SELECT * FROM large_data", "CSV") as stream:
+ for chunk in stream:
+ chunk_rows = len(chunk.data().split('\n')) - 1
+ total_rows += chunk_rows
+ print(f"Processed chunk: {chunk_rows} rows")
+
+ # Early termination if needed
+ if total_rows > 100000:
+ break
+
+print(f"Total rows processed: {total_rows}")
+
+
+# Example 2: Manual iteration with explicit cleanup
+stream = sess.send_query("SELECT * FROM large_data WHERE id % 100 = 0", "JSONEachRow")
+processed_count = 0
+
+while True:
+ chunk = stream.fetch()
+ if chunk is None:
+ break
+
+ # Process chunk data
+ lines = chunk.data().strip().split('\n')
+ for line in lines:
+ if line: # Skip empty lines
+ processed_count += 1
+
+ print(f"Processed {processed_count} records so far...")
+
+stream.close() # Important: explicit cleanup
+
+
+# Example 3: Arrow integration for external libraries
+import pyarrow as pa
+from deltalake import write_deltalake
+
+
+# Stream results in Arrow format
+stream = sess.send_query("SELECT * FROM large_data LIMIT 100000", "Arrow")
+
+
+# Create RecordBatchReader with custom batch size
+batch_reader = stream.record_batch(rows_per_batch=10000)
+
+
+# Export to Delta Lake
+write_deltalake(
+ table_or_uri="./my_delta_table",
+ data=batch_reader,
+ mode="overwrite"
+)
+
+stream.close()
+sess.close()
+```
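The chunk handling in the streaming examples above boils down to parsing newline-delimited JSON, and that part can be sketched and verified without chDB (`parse_jsoneachrow` is a hypothetical helper name, not part of the chdb API):

```python
import json

def parse_jsoneachrow(payload):
    # Split a JSONEachRow chunk into one dict per non-empty line
    return [json.loads(line) for line in payload.strip().split("\n") if line]

chunk = '{"customer_id": 1, "total": 350}\n{"customer_id": 2, "total": 450}\n'
rows = parse_jsoneachrow(chunk)
running_total = sum(row["total"] for row in rows)
print(rows)
print(running_total)  # 800
```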
### Python 表引擎 {#python-table-engine}
-### 对 Pandas DataFrame 的查询 {#query-on-pandas-dataframe}
+#### Querying Pandas DataFrames {#query-pandas-dataframes}
```python
import chdb
import pandas as pd
-df = pd.DataFrame(
- {
- "a": [1, 2, 3, 4, 5, 6],
- "b": ["tom", "jerry", "auxten", "tom", "jerry", "auxten"],
- }
-)
-chdb.query("SELECT b, sum(a) FROM Python(df) GROUP BY b ORDER BY b").show()
+
+# Complex DataFrame with nested data
+df = pd.DataFrame({
+ "customer_id": [1, 2, 3, 4, 5, 6],
+ "customer_name": ["Alice", "Bob", "Charlie", "Alice", "Bob", "David"],
+ "orders": [
+ {"order_id": 101, "amount": 250.50, "items": ["laptop", "mouse"]},
+ {"order_id": 102, "amount": 89.99, "items": ["book"]},
+ {"order_id": 103, "amount": 1299.99, "items": ["phone", "case", "charger"]},
+ {"order_id": 104, "amount": 45.50, "items": ["pen", "paper"]},
+ {"order_id": 105, "amount": 199.99, "items": ["headphones"]},
+ {"order_id": 106, "amount": 15.99, "items": ["cable"]}
+ ]
+})
+
+
+# Advanced querying with JSON operations
+chdb.query("""
+ SELECT
+ customer_name,
+ count() as order_count,
+ sum(toFloat64(orders.amount)) as total_spent,
+ arrayStringConcat(
+ arrayDistinct(
+ arrayFlatten(
+ groupArray(orders.items)
+ )
+ ),
+ ', '
+ ) as all_items
+ FROM Python(df)
+ GROUP BY customer_name
+ HAVING total_spent > 100
+ ORDER BY total_spent DESC
+""").show()
+
+
+# Window functions on DataFrames
+window_result = chdb.query("""
+ SELECT
+ customer_name,
+ toFloat64(orders.amount) as amount,
+ sum(toFloat64(orders.amount)) OVER (
+ PARTITION BY customer_name
+ ORDER BY toInt32(orders.order_id)
+ ) as running_total
+ FROM Python(df)
+ ORDER BY customer_name, toInt32(orders.order_id)
+""", "Pretty")
+print(window_result)
```
-### 对 Arrow 表的查询 {#query-on-arrow-table}
+#### Custom data sources with PyReader {#custom-data-sources-pyreader}
+
+Implement custom data readers for specialized data sources:
```python
import chdb
-import pyarrow as pa
-arrow_table = pa.table(
- {
- "a": [1, 2, 3, 4, 5, 6],
- "b": ["tom", "jerry", "auxten", "tom", "jerry", "auxten"],
- }
-)
+from typing import List, Tuple, Any
+import json
+
+class DatabaseReader(chdb.PyReader):
+ """Custom reader for database-like data sources"""
-chdb.query(
- "SELECT b, sum(a) FROM Python(arrow_table) GROUP BY b ORDER BY b", "debug"
-).show()
+ def __init__(self, connection_string: str):
+ # Simulate database connection
+ self.data = self._load_data(connection_string)
+ self.cursor = 0
+ self.batch_size = 1000
+ super().__init__(self.data)
+
+ def _load_data(self, conn_str):
+ # Simulate loading from database
+ return {
+ "id": list(range(1, 10001)),
+ "name": [f"user_{i}" for i in range(1, 10001)],
+ "score": [i * 10 + (i % 7) for i in range(1, 10001)],
+ "metadata": [
+ json.dumps({"level": i % 5, "active": i % 3 == 0})
+ for i in range(1, 10001)
+ ]
+ }
+
+ def get_schema(self) -> List[Tuple[str, str]]:
+ """Define table schema with explicit types"""
+ return [
+ ("id", "UInt64"),
+ ("name", "String"),
+ ("score", "Int64"),
+ ("metadata", "String") # JSON stored as string
+ ]
+
+ def read(self, col_names: List[str], count: int) -> List[List[Any]]:
+ """Read data in batches"""
+ if self.cursor >= len(self.data["id"]):
+ return [] # No more data
+
+ end_pos = min(self.cursor + min(count, self.batch_size), len(self.data["id"]))
+
+ # Return data for requested columns
+ result = []
+ for col in col_names:
+ if col in self.data:
+ result.append(self.data[col][self.cursor:end_pos])
+ else:
+ # Handle missing columns
+ result.append([None] * (end_pos - self.cursor))
+
+ self.cursor = end_pos
+ return result
+
+# Query the custom reader through the Python table engine
+reader = DatabaseReader("mock://analytics")
+result = chdb.query("""
+    SELECT name, score
+    FROM Python(reader)
+    ORDER BY score DESC
+    LIMIT 5
+""", "Pretty")
+print(result)
+```
+
+### JSON type inference and handling {#json-type-inference-handling}
+
+chDB automatically handles complex nested data structures:
+
+```python
+import pandas as pd
+import chdb
+
+
+# DataFrame with mixed JSON objects
+df_with_json = pd.DataFrame({
+ "user_id": [1, 2, 3, 4],
+ "profile": [
+ {"name": "Alice", "age": 25, "preferences": ["music", "travel"]},
+ {"name": "Bob", "age": 30, "location": {"city": "NYC", "country": "US"}},
+ {"name": "Charlie", "skills": ["python", "sql", "ml"], "experience": 5},
+ {"score": 95, "rank": "gold", "achievements": [{"title": "Expert", "date": "2024-01-01"}]}
+ ]
+})
+
+
+# Control JSON inference with settings
+result = chdb.query("""
+ SELECT
+ user_id,
+ profile.name as name,
+ profile.age as age,
+ length(profile.preferences) as pref_count,
+ profile.location.city as city
+ FROM Python(df_with_json)
+ SETTINGS pandas_analyze_sample = 1000 -- Analyze all rows for JSON detection
+""", "Pretty")
+print(result)
+
+
+# Advanced JSON operations
+complex_json = chdb.query("""
+ SELECT
+ user_id,
+ JSONLength(toString(profile)) as json_fields,
+ JSONType(toString(profile), 'preferences') as pref_type,
+ if(
+ JSONHas(toString(profile), 'achievements'),
+ JSONExtractString(toString(profile), 'achievements[0].title'),
+ 'None'
+ ) as first_achievement
+ FROM Python(df_with_json)
+""", "JSONEachRow")
+print(complex_json)
```
-### 对 chdb.PyReader 类实例的查询 {#query-on-chdbpyreader-class-instance}
+## Performance and optimization {#performance-optimization}
+
+### Benchmarks {#benchmarks}
-1. 您必须继承 chdb.PyReader 类并实现 `read` 方法。
-2. `read` 方法应:
- 1. 返回一个列表的列表,第一维是列,第二维是行,列的顺序应与 `read` 的第一个参数 `col_names` 相同。
- 1. 在没有更多数据可读时返回空列表。
- 1. 是有状态的,游标应在 `read` 方法中更新。
-3. 可选的 `get_schema` 方法可以实现以返回表的模式。原型为 `def get_schema(self) -> List[Tuple[str, str]]:`,返回值是一个元组列表,每个元组包含列名和列类型。列类型应为 [以下类型之一](/sql-reference/data-types)。
+chDB consistently performs strongly among embedded engines:
+- **DataFrame operations**: 2-5x faster than traditional DataFrame libraries for analytical queries
+- **Parquet processing**: competitive with leading columnar engines
+- **Memory efficiency**: lower memory footprint than comparable options
-
+[See the full benchmark results](https://github.com/chdb-io/chdb?tab=readme-ov-file#benchmark)
+
+### Performance tips {#performance-tips}
```python
import chdb
-class myReader(chdb.PyReader):
- def __init__(self, data):
- self.data = data
- self.cursor = 0
- super().__init__(data)
-
- def read(self, col_names, count):
- print("Python func read", col_names, count, self.cursor)
- if self.cursor >= len(self.data["a"]):
- return []
- block = [self.data[col] for col in col_names]
- self.cursor += len(block[0])
- return block
-
-reader = myReader(
- {
- "a": [1, 2, 3, 4, 5, 6],
- "b": ["tom", "jerry", "auxten", "tom", "jerry", "auxten"],
- }
-)
-chdb.query(
- "SELECT b, sum(a) FROM Python(reader) GROUP BY b ORDER BY b"
-).show()
-```
+# 1. Use appropriate output formats
+df_result = chdb.query("SELECT * FROM large_table", "DataFrame") # For analysis
+arrow_result = chdb.query("SELECT * FROM large_table", "Arrow") # For interop
+native_result = chdb.query("SELECT * FROM large_table", "Native") # For chDB-to-chDB
+
+
+# 2. Optimize queries with settings
+fast_result = chdb.query("""
+ SELECT customer_id, sum(amount)
+ FROM sales
+ GROUP BY customer_id
+ SETTINGS
+ max_threads = 8,
+ max_memory_usage = '4G',
+ use_uncompressed_cache = 1
+""", "DataFrame")
+
+
+# 3. Leverage streaming for large datasets
+from chdb import session
+
+sess = session.Session()
+
+
+# Setup large dataset
+sess.query("""
+ CREATE TABLE large_sales ENGINE = Memory() AS
+ SELECT
+ number as sale_id,
+ number % 1000 as customer_id,
+ rand() % 1000 as amount
+ FROM numbers(10000000)
+""")
-另请参见: [test_query_py.py](https://github.com/chdb-io/chdb/blob/main/tests/test_query_py.py)。
-## 限制 {#limitations}
+# Stream processing with constant memory usage
+total_amount = 0
+processed_rows = 0
-1. 支持的列类型:`pandas.Series`, `pyarrow.array`,`chdb.PyReader`
-1. 支持的数据类型:Int, UInt, Float, String, Date, DateTime, Decimal
-1. Python 对象类型将转化为字符串
-1. Pandas DataFrame 的性能最佳,Arrow 表优于 PyReader
+with sess.send_query("SELECT customer_id, sum(amount) as total FROM large_sales GROUP BY customer_id", "JSONEachRow") as stream:
+ for chunk in stream:
+ lines = chunk.data().strip().split('\n')
+ for line in lines:
+ if line: # Skip empty lines
+ import json
+ row = json.loads(line)
+ total_amount += row['total']
+ processed_rows += 1
-
+ print(f"Processed {processed_rows} customer records, running total: {total_amount}")
+
+ # Early termination for demo
+ if processed_rows > 1000:
+ break
+
+print(f"Final result: {processed_rows} customers processed, total amount: {total_amount}")
+
+
+# Stream to external systems (e.g., Delta Lake)
+stream = sess.send_query("SELECT * FROM large_sales LIMIT 1000000", "Arrow")
+batch_reader = stream.record_batch(rows_per_batch=50000)
+
+
+# Process in batches
+for batch in batch_reader:
+ print(f"Processing batch with {batch.num_rows} rows...")
+ # Transform or export each batch
+ # df_batch = batch.to_pandas()
+ # process_batch(df_batch)
+
+stream.close()
+sess.close()
+```
+
+## GitHub repository {#github-repository}
-更多示例,请参见 [examples](https://github.com/chdb-io/chdb/tree/main/examples) 和 [tests](https://github.com/chdb-io/chdb/tree/main/tests)。
+- **Main repository**: [chdb-io/chdb](https://github.com/chdb-io/chdb)
+- **Issues and support**: Report problems on the [GitHub repository](https://github.com/chdb-io/chdb/issues)
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/python.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/python.md.hash
index aca9c2fa810..a994ddbeaec 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/python.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/python.md.hash
@@ -1 +1 @@
-ecb29b9291e6fa5c
+a7d9de3caef570e5
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/rust.md b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/rust.md
index 7486086f1d3..f449e492b53 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/rust.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/rust.md
@@ -1,28 +1,169 @@
---
-'title': '为 Rust 安装 chDB'
+'title': 'chDB for Rust'
'sidebar_label': 'Rust'
'slug': '/chdb/install/rust'
-'description': '如何为 Rust 安装 chDB'
+'description': 'How to install and use the chDB Rust bindings'
'keywords':
- 'chdb'
- 'embedded'
- 'clickhouse-lite'
-- 'bun'
+- 'rust'
- 'install'
+- 'ffi'
+- 'bindings'
+'doc_type': 'guide'
---
-## 要求 {#requirements}
-安装 [libchdb](https://github.com/chdb-io/chdb):
+# chDB for Rust {#chdb-for-rust}
+
+chDB-rust provides experimental FFI (Foreign Function Interface) bindings for chDB, letting you run ClickHouse queries directly in Rust applications without any external dependencies.
+
+## Installation {#installation}
+
+### Install libchdb {#install-libchdb}
+
+Install the chDB library:
```bash
curl -sL https://lib.chdb.io | bash
```
-## 用法 {#usage}
+## Usage {#usage}
+
+chDB Rust offers both stateless and stateful query execution modes.
+
+### Stateless usage {#stateless-usage}
+
+For simple queries without persistent state:
+
+```rust
+use chdb_rust::{execute, arg::Arg, format::OutputFormat};
+
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+ // Execute a simple query
+ let result = execute(
+ "SELECT version()",
+ Some(&[Arg::OutputFormat(OutputFormat::JSONEachRow)])
+ )?;
+ println!("ClickHouse version: {}", result.data_utf8()?);
+
+ // Query with CSV file
+ let result = execute(
+ "SELECT * FROM file('data.csv', 'CSV')",
+ Some(&[Arg::OutputFormat(OutputFormat::JSONEachRow)])
+ )?;
+ println!("CSV data: {}", result.data_utf8()?);
+
+ Ok(())
+}
+```
+
+### Stateful usage (Sessions) {#stateful-usage-sessions}
+
+For queries that need persistent state, such as databases and tables:
+
+```rust
+use chdb_rust::{
+ session::SessionBuilder,
+ arg::Arg,
+ format::OutputFormat,
+ log_level::LogLevel
+};
+use tempdir::TempDir;
+
+fn main() -> Result<(), Box<dyn std::error::Error>> {
+ // Create a temporary directory for database storage
+ let tmp = TempDir::new("chdb-rust")?;
+
+ // Build session with configuration
+ let session = SessionBuilder::new()
+ .with_data_path(tmp.path())
+ .with_arg(Arg::LogLevel(LogLevel::Debug))
+ .with_auto_cleanup(true) // Cleanup on drop
+ .build()?;
+
+ // Create database and table
+ session.execute(
+ "CREATE DATABASE demo; USE demo",
+ Some(&[Arg::MultiQuery])
+ )?;
+
+ session.execute(
+ "CREATE TABLE logs (id UInt64, msg String) ENGINE = MergeTree() ORDER BY id",
+ None,
+ )?;
+
+ // Insert data
+ session.execute(
+ "INSERT INTO logs (id, msg) VALUES (1, 'Hello'), (2, 'World')",
+ None,
+ )?;
+
+ // Query data
+ let result = session.execute(
+ "SELECT * FROM logs ORDER BY id",
+ Some(&[Arg::OutputFormat(OutputFormat::JSONEachRow)]),
+ )?;
+
+ println!("Query results:\n{}", result.data_utf8()?);
+
+ // Get query statistics
+ println!("Rows read: {}", result.rows_read());
+ println!("Bytes read: {}", result.bytes_read());
+ println!("Query time: {:?}", result.elapsed());
+
+ Ok(())
+}
+```
+
+## Building and testing {#building-testing}
+
+### Build the project {#build-the-project}
+
+```bash
+cargo build
+```
+
+### Run tests {#run-tests}
+
+```bash
+cargo test
+```
+
+### Development dependencies {#development-dependencies}
-此绑定仍在进行中。请按照 [chdb-rust](https://github.com/chdb-io/chdb-rust) 中的说明开始。
+该项目包括以下开发依赖:
+- `bindgen` (v0.70.1) - 从 C 头文件生成 FFI 绑定
+- `tempdir` (v0.3.7) - 在测试中处理临时目录
+- `thiserror` (v1) - 错误处理工具
+
+## Error handling {#error-handling}
+
+chDB Rust 通过 `Error` 枚举提供全面的错误处理:
+
+```rust
+use chdb_rust::{execute, error::Error};
+
+fn main() {
+    match execute("SELECT 1", None) {
+        Ok(result) => {
+            println!("Success: {}", result.data_utf8().unwrap());
+        },
+        Err(Error::QueryError(msg)) => {
+            eprintln!("Query failed: {}", msg);
+        },
+        Err(Error::NoResult) => {
+            eprintln!("No result returned");
+        },
+        Err(Error::NonUtf8Sequence(e)) => {
+            eprintln!("Invalid UTF-8: {}", e);
+        },
+        Err(e) => {
+            eprintln!("Other error: {}", e);
+        }
+    }
+}
+```
-## GitHub 仓库 {#github-repository}
+## GitHub repository {#github-repository}
您可以在 [chdb-io/chdb-rust](https://github.com/chdb-io/chdb-rust) 找到该项目的 GitHub 仓库。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/rust.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/rust.md.hash
index 78064df2556..0e82181b79c 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/rust.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/install/rust.md.hash
@@ -1 +1 @@
-5e56761c0e898d17
+a90aab846dc359bd
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/data-formats.md b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/data-formats.md
index 899876be534..2a8423fce4a 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/data-formats.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/data-formats.md
@@ -6,97 +6,105 @@
'keywords':
- 'chdb'
- 'data formats'
+'doc_type': 'reference'
---
当涉及到数据格式时,chDB 与 ClickHouse 具有 100% 的功能兼容性。
-输入格式用于解析提供给 `INSERT` 和 `SELECT` 的数据,这些数据来自文件支持的表,例如 `File`、`URL` 或 `S3`。
-输出格式用于整理 `SELECT` 的结果,并将 `INSERT` 语句写入到文件支持的表中。
-除了 ClickHouse 支持的数据格式,chDB 还支持:
+输入格式用于解析提供给 `INSERT` 和 `SELECT` 的数据,这些数据来自于文件支持的表,例如 `File`、`URL` 或 `S3`。
+输出格式用于整理 `SELECT` 的结果,以及将数据 `INSERT` 到文件支持的表中。
+除了 ClickHouse 支持的数据格式外,chDB 还支持:
-- `ArrowTable` 作为输出格式,其类型为 Python `pyarrow.Table`
-- `DataFrame` 作为输入和输出格式,其类型为 Python `pandas.DataFrame`。有关示例,请参见 [`test_joindf.py`](https://github.com/chdb-io/chdb/blob/main/tests/test_joindf.py)
-- `Debug` 作为输出(作为 `CSV` 的别名),但启用了来自 ClickHouse 的调试详细输出。
+- `ArrowTable` 作为输出格式,类型为 Python `pyarrow.Table`
+- `DataFrame` 作为输入和输出格式,类型为 Python `pandas.DataFrame`。有关示例,请参见 [`test_joindf.py`](https://github.com/chdb-io/chdb/blob/main/tests/test_joindf.py)
+- `Debug` 作为输出(作为 `CSV` 的别名),但启用 ClickHouse 的调试详细输出。
ClickHouse 支持的数据格式包括:
-| 格式 | 输入 | 输出 |
-|-------------------------------|-------|--------|
-| TabSeparated | ✔ | ✔ |
-| TabSeparatedRaw | ✔ | ✔ |
-| TabSeparatedWithNames | ✔ | ✔ |
-| TabSeparatedWithNamesAndTypes | ✔ | ✔ |
-| TabSeparatedRawWithNames | ✔ | ✔ |
-| TabSeparatedRawWithNamesAndTypes| ✔ | ✔ |
-| Template | ✔ | ✔ |
-| TemplateIgnoreSpaces | ✔ | ✗ |
-| CSV | ✔ | ✔ |
-| CSVWithNames | ✔ | ✔ |
-| CSVWithNamesAndTypes | ✔ | ✔ |
-| CustomSeparated | ✔ | ✔ |
-| CustomSeparatedWithNames | ✔ | ✔ |
-| CustomSeparatedWithNamesAndTypes| ✔ | ✔ |
-| SQLInsert | ✗ | ✔ |
-| Values | ✔ | ✔ |
-| Vertical | ✗ | ✔ |
-| JSON | ✔ | ✔ |
-| JSONAsString | ✔ | ✗ |
-| JSONStrings | ✔ | ✔ |
-| JSONColumns | ✔ | ✔ |
-| JSONColumnsWithMetadata | ✔ | ✔ |
-| JSONCompact | ✔ | ✔ |
-| JSONCompactStrings | ✗ | ✔ |
-| JSONCompactColumns | ✔ | ✔ |
-| JSONEachRow | ✔ | ✔ |
-| PrettyJSONEachRow | ✗ | ✔ |
-| JSONEachRowWithProgress | ✗ | ✔ |
-| JSONStringsEachRow | ✔ | ✔ |
-| JSONStringsEachRowWithProgress | ✗ | ✔ |
-| JSONCompactEachRow | ✔ | ✔ |
-| JSONCompactEachRowWithNames | ✔ | ✔ |
-| JSONCompactEachRowWithNamesAndTypes | ✔ | ✔ |
-| JSONCompactStringsEachRow | ✔ | ✔ |
-| JSONCompactStringsEachRowWithNames | ✔ | ✔ |
+| 格式 | 输入 | 输出 |
+|---------------------------------|-------|-------|
+| TabSeparated | ✔ | ✔ |
+| TabSeparatedRaw | ✔ | ✔ |
+| TabSeparatedWithNames | ✔ | ✔ |
+| TabSeparatedWithNamesAndTypes | ✔ | ✔ |
+| TabSeparatedRawWithNames | ✔ | ✔ |
+| TabSeparatedRawWithNamesAndTypes| ✔ | ✔ |
+| Template | ✔ | ✔ |
+| TemplateIgnoreSpaces | ✔ | ✗ |
+| CSV | ✔ | ✔ |
+| CSVWithNames | ✔ | ✔ |
+| CSVWithNamesAndTypes | ✔ | ✔ |
+| CustomSeparated | ✔ | ✔ |
+| CustomSeparatedWithNames | ✔ | ✔ |
+| CustomSeparatedWithNamesAndTypes| ✔ | ✔ |
+| SQLInsert | ✗ | ✔ |
+| Values | ✔ | ✔ |
+| Vertical | ✗ | ✔ |
+| JSON | ✔ | ✔ |
+| JSONAsString | ✔ | ✗ |
+| JSONAsObject | ✔ | ✗ |
+| JSONStrings | ✔ | ✔ |
+| JSONColumns | ✔ | ✔ |
+| JSONColumnsWithMetadata | ✔ | ✔ |
+| JSONCompact | ✔ | ✔ |
+| JSONCompactStrings | ✗ | ✔ |
+| JSONCompactColumns | ✔ | ✔ |
+| JSONEachRow | ✔ | ✔ |
+| PrettyJSONEachRow | ✗ | ✔ |
+| JSONEachRowWithProgress | ✗ | ✔ |
+| JSONStringsEachRow | ✔ | ✔ |
+| JSONStringsEachRowWithProgress | ✗ | ✔ |
+| JSONCompactEachRow | ✔ | ✔ |
+| JSONCompactEachRowWithNames | ✔ | ✔ |
+| JSONCompactEachRowWithNamesAndTypes | ✔ | ✔ |
+| JSONCompactEachRowWithProgress | ✗ | ✔ |
+| JSONCompactStringsEachRow | ✔ | ✔ |
+| JSONCompactStringsEachRowWithNames | ✔ | ✔ |
| JSONCompactStringsEachRowWithNamesAndTypes | ✔ | ✔ |
-| JSONObjectEachRow | ✔ | ✔ |
-| BSONEachRow | ✔ | ✔ |
-| TSKV | ✔ | ✔ |
-| Pretty | ✗ | ✔ |
-| PrettyNoEscapes | ✗ | ✔ |
-| PrettyMonoBlock | ✗ | ✔ |
-| PrettyNoEscapesMonoBlock | ✗ | ✔ |
-| PrettyCompact | ✗ | ✔ |
-| PrettyCompactNoEscapes | ✗ | ✔ |
-| PrettyCompactMonoBlock | ✗ | ✔ |
-| PrettyCompactNoEscapesMonoBlock| ✗ | ✔ |
-| PrettySpace | ✗ | ✔ |
-| PrettySpaceNoEscapes | ✗ | ✔ |
-| PrettySpaceMonoBlock | ✗ | ✔ |
-| PrettySpaceNoEscapesMonoBlock | ✗ | ✔ |
-| Prometheus | ✗ | ✔ |
-| Protobuf | ✔ | ✔ |
-| ProtobufSingle | ✔ | ✔ |
-| Avro | ✔ | ✔ |
-| AvroConfluent | ✔ | ✗ |
-| Parquet | ✔ | ✔ |
-| ParquetMetadata | ✔ | ✗ |
-| Arrow | ✔ | ✔ |
-| ArrowStream | ✔ | ✔ |
-| ORC | ✔ | ✔ |
-| One | ✔ | ✗ |
-| RowBinary | ✔ | ✔ |
-| RowBinaryWithNames | ✔ | ✔ |
-| RowBinaryWithNamesAndTypes | ✔ | ✔ |
-| RowBinaryWithDefaults | ✔ | ✔ |
-| Native | ✔ | ✔ |
-| Null | ✗ | ✔ |
-| XML | ✗ | ✔ |
-| CapnProto | ✔ | ✔ |
-| LineAsString | ✔ | ✔ |
-| Regexp | ✔ | ✗ |
-| RawBLOB | ✔ | ✔ |
-| MsgPack | ✔ | ✔ |
-| MySQLDump | ✔ | ✗ |
-| Markdown | ✗ | ✔ |
+| JSONCompactStringsEachRowWithProgress | ✗ | ✔ |
+| JSONObjectEachRow | ✔ | ✔ |
+| BSONEachRow | ✔ | ✔ |
+| TSKV | ✔ | ✔ |
+| Pretty | ✗ | ✔ |
+| PrettyNoEscapes | ✗ | ✔ |
+| PrettyMonoBlock | ✗ | ✔ |
+| PrettyNoEscapesMonoBlock | ✗ | ✔ |
+| PrettyCompact | ✗ | ✔ |
+| PrettyCompactNoEscapes | ✗ | ✔ |
+| PrettyCompactMonoBlock | ✗ | ✔ |
+| PrettyCompactNoEscapesMonoBlock | ✗ | ✔ |
+| PrettySpace | ✗ | ✔ |
+| PrettySpaceNoEscapes | ✗ | ✔ |
+| PrettySpaceMonoBlock | ✗ | ✔ |
+| PrettySpaceNoEscapesMonoBlock | ✗ | ✔ |
+| Prometheus | ✗ | ✔ |
+| Protobuf | ✔ | ✔ |
+| ProtobufSingle | ✔ | ✔ |
+| ProtobufList | ✔ | ✔ |
+| Avro | ✔ | ✔ |
+| AvroConfluent | ✔ | ✗ |
+| Parquet | ✔ | ✔ |
+| ParquetMetadata | ✔ | ✗ |
+| Arrow | ✔ | ✔ |
+| ArrowStream | ✔ | ✔ |
+| ORC | ✔ | ✔ |
+| One | ✔ | ✗ |
+| Npy | ✔ | ✔ |
+| RowBinary | ✔ | ✔ |
+| RowBinaryWithNames | ✔ | ✔ |
+| RowBinaryWithNamesAndTypes | ✔ | ✔ |
+| RowBinaryWithDefaults | ✔ | ✗ |
+| Native | ✔ | ✔ |
+| Null | ✗ | ✔ |
+| XML | ✗ | ✔ |
+| CapnProto | ✔ | ✔ |
+| LineAsString | ✔ | ✔ |
+| Regexp | ✔ | ✗ |
+| RawBLOB | ✔ | ✔ |
+| MsgPack | ✔ | ✔ |
+| MySQLDump | ✔ | ✗ |
+| DWARF | ✔ | ✗ |
+| Markdown | ✗ | ✔ |
+| Form | ✔ | ✗ |
-有关更多信息和示例,请参见 [ClickHouse 输入和输出数据格式](/interfaces/formats)。
+如需进一步信息和示例,请参见 [ClickHouse 输入和输出数据格式](/interfaces/formats)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/data-formats.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/data-formats.md.hash
index 8cecc50002d..8827822075f 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/data-formats.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/data-formats.md.hash
@@ -1 +1 @@
-c012a1bdc7473677
+b7c920ba6e75ebd7
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/index.md b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/index.md
index 9ee67a4e9c6..d93942f0bfa 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/index.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/index.md
@@ -5,9 +5,10 @@
'keywords':
- 'chdb'
- 'data formats'
+'doc_type': 'reference'
---
-| 参考页面 |
+| 参考页面 |
|----------------------|
| [数据格式](/chdb/reference/data-formats) |
-| [SQL 参考](/chdb/reference/sql-reference) |
+| [SQL参考](/chdb/reference/sql-reference) |
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/index.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/index.md.hash
index 1ad71c93bc8..8b0ed5d16b0 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/index.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/index.md.hash
@@ -1 +1 @@
-4079d1f7568f6f2b
+e1ecb4a270740e58
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/sql-reference.md b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/sql-reference.md
index 9b33c66e5b4..3b3552ed849 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/sql-reference.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/sql-reference.md
@@ -6,19 +6,20 @@
'keywords':
- 'chdb'
- 'sql reference'
+'doc_type': 'reference'
---
chdb 支持与 ClickHouse 相同的 SQL 语法、语句、引擎和函数:
-| 主题 |
+| 主题 |
|----------------------------|
| [SQL 语法](/sql-reference/syntax) |
| [语句](/sql-reference/statements) |
| [表引擎](/engines/table-engines) |
| [数据库引擎](/engines/database-engines) |
-| [常规函数](/sql-reference/functions) |
+| [普通函数](/sql-reference/functions) |
| [聚合函数](/sql-reference/aggregate-functions) |
| [表函数](/sql-reference/table-functions) |
| [窗口函数](/sql-reference/window-functions) |
-有关更多信息和示例,请参见 [ClickHouse SQL 参考](/sql-reference).
+有关更多信息和示例,请参见 [ClickHouse SQL 参考](/sql-reference)。
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/sql-reference.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/sql-reference.md.hash
index 379625e2b43..6a8fccd6354 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/sql-reference.md.hash
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/chdb/reference/sql-reference.md.hash
@@ -1 +1 @@
-ede8d67f012db877
+962533c1f5ddf338
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud-index.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud-index.md
deleted file mode 100644
index c34752e1810..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud-index.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-'slug': '/cloud/overview'
-'keywords':
-- 'AWS'
-- 'Cloud'
-- 'serverless'
-'title': '概述'
-'hide_title': true
-'description': 'Cloud 的概述页面'
----
-
-import Content from '@site/i18n/zh/docusaurus-plugin-content-docs/current/about-us/cloud.md';
-
-
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud-index.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/cloud-index.md.hash
deleted file mode 100644
index 5d558480400..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud-index.md.hash
+++ /dev/null
@@ -1 +0,0 @@
-c9b65a8f4acfae0f
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_clickpipes_faq.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_clickpipes_faq.md
new file mode 100644
index 00000000000..213352f2d64
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_clickpipes_faq.md
@@ -0,0 +1,112 @@
+import Image from '@theme/IdealImage';
+import clickpipesPricingFaq1 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_1.png';
+import clickpipesPricingFaq2 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_2.png';
+import clickpipesPricingFaq3 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_3.png';
+
+
+
+什么是 ClickPipes 副本?
+
+ClickPipes 通过专用基础设施从远程数据源摄取数据,这些基础设施独立于 ClickHouse Cloud 服务运行和扩展。
+因此,它使用专用计算副本。
+下面的图表展示了简化的架构。
+
+对于流式 ClickPipes,ClickPipes 副本访问远程数据源(例如:Kafka 代理),拉取数据,处理并将其摄取到目标 ClickHouse 服务中。
+
+
+
+在对象存储 ClickPipes 的情况下,
+ClickPipes 副本协调数据加载任务
+(识别要复制的文件,维护状态,并移动分区),
+而数据则由 ClickHouse 服务直接拉取。
+
+
+
+
+
+
+
+副本的默认数量及其大小是多少?
+
+每个 ClickPipe 默认有 1 个副本,配备 2 GiB 的 RAM 和 0.5 vCPU。
+这对应于 **0.25** ClickHouse 计算单位(1 单位 = 8 GiB RAM,2 vCPUs)。
+
+
+
+
+
+ClickPipes 副本可以扩展吗?
+
+是的,流式 ClickPipes 可以横向和纵向扩展。
+横向扩展增加更多副本以提高吞吐量,而纵向扩展则增加分配给每个副本的资源(CPU 和 RAM),以处理更密集的工作负载。
+这可以在创建 ClickPipe 时配置,也可以随时在 **设置** -> **高级设置** -> **扩展** 中调整。
+
+
+
+
+
+我需要多少个 ClickPipes 副本?
+
+这取决于工作负载的吞吐量和延迟要求。
+我们建议从 1 个副本的默认值开始,测量延迟,并根据需要添加副本。
+请记住,对于 Kafka ClickPipes,您还必须相应地扩展 Kafka 代理的分区。
+扩展控件在每个流式 ClickPipe 的“设置”中可用。
+
+
+
+
+
+
+
+ClickPipes 的定价结构是什么样的?
+
+它由两个维度组成:
+- **计算**:每个单位每小时的价格
+ 计算代表运行 ClickPipes 副本 pod 的成本,无论它们是否正在积极摄取数据。
+ 它适用于所有 ClickPipes 类型。
+- **摄取的数据**:每 GB 定价
+ 摄取的数据费率适用于所有流式 ClickPipes
+ (Kafka、Confluent、Amazon MSK、Amazon Kinesis、Redpanda、WarpStream、Azure Event Hubs),用于通过副本 pod 传输的数据。
+ 摄取的数据大小(GB)按从数据源接收到的字节数(无论是否压缩)计费。
+
+
+
+
+
+ClickPipes 的公开价格是什么?
+
+- 计算:每单位每小时 \$0.20(每副本每小时 \$0.05)
+- 摄取数据:每 GB \$0.04
+
+
+
+
+
+在一个示例中看起来如何?
+
+例如,使用 Kafka 连接器以单个副本(0.25 计算单位)在 24 小时内摄取 1 TB 数据的成本为:
+
+$$
+(0.25 \times 0.20 \times 24) + (0.04 \times 1000) = \$41.2
+$$
+
+
+对于对象存储连接器(S3 和 GCS),只会产生 ClickPipes 计算成本,
+因为 ClickPipes pod 不处理数据,而只是协调由底层 ClickHouse 服务执行的数据传输:
+
+$$
+0.25 \times 0.20 \times 24 = \$1.2
+$$
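上面两个算例可以用一个简短的 Python 草图来核对(仅作示意:费率取自上文的公开价格,`clickpipe_cost` 是此处为说明而虚构的函数名,并非任何官方 API):

```python
def clickpipe_cost(compute_units: float, hours: float, ingested_gb: float = 0.0) -> float:
    """Estimate ClickPipes cost: compute at $0.20 per unit per hour,
    plus ingested data at $0.04 per GB (streaming ClickPipes only).
    Object storage ClickPipes pass ingested_gb=0: only compute is billed."""
    COMPUTE_PER_UNIT_HOUR = 0.20
    INGEST_PER_GB = 0.04
    return compute_units * COMPUTE_PER_UNIT_HOUR * hours + ingested_gb * INGEST_PER_GB

# Streaming (Kafka) example from the text: 1 replica = 0.25 units, 24 h, 1 TB ingested
print(round(clickpipe_cost(0.25, 24, 1000), 2))

# Object storage example: compute only
print(round(clickpipe_cost(0.25, 24), 2))
```

两次输出应分别与上文公式得到的 \$41.2 和 \$1.2 一致。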
+
+
+
+
+
+ClickPipes 定价与市场相比如何?
+
+ClickPipes 定价的理念是
+覆盖平台的运营成本,同时提供一种简单可靠的方式将数据移动到 ClickHouse Cloud。
+从这个角度来看,我们的市场分析表明我们的定价具有竞争力。
+
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_clickpipes_faq.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_clickpipes_faq.md.hash
new file mode 100644
index 00000000000..680d3b0de74
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_clickpipes_faq.md.hash
@@ -0,0 +1 @@
+19eddd9169b059eb
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_security_table_of_contents.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_security_table_of_contents.md
new file mode 100644
index 00000000000..a86a916dd32
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_security_table_of_contents.md
@@ -0,0 +1,8 @@
+| 页面 | 描述 |
+|---------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|
+| [共享责任模型](/cloud/security/shared-responsibility-model) | 了解 ClickHouse Cloud 和您的组织在不同服务类型之间如何划分安全责任。 |
+| [云访问管理](/cloud/security/cloud-access-management) | 管理用户访问,包括身份验证、单点登录(SSO)、基于角色的权限和团队邀请。 |
+| [连接性](/cloud/security/connectivity) | 配置安全网络访问,包括 IP 允许列表、私有网络、S3 数据访问和云 IP 地址管理。 |
+| [增强加密](/cloud/security/cmek) | 了解默认的 AES 256 加密以及如何启用透明数据加密(TDE)以提供额外的静态数据保护。 |
+| [审核日志记录](/cloud/security/audit-logging) | 设置并使用审核日志记录,以跟踪和监控您在 ClickHouse Cloud 环境中的活动。 |
+| [隐私和合规性](/cloud/security/privacy-compliance-overview) | 审查安全认证、合规标准,并了解如何管理您的个人信息和数据权利。 |
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_security_table_of_contents.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_security_table_of_contents.md.hash
new file mode 100644
index 00000000000..b81e45e251e
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/_snippets/_security_table_of_contents.md.hash
@@ -0,0 +1 @@
+5d9a7ef8c49befa0
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/_category_.yml b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/_category_.yml
deleted file mode 100644
index 1648e8a79cb..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/_category_.yml
+++ /dev/null
@@ -1,7 +0,0 @@
-label: 'Best Practices'
-collapsible: true
-collapsed: true
-link:
- type: generated-index
- title: Best Practices
- slug: /cloud/bestpractices/
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/index.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/index.md
deleted file mode 100644
index ee5e49a9a77..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/index.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-'slug': '/cloud/bestpractices'
-'keywords':
-- 'Cloud'
-- 'Best Practices'
-- 'Bulk Inserts'
-- 'Asynchronous Inserts'
-- 'Avoid Mutations'
-- 'Avoid Nullable Columns'
-- 'Avoid Optimize Final'
-- 'Low Cardinality Partitioning Key'
-- 'Multi Tenancy'
-- 'Usage Limits'
-'title': '概述'
-'hide_title': true
-'description': 'ClickHouse Cloud 中最佳实践部分的登录页面'
----
-
-
-# Best Practices in ClickHouse Cloud {#best-practices-in-clickhouse-cloud}
-
-本节提供您希望遵循的最佳实践,以充分利用 ClickHouse Cloud。
-
-| 页面 | 描述 |
-|----------------------------------------------------------|----------------------------------------------------------------------------|
-| [使用限制](/cloud/bestpractices/usage-limits)| 探索 ClickHouse 的限制。 |
-| [多租户](/cloud/bestpractices/multi-tenancy)| 了解实施多租户的不同策略。 |
-
-这些是适用于所有 ClickHouse 部署的标准最佳实践的补充。
-
-| 页面 | 描述 |
-|----------------------------------------------------------------------|--------------------------------------------------------------------------|
-| [选择主键](/best-practices/choosing-a-primary-key) | 关于在 ClickHouse 中选择有效主键的指导。 |
-| [选择数据类型](/best-practices/select-data-types) | 选择合适数据类型的建议。 |
-| [使用物化视图](/best-practices/use-materialized-views) | 何时以及如何受益于物化视图。 |
-| [最小化和优化 JOIN](/best-practices/minimize-optimize-joins)| 最小化和优化 JOIN 操作的最佳实践。 |
-| [选择分区键](/best-practices/choosing-a-partitioning-key) | 如何有效选择和应用分区键。 |
-| [选择插入策略](/best-practices/selecting-an-insert-strategy) | 在 ClickHouse 中高效数据插入的策略。 |
-| [数据跳过索引](/best-practices/use-data-skipping-indices-where-appropriate) | 何时应用数据跳过索引以提高性能。 |
-| [避免变更](/best-practices/avoid-mutations) | 避免变更的理由及如何在设计中不使用它们。 |
-| [避免 OPTIMIZE FINAL](/best-practices/avoid-optimize-final) | 为什么 `OPTIMIZE FINAL` 可能代价高昂以及如何规避它。 |
-| [适当使用 JSON](/best-practices/use-json-where-appropriate) | 在 ClickHouse 中使用 JSON 列的考虑事项。 |
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/index.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/index.md.hash
deleted file mode 100644
index 1ae056246fd..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/index.md.hash
+++ /dev/null
@@ -1 +0,0 @@
-8c4d3e48a9af0c2f
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/multitenancy.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/multitenancy.md
deleted file mode 100644
index 9dcaf17df18..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/multitenancy.md
+++ /dev/null
@@ -1,377 +0,0 @@
----
-'slug': '/cloud/bestpractices/multi-tenancy'
-'sidebar_label': '实现多租户'
-'title': '多租户'
-'description': '实施多租户的最佳实践'
----
-
-在SaaS数据分析平台上,多个租户(如组织、客户或业务部门)共享相同的数据库基础设施,同时保持其数据的逻辑隔离,这种做法是很常见的。这允许不同的用户在同一平台上安全地访问他们自己的数据。
-
-根据需求,有不同的方法来实现多租户。以下是如何在ClickHouse Cloud中实现它们的指南。
-
-## 共享表 {#shared-table}
-
-在这种方法中,所有租户的数据存储在一个共享表中,使用一个字段(或一组字段)来识别每个租户的数据。为了最大化性能,这个字段应包含在 [主键](/sql-reference/statements/create/table#primary-key) 中。为了确保用户只能访问属于各自租户的数据,我们使用 [基于角色的访问控制](/operations/access-rights),通过 [行策略](/operations/access-rights#row-policy-management) 实现。
-
-> **我们推荐这种方法,因为这是最简单的管理方式,特别是当所有租户共享相同的数据结构且数据量适中(< TBs)时。**
-
-通过将所有租户数据合并到一个表中,存储效率通过优化的数据压缩和减少的元数据开销得以提高。此外,由于所有数据都在中央管理,模式更新也变得更简单。
-
-这种方法特别适合处理大量租户(可能达到数百万)。
-
-然而,如果租户具有不同的数据模式或预计会随着时间的推移而有所不同,则替代方法可能更为合适。
-
-在租户之间的数据量存在显著差距的情况下,较小的租户可能会遭遇不必要的查询性能影响。请注意,这个问题在很大程度上通过在主键中包含租户字段得以缓解。
-
-### 示例 {#shared-table-example}
-
-这是一个共享表多租户模型实现的示例。
-
-首先,让我们创建一个共享表,并将字段 `tenant_id` 包含在主键中。
-
-```sql
---- Create table events. Using tenant_id as part of the primary key
-CREATE TABLE events
-(
- tenant_id UInt32, -- Tenant identifier
- id UUID, -- Unique event ID
- type LowCardinality(String), -- Type of event
- timestamp DateTime, -- Timestamp of the event
- user_id UInt32, -- ID of the user who triggered the event
- data String, -- Event data
-)
-ORDER BY (tenant_id, timestamp)
-```
-
-让我们插入假数据。
-
-```sql
--- Insert some dummy rows
-INSERT INTO events (tenant_id, id, type, timestamp, user_id, data)
-VALUES
-(1, '7b7e0439-99d0-4590-a4f7-1cfea1e192d1', 'user_login', '2025-03-19 08:00:00', 1001, '{"device": "desktop", "location": "LA"}'),
-(1, '846aa71f-f631-47b4-8429-ee8af87b4182', 'purchase', '2025-03-19 08:05:00', 1002, '{"item": "phone", "amount": 799}'),
-(1, '6b4d12e4-447d-4398-b3fa-1c1e94d71a2f', 'user_logout', '2025-03-19 08:10:00', 1001, '{"device": "desktop", "location": "LA"}'),
-(2, '7162f8ea-8bfd-486a-a45e-edfc3398ca93', 'user_login', '2025-03-19 08:12:00', 2001, '{"device": "mobile", "location": "SF"}'),
-(2, '6b5f3e55-5add-479e-b89d-762aa017f067', 'purchase', '2025-03-19 08:15:00', 2002, '{"item": "headphones", "amount": 199}'),
-(2, '43ad35a1-926c-4543-a133-8672ddd504bf', 'user_logout', '2025-03-19 08:20:00', 2001, '{"device": "mobile", "location": "SF"}'),
-(1, '83b5eb72-aba3-4038-bc52-6c08b6423615', 'purchase', '2025-03-19 08:45:00', 1003, '{"item": "monitor", "amount": 450}'),
-(1, '975fb0c8-55bd-4df4-843b-34f5cfeed0a9', 'user_login', '2025-03-19 08:50:00', 1004, '{"device": "desktop", "location": "LA"}'),
-(2, 'f50aa430-4898-43d0-9d82-41e7397ba9b8', 'purchase', '2025-03-19 08:55:00', 2003, '{"item": "laptop", "amount": 1200}'),
-(2, '5c150ceb-b869-4ebb-843d-ab42d3cb5410', 'user_login', '2025-03-19 09:00:00', 2004, '{"device": "mobile", "location": "SF"}'),
-```
-
-然后创建两个用户 `user_1` 和 `user_2`。
-
-```sql
--- Create users
-CREATE USER user_1 IDENTIFIED BY ''
-CREATE USER user_2 IDENTIFIED BY ''
-```
-
-我们 [创建行策略](/sql-reference/statements/create/row-policy),限制 `user_1` 和 `user_2` 仅能访问各自租户的数据。
-
-```sql
--- Create row policies
-CREATE ROW POLICY user_filter_1 ON default.events USING tenant_id=1 TO user_1
-CREATE ROW POLICY user_filter_2 ON default.events USING tenant_id=2 TO user_2
-```
-
-然后通过一个公共角色对共享表使用 [`GRANT SELECT`](/sql-reference/statements/grant#usage) 权限。
-
-```sql
--- Create role
-CREATE ROLE user_role
-
--- Grant read only to events table.
-GRANT SELECT ON default.events TO user_role
-GRANT user_role TO user_1
-GRANT user_role TO user_2
-```
-
-现在,你可以以 `user_1` 身份连接,并运行一个简单的选择。仅返回第一个租户的行。
-
-```sql
--- Logged as user_1
-SELECT *
-FROM events
-
- ┌─tenant_id─┬─id───────────────────────────────────┬─type────────┬───────────timestamp─┬─user_id─┬─data────────────────────────────────────┐
-1. │ 1 │ 7b7e0439-99d0-4590-a4f7-1cfea1e192d1 │ user_login │ 2025-03-19 08:00:00 │ 1001 │ {"device": "desktop", "location": "LA"} │
-2. │ 1 │ 846aa71f-f631-47b4-8429-ee8af87b4182 │ purchase │ 2025-03-19 08:05:00 │ 1002 │ {"item": "phone", "amount": 799} │
-3. │ 1 │ 6b4d12e4-447d-4398-b3fa-1c1e94d71a2f │ user_logout │ 2025-03-19 08:10:00 │ 1001 │ {"device": "desktop", "location": "LA"} │
-4. │ 1 │ 83b5eb72-aba3-4038-bc52-6c08b6423615 │ purchase │ 2025-03-19 08:45:00 │ 1003 │ {"item": "monitor", "amount": 450} │
-5. │ 1 │ 975fb0c8-55bd-4df4-843b-34f5cfeed0a9 │ user_login │ 2025-03-19 08:50:00 │ 1004 │ {"device": "desktop", "location": "LA"} │
- └───────────┴──────────────────────────────────────┴─────────────┴─────────────────────┴─────────┴─────────────────────────────────────────┘
-```
-
-## 单独表 {#separate-tables}
-
-在这种方法中,每个租户的数据存储在同一数据库中的一个单独表中,消除了识别租户的特定字段的需要。用户访问通过 [GRANT 语句](/sql-reference/statements/grant) 强制执行,确保每个用户只能访问包含其租户数据的表。
-
-> **当租户具有不同的数据模式时,使用单独表是一个不错的选择。**
-
-对于涉及少量租户且数据集非常大的场景,当查询性能至关重要时,这种方法可能优于共享表模型。由于无需过滤其他租户的数据,因此查询可以更加高效。此外,主键可以进一步优化,因为不需要在主键中包含额外字段(例如租户ID)。
-
-请注意,这种方法无法扩展到数千个租户。请参见 [使用限制](/cloud/bestpractices/usage-limits)。
-
-### 示例 {#separate-tables-example}
-
-这是一个单独表多租户模型实现的示例。
-
-首先,让我们创建两个表,一个用于 `tenant_1` 的事件,另一个用于 `tenant_2` 的事件。
-
-```sql
--- Create table for tenant 1
-CREATE TABLE events_tenant_1
-(
- id UUID, -- Unique event ID
- type LowCardinality(String), -- Type of event
- timestamp DateTime, -- Timestamp of the event
- user_id UInt32, -- ID of the user who triggered the event
- data String, -- Event data
-)
-ORDER BY (timestamp, user_id) -- Primary key can focus on other attributes
-
--- Create table for tenant 2
-CREATE TABLE events_tenant_2
-(
- id UUID, -- Unique event ID
- type LowCardinality(String), -- Type of event
- timestamp DateTime, -- Timestamp of the event
- user_id UInt32, -- ID of the user who triggered the event
- data String, -- Event data
-)
-ORDER BY (timestamp, user_id) -- Primary key can focus on other attributes
-```
-
-让我们插入假数据。
-
-```sql
-INSERT INTO events_tenant_1 (id, type, timestamp, user_id, data)
-VALUES
-('7b7e0439-99d0-4590-a4f7-1cfea1e192d1', 'user_login', '2025-03-19 08:00:00', 1001, '{"device": "desktop", "location": "LA"}'),
-('846aa71f-f631-47b4-8429-ee8af87b4182', 'purchase', '2025-03-19 08:05:00', 1002, '{"item": "phone", "amount": 799}'),
-('6b4d12e4-447d-4398-b3fa-1c1e94d71a2f', 'user_logout', '2025-03-19 08:10:00', 1001, '{"device": "desktop", "location": "LA"}'),
-('83b5eb72-aba3-4038-bc52-6c08b6423615', 'purchase', '2025-03-19 08:45:00', 1003, '{"item": "monitor", "amount": 450}'),
-('975fb0c8-55bd-4df4-843b-34f5cfeed0a9', 'user_login', '2025-03-19 08:50:00', 1004, '{"device": "desktop", "location": "LA"}')
-
-INSERT INTO events_tenant_2 (id, type, timestamp, user_id, data)
-VALUES
-('7162f8ea-8bfd-486a-a45e-edfc3398ca93', 'user_login', '2025-03-19 08:12:00', 2001, '{"device": "mobile", "location": "SF"}'),
-('6b5f3e55-5add-479e-b89d-762aa017f067', 'purchase', '2025-03-19 08:15:00', 2002, '{"item": "headphones", "amount": 199}'),
-('43ad35a1-926c-4543-a133-8672ddd504bf', 'user_logout', '2025-03-19 08:20:00', 2001, '{"device": "mobile", "location": "SF"}'),
-('f50aa430-4898-43d0-9d82-41e7397ba9b8', 'purchase', '2025-03-19 08:55:00', 2003, '{"item": "laptop", "amount": 1200}'),
-('5c150ceb-b869-4ebb-843d-ab42d3cb5410', 'user_login', '2025-03-19 09:00:00', 2004, '{"device": "mobile", "location": "SF"}')
-```
-
-然后让我们创建两个用户 `user_1` 和 `user_2`。
-
-```sql
--- Create users
-CREATE USER user_1 IDENTIFIED BY ''
-CREATE USER user_2 IDENTIFIED BY ''
-```
-
-然后对相应表 `GRANT SELECT` 权限。
-
-```sql
--- Grant read only to events table.
-GRANT SELECT ON default.events_tenant_1 TO user_1
-GRANT SELECT ON default.events_tenant_2 TO user_2
-```
-
-现在你可以以 `user_1` 身份连接,并从与该用户对应的表中运行一个简单的选择。仅返回第一个租户的行。
-
-```sql
--- Logged as user_1
-SELECT *
-FROM default.events_tenant_1
-
- ┌─id───────────────────────────────────┬─type────────┬───────────timestamp─┬─user_id─┬─data────────────────────────────────────┐
-1. │ 7b7e0439-99d0-4590-a4f7-1cfea1e192d1 │ user_login │ 2025-03-19 08:00:00 │ 1001 │ {"device": "desktop", "location": "LA"} │
-2. │ 846aa71f-f631-47b4-8429-ee8af87b4182 │ purchase │ 2025-03-19 08:05:00 │ 1002 │ {"item": "phone", "amount": 799} │
-3. │ 6b4d12e4-447d-4398-b3fa-1c1e94d71a2f │ user_logout │ 2025-03-19 08:10:00 │ 1001 │ {"device": "desktop", "location": "LA"} │
-4. │ 83b5eb72-aba3-4038-bc52-6c08b6423615 │ purchase │ 2025-03-19 08:45:00 │ 1003 │ {"item": "monitor", "amount": 450} │
-5. │ 975fb0c8-55bd-4df4-843b-34f5cfeed0a9 │ user_login │ 2025-03-19 08:50:00 │ 1004 │ {"device": "desktop", "location": "LA"} │
- └──────────────────────────────────────┴─────────────┴─────────────────────┴─────────┴─────────────────────────────────────────┘
-```
-
-## 单独数据库 {#separate-databases}
-
-每个租户的数据存储在同一ClickHouse服务中的一个单独数据库中。
-
-> **这种方法对于每个租户需要大量表和可能的物化视图,并且具有不同数据模式的情况很有用。然而,如果租户数量较多,管理起来可能会变得具有挑战性。**
-
-实施类似于单独表的方法,但不是在表级别授予权限,而是在数据库级别授予权限。
-
-请注意,这种方法无法扩展到数千个租户。请参见 [使用限制](/cloud/bestpractices/usage-limits)。
-
-### 示例 {#separate-databases-example}
-
-这是一个单独数据库多租户模型实现的示例。
-
-首先,让我们为 `tenant_1` 创建一个数据库,为 `tenant_2` 创建另一个数据库。
-
-```sql
--- Create database for tenant_1
-CREATE DATABASE tenant_1;
-
--- Create database for tenant_2
-CREATE DATABASE tenant_2;
-```
-
-```sql
--- Create table for tenant_1
-CREATE TABLE tenant_1.events
-(
- id UUID, -- Unique event ID
- type LowCardinality(String), -- Type of event
- timestamp DateTime, -- Timestamp of the event
- user_id UInt32, -- ID of the user who triggered the event
- data String, -- Event data
-)
-ORDER BY (timestamp, user_id);
-
--- Create table for tenant_2
-CREATE TABLE tenant_2.events
-(
- id UUID, -- Unique event ID
- type LowCardinality(String), -- Type of event
- timestamp DateTime, -- Timestamp of the event
- user_id UInt32, -- ID of the user who triggered the event
- data String, -- Event data
-)
-ORDER BY (timestamp, user_id);
-```
-
-让我们插入假数据。
-
-```sql
-INSERT INTO tenant_1.events (id, type, timestamp, user_id, data)
-VALUES
-('7b7e0439-99d0-4590-a4f7-1cfea1e192d1', 'user_login', '2025-03-19 08:00:00', 1001, '{"device": "desktop", "location": "LA"}'),
-('846aa71f-f631-47b4-8429-ee8af87b4182', 'purchase', '2025-03-19 08:05:00', 1002, '{"item": "phone", "amount": 799}'),
-('6b4d12e4-447d-4398-b3fa-1c1e94d71a2f', 'user_logout', '2025-03-19 08:10:00', 1001, '{"device": "desktop", "location": "LA"}'),
-('83b5eb72-aba3-4038-bc52-6c08b6423615', 'purchase', '2025-03-19 08:45:00', 1003, '{"item": "monitor", "amount": 450}'),
-('975fb0c8-55bd-4df4-843b-34f5cfeed0a9', 'user_login', '2025-03-19 08:50:00', 1004, '{"device": "desktop", "location": "LA"}')
-
-INSERT INTO tenant_2.events (id, type, timestamp, user_id, data)
-VALUES
-('7162f8ea-8bfd-486a-a45e-edfc3398ca93', 'user_login', '2025-03-19 08:12:00', 2001, '{"device": "mobile", "location": "SF"}'),
-('6b5f3e55-5add-479e-b89d-762aa017f067', 'purchase', '2025-03-19 08:15:00', 2002, '{"item": "headphones", "amount": 199}'),
-('43ad35a1-926c-4543-a133-8672ddd504bf', 'user_logout', '2025-03-19 08:20:00', 2001, '{"device": "mobile", "location": "SF"}'),
-('f50aa430-4898-43d0-9d82-41e7397ba9b8', 'purchase', '2025-03-19 08:55:00', 2003, '{"item": "laptop", "amount": 1200}'),
-('5c150ceb-b869-4ebb-843d-ab42d3cb5410', 'user_login', '2025-03-19 09:00:00', 2004, '{"device": "mobile", "location": "SF"}')
-```
-
-然后让我们创建两个用户 `user_1` 和 `user_2`。
-
-```sql
--- Create users
-CREATE USER user_1 IDENTIFIED BY ''
-CREATE USER user_2 IDENTIFIED BY ''
-```
-
-然后对相应表 `GRANT SELECT` 权限。
-
-```sql
--- Grant read only to events table.
-GRANT SELECT ON tenant_1.events TO user_1
-GRANT SELECT ON tenant_2.events TO user_2
-```
-
-现在你可以以 `user_1` 身份连接,并在适当数据库的事件表上运行一个简单的选择。仅返回第一个租户的行。
-
-```sql
--- Logged as user_1
-SELECT *
-FROM tenant_1.events
-
- ┌─id───────────────────────────────────┬─type────────┬───────────timestamp─┬─user_id─┬─data────────────────────────────────────┐
-1. │ 7b7e0439-99d0-4590-a4f7-1cfea1e192d1 │ user_login │ 2025-03-19 08:00:00 │ 1001 │ {"device": "desktop", "location": "LA"} │
-2. │ 846aa71f-f631-47b4-8429-ee8af87b4182 │ purchase │ 2025-03-19 08:05:00 │ 1002 │ {"item": "phone", "amount": 799} │
-3. │ 6b4d12e4-447d-4398-b3fa-1c1e94d71a2f │ user_logout │ 2025-03-19 08:10:00 │ 1001 │ {"device": "desktop", "location": "LA"} │
-4. │ 83b5eb72-aba3-4038-bc52-6c08b6423615 │ purchase │ 2025-03-19 08:45:00 │ 1003 │ {"item": "monitor", "amount": 450} │
-5. │ 975fb0c8-55bd-4df4-843b-34f5cfeed0a9 │ user_login │ 2025-03-19 08:50:00 │ 1004 │ {"device": "desktop", "location": "LA"} │
- └──────────────────────────────────────┴─────────────┴─────────────────────┴─────────┴─────────────────────────────────────────┘
-```
-
-## 计算-计算分离 {#compute-compute-separation}
-
-上述三种方法也可以通过使用 [Warehouses](/cloud/reference/warehouses#what-is-a-warehouse) 进一步隔离。数据通过公共对象存储共享,但每个租户可以拥有自己的计算服务,thanks to [计算-计算分离](/cloud/reference/warehouses#what-is-compute-compute-separation),使用不同的CPU/内存比率。
-
-User management is similar to the approaches described earlier, since all services in a warehouse [share access controls](/cloud/reference/warehouses#database-credentials).
-
-Note that the number of child services in a warehouse is limited to a small number. See [Warehouse limitations](/cloud/reference/warehouses#limitations).
-
-## Separate Cloud services {#separate-service}
-
-The most radical approach is to use a dedicated ClickHouse service per tenant.
-
-> **This less common approach can be the right solution if tenants' data needs to be stored in different regions, whether for legal, security, or proximity reasons.**
-
-A user account must be created on each service, and each user is given access to their respective tenant's data.
-
-This approach is harder to manage and adds overhead for each service, since every one of them requires its own infrastructure to run. Services can be managed via the [ClickHouse Cloud API](/cloud/manage/api/api-overview) and orchestrated with the [official Terraform provider](https://registry.terraform.io/providers/ClickHouse/clickhouse/latest/docs).
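With one service per tenant, the application also has to route each request to the right endpoint. A hypothetical sketch of such a routing table; the hostnames and tenant IDs are illustrative only, not real endpoints:

```python
# Hypothetical mapping of tenants to their dedicated service hostnames.
# In practice this would be populated from the Cloud API or Terraform state.
SERVICE_HOSTS = {
    "tenant_1": "tenant-1.us-east-1.example.clickhouse.cloud",
    "tenant_2": "tenant-2.eu-west-1.example.clickhouse.cloud",
}

def service_host(tenant_id: str) -> str:
    """Return the dedicated ClickHouse service host for a tenant."""
    try:
        return SERVICE_HOSTS[tenant_id]
    except KeyError:
        raise ValueError(f"no service provisioned for {tenant_id!r}") from None
```

Keeping this mapping in one place makes it straightforward to place tenants in different regions, which is the main motivation for this model.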
-
-### Example {#separate-service-example}
-
-Here is an example implementation of the separate-service multi-tenancy model. Note that the example shows the creation of a table and a user on one ClickHouse service; the same has to be replicated on all services.
-
-First, let's create the table `events`.
-
-```sql
--- Create the events table (repeat this on every tenant's service)
-CREATE TABLE events
-(
-    id UUID,                     -- Unique event ID
-    type LowCardinality(String), -- Type of event
-    timestamp DateTime,          -- Timestamp of the event
-    user_id UInt32,              -- ID of the user who triggered the event
-    data String                  -- Event data
-)
-ORDER BY (timestamp, user_id);
-```
-
-Let's insert some sample data.
-
-```sql
-INSERT INTO events (id, type, timestamp, user_id, data)
-VALUES
-('7b7e0439-99d0-4590-a4f7-1cfea1e192d1', 'user_login', '2025-03-19 08:00:00', 1001, '{"device": "desktop", "location": "LA"}'),
-('846aa71f-f631-47b4-8429-ee8af87b4182', 'purchase', '2025-03-19 08:05:00', 1002, '{"item": "phone", "amount": 799}'),
-('6b4d12e4-447d-4398-b3fa-1c1e94d71a2f', 'user_logout', '2025-03-19 08:10:00', 1001, '{"device": "desktop", "location": "LA"}'),
-('83b5eb72-aba3-4038-bc52-6c08b6423615', 'purchase', '2025-03-19 08:45:00', 1003, '{"item": "monitor", "amount": 450}'),
-('975fb0c8-55bd-4df4-843b-34f5cfeed0a9', 'user_login', '2025-03-19 08:50:00', 1004, '{"device": "desktop", "location": "LA"}');
-```
-
-Then let's create the user `user_1`.
-
-```sql
--- Create the user
-CREATE USER user_1 IDENTIFIED BY '';
-```
-
-Then grant the user `SELECT` on the `events` table.
-
-```sql
--- Grant read-only access to the events table
-GRANT SELECT ON events TO user_1;
-```
-
-Now you can connect to tenant 1's service as `user_1` and run a simple SELECT. Only the first tenant's rows are returned.
-
-```sql
--- Logged in as user_1
-SELECT *
-FROM events
-
- ┌─id───────────────────────────────────┬─type────────┬───────────timestamp─┬─user_id─┬─data────────────────────────────────────┐
-1. │ 7b7e0439-99d0-4590-a4f7-1cfea1e192d1 │ user_login │ 2025-03-19 08:00:00 │ 1001 │ {"device": "desktop", "location": "LA"} │
-2. │ 846aa71f-f631-47b4-8429-ee8af87b4182 │ purchase │ 2025-03-19 08:05:00 │ 1002 │ {"item": "phone", "amount": 799} │
-3. │ 6b4d12e4-447d-4398-b3fa-1c1e94d71a2f │ user_logout │ 2025-03-19 08:10:00 │ 1001 │ {"device": "desktop", "location": "LA"} │
-4. │ 83b5eb72-aba3-4038-bc52-6c08b6423615 │ purchase │ 2025-03-19 08:45:00 │ 1003 │ {"item": "monitor", "amount": 450} │
-5. │ 975fb0c8-55bd-4df4-843b-34f5cfeed0a9 │ user_login │ 2025-03-19 08:50:00 │ 1004 │ {"device": "desktop", "location": "LA"} │
- └──────────────────────────────────────┴─────────────┴─────────────────────┴─────────┴─────────────────────────────────────────┘
-```
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/multitenancy.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/multitenancy.md.hash
deleted file mode 100644
index bf23ea018cd..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/multitenancy.md.hash
+++ /dev/null
@@ -1 +0,0 @@
-2815dfa7d00cce3f
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/usagelimits.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/usagelimits.md
deleted file mode 100644
index 06fe8e9c6a7..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/usagelimits.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-'slug': '/cloud/bestpractices/usage-limits'
-'sidebar_label': 'Usage limits'
-'title': 'Usage limits'
-'description': 'Describes the recommended usage limits in ClickHouse Cloud'
----
-
-While ClickHouse is known for its speed and reliability, optimal performance is achieved within certain operating parameters. For example, having too many databases, tables, or parts can negatively impact performance. To avoid this, ClickHouse Cloud has guardrails in place for several categories of items. You can find the details of these guardrails below.
-
-:::tip
-If you have run up against one of these guardrails, it is possible that you are implementing your use case in an unoptimized way. Contact our support team and we will gladly help you refine your use case to avoid exceeding the guardrails, or look together at how to raise them in a controlled manner.
-:::
-
-| Dimension | Limit |
-|--------------|-------|
-|**Databases** | 1000 |
-|**Tables** | 5000 |
-|**Columns** | ∼1000 (wide format is preferred over compact) |
-|**Partitions** | 50k |
-|**Parts** | 100k across the entire instance |
-|**Part size** | 150 GB |
-|**Services per organization** | 20 (soft limit) |
-|**Services per warehouse** | 5 (soft limit) |
-|**Low cardinality** | 10k or less |
-|**Primary keys in a table** | 4-5 that sufficiently filter the data |
-|**Query concurrency** | 1000 |
-|**Batch ingestion** | anything larger than 1M rows will be split by the system into 1M-row blocks |
-
-:::note
-For single-replica services, the maximum number of databases is restricted to 100 and the maximum number of tables to 500. In addition, storage for Basic tier services is limited to 1 TB.
-:::
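The batch ingestion guardrail above is applied server-side: an insert larger than 1M rows is split into 1M-row blocks. As an illustration of the rule only (not ClickHouse code), the same chunking logic can be sketched as:

```python
def split_into_blocks(rows, block_size=1_000_000):
    """Yield successive blocks of at most block_size rows, mirroring
    how inserts over 1M rows are split into 1M-row blocks."""
    for start in range(0, len(rows), block_size):
        yield rows[start:start + block_size]

# Small illustration: 10 rows with a block size of 4 gives blocks of 4, 4, 2.
blocks = list(split_into_blocks(list(range(10)), block_size=4))
```

The reduced `block_size` in the example call only makes the splitting visible at a small scale; the guardrail itself operates on 1M-row blocks.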
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/usagelimits.md.hash b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/usagelimits.md.hash
deleted file mode 100644
index a06bfa084e9..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/bestpractices/usagelimits.md.hash
+++ /dev/null
@@ -1 +0,0 @@
-b8f70f140fe904c7
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-10.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-10.md
deleted file mode 100644
index e75eec40655..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-10.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-slug: /changelogs/24.10
-title: 'v24.10 Changelog for Cloud'
-description: 'Fast release changelog for v24.10'
-keywords: ['changelog', 'cloud']
-sidebar_label: 'v24.10'
----
-
-Relevant changes for ClickHouse Cloud services based on the v24.10 release.
-
-## Backward Incompatible Change {#backward-incompatible-change}
-- Allow to write `SETTINGS` before `FORMAT` in a chain of queries with `UNION` when subqueries are inside parentheses. This closes [#39712](https://github.com/ClickHouse/ClickHouse/issues/39712). Change the behavior when a query has the SETTINGS clause specified twice in a sequence. The closest SETTINGS clause will have a preference for the corresponding subquery. In the previous versions, the outermost SETTINGS clause could take a preference over the inner one. [#60197](https://github.com/ClickHouse/ClickHouse/pull/60197)[#68614](https://github.com/ClickHouse/ClickHouse/pull/68614) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Reimplement Dynamic type. Now when the limit of dynamic data types is reached new types are not cast to String but stored in a special data structure in binary format with binary encoded data type. Now any type ever inserted into Dynamic column can be read from it as subcolumn. [#68132](https://github.com/ClickHouse/ClickHouse/pull/68132) ([Pavel Kruglov](https://github.com/Avogar)).
-- Expressions like `a[b].c` are supported for named tuples, as well as named subscripts from arbitrary expressions, e.g., `expr().name`. This is useful for processing JSON. This closes [#54965](https://github.com/ClickHouse/ClickHouse/issues/54965). In previous versions, an expression of form `expr().name` was parsed as `tupleElement(expr(), name)`, and the query analyzer was searching for a column `name` rather than for the corresponding tuple element; while in the new version, it is changed to `tupleElement(expr(), 'name')`. In most cases, the previous version was not working, but it is possible to imagine a very unusual scenario when this change could lead to incompatibility: if you stored names of tuple elements in a column or an alias, that was named differently than the tuple element's name: `SELECT 'b' AS a, CAST([tuple(123)] AS 'Array(Tuple(b UInt8))') AS t, t[1].a`. It is very unlikely that you used such queries, but we still have to mark this change as potentially backward incompatible. [#68435](https://github.com/ClickHouse/ClickHouse/pull/68435) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- When the setting `print_pretty_type_names` is enabled, it will print `Tuple` data type in a pretty form in `SHOW CREATE TABLE` statements, `formatQuery` function, and in the interactive mode in `clickhouse-client` and `clickhouse-local`. In previous versions, this setting was only applied to `DESCRIBE` queries and `toTypeName`. This closes [#65753](https://github.com/ClickHouse/ClickHouse/issues/65753). [#68492](https://github.com/ClickHouse/ClickHouse/pull/68492) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Reordering of filter conditions from `[PRE]WHERE` clause is now allowed by default. It could be disabled by setting `allow_reorder_prewhere_conditions` to `false`. [#70657](https://github.com/ClickHouse/ClickHouse/pull/70657) ([Nikita Taranov](https://github.com/nickitat)).
-- Fix `optimize_functions_to_subcolumns` optimization (previously could lead to `Invalid column type for ColumnUnique::insertRangeFrom. Expected String, got LowCardinality(String)` error), by preserving `LowCardinality` type in `mapKeys`/`mapValues`. [#70716](https://github.com/ClickHouse/ClickHouse/pull/70716) ([Azat Khuzhin](https://github.com/azat)).
-
-
-## New Feature {#new-feature}
-- Refreshable materialized views are production ready. [#70550](https://github.com/ClickHouse/ClickHouse/pull/70550) ([Michael Kolupaev](https://github.com/al13n321)). Refreshable materialized views are now supported in Replicated databases. [#60669](https://github.com/ClickHouse/ClickHouse/pull/60669) ([Michael Kolupaev](https://github.com/al13n321)).
-- Function `toStartOfInterval()` now has a new overload which emulates TimescaleDB's `time_bucket()` function, respectively PostgreSQL's `date_bin()` function. ([#55619](https://github.com/ClickHouse/ClickHouse/issues/55619)). It allows to align date or timestamp values to multiples of a given interval from an *arbitrary* origin (instead of 0000-01-01 00:00:00.000 as *fixed* origin). For example, `SELECT toStartOfInterval(toDateTime('2023-01-01 14:45:00'), INTERVAL 1 MINUTE, toDateTime('2023-01-01 14:35:30'));` returns `2023-01-01 14:44:30` which is a multiple of 1 minute intervals, starting from origin `2023-01-01 14:35:30`. [#56738](https://github.com/ClickHouse/ClickHouse/pull/56738) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-- MongoDB integration refactored: migration to new driver mongocxx from deprecated Poco::MongoDB, remove support for deprecated old protocol, support for connection by URI, support for all MongoDB types, support for WHERE and ORDER BY statements on MongoDB side, restriction for expression unsupported by MongoDB. [#63279](https://github.com/ClickHouse/ClickHouse/pull/63279) ([Kirill Nikiforov](https://github.com/allmazz)).
-- A new `--progress-table` option in clickhouse-client prints a table with metrics changing during query execution; a new `--enable-progress-table-toggle` is associated with the `--progress-table` option, and toggles the rendering of the progress table by pressing the control key (Space). [#63689](https://github.com/ClickHouse/ClickHouse/pull/63689) ([Maria Khristenko](https://github.com/mariaKhr)).
-- This allows granting access to wildcard prefixes: `GRANT SELECT ON db.table_prefix_* TO user`. [#65311](https://github.com/ClickHouse/ClickHouse/pull/65311) ([pufit](https://github.com/pufit)).
-- Introduced JSONCompactWithProgress format where ClickHouse outputs each row as a newline-delimited JSON object, including metadata, data, progress, totals, and statistics. [#66205](https://github.com/ClickHouse/ClickHouse/pull/66205) ([Alexey Korepanov](https://github.com/alexkorep)).
-- Add system.query_metric_log which contains history of memory and metric values from table system.events for individual queries, periodically flushed to disk. [#66532](https://github.com/ClickHouse/ClickHouse/pull/66532) ([Pablo Marcos](https://github.com/pamarcos)).
-- Add the `input_format_json_empty_as_default` setting which, when enabled, treats empty fields in JSON inputs as default values. Closes [#59339](https://github.com/ClickHouse/ClickHouse/issues/59339). [#66782](https://github.com/ClickHouse/ClickHouse/pull/66782) ([Alexis Arnaud](https://github.com/a-a-f)).
-- Added functions `overlay` and `overlayUTF8` which replace parts of a string by another string. Example: `SELECT overlay('Hello New York', 'Jersey', 11)` returns `Hello New Jersey`. [#66933](https://github.com/ClickHouse/ClickHouse/pull/66933) ([李扬](https://github.com/taiyang-li)).
-- Add a new command for lightweight delete in partition: `DELETE FROM [db.]table [ON CLUSTER cluster] [IN PARTITION partition_expr] WHERE expr;`. [#67805](https://github.com/ClickHouse/ClickHouse/pull/67805) ([sunny](https://github.com/sunny19930321)).
-- Implemented comparison for `Interval` data type values, so they are now converted to the least supertype. [#68057](https://github.com/ClickHouse/ClickHouse/pull/68057) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-- Add create_if_not_exists setting to default to IF NOT EXISTS behavior during CREATE statements. [#68164](https://github.com/ClickHouse/ClickHouse/pull/68164) ([Peter Nguyen](https://github.com/petern48)).
-- Makes possible to read Iceberg tables in Azure and locally. [#68210](https://github.com/ClickHouse/ClickHouse/pull/68210) ([Daniil Ivanik](https://github.com/divanik)).
-- Add aggregate functions distinctDynamicTypes/distinctJSONPaths/distinctJSONPathsAndTypes for better introspection of JSON column type content. [#68463](https://github.com/ClickHouse/ClickHouse/pull/68463) ([Pavel Kruglov](https://github.com/Avogar)).
-- Query cache entries can now be dropped by tag. For example, the query cache entry created by `SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc'` can now be dropped by `SYSTEM DROP QUERY CACHE TAG 'abc'` (or of course just: `SYSTEM DROP QUERY CACHE` which will clear the entire query cache). [#68477](https://github.com/ClickHouse/ClickHouse/pull/68477) ([Michał Tabaszewski](https://github.com/pinsvin00)).
-- A simple SELECT query can be written with implicit SELECT to enable calculator-style expressions, e.g., `ch "1 + 2"`. This is controlled by a new setting, `implicit_select`. [#68502](https://github.com/ClickHouse/ClickHouse/pull/68502) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Support --copy mode for clickhouse local as a shortcut for format conversion [#68503](https://github.com/ClickHouse/ClickHouse/issues/68503). [#68583](https://github.com/ClickHouse/ClickHouse/pull/68583) ([Denis Hananein](https://github.com/denis-hananein)).
-- Added `ripeMD160` function, which computes the RIPEMD-160 cryptographic hash of a string. Example: `SELECT hex(ripeMD160('The quick brown fox jumps over the lazy dog'))` returns `37F332F68DB77BD9D7EDD4969571AD671CF9DD3B`. [#68639](https://github.com/ClickHouse/ClickHouse/pull/68639) ([Dergousov Maxim](https://github.com/m7kss1)).
-- Add virtual column _headers for url table engine. Closes [#65026](https://github.com/ClickHouse/ClickHouse/issues/65026). [#68867](https://github.com/ClickHouse/ClickHouse/pull/68867) ([flynn](https://github.com/ucasfl)).
-- Adding `system.projections` table to track available projections. [#68901](https://github.com/ClickHouse/ClickHouse/pull/68901) ([Jordi Villar](https://github.com/jrdi)).
-- Add support for `arrayUnion` function. [#68989](https://github.com/ClickHouse/ClickHouse/pull/68989) ([Peter Nguyen](https://github.com/petern48)).
-- Add new function `arrayZipUnaligned` for Spark compatibility (`arrays_zip`), which allows unaligned arrays, based on the original `arrayZip`. Example: `SELECT arrayZipUnaligned([1], [1, 2, 3])`. [#69030](https://github.com/ClickHouse/ClickHouse/pull/69030) ([李扬](https://github.com/taiyang-li)).
-- Support aggregate function `quantileExactWeightedInterpolated`, an interpolated version of `quantileExactWeighted`. Some people may wonder why we need a new `quantileExactWeightedInterpolated` since we already have `quantileExactInterpolatedWeighted`. The reason is that the new one is more accurate than the old one. It also provides Spark compatibility in Apache Gluten. [#69619](https://github.com/ClickHouse/ClickHouse/pull/69619) ([李扬](https://github.com/taiyang-li)).
-- Support function arrayElementOrNull. It returns null if array index is out of range or map key not found. [#69646](https://github.com/ClickHouse/ClickHouse/pull/69646) ([李扬](https://github.com/taiyang-li)).
-- Support Dynamic type in most functions by executing them on internal types inside Dynamic. [#69691](https://github.com/ClickHouse/ClickHouse/pull/69691) ([Pavel Kruglov](https://github.com/Avogar)).
-- Adds argument `scale` (default: `true`) to function `arrayAUC` which allows to skip the normalization step (issue [#69609](https://github.com/ClickHouse/ClickHouse/issues/69609)). [#69717](https://github.com/ClickHouse/ClickHouse/pull/69717) ([gabrielmcg44](https://github.com/gabrielmcg44)).
-- Re-added `RIPEMD160` function, which computes the RIPEMD-160 cryptographic hash of a string. Example: `SELECT HEX(RIPEMD160('The quick brown fox jumps over the lazy dog'))` returns `37F332F68DB77BD9D7EDD4969571AD671CF9DD3B`. [#70087](https://github.com/ClickHouse/ClickHouse/pull/70087) ([Dergousov Maxim](https://github.com/m7kss1)).
-- Allow to cache read files for object storage table engines and data lakes using hash from ETag + file path as cache key. [#70135](https://github.com/ClickHouse/ClickHouse/pull/70135) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Support reading Iceberg tables on HDFS. [#70268](https://github.com/ClickHouse/ClickHouse/pull/70268) ([flynn](https://github.com/ucasfl)).
-- Allow to read/write JSON type as binary string in RowBinary format under settings `input_format_binary_read_json_as_string/output_format_binary_write_json_as_string`. [#70288](https://github.com/ClickHouse/ClickHouse/pull/70288) ([Pavel Kruglov](https://github.com/Avogar)).
-- Allow to serialize/deserialize JSON column as single String column in Native format. For output use setting `output_format_native_write_json_as_string`. For input, use serialization version `1` before the column data. [#70312](https://github.com/ClickHouse/ClickHouse/pull/70312) ([Pavel Kruglov](https://github.com/Avogar)).
-- Supports standard CTE, `with insert`, as previously only supports `insert ... with ...`. [#70593](https://github.com/ClickHouse/ClickHouse/pull/70593) ([Shichao Jin](https://github.com/jsc0218)).
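One entry above, the new `toStartOfInterval()` overload with an arbitrary origin, can be mimicked in Python to make the bucketing rule concrete. This is a model of the described semantics, not ClickHouse code:

```python
from datetime import datetime, timedelta

def to_start_of_interval(ts: datetime, interval: timedelta, origin: datetime) -> datetime:
    """Align ts down to a multiple of `interval` counted from `origin`,
    as the changelog entry describes (instead of a fixed epoch origin)."""
    return origin + ((ts - origin) // interval) * interval

# Reproduces the changelog's example:
bucket = to_start_of_interval(
    datetime(2023, 1, 1, 14, 45, 0),
    timedelta(minutes=1),
    datetime(2023, 1, 1, 14, 35, 30),
)
# bucket == datetime(2023, 1, 1, 14, 44, 30)
```

The result is the largest multiple of the interval, counted from the origin, that does not exceed the input timestamp, matching the `2023-01-01 14:44:30` result quoted in the entry.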
-
-
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-12.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-12.md
deleted file mode 100644
index feb1b2fe93a..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-12.md
+++ /dev/null
@@ -1,226 +0,0 @@
----
-slug: /changelogs/24.12
-title: 'v24.12 Changelog for Cloud'
-description: 'Fast release changelog for v24.12'
-keywords: ['changelog', 'cloud']
-sidebar_label: 'v24.12'
----
-
-Relevant changes for ClickHouse Cloud services based on the v24.12 release.
-
-## Backward Incompatible Changes {#backward-incompatible-changes}
-
-- Functions `greatest` and `least` now ignore NULL input values, whereas they previously returned NULL if one of the arguments was NULL. For example, `SELECT greatest(1, 2, NULL)` now returns 2. This makes the behavior compatible with PostgreSQL. [#65519](https://github.com/ClickHouse/ClickHouse/pull/65519) ([kevinyhzou](https://github.com/KevinyhZou)).
-- Don't allow Variant/Dynamic types in ORDER BY/GROUP BY/PARTITION BY/PRIMARY KEY by default because it may lead to unexpected results. [#69731](https://github.com/ClickHouse/ClickHouse/pull/69731) ([Pavel Kruglov](https://github.com/Avogar)).
-- Remove system tables `generate_series` and `generateSeries`. They were added by mistake here: [#59390](https://github.com/ClickHouse/ClickHouse/issues/59390). [#71091](https://github.com/ClickHouse/ClickHouse/pull/71091) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Remove `StorageExternalDistributed`. Closes [#70600](https://github.com/ClickHouse/ClickHouse/issues/70600). [#71176](https://github.com/ClickHouse/ClickHouse/pull/71176) ([flynn](https://github.com/ucasfl)).
-- Settings from server config (users.xml) now apply on the client too. Useful for format settings, e.g. `date_time_output_format`. [#71178](https://github.com/ClickHouse/ClickHouse/pull/71178) ([Michael Kolupaev](https://github.com/al13n321)).
-- Fix possible error `No such file or directory` due to unescaped special symbols in files for JSON subcolumns. [#71182](https://github.com/ClickHouse/ClickHouse/pull/71182) ([Pavel Kruglov](https://github.com/Avogar)).
-- The table engines Kafka, NATS and RabbitMQ are now covered by their own grants in the `SOURCES` hierarchy. Add grants to any non-default database users that create tables with these engine types. [#71250](https://github.com/ClickHouse/ClickHouse/pull/71250) ([Christoph Wurm](https://github.com/cwurm)).
-- Check the full mutation query before executing it (including subqueries). This prevents accidentally running an invalid query and building up dead mutations that block valid mutations. [#71300](https://github.com/ClickHouse/ClickHouse/pull/71300) ([Christoph Wurm](https://github.com/cwurm)).
-- Rename filesystem cache setting `skip_download_if_exceeds_query_cache` to `filesystem_cache_skip_download_if_exceeds_per_query_cache_write_limit`. [#71578](https://github.com/ClickHouse/ClickHouse/pull/71578) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Forbid Dynamic/Variant types in min/max functions to avoid confusion. [#71761](https://github.com/ClickHouse/ClickHouse/pull/71761) ([Pavel Kruglov](https://github.com/Avogar)).
-- Remove support for `Enum` as well as `UInt128` and `UInt256` arguments in `deltaSumTimestamp`. Remove support for `Int8`, `UInt8`, `Int16`, and `UInt16` of the second ("timestamp") argument of `deltaSumTimestamp`. [#71790](https://github.com/ClickHouse/ClickHouse/pull/71790) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Added source query validation when ClickHouse is used as a source for a dictionary. [#72548](https://github.com/ClickHouse/ClickHouse/pull/72548) ([Alexey Katsman](https://github.com/alexkats)).
-
-## New Features {#new-features}
-
-- Implement SYSTEM LOAD PRIMARY KEY command to load primary indexes for all parts of a specified table or for all tables if no table is specified. This will be useful for benchmarks and to prevent extra latency during query execution. [#66252](https://github.com/ClickHouse/ClickHouse/pull/66252) ([ZAWA_ll](https://github.com/Zawa-ll)).
-- Added statement `SYSTEM LOAD PRIMARY KEY` for loading the primary indexes of all parts in a specified table or for all tables if no table is specified. This can be useful for benchmarking and to prevent extra latency during query execution. [#67733](https://github.com/ClickHouse/ClickHouse/pull/67733) ([ZAWA_ll](https://github.com/Zawa-ll)).
-- Add `CHECK GRANT` query to check whether the current user/role has been granted the specific privilege and whether the corresponding table/column exists in the memory. [#68885](https://github.com/ClickHouse/ClickHouse/pull/68885) ([Unalian](https://github.com/Unalian)).
-- Added SQL syntax to describe workload and resource management. https://clickhouse.com/docs/en/operations/workload-scheduling. [#69187](https://github.com/ClickHouse/ClickHouse/pull/69187) ([Sergei Trifonov](https://github.com/serxa)).
-- [The Iceberg data storage](https://iceberg.apache.org/spec/#file-system-operations) format provides the user with extensive options for modifying the schema of their table. In this pull request, reading a table in Iceberg format has been implemented, where the order of columns, column names, and simple type extensions have been changed. [#69445](https://github.com/ClickHouse/ClickHouse/pull/69445) ([Daniil Ivanik](https://github.com/divanik)).
-- Allow each authentication method to have its own expiration date, remove from user entity. [#70090](https://github.com/ClickHouse/ClickHouse/pull/70090) ([Arthur Passos](https://github.com/arthurpassos)).
-- Push external user roles from query originator to other nodes in cluster. Helpful when only originator has access to the external authenticator (like LDAP). [#70332](https://github.com/ClickHouse/ClickHouse/pull/70332) ([Andrey Zvonov](https://github.com/zvonand)).
-- Support alter from String to JSON. This PR also changes the serialization of JSON and Dynamic types to new version V2. Old version V1 can be still used by enabling setting `merge_tree_use_v1_object_and_dynamic_serialization` (can be used during upgrade to be able to rollback the version without issues). [#70442](https://github.com/ClickHouse/ClickHouse/pull/70442) ([Pavel Kruglov](https://github.com/Avogar)).
-- Add function `toUnixTimestamp64Second` which converts a `DateTime64` to an `Int64` value with fixed second precision, so negative values can be returned for dates before 00:00:00 UTC on Thursday, 1 January 1970. [#70597](https://github.com/ClickHouse/ClickHouse/pull/70597) ([zhanglistar](https://github.com/zhanglistar)).
-- Add new setting `enforce_index_structure_match_on_partition_manipulation` to allow attaching when the source table's projections and secondary indices are a subset of those in the target table. Closes [#70602](https://github.com/ClickHouse/ClickHouse/issues/70602). [#70603](https://github.com/ClickHouse/ClickHouse/pull/70603) ([zwy991114](https://github.com/zwy991114)).
-- The output of function `cast` differs from Apache Spark's, which causes differences in the Gluten project; see https://github.com/apache/incubator-gluten/issues/7602. This PR adds support for a Spark-compatible text output format, disabled by default. [#70957](https://github.com/ClickHouse/ClickHouse/pull/70957) ([zhanglistar](https://github.com/zhanglistar)).
-- Added a new header type for S3 endpoints for user authentication (`access_header`). This allows to get some access header with the lowest priority, which will be overwritten with `access_key_id` from any other source (for example, a table schema or a named collection). [#71011](https://github.com/ClickHouse/ClickHouse/pull/71011) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
-- Initial implementation of settings tiers. [#71145](https://github.com/ClickHouse/ClickHouse/pull/71145) ([Raúl Marín](https://github.com/Algunenano)).
-- Add support for staleness clause in order by with fill operator. [#71151](https://github.com/ClickHouse/ClickHouse/pull/71151) ([Mikhail Artemenko](https://github.com/Michicosun)).
-- Implement simple CAST from Map/Tuple/Object to new JSON through serialization/deserialization from JSON string. [#71320](https://github.com/ClickHouse/ClickHouse/pull/71320) ([Pavel Kruglov](https://github.com/Avogar)).
-- Added aliases `anyRespectNulls`, `firstValueRespectNulls`, and `anyValueRespectNulls` for aggregation function `any`. Also added aliases `anyLastRespectNulls` and `lastValueRespectNulls` for aggregation function `anyLast`. This allows using more natural camel-case-only syntax rather than mixed camel-case/underscore syntax, for example: `SELECT anyLastRespectNullsStateIf` instead of `anyLast_respect_nullsStateIf`. [#71403](https://github.com/ClickHouse/ClickHouse/pull/71403) ([Peter Nguyen](https://github.com/petern48)).
-- Added the configuration `date_time_utc` parameter, enabling JSON log formatting to support UTC date-time in RFC 3339/ISO8601 format. [#71560](https://github.com/ClickHouse/ClickHouse/pull/71560) ([Ali](https://github.com/xogoodnow)).
-- Added an option to select the side of the join that will act as the inner (build) table in the query plan. This is controlled by `query_plan_join_swap_table`, which can be set to `auto`. In this mode, ClickHouse will try to choose the table with the smallest number of rows. [#71577](https://github.com/ClickHouse/ClickHouse/pull/71577) ([Vladimir Cherkasov](https://github.com/vdimir)).
-- Optimized memory usage for values of index granularity if granularity is constant for part. Added an ability to always select constant granularity for part (setting `use_const_adaptive_granularity`), which helps to ensure that it is always optimized in memory. It helps in large workloads (trillions of rows in shared storage) to avoid constantly growing memory usage by metadata (values of index granularity) of data parts. [#71786](https://github.com/ClickHouse/ClickHouse/pull/71786) ([Anton Popov](https://github.com/CurtizJ)).
-- Implement `allowed_feature_tier` as a global switch to disable all experimental / beta features. [#71841](https://github.com/ClickHouse/ClickHouse/pull/71841) ([Raúl Marín](https://github.com/Algunenano)).
-- Add `iceberg[S3;HDFS;Azure]Cluster`, `deltaLakeCluster`, `hudiCluster` table functions. [#72045](https://github.com/ClickHouse/ClickHouse/pull/72045) ([Mikhail Artemenko](https://github.com/Michicosun)).
-- Add syntax `ALTER USER {ADD|MODIFY|DROP SETTING}`, `ALTER USER {ADD|DROP PROFILE}`, the same for `ALTER ROLE` and `ALTER PROFILE`. [#72050](https://github.com/ClickHouse/ClickHouse/pull/72050) ([pufit](https://github.com/pufit)).
-- Added `arrayPrAUC` function, which calculates the AUC (Area Under the Curve) for the Precision Recall curve. [#72073](https://github.com/ClickHouse/ClickHouse/pull/72073) ([Emmanuel](https://github.com/emmanuelsdias)).
-- Added cache for primary index of `MergeTree` tables (can be enabled by table setting `use_primary_key_cache`). If lazy load and cache are enabled for primary index, it will be loaded to cache on demand (similar to mark cache) instead of keeping it in memory forever. Added prewarm of primary index on inserts/merges/fetches of data parts and on restarts of the table (can be enabled by setting `prewarm_primary_key_cache`). [#72102](https://github.com/ClickHouse/ClickHouse/pull/72102) ([Anton Popov](https://github.com/CurtizJ)).
-- Add `indexOfAssumeSorted` function for array types. Optimizes the search in the case of an array sorted in non-decreasing order. [#72517](https://github.com/ClickHouse/ClickHouse/pull/72517) ([Eric Kurbanov](https://github.com/erickurbanov)).
-- Allows using a delimiter as an optional second argument for aggregate function `groupConcat`. [#72540](https://github.com/ClickHouse/ClickHouse/pull/72540) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-- A new setting, `http_response_headers` which allows you to customize the HTTP response headers. For example, you can tell the browser to render a picture that is stored in the database. This closes [#59620](https://github.com/ClickHouse/ClickHouse/issues/59620). [#72656](https://github.com/ClickHouse/ClickHouse/pull/72656) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Add function `fromUnixTimestamp64Second` which converts a Int64 Unix timestamp value to a DateTime64. [#73146](https://github.com/ClickHouse/ClickHouse/pull/73146) ([Robert Schulze](https://github.com/rschu1ze)).
-
-## Performance Improvements {#performance-improvements}
-
-- Add 2 new settings `short_circuit_function_evaluation_for_nulls` and `short_circuit_function_evaluation_for_nulls_threshold` that allow executing functions over `Nullable` columns in a short-circuit manner when the ratio of NULL values in the block of data exceeds the specified threshold. This means the function will be executed only on rows with non-null values. It applies only to functions that return NULL for rows where at least one argument is NULL. [#60129](https://github.com/ClickHouse/ClickHouse/pull/60129) ([李扬](https://github.com/taiyang-li)).
-- Memory usage of `clickhouse disks remove --recursive` is reduced for object storage disks. [#67323](https://github.com/ClickHouse/ClickHouse/pull/67323) ([Kirill](https://github.com/kirillgarbar)).
-- Now we won't copy input block columns for `join_algorithm='parallel_hash'` when distributing them between threads for parallel processing. [#67782](https://github.com/ClickHouse/ClickHouse/pull/67782) ([Nikita Taranov](https://github.com/nickitat)).
-- Enable JIT compilation for more expressions: `abs`/`bitCount`/`sign`/`modulo`/`pmod`/`isNull`/`isNotNull`/`assumeNotNull`/`to(U)Int*`/`toFloat*`, comparison functions(`=`, `<`, `>`, `>=`, `<=`), logical functions(`and`, `or`). [#70598](https://github.com/ClickHouse/ClickHouse/pull/70598) ([李扬](https://github.com/taiyang-li)).
-- Now `parallel_hash` algorithm will be used (if applicable) when `join_algorithm` setting is set to `default`. Two previous alternatives (`direct` and `hash`) are still considered when `parallel_hash` cannot be used. [#70788](https://github.com/ClickHouse/ClickHouse/pull/70788) ([Nikita Taranov](https://github.com/nickitat)).
-- Optimized the `Replacing` merge algorithm for non-intersecting parts. [#70977](https://github.com/ClickHouse/ClickHouse/pull/70977) ([Anton Popov](https://github.com/CurtizJ)).
-- Do not list detached parts from readonly and write-once disks for metrics and system.detached_parts. [#71086](https://github.com/ClickHouse/ClickHouse/pull/71086) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Do not calculate heavy asynchronous metrics by default. The feature was introduced in [#40332](https://github.com/ClickHouse/ClickHouse/issues/40332), but it isn't good to have a heavy background job that is needed for only a single customer. [#71087](https://github.com/ClickHouse/ClickHouse/pull/71087) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Improve the performance and accuracy of system.query_metric_log collection interval by reducing the critical region. [#71473](https://github.com/ClickHouse/ClickHouse/pull/71473) ([Pablo Marcos](https://github.com/pamarcos)).
-- Add option to extract common expressions from `WHERE` and `ON` expressions in order to reduce the number of hash tables used during joins. Can be enabled by `optimize_extract_common_expressions = 1`. [#71537](https://github.com/ClickHouse/ClickHouse/pull/71537) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
-- Allow using indexes on `SELECT` with `LowCardinality(String)`. [#71598](https://github.com/ClickHouse/ClickHouse/pull/71598) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-- During query execution with parallel replicas and enabled local plan, skip index analysis on workers. The coordinator will choose ranges to read for workers based on index analysis on its side (on query initiator). [#72109](https://github.com/ClickHouse/ClickHouse/pull/72109) ([Igor Nikonov](https://github.com/devcrafter)).
-- Bring back optimization for reading subcolumns of single column in Compact parts from https://github.com/ClickHouse/ClickHouse/pull/57631. It was deleted accidentally. [#72285](https://github.com/ClickHouse/ClickHouse/pull/72285) ([Pavel Kruglov](https://github.com/Avogar)).
-- Speedup sorting of `LowCardinality(String)` columns by de-virtualizing calls in comparator. [#72337](https://github.com/ClickHouse/ClickHouse/pull/72337) ([Alexander Gololobov](https://github.com/davenger)).
-- Optimize function argMin/Max for some simple data types. [#72350](https://github.com/ClickHouse/ClickHouse/pull/72350) ([alesapin](https://github.com/alesapin)).
-- Optimize locking with shared locks in the memory tracker to reduce lock contention. [#72375](https://github.com/ClickHouse/ClickHouse/pull/72375) ([Jiebin Sun](https://github.com/jiebinn)).
-- Add a new setting, `use_async_executor_for_materialized_views`, to use async and potentially multithreaded execution of materialized view queries. It can speed up view processing during INSERT, but also consumes more memory. [#72497](https://github.com/ClickHouse/ClickHouse/pull/72497) ([alesapin](https://github.com/alesapin)).
-- Default values for settings `max_size_to_preallocate_for_aggregation`, `max_size_to_preallocate_for_joins` were further increased to `10^12`, so the optimisation will be applied in more cases. [#72555](https://github.com/ClickHouse/ClickHouse/pull/72555) ([Nikita Taranov](https://github.com/nickitat)).
-- Improved performance of deserialization of states of aggregate functions (in data type `AggregateFunction` and in distributed queries). Slightly improved performance of parsing of format `RowBinary`. [#72818](https://github.com/ClickHouse/ClickHouse/pull/72818) ([Anton Popov](https://github.com/CurtizJ)).
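-
-A minimal sketch of the NULL short-circuit settings described above (hedged: the table `t`, its Nullable column `s`, and the threshold value are illustrative assumptions, not defaults):
-
-```sql
-SET short_circuit_function_evaluation_for_nulls = 1;
-SET short_circuit_function_evaluation_for_nulls_threshold = 0.5;
--- If more than 50% of values of `s` in a block are NULL,
--- `length` is evaluated only on the non-NULL rows.
-SELECT length(s) FROM t;
-```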
-
-## Improvement {#improvement}
-
-- Higher-order functions with constant arrays and constant captured arguments will return constants. [#58400](https://github.com/ClickHouse/ClickHouse/pull/58400) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Read-in-order optimization via generating virtual rows, so that less data is read during merge sort; this is especially useful when multiple parts exist. [#62125](https://github.com/ClickHouse/ClickHouse/pull/62125) ([Shichao Jin](https://github.com/jsc0218)).
-- Query plan step names (`EXPLAIN PLAN json=1`) and pipeline processor names (`EXPLAIN PIPELINE compact=0,graph=1`) now have a unique id as a suffix. This allows matching processor profiler output and OpenTelemetry traces with the explain output. [#63518](https://github.com/ClickHouse/ClickHouse/pull/63518) ([qhsong](https://github.com/qhsong)).
-- Added an option to check that an object exists after writing to Azure Blob Storage; this is controlled by the setting `check_objects_after_upload`. [#64847](https://github.com/ClickHouse/ClickHouse/pull/64847) ([Smita Kulkarni](https://github.com/SmitaRKulkarni)).
-- Fix use-after-dtor logic in HashTable destroyElements. [#65279](https://github.com/ClickHouse/ClickHouse/pull/65279) ([cangyin](https://github.com/cangyin)).
-- Use `Atomic` database by default in `clickhouse-local`. Address items 1 and 5 from [#50647](https://github.com/ClickHouse/ClickHouse/issues/50647). Closes [#44817](https://github.com/ClickHouse/ClickHouse/issues/44817). [#68024](https://github.com/ClickHouse/ClickHouse/pull/68024) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- A write buffer has to be canceled or finalized explicitly. Exceptions break the HTTP protocol in order to alert the client about an error. [#68800](https://github.com/ClickHouse/ClickHouse/pull/68800) ([Sema Checherinda](https://github.com/CheSema)).
-- Report running DDLWorker hosts by creating replica_dir and mark replicas active in DDLWorker. [#69658](https://github.com/ClickHouse/ClickHouse/pull/69658) ([Tuan Pham Anh](https://github.com/tuanpach)).
-- Refactor `DDLQueryStatusSource`: rename it to `DistributedQueryStatusSource` and make it a base class; create two subclasses, `DDLOnClusterQueryStatusSource` and `ReplicatedDatabaseQueryStatusSource`, derived from it, to query the status of DDL tasks from DDL on Cluster and Replicated databases, respectively. Also, support stopping waiting for offline hosts in `DDLOnClusterQueryStatusSource`. [#69660](https://github.com/ClickHouse/ClickHouse/pull/69660) ([Tuan Pham Anh](https://github.com/tuanpach)).
-- Add new cancellation logic: `CancellationChecker` checks timeouts for every started query and stops them once the timeout has been reached. [#69880](https://github.com/ClickHouse/ClickHouse/pull/69880) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-- Remove the `allow_experimental_join_condition` setting, allowing non-equi conditions by default. [#69910](https://github.com/ClickHouse/ClickHouse/pull/69910) ([Vladimir Cherkasov](https://github.com/vdimir)).
-- Enable `parallel_replicas_local_plan` by default. Building a full-fledged local plan on the query initiator improves parallel replicas performance with less resource consumption, provides opportunities to apply more query optimizations. [#70171](https://github.com/ClickHouse/ClickHouse/pull/70171) ([Igor Nikonov](https://github.com/devcrafter)).
-- Add ability to set user/password in http_handlers (for `dynamic_query_handler`/`predefined_query_handler`). [#70725](https://github.com/ClickHouse/ClickHouse/pull/70725) ([Azat Khuzhin](https://github.com/azat)).
-- Support `ALTER TABLE ... MODIFY/RESET SETTING ...` for certain settings in storage S3Queue. [#70811](https://github.com/ClickHouse/ClickHouse/pull/70811) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Do not call the object storage API when listing directories, as this may be cost-inefficient. Instead, store the list of filenames in the memory. The trade-offs are increased initial load time and memory required to store filenames. [#70823](https://github.com/ClickHouse/ClickHouse/pull/70823) ([Julia Kartseva](https://github.com/jkartseva)).
-- Add a `--threads` parameter to `clickhouse-compressor`, which allows compressing data in parallel. [#70860](https://github.com/ClickHouse/ClickHouse/pull/70860) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Make the Replxx client history size configurable. [#71014](https://github.com/ClickHouse/ClickHouse/pull/71014) ([Jiří Kozlovský](https://github.com/jirislav)).
-- Added a setting `prewarm_mark_cache` which enables loading of marks to mark cache on inserts, merges, fetches of parts and on startup of the table. [#71053](https://github.com/ClickHouse/ClickHouse/pull/71053) ([Anton Popov](https://github.com/CurtizJ)).
-- Boolean support for parquet native reader. [#71055](https://github.com/ClickHouse/ClickHouse/pull/71055) ([Arthur Passos](https://github.com/arthurpassos)).
-- Retry more errors when interacting with S3, such as "Malformed message". [#71088](https://github.com/ClickHouse/ClickHouse/pull/71088) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Lower log level for some messages about S3. [#71090](https://github.com/ClickHouse/ClickHouse/pull/71090) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Support writing HDFS files with spaces in their names. [#71105](https://github.com/ClickHouse/ClickHouse/pull/71105) ([exmy](https://github.com/exmy)).
-- `system.session_log` is quite okay. This closes [#51760](https://github.com/ClickHouse/ClickHouse/issues/51760). [#71150](https://github.com/ClickHouse/ClickHouse/pull/71150) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Fix RIGHT/FULL joins in queries with parallel replicas. Now, RIGHT joins can be executed with parallel replicas (right-table reading is distributed). FULL joins can't be parallelized among nodes and are executed locally. [#71162](https://github.com/ClickHouse/ClickHouse/pull/71162) ([Igor Nikonov](https://github.com/devcrafter)).
-- Added settings limiting the number of replicated tables, dictionaries and views. [#71179](https://github.com/ClickHouse/ClickHouse/pull/71179) ([Kirill](https://github.com/kirillgarbar)).
-- Fixes [#71227](https://github.com/ClickHouse/ClickHouse/issues/71227). [#71286](https://github.com/ClickHouse/ClickHouse/pull/71286) ([Arthur Passos](https://github.com/arthurpassos)).
-- Automatic spilling of `GROUP BY`/`ORDER BY` to disk based on the server/user memory usage. Controlled with the `max_bytes_ratio_before_external_group_by`/`max_bytes_ratio_before_external_sort` query settings. [#71406](https://github.com/ClickHouse/ClickHouse/pull/71406) ([Azat Khuzhin](https://github.com/azat)).
-- Add per host dashboards `Overview (host)` and `Cloud overview (host)` to advanced dashboard. [#71422](https://github.com/ClickHouse/ClickHouse/pull/71422) ([alesapin](https://github.com/alesapin)).
-- Function `translate` now supports character deletion if the `from` argument contains more characters than the `to` argument. Example: `SELECT translate('clickhouse', 'clickhouse', 'CLICK')` now returns `CLICK`. [#71441](https://github.com/ClickHouse/ClickHouse/pull/71441) ([shuai.xu](https://github.com/shuai-xu)).
-- Added new functions `parseDateTime64`, `parseDateTime64OrNull` and `parseDateTime64OrZero`. Compared to the existing function `parseDateTime` (and variants), they return a value of type `DateTime64` instead of `DateTime`. [#71581](https://github.com/ClickHouse/ClickHouse/pull/71581) ([kevinyhzou](https://github.com/KevinyhZou)).
-- Shrink the `index_granularity` array in memory to fit, reducing the memory footprint for the MergeTree table engine family. [#71595](https://github.com/ClickHouse/ClickHouse/pull/71595) ([alesapin](https://github.com/alesapin)).
-- The command line applications will highlight syntax even for multi-statements. [#71622](https://github.com/ClickHouse/ClickHouse/pull/71622) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Command-line applications will return non-zero exit codes on errors. In previous versions, the `disks` application returned zero on errors, and other applications returned zero for errors 256 (`PARTITION_ALREADY_EXISTS`) and 512 (`SET_NON_GRANTED_ROLE`). [#71623](https://github.com/ClickHouse/ClickHouse/pull/71623) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- The `Vertical` format (which is also activated when you end your query with `\G`) gets the features of Pretty formats, such as: - highlighting thousand groups in numbers; - printing a readable number tip. [#71630](https://github.com/ClickHouse/ClickHouse/pull/71630) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Allow to disable memory buffer increase for filesystem cache via setting `filesystem_cache_prefer_bigger_buffer_size`. [#71640](https://github.com/ClickHouse/ClickHouse/pull/71640) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Add a separate setting `background_download_max_file_segment_size` for background download max file segment size in filesystem cache. [#71648](https://github.com/ClickHouse/ClickHouse/pull/71648) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Changes the default value of `enable_http_compression` from 0 to 1. Closes [#71591](https://github.com/ClickHouse/ClickHouse/issues/71591). [#71774](https://github.com/ClickHouse/ClickHouse/pull/71774) ([Peter Nguyen](https://github.com/petern48)).
-- Support ALTER from Object to JSON. [#71784](https://github.com/ClickHouse/ClickHouse/pull/71784) ([Pavel Kruglov](https://github.com/Avogar)).
-- Slightly better JSON type parsing: if the current block for the JSON path contains values of several types, try to choose the best type by trying types in a special best-effort order. [#71785](https://github.com/ClickHouse/ClickHouse/pull/71785) ([Pavel Kruglov](https://github.com/Avogar)).
-- Previously, reading from `system.asynchronous_metrics` would wait for a concurrent update to finish. This can take a long time if the system is under heavy load. With this change, the previously collected values can always be read. [#71798](https://github.com/ClickHouse/ClickHouse/pull/71798) ([Alexander Gololobov](https://github.com/davenger)).
-- Set `polling_max_timeout_ms` to 10 minutes, `polling_backoff_ms` to 30 seconds. [#71817](https://github.com/ClickHouse/ClickHouse/pull/71817) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Queries like `SELECT * FROM t LIMIT 1` used to load part indexes even though they were not used. [#71866](https://github.com/ClickHouse/ClickHouse/pull/71866) ([Alexander Gololobov](https://github.com/davenger)).
-- `allow_reorder_prewhere_conditions` is on by default with old compatibility settings. [#71867](https://github.com/ClickHouse/ClickHouse/pull/71867) ([Raúl Marín](https://github.com/Algunenano)).
-- Do not increment the `ILLEGAL_TYPE_OF_ARGUMENT` counter in the `system.errors` table when the `bitmapTransform` function is used, and argument types are valid. [#71971](https://github.com/ClickHouse/ClickHouse/pull/71971) ([Dmitry Novik](https://github.com/novikd)).
-- When retrieving data directly from a dictionary using Dictionary storage, dictionary table function, or direct SELECT from the dictionary itself, it is now enough to have `SELECT` permission or `dictGet` permission for the dictionary. This aligns with previous attempts to prevent ACL bypasses: https://github.com/ClickHouse/ClickHouse/pull/57362 and https://github.com/ClickHouse/ClickHouse/pull/65359. It also makes the latter one backward compatible. [#72051](https://github.com/ClickHouse/ClickHouse/pull/72051) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-- On the advanced dashboard HTML page added a dropdown selector for the dashboard from `system.dashboards` table. [#72081](https://github.com/ClickHouse/ClickHouse/pull/72081) ([Sergei Trifonov](https://github.com/serxa)).
-- Respect `prefer_localhost_replica` when building the plan for distributed `INSERT ... SELECT`. [#72190](https://github.com/ClickHouse/ClickHouse/pull/72190) ([filimonov](https://github.com/filimonov)).
-- The problem is [described here](https://github.com/ClickHouse/ClickHouse/issues/72091): the Azure Iceberg Writer creates Iceberg metadata files (as well as manifest files) that violate the spec. Added an attempt to read v1 Iceberg format metadata with the v2 reader (since that is how those files are written), and added an error when the corresponding fields are missing in a manifest file. [#72277](https://github.com/ClickHouse/ClickHouse/pull/72277) ([Daniil Ivanik](https://github.com/divanik)).
-- Move JSON/Dynamic/Variant types from experimental features to beta. [#72294](https://github.com/ClickHouse/ClickHouse/pull/72294) ([Pavel Kruglov](https://github.com/Avogar)).
-- Now it's allowed to `CREATE MATERIALIZED VIEW` with `UNION [ALL]` in the query. The behavior is the same as for a materialized view with `JOIN`: **only the first table in the `SELECT` expression will work as a trigger for insert**; all other tables will be ignored. [#72347](https://github.com/ClickHouse/ClickHouse/pull/72347) ([alesapin](https://github.com/alesapin)).
-- Speed up insertions into merge tree in case of a single value of partition key inside inserted batch. [#72348](https://github.com/ClickHouse/ClickHouse/pull/72348) ([alesapin](https://github.com/alesapin)).
-- Add the new MergeTreeIndexGranularityInternalArraysTotalSize metric to system.metrics. This metric is needed to find the instances with huge datasets susceptible to the high memory usage issue. [#72490](https://github.com/ClickHouse/ClickHouse/pull/72490) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
-- All spellings of the word `Null` are now recognized when a query uses `Format Null`. Previously, other forms (e.g. `NULL`) did not result in exceptions being thrown, but at the same time the `Null` format wasn't actually used in those cases. [#72658](https://github.com/ClickHouse/ClickHouse/pull/72658) ([Nikita Taranov](https://github.com/nickitat)).
-- Allow unknown values in a set that are not present in the Enum. Fixes [#72662](https://github.com/ClickHouse/ClickHouse/issues/72662). [#72686](https://github.com/ClickHouse/ClickHouse/pull/72686) ([zhanglistar](https://github.com/zhanglistar)).
-- Add `total_bytes_with_inactive` to `system.tables` to count the total bytes of inactive parts. [#72690](https://github.com/ClickHouse/ClickHouse/pull/72690) ([Kai Zhu](https://github.com/nauu)).
-- Add MergeTreeSettings to system.settings_changes. [#72694](https://github.com/ClickHouse/ClickHouse/pull/72694) ([Raúl Marín](https://github.com/Algunenano)).
-- Support string search operators (e.g. `LIKE`) for the Enum data type. Fixes [#72661](https://github.com/ClickHouse/ClickHouse/issues/72661). [#72732](https://github.com/ClickHouse/ClickHouse/pull/72732) ([zhanglistar](https://github.com/zhanglistar)).
-- Support JSON type in notEmpty function. [#72741](https://github.com/ClickHouse/ClickHouse/pull/72741) ([Pavel Kruglov](https://github.com/Avogar)).
-- Support parsing GCS S3 error `AuthenticationRequired`. [#72753](https://github.com/ClickHouse/ClickHouse/pull/72753) ([Vitaly Baranov](https://github.com/vitlibar)).
-- Support Dynamic type in functions ifNull and coalesce. [#72772](https://github.com/ClickHouse/ClickHouse/pull/72772) ([Pavel Kruglov](https://github.com/Avogar)).
-- Added `JoinBuildTableRowCount/JoinProbeTableRowCount/JoinResultRowCount` profile events. [#72842](https://github.com/ClickHouse/ClickHouse/pull/72842) ([Vladimir Cherkasov](https://github.com/vdimir)).
-- Support Dynamic in functions `toFloat64`/`toUInt32`/etc. [#72989](https://github.com/ClickHouse/ClickHouse/pull/72989) ([Pavel Kruglov](https://github.com/Avogar)).
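-
-The automatic spill-to-disk behavior mentioned above can be sketched as follows (hedged: the ratio values and the `events` table are illustrative assumptions, not recommended defaults):
-
-```sql
--- Spill GROUP BY / ORDER BY state to disk once the query's memory
--- usage reaches the given fraction of the available memory.
-SET max_bytes_ratio_before_external_group_by = 0.5;
-SET max_bytes_ratio_before_external_sort = 0.5;
-SELECT user_id, count() FROM events GROUP BY user_id ORDER BY user_id;
-```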
-
-## Bug Fix (user-visible misbehavior in an official stable release) {#bug-fix}
-
-- The parts deduplicated during `ATTACH PART` query don't get stuck with the `attaching_` prefix anymore. [#65636](https://github.com/ClickHouse/ClickHouse/pull/65636) ([Kirill](https://github.com/kirillgarbar)).
-- Fix a bug where DateTime64 loses precision in the `IN` function. [#67230](https://github.com/ClickHouse/ClickHouse/pull/67230) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-- Fix possible logical error when using functions with `IGNORE/RESPECT NULLS` in `ORDER BY ... WITH FILL`, close [#57609](https://github.com/ClickHouse/ClickHouse/issues/57609). [#68234](https://github.com/ClickHouse/ClickHouse/pull/68234) ([Vladimir Cherkasov](https://github.com/vdimir)).
-- Fixed rare logical errors in asynchronous inserts with format `Native` in case of reached memory limit. [#68965](https://github.com/ClickHouse/ClickHouse/pull/68965) ([Anton Popov](https://github.com/CurtizJ)).
-- Fix COMMENT in CREATE TABLE for EPHEMERAL column. [#70458](https://github.com/ClickHouse/ClickHouse/pull/70458) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-- Fix logical error in JSONExtract with LowCardinality(Nullable). [#70549](https://github.com/ClickHouse/ClickHouse/pull/70549) ([Pavel Kruglov](https://github.com/Avogar)).
-- Fixes behaviour when table name is too long. [#70810](https://github.com/ClickHouse/ClickHouse/pull/70810) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-- Add ability to override Content-Type by user headers in the URL engine. [#70859](https://github.com/ClickHouse/ClickHouse/pull/70859) ([Artem Iurin](https://github.com/ortyomka)).
-- Fix logical error in `StorageS3Queue` "Cannot create a persistent node in /processed since it already exists". [#70984](https://github.com/ClickHouse/ClickHouse/pull/70984) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Fix a bug where the `_row_exists` column was not considered in the rebuild option of projections during lightweight delete. [#71089](https://github.com/ClickHouse/ClickHouse/pull/71089) ([Shichao Jin](https://github.com/jsc0218)).
-- Fix wrong value in system.query_metric_log due to unexpected race condition. [#71124](https://github.com/ClickHouse/ClickHouse/pull/71124) ([Pablo Marcos](https://github.com/pamarcos)).
-- Fix mismatched aggregate function name of `quantileExactWeightedInterpolated`. The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/69619. [#71168](https://github.com/ClickHouse/ClickHouse/pull/71168) ([李扬](https://github.com/taiyang-li)).
-- Fix bad_weak_ptr exception with Dynamic in functions comparison. [#71183](https://github.com/ClickHouse/ClickHouse/pull/71183) ([Pavel Kruglov](https://github.com/Avogar)).
-- Don't delete a blob when there are nodes using it in ReplicatedMergeTree with zero-copy replication. [#71186](https://github.com/ClickHouse/ClickHouse/pull/71186) ([Antonio Andelic](https://github.com/antonio2368)).
-- Fix ignoring format settings in Native format via HTTP and Async Inserts. [#71193](https://github.com/ClickHouse/ClickHouse/pull/71193) ([Pavel Kruglov](https://github.com/Avogar)).
-- SELECT queries run with setting `use_query_cache = 1` are no longer rejected if the name of a system table appears as a literal, e.g. `SELECT * FROM users WHERE name = 'system.metrics' SETTINGS use_query_cache = true;` now works. [#71254](https://github.com/ClickHouse/ClickHouse/pull/71254) ([Robert Schulze](https://github.com/rschu1ze)).
-- Fix a bug where memory usage increased if `enable_filesystem_cache = 1` but the disk in the storage configuration did not have any cache configuration. [#71261](https://github.com/ClickHouse/ClickHouse/pull/71261) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Fix possible "Cannot read all data" errors during deserialization of a LowCardinality dictionary from a Dynamic column. [#71299](https://github.com/ClickHouse/ClickHouse/pull/71299) ([Pavel Kruglov](https://github.com/Avogar)).
-- Fix incomplete cleanup of parallel output format in the client. [#71304](https://github.com/ClickHouse/ClickHouse/pull/71304) ([Raúl Marín](https://github.com/Algunenano)).
-- Added missing unescaping in named collections. Without fix clickhouse-server can't start. [#71308](https://github.com/ClickHouse/ClickHouse/pull/71308) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
-- Fix async inserts with empty blocks via native protocol. [#71312](https://github.com/ClickHouse/ClickHouse/pull/71312) ([Anton Popov](https://github.com/CurtizJ)).
-- Fix inconsistent AST formatting when granting wrong wildcard grants [#71309](https://github.com/ClickHouse/ClickHouse/issues/71309). [#71332](https://github.com/ClickHouse/ClickHouse/pull/71332) ([pufit](https://github.com/pufit)).
-- Check suspicious and experimental types in JSON type hints. [#71369](https://github.com/ClickHouse/ClickHouse/pull/71369) ([Pavel Kruglov](https://github.com/Avogar)).
-- Fix error Invalid number of rows in Chunk with Variant column. [#71388](https://github.com/ClickHouse/ClickHouse/pull/71388) ([Pavel Kruglov](https://github.com/Avogar)).
-- Fix crash in `mongodb` table function when passing wrong arguments (e.g. `NULL`). [#71426](https://github.com/ClickHouse/ClickHouse/pull/71426) ([Vladimir Cherkasov](https://github.com/vdimir)).
-- Fix crash with optimize_rewrite_array_exists_to_has. [#71432](https://github.com/ClickHouse/ClickHouse/pull/71432) ([Raúl Marín](https://github.com/Algunenano)).
-- Fix NoSuchKey error during transaction rollback when creating a directory fails for the `plain_rewritable` disk. [#71439](https://github.com/ClickHouse/ClickHouse/pull/71439) ([Julia Kartseva](https://github.com/jkartseva)).
-- Fixed the usage of setting `max_insert_delayed_streams_for_parallel_write` in inserts. Previously it worked incorrectly which could lead to high memory usage in inserts which write data into several partitions. [#71474](https://github.com/ClickHouse/ClickHouse/pull/71474) ([Anton Popov](https://github.com/CurtizJ)).
-- Fix possible error `Argument for function must be constant` (old analyzer) in case when arrayJoin can apparently appear in `WHERE` condition. Regression after https://github.com/ClickHouse/ClickHouse/pull/65414. [#71476](https://github.com/ClickHouse/ClickHouse/pull/71476) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-- Prevent crash in SortCursor with 0 columns (old analyzer). [#71494](https://github.com/ClickHouse/ClickHouse/pull/71494) ([Raúl Marín](https://github.com/Algunenano)).
-- Fix Date32 out-of-range errors caused by uninitialized ORC data. For more details, refer to https://github.com/apache/incubator-gluten/issues/7823. [#71500](https://github.com/ClickHouse/ClickHouse/pull/71500) ([李扬](https://github.com/taiyang-li)).
-- Fix counting column size in wide part for Dynamic and JSON types. [#71526](https://github.com/ClickHouse/ClickHouse/pull/71526) ([Pavel Kruglov](https://github.com/Avogar)).
-- Analyzer fix when query inside materialized view uses IN with CTE. Closes [#65598](https://github.com/ClickHouse/ClickHouse/issues/65598). [#71538](https://github.com/ClickHouse/ClickHouse/pull/71538) ([Maksim Kita](https://github.com/kitaisreal)).
-- Return 0 or the default char instead of throwing an error in `bitShift` functions in case of out-of-bounds shifts. [#71580](https://github.com/ClickHouse/ClickHouse/pull/71580) ([Pablo Marcos](https://github.com/pamarcos)).
-- Fix server crashes while using materialized view with certain engines. [#71593](https://github.com/ClickHouse/ClickHouse/pull/71593) ([Pervakov Grigorii](https://github.com/GrigoryPervakov)).
-- Array join with a nested data structure, which contains an alias to a constant array was leading to a null pointer dereference. This closes [#71677](https://github.com/ClickHouse/ClickHouse/issues/71677). [#71678](https://github.com/ClickHouse/ClickHouse/pull/71678) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Fix LOGICAL_ERROR when doing ALTER with empty tuple. This fixes [#71647](https://github.com/ClickHouse/ClickHouse/issues/71647). [#71679](https://github.com/ClickHouse/ClickHouse/pull/71679) ([Amos Bird](https://github.com/amosbird)).
-- Don't transform constant set in predicates over partition columns in case of NOT IN operator. [#71695](https://github.com/ClickHouse/ClickHouse/pull/71695) ([Eduard Karacharov](https://github.com/korowa)).
-- Fix CAST from LowCardinality(Nullable) to Dynamic. Previously it could lead to error `Bad cast from type DB::ColumnVector to DB::ColumnNullable`. [#71742](https://github.com/ClickHouse/ClickHouse/pull/71742) ([Pavel Kruglov](https://github.com/Avogar)).
-- Fix exception for toDayOfWeek on WHERE condition with primary key of DateTime64 type. [#71849](https://github.com/ClickHouse/ClickHouse/pull/71849) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-- Fixed filling of defaults after parsing into sparse columns. [#71854](https://github.com/ClickHouse/ClickHouse/pull/71854) ([Anton Popov](https://github.com/CurtizJ)).
-- Fix GROUPING function error when input is ALIAS on distributed table, close [#68602](https://github.com/ClickHouse/ClickHouse/issues/68602). [#71855](https://github.com/ClickHouse/ClickHouse/pull/71855) ([Vladimir Cherkasov](https://github.com/vdimir)).
-- Fixed SELECT statements that use the `WITH TIES` clause, which might not return enough rows. [#71886](https://github.com/ClickHouse/ClickHouse/pull/71886) ([wxybear](https://github.com/wxybear)).
-- Fix a TOO_LARGE_ARRAY_SIZE exception thrown when a column produced by `arrayWithConstant` evaluation was mistakenly considered to cross the array size limit. [#71894](https://github.com/ClickHouse/ClickHouse/pull/71894) ([Udi](https://github.com/udiz)).
-- `clickhouse-benchmark` reported wrong metrics for queries taking longer than one second. [#71898](https://github.com/ClickHouse/ClickHouse/pull/71898) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-- Fix data race between the progress indicator and the progress table in clickhouse-client. This issue is visible when FROM INFILE is used. Intercept keystrokes during INSERT queries to toggle progress table display. [#71901](https://github.com/ClickHouse/ClickHouse/pull/71901) ([Julia Kartseva](https://github.com/jkartseva)).
-- Fix serialization of Dynamic values in Pretty JSON formats. [#71923](https://github.com/ClickHouse/ClickHouse/pull/71923) ([Pavel Kruglov](https://github.com/Avogar)).
-- Fix rows_processed column in system.s3/azure_queue_log broken in 24.6. Closes [#69975](https://github.com/ClickHouse/ClickHouse/issues/69975). [#71946](https://github.com/ClickHouse/ClickHouse/pull/71946) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Fixed a case where the `s3`/`s3Cluster` functions could return an incomplete result or throw an exception. It occurred when using a glob pattern in the S3 URI (like `pattern/*`) while an empty object existed with the key `pattern/` (such objects are automatically created by the S3 Console). Also, the default value of the setting `s3_skip_empty_files` changed from `false` to `true`. [#71947](https://github.com/ClickHouse/ClickHouse/pull/71947) ([Nikita Taranov](https://github.com/nickitat)).
-- Fix a crash in clickhouse-client syntax highlighting. Closes [#71864](https://github.com/ClickHouse/ClickHouse/issues/71864). [#71949](https://github.com/ClickHouse/ClickHouse/pull/71949) ([Nikolay Degterinsky](https://github.com/evillique)).
-- Fix `Illegal type` error for `MergeTree` tables with binary monotonic function in `ORDER BY` when the first argument is constant. Fixes [#71941](https://github.com/ClickHouse/ClickHouse/issues/71941). [#71966](https://github.com/ClickHouse/ClickHouse/pull/71966) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-- Allow only SELECT queries in EXPLAIN AST used inside subquery. Other types of queries lead to logical error: 'Bad cast from type DB::ASTCreateQuery to DB::ASTSelectWithUnionQuery' or `Inconsistent AST formatting`. [#71982](https://github.com/ClickHouse/ClickHouse/pull/71982) ([Pavel Kruglov](https://github.com/Avogar)).
-- When inserting a record via `clickhouse-client`, the client reads column descriptions from the server, but there was a bug where the descriptions were written in the wrong order; it should be [statistics, ttl, settings]. [#71991](https://github.com/ClickHouse/ClickHouse/pull/71991) ([Han Fei](https://github.com/hanfei1991)).
-- Fix formatting of `MOVE PARTITION ... TO TABLE ...` alter commands when `format_alter_commands_with_parentheses` is enabled. [#72080](https://github.com/ClickHouse/ClickHouse/pull/72080) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
-- Add inferred format name to create query in File/S3/URL/HDFS/Azure engines. Previously the format name was inferred each time the server was restarted, and if the specified data files were removed, it led to errors during server startup. [#72108](https://github.com/ClickHouse/ClickHouse/pull/72108) ([Pavel Kruglov](https://github.com/Avogar)).
-- Fix a bug where `min_age_to_force_merge_on_partition_only` was getting stuck trying to merge down the same partition repeatedly that was already merged to a single part and not merging partitions that had multiple parts. [#72209](https://github.com/ClickHouse/ClickHouse/pull/72209) ([Christoph Wurm](https://github.com/cwurm)).
-- Fixed a crash in `SimpleSquashingChunksTransform` that occurred in rare cases when processing sparse columns. [#72226](https://github.com/ClickHouse/ClickHouse/pull/72226) ([Vladimir Cherkasov](https://github.com/vdimir)).
-- Fixed data race in `GraceHashJoin` as the result of which some rows might be missing in the join output. [#72233](https://github.com/ClickHouse/ClickHouse/pull/72233) ([Nikita Taranov](https://github.com/nickitat)).
-- Fixed `ALTER DELETE` queries with materialized `_block_number` column (if setting `enable_block_number_column` is enabled). [#72261](https://github.com/ClickHouse/ClickHouse/pull/72261) ([Anton Popov](https://github.com/CurtizJ)).
-- Fixed data race when `ColumnDynamic::dumpStructure()` is called concurrently e.g. in `ConcurrentHashJoin` constructor. [#72278](https://github.com/ClickHouse/ClickHouse/pull/72278) ([Nikita Taranov](https://github.com/nickitat)).
-- Fix possible `LOGICAL_ERROR` with duplicate columns in `ORDER BY ... WITH FILL`. [#72387](https://github.com/ClickHouse/ClickHouse/pull/72387) ([Vladimir Cherkasov](https://github.com/vdimir)).
-- Fixed mismatched types in several cases after applying `optimize_functions_to_subcolumns`. [#72394](https://github.com/ClickHouse/ClickHouse/pull/72394) ([Anton Popov](https://github.com/CurtizJ)).
-- Fix failure on parsing `BACKUP DATABASE db EXCEPT TABLES db.table` queries. [#72429](https://github.com/ClickHouse/ClickHouse/pull/72429) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-- Don't allow creating empty Variant. [#72454](https://github.com/ClickHouse/ClickHouse/pull/72454) ([Pavel Kruglov](https://github.com/Avogar)).
-- Fix invalid formatting of `result_part_path` in `system.merges`. [#72567](https://github.com/ClickHouse/ClickHouse/pull/72567) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-- Fix parsing a glob with one element. [#72572](https://github.com/ClickHouse/ClickHouse/pull/72572) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-- Fix query generation for the follower server in case of a distributed query with ARRAY JOIN. Fixes [#69276](https://github.com/ClickHouse/ClickHouse/issues/69276). [#72608](https://github.com/ClickHouse/ClickHouse/pull/72608) ([Dmitry Novik](https://github.com/novikd)).
-- Fix a bug where `DateTime64 IN DateTime64` returned an empty result. [#72640](https://github.com/ClickHouse/ClickHouse/pull/72640) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-- Fix "No such key" error in S3Queue Unordered mode with `tracked_files_limit` setting smaller than s3 files appearance rate. [#72738](https://github.com/ClickHouse/ClickHouse/pull/72738) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Dropping the mark cache might take noticeable time if it is big. Previously, the context mutex was held during this operation, which blocked many other activities; even new client connections could not be established until it was released. Holding this mutex is not actually required for synchronization: a local reference to the cache via a shared pointer is enough. [#72749](https://github.com/ClickHouse/ClickHouse/pull/72749) ([Alexander Gololobov](https://github.com/davenger)).
-- The primary key cache was heavily underestimating its size on one of the test instances. In particular, LowCardinality columns did not include the dictionary size. The fix is to use `column->allocatedBytes()` plus some additional overhead estimates for the cache entry size. [#72750](https://github.com/ClickHouse/ClickHouse/pull/72750) ([Alexander Gololobov](https://github.com/davenger)).
-- Fix exception thrown in RemoteQueryExecutor when user does not exist locally. [#72759](https://github.com/ClickHouse/ClickHouse/pull/72759) ([Andrey Zvonov](https://github.com/zvonand)).
-- Fixed mutations with materialized `_block_number` column (if setting `enable_block_number_column` is enabled). [#72854](https://github.com/ClickHouse/ClickHouse/pull/72854) ([Anton Popov](https://github.com/CurtizJ)).
-- Fix backup/restore with plain rewritable disk in case there are empty files in backup. [#72858](https://github.com/ClickHouse/ClickHouse/pull/72858) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Properly cancel inserts in DistributedAsyncInsertDirectoryQueue. [#72885](https://github.com/ClickHouse/ClickHouse/pull/72885) ([Antonio Andelic](https://github.com/antonio2368)).
-- Fixed a crash while parsing incorrect data into sparse columns (can happen with the setting `enable_parsing_to_custom_serialization` enabled). [#72891](https://github.com/ClickHouse/ClickHouse/pull/72891) ([Anton Popov](https://github.com/CurtizJ)).
-- Fix potential crash during backup restore. [#72947](https://github.com/ClickHouse/ClickHouse/pull/72947) ([Kseniia Sumarokova](https://github.com/kssenii)).
-- Fixed bug in `parallel_hash` JOIN method that might appear when query has complex condition in the `ON` clause with inequality filters. [#72993](https://github.com/ClickHouse/ClickHouse/pull/72993) ([Nikita Taranov](https://github.com/nickitat)).
-- Use default format settings during JSON parsing to avoid broken deserialization. [#73043](https://github.com/ClickHouse/ClickHouse/pull/73043) ([Pavel Kruglov](https://github.com/Avogar)).
-- Fix crash in transactions with unsupported storage. [#73045](https://github.com/ClickHouse/ClickHouse/pull/73045) ([Raúl Marín](https://github.com/Algunenano)).
-- Check for duplicate JSON keys during Tuple parsing. Previously it could lead to logical error `Invalid number of rows in Chunk` during parsing. [#73082](https://github.com/ClickHouse/ClickHouse/pull/73082) ([Pavel Kruglov](https://github.com/Avogar)).
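
The `BACKUP ... EXCEPT TABLES` form whose parsing was fixed above can be sketched as an illustrative query (the `backups` disk name and archive file are assumptions for the example, not taken from the changelog):

```sql
-- Back up a whole database while excluding one table.
BACKUP DATABASE db EXCEPT TABLES db.table
    TO Disk('backups', 'db_backup.zip');
```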
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-5.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-5.md
deleted file mode 100644
index 256c6e4c3be..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-5.md
+++ /dev/null
@@ -1,182 +0,0 @@
----
-slug: /changelogs/24.5
-title: 'v24.5 Changelog for Cloud'
-description: 'Fast release changelog for v24.5'
-keywords: ['changelog', 'cloud']
-sidebar_label: 'v24.5'
----
-
-# v24.5 Changelog for Cloud
-
-Relevant changes for ClickHouse Cloud services based on the v24.5 release.
-
-## Breaking Changes {#breaking-changes}
-
-* Change the column name from duration_ms to duration_microseconds in the system.zookeeper table to reflect the reality that the duration is in the microsecond resolution. [#60774](https://github.com/ClickHouse/ClickHouse/pull/60774) (Duc Canh Le).
-
-* Don't allow setting max_parallel_replicas to 0, as it doesn't make sense. Setting it to 0 could lead to unexpected logical errors. Closes #60140. [#61201](https://github.com/ClickHouse/ClickHouse/pull/61201) (Kruglov Pavel).
-
-* Remove support for INSERT WATCH query (part of the experimental LIVE VIEW feature). [#62382](https://github.com/ClickHouse/ClickHouse/pull/62382) (Alexey Milovidov).
-
-* The functions neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, and runningDifference are deprecated because they are error-prone. Proper window functions should be used instead. To re-enable them, set allow_deprecated_error_prone_window_functions=1. [#63132](https://github.com/ClickHouse/ClickHouse/pull/63132) (Nikita Taranov).
-
-
-## Backward Incompatible Changes {#backward-incompatible-changes}
-
-* In the new ClickHouse version, the functions geoDistance, greatCircleDistance, and greatCircleAngle will use 64-bit double precision floating point data type for internal calculations and return type if all the arguments are Float64. This closes #58476. In previous versions, the function always used Float32. You can switch to the old behavior by setting geo_distance_returns_float64_on_float64_arguments to false or setting compatibility to 24.2 or earlier. [#61848](https://github.com/ClickHouse/ClickHouse/pull/61848) (Alexey Milovidov).
-
-* Queries from system.columns will work faster if there is a large number of columns, but many databases or tables are not granted for SHOW TABLES. Note that in previous versions, if you grant SHOW COLUMNS to individual columns without granting SHOW TABLES to the corresponding tables, the system.columns table will show these columns, but in a new version, it will skip the table entirely. Remove trace log messages "Access granted" and "Access denied" that slowed down queries. [#63439](https://github.com/ClickHouse/ClickHouse/pull/63439) (Alexey Milovidov).
-
-* Fix crash in largestTriangleThreeBuckets. This changes the behaviour of this function and makes it to ignore NaNs in the series provided. Thus the resultset might differ from previous versions. [#62646](https://github.com/ClickHouse/ClickHouse/pull/62646) (Raúl Marín).
-
-## New Features {#new-features}
-
-* The new analyzer is enabled by default on new services.
-
-* Support dropping multiple tables at once, e.g. `DROP TABLE a, b, c;`. [#58705](https://github.com/ClickHouse/ClickHouse/pull/58705) (zhongyuankai).
-
-* User can now parse CRLF with TSV format using a setting input_format_tsv_crlf_end_of_line. Closes #56257. [#59747](https://github.com/ClickHouse/ClickHouse/pull/59747) (Shaun Struwig).
-
-* Table engines are grantable now; this does not affect existing users' behavior. [#60117](https://github.com/ClickHouse/ClickHouse/pull/60117) (jsc0218).
-
-* Adds the Form Format to read/write a single record in the application/x-www-form-urlencoded format. [#60199](https://github.com/ClickHouse/ClickHouse/pull/60199) (Shaun Struwig).
-
-* Added possibility to compress in CROSS JOIN. [#60459](https://github.com/ClickHouse/ClickHouse/pull/60459) (p1rattttt).
-
-* New setting input_format_force_null_for_omitted_fields that forces NULL values for omitted fields. [#60887](https://github.com/ClickHouse/ClickHouse/pull/60887) (Constantine Peresypkin).
-
-* Support joins with inequality conditions involving columns from both the left and right tables, e.g. `t1.y < t2.y`. To enable, SET allow_experimental_join_condition = 1. [#60920](https://github.com/ClickHouse/ClickHouse/pull/60920) (lgbo).
-
-* Add a new function, getClientHTTPHeader. This closes #54665. Co-authored with @lingtaolf. [#61820](https://github.com/ClickHouse/ClickHouse/pull/61820) (Alexey Milovidov).
-
-* For convenience, SELECT * FROM numbers() works the same way as SELECT * FROM system.numbers, i.e. without a limit. [#61969](https://github.com/ClickHouse/ClickHouse/pull/61969) (YenchangChan).
-
-* Modifying memory table settings through ALTER MODIFY SETTING is now supported. ALTER TABLE memory MODIFY SETTING min_rows_to_keep = 100, max_rows_to_keep = 1000;. [#62039](https://github.com/ClickHouse/ClickHouse/pull/62039) (zhongyuankai).
-
-* The analyzer supports recursive CTEs. [#62074](https://github.com/ClickHouse/ClickHouse/pull/62074) (Maksim Kita).
-
-* Previously, the s3 storage and the s3 table function didn't support selecting from archive files. Added the ability to iterate over files inside archives in S3. [#62259](https://github.com/ClickHouse/ClickHouse/pull/62259) (Daniil Ivanik).
-
-* Support for conditional function clamp. [#62377](https://github.com/ClickHouse/ClickHouse/pull/62377) (skyoct).
-
-* Add npy output format. [#62430](https://github.com/ClickHouse/ClickHouse/pull/62430) (豪肥肥).
-
-* The analyzer supports the QUALIFY clause. Closes #47819. [#62619](https://github.com/ClickHouse/ClickHouse/pull/62619) (Maksim Kita).
-
-* Added role query parameter to the HTTP interface. It works similarly to SET ROLE x, applying the role before the statement is executed. This allows for overcoming the limitation of the HTTP interface, as multiple statements are not allowed, and it is not possible to send both SET ROLE x and the statement itself at the same time. It is possible to set multiple roles that way, e.g., ?role=x&role=y, which will be an equivalent of SET ROLE x, y. [#62669](https://github.com/ClickHouse/ClickHouse/pull/62669) (Serge Klochkov).
-
-* Add SYSTEM UNLOAD PRIMARY KEY. [#62738](https://github.com/ClickHouse/ClickHouse/pull/62738) (Pablo Marcos).
-
-* Added SQL functions generateUUIDv7, generateUUIDv7ThreadMonotonic, generateUUIDv7NonMonotonic (with different monotonicity/performance trade-offs) to generate version 7 UUIDs aka. timestamp-based UUIDs with random component. Also added a new function UUIDToNum to extract bytes from a UUID and a new function UUIDv7ToDateTime to extract timestamp component from a UUID version 7. [#62852](https://github.com/ClickHouse/ClickHouse/pull/62852) (Alexey Petrunyaka).
-
-* Raw as a synonym for TSVRaw. [#63394](https://github.com/ClickHouse/ClickHouse/pull/63394) (Unalian).
-
-* Added the ability to perform a cross join in a temporary file if the size exceeds limits. [#63432](https://github.com/ClickHouse/ClickHouse/pull/63432) (p1rattttt).
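
A few of the new 24.5 features above can be sketched as illustrative SQL (table and column names `t1`, `t2`, `key`, `y` are invented for the example):

```sql
-- Drop several tables in one statement.
DROP TABLE IF EXISTS a, b, c;

-- clamp bounds a value to a [min, max] range.
SELECT clamp(10, 1, 5);  -- returns 5

-- QUALIFY filters on window-function results.
SELECT number, sum(number) OVER (PARTITION BY number % 3) AS s
FROM numbers(10)
QUALIFY s > 10;

-- Version 7 (timestamp-based) UUIDs and extracting their timestamp component.
SELECT generateUUIDv7() AS u, UUIDv7ToDateTime(u);

-- Inequality conditions in the ON clause (experimental).
SET allow_experimental_join_condition = 1;
SELECT * FROM t1 JOIN t2 ON t1.key = t2.key AND t1.y < t2.y;
```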
-
-## Performance Improvements {#performance-improvements}
-
-* Skip merging of newly created projection blocks during INSERT-s. [#59405](https://github.com/ClickHouse/ClickHouse/pull/59405) (Nikita Taranov).
-
-* Reduce overhead of the mutations for SELECTs (v2). [#60856](https://github.com/ClickHouse/ClickHouse/pull/60856) (Azat Khuzhin).
-
-* JOIN filter push down improvements using equivalent sets. [#61216](https://github.com/ClickHouse/ClickHouse/pull/61216) (Maksim Kita).
-
-* Add a new analyzer pass to optimize in single value. [#61564](https://github.com/ClickHouse/ClickHouse/pull/61564) (LiuNeng).
-
-* Process string functions XXXUTF8 'asciily' if the input strings are all ASCII characters. Inspired by apache/doris#29799. Overall speedup of 1.07x~1.62x. Peak memory usage also decreased in some cases. [#61632](https://github.com/ClickHouse/ClickHouse/pull/61632) (李扬).
-
-* Enabled fast Parquet encoder by default (output_format_parquet_use_custom_encoder). [#62088](https://github.com/ClickHouse/ClickHouse/pull/62088) (Michael Kolupaev).
-
-* Improve JSONEachRowRowInputFormat by skipping all remaining fields when all required fields are read. [#62210](https://github.com/ClickHouse/ClickHouse/pull/62210) (lgbo).
-
-* Functions splitByChar and splitByRegexp were sped up significantly. [#62392](https://github.com/ClickHouse/ClickHouse/pull/62392) (李扬).
-
-* Improve trivial insert select from files in file/s3/hdfs/url/... table functions. Add separate max_parsing_threads setting to control the number of threads used in parallel parsing. [#62404](https://github.com/ClickHouse/ClickHouse/pull/62404) (Kruglov Pavel).
-
-* Support parallel write buffer for AzureBlobStorage managed by setting azure_allow_parallel_part_upload. [#62534](https://github.com/ClickHouse/ClickHouse/pull/62534) (SmitaRKulkarni).
-
-* Functions to_utc_timestamp and from_utc_timestamp are now about 2x faster. [#62583](https://github.com/ClickHouse/ClickHouse/pull/62583) (KevinyhZou).
-
-* Functions parseDateTimeOrNull, parseDateTimeOrZero, parseDateTimeInJodaSyntaxOrNull and parseDateTimeInJodaSyntaxOrZero now run significantly faster (10x - 1000x) when the input contains mostly non-parseable values. [#62634](https://github.com/ClickHouse/ClickHouse/pull/62634) (LiuNeng).
-
-* Change HostResolver behavior on failure to keep only one record per IP. [#62652](https://github.com/ClickHouse/ClickHouse/pull/62652) (Anton Ivashkin).
-
-* Add a new configuration prefer_merge_sort_block_bytes to control memory usage and speed up sorting by 2x when merging with many columns. [#62904](https://github.com/ClickHouse/ClickHouse/pull/62904) (LiuNeng).
-
-* QueryPlan convert OUTER JOIN to INNER JOIN optimization if filter after JOIN always filters default values. Optimization can be controlled with setting query_plan_convert_outer_join_to_inner_join, enabled by default. [#62907](https://github.com/ClickHouse/ClickHouse/pull/62907) (Maksim Kita).
-
-* Enable optimize_rewrite_sum_if_to_count_if by default. [#62929](https://github.com/ClickHouse/ClickHouse/pull/62929) (Raúl Marín).
-
-* Micro-optimizations for the new analyzer. [#63429](https://github.com/ClickHouse/ClickHouse/pull/63429) (Raúl Marín).
-
-* Index analysis will work if DateTime is compared to DateTime64. This closes #63441. [#63443](https://github.com/ClickHouse/ClickHouse/pull/63443) (Alexey Milovidov).
-
-* Speed up indices of type set a little (around 1.5 times) by removing garbage. [#64098](https://github.com/ClickHouse/ClickHouse/pull/64098) (Alexey Milovidov).
-
-## Improvements {#improvements}
-
-* Remove the optimize_monotonous_functions_in_order_by setting; it is becoming a no-op. [#63004](https://github.com/ClickHouse/ClickHouse/pull/63004) (Raúl Marín).
-
-* Maps can now have Float32, Float64, Array(T), Map(K,V) and Tuple(T1, T2, ...) as keys. Closes #54537. [#59318](https://github.com/ClickHouse/ClickHouse/pull/59318) (李扬).
-
-* Add asynchronous WriteBuffer for AzureBlobStorage similar to S3. [#59929](https://github.com/ClickHouse/ClickHouse/pull/59929) (SmitaRKulkarni).
-
-* Multiline strings with border preservation and column width change. [#59940](https://github.com/ClickHouse/ClickHouse/pull/59940) (Volodyachan).
-
-* Make RabbitMQ nack broken messages. Closes #45350. [#60312](https://github.com/ClickHouse/ClickHouse/pull/60312) (Kseniia Sumarokova).
-
-* Add a setting first_day_of_week which affects the first day of the week considered by functions toStartOfInterval(..., INTERVAL ... WEEK). This allows for consistency with function toStartOfWeek which defaults to Sunday as the first day of the week. [#60598](https://github.com/ClickHouse/ClickHouse/pull/60598) (Jordi Villar).
-
-* Added a persistent virtual column _block_offset which stores the original row number in the block assigned at insert. Persistence of the _block_offset column can be enabled with the setting enable_block_offset_column. Added a virtual column _part_data_version which contains either the min block number or the mutation version of the part. The persistent virtual column _block_number is no longer considered experimental. [#60676](https://github.com/ClickHouse/ClickHouse/pull/60676) (Anton Popov).
-
-* Functions date_diff and age now calculate their result at nanosecond instead of microsecond precision. They now also offer nanosecond (or nanoseconds or ns) as a possible value for the unit parameter. [#61409](https://github.com/ClickHouse/ClickHouse/pull/61409) (Austin Kothig).
-
-* Now marks are not loaded for wide parts during merges. [#61551](https://github.com/ClickHouse/ClickHouse/pull/61551) (Anton Popov).
-
-* Enable output_format_pretty_row_numbers by default. It is better for usability. [#61791](https://github.com/ClickHouse/ClickHouse/pull/61791) (Alexey Milovidov).
-
-* The progress bar will work for trivial queries with LIMIT from system.zeros, system.zeros_mt (it already works for system.numbers and system.numbers_mt), and the generateRandom table function. As a bonus, if the total number of records is greater than the max_rows_to_read limit, it will throw an exception earlier. This closes #58183. [#61823](https://github.com/ClickHouse/ClickHouse/pull/61823) (Alexey Milovidov).
-
-* Add TRUNCATE ALL TABLES. [#61862](https://github.com/ClickHouse/ClickHouse/pull/61862) (豪肥肥).
-
-* Add a setting input_format_json_throw_on_bad_escape_sequence, disabling it allows saving bad escape sequences in JSON input formats. [#61889](https://github.com/ClickHouse/ClickHouse/pull/61889) (Kruglov Pavel).
-
-* Fixed grammar from "a" to "the" in the warning message. There is only one Atomic engine, so it should be "to the new Atomic engine" instead of "to a new Atomic engine". [#61952](https://github.com/ClickHouse/ClickHouse/pull/61952) (shabroo).
-
-* Fix logical-error when undoing quorum insert transaction. [#61953](https://github.com/ClickHouse/ClickHouse/pull/61953) (Han Fei).
-
-* Automatically infer Nullable column types from Apache Arrow schema. [#61984](https://github.com/ClickHouse/ClickHouse/pull/61984) (Maksim Kita).
-
-* Allow to cancel parallel merge of aggregate states during aggregation. Example: uniqExact. [#61992](https://github.com/ClickHouse/ClickHouse/pull/61992) (Maksim Kita).
-
-* Dictionary source with INVALIDATE_QUERY is not reloaded twice on startup. [#62050](https://github.com/ClickHouse/ClickHouse/pull/62050) (vdimir).
-
-* OPTIMIZE FINAL for ReplicatedMergeTree now will wait for currently active merges to finish and then reattempt to schedule a final merge. This will put it more in line with ordinary MergeTree behaviour. [#62067](https://github.com/ClickHouse/ClickHouse/pull/62067) (Nikita Taranov).
-
-* When reading data from a Hive text file, the first line was used to determine the number of input fields. If the first line had fewer fields than the table definition (e.g. a table defined with 3 columns, like test_tbl(a Int32, b Int32, c Int32), but a first line with only 2 fields), the input was resized to 2 fields; subsequent lines with 3 fields then could not read the third field, and it was incorrectly set to the default value 0. This is now fixed. [#62086](https://github.com/ClickHouse/ClickHouse/pull/62086) (KevinyhZou).
-
-* The syntax highlighting while typing in the client will work on the syntax level (previously, it worked on the lexer level). [#62123](https://github.com/ClickHouse/ClickHouse/pull/62123) (Alexey Milovidov).
-
-* Fix an issue where adding a redundant = 1 or = 0 after a boolean expression involving the primary key prevented the primary index from being used. For example, queries of the form `SELECT * FROM ... WHERE ... IN (...) = 1` and `SELECT * FROM ... WHERE ... NOT IN (...) = 0` performed a full table scan even though the primary index could be used. [#62142](https://github.com/ClickHouse/ClickHouse/pull/62142) (josh-hildred).
-
-* Added setting lightweight_deletes_sync (default value: 2 - wait all replicas synchronously). It is similar to setting mutations_sync but affects only behaviour of lightweight deletes. [#62195](https://github.com/ClickHouse/ClickHouse/pull/62195) (Anton Popov).
-
-* Distinguish booleans and integers while parsing values for custom settings: SET custom_a = true; SET custom_b = 1;. [#62206](https://github.com/ClickHouse/ClickHouse/pull/62206) (Vitaly Baranov).
-
-* Support S3 access through AWS Private Link Interface endpoints. Closes #60021, #31074 and #53761. [#62208](https://github.com/ClickHouse/ClickHouse/pull/62208) (Arthur Passos).
-
-* The client has to send the header 'Keep-Alive: timeout=X' to the server. If a client receives a response from the server with that header, the client has to use the value from the server. It is also better for the client not to use a connection that is nearly expired, to avoid a connection-close race. [#62249](https://github.com/ClickHouse/ClickHouse/pull/62249) (Sema Checherinda).
-
-* Added nano-, micro-, and millisecond units for date_trunc. [#62335](https://github.com/ClickHouse/ClickHouse/pull/62335) (Misz606).
-
-* The query cache now no longer caches results of queries against system tables (system.*, information_schema.*, INFORMATION_SCHEMA.*). [#62376](https://github.com/ClickHouse/ClickHouse/pull/62376) (Robert Schulze).
-
-* A MOVE PARTITION TO TABLE query can be delayed or can throw a TOO_MANY_PARTS exception to avoid exceeding limits on the part count. The same settings and limits are applied as for the INSERT query (see max_parts_in_total, parts_to_delay_insert, parts_to_throw_insert, inactive_parts_to_throw_insert, inactive_parts_to_delay_insert, max_avg_part_size_for_too_many_parts, min_delay_to_insert_ms and max_delay_to_insert settings). [#62420](https://github.com/ClickHouse/ClickHouse/pull/62420) (Sergei Trifonov).
-
-* Make transform always return the first match. [#62518](https://github.com/ClickHouse/ClickHouse/pull/62518) (Raúl Marín).
-
-* Avoid evaluating table DEFAULT expressions while executing RESTORE. [#62601](https://github.com/ClickHouse/ClickHouse/pull/62601) (Vitaly Baranov).
-
-* Allow quota key with different auth scheme in HTTP requests. [#62842](https://github.com/ClickHouse/ClickHouse/pull/62842) (Kseniia Sumarokova).
-
-* Close session if user's valid_until is reached. [#63046](https://github.com/ClickHouse/ClickHouse/pull/63046) (Konstantin Bogdanov).
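
Some of the improvements above are easiest to see as queries (a hedged sketch; `db` is an invented database name, exact output depends on server version and timezone, and custom settings require the `custom_` prefix to be allowed in server config):

```sql
-- New sub-millisecond units for date_trunc.
SELECT date_trunc('microsecond', now64(6));

-- Truncate every table in a database with one statement.
TRUNCATE ALL TABLES FROM db;

-- Custom settings now distinguish booleans from integers.
SET custom_a = true;
SET custom_b = 1;
```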
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-6.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-6.md
deleted file mode 100644
index 3dc8d747ea5..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-6.md
+++ /dev/null
@@ -1,140 +0,0 @@
----
-slug: /changelogs/24.6
-title: 'v24.6 Changelog for Cloud'
-description: 'Fast release changelog for v24.6'
-keywords: ['changelog', 'cloud']
-sidebar_label: 'v24.6'
----
-
-# v24.6 Changelog for Cloud
-
-Relevant changes for ClickHouse Cloud services based on the v24.6 release.
-
-## Backward Incompatible Change {#backward-incompatible-change}
-* Rework parallel processing in `Ordered` mode of storage `S3Queue`. This PR is backward incompatible for Ordered mode if you used settings `s3queue_processing_threads_num` or `s3queue_total_shards_num`. Setting `s3queue_total_shards_num` is deleted, previously it was allowed to use only under `s3queue_allow_experimental_sharded_mode`, which is now deprecated. A new setting is added - `s3queue_buckets`. [#64349](https://github.com/ClickHouse/ClickHouse/pull/64349) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* New functions `snowflakeIDToDateTime`, `snowflakeIDToDateTime64`, `dateTimeToSnowflakeID`, and `dateTime64ToSnowflakeID` were added. Unlike the existing functions `snowflakeToDateTime`, `snowflakeToDateTime64`, `dateTimeToSnowflake`, and `dateTime64ToSnowflake`, the new functions are compatible with function `generateSnowflakeID`, i.e. they accept the snowflake IDs generated by `generateSnowflakeID` and produce snowflake IDs of the same type as `generateSnowflakeID` (i.e. `UInt64`). Furthermore, the new functions default to the UNIX epoch (aka. 1970-01-01), just like `generateSnowflakeID`. If necessary, a different epoch, e.g. Twitter's/X's epoch 2010-11-04 aka. 1288834974657 msec since UNIX epoch, can be passed. The old conversion functions are deprecated and will be removed after a transition period: to use them regardless, enable setting `allow_deprecated_snowflake_conversion_functions`. [#64948](https://github.com/ClickHouse/ClickHouse/pull/64948) ([Robert Schulze](https://github.com/rschu1ze)).
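
The new snowflake-ID conversion functions can be sketched as an illustrative round-trip (not a definitive reference for their full signatures):

```sql
SELECT
    generateSnowflakeID() AS id,        -- UInt64 snowflake ID
    snowflakeIDToDateTime(id) AS ts,    -- timestamp component, UNIX epoch by default
    dateTimeToSnowflakeID(now()) AS id2;
```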
-
-## New Feature {#new-feature}
-
-* Support empty tuples. [#55061](https://github.com/ClickHouse/ClickHouse/pull/55061) ([Amos Bird](https://github.com/amosbird)).
-* Add Hilbert Curve encode and decode functions. [#60156](https://github.com/ClickHouse/ClickHouse/pull/60156) ([Artem Mustafin](https://github.com/Artemmm91)).
-* Add support for index analysis over `hilbertEncode`. [#64662](https://github.com/ClickHouse/ClickHouse/pull/64662) ([Artem Mustafin](https://github.com/Artemmm91)).
-* Added support for reading `LINESTRING` geometry in the WKT format using function `readWKTLineString`. [#62519](https://github.com/ClickHouse/ClickHouse/pull/62519) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Added new SQL functions `generateSnowflakeID` for generating Twitter-style Snowflake IDs. [#63577](https://github.com/ClickHouse/ClickHouse/pull/63577) ([Danila Puzov](https://github.com/kazalika)).
-* Add support for comparing `IPv4` and `IPv6` types using the `=` operator. [#64292](https://github.com/ClickHouse/ClickHouse/pull/64292) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)).
-* Support decimal arguments in binary math functions (pow, atan2, max2, min2, hypot). [#64582](https://github.com/ClickHouse/ClickHouse/pull/64582) ([Mikhail Gorshkov](https://github.com/mgorshkov)).
-* Added SQL functions `parseReadableSize` (along with `OrNull` and `OrZero` variants). [#64742](https://github.com/ClickHouse/ClickHouse/pull/64742) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)).
-* Add `_time` virtual column to file alike storages (s3/file/hdfs/url/azureBlobStorage). [#64947](https://github.com/ClickHouse/ClickHouse/pull/64947) ([Ilya Golshtein](https://github.com/ilejn)).
-* Introduced new functions `base64URLEncode`, `base64URLDecode` and `tryBase64URLDecode`. [#64991](https://github.com/ClickHouse/ClickHouse/pull/64991) ([Mikhail Gorshkov](https://github.com/mgorshkov)).
-* Add new function `editDistanceUTF8`, which calculates the [edit distance](https://en.wikipedia.org/wiki/Edit_distance) between two UTF8 strings. [#65269](https://github.com/ClickHouse/ClickHouse/pull/65269) ([LiuNeng](https://github.com/liuneng1994)).
-* Add `http_response_headers` configuration to support custom response headers in custom HTTP handlers. [#63562](https://github.com/ClickHouse/ClickHouse/pull/63562) ([Grigorii](https://github.com/GSokol)).
-* Added a new table function `loop` to support returning query results in an infinite loop. [#63452](https://github.com/ClickHouse/ClickHouse/pull/63452) ([Sariel](https://github.com/sarielwxm)). This is useful for testing.
-* Introduced two additional columns in the `system.query_log`: `used_privileges` and `missing_privileges`. `used_privileges` is populated with the privileges that were checked during query execution, and `missing_privileges` contains required privileges that are missing. [#64597](https://github.com/ClickHouse/ClickHouse/pull/64597) ([Alexey Katsman](https://github.com/alexkats)).
-* Added a setting `output_format_pretty_display_footer_column_names` which when enabled displays column names at the end of the table for long tables (50 rows by default), with the threshold value for minimum number of rows controlled by `output_format_pretty_display_footer_column_names_min_rows`. [#65144](https://github.com/ClickHouse/ClickHouse/pull/65144) ([Shaun Struwig](https://github.com/Blargian)).
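
A few of the new 24.6 functions above, sketched as illustrative queries:

```sql
-- Human-readable sizes to bytes.
SELECT parseReadableSize('1 KiB');        -- 1024
SELECT parseReadableSizeOrNull('oops');   -- NULL instead of an exception

-- Edit distance over UTF-8 strings (counts code points, not bytes).
SELECT editDistanceUTF8('кликхаус', 'клик');

-- URL-safe Base64 variant.
SELECT base64URLEncode('https://clickhouse.com');
```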
-
-## Performance Improvement {#performance-improvement}
-
-* Fix performance regression in cross join introduced in [#60459](https://github.com/ClickHouse/ClickHouse/pull/60459) (24.5). [#65243](https://github.com/ClickHouse/ClickHouse/pull/65243) ([Nikita Taranov](https://github.com/nickitat)).
-* Improve io_uring resubmits visibility. Rename profile event IOUringSQEsResubmits -> IOUringSQEsResubmitsAsync and add a new one, IOUringSQEsResubmitsSync. [#63699](https://github.com/ClickHouse/ClickHouse/pull/63699) (Tomer Shafir).
-* Introduce assertions to verify all functions are called with columns of the right size. [#63723](https://github.com/ClickHouse/ClickHouse/pull/63723) ([Raúl Marín](https://github.com/Algunenano)).
-* Add the ability to reshuffle rows during insert to optimize for size without violating the order set by `PRIMARY KEY`. It's controlled by the setting `optimize_row_order` (off by default). [#63578](https://github.com/ClickHouse/ClickHouse/pull/63578) ([Igor Markelov](https://github.com/ElderlyPassionFruit)).
-* Add a native parquet reader, which can read parquet binary to ClickHouse Columns directly. It's controlled by the setting `input_format_parquet_use_native_reader` (disabled by default). [#60361](https://github.com/ClickHouse/ClickHouse/pull/60361) ([ZhiHong Zhang](https://github.com/copperybean)).
-* Support partial trivial count optimization when the query filter is able to select exact ranges from merge tree tables. [#60463](https://github.com/ClickHouse/ClickHouse/pull/60463) ([Amos Bird](https://github.com/amosbird)).
-* Reduce max memory usage of multi-threaded `INSERT`s by collecting chunks of multiple threads in a single transform. [#61047](https://github.com/ClickHouse/ClickHouse/pull/61047) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Reduce the memory usage when using Azure object storage by using fixed memory allocation, avoiding the allocation of an extra buffer. [#63160](https://github.com/ClickHouse/ClickHouse/pull/63160) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
-* Reduce the number of virtual function calls in `ColumnNullable::size`. [#60556](https://github.com/ClickHouse/ClickHouse/pull/60556) ([HappenLee](https://github.com/HappenLee)).
-* Speedup `splitByRegexp` when the regular expression argument is a single-character. [#62696](https://github.com/ClickHouse/ClickHouse/pull/62696) ([Robert Schulze](https://github.com/rschu1ze)).
-* Speed up aggregation by 8-bit and 16-bit keys by keeping track of the min and max keys used. This allows to reduce the number of cells that need to be verified. [#62746](https://github.com/ClickHouse/ClickHouse/pull/62746) ([Jiebin Sun](https://github.com/jiebinn)).
-* Optimize operator IN when the left hand side is `LowCardinality` and the right is a set of constants. [#64060](https://github.com/ClickHouse/ClickHouse/pull/64060) ([Zhiguo Zhou](https://github.com/ZhiguoZh)).
-* Use a thread pool to initialize and destroy hash tables inside `ConcurrentHashJoin`. [#64241](https://github.com/ClickHouse/ClickHouse/pull/64241) ([Nikita Taranov](https://github.com/nickitat)).
-* Optimized vertical merges in tables with sparse columns. [#64311](https://github.com/ClickHouse/ClickHouse/pull/64311) ([Anton Popov](https://github.com/CurtizJ)).
-* Enabled prefetches of data from remote filesystem during vertical merges. It improves latency of vertical merges in tables with data stored on remote filesystem. [#64314](https://github.com/ClickHouse/ClickHouse/pull/64314) ([Anton Popov](https://github.com/CurtizJ)).
-* Reduce redundant calls to `isDefault` of `ColumnSparse::filter` to improve performance. [#64426](https://github.com/ClickHouse/ClickHouse/pull/64426) ([Jiebin Sun](https://github.com/jiebinn)).
-* Speedup `find_super_nodes` and `find_big_family` keeper-client commands by making multiple asynchronous getChildren requests. [#64628](https://github.com/ClickHouse/ClickHouse/pull/64628) ([Alexander Gololobov](https://github.com/davenger)).
-* Improve function `least`/`greatest` for nullable numeric type arguments. [#64668](https://github.com/ClickHouse/ClickHouse/pull/64668) ([KevinyhZou](https://github.com/KevinyhZou)).
-* Allow merging two consequent filtering steps of a query plan. This improves filter-push-down optimization if the filter condition can be pushed down from the parent step. [#64760](https://github.com/ClickHouse/ClickHouse/pull/64760) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Remove bad optimization in the vertical final implementation and re-enable vertical final algorithm by default. [#64783](https://github.com/ClickHouse/ClickHouse/pull/64783) ([Duc Canh Le](https://github.com/canhld94)).
-* Remove ALIAS nodes from the filter expression. This slightly improves performance for queries with `PREWHERE` (with the new analyzer). [#64793](https://github.com/ClickHouse/ClickHouse/pull/64793) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Re-enable OpenSSL session caching. [#65111](https://github.com/ClickHouse/ClickHouse/pull/65111) ([Robert Schulze](https://github.com/rschu1ze)).
-* Added settings to disable materialization of skip indexes and statistics on inserts (`materialize_skip_indexes_on_insert` and `materialize_statistics_on_insert`). [#64391](https://github.com/ClickHouse/ClickHouse/pull/64391) ([Anton Popov](https://github.com/CurtizJ)).
-* Use the allocated memory size to calculate the row group size and reduce the peak memory of the parquet writer in the single-threaded mode. [#64424](https://github.com/ClickHouse/ClickHouse/pull/64424) ([LiuNeng](https://github.com/liuneng1994)).
-* Improve the iterator of sparse column to reduce call of `size`. [#64497](https://github.com/ClickHouse/ClickHouse/pull/64497) ([Jiebin Sun](https://github.com/jiebinn)).
-* Update condition to use server-side copy for backups to Azure blob storage. [#64518](https://github.com/ClickHouse/ClickHouse/pull/64518) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
-* Optimized memory usage of vertical merges for tables with high number of skip indexes. [#64580](https://github.com/ClickHouse/ClickHouse/pull/64580) ([Anton Popov](https://github.com/CurtizJ)).
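-
-The row-reordering optimization above can be sketched as follows (the table definition is hypothetical; `optimize_row_order` is off by default):
-
-```sql
-CREATE TABLE events
-(
-    user_id UInt64,
-    event_type LowCardinality(String),
-    payload String
-)
-ENGINE = MergeTree
-ORDER BY user_id
-SETTINGS optimize_row_order = 1; -- reshuffle rows within each insert block to improve compression
-```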
-
-## Improvement {#improvement}
-
-* Restored the previous behaviour of how ClickHouse interprets Tuples in CSV format. This change effectively reverts ClickHouse/ClickHouse#60994 and makes the new behaviour available only under a few settings: `output_format_csv_serialize_tuple_into_separate_columns`, `input_format_csv_deserialize_separate_columns_into_tuple` and `input_format_csv_try_infer_strings_from_quoted_tuples`. #65170 (Nikita Mikhaylov).
-* `SHOW CREATE TABLE` executed on system tables now shows a comment, unique to each table, that explains why the table is needed. [#63788](https://github.com/ClickHouse/ClickHouse/pull/63788) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* The second argument (scale) of functions `round()`, `roundBankers()`, `floor()`, `ceil()` and `trunc()` can now be non-const. [#64798](https://github.com/ClickHouse/ClickHouse/pull/64798) ([Mikhail Gorshkov](https://github.com/mgorshkov)).
-* Avoid possible deadlock during MergeTree index analysis when scheduling threads in a saturated service. [#59427](https://github.com/ClickHouse/ClickHouse/pull/59427) ([Sean Haynes](https://github.com/seandhaynes)).
-* Several minor corner case fixes to S3 proxy support & tunneling. [#63427](https://github.com/ClickHouse/ClickHouse/pull/63427) ([Arthur Passos](https://github.com/arthurpassos)).
-* Add metrics to track the number of directories created and removed by the `plain_rewritable` metadata storage, and the number of entries in the local-to-remote in-memory map. [#64175](https://github.com/ClickHouse/ClickHouse/pull/64175) ([Julia Kartseva](https://github.com/jkartseva)).
-* The query cache now considers identical queries with different settings as different. This increases robustness in cases where different settings (e.g. `limit` or `additional_table_filters`) would affect the query result. [#64205](https://github.com/ClickHouse/ClickHouse/pull/64205) ([Robert Schulze](https://github.com/rschu1ze)).
-* Support the non-standard error code `QpsLimitExceeded` in object storage as a retryable error. [#64225](https://github.com/ClickHouse/ClickHouse/pull/64225) ([Sema Checherinda](https://github.com/CheSema)).
-* Added a new setting `input_format_parquet_prefer_block_bytes` to control the average output block bytes, and modified the default value of `input_format_parquet_max_block_size` to 65409. [#64427](https://github.com/ClickHouse/ClickHouse/pull/64427) ([LiuNeng](https://github.com/liuneng1994)).
-* Settings from the user's config don't affect merges and mutations for `MergeTree` on top of object storage. [#64456](https://github.com/ClickHouse/ClickHouse/pull/64456) ([alesapin](https://github.com/alesapin)).
-* Support the non-standard error code `TotalQpsLimitExceeded` in object storage as a retryable error. [#64520](https://github.com/ClickHouse/ClickHouse/pull/64520) ([Sema Checherinda](https://github.com/CheSema)).
-* Updated Advanced Dashboard for both open-source and ClickHouse Cloud versions to include a chart for 'Maximum concurrent network connections'. [#64610](https://github.com/ClickHouse/ClickHouse/pull/64610) ([Thom O'Connor](https://github.com/thomoco)).
-* Improve progress report on `zeros_mt` and `generateRandom`. [#64804](https://github.com/ClickHouse/ClickHouse/pull/64804) ([Raúl Marín](https://github.com/Algunenano)).
-* Add an asynchronous metric `jemalloc.profile.active` to show whether sampling is currently active. This is an activation mechanism in addition to prof.active; both must be active for the calling thread to sample. [#64842](https://github.com/ClickHouse/ClickHouse/pull/64842) ([Unalian](https://github.com/Unalian)).
-* The setting `allow_experimental_join_condition` is no longer marked as important. That mark may have prevented distributed queries in a mixed-version cluster from being executed successfully. [#65008](https://github.com/ClickHouse/ClickHouse/pull/65008) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Added server asynchronous metrics `DiskGetObjectThrottler*` and `DiskPutObjectThrottler*`, reflecting the requests-per-second rate limits defined with the `s3_max_get_rps` and `s3_max_put_rps` disk settings and the number of requests that can currently be sent without hitting the throttling limit on the disk. Metrics are defined for every disk that has a configured limit. [#65050](https://github.com/ClickHouse/ClickHouse/pull/65050) ([Sergei Trifonov](https://github.com/serxa)).
-* Add a validation when creating a user with `bcrypt_hash`. [#65242](https://github.com/ClickHouse/ClickHouse/pull/65242) ([Raúl Marín](https://github.com/Algunenano)).
-* Add profile events for number of rows read during/after `PREWHERE`. [#64198](https://github.com/ClickHouse/ClickHouse/pull/64198) ([Nikita Taranov](https://github.com/nickitat)).
-* Print query in `EXPLAIN PLAN` with parallel replicas. [#64298](https://github.com/ClickHouse/ClickHouse/pull/64298) ([vdimir](https://github.com/vdimir)).
-* Rename `allow_deprecated_functions` to `allow_deprecated_error_prone_window_functions`. [#64358](https://github.com/ClickHouse/ClickHouse/pull/64358) ([Raúl Marín](https://github.com/Algunenano)).
-* Respect `max_read_buffer_size` setting for file descriptors as well in the `file` table function. [#64532](https://github.com/ClickHouse/ClickHouse/pull/64532) ([Azat Khuzhin](https://github.com/azat)).
-* Disable transactions for unsupported storages even for materialized views. [#64918](https://github.com/ClickHouse/ClickHouse/pull/64918) ([alesapin](https://github.com/alesapin)).
-* Forbid `QUALIFY` clause in the old analyzer. The old analyzer ignored `QUALIFY`, so it could lead to unexpected data removal in mutations. [#65356](https://github.com/ClickHouse/ClickHouse/pull/65356) ([Dmitry Novik](https://github.com/novikd)).
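-
-For instance, the now non-constant scale argument of the rounding functions can be exercised like this (an illustrative sketch):
-
-```sql
--- The second (scale) argument no longer needs to be a constant:
-SELECT number, round(number / 7, number % 3) AS rounded
-FROM numbers(5);
-```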
-
-## Bug Fix (user-visible misbehavior in an official stable release) {#bug-fix-user-visible-misbehavior-in-an-official-stable-release}
-
-* Fixed 'set' skip index not working with IN and indexHint(). #62083 (Michael Kolupaev).
-* Fix queries with FINAL give wrong result when table does not use adaptive granularity. #62432 (Duc Canh Le).
-* Support executing function during assignment of parameterized view value. #63502 (SmitaRKulkarni).
-* Fixed parquet memory tracking. #63584 (Michael Kolupaev).
-* Fix rare case with missing data in the result of distributed query. #63691 (vdimir).
-* Fixed reading of columns of type Tuple(Map(LowCardinality(String), String), ...). #63956 (Anton Popov).
-* Fix resolution of the unqualified COLUMNS matcher. Preserve the input column order and forbid the usage of unknown identifiers. #63962 (Dmitry Novik).
-* Fix a Cyclic aliases error for cyclic aliases of different types (expression and function). #63993 (Nikolai Kochetov).
-* Use a properly redefined context with the correct definer for each individual view in the query pipeline. #64079 (pufit).
-* Fix a "Not found column" error in the analyzer when using INTERPOLATE. #64096 (Yakov Olkhovskiy).
-* Prevent LOGICAL_ERROR on CREATE TABLE as MaterializedView. #64174 (Raúl Marín).
-* The query cache now considers two identical queries against different databases as different. The previous behavior could be used to bypass missing privileges to read from a table. #64199 (Robert Schulze).
-* Fix possible abort on uncaught exception in ~WriteBufferFromFileDescriptor in StatusFile. #64206 (Kruglov Pavel).
-* Fix duplicate alias error for distributed queries with ARRAY JOIN. #64226 (Nikolai Kochetov).
-* Fix unexpected accurateCast from string to integer. #64255 (wudidapaopao).
-* Fixed CNF simplification, in case any OR group contains mutually exclusive atoms. #64256 (Eduard Karacharov).
-* Fix Query Tree size validation. #64377 (Dmitry Novik).
-* Fix Logical error: Bad cast for Buffer table with PREWHERE. #64388 (Nikolai Kochetov).
-* Fixed CREATE TABLE AS queries for tables with default expressions. #64455 (Anton Popov).
-* Fixed optimize_read_in_order behaviour for ORDER BY ... NULLS FIRST / LAST on tables with nullable keys. #64483 (Eduard Karacharov).
-* Fix the Expression nodes list expected 1 projection names and Unknown expression or identifier errors for queries with aliases to GLOBAL IN. #64517 (Nikolai Kochetov).
-* Fix an error Cannot find column in distributed queries with constant CTE in the GROUP BY key. #64519 (Nikolai Kochetov).
-* Fix the output of function formatDateTimeInJodaSyntax when a formatter generates an uneven number of characters and the last character is 0. For example, SELECT formatDateTimeInJodaSyntax(toDate('2012-05-29'), 'D') now correctly returns 150 instead of previously 15. #64614 (LiuNeng).
-* Do not rewrite aggregation if -If combinator is already used. #64638 (Dmitry Novik).
-* Fix type inference for float (in case of small buffer, i.e. --max_read_buffer_size 1). #64641 (Azat Khuzhin).
-* Fix bug which could lead to non-working TTLs with expressions. #64694 (alesapin).
-* Fix removing the WHERE and PREWHERE expressions, which are always true (for the new analyzer). #64695 (Nikolai Kochetov).
-* Fixed excessive part elimination by token-based text indexes (ngrambf, full_text) when filtering by the result of startsWith, endsWith, match, multiSearchAny. #64720 (Eduard Karacharov).
-* Fixes incorrect behaviour of ANSI CSI escaping in the UTF8::computeWidth function. #64756 (Shaun Struwig).
-* Fix a case of incorrect removal of ORDER BY / LIMIT BY across subqueries. #64766 (Raúl Marín).
-* Fix (experimental) unequal join with subqueries for sets which are in the mixed join conditions. #64775 (lgbo).
-* Fix crash in a local cache over plain_rewritable disk. #64778 (Julia Kartseva).
-* Fix Cannot find column in distributed query with ARRAY JOIN by Nested column. Fixes #64755. #64801 (Nikolai Kochetov).
-* Fix memory leak in slru cache policy. #64803 (Kseniia Sumarokova).
-* Fixed possible incorrect memory tracking in several kinds of queries: queries that read any data from S3, queries via http protocol, asynchronous inserts. #64844 (Anton Popov).
-* Fix the Block structure mismatch error for queries reading with PREWHERE from the materialized view when the materialized view has columns of different types than the source table. Fixes #64611. #64855 (Nikolai Kochetov).
-* Fix rare crash when table has TTL with subquery + database replicated + parallel replicas + analyzer. It's really rare, but please don't use TTLs with subqueries. #64858 (alesapin).
-* Fix ALTER MODIFY COMMENT query that was broken for parameterized VIEWs in ClickHouse/ClickHouse#54211. #65031 (Nikolay Degterinsky).
-* Fix host_id in DatabaseReplicated when cluster_secure_connection parameter is enabled. Previously all the connections within the cluster created by DatabaseReplicated were not secure, even if the parameter was enabled. #65054 (Nikolay Degterinsky).
-* Fix the Not-ready Set error after the PREWHERE optimization for StorageMerge. #65057 (Nikolai Kochetov).
-* Avoid writing to finalized buffer in File-like storages. #65063 (Kruglov Pavel).
-* Fix possible infinite query duration in case of cyclic aliases. Fixes #64849. #65081 (Nikolai Kochetov).
-* Fix the Unknown expression identifier error for remote queries with INTERPOLATE (alias) (new analyzer). Fixes #64636. #65090 (Nikolai Kochetov).
-* Fix pushing arithmetic operations out of aggregation. In the new analyzer, optimization was applied only once. #65104 (Dmitry Novik).
-* Fix aggregate function name rewriting in the new analyzer. #65110 (Dmitry Novik).
-* Respond with 5xx instead of 200 OK in case of receive timeout while reading (parts of) the request body from the client socket. #65118 (Julian Maicher).
-* Fix possible crash for hedged requests. #65206 (Azat Khuzhin).
-* Fix the bug in Hashed and Hashed_Array dictionary short circuit evaluation, which may read an uninitialized number, leading to various errors. #65256 (jsc0218).
-* Ensure that the type of the constant (the IN operator's second argument) is always visible during the IN operator's type conversion process. Otherwise, losing type information may cause some conversions to fail, such as the conversion from DateTime to Date. Fixes #64487. #65315 (pn).
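-
-The `formatDateTimeInJodaSyntax` fix above can be checked directly:
-
-```sql
--- Returns 150 (day of year); previous versions incorrectly dropped the trailing zero and returned 15.
-SELECT formatDateTimeInJodaSyntax(toDate('2012-05-29'), 'D');
-```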
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-8.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-8.md
deleted file mode 100644
index 29cabc28e51..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-24-8.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-slug: /changelogs/24.8
-title: 'v24.8 Changelog for Cloud'
-description: 'Fast release changelog for v24.8'
-keywords: ['changelog', 'cloud']
-sidebar_label: 'v24.8'
----
-
-Relevant changes for ClickHouse Cloud services based on the v24.8 release.
-
-## Backward Incompatible Change {#backward-incompatible-change}
-
-- Change binary serialization of Variant data type: add compact mode to avoid writing the same discriminator multiple times for granules with single variant or with only NULL values. Add MergeTree setting use_compact_variant_discriminators_serialization that is enabled by default. Note that Variant type is still experimental and backward-incompatible change in serialization should not impact you unless you have been working with support to get this feature enabled earlier. [#62774](https://github.com/ClickHouse/ClickHouse/pull/62774) (Kruglov Pavel).
-
-- Forbid CREATE MATERIALIZED VIEW ... ENGINE Replicated*MergeTree POPULATE AS SELECT ... with Replicated databases. This specific PR is only applicable to users still using ReplicatedMergeTree. [#63963](https://github.com/ClickHouse/ClickHouse/pull/63963) (vdimir).
-
-- Metric KeeperOutstandingRequets was renamed to KeeperOutstandingRequests. This fixes a typo reported in [#66179](https://github.com/ClickHouse/ClickHouse/issues/66179). [#66206](https://github.com/ClickHouse/ClickHouse/pull/66206) (Robert Schulze).
-
-- clickhouse-client and clickhouse-local now default to multi-query mode (instead of single-query mode). As an example, clickhouse-client -q "SELECT 1; SELECT 2" now works, whereas users previously had to add --multiquery (or -n). The --multiquery/-n switch became obsolete. INSERT queries in multi-query statements are treated specially based on their FORMAT clause: if the FORMAT is VALUES (the most common case), the end of the INSERT statement is represented by a trailing semicolon ; at the end of the query. For all other FORMATs (e.g. CSV or JSONEachRow), the end of the INSERT statement is represented by two newlines \n\n at the end of the query. [#63898](https://github.com/ClickHouse/ClickHouse/pull/63898) (wxybear).
-
-- In previous versions, it was possible to use an alternative syntax for LowCardinality data types by appending WithDictionary to the name of the data type. It was an initial working implementation, and it was never documented or exposed to the public. Now, it is deprecated. If you have used this syntax, you have to ALTER your tables and rename the data types to LowCardinality. [#66842](https://github.com/ClickHouse/ClickHouse/pull/66842) (Alexey Milovidov).
-
-- Fix logical errors with storage Buffer used with distributed destination table. It's a backward incompatible change: queries using Buffer with a distributed destination table may stop working if the table appears more than once in the query (e.g., in a self-join). [#67015](https://github.com/ClickHouse/ClickHouse/pull/67015) (vdimir).
-
-- In previous versions, calling functions for random distributions based on the Gamma function (such as Chi-Squared, Student, Fisher) with negative arguments close to zero led to a long computation or an infinite loop. In the new version, calling these functions with zero or negative arguments will produce an exception. This closes [#67297](https://github.com/ClickHouse/ClickHouse/issues/67297). [#67326](https://github.com/ClickHouse/ClickHouse/pull/67326) (Alexey Milovidov).
-
-- In previous versions, arrayWithConstant could be slow if asked to generate very large arrays. In the new version, it is limited to 1 GB per array. This closes [#32754](https://github.com/ClickHouse/ClickHouse/issues/32754). [#67741](https://github.com/ClickHouse/ClickHouse/pull/67741) (Alexey Milovidov).
-
-- Fix REPLACE modifier formatting (forbid omitting brackets). [#67774](https://github.com/ClickHouse/ClickHouse/pull/67774) (Azat Khuzhin).
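-
-The new multi-query default described above can be tried from the shell (no `--multiquery`/`-n` needed):
-
-```bash
-# Both statements run; previously this required --multiquery (or -n).
-clickhouse-client -q "SELECT 1; SELECT 2"
-```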
-
-
-## New Feature {#new-feature}
-
-- Extend function tuple to construct named tuples in query. Introduce function tupleNames to extract names from tuples. [#54881](https://github.com/ClickHouse/ClickHouse/pull/54881) (Amos Bird).
-
-- ASOF JOIN support for full_sorting_join algorithm Close [#54493](https://github.com/ClickHouse/ClickHouse/issues/54493). [#55051](https://github.com/ClickHouse/ClickHouse/pull/55051) (vdimir).
-
-- A new table function, fuzzQuery, was added. This function allows you to modify a given query string with random variations. Example: SELECT query FROM fuzzQuery('SELECT 1');. [#62103](https://github.com/ClickHouse/ClickHouse/pull/62103) (pufit).
-
-- Add new window function percent_rank. [#62747](https://github.com/ClickHouse/ClickHouse/pull/62747) (lgbo).
-
-- Support JWT authentication in clickhouse-client. [#62829](https://github.com/ClickHouse/ClickHouse/pull/62829) (Konstantin Bogdanov).
-
-- Add SQL functions changeYear, changeMonth, changeDay, changeHour, changeMinute, changeSecond. For example, SELECT changeMonth(toDate('2024-06-14'), 7) returns date 2024-07-14. [#63186](https://github.com/ClickHouse/ClickHouse/pull/63186) (cucumber95).
-
-- Add system.error_log which contains history of error values from table system.errors, periodically flushed to disk. [#65381](https://github.com/ClickHouse/ClickHouse/pull/65381) (Pablo Marcos).
-
-- Add aggregate function groupConcat. Roughly the same as arrayStringConcat(groupArray(column), ','). It can receive 2 parameters: a string delimiter and the number of elements to be processed. [#65451](https://github.com/ClickHouse/ClickHouse/pull/65451) (Yarik Briukhovetskyi).
-
-- Add AzureQueue storage. [#65458](https://github.com/ClickHouse/ClickHouse/pull/65458) (Kseniia Sumarokova).
-
-- Add a new setting to disable/enable writing page index into parquet files. [#65475](https://github.com/ClickHouse/ClickHouse/pull/65475) (lgbo).
-
-- Automatically append a wildcard * to the end of a directory path with table function file. [#66019](https://github.com/ClickHouse/ClickHouse/pull/66019) (Zhidong (David) Guo).
-
-- Add --memory-usage option to client in non interactive mode. [#66393](https://github.com/ClickHouse/ClickHouse/pull/66393) (vdimir).
-
-- Add _etag virtual column for S3 table engine. Fixes [#65312](https://github.com/ClickHouse/ClickHouse/issues/65312). [#65386](https://github.com/ClickHouse/ClickHouse/pull/65386) (skyoct).
-
-- Introduce Hive-style partitioning for different engines (File, URL, S3, AzureBlobStorage, HDFS). Hive-style partitioning organizes data into partitioned sub-directories, making it efficient to query and manage large datasets. Currently, it only creates virtual columns with the appropriate name and data. A follow-up PR will introduce the appropriate data filtering (performance speedup). [#65997](https://github.com/ClickHouse/ClickHouse/pull/65997) (Yarik Briukhovetskyi).
-
-- Add function printf for spark compatibility. [#66257](https://github.com/ClickHouse/ClickHouse/pull/66257) (李扬).
-
-- Added support for reading MULTILINESTRING geometry in WKT format using function readWKTLineString. [#67647](https://github.com/ClickHouse/ClickHouse/pull/67647) (Jacob Reckhard).
-
-- Added a tagging (namespace) mechanism for the query cache. The same queries with different tags are considered different by the query cache. Example: SELECT 1 SETTINGS use_query_cache = 1, query_cache_tag = 'abc' and SELECT 1 SETTINGS use_query_cache = 1, query_cache_tag = 'def' now create different query cache entries. [#68235](https://github.com/ClickHouse/ClickHouse/pull/68235) (sakulali).
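-
-A couple of the new functions above, sketched:
-
-```sql
--- From the entry above: returns the date 2024-07-14.
-SELECT changeMonth(toDate('2024-06-14'), 7);
--- Extract names from a named tuple:
-SELECT tupleNames(CAST((1, 2), 'Tuple(a UInt8, b UInt8)'));
-```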
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-25_1-25_4.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-25_1-25_4.md
deleted file mode 100644
index 038dd45e061..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/changelog-25_1-25_4.md
+++ /dev/null
@@ -1,646 +0,0 @@
----
-slug: /changelogs/25.4
-title: 'v25.4 Changelog for Cloud'
-description: 'Changelog for v25.4'
-keywords: ['changelog', 'cloud']
-sidebar_label: 'v25.4'
----
-
-## Backward Incompatible Changes {#backward-incompatible-changes}
-
-* Parquet output format converts Date and DateTime columns to date/time types supported by Parquet, instead of writing them as raw numbers. DateTime becomes DateTime64(3) (was: UInt32); setting `output_format_parquet_datetime_as_uint32` brings back the old behavior. Date becomes Date32 (was: UInt16). [#70950](https://github.com/ClickHouse/ClickHouse/pull/70950) ([Michael Kolupaev](https://github.com/al13n321)).
-* Don't allow non-comparable types (like JSON/Object/AggregateFunction) in ORDER BY and comparison functions `less/greater/equal/etc` by default. [#73276](https://github.com/ClickHouse/ClickHouse/pull/73276) ([Pavel Kruglov](https://github.com/Avogar)).
-* `JSONEachRowWithProgress` will write the progress whenever the progress happens. In previous versions, the progress was shown only after each block of the result, which made it useless. Change the way how the progress is displayed: it will not show zero values. Keep in mind that the progress is sent even if it happens frequently. It can generate a significant volume of traffic. Keep in mind that the progress is not flushed when the output is compressed. This closes [#70800](https://github.com/ClickHouse/ClickHouse/issues/70800). [#73834](https://github.com/ClickHouse/ClickHouse/pull/73834) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* The `mysql` dictionary source no longer does a `SHOW TABLE STATUS` query, because it provides no value for InnoDB tables, nor for any recent MySQL version. This closes [#72636](https://github.com/ClickHouse/ClickHouse/issues/72636). This change is backward compatible, but put in this category, so you have a chance to notice it. [#73914](https://github.com/ClickHouse/ClickHouse/pull/73914) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* `Merge` tables will unify the structure of underlying tables by using a union of their columns and deriving common types. This closes [#64864](https://github.com/ClickHouse/ClickHouse/issues/64864). This closes [#35307](https://github.com/ClickHouse/ClickHouse/issues/35307). In certain cases, this change could be backward incompatible. One example is when there is no common type between tables, but conversion to the type of the first table is still possible, such as in the case of UInt64 and Int64 or any numeric type and String. If you want to return to the old behavior, set `merge_table_max_tables_to_look_for_schema_inference` to `1` or set `compatibility` to `24.12` or earlier. [#73956](https://github.com/ClickHouse/ClickHouse/pull/73956) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* `CHECK TABLE` queries now require a separate, `CHECK` grant. In previous versions, it was enough to have `SHOW TABLES` grant to run these queries. But a `CHECK TABLE` query can be heavy, and usual query complexity limits for `SELECT` queries don't apply to it. It led to the potential of DoS. [#74471](https://github.com/ClickHouse/ClickHouse/pull/74471) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Check all columns in a materialized view match the target table if `allow_materialized_view_with_bad_select` is `false`. [#74481](https://github.com/ClickHouse/ClickHouse/pull/74481) ([Christoph Wurm](https://github.com/cwurm)).
-* Function `h3ToGeo()` now returns the results in the order `(lat, lon)` (which is the standard order for geometric functions). Users who wish to retain the legacy result order `(lon, lat)` can set setting `h3togeo_lon_lat_result_order = true`. [#74719](https://github.com/ClickHouse/ClickHouse/pull/74719) ([Manish Gill](https://github.com/mgill25)).
-* Add `JSONCompactEachRowWithProgress` and `JSONCompactStringsEachRowWithProgress` formats. Continuation of [#69989](https://github.com/ClickHouse/ClickHouse/issues/69989). The `JSONCompactWithNames` and `JSONCompactWithNamesAndTypes` no longer output "totals" - apparently, it was a mistake in the implementation. [#75037](https://github.com/ClickHouse/ClickHouse/pull/75037) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Change `format_alter_operations_with_parentheses` default to true to make alter commands list unambiguous (see https://github.com/ClickHouse/ClickHouse/pull/59532). This breaks replication with clusters prior to 24.3. If you are upgrading a cluster using older releases, turn off the setting in the server config or upgrade to 24.3 first. [#75302](https://github.com/ClickHouse/ClickHouse/pull/75302) ([Raúl Marín](https://github.com/Algunenano)).
-* Disallow truncate database for replicated databases. [#76651](https://github.com/ClickHouse/ClickHouse/pull/76651) ([Bharat Nallan](https://github.com/bharatnc)).
-* Disable parallel replicas by default when analyzer is disabled regardless `compatibility` setting. It's still possible to change this behavior by explicitly setting `parallel_replicas_only_with_analyzer` to `false`. [#77115](https://github.com/ClickHouse/ClickHouse/pull/77115) ([Igor Nikonov](https://github.com/devcrafter)).
-* It's no longer possible to use `NaN` or `inf` for float values as settings. [#77546](https://github.com/ClickHouse/ClickHouse/pull/77546) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Fixes cases where `dateTrunc` is used with negative date/datetime arguments. [#77622](https://github.com/ClickHouse/ClickHouse/pull/77622) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* The legacy MongoDB integration has been removed. Server setting `use_legacy_mongodb_integration` became obsolete and now does nothing. [#77895](https://github.com/ClickHouse/ClickHouse/pull/77895) ([Robert Schulze](https://github.com/rschu1ze)).
-* Enhance SummingMergeTree validation to skip aggregation for columns used in partition or sort keys. [#78022](https://github.com/ClickHouse/ClickHouse/pull/78022) ([Pervakov Grigorii](https://github.com/GrigoryPervakov)).
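-
-For example, the `h3ToGeo()` order change above can be illustrated like this (the H3 index value is an arbitrary placeholder):
-
-```sql
--- Now returns (lat, lon) by default; the setting restores the legacy (lon, lat) order.
-SELECT h3ToGeo(644325524701193974)
-SETTINGS h3togeo_lon_lat_result_order = true;
-```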
-
-## New Features {#new-features}
-
-* Added an in-memory cache for deserialized skipping index granules. This should make repeated queries that use skipping indexes faster. The size of the new cache is controlled by server settings `skipping_index_cache_size` and `skipping_index_cache_max_entries`. The original motivation for the cache was vector similarity indexes, which are now a lot faster. [#70102](https://github.com/ClickHouse/ClickHouse/pull/70102) ([Robert Schulze](https://github.com/rschu1ze)).
-* A new implementation of the Userspace Page Cache, which allows caching data in the in-process memory instead of relying on the OS page cache. It is useful when the data is stored on a remote virtual filesystem without backing with the local filesystem cache. [#70509](https://github.com/ClickHouse/ClickHouse/pull/70509) ([Michael Kolupaev](https://github.com/al13n321)).
-* Add setting to query Iceberg tables as of a specific timestamp. [#71072](https://github.com/ClickHouse/ClickHouse/pull/71072) ([Brett Hoerner](https://github.com/bretthoerner)).
-* Implement Iceberg tables partition pruning for time-related transform partition operations in Iceberg. [#72044](https://github.com/ClickHouse/ClickHouse/pull/72044) ([Daniil Ivanik](https://github.com/divanik)).
-* Add the ability to create min-max (skipping) indices by default for columns managed by MergeTree using settings `enable_minmax_index_for_all_numeric_columns` (for numeric columns) and `enable_minmax_index_for_all_string_columns` (for string columns). For now, both settings are disabled, so there is no behavior change yet. [#72090](https://github.com/ClickHouse/ClickHouse/pull/72090) ([Smita Kulkarni](https://github.com/SmitaRKulkarni)).
-* Added the aggregate function `sequenceMatchEvents`, which returns the timestamps of matched events for the longest chain of events matching the pattern. [#72349](https://github.com/ClickHouse/ClickHouse/pull/72349) ([UnamedRus](https://github.com/UnamedRus)).
-* `SELECT` and `VIEW` statements now support aliases, e.g. `SELECT b FROM (SELECT number, number*2 FROM numbers(2)) AS x (a, b);`. This enables TPC-H query 15 to run without modifications. [#72480](https://github.com/ClickHouse/ClickHouse/pull/72480) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Added a new setting `enable_adaptive_memory_spill_scheduler` that allows multiple grace JOINs in the same query to monitor their combined memory footprint and trigger spilling into an external storage adaptively to prevent MEMORY_LIMIT_EXCEEDED. [#72728](https://github.com/ClickHouse/ClickHouse/pull/72728) ([lgbo](https://github.com/lgbo-ustc)).
-* Added function `arrayNormalizedGini`. [#72823](https://github.com/ClickHouse/ClickHouse/pull/72823) ([flynn](https://github.com/ucasfl)).
-* Support low cardinality decimal data types, fix [#72256](https://github.com/ClickHouse/ClickHouse/issues/72256). [#72833](https://github.com/ClickHouse/ClickHouse/pull/72833) ([zhanglistar](https://github.com/zhanglistar)).
-* When `min_age_to_force_merge_seconds` and `min_age_to_force_merge_on_partition_only` are both enabled, the part merging will ignore the max bytes limit. [#73656](https://github.com/ClickHouse/ClickHouse/pull/73656) ([Kai Zhu](https://github.com/nauu)).
-* Support reading HALF_FLOAT values from Apache Arrow/Parquet/ORC (they are read into Float32). This closes [#72960](https://github.com/ClickHouse/ClickHouse/issues/72960). Keep in mind that IEEE-754 half float is not the same as BFloat16. Closes [#73835](https://github.com/ClickHouse/ClickHouse/issues/73835). [#73836](https://github.com/ClickHouse/ClickHouse/pull/73836) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* The `system.trace_log` table will contain two new columns, `symbols` and `lines` containing symbolized stack trace. It allows for easy collection and export of profile information. This is controlled by the server configuration value `symbolize` inside `trace_log` and is enabled by default. [#73896](https://github.com/ClickHouse/ClickHouse/pull/73896) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Add a new function, `generateSerialID`, which can be used to generate auto-incremental numbers in tables. Continuation of [#64310](https://github.com/ClickHouse/ClickHouse/issues/64310) by [kazalika](https://github.com/kazalika). This closes [#62485](https://github.com/ClickHouse/ClickHouse/issues/62485). [#73950](https://github.com/ClickHouse/ClickHouse/pull/73950) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Add the syntax `query1 PARALLEL WITH query2 PARALLEL WITH query3 ... PARALLEL WITH queryN`, which means the subqueries `{query1, query2, ..., queryN}` are allowed to run in parallel with each other (and it's preferable that they do). [#73983](https://github.com/ClickHouse/ClickHouse/pull/73983) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Now, Play UI has a progress bar during query runtime. It allows cancelling queries. It displays the total number of records and the extended information about the speed. The table can be rendered incrementally as soon as data arrives. Enable HTTP compression. Rendering of the table became faster. The table header became sticky. It allows selecting cells and navigating them by arrow keys. Fix the issue when the outline of the selected cell makes it smaller. Cells no longer expand on mouse hover but only on selection. The moment to stop rendering the incoming data is decided on the client rather than on the server side. Highlight digit groups for numbers. The overall design was refreshed and became bolder. It checks if the server is reachable and the correctness of credentials and displays the server version and uptime. The cloud icon is contoured in every font, even in Safari. Big integers inside nested data types will be rendered better. It will display inf/nan correctly. It will display data types when the mouse is over a column header. [#74204](https://github.com/ClickHouse/ClickHouse/pull/74204) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Add the ability to create min-max (skipping) indices by default for columns managed by MergeTree using settings `add_minmax_index_for_numeric_columns` (for numeric columns) and `add_minmax_index_for_string_columns` (for string columns). For now, both settings are disabled, so there is no behavior change yet. [#74266](https://github.com/ClickHouse/ClickHouse/pull/74266) ([Smita Kulkarni](https://github.com/SmitaRKulkarni)).
-* Add `script_query_number` and `script_line_number` fields to `system.query_log`, to the ClientInfo in the native protocol, and to server logs. This closes [#67542](https://github.com/ClickHouse/ClickHouse/issues/67542). Credits to [pinsvin00](https://github.com/pinsvin00) for kicking off this feature earlier in [#68133](https://github.com/ClickHouse/ClickHouse/issues/68133). [#74477](https://github.com/ClickHouse/ClickHouse/pull/74477) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Add minus operator support for DateTime64, to allow subtraction between DateTime64 values, as well as DateTime. [#74482](https://github.com/ClickHouse/ClickHouse/pull/74482) ([Li Yin](https://github.com/liyinsg)).
-* Support DeltaLake table engine for AzureBlobStorage. Fixes [#68043](https://github.com/ClickHouse/ClickHouse/issues/68043). [#74541](https://github.com/ClickHouse/ClickHouse/pull/74541) ([Smita Kulkarni](https://github.com/SmitaRKulkarni)).
-* Add the `bind_host` setting to set the source IP address for ClickHouse client connections. [#74741](https://github.com/ClickHouse/ClickHouse/pull/74741) ([Todd Yocum](https://github.com/toddyocum)).
-* Added the ability to apply unfinished mutations (not yet materialized by the background process) during the execution of `SELECT` queries immediately after submission. It can be enabled with the setting `apply_mutations_on_fly`. [#74877](https://github.com/ClickHouse/ClickHouse/pull/74877) ([Anton Popov](https://github.com/CurtizJ)).
-* Fixed some previously unexpected cases when `toStartOfInterval` datetime arguments are negative, by implementing a new function `toStartOfIntervalAllowNegative`, which does much the same but returns only Date32/DateTime64. [#74933](https://github.com/ClickHouse/ClickHouse/pull/74933) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* A new function `initialQueryStartTime` has been added. It returns the start time of the current query; the value is the same across all shards during a distributed query. [#75087](https://github.com/ClickHouse/ClickHouse/pull/75087) ([Roman Lomonosov](https://github.com/lomik)).
-* Introduce `parametrized_view_parameters` in `system.tables`. Closes [#66756](https://github.com/ClickHouse/ClickHouse/issues/66756). [#75112](https://github.com/ClickHouse/ClickHouse/pull/75112) ([NamNguyenHoai](https://github.com/NamHoaiNguyen)).
-* Allow altering a database comment. Closes [#73351](https://github.com/ClickHouse/ClickHouse/issues/73351). [#75622](https://github.com/ClickHouse/ClickHouse/pull/75622) ([NamNguyenHoai](https://github.com/NamHoaiNguyen)).
-* Add ability to ATTACH tables without database layer (avoids UUID hack). [#75788](https://github.com/ClickHouse/ClickHouse/pull/75788) ([Azat Khuzhin](https://github.com/azat)).
-* Added `concurrent_threads_scheduler` server setting that governs how CPU slots are distributed among concurrent queries. Could be set to `round_robin` (previous behavior) or `fair_round_robin` to address the issue of unfair CPU distribution between INSERTs and SELECTs. [#75949](https://github.com/ClickHouse/ClickHouse/pull/75949) ([Sergei Trifonov](https://github.com/serxa)).
-* Restore QPL codec which has been removed in v24.10 due to licensing issues. [#76021](https://github.com/ClickHouse/ClickHouse/pull/76021) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Added function `arraySymmetricDifference`. It returns all elements from multiple array arguments which do not occur in all arguments. Example: `SELECT arraySymmetricDifference([1, 2], [2, 3])` returns `[1, 3]`. (issue [#61673](https://github.com/ClickHouse/ClickHouse/issues/61673)). [#76231](https://github.com/ClickHouse/ClickHouse/pull/76231) ([Filipp Abapolov](https://github.com/pheepa)).
-* Add the `estimatecompressionratio` aggregate function; see [#70801](https://github.com/ClickHouse/ClickHouse/issues/70801). [#76661](https://github.com/ClickHouse/ClickHouse/pull/76661) ([Tariq Almawash](https://github.com/talmawash)).
-* `FilterTransformPassedRows` and `FilterTransformPassedBytes` profile events will show the number of rows and number of bytes filtered during the query execution. [#76662](https://github.com/ClickHouse/ClickHouse/pull/76662) ([Onkar Deshpande](https://github.com/onkar)).
-* Added the keccak256 hash function, commonly used in blockchain implementations, especially in EVM-based systems. [#76669](https://github.com/ClickHouse/ClickHouse/pull/76669) ([Arnaud Briche](https://github.com/arnaudbriche)).
-* Support SCRAM-SHA-256 and update the PostgreSQL wire protocol authentication. [#76839](https://github.com/ClickHouse/ClickHouse/pull/76839) ([scanhex12](https://github.com/scanhex12)).
-* Add the ability to define a list of headers that are forwarded from the client request to the external HTTP authenticator. [#77054](https://github.com/ClickHouse/ClickHouse/pull/77054) ([inv2004](https://github.com/inv2004)).
-* Support `IcebergMetadataFilesCache`, which will store manifest files/list and metadata.json in one cache. [#77156](https://github.com/ClickHouse/ClickHouse/pull/77156) ([Han Fei](https://github.com/hanfei1991)).
-* Add functions `arrayLevenshteinDistance`, `arrayLevenshteinDistanceWeighted`, and `arraySimilarity`. [#77187](https://github.com/ClickHouse/ClickHouse/pull/77187) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
-* Add three new functions: `icebergTruncate` (following the [Iceberg truncate transform specification](https://iceberg.apache.org/spec/#truncate-transform-details)), `toYearNumSinceEpoch`, and `toMonthNumSinceEpoch`. Support the `truncate` transform in partition pruning for the `Iceberg` engine. [#77403](https://github.com/ClickHouse/ClickHouse/pull/77403) ([alesapin](https://github.com/alesapin)).
-* Allows a user to query the state of an Iceberg table as it existed at a previous point in time. [#77439](https://github.com/ClickHouse/ClickHouse/pull/77439) ([Daniil Ivanik](https://github.com/divanik)).
-* Added CPU slot scheduling for workloads, see https://clickhouse.com/docs/operations/workload-scheduling#cpu_scheduling for details. [#77595](https://github.com/ClickHouse/ClickHouse/pull/77595) ([Sergei Trifonov](https://github.com/serxa)).
-* The `hasAll()` function can now take advantage of the `tokenbf_v1` and `ngrambf_v1` full-text skipping indices. [#77662](https://github.com/ClickHouse/ClickHouse/pull/77662) ([UnamedRus](https://github.com/UnamedRus)).
-* The `JSON` data type is production-ready. See https://jsonbench.com/. The `Dynamic` and `Variant` data types are production-ready as well. [#77785](https://github.com/ClickHouse/ClickHouse/pull/77785) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Added an in-memory cache for deserialized vector similarity indexes. This should make repeated approximate nearest neighbor (ANN) search queries faster. The size of the new cache is controlled by server settings `vector_similarity_index_cache_size` and `vector_similarity_index_cache_max_entries`. This feature supersedes the skipping index cache feature of earlier releases. [#77905](https://github.com/ClickHouse/ClickHouse/pull/77905) ([Shankar Iyer](https://github.com/shankar-iyer)).
-* Added functions `sparseGrams` and `sparseGramsHashes` with UTF-8 versions. Author: [scanhex12](https://github.com/scanhex12). [#78176](https://github.com/ClickHouse/ClickHouse/pull/78176) ([Pervakov Grigorii](https://github.com/GrigoryPervakov)).
-* Introduce `toInterval` function. This function accepts 2 arguments (value and unit), and converts the value to a specific `Interval` type. [#78723](https://github.com/ClickHouse/ClickHouse/pull/78723) ([Andrew Davis](https://github.com/pulpdrew)).
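-
-A minimal sketch of the new `toInterval` function described above (the accepted unit spellings shown here are an assumption):
-
-```sql
--- Convert a value and a unit into the corresponding Interval type:
-SELECT toInterval(3, 'day');            -- comparable to INTERVAL 3 DAY
-SELECT now() + toInterval(1, 'month');  -- usable in date arithmetic
-```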
-
-## Experimental features {#experimental-features}
-
-* Allow automatic cleanup merges of entire partitions after a configurable timeout with a new setting `enable_replacing_merge_with_cleanup_for_min_age_to_force_merge`. [#76440](https://github.com/ClickHouse/ClickHouse/pull/76440) ([Christoph Wurm](https://github.com/cwurm)).
-* Add support [for Unity Catalog](https://www.databricks.com/product/unity-catalog) for DeltaLake tables on top of AWS S3 and local filesystem. [#76988](https://github.com/ClickHouse/ClickHouse/pull/76988) ([alesapin](https://github.com/alesapin)).
-* Introduce experimental integration with AWS Glue service catalog for Iceberg tables. [#77257](https://github.com/ClickHouse/ClickHouse/pull/77257) ([alesapin](https://github.com/alesapin)).
-
-## Performance improvements {#performance-improvements}
-
-* Optimize performance with lazy projection to avoid reading unused columns. [#55518](https://github.com/ClickHouse/ClickHouse/pull/55518) ([Xiaozhe Yu](https://github.com/wudidapaopao)).
-* Start to compare rows from most likely unequal columns first. [#63780](https://github.com/ClickHouse/ClickHouse/pull/63780) ([UnamedRus](https://github.com/UnamedRus)).
-* Optimize RowBinary input format. Closes [#63805](https://github.com/ClickHouse/ClickHouse/issues/63805). [#65059](https://github.com/ClickHouse/ClickHouse/pull/65059) ([Pavel Kruglov](https://github.com/Avogar)).
-* Speed up string deserialization with low-level optimizations. [#65948](https://github.com/ClickHouse/ClickHouse/pull/65948) ([Nikita Taranov](https://github.com/nickitat)).
-* Apply the `preserve_most` attribute in some places in the code. [#67778](https://github.com/ClickHouse/ClickHouse/pull/67778) ([Nikita Taranov](https://github.com/nickitat)).
-* Implement a query condition cache to improve the performance of queries with repeated conditions. The range of the portion of data that does not match the condition is remembered as a temporary index in memory; subsequent queries will use this index. Closes [#67768](https://github.com/ClickHouse/ClickHouse/issues/67768). [#69236](https://github.com/ClickHouse/ClickHouse/pull/69236) ([zhongyuankai](https://github.com/zhongyuankai)).
-* Support async IO prefetch for `NativeORCBlockInputFormat`, which improves overall performance by hiding remote IO latency. The speedup can reach 1.47x in some test cases. [#70534](https://github.com/ClickHouse/ClickHouse/pull/70534) ([李扬](https://github.com/taiyang-li)).
-* Improve grace hash join performance by rearranging the right join table by keys. [#72237](https://github.com/ClickHouse/ClickHouse/pull/72237) ([kevinyhzou](https://github.com/KevinyhZou)).
-* Reintroduce respecting `ttl_only_drop_parts` on `MATERIALIZE TTL`; only read the necessary columns to recalculate TTL and drop parts by replacing them with an empty one. [#72751](https://github.com/ClickHouse/ClickHouse/pull/72751) ([Andrey Zvonov](https://github.com/zvonand)).
-* Allow `arrayROCAUC` and `arrayAUCPR` to compute partial area of the whole curve, so that its calculation can be parallelized over huge datasets. [#72904](https://github.com/ClickHouse/ClickHouse/pull/72904) ([Emmanuel](https://github.com/emmanuelsdias)).
-* Avoid spawning too many idle threads. [#72920](https://github.com/ClickHouse/ClickHouse/pull/72920) ([Guo Wangyang](https://github.com/guowangy)).
-* Splitting of left table blocks by hash was removed from the probe phase of the `parallel_hash` JOIN algorithm. [#73089](https://github.com/ClickHouse/ClickHouse/pull/73089) ([Nikita Taranov](https://github.com/nickitat)).
-* Don't list blob storage keys if we only have curly brackets expansion in table function. Closes [#73333](https://github.com/ClickHouse/ClickHouse/issues/73333). [#73518](https://github.com/ClickHouse/ClickHouse/pull/73518) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Replace Int256 and UInt256 with clang builtin i256 in arithmetic calculation according to tests in [#70502](https://github.com/ClickHouse/ClickHouse/issues/70502). [#73658](https://github.com/ClickHouse/ClickHouse/pull/73658) ([李扬](https://github.com/taiyang-li)).
-* Add a fast path for functions whose arguments are all numeric. Fixes performance issues in [#72258](https://github.com/ClickHouse/ClickHouse/pull/72258). [#73820](https://github.com/ClickHouse/ClickHouse/pull/73820) ([李扬](https://github.com/taiyang-li)).
-* Do not apply `maskedExecute` on non-function columns, improve the performance of short circuit execution. [#73965](https://github.com/ClickHouse/ClickHouse/pull/73965) ([lgbo](https://github.com/lgbo-ustc)).
-* Disable header detection for Kafka/NATS/RabbitMQ/FileLog to improve performance. [#74006](https://github.com/ClickHouse/ClickHouse/pull/74006) ([Azat Khuzhin](https://github.com/azat)).
-* Use log wrappers by value and don't allocate them in a heap. [#74034](https://github.com/ClickHouse/ClickHouse/pull/74034) ([Mikhail Artemenko](https://github.com/Michicosun)).
-* Execute a pipeline with a higher degree of parallelism after aggregation with grouping sets. [#74082](https://github.com/ClickHouse/ClickHouse/pull/74082) ([Nikita Taranov](https://github.com/nickitat)).
-* Reduce critical section in `MergeTreeReadPool`. [#74202](https://github.com/ClickHouse/ClickHouse/pull/74202) ([Guo Wangyang](https://github.com/guowangy)).
-* Optimized function `indexHint`. Now, columns that are only used as arguments of function `indexHint` are not read from the table. [#74314](https://github.com/ClickHouse/ClickHouse/pull/74314) ([Anton Popov](https://github.com/CurtizJ)).
-* Parallel replicas performance improvement. Packets deserialization on query initiator, for packets not related to parallel replicas protocol, now always happens in pipeline thread. Before, it could happen in a thread responsible for pipeline scheduling, which could make initiator less responsive and delay pipeline execution. [#74398](https://github.com/ClickHouse/ClickHouse/pull/74398) ([Igor Nikonov](https://github.com/devcrafter)).
-* Fixed calculation of size in memory for `LowCardinality` columns. [#74688](https://github.com/ClickHouse/ClickHouse/pull/74688) ([Nikita Taranov](https://github.com/nickitat)).
-* Improve the performance of reading whole JSON columns in Wide parts from S3 by adding prefetches for subcolumn prefix deserialization, caching deserialized prefixes, and deserializing subcolumn prefixes in parallel. Reading a JSON column from S3 becomes about 4x faster in queries like `SELECT data FROM table` and about 10x faster in queries like `SELECT data FROM table LIMIT 10`. [#74827](https://github.com/ClickHouse/ClickHouse/pull/74827) ([Pavel Kruglov](https://github.com/Avogar)).
-* Preallocate memory used by async inserts to improve performance. [#74945](https://github.com/ClickHouse/ClickHouse/pull/74945) ([Ilya Golshtein](https://github.com/ilejn)).
-* Fixed double pre-allocation in `ConcurrentHashJoin` in case join sides are swapped by the optimizer. [#75149](https://github.com/ClickHouse/ClickHouse/pull/75149) ([Nikita Taranov](https://github.com/nickitat)).
-* Fixed unnecessary contention in `parallel_hash` when `max_rows_in_join = max_bytes_in_join = 0`. [#75155](https://github.com/ClickHouse/ClickHouse/pull/75155) ([Nikita Taranov](https://github.com/nickitat)).
-* Slight improvement in some join scenarios: precalculate number of output rows and reserve memory for them. [#75376](https://github.com/ClickHouse/ClickHouse/pull/75376) ([Alexander Gololobov](https://github.com/davenger)).
-* `plain_rewritable` metadata files are small and do not need a large default buffer. Use a write buffer sized appropriately to fit the given path, improving memory utilization for a large number of active parts. [#75758](https://github.com/ClickHouse/ClickHouse/pull/75758) ([Julia Kartseva](https://github.com/jkartseva)).
-* In some cases (e.g., empty array column) data parts can contain empty files. We can skip writing empty blobs to ObjectStorage and only store metadata for such files when the table resides on disk with separated metadata and object storages. [#75860](https://github.com/ClickHouse/ClickHouse/pull/75860) ([Alexander Gololobov](https://github.com/davenger)).
-* It was discovered that concurrency control could lead to unfair CPU distribution between INSERTs and SELECTs: all CPU slots could be allocated unconditionally (without competition) to INSERTs with `max_threads = 1`, while SELECTs with high `max_threads` values suffered from poor performance due to using only a single thread. [#75941](https://github.com/ClickHouse/ClickHouse/pull/75941) ([Sergei Trifonov](https://github.com/serxa)).
-* Trivial optimization in `wrapInNullable` to avoid unnecessary null map allocation. [#76489](https://github.com/ClickHouse/ClickHouse/pull/76489) ([李扬](https://github.com/taiyang-li)).
-* Improve min/max performance for Decimal32/Decimal64/DateTime64. [#76570](https://github.com/ClickHouse/ClickHouse/pull/76570) ([李扬](https://github.com/taiyang-li)).
-* Actively evict data from the cache on parts removal. Do not let the cache grow to the maximum size if the amount of data is less. [#76641](https://github.com/ClickHouse/ClickHouse/pull/76641) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Query compilation (setting `compile_expressions`) now considers the machine type. This speeds up such queries significantly. [#76753](https://github.com/ClickHouse/ClickHouse/pull/76753) ([Robert Schulze](https://github.com/rschu1ze)).
-* Optimize `arraySort`. [#76850](https://github.com/ClickHouse/ClickHouse/pull/76850) ([李扬](https://github.com/taiyang-li)).
-* Speed-up building JOIN result by de-virtualizing calls to `col->insertFrom()`. [#77350](https://github.com/ClickHouse/ClickHouse/pull/77350) ([Alexander Gololobov](https://github.com/davenger)).
-* Merge marks of the same part and write them to the query condition cache at one time to reduce the consumption of locks. [#77377](https://github.com/ClickHouse/ClickHouse/pull/77377) ([zhongyuankai](https://github.com/zhongyuankai)).
-* Optimize `ORDER BY` on a single nullable or low-cardinality column. [#77789](https://github.com/ClickHouse/ClickHouse/pull/77789) ([李扬](https://github.com/taiyang-li)).
-* Disable `filesystem_cache_prefer_bigger_buffer_size` when the cache is used passively, such as for merges. [#77898](https://github.com/ClickHouse/ClickHouse/pull/77898) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Implement trivial count optimization for Iceberg. Now queries with `count()` and without any filters should be faster. Closes [#77639](https://github.com/ClickHouse/ClickHouse/issues/77639). [#78090](https://github.com/ClickHouse/ClickHouse/pull/78090) ([alesapin](https://github.com/alesapin)).
-* Support Iceberg data pruning based on `lower_bound` and `upper_bound` values for columns. Fixes [#77638](https://github.com/ClickHouse/ClickHouse/issues/77638). [#78242](https://github.com/ClickHouse/ClickHouse/pull/78242) ([alesapin](https://github.com/alesapin)).
-* Optimize memory usage for NativeReader. [#78442](https://github.com/ClickHouse/ClickHouse/pull/78442) ([Azat Khuzhin](https://github.com/azat)).
-* Trivial optimization: do not rewrite `count(if())` to `countIf` if a `CAST` is required. Closes [#78564](https://github.com/ClickHouse/ClickHouse/issues/78564). [#78565](https://github.com/ClickHouse/ClickHouse/pull/78565) ([李扬](https://github.com/taiyang-li)).
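-
-The `count(if())` note above can be sketched as follows (table and column names are hypothetical):
-
-```sql
--- count(if(cond, 1, NULL)) can be rewritten internally to countIf(cond):
-SELECT count(if(x > 10, 1, NULL)) FROM hits;
-SELECT countIf(x > 10) FROM hits;  -- the equivalent explicit form
--- The rewrite is skipped when the branches would require a CAST.
-```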
-
-## Improvements {#improvements}
-
-* Decrease the amount of Keeper requests by eliminating the use of single `get` requests, which could have caused a significant load on Keeper with the increased number of replicas, in places where `multiRead` is available. [#56862](https://github.com/ClickHouse/ClickHouse/pull/56862) ([Nikolay Degterinsky](https://github.com/evillique)).
-* Add support for SSL authentication with named collections for MySQL. Closes [#59111](https://github.com/ClickHouse/ClickHouse/issues/59111). [#59452](https://github.com/ClickHouse/ClickHouse/pull/59452) ([Nikolay Degterinsky](https://github.com/evillique)).
-* Improve new analyzer infrastructure performance via storing `ColumnPtr` instead of `Field` in the `ConstantNode`. Related to [#62245](https://github.com/ClickHouse/ClickHouse/issues/62245). [#63198](https://github.com/ClickHouse/ClickHouse/pull/63198) ([Dmitry Novik](https://github.com/novikd)).
-* Reject queries when the server is overloaded. The decision is made based on the ratio of wait time (`OSCPUWaitMicroseconds`) to busy time (`OSCPUVirtualTimeMicroseconds`). The query is dropped with some probability, when this ratio is between `min_os_cpu_wait_time_ratio_to_throw` and `max_os_cpu_wait_time_ratio_to_throw` (those are query level settings). [#63206](https://github.com/ClickHouse/ClickHouse/pull/63206) ([Alexey Katsman](https://github.com/alexkats)).
-* Drop blocks as early as possible to reduce the memory requirements. [#65647](https://github.com/ClickHouse/ClickHouse/pull/65647) ([lgbo](https://github.com/lgbo-ustc)).
-* The `processors_profile_log` table now has a default configuration with a TTL of 30 days. [#66139](https://github.com/ClickHouse/ClickHouse/pull/66139) ([Ilya Yatsishin](https://github.com/qoega)).
-* Allow creating a `bloom_filter` index on columns with the DateTime64 data type. [#66416](https://github.com/ClickHouse/ClickHouse/pull/66416) ([Yutong Xiao](https://github.com/YutSean)).
-* Introduce latency buckets and use them to track first byte read/write and connect times for S3 requests. That way we can later use gathered data to calculate approximate percentiles and adapt timeouts. [#69783](https://github.com/ClickHouse/ClickHouse/pull/69783) ([Alexey Katsman](https://github.com/alexkats)).
-* Queries passed to `Executable` storage are no longer limited to single threaded execution. [#70084](https://github.com/ClickHouse/ClickHouse/pull/70084) ([yawnt](https://github.com/yawnt)).
-* Added HTTP headers to OpenTelemetry span logs table for enhanced traceability. [#70516](https://github.com/ClickHouse/ClickHouse/pull/70516) ([jonymohajanGmail](https://github.com/jonymohajanGmail)).
-* Support writing ORC files with a custom time zone instead of always using the `GMT` time zone. [#70615](https://github.com/ClickHouse/ClickHouse/pull/70615) ([kevinyhzou](https://github.com/KevinyhZou)).
-* Replace table functions with their `-Cluster` alternatives if parallel replicas are enabled. Fixes [#65024](https://github.com/ClickHouse/ClickHouse/issues/65024). [#70659](https://github.com/ClickHouse/ClickHouse/pull/70659) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Respect IO scheduling settings when writing backups across clouds. [#71093](https://github.com/ClickHouse/ClickHouse/pull/71093) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
-* Reestablish connection to MySQL and Postgres dictionary replicas in the background so it wouldn't delay requests to corresponding dictionaries. [#71101](https://github.com/ClickHouse/ClickHouse/pull/71101) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* Add the metric alias name to `system.asynchronous_metrics`. [#71164](https://github.com/ClickHouse/ClickHouse/pull/71164) ([megao](https://github.com/jetgm)).
-* Refreshes of refreshable materialized views now appear in `system.query_log`. [#71333](https://github.com/ClickHouse/ClickHouse/pull/71333) ([Michael Kolupaev](https://github.com/al13n321)).
-* Evaluate parquet bloom filters and min/max indexes together. Necessary to properly support: `x = 3 or x > 5` where data = [1, 2, 4, 5]. [#71383](https://github.com/ClickHouse/ClickHouse/pull/71383) ([Arthur Passos](https://github.com/arthurpassos)).
-* Interactive metrics improvements. Fix metrics from parallel replicas not being fully displayed. Display the metrics in order of the most recent update, then lexicographically by name. Do not display stale metrics. [#71631](https://github.com/ClickHouse/ClickHouse/pull/71631) ([Julia Kartseva](https://github.com/jkartseva)).
-* Historically, for some reason, the query `ALTER TABLE MOVE PARTITION TO TABLE` checked `SELECT` and `ALTER DELETE` rights instead of the dedicated `ALTER_MOVE_PARTITION`. This PR makes use of this access type. For compatibility, this permission will also be granted implicitly if `SELECT` and `ALTER DELETE` are granted, but this behavior will be removed in future releases. Closes [#16403](https://github.com/ClickHouse/ClickHouse/issues/16403). [#71632](https://github.com/ClickHouse/ClickHouse/pull/71632) ([pufit](https://github.com/pufit)).
-* Enable the setting `use_hive_partitioning` by default. [#71636](https://github.com/ClickHouse/ClickHouse/pull/71636) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Throw an exception when trying to materialize a column in the sort key instead of allowing it to break the sort order. Does not solve [#71777](https://github.com/ClickHouse/ClickHouse/issues/71777), though. [#71891](https://github.com/ClickHouse/ClickHouse/pull/71891) ([Peter Nguyen](https://github.com/petern48)).
-* Allow more general join planning algorithm when hash join algorithm is enabled. [#71926](https://github.com/ClickHouse/ClickHouse/pull/71926) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
-* Hide secrets in `EXPLAIN QUERY TREE`. [#72025](https://github.com/ClickHouse/ClickHouse/pull/72025) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* Allow use of a configurable disk to store metadata files of databases and tables. The disk name can be set via `database_disk.disk` config parameter. [#72027](https://github.com/ClickHouse/ClickHouse/pull/72027) ([Tuan Pham Anh](https://github.com/tuanpach)).
-* Support parquet integer logical types on native reader. [#72105](https://github.com/ClickHouse/ClickHouse/pull/72105) ([Arthur Passos](https://github.com/arthurpassos)).
-* Make JSON output format pretty by default. Add new setting `output_format_json_pretty_print` to control it and enable it by default. [#72148](https://github.com/ClickHouse/ClickHouse/pull/72148) ([Pavel Kruglov](https://github.com/Avogar)).
-* Interactively request credentials in the browser if the default user requires a password. In previous versions, the server returned HTTP 403; now, it returns HTTP 401. [#72198](https://github.com/ClickHouse/ClickHouse/pull/72198) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* This PR converts the access types `CREATE_USER`, `ALTER_USER`, `DROP_USER`, `CREATE_ROLE`, `ALTER_ROLE`, and `DROP_ROLE` from global to parameterized, meaning users can now grant access management rights more precisely. [#72246](https://github.com/ClickHouse/ClickHouse/pull/72246) ([pufit](https://github.com/pufit)).
-* Allow specifying shard names in the cluster configuration. [#72276](https://github.com/ClickHouse/ClickHouse/pull/72276) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
-* Support CAST and ALTER between JSON types with different parameters. [#72303](https://github.com/ClickHouse/ClickHouse/pull/72303) ([Pavel Kruglov](https://github.com/Avogar)).
-* Add the `latest_fail_error_code_name` column to `system.mutations`. We need this column to introduce a new metric on stuck mutations and use it to build graphs of the errors encountered in the cloud as well as, optionally, adding a new less-noisy alert. [#72398](https://github.com/ClickHouse/ClickHouse/pull/72398) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
-* Reduce the amount of allocation in attaching of partitions. [#72583](https://github.com/ClickHouse/ClickHouse/pull/72583) ([Konstantin Morozov](https://github.com/k-morozov)).
-* Make the `max_bytes_before_external_sort` limit depend on total query memory consumption (previously it was the number of bytes in the sorting block for one sorting thread; now it has the same meaning as `max_bytes_before_external_group_by`: a total limit on the whole query's memory across all threads). Also added one more setting to control the on-disk block size: `min_external_sort_block_bytes`. [#72598](https://github.com/ClickHouse/ClickHouse/pull/72598) ([Azat Khuzhin](https://github.com/azat)).
-* Ignore memory restrictions by trace collector. [#72606](https://github.com/ClickHouse/ClickHouse/pull/72606) ([Azat Khuzhin](https://github.com/azat)).
-* Support subcolumns in MergeTree sorting key and skip indexes. [#72644](https://github.com/ClickHouse/ClickHouse/pull/72644) ([Pavel Kruglov](https://github.com/Avogar)).
-* Add server settings `dictionaries_lazy_load` and `wait_dictionaries_load_at_startup` to `system.server_settings`. [#72664](https://github.com/ClickHouse/ClickHouse/pull/72664) ([Christoph Wurm](https://github.com/cwurm)).
-* Adds setting `max_backup_bandwidth` to the list of settings that can be specified as part of `BACKUP`/`RESTORE` queries. [#72665](https://github.com/ClickHouse/ClickHouse/pull/72665) ([Christoph Wurm](https://github.com/cwurm)).
-* Parallel replicas used historical information about replica availability to improve replica selection but did not update the replica's error count when the connection was unavailable. Now the error count is updated in that case. [#72666](https://github.com/ClickHouse/ClickHouse/pull/72666) ([zoomxi](https://github.com/zoomxi)).
-* Reduce the log level for appearing replicated parts in the `ReplicatedMergeTree` engine to help minimize the volume of logs generated in a replicated cluster. [#72876](https://github.com/ClickHouse/ClickHouse/pull/72876) ([mor-akamai](https://github.com/morkalfon)).
-* Improve code encapsulation and abstractions around Iceberg metadata, as groundwork for upcoming features. [#72941](https://github.com/ClickHouse/ClickHouse/pull/72941) ([Daniil Ivanik](https://github.com/divanik)).
-* Support equal comparison for values of JSON column. [#72991](https://github.com/ClickHouse/ClickHouse/pull/72991) ([Pavel Kruglov](https://github.com/Avogar)).
-* Improve formatting of identifiers with JSON subcolumns to avoid unnecessary back quotes. [#73085](https://github.com/ClickHouse/ClickHouse/pull/73085) ([Pavel Kruglov](https://github.com/Avogar)).
-* Log `PREWHERE` conditions with `Test` level. [#73116](https://github.com/ClickHouse/ClickHouse/pull/73116) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* Support SETTINGS with implicit ENGINE and mixing engine and query settings. [#73120](https://github.com/ClickHouse/ClickHouse/pull/73120) ([Raúl Marín](https://github.com/Algunenano)).
-* Write parts with level 1 if `optimize_on_insert` is enabled. It allows to use several optimizations of queries with `FINAL` for freshly written parts. [#73132](https://github.com/ClickHouse/ClickHouse/pull/73132) ([Anton Popov](https://github.com/CurtizJ)).
-* For a query like `WHERE a[...]`, and also in the configuration file via per-connection settings `[...]`. [#74168](https://github.com/ClickHouse/ClickHouse/pull/74168) ([Christoph Wurm](https://github.com/cwurm)).
-* Change prometheus remote write response success status from 200/OK to 204/NoContent. [#74170](https://github.com/ClickHouse/ClickHouse/pull/74170) ([Michael Dempsey](https://github.com/bluestealth)).
-* Expose X-ClickHouse HTTP headers to JavaScript in the browser. It makes writing applications more convenient. [#74180](https://github.com/ClickHouse/ClickHouse/pull/74180) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* The `JSONEachRowWithProgress` format will include events with metadata, as well as totals and extremes. It also includes `rows_before_limit_at_least` and `rows_before_aggregation`. The format prints the exception properly if it arrives after partial results. The progress now includes elapsed nanoseconds. One final progress event is emitted at the end. The progress during query runtime will be printed no more frequently than the value of the `interactive_delay` setting. [#74181](https://github.com/ClickHouse/ClickHouse/pull/74181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Hourglass will rotate smoothly in Play UI. [#74182](https://github.com/ClickHouse/ClickHouse/pull/74182) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Even if the HTTP response is compressed, send packets as soon as they arrive. This allows the browser to receive progress packets and compressed data. [#74201](https://github.com/ClickHouse/ClickHouse/pull/74201) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Add the ability to reload `max_remote_read_network_bandwidth_for_server` and `max_remote_write_network_bandwidth_for_server` on the fly without restarting the server. [#74206](https://github.com/ClickHouse/ClickHouse/pull/74206) ([Kai Zhu](https://github.com/nauu)).
-* Autodetect secure connection based on connecting to port 9440 in ClickHouse Client. [#74212](https://github.com/ClickHouse/ClickHouse/pull/74212) ([Christoph Wurm](https://github.com/cwurm)).
-* Authenticate users with username only for `http_handlers` (previously the user was required to provide the password as well). [#74221](https://github.com/ClickHouse/ClickHouse/pull/74221) ([Azat Khuzhin](https://github.com/azat)).
-* Support for the alternative query languages PRQL and KQL was marked experimental. To use them, specify settings `allow_experimental_prql_dialect = 1` and `allow_experimental_kusto_dialect = 1`. [#74224](https://github.com/ClickHouse/ClickHouse/pull/74224) ([Robert Schulze](https://github.com/rschu1ze)).
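A minimal sketch of enabling the now-experimental dialects via the settings named above, together with the `dialect` setting:

```sql
-- Opt in to the experimental PRQL dialect:
SET allow_experimental_prql_dialect = 1;
SET dialect = 'prql';

-- Or the experimental Kusto (KQL) dialect:
SET allow_experimental_kusto_dialect = 1;
SET dialect = 'kusto';
```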
-* Support returning the default Enum type in more aggregate functions. [#74272](https://github.com/ClickHouse/ClickHouse/pull/74272) ([Raúl Marín](https://github.com/Algunenano)).
-* In `OPTIMIZE TABLE`, it is now possible to specify keyword `FORCE` as an alternative to existing keyword `FINAL`. [#74342](https://github.com/ClickHouse/ClickHouse/pull/74342) ([Robert Schulze](https://github.com/rschu1ze)).
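For example (hypothetical table `hits`), the two keywords are interchangeable:

```sql
-- FORCE is now accepted as an alternative spelling of FINAL:
OPTIMIZE TABLE hits FORCE;
-- same effect as:
OPTIMIZE TABLE hits FINAL;
```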
-* Added a merge tree setting `materialize_skip_indexes_on_merge` which suppresses the creation of skip indexes during merge. This allows users to control explicitly (via `ALTER TABLE [..] MATERIALIZE INDEX [...]`) when skip indexes are created. This can be useful if skip indexes are expensive to build (e.g. vector similarity indexes). [#74401](https://github.com/ClickHouse/ClickHouse/pull/74401) ([Robert Schulze](https://github.com/rschu1ze)).
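A sketch of the workflow with hypothetical table and index names: suppress index creation during merges, then materialize the index explicitly when convenient.

```sql
CREATE TABLE t
(
    id UInt64,
    val UInt64,
    INDEX val_idx val TYPE minmax GRANULARITY 1
)
ENGINE = MergeTree
ORDER BY id
SETTINGS materialize_skip_indexes_on_merge = 0;

-- Merges will no longer build val_idx; build it explicitly instead:
ALTER TABLE t MATERIALIZE INDEX val_idx;
```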
-* Support subcolumns in default and materialized expressions. [#74403](https://github.com/ClickHouse/ClickHouse/pull/74403) ([Pavel Kruglov](https://github.com/Avogar)).
-* Optimize keeper requests in Storage(S3/Azure)Queue. [#74410](https://github.com/ClickHouse/ClickHouse/pull/74410) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Add the IsServerShuttingDown metric, which is needed to trigger an alert when the server shutdown takes too much time. [#74429](https://github.com/ClickHouse/ClickHouse/pull/74429) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
-* Added Iceberg table names to `EXPLAIN`. [#74485](https://github.com/ClickHouse/ClickHouse/pull/74485) ([alekseev-maksim](https://github.com/alekseev-maksim)).
-* Use up to `1000` parallel replicas by default. [#74504](https://github.com/ClickHouse/ClickHouse/pull/74504) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Provide a better error message when using RECURSIVE CTE with the old analyzer. [#74523](https://github.com/ClickHouse/ClickHouse/pull/74523) ([Raúl Marín](https://github.com/Algunenano)).
-* Optimize keeper requests in Storage(S3/Azure)Queue. [#74538](https://github.com/ClickHouse/ClickHouse/pull/74538) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Improve HTTP session reuse when reading from s3 disk ([#72401](https://github.com/ClickHouse/ClickHouse/issues/72401)). [#74548](https://github.com/ClickHouse/ClickHouse/pull/74548) ([Julian Maicher](https://github.com/jmaicher)).
-* Show extended error messages in `system.errors`. [#74574](https://github.com/ClickHouse/ClickHouse/pull/74574) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Enabled a backoff logic for all types of replicated tasks. It will provide the ability to reduce CPU usage, memory usage, and log file sizes. Added new settings `max_postpone_time_for_failed_replicated_fetches_ms`, `max_postpone_time_for_failed_replicated_merges_ms` and `max_postpone_time_for_failed_replicated_tasks_ms` which are similar to `max_postpone_time_for_failed_mutations_ms`. [#74576](https://github.com/ClickHouse/ClickHouse/pull/74576) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
-* More accurate accounting for `max_joined_block_size_rows` setting for `parallel_hash` JOIN algorithm. Helps to avoid increased memory consumption compared to `hash` algorithm. [#74630](https://github.com/ClickHouse/ClickHouse/pull/74630) ([Nikita Taranov](https://github.com/nickitat)).
-* Added `dfs.client.use.datanode.hostname` libhdfs3 config option support. [#74635](https://github.com/ClickHouse/ClickHouse/pull/74635) ([Mikhail Tiukavkin](https://github.com/freshertm)).
-* Fixes the error `Invalid: Codec 'snappy' doesn't support setting a compression level`. [#74659](https://github.com/ClickHouse/ClickHouse/pull/74659) ([Arthur Passos](https://github.com/arthurpassos)).
-* Allow using a password for client communication with clickhouse-keeper. This feature is not very useful if you specify a proper SSL configuration for server and client, but it can still be useful in some cases. The password cannot be longer than 16 characters and is not connected to the Keeper authentication model. [#74673](https://github.com/ClickHouse/ClickHouse/pull/74673) ([alesapin](https://github.com/alesapin)).
-* Allow using blob paths to calculate checksums while making a backup. [#74729](https://github.com/ClickHouse/ClickHouse/pull/74729) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Use dynamic sharding for JOIN if the JOIN key is a prefix of PK for both parts. This optimization is enabled with `query_plan_join_shard_by_pk_ranges` setting (disabled by default). [#74733](https://github.com/ClickHouse/ClickHouse/pull/74733) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
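A sketch with hypothetical tables `t1`/`t2` whose primary keys share the prefix `id`:

```sql
-- Opt in to PK-range sharding of the JOIN (disabled by default):
SET query_plan_join_shard_by_pk_ranges = 1;

SELECT *
FROM t1
INNER JOIN t2 USING (id);
```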
-* Add error code for config reloader. [#74746](https://github.com/ClickHouse/ClickHouse/pull/74746) ([Garrett Thomas](https://github.com/garrettthomaskth)).
-* Added support for IPv6 addresses in MySQL and PostgreSQL table functions and engines. [#74796](https://github.com/ClickHouse/ClickHouse/pull/74796) ([Mikhail Koviazin](https://github.com/mkmkme)).
-* Parameters for the codec Gorilla will now always be saved in the table metadata in .sql file. This closes: [#70072](https://github.com/ClickHouse/ClickHouse/issues/70072). [#74814](https://github.com/ClickHouse/ClickHouse/pull/74814) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Implement short circuit optimization for `divideDecimal`. Fixes [#74280](https://github.com/ClickHouse/ClickHouse/issues/74280). [#74843](https://github.com/ClickHouse/ClickHouse/pull/74843) ([Kevin Mingtarja](https://github.com/kevinmingtarja)).
-* Improve performance of larger multi requests in Keeper. [#74849](https://github.com/ClickHouse/ClickHouse/pull/74849) ([Antonio Andelic](https://github.com/antonio2368)).
-* Now users can be specified inside the startup scripts. [#74894](https://github.com/ClickHouse/ClickHouse/pull/74894) ([pufit](https://github.com/pufit)).
-* Fetch parts in parallel in ALTER TABLE FETCH PARTITION (thread pool size is controlled with `max_fetch_partition_thread_pool_size`). [#74978](https://github.com/ClickHouse/ClickHouse/pull/74978) ([Azat Khuzhin](https://github.com/azat)).
-* Added a query ID column to `system.query_cache` (issue [#68205](https://github.com/ClickHouse/ClickHouse/issues/68205)). [#74982](https://github.com/ClickHouse/ClickHouse/pull/74982) ([NamNguyenHoai](https://github.com/NamHoaiNguyen)).
-* Re-enabled the SSH protocol. Fixed some critical vulnerabilities so that it is no longer possible to use a custom pager or specify `server-logs-file`. Disabled the ability to pass client options through environment variables by default (it is still possible via `ssh-server.enable_client_options_passing` in config.xml). Supported the progress table, query cancellation, completion, profile events progress, stdin, and the `send_logs_level` option. This closes: [#74340](https://github.com/ClickHouse/ClickHouse/issues/74340). [#74989](https://github.com/ClickHouse/ClickHouse/pull/74989) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Fix formatting of exceptions using a custom format if they appear during query interpretation. In previous versions, exceptions were formatted using the default format rather than the format specified in the query. This closes [#55422](https://github.com/ClickHouse/ClickHouse/issues/55422). [#74994](https://github.com/ClickHouse/ClickHouse/pull/74994) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Implemented parsing enhancements: added functionality to parse sequence identifiers in manifest files, and redesigned the Avro metadata parser to be easily extendable for future enhancements. [#75010](https://github.com/ClickHouse/ClickHouse/pull/75010) ([Daniil Ivanik](https://github.com/divanik)).
-* Allow cancelling `ALTER TABLE ... FREEZE ...` queries with `KILL QUERY` and a timeout (`max_execution_time`). [#75016](https://github.com/ClickHouse/ClickHouse/pull/75016) ([Kirill](https://github.com/kirillgarbar)).
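A rough sketch of both cancellation paths (hypothetical table `t`; assumes `ALTER ... FREEZE` accepts a per-query `SETTINGS` clause like other queries):

```sql
-- Session 1: FREEZE now respects the execution timeout:
ALTER TABLE t FREEZE SETTINGS max_execution_time = 60;

-- Session 2: or cancel the running FREEZE explicitly:
KILL QUERY WHERE query LIKE 'ALTER TABLE t FREEZE%' SYNC;
```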
-* Add support for `groupUniqArrayArrayMap` as `SimpleAggregateFunction`. [#75034](https://github.com/ClickHouse/ClickHouse/pull/75034) ([Miel Donkers](https://github.com/mdonkers)).
-* Support prepared statements in postgres wire protocol. [#75035](https://github.com/ClickHouse/ClickHouse/pull/75035) ([scanhex12](https://github.com/scanhex12)).
-* Hide catalog credential settings in database engine `Iceberg`. Closes [#74559](https://github.com/ClickHouse/ClickHouse/issues/74559). [#75080](https://github.com/ClickHouse/ClickHouse/pull/75080) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Added a few missing features into BuzzHouse: `ILIKE` and `REGEXP` operators, `<=>` and `IS NOT DISTINCT FROM`. [#75168](https://github.com/ClickHouse/ClickHouse/pull/75168) ([Pedro Ferreira](https://github.com/PedroTadim)).
-* The setting `min_chunk_bytes_for_parallel_parsing` cannot be zero anymore. This fixes: [#71110](https://github.com/ClickHouse/ClickHouse/issues/71110). [#75239](https://github.com/ClickHouse/ClickHouse/pull/75239) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* `intExp2` / `intExp10`: Define the previously undefined behaviour: return 0 for too small an argument, `18446744073709551615` for too large an argument, and throw an exception for `nan`. [#75312](https://github.com/ClickHouse/ClickHouse/pull/75312) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Support `s3.endpoint` natively from catalog config in `DatabaseIceberg`. Closes [#74558](https://github.com/ClickHouse/ClickHouse/issues/74558). [#75375](https://github.com/ClickHouse/ClickHouse/pull/75375) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Don't fail silently if user executing `SYSTEM DROP REPLICA` doesn't have enough permissions. [#75377](https://github.com/ClickHouse/ClickHouse/pull/75377) ([Bharat Nallan](https://github.com/bharatnc)).
-* Add a ProfileEvent about the number of times any of system logs has failed to flush. [#75466](https://github.com/ClickHouse/ClickHouse/pull/75466) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Add check and logging for decrypting and decompressing. [#75471](https://github.com/ClickHouse/ClickHouse/pull/75471) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Added support for the micro sign (U+00B5) in the `parseTimeDelta` function. Now both the micro sign (U+00B5) and the Greek letter mu (U+03BC) are recognized as valid representations for microseconds, aligning ClickHouse's behavior with Go’s implementation ([see time.go](https://github.com/golang/go/blob/ad7b46ee4ac1cee5095d64b01e8cf7fcda8bee5e/src/time/time.go#L983C19-L983C20) and [time/format.go](https://github.com/golang/go/blob/ad7b46ee4ac1cee5095d64b01e8cf7fcda8bee5e/src/time/format.go#L1608-L1609)). [#75472](https://github.com/ClickHouse/ClickHouse/pull/75472) ([Vitaly Orlov](https://github.com/orloffv)).
-* Replace server setting (`send_settings_to_client`) with client setting (`apply_settings_from_server`) that controls whether client-side code (e.g. parsing INSERT data and formatting query output) should use settings from server's `users.xml` and user profile. Otherwise only settings from client command line, session, and the query are used. Note that this only applies to native client (not e.g. HTTP), and doesn't apply to most of query processing (which happens on the server). [#75478](https://github.com/ClickHouse/ClickHouse/pull/75478) ([Michael Kolupaev](https://github.com/al13n321)).
-* Keeper improvement: disable digest calculation when committing to in-memory storage for better performance. It can be enabled with `keeper_server.digest_enabled_on_commit` config. Digest is still calculated when preprocessing requests. [#75490](https://github.com/ClickHouse/ClickHouse/pull/75490) ([Antonio Andelic](https://github.com/antonio2368)).
-* Push down filter expression from JOIN ON when possible. [#75536](https://github.com/ClickHouse/ClickHouse/pull/75536) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* Better error messages at syntax errors. Previously, if the query was too large, and the token whose length exceeds the limit is a very large string literal, the message about the reason was lost in the middle of two examples of this very long token. Fix the issue when a query with UTF-8 was cut incorrectly in the error message. Fix excessive quoting of query fragments. This closes [#75473](https://github.com/ClickHouse/ClickHouse/issues/75473). [#75561](https://github.com/ClickHouse/ClickHouse/pull/75561) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Add profile events in storage `S3(Azure)Queue`. [#75618](https://github.com/ClickHouse/ClickHouse/pull/75618) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Disable sending settings from server to client (`send_settings_to_client=false`) for compatibility (This feature will be re-implemented as client setting later for better usability). [#75648](https://github.com/ClickHouse/ClickHouse/pull/75648) ([Michael Kolupaev](https://github.com/al13n321)).
-* Add a config `memory_worker_correct_memory_tracker` to enable correction of the internal memory tracker with information from a different source, read periodically in a background thread. [#75714](https://github.com/ClickHouse/ClickHouse/pull/75714) ([Antonio Andelic](https://github.com/antonio2368)).
-* Use Analyzer in PrometheusRemoteReadProtocol. [#75729](https://github.com/ClickHouse/ClickHouse/pull/75729) ([Dmitry Novik](https://github.com/novikd)).
-* Add support for the histogram metric type. The existing gauge/counter metric types are insufficient for some metrics (e.g., the response times of requests to Keeper). The interface closely mirrors the Prometheus client: simply call `observe(value)` to increment the counter in the bucket corresponding to the value. Histogram metrics are exposed via `system.histogram_metrics`. [#75736](https://github.com/ClickHouse/ClickHouse/pull/75736) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
-* Add column `normalized_query_hash` into `system.processes`. Note: while it can be easily calculated on the fly with the `normalizedQueryHash` function, this is needed to prepare for subsequent changes. [#75756](https://github.com/ClickHouse/ClickHouse/pull/75756) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Querying `system.tables` will not throw even if there is a `Merge` table created over a database that no longer exists. Remove the `getTotalRows` method from `Hive` tables, because we don't allow it to do complex work. [#75772](https://github.com/ClickHouse/ClickHouse/pull/75772) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Web UI now has interactive database navigation. [#75777](https://github.com/ClickHouse/ClickHouse/pull/75777) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Allow to combine read-only and read-write disks in storage policy (as multiple volumes, or multiple disks). This allows to read data from the entire volume, while inserts will prefer the writable disk (i.e. Copy-on-Write storage policy). [#75862](https://github.com/ClickHouse/ClickHouse/pull/75862) ([Azat Khuzhin](https://github.com/azat)).
-* Remove trace_id from default ORDER BY for system.opentelemetry_span_log. [#75907](https://github.com/ClickHouse/ClickHouse/pull/75907) ([Azat Khuzhin](https://github.com/azat)).
-* Encryption (XML attribute `encrypted_by`) can now be applied to any configuration file (config.xml, users.xml, nested configuration files). Previously, it worked only for the top-level config.xml file. [#75911](https://github.com/ClickHouse/ClickHouse/pull/75911) ([Mikhail Gorshkov](https://github.com/mgorshkov)).
-* Store start_time/end_time for Backups with microseconds. [#75929](https://github.com/ClickHouse/ClickHouse/pull/75929) ([Aleksandr Musorin](https://github.com/AVMusorin)).
-* Add `MemoryTrackingUncorrected` metric showing value of internal global memory tracker which is not corrected by RSS. [#75935](https://github.com/ClickHouse/ClickHouse/pull/75935) ([Antonio Andelic](https://github.com/antonio2368)).
-* Calculate columns and indices sizes lazily in MergeTree. [#75938](https://github.com/ClickHouse/ClickHouse/pull/75938) ([Pavel Kruglov](https://github.com/Avogar)).
-* Convert JOIN to an IN subquery if the output columns are tied to the left table. A uniqueness step is needed first, so this is disabled by default until that step is added. [#75942](https://github.com/ClickHouse/ClickHouse/pull/75942) ([Shichao Jin](https://github.com/jsc0218)).
-* Added a server setting `throw_on_unknown_workload` that allows to choose behavior on query with `workload` setting set to unknown value: either allow unlimited access (default) or throw a `RESOURCE_ACCESS_DENIED` error. It is useful to force all queries to use workload scheduling. [#75999](https://github.com/ClickHouse/ClickHouse/pull/75999) ([Sergei Trifonov](https://github.com/serxa)).
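With `throw_on_unknown_workload` enabled on the server, a query referencing an undefined workload fails instead of running with unlimited access (sketch with a hypothetical table `t`):

```sql
SELECT count()
FROM t
SETTINGS workload = 'undefined_workload';
-- throws RESOURCE_ACCESS_DENIED if no such workload is defined
```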
-* Make the new, experimental Kafka table engine fully respect Keeper feature flags. [#76004](https://github.com/ClickHouse/ClickHouse/pull/76004) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
-* Don't rewrite subcolumns to getSubcolumn in ARRAY JOIN if not necessary. [#76018](https://github.com/ClickHouse/ClickHouse/pull/76018) ([Pavel Kruglov](https://github.com/Avogar)).
-* Retry coordination errors when loading tables. [#76020](https://github.com/ClickHouse/ClickHouse/pull/76020) ([Alexander Tokmakov](https://github.com/tavplubix)).
-* Improve the `system.warnings` table and add some dynamic warning messages that can be added, updated or removed. [#76029](https://github.com/ClickHouse/ClickHouse/pull/76029) ([Bharat Nallan](https://github.com/bharatnc)).
-* Support flushing individual logs in SYSTEM FLUSH LOGS. [#76132](https://github.com/ClickHouse/ClickHouse/pull/76132) ([Raúl Marín](https://github.com/Algunenano)).
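A sketch of the new form, assuming `query_log` and `part_log` are the system logs of interest:

```sql
-- Flush all system logs (previous behaviour):
SYSTEM FLUSH LOGS;
-- Flush only specific logs:
SYSTEM FLUSH LOGS query_log, part_log;
```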
-* Improved the `/binary` server's page. Using the Hilbert curve instead of the Morton curve. Display 512 MB worth of addresses in the square, which fills the square better (in previous versions, addresses fill only half of the square). Color addresses closer to the library name rather than the function name. Allow scrolling a bit more outside of the area. [#76192](https://github.com/ClickHouse/ClickHouse/pull/76192) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Disallow queries such as `ALTER USER user1 ADD PROFILES a, DROP ALL PROFILES`, because all `DROP` operations should come first in the order. [#76242](https://github.com/ClickHouse/ClickHouse/pull/76242) ([pufit](https://github.com/pufit)).
-* Various enhancements for SYNC REPLICA (better error messages, better tests, sanity checks). [#76307](https://github.com/ClickHouse/ClickHouse/pull/76307) ([Azat Khuzhin](https://github.com/azat)).
-* Retry ON CLUSTER queries in case of TOO_MANY_SIMULTANEOUS_QUERIES. [#76352](https://github.com/ClickHouse/ClickHouse/pull/76352) ([Patrick Galbraith](https://github.com/CaptTofu)).
-* Changed the default value of `output_format_pretty_max_rows` from 10000 to 1000 for better usability. [#76407](https://github.com/ClickHouse/ClickHouse/pull/76407) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Support for a refresh in readonly MergeTree tables. [#76467](https://github.com/ClickHouse/ClickHouse/pull/76467) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Use the correct fallback when a multipart copy to S3 fails with Access Denied during a backup. A multipart copy can produce an Access Denied error when the backup is performed between buckets that have different credentials. [#76515](https://github.com/ClickHouse/ClickHouse/pull/76515) ([Antonio Andelic](https://github.com/antonio2368)).
-* Faster ClickHouse server shutdown (got rid of a 2.5-second delay). [#76550](https://github.com/ClickHouse/ClickHouse/pull/76550) ([Azat Khuzhin](https://github.com/azat)).
-* Add query_id to system.errors. Related ticket [#75815](https://github.com/ClickHouse/ClickHouse/issues/75815). [#76581](https://github.com/ClickHouse/ClickHouse/pull/76581) ([Vladimir Baikov](https://github.com/bkvvldmr)).
-* Upgraded librdkafka to version 2.8.0 and improved the shutdown sequence for Kafka tables, reducing delays during table drops and server restarts. The `engine=Kafka` no longer explicitly leaves the consumer group when a table is dropped. Instead, the consumer remains in the group until it is automatically removed after `session_timeout_ms` (default: 45 seconds) of inactivity. [#76621](https://github.com/ClickHouse/ClickHouse/pull/76621) ([filimonov](https://github.com/filimonov)).
-* Fix validation of s3 request settings. [#76658](https://github.com/ClickHouse/ClickHouse/pull/76658) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Avoid excess allocation in `ReadBufferFromS3` and other remote reading buffers, reducing their memory consumption by half. [#76692](https://github.com/ClickHouse/ClickHouse/pull/76692) ([Sema Checherinda](https://github.com/CheSema)).
-* Support JSON type and subcolumns reading from View. [#76903](https://github.com/ClickHouse/ClickHouse/pull/76903) ([Pavel Kruglov](https://github.com/Avogar)).
-* Add support for converting UInt128 to IPv6. This allows the `bitAnd` operation and arithmetic on IPv6 values, with the result convertible back to IPv6. Closes [#76752](https://github.com/ClickHouse/ClickHouse/issues/76752). See: https://github.com/ClickHouse/ClickHouse/pull/57707. [#76928](https://github.com/ClickHouse/ClickHouse/pull/76928) ([Muzammil Abdul Rehman](https://github.com/muzammilar)).
-* System tables like `server_settings` or `settings` have a convenient `default` value column; now `merge_tree_settings` and `replicated_merge_tree_settings` have that column as well. [#76942](https://github.com/ClickHouse/ClickHouse/pull/76942) ([Diego Nieto](https://github.com/lesandie)).
-* Don't parse special Bool values in text formats inside Variant type by default. It can be enabled using setting `allow_special_bool_values_inside_variant`. [#76974](https://github.com/ClickHouse/ClickHouse/pull/76974) ([Pavel Kruglov](https://github.com/Avogar)).
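A sketch of opting back in to the old behaviour via the setting named above:

```sql
SET allow_special_bool_values_inside_variant = 1;
-- With the setting on, special Bool spellings are parsed as Bool
-- rather than falling through to String inside the Variant:
SELECT CAST('on', 'Variant(Bool, String)');
```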
-* Support a configurable per-task waiting time for low-priority queries, at both the session level and the server level. [#77013](https://github.com/ClickHouse/ClickHouse/pull/77013) ([VicoWu](https://github.com/VicoWu)).
-* Added `ProfileEvents::QueryPreempted`, which has the same logic as `CurrentMetrics::QueryPreempted`. [#77015](https://github.com/ClickHouse/ClickHouse/pull/77015) ([VicoWu](https://github.com/VicoWu)).
-* Previously, replicated databases might print credentials specified in a query to logs. This behaviour is fixed. This closes: [#77123](https://github.com/ClickHouse/ClickHouse/issues/77123). [#77133](https://github.com/ClickHouse/ClickHouse/pull/77133) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Bump zstd from 1.5.5 to 1.5.7 which has pretty [good performance improvements](https://github.com/facebook/zstd/releases/tag/v1.5.7). [#77137](https://github.com/ClickHouse/ClickHouse/pull/77137) ([Pradeep Chhetri](https://github.com/chhetripradeep)).
-* Allow ALTER TABLE DROP PARTITION for plain_rewritable disk. [#77138](https://github.com/ClickHouse/ClickHouse/pull/77138) ([Julia Kartseva](https://github.com/jkartseva)).
-* Add the ability to randomly sleep up to 500ms independent of part sizes before merges/mutations execution in case of zero-copy replication. [#77165](https://github.com/ClickHouse/ClickHouse/pull/77165) ([Alexey Katsman](https://github.com/alexkats)).
-* Support atomic rename when `TRUNCATE` is used with `INTO OUTFILE`. Resolves [#70323](https://github.com/ClickHouse/ClickHouse/issues/70323). [#77181](https://github.com/ClickHouse/ClickHouse/pull/77181) ([Onkar Deshpande](https://github.com/onkar)).
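For example (hypothetical table and file name), the output file is now replaced atomically:

```sql
SELECT *
FROM t
INTO OUTFILE 'result.csv' TRUNCATE
FORMAT CSV;
```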
-* Use FixedString for PostgreSQL's CHARACTER, CHAR and BPCHAR. [#77304](https://github.com/ClickHouse/ClickHouse/pull/77304) ([Pablo Marcos](https://github.com/pamarcos)).
-* Allow explicitly specifying the metadata file to read for Iceberg with the storage/table function setting `iceberg_metadata_file_path`. Fixes [#47412](https://github.com/ClickHouse/ClickHouse/issues/47412). [#77318](https://github.com/ClickHouse/ClickHouse/pull/77318) ([alesapin](https://github.com/alesapin)).
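A sketch with hypothetical bucket, credentials, and metadata path, pinning the exact metadata file to read:

```sql
SELECT count()
FROM icebergS3('https://bucket.s3.amazonaws.com/table_path/', 'key', 'secret')
SETTINGS iceberg_metadata_file_path = 'metadata/v3.metadata.json';
```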
-* Support using a remote disk for databases to store metadata files. [#77365](https://github.com/ClickHouse/ClickHouse/pull/77365) ([Tuan Pham Anh](https://github.com/tuanpach)).
-* Implement comparison for values of JSON data type. Now JSON objects can be compared similarly to Maps. [#77397](https://github.com/ClickHouse/ClickHouse/pull/77397) ([Pavel Kruglov](https://github.com/Avogar)).
-* Change reverted. [#77399](https://github.com/ClickHouse/ClickHouse/pull/77399) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Backup/restore setting `allow_s3_native_copy` now supports three possible values: - `false` - S3 native copy will not be used; - `true` (old default) - ClickHouse will try S3 native copy first, and if it fails, fall back to the reading+writing approach; - `'auto'` (new default) - ClickHouse will compare the source and destination credentials first. If they are the same, ClickHouse will try S3 native copy and then may fall back to the reading+writing approach. If they are different, ClickHouse will go directly to the reading+writing approach. [#77401](https://github.com/ClickHouse/ClickHouse/pull/77401) ([Vitaly Baranov](https://github.com/vitlibar)).
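A sketch with a hypothetical bucket and credentials, making the new default explicit:

```sql
BACKUP TABLE db.t
TO S3('https://bucket.s3.amazonaws.com/backups/t/', 'key', 'secret')
SETTINGS allow_s3_native_copy = 'auto';
```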
-* Support ALTER TABLE ... ATTACH|DETACH|MOVE|REPLACE PARTITION for the plain_rewritable disk. [#77406](https://github.com/ClickHouse/ClickHouse/pull/77406) ([Julia Kartseva](https://github.com/jkartseva)).
-* Skipping index cache is reverted. [#77447](https://github.com/ClickHouse/ClickHouse/pull/77447) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Reduce memory usage during prefetches of JSON column in Wide parts. [#77640](https://github.com/ClickHouse/ClickHouse/pull/77640) ([Pavel Kruglov](https://github.com/Avogar)).
-* Support AWS session token and environment credentials usage in delta kernel for the DeltaLake table engine. [#77661](https://github.com/ClickHouse/ClickHouse/pull/77661) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Support query parameters inside the `additional_table_filters` setting. [#77680](https://github.com/ClickHouse/ClickHouse/pull/77680) ([wxybear](https://github.com/wxybear)).
-* User-defined functions (UDFs) can now be marked as deterministic via a new tag in their XML definition. Also, the query cache now checks if UDFs called within a query are deterministic. If this is the case, it caches the query result. (Issue [#59988](https://github.com/ClickHouse/ClickHouse/issues/59988)). [#77769](https://github.com/ClickHouse/ClickHouse/pull/77769) ([Jimmy Aguilar Mena](https://github.com/Ergus)).
-* Added Buffer table engine parameters validation. [#77840](https://github.com/ClickHouse/ClickHouse/pull/77840) ([Pervakov Grigorii](https://github.com/GrigoryPervakov)).
-* Add config `enable_hdfs_pread` to enable or disable HDFS pread. [#77885](https://github.com/ClickHouse/ClickHouse/pull/77885) ([kevinyhzou](https://github.com/KevinyhZou)).
-* Add profile events for the number of ZooKeeper 'multi' read and write requests. [#77888](https://github.com/ClickHouse/ClickHouse/pull/77888) ([JackyWoo](https://github.com/JackyWoo)).
-* Allow creating and inserting into temporary tables when `disable_insertion_and_mutation` is on. [#77901](https://github.com/ClickHouse/ClickHouse/pull/77901) ([Xu Jia](https://github.com/XuJia0210)).
-* Decrease `max_insert_delayed_streams_for_parallel_write` (to 100). [#77919](https://github.com/ClickHouse/ClickHouse/pull/77919) ([Azat Khuzhin](https://github.com/azat)).
-* Add ability to configure number of columns that merges can flush in parallel using `max_merge_delayed_streams_for_parallel_write` (this should reduce memory usage for vertical merges to S3 about 25x times). [#77922](https://github.com/ClickHouse/ClickHouse/pull/77922) ([Azat Khuzhin](https://github.com/azat)).
-* Fix year parsing in Joda syntax patterns like 'yyy'. [#77973](https://github.com/ClickHouse/ClickHouse/pull/77973) ([李扬](https://github.com/taiyang-li)).
-* Attaching parts of MergeTree tables will be performed in their block order, which is important for special merging algorithms, such as ReplacingMergeTree. This closes [#71009](https://github.com/ClickHouse/ClickHouse/issues/71009). [#77976](https://github.com/ClickHouse/ClickHouse/pull/77976) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Query masking rules are now able to throw a `LOGICAL_ERROR` if a match happens. This helps check whether a pre-defined password is leaking anywhere in logs. [#78094](https://github.com/ClickHouse/ClickHouse/pull/78094) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Added column `index_length_column` to `information_schema.tables` for better compatibility with MySQL. [#78119](https://github.com/ClickHouse/ClickHouse/pull/78119) ([Paweł Zakrzewski](https://github.com/KrzaQ)).
-* Introduce two new metrics: `TotalMergeFailures` and `NonAbortedMergeFailures`. These metrics are needed to detect the cases where too many merges fail within a short period. [#78150](https://github.com/ClickHouse/ClickHouse/pull/78150) ([Miсhael Stetsyuk](https://github.com/mstetsyuk)).
-* Fix incorrect S3 URI parsing when the key is not specified in path-style URIs. [#78185](https://github.com/ClickHouse/ClickHouse/pull/78185) ([Arthur Passos](https://github.com/arthurpassos)).
-* Fix incorrect values of `BlockActiveTime`, `BlockDiscardTime`, `BlockWriteTime`, `BlockQueueTime`, and `BlockReadTime` asynchronous metrics (before the change 1 second was incorrectly reported as 0.001). [#78211](https://github.com/ClickHouse/ClickHouse/pull/78211) ([filimonov](https://github.com/filimonov)).
-* Respect `loading_retries` limit for errors during push to materialized view for StorageS3(Azure)Queue. Before that such errors were retried indefinitely. [#78313](https://github.com/ClickHouse/ClickHouse/pull/78313) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* In StorageDeltaLake with delta-kernel-rs implementation, fix performance and progress bar. [#78368](https://github.com/ClickHouse/ClickHouse/pull/78368) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Vector similarity index could over-allocate main memory by up to 2x. This fix reworks the memory allocation strategy, reducing the memory consumption and improving the effectiveness of the vector similarity index cache. (issue [#78056](https://github.com/ClickHouse/ClickHouse/issues/78056)). [#78394](https://github.com/ClickHouse/ClickHouse/pull/78394) ([Shankar Iyer](https://github.com/shankar-iyer)).
-* Introduce a setting `schema_type` for the `system.metric_log` table. There are three allowed schemas: `wide` (the current schema, with each metric/event in a separate column, most efficient for reads of individual columns); `transposed` (similar to `system.asynchronous_metric_log`, with metrics/events stored as rows); and the most interesting, `transposed_with_wide_view` (creates the underlying table with the `transposed` schema but also introduces a view with the `wide` schema which translates queries to the underlying table). In `transposed_with_wide_view`, subsecond resolution for the view is not supported; `event_time_microseconds` is just an alias for backward compatibility. [#78412](https://github.com/ClickHouse/ClickHouse/pull/78412) ([alesapin](https://github.com/alesapin)).
-* Support `include`, `from_env`, `from_zk` for runtime disks. Closes [#78177](https://github.com/ClickHouse/ClickHouse/issues/78177). [#78470](https://github.com/ClickHouse/ClickHouse/pull/78470) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Add several convenient ways to resolve root metadata.json file in an iceberg table function and engine. Closes [#78455](https://github.com/ClickHouse/ClickHouse/issues/78455). [#78475](https://github.com/ClickHouse/ClickHouse/pull/78475) ([Daniil Ivanik](https://github.com/divanik)).
-* Support partition pruning in Delta Lake. [#78486](https://github.com/ClickHouse/ClickHouse/pull/78486) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Support password-based auth in the SSH protocol in ClickHouse. [#78586](https://github.com/ClickHouse/ClickHouse/pull/78586) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Add a dynamic warning to the `system.warnings` table for long-running mutations. [#78658](https://github.com/ClickHouse/ClickHouse/pull/78658) ([Bharat Nallan](https://github.com/bharatnc)).
-* Drop connections if the CPU is massively overloaded. The decision is made based on the ratio of wait time (`OSCPUWaitMicroseconds`) to busy time (`OSCPUVirtualTimeMicroseconds`). The query is dropped with some probability, when this ratio is between `min_os_cpu_wait_time_ratio_to_drop_connection` and `max_os_cpu_wait_time_ratio_to_drop_connection`. [#78778](https://github.com/ClickHouse/ClickHouse/pull/78778) ([Alexey Katsman](https://github.com/alexkats)).
-* Allow empty values in Hive partitioning. [#78816](https://github.com/ClickHouse/ClickHouse/pull/78816) ([Arthur Passos](https://github.com/arthurpassos)).
-* Fix `IN` clause type coercion for `BFloat16` (i.e. `SELECT toBFloat16(1) IN [1, 2, 3];` now returns `1`). Closes [#78754](https://github.com/ClickHouse/ClickHouse/issues/78754). [#78839](https://github.com/ClickHouse/ClickHouse/pull/78839) ([Raufs Dunamalijevs](https://github.com/rienath)).
-* Do not check parts on other disks for MergeTree if the `disk` setting is set. [#78855](https://github.com/ClickHouse/ClickHouse/pull/78855) ([Azat Khuzhin](https://github.com/azat)).
-* Make data types in `used_data_type_families` in `system.query_log` canonical. [#78972](https://github.com/ClickHouse/ClickHouse/pull/78972) ([Kseniia Sumarokova](https://github.com/kssenii)).
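-
-A couple of the entries above can be illustrated with short, hedged examples (the table name, endpoint, and credentials below are placeholders, not from the changelog):
-
-```sql
--- The new 'auto' default for allow_s3_native_copy (#77401): ClickHouse
--- compares source and destination credentials before choosing native copy.
-BACKUP TABLE db.events TO S3('https://bucket.s3.amazonaws.com/backups/events', '<key>', '<secret>')
-SETTINGS allow_s3_native_copy = 'auto';
-
--- IN clause type coercion for BFloat16 (#78839): this now returns 1.
-SELECT toBFloat16(1) IN [1, 2, 3];
-```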
-
-## Bug Fix (user-visible misbehavior in an official stable release) {#bug-fix}
-
-* Fix cannot create SEQUENTIAL node with keeper-client. [#64177](https://github.com/ClickHouse/ClickHouse/pull/64177) ([Duc Canh Le](https://github.com/canhld94)).
-* Fix identifier resolution from parent scopes. Allow the use of aliases to expressions in the WITH clause. Fixes [#58994](https://github.com/ClickHouse/ClickHouse/issues/58994). Fixes [#62946](https://github.com/ClickHouse/ClickHouse/issues/62946). Fixes [#63239](https://github.com/ClickHouse/ClickHouse/issues/63239). Fixes [#65233](https://github.com/ClickHouse/ClickHouse/issues/65233). Fixes [#71659](https://github.com/ClickHouse/ClickHouse/issues/71659). Fixes [#71828](https://github.com/ClickHouse/ClickHouse/issues/71828). Fixes [#68749](https://github.com/ClickHouse/ClickHouse/issues/68749). [#66143](https://github.com/ClickHouse/ClickHouse/pull/66143) ([Dmitry Novik](https://github.com/novikd)).
-* Fix incorrect character counting in PositionImpl::vectorVector. [#71003](https://github.com/ClickHouse/ClickHouse/pull/71003) ([思维](https://github.com/heymind)).
-* Fix negate function monotonicity. In previous versions, the query `select * from a where -x = -42;`, where `x` is the primary key, could return a wrong result. [#71440](https://github.com/ClickHouse/ClickHouse/pull/71440) ([Michael Kolupaev](https://github.com/al13n321)).
-* `RESTORE` operations for access entities required more permission than necessary because of unhandled partial revokes. This PR fixes the issue. Closes [#71853](https://github.com/ClickHouse/ClickHouse/issues/71853). [#71958](https://github.com/ClickHouse/ClickHouse/pull/71958) ([pufit](https://github.com/pufit)).
-* Avoid pause after `ALTER TABLE REPLACE/MOVE PARTITION FROM/TO TABLE`. Retrieve correct settings for background task scheduling. [#72024](https://github.com/ClickHouse/ClickHouse/pull/72024) ([Aleksei Filatov](https://github.com/aalexfvk)).
-* Fix empty tuple handling in arrayIntersect. This fixes [#72578](https://github.com/ClickHouse/ClickHouse/issues/72578). [#72581](https://github.com/ClickHouse/ClickHouse/pull/72581) ([Amos Bird](https://github.com/amosbird)).
-* Fix handling of empty tuples in some input and output formats (e.g. Parquet, Arrow). [#72616](https://github.com/ClickHouse/ClickHouse/pull/72616) ([Michael Kolupaev](https://github.com/al13n321)).
-* Column-level GRANT SELECT/INSERT statements on wildcard databases/tables now throw an error. [#72646](https://github.com/ClickHouse/ClickHouse/pull/72646) ([Johann Gan](https://github.com/johanngan)).
-* Fix the situation when a user can't run `REVOKE ALL ON *.*` because of implicit grants in the target access entity. [#72872](https://github.com/ClickHouse/ClickHouse/pull/72872) ([pufit](https://github.com/pufit)).
-* Fix getting stuck while processing a pending batch for async distributed INSERT (e.g. due to `No such file or directory`). [#72939](https://github.com/ClickHouse/ClickHouse/pull/72939) ([Azat Khuzhin](https://github.com/azat)).
-* Add support for Azure SAS Tokens. [#72959](https://github.com/ClickHouse/ClickHouse/pull/72959) ([Azat Khuzhin](https://github.com/azat)).
-* Fix positive timezone formatting of formatDateTime scalar function. [#73091](https://github.com/ClickHouse/ClickHouse/pull/73091) ([ollidraese](https://github.com/ollidraese)).
-* Correctly reflect the source port when the connection is made through PROXYv1 and `auth_use_forwarded_address` is set; previously the proxy port was incorrectly used. Add the `currentQueryID()` function. [#73095](https://github.com/ClickHouse/ClickHouse/pull/73095) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* Propagate format settings to NativeWriter in TCPHandler, so settings like `output_format_native_write_json_as_string` are applied correctly. [#73179](https://github.com/ClickHouse/ClickHouse/pull/73179) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix reading JSON sub-object subcolumns with incorrect prefix. [#73182](https://github.com/ClickHouse/ClickHouse/pull/73182) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix crash in StorageObjectStorageQueue. [#73274](https://github.com/ClickHouse/ClickHouse/pull/73274) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix rare crash in refreshable materialized view during server shutdown. [#73323](https://github.com/ClickHouse/ClickHouse/pull/73323) ([Michael Kolupaev](https://github.com/al13n321)).
-* The `%f` placeholder of function `formatDateTime` now unconditionally generates six (sub-second) digits. This makes the behavior compatible with MySQL `DATE_FORMAT` function. The previous behavior can be restored using setting `formatdatetime_f_prints_scale_number_of_digits = 1`. [#73324](https://github.com/ClickHouse/ClickHouse/pull/73324) ([ollidraese](https://github.com/ollidraese)).
-* Improved datetime conversion during index analysis by enforcing saturating behavior for implicit Date to DateTime conversions. This resolves potential index analysis inaccuracies caused by datetime range limitations. This fixes [#73307](https://github.com/ClickHouse/ClickHouse/issues/73307). It also fixes explicit `toDateTime` conversion when `date_time_overflow_behavior = 'ignore'` which is the default value. [#73326](https://github.com/ClickHouse/ClickHouse/pull/73326) ([Amos Bird](https://github.com/amosbird)).
-* Fixed filtering by `_etag` column while reading from `s3` storage and table function. [#73353](https://github.com/ClickHouse/ClickHouse/pull/73353) ([Anton Popov](https://github.com/CurtizJ)).
-* Fix `Not-ready Set is passed as the second argument for function 'in'` error when `IN (subquery)` is used in `JOIN ON` expression, with the old analyzer. [#73382](https://github.com/ClickHouse/ClickHouse/pull/73382) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Fix preparing for squashing for Dynamic and JSON columns. Previously, in some cases, new types could be inserted into the shared variant/shared data even when the limit on types/paths was not reached. [#73388](https://github.com/ClickHouse/ClickHouse/pull/73388) ([Pavel Kruglov](https://github.com/Avogar)).
-* Check for corrupted sizes during types binary decoding to avoid too big allocations. [#73390](https://github.com/ClickHouse/ClickHouse/pull/73390) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fixed a logical error when reading from single-replica cluster with parallel replicas enabled. [#73403](https://github.com/ClickHouse/ClickHouse/pull/73403) ([Michael Kolupaev](https://github.com/al13n321)).
-* Fix ObjectStorageQueue with ZooKeeper and older Keeper. [#73420](https://github.com/ClickHouse/ClickHouse/pull/73420) ([Antonio Andelic](https://github.com/antonio2368)).
-* Implement a fix needed to enable Hive partitioning by default. [#73479](https://github.com/ClickHouse/ClickHouse/pull/73479) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Fix data race when creating vector similarity index. [#73517](https://github.com/ClickHouse/ClickHouse/pull/73517) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fixes segfault when the source of the dictionary contains a function with wrong data. [#73535](https://github.com/ClickHouse/ClickHouse/pull/73535) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Fix retries on failed insert in storage S3(Azure)Queue. Closes [#70951](https://github.com/ClickHouse/ClickHouse/issues/70951). [#73546](https://github.com/ClickHouse/ClickHouse/pull/73546) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fixed an error in function `tupleElement` which may appear in some cases for tuples with `LowCardinality` elements and the enabled setting `optimize_functions_to_subcolumns`. [#73548](https://github.com/ClickHouse/ClickHouse/pull/73548) ([Anton Popov](https://github.com/CurtizJ)).
-* Fix parsing enum glob followed by range one. Fixes [#73473](https://github.com/ClickHouse/ClickHouse/issues/73473). [#73569](https://github.com/ClickHouse/ClickHouse/pull/73569) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Fixed `parallel_replicas_for_non_replicated_merge_tree` being ignored in subqueries for non-replicated tables. [#73584](https://github.com/ClickHouse/ClickHouse/pull/73584) ([Igor Nikonov](https://github.com/devcrafter)).
-* Fix for `std::logical_error` thrown when a task cannot be scheduled. Found in stress tests. Example stacktrace: `2024.12.19 02:05:46.171833 [ 18190 ] {01f0daba-d3cc-4898-9e0e-c2c263306427} : Logical error: 'std::exception. Code: 1001, type: std::__1::future_error, e.what() = The associated promise has been destructed prior to the associated state becoming ready. (version 25.1.1.18724), Stack trace:.` [#73629](https://github.com/ClickHouse/ClickHouse/pull/73629) ([Alexander Gololobov](https://github.com/davenger)).
-* Do not interpret queries in `EXPLAIN SYNTAX` to avoid logical errors with incorrect processing stage for distributed queries. Fixes [#65205](https://github.com/ClickHouse/ClickHouse/issues/65205). [#73634](https://github.com/ClickHouse/ClickHouse/pull/73634) ([Dmitry Novik](https://github.com/novikd)).
-* Fix possible data inconsistency in Dynamic column. Fixes possible logical error `Nested columns sizes are inconsistent with local_discriminators column size`. [#73644](https://github.com/ClickHouse/ClickHouse/pull/73644) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fixed `NOT_FOUND_COLUMN_IN_BLOCK` in queries with `FINAL` and `SAMPLE`. Fixed incorrect result in selects with `FINAL` from `CollapsingMergeTree` and enabled optimizations of `FINAL` . [#73682](https://github.com/ClickHouse/ClickHouse/pull/73682) ([Anton Popov](https://github.com/CurtizJ)).
-* Fix crash in LIMIT BY COLUMNS. [#73686](https://github.com/ClickHouse/ClickHouse/pull/73686) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix the bug where a normal projection is forced to be used and the query exactly matches the projection definition, yet the projection is not selected and an error is raised instead. [#73700](https://github.com/ClickHouse/ClickHouse/pull/73700) ([Shichao Jin](https://github.com/jsc0218)).
-* Fix deserialization of Dynamic/Object structure. It could lead to CANNOT_READ_ALL_DATA exceptions. [#73767](https://github.com/ClickHouse/ClickHouse/pull/73767) ([Pavel Kruglov](https://github.com/Avogar)).
-* Skip `metadata_version.txt` while restoring parts from a backup. [#73768](https://github.com/ClickHouse/ClickHouse/pull/73768) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Fix [#73737](https://github.com/ClickHouse/ClickHouse/issues/73737). [#73775](https://github.com/ClickHouse/ClickHouse/pull/73775) ([zhanglistar](https://github.com/zhanglistar)).
-* Fixes [#72078](https://github.com/ClickHouse/ClickHouse/issues/72078) (S3 Express support was broken). [#73777](https://github.com/ClickHouse/ClickHouse/pull/73777) ([Sameer Tamsekar](https://github.com/stamsekar)).
-* Allow merging of rows with invalid sign column values in CollapsingMergeTree tables. [#73864](https://github.com/ClickHouse/ClickHouse/pull/73864) ([Christoph Wurm](https://github.com/cwurm)).
-* Fix the error `Code: 170. DB::Exception: Bad get: has Null, requested Int64: While executing DDLOnClusterQueryStatus. (BAD_GET)`, which could be thrown for `ON CLUSTER` DDL queries (e.g. `CREATE DATABASE`) when the wait for offline hosts timed out. [#73876](https://github.com/ClickHouse/ClickHouse/pull/73876) ([Tuan Pham Anh](https://github.com/tuanpach)).
-* Fix occasional failures to compare `map()` types, caused by the possibility of creating a `Map` whose nested tuple lacks the explicit element names ('keys', 'values'). [#73878](https://github.com/ClickHouse/ClickHouse/pull/73878) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* Ignore window functions during GROUP BY ALL clause resolution. Fix [#73501](https://github.com/ClickHouse/ClickHouse/issues/73501). [#73916](https://github.com/ClickHouse/ClickHouse/pull/73916) ([Dmitry Novik](https://github.com/novikd)).
-* Propagate Native format settings properly for client-server communication. [#73924](https://github.com/ClickHouse/ClickHouse/pull/73924) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix implicit privileges (worked as wildcard before). [#73932](https://github.com/ClickHouse/ClickHouse/pull/73932) ([Azat Khuzhin](https://github.com/azat)).
-* Fix high memory usage during nested Maps creation. [#73982](https://github.com/ClickHouse/ClickHouse/pull/73982) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix parsing nested JSON with empty keys. [#73993](https://github.com/ClickHouse/ClickHouse/pull/73993) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix: alias can be not added to the projection if it is referenced by another alias and selected in inverse order. [#74033](https://github.com/ClickHouse/ClickHouse/pull/74033) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* A disk using the plain_rewritable metadata can be shared among multiple server instances. It is expected for one instance to read a metadata object while another modifies it. Object not found errors are ignored during plain_rewritable initialization with Azure storage, similar to the behavior implemented for S3. [#74059](https://github.com/ClickHouse/ClickHouse/pull/74059) ([Julia Kartseva](https://github.com/jkartseva)).
-* Fix behaviour of `any` and `anyLast` with enum types and empty table. [#74061](https://github.com/ClickHouse/ClickHouse/pull/74061) ([Joanna Hulboj](https://github.com/jh0x)).
-* Fixes case when the user specifies keyword arguments in the kafka table engine. [#74064](https://github.com/ClickHouse/ClickHouse/pull/74064) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Fix altering `S3Queue` storage settings from the `s3queue_`-prefixed form to the unprefixed form and vice versa. [#74075](https://github.com/ClickHouse/ClickHouse/pull/74075) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Add a setting `allow_push_predicate_ast_for_distributed_subqueries`. This adds AST-based predicate push-down for distributed queries with the analyzer. This is a temporary solution that we use until distributed queries with query plan serialization are supported. Closes [#66878](https://github.com/ClickHouse/ClickHouse/issues/66878) [#69472](https://github.com/ClickHouse/ClickHouse/issues/69472) [#65638](https://github.com/ClickHouse/ClickHouse/issues/65638) [#68030](https://github.com/ClickHouse/ClickHouse/issues/68030) [#73718](https://github.com/ClickHouse/ClickHouse/issues/73718). [#74085](https://github.com/ClickHouse/ClickHouse/pull/74085) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Fixes an issue where, after [#73095](https://github.com/ClickHouse/ClickHouse/issues/73095), the port could be present in the `forwarded_for` field, making it impossible to resolve a host name that includes the port. [#74116](https://github.com/ClickHouse/ClickHouse/pull/74116) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* Fixed incorrect formatting of `ALTER TABLE (DROP STATISTICS ...) (DROP STATISTICS ...)`. [#74126](https://github.com/ClickHouse/ClickHouse/pull/74126) ([Han Fei](https://github.com/hanfei1991)).
-* Fix for issue [#66112](https://github.com/ClickHouse/ClickHouse/issues/66112). [#74128](https://github.com/ClickHouse/ClickHouse/pull/74128) ([Anton Ivashkin](https://github.com/ianton-ru)).
-* It is no longer possible to use `Loop` as a table engine in `CREATE TABLE`. This combination was previously causing segfaults. [#74137](https://github.com/ClickHouse/ClickHouse/pull/74137) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Fix security issue to prevent SQL injection in postgresql and sqlite table functions. [#74144](https://github.com/ClickHouse/ClickHouse/pull/74144) ([Pablo Marcos](https://github.com/pamarcos)).
-* Fix crash when reading a subcolumn from the compressed Memory engine table. Fixes [#74009](https://github.com/ClickHouse/ClickHouse/issues/74009). [#74161](https://github.com/ClickHouse/ClickHouse/pull/74161) ([Nikita Taranov](https://github.com/nickitat)).
-* Fixed an infinite loop occurring with queries to `system.detached_tables`. [#74190](https://github.com/ClickHouse/ClickHouse/pull/74190) ([Konstantin Morozov](https://github.com/k-morozov)).
-* Fix a logical error in S3Queue when setting a file as failed. [#74216](https://github.com/ClickHouse/ClickHouse/pull/74216) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Check for unsupported types for some storages. [#74218](https://github.com/ClickHouse/ClickHouse/pull/74218) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix crash with query `INSERT INTO SELECT` over PostgreSQL interface on macOS (issue [#72938](https://github.com/ClickHouse/ClickHouse/issues/72938)). [#74231](https://github.com/ClickHouse/ClickHouse/pull/74231) ([Artem Yurov](https://github.com/ArtemYurov)).
-* Fix native copy settings (`allow_s3_native_copy`/`allow_azure_native_copy`) for `RESTORE` from base backup. [#74286](https://github.com/ClickHouse/ClickHouse/pull/74286) ([Azat Khuzhin](https://github.com/azat)).
-* Fixed the issue when the number of detached tables in the database is a multiple of max_block_size. [#74289](https://github.com/ClickHouse/ClickHouse/pull/74289) ([Konstantin Morozov](https://github.com/k-morozov)).
-* Fix copying via ObjectStorage (i.e. S3) when source and destination credentials differ. [#74331](https://github.com/ClickHouse/ClickHouse/pull/74331) ([Azat Khuzhin](https://github.com/azat)).
-* Fixed uninitialized max_log_ptr in the replicated database. [#74336](https://github.com/ClickHouse/ClickHouse/pull/74336) ([Konstantin Morozov](https://github.com/k-morozov)).
-* Fix detection of "use the Rewrite method in the JSON API" for native copy on GCS. [#74338](https://github.com/ClickHouse/ClickHouse/pull/74338) ([Azat Khuzhin](https://github.com/azat)).
-* Fix crash when inserting interval (issue [#74299](https://github.com/ClickHouse/ClickHouse/issues/74299)). [#74478](https://github.com/ClickHouse/ClickHouse/pull/74478) ([NamNguyenHoai](https://github.com/NamHoaiNguyen)).
-* Fix incorrect projection analysis when `count(nullable)` is used in aggregate projections. This fixes [#74495](https://github.com/ClickHouse/ClickHouse/issues/74495) . This PR also adds some logs around projection analysis to clarify why a projection is used or why not. [#74498](https://github.com/ClickHouse/ClickHouse/pull/74498) ([Amos Bird](https://github.com/amosbird)).
-* Fix incorrect calculation of `BackgroundMergesAndMutationsPoolSize` (it was x2 from real value). [#74509](https://github.com/ClickHouse/ClickHouse/pull/74509) ([alesapin](https://github.com/alesapin)).
-* Fix the bug of leaking keeper watches when enable Cluster Discovery. [#74521](https://github.com/ClickHouse/ClickHouse/pull/74521) ([RinChanNOW](https://github.com/RinChanNOWWW)).
-* Fix formatting constant JSON literals. Previously it could lead to syntax errors during sending the query to another server. [#74533](https://github.com/ClickHouse/ClickHouse/pull/74533) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix a memory alignment issue reported by UBSan [#74512](https://github.com/ClickHouse/ClickHouse/issues/74512). [#74534](https://github.com/ClickHouse/ClickHouse/pull/74534) ([Arthur Passos](https://github.com/arthurpassos)).
-* Fix KeeperMap concurrent cleanup during table creation. [#74568](https://github.com/ClickHouse/ClickHouse/pull/74568) ([Antonio Andelic](https://github.com/antonio2368)).
-* Do not remove unused projection columns in subqueries in the presence of `EXCEPT` or `INTERSECT` to preserve the correct query result. Fixes [#73930](https://github.com/ClickHouse/ClickHouse/issues/73930). Fixes [#66465](https://github.com/ClickHouse/ClickHouse/issues/66465). [#74577](https://github.com/ClickHouse/ClickHouse/pull/74577) ([Dmitry Novik](https://github.com/novikd)).
-* Fix broken create query when using constant partition expressions with implicit projections enabled. This fixes [#74596](https://github.com/ClickHouse/ClickHouse/issues/74596) . [#74634](https://github.com/ClickHouse/ClickHouse/pull/74634) ([Amos Bird](https://github.com/amosbird)).
-* Fixed `INSERT SELECT` queries between tables with `Tuple` columns and enabled sparse serialization. [#74698](https://github.com/ClickHouse/ClickHouse/pull/74698) ([Anton Popov](https://github.com/CurtizJ)).
-* Fix function `right` working incorrectly for a constant negative offset. [#74701](https://github.com/ClickHouse/ClickHouse/pull/74701) ([Daniil Ivanik](https://github.com/divanik)).
-* Fix insertion of gzipped data sometimes failing due to flawed decompression on the client side. [#74707](https://github.com/ClickHouse/ClickHouse/pull/74707) ([siyuan](https://github.com/linkwk7)).
-* Avoid leaving connection in broken state after INSERT finishes with exception. [#74740](https://github.com/ClickHouse/ClickHouse/pull/74740) ([Azat Khuzhin](https://github.com/azat)).
-* Avoid reusing connections that had been left in the intermediate state. [#74749](https://github.com/ClickHouse/ClickHouse/pull/74749) ([Azat Khuzhin](https://github.com/azat)).
-* Partial revokes with wildcard grants could remove more privileges than expected. Closes [#74263](https://github.com/ClickHouse/ClickHouse/issues/74263). [#74751](https://github.com/ClickHouse/ClickHouse/pull/74751) ([pufit](https://github.com/pufit)).
-* Fix crash during JSON type declaration parsing when type name is not uppercase. [#74784](https://github.com/ClickHouse/ClickHouse/pull/74784) ([Pavel Kruglov](https://github.com/Avogar)).
-* Keeper fix: fix reading log entries from disk. [#74785](https://github.com/ClickHouse/ClickHouse/pull/74785) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fixed checking grants for SYSTEM REFRESH/START/STOP VIEW: it is no longer required to have this grant on `*.*` to execute a query for a specific view; only a grant for this view is required. [#74789](https://github.com/ClickHouse/ClickHouse/pull/74789) ([Alexander Tokmakov](https://github.com/tavplubix)).
-* The `hasColumnInTable` function doesn't account for alias columns. Fix it to also work for alias columns. [#74841](https://github.com/ClickHouse/ClickHouse/pull/74841) ([Bharat Nallan](https://github.com/bharatnc)).
-* Keeper: fix a logical error when the connection was terminated before being established. [#74844](https://github.com/ClickHouse/ClickHouse/pull/74844) ([Michael Kolupaev](https://github.com/al13n321)).
-* Fix a behavior when the server couldn't startup when there's a table using `AzureBlobStorage`. Tables are loaded without any requests to Azure. [#74880](https://github.com/ClickHouse/ClickHouse/pull/74880) ([Alexey Katsman](https://github.com/alexkats)).
-* Fix missing `used_privileges` and `missing_privileges` fields in `query_log` for BACKUP and RESTORE operations. [#74887](https://github.com/ClickHouse/ClickHouse/pull/74887) ([Alexey Katsman](https://github.com/alexkats)).
-* Fix FILE_DOESNT_EXIST error occurring during data parts merge for a table with an empty column in Azure Blob Storage. [#74892](https://github.com/ClickHouse/ClickHouse/pull/74892) ([Julia Kartseva](https://github.com/jkartseva)).
-* Fix projection column name when joining temporary tables, close [#68872](https://github.com/ClickHouse/ClickHouse/issues/68872). [#74897](https://github.com/ClickHouse/ClickHouse/pull/74897) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* HDFS: refresh the Kerberos ticket if a SASL error occurs during an HDFS SELECT request. [#74930](https://github.com/ClickHouse/ClickHouse/pull/74930) ([inv2004](https://github.com/inv2004)).
-* Fix queries to Replicated database in startup_scripts. [#74942](https://github.com/ClickHouse/ClickHouse/pull/74942) ([Azat Khuzhin](https://github.com/azat)).
-* Fix issues with aliased expression types in the JOIN ON clause when a null-safe comparison is used. [#74970](https://github.com/ClickHouse/ClickHouse/pull/74970) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* Revert part's state from deleting back to outdated when remove operation has failed. [#74985](https://github.com/ClickHouse/ClickHouse/pull/74985) ([Sema Checherinda](https://github.com/CheSema)).
-* In previous versions, when there was a scalar subquery, we started writing the progress (accumulated from processing the subquery) during the initialization of the data format, which was before HTTP headers were written. This led to the loss of HTTP headers, such as X-ClickHouse-QueryId and X-ClickHouse-Format, as well as Content-Type. [#74991](https://github.com/ClickHouse/ClickHouse/pull/74991) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Fix `CREATE TABLE AS...` queries for `database_replicated_allow_replicated_engine_arguments=0`. [#75000](https://github.com/ClickHouse/ClickHouse/pull/75000) ([Bharat Nallan](https://github.com/bharatnc)).
-* Fix leaving connection in a bad state in client after INSERT exceptions. [#75030](https://github.com/ClickHouse/ClickHouse/pull/75030) ([Azat Khuzhin](https://github.com/azat)).
-* Fix crash due to uncaught exception in PSQL replication. [#75062](https://github.com/ClickHouse/ClickHouse/pull/75062) ([Azat Khuzhin](https://github.com/azat)).
-* SASL can fail any RPC call; the fix allows retrying the call in case the krb5 ticket has expired. [#75063](https://github.com/ClickHouse/ClickHouse/pull/75063) ([inv2004](https://github.com/inv2004)).
-* Fixed usage of indexes (primary and secondary) for `Array`, `Map` and `Nullable(..)` columns with enabled setting `optimize_function_to_subcolumns`. Previously, indexes for these columns could have been ignored. [#75081](https://github.com/ClickHouse/ClickHouse/pull/75081) ([Anton Popov](https://github.com/CurtizJ)).
-* Disable `flatten_nested` when creating materialized views with inner tables since it will not be possible to use such flattened columns. [#75085](https://github.com/ClickHouse/ClickHouse/pull/75085) ([Christoph Wurm](https://github.com/cwurm)).
-* Fix some IPv6 addresses (such as `::ffff:1.1.1.1`) in the `forwarded_for` field being wrongly interpreted, resulting in client disconnects with an exception. [#75133](https://github.com/ClickHouse/ClickHouse/pull/75133) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* Fix nullsafe JOIN handling for LowCardinality nullable data type. Previously JOIN ON with nullsafe comparison, such as `IS NOT DISTINCT FROM`, `<=>` , `a IS NULL AND b IS NULL OR a == b` didn't work correctly with LowCardinality columns. [#75143](https://github.com/ClickHouse/ClickHouse/pull/75143) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* Fix queries with unused interpolation with the new analyzer. [#75173](https://github.com/ClickHouse/ClickHouse/pull/75173) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
-* Fix a crash when using a CTE with INSERT. [#75188](https://github.com/ClickHouse/ClickHouse/pull/75188) ([Shichao Jin](https://github.com/jsc0218)).
-* Keeper fix: avoid writing to broken changelogs when rolling back logs. [#75197](https://github.com/ClickHouse/ClickHouse/pull/75197) ([Antonio Andelic](https://github.com/antonio2368)).
-* Use `BFloat16` as a supertype where appropriate. This closes: [#74404](https://github.com/ClickHouse/ClickHouse/issues/74404). [#75236](https://github.com/ClickHouse/ClickHouse/pull/75236) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Fix unexpected defaults in join result with any_join_distinct_right_table_keys and OR in JOIN ON. [#75262](https://github.com/ClickHouse/ClickHouse/pull/75262) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* Mask `AzureBlobStorage` table engine credentials. [#75319](https://github.com/ClickHouse/ClickHouse/pull/75319) ([Garrett Thomas](https://github.com/garrettthomaskth)).
-* Fixed behavior when ClickHouse may erroneously do a filter pushdown to an external database like PostgreSQL, MySQL, or SQLite. This closes: [#71423](https://github.com/ClickHouse/ClickHouse/issues/71423). [#75320](https://github.com/ClickHouse/ClickHouse/pull/75320) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Fix crash in protobuf schema cache that can happen during output in Protobuf format and parallel query `SYSTEM DROP FORMAT SCHEMA CACHE`. [#75357](https://github.com/ClickHouse/ClickHouse/pull/75357) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix a possible logical error or uninitialized memory issue when a filter from `HAVING` is pushed down with parallel replicas. [#75363](https://github.com/ClickHouse/ClickHouse/pull/75363) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* Hide sensitive info for `icebergS3`, `icebergAzure` table functions and table engines. [#75378](https://github.com/ClickHouse/ClickHouse/pull/75378) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Function `TRIM` with computed empty trim characters is now correctly handled. Example: `SELECT TRIM(LEADING concat('') FROM 'foo')` (Issue [#69922](https://github.com/ClickHouse/ClickHouse/issues/69922)). [#75399](https://github.com/ClickHouse/ClickHouse/pull/75399) ([Manish Gill](https://github.com/mgill25)).
-* Fix data race in IOutputFormat. [#75448](https://github.com/ClickHouse/ClickHouse/pull/75448) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix possible error `Elements ... and ... of Nested data structure ... (Array columns) have different array sizes` when JSON subcolumns with Array type are used in JOIN over distributed tables. [#75512](https://github.com/ClickHouse/ClickHouse/pull/75512) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix invalid result buffer size calculation. Closes [#70031](https://github.com/ClickHouse/ClickHouse/issues/70031). [#75548](https://github.com/ClickHouse/ClickHouse/pull/75548) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Fix interaction between allow_feature_tier and compatibility mergetree setting. [#75635](https://github.com/ClickHouse/ClickHouse/pull/75635) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix an incorrect processed_rows value in system.s3queue_log in case a file was retried. [#75666](https://github.com/ClickHouse/ClickHouse/pull/75666) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Respect `materialized_views_ignore_errors` when a materialized view writes to a URL engine and there is a connectivity issue. [#75679](https://github.com/ClickHouse/ClickHouse/pull/75679) ([Christoph Wurm](https://github.com/cwurm)).
-* Fixed rare crashes while reading from `MergeTree` table after multiple asynchronous `RENAME` queries (with `alter_sync = 0`) between columns with different types. [#75693](https://github.com/ClickHouse/ClickHouse/pull/75693) ([Anton Popov](https://github.com/CurtizJ)).
-* Fix `Block structure mismatch in QueryPipeline stream` error for some queries with `UNION ALL`. [#75715](https://github.com/ClickHouse/ClickHouse/pull/75715) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Rebuild projection on alter modify of its PK column. Previously it could lead to `CANNOT_READ_ALL_DATA` errors during selects after alter modify of the column used in projection PK. [#75720](https://github.com/ClickHouse/ClickHouse/pull/75720) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix incorrect result of `ARRAY JOIN` for scalar subqueries (with analyzer). [#75732](https://github.com/ClickHouse/ClickHouse/pull/75732) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Fixed null pointer dereference in `DistinctSortedStreamTransform`. [#75734](https://github.com/ClickHouse/ClickHouse/pull/75734) ([Nikita Taranov](https://github.com/nickitat)).
-* Fix `allow_suspicious_ttl_expressions` behaviour. [#75771](https://github.com/ClickHouse/ClickHouse/pull/75771) ([Aleksei Filatov](https://github.com/aalexfvk)).
-* Fix uninitialized memory read in function `translate`. This closes [#75592](https://github.com/ClickHouse/ClickHouse/issues/75592). [#75794](https://github.com/ClickHouse/ClickHouse/pull/75794) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Propagate format settings to JSON as string formatting in Native format. [#75832](https://github.com/ClickHouse/ClickHouse/pull/75832) ([Pavel Kruglov](https://github.com/Avogar)).
-* Recorded the default enablement of parallel hash as join algorithm in v24.12 in the settings change history. This means that ClickHouse will continue to join using non-parallel hash if an older compatibility level than v24.12 is configured. [#75870](https://github.com/ClickHouse/ClickHouse/pull/75870) ([Robert Schulze](https://github.com/rschu1ze)).
-* Fixed a bug that tables with implicitly added min-max indices could not be copied into a new table (issue [#75677](https://github.com/ClickHouse/ClickHouse/issues/75677)). [#75877](https://github.com/ClickHouse/ClickHouse/pull/75877) ([Smita Kulkarni](https://github.com/SmitaRKulkarni)).
-* `clickhouse-library-bridge` allows opening arbitrary libraries from the filesystem, which makes it safe to run only inside an isolated environment. To prevent a vulnerability when it is run near the clickhouse-server, we will limit the paths of libraries to a location, provided in the configuration. This vulnerability was found with the [ClickHouse Bug Bounty Program](https://github.com/ClickHouse/ClickHouse/issues/38986) by **Arseniy Dugin**. [#75954](https://github.com/ClickHouse/ClickHouse/pull/75954) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* We happened to use JSON serialization for some metadata, which was a mistake, because JSON does not support binary data inside string literals, including zero bytes. SQL queries can contain binary data and invalid UTF-8, so we have to support this in our metadata files as well. At the same time, ClickHouse's `JSONEachRow` and similar formats work around that by deviating from the JSON standard in favor of a perfect roundtrip for the binary data. See the motivation here: https://github.com/ClickHouse/ClickHouse/pull/73668#issuecomment-2560501790. The solution is to make `Poco::JSON` library consistent with the JSON format serialization in ClickHouse. This closes [#73668](https://github.com/ClickHouse/ClickHouse/issues/73668). [#75963](https://github.com/ClickHouse/ClickHouse/pull/75963) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Fix `Part <...> does not contain in snapshot of previous virtual parts. (PART_IS_TEMPORARILY_LOCKED)` during DETACH PART. [#76039](https://github.com/ClickHouse/ClickHouse/pull/76039) ([Aleksei Filatov](https://github.com/aalexfvk)).
-* Fix check for commit limits in storage `S3Queue`. [#76104](https://github.com/ClickHouse/ClickHouse/pull/76104) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix attaching MergeTree tables with auto indexes (`add_minmax_index_for_numeric_columns`/`add_minmax_index_for_string_columns`). [#76139](https://github.com/ClickHouse/ClickHouse/pull/76139) ([Azat Khuzhin](https://github.com/azat)).
-* Fixed an issue where stack traces from parent threads of a job (`enable_job_stack_trace` setting) were not printed out. Also fixed the `enable_job_stack_trace` setting not being properly propagated to threads, resulting in stack trace content that did not always respect this setting. [#76191](https://github.com/ClickHouse/ClickHouse/pull/76191) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* Fix reinterpretAs with FixedString on big-endian architecture. [#76253](https://github.com/ClickHouse/ClickHouse/pull/76253) ([Azat Khuzhin](https://github.com/azat)).
-* Fix all sorts of bugs due to a race between UUIDs and table names (for instance, this fixes the race between `RENAME` and `RESTART REPLICA`: with a concurrent `RENAME` and `SYSTEM RESTART REPLICA`, you may end up restarting the wrong replica and/or leaving one of the tables in a `Table X is being restarted` state). [#76308](https://github.com/ClickHouse/ClickHouse/pull/76308) ([Azat Khuzhin](https://github.com/azat)).
-* Removed allocation from the signal handler. [#76446](https://github.com/ClickHouse/ClickHouse/pull/76446) ([Nikita Taranov](https://github.com/nickitat)).
-* Fix dynamic filesystem cache resize handling unexpected errors during eviction. [#76466](https://github.com/ClickHouse/ClickHouse/pull/76466) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fixed `used_flag` initialization in parallel hash. It might cause a server crash. [#76580](https://github.com/ClickHouse/ClickHouse/pull/76580) ([Nikita Taranov](https://github.com/nickitat)).
-* Fix a logical error when calling `defaultProfiles()` function inside a projection. [#76627](https://github.com/ClickHouse/ClickHouse/pull/76627) ([pufit](https://github.com/pufit)).
-* Do not request interactive basic auth in the browser in Web UI. Closes [#76319](https://github.com/ClickHouse/ClickHouse/issues/76319). [#76637](https://github.com/ClickHouse/ClickHouse/pull/76637) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Fix THERE_IS_NO_COLUMN exception when selecting boolean literal from distributed tables. [#76656](https://github.com/ClickHouse/ClickHouse/pull/76656) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* The subpath inside the table directory is now chosen in a more robust way. [#76681](https://github.com/ClickHouse/ClickHouse/pull/76681) ([Daniil Ivanik](https://github.com/divanik)).
-* Fix an error `Not found column in block` after altering a table with a subcolumn in PK. After https://github.com/ClickHouse/ClickHouse/pull/72644, requires https://github.com/ClickHouse/ClickHouse/pull/74403. [#76686](https://github.com/ClickHouse/ClickHouse/pull/76686) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Add performance tests for null short circuits and fix bugs. [#76708](https://github.com/ClickHouse/ClickHouse/pull/76708) ([李扬](https://github.com/taiyang-li)).
-* Flush output write buffers before finalizing them. Fix `LOGICAL_ERROR` generated during the finalization of some output format, e.g. `JSONEachRowWithProgressRowOutputFormat`. [#76726](https://github.com/ClickHouse/ClickHouse/pull/76726) ([Antonio Andelic](https://github.com/antonio2368)).
-* Added support for MongoDB binary UUID ([#74452](https://github.com/ClickHouse/ClickHouse/issues/74452)). Fixed WHERE pushdown to MongoDB when using the table function ([#72210](https://github.com/ClickHouse/ClickHouse/issues/72210)). Changed the MongoDB-to-ClickHouse type mapping such that MongoDB's binary UUID can only be parsed to ClickHouse's UUID; this should avoid ambiguities and surprises in the future. Fixed OID mapping, preserving backward compatibility. [#76762](https://github.com/ClickHouse/ClickHouse/pull/76762) ([Kirill Nikiforov](https://github.com/allmazz)).
-* Fix exception handling in parallel prefixes deserialization of JSON subcolumns. [#76809](https://github.com/ClickHouse/ClickHouse/pull/76809) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix lgamma function behavior for negative integers. [#76840](https://github.com/ClickHouse/ClickHouse/pull/76840) ([Ilya Kataev](https://github.com/IlyaKataev)).
-* Fix reverse key analysis for explicitly defined primary keys. Similar to [#76654](https://github.com/ClickHouse/ClickHouse/issues/76654). [#76846](https://github.com/ClickHouse/ClickHouse/pull/76846) ([Amos Bird](https://github.com/amosbird)).
-* Fix pretty print of Bool values in JSON format. [#76905](https://github.com/ClickHouse/ClickHouse/pull/76905) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix possible crash because of bad JSON column rollback on error during async inserts. [#76908](https://github.com/ClickHouse/ClickHouse/pull/76908) ([Pavel Kruglov](https://github.com/Avogar)).
-* Previously, `multi_if` may return different types of columns during planning and main execution. This resulted in code producing undefined behavior from the C++ perspective. [#76914](https://github.com/ClickHouse/ClickHouse/pull/76914) ([Nikita Taranov](https://github.com/nickitat)).
-* Fixed incorrect serialization of constant nullable keys in MergeTree. This fixes [#76939](https://github.com/ClickHouse/ClickHouse/issues/76939). [#76985](https://github.com/ClickHouse/ClickHouse/pull/76985) ([Amos Bird](https://github.com/amosbird)).
-* Fix sorting of `BFloat16` values. This closes [#75487](https://github.com/ClickHouse/ClickHouse/issues/75487). This closes [#75669](https://github.com/ClickHouse/ClickHouse/issues/75669). [#77000](https://github.com/ClickHouse/ClickHouse/pull/77000) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Fix a bug with JSON Variant subcolumns by adding a check to skip ephemeral subcolumns in the part consistency check. [#72187](https://github.com/ClickHouse/ClickHouse/issues/72187). [#77034](https://github.com/ClickHouse/ClickHouse/pull/77034) ([Smita Kulkarni](https://github.com/SmitaRKulkarni)).
-* Fix crash in template parsing in Values format in case of types mismatch. [#77071](https://github.com/ClickHouse/ClickHouse/pull/77071) ([Pavel Kruglov](https://github.com/Avogar)).
-* Don't allow creating EmbeddedRocksDB table with subcolumn in a primary key. Previously, such a table could be created but `SELECT` queries failed. [#77074](https://github.com/ClickHouse/ClickHouse/pull/77074) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix an illegal comparison in distributed queries caused by pushing down predicates to remote nodes without respecting literal types. [#77093](https://github.com/ClickHouse/ClickHouse/pull/77093) ([Duc Canh Le](https://github.com/canhld94)).
-* Fix crash during Kafka table creation with exception. [#77121](https://github.com/ClickHouse/ClickHouse/pull/77121) ([Pavel Kruglov](https://github.com/Avogar)).
-* Support new JSON and subcolumns in Kafka and RabbitMQ engines. [#77122](https://github.com/ClickHouse/ClickHouse/pull/77122) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix exception stack unwinding on macOS. [#77126](https://github.com/ClickHouse/ClickHouse/pull/77126) ([Eduard Karacharov](https://github.com/korowa)).
-* Fix reading 'null' subcolumn in getSubcolumn function. [#77163](https://github.com/ClickHouse/ClickHouse/pull/77163) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix skip indexes not working with expressions containing literals in the analyzer, and remove trivial casts during index analysis. [#77229](https://github.com/ClickHouse/ClickHouse/pull/77229) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix the bloom filter index with `Array` and unsupported functions. [#77271](https://github.com/ClickHouse/ClickHouse/pull/77271) ([Pavel Kruglov](https://github.com/Avogar)).
-* We should only check the restriction on the number of tables during the initial CREATE query. [#77274](https://github.com/ClickHouse/ClickHouse/pull/77274) ([Nikolay Degterinsky](https://github.com/evillique)).
-* `SELECT toBFloat16(-0.0) == toBFloat16(0.0)` now correctly returns `true` (from previously `false`). This makes the behavior consistent with `Float32` and `Float64`. [#77290](https://github.com/ClickHouse/ClickHouse/pull/77290) ([Shankar Iyer](https://github.com/shankar-iyer)).
-* Fix a possible incorrect reference to the uninitialized `key_index` variable, which may lead to a crash in debug builds (this uninitialized reference won't cause issues in release builds because subsequent code is likely to throw errors). [#77305](https://github.com/ClickHouse/ClickHouse/pull/77305) ([wxybear](https://github.com/wxybear)).
-* Reverted. [#77307](https://github.com/ClickHouse/ClickHouse/pull/77307) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Fix name for partition with a Bool value. It was broken in https://github.com/ClickHouse/ClickHouse/pull/74533. [#77319](https://github.com/ClickHouse/ClickHouse/pull/77319) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix comparison between tuples with nullable elements inside and strings. As an example, before the change comparison between a Tuple `(1, null)` and a String `'(1,null)'` would result in an error. Another example would be a comparison between a Tuple `(1, a)`, where `a` is a Nullable column, and a String `'(1, 2)'`. This change addresses these issues. [#77323](https://github.com/ClickHouse/ClickHouse/pull/77323) ([Alexey Katsman](https://github.com/alexkats)).
-* Fix a crash in ObjectStorageQueueSource. It was introduced in https://github.com/ClickHouse/ClickHouse/pull/76358. [#77325](https://github.com/ClickHouse/ClickHouse/pull/77325) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix a bug when `close_session` query parameter didn't have any effect leading to named sessions being closed only after `session_timeout`. [#77336](https://github.com/ClickHouse/ClickHouse/pull/77336) ([Alexey Katsman](https://github.com/alexkats)).
-* Fix `async_insert` with `input()`. [#77340](https://github.com/ClickHouse/ClickHouse/pull/77340) ([Azat Khuzhin](https://github.com/azat)).
-* Fix: `WITH FILL` may fail with `NOT_FOUND_COLUMN_IN_BLOCK` when the planner removes a sorting column. A similar issue was related to an inconsistent DAG calculated for the `INTERPOLATE` expression. [#77343](https://github.com/ClickHouse/ClickHouse/pull/77343) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* Reverted. [#77390](https://github.com/ClickHouse/ClickHouse/pull/77390) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* Fixed receiving messages from nats server without attached mv. [#77392](https://github.com/ClickHouse/ClickHouse/pull/77392) ([Dmitry Novikov](https://github.com/dmitry-sles-novikov)).
-* Fix logical error while reading from empty `FileLog` via `merge` table function, close [#75575](https://github.com/ClickHouse/ClickHouse/issues/75575). [#77441](https://github.com/ClickHouse/ClickHouse/pull/77441) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* Fix several `LOGICAL_ERROR`s around setting an alias of invalid AST nodes. [#77445](https://github.com/ClickHouse/ClickHouse/pull/77445) ([Raúl Marín](https://github.com/Algunenano)).
-* In filesystem cache implementation fix error processing during file segment write. [#77471](https://github.com/ClickHouse/ClickHouse/pull/77471) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Make DatabaseIceberg use correct metadata file provided by catalog. Closes [#75187](https://github.com/ClickHouse/ClickHouse/issues/75187). [#77486](https://github.com/ClickHouse/ClickHouse/pull/77486) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Use default format settings in Dynamic serialization from shared variant. [#77572](https://github.com/ClickHouse/ClickHouse/pull/77572) ([Pavel Kruglov](https://github.com/Avogar)).
-* Revert 'Avoid toAST() in execution of scalar subqueries'. [#77584](https://github.com/ClickHouse/ClickHouse/pull/77584) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix checking if the table data path exists on the local disk. [#77608](https://github.com/ClickHouse/ClickHouse/pull/77608) ([Tuan Pham Anh](https://github.com/tuanpach)).
-* The query cache now assumes that UDFs are non-deterministic. Accordingly, results of queries with UDFs are no longer cached. Previously, users were able to define non-deterministic UDFs whose result would erroneously be cached (issue [#77553](https://github.com/ClickHouse/ClickHouse/issues/77553)). [#77633](https://github.com/ClickHouse/ClickHouse/pull/77633) ([Jimmy Aguilar Mena](https://github.com/Ergus)).
-* Fix sending constant values to remote for some types. [#77634](https://github.com/ClickHouse/ClickHouse/pull/77634) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fix system.filesystem_cache_log working only under setting `enable_filesystem_cache_log`. [#77650](https://github.com/ClickHouse/ClickHouse/pull/77650) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix a logical error when calling `defaultRoles()` function inside a projection. Follow-up for [#76627](https://github.com/ClickHouse/ClickHouse/issues/76627). [#77667](https://github.com/ClickHouse/ClickHouse/pull/77667) ([pufit](https://github.com/pufit)).
-* Fix crash because of expired context in StorageS3(Azure)Queue. [#77720](https://github.com/ClickHouse/ClickHouse/pull/77720) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* A second argument of type `Nullable` for function `arrayResize` is now disallowed. Previously, anything from errors to wrong results could happen with `Nullable` as the second argument (issue [#48398](https://github.com/ClickHouse/ClickHouse/issues/48398)). [#77724](https://github.com/ClickHouse/ClickHouse/pull/77724) ([Manish Gill](https://github.com/mgill25)).
-* Hide credentials in RabbitMQ, Nats, Redis, AzureQueue table engines. [#77755](https://github.com/ClickHouse/ClickHouse/pull/77755) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix undefined behaviour on NaN comparison in ArgMin/ArgMax. [#77756](https://github.com/ClickHouse/ClickHouse/pull/77756) ([Raúl Marín](https://github.com/Algunenano)).
-* Regularly check if merges and mutations were cancelled even in case when the operation doesn't produce any blocks to write. [#77766](https://github.com/ClickHouse/ClickHouse/pull/77766) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
-* Reverted. [#77843](https://github.com/ClickHouse/ClickHouse/pull/77843) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* Fix possible crash when `NOT_FOUND_COLUMN_IN_BLOCK` error occurs. [#77854](https://github.com/ClickHouse/ClickHouse/pull/77854) ([Vladimir Cherkasov](https://github.com/vdimir)).
-* Fix crash that happens in the `StorageSystemObjectStorageQueueSettings` while filling data. [#77878](https://github.com/ClickHouse/ClickHouse/pull/77878) ([Bharat Nallan](https://github.com/bharatnc)).
-* Disable fuzzy search for history in SSH server (since it requires skim). [#78002](https://github.com/ClickHouse/ClickHouse/pull/78002) ([Azat Khuzhin](https://github.com/azat)).
-* Fixes a bug that a vector search query on a non-indexed column was returning incorrect results if there was another vector column in the table with a defined vector similarity index. (Issue [#77978](https://github.com/ClickHouse/ClickHouse/issues/77978)). [#78069](https://github.com/ClickHouse/ClickHouse/pull/78069) ([Shankar Iyer](https://github.com/shankar-iyer)).
-* Fix `The requested output format {} is binary... Do you want to output it anyway? [y/N]` prompt. [#78095](https://github.com/ClickHouse/ClickHouse/pull/78095) ([Azat Khuzhin](https://github.com/azat)).
-* Fix a bug in `toStartOfInterval` with a zero origin argument. [#78096](https://github.com/ClickHouse/ClickHouse/pull/78096) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Disallow specifying an empty `session_id` query parameter for HTTP interface. [#78098](https://github.com/ClickHouse/ClickHouse/pull/78098) ([Alexey Katsman](https://github.com/alexkats)).
-* Fix metadata override in Database Replicated which could have happened due to a RENAME query executed right after an ALTER query. [#78107](https://github.com/ClickHouse/ClickHouse/pull/78107) ([Nikolay Degterinsky](https://github.com/evillique)).
-* Fix crash in NATS engine. [#78108](https://github.com/ClickHouse/ClickHouse/pull/78108) ([Dmitry Novikov](https://github.com/dmitry-sles-novikov)).
-* Do not try to create a `history_file` in an embedded client for SSH. [#78112](https://github.com/ClickHouse/ClickHouse/pull/78112) ([Azat Khuzhin](https://github.com/azat)).
-* Fix system.detached_tables displaying incorrect information after RENAME DATABASE or DROP TABLE queries. [#78126](https://github.com/ClickHouse/ClickHouse/pull/78126) ([Nikolay Degterinsky](https://github.com/evillique)).
-* Fix for checks for too many tables with Database Replicated after https://github.com/ClickHouse/ClickHouse/pull/77274. Also, perform the check before creating the storage to avoid creating unaccounted nodes in ZooKeeper in the case of RMT or KeeperMap. [#78127](https://github.com/ClickHouse/ClickHouse/pull/78127) ([Nikolay Degterinsky](https://github.com/evillique)).
-* Fix possible crash due to concurrent S3Queue metadata initialization. [#78131](https://github.com/ClickHouse/ClickHouse/pull/78131) ([Azat Khuzhin](https://github.com/azat)).
-* `groupArray*` functions now produce a BAD_ARGUMENTS error for an Int-typed 0 value of the max_size argument, as is already done for the UInt one, instead of trying to execute with it. [#78140](https://github.com/ClickHouse/ClickHouse/pull/78140) ([Eduard Karacharov](https://github.com/korowa)).
-* Prevent crash on recoverLostReplica if the local table is removed before it's detached. [#78173](https://github.com/ClickHouse/ClickHouse/pull/78173) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix the "alterable" column in system.s3_queue_settings always returning `false`. [#78187](https://github.com/ClickHouse/ClickHouse/pull/78187) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Mask the Azure access signature so it is not visible to the user or in logs. [#78189](https://github.com/ClickHouse/ClickHouse/pull/78189) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix prefetch of substreams with prefixes in Wide parts. [#78205](https://github.com/ClickHouse/ClickHouse/pull/78205) ([Pavel Kruglov](https://github.com/Avogar)).
-* Fixed crashes / incorrect result for `mapFromArrays` in case of `LowCardinality(Nullable)` type of key array. [#78240](https://github.com/ClickHouse/ClickHouse/pull/78240) ([Eduard Karacharov](https://github.com/korowa)).
-* Fix delta-kernel auth options. [#78255](https://github.com/ClickHouse/ClickHouse/pull/78255) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Do not schedule a RefreshMV task if a replica's `disable_insertion_and_mutation` is true. Such a task performs an insertion, which would fail when `disable_insertion_and_mutation` is true. [#78277](https://github.com/ClickHouse/ClickHouse/pull/78277) ([Xu Jia](https://github.com/XuJia0210)).
-* Validate access to underlying tables for the Merge engine. [#78339](https://github.com/ClickHouse/ClickHouse/pull/78339) ([Pervakov Grigorii](https://github.com/GrigoryPervakov)).
-* Fix a bug where the FINAL modifier could be lost for a `Distributed` engine table. [#78428](https://github.com/ClickHouse/ClickHouse/pull/78428) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* `bitmapMin` returns `UINT32_MAX` when the bitmap is empty (`UINT64_MAX` when the input type is >= 8 bits), which matches the behavior of an empty `roaring_bitmap`'s `minimum()`. [#78444](https://github.com/ClickHouse/ClickHouse/pull/78444) ([wxybear](https://github.com/wxybear)).
-* Revert "Apply preserve_most attribute at some places in code" since it may lead to crashes. [#78449](https://github.com/ClickHouse/ClickHouse/pull/78449) ([Azat Khuzhin](https://github.com/azat)).
-* Use insertion columns for INFILE schema inference. [#78490](https://github.com/ClickHouse/ClickHouse/pull/78490) ([Pervakov Grigorii](https://github.com/GrigoryPervakov)).
-* Disable parallelizing query processing right after reading `FROM` when `distributed_aggregation_memory_efficient` is enabled, as it may lead to a logical error. Closes [#76934](https://github.com/ClickHouse/ClickHouse/issues/76934). [#78500](https://github.com/ClickHouse/ClickHouse/pull/78500) ([flynn](https://github.com/ucasfl)).
-* Set at least one stream for reading in case there are zero planned streams after applying `max_streams_to_max_threads_ratio` setting. [#78505](https://github.com/ClickHouse/ClickHouse/pull/78505) ([Eduard Karacharov](https://github.com/korowa)).
-* In storage S3Queue fix logical error "Cannot unregister: table uuid is not registered". Closes [#78285](https://github.com/ClickHouse/ClickHouse/issues/78285). [#78541](https://github.com/ClickHouse/ClickHouse/pull/78541) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* ClickHouse is now able to figure out its cgroup v2 on systems with both cgroups v1 and v2 enabled. [#78566](https://github.com/ClickHouse/ClickHouse/pull/78566) ([Grigory Korolev](https://github.com/gkorolev)).
-* ObjectStorage cluster table functions failed when used with table-level settings. [#78587](https://github.com/ClickHouse/ClickHouse/pull/78587) ([Daniil Ivanik](https://github.com/divanik)).
-* Better checks that transactions are not supported by ReplicatedMergeTree on `INSERT`s. [#78633](https://github.com/ClickHouse/ClickHouse/pull/78633) ([Azat Khuzhin](https://github.com/azat)).
-* Apply query settings during attachment. [#78637](https://github.com/ClickHouse/ClickHouse/pull/78637) ([Raúl Marín](https://github.com/Algunenano)).
-* Fixes a crash when an invalid path is specified in `iceberg_metadata_file_path`. [#78688](https://github.com/ClickHouse/ClickHouse/pull/78688) ([alesapin](https://github.com/alesapin)).
-* In DeltaLake table engine with delta-kernel implementation fix case when read schema is different from table schema and there are partition columns at the same time leading to not found column error. [#78690](https://github.com/ClickHouse/ClickHouse/pull/78690) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* This update corrects a bug where a new named session would inadvertently close at the scheduled time of a previous session if both sessions shared the same name and the new one was created before the old one's timeout expired. [#78698](https://github.com/ClickHouse/ClickHouse/pull/78698) ([Alexey Katsman](https://github.com/alexkats)).
-* Don't block table shutdown while running CHECK TABLE. [#78782](https://github.com/ClickHouse/ClickHouse/pull/78782) ([Raúl Marín](https://github.com/Algunenano)).
-* Keeper fix: fix ephemeral count in all cases. [#78799](https://github.com/ClickHouse/ClickHouse/pull/78799) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fix bad cast in `StorageDistributed` when using table functions other than `view()`. Closes [#78464](https://github.com/ClickHouse/ClickHouse/issues/78464). [#78828](https://github.com/ClickHouse/ClickHouse/pull/78828) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Fix formatting for `tupleElement(*, 1)`. Closes [#78639](https://github.com/ClickHouse/ClickHouse/issues/78639). [#78832](https://github.com/ClickHouse/ClickHouse/pull/78832) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Dictionaries of type `ssd_cache` now reject zero or negative `block_size` and `write_buffer_size` parameters (issue [#78314](https://github.com/ClickHouse/ClickHouse/issues/78314)). [#78854](https://github.com/ClickHouse/ClickHouse/pull/78854) ([Elmi Ahmadov](https://github.com/ahmadov)).
-* Fix crash in REFRESHABLE MV in case of ALTER after incorrect shutdown. [#78858](https://github.com/ClickHouse/ClickHouse/pull/78858) ([Azat Khuzhin](https://github.com/azat)).
-* Fix parsing of bad DateTime values in CSV format. [#78919](https://github.com/ClickHouse/ClickHouse/pull/78919) ([Pavel Kruglov](https://github.com/Avogar)).
-
-## Build/Testing/Packaging Improvement {#build-testing-packaging-improvement}
-
-* The internal dependency LLVM is bumped from 16 to 18. [#66053](https://github.com/ClickHouse/ClickHouse/pull/66053) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Restore deleted NATS integration tests and fix errors: fixed some race conditions in the NATS engine; fixed data loss when streaming data to NATS after a connection loss; fixed a freeze when receiving the last chunk of data after streaming from NATS ended; `nats_max_reconnect` is deprecated and has no effect, and reconnection is now performed permanently with the `nats_reconnect_wait` timeout. [#69772](https://github.com/ClickHouse/ClickHouse/pull/69772) ([Dmitry Novikov](https://github.com/dmitry-sles-novikov)).
-* Fix the issue that asm files of contrib openssl cannot be generated. [#72622](https://github.com/ClickHouse/ClickHouse/pull/72622) ([RinChanNOW](https://github.com/RinChanNOWWW)).
-* Fix stability for test 03210_variant_with_aggregate_function_type. [#74012](https://github.com/ClickHouse/ClickHouse/pull/74012) ([Anton Ivashkin](https://github.com/ianton-ru)).
-* Support build HDFS on both ARM and Intel Mac. [#74244](https://github.com/ClickHouse/ClickHouse/pull/74244) ([Yan Xin](https://github.com/yxheartipp)).
-* The universal installation script will propose installation even on macOS. [#74339](https://github.com/ClickHouse/ClickHouse/pull/74339) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Fix build when kerberos is not enabled. [#74771](https://github.com/ClickHouse/ClickHouse/pull/74771) ([flynn](https://github.com/ucasfl)).
-* Update to embedded LLVM 19. [#75148](https://github.com/ClickHouse/ClickHouse/pull/75148) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* *Potentially breaking*: Improvement to set even more restrictive defaults. The current defaults are already secure. The user has to specify an option to publish ports explicitly. But when the `default` user doesn’t have a password set by `CLICKHOUSE_PASSWORD` and/or a username changed by `CLICKHOUSE_USER` environment variables, it should be available only from the local system as an additional level of protection. [#75259](https://github.com/ClickHouse/ClickHouse/pull/75259) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
-* Integration tests have a 1-hour timeout for single batch of parallel tests running. When this timeout is reached `pytest` is killed without some logs. Internal pytest timeout is set to 55 minutes to print results from a session and not trigger external timeout signal. Closes [#75532](https://github.com/ClickHouse/ClickHouse/issues/75532). [#75533](https://github.com/ClickHouse/ClickHouse/pull/75533) ([Ilya Yatsishin](https://github.com/qoega)).
-* Make all clickhouse-server related actions a function, and execute them only when launching the default binary in `entrypoint.sh`. A long-postponed improvement was suggested in [#50724](https://github.com/ClickHouse/ClickHouse/issues/50724). Added switch `--users` to `clickhouse-extract-from-config` to get values from the `users.xml`. [#75643](https://github.com/ClickHouse/ClickHouse/pull/75643) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
-* For stress tests if server did not exit while we collected stacktraces via gdb additional wait time is added to make `Possible deadlock on shutdown (see gdb.log)` detection less noisy. It will only add delay for cases when test did not finish successfully. [#75668](https://github.com/ClickHouse/ClickHouse/pull/75668) ([Ilya Yatsishin](https://github.com/qoega)).
-* Restore deleted NATS integration tests and fix errors: fixed some race conditions in the NATS engine; fixed data loss when streaming data to NATS after a connection loss; fixed a freeze when receiving the last chunk of data after streaming from NATS ended; `nats_max_reconnect` is deprecated and has no effect, and reconnection is now performed permanently with the `nats_reconnect_wait` timeout. [#75850](https://github.com/ClickHouse/ClickHouse/pull/75850) ([Dmitry Novikov](https://github.com/dmitry-sles-novikov)).
-* Enable ICU and GRPC when cross-compiling for Darwin. [#75922](https://github.com/ClickHouse/ClickHouse/pull/75922) ([Raúl Marín](https://github.com/Algunenano)).
-* Fixing splitting test's output because of `sleep` during the process group killing. [#76090](https://github.com/ClickHouse/ClickHouse/pull/76090) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
-* Do not collect the `docker-compose` logs at the end of running since the script is often killed. Instead, collect them in the background. [#76140](https://github.com/ClickHouse/ClickHouse/pull/76140) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
-* Split tests for kafka storage into a few files. Fixes [#69452](https://github.com/ClickHouse/ClickHouse/issues/69452). [#76208](https://github.com/ClickHouse/ClickHouse/pull/76208) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
-* `clickhouse-odbc-bridge` and `clickhouse-library-bridge` are moved to a separate repository, https://github.com/ClickHouse/odbc-bridge/. [#76225](https://github.com/ClickHouse/ClickHouse/pull/76225) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Remove about 20MB of dead code from the binary. [#76226](https://github.com/ClickHouse/ClickHouse/pull/76226) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Raise minimum required CMake version to 3.25 due to `block()` introduction. [#76316](https://github.com/ClickHouse/ClickHouse/pull/76316) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Update fmt to 11.1.3. [#76547](https://github.com/ClickHouse/ClickHouse/pull/76547) ([Raúl Marín](https://github.com/Algunenano)).
-* Bump `lz4` to `1.10.0`. [#76571](https://github.com/ClickHouse/ClickHouse/pull/76571) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Bump `curl` to `8.12.1`. [#76572](https://github.com/ClickHouse/ClickHouse/pull/76572) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Bump `libcpuid` to `0.7.1`. [#76573](https://github.com/ClickHouse/ClickHouse/pull/76573) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Use a machine-readable format to parse pytest results. [#76910](https://github.com/ClickHouse/ClickHouse/pull/76910) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
-* Fix rust cross-compilation and allow disabling Rust completely. [#76921](https://github.com/ClickHouse/ClickHouse/pull/76921) ([Raúl Marín](https://github.com/Algunenano)).
-* Require clang 19 to build the project. [#76945](https://github.com/ClickHouse/ClickHouse/pull/76945) ([Raúl Marín](https://github.com/Algunenano)).
-* The test is executed for 10+ seconds in the serial mode. It's too long for fast tests. [#76948](https://github.com/ClickHouse/ClickHouse/pull/76948) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
-* Bump `sccache` to `0.10.0`. [#77580](https://github.com/ClickHouse/ClickHouse/pull/77580) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Respect CPU target features in rust and enable LTO in all crates. [#78590](https://github.com/ClickHouse/ClickHouse/pull/78590) ([Raúl Marín](https://github.com/Algunenano)).
-* Bump `minizip-ng` to `4.0.9`. [#78917](https://github.com/ClickHouse/ClickHouse/pull/78917) ([Konstantin Bogdanov](https://github.com/thevar1able)).
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/fast-release-24-2.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/fast-release-24-2.md
deleted file mode 100644
index 714f8a8f575..00000000000
--- a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/changelogs/fast-release-24-2.md
+++ /dev/null
@@ -1,241 +0,0 @@
----
-slug: /whats-new/changelog/24.2-fast-release
-title: 'v24.2 Changelog'
-description: 'Fast release changelog for v24.2'
-keywords: ['changelog']
-sidebar_label: 'v24.2'
----
-
-### ClickHouse release tag: 24.2.2.15987 {#clickhouse-release-tag-242215987}
-
-#### Backward Incompatible Change {#backward-incompatible-change}
-* Validate suspicious/experimental types in nested types. Previously we didn't validate such types (except JSON) in nested types like Array/Tuple/Map. [#59385](https://github.com/ClickHouse/ClickHouse/pull/59385) ([Kruglov Pavel](https://github.com/Avogar)).
-* The sort clause `ORDER BY ALL` (introduced with v23.12) is replaced by `ORDER BY *`. The previous syntax was too error-prone for tables with a column `all`. [#59450](https://github.com/ClickHouse/ClickHouse/pull/59450) ([Robert Schulze](https://github.com/rschu1ze)).
-* Add sanity check for number of threads and block sizes. [#60138](https://github.com/ClickHouse/ClickHouse/pull/60138) ([Raúl Marín](https://github.com/Algunenano)).
-* Reject incoming INSERT queries in case when query-level settings `async_insert` and `deduplicate_blocks_in_dependent_materialized_views` are enabled together. This behaviour is controlled by a setting `throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert` and enabled by default. This is a continuation of https://github.com/ClickHouse/ClickHouse/pull/59699 needed to unblock https://github.com/ClickHouse/ClickHouse/pull/59915. [#60888](https://github.com/ClickHouse/ClickHouse/pull/60888) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Utility `clickhouse-copier` is moved to a separate repository on GitHub: https://github.com/ClickHouse/copier. It is no longer included in the bundle but is still available as a separate download. This closes: [#60734](https://github.com/ClickHouse/ClickHouse/issues/60734) This closes: [#60540](https://github.com/ClickHouse/ClickHouse/issues/60540) This closes: [#60250](https://github.com/ClickHouse/ClickHouse/issues/60250) This closes: [#52917](https://github.com/ClickHouse/ClickHouse/issues/52917) This closes: [#51140](https://github.com/ClickHouse/ClickHouse/issues/51140) This closes: [#47517](https://github.com/ClickHouse/ClickHouse/issues/47517) This closes: [#47189](https://github.com/ClickHouse/ClickHouse/issues/47189) This closes: [#46598](https://github.com/ClickHouse/ClickHouse/issues/46598) This closes: [#40257](https://github.com/ClickHouse/ClickHouse/issues/40257) This closes: [#36504](https://github.com/ClickHouse/ClickHouse/issues/36504) This closes: [#35485](https://github.com/ClickHouse/ClickHouse/issues/35485) This closes: [#33702](https://github.com/ClickHouse/ClickHouse/issues/33702) This closes: [#26702](https://github.com/ClickHouse/ClickHouse/issues/26702) ### Documentation entry for user-facing changes. [#61058](https://github.com/ClickHouse/ClickHouse/pull/61058) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* To increase compatibility with MySQL, function `locate` now accepts arguments `(needle, haystack[, start_pos])` by default. The previous behavior `(haystack, needle, [, start_pos])` can be restored by setting `function_locate_has_mysql_compatible_argument_order = 0`. [#61092](https://github.com/ClickHouse/ClickHouse/pull/61092) ([Robert Schulze](https://github.com/rschu1ze)).
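-
-As a sketch of the new default argument order (the sample strings are illustrative, not taken from the PR):
-
-```sql
--- MySQL-compatible order: locate(needle, haystack[, start_pos])
-SELECT locate('bar', 'foobar');  -- 4
--- restore the previous (haystack, needle[, start_pos]) order:
-SET function_locate_has_mysql_compatible_argument_order = 0;
-```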
-* The obsolete in-memory data parts have been deprecated since version 23.5 and have not been supported since version 23.10. Now the remaining code is removed. Continuation of [#55186](https://github.com/ClickHouse/ClickHouse/issues/55186) and [#45409](https://github.com/ClickHouse/ClickHouse/issues/45409). It is unlikely that you have used in-memory data parts because they were available only before version 23.5 and only when you enabled them manually by specifying the corresponding SETTINGS for a MergeTree table. To check if you have in-memory data parts, run the following query: `SELECT part_type, count() FROM system.parts GROUP BY part_type ORDER BY part_type`. To disable the usage of in-memory data parts, do `ALTER TABLE ... MODIFY SETTING min_bytes_for_compact_part = DEFAULT, min_rows_for_compact_part = DEFAULT`. Before upgrading from old ClickHouse releases, first check that you don't have in-memory data parts. If there are in-memory data parts, disable them first, then wait while there are no in-memory data parts and continue the upgrade. [#61127](https://github.com/ClickHouse/ClickHouse/pull/61127) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Forbid `SimpleAggregateFunction` in `ORDER BY` of `MergeTree` tables (like `AggregateFunction` is forbidden, but they are forbidden because they are not comparable) by default (use `allow_suspicious_primary_key` to allow them). [#61399](https://github.com/ClickHouse/ClickHouse/pull/61399) ([Azat Khuzhin](https://github.com/azat)).
-* ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. This is controlled by the settings, `output_format_parquet_string_as_string`, `output_format_orc_string_as_string`, `output_format_arrow_string_as_string`. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases. Parquet/ORC/Arrow supports many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools lack support for the faster `lz4` compression method, that's why we set `zstd` by default. This is controlled by the settings `output_format_parquet_compression_method`, `output_format_orc_compression_method`, and `output_format_arrow_compression_method`. We changed the default to `zstd` for Parquet and ORC, but not Arrow (it is emphasized for low-level usages). [#61817](https://github.com/ClickHouse/ClickHouse/pull/61817) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Fix for the materialized view security issue, which allowed a user to insert into a table without required grants for that. Fix validates that the user has permission to insert not only into a materialized view but also into all underlying tables. This means that some queries, which worked before, now can fail with Not enough privileges. To address this problem, the release introduces a new feature of SQL security for views [https://clickhouse.com/docs/sql-reference/statements/create/view#sql_security](/sql-reference/statements/create/view#sql_security). [#54901](https://github.com/ClickHouse/ClickHouse/pull/54901) ([pufit](https://github.com/pufit))
-
-#### New Feature {#new-feature}
-* `topK`/`topKWeighted` support a mode which returns the count of values and its error. [#54508](https://github.com/ClickHouse/ClickHouse/pull/54508) ([UnamedRus](https://github.com/UnamedRus)).
-* Added new syntax which allows to specify definer user in View/Materialized View. This allows to execute selects/inserts from views without explicit grants for underlying tables. [#54901](https://github.com/ClickHouse/ClickHouse/pull/54901) ([pufit](https://github.com/pufit)).
-* Implemented automatic conversion of merge tree tables of different kinds to the replicated engine. Create an empty `convert_to_replicated` file in the table's data directory (`/clickhouse/store/xxx/xxxyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy/`) and the table will be converted automatically on the next server start. [#57798](https://github.com/ClickHouse/ClickHouse/pull/57798) ([Kirill](https://github.com/kirillgarbar)).
-* Added table function `mergeTreeIndex`. It represents the contents of index and marks files of `MergeTree` tables. It can be used for introspection. Syntax: `mergeTreeIndex(database, table, [with_marks = true])` where `database.table` is an existing table with `MergeTree` engine. [#58140](https://github.com/ClickHouse/ClickHouse/pull/58140) ([Anton Popov](https://github.com/CurtizJ)).
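-
-For illustration, the documented syntax can be used like this (`tbl` is a placeholder for an existing `MergeTree` table):
-
-```sql
--- introspect the primary index and marks files of a MergeTree table
-SELECT * FROM mergeTreeIndex(currentDatabase(), 'tbl', with_marks = true);
-```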
-* Try to detect file format automatically during schema inference if it's unknown in `file/s3/hdfs/url/azureBlobStorage` engines. Closes [#50576](https://github.com/ClickHouse/ClickHouse/issues/50576). [#59092](https://github.com/ClickHouse/ClickHouse/pull/59092) ([Kruglov Pavel](https://github.com/Avogar)).
-* Add generate_series as a table function. This function generates table with an arithmetic progression with natural numbers. [#59390](https://github.com/ClickHouse/ClickHouse/pull/59390) ([divanik](https://github.com/divanik)).
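-
-A minimal sketch of the new table function (an inclusive arithmetic progression):
-
-```sql
-SELECT * FROM generate_series(0, 10, 2);  -- 0, 2, 4, 6, 8, 10
-```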
-* Added query `ALTER TABLE table FORGET PARTITION partition` that removes ZooKeeper nodes, related to an empty partition. [#59507](https://github.com/ClickHouse/ClickHouse/pull/59507) ([Sergei Trifonov](https://github.com/serxa)).
-* Support reading and writing backups as tar archives. [#59535](https://github.com/ClickHouse/ClickHouse/pull/59535) ([josh-hildred](https://github.com/josh-hildred)).
-* Provides new aggregate function 'groupArrayIntersect'. Follows up: [#49862](https://github.com/ClickHouse/ClickHouse/issues/49862). [#59598](https://github.com/ClickHouse/ClickHouse/pull/59598) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Implemented system.dns_cache table, which can be useful for debugging DNS issues. [#59856](https://github.com/ClickHouse/ClickHouse/pull/59856) ([Kirill Nikiforov](https://github.com/allmazz)).
-* Implemented support for S3Express buckets. [#59965](https://github.com/ClickHouse/ClickHouse/pull/59965) ([Nikita Taranov](https://github.com/nickitat)).
-* The codec `LZ4HC` will accept a new level 2, which is faster than the previous minimum level 3, at the expense of less compression. In previous versions, `LZ4HC(2)` and less was the same as `LZ4HC(3)`. Author: [Cyan4973](https://github.com/Cyan4973). [#60090](https://github.com/ClickHouse/ClickHouse/pull/60090) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Implemented system.dns_cache table, which can be useful for debugging DNS issues. New server setting dns_cache_max_size. [#60257](https://github.com/ClickHouse/ClickHouse/pull/60257) ([Kirill Nikiforov](https://github.com/allmazz)).
-* Added function `toMillisecond` which returns the millisecond component for values of type `DateTime` or `DateTime64`. [#60281](https://github.com/ClickHouse/ClickHouse/pull/60281) ([Shaun Struwig](https://github.com/Blargian)).
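-
-A quick sketch of the new function (the literal is illustrative):
-
-```sql
-SELECT toMillisecond(toDateTime64('2024-02-28 12:00:00.456', 3));  -- 456
-```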
-* Support single-argument version for the merge table function, as `merge(['db_name', ] 'tables_regexp')`. [#60372](https://github.com/ClickHouse/ClickHouse/pull/60372) ([豪肥肥](https://github.com/HowePa)).
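-
-With the single-argument form, the database defaults to the current one (the regexp below is a placeholder):
-
-```sql
-SELECT count() FROM merge('^hits');
-```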
-* Make all format names case insensitive, like Tsv, or TSV, or tsv, or even rowbinary. [#60420](https://github.com/ClickHouse/ClickHouse/pull/60420) ([豪肥肥](https://github.com/HowePa)).
-* Added new syntax which allows to specify definer user in View/Materialized View. This allows to execute selects/inserts from views without explicit grants for underlying tables. [#60439](https://github.com/ClickHouse/ClickHouse/pull/60439) ([pufit](https://github.com/pufit)).
-* Add four properties to the `StorageMemory` (memory engine): `min_bytes_to_keep`, `max_bytes_to_keep`, `min_rows_to_keep`, and `max_rows_to_keep`. Add tests to reflect the new changes, update the `memory.md` documentation, and add a table `context` property to `MemorySink` to enable access to table parameter bounds. [#60612](https://github.com/ClickHouse/ClickHouse/pull/60612) ([Jake Bamrah](https://github.com/JakeBamrah)).
-* Added function `toMillisecond` which returns the millisecond component for values of type `DateTime` or `DateTime64`. [#60649](https://github.com/ClickHouse/ClickHouse/pull/60649) ([Robert Schulze](https://github.com/rschu1ze)).
-* Separate limits on number of waiting and executing queries. Added new server setting `max_waiting_queries` that limits the number of queries waiting due to `async_load_databases`. Existing limits on number of executing queries no longer count waiting queries. [#61053](https://github.com/ClickHouse/ClickHouse/pull/61053) ([Sergei Trifonov](https://github.com/serxa)).
-* Add support for `ATTACH PARTITION ALL`. [#61107](https://github.com/ClickHouse/ClickHouse/pull/61107) ([Kirill Nikiforov](https://github.com/allmazz)).
-
-#### Performance Improvement {#performance-improvement}
-* Eliminates min/max/any/anyLast aggregators of GROUP BY keys in SELECT section. [#52230](https://github.com/ClickHouse/ClickHouse/pull/52230) ([JackyWoo](https://github.com/JackyWoo)).
-* Improve the performance of serialized aggregation method when involving multiple [nullable] columns. This is a general version of [#51399](https://github.com/ClickHouse/ClickHouse/issues/51399) that doesn't compromise on abstraction integrity. [#55809](https://github.com/ClickHouse/ClickHouse/pull/55809) ([Amos Bird](https://github.com/amosbird)).
-* Lazy build join output to improve performance of ALL join. [#58278](https://github.com/ClickHouse/ClickHouse/pull/58278) ([LiuNeng](https://github.com/liuneng1994)).
-* Improvements to aggregate functions `argMin`/`argMax`/`any`/`anyLast`/`anyHeavy`, as well as `ORDER BY {u8/u16/u32/u64/i8/i16/i32/i64} LIMIT 1` queries. [#58640](https://github.com/ClickHouse/ClickHouse/pull/58640) ([Raúl Marín](https://github.com/Algunenano)).
-* Optimize performance of sum/avg conditionally for bigint and big decimal types by reducing branch miss. [#59504](https://github.com/ClickHouse/ClickHouse/pull/59504) ([李扬](https://github.com/taiyang-li)).
-* Improve performance of SELECTs with active mutations. [#59531](https://github.com/ClickHouse/ClickHouse/pull/59531) ([Azat Khuzhin](https://github.com/azat)).
-* Trivial optimization of column filtering: avoid filtering columns whose underlying data type is not a number with `result_size_hint = -1`. Peak memory can be reduced to 44% of the original in some cases. [#59698](https://github.com/ClickHouse/ClickHouse/pull/59698) ([李扬](https://github.com/taiyang-li)).
-* Primary key will use less amount of memory. [#60049](https://github.com/ClickHouse/ClickHouse/pull/60049) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Improve memory usage for primary key and some other operations. [#60050](https://github.com/ClickHouse/ClickHouse/pull/60050) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* The tables' primary keys will be loaded in memory lazily on first access. This is controlled by the new MergeTree setting `primary_key_lazy_load`, which is on by default. This provides several advantages: - it will not be loaded for tables that are not used; - if there is not enough memory, an exception will be thrown on first use instead of at server startup. This provides several disadvantages: - the latency of loading the primary key will be paid on the first query rather than before accepting connections; this theoretically may introduce a thundering-herd problem. This closes [#11188](https://github.com/ClickHouse/ClickHouse/issues/11188). [#60093](https://github.com/ClickHouse/ClickHouse/pull/60093) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Vectorized function `dotProduct` which is useful for vector search. [#60202](https://github.com/ClickHouse/ClickHouse/pull/60202) ([Robert Schulze](https://github.com/rschu1ze)).
-* If the table's primary key contains mostly useless columns, don't keep them in memory. This is controlled by a new setting `primary_key_ratio_of_unique_prefix_values_to_skip_suffix_columns` with the value `0.9` by default, which means: for a composite primary key, if a column changes its value for at least 0.9 of all the times, the next columns after it will be not loaded. [#60255](https://github.com/ClickHouse/ClickHouse/pull/60255) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Execute the `multiIf` function columnarly when the result type's underlying type is a number. [#60384](https://github.com/ClickHouse/ClickHouse/pull/60384) ([李扬](https://github.com/taiyang-li)).
-* The replacement of `&&` with `&` can allow the compiler to generate SIMD code. [#60498](https://github.com/ClickHouse/ClickHouse/pull/60498) ([Zhiguo Zhou](https://github.com/ZhiguoZh)).
-* Faster (almost 2x) mutexes (was slower due to ThreadFuzzer). [#60823](https://github.com/ClickHouse/ClickHouse/pull/60823) ([Azat Khuzhin](https://github.com/azat)).
-* Move connection drain from prepare to work, and drain multiple connections in parallel. [#60845](https://github.com/ClickHouse/ClickHouse/pull/60845) ([lizhuoyu5](https://github.com/lzydmxy)).
-* Optimize insertManyFrom of nullable number or nullable string. [#60846](https://github.com/ClickHouse/ClickHouse/pull/60846) ([李扬](https://github.com/taiyang-li)).
-* Optimized function `dotProduct` to omit unnecessary and expensive memory copies. [#60928](https://github.com/ClickHouse/ClickHouse/pull/60928) ([Robert Schulze](https://github.com/rschu1ze)).
-* Operations with the filesystem cache will suffer less from the lock contention. [#61066](https://github.com/ClickHouse/ClickHouse/pull/61066) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Optimize ColumnString::replicate and prevent memcpySmallAllowReadWriteOverflow15Impl from being optimized to built-in memcpy. Close [#61074](https://github.com/ClickHouse/ClickHouse/issues/61074). ColumnString::replicate speeds up by 2.46x on x86-64. [#61075](https://github.com/ClickHouse/ClickHouse/pull/61075) ([李扬](https://github.com/taiyang-li)).
-* 30x faster printing for 256-bit integers. [#61100](https://github.com/ClickHouse/ClickHouse/pull/61100) ([Raúl Marín](https://github.com/Algunenano)).
-* If a query with a syntax error contained COLUMNS matcher with a regular expression, the regular expression was compiled each time during the parser's backtracking, instead of being compiled once. This was a fundamental error. The compiled regexp was put to AST. But the letter A in AST means "abstract" which means it should not contain heavyweight objects. Parts of AST can be created and discarded during parsing, including a large number of backtracking. This leads to slowness on the parsing side and consequently allows DoS by a readonly user. But the main problem is that it prevents progress in fuzzers. [#61543](https://github.com/ClickHouse/ClickHouse/pull/61543) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-
-#### Improvement {#improvement}
-* While running the MODIFY COLUMN query for materialized views, check the inner table's structure to ensure every column exists. [#47427](https://github.com/ClickHouse/ClickHouse/pull/47427) ([sunny](https://github.com/sunny19930321)).
-* Added table `system.keywords` which contains all the keywords from parser. Mostly needed and will be used for better fuzzing and syntax highlighting. [#51808](https://github.com/ClickHouse/ClickHouse/pull/51808) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Added support for parameterized views with the analyzer: the CREATE query of a parameterized view is no longer analyzed. Refactored the existing parameterized view logic accordingly. [#54211](https://github.com/ClickHouse/ClickHouse/pull/54211) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
-* Ordinary database engine is deprecated. You will receive a warning in clickhouse-client if your server is using it. This closes [#52229](https://github.com/ClickHouse/ClickHouse/issues/52229). [#56942](https://github.com/ClickHouse/ClickHouse/pull/56942) ([shabroo](https://github.com/shabroo)).
-* All zero copy locks related to a table have to be dropped when the table is dropped. The directory which contains these locks has to be removed also. [#57575](https://github.com/ClickHouse/ClickHouse/pull/57575) ([Sema Checherinda](https://github.com/CheSema)).
-* Add short-circuit ability for `dictGetOrDefault` function. Closes [#52098](https://github.com/ClickHouse/ClickHouse/issues/52098). [#57767](https://github.com/ClickHouse/ClickHouse/pull/57767) ([jsc0218](https://github.com/jsc0218)).
-* Allow declaring enum in external table structure. [#57857](https://github.com/ClickHouse/ClickHouse/pull/57857) ([Duc Canh Le](https://github.com/canhld94)).
-* Running `ALTER COLUMN MATERIALIZE` on a column with `DEFAULT` or `MATERIALIZED` expression now writes the correct values: The default value for existing parts with default value or the non-default value for existing parts with non-default value. Previously, the default value was written for all existing parts. [#58023](https://github.com/ClickHouse/ClickHouse/pull/58023) ([Duc Canh Le](https://github.com/canhld94)).
-* Enabled backoff logic (e.g. exponential), which reduces CPU usage, memory usage, and log file sizes. [#58036](https://github.com/ClickHouse/ClickHouse/pull/58036) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
-* Consider lightweight deleted rows when selecting parts to merge. [#58223](https://github.com/ClickHouse/ClickHouse/pull/58223) ([Zhuo Qiu](https://github.com/jewelzqiu)).
-* Allow to define `volume_priority` in `storage_configuration`. [#58533](https://github.com/ClickHouse/ClickHouse/pull/58533) ([Andrey Zvonov](https://github.com/zvonand)).
-* Add support for Date32 type in T64 codec. [#58738](https://github.com/ClickHouse/ClickHouse/pull/58738) ([Hongbin Ma](https://github.com/binmahone)).
-* HTTP/HTTPS connections are now reusable in all use cases, even when the response is 3xx or 4xx. [#58845](https://github.com/ClickHouse/ClickHouse/pull/58845) ([Sema Checherinda](https://github.com/CheSema)).
-* Added comments for columns for more system tables. Continuation of https://github.com/ClickHouse/ClickHouse/pull/58356. [#59016](https://github.com/ClickHouse/ClickHouse/pull/59016) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Now we can use virtual columns in PREWHERE. It's worthwhile for non-const virtual columns like `_part_offset`. [#59033](https://github.com/ClickHouse/ClickHouse/pull/59033) ([Amos Bird](https://github.com/amosbird)).
-* Settings for the Distributed table engine can now be specified in the server configuration file, similar to MergeTree settings. [#59291](https://github.com/ClickHouse/ClickHouse/pull/59291) ([Azat Khuzhin](https://github.com/azat)).
-* Keeper improvement: cache only a certain amount of logs in-memory controlled by `latest_logs_cache_size_threshold` and `commit_logs_cache_size_threshold`. [#59460](https://github.com/ClickHouse/ClickHouse/pull/59460) ([Antonio Andelic](https://github.com/antonio2368)).
-* Instead of using a constant key, object storage now generates a key for determining the remove-objects capability. [#59495](https://github.com/ClickHouse/ClickHouse/pull/59495) ([Sema Checherinda](https://github.com/CheSema)).
-* Don't infer floats in exponential notation by default. Add a setting `input_format_try_infer_exponent_floats` that will restore previous behaviour (disabled by default). Closes [#59476](https://github.com/ClickHouse/ClickHouse/issues/59476). [#59500](https://github.com/ClickHouse/ClickHouse/pull/59500) ([Kruglov Pavel](https://github.com/Avogar)).
-* Allow alter operations to be surrounded by parentheses. The emission of parentheses can be controlled by the `format_alter_operations_with_parentheses` config. By default in formatted queries the parentheses are emitted as we store the formatted alter operations in some places as metadata (e.g.: mutations). The new syntax clarifies some of the queries where alter operations end in a list. E.g.: `ALTER TABLE x MODIFY TTL date GROUP BY a, b, DROP COLUMN c` cannot be parsed properly with the old syntax. In the new syntax the query `ALTER TABLE x (MODIFY TTL date GROUP BY a, b), (DROP COLUMN c)` is obvious. Older versions are not able to read the new syntax, therefore using the new syntax might cause issues if newer and older versions of ClickHouse are mixed in a single cluster. [#59532](https://github.com/ClickHouse/ClickHouse/pull/59532) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
-* Bumped Intel QPL (used by codec `DEFLATE_QPL`) from v1.3.1 to v1.4.0. Also fixed a bug in the polling timeout mechanism: we observed that in some cases the timeout did not work properly; if a timeout happens, IAA and the CPU may process the buffer concurrently. To be safe, we make sure the IAA codec status is not QPL_STS_BEING_PROCESSED before falling back to the SW codec. [#59551](https://github.com/ClickHouse/ClickHouse/pull/59551) ([jasperzhu](https://github.com/jinjunzh)).
-* Add positional pread in libhdfs3. If you want to call positional read in libhdfs3, use the hdfsPread function in hdfs.h as follows. `tSize hdfsPread(hdfsFS fs, hdfsFile file, void * buffer, tSize length, tOffset position);`. [#59624](https://github.com/ClickHouse/ClickHouse/pull/59624) ([M1eyu](https://github.com/M1eyu2018)).
-* Check for stack overflow in parsers even if the user misconfigured the `max_parser_depth` setting to a very high value. This closes [#59622](https://github.com/ClickHouse/ClickHouse/issues/59622). [#59697](https://github.com/ClickHouse/ClickHouse/pull/59697) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Unify xml and sql created named collection behaviour in kafka storage. [#59710](https://github.com/ClickHouse/ClickHouse/pull/59710) ([Pervakov Grigorii](https://github.com/GrigoryPervakov)).
-* Allow uuid in replica_path if CREATE TABLE explicitly has it. [#59908](https://github.com/ClickHouse/ClickHouse/pull/59908) ([Azat Khuzhin](https://github.com/azat)).
-* Add column `metadata_version` of ReplicatedMergeTree table in `system.tables` system table. [#59942](https://github.com/ClickHouse/ClickHouse/pull/59942) ([Maksim Kita](https://github.com/kitaisreal)).
-* Keeper improvement: add retries on failures for Disk related operations. [#59980](https://github.com/ClickHouse/ClickHouse/pull/59980) ([Antonio Andelic](https://github.com/antonio2368)).
-* Add new config setting `backups.remove_backup_files_after_failure`. [#60002](https://github.com/ClickHouse/ClickHouse/pull/60002) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Use multiple threads while reading the metadata of tables from a backup while executing the RESTORE command. [#60040](https://github.com/ClickHouse/ClickHouse/pull/60040) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Now if `StorageBuffer` has more than 1 shard (`num_layers` > 1) background flush will happen simultaneously for all shards in multiple threads. [#60111](https://github.com/ClickHouse/ClickHouse/pull/60111) ([alesapin](https://github.com/alesapin)).
-* Support specifying users for specific S3 settings in config using `user` key. [#60144](https://github.com/ClickHouse/ClickHouse/pull/60144) ([Antonio Andelic](https://github.com/antonio2368)).
-* Copy S3 file GCP fallback to buffer copy in case GCP returned `Internal Error` with `GATEWAY_TIMEOUT` HTTP error code. [#60164](https://github.com/ClickHouse/ClickHouse/pull/60164) ([Maksim Kita](https://github.com/kitaisreal)).
-* Allow "local" as object storage type instead of "local_blob_storage". [#60165](https://github.com/ClickHouse/ClickHouse/pull/60165) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Implement a comparison operator for Variant values and proper Field insertion into a Variant column. Don't allow creating a `Variant` type with similar variant types by default (allowed under the setting `allow_suspicious_variant_types`). Closes [#59996](https://github.com/ClickHouse/ClickHouse/issues/59996). Closes [#59850](https://github.com/ClickHouse/ClickHouse/issues/59850). [#60198](https://github.com/ClickHouse/ClickHouse/pull/60198) ([Kruglov Pavel](https://github.com/Avogar)).
-* Improved overall usability of virtual columns. Now it is allowed to use virtual columns in `PREWHERE` (it's worthwhile for non-const virtual columns like `_part_offset`). Built-in documentation is now available for virtual columns as a column comment in the `DESCRIBE` query with the setting `describe_include_virtual_columns` enabled. [#60205](https://github.com/ClickHouse/ClickHouse/pull/60205) ([Anton Popov](https://github.com/CurtizJ)).
-* Short circuit execution for `ULIDStringToDateTime`. [#60211](https://github.com/ClickHouse/ClickHouse/pull/60211) ([Juan Madurga](https://github.com/jlmadurga)).
-* Added `query_id` column for tables `system.backups` and `system.backup_log`. Added error stacktrace to `error` column. [#60220](https://github.com/ClickHouse/ClickHouse/pull/60220) ([Maksim Kita](https://github.com/kitaisreal)).
-* Parallel flush of pending INSERT blocks of the Distributed engine on `DETACH`/server shutdown and `SYSTEM FLUSH DISTRIBUTED` (parallelism works only if the table has a multi-disk policy, like everything in the Distributed engine right now). [#60225](https://github.com/ClickHouse/ClickHouse/pull/60225) ([Azat Khuzhin](https://github.com/azat)).
-* Fix an improper filter setting in `joinRightColumnsSwitchNullability`. Resolves [#59625](https://github.com/ClickHouse/ClickHouse/issues/59625). [#60259](https://github.com/ClickHouse/ClickHouse/pull/60259) ([lgbo](https://github.com/lgbo-ustc)).
-* Add a setting to force read-through cache for merges. [#60308](https://github.com/ClickHouse/ClickHouse/pull/60308) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Issue [#57598](https://github.com/ClickHouse/ClickHouse/issues/57598) mentions a divergent behaviour regarding transaction handling: a COMMIT/ROLLBACK issued when no transaction is active was reported as an error, contrary to MySQL behaviour. [#60338](https://github.com/ClickHouse/ClickHouse/pull/60338) ([PapaToemmsn](https://github.com/PapaToemmsn)).
-* Added `none_only_active` mode for `distributed_ddl_output_mode` setting. [#60340](https://github.com/ClickHouse/ClickHouse/pull/60340) ([Alexander Tokmakov](https://github.com/tavplubix)).
-* Connections through the MySQL port now automatically run with setting `prefer_column_name_to_alias = 1` to support QuickSight out-of-the-box. Also, settings `mysql_map_string_to_text_in_show_columns` and `mysql_map_fixed_string_to_text_in_show_columns` are now enabled by default, affecting also only MySQL connections. This increases compatibility with more BI tools. [#60365](https://github.com/ClickHouse/ClickHouse/pull/60365) ([Robert Schulze](https://github.com/rschu1ze)).
-* When the output format is a Pretty format and a block consists of a single numeric value which exceeds one million, a readable number is printed to the right of the table, e.g. ``` ┌──────count()─┐ │ 233765663884 │ -- 233.77 billion └──────────────┘ ```. [#60379](https://github.com/ClickHouse/ClickHouse/pull/60379) ([rogeryk](https://github.com/rogeryk)).
-* Allow configuring HTTP redirect handlers for clickhouse-server. For example, you can make `/` redirect to the Play UI. [#60390](https://github.com/ClickHouse/ClickHouse/pull/60390) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* The advanced dashboard has slightly better colors for multi-line graphs. [#60391](https://github.com/ClickHouse/ClickHouse/pull/60391) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Fix a race condition in JavaScript code leading to duplicate charts on top of each other. [#60392](https://github.com/ClickHouse/ClickHouse/pull/60392) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Check for stack overflow in parsers even if the user misconfigured the `max_parser_depth` setting to a very high value. This closes [#59622](https://github.com/ClickHouse/ClickHouse/issues/59622). [#60434](https://github.com/ClickHouse/ClickHouse/pull/60434) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Function `substring` now has a new alias `byteSlice`. [#60494](https://github.com/ClickHouse/ClickHouse/pull/60494) ([Robert Schulze](https://github.com/rschu1ze)).
-* Renamed server setting `dns_cache_max_size` to `dns_cache_max_entries` to reduce ambiguity. [#60500](https://github.com/ClickHouse/ClickHouse/pull/60500) ([Kirill Nikiforov](https://github.com/allmazz)).
-* `SHOW INDEX | INDEXES | INDICES | KEYS` no longer sorts by the primary key columns (which was unintuitive). [#60514](https://github.com/ClickHouse/ClickHouse/pull/60514) ([Robert Schulze](https://github.com/rschu1ze)).
-* Keeper improvement: abort during startup if an invalid snapshot is detected to avoid data loss. [#60537](https://github.com/ClickHouse/ClickHouse/pull/60537) ([Antonio Andelic](https://github.com/antonio2368)).
-* Added fault injection for MergeTree's splitting of read ranges into intersecting and non-intersecting parts, controlled by the `merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_fault_probability` setting. [#60548](https://github.com/ClickHouse/ClickHouse/pull/60548) ([Maksim Kita](https://github.com/kitaisreal)).
-* The Advanced dashboard now has controls always visible on scrolling. This allows you to add a new chart without scrolling up. [#60692](https://github.com/ClickHouse/ClickHouse/pull/60692) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* String types and Enums can be used in the same context, such as: arrays, UNION queries, conditional expressions. This closes [#60726](https://github.com/ClickHouse/ClickHouse/issues/60726). [#60727](https://github.com/ClickHouse/ClickHouse/pull/60727) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Update tzdata to 2024a. [#60768](https://github.com/ClickHouse/ClickHouse/pull/60768) ([Raúl Marín](https://github.com/Algunenano)).
-* Support files without format extension in Filesystem database. [#60795](https://github.com/ClickHouse/ClickHouse/pull/60795) ([Kruglov Pavel](https://github.com/Avogar)).
-* Keeper improvement: support `leadership_expiry_ms` in Keeper's settings. [#60806](https://github.com/ClickHouse/ClickHouse/pull/60806) ([Brokenice0415](https://github.com/Brokenice0415)).
-* Always infer exponential numbers in JSON formats regardless of the setting `input_format_try_infer_exponent_floats`. Add setting `input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects` that allows using the String type for ambiguous paths instead of throwing an exception during named Tuple inference from JSON objects. [#60808](https://github.com/ClickHouse/ClickHouse/pull/60808) ([Kruglov Pavel](https://github.com/Avogar)).
-* Add a flag for SMJ (sort-merge join) to treat NULL as the biggest/smallest value, so the behavior can be compatible with other SQL systems, like Apache Spark. [#60896](https://github.com/ClickHouse/ClickHouse/pull/60896) ([loudongfeng](https://github.com/loudongfeng)).
-* The ClickHouse version has been added to Docker labels. Closes [#54224](https://github.com/ClickHouse/ClickHouse/issues/54224). [#60949](https://github.com/ClickHouse/ClickHouse/pull/60949) ([Nikolay Monkov](https://github.com/nikmonkov)).
-* Add a setting `parallel_replicas_allow_in_with_subquery = 1` which allows subqueries for IN work with parallel replicas. [#60950](https://github.com/ClickHouse/ClickHouse/pull/60950) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* DNSResolver now shuffles the set of resolved IPs. [#60965](https://github.com/ClickHouse/ClickHouse/pull/60965) ([Sema Checherinda](https://github.com/CheSema)).
-* Support detecting the output format by file extension in `clickhouse-client` and `clickhouse-local`. [#61036](https://github.com/ClickHouse/ClickHouse/pull/61036) ([豪肥肥](https://github.com/HowePa)).
-* Check memory limit update periodically. [#61049](https://github.com/ClickHouse/ClickHouse/pull/61049) ([Han Fei](https://github.com/hanfei1991)).
-* Enable processors profiling (time spent/in and out bytes for sorting, aggregation, ...) by default. [#61096](https://github.com/ClickHouse/ClickHouse/pull/61096) ([Azat Khuzhin](https://github.com/azat)).
-* Add the function `toUInt128OrZero`, which was missed by mistake (the mistake is related to https://github.com/ClickHouse/ClickHouse/pull/945). The compatibility aliases `FROM_UNIXTIME` and `DATE_FORMAT` (they are not ClickHouse-native and only exist for MySQL compatibility) have been made case insensitive, as expected for SQL-compatibility aliases. [#61114](https://github.com/ClickHouse/ClickHouse/pull/61114) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Improvements to the access checks, allowing revocation of unpossessed rights when the target user does not have the grants being revoked either. Example: ```sql GRANT SELECT ON *.* TO user1; REVOKE SELECT ON system.* FROM user1; ```. [#61115](https://github.com/ClickHouse/ClickHouse/pull/61115) ([pufit](https://github.com/pufit)).
-* Fix an error in a previous optimization (https://github.com/ClickHouse/ClickHouse/pull/59698): remove a `break` to make sure the first filtered column has the minimum size. [#61145](https://github.com/ClickHouse/ClickHouse/pull/61145) ([李扬](https://github.com/taiyang-li)).
-* Fix `has()` function with `Nullable` column (fixes [#60214](https://github.com/ClickHouse/ClickHouse/issues/60214)). [#61249](https://github.com/ClickHouse/ClickHouse/pull/61249) ([Mikhail Koviazin](https://github.com/mkmkme)).
-* Now it's possible to specify the attribute `merge="true"` in config substitutions for subtrees. If this attribute is specified, ClickHouse merges the subtree with the existing configuration; otherwise the default behavior is to append the new content to the configuration. [#61299](https://github.com/ClickHouse/ClickHouse/pull/61299) ([alesapin](https://github.com/alesapin)).
-* Add async metrics for virtual memory mappings: VMMaxMapCount & VMNumMaps. Closes [#60662](https://github.com/ClickHouse/ClickHouse/issues/60662). [#61354](https://github.com/ClickHouse/ClickHouse/pull/61354) ([Tuan Pham Anh](https://github.com/tuanpavn)).
-* Use `temporary_files_codec` setting in all places where we create temporary data, for example external memory sorting and external memory GROUP BY. Before it worked only in `partial_merge` JOIN algorithm. [#61456](https://github.com/ClickHouse/ClickHouse/pull/61456) ([Maksim Kita](https://github.com/kitaisreal)).
-* Remove the duplicated check `containing_part.empty()`; it's already checked here: https://github.com/ClickHouse/ClickHouse/blob/1296dac3c7e47670872c15e3f5e58f869e0bd2f2/src/Storages/MergeTree/MergeTreeData.cpp#L6141. [#61467](https://github.com/ClickHouse/ClickHouse/pull/61467) ([William Schoeffel](https://github.com/wiledusc)).
-* Add a new setting `max_parser_backtracks` which allows limiting the complexity of query parsing. [#61502](https://github.com/ClickHouse/ClickHouse/pull/61502) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Less contention during dynamic resize of filesystem cache. [#61524](https://github.com/ClickHouse/ClickHouse/pull/61524) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Disallow sharded mode of StorageS3 queue, because it will be rewritten. [#61537](https://github.com/ClickHouse/ClickHouse/pull/61537) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fixed typo: from `use_leagcy_max_level` to `use_legacy_max_level`. [#61545](https://github.com/ClickHouse/ClickHouse/pull/61545) ([William Schoeffel](https://github.com/wiledusc)).
-* Remove some duplicate entries in blob_storage_log. [#61622](https://github.com/ClickHouse/ClickHouse/pull/61622) ([YenchangChan](https://github.com/YenchangChan)).
-* Added `current_user` function as a compatibility alias for MySQL. [#61770](https://github.com/ClickHouse/ClickHouse/pull/61770) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Use managed identity for backups IO when using Azure Blob Storage. Add a setting to prevent ClickHouse from attempting to create a non-existent container, which requires permissions at the storage account level. [#61785](https://github.com/ClickHouse/ClickHouse/pull/61785) ([Daniel Pozo Escalona](https://github.com/danipozo)).
-* In the previous version, some numbers in Pretty formats were not pretty enough. [#61794](https://github.com/ClickHouse/ClickHouse/pull/61794) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* A long value in Pretty formats won't be cut if it is the single value in the resultset, such as in the result of the `SHOW CREATE TABLE` query. [#61795](https://github.com/ClickHouse/ClickHouse/pull/61795) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Similarly to `clickhouse-local`, `clickhouse-client` will accept the `--output-format` option as a synonym to the `--format` option. This closes [#59848](https://github.com/ClickHouse/ClickHouse/issues/59848). [#61797](https://github.com/ClickHouse/ClickHouse/pull/61797) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* If stdout is a terminal and the output format is not specified, `clickhouse-client` and similar tools will use `PrettyCompact` by default, similarly to the interactive mode. `clickhouse-client` and `clickhouse-local` will handle command line arguments for input and output formats in a unified fashion. This closes [#61272](https://github.com/ClickHouse/ClickHouse/issues/61272). [#61800](https://github.com/ClickHouse/ClickHouse/pull/61800) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Underscore digit groups in Pretty formats for better readability. This is controlled by a new setting, `output_format_pretty_highlight_digit_groups`. [#61802](https://github.com/ClickHouse/ClickHouse/pull/61802) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-
-#### Bug Fix (user-visible misbehavior in an official stable release) {#bug-fix-user-visible-misbehavior-in-an-official-stable-release}
-
-* Fix bug with `intDiv` for decimal arguments [#59243](https://github.com/ClickHouse/ClickHouse/pull/59243) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Fix a KQL issue found by WingFuzz [#59626](https://github.com/ClickHouse/ClickHouse/pull/59626) ([Yong Wang](https://github.com/kashwy)).
-* Fix error "Read beyond last offset" for AsynchronousBoundedReadBuffer [#59630](https://github.com/ClickHouse/ClickHouse/pull/59630) ([Vitaly Baranov](https://github.com/vitlibar)).
-* RabbitMQ: fix messages being neither acked nor nacked [#59775](https://github.com/ClickHouse/ClickHouse/pull/59775) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix function execution over const and LowCardinality with GROUP BY const for analyzer [#59986](https://github.com/ClickHouse/ClickHouse/pull/59986) ([Azat Khuzhin](https://github.com/azat)).
-* Fix scale conversion for DateTime64 [#60004](https://github.com/ClickHouse/ClickHouse/pull/60004) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Fix INSERT into SQLite with single quote (by escaping single quotes with a quote instead of backslash) [#60015](https://github.com/ClickHouse/ClickHouse/pull/60015) ([Azat Khuzhin](https://github.com/azat)).
-* Fix optimize_uniq_to_count removing the column alias [#60026](https://github.com/ClickHouse/ClickHouse/pull/60026) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix finished_mutations_to_keep=0 for MergeTree (as docs says 0 is to keep everything) [#60031](https://github.com/ClickHouse/ClickHouse/pull/60031) ([Azat Khuzhin](https://github.com/azat)).
-* Fix possible exception from s3queue table on drop [#60036](https://github.com/ClickHouse/ClickHouse/pull/60036) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix PartsSplitter producing invalid ranges for the same part [#60041](https://github.com/ClickHouse/ClickHouse/pull/60041) ([Maksim Kita](https://github.com/kitaisreal)).
-* Use max_query_size from context in DDLLogEntry instead of hardcoded 4096 [#60083](https://github.com/ClickHouse/ClickHouse/pull/60083) ([Kruglov Pavel](https://github.com/Avogar)).
-* Fix inconsistent formatting of queries [#60095](https://github.com/ClickHouse/ClickHouse/pull/60095) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Fix inconsistent formatting of explain in subqueries [#60102](https://github.com/ClickHouse/ClickHouse/pull/60102) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Fix cosineDistance crash with Nullable [#60150](https://github.com/ClickHouse/ClickHouse/pull/60150) ([Raúl Marín](https://github.com/Algunenano)).
-* Allow casting of bools in string representation to true bools [#60160](https://github.com/ClickHouse/ClickHouse/pull/60160) ([Robert Schulze](https://github.com/rschu1ze)).
-* Fix system.s3queue_log [#60166](https://github.com/ClickHouse/ClickHouse/pull/60166) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix arrayReduce with nullable aggregate function name [#60188](https://github.com/ClickHouse/ClickHouse/pull/60188) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix actions execution during preliminary filtering (PK, partition pruning) [#60196](https://github.com/ClickHouse/ClickHouse/pull/60196) ([Azat Khuzhin](https://github.com/azat)).
-* Hide sensitive info for s3queue [#60233](https://github.com/ClickHouse/ClickHouse/pull/60233) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Revert "Replace `ORDER BY ALL` by `ORDER BY *`" [#60248](https://github.com/ClickHouse/ClickHouse/pull/60248) ([Robert Schulze](https://github.com/rschu1ze)).
-* Azure Blob Storage: fix issues with endpoint and prefix [#60251](https://github.com/ClickHouse/ClickHouse/pull/60251) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
-* Fix http exception codes. [#60252](https://github.com/ClickHouse/ClickHouse/pull/60252) ([Austin Kothig](https://github.com/kothiga)).
-* Fix LRUResourceCache bug (Hive cache) [#60262](https://github.com/ClickHouse/ClickHouse/pull/60262) ([shanfengp](https://github.com/Aed-p)).
-* s3queue: fix bug (also fixes flaky test_storage_s3_queue/test.py::test_shards_distributed) [#60282](https://github.com/ClickHouse/ClickHouse/pull/60282) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix use-of-uninitialized-value and invalid result in hashing functions with IPv6 [#60359](https://github.com/ClickHouse/ClickHouse/pull/60359) ([Kruglov Pavel](https://github.com/Avogar)).
-* Force reanalysis if parallel replicas changed [#60362](https://github.com/ClickHouse/ClickHouse/pull/60362) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix usage of plain metadata type with new disks configuration option [#60396](https://github.com/ClickHouse/ClickHouse/pull/60396) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Don't allow to set max_parallel_replicas to 0 as it doesn't make sense [#60430](https://github.com/ClickHouse/ClickHouse/pull/60430) ([Kruglov Pavel](https://github.com/Avogar)).
-* Try to fix logical error 'Cannot capture column because it has incompatible type' in mapContainsKeyLike [#60451](https://github.com/ClickHouse/ClickHouse/pull/60451) ([Kruglov Pavel](https://github.com/Avogar)).
-* Fix OptimizeDateOrDateTimeConverterWithPreimageVisitor with null arguments [#60453](https://github.com/ClickHouse/ClickHouse/pull/60453) ([Raúl Marín](https://github.com/Algunenano)).
-* Try to avoid calculation of scalar subqueries for CREATE TABLE. [#60464](https://github.com/ClickHouse/ClickHouse/pull/60464) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Merging [#59674](https://github.com/ClickHouse/ClickHouse/issues/59674). [#60470](https://github.com/ClickHouse/ClickHouse/pull/60470) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Correctly check keys in s3Cluster [#60477](https://github.com/ClickHouse/ClickHouse/pull/60477) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fix deadlock in parallel parsing when lots of rows are skipped due to errors [#60516](https://github.com/ClickHouse/ClickHouse/pull/60516) ([Kruglov Pavel](https://github.com/Avogar)).
-* Fix `max_query_size` for KQL compound operator [#60534](https://github.com/ClickHouse/ClickHouse/pull/60534) ([Yong Wang](https://github.com/kashwy)).
-* Keeper fix: add timeouts when waiting for commit logs [#60544](https://github.com/ClickHouse/ClickHouse/pull/60544) ([Antonio Andelic](https://github.com/antonio2368)).
-* Reduce the number of read rows from `system.numbers` [#60546](https://github.com/ClickHouse/ClickHouse/pull/60546) ([JackyWoo](https://github.com/JackyWoo)).
-* Don't output number tips for date types [#60577](https://github.com/ClickHouse/ClickHouse/pull/60577) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix reading from MergeTree with non-deterministic functions in filter [#60586](https://github.com/ClickHouse/ClickHouse/pull/60586) ([Kruglov Pavel](https://github.com/Avogar)).
-* Fix logical error on bad compatibility setting value type [#60596](https://github.com/ClickHouse/ClickHouse/pull/60596) ([Kruglov Pavel](https://github.com/Avogar)).
-* Fix inconsistent aggregate function states in mixed x86-64 / ARM clusters [#60610](https://github.com/ClickHouse/ClickHouse/pull/60610) ([Harry Lee](https://github.com/HarryLeeIBM)).
-* fix(prql): Robust panic handler [#60615](https://github.com/ClickHouse/ClickHouse/pull/60615) ([Maximilian Roos](https://github.com/max-sixty)).
-* Fix `intDiv` for decimal and date arguments [#60672](https://github.com/ClickHouse/ClickHouse/pull/60672) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
-* Fix: expand CTE in alter modify query [#60682](https://github.com/ClickHouse/ClickHouse/pull/60682) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
-* Fix system.parts for non-Atomic/Ordinary database engine (i.e. Memory) [#60689](https://github.com/ClickHouse/ClickHouse/pull/60689) ([Azat Khuzhin](https://github.com/azat)).
-* Fix "Invalid storage definition in metadata file" for parameterized views [#60708](https://github.com/ClickHouse/ClickHouse/pull/60708) ([Azat Khuzhin](https://github.com/azat)).
-* Fix buffer overflow in CompressionCodecMultiple [#60731](https://github.com/ClickHouse/ClickHouse/pull/60731) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Remove nonsense from SQL/JSON [#60738](https://github.com/ClickHouse/ClickHouse/pull/60738) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Remove wrong sanitize checking in aggregate function quantileGK [#60740](https://github.com/ClickHouse/ClickHouse/pull/60740) ([李扬](https://github.com/taiyang-li)).
-* Fix insert-select + insert_deduplication_token bug by setting streams to 1 [#60745](https://github.com/ClickHouse/ClickHouse/pull/60745) ([Jordi Villar](https://github.com/jrdi)).
-* Prevent setting custom metadata headers on unsupported multipart upload operations [#60748](https://github.com/ClickHouse/ClickHouse/pull/60748) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)).
-* Fix toStartOfInterval [#60763](https://github.com/ClickHouse/ClickHouse/pull/60763) ([Andrey Zvonov](https://github.com/zvonand)).
-* Fix crash in arrayEnumerateRanked [#60764](https://github.com/ClickHouse/ClickHouse/pull/60764) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix crash when using input() in INSERT SELECT JOIN [#60765](https://github.com/ClickHouse/ClickHouse/pull/60765) ([Kruglov Pavel](https://github.com/Avogar)).
-* Fix crash with different allow_experimental_analyzer value in subqueries [#60770](https://github.com/ClickHouse/ClickHouse/pull/60770) ([Dmitry Novik](https://github.com/novikd)).
-* Remove recursion when reading from S3 [#60849](https://github.com/ClickHouse/ClickHouse/pull/60849) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fix possible stuck on error in HashedDictionaryParallelLoader [#60926](https://github.com/ClickHouse/ClickHouse/pull/60926) ([vdimir](https://github.com/vdimir)).
-* Fix async RESTORE with Replicated database [#60934](https://github.com/ClickHouse/ClickHouse/pull/60934) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fix deadlock in async inserts to `Log` tables via native protocol [#61055](https://github.com/ClickHouse/ClickHouse/pull/61055) ([Anton Popov](https://github.com/CurtizJ)).
-* Fix lazy execution of default argument in dictGetOrDefault for RangeHashedDictionary [#61196](https://github.com/ClickHouse/ClickHouse/pull/61196) ([Kruglov Pavel](https://github.com/Avogar)).
-* Fix multiple bugs in groupArraySorted [#61203](https://github.com/ClickHouse/ClickHouse/pull/61203) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix Keeper reconfig for standalone binary [#61233](https://github.com/ClickHouse/ClickHouse/pull/61233) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fix usage of session_token in S3 engine [#61234](https://github.com/ClickHouse/ClickHouse/pull/61234) ([Kruglov Pavel](https://github.com/Avogar)).
-* Fix possible incorrect result of aggregate function `uniqExact` [#61257](https://github.com/ClickHouse/ClickHouse/pull/61257) ([Anton Popov](https://github.com/CurtizJ)).
-* Fix bugs in show database [#61269](https://github.com/ClickHouse/ClickHouse/pull/61269) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix logical error in RabbitMQ storage with MATERIALIZED columns [#61320](https://github.com/ClickHouse/ClickHouse/pull/61320) ([vdimir](https://github.com/vdimir)).
-* Fix CREATE OR REPLACE DICTIONARY [#61356](https://github.com/ClickHouse/ClickHouse/pull/61356) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Fix ATTACH query with external ON CLUSTER [#61365](https://github.com/ClickHouse/ClickHouse/pull/61365) ([Nikolay Degterinsky](https://github.com/evillique)).
-* fix issue of actions dag split [#61458](https://github.com/ClickHouse/ClickHouse/pull/61458) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix finishing a failed RESTORE [#61466](https://github.com/ClickHouse/ClickHouse/pull/61466) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Disable async_insert_use_adaptive_busy_timeout correctly with compatibility settings [#61468](https://github.com/ClickHouse/ClickHouse/pull/61468) ([Raúl Marín](https://github.com/Algunenano)).
-* Allow queuing in restore pool [#61475](https://github.com/ClickHouse/ClickHouse/pull/61475) ([Nikita Taranov](https://github.com/nickitat)).
-* Fix bug when reading system.parts using UUID (issue 61220). [#61479](https://github.com/ClickHouse/ClickHouse/pull/61479) ([Dan Wu](https://github.com/wudanzy)).
-* Fix crash in window view [#61526](https://github.com/ClickHouse/ClickHouse/pull/61526) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
-* Fix `repeat` with non native integers [#61527](https://github.com/ClickHouse/ClickHouse/pull/61527) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fix client `-s` argument [#61530](https://github.com/ClickHouse/ClickHouse/pull/61530) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
-* Fix crash in arrayPartialReverseSort [#61539](https://github.com/ClickHouse/ClickHouse/pull/61539) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix string search with const position [#61547](https://github.com/ClickHouse/ClickHouse/pull/61547) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fix addDays cause an error when used datetime64 [#61561](https://github.com/ClickHouse/ClickHouse/pull/61561) ([Shuai li](https://github.com/loneylee)).
-* Fix `system.part_log` for async insert with deduplication [#61620](https://github.com/ClickHouse/ClickHouse/pull/61620) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fix Non-ready set for system.parts. [#61666](https://github.com/ClickHouse/ClickHouse/pull/61666) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/cloud/features/01_cloud_tiers.md b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/features/01_cloud_tiers.md
new file mode 100644
index 00000000000..a9b8846575e
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/cloud/features/01_cloud_tiers.md
@@ -0,0 +1,206 @@
+---
+'sidebar_label': 'ClickHouse Cloud 层级'
+'slug': '/cloud/manage/cloud-tiers'
+'title': 'ClickHouse Cloud 层级'
+'description': 'ClickHouse Cloud 中可用的云层级'
+'doc_type': 'reference'
+---
+
+
+# ClickHouse Cloud 层级
+
+ClickHouse Cloud 提供多个可用的层级。
+层级在组织级别分配,因此同一组织内的所有服务属于同一层级。
+本页面将讨论哪个层级适合您的特定用例。
+
+**云层级摘要:**
+
+