# ck-orm

A typed ClickHouse query layer for modern JavaScript runtimes.
It gives you:
- a schema DSL for ClickHouse tables and columns
- a typed query builder for the common path
- raw SQL when ClickHouse-specific syntax is the better tool
- session helpers for temporary-table workflows
- observability hooks for logging, tracing, and custom instrumentation
The design goal is straightforward: make everyday ClickHouse access easier to structure without hiding the parts that make ClickHouse different.
- Installation
- Quick start
- Mental model
- Schema DSL
- Client configuration
- Query builder
- Join null semantics
- Writes
- Raw SQL
- Functions and table functions
- Sessions and temporary tables
- Runtime methods
- Observability
- Error model
- Security
## Installation

```bash
bun add ck-orm
```

```bash
npm install ck-orm
```

When you change public behavior, update these files together so the docs do not drift away from the tested contract:

- `README.md`
- `examples/README.md` and any touched example files
- `src/runtime.typecheck.ts`
- `e2e/api-matrix.md`
## Quick start

The examples below use a single table so the main flow stays easy to follow.

```ts
import { chTable, chType } from "ck-orm";

export const orderRewardLog = chTable(
  "order_reward_log",
  {
    id: chType.int32(),
    user_id: chType.string(),
    campaign_id: chType.int32(),
    order_id: chType.int64(),
    reward_points: chType.decimal(20, 5),
    status: chType.int16(),
    created_at: chType.int32(),
    _peerdb_synced_at: chType.dateTime64(9),
    _peerdb_is_deleted: chType.uint8(),
    _peerdb_version: chType.uint64(),
  },
  (table) => ({
    engine: "ReplacingMergeTree",
    orderBy: [table.user_id, table.created_at, table.id],
    versionColumn: table._peerdb_version,
  }),
);

export const commerceSchema = {
  orderRewardLog,
};
```

```ts
import { clickhouseClient } from "ck-orm";
import { commerceSchema } from "./schema";

export const db = clickhouseClient({
  host: "http://127.0.0.1:8123",
  database: "demo_store",
  username: "default",
  password: "<password>",
  schema: commerceSchema,
  clickhouse_settings: {
    max_execution_time: 10,
  },
});
```

```ts
import { ck, fn } from "ck-orm";
import { db } from "./db";
import { orderRewardLog } from "./schema";

const query = db
  .select({
    userId: orderRewardLog.user_id,
    totalRewardPoints: fn.sum(orderRewardLog.reward_points).as(
      "total_reward_points",
    ),
  })
  .from(orderRewardLog)
  .where(ck.eq(orderRewardLog.status, 1))
  .groupBy(orderRewardLog.user_id)
  .orderBy(ck.desc(fn.sum(orderRewardLog.reward_points)))
  .limit(20);

const rows = await query;
```

Builder queries are thenable, so `await query` executes the query directly.
## Mental model

Use ck-orm with these boundaries in mind:

- `chType.*` defines schema column types
- `chTable(...)` defines table schemas
- `ck.*` is the query-helper namespace
- `fn.*` is the SQL function-helper namespace
- a schema describes tables and columns, not the database name
- the database connection lives on `clickhouseClient(...)`
- builder queries are the default path; raw SQL is the escape hatch
- `runInSession()` is a ClickHouse session helper, not a transaction
- `leftJoin()` uses SQL-style null semantics by default
- large or high-risk numeric results are decoded conservatively rather than silently coerced

ck-orm is not trying to be:

- a migration framework
- a schema sync tool
- a transaction-oriented ORM with a unit-of-work abstraction
- a fake OLTP abstraction layered over ClickHouse
## Schema DSL

Use `chTable(name, columns, options?)` to define a table. Examples in this section assume:

```ts
import { chTable, chType, ck } from "ck-orm";
```

```ts
const orderRewardLog = chTable("order_reward_log", {
  id: chType.int32(),
  user_id: chType.string(),
  reward_points: chType.decimal(20, 5),
});
```

The third argument can be a plain object or a factory function:

```ts
const orderRewardLog = chTable(
  "order_reward_log",
  {
    id: chType.int32(),
    user_id: chType.string(),
    reward_points: chType.decimal(20, 5),
  },
  (table) => ({
    engine: "ReplacingMergeTree",
    orderBy: [table.user_id, table.id],
  }),
);
```

Public table options: `engine`, `partitionBy`, `primaryKey`, `orderBy`, `sampleBy`, `ttl`, `settings`, `comment`, `versionColumn`.

Column definitions can also carry DDL metadata directly: `.default(expr)`, `.materialized(expr)`, `.aliasExpr(expr)`, `.comment(text)`, `.codec(expr)`, `.ttl(expr)`.

Example:

```ts
const orderRewardLog = chTable(
  "order_reward_log",
  {
    id: chType.int32(),
    created_at: chType.dateTime(),
    shard_day: chType.date().materialized(ck.sql`toDate(created_at)`),
    note: chType.string().default(ck.sql`'pending'`),
  },
  (table) => ({
    engine: "ReplacingMergeTree",
    partitionBy: ck.sql`toYYYYMM(created_at)`,
    orderBy: [table.id],
    versionColumn: table.created_at,
  }),
);
```

Every table exposes `table.$inferSelect` and `table.$inferInsert`:

```ts
type RewardLogRow = typeof orderRewardLog.$inferSelect;
type RewardLogInsert = typeof orderRewardLog.$inferInsert;
```

For generic helpers, use `InferSelectModel<TTable>`, `InferInsertModel<TTable>`, `InferSelectSchema<TSchema>`, and `InferInsertSchema<TSchema>`:

```ts
import type {
  InferInsertModel,
  InferSelectModel,
  InferSelectSchema,
} from "ck-orm";

type RewardLogRow = InferSelectModel<typeof orderRewardLog>;
type RewardLogInsert = InferInsertModel<typeof orderRewardLog>;
type CommerceRows = InferSelectSchema<typeof commerceSchema>;
```

Use `alias()` when the same table needs to appear more than once in a query:

```ts
import { fn, ck, alias } from "ck-orm";

const rewardLog = alias(orderRewardLog, "reward_log");
```

Aliased columns are rebound automatically to the alias.
Use `chType.*` for schema column builders. The schema DSL covers the common ClickHouse type families:

- integers: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`
- floating point and decimal: `float32`, `float64`, `bfloat16`, `decimal`
- scalar types: `bool`, `string`, `fixedString`, `uuid`, `ipv4`, `ipv6`
- time types: `date`, `date32`, `dateTime`, `dateTime64`, `time`, `time64`
- enums and special types: `enum8`, `enum16`, `json`, `dynamic`, `qbit`
- containers: `nullable`, `array`, `tuple`, `map`, `nested`, `variant`, `lowCardinality`
- aggregate types: `aggregateFunction`, `simpleAggregateFunction`
- geometry types: `point`, `ring`, `lineString`, `multiLineString`, `polygon`, `multiPolygon`
## Client configuration

Create a client with `clickhouseClient()`:

```ts
const db = clickhouseClient({
  databaseUrl: "http://default:<password>@127.0.0.1:8123/demo_store",
  schema: commerceSchema,
});
```

`clickhouseClient()` supports two mutually exclusive connection styles.

Use `databaseUrl` when you want a single connection string:

```ts
const db = clickhouseClient({
  databaseUrl: "http://default:secret@127.0.0.1:8123/demo_store",
  schema: commerceSchema,
});
```

When `databaseUrl` is present, do not also pass `host`, `database`, `username`, `password`, or `pathname`.

Use explicit fields when you want each part configured separately:

```ts
const db = clickhouseClient({
  host: "http://127.0.0.1:8123",
  database: "demo_store",
  username: "default",
  password: "<password>",
  schema: commerceSchema,
});
```

Structured mode defaults:

- `host`: `http://localhost:8123`
- `database`: `default`
- `username`: `default`
- `password`: `""`
Most projects only need a small subset of client fields:

| Field | Purpose |
|---|---|
| `schema` | Application schema |
| `request_timeout` | Default request timeout in milliseconds |
| `clickhouse_settings` | Default ClickHouse settings |
| `application` | Set the ClickHouse application name |

Use these only when you actually need the corresponding behavior:

| Field | Purpose |
|---|---|
| `http_headers` | Additional default headers |
| `role` | Default ClickHouse role or roles |
| `session_id` | Default session id |
| `session_max_concurrent_requests` | Maximum in-flight requests allowed per `session_id` within this client (default 1) |
| `compression.response` | Request compressed responses |
| `logger` / `logLevel` | Logger integration |
| `tracing` | OpenTelemetry integration |
| `instrumentation` | Custom query lifecycle hooks |
Session lifetime controls are intentionally request-scoped. Pass `session_timeout` to a single query or to `runInSession(...)`. Use `session_check` when you are continuing an existing `session_id`, not when bootstrapping a brand-new session.

`session_max_concurrent_requests` is different: it is a client-level guard that throttles overlapping requests that target the same `session_id`. Real ClickHouse sessions are still server-locked, so increasing `session_max_concurrent_requests` above `1` can surface `SESSION_IS_LOCKED` instead of giving you true same-session parallelism. Keep the default of `1` unless you intentionally want to remove local serialization and are prepared to handle server-side session-lock failures.

ck-orm also keeps JSON parse/stringify hooks internal to the fetch transport. The public client config does not expose a `json` override.
ck-orm uses basic authentication for database connections:

- in `databaseUrl` mode, credentials may be embedded in the URL
- in structured mode, use `username` and `password`
- if no credentials are provided, the default is `default` with an empty password
## Query builder

The snippets below assume `db`, `orderRewardLog`, and the referenced helpers imported from ck-orm.
Explicit selection gives you an explicitly shaped result:

```ts
const rows = await db
  .select({
    userId: orderRewardLog.user_id,
    rewardPoints: orderRewardLog.reward_points,
  })
  .from(orderRewardLog)
  .limit(10);
```

Projection objects are built from public `Selection` values or columns. In practice that means table columns, `fn.*(...)`, and `ck.expr(...)` outputs all compose the same way inside `select({ ... })`.

Implicit selection returns the full table model when there are no joins:

```ts
const rows = await db.select().from(orderRewardLog).limit(10);
```

With joins, implicit selection groups fields by source and returns nested objects.
```ts
import { alias, ck } from "ck-orm";

const rewardLog = alias(orderRewardLog, "reward_log");
const matchedRewardLog = alias(orderRewardLog, "matched_reward_log");

const rows = await db
  .select({
    userId: rewardLog.user_id,
    rewardEventId: rewardLog.id,
    matchedRewardEventId: matchedRewardLog.id,
  })
  .from(rewardLog)
  .leftJoin(
    matchedRewardLog,
    ck.eq(rewardLog.user_id, matchedRewardLog.user_id),
  );
```

Public condition helpers: `ck.and`, `ck.or`, `ck.not`, `ck.eq`, `ck.ne`, `ck.gt`, `ck.gte`, `ck.lt`, `ck.lte`, `ck.between`, `ck.has`, `ck.hasAll`, `ck.hasAny`, `ck.like`, `ck.notLike`, `ck.ilike`, `ck.notIlike`, `ck.escapeLike`, `ck.inArray`, `ck.notInArray`, `ck.exists`, `ck.notExists`.
`.where(...predicates)` is a variadic `AND` entrypoint. It ignores `undefined`, so you can either pass multiple predicates directly or build grouped predicates with `ck.and(...)` and `ck.or(...)`.
```ts
import { ck } from "ck-orm";

const query = db
  .select({
    userId: orderRewardLog.user_id,
    rewardPoints: orderRewardLog.reward_points,
  })
  .from(orderRewardLog)
  .where(
    ck.eq(orderRewardLog.status, 1),
    ck.inArray(orderRewardLog.campaign_id, [10, 20, 30]),
    ck.between(orderRewardLog.created_at, 1710000000, 1719999999),
  );
```

`ck.and(...)` skips `undefined`, which makes inline dynamic filters easy to assemble:
```ts
import { ck } from "ck-orm";

const query = db
  .select({
    id: orderRewardLog.id,
    status: orderRewardLog.status,
  })
  .from(orderRewardLog)
  .where(
    ck.and(
      minId !== undefined ? ck.gt(orderRewardLog.id, minId) : undefined,
      status !== undefined
        ? ck.or(ck.eq(orderRewardLog.status, status), ck.eq(orderRewardLog.status, 9))
        : undefined,
    ),
  );
```

For larger runtime-built filters, prefer `Predicate[]` plus variadic `.where(...predicates)`:
```ts
import { ck, type Predicate } from "ck-orm";

const predicates: Predicate[] = [];

if (minId !== undefined) {
  predicates.push(ck.gt(orderRewardLog.id, minId));
}

if (status !== undefined) {
  predicates.push(
    ck.or(ck.eq(orderRewardLog.status, status), ck.eq(orderRewardLog.status, 9)),
  );
}

const query = db
  .select({
    id: orderRewardLog.id,
    status: orderRewardLog.status,
  })
  .from(orderRewardLog)
  .where(...predicates);
```

`Predicate` is the public name for reusable boolean SQL clauses. You can use the same predicate objects in `where`, `having`, join `on` clauses, and boolean-aware helpers such as `ck.exists(...)`.
`Selection` is the public name for reusable computed builder values such as `fn.sum(...)`, `fn.toString(...)`, and `ck.expr(ck.sql...)`. Use `.as(...)` to alias them and `.mapWith(...)` to override decoding. `Order` is the clause object returned by `ck.asc(...)` and `ck.desc(...)`.

`ck.has(...)`, `ck.hasAll(...)`, and `ck.hasAny(...)` map directly to the native ClickHouse functions and keep ClickHouse's array, map, and JSON semantics.

`where(...)` is variadic, while `having(...)` takes a single predicate. For multi-clause `having`, compose the predicate first with `ck.and(...)` or `ck.or(...)`.
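As a sketch of that composition pattern, the multi-clause condition is built first and then passed as the single `having(...)` argument. The block below is written as a function over `db`, `ck`, `fn`, and a table handle so it stays self-contained; with the real imports from earlier sections it collapses to an inline `.having(ck.and(...))`. The threshold values are made up for illustration.

```ts
// Sketch only: compose a multi-clause HAVING predicate with ck.and(...),
// then pass the single composed predicate to having(...). The db/ck/fn/table
// parameters stand in for the real ck-orm imports shown in earlier sections.
function rewardSummaryQuery(db: any, ck: any, fn: any, table: any) {
  const havingPredicate = ck.and(
    ck.gt(fn.sum(table.reward_points), "100.00000"), // made-up threshold
    ck.gt(fn.count(), 5), // made-up threshold
  );

  return db
    .select({ userId: table.user_id })
    .from(table)
    .groupBy(table.user_id)
    .having(havingPredicate); // having(...) takes exactly one predicate
}
```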
When you need to match literal `%` or `_` characters, escape the pattern before passing it to `ck.like(...)`:

```ts
import { ck } from "ck-orm";

const rows = await db
  .select({
    userId: orderRewardLog.user_id,
  })
  .from(orderRewardLog)
  .where(ck.like(orderRewardLog.user_id, ck.escapeLike("user_100%")));
```

```ts
import { ck, fn } from "ck-orm";

const totalRewardPoints = fn.sum(orderRewardLog.reward_points).as(
  "total_reward_points",
);

const query = db
  .select({
    userId: orderRewardLog.user_id,
    totalRewardPoints,
  })
  .from(orderRewardLog)
  .groupBy(orderRewardLog.user_id)
  .having(ck.gt(fn.sum(orderRewardLog.reward_points), "100.00000"))
  .orderBy(ck.desc(orderRewardLog.created_at))
  .limit(20)
  .offset(0);
```

`groupBy()` and `limitBy([...])` accept columns and computed `Selection` values from helpers like `fn.*(...)` or `ck.expr(...)`.
`orderBy()` accepts:

- `ck.desc(selection)`
- `ck.asc(selection)`
- a column directly

Append `FINAL` to a table query:

```ts
const query = db.select().from(orderRewardLog).final();
```

Use ClickHouse `LIMIT ... BY ...`:
```ts
import { ck } from "ck-orm";

const query = db
  .select({
    userId: orderRewardLog.user_id,
    createdAt: orderRewardLog.created_at,
  })
  .from(orderRewardLog)
  .orderBy(ck.desc(orderRewardLog.created_at))
  .limitBy([orderRewardLog.user_id], 1);
```

Builder queries can be executed in three ways:

```ts
const query = db.select().from(orderRewardLog).limit(10);

const rows = await query;
const sameRows = await query.execute();

for await (const row of query.iterator()) {
  console.log(row);
}
```

`query.iterator()` uses the same session-aware concurrency rules as `db.stream()`: if the query targets a `session_id`, the slot stays occupied until iteration finishes or the iterator is closed early.
Use `.as("alias")` to turn a builder into a subquery:

```ts
const latestRewardEvent = db
  .select({
    userId: orderRewardLog.user_id,
    createdAt: orderRewardLog.created_at,
  })
  .from(orderRewardLog)
  .orderBy(ck.desc(orderRewardLog.created_at))
  .limit(10)
  .as("latest_reward_event");
```

Use `$with()` and `with()` for CTEs:
```ts
import { ck, fn } from "ck-orm";

const rankedUsers = db.$with("ranked_users").as(
  db
    .select({
      userId: orderRewardLog.user_id,
      totalRewardPoints: fn.sum(orderRewardLog.reward_points).as(
        "total_reward_points",
      ),
    })
    .from(orderRewardLog)
    .groupBy(orderRewardLog.user_id),
);

const rows = await db
  .with(rankedUsers)
  .select({
    userId: rankedUsers.userId,
    totalRewardPoints: rankedUsers.totalRewardPoints,
  })
  .from(rankedUsers);
```

Use `db.count(source, ...predicates)` for a Drizzle-style count helper. It follows the same predicate semantics as `.where(...predicates)`: multiple predicates are combined with `AND`, and `undefined` values are ignored.
```ts
import { ck } from "ck-orm";

const total = await db.count(
  orderRewardLog,
  ck.eq(orderRewardLog.status, 1),
  ck.gt(orderRewardLog.id, 1000),
);
```

For more complex result sets, count a subquery or CTE:

```ts
const activeUsers = db
  .select({
    userId: orderRewardLog.user_id,
  })
  .from(orderRewardLog)
  .where(ck.eq(orderRewardLog.status, 1))
  .as("active_users");

const total = await db.count(activeUsers);
```

`db.count(...)` decodes to `number` for convenience. If you need explicit aggregate control or exact handling for very large counts, use `fn.count()`.
## Join null semantics

`leftJoin()` defaults to SQL-style null semantics by automatically applying `join_use_nulls = 1`. That means:

- the right side of a default left join is inferred as nullable
- if you explicitly disable `join_use_nulls`, the inferred types change as well

To align with ClickHouse default join behavior:

```ts
const rawDefaultDb = db.withSettings({
  join_use_nulls: 0,
});
```

The forced `join_use_nulls = 1` setting is preserved when a joined query is reused as a subquery, CTE, `ck.exists(...)`, or `ck.inArray(...)` source, so builder types stay aligned with runtime behavior.
## Writes

Use the builder when you want typed inserts that follow the table schema:

```ts
await db.insert(orderRewardLog).values({
  id: 1,
  user_id: "user_100",
  campaign_id: 10,
  order_id: 900001,
  reward_points: "42.50000",
  status: 1,
  created_at: 1710000000,
  _peerdb_synced_at: new Date("2026-04-21T00:00:00.000Z"),
  _peerdb_is_deleted: 0,
  _peerdb_version: 1n,
});
```

Insert rows must use keys from the table schema. Unknown keys are rejected early. Omitted columns continue to use `DEFAULT`.

Use `insertJsonEachRow()` when you already have object rows or an async row stream:

```ts
await db.insertJsonEachRow("tmp_scope", [
  { user_id: "user_100" },
  { user_id: "user_200" },
]);
```

It accepts:

- a string table name
- a table object created by `chTable()`
- a regular array
- an `AsyncIterable`
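Because an `AsyncIterable` is accepted, rows can be produced lazily by an async generator instead of being buffered into an array first. A minimal sketch: the generator itself is plain TypeScript, and the `db.insertJsonEachRow` call is shown as a comment because it assumes the `db` client from the Quick start.

```ts
// Yield insert rows one at a time, so a large import never has to be
// materialized fully in memory before the insert starts.
async function* scopeRows(
  userIds: string[],
): AsyncGenerator<{ user_id: string }> {
  for (const userId of userIds) {
    yield { user_id: userId };
  }
}

// Usage with ck-orm (assumes `db` from the Quick start section):
// await db.insertJsonEachRow("tmp_scope", scopeRows(["user_100", "user_200"]));
```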
## Raw SQL

ck-orm includes its own SQL template API. Use it when builder syntax would be less direct than the SQL you already want to write.

```ts
import { ck, fn } from "ck-orm";

const rows = await db.execute(ck.sql`
  select
    ${orderRewardLog.user_id},
    ${fn.sum(orderRewardLog.reward_points)} as total_reward_points
  from ${orderRewardLog}
  where ${orderRewardLog.id} > ${10}
  group by ${orderRewardLog.user_id}
`);
```

```ts
import { ck } from "ck-orm";

const rows = await db.execute("SELECT 1 AS one");
const sameRows = await db.execute(ck.sql("SELECT 1 AS one"));
```

```ts
const fields = ck.sql.join(
  [ck.sql.identifier("user_id"), ck.sql.identifier("reward_points")],
  ", ",
);

const rows = await db.execute(
  ck.sql`${ck.sql.raw("select ")}${fields}${ck.sql.raw(" from ")}${ck.sql.identifier("order_reward_log")}`,
);
```

```ts
import { ck } from "ck-orm";

const rows = await db.execute(
  ck.sql`select user_id, reward_points from order_reward_log where user_id = {user_id:String} limit {limit:Int64}`,
  {
    query_params: {
      user_id: "user_100",
      limit: 10,
    },
  },
);
```

Parameter transport is chosen automatically. You do not need to configure multipart handling for `query_params`.

`query_params` keys that start with `orm_param` are rejected. That prefix is reserved for parameters generated internally by `` ck.sql`...` ``.
Use `ck.expr()` to wrap a raw SQL fragment as a reusable `Selection`:

```ts
import { ck } from "ck-orm";

const query = db.select({
  constantOne: ck.expr(ck.sql.raw("1")).as("constant_one"),
});
```

Raw eager queries only support `JSON` output:

```ts
const rows = await db.execute("select 1", {
  format: "JSON",
});
```

Raw streaming queries only support `JSONEachRow` output:

```ts
for await (const row of db.stream("select 1", {
  format: "JSONEachRow",
})) {
  console.log(row);
}
```

## Functions and table functions

Common helpers include:
`fn.call()`, `fn.withParams()`, `fn.toString()`, `fn.toDate()`, `fn.toDateTime()`, `fn.toStartOfMonth()`, `fn.count()`, `fn.countIf()`, `fn.sum()`, `fn.sumIf()`, `fn.avg()`, `fn.min()`, `fn.max()`, `fn.uniqExact()`, `fn.coalesce()`, `fn.tuple()`, `fn.arrayZip()`, `fn.not()`.

`fn.call(name, ...)` and `fn.withParams(name, ...)` validate `name` as a SQL identifier before compilation.
```ts
import { ck, fn } from "ck-orm";

const query = db
  .select({
    month: fn.toStartOfMonth(orderRewardLog.created_at).as("month"),
    totalRewardPoints: fn.sum(orderRewardLog.reward_points).as(
      "total_reward_points",
    ),
  })
  .from(orderRewardLog);
```

```ts
import { fn } from "ck-orm";

const numbers = fn.table.call("numbers", 10).as("n");

const query = db
  .select({
    total: fn.count(),
  })
  .from(numbers);
```

## Sessions and temporary tables

ClickHouse does not provide traditional transactions. For scoped analytics and large-filter workflows, session-bound temporary tables are often the right primitive.

Inside a session callback, ck-orm gives you:

- `runInSession()`
- `registerTempTable()`
- `createTemporaryTable()`
- `createTemporaryTableRaw()`
```ts
import { chTable, chType } from "ck-orm";

const tmpScope = chTable("tmp_scope", {
  user_id: chType.string(),
});

await db.runInSession(async (session) => {
  // Temporary tables live only inside this session and are cleaned up automatically.
  await session.createTemporaryTable(tmpScope);

  await session.insertJsonEachRow(tmpScope, [
    { user_id: "user_100" },
    { user_id: "user_200" },
  ]);

  return session.execute(`
    SELECT user_id
    FROM order_reward_log
    WHERE user_id IN (SELECT user_id FROM tmp_scope)
  `);
});
```

- requests targeting the same `session_id` share a session concurrency controller; the client default is `session_max_concurrent_requests = 1`
- real ClickHouse sessions are exclusive on the server side, so raising `session_max_concurrent_requests` above `1` disables local serialization but can still fail with `SESSION_IS_LOCKED`
- `createTemporaryTable()` automatically registers the table for cleanup
- `createTemporaryTable()` consumes schema objects; temporary-table lifecycle stays on `Session`, not on the schema itself
- `createTemporaryTableRaw()` is the trusted-only raw SQL escape hatch and rejects multi-statement definitions
- `runInSession()` drops registered temporary tables when the callback finishes
- nested `runInSession()` calls always create a new child session
- nested calls may not reuse any active ancestor `session_id`
- nested child sessions do not share temp tables or same-session concurrency slots with their parent because they always use a different `session_id`
The practical rules are:

- the same explicit `session_id` is serialized by default
- the client default `session_id` participates in the same limiter
- child clients created by `withSettings()` share the same limiter as their parent when they target the same `session_id`
- different `session_id` values do not block each other
- `stream()` and builder `iterator()` hold the same-session slot until the iterator finishes or is closed
- nested `runInSession()` calls create a fresh child `session_id`, so they do not share the parent's same-session slot

Recommended usage:

- keep `session_max_concurrent_requests = 1` for temporary-table workflows and any continued-session logic
- if you need true parallelism, use different `session_id` values instead of raising the same-session limit
- only raise `session_max_concurrent_requests` when you explicitly want to remove local backpressure and can tolerate `SESSION_IS_LOCKED`
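To make the "use different `session_id` values" recommendation concrete, here is a hedged sketch written as a function over the `db` client from the Quick start. The session ids and statements are made up for illustration.

```ts
// Two independent workloads run in parallel because each request targets its
// own session_id, so neither one waits on the other's same-session slot.
async function runIndependentJobs(db: any) {
  const [jobA, jobB] = await Promise.all([
    db.execute("SELECT 1 AS one", { session_id: "report-job-a" }),
    db.execute("SELECT 2 AS two", { session_id: "report-job-b" }),
  ]);
  return { jobA, jobB };
}
```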
## Runtime methods

Beyond the builder, the client also exposes lower-level runtime methods.

Common per-query fields include: `clickhouse_settings`, `query_params`, `query_id`, `session_id`, `session_timeout`, `session_check`, `role`, `auth`, `abort_signal`, `http_headers`, `ignore_error_response`.
`session_timeout` and `session_check` live here on purpose: they describe a specific request or a specific session block, not a global client default.

`session_check` does not bootstrap a new ClickHouse session. Use it when you need ClickHouse to verify that an explicit `session_id` already exists.

Any request that carries a `session_id` participates in the same per-session limiter, including raw `execute()`, `command()`, `stream()`, builder `.execute()`, and builder `.iterator()`.
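A hedged sketch of continuing an existing session with those per-query fields; the session id and the timeout value are illustrative, and `db` is the client from the Quick start.

```ts
// Continue an existing ClickHouse session: session_check asks the server to
// verify the session already exists rather than creating a new one, and
// session_timeout applies to this request only, not to the whole client.
async function queryExistingSession(db: any, sessionId: string) {
  return db.execute("SELECT 1 AS one", {
    session_id: sessionId,
    session_check: true,
    session_timeout: 60, // illustrative value
  });
}
```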
Execute a raw query and return `Record<string, unknown>[]`:

```ts
const rows = await db.execute("SELECT 1 AS one");
```

Stream raw query results:

```ts
for await (const row of db.stream("SELECT number FROM numbers(10)")) {
  console.log(row);
}
```

If `stream()` targets a `session_id`, the same-session slot is released only after the async iterator completes or you close it early by breaking out of the loop.

Execute a command that does not return a row set:

```ts
await db.command("SYSTEM FLUSH LOGS");
```

Create a child client with additional default settings:

```ts
const reportDb = db.withSettings({
  max_execution_time: 60,
  join_use_nulls: 0,
});
```

The returned client keeps the same schema, transport, and session concurrency controller as the parent client.

```ts
await db.ping();
await db.replicasStatus();
```

## Observability

ck-orm provides three built-in observability layers:

- `logger`
- `tracing`
- `instrumentation`
```ts
import type { ClickHouseOrmLogger } from "ck-orm";

const logger: ClickHouseOrmLogger = {
  trace(message, fields) {
    console.debug(message, fields);
  },
  debug(message, fields) {
    console.debug(message, fields);
  },
  info(message, fields) {
    console.info(message, fields);
  },
  warn(message, fields) {
    console.warn(message, fields);
  },
  error(message, fields) {
    console.error(message, fields);
  },
};

const db = clickhouseClient({
  databaseUrl: "http://127.0.0.1:8123/demo_store",
  schema: commerceSchema,
  logger,
  logLevel: "info",
});
```

```ts
import { clickhouseClient } from "ck-orm";
import { commerceSchema } from "./schema";

const tracer = myObservabilityStack.getTracer("ck-orm-example");

const db = clickhouseClient({
  databaseUrl: "http://127.0.0.1:8123/demo_store",
  schema: commerceSchema,
  tracing: {
    tracer,
    dbName: "demo_store",
  },
});
```

```ts
import type { ClickHouseOrmInstrumentation } from "ck-orm";

const instrumentation: ClickHouseOrmInstrumentation = {
  onQueryStart(event) {
    console.log("start", event.operation, event.statement);
  },
  onQuerySuccess(event) {
    console.log("success", event.durationMs, event.rowCount);
  },
  onQueryError(event) {
    console.error("error", event.error);
  },
};

const db = clickhouseClient({
  databaseUrl: "http://127.0.0.1:8123/demo_store",
  schema: commerceSchema,
  instrumentation: [instrumentation],
});
```

Public event types: `ClickHouseOrmQueryEvent`, `ClickHouseOrmQueryResultEvent`, `ClickHouseOrmQueryErrorEvent`, `ClickHouseOrmTracingOptions`, `ClickHouseOrmLogLevel`.
## Error model

Public error types (type-only exports): `ClickHouseOrmError`, `DecodeError`.

Runtime error checks:

- `isClickHouseOrmError(error)`
- `isDecodeError(error)`

`ClickHouseOrmError` preserves as much context as possible, including: `kind`, `executionState`, `queryId`, `sessionId`, `httpStatus`, `clickhouseCode`, `clickhouseName`, `responseText`, `requestTimeoutMs`.

Current public `kind` values: `client_validation`, `request_failed`, `decode`, `timeout`, `aborted`, `session`.
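As a hedged illustration of branching on those fields, here is a small retry-policy sketch. It uses a local structural type that mirrors a few of the documented field names rather than importing from ck-orm, and the policy itself (retry on `timeout` and `session`) is an example, not a library recommendation; in real code you would narrow with `isClickHouseOrmError(error)` first.

```ts
// Local structural mirror of a few documented ClickHouseOrmError fields,
// kept minimal so this sketch stays self-contained. Field types here are
// assumptions for illustration only.
interface OrmErrorLike {
  kind:
    | "client_validation"
    | "request_failed"
    | "decode"
    | "timeout"
    | "aborted"
    | "session";
  httpStatus?: number;
  queryId?: string;
}

// Example policy: transient kinds may be retried; validation and decode
// failures are programming errors and should surface immediately.
function isRetryable(error: OrmErrorLike): boolean {
  return error.kind === "timeout" || error.kind === "session";
}
```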
## Security

ck-orm is designed to make unsafe SQL construction difficult by default. The built-in protections include:

- identifiers passed to `ck.sql.identifier()` are validated and quoted
- function names used by `fn.call(...)`, `fn.withParams(...)`, `chType.aggregateFunction(...)`, `chType.simpleAggregateFunction(...)`, and `chType.nested({...})` keys are validated
- values interpolated into `` sql`...` `` become ClickHouse named parameters rather than raw SQL text
- `query_params` keys that start with `orm_param` are rejected because that prefix is reserved for internal SQL-template parameters
- only a single top-level statement is allowed per request
- authorization headers derived from connection config cannot be overridden by user-supplied `http_headers`
- structured connection config rejects credentials embedded in `host`
- tracing destination data is normalized before it is attached to spans

Treat the following APIs like `eval`: only pass developer-controlled strings, never user-derived input.

- `ck.sql.raw(value)`
- `db.createTemporaryTableRaw(name, definition)`

`fn.call(name, ...)` and `fn.withParams(name, ...)` validate `name`, but you should still treat dynamically chosen function names as developer-controlled input rather than end-user input.
`ck.sql.identifier(...)` validates on the compile/execute boundary. Constructing the fragment is cheap; the error appears when the fragment is compiled into a query.

`tracing.includeStatement` defaults to `true`, so compacted SQL text is attached to spans as `db.statement`. Bound values are stored separately as ClickHouse named parameters and are not included in the statement text, but table names, column names, and query shape still appear in tracing output. If your trace backend is shared or off-host, disable `includeStatement` or filter `db.statement` in your collector.

`ClickHouseOrmError.responseText` and the error message may contain raw text from the ClickHouse server, including SQL fragments and database object names. Do not forward those directly to untrusted clients. Log them server-side and expose a generic message instead.
