1 change: 1 addition & 0 deletions .gitignore
.DS_Store
56 changes: 56 additions & 0 deletions .mintlify/skills/clickhouse-js-node-troubleshooting/SKILL.md
---
name: clickhouse-js-node-troubleshooting
description: >
Troubleshoot and resolve common issues with the ClickHouse Node.js client
(@clickhouse/client). Use this skill whenever a user reports errors, unexpected
behavior, or configuration questions involving the Node.js client specifically —
including socket hang-up errors, Keep-Alive problems, stream handling issues, data
type mismatches, read-only user restrictions, proxy/TLS setup problems, or long-running
query timeouts. Trigger even when the user hasn't precisely named the issue; vague
symptoms like "my inserts keep failing" or "connection drops randomly" in a Node.js
context are strong signals to use this skill. Do NOT use for browser/Web client issues.
license: MIT
metadata:
author: ClickHouse Inc
version: "0.1.0"
---

# ClickHouse Node.js Client Troubleshooting

Reference: https://clickhouse.com/docs/integrations/javascript

> **⚠️ Node.js runtime only.** This skill covers the `@clickhouse/client` package running in a **Node.js runtime** exclusively — including **Next.js Node runtime** API routes, React Server Components, Server Actions, and standard Node.js processes. Do **not** apply this skill to browser client components, Web Workers, **Next.js Edge runtime**, Cloudflare Workers, or any usage of `@clickhouse/client-web`. For browser/edge environments, the correct package is `@clickhouse/client-web`.
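When it's unclear which runtime a Next.js route actually executes in, a quick check can help pick the right package. A minimal sketch (the helper name is ours; it assumes the Edge runtime's documented `EdgeRuntime` global):

```javascript
// Rough runtime check before creating a client: the Next.js Edge runtime
// exposes a global `EdgeRuntime`, while Node.js exposes `process.versions.node`.
function isNodeRuntime() {
  return (
    typeof globalThis.EdgeRuntime === 'undefined' &&
    typeof process !== 'undefined' &&
    typeof process.versions?.node === 'string'
  )
}

// In a Node.js process this returns true; on the Edge runtime it returns false,
// signalling that `@clickhouse/client-web` is the right package instead.
```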

---

## How to Use This Skill

1. **Identify the issue** — match symptoms to the Issue Index below and read the corresponding reference file.
2. **Lead with the diagnosis** — explain what's likely causing the issue before giving the fix.
3. **Note version constraints** — flag if a fix requires a minimum client version and check it against what the user provided.
4. **Ask only what's missing** — if the fix is version-dependent and you don't know their version, ask; otherwise help immediately.

---

## Issue Index

Identify the user's issue from the list below and read the corresponding reference file for detailed troubleshooting steps.

| Issue | Symptoms | Reference file |
| ------------------------------------- | ---------------------------------------------------------------------------------------------- | ----------------------------- |
| **Socket Hang-Up / ECONNRESET** | `socket hang up`, `ECONNRESET`, intermittent connection drops, long-running queries timing out | `reference/socket-hangup.md` |
| **Data Type Mismatches** | Large integers returned as strings, decimal precision loss, Date/DateTime insertion failures | `reference/data-types.md` |
| **Read-Only User Errors** | Errors when using response compression with `readonly=1` users | `reference/readonly-users.md` |
| **Proxy / Pathname URL Confusion** | Wrong database selected, requests failing behind a proxy with a path prefix | `reference/proxy-pathname.md` |
| **TLS / Certificate Errors** | TLS handshake failures, certificate verification issues, mutual TLS setup | `reference/tls.md` |
| **Compression Not Working** | GZIP compression not activating for requests or responses | `reference/compression.md` |
| **Logging Not Showing Anything** | No log output, need custom logger integration | `reference/logging.md` |
| **Query Parameters Not Interpolated** | Parameterized queries not working, SQL injection concerns | `reference/query-params.md` |

---

## Still Stuck?

- [JS client source + full examples](https://github.com/ClickHouse/clickhouse-js/tree/main/examples)
- [ClickHouse JS client docs](https://clickhouse.com/docs/integrations/javascript)
- [ClickHouse supported formats](https://clickhouse.com/docs/interfaces/formats)
10 changes: 10 additions & 0 deletions .mintlify/skills/clickhouse-js-node-troubleshooting/metadata.json
{
"version": "0.1.0",
"organization": "ClickHouse Inc",
"date": "April 2026",
"abstract": "Troubleshooting guide for the ClickHouse Node.js client (@clickhouse/client). Covers common failure modes including socket hang-up/ECONNRESET, Keep-Alive misconfiguration, data type mismatches, read-only user restrictions, proxy/pathname URL confusion, TLS certificate errors, compression issues, logging setup, and query parameter interpolation.",
"references": [
"https://clickhouse.com/docs/integrations/javascript",
"https://github.com/ClickHouse/clickhouse-js"
]
}
# Compression Not Working

> **Applies to:** all versions. Response compression was enabled by default in `< 1.0.0` and **disabled by default since `>= 1.0.0`** — you must explicitly enable it. Request compression has always been opt-in.

Both request and response compression are supported; **GZIP** (via zlib) is the only supported algorithm.

```js
import { createClient } from '@clickhouse/client'
const client = createClient({
compression: {
response: true,
request: true,
},
})
```

## Compression enabled but getting an error?

If you enable `compression.response: true` and get a ClickHouse settings error, you are likely connecting as a `readonly=1` user. Response compression requires the `enable_http_compression` setting, which read-only users cannot change.

See [`reference/readonly-users.md`](./readonly-users.md) for the fix.

## Compression enabled but response doesn't seem compressed?

- Verify your version-specific defaults — response compression was enabled by default in `< 1.0.0` and is **disabled by default** in `>= 1.0.0`, so on newer versions you must enable `compression.response: true` explicitly.
- Check that the ClickHouse server has HTTP compression enabled (`enable_http_compression = 1` in server config). By default this is enabled on ClickHouse Cloud and most self-hosted setups.
- Request compression (`compression.request: true`) compresses the request body sent to ClickHouse. It has no effect on the response.
# Data Type Mismatches

## Large integers returned as strings

> **Applies to:** all versions. The `output_format_json_quote_64bit_integers` ClickHouse setting is server-side and can be passed via `clickhouse_settings` in any client version.

`UInt64`, `Int64`, `UInt128`, `Int128`, `UInt256`, `Int256` are serialized as **strings** in `JSON*` formats to prevent overflow (they exceed `Number.MAX_SAFE_INTEGER`).

To receive them as numbers (use with caution — precision loss possible):

```js
const resultSet = await client.query({
query: 'SELECT toUInt64(9007199254740993)',
format: 'JSONEachRow',
clickhouse_settings: { output_format_json_quote_64bit_integers: 0 },
})
```

> **Tip (`>= 1.15.0`):** BigInt values are now supported in query parameters, so you can safely pass large integers as bind params without string workarounds.
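If you keep the default quoted serialization, large integers can instead be converted to `BigInt` on the client side after the query. A minimal sketch (the helper name is illustrative):

```javascript
// Convert selected string-serialized fields of a JSONEachRow row to BigInt.
// BigInt() throws on non-integer strings, so bad data fails loudly.
function withBigIntFields(row, fields) {
  const out = { ...row }
  for (const f of fields) out[f] = BigInt(out[f])
  return out
}

// Row shape as returned with the default (quoted) 64-bit serialization:
const row = withBigIntFields({ id: '9007199254740993', name: 'a' }, ['id'])
// row.id === 9007199254740993n — exact, beyond Number.MAX_SAFE_INTEGER
```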

## Decimals losing precision on read

> **Applies to:** all versions (this is a ClickHouse JSON serialization behavior). For custom JSON parse/stringify (e.g., using a BigInt-safe parser), see `>= 1.14.0` which added configurable `json.parse` and `json.stringify` functions.

ClickHouse returns Decimals as numbers by default in `JSON*` formats. Cast to string in the query:

```js
const resultSet = await client.query({
query: `
SELECT toString(my_decimal) AS my_decimal
FROM my_table
`,
format: 'JSONEachRow',
})
```

When inserting, always use the string representation to avoid precision loss:

```js
await client.insert({
table: 'my_table',
values: [{ dec64: '123456789123456.789' }],
format: 'JSONEachRow',
})
```

## Format Selection Quick Reference

| Use case | Recommended format | Min version |
| --------------------------- | ----------------------------------- | ------------------------------------- |
| Insert/select JS objects | `JSONEachRow` | all |
| Bulk insert arrays | `JSONEachRow` | all |
| Stream large result sets | `JSONEachRow`, `JSONCompactEachRow` | all |
| CSV file streaming | `CSV`, `CSVWithNames` | all |
| Parquet file streaming | `Parquet` | `>= 0.2.6` |
| Single JSON object response | `JSON`, `JSONCompact` | `JSON` all; `JSONCompact` `>= 0.0.14` |
| Stream with progress | `JSONEachRowWithProgress` | `>= 1.7.0` |

> ⚠️ `JSON` and `JSONCompact` return a single object and **cannot be streamed**.
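For the streamable formats above, the Node client's `ResultSet.stream()` yields batches of rows, each exposing `.json()`. A consumption sketch (the `countRows` helper is ours, not part of the client API):

```javascript
// Count rows across the batches yielded by a streamed result set.
async function countRows(stream) {
  let count = 0
  for await (const batch of stream) {
    for (const row of batch) {
      row.json() // parse; real code would process the row here
      count++
    }
  }
  return count
}

// With a real client (sketch, not run here):
// const rs = await client.query({
//   query: 'SELECT number FROM system.numbers LIMIT 1000000',
//   format: 'JSONEachRow',
// })
// const total = await countRows(rs.stream())
```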

## Date/DateTime insertion fails or produces wrong values

> **Applies to:** all versions. Note that `>= 0.2.1` changed Date object serialization to use time-zone-agnostic Unix timestamps instead of timezone-naive datetime strings, which fixed timezone mismatch issues between client and server.

- `Date` / `Date32` columns accept **strings only** (e.g., `'2024-01-15'`).
- `DateTime` / `DateTime64` columns accept strings **or** JS `Date` objects. To use `Date` objects, set:

```js
import { createClient } from '@clickhouse/client'
const client = createClient({
clickhouse_settings: { date_time_input_format: 'best_effort' },
})
```
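If you prefer not to rely on `best_effort` parsing, `Date` objects can be formatted as strings before insertion. A minimal sketch (the helper name is ours; it truncates to whole seconds and uses UTC):

```javascript
// Format a JS Date as a ClickHouse-compatible 'YYYY-MM-DD hh:mm:ss' UTC string.
function toClickHouseDateTime(d) {
  return d.toISOString().slice(0, 19).replace('T', ' ')
}

const s = toClickHouseDateTime(new Date(Date.UTC(2024, 0, 15, 12, 30, 0)))
// s === '2024-01-15 12:30:00'
```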
# Logging Not Showing Anything

> **Requires:** `>= 0.2.0` (the explicit `log.level` config option introduced in 0.2.0 replaced the `CLICKHOUSE_LOG_LEVEL` env var from 0.0.11; a custom `LoggerClass` is also available since `0.2.0`). In `>= 1.18.1`, the default level changed from `OFF` to `WARN`, logging became lazy (messages are only constructed when the log level matches), and structured context fields (`connection_id`, `query_id`, `request_id`, `socket_id`) became available in the logger `args`.

The default log level is **OFF** in `< 1.18.1` and **WARN** in `>= 1.18.1`. Set the level explicitly:

```js
import { ClickHouseLogLevel, createClient } from '@clickhouse/client'

const client = createClient({
log: {
level: ClickHouseLogLevel.DEBUG, // TRACE | DEBUG | INFO | WARN | ERROR
},
})
```

To use a custom logger (e.g., to pipe to your observability stack), implement the `Logger` interface:

```ts
import { ClickHouseLogLevel, createClient } from '@clickhouse/client'
import type { Logger } from '@clickhouse/client'

class MyLogger implements Logger {
debug({ module, message, args }) {
/* ... */
}
info({ module, message, args }) {
/* ... */
}
warn({ module, message, args, err }) {
/* ... */
}
error({ module, message, args, err }) {
/* ... */
}
trace({ module, message, args }) {
/* ... */
}
}

const client = createClient({
log: { LoggerClass: MyLogger, level: ClickHouseLogLevel.INFO },
})
```
# Proxy / Pathname URL Confusion

> **Requires:** `>= 1.0.0` (the `pathname` config option and URL-based configuration were introduced in 1.0.0). For `< 1.0.0`, a partial fix for pathname handling in the `host` parameter was shipped in `0.2.5`.

**Symptom:** Wrong database is selected, or requests fail when ClickHouse is behind a proxy with a path prefix (e.g., `http://proxy:8123/clickhouse_server`).

**Cause:** Passing the pathname in `url` makes the client treat it as the database name.

**Fix:** Use the `pathname` option separately:

```js
import { createClient } from '@clickhouse/client'

const client = createClient({
url: 'http://proxy:8123',
pathname: '/clickhouse_server', // leading slash optional; multiple segments supported
})
```

For proxies that require custom auth headers:

> **Requires:** `>= 1.0.0` (`http_headers` config option; replaces the deprecated `additional_headers` from `>= 0.2.9`). Per-request `http_headers` overrides are available since `>= 1.11.0`.

```js
import { createClient } from '@clickhouse/client'

const client = createClient({
http_headers: {
'My-Auth-Header': 'secret',
},
})
```
# Query Parameters Not Interpolated

> **Applies to:** all versions. NULL parameter binding was fixed in `0.0.16`. Tuple support via `TupleParam` wrapper and JS `Map` as a query parameter were added in `>= 1.9.0`. BigInt values in query parameters are supported since `>= 1.15.0`. Boolean formatting in `Array`/`Tuple`/`Map` params was fixed in `>= 1.13.0`.

Use the `{name: type}` syntax in the query string and pass values via `query_params`:

```js
await client.query({
query: 'SELECT plus({val1: Int32}, {val2: Int32})',
format: 'CSV',
query_params: { val1: 10, val2: 20 },
})
```

## Never use template literals for user values

When `$1`/`?` don't work, a common instinct is to interpolate values directly with a template literal. Don't — this bypasses ClickHouse's server-side escaping and opens the door to SQL injection:

```js
// ❌ Dangerous — never do this with user-controlled values
const userId = req.params.id
await client.query({ query: `SELECT * FROM users WHERE id = ${userId}` })

// ✓ Safe — parameterized
await client.query({
query: 'SELECT * FROM users WHERE id = {id: UInt32}',
query_params: { id: userId },
})
```

Always bring this up when answering query-params questions, especially when the user is coming from another database (PostgreSQL, MySQL, etc.) — they're the most likely to reach for template literals as a fallback.

## Common mistake: wrong parameter syntax

The ClickHouse JS client uses ClickHouse's native `{name: type}` syntax — not `$1`/`?`/`:name` placeholders from other databases:

```js
// ❌ Wrong — none of these placeholder styles work:
//   SELECT * FROM t WHERE id = $1
//   SELECT * FROM t WHERE id = ?
//   SELECT * FROM t WHERE id = :id
await client.query({
  query: 'SELECT * FROM t WHERE id = $1',
  query_params: { id: 42 },
})

// ✓ Correct
await client.query({
query: 'SELECT * FROM t WHERE id = {id: UInt32}',
query_params: { id: 42 },
})
```

## Array parameters

```js
await client.query({
query: 'SELECT * FROM t WHERE id IN {ids: Array(UInt32)}',
format: 'JSONEachRow',
query_params: { ids: [1, 2, 3] },
})
```

## Tuple parameters (`>= 1.9.0`)

Use the `TupleParam` wrapper to pass a tuple:

```js
import { TupleParam, createClient } from '@clickhouse/client'

const client = createClient({
url: 'http://localhost:8123',
})

await client.query({
query: 'SELECT {t: Tuple(UInt32, String)}',
format: 'JSONEachRow',
query_params: { t: new TupleParam([42, 'hello']) },
})
```

## Map parameters (`>= 1.9.0`)

Pass a JS `Map` directly:

```js
await client.query({
query: 'SELECT {m: Map(String, UInt32)}',
format: 'JSONEachRow',
query_params: { m: new Map([['key', 1]]) },
})
```

## NULL parameters

Pass `null` directly — binding fixed in `0.0.16`:

```js
await client.query({
query: 'SELECT {val: Nullable(String)}',
format: 'JSONEachRow',
query_params: { val: null },
})
```
# Read-Only User Errors

> **Applies to:** all versions. In `>= 1.0.0`, `compression.response` was changed to **disabled by default** specifically to avoid this confusing error for read-only users. If you are on `< 1.0.0`, response compression was enabled by default and you must explicitly disable it.

**Symptom:** Error when using `compression: { response: true }` with a `readonly=1` user.

**Cause:** Response compression requires the `enable_http_compression` setting, which `readonly=1` users cannot change. Note: **request compression** (`compression: { request: true }`) is unaffected by this restriction — only response compression triggers the error.

**Fix:** Remove response compression for read-only users:

```js
import { createClient } from '@clickhouse/client'

// Don't do this with a readonly=1 user:
// compression: { response: true }

const client = createClient({
username: 'my_readonly_user',
password: '...',
// compression omitted, or explicitly set to false
compression: {
response: false,
},
})
```