8 changes: 4 additions & 4 deletions docs/en/_snippets/_users-and-roles-common.md
@@ -170,7 +170,7 @@ Roles are used to define groups of users for certain privileges instead of manag

1. Log into the ClickHouse client using the `clickhouse_admin` user

```
```bash
clickhouse-client --user clickhouse_admin --password password
```

@@ -194,7 +194,7 @@ Roles are used to define groups of users for certain privileges instead of manag

3. Log into the ClickHouse client using the `column_user` user

```
```bash
clickhouse-client --user column_user --password password
```

@@ -245,7 +245,7 @@ Roles are used to define groups of users for certain privileges instead of manag

1. Log into the ClickHouse client using `row_user`

```
```bash
clickhouse-client --user row_user --password password
```

@@ -295,7 +295,7 @@ For example, if one `role1` allows for only select on `column1` and `role2` allo

4. Log into the ClickHouse client using `row_and_column_user`

```
```bash
clickhouse-client --user row_and_column_user --password password
```

2 changes: 1 addition & 1 deletion docs/en/chdb/guides/clickhouse-local.md
@@ -98,7 +98,7 @@ from chdb import session as chs

Initialize a session pointing to `demo.chdb`:

```
```python
sess = chs.Session("demo.chdb")
```

2 changes: 1 addition & 1 deletion docs/en/chdb/guides/querying-pandas.md
@@ -321,7 +321,7 @@ from chdb import session as chs

Initialize a session:

```
```python
sess = chs.Session()
```

4 changes: 3 additions & 1 deletion docs/en/cloud/manage/account-close.md
@@ -39,11 +39,13 @@ below.
3. Click the Help button (question mark in the upper right corner of the screen).
4. Under 'Support' click 'Create case.'
5. In the 'Create new case' screen, enter the following:
```

```text
Priority: Severity 3
Subject: Please close my ClickHouse account
Description: We would appreciate it if you would share a brief note about why you are cancelling.
```

6. Click 'Create new case'
7. We will close your account and send a confirmation email to let you know when it is complete.

10 changes: 5 additions & 5 deletions docs/en/cloud/security/accessing-s3-data-securely.md
@@ -92,7 +92,7 @@ Trust policy (Please replace `{ClickHouse_IAM_ARN}` with the IAM Role arn belon

IAM policy (Please replace `{BUCKET_NAME}` with your bucket name):

```
```json
{
"Version": "2012-10-17",
"Statement": [
@@ -126,15 +126,15 @@ IAM policy (Please replace `{BUCKET_NAME}` with your bucket name):

ClickHouse Cloud has a new feature that allows you to specify `extra_credentials` as part of the S3 table function. Below is an example of how to run a query using the role created above.

```
describe table s3('https://s3.amazonaws.com/BUCKETNAME/BUCKETOBJECT.csv','CSVWithNames',extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/ClickHouseAccessRole-001'))
```sql
DESCRIBE TABLE s3('https://s3.amazonaws.com/BUCKETNAME/BUCKETOBJECT.csv','CSVWithNames',extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/ClickHouseAccessRole-001'))
```


Below is an example query that uses the `role_session_name` as a shared secret to query data from a bucket. If the `role_session_name` is not correct, this operation will fail.

```
describe table s3('https://s3.amazonaws.com/BUCKETNAME/BUCKETOBJECT.csv','CSVWithNames',extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/ClickHouseAccessRole-001', role_session_name = 'secret-role-name'))
```sql
DESCRIBE TABLE s3('https://s3.amazonaws.com/BUCKETNAME/BUCKETOBJECT.csv','CSVWithNames',extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/ClickHouseAccessRole-001', role_session_name = 'secret-role-name'))
```

:::note
@@ -40,7 +40,7 @@ To change the password assigned to the `default` account in the console, go to t

We recommend creating a new user account associated with each person and granting the user the default_role. This way, activities performed by users are tied to their user IDs and the `default` account is reserved for break-glass activities.

```
```sql
CREATE USER userID IDENTIFIED WITH sha256_hash BY 'hashed_password';
GRANT default_role TO userID;
```
@@ -88,7 +88,7 @@ Custom roles may be created and associated with SQL console users. Since SQL con
To create a custom role for a SQL console user and grant it a general role, run the following commands. The email address must match the user's email address in the console.
1. Create the database_developer role and grant SHOW, CREATE, ALTER, and DELETE permissions.

```
```sql
CREATE ROLE OR REPLACE database_developer;
GRANT SHOW ON * TO database_developer;
GRANT CREATE ON * TO database_developer;
@@ -98,14 +98,14 @@ GRANT DELETE ON * TO database_developer;

2. Create a role for the SQL console user my.user@domain.com and assign it the database_developer role.

```
```sql
CREATE ROLE OR REPLACE `sql-console-role:my.user@domain.com`;
GRANT database_developer TO `sql-console-role:my.user@domain.com`;
```

When using this role construction, the query to show user access needs to be modified to include the role-to-role grant, since the user is not granted access directly.

```
```sql
SELECT grants.user_name,
grants.role_name,
users.name AS role_member,
@@ -124,6 +124,6 @@ Use the SHA256_hash method when [creating user accounts](/docs/en/sql-reference/
**TIP:** Since users with less than administrative privileges cannot set their own password, ask the user to hash their password using a generator
such as [this one](https://tools.keycdn.com/sha256-online-generator) before providing it to the admin to set up the account. Passwords should follow the [requirements](#password-settings) listed above.
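
Alternatively (a sketch, not part of the original tip), the hash can be generated with ClickHouse's own functions on any trusted instance:

```sql
-- 'secret_password' is a placeholder; discard the plaintext after hashing
SELECT lower(hex(SHA256('secret_password'))) AS sha256_hash;
```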

```
```sql
CREATE USER userName IDENTIFIED WITH sha256_hash BY 'hash';
```
2 changes: 1 addition & 1 deletion docs/en/cloud/security/cloud-endpoints-api.md
@@ -101,7 +101,7 @@ If you are using an integration like the MySQL or PostgreSQL Engine, it is possi

For example, to allow access from a ClickHouse Cloud service hosted on AWS in the region `ap-south-1`, you can add the `egress_ips` addresses for that region:

```
```bash
❯ curl -s https://api.clickhouse.cloud/static-ips.json | jq '.'
{
"aws": [
2 changes: 1 addition & 1 deletion docs/en/data-modeling/schema-design.md
@@ -233,7 +233,7 @@ Applying the above guidelines to our `posts` table, let's assume that our users

The query for this question using our earlier `posts_v2` table with optimized types but no ordering key:

```
```sql
SELECT
Id,
Title,
39 changes: 25 additions & 14 deletions docs/en/deployment-guides/horizontal-scaling.md
@@ -371,20 +371,24 @@ As `chnode3` is not storing data and is only used for ClickHouse Keeper to provi
## Testing

1. Connect to `chnode1` and verify that the cluster `cluster_2S_1R` configured above exists
```sql

```sql title="Query"
SHOW CLUSTERS
```
```response

```response title="Response"
┌─cluster───────┐
│ cluster_2S_1R │
└───────────────┘
```

2. Create a database on the cluster
```sql

```sql title="Query"
CREATE DATABASE db1 ON CLUSTER cluster_2S_1R
```
```response

```response title="Response"
┌─host────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chnode2 │ 9000 │ 0 │ │ 1 │ 0 │
│ chnode1 │ 9000 │ 0 │ │ 0 │ 0 │
@@ -396,7 +400,7 @@ CREATE DATABASE db1 ON CLUSTER cluster_2S_1R
We do not need to specify parameters on the table engine since these will be automatically defined based on our macros (see the sanity-check query after this step)
:::

```sql
```sql title="Query"
CREATE TABLE db1.table1 ON CLUSTER cluster_2S_1R
(
`id` UInt64,
@@ -405,30 +409,33 @@
ENGINE = MergeTree
ORDER BY id
```
```response
```response title="Response"
┌─host────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chnode1 │ 9000 │ 0 │ │ 1 │ 0 │
│ chnode2 │ 9000 │ 0 │ │ 0 │ 0 │
└─────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
```
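
Not part of the original guide, but as a quick sanity check the macro values each node will substitute are visible in the standard `system.macros` table:

```sql title="Query"
-- Run on each node; the shard/replica substitutions should differ per host
SELECT * FROM system.macros;
```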

4. Connect to `chnode1` and insert a row
```sql

```sql title="Query"
INSERT INTO db1.table1 (id, column1) VALUES (1, 'abc');
```

5. Connect to `chnode2` and insert a row

```sql
```sql title="Query"
INSERT INTO db1.table1 (id, column1) VALUES (2, 'def');
```

6. Connect to either node, `chnode1` or `chnode2`, and you will see only the row that was inserted into that table on that node. For example, on `chnode2`:
```sql

```sql title="Query"
SELECT * FROM db1.table1;
```
```response

```response title="Response"
┌─id─┬─column1─┐
│ 2 │ def │
└────┴─────────┘
@@ -437,26 +444,30 @@

7. Create a distributed table to query both shards on both nodes.
(In this example, the `rand()` function is set as the sharding key so that it randomly distributes each insert)
```sql

```sql title="Query"
CREATE TABLE db1.table1_dist ON CLUSTER cluster_2S_1R
(
`id` UInt64,
`column1` String
)
ENGINE = Distributed('cluster_2S_1R', 'db1', 'table1', rand())
```
```response

```response title="Response"
┌─host────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chnode2 │ 9000 │ 0 │ │ 1 │ 0 │
│ chnode1 │ 9000 │ 0 │ │ 0 │ 0 │
└─────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘
```
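
As an aside (not one of the original steps), writes can also be routed through the distributed table, in which case the `rand()` sharding key assigns each row to a random shard:

```sql title="Query"
-- Hypothetical extra rows; each one lands on a randomly chosen shard
INSERT INTO db1.table1_dist (id, column1) VALUES (3, 'ghi'), (4, 'jkl');
```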

8. Connect to either `chnode1` or `chnode2` and query the distributed table to see both rows.
```

```sql title="Query"
SELECT * FROM db1.table1_dist;
```
```reponse

```response title="Response"
┌─id─┬─column1─┐
│ 2 │ def │
└────┴─────────┘
2 changes: 1 addition & 1 deletion docs/en/faq/operations/delete-old-data.md
@@ -26,7 +26,7 @@ More details on [configuring TTL](../../engines/table-engines/mergetree-family/m

:::note
DELETE FROM is generally available from version 23.3 and newer. On older versions, it is experimental and must be enabled with:
```
```sql
SET allow_experimental_lightweight_delete = true;
```
:::
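
For orientation, a lightweight delete is then issued as an ordinary `DELETE` statement; the table and filter below are hypothetical:

```sql
-- Marks matching rows as deleted immediately; physical cleanup happens during background merges
DELETE FROM hits WHERE EventDate < '2022-01-01';
```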
20 changes: 11 additions & 9 deletions docs/en/guides/best-practices/skipping-indexes.md
@@ -33,7 +33,7 @@ When a user creates a data skipping index, there will be two additional files in

If some portion of the WHERE clause filtering condition matches the skip index expression when executing a query and reading the relevant column files, ClickHouse will use the index file data to determine whether each relevant block of data must be processed or can be bypassed (assuming that the block has not already been excluded by applying the primary key). To use a very simplified example, consider the following table loaded with predictable data.

```
```sql
CREATE TABLE skip_table
(
my_key UInt64,
@@ -48,7 +48,7 @@ INSERT INTO skip_table SELECT number, intDiv(number,4096) FROM numbers(100000000
When executing a simple query that does not use the primary key, all 100 million entries in the `my_value`
column are scanned:

```
```sql
SELECT * FROM skip_table WHERE my_value IN (125, 700)

┌─my_key─┬─my_value─┐
@@ -62,21 +62,21 @@ SELECT * FROM skip_table WHERE my_value IN (125, 700)

Now add a very basic skip index:

```
```sql
ALTER TABLE skip_table ADD INDEX vix my_value TYPE set(100) GRANULARITY 2;
```

Normally skip indexes are only applied on newly inserted data, so just adding the index won't affect the above query.

To index already existing data, use this statement:

```
```sql
ALTER TABLE skip_table MATERIALIZE INDEX vix;
```

Rerun the query with the newly created index:

```
```sql
SELECT * FROM skip_table WHERE my_value IN (125, 700)

┌─my_key─┬─my_value─┐
@@ -99,13 +99,13 @@ were skipped without reading from disk:
Users can access detailed information about skip index usage by enabling the trace when executing queries. From
clickhouse-client, set the `send_logs_level`:

```
```sql
SET send_logs_level='trace';
```
This will provide useful debugging information when trying to tune query SQL and table indexes. From the above
example, the debug log shows that the skip index dropped all but two granules:

```
```response
<Debug> default.skip_table (933d4b2c-8cea-4bf9-8c93-c56e900eefd1) (SelectExecutor): Index `vix` has dropped 6102/6104 granules.
```
## Skip Index Types
@@ -139,7 +139,7 @@ There are three Data Skipping Index types based on Bloom filters:
This index works only with String, FixedString, and Map datatypes. The input expression is split into character sequences separated by non-alphanumeric characters. For example, a column value of `This is a candidate for a "full text" search` will contain the tokens `This` `is` `a` `candidate` `for` `full` `text` `search`. It is intended for use in LIKE, EQUALS, IN, hasToken() and similar searches for words and other values within longer strings. For example, one possible use might be searching for a small number of class names or line numbers in a column of free form application log lines.
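
A hedged sketch of creating and querying such an index (the table, column, and Bloom filter parameters are illustrative assumptions, not from the original text):

```sql
-- tokenbf_v1(size_of_bloom_filter_in_bytes, number_of_hash_functions, random_seed)
ALTER TABLE app_logs ADD INDEX message_tokens message TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4;
ALTER TABLE app_logs MATERIALIZE INDEX message_tokens;

-- hasToken() searches can then skip granules containing no matching token
SELECT count() FROM app_logs WHERE hasToken(message, 'timeout');
```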

* The specialized **ngrambf_v1**. This index functions the same as the token index. It takes one additional parameter before the Bloom filter settings, the size of the ngrams to index. An ngram is a character string of length `n` of any characters, so the string `A short string` with an ngram size of 4 would be indexed as:
```
```text
'A sh', ' sho', 'shor', 'hort', 'ort ', 'rt s', 't st', ' str', 'stri', 'trin', 'ring'
```
This index can also be useful for text searches, particularly languages without word breaks, such as Chinese.
@@ -176,7 +176,9 @@ Consider the following data distribution:

Assume the primary/order by key is `timestamp`, and there is an index on `visitor_id`. Consider the following query:

`SELECT timestamp, url FROM table WHERE visitor_id = 1001`
```sql
SELECT timestamp, url FROM table WHERE visitor_id = 1001
```

A traditional secondary index would be very advantageous with this kind of data distribution. Instead of reading all 32768 rows to find
the 5 rows with the requested visitor_id, the secondary index would include just five row locations, and only those five rows would be
8 changes: 2 additions & 6 deletions docs/en/guides/developer/alternative-query-languages.md
@@ -58,15 +58,11 @@ SET allow_experimental_kusto_dialect = 1; -- this SET statement is required only
SET dialect = 'kusto'
```

Example KQL query:

```kql
```kql title="Query"
numbers(10) | project number
```

Result:

```
```response title="Response"
┌─number─┐
│ 0 │
│ 1 │
2 changes: 1 addition & 1 deletion docs/en/guides/developer/cascading-materialized-views.md
@@ -365,7 +365,7 @@ GROUP BY

This query should output something like:

```
```response
┌────on_date─┬─domain_name────┬─impressions─┬─clicks─┐
│ 2019-01-01 │ clickhouse.com │ 2 │ 2 │
│ 2019-03-01 │ clickhouse.com │ 1 │ 1 │