diff --git a/docs/en/_snippets/_users-and-roles-common.md b/docs/en/_snippets/_users-and-roles-common.md index 9549009d22f..9343c588e05 100644 --- a/docs/en/_snippets/_users-and-roles-common.md +++ b/docs/en/_snippets/_users-and-roles-common.md @@ -170,7 +170,7 @@ Roles are used to define groups of users for certain privileges instead of manag 1. Log into the clickhouse client using the `clickhouse_admin` user - ``` + ```bash clickhouse-client --user clickhouse_admin --password password ``` @@ -194,7 +194,7 @@ Roles are used to define groups of users for certain privileges instead of manag 3. Log into the ClickHouse client using the `column_user` user - ``` + ```bash clickhouse-client --user column_user --password password ``` @@ -245,7 +245,7 @@ Roles are used to define groups of users for certain privileges instead of manag 1. Log into the ClickHouse client using `row_user` - ``` + ```bash clickhouse-client --user row_user --password password ``` @@ -295,7 +295,7 @@ For example, if one `role1` allows for only select on `column1` and `role2` allo 4. Log into the ClickHouse client using `row_and_column_user` - ``` + ```bash clickhouse-client --user row_and_column_user --password password; ``` diff --git a/docs/en/chdb/guides/clickhouse-local.md b/docs/en/chdb/guides/clickhouse-local.md index a00787adf4b..6f108759388 100644 --- a/docs/en/chdb/guides/clickhouse-local.md +++ b/docs/en/chdb/guides/clickhouse-local.md @@ -98,7 +98,7 @@ from chdb import session as chs Initialize a session pointing to `demo..chdb`: -``` +```python sess = chs.Session("demo.chdb") ``` diff --git a/docs/en/chdb/guides/querying-pandas.md b/docs/en/chdb/guides/querying-pandas.md index 52f806f3b37..9341276cf60 100644 --- a/docs/en/chdb/guides/querying-pandas.md +++ b/docs/en/chdb/guides/querying-pandas.md @@ -321,7 +321,7 @@ from chdb import session as chs Initialize a session: -``` +```python sess = chs.Session() ``` diff --git a/docs/en/cloud/manage/account-close.md b/docs/en/cloud/manage/account-close.md index fd2e97f080a..643aadaeacf 100644 --- a/docs/en/cloud/manage/account-close.md +++ b/docs/en/cloud/manage/account-close.md @@ -39,11 +39,13 @@ below. 3. Click the Help button (question mark in the upper right corner of the screen). 4. Under 'Support' click 'Create case.' 5. In the 'Create new case' screen, enter the following: -``` + +```text Priority: Severity 3 Subject: Please close my ClickHouse account Description: We would appreciate it if you would share a brief note about why you are cancelling. ``` + 5. Click 'Create new case' 6. We will close your account and send a confirmation email to let you know when it is complete. diff --git a/docs/en/cloud/security/accessing-s3-data-securely.md b/docs/en/cloud/security/accessing-s3-data-securely.md index 323e0914411..7188736be8e 100644 --- a/docs/en/cloud/security/accessing-s3-data-securely.md +++ b/docs/en/cloud/security/accessing-s3-data-securely.md @@ -92,7 +92,7 @@ Trust policy (Please replace `{ClickHouse_IAM_ARN}` with the IAM Role arn belon IAM policy (Please replace `{BUCKET_NAME}` with your bucket name): -``` +```json { "Version": "2012-10-17", "Statement": [ @@ -126,15 +126,15 @@ IAM policy (Please replace `{BUCKET_NAME}` with your bucket name): ClickHouse Cloud has a new feature that allows you to specify `extra_credentials` as part of the S3 table function. Below is an example of how to run a query using the newly created role copied from above. 
-``` -describe table s3('https://s3.amazonaws.com/BUCKETNAME/BUCKETOBJECT.csv','CSVWithNames',extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/ClickHouseAccessRole-001')) +```sql +DESCRIBE TABLE s3('https://s3.amazonaws.com/BUCKETNAME/BUCKETOBJECT.csv','CSVWithNames',extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/ClickHouseAccessRole-001')) ``` Below is an example query that uses the `role_session_name` as a shared secret to query data from a bucket. If the `role_session_name` is not correct, this operation will fail. -``` -describe table s3('https://s3.amazonaws.com/BUCKETNAME/BUCKETOBJECT.csv','CSVWithNames',extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/ClickHouseAccessRole-001', role_session_name = 'secret-role-name')) +```sql +DESCRIBE TABLE s3('https://s3.amazonaws.com/BUCKETNAME/BUCKETOBJECT.csv','CSVWithNames',extra_credentials(role_arn = 'arn:aws:iam::111111111111:role/ClickHouseAccessRole-001', role_session_name = 'secret-role-name')) ``` :::note diff --git a/docs/en/cloud/security/cloud-access-management/cloud-access-management.md b/docs/en/cloud/security/cloud-access-management/cloud-access-management.md index 79973cecd9d..8323b6c2c9d 100644 --- a/docs/en/cloud/security/cloud-access-management/cloud-access-management.md +++ b/docs/en/cloud/security/cloud-access-management/cloud-access-management.md @@ -40,7 +40,7 @@ To change the password assigned to the `default` account in the console, go to t We recommend creating a new user account associated with a person and granting the user the default_role. This is so activities performed by users are identified to their user IDs and the `default` account is reserved for break-glass type activities. -``` +```sql CREATE USER userID IDENTIFIED WITH sha256_hash by 'hashed_password'; GRANT default_role to userID; ``` @@ -88,7 +88,7 @@ Custom roles may be created and associated with SQL console users. Since SQL con To create a custom role for a SQL console user and grant it a general role, run the following commands. The email address must match the user's email address in the console. 1. Create the database_developer role and grant SHOW, CREATE, ALTER, and DELETE permissions. -``` +```sql CREATE ROLE OR REPLACE database_developer; GRANT SHOW ON * TO database_developer; GRANT CREATE ON * TO database_developer; @@ -98,14 +98,14 @@ GRANT DELETE ON * TO database_developer; 2. Create a role for the SQL console user my.user@domain.com and assign it the database_developer role. -``` +```sql CREATE ROLE OR REPLACE `sql-console-role:my.user@domain.com`; GRANT database_developer TO `sql-console-role:my.user@domain.com`; ``` When using this role construction, the query to show user access needs to be modified to include the role-to-role grant when the user is not present. 
-``` +```sql SELECT grants.user_name, grants.role_name, users.name AS role_member, diff --git a/docs/en/cloud/security/cloud-access-management/cloud-authentication.md b/docs/en/cloud/security/cloud-access-management/cloud-authentication.md index 1b9f786c071..743936e67a4 100644 --- a/docs/en/cloud/security/cloud-access-management/cloud-authentication.md +++ b/docs/en/cloud/security/cloud-access-management/cloud-authentication.md @@ -124,6 +124,6 @@ Use the SHA256_hash method when [creating user accounts](/docs/en/sql-reference/ **TIP:** Since users with less than administrative privileges cannot set their own password, ask the user to hash their password using a generator such as [this one](https://tools.keycdn.com/sha256-online-generator) before providing it to the admin to setup the account. Passwords should follow the [requirements](#password-settings) listed above. -``` +```sql CREATE USER userName IDENTIFIED WITH sha256_hash BY 'hash'; ``` diff --git a/docs/en/cloud/security/cloud-endpoints-api.md b/docs/en/cloud/security/cloud-endpoints-api.md index 67168efd1b2..e101656aec1 100644 --- a/docs/en/cloud/security/cloud-endpoints-api.md +++ b/docs/en/cloud/security/cloud-endpoints-api.md @@ -101,7 +101,7 @@ If you are using an integration like the MySQL or PostgreSQL Engine, it is possi For example, to allow access from a ClickHouse Cloud service hosted on AWS in the region `ap-south-1`, you can add the `egress_ips` addresses for that region: -``` +```bash ❯ curl -s https://api.clickhouse.cloud/static-ips.json | jq '.' { "aws": [ diff --git a/docs/en/data-modeling/schema-design.md b/docs/en/data-modeling/schema-design.md index 713217f7c92..9f60fc3c13a 100644 --- a/docs/en/data-modeling/schema-design.md +++ b/docs/en/data-modeling/schema-design.md @@ -233,7 +233,7 @@ Applying the above guidelines to our `posts` table, let's assume that our users The query for this question using our earlier `posts_v2` table with optimized types but no ordering key: -``` +```sql SELECT Id, Title, diff --git a/docs/en/deployment-guides/horizontal-scaling.md b/docs/en/deployment-guides/horizontal-scaling.md index 7a7b43b80dd..e7d4e937a95 100644 --- a/docs/en/deployment-guides/horizontal-scaling.md +++ b/docs/en/deployment-guides/horizontal-scaling.md @@ -371,20 +371,24 @@ As `chnode3` is not storing data and is only used for ClickHouse Keeper to provi ## Testing 1. Connect to `chnode1` and verify that the cluster `cluster_2S_1R` configured above exists -```sql + +```sql title="Query" SHOW CLUSTERS ``` -```response + +```response title="Response" ┌─cluster───────┐ │ cluster_2S_1R │ └───────────────┘ ``` 2. Create a database on the cluster -```sql + +```sql title="Query" CREATE DATABASE db1 ON CLUSTER cluster_2S_1R ``` -```response + +```response title="Response" ┌─host────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐ │ chnode2 │ 9000 │ 0 │ │ 1 │ 0 │ │ chnode1 │ 9000 │ 0 │ │ 0 │ 0 │ @@ -396,7 +400,7 @@ CREATE DATABASE db1 ON CLUSTER cluster_2S_1R We do not need not to specify parameters on the table engine since these will be automatically defined based on our macros ::: -```sql +```sql title="Query" CREATE TABLE db1.table1 ON CLUSTER cluster_2S_1R ( `id` UInt64, @@ -405,7 +409,7 @@ CREATE TABLE db1.table1 ON CLUSTER cluster_2S_1R ENGINE = MergeTree ORDER BY id ``` -```response +```response title="Response" ┌─host────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐ │ chnode1 │ 9000 │ 0 │ │ 1 │ 0 │ │ chnode2 │ 9000 │ 0 │ │ 0 │ 0 │ @@ -413,22 +417,25 @@ ORDER BY id ``` 4. 
Connect to `chnode1` and insert a row
-```sql
+
+```sql title="Query"
INSERT INTO db1.table1 (id, column1) VALUES (1, 'abc');
```

5. Connect to `chnode2` and insert a row
-```sql
+```sql title="Query"
INSERT INTO db1.table1 (id, column1) VALUES (2, 'def');
```

6. Connect to either node, `chnode1` or `chnode2` and you will see only the row that was inserted into that table on that node.
for example, on `chnode2`
-```sql
+
+```sql title="Query"
SELECT * FROM db1.table1;
```
-```response
+
+```response title="Response"
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘
@@ -437,7 +444,8 @@ SELECT * FROM db1.table1;

7. Create a distributed table to query both shards on both nodes. (In this example, the `rand()` function is set as the sharding key so that it randomly distributes each insert)

-```sql
+
+```sql title="Query"
CREATE TABLE db1.table1_dist ON CLUSTER cluster_2S_1R
(
    `id` UInt64,
@@ -445,7 +453,8 @@ CREATE TABLE db1.table1_dist ON CLUSTER cluster_2S_1R
)
ENGINE = Distributed('cluster_2S_1R', 'db1', 'table1', rand())
```
-```response
+
+```response title="Response"
┌─host────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ chnode2 │ 9000 │      0 │       │                   1 │                0 │
│ chnode1 │ 9000 │      0 │       │                   0 │                0 │
@@ -453,10 +462,12 @@ ENGINE = Distributed('cluster_2S_1R', 'db1', 'table1', rand())

8. Connect to either `chnode1` or `chnode2` and query the distributed table to see both rows.
-```
+
+```sql title="Query"
SELECT * FROM db1.table1_dist;
```
-```reponse
+
+```response title="Response"
┌─id─┬─column1─┐
│  2 │ def     │
└────┴─────────┘
diff --git a/docs/en/faq/operations/delete-old-data.md b/docs/en/faq/operations/delete-old-data.md
index 8852d917a1d..f9563da0c4f 100644
--- a/docs/en/faq/operations/delete-old-data.md
+++ b/docs/en/faq/operations/delete-old-data.md
@@ -26,7 +26,7 @@ More details on [configuring TTL](../../engines/table-engines/mergetree-family/m

:::note
DELETE FROM is generally available from version 23.3 and newer. On older versions, it is experimental and must be enabled with:
-```
+```sql
SET allow_experimental_lightweight_delete = true;
```
:::
diff --git a/docs/en/guides/best-practices/skipping-indexes.md b/docs/en/guides/best-practices/skipping-indexes.md
index 847acc738f5..d9f7e7a0df4 100644
--- a/docs/en/guides/best-practices/skipping-indexes.md
+++ b/docs/en/guides/best-practices/skipping-indexes.md
@@ -33,7 +33,7 @@ When a user creates a data skipping index, there will be two additional files in

If some portion of the WHERE clause filtering condition matches the skip index expression when executing a query and reading the relevant column files, ClickHouse will use the index file data to determine whether each relevant block of data must be processed or can be bypassed (assuming that the block has not already been excluded by applying the primary key). To use a very simplified example, consider the following table loaded with predictable data.
-```
+```sql
CREATE TABLE skip_table
(
  my_key UInt64,
@@ -48,7 +48,7 @@ INSERT INTO skip_table SELECT number, intDiv(number,4096) FROM numbers(100000000

When executing a simple query that does not use the primary key, all 100 million entries in the `my_value` column are scanned:

-```
+```sql
SELECT * FROM skip_table WHERE my_value IN (125, 700)

┌─my_key─┬─my_value─┐
@@ -62,7 +62,7 @@ SELECT * FROM skip_table WHERE my_value IN (125, 700)

Now add a very basic skip index:

-```
+```sql
ALTER TABLE skip_table ADD INDEX vix my_value TYPE set(100) GRANULARITY 2;
```

@@ -70,13 +70,13 @@ Normally skip indexes are only applied on newly inserted data, so just adding th

To index already existing data, use this statement:

-```
+```sql
ALTER TABLE skip_table MATERIALIZE INDEX vix;
```

Rerun the query with the newly created index:

-```
+```sql
SELECT * FROM skip_table WHERE my_value IN (125, 700)

┌─my_key─┬─my_value─┐
@@ -99,13 +99,13 @@ were skipped without reading from disk:

Users can access detailed information about skip index usage by enabling the trace when executing queries. From clickhouse-client, set the `send_logs_level`:

-```
+```sql
SET send_logs_level='trace';
```

This will provide useful debugging information when trying to tune query SQL and table indexes. From the above example, the debug log shows that the skip index dropped all but two granules:

-```
+```response
default.skip_table (933d4b2c-8cea-4bf9-8c93-c56e900eefd1) (SelectExecutor): Index `vix` has dropped 6102/6104 granules.
```
## Skip Index Types
@@ -139,7 +139,7 @@ There are three Data Skipping Index types based on Bloom filters:

This index works only with String, FixedString, and Map datatypes. The input expression is split into character sequences separated by non-alphanumeric characters. For example, a column value of `This is a candidate for a "full text" search` will contain the tokens `This` `is` `a` `candidate` `for` `full` `text` `search`. It is intended for use in LIKE, EQUALS, IN, hasToken() and similar searches for words and other values within longer strings. For example, one possible use might be searching for a small number of class names or line numbers in a column of free form application log lines.

* The specialized **ngrambf_v1**. This index functions the same as the token index. It takes one additional parameter before the Bloom filter settings, the size of the ngrams to index. An ngram is a character string of length `n` of any characters, so the string `A short string` with an ngram size of 4 would be indexed as:
- ```
+ ```text
 'A sh', ' sho', 'shor', 'hort', 'ort ', 'rt s', 't st', ' str', 'stri', 'trin', 'ring'
 ```
This index can also be useful for text searches, particularly languages without word breaks, such as Chinese.
@@ -176,7 +176,9 @@ Consider the following data distribution:

Assume the primary/order by key is `timestamp`, and there is an index on `visitor_id`. Consider the following query:

- `SELECT timestamp, url FROM table WHERE visitor_id = 1001`
+```sql
+SELECT timestamp, url FROM table WHERE visitor_id = 1001
+```

A traditional secondary index would be very advantageous with this kind of data distribution.
Instead of reading all 32768 rows to find the 5 rows with the requested visitor_id, the secondary index would include just five row locations, and only those five rows would be diff --git a/docs/en/guides/developer/alternative-query-languages.md b/docs/en/guides/developer/alternative-query-languages.md index eb31f9d7168..d1979e78263 100644 --- a/docs/en/guides/developer/alternative-query-languages.md +++ b/docs/en/guides/developer/alternative-query-languages.md @@ -58,15 +58,11 @@ SET allow_experimental_kusto_dialect = 1; -- this SET statement is required only SET dialect = 'kusto' ``` -Example KQL query: - -```kql +```kql title="Query" numbers(10) | project number ``` -Result: - -``` +```response title="Response" ┌─number─┐ │ 0 │ │ 1 │ diff --git a/docs/en/guides/developer/cascading-materialized-views.md b/docs/en/guides/developer/cascading-materialized-views.md index e495fc06721..f7140173749 100644 --- a/docs/en/guides/developer/cascading-materialized-views.md +++ b/docs/en/guides/developer/cascading-materialized-views.md @@ -365,7 +365,7 @@ GROUP BY This query should output something like: -``` +```response ┌────on_date─┬─domain_name────┬─impressions─┬─clicks─┐ │ 2019-01-01 │ clickhouse.com │ 2 │ 2 │ │ 2019-03-01 │ clickhouse.com │ 1 │ 1 │ diff --git a/docs/en/guides/developer/lightweight-update.md b/docs/en/guides/developer/lightweight-update.md index 9bfc82cecdc..dadb03e33bd 100644 --- a/docs/en/guides/developer/lightweight-update.md +++ b/docs/en/guides/developer/lightweight-update.md @@ -47,7 +47,7 @@ SELECT id, v FROM test_on_fly_mutations ORDER BY id; Note that the values of the rows have not yet been updated when we query the new table: -``` +```response ┌─id─┬─v─┐ │ 1 │ a │ │ 2 │ b │ @@ -66,7 +66,7 @@ SELECT id, v FROM test_on_fly_mutations ORDER BY id; The `SELECT` query now returns the correct result immediately, without having to wait for the mutations to be applied: -``` +```response ┌─id─┬─v─┐ │ 3 │ c │ └────┴───┘ diff --git a/docs/en/guides/developer/understanding-query-execution-with-the-analyzer.md b/docs/en/guides/developer/understanding-query-execution-with-the-analyzer.md index f7015a06590..f90f227fc48 100644 --- a/docs/en/guides/developer/understanding-query-execution-with-the-analyzer.md +++ b/docs/en/guides/developer/understanding-query-execution-with-the-analyzer.md @@ -142,7 +142,7 @@ SELECT type, min(timestamp) AS minimum_date, max(timestamp) AS maximum_date, cou Even though this is giving us some information, we can get more. For example, maybe we want to know the column's name on top of which we need the projections. You can add the header to the query: -``` +```SQL EXPLAIN header = 1 WITH ( SELECT count(*) @@ -347,7 +347,7 @@ GROUP BY type FORMAT TSV ``` -``` +```response digraph { rankdir="LR"; @@ -397,7 +397,7 @@ GROUP BY type FORMAT TSV ``` -``` +```response digraph { rankdir="LR"; diff --git a/docs/en/guides/sre/keeper/index.md b/docs/en/guides/sre/keeper/index.md index 41a83b31e9f..2055b8bc4e0 100644 --- a/docs/en/guides/sre/keeper/index.md +++ b/docs/en/guides/sre/keeper/index.md @@ -171,7 +171,7 @@ The 4lw commands has a white list configuration `four_letter_word_white_list` wh You can issue the commands to ClickHouse Keeper via telnet or nc, at the client port. -``` +```bash echo mntr | nc localhost 9181 ``` @@ -179,13 +179,13 @@ Bellow is the detailed 4lw commands: - `ruok`: Tests if server is running in a non-error state. The server will respond with `imok` if it is running. Otherwise, it will not respond at all. 
A response of `imok` does not necessarily indicate that the server has joined the quorum, just that the server process is active and bound to the specified client port. Use "stat" for details on state with respect to quorum and client connection information. -``` +```response imok ``` - `mntr`: Outputs a list of variables that could be used for monitoring the health of the cluster. -``` +```response zk_version v21.11.1.1-prestable-7a4a0b0edef0ad6e0aa662cd3b90c3f4acf796e7 zk_avg_latency 0 zk_max_latency 0 @@ -207,7 +207,7 @@ zk_synced_followers 0 - `srvr`: Lists full details for the server. -``` +```response ClickHouse Keeper version: v21.11.1.1-prestable-7a4a0b0edef0ad6e0aa662cd3b90c3f4acf796e7 Latency min/avg/max: 0/0/0 Received: 2 @@ -221,7 +221,7 @@ Node count: 4 - `stat`: Lists brief details for the server and connected clients. -``` +```response ClickHouse Keeper version: v21.11.1.1-prestable-7a4a0b0edef0ad6e0aa662cd3b90c3f4acf796e7 Clients: 192.168.1.1:52852(recved=0,sent=0) @@ -244,7 +244,7 @@ Server stats reset. - `conf`: Print details about serving configuration. -``` +```response server_id=1 tcp_port=2181 four_letter_word_white_list=* @@ -277,20 +277,20 @@ configuration_change_tries_count=20 - `cons`: List full connection/session details for all clients connected to this server. Includes information on numbers of packets received/sent, session id, operation latencies, last operation performed, etc... -``` +```response 192.168.1.1:52163(recved=0,sent=0,sid=0xffffffffffffffff,lop=NA,est=1636454787393,to=30000,lzxid=0xffffffffffffffff,lresp=0,llat=0,minlat=0,avglat=0,maxlat=0) 192.168.1.1:52042(recved=9,sent=18,sid=0x0000000000000001,lop=List,est=1636454739887,to=30000,lcxid=0x0000000000000005,lzxid=0x0000000000000005,lresp=1636454739892,llat=0,minlat=0,avglat=0,maxlat=0) ``` - `crst`: Reset connection/session statistics for all connections. -``` +```response Connection stats reset. ``` - `envi`: Print details about serving environment -``` +```response Environment: clickhouse.keeper.version=v21.11.1.1-prestable-7a4a0b0edef0ad6e0aa662cd3b90c3f4acf796e7 host.name=ZBMAC-C02D4054M.local @@ -307,7 +307,7 @@ user.tmp=/var/folders/b4/smbq5mfj7578f2jzwn602tt40000gn/T/ - `dirs`: Shows the total size of snapshot and log files in bytes -``` +```response snapshot_dir_size: 0 log_dir_size: 3875 ``` @@ -320,28 +320,28 @@ rw - `wchs`: Lists brief information on watches for the server. -``` +```response 1 connections watching 1 paths Total watches:1 ``` - `wchc`: Lists detailed information on watches for the server, by session. This outputs a list of sessions (connections) with associated watches (paths). Note, depending on the number of watches this operation may be expensive (impact server performance), use it carefully. -``` +```response 0x0000000000000001 /clickhouse/task_queue/ddl ``` - `wchp`: Lists detailed information on watches for the server, by path. This outputs a list of paths (znodes) with associated sessions. Note, depending on the number of watches this operation may be expensive (i.e., impact server performance), use it carefully. -``` +```response /clickhouse/task_queue/ddl 0x0000000000000001 ``` - `dump`: Lists the outstanding sessions and ephemeral nodes. This only works on the leader. -``` +```response Sessions dump (2): 0x0000000000000001 0x0000000000000002 @@ -352,13 +352,13 @@ Sessions with Ephemerals (1): - `csnp`: Schedule a snapshot creation task. 
Return the last committed log index of the scheduled snapshot if success or `Failed to schedule snapshot creation task.` if failed. Note that `lgif` command can help you determine whether the snapshot is done. -``` +```response 100 ``` - `lgif`: Keeper log information. `first_log_idx` : my first log index in log store; `first_log_term` : my first log term; `last_log_idx` : my last log index in log store; `last_log_term` : my last log term; `last_committed_log_idx` : my last committed log index in state machine; `leader_committed_log_idx` : leader's committed log index from my perspective; `target_committed_log_idx` : target log index should be committed to; `last_snapshot_idx` : the largest committed log index in last snapshot. -``` +```response first_log_idx 1 first_log_term 1 last_log_idx 101 @@ -371,13 +371,13 @@ last_snapshot_idx 50 - `rqld`: Request to become new leader. Return `Sent leadership request to leader.` if request sent or `Failed to send leadership request to leader.` if request not sent. Note that if node is already leader the outcome is same as the request is sent. -``` +```response Sent leadership request to leader. ``` - `ftfl`: Lists all feature flags and whether they are enabled for the Keeper instance. -``` +```response filtered_list 1 multi_read 1 check_not_exists 0 @@ -385,13 +385,13 @@ check_not_exists 0 - `ydld`: Request to yield leadership and become follower. If the server receiving the request is leader, it will pause write operations first, wait until the successor (current leader can never be successor) finishes the catch-up of the latest log, and then resign. The successor will be chosen automatically. Return `Sent yield leadership request to leader.` if request sent or `Failed to send yield leadership request to leader.` if request not sent. Note that if node is already follower the outcome is same as the request is sent. -``` +```response Sent yield leadership request to leader. ``` - `pfev`: Returns the values for all collected events. For each event it returns event name, event value, and event's description. -``` +```response FileOpen 62 Number of files opened. Seek 4 Number of times the 'lseek' function was called. ReadBufferFromFileDescriptorRead 126 Number of reads (read/pread) from a file descriptor. Does not include sockets. diff --git a/docs/en/guides/sre/user-management/index.md b/docs/en/guides/sre/user-management/index.md index 4722efedc42..5abe2fe4d3f 100644 --- a/docs/en/guides/sre/user-management/index.md +++ b/docs/en/guides/sre/user-management/index.md @@ -241,7 +241,7 @@ To `GRANT` or `REVOKE` privileges, the user must have those privileges themselve The `ALTER` hierarchy: -``` +```response . ├── ALTER (only for table and view)/ │ ├── ALTER TABLE/ @@ -286,11 +286,13 @@ The `ALTER` hierarchy: Using an `GRANT ALTER on *.* TO my_user` will only affect top-level `ALTER TABLE` and `ALTER VIEW` , other `ALTER` statements must be individually granted or revoked. for example, granting basic `ALTER` privilege: + ```sql GRANT ALTER ON my_db.my_table TO my_user; ``` Resulting set of privileges: + ```sql SHOW GRANTS FOR my_user; ``` @@ -310,11 +312,13 @@ This will grant all permissions under `ALTER TABLE` and `ALTER VIEW` from the ex If only a subset of `ALTER` permissions is needed then each can be granted separately, if there are sub-privileges to that permission then those would be automatically granted also. 
For example:
+
```sql
GRANT ALTER COLUMN ON my_db.my_table TO my_user;
```

Grants would be set as:
+
```sql
SHOW GRANTS FOR my_user;
```
@@ -332,6 +336,7 @@ Query id: 47b3d03f-46ac-4385-91ec-41119010e4e2
```

This also gives the following sub-privileges:
+
```sql
ALTER ADD COLUMN
ALTER DROP COLUMN
@@ -348,6 +353,7 @@ The `REVOKE` statement works similarly to the `GRANT` statement.

If a user/role was granted a sub-privilege, you can either revoke that sub-privilege directly or revoke the higher-level privilege it inherits from.

For example, if the user was granted `ALTER ADD COLUMN`
+
```sql
GRANT ALTER ADD COLUMN ON my_db.my_table TO my_user;
```
@@ -377,12 +383,14 @@ Query id: 27791226-a18f-46c8-b2b4-a9e64baeb683
```

A privilege can be revoked individually:
+
```sql
REVOKE ALTER ADD COLUMN ON my_db.my_table FROM my_user;
```

Or can be revoked from any of the upper levels (revoke all of the COLUMN sub privileges):
-```
+
+```sql
REVOKE ALTER COLUMN ON my_db.my_table FROM my_user;
```
@@ -411,10 +419,12 @@ Ok.
```

**Additional**
+
The privileges must be granted by a user that not only has the `WITH GRANT OPTION` but also has the privileges themselves.

1. To grant an admin user the privilege and also allow them to administer a set of privileges
Below is an example:
+
```sql
GRANT SELECT, ALTER COLUMN ON my_db.my_table TO my_alter_admin WITH GRANT OPTION;
```
diff --git a/docs/en/guides/sre/user-management/ssl-user-auth.md b/docs/en/guides/sre/user-management/ssl-user-auth.md
index 0ac0766eaea..e0f4a75cdd2 100644
--- a/docs/en/guides/sre/user-management/ssl-user-auth.md
+++ b/docs/en/guides/sre/user-management/ssl-user-auth.md
@@ -102,7 +102,7 @@ For details on how to enable SQL users and set roles, refer to [Defining SQL Use
    ```

3. Run `clickhouse-client`.
-    ```
+    ```bash
    clickhouse-client --user --query 'SHOW TABLES'
    ```
    :::note
diff --git a/docs/en/integrations/clickhouse-client-local.md b/docs/en/integrations/clickhouse-client-local.md
index 2b7471d5c68..6b7c5715c30 100644
--- a/docs/en/integrations/clickhouse-client-local.md
+++ b/docs/en/integrations/clickhouse-client-local.md
@@ -29,7 +29,7 @@ bash

## Download ClickHouse

-```
+```bash
curl https://clickhouse.com/ | sh
```
diff --git a/docs/en/integrations/data-ingestion/apache-spark/spark-native-connector.md b/docs/en/integrations/data-ingestion/apache-spark/spark-native-connector.md
index 6e106f62d43..b84188d5fd2 100644
--- a/docs/en/integrations/data-ingestion/apache-spark/spark-native-connector.md
+++ b/docs/en/integrations/data-ingestion/apache-spark/spark-native-connector.md
@@ -85,7 +85,7 @@ Both approaches ensure the ClickHouse connector is available in your Spark envir

Add the following repository if you want to use SNAPSHOT version.

-```
+```maven
  sonatype-oss-snapshots
```
@@ -149,7 +149,7 @@ for production.
The name pattern of the binary JAR is: -``` +```bash clickhouse-spark-runtime-${spark_binary_version}_${scala_binary_version}-${version}.jar ``` diff --git a/docs/en/integrations/data-ingestion/clickpipes/postgres/source/generic.md b/docs/en/integrations/data-ingestion/clickpipes/postgres/source/generic.md index d75e41322fe..74d069890d4 100644 --- a/docs/en/integrations/data-ingestion/clickpipes/postgres/source/generic.md +++ b/docs/en/integrations/data-ingestion/clickpipes/postgres/source/generic.md @@ -81,7 +81,7 @@ Make sure to replace `clickpipes_user` and `clickpipes_password` with your desir If you are self serving, you need to allow connections to the ClickPipes user from the ClickPipes IP addresses by following the below steps. If you are using a managed service, you can do the same by following the provider's documentation. 1. Make necessary changes to the `pg_hba.conf` file to allow connections to the ClickPipes user from the ClickPipes IP addresses. An example entry in the `pg_hba.conf` file would look like: - ``` + ```response host all clickpipes_user 0.0.0.0/0 scram-sha-256 ``` diff --git a/docs/en/integrations/data-ingestion/clickpipes/secure-kinesis.md b/docs/en/integrations/data-ingestion/clickpipes/secure-kinesis.md index a729db82ede..e1d94fd6560 100644 --- a/docs/en/integrations/data-ingestion/clickpipes/secure-kinesis.md +++ b/docs/en/integrations/data-ingestion/clickpipes/secure-kinesis.md @@ -59,7 +59,7 @@ Trust policy (Please replace `{ClickHouse_IAM_ARN}` with the IAM Role arn belong IAM policy (Please replace `{STREAM_NAME}` with your Kinesis stream name): -``` +```json { "Version": "2012-10-17", "Statement": [ diff --git a/docs/en/integrations/data-ingestion/data-formats/binary.md b/docs/en/integrations/data-ingestion/data-formats/binary.md index c6fbbb5d0c4..f0af3f90b32 100644 --- a/docs/en/integrations/data-ingestion/data-formats/binary.md +++ b/docs/en/integrations/data-ingestion/data-formats/binary.md @@ -199,7 +199,7 @@ This saves data to the [proto.bin](assets/proto.bin) file. ClickHouse also suppo Another popular binary serialization format supported by ClickHouse is [Cap’n Proto](https://capnproto.org/). Similarly to `Protobuf` format, we have to define a schema file ([`schema.capnp`](assets/schema.capnp)) in our example: -``` +```response @0xec8ff1a10aa10dbe; struct PathStats { diff --git a/docs/en/integrations/data-ingestion/data-formats/csv-tsv.md b/docs/en/integrations/data-ingestion/data-formats/csv-tsv.md index 56cdf7ad690..ad4f51014f3 100644 --- a/docs/en/integrations/data-ingestion/data-formats/csv-tsv.md +++ b/docs/en/integrations/data-ingestion/data-formats/csv-tsv.md @@ -332,7 +332,7 @@ In sophisticated cases, text data can be formatted in a highly custom manner but Suppose we have the following data in the file: -``` +```text row('Akiba_Hebrew_Academy';'2017-08-01';241),row('Aegithina_tiphia';'2018-02-01';34),... 
``` diff --git a/docs/en/integrations/data-ingestion/data-formats/json/formats.md b/docs/en/integrations/data-ingestion/data-formats/json/formats.md index 9561fae982d..1bb8a7335e9 100644 --- a/docs/en/integrations/data-ingestion/data-formats/json/formats.md +++ b/docs/en/integrations/data-ingestion/data-formats/json/formats.md @@ -96,9 +96,10 @@ SELECT * FROM sometable; In some cases, the list of JSON objects can be encoded as object properties instead of array elements (see [objects.json](../assets/objects.json) for example): -``` +```bash cat objects.json ``` + ```response { "a": { diff --git a/docs/en/integrations/data-ingestion/data-formats/templates-regex.md b/docs/en/integrations/data-ingestion/data-formats/templates-regex.md index d3077a3484a..9de5f05b85d 100644 --- a/docs/en/integrations/data-ingestion/data-formats/templates-regex.md +++ b/docs/en/integrations/data-ingestion/data-formats/templates-regex.md @@ -23,7 +23,7 @@ head error.log We can use a [Template](/docs/en/interfaces/formats.md/#format-template) format to import this data. We have to define a template string with values placeholders for each row of input data: -``` +```response