diff --git a/docs/cloud/features/03_sql_console_features/03_query-endpoints.md b/docs/cloud/features/03_sql_console_features/03_query-endpoints.md index 80dff31fb3e..6f6a897c219 100644 --- a/docs/cloud/features/03_sql_console_features/03_query-endpoints.md +++ b/docs/cloud/features/03_sql_console_features/03_query-endpoints.md @@ -1,507 +1,24 @@ --- sidebar_title: 'Query API Endpoints' -slug: /cloud/get-started/query-endpoints +slug: /cloud/features/query-api-endpoints description: 'Easily spin up REST API endpoints from your saved queries' keywords: ['api', 'query api endpoints', 'query endpoints', 'query rest api'] title: 'Query API Endpoints' doc_type: 'guide' --- -import Image from '@theme/IdealImage'; -import endpoints_testquery from '@site/static/images/cloud/sqlconsole/endpoints-testquery.png'; -import endpoints_savequery from '@site/static/images/cloud/sqlconsole/endpoints-savequery.png'; -import endpoints_configure from '@site/static/images/cloud/sqlconsole/endpoints-configure.png'; -import endpoints_completed from '@site/static/images/cloud/sqlconsole/endpoints-completed.png'; -import endpoints_curltest from '@site/static/images/cloud/sqlconsole/endpoints-curltest.png'; -import endpoints_monitoring from '@site/static/images/cloud/sqlconsole/endpoints-monitoring.png'; +import {CardSecondary} from '@clickhouse/click-ui/bundled'; +import Link from '@docusaurus/Link'; # Query API endpoints -The **Query API Endpoints** feature allows you to create an API endpoint directly from any saved SQL query in the ClickHouse Cloud console. You'll be able to access API endpoints via HTTP to execute your saved queries without needing to connect to your ClickHouse Cloud service via a native driver. +Building interactive data-driven applications requires more than a fast database, well-structured data, and optimized queries. +Your front-end and microservices also need an easy way to consume the data returned by those queries, preferably via well-structured APIs. 
-## Quick-start guide {#quick-start-guide} +The **Query API Endpoints** feature allows you to create an API endpoint directly from any saved SQL query in the ClickHouse Cloud console. +You'll be able to access API endpoints via HTTP to execute your saved queries without needing to connect to your ClickHouse Cloud service via a native driver. -Before proceeding, ensure you have an API key and an Admin Console Role. You can follow this guide to [create an API key](/cloud/manage/openapi). - -### Creating a saved query {#creating-a-saved-query} - -If you have a saved query, you can skip this step. - -Open a new query tab. For demonstration purposes, we'll use the [youtube dataset](/getting-started/example-datasets/youtube-dislikes), which contains approximately 4.5 billion records. As an example query, we'll return the top 10 uploaders by average views per video in a user-inputted `year` parameter: - -```sql -WITH sum(view_count) AS view_sum, - round(view_sum / num_uploads, 2) AS per_upload -SELECT - uploader, - count() AS num_uploads, - formatReadableQuantity(view_sum) AS total_views, - formatReadableQuantity(per_upload) AS views_per_video -FROM - youtube -WHERE - toYear(upload_date) = {year: UInt16} -group by uploader -order by per_upload desc -limit 10 -``` - -Note that this query contains a parameter (`year`). The SQL console query editor automatically detects ClickHouse query parameter expressions and provides an input for each parameter. Let's quickly run this query to make sure that it works: - -Test the example query - -Next step, we'll go ahead and save the query: - -Save example query - -More documentation around saved queries can be found [here](/cloud/get-started/sql-console#saving-a-query). - -### Configuring the query API endpoint {#configuring-the-query-api-endpoint} - -Query API endpoints can be configured directly from query view by clicking the **Share** button and selecting `API Endpoint`. 
You'll be prompted to specify which API key(s) should be able to access the endpoint: - -Configure query endpoint - -After selecting an API key, the query API endpoint will automatically be provisioned. An example `curl` command will be displayed so you can send a test request: - -Endpoint curl command - -### Query API parameters {#query-api-parameters} - -Query parameters in a query can be specified with the syntax `{parameter_name: type}`. These parameters will be automatically detected and the example request payload will contain a `queryVariables` object through which you can pass these parameters. - -### Testing and monitoring {#testing-and-monitoring} - -Once a Query API endpoint is created, you can test that it works by using `curl` or any other HTTP client: - -endpoint curl test - -After you've sent your first request, a new button should appear immediately to the right of the **Share** button. Clicking it will open a flyout containing monitoring data about the query: - -Endpoint monitoring - -## Implementation details {#implementation-details} - -### Description {#description} - -This route runs a query on a specified query endpoint. It supports different versions, formats, and query variables. The response can be streamed (_version 2 only_) or returned as a single payload. - -### Authentication {#authentication} - -- **Required**: Yes -- **Method**: Basic Auth via OpenAPI Key/Secret -- **Permissions**: Appropriate permissions for the query endpoint. - -### URL parameters {#url-parameters} - -- `queryEndpointId` (required): The unique identifier of the query endpoint to run. - -### Query parameters {#query-parameters} - -#### V1 {#v1} - -None - -#### V2 {#v2} - -- `format` (optional): The format of the response. Supports all formats supported by ClickHouse. -- `param_:name` Query variables to be used in the query. `name` should match the variable name in the query. This should only to be used when the body of the request is a stream. 
-- `:clickhouse_setting` Any supported [ClickHouse setting](/operations/settings/settings) can be passed as a query parameter. - -### Headers {#headers} - -- `x-clickhouse-endpoint-version` (optional): The version of the query endpoint. Supported versions are `1` and `2`. If not provided, the default version is last saved for the endpoint. -- `x-clickhouse-endpoint-upgrade` (optional): Set this header to upgrade the endpoint version. This works in conjunction with the `x-clickhouse-endpoint-version` header. - -### Request body {#request-body} - -- `queryVariables` (optional): An object containing variables to be used in the query. -- `format` (optional): The format of the response. If Query API Endpoint is version 2 any ClickHouse supported format is possible. Supported formats for v1 are: - - TabSeparated - - TabSeparatedWithNames - - TabSeparatedWithNamesAndTypes - - JSON - - JSONEachRow - - CSV - - CSVWithNames - - CSVWithNamesAndTypes - -### Responses {#responses} - -- **200 OK**: The query was successfully executed. -- **400 Bad Request**: The request was malformed. -- **401 Unauthorized**: The request was made without authentication or with insufficient permissions. -- **404 Not Found**: The specified query endpoint was not found. - -### Error handling {#error-handling} - -- Ensure that the request includes valid authentication credentials. -- Validate the `queryEndpointId` and `queryVariables` to ensure they are correct. -- Handle any server errors gracefully, returning appropriate error messages. - -### Upgrading the endpoint version {#upgrading-the-endpoint-version} - -To upgrade the endpoint version from `v1` to `v2`, include the `x-clickhouse-endpoint-upgrade` header in the request and set it to `1`. This will trigger the upgrade process and allow you to use the features and improvements available in `v2`. 
- -## Examples {#examples} - -### Basic request {#basic-request} - -**Query API Endpoint SQL:** - -```sql -SELECT database, name AS num_tables FROM system.tables LIMIT 3; -``` - -#### Version 1 {#version-1} - -**cURL:** - -```bash -curl -X POST 'https://console-api.clickhouse.cloud/.api/query-endpoints//run' \ ---user '' \ --H 'Content-Type: application/json' \ --d '{ "format": "JSONEachRow" }' -``` - -**JavaScript:** - -```javascript -fetch( - "https://console-api.clickhouse.cloud/.api/query-endpoints//run", - { - method: "POST", - headers: { - Authorization: "Basic ", - "Content-Type": "application/json", - }, - body: JSON.stringify({ - format: "JSONEachRow", - }), - } -) - .then((response) => response.json()) - .then((data) => console.log(data)) - .catch((error) => console.error("Error:", error)); -``` - -**Response:** - -```json -{ - "data": { - "columns": [ - { - "name": "database", - "type": "String" - }, - { - "name": "num_tables", - "type": "String" - } - ], - "rows": [ - ["INFORMATION_SCHEMA", "COLUMNS"], - ["INFORMATION_SCHEMA", "KEY_COLUMN_USAGE"], - ["INFORMATION_SCHEMA", "REFERENTIAL_CONSTRAINTS"] - ] - } -} -``` - -#### Version 2 {#version-2} - -**cURL:** - -```bash -curl -X POST 'https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONEachRow' \ ---user '' \ --H 'Content-Type: application/json' \ --H 'x-clickhouse-endpoint-version: 2' -``` - -**JavaScript:** - -```javascript -fetch( - "https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONEachRow", - { - method: "POST", - headers: { - Authorization: "Basic ", - "Content-Type": "application/json", - "x-clickhouse-endpoint-version": "2", - }, - } -) - .then((response) => response.json()) - .then((data) => console.log(data)) - .catch((error) => console.error("Error:", error)); -``` - -**Response:** - -```application/x-ndjson -{"database":"INFORMATION_SCHEMA","num_tables":"COLUMNS"} -{"database":"INFORMATION_SCHEMA","num_tables":"KEY_COLUMN_USAGE"} 
-{"database":"INFORMATION_SCHEMA","num_tables":"REFERENTIAL_CONSTRAINTS"} -``` - -### Request with query variables and version 2 on JSONCompactEachRow format {#request-with-query-variables-and-version-2-on-jsoncompacteachrow-format} - -**Query API Endpoint SQL:** - -```sql -SELECT name, database FROM system.tables WHERE match(name, {tableNameRegex: String}) AND database = {database: String}; -``` - -**cURL:** - -```bash -curl -X POST 'https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONCompactEachRow' \ ---user '' \ --H 'Content-Type: application/json' \ --H 'x-clickhouse-endpoint-version: 2' \ --d '{ "queryVariables": { "tableNameRegex": "query.*", "database": "system" } }' -``` - -**JavaScript:** - -```javascript -fetch( - "https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONCompactEachRow", - { - method: "POST", - headers: { - Authorization: "Basic ", - "Content-Type": "application/json", - "x-clickhouse-endpoint-version": "2", - }, - body: JSON.stringify({ - queryVariables: { - tableNameRegex: "query.*", - database: "system", - }, - }), - } -) - .then((response) => response.json()) - .then((data) => console.log(data)) - .catch((error) => console.error("Error:", error)); -``` - -**Response:** - -```application/x-ndjson -["query_cache", "system"] -["query_log", "system"] -["query_views_log", "system"] -``` - -### Request with array in the query variables that inserts data into a table {#request-with-array-in-the-query-variables-that-inserts-data-into-a-table} - -**Table SQL:** - -```SQL -CREATE TABLE default.t_arr -( - `arr` Array(Array(Array(UInt32))) -) -ENGINE = MergeTree -ORDER BY tuple() -``` - -**Query API Endpoint SQL:** - -```sql - INSERT INTO default.t_arr VALUES ({arr: Array(Array(Array(UInt32)))}); -``` - -**cURL:** - -```bash -curl -X POST 'https://console-api.clickhouse.cloud/.api/query-endpoints//run' \ ---user '' \ --H 'Content-Type: application/json' \ --H 'x-clickhouse-endpoint-version: 2' \ --d '{ - 
"queryVariables": { - "arr": [[[12, 13, 0, 1], [12]]] - } -}' -``` - -**JavaScript:** - -```javascript -fetch( - "https://console-api.clickhouse.cloud/.api/query-endpoints//run", - { - method: "POST", - headers: { - Authorization: "Basic ", - "Content-Type": "application/json", - "x-clickhouse-endpoint-version": "2", - }, - body: JSON.stringify({ - queryVariables: { - arr: [[[12, 13, 0, 1], [12]]], - }, - }), - } -) - .then((response) => response.json()) - .then((data) => console.log(data)) - .catch((error) => console.error("Error:", error)); -``` - -**Response:** - -```text -OK -``` - -### Request with ClickHouse settings max_threads set to 8` {#request-with-clickhouse-settings-max_threads-set-to-8} - -**Query API Endpoint SQL:** - -```sql -SELECT * FROM system.tables; -``` - -**cURL:** - -```bash -curl -X POST 'https://console-api.clickhouse.cloud/.api/query-endpoints//run?max_threads=8,' \ ---user '' \ --H 'Content-Type: application/json' \ --H 'x-clickhouse-endpoint-version: 2' \ -``` - -**JavaScript:** - -```javascript -fetch( - "https://console-api.clickhouse.cloud/.api/query-endpoints//run?max_threads=8", - { - method: "POST", - headers: { - Authorization: "Basic ", - "Content-Type": "application/json", - "x-clickhouse-endpoint-version": "2", - }, - } -) - .then((response) => response.json()) - .then((data) => console.log(data)) - .catch((error) => console.error("Error:", error)); -``` - -### Request and parse the response as a stream` {#request-and-parse-the-response-as-a-stream} - -**Query API Endpoint SQL:** - -```sql -SELECT name, database FROM system.tables; -``` - -**Typescript:** - -```typescript -async function fetchAndLogChunks( - url: string, - openApiKeyId: string, - openApiKeySecret: string -) { - const auth = Buffer.from(`${openApiKeyId}:${openApiKeySecret}`).toString( - "base64" - ); - - const headers = { - Authorization: `Basic ${auth}`, - "x-clickhouse-endpoint-version": "2", - }; - - const response = await fetch(url, { - headers, - method: 
"POST", - body: JSON.stringify({ format: "JSONEachRow" }), - }); - - if (!response.ok) { - console.error(`HTTP error! Status: ${response.status}`); - return; - } - - const reader = response.body as unknown as Readable; - reader.on("data", (chunk) => { - console.log(chunk.toString()); - }); - - reader.on("end", () => { - console.log("Stream ended."); - }); - - reader.on("error", (err) => { - console.error("Stream error:", err); - }); -} - -const endpointUrl = - "https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONEachRow"; -const openApiKeyId = ""; -const openApiKeySecret = ""; -// Usage example -fetchAndLogChunks(endpointUrl, openApiKeyId, openApiKeySecret).catch((err) => - console.error(err) -); -``` - -**Output** - -```shell -> npx tsx index.ts -> {"name":"COLUMNS","database":"INFORMATION_SCHEMA"} -> {"name":"KEY_COLUMN_USAGE","database":"INFORMATION_SCHEMA"} -... -> Stream ended. -``` - -### Insert a stream from a file into a table {#insert-a-stream-from-a-file-into-a-table} - -create a file ./samples/my_first_table_2024-07-11.csv with the following content: - -```csv -"user_id","json","name" -"1","{""name"":""John"",""age"":30}","John" -"2","{""name"":""Jane"",""age"":25}","Jane" -``` - -**Create Table SQL:** - -```sql -create table default.my_first_table -( - user_id String, - json String, - name String, -) ENGINE = MergeTree() -ORDER BY user_id; -``` - -**Query API Endpoint SQL:** - -```sql -INSERT INTO default.my_first_table -``` - -**cURL:** - -```bash -cat ./samples/my_first_table_2024-07-11.csv | curl --user '' \ - -X POST \ - -H 'Content-Type: application/octet-stream' \ - -H 'x-clickhouse-endpoint-version: 2' \ - "https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=CSV" \ - --data-binary @- -``` +:::tip Guide +See the [Query API endpoints guide](/cloud/get-started/query-endpoints) for instructions on how to set up +query API endpoints in a few easy steps +::: \ No newline at end of file diff --git 
a/docs/cloud/guides/SQL_console/query-endpoints.md b/docs/cloud/guides/SQL_console/query-endpoints.md new file mode 100644 index 00000000000..2aee3abc33a --- /dev/null +++ b/docs/cloud/guides/SQL_console/query-endpoints.md @@ -0,0 +1,645 @@ +--- +sidebar_title: 'Query API Endpoints' +slug: /cloud/get-started/query-endpoints +description: 'Easily spin up REST API endpoints from your saved queries' +keywords: ['api', 'query api endpoints', 'query endpoints', 'query rest api'] +title: 'Query API Endpoints' +doc_type: 'guide' +--- + +import Image from '@theme/IdealImage'; +import endpoints_testquery from '@site/static/images/cloud/sqlconsole/endpoints-testquery.png'; +import endpoints_savequery from '@site/static/images/cloud/sqlconsole/endpoints-savequery.png'; +import endpoints_configure from '@site/static/images/cloud/sqlconsole/endpoints-configure.png'; +import endpoints_completed from '@site/static/images/cloud/sqlconsole/endpoints-completed.png'; +import endpoints_curltest from '@site/static/images/cloud/sqlconsole/endpoints-curltest.png'; +import endpoints_monitoring from '@site/static/images/cloud/sqlconsole/endpoints-monitoring.png'; +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +# Setting up query API endpoints + +The **Query API Endpoints** feature allows you to create an API endpoint directly from any saved SQL query in the ClickHouse Cloud console. You'll be able to access API endpoints via HTTP to execute your saved queries without needing to connect to your ClickHouse Cloud service via a native driver. + +## Prerequisites {#quick-start-guide} + +Before proceeding, ensure you have: +- an API key +- an Admin Console Role + +You can follow this guide to [create an API key](/cloud/manage/openapi) if you don't yet have one. + + + +### Create a saved query {#creating-a-saved-query} + +If you have a saved query, you can skip this step. + +Open a new query tab. 
For demonstration purposes, we'll use the [youtube dataset](/getting-started/example-datasets/youtube-dislikes), which contains approximately 4.5 billion records. +Follow the steps in section ["Create table"](/getting-started/example-datasets/youtube-dislikes#create-the-table) to create the table on your Cloud service and insert data into it. + +:::tip `LIMIT` the number of rows +The example dataset tutorial inserts a lot of data - 4.65 billion rows, which can take some time to insert. +For the purposes of this guide, we recommend using the `LIMIT` clause to insert a smaller amount of data, +for example 10 million rows. +::: + +As an example query, we'll return the top 10 uploaders by average views per video for a user-supplied `year` parameter. + +```sql +WITH sum(view_count) AS view_sum, + round(view_sum / num_uploads, 2) AS per_upload +SELECT + uploader, + count() AS num_uploads, + formatReadableQuantity(view_sum) AS total_views, + formatReadableQuantity(per_upload) AS views_per_video +FROM + youtube +WHERE +-- highlight-next-line + toYear(upload_date) = {year: UInt16} +GROUP BY uploader +ORDER BY per_upload DESC +LIMIT 10 +``` + +Note that this query contains a parameter (`year`), which is highlighted in the snippet above. +You can specify query parameters using curly brackets `{ }` together with the type of the parameter. +The SQL console query editor automatically detects ClickHouse query parameter expressions and provides an input for each parameter. + +Let's quickly run this query to make sure that it works by specifying the year `2010` in the query variables input box on the right side of the SQL editor: + +Test the example query + +Next, save the query: + +Save example query + +More documentation around saved queries can be found in section ["Saving a query"](/cloud/get-started/sql-console#saving-a-query). 
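Once a saved query like this is exposed as an endpoint, each ClickHouse parameter such as `{year: UInt16}` maps to a `param_year` entry in the request's query string (the version 2 `param_:name` convention described later in this guide). A minimal sketch of that mapping, using only standard JavaScript and the parameter name from the example query above:

```javascript
// Sketch: how a ClickHouse parameter like {year: UInt16} maps onto the
// HTTP query string of a query API endpoint (v2 `param_<name>` convention).
function buildParamQuery(format, params) {
  const qs = new URLSearchParams({ format });
  for (const [name, value] of Object.entries(params)) {
    // Each query parameter {name: Type} becomes a `param_name` entry.
    qs.set(`param_${name}`, String(value));
  }
  return qs.toString();
}

console.log(buildParamQuery("JSONEachRow", { year: 2010 }));
// format=JSONEachRow&param_year=2010
```

This is only an illustration of the URL shape; the console generates the full endpoint URL for you when the endpoint is configured.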
+ +### Configuring the query API endpoint {#configuring-the-query-api-endpoint} + +Query API endpoints can be configured directly from the query view by clicking the **Share** button and selecting `API Endpoint`. +You'll be prompted to specify which API key(s) should be able to access the endpoint: + +Configure query endpoint + +After selecting an API key, you will be asked to: +- Select the Database role that will be used to run the query (`Full access`, `Read only`, or `Create a custom role`) +- Specify cross-origin resource sharing (CORS) allowed domains + +After selecting these options, the query API endpoint will automatically be provisioned. + +An example `curl` command will be displayed so you can send a test request: + +Endpoint curl command + +The curl command displayed in the interface is given below for convenience: + +```bash +curl -H "Content-Type: application/json" -s --user ':' '?format=JSONEachRow&param_year=' +``` + +### Query API parameters {#query-api-parameters} + +Query parameters in a query can be specified with the syntax `{parameter_name: type}`. These parameters will be automatically detected and the example request payload will contain a `queryVariables` object through which you can pass these parameters. + +### Testing and monitoring {#testing-and-monitoring} + +Once a Query API endpoint is created, you can test that it works by using `curl` or any other HTTP client: + +endpoint curl test + +After you've sent your first request, a new button should appear immediately to the right of the **Share** button. Clicking it will open a flyout containing monitoring data about the query: + +Endpoint monitoring + + + +## Implementation details {#implementation-details} + +This endpoint executes queries on your saved Query API endpoints. +It supports multiple versions, flexible response formats, parameterized queries, and optional streaming responses (version 2 only). 
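The request anatomy detailed below can be sketched in a few lines: a Basic-auth header derived from an OpenAPI key ID/secret pair, a version header, and a JSON body carrying `queryVariables`. This is an illustrative helper, not an official client, and the credential values shown are placeholders:

```javascript
// Sketch: assemble the common pieces of a query API endpoint request.
// Basic auth is built from the OpenAPI key id and secret; queryVariables
// carries the values for any {name: Type} parameters in the saved query.
function buildRequestInit(keyId, keySecret, queryVariables) {
  const auth = Buffer.from(`${keyId}:${keySecret}`).toString("base64");
  return {
    method: "POST",
    headers: {
      Authorization: `Basic ${auth}`,
      "Content-Type": "application/json",
      "x-clickhouse-endpoint-version": "2",
    },
    body: JSON.stringify({ queryVariables }),
  };
}

// Placeholder credentials; a real call would pass this object to fetch().
const init = buildRequestInit("<key id>", "<key secret>", { year: 2010 });
console.log(init.headers["x-clickhouse-endpoint-version"]);
```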
+ +**Endpoint:** + +```text +GET /query-endpoints/{queryEndpointId}/run +POST /query-endpoints/{queryEndpointId}/run +``` + +### HTTP methods {#http-methods} + +| Method | Use Case | Parameters | +|---------|----------|------------| +| **GET** | Simple queries with parameters | Pass query variables via URL parameters (`?param_name=value`) | +| **POST** | Complex queries or when using request body | Pass query variables in request body (`queryVariables` object) | + +**When to use GET:** +- Simple queries without complex nested data +- Parameters can be easily URL-encoded +- Caching benefits from HTTP GET semantics + +**When to use POST:** +- Complex query variables (arrays, objects, large strings) +- When request body is preferred for security/privacy +- Streaming file uploads or large data + +### Authentication {#authentication} + +**Required:** Yes +**Method:** Basic Auth using OpenAPI Key/Secret +**Permissions:** Appropriate permissions for the query endpoint + +### Request configuration {#request-configuration} + +#### URL parameters {#url-params} + +| Parameter | Required | Description | +|-----------|----------|-------------| +| `queryEndpointId` | **Yes** | The unique identifier of the query endpoint to run | + +#### Query parameters {#query-params} + +| Parameter | Required | Description | Example | +|-----------|----------|-------------|---------| +| `format` | No | Response format (supports all ClickHouse formats) | `?format=JSONEachRow` | +| `param_:name` | No | Query variables when request body is a stream. 
Replace `:name` with your variable name | `?param_year=2024` | +| `:clickhouse_setting` | No | Any supported [ClickHouse setting](https://clickhouse.com/docs/operations/settings/settings) | `?max_threads=8` | + +#### Headers {#headers} + +| Header | Required | Description | Values | +|--------|----------|-------------|--------| +| `x-clickhouse-endpoint-version` | No | Specifies the endpoint version | `1` or `2` (defaults to last saved version) | +| `x-clickhouse-endpoint-upgrade` | No | Triggers endpoint version upgrade (use with version header) | `1` to upgrade | + +--- + +### Request body {#request-body} + +#### Parameters {#params} + +| Parameter | Type | Required | Description | +|-----------|------|----------|-------------| +| `queryVariables` | object | No | Variables to be used in the query | +| `format` | string | No | Response format | + +#### Supported formats {#supported-formats} + +| Version | Supported Formats | +|-------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------| +| **Version 2** | All ClickHouse-supported formats | +| **Version 1 (limited)** | TabSeparated
TabSeparatedWithNames
TabSeparatedWithNamesAndTypes
JSON
JSONEachRow
CSV
CSVWithNames
CSVWithNamesAndTypes | + +--- + +### Responses {#responses} + +#### Success {#success} + +**Status:** `200 OK` +The query was successfully executed. + +#### Error codes {#error-codes} + +| Status Code | Description | +|-------------|-------------| +| `400 Bad Request` | The request was malformed | +| `401 Unauthorized` | Missing authentication or insufficient permissions | +| `404 Not Found` | The specified query endpoint was not found | + +#### Error handling best practices {#error-handling-best-practices} + +- Ensure valid authentication credentials are included in the request +- Validate the `queryEndpointId` and `queryVariables` before sending +- Implement graceful error handling with appropriate error messages + +--- + +### Upgrading endpoint versions {#upgrading-endpoint-versions} + +To upgrade from version 1 to version 2: + +1. Include the `x-clickhouse-endpoint-upgrade` header set to `1` +2. Include the `x-clickhouse-endpoint-version` header set to `2` + +This enables access to version 2 features including: +- Support for all ClickHouse formats +- Response streaming capabilities +- Enhanced performance and functionality + +## Examples {#examples} + +### Basic request {#basic-request} + +**Query API Endpoint SQL:** + +```sql +SELECT database, name AS num_tables FROM system.tables LIMIT 3; +``` + +#### Version 1 {#version-1} + + + + +```bash +curl -X POST 'https://console-api.clickhouse.cloud/.api/query-endpoints//run' \ +--user '' \ +-H 'Content-Type: application/json' \ +-d '{ "format": "JSONEachRow" }' +``` + + + +```javascript +fetch( + "https://console-api.clickhouse.cloud/.api/query-endpoints//run", + { + method: "POST", + headers: { + Authorization: "Basic ", + "Content-Type": "application/json", + }, + body: JSON.stringify({ + format: "JSONEachRow", + }), + } +) + .then((response) => response.json()) + .then((data) => console.log(data)) + .catch((error) => console.error("Error:", error)); +``` + +```json title="Response" +{ + "data": { + "columns": [ 
+ { + "name": "database", + "type": "String" + }, + { + "name": "num_tables", + "type": "String" + } + ], + "rows": [ + ["INFORMATION_SCHEMA", "COLUMNS"], + ["INFORMATION_SCHEMA", "KEY_COLUMN_USAGE"], + ["INFORMATION_SCHEMA", "REFERENTIAL_CONSTRAINTS"] + ] + } +} +``` + + + +#### Version 2 {#version-2} + + + + +```bash +curl 'https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONEachRow' \ +--user '' \ +-H 'x-clickhouse-endpoint-version: 2' +``` + +```application/x-ndjson title="Response" +{"database":"INFORMATION_SCHEMA","num_tables":"COLUMNS"} +{"database":"INFORMATION_SCHEMA","num_tables":"KEY_COLUMN_USAGE"} +{"database":"INFORMATION_SCHEMA","num_tables":"REFERENTIAL_CONSTRAINTS"} +``` + + + + +```bash +curl -X POST 'https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONEachRow' \ +--user '' \ +-H 'Content-Type: application/json' \ +-H 'x-clickhouse-endpoint-version: 2' +``` + + + +```javascript +fetch( + "https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONEachRow", + { + method: "POST", + headers: { + Authorization: "Basic ", + "Content-Type": "application/json", + "x-clickhouse-endpoint-version": "2", + }, + } +) + .then((response) => response.json()) + .then((data) => console.log(data)) + .catch((error) => console.error("Error:", error)); +``` + +```application/x-ndjson title="Response" +{"database":"INFORMATION_SCHEMA","num_tables":"COLUMNS"} +{"database":"INFORMATION_SCHEMA","num_tables":"KEY_COLUMN_USAGE"} +{"database":"INFORMATION_SCHEMA","num_tables":"REFERENTIAL_CONSTRAINTS"} +``` + + + +### Request with query variables and version 2 on JSONCompactEachRow format {#request-with-query-variables-and-version-2-on-jsoncompacteachrow-format} + +**Query API Endpoint SQL:** + +```sql +SELECT name, database FROM system.tables WHERE match(name, {tableNameRegex: String}) AND database = {database: String}; +``` + + + + +```bash +curl 
'https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONCompactEachRow&param_tableNameRegex=query.*&param_database=system' \ +--user '' \ +-H 'x-clickhouse-endpoint-version: 2' +``` + +```application/x-ndjson title="Response" +["query_cache", "system"] +["query_log", "system"] +["query_views_log", "system"] +``` + + + + +```bash +curl -X POST 'https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONCompactEachRow' \ +--user '' \ +-H 'Content-Type: application/json' \ +-H 'x-clickhouse-endpoint-version: 2' \ +-d '{ "queryVariables": { "tableNameRegex": "query.*", "database": "system" } }' +``` + + + + +```javascript +fetch( + "https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONCompactEachRow", + { + method: "POST", + headers: { + Authorization: "Basic ", + "Content-Type": "application/json", + "x-clickhouse-endpoint-version": "2", + }, + body: JSON.stringify({ + queryVariables: { + tableNameRegex: "query.*", + database: "system", + }, + }), + } +) + .then((response) => response.json()) + .then((data) => console.log(data)) + .catch((error) => console.error("Error:", error)); +``` + +```application/x-ndjson title="Response" +["query_cache", "system"] +["query_log", "system"] +["query_views_log", "system"] +``` + + + +### Request with array in the query variables that inserts data into a table {#request-with-array-in-the-query-variables-that-inserts-data-into-a-table} + +**Table SQL:** + +```sql +CREATE TABLE default.t_arr +( + `arr` Array(Array(Array(UInt32))) +) +ENGINE = MergeTree +ORDER BY tuple() +``` + +**Query API Endpoint SQL:** + +```sql +INSERT INTO default.t_arr VALUES ({arr: Array(Array(Array(UInt32)))}); +``` + + + + +```bash +curl -X POST 'https://console-api.clickhouse.cloud/.api/query-endpoints//run' \ +--user '' \ +-H 'Content-Type: application/json' \ +-H 'x-clickhouse-endpoint-version: 2' \ +-d '{ + "queryVariables": { + "arr": [[[12, 13, 0, 1], [12]]] + } +}' +``` + + + + +```javascript +fetch( + 
"https://console-api.clickhouse.cloud/.api/query-endpoints//run", + { + method: "POST", + headers: { + Authorization: "Basic ", + "Content-Type": "application/json", + "x-clickhouse-endpoint-version": "2", + }, + body: JSON.stringify({ + queryVariables: { + arr: [[[12, 13, 0, 1], [12]]], + }, + }), + } +) + .then((response) => response.json()) + .then((data) => console.log(data)) + .catch((error) => console.error("Error:", error)); +``` + +```text title="Response" +OK +``` + + + + +### Request with ClickHouse settings `max_threads` set to 8 {#request-with-clickhouse-settings-max_threads-set-to-8} + +**Query API Endpoint SQL:** + +```sql +SELECT * FROM system.tables; +``` + + + + +```bash +curl 'https://console-api.clickhouse.cloud/.api/query-endpoints//run?max_threads=8' \ +--user '' \ +-H 'x-clickhouse-endpoint-version: 2' +``` + + + + +```bash +curl -X POST 'https://console-api.clickhouse.cloud/.api/query-endpoints//run?max_threads=8' \ +--user '' \ +-H 'Content-Type: application/json' \ +-H 'x-clickhouse-endpoint-version: 2' +``` + + + + +```javascript +fetch( + "https://console-api.clickhouse.cloud/.api/query-endpoints//run?max_threads=8", + { + method: "POST", + headers: { + Authorization: "Basic ", + "Content-Type": "application/json", + "x-clickhouse-endpoint-version": "2", + }, + } +) + .then((response) => response.json()) + .then((data) => console.log(data)) + .catch((error) => console.error("Error:", error)); +``` + + + + +### Request and parse the response as a stream {#request-and-parse-the-response-as-a-stream} + +**Query API Endpoint SQL:** + +```sql +SELECT name, database FROM system.tables; +``` + + + + +```typescript +import { Readable } from "stream"; + +async function fetchAndLogChunks( + url: string, + openApiKeyId: string, + openApiKeySecret: string +) { + const auth = Buffer.from(`${openApiKeyId}:${openApiKeySecret}`).toString( + "base64" + ); + + const headers = { + Authorization: `Basic ${auth}`, + "x-clickhouse-endpoint-version": "2", + }; + + const response = await 
fetch(url, { + headers, + method: "POST", + body: JSON.stringify({ format: "JSONEachRow" }), + }); + + if (!response.ok) { + console.error(`HTTP error! Status: ${response.status}`); + return; + } + + const reader = response.body as unknown as Readable; + reader.on("data", (chunk) => { + console.log(chunk.toString()); + }); + + reader.on("end", () => { + console.log("Stream ended."); + }); + + reader.on("error", (err) => { + console.error("Stream error:", err); + }); +} + +const endpointUrl = + "https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=JSONEachRow"; +const openApiKeyId = ""; +const openApiKeySecret = ""; +// Usage example +fetchAndLogChunks(endpointUrl, openApiKeyId, openApiKeySecret).catch((err) => + console.error(err) +); +``` + +```shell title="Output" +> npx tsx index.ts +> {"name":"COLUMNS","database":"INFORMATION_SCHEMA"} +> {"name":"KEY_COLUMN_USAGE","database":"INFORMATION_SCHEMA"} +... +> Stream ended. +``` + + + + +### Insert a stream from a file into a table {#insert-a-stream-from-a-file-into-a-table} + +Create a file `./samples/my_first_table_2024-07-11.csv` with the following content: + +```csv +"user_id","json","name" +"1","{""name"":""John"",""age"":30}","John" +"2","{""name"":""Jane"",""age"":25}","Jane" +``` + +**Create Table SQL:** + +```sql +create table default.my_first_table +( + user_id String, + json String, + name String, +) ENGINE = MergeTree() +ORDER BY user_id; +``` + +**Query API Endpoint SQL:** + +```sql +INSERT INTO default.my_first_table +``` + +```bash +cat ./samples/my_first_table_2024-07-11.csv | curl --user '' \ + -X POST \ + -H 'Content-Type: application/octet-stream' \ + -H 'x-clickhouse-endpoint-version: 2' \ + "https://console-api.clickhouse.cloud/.api/query-endpoints//run?format=CSV" \ + --data-binary @- +``` diff --git a/docs/getting-started/example-datasets/youtube-dislikes.md b/docs/getting-started/example-datasets/youtube-dislikes.md index 14b1a5b029c..c5a0b99da59 100644 --- 
a/docs/getting-started/example-datasets/youtube-dislikes.md +++ b/docs/getting-started/example-datasets/youtube-dislikes.md @@ -22,7 +22,11 @@ The steps below will easily work on a local install of ClickHouse too. The only ## Step-by-step instructions {#step-by-step-instructions} -1. Let's see what the data looks like. The `s3cluster` table function returns a table, so we can `DESCRIBE` the result: + + +### Data exploration {#data-exploration} + +Let's see what the data looks like. The `s3cluster` table function returns a table, so we can `DESCRIBE` the result: ```sql DESCRIBE s3( @@ -59,7 +63,10 @@ ClickHouse infers the following schema from the JSON file: └─────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘ ``` -2. Based on the inferred schema, we cleaned up the data types and added a primary key. Define the following table: +### Create the table {#create-the-table} + +Based on the inferred schema, we cleaned up the data types and added a primary key. +Define the following table: ```sql CREATE TABLE youtube @@ -90,7 +97,9 @@ ENGINE = MergeTree ORDER BY (uploader, upload_date) ``` -3. The following command streams the records from the S3 files into the `youtube` table. +### Insert data {#insert-data} + +The following command streams the records from the S3 files into the `youtube` table. :::important This inserts a lot of data - 4.65 billion rows. If you do not want the entire dataset, simply add a `LIMIT` clause with the desired number of rows. @@ -133,7 +142,10 @@ Some comments about our `INSERT` command: - The `upload_date` column contains valid dates, but it also contains strings like "4 hours ago" - which is certainly not a valid date. 
We decided to store the original value in `upload_date_str` and attempt to parse it with `toDate(parseDateTimeBestEffortUSOrZero(upload_date::String))`. If the parsing fails we just get `0` - We used `ifNull` to avoid getting `NULL` values in our table. If an incoming value is `NULL`, the `ifNull` function is setting the value to an empty string -4. Open a new tab in the SQL Console of ClickHouse Cloud (or a new `clickhouse-client` window) and watch the count increase. It will take a while to insert 4.56B rows, depending on your server resources. (Without any tweaking of settings, it takes about 4.5 hours.) +### Count the number of rows {#count-row-numbers} + +Open a new tab in the SQL Console of ClickHouse Cloud (or a new `clickhouse-client` window) and watch the count increase. +It will take a while to insert 4.56B rows, depending on your server resources. (Without any tweaking of settings, it takes about 4.5 hours.) ```sql SELECT formatReadableQuantity(count()) @@ -146,7 +158,9 @@ FROM youtube └─────────────────────────────────┘ ``` -5. Once the data is inserted, go ahead and count the number of dislikes of your favorite videos or channels. Let's see how many videos were uploaded by ClickHouse: +### Explore the data {#explore-the-data} + +Once the data is inserted, go ahead and count the number of dislikes of your favorite videos or channels. Let's see how many videos were uploaded by ClickHouse: ```sql SELECT count() @@ -166,7 +180,7 @@ WHERE uploader = 'ClickHouse'; The query above runs so quickly because we chose `uploader` as the first column of the primary key - so it only had to process 237k rows. ::: -6. Let's look and likes and dislikes of ClickHouse videos: +Let's look and likes and dislikes of ClickHouse videos: ```sql SELECT @@ -193,7 +207,7 @@ The response looks like: 84 rows in set. Elapsed: 0.013 sec. Processed 155.65 thousand rows, 16.94 MB (11.96 million rows/s., 1.30 GB/s.) ``` -7. 
Here is a search for videos with **ClickHouse** in the `title` or `description` fields: +Here is a search for videos with **ClickHouse** in the `title` or `description` fields: ```sql SELECT @@ -224,6 +238,8 @@ The results look like: │ 3534 │ 62 │ 1 │ https://youtu.be/8nWRhK9gw10 │ CLICKHOUSE - Arquitetura Modular │ ``` + + ## Questions {#questions} ### If someone disables comments does it lower the chance someone will actually click like or dislike? {#if-someone-disables-comments-does-it-lower-the-chance-someone-will-actually-click-like-or-dislike} diff --git a/static/images/cloud/sqlconsole/endpoints-completed.png b/static/images/cloud/sqlconsole/endpoints-completed.png index 55f3c5dd1fb..5d100c27b16 100644 Binary files a/static/images/cloud/sqlconsole/endpoints-completed.png and b/static/images/cloud/sqlconsole/endpoints-completed.png differ diff --git a/static/images/cloud/sqlconsole/endpoints-configure.png b/static/images/cloud/sqlconsole/endpoints-configure.png index 75c4f934187..21a0e10567c 100644 Binary files a/static/images/cloud/sqlconsole/endpoints-configure.png and b/static/images/cloud/sqlconsole/endpoints-configure.png differ diff --git a/static/images/cloud/sqlconsole/endpoints-curltest.png b/static/images/cloud/sqlconsole/endpoints-curltest.png index 44917ae8f69..be3b766c469 100644 Binary files a/static/images/cloud/sqlconsole/endpoints-curltest.png and b/static/images/cloud/sqlconsole/endpoints-curltest.png differ diff --git a/static/images/cloud/sqlconsole/endpoints-monitoring.png b/static/images/cloud/sqlconsole/endpoints-monitoring.png index 1ad8cdb79ea..fe33f02b19a 100644 Binary files a/static/images/cloud/sqlconsole/endpoints-monitoring.png and b/static/images/cloud/sqlconsole/endpoints-monitoring.png differ diff --git a/static/images/cloud/sqlconsole/endpoints-savequery.png b/static/images/cloud/sqlconsole/endpoints-savequery.png index ffa80de31f6..cd9f7ddb606 100644 Binary files a/static/images/cloud/sqlconsole/endpoints-savequery.png and 
b/static/images/cloud/sqlconsole/endpoints-savequery.png differ diff --git a/static/images/cloud/sqlconsole/endpoints-testquery.png b/static/images/cloud/sqlconsole/endpoints-testquery.png index f51f86e50bf..9098f6624d7 100644 Binary files a/static/images/cloud/sqlconsole/endpoints-testquery.png and b/static/images/cloud/sqlconsole/endpoints-testquery.png differ