63 changes: 48 additions & 15 deletions docs/jit.md
@@ -1,34 +1,67 @@
# Just-In-Time (JIT) Compilation

## Overview

Starting from Timeplus Enterprise 2.9, JIT compilation is enabled by default. Timeplus can **compile SQL expressions into native machine code** to significantly improve query performance.
This optimization is especially beneficial for **streaming queries**, where the compiled expressions are reused throughout the entire query lifetime, reducing runtime overhead.

**Example:**

```sql
SELECT ts, key, value AS v, (a + (b * c)) + 5 AS calc
FROM stream;
```

In the above example, the expression `(a + (b * c)) + 5` can be executed in two ways:

- **Interpreted execution**:
  Each operation (`+`, `*`) is evaluated separately through an expression tree, adding overhead for each computation step.

- **JIT-compiled execution**:
  The entire expression is **fused into a single machine instruction sequence**, eliminating interpretation overhead and enabling much faster execution.

For more technical details of the implementation, please check the [blog](https://maksimkita.com/blog/jit_in_clickhouse.html).

![JIT](/img/jit.png)

## Settings

The following settings control Just-In-Time (JIT) compilation behavior.
You can override them at **query time** using:

```sql
SET <key> = <value>;
```

You can also query the current setting values and their descriptions from the system tables:
```sql
select * from system.settings where name like '%to_compile%';
```
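
For example, to make expressions eligible for compilation sooner in the current session (a minimal sketch using one of the settings described below):

```sql
SET min_count_to_compile_expression = 1;
```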

### `min_count_to_compile_expression`

Specifies the **minimum number of times** an identical expression must be executed before it becomes eligible for JIT compilation.

- **Type**: uint64
- **Default**: 3

### `min_count_to_compile_aggregate_expression`

Specifies the **minimum number of identical aggregate expressions** required to trigger JIT compilation.
This setting takes effect only if `compile_aggregate_expressions` is enabled.

- **Type**: uint64
- **Default**: 3

### `min_count_to_compile_sort_description`

Specifies the **number of identical sort descriptions** that must appear before they are JIT-compiled.

- **Type**: uint64
- **Default**: 3

## Metrics

You can monitor JIT compilation activity by querying the system counters:
```sql
select * from system.events where event like 'Compile%';
```
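
As a minimal sketch of how these counters behave (assuming a stream named `stream` with numeric columns `a`, `b` and `c`, as in the Overview example, and wrapping it in `table()` so the query is bounded), run the same expression more times than `min_count_to_compile_expression` (default 3) and compare the counter values before and after:

```sql
-- snapshot the JIT counters before the workload
select * from system.events where event like 'Compile%';

-- run the same expression repeatedly (more than min_count_to_compile_expression times)
SELECT ts, key, value AS v, (a + (b * c)) + 5 AS calc
FROM table(stream)
LIMIT 10;

-- re-check the counters; the Compile% counts should have increased
select * from system.events where event like 'Compile%';
```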
30 changes: 6 additions & 24 deletions docs/sql-alter-stream.md
@@ -1,4 +1,5 @@
# ALTER STREAM

You can modify the retention policy for the historical store via [MODIFY TTL](#ttl) and the retention policy for streaming storage via [MODIFY SETTING](#modify_setting). For mutable streams, you can also run `MODIFY SETTING` to change the RocksDB settings.

You can also use [ALTER VIEW](/sql-alter-view) to modify the settings of materialized views (only available in Timeplus Enterprise).
@@ -20,8 +21,6 @@ logstore_retention_ms = ...,
logstore_retention_bytes = ...;
```

Starting from Timeplus Enterprise 2.7, you can also modify the RocksDB settings for mutable streams. e.g.

```sql
ALTER STREAM test MODIFY SETTING log_kvstore=1, kvstore_options='write_buffer_size=1024;max_write_buffer_number=2;max_background_jobs=4';
```
@@ -32,19 +31,8 @@ You can also change the codec for mutable streams. e.g.
ALTER STREAM test MODIFY SETTING logstore_codec='lz4';
```

Starting from Timeplus Enterprise 2.8.2, you can also modify the TTL for mutable streams.
```sql
ALTER STREAM test MODIFY SETTING ttl_seconds = 10;
```

## MODIFY QUERY SETTING

:::info
This feature is available in Timeplus Enterprise v2.2.8 or above. Not available in Timeplus Proton.

Please use [ALTER VIEW](/sql-alter-view) for these use cases. Altering views or materialized views will be deprecated and removed from the `ALTER STREAM` SQL command.
:::

By default, the checkpoint will be updated every 15 minutes for materialized views. You can change the checkpoint interval without recreating the materialized views.

```sql
@@ -53,12 +41,6 @@ ALTER STREAM mv_with_inner_stream MODIFY QUERY SETTING checkpoint_interval=600

## RESET QUERY SETTING

:::info
This feature is available in Timeplus Enterprise v2.2.8 or above. Not available in Timeplus Proton.

Please use [ALTER VIEW](/sql-alter-view) for these use cases. Altering views or materialized views will be deprecated and removed from the `ALTER STREAM` SQL command.
:::

By default, the checkpoint will be updated every 15 minutes for materialized views. After you change the interval you can reset it.

```sql
@@ -74,23 +56,20 @@ Syntax:
ALTER STREAM stream_name ADD COLUMN column_name data_type
```

Since Timeplus Enterprise 2.8.2, you can also add multiple columns at once:
```sql
ALTER STREAM stream_99005 ADD COLUMN e int, ADD COLUMN f int;
```

`DELETE COLUMN` is not supported yet. Contact us if you have strong use cases.

## RENAME COLUMN
Since Timeplus Enterprise 2.9, you can rename columns in append streams.

```sql
ALTER STREAM stream_name RENAME COLUMN column_name TO new_column_name
```
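
For example (the stream and column names are illustrative):

```sql
ALTER STREAM device_metrics RENAME COLUMN temp TO temperature;
```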

## ADD INDEX

Since Timeplus Enterprise v2.6.0, you can add an index to a mutable stream.
```sql
ALTER STREAM mutable_stream ADD INDEX index_name
```
@@ -103,7 +82,9 @@ ALTER STREAM mutable_stream DROP INDEX index_name
```

## MATERIALIZE INDEX

You can rebuild the secondary index `name` for the specified `partition_name`.

```sql
ALTER STREAM mutable_stream MATERIALIZE INDEX [IF EXISTS] name [IN PARTITION partition_name] SETTINGS mutations_sync = 2
```
@@ -114,7 +95,8 @@ ALTER STREAM minmax_idx MATERIALIZE INDEX idx IN PARTITION 2 SETTINGS mutations_
```

## CLEAR INDEX

You can delete the secondary index `name` from disk.
```sql
ALTER STREAM mutable_stream CLEAR INDEX [IF EXISTS] name [IN PARTITION partition_name] SETTINGS mutations_sync = 2
```
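
For example, mirroring the MATERIALIZE INDEX example above (the stream and index names are illustrative):

```sql
ALTER STREAM minmax_idx CLEAR INDEX idx IN PARTITION 2 SETTINGS mutations_sync = 2
```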
9 changes: 1 addition & 8 deletions docs/sql-alter-view.md
@@ -1,12 +1,9 @@
# ALTER VIEW

You can use this SQL to change a view or a materialized view. Today, only the settings can be changed. To change the SQL query behind the view, you have to drop and re-create it.

## MODIFY QUERY SETTING

:::info
This feature will be available in [Timeplus Enterprise v2.5](/enterprise-v2.5). Not available in Timeplus Proton.
:::

By default, the checkpoint will be updated every 15 minutes for materialized views. You can change the checkpoint interval without recreating the materialized views.

```sql
@@ -15,10 +12,6 @@ ALTER VIEW mv_with_inner_stream MODIFY QUERY SETTING checkpoint_interval=600

## RESET QUERY SETTING

:::info
This feature will be available in [Timeplus Enterprise v2.5](/enterprise-v2.5). Not available in Timeplus Proton.
:::

By default, the checkpoint will be updated every 15 minutes for materialized views. After you change the interval you can reset it.

```sql
2 changes: 0 additions & 2 deletions docs/sql-system-set-log-level.md
@@ -1,7 +1,5 @@
# SYSTEM SET LOG LEVEL

This feature is available in Timeplus Enterprise v2.8.2 or above. Not available in Timeplus Proton.

Example:
```sql
-- Setting global log level to information
Binary file added static/img/jit.png
23 changes: 2 additions & 21 deletions static/llms-full.txt
@@ -387,7 +387,7 @@ The difference between a materialized view and a regular view is that the materi

## 📄️ CREATE MUTABLE STREAM

Regular streams in Timeplus are immutable, and stored in columnar format. Mutable streams are stored in row format (implemented via RocksDB), and can be updated or deleted. Please check this page for details.

## 📄️ CREATE RANDOM STREAM

@@ -497,8 +497,6 @@

## 📄️ SYSTEM SET LOG LEVEL

This feature is available in Timeplus Enterprise v2.8.2 or above. Not available in Timeplus Proton.

## 📄️ SYSTEM TRANSFER LEADER

Transfer the leader of a materialized view to another node in the cluster.
@@ -1022,8 +1020,6 @@ Migrate data and resources between Timeplus deployments, including:
- Migrate the data and configuration from a single-node Timeplus Proton to self-hosted Timeplus Enterprise.
- Migrate the data and configuration among Timeplus Enterprise deployments, even if there are breaking changes in the data format. This tool can also be used to apply changes to production deployments after verifying them in staging deployments.

This tool is available in Timeplus Enterprise 2.5. It supports Timeplus Enterprise 2.4.19 or above, and Timeplus Proton 1.5.18 or above. Contact us if you need to migrate from an older version.

## How It Works​

The migration is done by capturing the SQL DDL from the source deployment and rerunning it in the target deployment. Data is read from the source Timeplus via Timeplus External Streams and written to the target Timeplus via INSERT INTO .. SELECT .. FROM table(tp_ext_stream). The data files won't be copied between the source and target Timeplus, but you need to ensure the target Timeplus can access the source Timeplus, so that it can read data via Timeplus External Streams.
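
A minimal sketch of the data-copy step (the stream names are illustrative, and it assumes a Timeplus External Stream `src_orders` has already been created that points at the source deployment):

```sql
INSERT INTO orders
SELECT * FROM table(src_orders);
```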
@@ -1468,17 +1464,11 @@ For example the SQL file is:

## Cli-user

- [timeplus user | Timeplus](https://docs.timeplus.com/cli-user): This command is no longer available since Timeplus Enterprise 2.7. Please manage the users and groups via the web console or Helm chart.

- Integrations
- CLI, APIs & SDKs
- timeplus (CLI)
- timeplus user

# timeplus user

This command is no longer available since Timeplus Enterprise 2.7. Please manage the users and groups via the web console or Helm chart.

## timeplus user​

When you run timeplus user without extra parameters, it will list all available sub-commands, e.g.
@@ -3071,7 +3061,6 @@ Timeplus supports 4 types of external streams:

- Kafka External Stream
- Pulsar External Stream
- Timeplus External Stream, only available in Timeplus Enterprise
- Log External Stream (experimental)

Besides external streams, Timeplus also provides external tables to query data in ClickHouse, MySQL, Postgres or S3/Iceberg. The difference of external tables and external streams is that external tables are not real-time, and they are not designed for streaming analytics. You can use external tables to query data in the external systems, but you cannot run streaming SQL on them. Learn more about external tables.
@@ -3392,14 +3381,10 @@ group_array_last(<column_name>, max_size) to combine the values of the specific

group_array_sorted(<column_name>) to combine the values of the specific column as an array, sorted in ascending order. For example, if there are 3 rows and the values for this column are "c","b","a", this function will generate a single row and single column with value ['a','b','c'].

This function is available in Timeplus Enterprise v2.8 or later.
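
A minimal usage sketch (assuming a stream `events` with a string column `tag`, queried in bounded mode via `table()`):

```sql
SELECT group_array_sorted(tag) FROM table(events);
-- e.g. if the rows contain 'c', 'b', 'a', the result is ['a','b','c']
```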

### group_array_sample​

group_array_sample(<column_name>, <max_length>) to combine the values of the specific column as an array, sampled randomly. For example, with group_array_sample(col, 2), if there are 3 rows and the values for this column are "a","b","c", this function will generate a single row and single column with a value such as ['a','b'].

This function is available in Timeplus Enterprise v2.8 or later.
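
A similar sketch for sampling (same hypothetical `events` stream):

```sql
SELECT group_array_sample(tag, 2) FROM table(events);
-- returns at most 2 randomly sampled values, e.g. ['a','b']
```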

### group_uniq_array​

group_uniq_array(<column_name>) to combine the values of the specific column as an array, keeping only unique values in it. For example, if there are 3 rows and the values for this column are "a","a","c", this function will generate a single row and single column with value ['a','c'].
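
And a sketch for deduplicated arrays (same hypothetical stream):

```sql
SELECT group_uniq_array(tag) FROM table(events);
-- e.g. if the rows contain 'a', 'a', 'c', the result is ['a','c']
```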
@@ -4194,14 +4179,10 @@ json_query(json, path) allows you to access the nested JSON objects as JSON arra

### json_encode​

This function is available since Timeplus Enterprise v2.9.

This takes one or more parameters and returns a JSON string. You can also turn all column values in the row into a JSON string via json_encode(*).
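
A minimal sketch (hypothetical stream `events` with columns `id` and `name`):

```sql
SELECT json_encode(id, name) FROM table(events);
-- or encode the whole row:
SELECT json_encode(*) FROM table(events);
```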

### json_cast​

This function is available since Timeplus Enterprise v2.9.

This takes one or more parameters and returns a JSON object. You can also turn all column values in the row into a JSON object via json_cast(*).
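
And the corresponding sketch for json_cast (same hypothetical stream):

```sql
SELECT json_cast(*) FROM table(events);
```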

### json_array_length​
@@ -5761,7 +5742,7 @@ The type of the external stream. The value must be http to send data to HTTP end

#### config_file​

You can specify the path to a file that contains the configuration settings. The file should be in the format of key=value pairs, one pair per line. You can set the HTTP credentials or authentication tokens in the file.

Please follow the example in Kafka External Stream.

2 changes: 1 addition & 1 deletion static/llms.txt
@@ -588,7 +588,7 @@

## Sql-system-set-log-level

- [SYSTEM SET LOG LEVEL | Timeplus](https://docs.timeplus.com/sql-system-set-log-level)

## Sql-system-transfer-leader
