diff --git a/docs/automq-kafka-source.md b/docs/automq-kafka-source.md
index 49b9c6bd9..2e8ca10a2 100644
--- a/docs/automq-kafka-source.md
+++ b/docs/automq-kafka-source.md
@@ -28,5 +28,5 @@ Click **Next**. Timeplus will connect to the server and list all topics. Choose
In the next step, confirm the schema of the Timeplus stream and specify a name. At the end of the wizard, an external stream will be created in Timeplus. You can query data or even write data to the AutoMQ topic with SQL.
See also:
-* [Kafka External Stream](/proton-kafka)
+* [Kafka External Stream](/kafka-source)
* [Tutorial: Streaming ETL from Kafka to ClickHouse](/tutorial-sql-etl-kafka-to-ch)
diff --git a/docs/bigquery-external.md b/docs/bigquery-external.md
new file mode 100644
index 000000000..957261113
--- /dev/null
+++ b/docs/bigquery-external.md
@@ -0,0 +1,41 @@
+# BigQuery
+
+Leveraging the [HTTP external stream](/http-external-stream), you can write or materialize data to BigQuery directly from Timeplus.
+
+## Write to BigQuery {#example-write-to-bigquery}
+
+Assume you have created a table in BigQuery with two columns:
+```sql
+create table `PROJECT.DATASET.http_sink_t1`(
+ num int,
+ str string);
+```
+
+Follow [the guide](https://cloud.google.com/bigquery/docs/authentication) to choose the proper authentication method for Google Cloud, such as obtaining a token via the gcloud CLI: `gcloud auth application-default print-access-token`.
+
+Create the HTTP external stream in Timeplus:
+```sql
+CREATE EXTERNAL STREAM http_bigquery_t1 (num int,str string)
+SETTINGS
+type = 'http',
+http_header_Authorization='Bearer $OAUTH_TOKEN',
+url = 'https://bigquery.googleapis.com/bigquery/v2/projects/$PROJECT/datasets/$DATASET/tables/$TABLE/insertAll',
+data_format = 'Template',
+format_template_resultset_format='{"rows":[${data}]}',
+format_template_row_format='{"json":{"num":${num:JSON},"str":${str:JSON}}}',
+format_template_rows_between_delimiter=','
+```
+
+Replace `OAUTH_TOKEN` with the output of `gcloud auth application-default print-access-token`, or obtain the OAuth token in another secure way. Replace `PROJECT`, `DATASET` and `TABLE` to match your BigQuery table path. Also adjust `format_template_row_format` to match the table schema.
+
+Then you can insert data via a materialized view, or directly via the `INSERT` command:
+```sql
+INSERT INTO http_bigquery_t1 VALUES(10,'A'),(11,'B');
+```
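+
+To continuously write query results to BigQuery, set the external stream as the target of a materialized view. Below is a minimal sketch, assuming a hypothetical source stream `events` with compatible `num` and `str` columns:
+```sql
+CREATE MATERIALIZED VIEW mv_to_bigquery
+INTO http_bigquery_t1 AS
+SELECT num, str FROM events;
+```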
diff --git a/docs/changelog-stream.md b/docs/changelog-stream.md
index c42416baa..ae4a89963 100644
--- a/docs/changelog-stream.md
+++ b/docs/changelog-stream.md
@@ -403,7 +403,7 @@ Debezium also read all existing rows and generate messages like this
### Load data to Timeplus
-You can follow this [guide](/proton-kafka) to add 2 external streams to load data from Kafka or Redpanda. For example:
+You can follow this [guide](/kafka-source) to add 2 external streams to load data from Kafka or Redpanda. For example:
* Data source name `s1` to load data from topic `doc.public.dim_products` and put in a new stream `rawcdc_dim_products`
* Data source name `s2` to load data from topic `doc.public.orders` and put in a new stream `rawcdc_orders`
diff --git a/docs/cli-migrate.md b/docs/cli-migrate.md
index 9224d143c..6bcc2edd9 100644
--- a/docs/cli-migrate.md
+++ b/docs/cli-migrate.md
@@ -8,7 +8,7 @@ This tool is available in Timeplus Enterprise 2.5. It supports [Timeplus Enterpr
## How It Works
-The migration is done via capturing the SQL DDL from the source deployment and rerunning those SQL DDL in the target deployment. Data are read from source Timeplus via [Timeplus External Streams](/timeplus-external-stream) and write to the target Timeplus via `INSERT INTO .. SELECT .. FROM table(tp_ext_stream)`. The data files won't be copied among the source and target Timeplus, but you need to ensure the target Timeplus can access to the source Timeplus, so that it can read data via Timeplus External Streams.
+The migration is done by capturing the SQL DDL from the source deployment and rerunning those SQL DDL statements in the target deployment. Data is read from the source Timeplus via [Timeplus External Streams](/timeplus-source) and written to the target Timeplus via `INSERT INTO .. SELECT .. FROM table(tp_ext_stream)`. The data files won't be copied between the source and target Timeplus, but you need to ensure the target Timeplus can access the source Timeplus, so that it can read data via Timeplus External Streams.
## Supported Resources
diff --git a/docs/proton-clickhouse-external-table.md b/docs/clickhouse-external-table.md
similarity index 98%
rename from docs/proton-clickhouse-external-table.md
rename to docs/clickhouse-external-table.md
index b5420575c..9632ddca4 100644
--- a/docs/proton-clickhouse-external-table.md
+++ b/docs/clickhouse-external-table.md
@@ -1,5 +1,7 @@
# ClickHouse External Table
+## Overview
+
Timeplus can read or write ClickHouse tables directly. This unlocks a set of new use cases, such as
- Use Timeplus to efficiently process real-time data in Kafka/Redpanda, apply flat transformation or stateful aggregation, then write the data to the local or remote ClickHouse for further analysis or visualization.
@@ -41,7 +43,7 @@ The required settings are type and address. For other settings, the default valu
The `config_file` setting is available since Timeplus Enterprise 2.7. You can specify the path to a file that contains the configuration settings. The file should be in the format of `key=value` pairs, one pair per line. You can set the ClickHouse user and password in the file.
-Please follow the example in [Kafka External Stream](/proton-kafka#config_file).
+Please follow the example in [Kafka External Stream](/kafka-source#config_file).
You don't need to specify the columns, since the table schema will be fetched from the ClickHouse server.
diff --git a/docs/ingestion.md b/docs/connect-data-in.md
similarity index 96%
rename from docs/ingestion.md
rename to docs/connect-data-in.md
index c13dafa5d..5ba63fb76 100644
--- a/docs/ingestion.md
+++ b/docs/connect-data-in.md
@@ -1,9 +1,9 @@
-# Getting Data In
+# Connect Data In
Timeplus supports multiple ways to load data into the system, or access the external data without copying them in Timeplus:
- [External Stream for Apache Kafka](/external-stream), Confluent, Redpanda, and other Kafka API compatible data streaming platform. This feature is also available in Timeplus Proton.
-- [External Stream for Apache Pulsar](/pulsar-external-stream) is available in Timeplus Enterprise 2.5 and above.
+- [External Stream for Apache Pulsar](/pulsar-source) is available in Timeplus Enterprise 2.5 and above.
- Source for extra wide range of data sources. This is only available in Timeplus Enterprise. This integrates with [Redpanda Connect](https://redpanda.com/connect), supporting 200+ connectors.
- On Timeplus web console, you can also [upload CSV files](#csv) and import them into streams.
- For Timeplus Enterprise, [REST API](/ingest-api) and SDKs are provided to push data to Timeplus programmatically.
@@ -15,12 +15,12 @@ Timeplus supports multiple ways to load data into the system, or access the exte
Choose "Data Collection" from the navigation menu to setup data access to other systems. There are two categories:
* Timeplus Connect: directly supported by Timeplus Inc, with easy-to-use setup wizards.
* Demo Stream: generate random data for various use cases. [Learn more](#streamgen)
- * Timeplus: read data from another Timeplus deployment. [Learn more](/timeplus-external-stream)
+ * Timeplus: read data from another Timeplus deployment. [Learn more](/timeplus-source)
* Apache Kafka: setup external streams to read from Apache Kafka. [Learn more](#kafka)
* Confluent Cloud: setup external streams to read from Confluent Cloud
* Redpanda: setup external streams to read from Redpanda
* Apache Pulsar: setup external streams to read from Apache Pulsar. [Learn more](#pulsar)
- * ClickHouse: setup external tables to read from ClickHouse, without duplicating data in Timeplus. [Learn more](/proton-clickhouse-external-table)
+ * ClickHouse: setup external tables to read from ClickHouse, without duplicating data in Timeplus. [Learn more](/clickhouse-external-table)
* NATS: load data from NATS to Timeplus streams
* WebSocket: load data from WebSocket to Timeplus streams
* HTTP Stream: load data from HTTP stream to Timeplus streams
@@ -29,19 +29,17 @@ Choose "Data Collection" from the navigation menu to setup data access to other
* Stream Ingestion: a wizard to guide you to push data to Timeplus via Ingest REST API. [Learn more](/ingest-api)
* Redpanda Connect: available since Timeplus Enterprise 2.5 or above. Set up data access to other systems by editing a YAML file. Powered by Redpanda Connect, supported by Redpanda Data Inc. or Redpanda Community.
-
-
### Load streaming data from Apache Kafka {#kafka}
As of today, Kafka is the primary data integration for Timeplus. With our strong partnership with Confluent, you can load your real-time data from Confluent Cloud, Confluent Platform, or Apache Kafka into the Timeplus streaming engine. You can also create [external streams](/external-stream) to analyze data in Confluent/Kafka/Redpanda without moving data.
-[Learn more.](/proton-kafka)
+[Learn more.](/kafka-source)
### Load streaming data from Apache Pulsar {#pulsar}
Apache® Pulsar™ is a cloud-native, distributed, open source messaging and streaming platform for real-time workloads. Since Timeplus Enterprise 2.5, Pulsar External Streams can be created to read or write data for Pulsar.
-[Learn more.](/pulsar-external-stream)
+[Learn more.](/pulsar-source)
### Upload local files {#csv}
diff --git a/docs/databricks-external.md b/docs/databricks-external.md
new file mode 100644
index 000000000..106792b36
--- /dev/null
+++ b/docs/databricks-external.md
@@ -0,0 +1,44 @@
+# Databricks
+
+Leveraging the [HTTP external stream](/http-external-stream), you can write or materialize data to Databricks directly from Timeplus.
+
+## Write to Databricks {#example-write-to-databricks}
+
+Follow [the guide](https://docs.databricks.com/aws/en/dev-tools/auth/pat) to create an access token for your Databricks workspace.
+
+Assume you have created a table in a Databricks SQL warehouse with two columns:
+```sql
+CREATE TABLE sales (
+ product STRING,
+ quantity INT
+);
+```
+
+Create the HTTP external stream in Timeplus:
+```sql
+CREATE EXTERNAL STREAM http_databricks_t1 (product string, quantity int)
+SETTINGS
+type = 'http',
+http_header_Authorization='Bearer $TOKEN',
+url = 'https://$HOST.cloud.databricks.com/api/2.0/sql/statements/',
+data_format = 'Template',
+format_template_resultset_format='{"warehouse_id":"$WAREHOUSE_ID","statement": "INSERT INTO sales (product, quantity) VALUES (:product, :quantity)", "parameters": [${data}]}',
+format_template_row_format='{ "name": "product", "value": ${product:JSON}, "type": "STRING" },{ "name": "quantity", "value": ${quantity:JSON}, "type": "INT" }',
+format_template_rows_between_delimiter=''
+```
+
+Replace `TOKEN`, `HOST`, and `WAREHOUSE_ID` to match your Databricks settings. Also change `format_template_resultset_format` and `format_template_row_format` to match the table schema.
+
+Then you can insert data via a materialized view, or directly via the `INSERT` command:
+```sql
+INSERT INTO http_databricks_t1(product, quantity) VALUES('test',95);
+```
+
+This inserts one row per request. We plan to support batch inserts and a Databricks-specific format for different table schemas in the future.
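+
+To write continuously instead of via ad-hoc `INSERT`s, set the external stream as the target of a materialized view (each row still results in one request). Below is a minimal sketch, assuming a hypothetical source stream `orders` with matching columns:
+```sql
+CREATE MATERIALIZED VIEW mv_to_databricks
+INTO http_databricks_t1 AS
+SELECT product, quantity FROM orders;
+```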
diff --git a/docs/datadog-external.md b/docs/datadog-external.md
new file mode 100644
index 000000000..fb7dd39fa
--- /dev/null
+++ b/docs/datadog-external.md
@@ -0,0 +1,31 @@
+# Datadog
+
+Leveraging the [HTTP external stream](/http-external-stream), you can write or materialize data to Datadog directly from Timeplus.
+
+## Write to Datadog {#example-write-to-datadog}
+
+Create a new API key, or use an existing one, with the proper permission to send data.
+
+Create the HTTP external stream in Timeplus:
+```sql
+CREATE EXTERNAL STREAM datadog_t1 (message string, hostname string)
+SETTINGS
+type = 'http',
+data_format = 'JSONEachRow',
+output_format_json_array_of_rows = 1,
+http_header_DD_API_KEY = 'THE_API_KEY',
+http_header_Content_Type = 'application/json',
+url = 'https://http-intake.logs.us3.datadoghq.com/api/v2/logs' -- make sure you set the right region
+```
+
+Then you can insert data via a materialized view, or directly via the `INSERT` command:
+```sql
+INSERT INTO datadog_t1(message, hostname) VALUES('test message','a.test.com'),('test2','a.test.com');
+```
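+
+To forward logs continuously, set the external stream as the target of a materialized view. Below is a minimal sketch, assuming a hypothetical source stream `app_logs` with `message` and `hostname` columns:
+```sql
+CREATE MATERIALIZED VIEW mv_to_datadog
+INTO datadog_t1 AS
+SELECT message, hostname FROM app_logs;
+```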
diff --git a/docs/elastic-external.md b/docs/elastic-external.md
new file mode 100644
index 000000000..fe5ea7f03
--- /dev/null
+++ b/docs/elastic-external.md
@@ -0,0 +1,32 @@
+# Elasticsearch
+
+Leveraging the [HTTP external stream](/http-external-stream), you can write data to Elasticsearch or OpenSearch directly from Timeplus.
+
+## Write to OpenSearch / Elasticsearch {#example-write-to-es}
+
+Assuming you have created an index `students` in a deployment of OpenSearch or Elasticsearch, you can create the following external stream to write data to the index.
+
+```sql
+CREATE EXTERNAL STREAM opensearch_t1 (
+ name string,
+ gpa float32,
+ grad_year int16
+) SETTINGS
+type = 'http',
+data_format = 'OpenSearch', -- can also use the alias "ElasticSearch"
+url = 'https://opensearch.company.com:9200/students/_bulk',
+username='admin',
+password='..'
+```
+
+Then you can insert data via a materialized view, or directly via the `INSERT` command:
+```sql
+INSERT INTO opensearch_t1(name,gpa,grad_year) VALUES('Jonathan Powers',3.85,2025);
+```
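+
+To keep the index continuously updated, set the external stream as the target of a materialized view. Below is a minimal sketch, assuming a hypothetical source stream `student_updates` with matching columns:
+```sql
+CREATE MATERIALIZED VIEW mv_to_opensearch
+INTO opensearch_t1 AS
+SELECT name, gpa, grad_year FROM student_updates;
+```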
diff --git a/docs/enterprise-v2.4.md b/docs/enterprise-v2.4.md
index ed38d5d40..7a677de4c 100644
--- a/docs/enterprise-v2.4.md
+++ b/docs/enterprise-v2.4.md
@@ -12,10 +12,10 @@ Each component tracks their changes with own version numbers. The version number
## Key Highlights
Key highlights of this release:
* [Distributed Mutable Streams](/mutable-stream) for high performance query and UPSERT (UPDATE or INSERT), with primary keys, secondary keys, column families, sorting columns, parallel full scan and many more
-* [External Streams](/timeplus-external-stream) to query or write to remote Timeplus, designed for data migration or hybrid deployment
+* [External Streams](/timeplus-source) to query or write to remote Timeplus, designed for data migration or hybrid deployment
* Built-in system observability. Your workspace now comes with a system dashboard to monitor your cluster, including charts for running nodes and failed nodes, read/write throughput and EPS, used disk storage, and more. See additional metrics for resources in the details side panel, accessed via the data lineage or resource list pages, including status and any last errors
-* [Kafka schema registry support for Avro output format](/proton-schema-registry#write)
-* Read/write Kafka message keys via [_tp_message_key column](/proton-kafka#_tp_message_key)
+* [Kafka schema registry support for Avro output format](/kafka-schema-registry#write)
+* Read/write Kafka message keys via [_tp_message_key column](/kafka-source#_tp_message_key)
* More performance enhancements, including:
* Concurrent and [idempotent data ingestion](/idempotent)
* Memory efficiency improvement for window processing
@@ -48,7 +48,7 @@ Compared to the [2.4.28](#2_4_28) release:
* fix: truncate garbage data at tail for reverse indexes
#### Known issues {#known_issue_2_4_29}
-1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
### 2.4.28 (Stable) {#2_4_28}
Built on 08-12-2025. You can install via:
@@ -70,7 +70,7 @@ Compared to the [2.4.27](#2_4_27) release:
* fix: timestamp sequence deserialization issue
#### Known issues {#known_issue_2_4_28}
-1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
### 2.4.27 (Stable) {#2_4_27}
Built on 08-05-2025. You can install via:
@@ -94,7 +94,7 @@ Compared to the [2.4.26](#2_4_26) release:
* fix: log truncation and garbage collection
#### Known issues {#known_issue_2_4_27}
-1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
### 2.4.26 (Stable) {#2_4_26}
@@ -119,7 +119,7 @@ Compared to the [2.4.25](#2_4_25) release:
* fix a bug during versioned schema fetch for inner storage of materialized views
#### Known issues {#known_issue_2_4_26}
-1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
### 2.4.25 (Stable) {#2_4_25}
Built on 01-31-2025. You can install via:
@@ -143,7 +143,7 @@ Compared to the [2.4.23](#2_4_23) release:
* set mutable streams' default logstore retention policy from keeping forever to automatic
#### Known issues {#known_issue_2_4_25}
-1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
### 2.4.23 (Stable) {#2_4_23}
Built on 08-22-2024. You can install via:
@@ -168,7 +168,7 @@ Compared to the [2.4.19](#2_4_19) release:
* bugfixes and performance enhancements
#### Known issues {#known_issue_2_4_23}
-1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
### 2.4.19 {#2_4_19}
@@ -186,7 +186,7 @@ Compared to the [2.4.17](#2_4_17) release:
* feat(ingest): use username:password for ingest API wizard
#### Known issues {#known_issue_2_4_19}
-1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
2. In Timeplus Console, no result will be shown for SQL [SHOW FORMAT SCHEMAS](/sql-show-format-schemas) or [SHOW FUNCTIONS](/sql-show-functions). This only impacts the web interface. Running such SQL via `timeplusd client` CLI or JDBC/ODBC will get the expected results.
### 2.4.17 {#2_4_17}
@@ -201,14 +201,14 @@ Compared to the [2.4.16](#2_4_16) release:
Components:
* timeplusd
- * feat: support running [table function](/functions_for_streaming#table) on [Timeplus External Stream](/timeplus-external-stream)
+ * feat: support running [table function](/functions_for_streaming#table) on [Timeplus External Stream](/timeplus-source)
* improvement: track more stats: external_stream_read_failed, external_stream_written_failed, mv_recover_times, mv_memory_usage.
* improvement: better track memory usage in macOS and Docker container.
* feat: allow you to [drop streams](/sql-drop-stream#force_drop_big_stream) with `force_drop_big_stream=true` setting.
* improvement: default listen for 0.0.0.0 instead 127.1 (localhost)
#### Known issues {#known_issue_2_4_17}
-1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
2. In Timeplus Console, no result will be shown for SQL [SHOW FORMAT SCHEMAS](/sql-show-format-schemas) or [SHOW FUNCTIONS](/sql-show-functions). This only impacts the web interface. Running such SQL via `timeplusd client` CLI or JDBC/ODBC will get the expected results.
### 2.4.16 (Stable) {#2_4_16}
@@ -245,7 +245,7 @@ Components:
* fix: list users properly
#### Known issues {#known_issue_2_4_16}
-1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
2. In Timeplus Console, no result will be shown for SQL [SHOW FORMAT SCHEMAS](/sql-show-format-schemas) or [SHOW FUNCTIONS](/sql-show-functions). This only impacts the web interface. Running such SQL via `timeplusd client` CLI or JDBC/ODBC will get the expected results.
@@ -269,7 +269,7 @@ Components:
* timeplusd
* feat: [new mutable stream](/mutable-stream) for fast UPSERT and high performance point or range query.
* perf: better asof join performance
- * feat: [external stream to read data from the remote timeplusd](/timeplus-external-stream)
+ * feat: [external stream to read data from the remote timeplusd](/timeplus-source)
* feat: [parallel key space scan](/mutable-stream#key_space_full_scan_threads)
* feat: force_full_scan for mutable stream
* feat: user management on cluster
@@ -277,8 +277,8 @@ Components:
* feat: support remote UDF on cluster
* feat: primary key columns in secondary key
* feat: support [ALTER STREAM .. ADD COLUMN ..](sql-alter-stream#add-column)
- * feat: _tp_message_key to [read/write message keys in Kafka](/proton-kafka#_tp_message_key)
- * feat: [Kafka schema registry support for Avro output format](/proton-schema-registry#write)
+ * feat: _tp_message_key to [read/write message keys in Kafka](/kafka-source#_tp_message_key)
+ * feat: [Kafka schema registry support for Avro output format](/kafka-schema-registry#write)
* feat: support [idempotent keys processing](/idempotent)
* feat: collect node free memory usage. You can get it via `select cluster_id, node_id, os_memory_total_mb, os_memory_free_mb, memory_used_mb, disk_total_mb, disk_free_mb, timestamp from system.cluster`
* fix: nullptr access in window function
@@ -317,6 +317,6 @@ Components:
* feat: for stop command, terminate the service if graceful stop times out
#### Known issues {#known_issue_2_4_15}
-1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.3.x releases](/enterprise-v2.3), you cannot reuse the data and configuration directly. Please have a clean installation of 2.4.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
2. In Timeplus Console, no result will be shown for SQL [SHOW FORMAT SCHEMAS](/sql-show-format-schemas) or [SHOW FUNCTIONS](/sql-show-functions). This only impacts the web interface. Running such SQL via `timeplusd client` CLI or JDBC/ODBC will get the expected results.
3. For [timeplus user](/cli-user) CLI, you need to add `--verbose` to `timeplus user list` command, in order to list users.
diff --git a/docs/enterprise-v2.5.md b/docs/enterprise-v2.5.md
index 59bbe3699..331bfa7d7 100644
--- a/docs/enterprise-v2.5.md
+++ b/docs/enterprise-v2.5.md
@@ -11,13 +11,13 @@ Each component tracks their changes with own version numbers. The version number
## Key Highlights
Key highlights of this release:
-* Reading or writing data in Apache Pulsar or StreamNative via External Stream. [Learn more](/pulsar-external-stream).
+* Reading or writing data in Apache Pulsar or StreamNative via External Stream. [Learn more](/pulsar-source).
* Connecting to various input or output systems via Redpanda Connect. [Learn more](/redpanda-connect).
* Creating and managing users in the Web Console. You can change the password and assign the user either Administrator or Read-only role.
* New [migrate](/cli-migrate) subcommand in [timeplus CLI](/cli-reference) for data migration and backup/restore.
-* Materialized views auto-rebalancing in the cluster mode. [Learn more](/view#auto-balancing).
+* Materialized views auto-rebalancing in the cluster mode. [Learn more](/materialized-view#auto-balancing).
* Approximately 30% faster data ingestion and replication in the cluster mode.
-* Performance improvement for [ASOF JOIN](/joins) and [EMIT ON UPDATE](/streaming-aggregations#emit_on_update).
+* Performance improvement for [ASOF JOIN](/streaming-joins) and [EMIT ON UPDATE](/streaming-aggregations#emit_on_update).
## Supported OS {#os}
|Deployment Type| OS |
@@ -49,7 +49,7 @@ Compared to the [2.5.13](#2_5_13) release:
* Handle log corruption more gracefully and fixes log truncation.
#### Known issues {#known_issue_2_5_14}
-1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
2. Pulsar external streams are only available in Linux bare metal builds and Linux-based Docker images. This type of external stream is not available in macOS bare metal builds.
### 2.5.13 (Public GA) {#2_5_13}
@@ -72,7 +72,7 @@ Compared to the [2.5.12](#2_5_12) release:
* Bug fixes without new features
#### Known issues {#known_issue_2_5_13}
-1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
2. Pulsar external streams are only available in Linux bare metal builds and Linux-based Docker images. This type of external stream is not available in macOS bare metal builds.
### 2.5.12 (Public GA) {#2_5_12}
@@ -95,7 +95,7 @@ Compared to the [2.5.11](#2_5_11) release:
* Able to drop malformed UDFs with `DROP FUNCTION udf_name SETTINGS force=true`.
#### Known issues {#known_issue_2_5_12}
-1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
2. Pulsar external streams are only available in Linux bare metal builds and Linux-based Docker images. This type of external stream is not available in macOS bare metal builds.
### 2.5.11 (Public GA) {#2_5_11}
@@ -121,7 +121,7 @@ Compared to the [2.5.10](#2_5_10) release:
You can upgrade a deployment of Timeplus Enterprise 2.4 to Timeplus Enterprise 2.5, by stopping the components and replacing the binary files, or reusing the Docker or Kubernetes volumes and update the image versions.
#### Known issues {#known_issue_2_5_11}
-1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
2. Pulsar external streams are only available in Linux bare metal builds and Linux-based Docker images. This type of external stream is not available in macOS bare metal builds.
### 2.5.10 (Controlled Release) {#2_5_10}
@@ -149,7 +149,7 @@ Compared to the [2.5.9](#2_5_9) release:
You can upgrade a deployment of Timeplus Enterprise 2.4 to Timeplus Enterprise 2.5, by stopping the components and replacing the binary files, or reusing the Docker or Kubernetes volumes and update the image versions.
#### Known issues {#known_issue_2_5_10}
-1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
2. Pulsar external streams are only available in Linux bare metal builds and Linux-based Docker images. This type of external stream is not available in macOS bare metal builds.
### 2.5.9 (Controlled Release) {#2_5_9}
@@ -168,12 +168,12 @@ Component versions:
Compared to the [2.4.23](/enterprise-v2.4#2_4_23) release:
* timeplusd 2.3.30 -> 2.4.23
- * new type of [External Streams for Apache Pulsar](/pulsar-external-stream).
+ * new type of [External Streams for Apache Pulsar](/pulsar-source).
* for bare metal installation, previously you can login with the username `default` with empty password. To improve the security, this user has been removed.
* enhancement for nullable data types in streaming and historical queries.
- * Materialized views auto-rebalancing in the cluster mode.[Learn more](/view#auto-balancing).
+ * Materialized views auto-rebalancing in the cluster mode. [Learn more](/materialized-view#auto-balancing).
* Approximately 30% faster data ingestion and replication in the cluster mode.
- * Performance improvement for [ASOF JOIN](/joins) and [EMIT ON UPDATE](/streaming-aggregations#emit_on_update).
+ * Performance improvement for [ASOF JOIN](/streaming-joins) and [EMIT ON UPDATE](/streaming-aggregations#emit_on_update).
* timeplus_web 1.4.33 -> 2.0.6
* UI to add/remove user or change role and password. This works for both single node and cluster.
* UI for inputs/outputs from Redpanda Connect.
@@ -194,5 +194,5 @@ Compared to the [2.4.23](/enterprise-v2.4#2_4_23) release:
You can upgrade a deployment of Timeplus Enterprise 2.4 to Timeplus Enterprise 2.5, by stopping the components and replacing the binary files, or reusing the Docker or Kubernetes volumes and update the image versions.
#### Known issues {#known_issue_2_5_9}
-1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for migration.
+1. If you have deployed one of the [2.4.x releases](/enterprise-v2.4), you can reuse the data and configuration directly. However, if your current deployment is [2.3](/enterprise-v2.3) or earlier, you cannot upgrade directly. Please have a clean installation of 2.5.x release, then use tools like [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for migration.
2. Pulsar external streams are only available in Linux bare metal builds and Linux-based Docker images. This type of external stream is not available in macOS bare metal builds.
diff --git a/docs/enterprise-v2.6.md b/docs/enterprise-v2.6.md
index b6993a136..267539d3e 100644
--- a/docs/enterprise-v2.6.md
+++ b/docs/enterprise-v2.6.md
@@ -13,7 +13,7 @@ Each component maintains its own version numbers. The version number for each Ti
Key highlights of this release:
* **Revolutionary hybrid hash table technology.** For streaming SQL with JOINs or aggregations, by default a memory based hash table is used. This is helpful for preventing the memory limits from being exceeded for large data streams with hundreds of GB of data. You can adjust the query setting to apply the new hybrid hash table, which uses both the memory and the local disk to store the internal state as a hash table.
* **Enhanced operational visibility.** Gain complete transparency into your system's performance through comprehensive monitoring of materialized views and streams. Track state changes, errors, and throughput metrics via [system.stream_state_log](/system-stream-state-log) and [system.stream_metric_log](/system-stream-metric-log).
-* **Advanced cross-deployment integration.** Seamlessly write data to remote Timeplus deployments by configuring [Timeplus external stream](/timeplus-external-stream) as targets in materialized views.
+* **Advanced cross-deployment integration.** Seamlessly write data to remote Timeplus deployments by configuring [Timeplus external stream](/timeplus-source) as targets in materialized views.
* **Improved data management capabilities.** Add new columns to an existing stream. Truncate historical data for streams. Create new databases to organize your streams and materialized views.
* **Optimized ClickHouse integration.** Significant performance improvements for read/write operations with ClickHouse external tables.
* **Enhanced user experience.** New UI wizards for Coinbase data sources and Apache Pulsar external streams, alongside a redesigned SQL Console and SQL Helper interface for improved usability. Quick access to streams, dashboards, and common actions via Command+K (Mac) or Windows+K (PC) keyboard shortcuts.
@@ -51,7 +51,7 @@ Upgrade Instructions:
Users can upgrade from Timeplus Enterprise 2.5 to 2.6 by stopping components and replacing binary files, or by updating Docker/Kubernetes image versions while maintaining existing volumes.
#### Known issues {#known_issue_2_6_8}
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. Pulsar external stream functionality is limited to Linux bare metal builds and Linux-based Docker images, excluding macOS bare metal builds.
3. The `timeplus_connector` component may experience health issues on Ubuntu Linux with x86_64 chips, affecting Redpanda Connect functionality. This issue is specific to Ubuntu and does not affect other Linux distributions.
@@ -80,7 +80,7 @@ Upgrade Instructions:
Users can upgrade from Timeplus Enterprise 2.5 to 2.6 by stopping components and replacing binary files, or by updating Docker/Kubernetes image versions while maintaining existing volumes.
#### Known issues {#known_issue_2_6_7}
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. Pulsar external stream functionality is limited to Linux bare metal builds and Linux-based Docker images, excluding macOS bare metal builds.
3. The `timeplus_connector` component may experience health issues on Ubuntu Linux with x86_64 chips, affecting Redpanda Connect functionality. This issue is specific to Ubuntu and does not affect other Linux distributions.
@@ -108,7 +108,7 @@ Upgrade Instructions:
Users can upgrade from Timeplus Enterprise 2.5 to 2.6 by stopping components and replacing binary files, or by updating Docker/Kubernetes image versions while maintaining existing volumes.
#### Known issues {#known_issue_2_6_6}
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. Pulsar external stream functionality is limited to Linux bare metal builds and Linux-based Docker images, excluding macOS bare metal builds.
3. The `timeplus_connector` component may experience health issues on Ubuntu Linux with x86_64 chips, affecting Redpanda Connect functionality. This issue is specific to Ubuntu and does not affect other Linux distributions.
@@ -136,7 +136,7 @@ Upgrade Instructions:
Users can upgrade from Timeplus Enterprise 2.5 to 2.6 by stopping components and replacing binary files, or by updating Docker/Kubernetes image versions while maintaining existing volumes.
#### Known issues {#known_issue_2_6_5}
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. Pulsar external stream functionality is limited to Linux bare metal builds and Linux-based Docker images, excluding macOS bare metal builds.
3. The `timeplus_connector` component may experience health issues on Ubuntu Linux with x86_64 chips, affecting Redpanda Connect functionality. This issue is specific to Ubuntu and does not affect other Linux distributions.
@@ -164,7 +164,7 @@ Upgrade Instructions:
Users can upgrade from Timeplus Enterprise 2.5 to 2.6 by stopping components and replacing binary files, or by updating Docker/Kubernetes image versions while maintaining existing volumes.
#### Known issues {#known_issue_2_6_4}
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. Pulsar external stream functionality is limited to Linux bare metal builds and Linux-based Docker images, excluding macOS bare metal builds.
3. The `timeplus_connector` component may experience health issues on Ubuntu Linux with x86_64 chips, affecting Redpanda Connect functionality. This issue is specific to Ubuntu and does not affect other Linux distributions.
@@ -192,7 +192,7 @@ Upgrade Instructions:
Users can upgrade from Timeplus Enterprise 2.5 to 2.6 by stopping components and replacing binary files, or by updating Docker/Kubernetes image versions while maintaining existing volumes.
#### Known issues {#known_issue_2_6_3}
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. Pulsar external stream functionality is limited to Linux bare metal builds and Linux-based Docker images, excluding macOS bare metal builds.
3. The `timeplus_connector` component may experience health issues on Ubuntu Linux with x86_64 chips, affecting Redpanda Connect functionality. This issue is specific to Ubuntu and does not affect other Linux distributions.
@@ -223,7 +223,7 @@ Upgrade Instructions:
Users can upgrade from Timeplus Enterprise 2.5 to 2.6 by stopping components and replacing binary files, or by updating Docker/Kubernetes image versions while maintaining existing volumes.
#### Known issues {#known_issue_2_6_2}
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. Pulsar external stream functionality is limited to Linux bare metal builds and Linux-based Docker images, excluding macOS bare metal builds.
3. The `timeplus_connector` component may experience health issues on Ubuntu Linux with x86_64 chips, affecting Redpanda Connect functionality. This issue is specific to Ubuntu and does not affect other Linux distributions.
@@ -252,7 +252,7 @@ Compared to the [2.5.12](/enterprise-v2.5#2_5_12) release:
* Implemented Kafka offset tracking in [system.stream_state_log](/system-stream-state-log), exportable via [timeplus diag](/cli-diag) command.
* A `_tp_sn` column is added to each stream (except external streams or random streams), as the sequence number in the unified streaming and historical storage. This column is used for data replication among the cluster. By default, it is hidden in the query results. You can show it by setting `SETTINGS asterisk_include_tp_sn_column=true`. This setting is required when you use `INSERT..SELECT` SQL to copy data between streams: `INSERT INTO stream2 SELECT * FROM stream1 SETTINGS asterisk_include_tp_sn_column=true`.
* New Features:
- * Support for continuous data writing to remote Timeplus deployments via setting a [Timeplus external stream](/timeplus-external-stream) as the target in a materialized view.
+ * Support for continuous data writing to remote Timeplus deployments via setting a [Timeplus external stream](/timeplus-source) as the target in a materialized view.
* New [EMIT PERIODIC .. REPEAT](/streaming-aggregations#emit_periodic_repeat) syntax for emitting the last aggregation result even when there is no new event.
* Able to create or drop databases via SQL in a cluster. The web console will be enhanced to support different databases in the next release.
* Historical data of a stream can be removed by `TRUNCATE STREAM stream_name`.
@@ -288,6 +288,6 @@ Upgrade Instructions:
Users can upgrade from Timeplus Enterprise 2.5 to 2.6 by stopping components and replacing binary files, or by updating Docker/Kubernetes image versions while maintaining existing volumes.
#### Known issues {#known_issue_2_6_0}
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.6.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. Pulsar external stream functionality is limited to Linux bare metal builds and Linux-based Docker images, excluding macOS bare metal builds.
3. The `timeplus_connector` component may experience health issues on Ubuntu Linux with x86_64 chips, affecting Redpanda Connect functionality. This issue is specific to Ubuntu and does not affect other Linux distributions.
diff --git a/docs/enterprise-v2.7.md b/docs/enterprise-v2.7.md
index f80825fbb..7d4ccf564 100644
--- a/docs/enterprise-v2.7.md
+++ b/docs/enterprise-v2.7.md
@@ -11,7 +11,7 @@ Each component maintains its own version numbers. The version number for each Ti
## Key Highlights
Key highlights of this release:
-* **Stream processing for files in S3 buckets:** With the new [S3 external table](/s3-external), Timeplus Enterprise now supports writing stream processing results to S3 buckets, or reading files in S3.
+* **Stream processing for files in S3 buckets:** With the new [S3 external table](/s3-sink), Timeplus Enterprise now supports writing stream processing results to S3 buckets, or reading files in S3.
* **Join the latest data from MySQL or ClickHouse via dictionary:** You can now create a [dictionary](/sql-create-dictionary) to store key-value pairs in memory or a mutable stream, with data from various sources, such as files, MySQL/ClickHouse databases, or streams in Timeplus.
* **PostgreSQL and MySQL CDC via Redpanda Connect:** Timeplus Enterprise now supports CDC (Change Data Capture) for PostgreSQL and MySQL databases via Redpanda Connect. This feature enables real-time data ingestion from these databases into Timeplus.
* **Support IAM authentication for accessing Amazon MSK:** Avoid storing static credentials in Kafka external streams by setting `sasl_mechanism` to `AWS_MSK_IAM`.
@@ -29,7 +29,7 @@ Key highlights of this release:
|Kubernetes|Kubernetes 1.25+, with Helm 3.12+|
## Upgrade Guide
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.7.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.7.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. For bare metal users, you can upgrade from Timeplus Enterprise 2.6 to 2.7 by stopping components and replacing binary files.
3. For Kubernetes users, please follow [the guide](/k8s-helm#v5-to-v6) carefully since a few timeplusd built-in users are removed in the new helm chart, and you can configure ingress for Appserver and Timeplusd independently.
@@ -304,13 +304,13 @@ Compared to the [2.6.0](/enterprise-v2.6#2_6_0) release:
* To improve performance, we have optimized the schema for [system.stream_metric_log](/system-stream-metric-log) and [system.stream_state_log](/system-stream-state-log).
* Security Enhancements:
* **Support IAM authentication for accessing Amazon MSK:** Avoid storing static credentials in Kafka external streams by setting `sasl_mechanism` to `AWS_MSK_IAM`.
- * **Integration with HashiCorp Vault:** You can now use HashiCorp Vault to store sensitive data, such as password for all types of external streams or external tables, and reference them in [config_file](/proton-kafka#config_file) setting.
+ * **Integration with HashiCorp Vault:** You can now use HashiCorp Vault to store sensitive data, such as password for all types of external streams or external tables, and reference them in [config_file](/kafka-source#config_file) setting.
* Specify the non-root user in the Docker image to improve security.
* New Features:
- * **Stream processing for files in S3 buckets:** With the new [S3 external table](/s3-external), Timeplus Enterprise now supports writing stream processing results to S3 buckets, or read files in S3.
+ * **Stream processing for files in S3 buckets:** With the new [S3 external table](/s3-sink), Timeplus Enterprise now supports writing stream processing results to S3 buckets, or reading files in S3.
* **Join the latest data from MySQL or ClickHouse via dictionary:** You can now create a [dictionary](/sql-create-dictionary) to store key-value pairs in memory or a mutable stream, with data from various sources, such as files, MySQL/ClickHouse databases, or streams in Timeplus.
* Replay historical data in local streams or Kafka external streams with the [replay_speed](/query-settings#replay_speed) setting.
- * Read the header key-value pairs in the kafka external stream. [Learn more](/proton-kafka#_tp_message_headers)
+ * Read the header key-value pairs in the Kafka external stream. [Learn more](/kafka-source#_tp_message_headers)
* [Python UDF](/py-udf): You can now create user-defined functions (UDFs) in Python to extend the functionality of Timeplus with rich ecosystem of Python. It's currently in technical preview for Linux x86_64 only.
* timeplus_web 2.1.7 -> 2.2.10
* Significant improvements of materialized view monitoring and troubleshooting UI.
diff --git a/docs/enterprise-v2.8.md b/docs/enterprise-v2.8.md
index ba6b4b90d..f1c89a0eb 100644
--- a/docs/enterprise-v2.8.md
+++ b/docs/enterprise-v2.8.md
@@ -11,8 +11,8 @@ Each component maintains its own version numbers. The version number for each Ti
## Key Highlights
Key highlights of this release:
-* New Compute Node server role to [run materialized views elastically](/view#autoscaling_mv) with checkpoints on S3 storage.
-* Timeplus can read or write data in Apache Iceberg tables. [Learn more](/iceberg)
+* New Compute Node server role to [run materialized views elastically](/materialized-view#autoscaling_mv) with checkpoints on S3 storage.
+* Timeplus can read or write data in Apache Iceberg tables. [Learn more](/iceberg-sink)
* Timeplus can read or write PostgreSQL tables directly via [PostgreSQL External Table](/pg-external-table) or look up data via [dictionaries](/sql-create-dictionary#source_pg).
* Use S3 as the [tiered storage](/tiered-storage) for streams.
* New SQL command to [rename streams](/sql-rename-stream) or [columns](/sql-alter-stream#rename-column).
@@ -135,11 +135,11 @@ Compared to the [2.8.1](#2_8_1) release:
* Able to add or drop secondary index for mutable streams.
* Able to set `version_column` to make sure only rows with higher value of the `version_column` will override the rows with same primary key. This setting can work with or without `coalesced`.
* Support the `UUID` data type for primary key columns.
- * **[HTTP External Stream](/http-external):** Added a new type of external stream to send streaming data to external HTTP endpoints, such as Splunk, Open Search and Slack.
- * **[MongoDB External Table](/mongo-external):** Added a new type of external table to send streaming data to MongoDB.
+ * **[HTTP External Stream](/http-external-stream):** Added a new type of external stream to send streaming data to external HTTP endpoints, such as Splunk, Open Search and Slack.
+ * **[MongoDB External Table](/mongo-external-table):** Added a new type of external table to send streaming data to MongoDB.
* Enhanced [MySQL External Table](/mysql-external-table) to support `replace_query` and `on_duplicate_clause` settings.
- * Enhanced [Kafka External Stream](/proton-kafka) allows to customize the `partitioner` property, e.g. `settings properties='partitioner=murmur2'`.
- * Enhanced [Kafka External Stream](/proton-kafka) and [Pulsar External Stream](/pulsar-external-stream) to support write message headers via `_tp_message_headers`.
+ * Enhanced [Kafka External Stream](/kafka-source) to allow customizing the `partitioner` property, e.g. `settings properties='partitioner=murmur2'`.
+ * Enhanced [Kafka External Stream](/kafka-source) and [Pulsar External Stream](/pulsar-source) to support write message headers via `_tp_message_headers`.
* Support [map_from_arrays](/functions_for_comp#map_from_arrays) and [map_cast](/functions_for_comp#map_cast) with 4 or more parameters.
* [SHOW CREATE](/sql-show-create#show_multi_versions) command supports `show_multi_versions=true` to get the history of the object.
* New query setting [precise_float_parsing](/query-settings#precise_float_parsing) to precisely handle float numbers.
@@ -150,7 +150,7 @@ Compared to the [2.8.1](#2_8_1) release:
* Improved the support for gRPC protocol.
* Support [EMIT TIMEOUT](/streaming-aggregations#emit-timeout) for both global aggregations and window aggregations.
* Able to change log level during runtime via [SYSTEM SET LOG LEVEL](/sql-system-set-log-level) or REST API.
- * Support new JOIN type [FULL LATEST JOIN](/joins#full-latest-join).
+ * Support new JOIN type [FULL LATEST JOIN](/streaming-joins#full-latest-join).
* timeplus_web 2.8.8 -> 2.8.12
* Some new UI features and enhancements in 2.9 are ported to 2.8.2:
* **Materialized Views (MVs):**
@@ -205,7 +205,7 @@ Compared to the [2.8.0 (Preview)](#2_8_0) release:
* Fix Kafka external stream parsing issue.
* Improve mutable stream creation flow when defined via engine.
* When using `CREATE OR REPLACE FORMAT SCHEMA` to update an existing schema, and using `DROP FORMAT SCHEMA` to delete a schema, Timeplus will clean up the Protobuf schema cache to avoid misleading errors.
- * Support writing Kafka message timestamp via [_tp_time](/proton-kafka)
+ * Support writing Kafka message timestamp via [_tp_time](/kafka-source)
* Enable IPv6 support for KeyValueService
* Simplified the [EMIT syntax](/streaming-aggregations#emit) to make it easier to read and use.
* Support [EMIT ON UPDATE WITH DELAY](/streaming-aggregations#emit_on_update_with_delay)
@@ -281,7 +281,7 @@ If you are still not sure, here are the things that would be broken without migr
For Kubernetes users, please follow [the guide](/k8s-helm#v6-to-v7) to do the migration.
#### Known issues {#known_issue_2_8_0}
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.7.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.7.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. Pulsar external stream functionality is limited to Linux bare metal builds and Linux-based Docker images, excluding macOS bare metal builds.
3. The `timeplus_connector` component may experience health issues on Ubuntu Linux with x86_64 chips, affecting Redpanda Connect functionality. This issue is specific to Ubuntu and does not affect other Linux distributions.
4. Python UDF support is limited to Linux x86_64 bare metal and Linux x86_64 Docker image, excluding macOS or ARM builds.
diff --git a/docs/enterprise-v2.9.md b/docs/enterprise-v2.9.md
index 27e35c53e..5480d6e92 100644
--- a/docs/enterprise-v2.9.md
+++ b/docs/enterprise-v2.9.md
@@ -15,9 +15,9 @@ Key highlights of the Timeplus 2.9 release include:
* **Enhanced Mutable Streams:** Introducing online schema evolution, versioning, coalesced storage, Time-To-Live (TTL), and secondary index management capabilities.
* **Native JSON Support:** A new native JSON data type and powerful [json_encode](/functions_for_json#json_encode) / [json_cast](/functions_for_json#json_cast) functions simplify working with JSON.
* **Improved Data Integrity:** Dead Letter Queue (DLQ) support for Materialized Views ensures robust data processing.
-* **Expanded Connectivity:** Native [HTTP External Stream](/http-external) for seamless integration with systems like Splunk, Elasticsearch, and more.
+* **Expanded Connectivity:** Native [HTTP External Stream](/http-external-stream) for seamless integration with systems like Splunk, Elasticsearch, and more.
* **Performance Boost:** [JIT (Just-In-Time) compilation](/jit) for streaming queries delivers significant performance and efficiency improvements, along with large-cardinality sessionization.
-* **Parameterized Views:** Create [Parameterized Views](/view#parameterized-views) for more flexible and reusable query patterns.
+* **Parameterized Views:** Create [Parameterized Views](/view#parameterized-view) for more flexible and reusable query patterns.
* **Scalable Log Processing:** Distributed LogStream enables efficient handling of large volumes of log data.
* **Broader UDF Support:** Python UDFs now run on ARM CPUs (Linux/macOS), and JavaScript UDFs benefit from multiple V8 instances.
* **Refined Cluster UI:** The web console offers an improved experience for visualizing and managing cluster nodes.
@@ -34,7 +34,7 @@ We recommend using stable releases for production deployment. Engineering builds
### 2.9.0 (Preview 3) {#2_9_0-preview_3}
Released on 07-31-2025. Installation options:
-* For Linux or Mac users: `curl https://install.timeplus.com/2.9 | sh` [Downloads](/release-downloads#2_9_0-preview_3)
+* For Linux or Mac users: `curl https://install.timeplus.com/2.9 | sh` [Downloads](/release-downloads)
* For Docker users (not recommended for production): `docker run -p 8000:8000 docker.timeplus.com/timeplus/timeplus-enterprise:2.9.0-preview.3`
* We will provide new Helm Charts for Kubernetes deployment when v2.9 is GA.
@@ -49,15 +49,15 @@ Component versions:
Compared to the [2.8.1](/enterprise-v2.8#2_8_1) release:
* timeplusd 2.8.26 -> 2.9.9-rc.26
* New Features:
- * **Parameterized Views:** You can now create [parameterized views](/view#parameterized-views), allowing for more dynamic and reusable view definitions.
+ * **Parameterized Views:** You can now create [parameterized views](/view#parameterized-view), allowing for more dynamic and reusable view definitions.
* **JIT Compilation for Queries:** Introduced [Just-In-Time (JIT) compilation](/jit) for queries, potentially improving execution performance for certain query types.
* **New JSON Data Type & SQL Functions:** Added a native JSON data type and SQL functions [json_encode](/functions_for_json#json_encode), [json_cast](/functions_for_json#json_cast), [json_array_length](/functions_for_json#json_array_length), [json_merge_patch](/functions_for_json#json_merge_patch) for powerful JSON manipulation.
* **Mutable Stream TTL:** You can now define Time-To-Live (TTL) for data in mutable streams, automatically managing data retention.
* **Materialized View DLQ:** Introduced Dead Letter Queue (DLQ) support for materialized views to handle data processing errors more robustly.
- * **[HTTP External Stream](/http-external):** Added a new type of external stream to send streaming data to external HTTP endpoints, such as Splunk, Open Search and Slack.
- * **[MongoDB External Table](/mongo-external):** Added a new type of external table to send streaming data to MongoDB.
+ * **[HTTP External Stream](/http-external-stream):** Added a new type of external stream to send streaming data to external HTTP endpoints, such as Splunk, OpenSearch and Slack.
+ * **[MongoDB External Table](/mongo-external-table):** Added a new type of external table to send streaming data to MongoDB.
* Enhanced [MySQL External Table](/mysql-external-table) to support `replace_query` and `on_duplicate_clause` settings.
- * Enhanced [Kafka External Stream](/proton-kafka) and [Pulsar External Stream](/pulsar-external-stream) to support write message headers via `_tp_message_headers`
+ * Enhanced [Kafka External Stream](/kafka-source) and [Pulsar External Stream](/pulsar-source) to support writing message headers via `_tp_message_headers`
* Build and manage [Alerts](/alert) with SQL. Monitor your streaming data and automatically trigger actions when specific conditions are met.
* **Python UDFs on ARM:** Python User-Defined Functions (UDFs) are now supported on ARM-based architectures (Linux/macOS), expanding platform compatibility.
* **Improved JavaScript UDFs:** Enhanced JavaScript UDF execution with support for multiple V8 instances, improving concurrency and isolation (also available in 2.8.1 or above). JavaScript user-defined aggregation functions support null values as input.
@@ -72,7 +72,7 @@ Compared to the [2.8.1](/enterprise-v2.8#2_8_1) release:
* **Modifying Comments:** Added `ALTER COMMENT` support for streams, views, materialized views, KVStreams, and RandomStreams.
* **Mutable Stream Schema Evolution:** Support for adding new columns and dropping secondary indexes in mutable streams.
* Support writing to Avro schemas with nested arrays of records
- * Enhanced [Kafka External Stream](/proton-kafka) allows to customize the `partitioner` property, e.g. `settings properties='partitioner=murmur2'`
+ * Enhanced [Kafka External Stream](/kafka-source) to allow customizing the `partitioner` property, e.g. `settings properties='partitioner=murmur2'`
* New query setting [precise_float_parsing](/query-settings#precise_float_parsing) to precisely handle float numbers.
* Added emit policy [EMIT TIMEOUT](/streaming-aggregations#emit-timeout) and [EMIT PER EVENT](/streaming-aggregations#emit-per-event).
* Added new functions `array_partial_sort`, `array_partial_reverse_sort`, and `ulid_string_to_date_time`.
@@ -106,7 +106,7 @@ Compared to the [2.8.1](/enterprise-v2.8#2_8_1) release:
* Improved layout for HTTP source creation and other external stream Guided Data Ingestion (GDI) UIs.
* **SQL Query:** the side panel is simplified by removing the snippets and functions accordion, long SQL statements are wrapped by default, and the cursor position is kept when you switch pages or tabs.
* Resource Management (Streams, MVs, Views, UDFs):
- * Replaced the Redpanda-Connect based HTTP sink and Slack sink with the new [HTTP External Stream](/http-external) in the core engine.
+ * Replaced the Redpanda-Connect based HTTP sink and Slack sink with the new [HTTP External Stream](/http-external-stream) in the core engine.
* **Materialized Views (MVs):**
* Added UI support for **pausing and resuming** materialized views.
* Introduced **Dead Letter Queue (DLQ)** support and UI for MVs.
@@ -147,7 +147,7 @@ Upgrade Instructions:
If you installed Timeplus Enterprise 2.7 or earlier, the metadata for the Redpanda Connect sources and sinks is saved in a special key/value service. v2.8 switches to mutable streams for such metadata by default and provides a migration tool. In 2.9, all metadata is saved in mutable streams and the previous key/value service has been removed. If you are on 2.7 or earlier, please upgrade to 2.8 first, then upgrade to 2.9.
#### Known issues {#known_issue_2_9_0-preview_2}
-1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.9.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-external-stream) for data migration.
+1. Direct upgrades from version 2.3 or earlier are not supported. Please perform a clean installation of 2.9.x and utilize [timeplus sync](/cli-sync) CLI or [Timeplus External Stream](/timeplus-source) for data migration.
2. For existing deployments with any version from 2.3 to 2.7, please upgrade to 2.8 first and migrate the metadata.
3. Pulsar external stream functionality is limited to Linux bare metal builds and Linux-based Docker images, excluding macOS bare metal builds.
4. The `timeplus_connector` component may experience health issues on Ubuntu Linux with x86_64 chips, affecting Redpanda Connect functionality. This issue is specific to Ubuntu and does not affect other Linux distributions.
diff --git a/docs/external-stream.md b/docs/external-stream.md
index 4d9a5237b..93ec88c15 100644
--- a/docs/external-stream.md
+++ b/docs/external-stream.md
@@ -5,9 +5,9 @@ You can create **External Streams** in Timeplus to query data in the external sy
You can run streaming analytics with external streams in a similar way to other streams.
Timeplus supports 4 types of external streams:
-* [Kafka External Stream](/proton-kafka)
-* [Pulsar External Stream](/pulsar-external-stream)
-* [Timeplus External Stream](/timeplus-external-stream), only available in Timeplus Enterprise
+* [Kafka External Stream](/kafka-source)
+* [Pulsar External Stream](/pulsar-source)
+* [Timeplus External Stream](/timeplus-source), only available in Timeplus Enterprise
* [Log External Stream](/log-stream) (experimental)
-Besides external streams, Timeplus also provides external tables to query data in ClickHouse, MySQL, Postgres or S3/Iceberg. The difference of external tables and external streams is that external tables are not real-time, and they are not designed for streaming analytics. You can use external tables to query data in the external systems, but you cannot run streaming SQL on them. [Learn more about external tables](/proton-clickhouse-external-table).
+Besides external streams, Timeplus also provides external tables to query data in ClickHouse, MySQL, Postgres or S3/Iceberg. The difference between external tables and external streams is that external tables are not real-time and are not designed for streaming analytics. You can use external tables to query data in the external systems, but you cannot run streaming SQL on them. [Learn more about external tables](/clickhouse-external-table).
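+For example, external tables support bounded queries only (a hedged sketch; `my_ch_table` stands for any ClickHouse external table you have created):
+
+```sql
+SELECT count() FROM my_ch_table; -- bounded: scans the remote table and returns once
+```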
diff --git a/docs/functions_for_datetime.md b/docs/functions_for_datetime.md
index e3748d7f0..38cf7c839 100644
--- a/docs/functions_for_datetime.md
+++ b/docs/functions_for_datetime.md
@@ -216,7 +216,7 @@ Supported unit:
### date_diff_within
-`date_diff_within(timegap,time1, time2)` returns true or false. This function only works in [stream-to-stream join](/joins). Check whether the gap between `time1` and `time2` are within the specific range. For example `date_diff_within(10s,payment.time,notification.time)` to check whether the payment time and notification time are within 10 seconds or less.
+`date_diff_within(timegap, time1, time2)` returns true or false. This function only works in [stream-to-stream join](/streaming-joins). It checks whether the gap between `time1` and `time2` is within the specified range. For example, `date_diff_within(10s, payment.time, notification.time)` checks whether the payment time and notification time are within 10 seconds of each other.
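+A minimal sketch of using it in a join condition (assuming `payment` and `notification` streams that share an `id` column; all names are illustrative):
+
+```sql
+SELECT p.id, p.time AS paid_at, n.time AS notified_at
+FROM payment AS p
+INNER JOIN notification AS n
+ON p.id = n.id AND date_diff_within(10s, p.time, n.time);
+```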
### date_trunc
diff --git a/docs/functions_for_streaming.md b/docs/functions_for_streaming.md
index 9e722ccb8..242626d71 100644
--- a/docs/functions_for_streaming.md
+++ b/docs/functions_for_streaming.md
@@ -22,7 +22,7 @@ Please note, the `table` function also works in other types of streams:
* Timeplus external stream: read the existing data for a stream in a remote Timeplus.
* Random stream: generate a block of random data. The number of rows in the block is predefined and subject to change; the current value is 65409. For testing or demonstration purposes, you can create a random stream with multiple columns and use the table function to generate random data at once, as shown in the sketch below.
-Learn more about [Non-streaming queries](/history).
+Learn more about [Non-streaming queries](/historical-query).
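+A minimal sketch of the random-stream case (the stream name and column definitions are illustrative):
+
+```sql
+-- each column gets a generator expression via `default`
+CREATE RANDOM STREAM rand_demo(device string default 'device'||to_string(rand()%4), temperature float default rand()%1000/10);
+-- table() returns one block of generated rows at once
+SELECT count() FROM table(rand_demo);
+```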
### tumble
@@ -114,7 +114,7 @@ Otherwise, if you run queries with `dedup(table(my_stream),id)` the earliest eve
### date_diff_within
-`date_diff_within(timegap,time1, time2)` returns true or false. This function only works in [Range Bidirectional Join](/joins#range-join). Check whether the gap between `time1` and `time2` are within the specific range. For example `date_diff_within(10s,payment.time,notification.time)` to check whether the payment time and notification time are within 10 seconds or less.
+`date_diff_within(timegap, time1, time2)` returns true or false. This function only works in [Range Bidirectional Join](/streaming-joins#range-join). It checks whether the gap between `time1` and `time2` is within the specified range. For example, `date_diff_within(10s, payment.time, notification.time)` checks whether the payment time and notification time are within 10 seconds of each other.
✅ streaming query
diff --git a/docs/glossary.md b/docs/glossary.md
index b02b39b54..d6d4df1fb 100644
--- a/docs/glossary.md
+++ b/docs/glossary.md
@@ -45,7 +45,7 @@ Event time is used almost everywhere in Timeplus data processing and analysis wo
#### Specify during data ingestion
-When you [ingest data](/ingestion) into Timeplus, you can specify an attribute in the data which best represents the event time. Even if the attribute is in `String` type, Timeplus will automatically convert it to a timestamp for further processing.
+When you [ingest data](/connect-data-in) into Timeplus, you can specify an attribute in the data which best represents the event time. Even if the attribute is in `String` type, Timeplus will automatically convert it to a timestamp for further processing.
If you don't choose an attribute in the wizard, then Timeplus will use the ingestion time as the event time, i.e. when Timeplus receives the data. This may work well for most static or dimensional data, such as city names with zip codes.
@@ -98,7 +98,7 @@ Once the materialized view is created, Timeplus will run the query in the backgr
Timeplus provides powerful streaming analytics capabilities through enhanced SQL. By default, queries are unbounded and keep pushing the latest results to the client. An unbounded query can be converted to a bounded query by applying the function [table()](/functions_for_streaming#table), when the user wants to ask what has happened, as in traditional SQL.
-Learn more: [Streaming Query](/stream-query) and [Non-Streaming Query](/history)
+Learn more: [Streaming Query](/streaming-query) and [Non-Streaming Query](/historical-query)
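+A minimal sketch of the contrast, assuming a stream named `clicks` (the name is illustrative):
+
+```sql
+SELECT count() FROM clicks;        -- unbounded: keeps emitting updated results
+SELECT count() FROM table(clicks); -- bounded: returns once, like traditional SQL
+```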
## sink {#sink}
@@ -106,13 +106,13 @@ a.k.a. destination. Only available in Timeplus Enterprise, not in Timeplus Proto
Timeplus enables you to send real-time insights or transformed data to other systems, either to notify individuals or power up downstream applications.
-Learn more: [Destination](/destination).
+Learn more: [Destination](/send-data-out).
## source {#source}
-A source is a background job in Timeplus Enterprise to load data into a [stream](#stream). For Kafka API compatible streaming data platform, you need to create [Kafka external streams](/proton-kafka).
+A source is a background job in Timeplus Enterprise to load data into a [stream](#stream). For Kafka API-compatible streaming data platforms, you need to create [Kafka external streams](/kafka-source).
-Learn more: [Data Collection](/ingestion)
+Learn more: [Data Collection](/connect-data-in)
## stream {#stream}
@@ -128,4 +128,4 @@ When you create a source and preview the data, you can choose a column as the ti
You can define reusable SQL statements as views, so that you can query them as if they are streams: `select .. from view1 ..`. By default, views don't take any extra computing or storage resources; they are expanded to their SQL definition when queried. You can also create materialized views to 'materialize' them (keep them running in the background and save the results to disk).
-Learn more: [View](/view) and [Materialized View](/view#m_view)
+Learn more: [View](/view) and [Materialized View](/materialized-view)
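+A minimal sketch, assuming a stream `logs` with a `level` column (all names are illustrative):
+
+```sql
+-- a regular view: expanded at query time, no extra resources
+CREATE VIEW error_logs AS SELECT * FROM logs WHERE level = 'ERROR';
+-- a materialized view: keeps running in the background and persists the results
+CREATE MATERIALIZED VIEW mv_error_logs AS SELECT * FROM logs WHERE level = 'ERROR';
+```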
diff --git a/docs/history.md b/docs/historical-query.md
similarity index 100%
rename from docs/history.md
rename to docs/historical-query.md
diff --git a/docs/howtos.md b/docs/howtos.md
index ee969b021..13d341a07 100644
--- a/docs/howtos.md
+++ b/docs/howtos.md
@@ -2,7 +2,7 @@
## How to read/write Kafka or Redpanda {#kafka}
-You use [External Stream](/proton-kafka) to read from Kafka topics or write data to the topics. We verified the integration with Apache Kafka, Confluent Cloud, Confluent Platform, Redpanda, WarpStream and many more.
+You use [External Stream](/kafka-source) to read from Kafka topics or write data to the topics. We verified the integration with Apache Kafka, Confluent Cloud, Confluent Platform, Redpanda, WarpStream and many more.
```sql
CREATE EXTERNAL STREAM [IF NOT EXISTS] stream_name
@@ -19,11 +19,11 @@ For PostgreSQL, MySQL or other OLTP databases, you can apply the CDC (Change Dat
-If you have data in local ClickHouse or ClickHouse Cloud, you can also use [External Table](/proton-clickhouse-external-table) to read data.
+If you have data in local ClickHouse or ClickHouse Cloud, you can also use [External Table](/clickhouse-external-table) to read data.
## How to read/write ClickHouse {#clickhouse}
-You use [External Table](/proton-clickhouse-external-table) to read from ClickHouse tables or write data to the ClickHouse tables. We verified the integration with self-hosted ClickHouse, ClickHouse Cloud, Aiven for ClickHouse and many more.
+You use [External Table](/clickhouse-external-table) to read from ClickHouse tables or write data to the ClickHouse tables. We verified the integration with self-hosted ClickHouse, ClickHouse Cloud, Aiven for ClickHouse and many more.
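+A minimal sketch of such an external table (address, credentials, and table names are placeholders):
+
+```sql
+CREATE EXTERNAL TABLE ch_events
+SETTINGS type = 'clickhouse',
+         address = 'localhost:9000',
+         user = 'default',
+         password = '',
+         database = 'default',
+         table = 'events';
+
+-- read from (or INSERT INTO) the remote table
+SELECT count() FROM ch_events;
+```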
@@ -37,7 +37,7 @@ You can use tools like Debezium to send CDC messages to Timeplus, or just use `I
## How to work with JSON {#json}
-Proton supports powerful, yet easy-to-use JSON processing. You can save the entire JSON document as a `raw` column in `string` type. Then use JSON path as the shortcut to access those values as string. For example `raw:a.b.c`. If your data is in int/float/bool or other type, you can also use `::` to convert them. For example `raw:a.b.c::int`. If you want to read JSON documents in Kafka topics, you can choose to read each JSON as a `raw` string, or read each top level key/value pairs as columns. Please check the [doc](/proton-kafka) for details.
+Proton supports powerful, yet easy-to-use JSON processing. You can save the entire JSON document as a `raw` column in `string` type, then use a JSON path as the shortcut to access those values as strings, for example `raw:a.b.c`. If your data is int/float/bool or another type, you can also use `::` to convert it, for example `raw:a.b.c::int`. If you want to read JSON documents in Kafka topics, you can choose to read each JSON document as a `raw` string, or read each top-level key/value pair as a column. Please check the [doc](/kafka-source) for details.
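+A minimal sketch, assuming a stream `kafka_events` with a string column `raw` holding JSON documents (names are illustrative):
+
+```sql
+SELECT raw:requestedUrl,                    -- read a nested value as a string
+       raw:response.statusCode::int AS code -- cast a value to int
+FROM kafka_events
+WHERE raw:response.statusCode::int >= 400;
+```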
diff --git a/docs/http-external.md b/docs/http-external-stream.md
similarity index 58%
rename from docs/http-external.md
rename to docs/http-external-stream.md
index 2831f7f94..62ba2ceb6 100644
--- a/docs/http-external.md
+++ b/docs/http-external-stream.md
@@ -4,7 +4,7 @@ You can send data to HTTP endpoints via the HTTP External Stream. You can use th
Currently, it only supports writing data to HTTP endpoints; reading data from HTTP endpoints is not supported yet.
-## CREATE EXTERNAL STREAM
+## Create HTTP External Stream
To create an external stream for HTTP endpoints, you can run the following DDL SQL:
@@ -40,66 +40,6 @@ For the full list of settings, see the [DDL Settings](#ddl-settings) section.
### Examples
-#### Write to OpenSearch/ElasticSearch {#example-write-to-es}
-Assuming you have created an index `students` in a deployment of OpenSearch or ElasticSearch, you can create the following external stream to write data to the index.
-
-```sql
-CREATE EXTERNAL STREAM opensearch_t1 (
- name string,
- gpa float32,
- grad_year int16
-) SETTINGS
-type = 'http',
-data_format = 'OpenSearch', --can also use the alias "ElasticSearch"
-url = 'https://opensearch.company.com:9200/students/_bulk',
-username='admin',
-password='..'
-```
-
-Then you can insert data via a materialized view or just
-```sql
-INSERT INTO opensearch_t1(name,gpa,grad_year) VALUES('Jonathan Powers',3.85,2025);
-```
-
-#### Write to Splunk {#example-write-to-splunk}
-Follow [the guide](https://docs.splunk.com/Documentation/Splunk/9.4.1/Data/UsetheHTTPEventCollector) to set up and use HTTP Event Collector(HEC) in Splunk. Make sure you create a HEC token for the desired index and enable it.
-
-Create the HTTP external stream in Timeplus:
-```sql
-CREATE EXTERNAL STREAM http_splunk_t1 (event string)
-SETTINGS
-type = 'http',
-data_format = 'JSONEachRow',
-http_header_Authorization='Splunk the-hec-token',
-url = 'http://host:8088/services/collector/event'
-```
-
-Then you can insert data via a materialized view or just
-```sql
-INSERT INTO http_splunk_t1 VALUES('test1'),('test2');
-```
-
-#### Write to Datadog {#example-write-to-datadog}
-
-Create or use an existing API key with the proper permission for sending data.
-
-Create the HTTP external stream in Timeplus:
-```sql
-CREATE EXTERNAL STREAM datadog_t1 (event string)
-SETTINGS
-type = 'http',
-data_format = 'JSONEachRow',
-output_format_json_array_of_rows = 1,
-http_header_DD_API_KEY = 'THE_API_KEY',
-http_header_Content_Type = 'application/json',
-url = 'https://http-intake.logs.us3.datadoghq.com/api/v2/logs' --make sure you set the right region
-```
-
-Then you can insert data via a materialized view or just
-```sql
-INSERT INTO datadog_t1(message, hostname) VALUES('test message','a.test.com'),('test2','a.test.com');
-```
-
#### Write to Algolia {#example-write-to-algolia}
The [Algolia Ingestion API](https://www.algolia.com/doc/rest-api/ingestion/) accepts multiple rows in a single request in the following JSON payload:
@@ -138,93 +78,6 @@ INSERT INTO http_algolia_t1(firstname,lastname,zip_code)
VALUES('firstnameA','lastnameA',123),('firstnameB','lastnameB',987)
```
-#### Write to BigQuery {#example-write-to-bigquery}
-
-Assume you have created a table in BigQuery with 2 columns:
-```sql
-create table `PROJECT.DATASET.http_sink_t1`(
- num int,
- str string);
-```
-
-Follow [the guide](https://cloud.google.com/bigquery/docs/authentication) to choose the proper authentication to Google Cloud, such as via the gcloud CLI `gcloud auth application-default print-access-token`.
-
-Create the HTTP external stream in Timeplus:
-```sql
-CREATE EXTERNAL STREAM http_bigquery_t1 (num int,str string)
-SETTINGS
-type = 'http',
-http_header_Authorization='Bearer $OAUTH_TOKEN',
-url = 'https://bigquery.googleapis.com/bigquery/v2/projects/$PROJECT/datasets/$DATASET/tables/$TABLE/insertAll',
-data_format = 'Template',
-format_template_resultset_format='{"rows":[${data}]}',
-format_template_row_format='{"json":{"num":${num:JSON},"str":${str:JSON}}}',
-format_template_rows_between_delimiter=','
-```
-
-Replace the `OAUTH_TOKEN` with the output of `gcloud auth application-default print-access-token` or other secure way to obtain OAuth token. Replace `PROJECT`, `DATASET` and `TABLE` to match your BigQuery table path. Also change `format_template_row_format` to match the table schema.
-
-Then you can insert data via a materialized view or just via `INSERT` command:
-```sql
-INSERT INTO http_bigquery_t1 VALUES(10,'A'),(11,'B');
-```
-
-#### Write to Databricks {#example-write-to-databricks}
-
-Follow [the guide](https://docs.databricks.com/aws/en/dev-tools/auth/pat) to create an access token for your Databricks workspace.
-
-Assume you have created a table in Databricks SQL warehouse with 2 columns:
-```sql
-CREATE TABLE sales (
- product STRING,
- quantity INT
-);
-```
-
-Create the HTTP external stream in Timeplus:
-```sql
-CREATE EXTERNAL STREAM http_databricks_t1 (product string, quantity int)
-SETTINGS
-type = 'http',
-http_header_Authorization='Bearer $TOKEN',
-url = 'https://$HOST.cloud.databricks.com/api/2.0/sql/statements/',
-data_format = 'Template',
-format_template_resultset_format='{"warehouse_id":"$WAREHOUSE_ID","statement": "INSERT INTO sales (product, quantity) VALUES (:product, :quantity)", "parameters": [${data}]}',
-format_template_row_format='{ "name": "product", "value": ${product:JSON}, "type": "STRING" },{ "name": "quantity", "value": ${quantity:JSON}, "type": "INT" }',
-format_template_rows_between_delimiter=''
-```
-
-Replace the `TOKEN`, `HOST`, and `WAREHOUSE_ID` to match your Databricks settings. Also change `format_template_row_format` and `format_template_row_format` to match the table schema.
-
-Then you can insert data via a materialized view or just via `INSERT` command:
-```sql
-INSERT INTO http_databricks_t1(product, quantity) VALUES('test',95);
-```
-
-This will insert one row per request. We plan to support batch insert and Databricks specific format to support different table schemas in the future.
-
-#### Trigger Slack Notifications {#example-trigger-slack}
-
-You can follow [the guide](https://api.slack.com/messaging/webhooks) to configure an "incoming webhook" to send notifications to a Slack channel.
-
-```sql
-CREATE EXTERNAL STREAM http_slack_t1 (text string) SETTINGS
-type = 'http', data_format='Template',
-format_template_resultset_format='{"blocks":[{"type":"section","text":{"type":"mrkdwn","text":"${data}"}}]}',
-format_template_row_format='${text:Raw}',
-url = 'https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX'
-```
-
-Then you can insert data via a materialized view or just via `INSERT` command:
-```sql
-INSERT INTO http_slack_t1 VALUES('Hello World!');
-INSERT INTO http_slack_t1 VALUES('line1\nline2');
-INSERT INTO http_slack_t1 VALUES('msg1'),('msg2');
-INSERT INTO http_slack_t1 VALUES('This is unquoted text\n>This is quoted text\n>This is still quoted text\nThis is unquoted text again');
-```
-
-Please follow Slack's [text formats](https://api.slack.com/reference/surfaces/formatting) guide to add rich text to your messages.
-
### DDL Settings
#### type
@@ -233,7 +86,7 @@ The type of the external stream. The value must be `http` to send data to HTTP e
#### config_file
The `config_file` setting is available since Timeplus Enterprise 2.7. You can specify the path to a file that contains the configuration settings. The file should be in the format of `key=value` pairs, one pair per line. You can set the HTTP credentials or Authentication tokens in the file.
-Please follow the example in [Kafka External Stream](/proton-kafka#config_file).
+Please follow the example in [Kafka External Stream](/kafka-source#config_file).
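+For illustration, a hypothetical config file for an HTTP external stream might contain (the header name and token value are placeholders):
+
+```
+http_header_Authorization=Bearer the-secret-token
+```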
#### url
The endpoint of the HTTP service. Different services and different use cases may have different endpoints. For example, to send data to a specified OpenSearch index, you can use `http://host:port/my_index/_bulk`. To send data to multiple indexes (depending on the column in the streaming SQL), you can use `http://host:port/_bulk` and also specify the `output_format_opensearch_index_column`.
@@ -309,9 +162,3 @@ username='..',
password='..',
url = 'https://api.openobserve.ai/api/../default/_json'
```
-
-## DROP EXTERNAL STREAM
-
-```sql
-DROP STREAM [IF EXISTS] name
-```
diff --git a/docs/iceberg-external-stream-sink.md b/docs/iceberg-external-stream-sink.md
new file mode 100644
index 000000000..0bf531735
--- /dev/null
+++ b/docs/iceberg-external-stream-sink.md
@@ -0,0 +1,8 @@
+---
+id: iceberg-sink
+title: Iceberg External Stream
+---
+
+import ExternalIcebergWrite from './shared/iceberg-external-stream.md';
+
+<ExternalIcebergWrite />
diff --git a/docs/iceberg-external-stream-source.md b/docs/iceberg-external-stream-source.md
new file mode 100644
index 000000000..d4b1912f7
--- /dev/null
+++ b/docs/iceberg-external-stream-source.md
@@ -0,0 +1,8 @@
+---
+id: iceberg-source
+title: Iceberg External Stream
+---
+
+import ExternalIcebergRead from './shared/iceberg-external-stream.md';
+
+<ExternalIcebergRead />
diff --git a/docs/index.mdx b/docs/index.mdx
index 4ebe6797b..7dc4490f2 100644
--- a/docs/index.mdx
+++ b/docs/index.mdx
@@ -30,8 +30,8 @@ Still curious about [the benefits of using Timeplus](/why-timeplus)? Explore our
-
-
Ingest data →
+
+
Connect Data In →
Connect Timeplus to Apache Kafka, Apache Pulsar, Confluent Cloud, or push with a REST API, SDKs, and beyond.
@@ -62,7 +62,7 @@ Still curious about [the benefits of using Timeplus](/why-timeplus)? Explore our
-
+
Materialized Views →
Data streaming processing pipeline via streaming SQL. The results can be written to native Timeplus stream or external systems.
diff --git a/docs/kafka-external-stream-sink.mdx b/docs/kafka-external-stream-sink.mdx
new file mode 100644
index 000000000..fc7601549
--- /dev/null
+++ b/docs/kafka-external-stream-sink.mdx
@@ -0,0 +1,12 @@
+---
+id: kafka-sink
+title: Kafka Sink
+---
+
+import ExternalKafkaBasics from './shared/kafka-external-stream.md';
+import ExternalKafkaWrite from './shared/kafka-external-stream-write.md';
+import ExternalKafkaClientProperties from './shared/kafka-external-stream-client-properties.md';
+
+<ExternalKafkaBasics />
+<ExternalKafkaWrite />
+<ExternalKafkaClientProperties />
diff --git a/docs/kafka-external-stream-source.mdx b/docs/kafka-external-stream-source.mdx
new file mode 100644
index 000000000..8d565809e
--- /dev/null
+++ b/docs/kafka-external-stream-source.mdx
@@ -0,0 +1,12 @@
+---
+id: kafka-source
+title: Kafka Source
+---
+
+import ExternalKafkaBasics from './shared/kafka-external-stream.md';
+import ExternalKafkaRead from './shared/kafka-external-stream-read.md';
+import ExternalKafkaClientProperties from './shared/kafka-external-stream-client-properties.md';
+
+<ExternalKafkaBasics />
+<ExternalKafkaRead />
+<ExternalKafkaClientProperties />
diff --git a/docs/proton-schema-registry.md b/docs/kafka-schema-registry.md
similarity index 98%
rename from docs/proton-schema-registry.md
rename to docs/kafka-schema-registry.md
index 751a1a0b8..cb16627ca 100644
--- a/docs/proton-schema-registry.md
+++ b/docs/kafka-schema-registry.md
@@ -70,4 +70,4 @@ INSERT INTO my_ex_stream SETTINGS force_refresh_schema=true ...
```
:::
-For the data type mappings between Avro and Timeplus data type, please check [this doc](/proton-format-schema#avro_types).
+For the data type mappings between Avro and Timeplus data type, please check [this doc](/timeplus-format-schema#avro_types).
diff --git a/docs/log-stream.md b/docs/log-stream.md
index 44ea74bc4..9a9f21bd2 100644
--- a/docs/log-stream.md
+++ b/docs/log-stream.md
@@ -1,8 +1,10 @@
-# Log Files
+# Log External Stream
-You can use Timeplus as a lightweight and high-performance tool for log analysis. Please check [the blog](https://www.timeplus.com/post/log-stream-analysis) for more details.
+## Overview
-## Syntax
+You can use Timeplus as a lightweight and high-performance tool for application log analysis. Please check [the blog](https://www.timeplus.com/post/log-stream-analysis) for more details.
+
+## Create Log External Stream
Create an external stream with the log type to monitor log files, e.g.
diff --git a/docs/materialized-view.md b/docs/materialized-view.md
new file mode 100644
index 000000000..85658e734
--- /dev/null
+++ b/docs/materialized-view.md
@@ -0,0 +1,132 @@
+# Materialized View {#m_view}
+Real-time data pipelines are built via Materialized Views in Timeplus.
+
+The difference between a materialized view and a regular view is that a materialized view keeps running in the background after creation and the resulting stream is physically written to internal storage (hence the name 'materialized').
+
+Once the materialized view is created, Timeplus will run the query in the background continuously and incrementally emit the calculated results according to the semantics of its underlying streaming select.
+
+## Create a Materialized View
+
+```sql
+CREATE MATERIALIZED VIEW [IF NOT EXISTS] <view_name>
+[INTO <target_stream>]
+AS <SELECT ...>