Merged
docs/enterprise-v2.8.md (1 addition, 1 deletion)

@@ -125,7 +125,7 @@ Compared to the [2.8.0 (Preview)](#2_8_0) release:
 * Fix Kafka external stream parsing issue.
 * Improve mutable stream creation flow when defined via engine.
 * When using `CREATE OR REPLACE FORMAT SCHEMA` to update an existing schema, and using `DROP FORMAT SCHEMA` to delete a schema, Timeplus will clean up the Protobuf schema cache to avoid misleading errors.
-* Support writing Kafka message timestamp via [_tp_time](/proton-kafka#_tp_time)
+* Support writing Kafka message timestamp via [_tp_time](/proton-kafka)
 * Enable IPv6 support for KeyValueService
 * Simplified the [EMIT syntax](/streaming-aggregations#emit) to make it easier to read and use.
 * Support [EMIT ON UPDATE WITH DELAY](/streaming-aggregations#emit_on_update_with_delay)
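The `_tp_time` item in the hunk above refers to setting the Kafka message timestamp when writing to a Kafka external stream. A minimal sketch of what that looks like, assuming a Kafka external stream named `kafka_events` with a `raw` column already exists (stream and column names here are illustrative, not from the diff):

```sql
-- Hypothetical stream; assumes a Kafka external stream `kafka_events(raw string)` exists.
-- Supplying _tp_time explicitly should carry through as the Kafka message timestamp,
-- per the 2.8 change noted above.
INSERT INTO kafka_events (raw, _tp_time)
VALUES ('{"action":"login"}', to_datetime64('2024-01-15 10:00:00.000', 3));
```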
docs/external-stream.md (1 addition, 1 deletion)

@@ -2,7 +2,7 @@
 
 You can create **External Streams** in Timeplus to query data in the external systems without loading the data into Timeplus. The main benefit for doing so is to keep a single source of truth in the external systems (e.g. Apache Kafka), without duplicating them. In many cases, this can also achieve even lower latency to process Kafka or Pulsar data, because the data is read directly by Timeplus core engine, without other components, such as Redpanda Connect or [Airbyte](https://airbyte.com/connectors/timeplus).
 
-You can run streaming analytics with the external streams in the similar way as other streams, with [some limitations](/proton-kafka#limitations).
+You can run streaming analytics with the external streams in the similar way as other streams.
 
 Timeplus supports 4 types of external streams:
 * [Kafka External Stream](/proton-kafka)
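For context on the page being edited above: a Kafka external stream is defined with `CREATE EXTERNAL STREAM` plus Kafka settings, and then queried like any other stream. A minimal sketch, with broker address and topic name as placeholder assumptions:

```sql
-- Hypothetical broker/topic values; the type/brokers/topic settings follow
-- the Kafka external stream documentation.
CREATE EXTERNAL STREAM frontend_events (raw string)
SETTINGS type = 'kafka',
         brokers = 'localhost:9092',
         topic = 'owlshop-frontend-events';

-- Query it like a regular stream:
SELECT raw FROM frontend_events;
```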
docs/faq.md (1 addition, 1 deletion)

@@ -40,7 +40,7 @@ You can use tools like Debezium to send CDC messages to Timeplus, or just use `I
 
 ## How to work with JSON {#json}
 
-Proton supports powerful, yet easy-to-use JSON processing. You can save the entire JSON document as a `raw` column in `string` type. Then use JSON path as the shortcut to access those values as string. For example `raw:a.b.c`. If your data is in int/float/bool or other type, you can also use `::` to convert them. For example `raw:a.b.c::int`. If you want to read JSON documents in Kafka topics, you can choose to read each JSON as a `raw` string, or read each top level key/value pairs as columns. Please check the [doc](/proton-kafka#multi_col_read) for details.
+Proton supports powerful, yet easy-to-use JSON processing. You can save the entire JSON document as a `raw` column in `string` type. Then use JSON path as the shortcut to access those values as string. For example `raw:a.b.c`. If your data is in int/float/bool or other type, you can also use `::` to convert them. For example `raw:a.b.c::int`. If you want to read JSON documents in Kafka topics, you can choose to read each JSON as a `raw` string, or read each top level key/value pairs as columns. Please check the [doc](/proton-kafka) for details.
 
 <iframe width="560" height="315" src="https://www.youtube.com/embed/dTKr1-B5clg?si=eaeQ21SjY8JpUXID" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
 
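The `raw:a.b.c` and `raw:a.b.c::int` shortcuts quoted in the FAQ hunk above can be sketched in a query; the stream name `events` and the JSON keys are illustrative assumptions:

```sql
-- Assumes a stream `events` with a string column `raw` holding JSON documents.
SELECT
  raw:a.b.c,        -- JSON path access; the value comes back as a string
  raw:a.b.c::int    -- same path, cast to int with the :: shortcut
FROM events;
```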
docs/v1-release-notes.md (2 additions, 2 deletions)

@@ -139,7 +139,7 @@ _Timeplus Cloud:_
 _Proton (Current version: v1.4.2):_
 
 - Since Proton v1.4.2, we’ve added support to read or write ClickHouse tables. To do this, we’ve introduced a new concept in Proton: "External Table". Similar to [External Stream](/external-stream), no data is persisted in Proton. In the future, we will support more integration by introducing other types of External Table. [See our docs](/proton-clickhouse-external-table) for use cases and more details.
-- Based on user feedback, we’ve simplified the process of reading key/value pairs in the JSON document in a Kafka topic. You don’t need to define all keys as columns, and no need to set `input_format_skip_unknown_fields` in DDL or SQL. [Learn more](/proton-kafka#multi_col_read)
+- Based on user feedback, we’ve simplified the process of reading key/value pairs in the JSON document in a Kafka topic. You don’t need to define all keys as columns, and no need to set `input_format_skip_unknown_fields` in DDL or SQL. [Learn more](/proton-kafka)
 - For random streams, you can now define the EPS (event per second) as a number between 0 to 1. For example, eps=0.5 means generating an event every 2 seconds.
 - A new [extract_key_value_pairs](/functions_for_text#extract_key_value_pairs) function is added to extract key value pairs from a string to a map.
 - We’ve refined the anonymous telemetry configuration. Regardless if it’s a single binary or Docker deployment, you can set a `TELEMETRY_ENABLED` environment variable. The reporting interval is adjusted from 2 minutes to 5 minutes.
@@ -156,7 +156,7 @@ _Timeplus Cloud:_
 _Proton:_
 
 - Proton v1.4.1 is now released. Please note: you cannot use an older version of Proton client to connect to the new v1.4 Proton server — be sure to update your Proton client. All existing JDBC, ODBC, Go, and Python drivers will still work as usual.
-- (v1.3.31) Write to Kafka in plain text: you can now [produce raw format data](/proton-kafka#single_col_write) to a Kafka external stream with a single column.
+- (v1.3.31) Write to Kafka in plain text: you can now [produce raw format data](/proton-kafka) to a Kafka external stream with a single column.
 - (v1.3.31) By default, we disable sort for historical backfill. [Learn more](/query-settings) in our query guide, including how to enable.
 
 _Timeplus Cloud:_
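The fractional-EPS note in the first hunk above (eps=0.5 means one event every 2 seconds) can be sketched as a random-stream definition; the stream and column names are illustrative:

```sql
-- A random stream emitting roughly one event every 2 seconds (eps = 0.5),
-- per the fractional-EPS support described in the release notes above.
CREATE RANDOM STREAM slow_events (
  i int DEFAULT rand() % 5
)
SETTINGS eps = 0.5;
```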