33 changes: 32 additions & 1 deletion docs/user-guide/ingest-data/for-observability/loki.md
@@ -125,4 +125,35 @@ WITH(
append_mode = 'true'
)
1 row in set (0.00 sec)
```

## Using pipelines in the Loki push API

:::warning Experimental Feature
This experimental feature may contain unexpected behavior, and its functionality may change in the future.
:::

Starting from `v0.15`, GreptimeDB supports using pipelines to process Loki push requests.
To enable pipeline processing, set the HTTP header `x-greptime-pipeline-name` to the name of the target pipeline.

Note that when request data goes through the pipeline engine, GreptimeDB prefixes the label and structured metadata column names:
- `loki_label_` before each label name
- `loki_metadata_` before each structured metadata name
- the original Loki log line is named `loki_line`

An example of the data model produced by the `greptime_identity` pipeline looks like the following:
```
mysql> select * from loki_logs limit 1;
+----------------------------+---------------------+---------------------------+---------------------------------------------------------------------------+
| greptime_timestamp | loki_label_platform | loki_label_service_name | loki_line |
+----------------------------+---------------------+---------------------------+---------------------------------------------------------------------------+
| 2025-07-15 11:40:26.651141 | docker | docker-monitoring-alloy-1 | ts=2025-07-15T11:40:15.532342849Z level=info "boringcrypto enabled"=false |
+----------------------------+---------------------+---------------------------+---------------------------------------------------------------------------+
1 row in set (0.00 sec)
```

You can see that the label column names are prefixed with `loki_label_`, and the actual log line is stored in the `loki_line` column.
You can also use a custom pipeline to process the data; it works like any other pipeline flow, as the sketch below shows.
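
For example, here is a minimal sketch of such a custom pipeline. It assumes the prefixed column names shown above, the stock VRL `parse_key_value` function for logfmt-style lines, and the same `greptime_timestamp` time index as in the `greptime_identity` output; the promoted `level` column is illustrative:
```YAML
version: 2
processors:
  - vrl:
      source: |
        # parse the logfmt-style Loki line and promote `level` to its own column
        parsed, err = parse_key_value(string!(.loki_line))
        if err == null {
            .level = parsed.level
        }
        .

transform:
  - field: greptime_timestamp
    type: time, ms
    index: timestamp
```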

Refer to the [pipeline documentation](/user-guide/logs/pipeline-config.md) for more details.
38 changes: 38 additions & 0 deletions docs/user-guide/ingest-data/for-observability/prometheus.md
@@ -217,6 +217,44 @@ It can also be helpful to group metrics by their frequency.
Note that each metric's logical table is bound to a physical table upon creation, so configuring a different physical table for the same metric within the same database won't take effect.

## Using pipelines in remote write

:::warning Experimental Feature
This experimental feature may contain unexpected behavior, and its functionality may change in the future.
:::

Starting from `v0.15`, GreptimeDB supports using pipelines to process Prometheus remote write requests.
To enable pipeline processing, set the HTTP header `x-greptime-pipeline-name` to the name of the target pipeline, for example as sketched below.
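
Here is a minimal sketch of a Prometheus `remote_write` section that sets this header; the GreptimeDB URL and the pipeline name `my_pipeline` are placeholders for your own deployment:
```YAML
remote_write:
  - url: http://localhost:4000/v1/prometheus/write?db=public
    headers:
      # hypothetical pipeline name; replace with a pipeline you have created
      x-greptime-pipeline-name: my_pipeline
```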

Here is a very simple pipeline configuration that uses the `vrl` processor to add a `source` label to each metric:
```YAML
version: 2
processors:
  - vrl:
      source: |
        .source = "local_laptop"
        .

transform:
  - field: greptime_timestamp
    type: time, ms
    index: timestamp
```

The result looks like this:
```
mysql> select * from `go_memstats_mcache_inuse_bytes`;
+----------------------------+----------------+--------------------+---------------+--------------+
| greptime_timestamp | greptime_value | instance | job | source |
+----------------------------+----------------+--------------------+---------------+--------------+
| 2025-07-11 07:42:03.064000 | 1200 | node_exporter:9100 | node-exporter | local_laptop |
| 2025-07-11 07:42:18.069000 | 1200 | node_exporter:9100 | node-exporter | local_laptop |
+----------------------------+----------------+--------------------+---------------+--------------+
2 rows in set (0.01 sec)
```
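
Beyond adding a label, the same mechanism can rewrite or drop labels. Here is a hedged sketch, assuming labels arrive as top-level fields as in the example above; the exact fields you touch will depend on your data:
```YAML
version: 2
processors:
  - vrl:
      source: |
        # normalize the job label and drop the instance label before storage
        .job = downcase(string!(.job))
        del(.instance)
        .

transform:
  - field: greptime_timestamp
    type: time, ms
    index: timestamp
```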

Refer to the [pipeline documentation](/user-guide/logs/pipeline-config.md) for more details.

## Performance tuning

By default, the metric engine will automatically create a physical table named `greptime_physical_table` if it does not already exist. For performance optimization, you may choose to create a physical table with customized configurations.
@@ -124,4 +124,35 @@ WITH(
append_mode = 'true'
)
1 row in set (0.00 sec)
```

## Using pipelines in the Loki push API

:::warning Experimental Feature
This experimental feature may contain unexpected behavior, and its functionality may change in the future.
:::

Starting from `v0.15`, GreptimeDB supports using pipelines to process Loki push requests.
To enable pipeline processing, set the HTTP header `x-greptime-pipeline-name` to the name of the pipeline you want to execute.

Note that when the pipeline flow is used, GreptimeDB prefixes the label and structured metadata column names:
- label columns are prefixed with `loki_label_`
- structured metadata columns are prefixed with `loki_metadata_`
- the original Loki log line itself is named `loki_line`

A data sample using `greptime_identity` looks like the following:
```
mysql> select * from loki_logs limit 1;
+----------------------------+---------------------+---------------------------+---------------------------------------------------------------------------+
| greptime_timestamp | loki_label_platform | loki_label_service_name | loki_line |
+----------------------------+---------------------+---------------------------+---------------------------------------------------------------------------+
| 2025-07-15 11:40:26.651141 | docker | docker-monitoring-alloy-1 | ts=2025-07-15T11:40:15.532342849Z level=info "boringcrypto enabled"=false |
+----------------------------+---------------------+---------------------------+---------------------------------------------------------------------------+
1 row in set (0.00 sec)
```

You can see that the label column names are prefixed with `loki_label_`, and the actual log column is named `loki_line`.
You can use a custom pipeline to process the data; it works just like any other pipeline flow, as the sketch below shows.
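
A minimal sketch of such a pipeline, assuming the prefixed column names shown above; it renames the `loki_label_service_name` column to a shorter `service` column (the target name is illustrative):
```YAML
version: 2
processors:
  - vrl:
      source: |
        # `del` returns the removed value, so this renames the column
        .service = del(.loki_label_service_name)
        .

transform:
  - field: greptime_timestamp
    type: time, ms
    index: timestamp
```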

For more configuration details, refer to the [pipeline documentation](/user-guide/logs/pipeline-config.md).
@@ -208,6 +208,44 @@ GreptimeDB can recognize certain label names and convert them at write time into

Note that each metric's logical table is associated with a physical table at creation time; configuring a different physical table for the same metric within the same database won't take effect.

## Using pipelines in remote write

:::warning Experimental Feature
This experimental feature may contain unexpected behavior, and its functionality may change in the future.
:::

Starting from `v0.15`, GreptimeDB supports using pipelines to process data at the Prometheus Remote Write entry point.
To enable pipeline processing, set the HTTP header `x-greptime-pipeline-name` to the name of the pipeline you want to execute.

Here is a very simple example pipeline configuration that uses the `vrl` processor to add a `source` label to each metric:
```YAML
version: 2
processors:
  - vrl:
      source: |
        .source = "local_laptop"
        .

transform:
  - field: greptime_timestamp
    type: time, ms
    index: timestamp
```

The result looks like this:
```
mysql> select * from `go_memstats_mcache_inuse_bytes`;
+----------------------------+----------------+--------------------+---------------+--------------+
| greptime_timestamp | greptime_value | instance | job | source |
+----------------------------+----------------+--------------------+---------------+--------------+
| 2025-07-11 07:42:03.064000 | 1200 | node_exporter:9100 | node-exporter | local_laptop |
| 2025-07-11 07:42:18.069000 | 1200 | node_exporter:9100 | node-exporter | local_laptop |
+----------------------------+----------------+--------------------+---------------+--------------+
2 rows in set (0.01 sec)
```
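
The same approach extends to conditional labels. A hedged sketch that derives an `env` label from the existing `job` label; the naming convention is assumed, not prescribed:
```YAML
version: 2
processors:
  - vrl:
      source: |
        # derive an `env` label from the job name; the rule is illustrative
        .env = if contains(string!(.job), "prod") { "prod" } else { "dev" }
        .

transform:
  - field: greptime_timestamp
    type: time, ms
    index: timestamp
```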

For more configuration details, refer to the [pipeline documentation](/user-guide/logs/pipeline-config.md).

## Performance tuning

By default, the metric engine automatically creates a physical table named `greptime_physical_table`.