diff --git a/docs/environment-variables.md b/docs/environment-variables.md index 85a18ec9..5f024739 100644 --- a/docs/environment-variables.md +++ b/docs/environment-variables.md @@ -15,7 +15,7 @@ OpenObserve is configured using the following environment variables. | ZO_LOCAL_MODE | true | If local mode is set to true, OpenObserve becomes single node deployment.If it is set to false, it indicates cluster mode deployment which supports multiple nodes with different roles. For local mode one needs to configure SQLite DB, for cluster mode one needs to configure PostgreSQL (recommended) or MySQL. | | ZO_LOCAL_MODE_STORAGE | disk | Applicable only for local mode. By default, local disk is used as storage. OpenObserve supports both disk and S3 in local mode. | | ZO_NODE_ROLE | all | Node role assignment. Possible values are ingester, querier, router, compactor, alertmanager, and all. A single node can have multiple roles by specifying them as a comma-separated list. For example, compactor, alertmanager. | -| ZO_NODE_ROLE_GROUP | "" | Each query-processing node can be assigned to a specific group using ZO_NODE_ROLE_GROUP.
- **interactive**: Handles queries triggered directly by users through the UI.
- **background**: Handles automated or scheduled queries, such as alerts and reports.
- **empty string** (default): Handles all query types.
+| ZO_NODE_ROLE_GROUP | "" | Each query-processing node can be assigned to a specific group using ZO_NODE_ROLE_GROUP.
- **interactive**: Handles queries triggered directly by users through the UI.
- **background**: Handles automated or scheduled queries, such as alerts and reports.
- **empty string** (default): Handles all query types.
In high-load environments, alerts or reports might run large, resource-intensive queries. By assigning dedicated groups, administrators can prevent such queries from blocking or slowing down real-time user searches. | | ZO_NODE_HEARTBEAT_TTL | 30 | Time-to-live (TTL) for node heartbeats in seconds. | | ZO_INSTANCE_NAME | - | In the cluster mode, each node has a instance name. Default is instance hostname. | @@ -87,9 +87,12 @@ In high-load environments, alerts or reports might run large, resource-intensive | ZO_MEM_TABLE_MAX_SIZE | 0 | Total size limit of all memtables. Multiple memtables exist for different organizations and stream types. Each memtable cannot exceed ZO_MAX_FILE_SIZE_IN_MEMORY, and the combined size cannot exceed this limit. If exceeded, the system returns a MemoryTableOverflowError to prevent out-of-memory conditions. Default is 50 percent of total memory. | | ZO_MEM_PERSIST_INTERVAL | 5 | Interval in seconds at which immutable memtables are persisted from memory to disk. Default is 5 seconds. | | ZO_FEATURE_SHARED_MEMTABLE_ENABLED | false | When set to true, it turns on the shared memtable feature and several organizations can use the same in-memory table instead of each organization creating its own. This helps reduce memory use when many organizations send data at the same time. It also works with older non-shared write-ahead log (WAL) files. | -| ZO_MEM_TABLE_BUCKET_NUM | 1 | This setting controls how many in-memory tables OpenObserve creates, and works differently depending on whether shared memtable is enabled or disabled.
**When ZO_FEATURE_SHARED_MEMTABLE_ENABLED is true (shared memtable enabled)**: OpenObserve creates the specified number of shared in-memory tables that all organizations use together.
- **If the number is higher**: OpenObserve creates more shared tables. Each table holds data from fewer organizations. This can make data writing faster because each table handles less data. However, it also uses more memory.
- **If the number is lower**: OpenObserve creates fewer shared tables. Each table holds data from more organizations. This saves memory but can make data writing slightly slower when many organizations send data at the same time. -
**When ZO_FEATURE_SHARED_MEMTABLE_ENABLED is false (shared memtable disabled)**: Each organization creates its own set of in-memory tables based on the ZO_MEM_TABLE_BUCKET_NUM value. -
For example, if ZO_MEM_TABLE_BUCKET_NUM is set to 4, each organization will create 4 separate in-memory tables. This is particularly useful when you have only one organization, as creating multiple in-memory tables for that single organization can improve ingestion performance.| +| ZO_MEM_TABLE_BUCKET_NUM | 1 | This setting controls how many in-memory tables OpenObserve creates, and works differently depending on whether shared memtable is enabled or disabled.
- **When ZO_FEATURE_SHARED_MEMTABLE_ENABLED is true (shared memtable enabled)**: OpenObserve creates the specified number of shared in-memory tables that all organizations use together.
**If the number is higher**: OpenObserve creates more shared tables. Each table holds data from fewer organizations. This can make data writing faster because each table handles less data. However, it also uses more memory.
+**If the number is lower**: OpenObserve creates fewer shared tables. Each table holds data from more organizations. This saves memory but can make data writing slightly slower when many organizations send data at the same time.
- **When ZO_FEATURE_SHARED_MEMTABLE_ENABLED is false (shared memtable disabled)**: +Each organization creates its own set of in-memory tables based on the ZO_MEM_TABLE_BUCKET_NUM value. +
+For example, if ZO_MEM_TABLE_BUCKET_NUM is set to 4, each organization will create 4 separate in-memory tables.
+This is particularly useful when you have only one organization, as creating multiple in-memory tables for that single organization can improve ingestion performance.|

## Indexing
| Environment Variable | Default Value | Description |
diff --git a/docs/images/click-explore-metrics.png b/docs/images/click-explore-metrics.png
new file mode 100644
index 00000000..dc6e6aff
Binary files /dev/null and b/docs/images/click-explore-metrics.png differ
diff --git a/docs/images/explore-metrics.png b/docs/images/explore-metrics.png
new file mode 100644
index 00000000..e90c6c09
Binary files /dev/null and b/docs/images/explore-metrics.png differ
diff --git a/docs/images/metrics-records.png b/docs/images/metrics-records.png
new file mode 100644
index 00000000..a9af4af5
Binary files /dev/null and b/docs/images/metrics-records.png differ
diff --git a/docs/images/view-custom-chart.png b/docs/images/view-custom-chart.png
new file mode 100644
index 00000000..0acd2a07
Binary files /dev/null and b/docs/images/view-custom-chart.png differ
diff --git a/docs/images/view-raw-metrics-data.png b/docs/images/view-raw-metrics-data.png
new file mode 100644
index 00000000..b044d0ce
Binary files /dev/null and b/docs/images/view-raw-metrics-data.png differ
diff --git a/docs/user-guide/dashboards/custom-charts/.pages b/docs/user-guide/dashboards/custom-charts/.pages
index 061e1df1..def797af 100644
--- a/docs/user-guide/dashboards/custom-charts/.pages
+++ b/docs/user-guide/dashboards/custom-charts/.pages
@@ -4,4 +4,4 @@ nav:
  - Custom Charts with Flat Data: custom-charts-flat-data.md
  - Custom Charts with Nested Data: custom-charts-nested-data.md
  - Event Handlers and Custom Functions: custom-charts-event-handlers-and-custom-functions.md
-
\ No newline at end of file
+ - Custom Charts for Metrics Using PromQL: custom-charts-for-metrics-using-promql.md
\ No newline at end of file
diff --git a/docs/user-guide/dashboards/custom-charts/custom-charts-for-metrics-using-promql.md b/docs/user-guide/dashboards/custom-charts/custom-charts-for-metrics-using-promql.md
new file mode 100644
index 00000000..179e1799
--- /dev/null
+++ b/docs/user-guide/dashboards/custom-charts/custom-charts-for-metrics-using-promql.md
@@ -0,0 +1,326 @@
+
This guide explains how to build custom charts for metrics in OpenObserve using PromQL. The goal is to help new and advanced users understand how raw metrics data is transformed into a fully rendered chart through predictable and repeatable steps.
+
## How metric data flows into a chart
Metrics data in OpenObserve follows a fixed transformation pipeline:

`Metrics data in OpenObserve > PromQL query > Matrix JSON > Transform into timestamp-value pairs > Render chart`

This data pipeline never changes. Only two things vary:

- The **PromQL query**
- The **JavaScript transformation logic** that prepares data based on the chart you want to build

The example uses the `container_cpu_time` metric and builds a time-series line chart.

## How to build a custom chart for metrics using PromQL

??? "Prerequisites"
    ### Prerequisites

    Before building a custom chart, ensure the following:

    - You have metrics data available in a metrics stream.
    - You know the basics of PromQL.
    - You know basic JavaScript because custom charts require writing JavaScript inside the editor.
    - You know the chart type you want to create and the data structure that chart expects.

??? "Step 1: Explore the metrics data"
    ### Step 1: Explore the metrics data

    OpenObserve stores metrics as time series with labels and values. To understand how your metrics look, explore them directly:

    1. Go to **Streams**.
    2. Click the **Metrics** tab.
    3. Navigate to the metrics stream. For example, `container_cpu_time`.
    ![explore-metrics](../../../images/explore-metrics.png)
    4. Click **Explore**.
    ![click-explore-metrics](../../../images/click-explore-metrics.png)
    This takes you to the **Logs** page and shows a time-series view:
    ![metrics-records](../../../images/metrics-records.png)<br>

    The two most important fields for charting are:

    - `timestamp`
    - `value`

    All charts ultimately use these two fields.

??? "Step 2: Decide the chart you want to build"
    ### Step 2: Decide the chart you want to build

    Before you write a query or JavaScript, you must decide the chart type because every chart expects a specific structure.

    For example:

    - A line chart requires `[timestamp, value]` pairs
    - A bar chart requires `[category, value]` pairs
    - A multi-series chart requires an array of datasets

    Knowing the expected structure helps you prepare the right PromQL query and the right JavaScript transformation.

??? "Step 3: Create a dashboard and select the metrics dataset"
    ### Step 3: Create a dashboard and select the metrics dataset

    1. In the left navigation panel, select **Dashboards** and open or create a dashboard.
    2. Add a panel and go to **Custom Chart** mode.
    3. In the **Fields** section on the left, set **Stream Type** to **metrics**.
    4. Select your metrics stream from the dropdown. For example: `container_cpu_time`

    This ensures that the PromQL query will run against the correct metrics dataset.

??? "Step 4: Query and view your PromQL data"
    ### Step 4: Query and view your PromQL data
    Before building any chart, you must query the required metric. You can view the raw PromQL response to understand the structure that your JavaScript code must transform.

    1. Navigate to the bottom of the panel editor.
    2. The query editor section appears with two modes, **PromQL** and **Custom SQL**.
    3. Click **PromQL** to switch the editor into PromQL mode.
    4. In the PromQL editor, enter a PromQL expression. For example: `container_cpu_time{}`
    5. To understand the data structure returned by the PromQL query, paste the following JavaScript in the code editor:
    ```js linenums="1"
    console.clear();
    console.log("=== RAW DATA ARRAY ===");
    console.log(data);

    // Pretty JSON view
    console.log("=== RAW DATA (Pretty JSON) ===");
    console.log(JSON.stringify(data, null, 2));

    // Print first query object safely
    if (Array.isArray(data) && data.length > 0) {
      console.log("=== FIRST QUERY OBJECT ===");
      console.dir(data[0]); // Inspect the first query object
    }

    // Minimal valid option to avoid rendering errors
    option = {
      xAxis: { type: "time" },
      yAxis: { type: "value" },
      series: []
    };
    ```
    6. Select the time range in the time range selector.
    7. Open your browser developer tools. Right-click anywhere inside the dashboard and select **Inspect**.
    8. Open the **Console** tab.
    9. In the panel editor, click **Apply**.
    ![view-raw-metrics-data](../../../images/view-raw-metrics-data.png)
    You can now see the complete raw PromQL response.

    !!! note "How to interpret it"
        OpenObserve returns PromQL data in the following structure:
        ```js linenums="1"
        [
          {
            resultType: "matrix",
            result: [
              {
                metric: { ...labels... },
                values: [
                  [timestamp, value],
                  ...
                ]
              }
            ]
          }
        ]
        ```
        Here:

        - The outer array represents all PromQL queries in the panel. If you run one query, the array contains one item.
        - `resultType: "matrix"` indicates that PromQL returned time-series data.
        - The `result` array contains one entry for each time series in the query result.
        - Each `metric` object contains the labels that identify the series, such as `k8s_pod_name`, `container_id`, or `service_name`.
        - The `values` array contains the actual time-series datapoints. Each entry is `[timestamp, value]`, where:

            - `timestamp` is in Unix seconds
            - `value` is the metric value at that moment

        This structure does not change. All metric visualizations in custom charts follow this same model. This is the starting point for all PromQL-based custom charts.
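    If you want to sanity-check the response before transforming it, you can extend the inspection script above with a few extra lines. This is an optional sketch and assumes the matrix structure shown in the note:

    ```js linenums="1"
    // Optional sanity check: report how many series and datapoints came back
    const result = (Array.isArray(data) && data[0] && data[0].result) || [];
    console.log("series returned:", result.length);
    console.log(
      "datapoints in first series:",
      result[0] && Array.isArray(result[0].values) ? result[0].values.length : 0
    );
    ```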

??? "Step 5: Understand how to transform the data and render the chart"
    ### Step 5: Understand how to transform the data and render the chart
    Now that you have inspected the raw PromQL response, you can prepare the data and build a chart.
    Every PromQL-based custom chart in OpenObserve follows the same pipeline:
    `data > transform > series > option > chart`
    The following subsections explain each part in the correct order.

    #### `data`: The raw PromQL matrix
    This is the starting point. The `data` object is automatically available inside your custom chart editor. It holds the raw response from your PromQL query.

    As shown in Step 4, you will see the `data` object in the following structure:
    ```js linenums="1"
    [
      {
        "resultType": "matrix",
        "result": [
          {
            "metric": {
              "k8s_pod_name": "o2c-openobserve-collector-agent-collector-rkggr",
              "container_id": "d622222c9880db586ef3a81614ef720b5030e5a4c404ff89d1616abc117cf867"
            },
            "values": [
              [1763035098, "39370.53"],
              [1763035101, "39370.53"],
              ...
            ]
          }
        ]
      }
    ]
    ```
    Here:

    - Each object inside `result` represents one metric series.
    - The `metric` object holds all identifying labels.
    - The `values` array holds the actual time-series data as `[timestamp, value]`.

    #### Transformation: Convert raw datapoints into chart-friendly points
    This is where you prepare the data for visualization. The chart that you want to build expects the data in a specific format, where each point is `[x, y]`:

    - `x` > time (in ISO format)
    - `y` > numeric value

    Perform the following conversion in JavaScript:
    ```js linenums="1"
    const points = item.values.map(([timestamp, value]) => [
      new Date(timestamp * 1000).toISOString(),
      Number(value)
    ]);
    ```
    After this step, you have clean, chart-ready data such as:
    ```js linenums="1"
    [
      ["2025-11-13T09:18:00Z", 39370.53],
      ["2025-11-13T09:18:03Z", 39370.80]
    ]
    ```
    !!! note "Note"
        Every chart type, whether line, bar, or scatter, starts with this transformation. Only how you display it changes later.

    #### `series`: Build one chart series per metric
    `series` is an array you create in your JavaScript code. Each entry in `series` describes one visual element, such as a line, a bar set, or a scatter set.

    Each entry has:

    - A `name` for the legend
    - A `type` such as `line`
    - A `data` array with the points you want to plot

    For example:

    ```js linenums="1"
    series.push({
      name: item.metric.k8s_pod_name || "default",
      type: "line",
      data: points,
      smooth: true,
      showSymbol: false
    });
    ```

    #### `option`: Define the final chart configuration
    `option` defines how the chart looks and behaves. It tells the system which axes to use, whether to display tooltips or legends, and how to organize the visual elements.
    ```js linenums="1"
    option = {
      tooltip: { trigger: "axis" },
      legend: { type: "scroll" },
      xAxis: { type: "time", name: "Time" },
      yAxis: { type: "value", name: "CPU Time" },
      series
    };
    ```
    The `series` array you built earlier is now linked here.

??? "Step 6: Transform the data and render the chart"
    ### Step 6: Transform the data and render the chart

    Here is the complete JavaScript code example that combines all the steps described in Step 5.
+
+ + **PromQL query:** + ``` + container_cpu_time{} + ``` +
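    The complete example below reads only the first query result, `data[0]`. If you add more than one PromQL query to the panel, you could loop over every entry in `data` instead. The following fragment is only a sketch of that variation and is not required for the single-query example:

    ```js linenums="1"
    // Sketch: handle panels that contain more than one PromQL query
    if (Array.isArray(data)) {
      for (const query of data) {
        if (!query || !Array.isArray(query.result)) continue;
        for (const item of query.result) {
          // apply the same per-series transformation used in the example below
        }
      }
    }
    ```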
+ + **JavaScript code:** + + ```js linenums="1" + // Step 1: prepare an empty list of series + const series = []; + + // Step 2: read the PromQL response from OpenObserve + if (Array.isArray(data) && data.length > 0) { + const query = data[0]; + if (query.result && Array.isArray(query.result)) { + for (const item of query.result) { + if (!Array.isArray(item.values)) { + continue; + } + + // Step 3: convert [timestamp, value] to [ISO time, number] + const points = item.values.map(([timestamp, value]) => [ + new Date(timestamp * 1000).toISOString(), + Number(value) + ]); + + // Step 4: choose a label for the legend + const name = + item.metric.k8s_pod_name || + item.metric.container_id || + "unknown"; + + // Step 5: add one line series for this metric + series.push({ + name: name, + type: "line", + data: points, + smooth: true, + showSymbol: false + }); + + } + + } + + } + + // Step 6: define how the chart should be drawn + + option = { + tooltip: { trigger: "axis" }, + legend: { type: "scroll", top: "top" }, + xAxis: { type: "time", name: "Time" }, + yAxis: { type: "value", name: "Value" }, + series: series + }; + ``` + + The line chart uses `[timestamp, value]` pairs and plots each metric as a line across time. + +??? "Step 7: View the result" + ### Step 7: View the result + ![view-custom-chart](../../../images/view-custom-chart.png) + Select the time range from the time range selector and click **Apply** to render your chart. + + Each unique metric label combination will appear as a separate line. + +!!! note "Note" + You can use the same JavaScript code to create other charts that use [timestamp, value]. For example, bar charts or scatter charts. Only change the **type** in the above JavaScript code: + ``` + type: "bar" + ``` + or + + ``` + type: "scatter" + ``` \ No newline at end of file diff --git a/docs/user-guide/logs/explain-analyze-query.md b/docs/user-guide/logs/explain-analyze-query.md index a684ec0b..f98c915c 100644 --- a/docs/user-guide/logs/explain-analyze-query.md +++ b/docs/user-guide/logs/explain-analyze-query.md @@ -116,16 +116,20 @@ The Physical Plan shows how OpenObserve executes your query, including the speci ![physical-plan](../../images/physical-plan.png) !!! note "Common operators you will see:" - - **DataSourceExec**: Reads data from storage. - - **RemoteScanExec**: Reads data from distributed partitions or remote nodes. - - **FilterExec**: Applies filtering operations. - - **ProjectionExec**: Handles column selection and expression computation. - - **AggregateExec**: Performs aggregation operations. May show `mode=Partial` or `mode=FinalPartitioned`. - - **RepartitionExec**: Redistributes data across partitions. May show `Hash([column], N)` or `RoundRobinBatch(N)`. - - **CoalesceBatchesExec**: Combines data batches. - - **SortExec**: Sorts data. May show `TopK(fetch=N)` for optimized sorting. - - **SortPreservingMergeExec**: Merges sorted data streams. - - **CooperativeExec**: Coordinates distributed execution. 
+ + - **DataSourceExec**: Reads data from storage + - **RemoteScanExec**: Reads data from distributed partitions or remote nodes + - **FilterExec**: Applies filtering operations + - **ProjectionExec**: Handles column selection and expression computation + - **AggregateExec**: Performs aggregation operations + - May show `mode=Partial` or `mode=FinalPartitioned` + - **RepartitionExec**: Redistributes data across partitions + - May show `Hash([column], N)` or `RoundRobinBatch(N)` + - **CoalesceBatchesExec**: Combines data batches + - **SortExec**: Sorts data + - May show `TopK(fetch=N)` for optimized sorting + - **SortPreservingMergeExec**: Merges sorted data streams + - **CooperativeExec**: Coordinates distributed execution ---