diff --git a/docs/environment-variables.md b/docs/environment-variables.md
index 85a18ec9..5f024739 100644
--- a/docs/environment-variables.md
+++ b/docs/environment-variables.md
@@ -15,7 +15,7 @@ OpenObserve is configured using the following environment variables.
| ZO_LOCAL_MODE | true | If local mode is set to true, OpenObserve becomes a single-node deployment. If it is set to false, it indicates cluster mode deployment, which supports multiple nodes with different roles. For local mode, one needs to configure SQLite DB; for cluster mode, one needs to configure PostgreSQL (recommended) or MySQL. |
| ZO_LOCAL_MODE_STORAGE | disk | Applicable only for local mode. By default, local disk is used as storage. OpenObserve supports both disk and S3 in local mode. |
| ZO_NODE_ROLE | all | Node role assignment. Possible values are ingester, querier, router, compactor, alertmanager, and all. A single node can have multiple roles by specifying them as a comma-separated list. For example, compactor, alertmanager. |
-| ZO_NODE_ROLE_GROUP | "" | Each query-processing node can be assigned to a specific group using ZO_NODE_ROLE_GROUP.
- **interactive**: Handles queries triggered directly by users through the UI.
- **background**: Handles automated or scheduled queries, such as alerts and reports.
- **empty string** (default): Handles all query types.
+| ZO_NODE_ROLE_GROUP | "" | Each query-processing node can be assigned to a specific group using ZO_NODE_ROLE_GROUP.
- **interactive**: Handles queries triggered directly by users through the UI.
- **background**: Handles automated or scheduled queries, such as alerts and reports.
- **empty string** (default): Handles all query types.
In high-load environments, alerts or reports might run large, resource-intensive queries. By assigning dedicated groups, administrators can prevent such queries from blocking or slowing down real-time user searches. |
| ZO_NODE_HEARTBEAT_TTL | 30 | Time-to-live (TTL) for node heartbeats in seconds. |
| ZO_INSTANCE_NAME | - | In cluster mode, each node has an instance name. Default is the instance hostname. |
@@ -87,9 +87,12 @@ In high-load environments, alerts or reports might run large, resource-intensive
| ZO_MEM_TABLE_MAX_SIZE | 0 | Total size limit of all memtables. Multiple memtables exist for different organizations and stream types. Each memtable cannot exceed ZO_MAX_FILE_SIZE_IN_MEMORY, and the combined size cannot exceed this limit. If exceeded, the system returns a MemoryTableOverflowError to prevent out-of-memory conditions. Default is 50 percent of total memory. |
| ZO_MEM_PERSIST_INTERVAL | 5 | Interval in seconds at which immutable memtables are persisted from memory to disk. Default is 5 seconds. |
| ZO_FEATURE_SHARED_MEMTABLE_ENABLED | false | When set to true, it turns on the shared memtable feature and several organizations can use the same in-memory table instead of each organization creating its own. This helps reduce memory use when many organizations send data at the same time. It also works with older non-shared write-ahead log (WAL) files. |
-| ZO_MEM_TABLE_BUCKET_NUM | 1 | This setting controls how many in-memory tables OpenObserve creates, and works differently depending on whether shared memtable is enabled or disabled.
**When ZO_FEATURE_SHARED_MEMTABLE_ENABLED is true (shared memtable enabled)**: OpenObserve creates the specified number of shared in-memory tables that all organizations use together.
- **If the number is higher**: OpenObserve creates more shared tables. Each table holds data from fewer organizations. This can make data writing faster because each table handles less data. However, it also uses more memory.
- **If the number is lower**: OpenObserve creates fewer shared tables. Each table holds data from more organizations. This saves memory but can make data writing slightly slower when many organizations send data at the same time.
-
**When ZO_FEATURE_SHARED_MEMTABLE_ENABLED is false (shared memtable disabled)**: Each organization creates its own set of in-memory tables based on the ZO_MEM_TABLE_BUCKET_NUM value.
-
For example, if ZO_MEM_TABLE_BUCKET_NUM is set to 4, each organization will create 4 separate in-memory tables. This is particularly useful when you have only one organization, as creating multiple in-memory tables for that single organization can improve ingestion performance.|
+| ZO_MEM_TABLE_BUCKET_NUM | 1 | This setting controls how many in-memory tables OpenObserve creates, and works differently depending on whether shared memtable is enabled or disabled.
+- **When ZO_FEATURE_SHARED_MEMTABLE_ENABLED is true (shared memtable enabled)**: OpenObserve creates the specified number of shared in-memory tables that all organizations use together.
+    - **If the number is higher**: OpenObserve creates more shared tables. Each table holds data from fewer organizations. This can make data writing faster because each table handles less data. However, it also uses more memory.
+    - **If the number is lower**: OpenObserve creates fewer shared tables. Each table holds data from more organizations. This saves memory but can make data writing slightly slower when many organizations send data at the same time.
+- **When ZO_FEATURE_SHARED_MEMTABLE_ENABLED is false (shared memtable disabled)**: Each organization creates its own set of in-memory tables based on the ZO_MEM_TABLE_BUCKET_NUM value.
+For example, if ZO_MEM_TABLE_BUCKET_NUM is set to 4, each organization creates 4 separate in-memory tables. This is particularly useful when you have only one organization, as creating multiple in-memory tables for that single organization can improve ingestion performance. |
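+
+For example, a deployment with a single organization might dedicate multiple memtables to it with a configuration such as the following (a sketch; the variable names are described in the table above, and the values are illustrative):
+
+```
+ZO_FEATURE_SHARED_MEMTABLE_ENABLED=false
+ZO_MEM_TABLE_BUCKET_NUM=4     # 4 in-memory tables for the single organization
+ZO_MEM_PERSIST_INTERVAL=5     # persist immutable memtables every 5 seconds
+```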
## Indexing
| Environment Variable | Default Value | Description |
diff --git a/docs/images/click-explore-metrics.png b/docs/images/click-explore-metrics.png
new file mode 100644
index 00000000..dc6e6aff
Binary files /dev/null and b/docs/images/click-explore-metrics.png differ
diff --git a/docs/images/distinct-values-access.png b/docs/images/distinct-values-access.png
new file mode 100644
index 00000000..2dad0bec
Binary files /dev/null and b/docs/images/distinct-values-access.png differ
diff --git a/docs/images/explore-metrics.png b/docs/images/explore-metrics.png
new file mode 100644
index 00000000..e90c6c09
Binary files /dev/null and b/docs/images/explore-metrics.png differ
diff --git a/docs/images/metadata-distinct-values.png b/docs/images/metadata-distinct-values.png
new file mode 100644
index 00000000..708489e7
Binary files /dev/null and b/docs/images/metadata-distinct-values.png differ
diff --git a/docs/images/metrics-records.png b/docs/images/metrics-records.png
new file mode 100644
index 00000000..a9af4af5
Binary files /dev/null and b/docs/images/metrics-records.png differ
diff --git a/docs/images/view-custom-chart.png b/docs/images/view-custom-chart.png
new file mode 100644
index 00000000..0acd2a07
Binary files /dev/null and b/docs/images/view-custom-chart.png differ
diff --git a/docs/images/view-raw-metrics-data.png b/docs/images/view-raw-metrics-data.png
new file mode 100644
index 00000000..b044d0ce
Binary files /dev/null and b/docs/images/view-raw-metrics-data.png differ
diff --git a/docs/user-guide/actions/actions-in-openobserve.md b/docs/user-guide/actions/actions-in-openobserve.md
index edaa739b..2d2fcd98 100644
--- a/docs/user-guide/actions/actions-in-openobserve.md
+++ b/docs/user-guide/actions/actions-in-openobserve.md
@@ -6,7 +6,7 @@ description: >-
This guide explains what Actions are, their types, and use cases.
!!! info "Availability"
- This feature is available in Enterprise Edition and Cloud. Not available in Open Source.
+    This feature is available in Enterprise Edition. It is not available in Open Source or Cloud.
## What are Actions
Actions in OpenObserve are user-defined Python scripts that support custom automation workflows. They can be applied to log data directly from the Logs UI or used as alert destinations.
diff --git a/docs/user-guide/dashboards/custom-charts/.pages b/docs/user-guide/dashboards/custom-charts/.pages
index 061e1df1..def797af 100644
--- a/docs/user-guide/dashboards/custom-charts/.pages
+++ b/docs/user-guide/dashboards/custom-charts/.pages
@@ -4,4 +4,4 @@ nav:
- Custom Charts with Flat Data: custom-charts-flat-data.md
- Custom Charts with Nested Data: custom-charts-nested-data.md
- Event Handlers and Custom Functions: custom-charts-event-handlers-and-custom-functions.md
-
\ No newline at end of file
+ - Custom charts for metrics using PromQL: custom-charts-for-metrics-using-promql.md
\ No newline at end of file
diff --git a/docs/user-guide/dashboards/custom-charts/custom-charts-for-metrics-using-promql.md b/docs/user-guide/dashboards/custom-charts/custom-charts-for-metrics-using-promql.md
new file mode 100644
index 00000000..4845249c
--- /dev/null
+++ b/docs/user-guide/dashboards/custom-charts/custom-charts-for-metrics-using-promql.md
@@ -0,0 +1,319 @@
+This guide explains how to build custom charts for metrics in OpenObserve using PromQL. The goal is to help new and advanced users understand how raw metrics data transforms into a fully rendered chart through predictable and repeatable steps.
+
+## How metric data flows into a chart
+Metrics data in OpenObserve follows a fixed transformation flow:
+
+`Metrics data in OpenObserve > PromQL query > Matrix JSON > Transform into timestamp-value pairs > Render chart`
+
+This data flow never changes. Only two things vary:
+
+- The **PromQL query**
+- The **JavaScript transformation logic** that prepares data based on the chart you want to build
+
+The example uses the `container_cpu_time` metric and builds a time-series line chart.
+
+## How to build the custom chart for metrics using PromQL
+
+??? "Prerequisites"
+ ### Prerequisites
+
+ Before building a custom chart, ensure the following:
+
+ - You have metrics data available in a metrics stream.
+ - You know the basics of PromQL.
+ - You know basic JavaScript because custom charts require writing JavaScript inside the editor.
+ - You know the chart type you want to create and the data structure that chart expects.
+
+??? "Step 1: Explore the metrics data"
+ ### Step 1: Explore the metrics data
+
+ OpenObserve stores metrics as time series with labels and values. To understand how your metrics look, explore them directly:
+
+ 1. Go to **Streams**.
+ 2. Click the **Metrics** tab.
+ 3. Navigate to the metrics stream. For example, `container_cpu_time`
+ 
+    4. Click **Explore**.
+ 
+ This takes you to the **Logs** page and shows a time-series view:
+ 
+
+
+ The two most important fields for charting are:
+
+ - `timestamp`
+ - `value`
+
+ All charts ultimately use these two fields.
+
+??? "Step 2: Decide the chart you want to build"
+ ### Step 2: Decide the chart you want to build
+
+ Before you write a query or JavaScript, you must decide the chart type because every chart expects a specific structure.
+
+ For example:
+
+ - A line chart requires `[timestamp, value]` pairs
+ - A bar chart requires `[category, value]` pairs
+ - A multi-series chart requires an array of datasets
+
+ Knowing the expected structure helps you prepare the right PromQL query and the right JavaScript transformation.
+
+
+??? "Step 3: Create a dashboard and select the metrics dataset"
+ ### Step 3: Create a dashboard and select the metrics dataset
+
+ 1. In the left navigation panel, select **Dashboards** and open or create a dashboard.
+ 2. Add a panel and go to **Custom Chart** mode.
+ 3. In the **Fields** section on the left, set **Stream Type** to **metrics**.
+ 4. Select your metrics stream from the dropdown. For example: `container_cpu_time`
+
+ This ensures that the PromQL query will run against the correct metrics dataset.
+
+??? "Step 4: Query and view your PromQL data"
+ ### Step 4: Query and view your PromQL data
+ Before building any chart, you must query the required metric. You can view the raw PromQL response to understand the structure that your JavaScript code must transform.
+
+ 1. Navigate to the bottom of the panel editor.
+ 2. The query editor section appears with two modes, **PromQL** and **Custom SQL**.
+ 3. Click **PromQL** to switch the editor into PromQL mode.
+ 4. In the PromQL editor, enter a PromQL expression. For example: `container_cpu_time{}`
+ 5. To understand the data structure returned by the PromQL query, paste the following JavaScript in the code editor:
+ ```js linenums="1"
+ console.clear();
+ console.log("=== RAW DATA ARRAY ===");
+ console.log(data);
+
+ // Pretty JSON view
+ console.log("=== RAW DATA (Pretty JSON) ===");
+ console.log(JSON.stringify(data, null, 2));
+
+ // Print first query object safely
+ if (Array.isArray(data) && data.length > 0) {
+ console.log("=== FIRST QUERY OBJECT ===");
+        console.dir(data[0]); // inspect the first query object
+ }
+
+ // Minimal valid option to avoid rendering errors
+ option = {
+ xAxis: { type: "time" },
+ yAxis: { type: "value" },
+ series: []
+ };
+ ```
+ 6. Select the time range in the time range selector.
+    7. Open your browser developer tools. Right-click anywhere inside the dashboard and select **Inspect**.
+    8. Open the **Console** tab.
+    9. In the panel editor, click **Apply**.
+ 
+    The console displays the complete raw PromQL response.
+
+ !!! note "How to interpret it"
+ OpenObserve returns PromQL data in the following structure:
+ ```js linenums="1"
+ [
+ {
+ resultType: "matrix",
+ result: [
+ {
+ metric: { ...labels... },
+ values: [
+ [timestamp, value],
+ ...
+ ]
+ }
+ ]
+ }
+ ]
+ ```
+ Here,
+
+ - The outer array represents all PromQL queries in the panel. If you run one query, the array contains one item.
+ - `resultType`: "matrix" indicates that PromQL returned time-series data.
+ - The `result` array contains one entry for each time series in the query result.
+ - Each metric object contains the labels that identify the series, such as `k8s_pod_name`, `container_id`, or `service_name`.
+ - The `values` array contains the actual time-series datapoints. Each entry is `[timestamp, value]` where:
+
+ - `timestamp` is in Unix seconds
+ - `value` is the metric value at that moment
+
+ This structure does not change. All metric visualizations in custom charts follow this same model. This is the starting point for all PromQL-based custom charts.
+
+??? "Step 5: Understand how to transform the data and render the chart"
+ ### Step 5: Understand how to transform the data and render the chart
+    Now that you have inspected the raw PromQL response, you can prepare the data and build a chart.
+
+    Every PromQL-based custom chart in OpenObserve follows the same flow:
+
+    `data > transform > series > option > chart`
+
+    The following subsections explain each part in the correct order.
+
+ #### `data`: The raw PromQL matrix
+    This is the starting point. The `data` object is automatically available inside your custom chart editor. It holds the raw response from your PromQL query.
+
+ As shown in step 4, you will see the `data` object in the following structure:
+ ```js linenums="1"
+ [
+ {
+ "resultType": "matrix",
+ "result": [
+ {
+ "metric": {
+ "k8s_pod_name": "o2c-openobserve-collector-agent-collector-rkggr",
+ "container_id": "d622222c9880db586ef3a81614ef720b5030e5a4c404ff89d1616abc117cf867"
+ },
+ "values": [
+ [1763035098, "39370.53"],
+ [1763035101, "39370.53"],
+ ...
+ ]
+ }
+ ]
+ }
+ ]
+ ```
+ Here:
+
+    - Each object inside `result` represents one metric series.
+    - The `metric` object holds all identifying labels.
+    - The `values` array holds the actual time-series data as `[timestamp, value]`.
+
+
+ #### Transformation: Convert raw datapoints into chart-friendly points
+ This is where you prepare the data for visualization. The chart that you want to build expects the data in a specific format, where each point is `[x, y]`.
+
+    - `x`: the timestamp, in ISO format
+    - `y`: the numeric value
+
+    For each series object (`item`) in the `result` array, perform the following conversion in JavaScript:
+ ```js linenums="1"
+ const points = item.values.map(([timestamp, value]) => [
+ new Date(timestamp * 1000).toISOString(),
+ Number(value)
+ ]);
+ ```
+ After this step, you have clean, chart-ready data such as:
+ ```js linenums="1"
+ [
+ ["2025-11-13T09:18:00Z", 39370.53],
+ ["2025-11-13T09:18:03Z", 39370.80]
+ ]
+ ```
+ !!! note "Note"
+ Every chart type, whether line, bar, or scatter, starts with this transformation. Only how you display it changes later.
+
+ #### `series`: Build one chart series per metric
+    `series` is an array that you create in your JavaScript code. Each entry in `series` describes one visual element, such as a line, a bar set, or a scatter set.
+
+ Each entry has:
+
+ - A name for the legend
+ - A type such as line
+ - A data array with the points you want to plot
+
+ For example:
+
+ ```js linenums="1"
+ series.push({
+ name: item.metric.k8s_pod_name || "default",
+ type: "line",
+ data: points,
+ smooth: true,
+ showSymbol: false
+ });
+ ```
+
+ #### `option`: Define the final chart configuration
+ `option` defines how the chart looks and behaves. It tells the system what axes to use, whether to display tooltips or legends, and how to organize the visual elements.
+ ```js linenums="1"
+ option = {
+ tooltip: { trigger: "axis" },
+ legend: { type: "scroll" },
+ xAxis: { type: "time", name: "Time" },
+ yAxis: { type: "value", name: "CPU Time" },
+ series
+ };
+ ```
+ The `series` array you built earlier is now linked here.
+
+??? "Step 6: Transform the data and render the chart"
+ ### Step 6: Transform the data and render the chart
+
+ Here is the complete JavaScript code example that combines all steps mentioned in Step 5.
+
+
+ **PromQL query:**
+ ```
+ container_cpu_time{}
+ ```
+
+
+ **JavaScript code:**
+
+ ```js linenums="1"
+    // Set the chart type you want
+    // Supported examples: "line", "scatter", "bar"
+    const chartType = "line";
+
+ // Step 1: prepare an empty list of series
+ const series = [];
+
+ // Step 2: read the PromQL response from OpenObserve
+ if (Array.isArray(data) && data.length > 0) {
+ const query = data[0];
+
+ if (query.result && Array.isArray(query.result)) {
+ for (const item of query.result) {
+ if (!Array.isArray(item.values)) {
+ continue;
+ }
+
+ // Step 3: convert [timestamp, value] to [ISO time, number]
+ const points = item.values.map(([timestamp, value]) => [
+ new Date(timestamp * 1000).toISOString(),
+ Number(value)
+ ]);
+
+ // Step 4: choose a label for the legend
+ const name =
+ item.metric.k8s_pod_name ||
+ item.metric.container_id ||
+ "unknown";
+
+ // Step 5: add one series entry for this metric
+          series.push({
+            name: name,
+            type: chartType,
+            data: points
+          });
+ }
+ }
+ }
+
+ // Step 6: define how the chart should be drawn
+ option = {
+ xAxis: { type: "time", name: "Time" },
+ yAxis: { type: "value", name: "Value" },
+ legend: { type: "scroll", top: "top" },
+ tooltip: { trigger: chartType === "scatter" ? "item" : "axis" },
+ series: series
+ };
+ ```
+
+ The line chart uses `[timestamp, value]` pairs and plots each metric as a line across time.
+
+??? "Step 7: View the result"
+ ### Step 7: View the result
+ 
+ Select the time range from the time range selector and click **Apply** to render your chart.
+
+ Each unique metric label combination will appear as a separate line.
+
+!!! note "Note"
+    You can use the same JavaScript code to create other charts that use `[timestamp, value]` pairs, such as bar charts or scatter charts. Only change the chart type at the top of the JavaScript code:
+    ```
+    const chartType = "bar";
+    ```
+    or
+
+    ```
+    const chartType = "scatter";
+    ```
\ No newline at end of file
diff --git a/docs/user-guide/logs/explain-analyze-query.md b/docs/user-guide/logs/explain-analyze-query.md
index a684ec0b..f98c915c 100644
--- a/docs/user-guide/logs/explain-analyze-query.md
+++ b/docs/user-guide/logs/explain-analyze-query.md
@@ -116,16 +116,20 @@ The Physical Plan shows how OpenObserve executes your query, including the speci

!!! note "Common operators you will see:"
- - **DataSourceExec**: Reads data from storage.
- - **RemoteScanExec**: Reads data from distributed partitions or remote nodes.
- - **FilterExec**: Applies filtering operations.
- - **ProjectionExec**: Handles column selection and expression computation.
- - **AggregateExec**: Performs aggregation operations. May show `mode=Partial` or `mode=FinalPartitioned`.
- - **RepartitionExec**: Redistributes data across partitions. May show `Hash([column], N)` or `RoundRobinBatch(N)`.
- - **CoalesceBatchesExec**: Combines data batches.
- - **SortExec**: Sorts data. May show `TopK(fetch=N)` for optimized sorting.
- - **SortPreservingMergeExec**: Merges sorted data streams.
- - **CooperativeExec**: Coordinates distributed execution.
+
+ - **DataSourceExec**: Reads data from storage
+ - **RemoteScanExec**: Reads data from distributed partitions or remote nodes
+ - **FilterExec**: Applies filtering operations
+ - **ProjectionExec**: Handles column selection and expression computation
+ - **AggregateExec**: Performs aggregation operations
+ - May show `mode=Partial` or `mode=FinalPartitioned`
+ - **RepartitionExec**: Redistributes data across partitions
+ - May show `Hash([column], N)` or `RoundRobinBatch(N)`
+ - **CoalesceBatchesExec**: Combines data batches
+ - **SortExec**: Sorts data
+ - May show `TopK(fetch=N)` for optimized sorting
+ - **SortPreservingMergeExec**: Merges sorted data streams
+ - **CooperativeExec**: Coordinates distributed execution
---
diff --git a/docs/user-guide/streams/.pages b/docs/user-guide/streams/.pages
index 48f944c2..41f3c772 100644
--- a/docs/user-guide/streams/.pages
+++ b/docs/user-guide/streams/.pages
@@ -5,7 +5,7 @@ nav:
- Schema Settings: schema-settings.md
- Extended Retention: extended-retention.md
- Summary Streams: summary-streams.md
- - Field and Index Types in Streams: fields-and-index-in-streams.md
+ - Field and Index Types in Streams: data-type-and-index-type-in-streams.md
- Query Recommendations Stream: query-recommendations.md
- Distinct Values: distinct-values.md
diff --git a/docs/user-guide/streams/distinct-values.md b/docs/user-guide/streams/distinct-values.md
index e69de29b..23f780cc 100644
--- a/docs/user-guide/streams/distinct-values.md
+++ b/docs/user-guide/streams/distinct-values.md
@@ -0,0 +1,46 @@
+---
+title: Distinct Values Stream in OpenObserve
+description: Collects unique values during ingestion, stores them in metadata streams, and supports faster distinct queries in OpenObserve.
+---
+This document explains how the distinct values feature in OpenObserve works.
+## Overview
+The distinct values feature automatically collects unique values for a stream when data is ingested. The system writes these values to disk at a defined interval. Distinct values are stored in a special stream named `distinct_values`, which is used to accelerate distinct queries.
+!!! note "Who can access it"
+ By default, the `Root` user has access. Access for other users is managed through **IAM** permissions in the **Metadata** module.
+
+ 
+!!! note "Where to find it"
+    Distinct values are written into automatically created metadata streams. The naming pattern is `distinct_values_<stream_type>_<stream_name>`. For example: `distinct_values_logs_default` and `distinct_values_logs_k8s_events`.
+## Environment Variables
+| Variable | Description | Default |
+| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
+| `ZO_DISTINCT_VALUES_INTERVAL` | Defines how often distinct values collected during ingestion are written from memory to the `distinct_values` stream on disk. This prevents frequent small writes by batching distinct values at the configured interval. | `10s` |
+| `ZO_DISTINCT_VALUES_HOURLY` | Enables hourly deduplication of distinct values stored in the `distinct_values` stream. When set to true, repeated values within one hour are merged into a single record, and a count of occurrences is logged. | `false` |
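+
+For example, to batch writes every 30 seconds and enable hourly deduplication, set the following (illustrative values; both variables are described in the table above):
+
+```
+ZO_DISTINCT_VALUES_INTERVAL=30s
+ZO_DISTINCT_VALUES_HOURLY=true
+```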
+## How it works
+1. During ingestion, OpenObserve automatically collects distinct values for each stream.
+2. These values are stored in memory and written to disk in the `distinct_values_<stream_type>_<stream_name>` streams under **Streams > Metadata**, at intervals defined by `ZO_DISTINCT_VALUES_INTERVAL`.
+3. If `ZO_DISTINCT_VALUES_HOURLY` is enabled, values in the `distinct_values` stream are further deduplicated at the hourly level, with counts aggregated.
+4. The `distinct_values` streams help accelerate `DISTINCT` queries by using pre-computed distinct values instead of scanning all ingested logs.
+## Example
+Ingested data:
+```text
+2025/09/10T10:00:01Z, job=test, level=info, service=test, request_id=123
+2025/09/10T10:00:02Z, job=test, level=info, service=test, request_id=124
+2025/09/10T10:01:03Z, job=test, level=info, service=test, request_id=123
+2025/09/10T10:10:00Z, job=test, level=info, service=test, request_id=123
+2025/09/10T11:10:00Z, job=test, level=info, service=test, request_id=123
+```
+With `ZO_DISTINCT_VALUES_INTERVAL=10s`, the system first collects values in memory and then writes to disk:
+```text
+2025/09/10T10:00:01Z request_id: 123, count: 2
+2025/09/10T10:00:02Z request_id: 124, count: 1
+2025/09/10T10:10:02Z request_id: 123, count: 1
+2025/09/10T11:10:02Z request_id: 123, count: 1
+```
+If `ZO_DISTINCT_VALUES_HOURLY=true`, the system merges values by hour:
+```text
+2025/09/10T10:00:01Z request_id: 123, count: 3
+2025/09/10T10:00:02Z request_id: 124, count: 1
+2025/09/10T11:10:02Z request_id: 123, count: 1
+```
\ No newline at end of file
diff --git a/docs/user-guide/streams/schema-settings.md b/docs/user-guide/streams/schema-settings.md
index 7d2b528b..4f4fd968 100644
--- a/docs/user-guide/streams/schema-settings.md
+++ b/docs/user-guide/streams/schema-settings.md
@@ -22,10 +22,10 @@ For example:
- `58.0` as `Float64`
- `"58%"` as `Utf8`
-## Index Type
+## Index type
You can modify or assign an index type to a field to improve search performance. Indexing can reduce the amount of data that must be scanned during queries.
-To learn more, visit the [Fields and Index in Streams](streams/fields-and-index-in-streams) page.
+To learn more, visit the [Fields and Index in Streams](https://openobserve.ai/docs/user-guide/streams/data-type-and-index-type-in-streams/) page.
!!! Warning
Changing the index after storing data may lead to inconsistent query results or data retrieval failures.