diff --git a/docs/environment-variables.md b/docs/environment-variables.md
index e9cc3ccf..fe605386 100644
--- a/docs/environment-variables.md
+++ b/docs/environment-variables.md
@@ -333,6 +333,8 @@ OpenObserve is configured using the following environment variables.
| ZO_NATS_DELIVER_POLICY | all | Starting point in the stream for message delivery. Allowed values are all, last, new. |
| ZO_NATS_SUB_CAPACITY | 65535 | Maximum subscription capacity. |
| ZO_NATS_QUEUE_MAX_SIZE | 2048 | Maximum queue size in megabytes. |
+| ZO_NATS_KV_WATCH_MODULES | - | Defines which internal modules use the NATS Key-Value Watcher instead of the default NATS Queue for event synchronization. Add one or more module prefixes separated by commas, such as /nodes/ or /user_sessions/. When left empty, all modules use the default NATS Queue mechanism. |
+| ZO_NATS_EVENT_STORAGE | memory | Controls how NATS JetStream stores event data. Use memory for high-speed, in-memory event storage or file for durable, disk-based storage that persists across restarts. <br><br>Performance Benchmark Results: <br>• File Storage: 10,965 ops/sec (10.71 MB/s throughput, ~911 µs mean latency) <br>• Memory Storage: 16,957 ops/sec (16.56 MB/s throughput, ~589 µs mean latency) <br><br>Memory storage offers ~55 percent higher throughput and lower latency, while file storage ensures durability. |
## S3 and Object Storage
diff --git a/docs/images/example-1-query-recommendations.png b/docs/images/example-1-query-recommendations.png
new file mode 100644
index 00000000..26a945ed
Binary files /dev/null and b/docs/images/example-1-query-recommendations.png differ
diff --git a/docs/images/example-2-query-recommendations.png b/docs/images/example-2-query-recommendations.png
new file mode 100644
index 00000000..c9c7e707
Binary files /dev/null and b/docs/images/example-2-query-recommendations.png differ
diff --git a/docs/images/match-all-hash.png b/docs/images/match-all-hash.png
new file mode 100644
index 00000000..fe80a35d
Binary files /dev/null and b/docs/images/match-all-hash.png differ
diff --git a/docs/images/organization-in-openobserve.png b/docs/images/organization-in-openobserve.png
index 021e4151..cf54150a 100644
Binary files a/docs/images/organization-in-openobserve.png and b/docs/images/organization-in-openobserve.png differ
diff --git a/docs/images/organization-role-permission.png b/docs/images/organization-role-permission.png
index 925a840f..7885414a 100644
Binary files a/docs/images/organization-role-permission.png and b/docs/images/organization-role-permission.png differ
diff --git a/docs/images/select-query-recommendations.png b/docs/images/select-query-recommendations.png
new file mode 100644
index 00000000..b273c9e4
Binary files /dev/null and b/docs/images/select-query-recommendations.png differ
diff --git a/docs/images/use-query-recommendations.png b/docs/images/use-query-recommendations.png
new file mode 100644
index 00000000..a63d0b79
Binary files /dev/null and b/docs/images/use-query-recommendations.png differ
diff --git a/docs/user-guide/enrichment-tables/enrichment-table-upload-recovery.md b/docs/user-guide/enrichment-tables/enrichment-table-upload-recovery.md
index 8b5851c8..37f764bb 100644
--- a/docs/user-guide/enrichment-tables/enrichment-table-upload-recovery.md
+++ b/docs/user-guide/enrichment-tables/enrichment-table-upload-recovery.md
@@ -59,4 +59,25 @@ When no local disk cache is available:
- The querier fetches the latest enrichment data from the metadata database, such as PostgreSQL, and the remote storage system, such as S3. It then provides the data to the restarting node.
+## Region-based caching in multi-region super clusters
+In a multi-region super cluster deployment, enrichment tables are typically queried from all regions when a node starts up and rebuilds its cache. While this ensures data completeness, it can slow startup or cause failures if one or more regions are unavailable.
+
+To address this, OpenObserve Enterprise supports primary region–based caching, controlled by the environment variable `ZO_ENRICHMENT_TABLE_GET_REGION`.
+
+### Requirements
+
+- Available only in Enterprise Edition.
+- Requires Super Cluster to be enabled.
+- The `ZO_ENRICHMENT_TABLE_GET_REGION` variable must specify a valid region name, that is, the name of one of the regions configured in your super cluster.
+
+### How it works
+When a node starts, OpenObserve calls internal methods such as `get_enrichment_table_data()` and `cache_enrichment_tables()` to retrieve enrichment table data.
+The boolean parameter `apply_primary_region_if_specified` controls whether to use only the primary region for these fetch operations.
+
+In a multi-region super cluster deployment, when `apply_primary_region_if_specified = true`, OpenObserve checks the value of `ZO_ENRICHMENT_TABLE_GET_REGION`.
+
+- If `ZO_ENRICHMENT_TABLE_GET_REGION` specifies a primary region, the node queries only that region to fetch enrichment table data during cache initialization.
+- If `ZO_ENRICHMENT_TABLE_GET_REGION` is not set, or the region name is empty, OpenObserve continues to query all regions as before.
+
+
diff --git a/docs/user-guide/identity-and-access-management/organizations.md b/docs/user-guide/identity-and-access-management/organizations.md
index 304188cb..01c5d844 100644
--- a/docs/user-guide/identity-and-access-management/organizations.md
+++ b/docs/user-guide/identity-and-access-management/organizations.md
@@ -11,48 +11,48 @@ Organizations provide logical boundaries for separating data, users, and access

-## Organization Types
+## Organization types
OpenObserve supports two types of organizations:
- **Default organization:** Automatically created for each user upon account creation. Typically named **default** and owned by the user. The UI labels it as type **default**.
- **Custom organization:** Any organization other than the **default**. These are created manually using the UI or ingestion (if enabled). Displayed in the UI as type **custom**.
-!!! Info "What Is **_meta** Organization?"
- **_meta Organization** is considered as a **custom** organization. It is a system-level organization that exists in both single-node and multi-node (HA) deployments.
-
- - The **_meta** organization provides visibility into the health and status of the OpenObserve instance, including node metrics, resource usage, and configuration across all organizations.
- - Use the **IAM > Roles > Permission** in the **_meta** organization to manage users across all organizations and control who can list, create, update, or delete organizations.
-
-## Access
-
-In OpenObserve, access to organization-level operations, such as listing, creating, updating, or deleting organizations, depends on the deployment mode.
-
-### Open-Source Mode
-Any authenticated user can create new organizations using the Add Organization button in the UI.
-### Enterprise Mode with RBAC Enabled
-- Access to organization management is strictly controlled through RBAC, which must be configured in the _meta organization.
-- The **root** user always has unrestricted access to all organizations, including **_meta**.
-- Only roles defined in **_meta** can include permissions for managing organizations.
-- The **organization** module is available in the role editor only within the **_meta** organization.
-
-!!! Info "How to Grant Organization Management Access?"
- To delegate organization management to users in enterprise mode:
-
- 1. Switch to the **_meta** organization.
- 2. Go to **IAM > Roles**.
- 3. Create a new role or edit an existing one.
- 4. In the **Permissions** tab, locate the Organizations module.
- 5. Select the required operations:
-
- - **List**: View existing organizations
- - **Create**: Add new organizations
- - **Update**: Modify organization details
- - **Delete**: Remove organizations
- 6. Click **Save**.
- 
-
- Once this role is assigned to a user within the **_meta** organization, they will have access to manage organizations across the system.
+### _meta organization
+The **_meta** organization is considered a **custom** organization. It is a system-level organization that exists in both single-node and multi-node (HA) deployments.
+
+- The **_meta** organization provides visibility into the health and status of the OpenObserve instance, including node metrics, resource usage, and configuration across all organizations.
+- Use **IAM > Roles > Permissions** in the **_meta** organization to manage users across all organizations and control who can list, create, update, or delete organizations.
+
+!!! note "Who can access"
+ In OpenObserve, access to organization-level operations, such as listing, creating, updating, or deleting organizations, depends on the deployment mode.
+
+ ### Access in open-source mode
+ Any authenticated user can create new organizations using the **Add Organization** button in the UI.
+ ### Access in enterprise mode with RBAC enabled
+ - Access to organization management is strictly controlled through RBAC, which must be configured in the **_meta** organization.
+ - The **root** user always has unrestricted access to all organizations, including **_meta**.
+ - Only roles defined in **_meta** can include permissions for managing organizations.
+ - The **organization** module is available in the role editor only within the **_meta** organization.
+
+## How to grant organization management access?
+To delegate organization management to users in enterprise mode:
+
+1. Switch to the **_meta** organization.
+2. Go to **IAM > Roles**.
+3. Create a new role or edit an existing one.
+4. In the **Permissions** tab, locate the **Organizations** module.
+5. Select the required operations:
+
+ - **Create**: Add new organizations
+ - **Update**: Modify organization details
+ !!! note "Note"
+ By default, OpenObserve displays the list of organizations a user belongs to. You do not need to explicitly grant permission to view or retrieve organization details.
+6. Click **Save**.
+
+
+Once this role is assigned to a user within the **_meta** organization, they will have access to manage organizations across the system.
## Create an Organization
diff --git a/docs/user-guide/management/aggregation-cache.md b/docs/user-guide/management/aggregation-cache.md
index 873a04a6..1303f012 100644
--- a/docs/user-guide/management/aggregation-cache.md
+++ b/docs/user-guide/management/aggregation-cache.md
@@ -5,6 +5,8 @@ description: Learn how streaming aggregation works in OpenObserve Enterprise.
---
This page explains what streaming aggregation is and shows how to use it to improve query performance with aggregation cache in OpenObserve.
+!!! info "Availability"
+ This feature is available in Enterprise Edition.
=== "Overview"
diff --git a/docs/user-guide/management/sensitive-data-redaction.md b/docs/user-guide/management/sensitive-data-redaction.md
index 5c46546b..4867a298 100644
--- a/docs/user-guide/management/sensitive-data-redaction.md
+++ b/docs/user-guide/management/sensitive-data-redaction.md
@@ -16,17 +16,14 @@ The **Sensitive Data Redaction** feature helps prevent accidental exposure of se
> **Note**: Use ingestion time redaction, hash, or drop when you want to ensure sensitive data is never stored on disk. This is the most secure option for compliance requirements, as the original sensitive data cannot be recovered once it is redacted, hashed, or dropped during ingestion.
- **Redact**: Sensitive data is masked before being stored on disk.
-- **Hash**: Sensitive data is replaced with a **hash prefix** to protect the original data.
+- **Hash**: Sensitive data is replaced with a [searchable](#search-hashed-values-using-match_all_hash) hash before being stored on disk.
- **Drop**: Sensitive data is removed before being stored on disk.
**Query time**
> **Note**: If you have already ingested sensitive data and it is stored on disk, you can use query time redaction or drop to protect it. This allows you to apply sensitive data redaction to existing data.
- **Redaction**: Sensitive data is read from disk but masked before results are displayed.
-- **Hash**: Sensitive data is replaced with a hashed prefix during query evaluation, preserving correlation without revealing the value.
-!!! note "Configure hash pattern length"
- `ZO_RE_PATTERN_HASH_LENGTH` sets the number of hash characters kept for display and search.
- Default 12. Allowed range 12 to 64.
+- **Hash**: Sensitive data is read from disk but masked with a [searchable](#search-hashed-values-using-match_all_hash) hash before results are displayed.
- **Drop**: Sensitive data is read from disk but excluded from the query results.
!!! note "Where to find"
@@ -277,8 +274,8 @@ The following regex patterns are applied to the `message` field of the `pii_test
- Other fields remain intact.
- This demonstrates field-level drop at ingestion.
-??? "Test 3: Hashed at ingestion time"
- ### Hashed at ingestion time
+??? "Test 3: Hash at ingestion time"
+ ### Hash at ingestion time
**Pattern Configuration**:

@@ -347,8 +344,8 @@ The following regex patterns are applied to the `message` field of the `pii_test
- The `message` field with the credit card details gets dropped in query results.
- This demonstrates field-level drop at query time.
-??? "Test 6: Hashed at query time"
- ### Hashed at query time
+??? "Test 6: Hash at query time"
+ ### Hash at query time
**Pattern Configuration**:

@@ -365,6 +362,18 @@ The following regex patterns are applied to the `message` field of the `pii_test
6. Verify results:

+## Search hashed values using `match_all_hash`
+The `match_all_hash` user-defined function (UDF) complements the SDR Hash feature. It allows you to search for logs that contain the hashed equivalent of a specific sensitive value.
+When data is hashed using Sensitive Data Redaction, the original value is replaced with a deterministic hash. You can use `match_all_hash()` to find all records that contain the hashed token, even though the original value no longer exists in storage.
+Example:
+```sql
+match_all_hash('4111-1111-1111-1111')
+```
+This query returns all records where the SDR Hash of the provided value exists in any field.
+In the example below, it retrieves the log entry containing `[REDACTED:907fe4882defa795fa74d530361d8bfb]`, the hashed version of the given card number.
+
+![Search results for a hashed value using match_all_hash](../../images/match-all-hash.png)
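+In practice, you typically wrap the function in a full search query. A minimal sketch, assuming SQL mode on the Logs page and the `pii_test` stream used in the tests above:
+
+```sql
+-- Return every record whose stored SDR hash matches the supplied plain-text value
+SELECT *
+FROM "pii_test"
+WHERE match_all_hash('4111-1111-1111-1111')
+```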
+
+
## Limitations
diff --git a/docs/user-guide/pipelines/pipelines.md b/docs/user-guide/pipelines/pipelines.md
index d9722a07..afbe8ddd 100644
--- a/docs/user-guide/pipelines/pipelines.md
+++ b/docs/user-guide/pipelines/pipelines.md
@@ -34,6 +34,9 @@ Use real-time pipelines when you need immediate processing, such as monitoring l
A scheduled pipeline automates the processing of historical data from an existing stream at user-defined intervals. This is useful when you need to extract, transform, and load (ETL) data at regular intervals without manual intervention.

+!!! note "Performance"
+ OpenObserve maintains a cache for scheduled pipelines to prevent the alert manager from making unnecessary database calls. This cache becomes particularly beneficial when the number of scheduled pipelines is high. For example, with 500 scheduled pipelines, the cache eliminates 500 separate database queries each time the pipelines are triggered, significantly improving performance.
+
#### How they work
1. **Source**: To create a scheduled pipeline, you need an existing stream, which serves as the source stream.
@@ -44,7 +47,7 @@ A scheduled pipeline automates the processing of historical data from an existin

4. **Destination**: The transformed data is sent to the following destination(s) for storage or further processing:
- **Stream**: The supported destination stream types are Logs, Metrics, Traces, or Enrichment tables.
**Note**: Enrichment Tables can only be used as destination streams in scheduled pipelines.
- - **Remote**: Select **Remote** if you wish to send data to [external destination](#external-pipeline-destinations).
+ - **Remote**: Select **Remote** if you wish to send data to an [external destination](https://openobserve.ai/docs/user-guide/pipelines/remote-destination/).
#### Frequency and Period
The scheduled pipeline runs based on the user-defined **Frequency** and **Period**.
@@ -60,20 +63,6 @@ The scheduled pipeline runs based on the user-defined **Frequency** and **Period
#### When to use
Use scheduled pipelines for tasks that require processing at fixed intervals instead of continuously, such as generating periodic reports and processing historical data in batches.
-## External Pipeline Destinations
-OpenObserve allows you to route pipeline data to external destinations.
-
-To configure an external destination for pipelines:
-
-1. Navigate to the **Pipeline Destination** configuration page. You can access the configuration page while setting up the remote pipeline destination from the pipeline editor or directly from **Management** (Settings icon in the navigation menu) > **Pipeline Destinations** > **Add Destination**.
-2. In the **Add Destination** form, provide a descriptive name for the external destination.
-3. Under **URL**, specify the endpoint where the data should be sent.
-4. Select the HTTP method based on your requirement.
-5. Add headers for authentication. In the **Header** field, enter authentication-related details (e.g., Authorization). In the **Value** field, provide the corresponding authentication token.
-6. Use the toggle **Skip TLS Verify** to enable or disable Transport Layer Security (TLS) verification.
-**Note**: Enable the **Skip TLS Verify** toggle to bypass security and certificate verification checks for the selected destination. Use with caution, as disabling verification may expose data to security risks. You may enable the toggle for development or testing environments but is not recommended for production unless absolutely necessary.
-
-
## Next Steps
- [Create and Use Pipelines](../use-pipelines/)
diff --git a/docs/user-guide/streams/query-recommendations.md b/docs/user-guide/streams/query-recommendations.md
new file mode 100644
index 00000000..e9bed99b
--- /dev/null
+++ b/docs/user-guide/streams/query-recommendations.md
@@ -0,0 +1,69 @@
+---
+title: Query Recommendations Stream in OpenObserve
+description: Understand the purpose, structure, and usage of the query_recommendations stream in the _meta organization in OpenObserve.
+---
+
+This document explains the purpose and usage of the `query_recommendations` stream within the `_meta` organization in OpenObserve. It provides guidance for users who want to optimize query performance using system-generated recommendations based on observed query patterns.
+
+!!! info "Availability"
+ This feature is available in Enterprise Edition.
+
+## Overview
+OpenObserve continuously analyzes user queries across streams to identify optimization opportunities. These suggestions are stored in the `query_recommendations` stream under the `_meta` organization. The recommendations focus on improving performance by suggesting secondary indexes when patterns in field access indicate consistent and potentially costly lookups.
+
+
+!!! note "Where to find it"
+ The query recommendations are published into the `query_recommendations` stream under the `_meta` organization.
+ ![Select the query_recommendations stream in the _meta organization](../../images/select-query-recommendations.png)
+
+!!! note "Who can access it"
+ All Enterprise Edition users with access to the `_meta` organization can access the `query_recommendations` stream.
+
+!!! note "When to use it"
+ Use this stream when:
+
+ - You notice slow query performance for specific fields or patterns.
+ - You are planning schema-level optimizations.
+ - You want to validate whether frequently queried fields would benefit from indexing.
+
+## How to use it
+1. Switch to the `_meta` organization in OpenObserve.
+2. Go to the **Logs** section.
+3. From the stream selection dropdown, select the `query_recommendations` stream.
+4. Select the desired time range.
+5. Click **Run query**.
+
+![Query results from the query_recommendations stream](../../images/use-query-recommendations.png)
+
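+If you prefer SQL mode over the dropdown flow above, a minimal query sketch for retrieving recent recommendations (field names come from the table below; adjust the time range as needed):
+
+```sql
+-- List the most recent recommendations first
+SELECT _timestamp, stream_name, column_name, operator, occurrences, recommendation
+FROM "query_recommendations"
+ORDER BY _timestamp DESC
+```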
+## Field descriptions
+| Field | Description |
+|-----------------------|-----------------------------------------------------------------------------|
+| `_timestamp` | Time when the recommendation was recorded. |
+| `column_name` | Field name in the stream that the recommendation applies to. |
+| `stream_name` | The stream where this field was queried. |
+| `all_operators` | All operators observed for the field (for example, `=`, `>`, `<`). |
+| `operator` | Primary operator considered for recommendation. |
+| `occurrences` | Number of times the field was queried with the specified operator. |
+| `total_occurrences` | Total number of queries examined. |
+| `num_distinct_values` | Count of distinct values seen in the field. |
+| `duration_hrs` | Duration (in hours) over which this pattern was observed. |
+| `reason` | Explanation behind the recommendation. |
+| `recommendation` | Specific action suggested, typically to create a secondary index. |
+| `type` | Always `SecondaryIndexStreamSettings` for this stream. |
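+To surface the strongest candidates first, you can filter on these fields. A sketch, assuming SQL mode, that keeps only fields where every observed query used the same operator:
+
+```sql
+-- Fields where every observed query used the recommended operator
+SELECT stream_name, column_name, operator, occurrences, total_occurrences, recommendation
+FROM "query_recommendations"
+WHERE occurrences = total_occurrences
+ORDER BY occurrences DESC
+```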
+
+## Examples and how to interpret them
+
+**Example 1**
+
+![Example 1: query recommendation for the job field](../../images/example-1-query-recommendations.png)
+
+This recommendation indicates that across the last 360000000 hours of query data, the `job` field in the `default` stream was queried with the equality (`=`) operator 1220 times out of 1220 total queries. Because every query used this field with `=`, a secondary index could improve performance.
+
+!!! note "Interpretation"
+ Add a secondary index on the `job` field in the `default` stream for improved performance.
+
+
+
+**Example 2**
+
+![Example 2: query recommendation for the status field](../../images/example-2-query-recommendations.png)
+
+This recommendation is for the `status` field in the `alert_test` stream. All 5 queries used `status` with the equality operator. Although the query volume is small, the uniform pattern indicates potential for future optimization.
+
+!!! note "Interpretation"
+ Consider indexing `status` if query volume increases or performance becomes a concern.
\ No newline at end of file