2 changes: 1 addition & 1 deletion docs/user-guide/management/audit-trail.md
@@ -9,7 +9,7 @@ description: >-
!!! info "Availability"
This feature is available in Enterprise Edition and Cloud. Not available in Open Source.

## What is Audit Trail
## What is audit trail?
Audit Trail records user actions across all organizations in OpenObserve. It captures non-ingestion API calls and helps you monitor activity and improve security.

!!! note "Who can access"
22 changes: 11 additions & 11 deletions docs/user-guide/metrics/downsampling-metrics.md
@@ -10,7 +10,7 @@ This guide provides an overview of downsampling, including its configuration, ru

Downsampling summarizes historical data into fewer data points. Each summarized data point is calculated using an aggregation method, such as the last recorded value, the average, or the total, applied over a defined time block.
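As a quick illustration of these aggregation methods, here is a minimal sketch (made-up values, not tied to any particular stream or to OpenObserve's implementation) showing how one time block can be summarized in different ways:

```python
# Hypothetical data points that fall inside a single time block.
block = [20.1, 22.4, 21.0, 23.5]

# The same block summarized with each aggregation method.
print("last:", block[-1])               # 23.5 - last recorded value
print("avg:", sum(block) / len(block))  # 21.75 - average
print("sum:", sum(block))               # 87.0 - total
```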

## Configure Downsampling
## Configure downsampling

Downsampling is configured using the following environment variables:

@@ -21,7 +21,7 @@ Downsampling is configured using the following environment variables:

> Refer to the [Downsampling Rule](#downsampling-rule) section. <br>

#### Downsampling Configuration For Helm Chart Users
#### Downsampling configuration for Helm Chart users

Add the environment variables under the `enterprise.parameters` section in your `values.yaml` file:
```
@@ -32,7 +32,7 @@ enterprise:
O2_METRICS_DOWNSAMPLING_RULES: "o2_cpu_usage:avg:30d:5m"
```

#### Downsampling Configuration For Terraform Users
#### Downsampling configuration for Terraform users

Set the same variables in your `terraform.tfvars` file:
```
@@ -42,7 +42,7 @@ Set the same variables in your `terraform.tfvars` file:
> **Note**: After setting the environment variables, redeploy the OpenObserve instance for the changes to take effect.


### Downsampling Rule
### Downsampling rule

User-defined rules determine how downsampling is applied to metrics streams. You can define multiple downsampling rules to target different streams or use different configurations.

@@ -64,14 +64,14 @@ Here:
- **offset**: The age of data eligible for downsampling. For example, 15d applies downsampling to data older than 15 days.
- **step**: The time block used to group data points. For example, 30m retains one value every 30 minutes.
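For illustration only, the sketch below (a hypothetical helper, not OpenObserve code) shows how a rule string maps onto the four fields described above, and how multiple comma-separated rules would be split:

```python
# Hypothetical helper: splits a rule of the form "stream:aggregation:offset:step".
def parse_rule(rule: str) -> dict:
    stream, aggregation, offset, step = rule.strip().split(":")
    return {
        "stream": stream,            # stream name or regex, e.g. "o2_cpu_.*"
        "aggregation": aggregation,  # e.g. "avg", "last", "sum"
        "offset": offset,            # age of data eligible for downsampling, e.g. "30d"
        "step": step,                # time block size, e.g. "5m"
    }

# Multiple rules are comma separated in O2_METRICS_DOWNSAMPLING_RULES.
rules = "o2_cpu_metrics:avg:30d:5m, o2_app_logs:last:10d:10m"
print([parse_rule(r) for r in rules.split(",")])
```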

### Sample Downsampling Rules
### Sample downsampling rules

#### Single Rule
#### Single rule
```yaml
O2_METRICS_DOWNSAMPLING_RULES: "o2_cpu_metrics:avg:30d:5m"
```
Retains one average value every 5 minutes for `o2_cpu_metrics` data older than 30 days.<br>
**Multiple Rules**
**Multiple rules**
```yaml
O2_METRICS_DOWNSAMPLING_RULES: "o2_cpu_metrics:avg:30d:5m, o2_app_logs:last:10d:10m"
```
@@ -82,7 +82,7 @@ O2_METRICS_DOWNSAMPLING_RULES: "o2_cpu_.*:sum:10d:60m"
```
Targets all streams starting with `o2_cpu_`, and for each matching stream, retains one hourly sum for data older than 10 days.

### Downsampling Example
### Downsampling example

**Scenario**<br>
A system is recording CPU usage data every 10 seconds to the stream `o2_cpu_usage`, generating a large volume of high-resolution metrics. Over time, this data becomes too granular and expensive to store or query efficiently for historical analysis.
@@ -94,7 +94,7 @@ Downsample data older than 30 days to retain one average for every 2-minute time
`O2_COMPACT_DOWNSAMPLING_INTERVAL` = "180"
`O2_METRICS_DOWNSAMPLING_RULES` = "o2_cpu_usage:avg:30d:2m"

**Input Metrics**<br>
**Input metrics**<br>

```json

@@ -158,15 +158,15 @@ Downsample data older than 30 days to retain one average for every 2-minute time
{ "timestamp": "2024-03-01 00:08:50", "cpu": 20.2 }
```

**Downsampling Time Blocks (Step = 2m) and Average CPU Usage**
**Downsampling time blocks (Step = 2m) and average CPU usage**

- Time Block 1: From 00:00:00 to 00:01:59, average CPU usage is 20.55
- Time Block 2: From 00:02:00 to 00:03:59, average CPU usage is 21.75
- Time Block 3: From 00:04:00 to 00:05:59, average CPU usage is 20.66
- Time Block 4: From 00:06:00 to 00:07:59, average CPU usage is 21.65
- Time Block 5: From 00:08:00 to 00:09:59, average CPU usage is 20.88 (not processed yet)
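The grouping behind these averages can be reproduced with a short script. The sketch below is illustrative only: it uses a few hypothetical points (the full input stream above is collapsed in this view) and simply buckets timestamps into 2-minute blocks and averages each block, mirroring the `avg` aggregation with a 2-minute step:

```python
from collections import defaultdict
from datetime import datetime

STEP_SECONDS = 120  # step = 2m

# Hypothetical sample points; the real input above records one point every 10 seconds.
points = [
    ("2024-03-01 00:00:10", 20.4),
    ("2024-03-01 00:01:50", 20.8),
    ("2024-03-01 00:02:30", 21.2),
    ("2024-03-01 00:03:40", 22.0),
]

blocks = defaultdict(list)
for ts, cpu in points:
    epoch = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").timestamp()
    blocks[int(epoch // STEP_SECONDS)].append(cpu)  # group into 2-minute blocks

# One averaged value is retained per block; the raw points in that block are replaced.
for block_id, values in sorted(blocks.items()):
    block_start = datetime.fromtimestamp(block_id * STEP_SECONDS)
    print(block_start, round(sum(values) / len(values), 2))
```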

**Downsampling Job Runs and Outputs**
**Downsampling job runs and outputs**

Job 1 runs at 00:03:00 and processes Time Block 1 <br>
Output:
6 changes: 3 additions & 3 deletions docs/user-guide/metrics/file-access-time-metric.md
@@ -3,11 +3,11 @@ description: >-
Analyze file access age in OpenObserve to gauge query performance. Buckets
track how recently files were accessed, revealing hot vs. cold data trends.
---
## What Is File Access Time Metric?
## What is file access time metric?

This histogram metric tracks the age of files accessed by the querier, showing how file access times are distributed across queries and helping evaluate system performance.

## How Does It Works?
## How does it work?
The metric tracks file age in hourly buckets ranging from 1 hour to 32 hours. Each data point represents how long ago a file was accessed during query execution.

**The metric is exposed as:**
@@ -16,7 +16,7 @@ The metric tracks file age in hourly buckets ranging from 1 hour to 32 hours. Ea
Zo_file_access_time_bucket
```
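For context on how such a histogram accumulates observations, here is a minimal sketch (illustrative only, not the exporter's actual code) that assigns file ages to cumulative hourly buckets in the Prometheus style, assuming bucket boundaries at 1 hour through 32 hours as described above:

```python
# Illustrative cumulative buckets for file access age, assumed from the 1-32 hour range above.
BUCKET_BOUNDS_HOURS = list(range(1, 33))

def observe(counts: dict, age_hours: float) -> None:
    # Histogram buckets are cumulative: every bound >= the observed value is incremented.
    for le in BUCKET_BOUNDS_HOURS:
        if age_hours <= le:
            counts[le] = counts.get(le, 0) + 1

counts: dict = {}
for age in [0.5, 2.3, 7.9, 30.0]:  # hypothetical file ages observed during query execution
    observe(counts, age)

print({f"le={le}h": counts.get(le, 0) for le in BUCKET_BOUNDS_HOURS[:8]})
```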

## Example Usage
## Example usage
To calculate the 95th percentile of file access age for logs over a 5-minute window:

```
4 changes: 3 additions & 1 deletion docs/user-guide/streams/.pages
@@ -5,4 +5,6 @@ nav:
- Schema Settings: schema-settings.md
- Extended Retention: extended-retention.md
- Summary Streams: summary-streams.md
- Field and Index Types in Streams: fields-and-index-in-streams.md
- Field and Index Types in Streams: fields-and-index-in-streams.md
- Query Recommendations Stream: query-recommendations.md

4 changes: 3 additions & 1 deletion docs/user-guide/streams/index.md
@@ -6,4 +6,6 @@ Streams define how observability data is ingested, stored, indexed, and queried
- ### [Schema Settings](schema-settings.md)
- ### [Extended Retention](extended-retention.md)
- ### [Summary Streams](summary-streams.md)
- ### [Data Type and Index Types in Streams](data-type-and-index-type-in-streams.md)
- ### [Data Type and Index Types in Streams](data-type-and-index-type-in-streams.md)
- ### [Field and Index Types in Streams](fields-and-index-in-streams.md)
- ### [Query Recommendations Stream](query-recommendations.md)
2 changes: 1 addition & 1 deletion docs/user-guide/streams/query-recommendations.md
@@ -62,7 +62,7 @@ This recommendation indicates that across the last 360000000 hours of query data
<br>

**Example 2** <br>
![example-2-query-recommendations](../../images/example-2-query-recommendationsage.png)
![example-2-query-recommendations](../../images/example-2-query-recommendations.png)
This recommendation is for the `status` field in the `alert_test` stream. All 5 queries used `status` with an equality operator. Although the number is small, the uniform pattern indicates a potential for future optimization.

!!! note "Interpretation"