diff --git a/doc-assets/shots/example-metrics-query.png b/doc-assets/shots/example-metrics-query.png
new file mode 100644
index 00000000..e07f5647
Binary files /dev/null and b/doc-assets/shots/example-metrics-query.png differ
diff --git a/doc-assets/shots/otel-metrics-dashboard.png b/doc-assets/shots/otel-metrics-dashboard.png
new file mode 100644
index 00000000..8a5b0bdf
Binary files /dev/null and b/doc-assets/shots/otel-metrics-dashboard.png differ
diff --git a/docs.json b/docs.json
index 175d6c34..411ca83a 100644
--- a/docs.json
+++ b/docs.json
@@ -66,7 +66,14 @@
"query-data/visualizations",
"query-data/views",
"query-data/virtual-fields",
- "query-data/traces"
+ "query-data/traces",
+ {
+ "group": "Metrics",
+ "pages": [
+ "query-data/metrics/overview",
+ "query-data/metrics/query-metrics"
+ ]
+ }
]
},
{
diff --git a/introduction.mdx b/introduction.mdx
index eec481e7..be8744bd 100644
--- a/introduction.mdx
+++ b/introduction.mdx
@@ -9,15 +9,23 @@ Trusted by 30,000+ organizations, from high-growth startups to global enterprise
## Components
-Axiom consists of two fundamental components:
+Axiom consists of two purpose-built data stores supported by a unified console:
### EventDB
Robust, cost-effective, and scalable datastore specifically optimized for timestamped event data. Built from the ground up to handle the vast volumes and high velocity of event ingestion, EventDB ensures:
-* **Scalable data loading:** Events are ingested seamlessly without complex middleware, scaling linearly with no single points of failure.
-* **Extreme compression:** Tuned storage format compresses data 25-50x, significantly reducing storage costs and ensuring data remains queryable at any time.
-* **Serverless querying:** Axiom spins up ephemeral, serverless runtimes on-demand to execute queries efficiently, minimizing idle compute resources and costs.
+- **Scalable data loading:** Events are ingested seamlessly without complex middleware, scaling linearly with no single points of failure.
+- **Extreme compression:** Tuned storage format compresses data 25-50x, significantly reducing storage costs and ensuring data remains queryable at any time.
+- **Serverless querying:** Axiom spins up ephemeral, serverless runtimes on-demand to execute queries efficiently, minimizing idle compute resources and costs.
+
+### MetricsDB
+
+Dedicated metrics database engineered specifically for high-cardinality time-series data. Unlike traditional metrics solutions that penalize you for dimensional complexity, MetricsDB embraces high-cardinality tags as a design principle:
+
+- **High-cardinality native:** Store metrics with high-cardinality dimensional tags without performance degradation or cost penalties.
+- **Optimized storage:** Purpose-built storage format designed for time-series workloads delivers efficient compression and fast aggregations across millions of unique tag combinations.
+- **Thoughtful constraints:** Design choices prioritize the most common metrics use cases while maintaining exceptional performance.
For more information, see [Axiom’s architecture](/platform-overview/architecture).
@@ -25,18 +33,18 @@ For more information, see [Axiom’s architecture](/platform-overview/architectu
Intuitive web app built for exploration, visualization, and monitoring of your data.
-* **Real-time exploration:** Effortlessly query and visualize data streams in real-time, providing instant clarity on operational and business conditions.
-* **Dynamic visualizations:** Generate insightful visualizations, from straightforward counts to sophisticated aggregations, tailored specifically to your needs.
-* **Robust monitoring:** Set up threshold-based and anomaly driven alerts, ensuring proactive visibility into potential issues.
+- **Real-time exploration:** Effortlessly query and visualize data streams in real-time, providing instant clarity on operational and business conditions.
+- **Dynamic visualizations:** Generate insightful visualizations, from straightforward counts to sophisticated aggregations, tailored specifically to your needs.
+- **Robust monitoring:** Set up threshold-based and anomaly-driven alerts, ensuring proactive visibility into potential issues.
## Why choose Axiom?
-* **Cost-efficiency:** Axiom dramatically lowers data ingestion and storage costs compared to traditional observability and logging solutions.
-* **Flexible insights:** Real-time query capabilities and an increasingly intelligent UI help pinpoint issues and opportunities without sampling.
-* **AI engineering:** Axiom provides specialized features designed explicitly for AI engineering workflows, allowing teams to confidently build, deploy, and optimize AI capabilities.
+- **Cost-efficiency:** Axiom dramatically lowers data ingestion and storage costs compared to traditional observability and logging solutions.
+- **Flexible insights:** Real-time query capabilities and an increasingly intelligent UI help pinpoint issues and opportunities without sampling.
+- **AI engineering:** Axiom provides specialized features designed explicitly for AI engineering workflows, allowing teams to confidently build, deploy, and optimize AI capabilities.
## Getting started
-* [Learn more about Axiom’s features](/platform-overview/features).
-* [Explore the interactive demo playground](https://play.axiom.co/).
-* [Create your own organization](https://app.axiom.co/register).
+- [Learn more about Axiom’s features](/platform-overview/features).
+- [Explore the interactive demo playground](https://play.axiom.co/).
+- [Create your own organization](https://app.axiom.co/register).
diff --git a/platform-overview/architecture.mdx b/platform-overview/architecture.mdx
index f1a8ad9f..7ef1575c 100644
--- a/platform-overview/architecture.mdx
+++ b/platform-overview/architecture.mdx
@@ -8,92 +8,88 @@ description: "Technical deep-dive into Axiom’s distributed architecture."
You don’t need to understand any of the following material to get massive value from Axiom. As a fully managed data platform, Axiom just works. This technical deep-dive is intended for curious minds wondering: Why is Axiom different?
-Axiom routes ingestion requests through a distributed edge layer to a cluster of specialized services that process and store data in a proprietary columnar format optimized for event data. Query requests are executed by ephemeral, serverless workers that operate directly on compressed data stored in object storage.
+Axiom routes ingestion requests through a distributed edge layer to a cluster of specialized services that process and store data in proprietary columnar formats optimized for different data types. EventDB handles high-volume event data, while MetricsDB is purpose-built for time-series metrics with high-cardinality dimensions. Query requests are executed by ephemeral, serverless workers that operate directly on compressed data stored in object storage.
## Ingestion architecture
Data flows through a multi-layered ingestion system designed for high throughput and reliability:
-**Regional Edge Layer**: HTTPS ingestion requests are received by regional edge proxies positioned to meet data jurisdiction requirements. These proxies handle protocol translation, authentication, and initial data validation. The edge layer supports multiple input formats (JSON, CSV, compressed streams) and can buffer data during downstream issues.
-
-**High-availability routing**: The system provides intelligent routing to healthy database nodes using real-time health monitoring. When primary ingestion paths fail, requests are automatically routed to available nodes or queued in a backlog system that processes data when systems recover.
-
-**Streaming Pipeline**: Raw events are parsed, validated, and transformed in streaming fashion. Field limits and schema validation occur during this phase.
-
-**Write-Ahead Logging**: All ingested data is durably written to a distributed write-ahead log before being processed. This ensures zero data loss even during system failures and supports concurrent writes across multiple ingestion nodes.
+- **Regional edge layer:** HTTPS ingestion requests are received by regional edge proxies positioned to meet data jurisdiction requirements. These proxies handle protocol translation, authentication, and initial data validation. The edge layer supports multiple input formats (JSON, CSV, compressed streams) and can buffer data during downstream issues.
+- **High-availability routing:** The system provides intelligent routing to healthy database nodes using real-time health monitoring. When primary ingestion paths fail, requests are automatically routed to available nodes or queued in a backlog system that processes data when systems recover.
+- **Streaming pipeline:** Raw events are parsed, validated, and transformed in streaming fashion. Field limits and schema validation occur during this phase.
+- **Write-ahead logging:** All ingested data is durably written to a distributed write-ahead log before being processed. This ensures zero data loss even during system failures and supports concurrent writes across multiple ingestion nodes.
## Storage architecture
-Axiom’s storage layer is built around a custom columnar format that achieves extreme compression ratios:
+Axiom’s storage layer uses specialized columnar formats optimized for different workload types:
-**Columnar organization**: Events are decomposed into columns and stored using specialized encodings optimized for each data type. String columns use dictionary encoding, numeric columns use various compression schemes, and boolean columns use bitmap compression.
+### EventDB storage
-**Block-based storage**: Data is organized into immutable blocks that are written once and read many times. Each block contains:
+EventDB’s storage is built around a custom columnar format that achieves extreme compression ratios:
-- Column metadata and statistics
-- Compressed column data in a proprietary format
-- Separate time indexes for temporal queries
-- Field schemas and type information
+- **Columnar organization:** Events are decomposed into columns and stored using specialized encodings optimized for each data type. String columns use dictionary encoding, numeric columns use various compression schemes, and boolean columns use bitmap compression.
+- **Block-based storage:** Data is organized into immutable blocks that are written once and read many times. Each block contains:
-**Compression pipeline**: Data flows through multiple compression stages:
+ - Column metadata and statistics
+ - Compressed column data in a proprietary format
+ - Separate time indexes for temporal queries
+ - Field schemas and type information
-1. **Ingestion compression**: Real-time compression during ingestion (25-50% reduction)
-2. **Block compression**: Columnar compression within storage blocks (10-20x additional compression)
-3. **Compaction compression**: Background compaction further optimizes storage (additional 2-5x compression)
+- **Compression pipeline:** Data flows through multiple compression stages:
-**Object storage integration**: Blocks are stored in object storage (S3) with intelligent partitioning strategies that distribute load and avoid hot-spotting. The system supports multiple storage tiers and automatic lifecycle management.
+ 1. **Ingestion compression:** Real-time compression during ingestion (25-50% reduction)
+ 1. **Block compression:** Columnar compression within storage blocks (10-20x additional compression)
+ 1. **Compaction compression:** Background compaction further optimizes storage (additional 2-5x compression)
-## Query architecture
+- **Object storage integration:** Blocks are stored in object storage (S3) with intelligent partitioning strategies that distribute load and avoid hot-spotting. The system supports multiple storage tiers and automatic lifecycle management.
-Axiom executes queries using a serverless architecture that spins up compute resources on-demand:
+### MetricsDB storage
-**Query compilation**: The APL (Axiom Processing Language) query is parsed, optimized, and compiled into an execution plan. The compiler performs predicate pushdown, projection optimization, and identifies which blocks need to be read.
+MetricsDB uses a specialized columnar format engineered for time-series metrics with high-cardinality tags:
-**Serverless Workers**: Query execution occurs in ephemeral workers optimized through "Fusion queries"—a system that runs parallel queries inside a single worker to reduce costs and leave more resources for large queries. Workers download only the necessary column data from object storage, enabling efficient resource utilization. Multiple workers can process different blocks in parallel.
+- **High-cardinality optimization:** Unlike traditional metrics databases that struggle with dimensional complexity, MetricsDB is designed from the ground up to handle large numbers of unique tag combinations efficiently.
+- **Intentional design constraints:** MetricsDB makes deliberate trade-offs that optimize for the most common metrics use cases, delivering strong performance and cost-efficiency for real-world workloads. Where other systems penalize high cardinality or force you to pre-aggregate data, MetricsDB lets you store and query metrics with full dimensional flexibility.
+- **Unified observability:** Query metrics alongside logs and traces, enabling powerful correlations across all your telemetry data without switching tools or learning multiple query languages.
-**Block-level parallelism**: Each query spawns multiple workers that process different blocks concurrently. Workers read compressed column data directly from object storage, decompress it in memory, and execute the query.
+## Query architecture
-**Result aggregation**: Worker results are streamed back and aggregated by a coordinator process. Large result sets are automatically spilled to object storage and streamed to clients via signed URLs.
+Axiom executes queries using a serverless architecture that spins up compute resources on-demand:
-**Intelligent caching**: Query results are cached in object storage with intelligent cache keys that account for time ranges and query patterns. Cache hits dramatically reduce query latency for repeated queries.
+- **Query compilation:** The APL (Axiom Processing Language) query is parsed, optimized, and compiled into an execution plan. The compiler performs predicate pushdown, projection optimization, and identifies which blocks need to be read.
+- **Serverless workers:** Query execution occurs in ephemeral workers optimized through "Fusion queries", a system that runs parallel queries inside a single worker to reduce costs and leave more resources for large queries. Workers download only the necessary column data from object storage, enabling efficient resource utilization. Multiple workers can process different blocks in parallel.
+- **Block-level parallelism:** Each query spawns multiple workers that process different blocks concurrently. Workers read compressed column data directly from object storage, decompress it in memory, and execute the query.
+- **Result aggregation:** Worker results are streamed back and aggregated by a coordinator process. Large result sets are automatically spilled to object storage and streamed to clients via signed URLs.
+- **Intelligent caching:** Query results are cached in object storage with intelligent cache keys that account for time ranges and query patterns. Cache hits dramatically reduce query latency for repeated queries.
## Compaction system
A background compaction system continuously optimizes storage efficiency:
-**Automatic compaction**: The compaction scheduler identifies blocks that can be merged based on size, age, and access patterns. Small blocks are combined into larger "superblocks" that provide better compression ratios and query performance.
-
-**Multiple strategies**: The system supports several compaction algorithms:
+- **Automatic compaction:** The compaction scheduler identifies blocks that can be merged based on size, age, and access patterns. Small blocks are combined into larger "superblocks" that provide better compression ratios and query performance.
+- **Multiple strategies:** The system supports several compaction algorithms:
-- **Default**: General-purpose compaction with optimal compression
-- **Clustered**: Groups data by common field values for better locality
-- **Fieldspace**: Optimizes for specific field access patterns
-- **Concat**: Simple concatenation for append-heavy workloads
+ - **Default:** General-purpose compaction with optimal compression
+ - **Clustered:** Groups data by common field values for better locality
+ - **Fieldspace:** Optimizes for specific field access patterns
+ - **Concat:** Simple concatenation for append-heavy workloads
-**Compression optimization**: During compaction, data is recompressed using more aggressive algorithms and column-specific optimizations that aren’t feasible during real-time ingestion.
+- **Compression optimization:** During compaction, data is recompressed using more aggressive algorithms and column-specific optimizations that aren’t feasible during real-time ingestion.
## System architecture
The overall system is composed of specialized microservices:
-**Core services**: Handle authentication, billing, dataset management, and API routing. These services are stateless and horizontally scalable.
-
-**Database layer**: The core database engine processes ingestion, manages storage, and coordinates query execution. It supports multiple deployment modes and automatic failover.
-
-**Orchestration layer**: Manages distributed operations, monitors system health, and coordinates background processes like compaction and maintenance.
-
-**Edge services**: Handle real-time data ingestion, protocol translation, and provide regional data collection points.
+- **Core services:** Handle authentication, billing, dataset management, and API routing. These services are stateless and horizontally scalable.
+- **Database layer:** The core database engine processes ingestion, manages storage, and coordinates query execution. It supports multiple deployment modes and automatic failover.
+- **Orchestration layer:** Manages distributed operations, monitors system health, and coordinates background processes like compaction and maintenance.
+- **Edge services:** Handle real-time data ingestion, protocol translation, and provide regional data collection points.
## Why this architecture wins
-**Cost efficiency**: Serverless query execution means you only pay for compute during active queries. Extreme compression (25-50x) dramatically reduces storage costs compared to traditional row-based systems.
-
-**Operational simplicity**: The system is designed to be self-managing. Automatic compaction, intelligent caching, and distributed coordination eliminate operational overhead.
-
-**Elastic scale**: Each component scales independently. Ingestion scales with edge capacity, storage scales with object storage, and query capacity scales with serverless workers.
-
-**Fault tolerance**: Write-ahead logging, distributed routing, and automatic failover ensure high availability. The system gracefully handles node failures and storage outages.
-
-**Real-time performance**: Despite the distributed architecture, the system maintains sub-second query performance through intelligent caching, predicate pushdown, and columnar storage optimizations.
+- **Cost efficiency:** Serverless query execution means you only pay for compute during active queries. Extreme compression (25-50x) dramatically reduces storage costs compared to traditional row-based systems.
+- **Operational simplicity:** The system is designed to be self-managing. Automatic compaction, intelligent caching, and distributed coordination eliminate operational overhead.
+- **Elastic scale:** Each component scales independently. Ingestion scales with edge capacity, storage scales with object storage, and query capacity scales with serverless workers.
+- **Fault tolerance:** Write-ahead logging, distributed routing, and automatic failover ensure high availability. The system gracefully handles node failures and storage outages.
+- **Real-time performance:** Despite the distributed architecture, the system maintains sub-second query performance through intelligent caching, predicate pushdown, and columnar storage optimizations.
This architecture enables Axiom to ingest millions of events per second while maintaining sub-second query latency at a fraction of the cost of traditional logging and observability solutions.
\ No newline at end of file
diff --git a/platform-overview/features.mdx b/platform-overview/features.mdx
index 30457a24..47f3e3c7 100644
--- a/platform-overview/features.mdx
+++ b/platform-overview/features.mdx
@@ -16,6 +16,11 @@ mode: "wide"
| **EventDB** | Query | [APL (Axiom Processing Language)](/apl/introduction) | Powerful query language supporting filtering, aggregations, transformations, and specialized operators. |
| **EventDB** | Query | [Virtual fields](/query-data/virtual-fields) | Ability to derive new values from data in real-time during queries without pre-structuring or transforming data during ingestion. |
| | | | |
+| **MetricsDB** | - | - | Dedicated metrics database purpose-built for high-cardinality time-series data without the cost penalties of traditional metrics systems. |
+| **MetricsDB** | Ingest | [OpenTelemetry Metrics](/send-data/opentelemetry) | Native support for the OpenTelemetry metrics protocol (OTLP over HTTP with Protobuf). |
+| **MetricsDB** | Storage | High-cardinality native | Store metrics with high-cardinality tags without performance degradation. |
+| **MetricsDB** | Query | [APL-powered queries](/query-data/metrics/query-metrics) | Query metrics with support for time-series aggregations, dimensional filtering, and cross-metric correlations. |
+| | | | |
| **Console** | - | - | Web UI for data management, querying, dashboarding, monitoring, and user administration. |
| **Console** | Query | [Simple Query Builder](/query-data/explore) | Guided interface to quickly filter and group data. |
| **Console** | Query | [Advanced Query Builder](/query-data/explore) | A full APL-based environment for complex aggregations, transformations, and correlations. |
diff --git a/platform-overview/roadmap.mdx b/platform-overview/roadmap.mdx
index 38b9210d..fdcd7e0e 100644
--- a/platform-overview/roadmap.mdx
+++ b/platform-overview/roadmap.mdx
@@ -46,7 +46,7 @@ Each feature of Axiom is in one of the following states:
- **End of life:** The feature is no longer available or supported. Axiom has sunset it in favor of newer solutions.
-Private and public preview features are experimental, aren’t guaranteed to work as expected, and may return unexpected query results. Please consider the risk you run when you use preview features against production workloads.
+Private and public preview features are experimental. They aren’t guaranteed to work as expected and may return unexpected query results. Axiom doesn’t guarantee data integrity or accuracy for preview features. Consider the risk you run when you use preview features against production workloads.
{/*
@@ -56,4 +56,5 @@ Current private preview features:
Current public preview features:
- [Cursor-based pagination](/restapi/pagination)
- [`externaldata` operator](/apl/tabular-operators/externaldata-operator)
-- [`join` operator](/apl/tabular-operators/join-operator)
\ No newline at end of file
+- [`join` operator](/apl/tabular-operators/join-operator)
+- [OTel Metrics](/query-data/metrics/overview)
diff --git a/query-data/metrics/overview.mdx b/query-data/metrics/overview.mdx
new file mode 100644
index 00000000..b5e21efe
--- /dev/null
+++ b/query-data/metrics/overview.mdx
@@ -0,0 +1,96 @@
+---
+title: Metrics
+description: This section explains how to work with OpenTelemetry (OTel) metrics.
+sidebarTitle: Overview
+---
+
+import Prerequisites from "/snippets/standard-prerequisites.mdx"
+
+Axiom’s MetricsDB is a purpose-built metrics database that handles high-cardinality time-series data without the cost penalties and performance degradation common in traditional metrics systems. This section explains how to work with OpenTelemetry (OTel) metrics in Axiom.
+
+<Note>
+Support for OTel metrics is currently in public preview. For more information, see [Feature states](/platform-overview/roadmap#feature-states).
+
+Axiom is confident in the quality of MetricsDB and the reliability of the data it stores. However, preview features are experimental, and Axiom doesn’t guarantee data integrity or accuracy for preview features.
+</Note>
+
+## What makes MetricsDB different
+
+MetricsDB is engineered from the ground up to embrace dimensional complexity:
+
+- **High cardinality as a design principle:** Store metrics with high-cardinality tags. Where other metrics databases penalize you with higher costs or degraded performance, MetricsDB treats high cardinality as a core capability.
+- **Intentional architecture:** The storage format, query engine, and compression algorithms are specifically optimized for time-series metrics workloads. These design constraints are thoughtful trade-offs that deliver exceptional performance and cost-efficiency for real-world metrics use cases.
+- **Unified observability:** Query metrics alongside logs and traces, enabling powerful correlations across all your telemetry data without switching tools or learning multiple query languages.
+
+<Prerequisites />
+
+<Note>
+You must use a dedicated dataset for OTel metrics. When you create a dataset, select the type of OTel data you want to send to it. For more information, see [Create dataset](/reference/datasets#create-dataset).
+</Note>
+
+## Ingest metrics
+
+You can ingest OTel metrics the same way you ingest logs and traces.
+
+For more information, see [Send OpenTelemetry data to Axiom](/send-data/opentelemetry).
+
+<Note>
+The `/v1/metrics` endpoint only supports the `application/x-protobuf` content type. JSON format isn’t supported for metrics ingestion.
+</Note>
+
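+For example, the following minimal OpenTelemetry Collector configuration exports metrics to Axiom. It mirrors the metrics example in [Send OpenTelemetry data to Axiom](/send-data/opentelemetry): note the `x-axiom-metrics-dataset` header, and replace `AXIOM_DOMAIN`, `API_TOKEN`, and `DATASET_NAME` with your own values.
+
+```yaml
+exporters:
+  otlphttp:
+    compression: gzip
+    endpoint: https://AXIOM_DOMAIN
+    headers:
+      authorization: Bearer API_TOKEN
+      x-axiom-metrics-dataset: DATASET_NAME
+
+service:
+  pipelines:
+    metrics:
+      receivers:
+        - otlp
+      processors:
+        - memory_limiter
+        - batch
+      exporters:
+        - otlphttp
+```
+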
+## Query metrics
+
+You can query metric data using the Axiom Console. For more information, see [Query metrics](/query-data/metrics/query-metrics).
+
+<Frame>
+  <img src="/doc-assets/shots/example-metrics-query.png" alt="Example metrics query in the Axiom Console" />
+</Frame>
+
+## Dashboards and monitors
+
+You can use OTel metrics in dashboards and monitors the same way you use logs and traces.
+
+- Build visualizations using metrics queries.
+- Set alerts on derived metrics such as error rate or latency percentiles.
+- Combine multiple signals in a single panel.
+
+For more information, see [Dashboards](/dashboards/overview) and [Monitors](/monitor-data/monitors).
+
+<Frame>
+  <img src="/doc-assets/shots/otel-metrics-dashboard.png" alt="Dashboard built from OTel metrics" />
+</Frame>
+
+## Design choices and constraints
+
+MetricsDB makes intentional architectural trade-offs to optimize for the most common metrics use cases while maintaining exceptional performance at scale.
+
+### Query scope
+
+You can query one dataset per query.
+
+### Supported data types
+
+MetricsDB focuses on the core OpenTelemetry metric types that cover the vast majority of observability scenarios.
+
+Axiom supports the following OpenTelemetry metric types:
+- **Counter:** Monotonically increasing values. For example, request count.
+- **UpDownCounter:** Values that can increase or decrease. For example, active connections.
+- **Gauge:** Point-in-time measurements. For example, CPU usage or temperature.
+- **Histogram:** Distribution of values with configurable buckets. For example, request latency.
+
+Axiom doesn’t currently support the following:
+- Exponential histograms
+- `bytes`, `kvlist`, and `array` tag value types
+- Exemplar, baggage, and context data
+- Nanosecond-precision timestamps
+
+### Data model optimizations
+
+MetricsDB applies the following transformations to improve query performance and reduce storage costs:
+
+- **Timestamp precision:** Truncates nanosecond timestamps to second precision. MetricsDB is built for use cases where second-level granularity is sufficient, and this optimization significantly improves compression ratios and query speed.
+- **Unified tag namespace:** Flattens resource, scope, and metric tags into a single namespace. This simplification makes queries more straightforward and enables faster dimensional filtering: you don’t need to remember which tags came from which scope. See the example after this list.
+- **Unit normalization:** Converts the `unit` attribute to `otel.metric.unit` for consistent handling across all metric types.
+- **Histogram handling:** Assumes equal-width histograms and doesn’t preserve histogram metadata. This trade-off supports the most common histogram analysis patterns (percentiles, distribution visualization) while reducing storage requirements.
+
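+For example, after flattening, a resource-level tag such as `k8s.namespace.name` and a metric-level tag such as `code` can be combined in a single filter like `k8s.namespace.name == "monitoring" and code >= 500`, with no scope prefixes. The tag combination here is illustrative.
+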
+These design choices reflect real-world metrics usage patterns. If your use case requires capabilities not currently supported, [contact Axiom](https://axiom.co/contact) to discuss your requirements. Your feedback helps shape MetricsDB’s evolution.
diff --git a/query-data/metrics/query-metrics.mdx b/query-data/metrics/query-metrics.mdx
new file mode 100644
index 00000000..68da47af
--- /dev/null
+++ b/query-data/metrics/query-metrics.mdx
@@ -0,0 +1,123 @@
+---
+title: Query metrics
+description: This page explains how to query OpenTelemetry metrics.
+---
+
+Query and analyze your OpenTelemetry metrics data using the Axiom Console. This page shows you how to extract insights from your metrics through filtering, aggregation, and transformation operations.
+
+For more information on working with metrics in Axiom, see [Metrics overview](/query-data/metrics/overview).
+
+<Note>
+Support for OTel metrics is currently in public preview. For more information, see [Feature states](/platform-overview/roadmap#feature-states).
+</Note>
+
+## Concepts
+
+- **Dataset:** A group of related metrics.
+- **Metric:** A measurement that tracks a specific aspect of your system over time.
+- **Tag:** A key-value pair identifying a series.
+- **Series:** A unique combination of a metric and tag set.
+
+## Build metrics query
+
+To build a typical metrics query:
+
+1. Click the **Query** tab.
+1. Click **Builder** in the top left.
+1. In **Dataset**, define the source of your query. Select an OTel metrics dataset, and then select the metric you want to query.
+1. In **Where**, filter the results. Restrict the query to a set of series whose tag values match the conditions you specify.
+1. In **Transformations**, select the transformations you want to apply to the data.
+1. Click **Run**.
+
+## Example query
+
+<Frame>
+  <img src="/doc-assets/shots/example-metrics-query.png" alt="Example metrics query in the Axiom Console" />
+</Frame>
+
+This example queries the `axiom-dev.metrics` dataset’s `alertmanager_alerts` metric over the hour before the current time. It filters results to series where `k8s.namespace.name` is `monitoring` and aggregates values over 30-second time windows into their average.
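+
+Expressed as individual Builder selections, this example corresponds to the following sketch. The layout is illustrative: the Builder composes these elements for you.
+
+```txt
+Dataset:         axiom-dev.metrics:alertmanager_alerts
+Where:           k8s.namespace.name == "monitoring"
+Transformations: align to 30s using avg
+```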
+
+## Elements of queries
+
+The following explains each element of a metrics query.
+
+### Source
+
+Specify the dataset and the metric in the **Dataset** field. The dataset and metric names are separated by a colon in the Builder interface.
+
+For example, `axiom-dev.metrics:alertmanager_alerts`.
+
+### Filter
+
+Use the **Where** section to filter series based on tag values.
+
+1. Click **+** in the **Where** section.
+1. Select the tag where you want to filter for values.
+1. Select the comparison operator of the filter. Available operators are:
+ - Equality: `==`, `!=`
+ - Comparisons: `<`, `<=`, `>`, `>=`
+1. Specify the value for which you want to filter.
+1. Click **+** to add another filter. Axiom joins multiple filters with the logical `and` operator.
+
+For example, the following joins three filters: `project == /.*metrics.*/ and code >= 200 and code < 300`.
+
+### Transformations
+
+Use the **Transformations** section to transform individual values or series.
+
+1. Click **+** in the **Transformations** section.
+1. Select the transformation you want to apply to the data. Available transformations are:
+ - **map:** Map the data to a new value using the expression you specify.
+ - **align:** Aggregate data using the function and the time window you specify.
+ - **group:** Group the data by a set of tags using the aggregation function you specify.
+
+#### Map
+
+Use `map` to transform individual values.
+
+Available mapping functions:
+
+| Function | Description |
+|-----------------------|------------------------------------------------------------------|
+| `rate` | Computes the per-second rate of change for a metric. |
+| `abs` | Returns the absolute value of each data point. |
+| `interpolate::linear` | Linearly interpolates missing values.                            |
+| `fill::prev` | Fills missing values using the previous non-null value. |
+| `!` | Negates the value. Maps 0 to 1, and all other values to 0. |
+
+For example, to calculate rate per second for the metric, use `map rate`. To fill empty values with the latest value, use `map fill::prev`.
+
+#### Align
+
+Use `align` to aggregate over time windows. You can specify the time window and the aggregation function to apply.
+
+Available aggregation functions:
+
+| Function | Description |
+|----------|---------------------------------------|
+| `avg` | Averages values in each interval. |
+| `count` | Counts non-null values per interval. |
+| `max` | Takes the maximum value per interval. |
+| `min` | Takes the minimum value per interval. |
+| `prom::rate` | Computes a Prometheus-style rate per interval. |
+| `sum` | Sums values in each interval. |
+
+For example, to calculate the average over 5-minute time windows, use `align to 5m using avg`. To count the data points in the last hour, use `align to 1h using count`.
+
+#### Group
+
+Use `group` to combine series by tags. You can specify the tags to group by and the aggregation function to apply. If you don’t specify tags, Axiom aggregates all series into one group.
+
+Available aggregation functions:
+
+| Function | Description |
+|----------|---------------------------------------|
+| `avg` | Averages values in each group. |
+| `count` | Counts non-null values per group. |
+| `max` | Takes the maximum value per group. |
+| `min` | Takes the minimum value per group. |
+| `sum` | Sums values in each group. |
+
+For example:
+- To calculate the number of series, use `group using count`.
+- To sum the values of all series, use `group using sum`.
+- To group data by the `project` and `namespace` tags using the `sum` aggregation, use `group by project, namespace using sum`.
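+
+You can combine transformations. The following sketch uses a hypothetical `http_requests_total` metric and chains the filter from the earlier example with an `align` and a `group` transformation. The metric name and the layout are illustrative; the Builder composes these elements for you.
+
+```txt
+Dataset:         axiom-dev.metrics:http_requests_total
+Where:           code >= 200 and code < 300
+Transformations: align to 1m using prom::rate, group by project using sum
+```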
diff --git a/query-data/traces.mdx b/query-data/traces.mdx
index 6b7371c8..9de8a179 100644
--- a/query-data/traces.mdx
+++ b/query-data/traces.mdx
@@ -1,6 +1,7 @@
---
title: 'Explore traces'
description: "Learn how to observe how requests propagate through your distributed systems, understand the interactions between microservices, and trace the life of the request through your app’s architecture."
+sidebarTitle: Traces
keywords: ['axiom documentation', 'documentation', 'axiom', 'traces', 'tracing', 'span', 'trace', 'schema', 'http', 'otel', 'otlp', 'waterfall view']
---
diff --git a/reference/datasets.mdx b/reference/datasets.mdx
index b189f104..04ff989e 100644
--- a/reference/datasets.mdx
+++ b/reference/datasets.mdx
@@ -72,7 +72,12 @@ To create a dataset using the Axiom app, follow these steps:
1. Click **Settings > Datasets and views**.
1. Click **New dataset**.
-1. Name the dataset, and then click **Add**.
+1. Name the dataset and add an optional description.
+1. In **Kind**, select one of the following:
+ - Select **Axiom Events** if you plan to use the dataset for event data that doesn’t follow OpenTelemetry (OTel) conventions.
+ - If you plan to send OTel data to the dataset, select the type of OTel data you want to use the dataset for. You must use dedicated datasets for each OTel component. For more information, see [Send OpenTelemetry data to Axiom](/send-data/opentelemetry).
+1. In **Data retention**, select how long to store your data in this dataset. For more information, see [Specify data retention period](#specify-data-retention-period).
+1. Click **Save dataset**.
To create a dataset using the Axiom API, send a POST request to the [datasets endpoint](https://axiom.co/docs/restapi/endpoints/createDataset).
diff --git a/reference/system-requirements.mdx b/reference/system-requirements.mdx
index 53c23d9d..46d41464 100644
--- a/reference/system-requirements.mdx
+++ b/reference/system-requirements.mdx
@@ -19,6 +19,8 @@ Some actions in the Dashboards tab, such as moving dashboard elements, aren’t
## OpenTelemetry
+### Semantic conventions
+
Axiom supports the following versions of OTel semantic conventions:
| Version | Date when support was added | Schema in OTel docs |
@@ -34,7 +36,7 @@ Axiom supports the following versions of OTel semantic conventions:
| 1.27.0 | 12-06-2025 | [1.27.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.27.0) |
| 1.26.0 | 03-07-2024 | [1.26.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.26.0) |
| 1.25.0 | 26-04-2024 | [1.25.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.25.0) |
-| 1.24.0 | 19-01-2024| [1.24.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.24.0) |
+| 1.24.0 | 19-01-2024 | [1.24.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.24.0) |
| 1.23.1 | 26-03-2024 | [1.23.1](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.23.1) |
| 1.23.0 | 26-03-2024 | [1.23.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.23.0) |
| 1.22.0 | 26-03-2024 | [1.22.0](https://github.com/open-telemetry/semantic-conventions/blob/main/schemas/1.22.0) |
@@ -43,3 +45,15 @@ Axiom supports the following versions of OTel semantic conventions:
Version 1.29.0 and version 1.35.0 of OTel semantic conventions aren’t supported.
For more information, see [Semantic conventions](/reference/semantic-conventions).
+
+### Logs, traces, and metrics
+
+| OpenTelemetry component | Support |
+| ------------------------------------------------------------------ | ------------------- |
+| [Logs](https://opentelemetry.io/docs/concepts/signals/logs/) | ✓ |
+| [Traces](https://opentelemetry.io/docs/concepts/signals/traces/) | ✓ |
+| [Metrics](https://opentelemetry.io/docs/concepts/signals/metrics/) | Public preview |
+
+<Note>
+Support for OTel metrics is currently in public preview. For more information, see [Feature states](/platform-overview/roadmap#feature-states).
+</Note>
diff --git a/send-data/opentelemetry.mdx b/send-data/opentelemetry.mdx
index 95d59f3f..458e9e30 100644
--- a/send-data/opentelemetry.mdx
+++ b/send-data/opentelemetry.mdx
@@ -19,20 +19,48 @@ The OpenTelemetry project has published strong specifications for the three main
OpenTelemetry-compatible events flow into Axiom, where they’re organized into datasets for easy segmentation. Users can create a dataset to receive OpenTelemetry data and obtain an API token for ingestion. Axiom provides comprehensive observability through browsing, querying, dashboards, and alerting of OpenTelemetry data.
-OTel traces and OTel logs support are already live. Axiom will soon support OpenTelemetry Metrics (OTel Metrics).
-
-| OpenTelemetry component | Currently supported |
+| OpenTelemetry component | Support |
| ------------------------------------------------------------------ | ------------------- |
-| [Traces](https://opentelemetry.io/docs/concepts/signals/traces/) | Yes |
-| [Logs](https://opentelemetry.io/docs/concepts/signals/logs/) | Yes |
-| [Metrics](https://opentelemetry.io/docs/concepts/signals/metrics/) | No (coming soon) |
+| [Logs](https://opentelemetry.io/docs/concepts/signals/logs/) | ✓ |
+| [Traces](https://opentelemetry.io/docs/concepts/signals/traces/) | ✓ |
+| [Metrics](https://opentelemetry.io/docs/concepts/signals/metrics/) | Public preview |
+
+<Note>
+Support for OTel metrics is currently in public preview. For more information, see [Feature states](/platform-overview/roadmap#feature-states).
+
+You must use a different, dedicated dataset for each OTel component. When you create a dataset, select the type of OTel data you want to send to it. For more information, see [Create dataset](/reference/datasets#create-dataset).
+</Note>
## OpenTelemetry Collector
-Configuring the OpenTelemetry collector is as simple as creating an HTTP exporter that sends data to the Axiom API together with headers to set the dataset and API token:
+Configuring the OpenTelemetry Collector is as simple as creating an HTTP exporter that sends data to the Axiom API together with headers to set the dataset and API token. The following examples show configurations for logs, traces, and metrics.
+
+<Tabs>
+<Tab title="Logs">
+
+```yaml
+exporters:
+ otlphttp:
+ compression: gzip
+ endpoint: https://AXIOM_DOMAIN
+ headers:
+ authorization: Bearer API_TOKEN
+ x-axiom-dataset: DATASET_NAME
+
+service:
+ pipelines:
+ logs:
+ receivers:
+ - otlp
+ processors:
+ - memory_limiter
+ - batch
+ exporters:
+ - otlphttp
+```
+
+</Tab>
+<Tab title="Traces">
+
```yaml
exporters:
otlphttp:
@@ -53,16 +81,46 @@ service:
exporters:
- otlphttp
```
+
+</Tab>
+<Tab title="Metrics">
+
+```yaml
+exporters:
+ otlphttp:
+ compression: gzip
+ endpoint: https://AXIOM_DOMAIN
+ headers:
+ authorization: Bearer API_TOKEN
+ x-axiom-metrics-dataset: DATASET_NAME
+
+service:
+ pipelines:
+ metrics:
+ receivers:
+ - otlp
+ processors:
+ - memory_limiter
+ - batch
+ exporters:
+ - otlphttp
+```
+
+</Tab>
+</Tabs>
+
+When configuring metrics, use the `x-axiom-metrics-dataset` header instead of `x-axiom-dataset`.
-When using the OTLP/HTTP endpoint for traces and logs, the following endpoint URLs should be used in your SDK exporter OTel configuration.
+When using the OTLP/HTTP endpoint, use the following endpoint URLs in your SDK exporter OTel configuration.
- Traces: `https://AXIOM_DOMAIN/v1/traces`
- Logs: `https://AXIOM_DOMAIN/v1/logs`
+- Metrics: `https://AXIOM_DOMAIN/v1/metrics`
+
+<Note>
+The `/v1/metrics` endpoint only supports the `application/x-protobuf` content type. JSON format isn’t supported for metrics ingestion.
+</Note>
## OpenTelemetry for Go
diff --git a/send-data/reference-architectures.mdx b/send-data/reference-architectures.mdx
index 253dbc67..6d2d46bd 100644
--- a/send-data/reference-architectures.mdx
+++ b/send-data/reference-architectures.mdx
@@ -101,12 +101,21 @@ Axiom natively supports the OpenTelemetry Line Protocol (OTLP). Configuring the
```yaml
exporters:
+ # Exporter for logs and traces
otlphttp:
compression: gzip
endpoint: https://AXIOM_DOMAIN
headers:
authorization: Bearer API_TOKEN
x-axiom-dataset: DATASET_NAME
+
+ # Exporter for metrics
+ otlphttp/metrics:
+ compression: gzip
+ endpoint: https://AXIOM_DOMAIN
+ headers:
+ authorization: Bearer API_TOKEN
+ x-axiom-metrics-dataset: DATASET_NAME
service:
pipelines:
@@ -118,11 +127,30 @@ service:
- batch
exporters:
- otlphttp
+
+ logs:
+ receivers:
+ - otlp
+ processors:
+ - memory_limiter
+ - batch
+ exporters:
+ - otlphttp
+
+ metrics:
+ receivers:
+ - otlp
+ processors:
+ - memory_limiter
+ - batch
+ exporters:
+ - otlphttp/metrics
```
+
+When configuring metrics, use the `x-axiom-metrics-dataset` header instead of `x-axiom-dataset`.
+
### Vector