Merged
2 changes: 2 additions & 0 deletions _topic_maps/_topic_map.yml
Original file line number Diff line number Diff line change
@@ -3225,6 +3225,8 @@ Topics:
Topics:
- Name: Release notes for the Red Hat build of OpenTelemetry
File: otel-rn
- Name: About the Red Hat build of OpenTelemetry
File: otel-architecture
- Name: Installing the Red Hat build of OpenTelemetry
File: otel-installing
- Name: Configuring the Collector
22 changes: 0 additions & 22 deletions modules/distr-tracing-product-overview.adoc

This file was deleted.

14 changes: 14 additions & 0 deletions modules/distr-tracing-tempo-about-rn.adoc
@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="distr-tracing-product-overview_{context}"]
= About this release

{DTShortName} 3.7 is provided through the link:https://catalog.redhat.com/software/containers/rhosdt/tempo-operator-bundle/642c3e0eacf1b5bdbba7654a/history[{TempoOperator} 0.18.0] and is based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo] 2.8.2.

[NOTE]
====
Some linked Jira tickets are accessible only with Red Hat credentials.
====
9 changes: 9 additions & 0 deletions modules/distr-tracing-tempo-coo-ui-plugin.adoc
@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-tempo-configuring.adoc

:_mod-docs-content-type: REFERENCE
[id="distr-tracing-tempo-coo-ui-plugin_{context}"]
= Configuring the UI

You can use the distributed tracing UI plugin of the {coo-first} as the user interface (UI) for the {DTProductName}. For more information about installing and using the distributed tracing UI plugin, see "Distributed tracing UI plugin" in _Cluster Observability Operator_.
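
As a minimal sketch of enabling the plugin, assuming the {coo-first} is already installed: the distributed tracing UI plugin is typically enabled with a `UIPlugin` custom resource similar to the following. The `apiVersion` and field values shown here are illustrative and can differ between Operator versions, so verify them against the _Cluster Observability Operator_ documentation.

[source,yaml]
----
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: distributed-tracing
spec:
  type: DistributedTracing
----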
@@ -0,0 +1,38 @@
// Module included in the following assemblies:
//
// * observability/distr-tracing-architecture.adoc
// * service_mesh/v2x/ossm-architecture.adoc
// * service_mesh/v1x/ossm-architecture.adoc
// * serverless/observability/tracing/serverless-tracing.adoc

:_mod-docs-content-type: CONCEPT
[id="distr-tracing-tempo-key-concepts-in-distributed-tracing_{context}"]
= Key concepts in distributed tracing

Every time a user takes an action in an application, the architecture executes a request that may require dozens of different services to participate in producing a response.
{DTProductName} lets you perform distributed tracing, which records the path of a request through various microservices that make up an application.

_Distributed tracing_ is a technique for tying together information about different units of work, usually executed in different processes or hosts, so that you can understand the whole chain of events in a distributed transaction.
With distributed tracing, developers can visualize call flows in large microservice architectures.
Distributed tracing is valuable for understanding serialization, parallelism, and sources of latency.

{DTProductName} records the execution of individual requests across the whole stack of microservices, and presents them as traces. A _trace_ is the path of a request's data and execution through the system. An end-to-end trace consists of one or more spans.

A _span_ represents a logical unit of work in {DTProductName}. A span has an operation name, a start time, and a duration, and can also carry tags and logs. Spans can be nested and ordered to model causal relationships.

As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture.
You can use {DTProductName} for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications.

With {DTShortName}, you can perform the following functions:

* Monitor distributed transactions

* Optimize performance and latency

* Perform root cause analysis

You can combine {DTShortName} with other relevant components of the {product-title}:

* {OTELName} for forwarding traces to a TempoStack instance

* Distributed tracing UI plugin of the {coo-first}
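
The first combination above, forwarding traces from a Collector to a TempoStack instance, can be sketched as the following Collector configuration fragment. This is illustrative rather than definitive: the `tempo-simplest-distributor` service name assumes a TempoStack instance named `simplest` in the same namespace, and the TLS settings depend on your deployment.

[source,yaml]
----
exporters:
  otlp:
    endpoint: tempo-simplest-distributor:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
----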
11 changes: 11 additions & 0 deletions modules/distr-tracing-tempo-rn-bug-fixes.adoc
@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="fixed-issues_{context}"]
= Fixed issues

This release fixes the following CVE:

* link:https://access.redhat.com/security/cve/cve-2025-22874[CVE-2025-22874]
9 changes: 9 additions & 0 deletions modules/distr-tracing-tempo-rn-deprecated-features.adoc
@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="deprecated-features_{context}"]
= Deprecated features

None.
10 changes: 10 additions & 0 deletions modules/distr-tracing-tempo-rn-enhancements.adoc
@@ -0,0 +1,10 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="new-features-and-enhancements_{context}"]
= New features and enhancements

Network policy to restrict API access::
With this update, the {TempoOperator} creates a network policy for the Operator that restricts access to the APIs that the Operator uses.
14 changes: 14 additions & 0 deletions modules/distr-tracing-tempo-rn-known-issues.adoc
@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="known-issues_{context}"]
= Known issues

Tempo query frontend fails to fetch trace JSON::
In the Jaeger UI, clicking *Trace* and refreshing the page, or accessing *Trace* -> *Trace Timeline* -> *Trace JSON* from the Tempo query frontend, might cause the Tempo query pod to fail with an EOF error.
+
To work around this problem, use the distributed tracing UI plugin to view traces.
+
link:https://issues.redhat.com/browse/TRACING-5483[TRACING-5483]
9 changes: 9 additions & 0 deletions modules/distr-tracing-tempo-rn-removed-features.adoc
@@ -0,0 +1,9 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="removed-features_{context}"]
= Removed features

None.
12 changes: 12 additions & 0 deletions modules/distr-tracing-tempo-rn-technology-preview-features.adoc
@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// * observability/distr_tracing/distr-tracing-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="technology-preview-features_{context}"]
= Technology Preview features

None.

//:FeatureName: Each of these features
//include::snippets/technology-preview.adoc[leveloffset=+1]
14 changes: 14 additions & 0 deletions modules/otel-about-rn.adoc
@@ -0,0 +1,14 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-rn.adoc

:_mod-docs-content-type: REFERENCE
[id="otel-product-overview_{context}"]
= About this release

{OTELName} 3.7 is provided through the link:https://catalog.redhat.com/software/containers/rhosdt/opentelemetry-operator-bundle/615618406feffc5384e84400/history[{OTELOperator} 0.135.0] and is based on the open source link:https://opentelemetry.io/docs/collector/[OpenTelemetry] release 0.135.0.

[NOTE]
====
Some linked Jira tickets are accessible only with Red Hat credentials.
====
15 changes: 8 additions & 7 deletions modules/otel-product-overview.adoc → modules/otel-about.adoc
@@ -1,21 +1,20 @@
// Module included in the following assemblies:
//
// * observability/otel/otel_rn/otel-rn-3-2.adoc
// * observability/otel/otel_rn/otel-rn-past-releases.adoc
// * observability/otel/otel-architecture.adoc

:_mod-docs-content-type: CONCEPT
[id="otel-product-overview_{context}"]
= {OTELName} overview
[id="otel-about-product_{context}"]
= About {OTELName}

{OTELName} is based on the open source link:https://opentelemetry.io/[OpenTelemetry project], which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. {OTELName} product provides support for deploying and managing the OpenTelemetry Collector and simplifying the workload instrumentation.
{OTELName} is based on the open source link:https://opentelemetry.io/[OpenTelemetry project], which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. {OTELName} provides support for deploying and managing the OpenTelemetry Collector and simplifying the workload instrumentation.

The link:https://opentelemetry.io/docs/collector/[OpenTelemetry Collector] can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs.

The OpenTelemetry Collector has a number of features including the following:
The OpenTelemetry Collector provides several features including the following:

Data Collection and Processing Hub:: It acts as a central component that gathers telemetry data like metrics and traces from various sources. This data can be created from instrumented applications and infrastructure.

Customizable telemetry data pipeline:: The OpenTelemetry Collector is designed to be customizable. It supports various processors, exporters, and receivers.
Customizable telemetry data pipeline:: The OpenTelemetry Collector is customizable and supports various processors, exporters, and receivers.

Auto-instrumentation features:: Automatic instrumentation simplifies the process of adding observability to applications. Developers do not need to manually instrument their code for basic telemetry data.

@@ -26,3 +25,5 @@ Centralized data collection:: In a microservices architecture, the Collector can
Data enrichment and processing:: Before forwarding data to analysis tools, the Collector can enrich, filter, and process this data.

Multi-backend receiving and exporting:: The Collector can receive and send data to multiple monitoring and analysis platforms simultaneously.

You can use {OTELName} in combination with {TempoName}.
45 changes: 45 additions & 0 deletions modules/otel-collector-deployment-modes.adoc
@@ -0,0 +1,45 @@
//Module included in the following assemblies:
//
// * observability/otel/otel-collector/otel-collector-configuration-intro.adoc

:_mod-docs-content-type: REFERENCE
[id="otel-collector-deployment-modes_{context}"]
= Deployment modes

The `OpenTelemetryCollector` custom resource allows you to specify one of the following deployment modes for the OpenTelemetry Collector:

Deployment:: The default.

StatefulSet:: If you need to run stateful workloads, for example when using the Collector's File Storage Extension or Tail Sampling Processor, use the StatefulSet deployment mode.

DaemonSet:: If you need to scrape telemetry data from every node, for example by using the Collector's Filelog Receiver to read container logs, use the DaemonSet deployment mode.

Sidecar:: If you need access to log files inside a container, inject the Collector as a sidecar, and use the Collector's Filelog Receiver and a shared volume such as `emptyDir`.
+
If you need to configure an application to send telemetry data via `localhost`, inject the Collector as a sidecar, and set up the Collector to forward the telemetry data to an external service via an encrypted and authenticated connection. The Collector runs in the same pod as the application when injected as a sidecar.
+
[NOTE]
====

If you choose the sidecar deployment mode, then in addition to setting the `spec.mode: sidecar` field in the `OpenTelemetryCollector` custom resource (CR), you must also set the `sidecar.opentelemetry.io/inject` annotation on the pod or the namespace. If you set this annotation on both the pod and the namespace, the pod annotation takes precedence when it is set to either `false` or an `OpenTelemetryCollector` CR name.

As a pod annotation, the `sidecar.opentelemetry.io/inject` annotation supports several values:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
...
annotations:
sidecar.opentelemetry.io/inject: "<supported_value>" <1>
...
----
<1> Supported values:
+
`false`:: Does not inject the Collector. This is the default if the annotation is missing.
`true`:: Injects the Collector with the configuration of the `OpenTelemetryCollector` CR in the same namespace.
`<collector_name>`:: Injects the Collector with the configuration of the `<collector_name>` `OpenTelemetryCollector` CR in the same namespace.
`<namespace>/<collector_name>`:: Injects the Collector with the configuration of the `<collector_name>` `OpenTelemetryCollector` CR in the `<namespace>` namespace.

====
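
The deployment mode is selected in the `spec.mode` field of the `OpenTelemetryCollector` CR. The following minimal example is a sketch that runs the Collector as a daemon set; the `apiVersion` shown is an assumption that can vary between Operator versions, and the Collector configuration itself is elided.

[source,yaml]
----
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  mode: daemonset # one of: deployment (default), statefulset, daemonset, sidecar
  config:
    # receivers, processors, exporters, and service pipelines go here
----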
7 changes: 6 additions & 1 deletion modules/otel-config-send-metrics-monitoring-stack.adoc
@@ -27,7 +27,12 @@ spec:
service:
telemetry:
metrics:
address: ":8888"
readers:
- pull:
exporter:
prometheus:
host: 0.0.0.0
port: 8888
pipelines:
metrics:
exporters: [prometheus]
100 changes: 100 additions & 0 deletions modules/otel-forwarding-data-to-third-party-systems.adoc
@@ -0,0 +1,100 @@
//Module included in the following assemblies:
//
// * observability/otel/otel-forwarding-data.adoc

:_mod-docs-content-type: PROCEDURE
[id="otel-forwarding-data-to-third-party-systems_{context}"]
= Forwarding telemetry data to third-party systems

The OpenTelemetry Collector exports telemetry data by using the OTLP exporter, which implements the OpenTelemetry Protocol (OTLP) over the gRPC or HTTP transports. If your third-party system does not support OTLP or another protocol supported in the {OTELShortName}, you can deploy an unsupported custom OpenTelemetry Collector that receives telemetry data via OTLP and exports it to your third-party system by using a custom exporter.

[WARNING]
====
Red{nbsp}Hat does not support custom deployments.
====

.Prerequisites

* You have developed your own unsupported custom exporter that can export telemetry data to your third-party system.

.Procedure

* Deploy a custom Collector either through the OperatorHub or manually:

** If your third-party system supports it, deploy the custom Collector by using the OperatorHub.

** Deploy the custom Collector manually by using a config map, deployment, and service.
+
.Example of a custom Collector deployment
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
name: custom-otel-collector-config
data:
otel-collector-config.yaml: |
receivers:
otlp:
protocols:
grpc:
exporters:
debug: {}
prometheus:
service:
pipelines:
traces:
receivers: [otlp]
exporters: [debug] # <1>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: custom-otel-collector-deployment
spec:
replicas: 1
selector:
matchLabels:
component: otel-collector
template:
metadata:
labels:
component: otel-collector
spec:
containers:
- name: opentelemetry-collector
image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:latest # <2>
command:
- "/otelcol-contrib"
- "--config=/conf/otel-collector-config.yaml"
ports:
- name: otlp
containerPort: 4317
protocol: TCP
volumeMounts:
- name: otel-collector-config-vol
mountPath: /conf
readOnly: true
volumes:
- name: otel-collector-config-vol
configMap:
name: custom-otel-collector-config
---
apiVersion: v1
kind: Service
metadata:
name: custom-otel-collector-service # <3>
labels:
component: otel-collector
spec:
type: ClusterIP
ports:
- name: otlp-grpc
port: 4317
targetPort: 4317
selector:
component: otel-collector
----
<1> Replace `debug` with the required exporter for your third-party system.
<2> Replace the image with the required version of the OpenTelemetry Collector that has the required exporter for your third-party system.
<3> The service name is used in the {OTELName} `OpenTelemetryCollector` CR to configure the OTLP exporter.
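
For example, in the supported Collector's `OpenTelemetryCollector` CR, the OTLP exporter might reference the service as follows. This is a sketch: the port matches the `otlp-grpc` service port above, but the namespace qualification and TLS settings depend on your deployment.

[source,yaml]
----
exporters:
  otlp:
    endpoint: custom-otel-collector-service:4317
    tls:
      insecure: true
----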