5 changes: 4 additions & 1 deletion modules/distr-tracing-tempo-about-rn.adoc
@@ -6,9 +6,12 @@
[id="distr-tracing-product-overview_{context}"]
= About this release

{DTShortName} 3.7 is provided through the link:https://catalog.redhat.com/software/containers/rhosdt/tempo-operator-bundle/642c3e0eacf1b5bdbba7654a/history[{TempoOperator} 0.18.0] and based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo] 2.8.2.
[role="_abstract"]
{DTShortName} 3.8 is provided through the link:https://catalog.redhat.com/software/containers/rhosdt/tempo-operator-bundle/642c3e0eacf1b5bdbba7654a/history[{TempoOperator} 0.??.0] and is based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo] 2.?.?.
🤖 [error] OpenShiftAsciiDoc.SuggestAttribute: Use the AsciiDoc attribute '{TempoName}' or '{TempoShortName}' rather than the plain text product term 'Tempo', unless your use case is an exception.


////
[NOTE]
====
Some linked Jira tickets are accessible only with Red Hat credentials.
====
////
15 changes: 13 additions & 2 deletions modules/distr-tracing-tempo-rn-bug-fixes.adoc
@@ -6,6 +6,17 @@
[id="fixed-issues_{context}"]
= Fixed issues

This release fixes the following CVE:
Resolved issue with TLS certificates affecting Tempo pods::
Before this update, the Tempo pods would stop communicating when internal TLS certificates were renewed. With this update, the Tempo pods restart automatically when certificates are renewed.
+
link:https://issues.redhat.com/browse/TRACING-5622[TRACING-5622]

* link:https://access.redhat.com/security/cve/cve-2025-22874[CVE-2025-22874]
Tempo query frontend no longer fails to fetch trace JSON::
Before this update, clicking on *Trace* in the Jaeger UI and refreshing the page, or accessing *Trace* -> *Trace Timeline* -> *Trace JSON* from the Tempo query frontend, might result in the Tempo query pod failing with an EOF error. With this update, this issue is resolved.
+
link:https://issues.redhat.com/browse/TRACING-5483[TRACING-5483]

[NOTE]
====
Some linked Jira tickets are accessible only with Red Hat credentials.
====
3 changes: 1 addition & 2 deletions modules/distr-tracing-tempo-rn-enhancements.adoc
@@ -6,5 +6,4 @@
[id="new-features-and-enhancements_{context}"]
= New features and enhancements

Network policy to restrict API access::
With this update, the {TempoOperator} creates a network policy for the Operator to restrict access to the used APIs.
None.
14 changes: 8 additions & 6 deletions modules/distr-tracing-tempo-rn-known-issues.adoc
@@ -6,9 +6,11 @@
[id="known-issues_{context}"]
= Known issues

Tempo query frontend fails to fetch trace JSON::
In the Jaeger UI, clicking on *Trace* and refreshing the page, or accessing *Trace* -> *Trace Timeline* -> *Trace JSON* from the Tempo query frontend, might result in the Tempo query pod failing with an EOF error.
+
To work around this problem, use the distributed tracing UI plugin to view traces.
+
link:https://issues.redhat.com/browse/TRACING-5483[TRACING-5483]
None.

////
[NOTE]
====
Some linked Jira tickets are accessible only with Red Hat credentials.
====
////
@@ -8,5 +8,13 @@

None.

[IMPORTANT]
====
[subs="attributes+"]
Technology Preview features are not supported with Red{nbsp}Hat production service level agreements (SLAs) and might not be functionally complete. Red{nbsp}Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red{nbsp}Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope].
====

//:FeatureName: Each of these features
//include::snippets/technology-preview.adoc[leveloffset=+1]
5 changes: 4 additions & 1 deletion modules/otel-about-rn.adoc
@@ -6,9 +6,12 @@
[id="otel-product-overview_{context}"]
= About this release

{OTELName} 3.7 is provided through the link:https://catalog.redhat.com/software/containers/rhosdt/opentelemetry-operator-bundle/615618406feffc5384e84400/history[{OTELOperator} 0.135.0] and based on the open source link:https://opentelemetry.io/docs/collector/[OpenTelemetry] release 0.135.0.
[role="_abstract"]
{OTELName} 3.8 is provided through the link:https://catalog.redhat.com/software/containers/rhosdt/opentelemetry-operator-bundle/615618406feffc5384e84400/history[{OTELOperator} 0.140.0] and is based on the open source link:https://opentelemetry.io/docs/collector/[OpenTelemetry] release 0.140.0.

////
[NOTE]
====
Some linked Jira tickets are accessible only with Red Hat credentials.
====
////
2 changes: 1 addition & 1 deletion modules/otel-collector-config-options.adoc
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-configuration-of-otel-collector.adoc
// * observability/otel/otel-configuration-of-otel-intro.adoc

:_mod-docs-content-type: REFERENCE
[id="otel-collector-config-options_{context}"]
60 changes: 60 additions & 0 deletions modules/otel-collector-profile-signal.adoc
@@ -0,0 +1,60 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-configuration-of-otel-intro.adoc

:_mod-docs-content-type: REFERENCE
[id="otel-collector-profile-signal_{context}"]
= Profile signal

[role="_abstract"]
The Profile signal is an emerging telemetry data format for observing code execution and resource consumption.

:FeatureName: The Profile signal
include::snippets/technology-preview.adoc[leveloffset=+1]

The Profile signal enables you to pinpoint performance bottlenecks and resource inefficiencies down to the specific function or line of code. By correlating this high-fidelity profile data with traces, metrics, and logs, you can perform comprehensive performance analysis and targeted code optimization in production environments.

Profiling can target an application or operating system:

* Using profiling to observe an application can help developers validate code performance, prevent regressions, and monitor resource consumption, such as memory and CPU usage, and thus identify and improve inefficient code.
* Using profiling to observe an operating system can provide insights into the infrastructure, system calls, kernel operations, and I/O wait times, and thus help in optimizing infrastructure for efficiency and cost savings.

.OpenTelemetry Collector custom resource with the enabled Profile signal
🤖 [error] AsciiDocDITA.BlockTitle: Block titles can only be assigned to examples, figures, and tables in DITA.

[source,yaml]
----
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otel-profiles-collector
  namespace: otel-profile
spec:
  args:
    feature-gates: service.profilesSupport # <1>
  config:
    receivers:
      otlp: # <2>
        protocols:
          grpc:
            endpoint: '0.0.0.0:4317'
          http:
            endpoint: '0.0.0.0:4318'
    exporters:
      otlp/pyroscope:
        endpoint: "pyroscope.pyroscope-monitoring.svc.cluster.local:4317" # <3>
    service:
      pipelines: # <4>
        profiles:
          receivers: [otlp]
          exporters: [otlp/pyroscope]
# ...
----
<1> Enables profile support by setting the `feature-gates` field to the `service.profilesSupport` feature gate.
🤖 [error] AsciiDocDITA.CalloutList: Callouts are not supported in DITA.

<2> Configures the OTLP Receiver so that the OpenTelemetry Collector can receive profiling data over OTLP.
<3> Configures the destination for the exported profiles, such as a storage back end.
<4> Defines a profiling pipeline, including a configuration for forwarding the received profile data to an OTLP-compatible profiling backend such as Grafana Pyroscope.

[role="_additional-resources"]
.Additional resources
* link:https://opentelemetry.io/docs/specs/otel/profiles/[OpenTelemetry Profiles]
* link:https://opentelemetry.io/docs/specs/semconv/general/profiles/[Profiles attributes]
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-configuration-of-otel-collector.adoc
// * observability/otel/otel-configuration-of-otel-intro.adoc

:_mod-docs-content-type: PROCEDURE
[id="otel-creating-required-RBAC-resources-automatically_{context}"]
2 changes: 2 additions & 0 deletions modules/otel-exporters-aws-cloudwatch-exporter.adoc
@@ -24,13 +24,15 @@ include::snippets/technology-preview.adoc[]
region: <aws_region_of_log_stream> # <3>
endpoint: <protocol><service_endpoint_of_amazon_cloudwatch_logs> # <4>
log_retention: <supported_value_in_days> # <5>
role_arn: "<iam_role>" # <6>
# ...
----
<1> Required. If the log group does not exist yet, it is automatically created.
<2> Required. If the log stream does not exist yet, it is automatically created.
<3> Optional. If the AWS region is not already set in the default credential chain, you must specify it.
<4> Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. You must include the protocol, such as `https://`, as part of the endpoint value. For the list of service endpoints by region, see link:https://docs.aws.amazon.com/general/latest/gr/cwl_region.html[Amazon CloudWatch Logs endpoints and quotas] (AWS General Reference).
<5> Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to `0`, the logs never expire by default. Supported values for retention in days are `1`, `3`, `5`, `7`, `14`, `30`, `60`, `90`, `120`, `150`, `180`, `365`, `400`, `545`, `731`, `1827`, `2192`, `2557`, `2922`, `3288`, or `3653`.
<6> Optional. The AWS Identity and Access Management (IAM) role to assume for uploading the logs to a different AWS account.
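+
The following is an illustrative sketch that combines these parameters in a single AWS CloudWatch Logs Exporter configuration; the group, stream, region, and role names are hypothetical:
+
[source,yaml]
----
# ...
  config:
    exporters:
      awscloudwatchlogs:
        log_group_name: "example-app-group" # created automatically if missing
        log_stream_name: "example-app-stream" # created automatically if missing
        region: "us-east-1"
        log_retention: 30 # days; omitting this or setting 0 means the logs never expire
        role_arn: "arn:aws:iam::123456789012:role/example-cross-account-logs"
# ...
----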

[role="_additional-resources"]
.Additional resources
2 changes: 2 additions & 0 deletions modules/otel-exporters-aws-emf-exporter.adoc
@@ -35,6 +35,7 @@ include::snippets/technology-preview.adoc[]
endpoint: <protocol><endpoint> # <5>
log_retention: <supported_value_in_days> # <6>
namespace: <custom_namespace> # <7>
role_arn: "<iam_role>" # <8>
# ...
----
<1> You can use the `log_group_name` parameter to customize the log group name, set the default `+/metrics/default+` value, or use the following placeholders:
@@ -64,6 +65,7 @@ If no resource attribute is found in the resource attribute map, the placeholder
<5> Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. You must include the protocol, such as `https://`, as part of the endpoint value. For the list of service endpoints by region, see link:https://docs.aws.amazon.com/general/latest/gr/cwl_region.html[Amazon CloudWatch Logs endpoints and quotas] (AWS General Reference).
<6> Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to `0`, the logs never expire by default. Supported values for retention in days are `1`, `3`, `5`, `7`, `14`, `30`, `60`, `90`, `120`, `150`, `180`, `365`, `400`, `545`, `731`, `1827`, `2192`, `2557`, `2922`, `3288`, or `3653`.
<7> Optional. A custom namespace for the Amazon CloudWatch metrics.
<8> Optional. The AWS Identity and Access Management (IAM) role to assume for uploading the metrics to a different AWS account.

[role="_additional-resources"]
.Additional resources
46 changes: 46 additions & 0 deletions modules/otel-exporters-google-cloud-exporter.adoc
@@ -0,0 +1,46 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-collector/otel-collector-exporters.adoc

:_mod-docs-content-type: REFERENCE
[id="otel-exporters-google-cloud-exporter_{context}"]
= Google Cloud Exporter

[role="_abstract"]
You can use the Google Cloud Exporter to export telemetry data to Google Cloud Operations Suite. Using the Google Cloud Exporter, you can export metrics to Google Cloud Monitoring, logs to Google Cloud Logging, and traces to Google Cloud Trace.

:FeatureName: The Google Cloud Exporter
include::snippets/technology-preview.adoc[]

.OpenTelemetry Collector custom resource with the enabled Google Cloud Exporter
🤖 [error] AsciiDocDITA.BlockTitle: Block titles can only be assigned to examples, figures, and tables in DITA.

[source,yaml]
----
# ...
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json # <1>
  volumeMounts:
  - name: google-application-credentials
    mountPath: /var/secrets/google
    readOnly: true
  volumes:
  - name: google-application-credentials
    secret:
      secretName: google-application-credentials
  config:
    exporters:
      googlecloud:
        project: # <2>
# ...
----
<1> The `GOOGLE_APPLICATION_CREDENTIALS` environment variable that points to the authentication `key.json` file. The `key.json` file is mounted as a secret volume to the OpenTelemetry Collector.
🤖 [error] AsciiDocDITA.CalloutList: Callouts are not supported in DITA.

<2> Optional. The project identifier. If not specified, the project is automatically determined from the credentials.
+
By default, the exporter sends telemetry data to the project specified in the `project` field of the exporter's configuration. You can override this on a per-metric basis by using the `gcp.project.id` resource attribute. For example, if a metric has a `project` label, you can use the Group-by-Attributes Processor to promote it to a resource attribute, and then use the Resource Processor to rename the attribute from `project` to `gcp.project.id`.
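+
As an illustrative sketch, such a promotion and rename might be configured as follows; the metrics pipeline wiring, including the OTLP receiver, is hypothetical:
+
[source,yaml]
----
# ...
  config:
    processors:
      groupbyattrs:
        keys:
        - project # promotes the 'project' metric label to a resource attribute
      resource:
        attributes:
        - key: gcp.project.id
          from_attribute: project # copies the value into 'gcp.project.id'
          action: upsert
        - key: project
          action: delete # optionally removes the original attribute
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: [groupbyattrs, resource]
          exporters: [googlecloud]
# ...
----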

[role="_additional-resources"]
.Additional resources
* link:https://cloud.google.com/monitoring[Google Cloud Monitoring]
* link:https://cloud.google.com/logging[Google Cloud Logging]
* link:https://cloud.google.com/trace[Google Cloud Trace]
* link:https://cloud.google.com/iam/docs/workload-identity-federation-with-kubernetes#deploy[Google Cloud Guides: Configure Workload Identity Federation with Kubernetes]
12 changes: 12 additions & 0 deletions modules/otel-forwarding-data-to-aws.adoc
@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-forwarding-telemetry-data.adoc

:_mod-docs-content-type: CONCEPT
[id="otel-forwarding-data-to-aws_{context}"]
= Forwarding telemetry data to AWS

[role="_abstract"]
You can forward metrics, logs, and traces to the Amazon CloudWatch and AWS X-Ray services by using the following exporters of the OpenTelemetry Collector: the AWS CloudWatch Logs Exporter, the AWS EMF Exporter, and the AWS X-Ray Exporter.
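
For orientation, the following sketch shows one possible pipeline wiring for these exporters, assuming an OTLP receiver is configured elsewhere in the `OpenTelemetryCollector` custom resource; the log group and stream names are hypothetical:

[source,yaml]
----
# ...
  config:
    exporters:
      awsxray: {} # traces to AWS X-Ray
      awsemf: {} # metrics to Amazon CloudWatch in embedded metric format (EMF)
      awscloudwatchlogs: # logs to Amazon CloudWatch Logs
        log_group_name: "example-group"
        log_stream_name: "example-stream"
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [awsxray]
        metrics:
          receivers: [otlp]
          exporters: [awsemf]
        logs:
          receivers: [otlp]
          exporters: [awscloudwatchlogs]
# ...
----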

// Currently, this docs repository does not permit linking from here to the sections for AWS CloudWatch Logs Exporter, AWS EMF Exporter, and AWS X-Ray Exporter. See the xref to the "Exporters" page placed in observability/otel/otel-forwarding-telemetry-data.adoc.
12 changes: 12 additions & 0 deletions modules/otel-forwarding-data-to-google-cloud.adoc
@@ -0,0 +1,12 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-forwarding-telemetry-data.adoc

:_mod-docs-content-type: CONCEPT
[id="otel-forwarding-data-to-google-cloud_{context}"]
= Forwarding telemetry data to Google Cloud

[role="_abstract"]
You can forward metrics, logs, and traces to Google Cloud Operations Suite by using the Google Cloud Exporter of the OpenTelemetry Collector.

// Currently, this docs repository does not permit linking from here to the Google Cloud Exporter section. See the xref to the "Exporters" page placed in observability/otel/otel-forwarding-telemetry-data.adoc.
45 changes: 45 additions & 0 deletions modules/otel-receivers-filelog-receiver.adoc
@@ -32,3 +32,48 @@ include::snippets/technology-preview.adoc[]
----
<1> A list of file glob patterns that match the file paths to be read.
<2> An array of Operators. Each Operator performs a simple task such as parsing a timestamp or JSON. To process logs into a desired format, chain the Operators together.

To collect logs from application containers, you can use this receiver with sidecar injection. The {OTELOperator} can inject an OpenTelemetry Collector as a sidecar container into an application pod. This approach is useful when your application writes logs to files within the container file system. To make the log files accessible, the application container and the sidecar Collector container must mount a shared volume, such as `emptyDir`, which you configure through volume mounts in the `OpenTelemetryCollector` custom resource. The receiver can then tail the log files and apply Operators to parse and transform the logs. The following is a complete example of this approach:

.OpenTelemetry Collector custom resource with the Filelog Receiver configured in sidecar mode
🤖 [error] AsciiDocDITA.BlockTitle: Block titles can only be assigned to examples, figures, and tables in DITA.

[source,yaml]
----
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: filelog
  namespace: otel-logging
spec:
  mode: sidecar
  volumeMounts: # <1>
  - name: logs
    mountPath: /var/log/app
  config:
    receivers:
      filelog:
        include: # <2>
        - /var/log/app/*.log
        operators:
        - type: regex_parser
          regex: '^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \[(?P<level>\w+)\] (?P<message>.*)$'
          timestamp:
            parse_from: attributes.timestamp
            layout: '%Y-%m-%d %H:%M:%S'
    processors: {}
    exporters:
      debug:
        verbosity: detailed
    service:
      pipelines:
        logs:
          receivers: [filelog]
          processors: []
          exporters: [debug]
----
<1> Defines the volume mount that the sidecar Collector uses to access the target log files. This volume must match the volume name defined in the application deployment.
🤖 [error] AsciiDocDITA.CalloutList: Callouts are not supported in DITA.

<2> Specifies file glob patterns for matching the log files to tail. This receiver watches these paths for new log entries.
+
[IMPORTANT]
====
The `volumeMounts` field in the `OpenTelemetryCollector` custom resource is critical for the sidecar to access log files. The volume specified here must be defined in the application's `Deployment` or `Pod` specification. Both the application container and the sidecar Collector must mount the same volume.
====
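
As an illustrative sketch, an application `Deployment` that shares such a volume with the injected sidecar might look as follows; the application name and image are hypothetical:

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
  namespace: otel-logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        sidecar.opentelemetry.io/inject: "filelog" # requests injection of the sidecar Collector defined above
    spec:
      containers:
      - name: example-app
        image: quay.io/example/app:latest
        volumeMounts:
        - name: logs # must match the volume name in the OpenTelemetryCollector custom resource
          mountPath: /var/log/app
      volumes:
      - name: logs
        emptyDir: {}
----
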
55 changes: 55 additions & 0 deletions modules/otel-receivers-prometheus-remote-write-receiver.adoc
@@ -0,0 +1,55 @@
// Module included in the following assemblies:
//
// * observability/otel/otel-collector/otel-collector-receivers.adoc

:_mod-docs-content-type: REFERENCE
[id="otel-receivers-prometheus-remote-write-receiver_{context}"]
= Prometheus Remote Write Receiver

[role="_abstract"]
The Prometheus Remote Write Receiver receives metrics from Prometheus by using the Remote Write protocol and converts them to the OpenTelemetry format. This receiver supports only the Prometheus Remote Write v2 protocol.

:FeatureName: The Prometheus Remote Write Receiver
include::snippets/technology-preview.adoc[]

.OpenTelemetry Collector custom resource with the enabled Prometheus Remote Write Receiver
🤖 [error] AsciiDocDITA.BlockTitle: Block titles can only be assigned to examples, figures, and tables in DITA.

[source,yaml]
----
# ...
  config:
    receivers:
      prometheusremotewrite:
        endpoint: 0.0.0.0:9009 # <1>
    service:
      pipelines:
        metrics:
          receivers: [prometheusremotewrite]
# ...
----
<1> The endpoint where the receiver listens for Prometheus Remote Write requests.
🤖 [error] AsciiDocDITA.CalloutList: Callouts are not supported in DITA.


The following are the prerequisites for using the Prometheus Remote Write Receiver with Prometheus:

* Prometheus is started with the metadata WAL records feature flag enabled:
+
[source,terminal]
----
$ ./prometheus --config.file config.yml --enable-feature=metadata-wal-records
----
* Prometheus Remote Write v2 Protocol is enabled in the Prometheus configuration file:
+
[source,yaml]
----
remote_write:
  - url: "<your_chosen_prometheus_remote_write_receiver_endpoint>"
    protobuf_message: io.prometheus.write.v2.Request
----
* Native histograms are enabled in Prometheus. For more information about enabling native histograms in Prometheus, see the Prometheus documentation.

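
For example, assuming a locally installed Prometheus binary, you might enable both required feature flags in one command, because Prometheus accepts comma-separated feature flags; the configuration file name is illustrative:

[source,terminal]
----
$ ./prometheus --config.file=config.yml \
    --enable-feature=metadata-wal-records,native-histograms
----
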
[role="_additional-resources"]
.Additional resources
* link:https://prometheus.io/docs/prometheus/latest/feature_flags/#metadata-wal-records[Metadata WAL Records]
* link:https://prometheus.io/docs/specs/prw/remote_write_spec_2_0/[Prometheus Remote-Write 2.0 specification]
* link:https://prometheus.io/docs/specs/native_histograms/[Native Histograms]