Closed
28 commits
705e465
update DTProductVersion, JaegerVersion, and TempoVersion values in _a…
max-cx Sep 14, 2023
222220b
Document that OTEL operator can create service monitors
pavolloffay Sep 15, 2023
56d2d66
Drop OTELcol Jaeger exporter
pavolloffay Sep 20, 2023
26b645b
Merge pull request #12 from pavolloffay/distributed-tracing-3.0-drop-…
max-cx Sep 20, 2023
7e21cbd
Review
pavolloffay Sep 20, 2023
4548ff9
Review
pavolloffay Sep 20, 2023
8541093
Merge pull request #10 from pavolloffay/distributed-tracing-3.0-prome…
max-cx Sep 20, 2023
b7f9b8b
distr-otel: add description of the basic auth extension
frzifus Sep 21, 2023
39300f5
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 22, 2023
10b6250
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 22, 2023
f56db38
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 22, 2023
679907d
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 22, 2023
1b45f5a
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 22, 2023
ff7945b
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 22, 2023
1bbc3ff
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 22, 2023
a44cbba
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 22, 2023
9e8f50b
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 22, 2023
c24b710
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 22, 2023
125027f
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 25, 2023
fd8a1fb
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 25, 2023
4dc5a8a
Update modules/distr-tracing-otel-config-collector.adoc
frzifus Sep 25, 2023
1c86957
Merge pull request #13 from max-cx/distributed-tracing-3.0_extension_…
max-cx Sep 25, 2023
6a1a483
Document Tempo monitor tab and span RED metrics
pavolloffay Sep 15, 2023
2d060b9
Fix
pavolloffay Sep 27, 2023
d27ae7c
Fix
pavolloffay Sep 27, 2023
7785d6e
Fix
pavolloffay Sep 27, 2023
90ef61d
Merge pull request #11 from pavolloffay/distributed-tracing-3.0-span-red
max-cx Sep 27, 2023
e6cffef
TRACING-3552: Add documentation for Kafka receiver and exporter in OTEL
andreasgerstmayr Oct 3, 2023
6 changes: 3 additions & 3 deletions _attributes/common-attributes.adoc
@@ -116,18 +116,18 @@ endif::[]
//distributed tracing
:DTProductName: Red Hat OpenShift distributed tracing platform
:DTShortName: distributed tracing platform
:DTProductVersion: 2.9
:DTProductVersion: 3.0
:JaegerName: Red Hat OpenShift distributed tracing platform (Jaeger)
:JaegerShortName: distributed tracing platform (Jaeger)
:JaegerVersion: 1.47.0
:JaegerVersion: ?.??.?
:OTELName: Red Hat OpenShift distributed tracing data collection
:OTELShortName: distributed tracing data collection
:OTELOperator: Red Hat OpenShift distributed tracing data collection Operator
:OTELVersion: 0.81.0
:TempoName: Red Hat OpenShift distributed tracing platform (Tempo)
:TempoShortName: distributed tracing platform (Tempo)
:TempoOperator: Tempo Operator
:TempoVersion: 2.1.1
:TempoVersion: ?.?.?
//logging
:logging-title: logging subsystem for Red Hat OpenShift
:logging-title-uc: Logging subsystem for Red Hat OpenShift
@@ -21,6 +21,8 @@ include::modules/distr-tracing-tempo-config-storage.adoc[leveloffset=+2]

include::modules/distr-tracing-tempo-config-query-frontend.adoc[leveloffset=+2]

include::modules/distr-tracing-tempo-config-spanmetrics.adoc[leveloffset=+2]

[id="setting-up-monitoring-for-tempo"]
== Setting up monitoring for the {TempoShortName}

196 changes: 165 additions & 31 deletions modules/distr-tracing-otel-config-collector.adoc
@@ -14,6 +14,8 @@ Processors:: Optional. Processors run through the data between it is received and exported.

Exporters:: An exporter, which can be push or pull based, is how you send data to one or more back ends or destinations. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings.

Connectors:: A connector connects two pipelines: It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data.
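
For example, a connector is enabled by listing it as an exporter in one pipeline and as a receiver in another. The following minimal sketch assumes the `spanmetrics` connector that is described later in this module, with an `otlp` receiver and a `prometheus` exporter configured elsewhere in the custom resource:

[source,yaml]
----
config: |
  connectors:
    spanmetrics: {}
  service:
    pipelines:
      traces:
        receivers: [otlp]
        exporters: [spanmetrics] # the connector consumes traces at the end of this pipeline
      metrics:
        receivers: [spanmetrics] # the same connector emits generated metrics into this pipeline
        exporters: [prometheus]
----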

You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the `spec.config.service` section of the YAML file. As a best practice, only enable the components that you need.

.Example of the OpenTelemetry Collector custom resource file
@@ -26,10 +28,9 @@ metadata:
namespace: tracing-system
spec:
mode: deployment
ports:
- name: promexporter
port: 8889
protocol: TCP
observability:
metrics:
enableMetrics: true
config: |
receivers:
otlp:
@@ -38,8 +39,8 @@ spec:
http:
processors:
exporters:
jaeger:
endpoint: jaeger-production-collector-headless.tracing-system.svc:14250
otlp:
endpoint: jaeger-production-collector-headless.tracing-system.svc:4317
tls:
ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
prometheus:
@@ -76,7 +77,7 @@ spec:

|exporters:
|An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings.
|`otlp`, `otlphttp`, `jaeger`, `logging`, `prometheus`
|`otlp`, `otlphttp`, `logging`, `prometheus`
|None

|service:
@@ -236,6 +237,46 @@ The Zipkin receiver ingests data in the Zipkin v1 and v2 formats.
<1> The Zipkin HTTP endpoint. If omitted, the default `+0.0.0.0:9411+` is used.
<2> The TLS server side configuration. See the OTLP receiver configuration section for more details.

[id="kafka-receiver_{context}"]
==== Kafka receiver

The Kafka receiver receives traces, metrics, and logs from Kafka in OTLP format.

* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: metrics, logs, traces

.OpenTelemetry Collector custom resource with enabled Kafka receiver
[source,yaml]
----
config: |
receivers:
kafka:
brokers: ["localhost:9092"] <1>
protocol_version: 2.0.0 <2>
topic: otlp_spans <3>
auth:
plain_text: <4>
username: example
password: example
tls: <5>
ca_file: ca.pem
cert_file: cert.pem
key_file: key.pem
insecure: false <6>
server_name_override: kafka.example.corp <7>
service:
pipelines:
traces:
receivers: [kafka]
----
<1> The list of Kafka brokers. Default: `+localhost:9092+`.
<2> The Kafka protocol version, for example `+2.0.0+`. This is a required field.
<3> The name of the Kafka topic to read from. Default: `+otlp_spans+`.
<4> The plaintext authentication configuration. If omitted, plaintext authentication is disabled.
<5> The client side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.
<6> Disables verifying the server's certificate chain and host name. Default: `+false+`.
<7> ServerName indicates the name of the server requested by the client to support virtual hosting.

[id="processors_{context}"]
=== Processors

@@ -343,53 +384,146 @@ The OTLP HTTP exporter exports data using the OpenTelemetry protocol (OTLP).
<2> The client side TLS configuration. Defines paths to TLS certificates.
<3> Headers are sent in every HTTP request.

[id="jaeger-exporter_{context}"]
==== Jaeger exporter
[id="logging-exporter_{context}"]
==== Logging exporter

The Jaeger exporter exports data using the Jaeger proto format through gRPC.
The Logging exporter prints data to the standard output.

* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces
* Supported signals: traces, metrics

.OpenTelemetry Collector custom resource with enabled Jaeger exporter
.OpenTelemetry Collector custom resource with an enabled Logging exporter
[source,yaml]
----
config: |
exporters:
jaeger:
endpoint: jaeger-all-in-one:14250 <1>
tls: <2>
ca_file: ca.pem
cert_file: cert.pem
key_file: key.pem
logging:
verbosity: detailed <1>
service:
pipelines:
traces:
exporters: [jaeger]
exporters: [logging]
metrics:
exporters: [logging]
----
<1> The Jaeger gRPC endpoint.
<2> The client side TLS configuration. Defines paths to TLS certificates.
<1> The verbosity of the logging export: `detailed`, `normal`, or `basic`. When set to `detailed`, pipeline data is logged verbosely. Defaults to `normal`.

[id="logging-exporter_{context}"]
==== Logging exporter
[id="kafka-exporter_{context}"]
==== Kafka exporter

The Logging exporter prints data to the standard output.
The Kafka exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages, so use it with the batch and queued retry processors for higher throughput and resiliency, as shown in the sketch after the following example.

* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces, metrics
* Supported signals: metrics, logs, traces

.OpenTelemetry Collector custom resource with an enabled Logging exporter
.OpenTelemetry Collector custom resource with enabled Kafka exporter
[source,yaml]
----
config: |
exporters:
logging:
verbosity: detailed <1>
kafka:
brokers: ["localhost:9092"] <1>
protocol_version: 2.0.0 <2>
topic: otlp_spans <3>
auth:
plain_text: <4>
username: example
password: example
tls: <5>
ca_file: ca.pem
cert_file: cert.pem
key_file: key.pem
insecure: false <6>
server_name_override: kafka.example.corp <7>
service:
pipelines:
traces:
exporters: [logging]
exporters: [kafka]
----
<1> The list of Kafka brokers. Default: `+localhost:9092+`.
<2> The Kafka protocol version, for example `+2.0.0+`. This is a required field.
<3> The name of the Kafka topic to export to. Default: `+otlp_spans+` for traces, `+otlp_metrics+` for metrics, `+otlp_logs+` for logs.
<4> The plaintext authentication configuration. If omitted, plaintext authentication is disabled.
<5> The client side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled.
<6> Disables verifying the server's certificate chain and host name. Default: `+false+`.
<7> ServerName indicates the name of the server requested by the client to support virtual hosting.
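
Because the producer blocks, a sketch of a trace pipeline that batches data before the Kafka exporter, assuming the `otlp` receiver and the `batch` processor are also defined in the custom resource, might look as follows:

[source,yaml]
----
config: |
  processors:
    batch: {} # buffers spans so that the synchronous Kafka producer sends them in batches
  exporters:
    kafka:
      brokers: ["localhost:9092"]
      protocol_version: 2.0.0
  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [kafka]
----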

[id="extensions_{context}"]
=== Extensions

[id="basicauth-extension_{context}"]
==== BasicAuth extension

You can use the BasicAuth extension as an authenticator on receivers and exporters.
Client authentication and server authentication for the BasicAuth extension are configured in separate sections in the OpenTelemetry Collector custom resource.

* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces, metrics, logs

.OpenTelemetry Collector custom resource with client and server authentication configured for the BasicAuth extension
[source,yaml]
----
config: |
extensions:
basicauth/server:
htpasswd:
file: .htpasswd <1>
inline: |
${env:BASIC_AUTH_USERNAME}:${env:BASIC_AUTH_PASSWORD} <2>

basicauth/client:
client_auth:
username: username <3>
password: password <4>

receivers:
otlp:
protocols:
http:
auth:
authenticator: basicauth/server <5>
exporters:
otlp:
auth:
authenticator: basicauth/client <6>

service:
extensions: [basicauth/server, basicauth/client]
pipelines:
traces:
receivers: [otlp]
exporters: [otlp]
----
<1> The BasicAuth extension can be configured as a server authenticator that reads credentials from an `.htpasswd` file.
<2> Alternatively, the server authenticator can read credentials from an inline string that references environment variables; see the sketch after this list.
<3> The username for the client authenticator.
<4> The password for the client authenticator.
<5> Assigns the server authenticator to the OTLP receiver.
<6> Assigns the client authenticator to the OTLP exporter.
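
The environment variables that the inline `htpasswd` entry references must be present in the Collector container. The following sketch shows one possible way to inject them through the `env` field of the `OpenTelemetryCollector` CR; the secret name `basic-auth-credentials` is a hypothetical example:

[source,yaml]
----
spec:
  env:
    - name: BASIC_AUTH_USERNAME
      valueFrom:
        secretKeyRef:
          name: basic-auth-credentials # hypothetical secret that holds the credentials
          key: username
    - name: BASIC_AUTH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: basic-auth-credentials
          key: password
----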

[id="connectors_{context}"]
=== Connectors

[id="spanmetrics-connector_{context}"]
==== Spanmetrics connector

The Spanmetrics connector aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data.

* Support level: link:https://access.redhat.com/support/offerings/techpreview[Technology Preview]
* Supported signals: traces

.OpenTelemetry Collector custom resource with an enabled spanmetrics connector
[source,yaml]
----
config: |
connectors:
spanmetrics:
metrics_flush_interval: 15s <1>
service:
pipelines:
traces:
exporters: [spanmetrics]
metrics:
exporters: [logging]
receivers: [spanmetrics]
----
<1> Defines the flush interval of the generated metrics. Defaults to `15s`.
@@ -6,7 +6,35 @@ This module is included in the following assemblies:
[id="distr-tracing-send-metrics-monitoring-stack_{context}"]
= Sending metrics to the monitoring stack

You can configure the monitoring stack to scrape OpenTelemetry Collector metrics endpoints and to remove duplicated labels that the monitoring stack has added during scraping.
You can configure the OpenTelemetry Collector custom resource (CR) to create a Prometheus `ServiceMonitor` CR to scrape the collector's pipeline metrics and the enabled Prometheus exporters.

.Example of the OpenTelemetry Collector custom resource with the Prometheus exporter
[source,yaml]
----
spec:
mode: deployment
observability:
metrics:
enableMetrics: true <1>
config: |
exporters:
prometheus:
endpoint: 0.0.0.0:8889
resource_to_telemetry_conversion:
enabled: true # by default resource attributes are dropped
service:
telemetry:
metrics:
address: ":8888"
pipelines:
metrics:
receivers: [otlp]
exporters: [prometheus]
----
<1> Configures the Operator to create the Prometheus `ServiceMonitor` CR, which scrapes the Collector's internal metrics endpoint and the Prometheus exporter metrics endpoints. The metrics are stored in the OpenShift monitoring stack.


Alternatively, you can create the Prometheus `PodMonitor` CR manually, which offers more fine-grained control, such as removing duplicated labels that are added during Prometheus scraping.

.Sample `PodMonitor` custom resource (CR) that configures the monitoring stack to scrape Collector metrics
[source,yaml]
@@ -18,7 +46,7 @@ metadata:
spec:
selector:
matchLabels:
app.kubernetes.io/name: otel-collector
app.kubernetes.io/name: <cr-name>-collector <1>
podMetricsEndpoints:
- port: metrics <2>
- port: promexporter <3>
@@ -35,5 +63,6 @@ spec:
- action: labeldrop
regex: job
----
<1> The name of the internal metrics port for the OpenTelemetry Collector. This port name is always `metrics`.
<2> The name of the Prometheus exporter port for the OpenTelemetry Collector. This port name is defined in the `.spec.ports` section of the `OpenTelemetryCollector` CR.
<1> The name of the OpenTelemetry custom resource.
<2> The name of the internal metrics port for the OpenTelemetry Collector. This port name is always `metrics`.
<3> The name of the Prometheus exporter port for the OpenTelemetry Collector.