From 0bf1e93d57511b33a38517b4b980dfd9208d8685 Mon Sep 17 00:00:00 2001 From: Ruben Vargas Date: Sun, 28 Jul 2024 00:43:43 -0600 Subject: [PATCH 1/7] OBSDOCS-864: Documentation for using OpenShift service CA in Tempo Signed-off-by: Ruben Vargas --- ...nfig-receiver-tls-for-tempomonolithic.adoc | 64 +++++++++++++++++++ ...po-config-receiver-tls-for-tempostack.adoc | 59 +++++++++++++++++ .../distr-tracing-tempo-configuring.adoc | 21 ++++++ 3 files changed, 144 insertions(+) create mode 100644 modules/distr-tracing-tempo-config-receiver-tls-for-tempomonolithic.adoc create mode 100644 modules/distr-tracing-tempo-config-receiver-tls-for-tempostack.adoc diff --git a/modules/distr-tracing-tempo-config-receiver-tls-for-tempomonolithic.adoc b/modules/distr-tracing-tempo-config-receiver-tls-for-tempomonolithic.adoc new file mode 100644 index 000000000000..fe29e1fb35a4 --- /dev/null +++ b/modules/distr-tracing-tempo-config-receiver-tls-for-tempomonolithic.adoc @@ -0,0 +1,64 @@ +// Module included in the following assemblies: +// +// * observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-configuring.adoc + +:_mod-docs-content-type: REFERENCE +[id="distr-tracing-tempo-config-receiver-tls-for-tempomonolithic_{context}"] += Receiver TLS configuration for a TempoMonolithic instance + +You can provide a TLS certificate in a secret or use the service serving certificates that are generated by {product-title}. + +* To provide a TLS certificate in a secret, configure it in the `TempoMonolithic` custom resource. ++ +[NOTE] +==== +This feature is not supported with the enabled Tempo Gateway. +==== ++ +.TLS for receivers and using a user-provided certificate in a secret +[source,yaml] +---- +apiVersion: tempo.grafana.com/v1alpha1 +kind: TempoMonolithic +# ... + spec: +# ... + ingestion: + otlp: + grpc: + tls: + enabled: true # <1> + certName: # <2> + caName: # <3> +# ... +---- +<1> TLS enabled at the Tempo Distributor. +<2> Secret containing a `tls.key` key and `tls.crt` certificate that you apply in advance. +<3> Optional: CA in a config map to enable mutual TLS authentication (mTLS). + +* Alternatively, you can use the service serving certificates that are generated by {product-title}. ++ +[NOTE] +==== +Mutual TLS authentication (mTLS) is not supported with this feature. +==== ++ +.TLS for receivers and using the service serving certificates that are generated by {product-title} +[source,yaml] +---- +apiVersion: tempo.grafana.com/v1alpha1 +kind: TempoMonolithic +# ... + spec: +# ... + ingestion: + otlp: + grpc: + tls: + enabled: true + http: + tls: + enabled: true # <1> +# ... +---- +<1> Minimal configuration for the TLS at the Tempo Distributor. diff --git a/modules/distr-tracing-tempo-config-receiver-tls-for-tempostack.adoc b/modules/distr-tracing-tempo-config-receiver-tls-for-tempostack.adoc new file mode 100644 index 000000000000..a3504e9591bf --- /dev/null +++ b/modules/distr-tracing-tempo-config-receiver-tls-for-tempostack.adoc @@ -0,0 +1,59 @@ +// Module included in the following assemblies: +// +// * observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-configuring.adoc + +:_mod-docs-content-type: REFERENCE +[id="distr-tracing-tempo-config-receiver-tls-for-tempostack_{context}"] += Receiver TLS configuration for a TempoStack instance + +You can provide a TLS certificate in a secret or use the service serving certificates that are generated by {product-title}. + +* To provide a TLS certificate in a secret, configure it in the `TempoStack` custom resource. 
++
+[NOTE]
+====
+This feature is not supported when the Tempo Gateway is enabled.
+====
++
+.TLS for receivers and using a user-provided certificate in a secret
+[source,yaml]
+----
+apiVersion: tempo.grafana.com/v1alpha1
+kind: TempoStack
+# ...
+spec:
+# ...
+  template:
+    distributor:
+      tls:
+        enabled: true # <1>
+        certName: # <2>
+        caName: # <3>
+# ...
+----
+<1> TLS enabled at the Tempo Distributor.
+<2> Secret containing a `tls.key` key and `tls.crt` certificate that you apply in advance.
+<3> Optional: CA in a config map to enable mutual TLS authentication (mTLS).
+
+* Alternatively, you can use the service serving certificates that are generated by {product-title}.
++
+[NOTE]
+====
+Mutual TLS authentication (mTLS) is not supported with this feature.
+====
++
+.TLS for receivers and using the service serving certificates that are generated by {product-title}
+[source,yaml]
+----
+apiVersion: tempo.grafana.com/v1alpha1
+kind: TempoStack
+# ...
+spec:
+# ...
+  template:
+    distributor:
+      tls:
+        enabled: true # <1>
+# ...
+----
+<1> Minimal configuration for enabling TLS at the Tempo Distributor.
diff --git a/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-configuring.adoc b/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-configuring.adoc
index 74c9e77f2432..2d66a2dcf677 100644
--- a/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-configuring.adoc
+++ b/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-configuring.adoc
@@ -30,6 +30,27 @@ include::modules/distr-tracing-tempo-config-query-frontend.adoc[leveloffset=+1]
 
 include::modules/distr-tracing-tempo-config-spanmetrics.adoc[leveloffset=+1]
 
+[id="config-receiver-tls_{context}"]
+== Configuring the receiver TLS
+
+The custom resource of your TempoStack or TempoMonolithic instance supports configuring TLS for receivers by using user-provided certificates or the service serving certificates that are generated by {product-title}.
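+
+For example, when you use the service serving certificates, a client that runs on the cluster, such as an OpenTelemetry Collector, can verify the certificate of the Tempo Distributor against the service CA bundle that is mounted into every pod. The following exporter fragment is only a sketch: it assumes a TempoStack instance named `simplest` in the `tempo` namespace, so adjust the endpoint to your own instance.
+
+[source,yaml]
+----
+# ...
+  config: |
+    exporters:
+      otlp:
+        endpoint: tempo-simplest-distributor.tempo.svc.cluster.local:4317 # assumed instance name and namespace
+        tls:
+          insecure: false
+          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt # service CA bundle available in every pod
+# ...
+----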
+ +include::modules/distr-tracing-tempo-config-receiver-tls-for-tempostack.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../security/certificates/service-serving-certificate.adoc#understanding-service-serving_service-serving-certificate[Understanding service serving certificates] +* xref:../../../security/certificate_types_descriptions/service-ca-certificates.adoc#cert-types-service-ca-certificates[Service CA certificates] + +include::modules/distr-tracing-tempo-config-receiver-tls-for-tempomonolithic.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* xref:../../../security/certificates/service-serving-certificate.adoc#understanding-service-serving_service-serving-certificate[Understanding service serving certificates] +* xref:../../../security/certificate_types_descriptions/service-ca-certificates.adoc#cert-types-service-ca-certificates[Service CA certificates] + include::modules/distr-tracing-tempo-config-multitenancy.adoc[leveloffset=+1] [id="taints-and-tolerations_{context}"] From 9d31d7be9c623c04ce8eab0682a7a941d6f9a896 Mon Sep 17 00:00:00 2001 From: Pavol Loffay Date: Tue, 6 Aug 2024 11:38:24 +0200 Subject: [PATCH 2/7] OBSDOCS-1254: Document AWS STS Signed-off-by: Pavol Loffay --- ...-object-storage-setup-aws-sts-install.adoc | 83 +++++++++++++++++++ .../distr-tracing-tempo-installing.adoc | 10 +++ ...cing-tempo-required-secret-parameters.adoc | 10 +++ 3 files changed, 103 insertions(+) create mode 100644 modules/distr-tracing-tempo-object-storage-setup-aws-sts-install.adoc diff --git a/modules/distr-tracing-tempo-object-storage-setup-aws-sts-install.adoc b/modules/distr-tracing-tempo-object-storage-setup-aws-sts-install.adoc new file mode 100644 index 000000000000..7b5022590eb3 --- /dev/null +++ b/modules/distr-tracing-tempo-object-storage-setup-aws-sts-install.adoc @@ -0,0 +1,83 @@ +// Module included in the following assemblies: +// +//* observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-installing.adoc + +:_mod-docs-content-type: PROCEDURE +[id="distr-tracing-tempo-object-storage-setup-aws-sts-install_{context}"] += Setting up the Amazon S3 storage with the Security Token Service + +You can set up the Amazon S3 storage with the Security Token Service (STS) by using the AWS Command Line Interface (AWS CLI). + +:FeatureName: The Amazon S3 storage with the Security Token Service +include::snippets/technology-preview.adoc[leveloffset=+1] + +.Prerequisites + +* You have installed the latest version of the AWS CLI. + +.Procedure + +. Create an AWS S3 bucket. + +. Create the following `trust.json` file for the AWS IAM policy that will set up a trust relationship for the AWS IAM role, created in the next step, with the service account of the TempoStack instance: ++ +[source,yaml] +---- +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Federated": "arn:aws:iam::${}:oidc-provider/${}" # <1> + }, + "Action": "sts:AssumeRoleWithWebIdentity", + "Condition": { + "StringEquals": { + "${OIDC_PROVIDER}:sub": [ + "system:serviceaccount:${}:tempo-${}" # <2> + "system:serviceaccount:${}:tempo-${}-query-frontend" + ] + } + } + } + ] +} +---- +<1> OIDC provider that you have configured on the {product-title}. You can get the configured OIDC provider value also by running the following command: `$ oc get authentication cluster -o json | jq -r '.spec.serviceAccountIssuer' | sed 's~http[s]*://~~g'`. +<2> Namespace in which you intend to create the TempoStack instance. 
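++
+For example, you can gather the values for the placeholders before you edit the file. The variable names in the following commands are illustrative:
++
+[source,terminal]
+----
+$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
+
+$ export OIDC_PROVIDER=$(oc get authentication cluster -o json | jq -r '.spec.serviceAccountIssuer' | sed 's~http[s]*://~~g')
+----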
+ +. Create an AWS IAM role by attaching the `trust.json` policy file that you created: ++ +[source,terminal] +---- +$ aws iam create-role \ + --role-name "tempo-s3-access" \ + --assume-role-policy-document "file:///tmp/trust.json" \ + --query Role.Arn \ + --output text +---- + +. Attach an AWS IAM policy to the created role: ++ +[source,terminal] +---- +$ aws iam attach-role-policy \ + --role-name "tempo-s3-access" \ + --policy-arn "arn:aws:iam::aws:policy/AmazonS3FullAccess" +---- + +. In the {product-title}, create an object storage secret with keys as follows: ++ +[source,yaml] +---- +apiVersion: v1 +kind: Secret +metadata: + name: minio-test +stringData: + bucket: + region: + role_arn: +type: Opaque +---- diff --git a/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-installing.adoc b/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-installing.adoc index 56b2f35879d9..15d7284a230b 100644 --- a/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-installing.adoc +++ b/observability/distr_tracing/distr_tracing_tempo/distr-tracing-tempo-installing.adoc @@ -61,6 +61,16 @@ include::modules/distr-tracing-tempo-install-tempomonolithic-cli.adoc[leveloffse include::modules/distr-tracing-tempo-storage-ref.adoc[leveloffset=+1] +include::modules/distr-tracing-tempo-object-storage-setup-aws-sts-install.adoc[leveloffset=+2] + +[role="_additional-resources"] +.Additional resources + +* link:https://docs.aws.amazon.com/iam/[AWS Identity and Access Management Documentation] +* link:https://docs.aws.amazon.com/cli/[AWS Command Line Interface Documentation] +* xref:../../../authentication/identity_providers/configuring-oidc-identity-provider.adoc#configuring-oidc-identity-provider[Configuring an OpenID Connect identity provider] +* link:https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html[Identify AWS resources with Amazon Resource Names (ARNs)] + [role="_additional-resources"] [id="additional-resources_dist-tracing-tempo-installing"] == Additional resources diff --git a/snippets/distr-tracing-tempo-required-secret-parameters.adoc b/snippets/distr-tracing-tempo-required-secret-parameters.adoc index 934225e3063a..8ae477e11235 100644 --- a/snippets/distr-tracing-tempo-required-secret-parameters.adoc +++ b/snippets/distr-tracing-tempo-required-secret-parameters.adoc @@ -51,6 +51,16 @@ See link:https://operator.min.io/[MinIO Operator]. 
`access_key_secret: ` +|Amazon S3 with Security Token Service (STS) +| +`name: tempostack-dev-s3 # example` + +`bucket: # link:https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html[Amazon S3 documentation]` + +`region: ` + +`role_arn: ` + |Microsoft Azure Blob Storage | `name: tempostack-dev-azure # example` From d1446c76e418521c7758414f33aa8c0671772004 Mon Sep 17 00:00:00 2001 From: Ruben Vargas Date: Sat, 22 Jun 2024 16:37:17 -0600 Subject: [PATCH 3/7] OBSDOCS-1146: Documentation for the group-by-attribute processor Signed-off-by: Ruben Vargas --- .../otel-collector-processors.adoc | 31 ++++++++++++++++--- 1 file changed, 27 insertions(+), 4 deletions(-) diff --git a/observability/otel/otel-collector/otel-collector-processors.adoc b/observability/otel/otel-collector/otel-collector-processors.adoc index c00f419a7461..2832de7b62f9 100644 --- a/observability/otel/otel-collector/otel-collector-processors.adoc +++ b/observability/otel/otel-collector/otel-collector-processors.adoc @@ -397,18 +397,18 @@ config: | You can optionally create an `attribute_source` configuration, which defines where to look for the attribute in `from_attribute`. The allowed value is `context` to search the context, which includes the HTTP headers, or `resource` to search the resource attributes. [id="cumulativetodelta-processor_{context}"] -== Cumulative to Delta Processor +== Cumulative-to-Delta Processor -This processor converts monotonic, cumulative-sum, and histogram metrics to monotonic delta metrics. +The Cumulative-to-Delta Processor processor converts monotonic, cumulative-sum, and histogram metrics to monotonic delta metrics. You can filter metrics by using the `include:` or `exclude:` fields and specifying the `strict` or `regexp` metric name matching. This processor does not convert non-monotonic sums and exponential histograms. -:FeatureName: The Cumulative to Delta Processor +:FeatureName: The Cumulative-to-Delta Processor include::snippets/technology-preview.adoc[] -.Example of an OpenTelemetry Collector custom resource with an enabled Cumulative to Delta Processor +.Example of an OpenTelemetry Collector custom resource with an enabled Cumulative-to-Delta Processor [source,yaml] ---- # ... @@ -430,3 +430,26 @@ config: | <2> Defines a value provided in the `metrics` field as a `strict` exact match or `regexp` regular expression. <3> Lists the metric names, which are exact matches or matches for regular expressions, of the metrics to be converted to delta metrics. If a metric matches both the `include` and `exclude` filters, the `exclude` filter takes precedence. <4> Optional: Configures which metrics to exclude. When omitted, no metrics are excluded from conversion to delta metrics. + +[id="groupbyattrsprocessor-processor_{context}"] +== Group-by-Attributes Processor + +The Group-by-Attributes Processor groups all spans, log records, and metric datapoints that share the same attributes by reassigning them to a Resource that matches those attributes. + +:FeatureName: The Group-by-Attributes Processor +include::snippets/technology-preview.adoc[] + +At minimum, configuring this processor involves specifying an array of attribute keys to be used to group spans, log records, or metric datapoints together, as in the following example: + +[source,yaml] +---- +# ... +processors: + groupbyattrs: + keys: # <1> + - # <2> + - +# ... +---- +<1> Specifies attribute keys to group by. 
+<2> If a processed span, log record, or metric datapoint contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values; and if no such Resource exists, a new one is created. If none of the specified attribute keys is present in the processed span, log record, or metric datapoint, then it remains associated with its current Resource. Multiple instances of the same Resource are consolidated. From 453a858ea5d40c9a8319674c2db4f14c6142f31a Mon Sep 17 00:00:00 2001 From: Max Leonov Date: Thu, 22 Aug 2024 15:33:58 +0200 Subject: [PATCH 4/7] OBSDOCS-1151: Add documentation for transform processor Signed-off-by: Ruben Vargas ruben.vp8510@gmail.com --- .../otel-collector-processors.adoc | 107 +++++++++++++++++- 1 file changed, 106 insertions(+), 1 deletion(-) diff --git a/observability/otel/otel-collector/otel-collector-processors.adoc b/observability/otel/otel-collector/otel-collector-processors.adoc index 2832de7b62f9..b3a3b5ccfc97 100644 --- a/observability/otel/otel-collector/otel-collector-processors.adoc +++ b/observability/otel/otel-collector/otel-collector-processors.adoc @@ -394,7 +394,7 @@ config: | <2> The default exporter when the attribute value is not present in the table in the next section. <3> The table that defines which values are to be routed to which exporters. -You can optionally create an `attribute_source` configuration, which defines where to look for the attribute in `from_attribute`. The allowed value is `context` to search the context, which includes the HTTP headers, or `resource` to search the resource attributes. +Optionally, you can create an `attribute_source` configuration, which defines where to look for the attribute that you specify in the `from_attribute` field. The supported values are `context` for searching the context including the HTTP headers, and `resource` for searching the resource attributes. [id="cumulativetodelta-processor_{context}"] == Cumulative-to-Delta Processor @@ -453,3 +453,108 @@ processors: ---- <1> Specifies attribute keys to group by. <2> If a processed span, log record, or metric datapoint contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values; and if no such Resource exists, a new one is created. If none of the specified attribute keys is present in the processed span, log record, or metric datapoint, then it remains associated with its current Resource. Multiple instances of the same Resource are consolidated. + +[id="transform-processor_{context}"] +== Transform Processor + +The Transform Processor enables modification of telemetry data according to specified rules and in the link:https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/pkg/ottl[OpenTelemetry Transformation Language (OTTL)]. +For each signal type, the processor processes a series of conditions and statements associated with a specific OTTL Context type and then executes them in sequence on incoming telemetry data as specified in the configuration. +Each condition and statement can access and modify telemetry data by using various functions, allowing conditions to dictate if a function is to be executed. + +All statements are written in the OTTL. +You can configure multiple context statements for different signals, traces, metrics, and logs. +The value of the `context` type specifies which OTTL Context the processor must use when interpreting the associated statements. 
+ +:FeatureName: The Transform Processor +include::snippets/technology-preview.adoc[] + +.Configuration summary +[source,yaml] +---- +# ... +config: | + processors: + transform: + error_mode: ignore # <1> + _statements: # <2> + - context: # <3> + conditions: # <4> + - + - + statements: # <5> + - + - + - + - context: + statements: + - + - + - +# ... +---- +<1> Optional: See the following table "Values for the optional `error_mode` field". +<2> Indicates a signal to be transformed. +<3> See the following table "Values for the `context` field". +<4> Optional: Conditions for performing a transformation. + +.Configuration example +[source,yaml] +---- +# ... +config: | + transform: + error_mode: ignore + trace_statements: # <1> + - context: resource + statements: + - keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"]) # <2> + - replace_pattern(attributes["process.command_line"], "password\\=[^\\s]*(\\s?)", "password=***") # <3> + - limit(attributes, 100, []) + - truncate_all(attributes, 4096) + - context: span # <4> + statements: + - set(status.code, 1) where attributes["http.path"] == "/health" + - set(name, attributes["http.route"]) + - replace_match(attributes["http.target"], "/user/*/list/*", "/user/{userId}/list/{listId}") + - limit(attributes, 100, []) + - truncate_all(attributes, 4096) +# ... +---- +<1> Transforms a trace signal. +<2> Keeps keys on the resources. +<3> Replaces attributes and replaces string characters in password fields with asterisks. +<4> Performs transformations at the span level. + +.Values for the `context` field +[options="header"] +[cols="a,a"] +|=== +|Signal Statement |Valid Contexts + +|`trace_statements` +|`resource`, `scope`, `span`, `spanevent` + +|`metric_statements` +|`resource`, `scope`, `metric`, `datapoint` + +|`log_statements` +|`resource`, `scope`, `log` + +|=== + +.Values for the optional `error_mode` field +[options="header"] +[cols="a,a"] +|=== +|Value |Description + +|`ignore` +|Ignores and logs errors returned by statements and then continues to the next statement. + +|`silent` +|Ignores and doesn't log errors returned by statements and then continues to the next statement. + +|`propagate` +|Returns errors up the pipeline and drops the payload. Implicit default. + +|=== From a9b2a80f702b92247585384bcc9fad75a575bc4c Mon Sep 17 00:00:00 2001 From: Max Leonov Date: Wed, 7 Aug 2024 16:40:52 +0200 Subject: [PATCH 5/7] OBSDOCS-1124: Release notes for Distributed tracing 3.3 --- .../distr_tracing/distr-tracing-rn.adoc | 135 +++++++++++++++++- observability/otel/otel-rn.adoc | 69 +++++++++ 2 files changed, 203 insertions(+), 1 deletion(-) diff --git a/observability/distr_tracing/distr-tracing-rn.adoc b/observability/distr_tracing/distr-tracing-rn.adoc index 70ad036d5660..8fe551581909 100644 --- a/observability/distr_tracing/distr-tracing-rn.adoc +++ b/observability/distr_tracing/distr-tracing-rn.adoc @@ -8,10 +8,143 @@ toc::[] include::modules/distr-tracing-product-overview.adoc[leveloffset=+1] -You can use the {DTShortName} xref:../otel/otel-forwarding.adoc#otel-forwarding-traces[in combination with] the xref:../otel/otel-installing.adoc#install-otel[{OTELName}]. +You can use the {TempoName} xref:../otel/otel-forwarding.adoc#otel-forwarding-traces[in combination with] the xref:../otel/otel-rn.adoc#otel_rn[{OTELName}]. 
include::snippets/distr-tracing-and-otel-disclaimer-about-docs-for-supported-features-only.adoc[] +[id="distr-tracing_3-3_{context}"] +== Release notes for {DTProductName} 3.3 + +This release of the {DTProductName} includes the {TempoName} and the deprecated {JaegerName}. + +//// +[id="distr-tracing_3-3_cves_{context}"] +=== CVEs + +This release fixes the following CVEs: + +* link:https://access.redhat.com/security/cve/CVE-202?-????/[CVE-202?-????] +//// + +[id="distr-tracing_3-3_tempo-release-notes_{context}"] +=== {TempoName} + +The {TempoName} is provided through the {TempoOperator}. + +The {TempoName} 3.3 is based on the open source link:https://grafana.com/oss/tempo/[Grafana Tempo] 2.5.0. + +//// +[id="distr-tracing_3-3_tempo-release-notes_technology-preview-features_{context}"] +==== Technology Preview features + +This update introduces the following Technology Preview feature: + +* ???. + +:FeatureName: The Tempo monolithic deployment +include::snippets/technology-preview.adoc[leveloffset=+1] +//// + +[id="distr-tracing_3-3_tempo-release-notes_new-features-and-enhancements_{context}"] +==== New features and enhancements + +This update introduces the following enhancements: + +* Support for securing the Jaeger UI and Jaeger APIs with the OpenShift OAuth Proxy. (link:https://issues.redhat.com/browse/TRACING-4108[TRACING-4108]) +* Support for using the service serving certificates, which are generated by {product-title}, on ingestion APIs when multitenancy is disabled. (link:https://issues.redhat.com/browse/TRACING-3954[TRACING-3954]) +* Support for ingesting by using the OTLP/HTTP protocol when multitenancy is enabled. (link:https://issues.redhat.com/browse/TRACING-4171[TRACING-4171]) +* Support for the AWS S3 Secure Token authentication. (link:https://issues.redhat.com/browse/TRACING-4176[TRACING-4176]) +* Support for automatically reloading certificates. (link:https://issues.redhat.com/browse/TRACING-4185[TRACING-4185]) +* Support for configuring the duration for which service names are available for querying. (link:https://issues.redhat.com/browse/TRACING-4214[TRACING-4214]) + +//// +[id="distr-tracing_3-3_tempo-release-notes_deprecated-functionality_{context}"] +==== Deprecated functionality + +In the {TempoName} 3.3, ???. +//// + +//// +[id="distr-tracing_3-3_tempo-release-notes_removal-notice_{context}"] +==== Removal notice + +In the {TempoName} 3.3, the FEATURE has been removed. Bug fixes and support are provided only through the end of the 3.? lifecycle. As an alternative to the FEATURE for USE CASE, you can use the ALTERNATIVE instead. +//// + +[id="distr-tracing_3-3_tempo-release-notes_bug-fixes_{context}"] +==== Bug fixes + +This update introduces the following bug fixes: + +* Before this update, storage certificate names did not support dots. With this update, storage certificate name can contain dots. (link:https://issues.redhat.com/browse/TRACING-4348[TRACING-4348]) +* Before this update, some users had to select a certificate when accessing the gateway route. With this update, there is no prompt to select a certificate. (link:https://issues.redhat.com/browse/TRACING-4431[TRACING-4431]) +* Before this update, the gateway component was not scalable. With this update, the gateway component is scalable. (link:https://issues.redhat.com/browse/TRACING-4497[TRACING-4497]) +* Before this update the Jaeger UI might fail with the *504 Gateway Time-out* error when accessed via a route. 
With this update, users can specify route annotations for increasing timeout, such as `haproxy.router.openshift.io/timeout: 3m`, when querying large data sets. (link:https://issues.redhat.com/browse/TRACING-4511[TRACING-4511]) + +[id="distr-tracing_3-3_tempo-release-notes_known-issues_{context}"] +==== Known issues + +There is currently a known issue: + +* Currently, the {TempoShortName} fails on the {ibm-z-title} (`s390x`) architecture. (link:https://issues.redhat.com/browse/TRACING-3545[TRACING-3545]) + +[id="distr-tracing_3-3_jaeger-release-notes_{context}"] +=== {JaegerName} + +The {JaegerName} is provided through the {JaegerOperator} Operator. + +The {JaegerName} 3.3 is based on the open source link:https://www.jaegertracing.io/[Jaeger] release 1.57.0. + +[IMPORTANT] +==== +Jaeger does not use FIPS validated cryptographic modules. +==== + +[id="distr-tracing_3-3_jaeger-release-notes_support-for-elasticsearch-operator_{context}"] +==== Support for the {es-op} + +The {JaegerName} 3.3 is supported for use with the {es-op} 5.6, 5.7, and 5.8. + +[id="distr-tracing_3-3_jaeger-release-notes_deprecated-functionality_{context}"] +==== Deprecated functionality + +In the {DTProductName} 3.3, Jaeger and support for Elasticsearch remain deprecated, and both are planned to be removed in a future release. +Red Hat will provide support for these components and fixes for CVEs and bugs with critical and higher severity during the current release lifecycle, but these components will no longer receive feature enhancements. +The {TempoOperator} and the {OTELName} are the preferred Operators for distributed tracing collection and storage. +Users must adopt the OpenTelemetry and Tempo distributed tracing stack because it is the stack to be enhanced going forward. + +In the {DTProductName} 3.3, the Jaeger agent is deprecated and planned to be removed in the following release. +Red Hat will provide bug fixes and support for the Jaeger agent during the current release lifecycle, but the Jaeger agent will no longer receive enhancements and will be removed. +The OpenTelemetry Collector provided by the {OTELName} is the preferred Operator for injecting the trace collector agent. + +//// +[id="distr-tracing_3-3_jaeger-release-notes_removal-notice_{context}"] +==== Removal notice + +In the {JaegerName} 3.3, the FEATURE has been removed. Bug fixes and support are provided only through the end of the 3.? lifecycle. As an alternative to the FEATURE for USE CASE, you can use the ALTERNATIVE instead. +//// + +//// +[id="distr-tracing_3-3_jaeger-release-notes_bug-fixes_{context}"] +==== Bug fixes + +This update introduces the following bug fixes: + +* Before this update, ???. With this update, ???. (link:https://issues.redhat.com/browse/TRACING-????/[TRACING-????]) +//// + +[id="distr-tracing_3-3_jaeger-release-notes_known-issues_{context}"] +==== Known issues + +There are currently known issues: + +* Currently, Apache Spark is not supported. + +ifndef::openshift-rosa[] + +* Currently, the streaming deployment via AMQ/Kafka is not supported on the {ibm-z-title} and {ibm-power-title} architectures. 
+endif::openshift-rosa[] + [id="distr-tracing_3-2-2_{context}"] == Release notes for {DTProductName} 3.2.2 diff --git a/observability/otel/otel-rn.adoc b/observability/otel/otel-rn.adoc index 8df234e56828..a824c5f34c27 100644 --- a/observability/otel/otel-rn.adoc +++ b/observability/otel/otel-rn.adoc @@ -8,8 +8,77 @@ toc::[] include::modules/otel-product-overview.adoc[leveloffset=+1] +You can use the {OTELName} xref:otel-forwarding.adoc#otel-forwarding-traces[in combination with] the xref:../distr_tracing/distr-tracing-rn.adoc#distr-tracing-rn[{TempoName}]. + include::snippets/distr-tracing-and-otel-disclaimer-about-docs-for-supported-features-only.adoc[] +[id="otel_3-3_{context}"] +== Release notes for {OTELName} 3.3 + +The {OTELName} is provided through the {OTELOperator}. + +The {OTELName} 3.3 is based on the open source link:https://opentelemetry.io/docs/collector/[OpenTelemetry] release 0.107.0. + +[id="otel_3-3_cves_{context}"] +=== CVEs + +This release fixes the following CVEs: + +* link:https://access.redhat.com/security/cve/CVE-2024-6104[CVE-2024-6104] +* link:https://access.redhat.com/security/cve/CVE-2024-42368[CVE-2024-42368] + +[id="otel_3-3_technology-preview-features_{context}"] +=== Technology Preview features + +This update introduces the following Technology Preview features: + +* Group-by-attribute processor +* Metrics transform processor +* Routing connector +* Exporting logs to the LokiStack log store + +:FeatureName: Each of these features +include::snippets/technology-preview.adoc[leveloffset=+1] + +[id="otel_3-3_new-features-and-enhancements_{context}"] +=== New features and enhancements + +This update introduces the following enhancements: + +* Collector dashboard for the internal Collector metrics and analyzing Collector health and performance. (link:https://issues.redhat.com/browse/TRACING-3768[TRACING-3768]) +* Support for automatically reloading certificates in both the OpenTelemetry Collector and instrumentation. (link:https://issues.redhat.com/browse/TRACING-4186[TRACING-4186]) + +//// +[id="otel_3-3_jaeger-release-notes_deprecated-functionality_{context}"] +=== Deprecated functionality + +In the {OTELName} 3.3, ???. (link:https://issues.redhat.com/browse/TRACING-????/[TRACING-????]) +//// + +//// +[id="otel_3-3_removal-notice_{context}"] +=== Removal notice + +In the {OTELName} 3.3, the FEATURE has been removed. Bug fixes and support are provided only through the end of the 3.? lifecycle. As an alternative to the FEATURE for USE CASE, you can use the ALTERNATIVE instead. +//// + +[id="otel_3-3_bug-fixes_{context}"] +=== Bug fixes + +This update introduces the following bug fixes: + +* Before this update, the `ServiceMonitor` object was failing to scrape operator metrics due to missing permissions for accessing the metrics endpoint. With this update, this issue is fixed by creating the `ServiceMonitor` custom resource when operator monitoring is enabled. (link:https://issues.redhat.com/browse/TRACING-4288[TRACING-4288]) +* Before this update, the Collector service and the headless service were both monitoring the same endpoints, which caused duplication of metrics collection and `ServiceMonitor` objects. With this update, this issue is fixed by not creating the headless service. (link:https://issues.redhat.com/browse/OBSDA-773[OBSDA-773]) + +//// +[id="otel_3-3_known-issues_{context}"] +=== Known issues + +There are currently known issues: + +* ???. 
(link:https://issues.redhat.com/browse/TRACING-????/[TRACING-????]) +//// + [id="otel_3-2-2_{context}"] == Release notes for {OTELName} 3.2.2 From 4775290463722a04a6058b68b31d8a06a4ac2ec4 Mon Sep 17 00:00:00 2001 From: Israel Blancas Date: Mon, 26 Aug 2024 10:41:28 +0200 Subject: [PATCH 6/7] OBSDOCS-1272/TRACING-4573: add documentation for Prometheus Remote Write Exporter --- .../otel-collector-connectors.adoc | 5 ++ .../otel-collector-exporters.adoc | 47 +++++++++++++++++++ .../otel-collector-extensions.adoc | 5 ++ .../otel-collector-processors.adoc | 4 ++ .../otel-collector-receivers.adoc | 5 ++ 5 files changed, 66 insertions(+) diff --git a/observability/otel/otel-collector/otel-collector-connectors.adoc b/observability/otel/otel-collector/otel-collector-connectors.adoc index 5c4f2d3e6578..13bc2dedce4c 100644 --- a/observability/otel/otel-collector/otel-collector-connectors.adoc +++ b/observability/otel/otel-collector/otel-collector-connectors.adoc @@ -78,3 +78,8 @@ include::snippets/technology-preview.adoc[] # ... ---- <1> Defines the flush interval of the generated metrics. Defaults to `15s`. + +[role="_additional-resources"] +[id="additional-resources_otel-collector-connectors_{context}"] +== Additional resources +* link:https://opentelemetry.io/docs/specs/otlp/[OpenTelemetry Protocol (OTLP) documentation] diff --git a/observability/otel/otel-collector/otel-collector-exporters.adoc b/observability/otel/otel-collector/otel-collector-exporters.adoc index 5702823682b6..fb7038048202 100644 --- a/observability/otel/otel-collector/otel-collector-exporters.adoc +++ b/observability/otel/otel-collector/otel-collector-exporters.adoc @@ -189,6 +189,48 @@ include::snippets/technology-preview.adoc[] <8> Defines how long metrics are exposed without updates. The default is `5m`. <9> Adds the metrics types and units suffixes. Must be disabled if the monitor tab in Jaeger console is enabled. The default is `true`. +[id="prometheus-remote-write-exporter_{context}"] +== Prometheus Remote Write Exporter + +The Prometheus Remote Write Exporter exports metrics to compatible back ends. + +:FeatureName: The Prometheus Remote Write Exporter +include::snippets/technology-preview.adoc[] + +.OpenTelemetry Collector custom resource with an enabled Prometheus Remote Write Exporter +[source,yaml] +---- +# ... + config: | + exporters: + prometheusremotewrite: + endpoint: "https://my-prometheus:7900/api/v1/push" # <1> + tls: # <2> + ca_file: ca.pem + cert_file: cert.pem + key_file: key.pem + target_info: true # <3> + export_created_metric: true # <4> + max_batch_size_bytes: 3000000 # <5> + service: + pipelines: + metrics: + exporters: [prometheusremotewrite] +# ... +---- +<1> Endpoint for sending the metrics. +<2> Server-side TLS configuration. Defines paths to TLS certificates. +<3> When set to `true`, creates a `target_info` metric for each resource metric. +<4> When set to `true`, exports a `_created` metric for the Summary, Histogram, and Monotonic Sum metric points. +<5> Maximum size of the batch of samples that is sent to the remote write endpoint. Exceeding this value results in batch splitting. The default value is `3000000`, which is approximately 2.861 megabytes. + +[WARNING] +==== +* This exporter drops non-cumulative monotonic, histogram, and summary OTLP metrics. + +* You must enable the `--web.enable-remote-write-receiver` feature flag on the remote Prometheus instance. Without it, pushing the metrics to the instance using this exporter fails. 
+==== + [id="kafka-exporter_{context}"] == Kafka Exporter @@ -230,3 +272,8 @@ include::snippets/technology-preview.adoc[] <5> The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled. <6> Disables verifying the server's certificate chain and host name. The default is `+false+`. <7> ServerName indicates the name of the server requested by the client to support virtual hosting. + +[role="_additional-resources"] +[id="additional-resources_otel-collector-exporters_{context}"] +== Additional resources +* link:https://opentelemetry.io/docs/specs/otlp/[OpenTelemetry Protocol (OTLP) documentation] diff --git a/observability/otel/otel-collector/otel-collector-extensions.adoc b/observability/otel/otel-collector/otel-collector-extensions.adoc index f047f9aa24a9..68190754390f 100644 --- a/observability/otel/otel-collector/otel-collector-extensions.adoc +++ b/observability/otel/otel-collector/otel-collector-extensions.adoc @@ -458,3 +458,8 @@ include::snippets/technology-preview.adoc[] ---- <1> Specifies the HTTP endpoint that serves zPages. Use `localhost:` to make it available only locally, or `":"` to make it available on all network interfaces. The default is `localhost:55679`. + +[role="_additional-resources"] +[id="additional-resources_otel-collector-extensions_{context}"] +== Additional resources +* link:https://opentelemetry.io/docs/specs/otlp/[OpenTelemetry Protocol (OTLP) documentation] diff --git a/observability/otel/otel-collector/otel-collector-processors.adoc b/observability/otel/otel-collector/otel-collector-processors.adoc index b3a3b5ccfc97..81b3cd79296c 100644 --- a/observability/otel/otel-collector/otel-collector-processors.adoc +++ b/observability/otel/otel-collector/otel-collector-processors.adoc @@ -558,3 +558,7 @@ config: | |Returns errors up the pipeline and drops the payload. Implicit default. |=== +[role="_additional-resources"] +[id="additional-resources_otel-collector-processors_{context}"] +== Additional resources +* link:https://opentelemetry.io/docs/specs/otlp/[OpenTelemetry Protocol (OTLP) documentation] diff --git a/observability/otel/otel-collector/otel-collector-receivers.adoc b/observability/otel/otel-collector/otel-collector-receivers.adoc index 2d212d9a30e9..716904a1b86a 100644 --- a/observability/otel/otel-collector/otel-collector-receivers.adoc +++ b/observability/otel/otel-collector/otel-collector-receivers.adoc @@ -823,3 +823,8 @@ rules: ---- <1> The service account of the Collector that has the required ClusterRole `otel-collector` RBAC. <2> The list of namespaces to collect events from. The default value is empty, which means that all namespaces are collected. 
+ +[role="_additional-resources"] +[id="additional-resources_otel-collector-receivers_{context}"] +== Additional resources +* link:https://opentelemetry.io/docs/specs/otlp/[OpenTelemetry Protocol (OTLP) documentation] From 64d63819815157018dae9fd7965c6f035d7bc487 Mon Sep 17 00:00:00 2001 From: Israel Blancas Date: Tue, 9 Jul 2024 07:23:31 +0200 Subject: [PATCH 7/7] OBSDOCS-1148: Add documentation for routing connector Signed-off-by: Israel Blancas --- .../otel-collector-connectors.adoc | 41 +++++++++++++++++++ 1 file changed, 41 insertions(+) diff --git a/observability/otel/otel-collector/otel-collector-connectors.adoc b/observability/otel/otel-collector/otel-collector-connectors.adoc index 13bc2dedce4c..eeb52eb26876 100644 --- a/observability/otel/otel-collector/otel-collector-connectors.adoc +++ b/observability/otel/otel-collector/otel-collector-connectors.adoc @@ -8,6 +8,47 @@ toc::[] A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data. +[id="routing-connector_{context}"] +== Routing Connector + +The Routing Connector routes logs, metrics, and traces to specified pipelines according to resource attributes and their routing conditions, which are written as OpenTelemetry Transformation Language (OTTL) statements. + +:FeatureName: The Routing Connector +include::snippets/technology-preview.adoc[] + +.OpenTelemetry Collector custom resource with an enabled Routing Connector +[source,yaml] +---- + config: | + connectors: + routing: + table: # <1> + - statement: route() where attributes["X-Tenant"] == "dev" # <2> + pipelines: [traces/dev] # <3> + - statement: route() where attributes["X-Tenant"] == "prod" + pipelines: [traces/prod] + default_pipelines: [traces/dev] # <4> + error_mode: ignore # <5> + match_once: false # <6> + service: + pipelines: + traces/in: + receivers: [otlp] + exporters: [routing] + traces/dev: + receivers: [routing] + exporters: [otlp/dev] + traces/prod: + receivers: [routing] + exporters: [otlp/prod] +---- +<1> Connector routing table. +<2> Routing conditions written as OTTL statements. +<3> Destination pipelines for routing the matching telemetry data. +<4> Destination pipelines for routing the telemetry data for which no routing condition is satisfied. +<5> Error-handling mode: The `propagate` value is for logging an error and dropping the payload. The `ignore` value is for ignoring the condition and attempting to match with the next one. The `silent` value is the same as `ignore` but without logging the error. The default is `propagate`. +<6> When set to `true`, the payload is routed only to the first pipeline whose routing condition is met. The default is `false`. + [id="forward-connector_{context}"] == Forward Connector