4 changes: 3 additions & 1 deletion _topic_maps/_topic_map.yml
@@ -2747,8 +2747,10 @@ Topics:
Topics:
- Name: Installing distributed tracing
File: distr-tracing-installing
- Name: Configuring distributed tracing
- Name: Configuring the distributed tracing platform
File: distr-tracing-deploying
- Name: Configuring distributed tracing data collection
File: distr-tracing-deploying-otel
- Name: Upgrading distributed tracing
File: distr-tracing-updating
- Name: Removing distributed tracing
@@ -0,0 +1,19 @@
[id="distr-tracing-deploying-otel"]
= Configuring and deploying distributed tracing data collection
include::modules/distr-tracing-document-attributes.adoc[]
:context: deploying-data-collection

toc::[]

The {OTELName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to use when creating and deploying the {OTELShortName} resources. You can either install the default configuration or modify the file to better suit your business requirements.
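
For orientation, a minimal `OpenTelemetryCollector` custom resource wires a receiver to an exporter through a pipeline. The following sketch uses example values only; the deployment module included later in this assembly shows a complete resource:

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-example # example name only
spec:
  config: |
    receivers:       # protocols the collector listens on
      jaeger:
        protocols:
          grpc:
    exporters:       # where the collected traces are sent
      logging:
    service:
      pipelines:
        traces:      # a pipeline connects receivers to exporters
          receivers: [jaeger]
          exporters: [logging]
----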

[IMPORTANT]
====
The {OTELName} Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
====

// The following include statements pull in the module files that comprise the assembly.

include::modules/distr-tracing-deploy-otel-collector.adoc[leveloffset=+1]
@@ -36,6 +36,8 @@ include::modules/distr-tracing-install-elasticsearch.adoc[leveloffset=+1]

include::modules/distr-tracing-install-jaeger-operator.adoc[leveloffset=+1]

include::modules/distr-tracing-install-otel-operator.adoc[leveloffset=+1]

////
== Next steps
* xref:../../distr_tracing/distr_tracing_install/distr-tracing-deploying.adoc#deploying-distributed-tracing[Deploy {DTProductName}].
2 changes: 1 addition & 1 deletion modules/distr-tracing-deploy-default.adoc
@@ -71,7 +71,7 @@ Follow this procedure to create an instance of {JaegerShortName} from the command line.
+
[source,terminal]
----
$ oc login https://{HOSTNAME}:8443
$ oc login https://<HOSTNAME>:8443
----

. Create a new project named `tracing-system`.
127 changes: 127 additions & 0 deletions modules/distr-tracing-deploy-otel-collector.adoc
@@ -0,0 +1,127 @@
////
This module included in the following assemblies:
- distr_tracing_install/distr-tracing-deploying.adoc
////

[id="distr-tracing-deploy-otel-collector_{context}"]
= Deploying distributed tracing data collection

The custom resource definition (CRD) defines the configuration used when you deploy an instance of {OTELName}.

.Prerequisites

* The {OTELName} Operator has been installed.
//* You have reviewed the instructions for how to customize the deployment.
* You have access to the cluster as a user with the `cluster-admin` role.

.Procedure

. Log in to the OpenShift web console as a user with the `cluster-admin` role.

. Create a new project, for example `tracing-system`.
+
[NOTE]
====
If you are installing distributed tracing as part of Service Mesh, the {DTShortName} resources must be installed in the same namespace as the `ServiceMeshControlPlane` resource, for example `istio-system`.
====
+
.. Navigate to *Home* -> *Projects*.

.. Click *Create Project*.

.. Enter `tracing-system` in the *Name* field.

.. Click *Create*.

. Navigate to *Operators* -> *Installed Operators*.

. If necessary, select `tracing-system` from the *Project* menu. You might have to wait a few moments for the Operators to be copied to the new project.

. Click the *{OTELName} Operator*. On the *Details* tab, under *Provided APIs*, the Operator provides a single link.

. Under *OpenTelemetryCollector*, click *Create Instance*.

. On the *Create OpenTelemetry Collector* page, to install using the defaults, click *Create* to create the {OTELShortName} instance.

. On the *OpenTelemetryCollectors* page, click the name of the {OTELShortName} instance, for example, `opentelemetrycollector-sample`.

. On the *Details* page, click the *Resources* tab. Wait until the pod has a status of "Running" before continuing.
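
If you prefer to verify from the command line, a quick status check can confirm the same thing; the project name here assumes the `tracing-system` example used above:
+
[source,terminal]
----
$ oc get pods -n tracing-system
----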

[id="distr-tracing-deploy-otel-collector-cli_{context}"]
= Deploying {OTELShortName} from the CLI

Follow this procedure to create an instance of {OTELShortName} from the command line.

.Prerequisites

* The {OTELName} Operator has been installed and verified.
+
//* You have reviewed the instructions for how to customize the deployment.
+
* You have access to the OpenShift CLI (`oc`) that matches your {product-title} version.
* You have access to the cluster as a user with the `cluster-admin` role.

.Procedure

. Log in to the {product-title} CLI as a user with the `cluster-admin` role.
+
[source,terminal]
----
$ oc login https://<HOSTNAME>:8443
----

. Create a new project named `tracing-system`.
+
[source,terminal]
----
$ oc new-project tracing-system
----

. Create a custom resource file named `opentelemetrycollector.yaml` that contains the following text:
+
.Example opentelemetrycollector.yaml
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: opentelemetrycollector-sample
  namespace: tracing-system
spec:
  image: >-
    registry.redhat.io/rhosdt/opentelemetry-collector-rhel8@sha256:61934ea5793c55900d09893e8f8b1f2dbd2e712faba8e97684e744691b29f25e
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [jaeger]
          exporters: [logging]
----

. Run the following command to deploy {OTELShortName}:
+
[source,terminal]
----
$ oc create -n tracing-system -f opentelemetrycollector.yaml
----

. Run the following command to watch the progress of the pods during the installation process:
+
[source,terminal]
----
$ oc get pods -n tracing-system -w
----
+
After the installation process has completed, you should see output similar to the following example:
+
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
opentelemetrycollector-cdff7897b-qhfdx 2/2 Running 0 24s
----
2 changes: 1 addition & 1 deletion modules/distr-tracing-deploy-production-es.adoc
@@ -97,7 +97,7 @@ Follow this procedure to create an instance of {JaegerShortName} from the command line.
+
[source,terminal]
----
$ oc login https://{HOSTNAME}:8443
$ oc login https://<HOSTNAME>:8443
----

. Create a new project named `tracing-system`.
2 changes: 1 addition & 1 deletion modules/distr-tracing-deploy-streaming.adoc
@@ -112,7 +112,7 @@ Procedure
+
[source,terminal]
----
$ oc login https://{HOSTNAME}:8443
$ oc login https://<HOSTNAME>:8443
----

. Create a new project named `tracing-system`.
4 changes: 3 additions & 1 deletion modules/distr-tracing-install-jaeger-operator.adoc
@@ -47,4 +47,6 @@ The *Manual* approval strategy requires a user with appropriate credentials to approve the Operator update.

. Click *Install*.

. On the *Subscription Overview* page, select the `openshift-operators` project. Wait until you see that the {JaegerName} Operator shows a status of "InstallSucceeded" before continuing.
. Navigate to *Operators* -> *Installed Operators*.

. On the *Installed Operators* page, select the `openshift-operators` project. Wait until you see that the {JaegerName} Operator shows a status of "Succeeded" before continuing.
20 changes: 13 additions & 7 deletions modules/distr-tracing-install-otel-operator.adoc
@@ -6,7 +6,12 @@ This module included in the following assemblies:
[id="distr-tracing-otel-operator-install_{context}"]
= Installing the {OTELName} Operator

#TECH PREVIEW BOILERPLATE HERE#
[IMPORTANT]
====
The {OTELName} Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
====

To install {OTELName}, you use the link:https://operatorhub.io/[OperatorHub] to install the {OTELName} Operator.

@@ -15,7 +20,6 @@ By default, the Operator is installed in the `openshift-operators` project.
.Prerequisites
* You have access to the {product-title} web console.
* You have access to the cluster as a user with the `cluster-admin` role. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.
//* If you require persistent storage, you must also install the OpenShift Elasticsearch Operator before installing the {OTELName} Operator.

[WARNING]
====
@@ -28,17 +32,17 @@ Do not install Community versions of the Operators. Community Operators are not supported by Red Hat.

. Navigate to *Operators* -> *OperatorHub*.

. Type *distributing tracing datacollection* into the filter to locate the {OTELName} Operator.
. Type *distributed tracing data collection* into the filter to locate the {OTELName} Operator.

. Click the *{OTELName} Operator* provided by Red Hat to display information about the Operator.

. Click *Install*.

. On the *Install Operator* page, select the *stable* Update Channel. This automatically updates your Operator as new versions are released.
. On the *Install Operator* page, accept the default *stable* Update channel. This automatically updates your Operator as new versions are released.

. Select *All namespaces on the cluster (default)*. This installs the Operator in the default `openshift-operators` project and makes the Operator available to all projects in the cluster.
. Accept the default *All namespaces on the cluster (default)*. This installs the Operator in the default `openshift-operators` project and makes the Operator available to all projects in the cluster.

* Select an approval srategy. You can select *Automatic* or *Manual* updates. If you choose *Automatic* updates for an installed Operator, when a new version of that Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select *Manual* updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
. Accept the default *Automatic* approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select *Manual* updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.
+
[NOTE]
====
@@ -48,4 +52,6 @@ The *Manual* approval strategy requires a user with appropriate credentials to approve the Operator update.

. Click *Install*.

. On the *Subscription Overview* page, select the `openshift-operators` project. Wait until you see that the {OTELName} Operator shows a status of "InstallSucceeded" before continuing.
. Navigate to *Operators* -> *Installed Operators*.

. On the *Installed Operators* page, select the `openshift-operators` project. Wait until you see that the {OTELName} Operator shows a status of "Succeeded" before continuing.
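
As an alternative to the web console, you can check the install status from the CLI; the ClusterServiceVersion name varies by release, so this command lists all of them in the project:
+
[source,terminal]
----
$ oc get csv -n openshift-operators
----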