@@ -17,6 +17,8 @@ Installing the {OTELShortName} involves the following steps:

include::modules/distr-tracing-otel-install-web-console.adoc[leveloffset=+1]

include::modules/distr-tracing-otel-install-cli.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_dist-tracing-otel-installing"]
== Additional resources
178 changes: 178 additions & 0 deletions modules/distr-tracing-otel-install-cli.adoc
@@ -0,0 +1,178 @@
// Module included in the following assemblies:
//
// * distr_tracing_otel/distr-tracing-otel-installing.adoc

:_content-type: PROCEDURE
[id="distr-tracing-otel-install-cli_{context}"]
= Installing the {OTELShortName} by using the CLI

You can install the {OTELShortName} from the command line.

.Prerequisites

* An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
+
[TIP]
====
* Ensure that your {oc-first} version is up to date and matches your {product-title} version.

* Run `oc login`:
+
[source,terminal]
----
$ oc login --username=<your_username>
----
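
* A quick way to confirm that your session has cluster-admin permissions (a suggested sanity check, not part of the original prerequisites) is to run:
+
[source,terminal]
----
$ oc auth can-i '*' '*' --all-namespaces
----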
====

.Procedure

. Install the {OTELOperator}:

.. Create a project for the {OTELOperator} by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  labels:
    kubernetes.io/metadata.name: openshift-opentelemetry-operator
    openshift.io/cluster-monitoring: "true"
  name: openshift-opentelemetry-operator
EOF
----

.. Create an operator group by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-opentelemetry-operator
  namespace: openshift-opentelemetry-operator
spec:
  upgradeStrategy: Default
EOF
----

.. Create a subscription by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opentelemetry-product
  namespace: openshift-opentelemetry-operator
spec:
  channel: stable
  installPlanApproval: Automatic
  name: opentelemetry-product
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
----

.. Check the operator status by running the following command:
+
[source,terminal]
----
$ oc get csv -n openshift-opentelemetry-operator
----
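+
An illustrative example output, with placeholder values (your cluster shows its own CSV name, display name, and version); the `PHASE` column reports `Succeeded` when the installation completes:
+
[source,terminal]
----
NAME                                DISPLAY          VERSION     REPLACES   PHASE
opentelemetry-operator.v<version>   <display_name>   <version>              Succeeded
----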

. Create a project of your choice for the *OpenTelemetry Collector* instance that you will create in a subsequent step:

** To create the project without metadata, run the following command:
+
[source,terminal]
----
$ oc new-project <project_of_opentelemetry_collector_instance>
----

** To create the project with metadata, apply a `Project` resource from standard input by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <project_of_opentelemetry_collector_instance>
EOF
----

. Create an *OpenTelemetry Collector* instance in the project that you created for it.
+
NOTE: You can create multiple *OpenTelemetry Collector* instances in separate projects on the same cluster.
+
.. Customize the `OpenTelemetryCollector` custom resource (CR):
+
[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
  namespace: <project_of_opentelemetry_collector_instance>
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
      jaeger:
        protocols:
          grpc:
          thrift_binary:
          thrift_compact:
          thrift_http:
      zipkin:
    processors:
      batch:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 50
        spike_limit_percentage: 30
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp,jaeger,zipkin]
          processors: [memory_limiter,batch]
          exporters: [logging]
----
+
This example receives traces in the Jaeger, OTLP, and Zipkin formats and logs them to STDOUT.
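+
In production you typically forward traces to a backend instead of logging them. The following is a minimal sketch, assuming an OTLP gRPC endpoint reachable at `<backend_host>:4317` (a placeholder, not part of the original example), that swaps the `logging` exporter for an `otlp` exporter in the `config` section:
+
[source,yaml]
----
exporters:
  otlp:
    endpoint: <backend_host>:4317
    tls:
      insecure: true # assumption: backend without TLS; omit for TLS-enabled endpoints
service:
  pipelines:
    traces:
      receivers: [otlp,jaeger,zipkin]
      processors: [memory_limiter,batch]
      exporters: [otlp]
----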

.. Apply the customized CR by running the following command:
+
[source,terminal]
----
$ oc apply -f - << EOF
<OpenTelemetryCollector_custom_resource>
EOF
----


.Verification

. Verify that the `status.phase` of the OpenTelemetry Collector pod is `Running` and the `conditions` are `type: Ready` by running the following command:
+
[source,terminal]
----
$ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml
----
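+
For a quicker check that prints only the pod phase, you can use a JSONPath query with the same label selector (a suggested shortcut, not part of the original procedure):
+
[source,terminal]
----
$ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o jsonpath='{.items[0].status.phase}'
----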

. Get the OpenTelemetry Collector service by running the following command:
+
[source,terminal]
----
$ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>
----
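+
An illustrative example output for the `otel` instance from the preceding CR; service names depend on the instance name, and the ports shown are placeholders based on the default ports of the enabled receivers, so your output can differ:
+
[source,terminal]
----
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                             AGE
otel-collector   ClusterIP   <cluster_ip>   <none>        4317/TCP,4318/TCP,14250/TCP,6831/UDP,6832/UDP,14268/TCP,9411/TCP   <age>
----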
28 changes: 1 addition & 27 deletions modules/distr-tracing-otel-install-web-console.adoc
@@ -15,20 +15,6 @@ You can install the {OTELShortName} from the *Administrator* view of the web con

* For {product-dedicated}, you must be logged in using an account with the `dedicated-admin` role.

* An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
+
[TIP]
====
* Ensure that your {oc-first} version is up to date and matches your {product-title} version.

* Run `oc login`:
+
[source,terminal]
----
$ oc login --username=<your_username>
----
====

.Procedure

. Install the {OTELOperator}:
@@ -101,16 +87,4 @@ spec:

.Verification

. Verify that the `status.phase` of the OpenTelemetry Collector pod is `Running` and the `conditions` are `type: Ready` by running the following command:
+
[source,terminal]
----
$ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml
----

. Get the OpenTelemetry Collector service by running the following command:
+
[source,terminal]
----
$ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>
----
Click the name of the *OpenTelemetry Collector* instance that you created, select the *Resources* tab, and verify that the status of all created resources is *Created*.