diff --git a/distr_tracing/distr_tracing_otel/distr-tracing-otel-installing.adoc b/distr_tracing/distr_tracing_otel/distr-tracing-otel-installing.adoc
index 2ad5faa7e3bf..3644afe19990 100644
--- a/distr_tracing/distr_tracing_otel/distr-tracing-otel-installing.adoc
+++ b/distr_tracing/distr_tracing_otel/distr-tracing-otel-installing.adoc
@@ -17,6 +17,8 @@ Installing the {OTELShortName} involves the following steps:
 
 include::modules/distr-tracing-otel-install-web-console.adoc[leveloffset=+1]
 
+include::modules/distr-tracing-otel-install-cli.adoc[leveloffset=+1]
+
 [role="_additional-resources"]
 [id="additional-resources_dist-tracing-otel-installing"]
 == Additional resources
diff --git a/modules/distr-tracing-otel-install-cli.adoc b/modules/distr-tracing-otel-install-cli.adoc
new file mode 100644
index 000000000000..fa3b3e565c75
--- /dev/null
+++ b/modules/distr-tracing-otel-install-cli.adoc
@@ -0,0 +1,178 @@
+// Module included in the following assemblies:
+//
+//* distr_tracing_otel/distr-tracing-otel-installing.adoc
+
+:_content-type: PROCEDURE
+[id="distr-tracing-otel-install-cli_{context}"]
+= Installing the {OTELShortName} by using the CLI
+
+You can install the {OTELShortName} from the command line.
+
+.Prerequisites
+
+* An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
++
+[TIP]
+====
+* Ensure that your {oc-first} version is up to date and matches your {product-title} version.
+
+* Run `oc login`:
++
+[source,terminal]
+----
+$ oc login --username=<your_username>
+----
+====
+
+.Procedure
+
+. Install the {OTELOperator}:
+
+.. Create a project for the {OTELOperator} by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f - << EOF
+apiVersion: project.openshift.io/v1
+kind: Project
+metadata:
+  labels:
+    kubernetes.io/metadata.name: openshift-opentelemetry-operator
+    openshift.io/cluster-monitoring: "true"
+  name: openshift-opentelemetry-operator
+EOF
+----
+
+.. Create an Operator group by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f - << EOF
+apiVersion: operators.coreos.com/v1
+kind: OperatorGroup
+metadata:
+  name: openshift-opentelemetry-operator
+  namespace: openshift-opentelemetry-operator
+spec:
+  upgradeStrategy: Default
+EOF
+----
+
+.. Create a subscription by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f - << EOF
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: opentelemetry-product
+  namespace: openshift-opentelemetry-operator
+spec:
+  channel: stable
+  installPlanApproval: Automatic
+  name: opentelemetry-product
+  source: redhat-operators
+  sourceNamespace: openshift-marketplace
+EOF
+----
+
+.. Check the Operator status by running the following command:
++
+[source,terminal]
+----
+$ oc get csv -n openshift-opentelemetry-operator
+----
+
+. Create a project of your choice for the *OpenTelemetry Collector* instance that you will create in a subsequent step:
+
+** To create a project without metadata, run the following command:
++
+[source,terminal]
+----
+$ oc new-project <project_of_your_choice>
+----
+
+** To create a project with metadata, pass the project definition to standard input by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f - << EOF
+apiVersion: project.openshift.io/v1
+kind: Project
+metadata:
+  name: <project_of_your_choice>
+EOF
+----
+
+. Create an *OpenTelemetry Collector* instance in the project that you created for it.
++
+NOTE: You can create multiple *OpenTelemetry Collector* instances in separate projects on the same cluster.
++
+.. Customize the `OpenTelemetryCollector` custom resource (CR):
++
+[source,yaml]
+----
+apiVersion: opentelemetry.io/v1alpha1
+kind: OpenTelemetryCollector
+metadata:
+  name: otel
+  namespace: <project_of_your_choice>
+spec:
+  mode: deployment
+  config: |
+    receivers:
+      otlp:
+        protocols:
+          grpc:
+          http:
+      jaeger:
+        protocols:
+          grpc:
+          thrift_binary:
+          thrift_compact:
+          thrift_http:
+      zipkin:
+    processors:
+      batch:
+      memory_limiter:
+        check_interval: 1s
+        limit_percentage: 50
+        spike_limit_percentage: 30
+    exporters:
+      logging:
+    service:
+      pipelines:
+        traces:
+          receivers: [otlp,jaeger,zipkin]
+          processors: [memory_limiter,batch]
+          exporters: [logging]
+----
+
+This example receives traces in the OTLP, Jaeger, and Zipkin formats and logs them to the standard output.
+
+.. Apply the customized CR by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f - << EOF
+
+EOF
+----
+
+.Verification
+
+. Verify that the `status.phase` of the OpenTelemetry Collector pod is `Running` and that the `conditions` include `type: Ready` by running the following command:
++
+[source,terminal]
+----
+$ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name> -o yaml
+----
+
+. Get the OpenTelemetry Collector service by running the following command:
++
+[source,terminal]
+----
+$ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=<namespace>.<instance_name>
+----
diff --git a/modules/distr-tracing-otel-install-web-console.adoc b/modules/distr-tracing-otel-install-web-console.adoc
index ad7155312313..4caddb64ca1a 100644
--- a/modules/distr-tracing-otel-install-web-console.adoc
+++ b/modules/distr-tracing-otel-install-web-console.adoc
@@ -15,20 +15,6 @@ You can install the {OTELShortName} from the *Administrator* view of the web console.
 
 * For {product-dedicated}, you must be logged in using an account with the `dedicated-admin` role.
 
-* An active {oc-first} session by a cluster administrator with the `cluster-admin` role.
-+
-[TIP]
-====
-* Ensure that your {oc-first} version is up to date and matches your {product-title} version.
-
-* Run `oc login`:
-+
-[source,terminal]
-----
-$ oc login --username=<your_username>
-----
-====
-
 .Procedure
 
 . Install the {OTELOperator}:
@@ -101,16 +87,4 @@ spec:
 
 .Verification
 
-. Verify that the `status.phase` of the OpenTelemetry Collector pod is `Running` and the `conditions` are `type: Ready` by running the following command:
-+
-[source,terminal]
-----
-$ oc get pod -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=. -o yaml
-----
-
-. Get the OpenTelemetry Collector service by running the following command:
-+
-[source,terminal]
-----
-$ oc get service -l app.kubernetes.io/managed-by=opentelemetry-operator,app.kubernetes.io/instance=.
-----
+Click the name of the *OpenTelemetry Collector* instance that you created, select the *Resources* tab, and verify that the status of all the created resources is *Created*.
\ No newline at end of file
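
A note on the collector configuration added in this patch: the example CR exports traces only with the `logging` exporter, which writes them to the collector's standard output. As a sketch outside this patch, the `exporters` and `service` sections of the `config` block could instead forward traces to a tracing backend over OTLP; the `endpoint` value below is a hypothetical placeholder (for example, a Jaeger or Tempo gRPC endpoint in the cluster), not something defined by this change:

```yaml
# Sketch of an alternative exporter section for the OpenTelemetryCollector CR.
# The endpoint is an assumed placeholder; replace it with a real backend service.
exporters:
  otlp:
    endpoint: jaeger-collector-headless.tracing.svc:4317  # hypothetical backend
    tls:
      insecure: true  # assumes a plaintext in-cluster endpoint
service:
  pipelines:
    traces:
      receivers: [otlp,jaeger,zipkin]
      processors: [memory_limiter,batch]
      exporters: [otlp]  # replaces the logging exporter from the example CR
```

Because `spec.config` is a plain string in the CR, swapping exporters only requires editing this block and re-running `oc apply`; the Operator reconciles the collector deployment with the new configuration.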