@@ -17,8 +17,6 @@ It is valuable for understanding serialization, parallelism, and sources of latency.

A _span_ represents a logical unit of work in {DTProductName} that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans may be nested and ordered to model causal relationships.
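For orientation, the pieces of information named above can be pictured together as follows. This is an illustrative sketch only, not an API object or wire format; every name and value here is made up.

[source,yaml]
----
# Illustrative only: the data a single span carries, per the description above.
operationName: "HTTP GET /api/orders"     # the logical unit of work
startTime: "2021-11-02T14:03:21.000Z"
duration: "42ms"
tags:                                     # optional key/value metadata
  http.status_code: 200
  component: "order-service"
logs:                                     # optional timestamped events
  - timestamp: "2021-11-02T14:03:21.010Z"
    event: "cache miss"
parentSpan: "frontend /checkout"          # nesting models the causal relationship
----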

// The following include statements pull in the module files that comprise the assembly.

include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]

include::modules/distr-tracing-features.adoc[leveloffset=+1]
@@ -46,12 +46,10 @@ The streaming deployment strategy is currently unsupported on IBM Z.

[NOTE]
====
There are two ways to install and use {DTProductName}, as part of a service mesh or as a stand alone component. If you have installed {DTShortName} as part of Red Hat OpenShift Service Mesh, you can perform basic configuration as part of the xref:../../service_mesh/v2x/installing-ossm.adoc#installing-ossm[ServiceMeshControlPlane] but for completely control you should configure a Jaeger CR and then xref:../../service_mesh/v2x/ossm-observability.html#ossm-config-external-jaeger_observability[reference your distributed tracing configuration file in the ServiceMeshControlPlane].
There are two ways to install and use {DTProductName}: as part of a service mesh or as a standalone component. If you have installed {DTShortName} as part of {SMProductName}, you can perform basic configuration as part of the xref:../../service_mesh/v2x/installing-ossm.adoc#installing-ossm[ServiceMeshControlPlane], but for complete control you should configure a Jaeger CR and then xref:../../service_mesh/v2x/ossm-observability.html#ossm-config-external-jaeger_observability[reference your distributed tracing configuration file in the ServiceMeshControlPlane].

====
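For orientation, referencing a separately configured Jaeger CR from the `ServiceMeshControlPlane` can look like the following. This is a minimal sketch only, assuming a 2.x control plane and a Jaeger custom resource named `jaeger-production` in the control plane namespace; follow the linked procedures for the supported steps.

[source,yaml]
----
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system        # assumed control plane namespace
spec:
  tracing:
    type: Jaeger                 # send trace data to an externally configured Jaeger instance
  addons:
    jaeger:
      name: jaeger-production    # name of the Jaeger CR that you configure separately
----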

// The following include statements pull in the module files that comprise the assembly.

include::modules/distr-tracing-deploy-default.adoc[leveloffset=+1]

include::modules/distr-tracing-deploy-production-es.adoc[leveloffset=+1]
@@ -61,7 +59,7 @@ include::modules/distr-tracing-deploy-streaming.adoc[leveloffset=+1]
[id="validating-your-jaeger-deployment"]
== Validating your deployment

include::modules/distr-tracing-accessing-jaeger-console.adoc[leveloffset=+1]
include::modules/distr-tracing-accessing-jaeger-console.adoc[leveloffset=+2]

[id="customizing-your-deployment"]
== Customizing your deployment
@@ -6,9 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

The {OTELName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {OTELName} resources. You can either install the default configuration or modify the file to better suit your business requirements.

// The following include statements pull in the module files that comprise the assembly.
The {OTELName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {OTELName} resources. You can either install the default configuration or modify the file to better suit your business requirements.
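As a rough sketch of what such a custom resource can look like, the following defines one receiver, one stand-in exporter, and a traces pipeline that connects them. The resource name and the `logging` exporter are placeholders; the module included below documents the supported configuration in detail.

[source,yaml]
----
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel                  # placeholder name
spec:
  mode: deployment            # run the Collector as a Deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
    exporters:
      logging:                # stand-in exporter; replace with your backend
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]
----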

include::modules/distr-tracing-config-otel-collector.adoc[leveloffset=+1]

@@ -8,7 +8,7 @@ toc::[]

You can install {DTProductName} on {product-title} in either of two ways:

* You can install {DTProductName} as part of Red Hat OpenShift Service Mesh. Distributed tracing is included by default in the Service Mesh installation. To install {DTProductName} as part of a service mesh, follow the xref:../../service_mesh/v2x/preparing-ossm-installation.adoc#preparing-ossm-installation[Red Hat Service Mesh Installation] instructions. You must install {DTProductName} in the same namespace as your service mesh, that is, the `ServiceMeshControlPlane` and the {DTProductName} resources must be in the same namespace.
* You can install {DTProductName} as part of {SMProductName}. Distributed tracing is included by default in the Service Mesh installation. To install {DTProductName} as part of a service mesh, follow the xref:../../service_mesh/v2x/preparing-ossm-installation.adoc#preparing-ossm-installation[Red Hat Service Mesh Installation] instructions. You must install {DTProductName} in the same namespace as your service mesh, that is, the `ServiceMeshControlPlane` and the {DTProductName} resources must be in the same namespace.

* If you do not want to install a service mesh, you can use the {DTProductName} Operators to install {DTShortName} by itself. To install {DTProductName} without a service mesh, use the following instructions.

@@ -29,8 +29,6 @@ Before you can install {DTProductName}, review the installation activities, and

* An account with the `cluster-admin` role.

// The following include statements pull in the module files that comprise the assembly.

include::modules/distr-tracing-install-overview.adoc[leveloffset=+1]

include::modules/distr-tracing-install-elasticsearch.adoc[leveloffset=+1]
2 changes: 0 additions & 2 deletions distr_tracing/distributed-tracing-release-notes.adoc
@@ -6,8 +6,6 @@ include::_attributes/common-attributes.adoc[]

toc::[]

// The following include statements pull in the module files that comprise the assembly.

include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]

include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]
21 changes: 0 additions & 21 deletions jaeger/jaeger_arch/rhbjaeger-architecture.adoc
@@ -17,29 +17,8 @@ Jaeger records the execution of individual requests across the whole stack of microservices

A _span_ represents a logical unit of work in Jaeger that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans may be nested and ordered to model causal relationships.

// The following include statements pull in the module files that comprise the assembly.

include::modules/jaeger-product-overview.adoc[leveloffset=+1]

include::modules/jaeger-features.adoc[leveloffset=+1]

include::modules/jaeger-architecture.adoc[leveloffset=+1]

////
TODO
WRITE more detailed component docs

include::modules/jaeger-client-java.adoc[leveloffset=+1]

include::modules/jaeger-agent.adoc[leveloffset=+1]

include::modules/jaeger-collector.adoc[leveloffset=+1]

include::modules/jaeger-data-store.adoc[leveloffset=+1]

include::modules/jaeger-query.adoc[leveloffset=+1]

include::modules/jaeger-ingester.adoc[leveloffset=+1]

include::modules/jaeger-console.adoc[leveloffset=+1]
////
6 changes: 3 additions & 3 deletions jaeger/jaeger_install/rhbjaeger-deploying.adoc
@@ -27,10 +27,10 @@ spec:
+
[NOTE]
====
In-memory storage is not persistent, which means that if the Jaeger instance shuts down, restarts, or is replaced, your trace data will be lost. In-memory storage cannot be scaled, because each pod has its own memory. For persistent storage, you must use the `production` or `streaming` strategies, which use Elasticsearch as the default storage.
====

* *production* - The production strategy is intended for production environments, where long-term storage of trace data is important and a more scalable, highly available architecture is required. Each of the backend components is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type, currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience.

* *streaming* - The streaming strategy is designed to augment the production strategy by providing a streaming capability that effectively sits between the Collector and the backend storage (Elasticsearch). This reduces the pressure on the backend storage under high load and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform (https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/using_amq_streams_on_openshift/index[AMQ Streams]/ https://kafka.apache.org/documentation/[Kafka]).
+
@@ -41,7 +41,7 @@ The streaming strategy requires an additional Red Hat subscription for AMQ Streams.

[NOTE]
====
There are two ways to install and use Jaeger, as part of a service mesh or as a stand alone component. If you have installed Jaeger as part of Red Hat OpenShift Service Mesh, you can configure and deploy Jaeger as part of the xref:../../service_mesh/v2x/installing-ossm.adoc#installing-ossm[ServiceMeshControlPlane] or configure Jaeger and then xref:../../service_mesh/v2x/ossm-observability.html#ossm-config-external-jaeger_observability[reference your Jaeger configuration in the ServiceMeshControlPlane].
There are two ways to install and use Jaeger: as part of a service mesh or as a standalone component. If you have installed Jaeger as part of {SMProductName}, you can configure and deploy Jaeger as part of the xref:../../service_mesh/v2x/installing-ossm.adoc#installing-ossm[ServiceMeshControlPlane] or configure Jaeger and then xref:../../service_mesh/v2x/ossm-observability.html#ossm-config-external-jaeger_observability[reference your Jaeger configuration in the ServiceMeshControlPlane].

====
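Whichever installation route you choose, the deployment strategy itself is selected in the Jaeger custom resource through its `spec.strategy` field. The following is a minimal sketch only, using placeholder names; see the deployment instructions for the supported configurations.

[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-production        # placeholder name
spec:
  strategy: production           # allInOne (default), production, or streaming
  storage:
    type: elasticsearch          # persistent storage used by the production strategy
----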

4 changes: 1 addition & 3 deletions jaeger/jaeger_install/rhbjaeger-installation.adoc
@@ -8,7 +8,7 @@ toc::[]

You can install Jaeger on {product-title} in either of two ways:

* You can install Jaeger as part of Red Hat OpenShift Service Mesh. Jaeger is included by default in the Service Mesh installation. To install Jaeger as part of a service mesh, follow the xref:../../service_mesh/v2x/preparing-ossm-installation.adoc#preparing-ossm-installation[Red Hat Service Mesh Installation] instructions. Jaeger must be installed in the same namespace as your service mesh, that is, the `ServiceMeshControlPlane` and the Jaeger resources must be in the same namespace.
* You can install Jaeger as part of {SMProductName}. Jaeger is included by default in the Service Mesh installation. To install Jaeger as part of a service mesh, follow the xref:../../service_mesh/v2x/preparing-ossm-installation.adoc#preparing-ossm-installation[Red Hat Service Mesh Installation] instructions. Jaeger must be installed in the same namespace as your service mesh, that is, the `ServiceMeshControlPlane` and the Jaeger resources must be in the same namespace.

* If you do not want to install a service mesh, you can use the {JaegerName} Operator to install {JaegerShortName} by itself. To install Jaeger without a service mesh, use the following instructions.

@@ -30,8 +30,6 @@ Before you can install {JaegerName}, review the installation activities, and ensure

* An account with the `cluster-admin` role.

// The following include statements pull in the module files that comprise the assembly.

include::modules/jaeger-install-overview.adoc[leveloffset=+1]

include::modules/jaeger-install-elasticsearch.adoc[leveloffset=+1]
2 changes: 0 additions & 2 deletions jaeger/rhbjaeger-release-notes.adoc
@@ -6,8 +6,6 @@ include::_attributes/common-attributes.adoc[]

toc::[]

// The following include statements pull in the module files that comprise the assembly.

include::modules/jaeger-product-overview.adoc[leveloffset=+1]

include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]
9 changes: 5 additions & 4 deletions modules/distr-tracing-accessing-jaeger-console.adoc
@@ -1,16 +1,17 @@
////
Module included in the following assemblies:

* distr_tracing/distr_tracing_install/distr-tracing-deploying-jaeger.adoc
* distr_tracing/distr_tracing_install/distr-tracing-deploying-otel.adoc
////

:_content-type: PROCEDURE
[id="distr-tracing-accessing-jaeger-console_{context}"]
= Accessing the Jaeger console

To access the Jaeger console, you must have either {SMProductName} or {DTProductName} installed, and {JaegerName} installed, configured, and deployed.

The installation process creates a route to access the Jaeger console.

If you know the URL for the Jaeger console, you can access it directly. If you do not know the URL, use the following directions.

.Procedure from OpenShift console
. Log in to the {product-title} web console as a user with cluster-admin rights. If you use {product-dedicated}, you must have an account with the `dedicated-admin` role.
@@ -21,7 +22,7 @@ If you know the URL for the Jaeger console, you can access it directly. If you
+
The *Location* column displays the linked address for each route.
+
. If necessary, use the filter to find the `jaeger` route. Click the route *Location* to launch the console.

. Click *Log In With OpenShift*.

6 changes: 3 additions & 3 deletions modules/distr-tracing-change-operator-20.adoc
@@ -1,9 +1,9 @@
////
This PROCEDURE module included in the following assemblies:
- /dist_tracing_install/dist-tracing-updating.adoc
This module included in the following assemblies:
- dist_tracing/dist_tracing_install/dist-tracing-updating.adoc
////

[id="distr-tracing-change-operator-20_{context}"]
[id="distr-tracing-changing-operator-channel_{context}"]
= Changing the Operator channel for 2.0

{DTProductName} 2.0.0 made the following changes:
4 changes: 2 additions & 2 deletions modules/distr-tracing-config-otel-collector.adoc
@@ -19,9 +19,9 @@ The OpenTelemetry Collector consists of three components that access telemetry data:

* *Processors* - (Optional) Processors run on data between being received and being exported. By default, no processors are enabled. Processors must be enabled for every data source, and not all processors support all data sources. Depending on the data source, it might be recommended to enable multiple processors. Note that the order of processors matters.

* *Exporters* - An exporter, which can be push or pull based, is how you send data to one or more backends/destinations. By default, no exporters are configured. One or more exporters must be configured. Exporters may support one or more data sources. Exporters may come with default settings, but many require configuration to specify at least the destination and security settings.

You can define multiple instances of components in a custom resource YAML file. Once configured, these components must be enabled through pipelines defined in the `spec.config.service` section of the YAML file. As a best practice you should only enable the components that you need.

.sample OpenTelemetry collector custom resource file
[source,yaml]
2 changes: 1 addition & 1 deletion modules/distr-tracing-deploy-default.adoc
@@ -11,7 +11,7 @@ The custom resource definition (CRD) defines the configuration used when you deploy

[NOTE]
====
In-memory storage is not persistent. If the Jaeger pod shuts down, restarts, or is replaced, your trace data will be lost. For persistent storage, you must use the `production` or `streaming` strategies, which use Elasticsearch as the default storage.
====

.Prerequisites
2 changes: 1 addition & 1 deletion modules/distr-tracing-deploy-streaming.adoc
@@ -9,7 +9,7 @@ This module included in the following assemblies:

The `streaming` deployment strategy is intended for production environments that require a more scalable and highly available architecture, and where long-term storage of trace data is important.

The `streaming` strategy provides a streaming capability that sits between the Collector and the Elasticsearch storage. This reduces the pressure on the storage under high load situations, and enables other trace post-processing capabilities to tap into the real-time span data in directly from the Kafka streaming platform.
The `streaming` strategy provides a streaming capability that sits between the Collector and the Elasticsearch storage. This reduces the pressure on the storage under high load situations, and enables other trace post-processing capabilities to tap into the real-time span data directly from the Kafka streaming platform.
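As a rough sketch, a `streaming` Jaeger custom resource wires the Collector and Ingester to a Kafka topic and keeps Elasticsearch as the backing store. The topic and broker address below are placeholders for an AMQ Streams (Kafka) cluster that you have already provisioned.

[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-streaming                                  # placeholder name
spec:
  strategy: streaming
  collector:
    options:
      kafka:
        producer:
          topic: jaeger-spans                             # assumed topic name
          brokers: my-cluster-kafka-brokers.kafka:9092    # assumed AMQ Streams bootstrap address
  ingester:
    options:
      kafka:
        consumer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092
  storage:
    type: elasticsearch
----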

[NOTE]
====
4 changes: 2 additions & 2 deletions modules/distr-tracing-deployment-best-practices.adoc
@@ -10,6 +10,6 @@ This module included in the following assemblies:

* If you have a multitenant implementation and tenants are separated by namespaces, deploy a {JaegerName} instance to each tenant namespace.

** Agent as a daemonset is not supported for multitenant installations or OpenShift Dedicated. Agent as a sidecar is the only supported configuration for these use cases.
** Agent as a daemonset is not supported for multitenant installations or {product-dedicated}. Agent as a sidecar is the only supported configuration for these use cases; see the sketch after this list.

* If you are installing {DTShortName} as part of Red Hat OpenShift Service Mesh, the {DTShortName} resources must be installed in the same namespace as the `ServiceMeshControlPlane` resource.
* If you are installing {DTShortName} as part of {SMProductName}, the {DTShortName} resources must be installed in the same namespace as the `ServiceMeshControlPlane` resource.
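When the Agent runs as a sidecar, injection is typically requested with an annotation that the Jaeger Operator acts on. The following is a hedged sketch only; the deployment name, namespace, and image are placeholders.

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                  # placeholder application
  namespace: tenant-a                           # tenant namespace with its own Jaeger instance
  annotations:
    "sidecar.jaegertracing.io/inject": "true"   # ask the Jaeger Operator to inject the Agent sidecar
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: quay.io/example/my-app:latest    # placeholder image
----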
2 changes: 1 addition & 1 deletion modules/distr-tracing-install-elasticsearch.adoc
@@ -4,7 +4,7 @@ This module included in the following assemblies:
////

:_content-type: PROCEDURE
[id="distr-tracing-install-elasticsearch_{context}"]
[id="distr-tracing-operator-install-elasticsearch_{context}"]
= Installing the OpenShift Elasticsearch Operator

The default {JaegerName} deployment uses in-memory storage because it is designed to be installed quickly for those evaluating {DTProductName}, giving demonstrations, or using {JaegerName} in a test environment. If you plan to use {JaegerName} in production, you must install and configure a persistent storage option, in this case, Elasticsearch.