16 changes: 8 additions & 8 deletions _topic_maps/_topic_map.yml
@@ -2599,27 +2599,27 @@ Topics:

---
Name: Distributed tracing
-Dir: jaeger
+Dir: distr_tracing
Distros: openshift-enterprise
Topics:
- Name: Distributed tracing release notes
  File: distributed-tracing-release-notes
- Name: Distributed tracing architecture
-  Dir: jaeger_arch
+  Dir: distr_tracing_arch
  Topics:
  - Name: Distributed tracing architecture
-    File: rhbjaeger-architecture
+    File: distr-tracing-architecture
- Name: Distributed tracing installation
-  Dir: jaeger_install
+  Dir: distr_tracing_install
  Topics:
  - Name: Installing distributed tracing
-    File: rhbjaeger-installation
+    File: distr-tracing-installing
  - Name: Configuring distributed tracing
-    File: rhbjaeger-deploying
+    File: distr-tracing-deploying
  - Name: Upgrading distributed tracing
-    File: rhbjaeger-updating
+    File: distr-tracing-updating
  - Name: Removing distributed tracing
-    File: rhbjaeger-removing
+    File: distr-tracing-removing
---
Name: OpenShift Virtualization
Dir: virt
46 changes: 46 additions & 0 deletions distr_tracing/distr_tracing_arch/distr-tracing-architecture.adoc
@@ -0,0 +1,46 @@
[id="distr-tracing-architecture"]
= Distributed tracing architecture
include::modules/distr-tracing-document-attributes.adoc[]
:context: distributed-tracing-architecture

toc::[]

Every time a user takes an action in an application, the architecture executes a request that may require dozens of different services to participate in producing a response.
{DTProductName} lets you perform distributed tracing, which records the path of a request through various microservices that make up an application.

_Distributed tracing_ is a technique used to tie together information about different units of work, usually executed in different processes or hosts, to understand a whole chain of events in a distributed transaction.
Developers can visualize call flows in large microservice architectures with distributed tracing.
It is valuable for understanding serialization, parallelism, and sources of latency.

{DTProductName} records the execution of individual requests across the whole stack of microservices, and presents them as traces. A _trace_ is a data/execution path through the system. An end-to-end trace comprises one or more spans.

A _span_ represents a logical unit of work in {DTProductName} that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans may be nested and ordered to model causal relationships.
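
The following sketch illustrates the kind of information a single span carries. This is a simplified representation for illustration only, not the exact Jaeger data model; all field names and values shown here are hypothetical.

[source,yaml]
----
# Simplified sketch of one span within a trace (hypothetical values)
traceID: 5c9f2d4b8a7e3f10          # shared by every span in the same trace
spanID: 9b41f7c2
operationName: GET /api/orders     # the logical unit of work
references:                        # causal link to a parent span, if any
- refType: CHILD_OF
  spanID: 4a17d09e
startTime: 2021-03-02T14:07:31.004Z
duration: 38ms
tags:                              # key/value metadata about the operation
  http.status_code: 200
  component: order-service
logs:                              # timestamped events recorded within the span
- timestamp: 2021-03-02T14:07:31.020Z
  event: cache miss
----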

// The following include statements pull in the module files that comprise the assembly.

include::modules/distr-tracing-product-overview.adoc[leveloffset=+1]

include::modules/distr-tracing-features.adoc[leveloffset=+1]

include::modules/distr-tracing-architecture.adoc[leveloffset=+1]

////
TODO
WRITE more detailed component docs

include::modules/distr-tracing-client-java.adoc[leveloffset=+1]

include::modules/distr-tracing-agent.adoc[leveloffset=+1]

include::modules/distr-tracing--jaeger-collector.adoc[leveloffset=+1]

include::modules/distr-tracing-otel-collector.adoc[leveloffset=+1]

include::modules/distr-tracing-data-store.adoc[leveloffset=+1]

include::modules/distr-tracing-query.adoc[leveloffset=+1]

include::modules/distr-tracing-ingester.adoc[leveloffset=+1]

include::modules/distr-tracing-console.adoc[leveloffset=+1]
////
1 change: 1 addition & 0 deletions distr_tracing/distr_tracing_arch/images
1 change: 1 addition & 0 deletions distr_tracing/distr_tracing_arch/modules
1 change: 1 addition & 0 deletions distr_tracing/distr_tracing_config/images
1 change: 1 addition & 0 deletions distr_tracing/distr_tracing_config/modules
@@ -0,0 +1,11 @@
include::modules/serverless-document-attributes.adoc[]
[id="serverless-jaeger-integration"]
= Integrating distributed tracing with serverless applications using OpenShift Serverless
:context: serverless-jaeger-integration
include::modules/common-attributes.adoc[]

toc::[]

You can enable distributed tracing with xref:../../serverless/serverless-getting-started.adoc#serverless-getting-started[{ServerlessProductName}] for your serverless applications on {product-title}.

include::modules/serverless-jaeger-config.adoc[leveloffset=+1]
86 changes: 86 additions & 0 deletions distr_tracing/distr_tracing_install/distr-tracing-deploying.adoc
@@ -0,0 +1,86 @@
[id="distr-tracing-deploying"]
= Configuring and deploying distributed tracing
include::modules/distr-tracing-document-attributes.adoc[]
:context: deploying-distributed-tracing

toc::[]

The {JaegerName} Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the {JaegerShortName} resources. You can either install the default configuration or modify the file to better suit your business requirements.

{JaegerName} has predefined deployment strategies. You specify a deployment strategy in the custom resource file. When you create a {JaegerShortName} instance, the Operator uses this configuration file to create the objects necessary for the deployment.

.Jaeger custom resource file showing deployment strategy
[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: MyConfigFile
spec:
  strategy: production <1>
----

<1> The {JaegerName} Operator currently supports the following deployment strategies:

* *allInOne* (Default) - This strategy is intended for development, testing, and demo purposes; it is not intended for production use. The main backend components, Agent, Collector, and Query service, are all packaged into a single executable that is configured, by default, to use in-memory storage.
+
[NOTE]
====
In-memory storage is not persistent, which means that if the {JaegerShortName} instance shuts down, restarts, or is replaced, your trace data is lost. In-memory storage also cannot be scaled, because each pod has its own memory. For persistent storage, you must use the `production` or `streaming` strategies, which use Elasticsearch as the default storage.
====

* *production* - The production strategy is intended for production environments, where long-term storage of trace data is important and a more scalable, highly available architecture is required. Each of the backend components is therefore deployed separately. The Agent can be injected as a sidecar on the instrumented application. The Query and Collector services are configured with a supported storage type, currently Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience.

* *streaming* - The streaming strategy is designed to augment the production strategy by providing a streaming capability that effectively sits between the Collector and the Elasticsearch backend storage. This reduces the pressure on the backend storage under high load and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform (https://access.redhat.com/documentation/en-us/red_hat_amq/7.6/html/using_amq_streams_on_openshift/index[AMQ Streams] or https://kafka.apache.org/documentation/[Kafka]). An example custom resource for this strategy appears after the following notes.
+
[NOTE]
====
The streaming strategy requires an additional Red Hat subscription for AMQ Streams.
====

[NOTE]
====
The streaming deployment strategy is currently unsupported on IBM Z.
====
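
For example, a {JaegerShortName} custom resource that selects the `streaming` strategy might look like the following sketch. This is illustrative only: the resource name, Kafka topic, broker addresses, and Elasticsearch URL are placeholders, not required values.

[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-streaming                # placeholder name
spec:
  strategy: streaming
  collector:
    options:
      kafka:
        producer:
          topic: jaeger-spans           # hypothetical topic
          brokers: my-cluster-kafka-brokers.kafka:9092  # hypothetical AMQ Streams brokers
  ingester:
    options:
      kafka:
        consumer:
          topic: jaeger-spans
          brokers: my-cluster-kafka-brokers.kafka:9092
  storage:
    type: elasticsearch                 # default persistent storage
    options:
      es:
        server-urls: http://elasticsearch:9200   # hypothetical storage URL
----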

[NOTE]
====
There are two ways to install and use {DTProductName}: as part of a service mesh or as a standalone component. If you have installed {DTShortName} as part of Red Hat OpenShift Service Mesh, you can perform basic configuration as part of the xref:../../service_mesh/v2x/installing-ossm.adoc#installing-ossm[ServiceMeshControlPlane], but for complete control you should configure a Jaeger CR and then xref:../../service_mesh/v2x/ossm-observability.adoc#ossm-config-external-jaeger_observability[reference your distributed tracing configuration file in the ServiceMeshControlPlane].
====

// The following include statements pull in the module files that comprise the assembly.

include::modules/distr-tracing-deploy-default.adoc[leveloffset=+1]

include::modules/distr-tracing-deploy-production-es.adoc[leveloffset=+1]

include::modules/distr-tracing-deploy-streaming.adoc[leveloffset=+1]

[id="customizing-your-deployment"]
== Customizing your deployment

include::modules/distr-tracing-deployment-best-practices.adoc[leveloffset=+2]

include::modules/distr-tracing-config-default.adoc[leveloffset=+2]

include::modules/distr-tracing-config-jaeger-collector.adoc[leveloffset=+2]

//include::modules/distr-tracing-config-otel-collector.adoc[leveloffset=+2]

include::modules/distr-tracing-config-sampling.adoc[leveloffset=+2]

include::modules/distr-tracing-config-storage.adoc[leveloffset=+2]

include::modules/distr-tracing-config-query.adoc[leveloffset=+2]

include::modules/distr-tracing-config-ingester.adoc[leveloffset=+2]

[id="injecting-sidecars"]
== Injecting sidecars

{JaegerName} relies on a proxy sidecar within the application's pod to provide the agent. The {JaegerName} Operator can inject Agent sidecars into Deployment workloads. You can enable automatic sidecar injection or manage it manually.
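
For example, with automatic injection, you ask the Operator to add the Agent sidecar by annotating the Deployment. The following is a minimal sketch; the workload name and image are placeholders, and the annotation value can be `"true"` or the name of a specific {JaegerShortName} instance.

[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                   # placeholder workload name
  annotations:
    "sidecar.jaegertracing.io/inject": "true"    # or the name of a Jaeger instance
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: quay.io/example/my-app:latest     # placeholder image
----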

include::modules/distr-tracing-sidecar-automatic.adoc[leveloffset=+2]

include::modules/distr-tracing-sidecar-manual.adoc[leveloffset=+2]
42 changes: 42 additions & 0 deletions distr_tracing/distr_tracing_install/distr-tracing-installing.adoc
@@ -0,0 +1,42 @@
[id="installing-distributed-tracing"]
= Installing distributed tracing
include::modules/distr-tracing-document-attributes.adoc[]
:context: install-distributed-tracing

toc::[]

You can install {DTProductName} on {product-title} in either of two ways:

* You can install {DTProductName} as part of Red Hat OpenShift Service Mesh. Distributed tracing is included by default in the Service Mesh installation. To install {DTProductName} as part of a service mesh, follow the xref:../../service_mesh/v2x/preparing-ossm-installation.adoc#preparing-ossm-installation[Red Hat Service Mesh Installation] instructions. You must install {DTProductName} in the same namespace as your service mesh, that is, the `ServiceMeshControlPlane` and the {DTProductName} resources must be in the same namespace.

* If you do not want to install a service mesh, you can use the {DTProductName} Operators to install {DTShortName} by itself. To install {DTProductName} without a service mesh, use the following instructions.

== Prerequisites

Before you can install {DTProductName}, review the installation activities, and ensure that you meet the prerequisites:

* Possess an active {product-title} subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information.

* Review the xref:../../architecture/architecture-installation.adoc#installation-overview_architecture-installation[{product-title} {product-version} overview].
* Install {product-title} {product-version}.

** xref:../../installing/installing_aws/installing-aws-account.adoc#installing-aws-account[Install {product-title} {product-version} on AWS]
** xref:../../installing/installing_aws/installing-aws-user-infra.adoc#installing-aws-user-infra[Install {product-title} {product-version} on user-provisioned AWS]
** xref:../../installing/installing_bare_metal/installing-bare-metal.adoc#installing-bare-metal[Install {product-title} {product-version} on bare metal]
** xref:../../installing/installing_vsphere/installing-vsphere.adoc#installing-vsphere[Install {product-title} {product-version} on vSphere]
* Install the version of the OpenShift CLI (`oc`) that matches your {product-title} version and add it to your path.

* Have an account with the `cluster-admin` role.

// The following include statements pull in the module files that comprise the assembly.

include::modules/distr-tracing-install-overview.adoc[leveloffset=+1]

include::modules/distr-tracing-install-elasticsearch.adoc[leveloffset=+1]

include::modules/distr-tracing-install-jaeger-operator.adoc[leveloffset=+1]

////
== Next steps
* xref:../../distr_tracing/distr_tracing_install/distr-tracing-deploying.adoc#deploying-distributed-tracing[Deploy {DTProductName}].
////
30 changes: 30 additions & 0 deletions distr_tracing/distr_tracing_install/distr-tracing-removing.adoc
@@ -0,0 +1,30 @@
[id="removing-distributed-tracing"]
= Removing distributed tracing
include::modules/distr-tracing-document-attributes.adoc[]
:context: removing-distributed-tracing

toc::[]

The steps for removing {DTProductName} from an {product-title} cluster are as follows:

. Shut down any {DTProductName} pods.
. Remove any {DTProductName} instances, as shown in the CLI sketch after these steps.
. Remove the {JaegerName} Operator.
. Remove the {OTELName} Operator.
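
A minimal sketch of removing an instance from the CLI follows; the instance name and namespace are placeholders, and the sections that follow describe the full procedures.

[source,terminal]
----
# List the Jaeger instances in a hypothetical namespace.
$ oc get jaegers -n tracing-system

# Delete a hypothetical instance; the Operator removes the objects it created for it.
$ oc delete jaeger my-jaeger -n tracing-system
----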

include::modules/distr-tracing-removing-instance.adoc[leveloffset=+1]

include::modules/distr-tracing-removing-instance-cli.adoc[leveloffset=+1]


== Removing the {DTProductName} Operators

.Procedure

. Follow the instructions for xref:../../operators/admin/olm-deleting-operators-from-cluster.adoc#olm-deleting-operators-from-a-cluster[Deleting Operators from a cluster].

* Remove the {JaegerName} Operator.

//* Remove the {OTELName} Operator.

* After the {JaegerName} Operator has been removed, if appropriate, remove the OpenShift Elasticsearch Operator.
14 changes: 14 additions & 0 deletions distr_tracing/distr_tracing_install/distr-tracing-updating.adoc
@@ -0,0 +1,14 @@
[id="upgrading-distributed-tracing"]
= Upgrading distributed tracing
include::modules/distr-tracing-document-attributes.adoc[]
:context: upgrading-distributed-tracing

toc::[]

Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. OLM runs by default in {product-title}.
OLM queries for available Operators as well as upgrades for installed Operators.
For more information about how {product-title} handles upgrades, refer to the xref:../../operators/understanding/olm/olm-understanding-olm.adoc#olm-understanding-olm[Operator Lifecycle Manager] documentation.

During an update, the {DTProductName} Operators upgrade the managed {DTShortName} instances to the version associated with the Operator. Whenever a new version of the {JaegerName} Operator is installed, all the {JaegerShortName} application instances managed by the Operator are upgraded to the Operator's version. For example, after upgrading the Operator from 1.10 to 1.11, the Operator scans for running {JaegerShortName} instances and upgrades them to 1.11 as well.
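
For example, you can verify which Operator versions are currently installed by listing the cluster service versions (CSVs). This is a minimal sketch and assumes the Operators were installed in the `openshift-operators` namespace.

[source,terminal]
----
# List installed Operators and their versions in the assumed namespace.
$ oc get clusterserviceversions -n openshift-operators
----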

For specific instructions on how to update the OpenShift Elasticsearch Operator, refer to xref:../../logging/cluster-logging-upgrading.adoc#cluster-logging-upgrading_cluster-logging-upgrading[Updating OpenShift Logging].
1 change: 1 addition & 0 deletions distr_tracing/distr_tracing_install/images
1 change: 1 addition & 0 deletions distr_tracing/distr_tracing_install/modules
1 change: 1 addition & 0 deletions distr_tracing/images
1 change: 1 addition & 0 deletions distr_tracing/modules
24 changes: 24 additions & 0 deletions modules/distr-tracing-architecture.adoc
@@ -0,0 +1,24 @@
////
This module included in the following assemblies:
-service_mesh/v2x/ossm-architecture.adoc
-dist_tracing_arch/distr-tracing-architecture.adoc
////

[id="distributed-tracing-architecture_{context}"]
= {DTProductName} architecture

{DTProductName} is made up of several components that work together to collect, store, and display tracing data.

* *Client* (Jaeger client, Tracer, Reporter, instrumented application, client libraries) - The {JaegerShortName} clients are language-specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), WildFly (EAP), and many more, that are already integrated with OpenTracing.

* *Agent* (Jaeger agent, Server Queue, Processor Workers) - The {JaegerShortName} agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes.

* *Collector* (Jaeger Collector, Queue, Workers) - Similar to the agent, the Collector receives spans and places them in an internal queue for processing. This allows the Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage.

* *Storage* (Data Store) - Collectors require a persistent storage backend. {JaegerName} has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch.

* *Query* (Query Service) - Query is a service that retrieves traces from storage.

* *Ingester* (Ingester Service) - {DTProductName} can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend.

* *Jaeger Console* - With the {JaegerName} user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace.