4 changes: 2 additions & 2 deletions _topic_map.yml
@@ -2527,8 +2527,6 @@ Topics:
File: ossm-about
- Name: Service Mesh 2.x release notes
File: servicemesh-release-notes
- Name: Service Mesh support
File: ossm-support
- Name: Service Mesh architecture
File: ossm-architecture
- Name: Service Mesh and Istio differences
@@ -2561,6 +2559,8 @@
File: ossm-extensions
- Name: Using the 3scale Istio adapter
File: threescale-adapter
- Name: Troubleshooting Service Mesh
File: ossm-troubleshooting-istio
- Name: SMCP configuration reference
File: ossm-reference-smcp
- Name: Jaeger configuration reference
16 changes: 12 additions & 4 deletions modules/ossm-about-collecting-ossm-data.adoc
@@ -9,15 +9,23 @@

You can use the `oc adm must-gather` CLI command to collect information about your cluster, including features and objects associated with {ProductName}.

To collect {ProductName} data with `must-gather`, you must specify the {ProductName} image.
.Prerequisites

* Access to the cluster as a user with the `cluster-admin` role.

* The {product-title} CLI (`oc`) installed.

.Procedure

. To collect {ProductName} data with `must-gather`, you must specify the {ProductName} image.
+
[source,terminal]
----
$ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8
----

To collect {ProductName} data for a specific control plane namespace with `must-gather`, you must specify the {ProductName} image and namespace. In this example, replace `<namespace>` with your control plane namespace, such as `istio-system`.

+
. To collect {ProductName} data for a specific control plane namespace with `must-gather`, you must specify the {ProductName} image and namespace. In this example, replace `<namespace>` with your control plane namespace, such as `istio-system`.
+
[source,terminal]
----
$ oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8 gather <namespace>
34 changes: 34 additions & 0 deletions modules/ossm-accessing-jaeger.adoc
@@ -0,0 +1,34 @@
// Module included in the following assemblies:
// * service_mesh/v2x/-ossm-troubleshooting-istio.adoc

[id="ossm-accessing-jaeger_{context}"]
= Accessing the Jaeger console
////
(how to find the URL)
Installed Operators > Jaeger Operator > Jaeger > Jaeger Details > Resources > Route > Location = Link
Networking > Routes> search Jaeger route (Location = Link)
Kiali Console > Distributed Tracing tab
////

The installation process creates a route to access the Jaeger console.

.Procedure
. Log in to the {product-title} console.

. Navigate to *Networking* -> *Routes* and search for the Jaeger route, which is the URL listed under *Location*.

. To query for details of the route using the command line, enter the following command. In this example, `istio-system` is the control plane namespace.
+
[source,terminal]
----
$ export JAEGER_URL=$(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')
----
+
. Launch a browser and navigate to ``\https://<JAEGER_URL>``, where `<JAEGER_URL>` is the route that you discovered in the previous step.

. Log in using the same user name and password that you use to access the {product-title} console.

. If you have added services to the service mesh and have generated traces, you can use the filters and *Find Traces* button to search your trace data.
+
If you are validating the console installation, there is no trace data to display.
32 changes: 32 additions & 0 deletions modules/ossm-accessing-kiali.adoc
@@ -0,0 +1,32 @@
// Module included in the following assemblies:
// * service_mesh/v2x/-ossm-troubleshooting-istio.adoc

[id="ossm-accessing-kiali_{context}"]
= Accessing the Kiali console

////
(how to find the URL to get to the Kiali console)
Installed Operators > Kiali Operator > Kiali > Kiali Details > Resources > Route > Location = Link
Networking > Routes> search Kiali route (Location = Link)
CLI = oc get routes
////

The installation process creates a route to access the Kiali console.

.Procedure

. Log in to the {product-title} console.

. Use the perspective switcher to switch to the *Administrator* perspective.

. Click *Home* -> *Projects*.

. Click the name of your project. For example, click `bookinfo`.

. In the *Launcher* section, click *Kiali*.

. Log in to the Kiali console with the same user name and password that you use to access the {product-title} console.

When you first log in to the Kiali console, you see the *Overview* page, which displays all the namespaces in your service mesh that you have permission to view.

If you are validating the console installation, there might not be any data to display.
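
If you prefer the command line, you can also retrieve the Kiali route host directly; a sketch, assuming the control plane namespace is `istio-system` and the route is named `kiali`:

[source,terminal]
----
$ oc get route kiali -n istio-system -o jsonpath='{.spec.host}'
----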
12 changes: 7 additions & 5 deletions modules/ossm-observability-access.adoc
@@ -12,14 +12,16 @@ To access the Kiali console you must have {ProductName} installed and projects c

.Procedure

. Use the perspective switcher to switch to the Administrator perspective.
. Use the perspective switcher to switch to the *Administrator* perspective.

. Click *Home* > *Projects*.
. Click *Home* -> *Projects*.

. Click the name of your project. For example click `bookinfo`.
. Click the name of your project. For example, click `bookinfo`.

. In the Launcher section, click `Kiali`.
. In the *Launcher* section, click *Kiali*.

. Log in to the Kiali console with the same user name and password that you use to access the {product-title} console.

When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your mesh that you have permission to view.
When you first log in to the Kiali console, you see the *Overview* page, which displays all the namespaces in your service mesh that you have permission to view.

If you are validating the console installation, there might not be any data to display.
15 changes: 15 additions & 0 deletions modules/ossm-troubleshooting-injection.adoc
@@ -0,0 +1,15 @@
// Module included in the following assemblies:
// * service_mesh/v2x/-ossm-troubleshooting-istio.adoc

[id="ossm-troubleshooting-injection_{context}"]
= Troubleshooting sidecar injection

{ProductName} does not automatically inject proxy sidecars into pods. You must opt in to sidecar injection.

== Troubleshooting Istio sidecar injection

Check whether automatic injection is enabled in the `Deployment` for your application. If automatic injection for the Envoy proxy is enabled, there should be a `sidecar.istio.io/inject: "true"` annotation in the `Deployment` resource under `spec.template.metadata.annotations`.
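
For example, you can inspect the annotations from the command line and look for `sidecar.istio.io/inject: "true"` in the output; a minimal sketch, assuming `<deployment_name>` is the name of your application deployment:

[source,terminal]
----
$ oc get deployment <deployment_name> -o jsonpath='{.spec.template.metadata.annotations}'
----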

== Troubleshooting Jaeger agent sidecar injection

Check whether automatic injection is enabled in the `Deployment` for your application. If automatic injection for the Jaeger agent is enabled, there should be a `sidecar.jaegertracing.io/inject: "true"` annotation in the `Deployment` resource.
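
Similarly, a quick command-line check for the Jaeger agent annotation; a sketch, again assuming `<deployment_name>` is the name of your application deployment:

[source,terminal]
----
$ oc get deployment <deployment_name> -o yaml | grep 'sidecar.jaegertracing.io/inject'
----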
39 changes: 39 additions & 0 deletions modules/ossm-troubleshooting-operators.adoc
@@ -0,0 +1,39 @@
// Module included in the following assemblies:
// * service_mesh/v2x/-ossm-troubleshooting-istio.adoc

[id="ossm-troubleshooting-operators_{context}"]
= Troubleshooting service mesh Operators

If you experience Operator issues:

* Verify your Operator subscription status, for example with the command shown after this list.
* Verify that you did not install a community version of the Operator instead of the supported Red Hat version.
* Verify that you have the `cluster-admin` role to install {ProductName}.
* If the issue is related to the installation of Operators, check for errors in the Operator pod logs.
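
A minimal sketch of the subscription check, assuming the Operators are installed in the default `openshift-operators` namespace:

[source,terminal]
----
$ oc get subscriptions -n openshift-operators
----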

[NOTE]
====
You can install Operators only through the OpenShift console; OperatorHub is not accessible from the command line.
====

== Viewing Operator pod logs

You can view Operator logs by using the `oc logs` command. Red Hat may request logs to help resolve support cases.

.Procedure

* To view Operator pod logs, enter the command:
+
[source,terminal]
----
$ oc logs -n openshift-operators <podName>
----
+
For example,
+
[source,terminal]
----
$ oc logs -n openshift-operators istio-operator-bb49787db-zgr87
----

//If your pod fails to start, you may need to use the `--previous` option to see the logs of the last attempt.
47 changes: 47 additions & 0 deletions modules/ossm-troubleshooting-proxy.adoc
@@ -0,0 +1,47 @@
// Module included in the following assemblies:
// * service_mesh/v2x/-ossm-troubleshooting-istio.adoc

[id="ossm-troubleshooting-proxy_{context}"]
= Troubleshooting Envoy proxy

The Envoy proxy intercepts all inbound and outbound traffic for all services in the service mesh. Envoy also collects and reports telemetry on the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod.

== Enabling Envoy access logs

Envoy access logs are useful for diagnosing traffic failures and flows, and they help with end-to-end traffic flow analysis.

To enable access logging for all istio-proxy containers, edit the `ServiceMeshControlPlane` (SMCP) object to add a file name for the logging output.

.Procedure

. Log in to the {product-title} CLI as a user with the `cluster-admin` role by entering the following command. Enter your username and password when prompted.
+
[source,terminal]
----
$ oc login https://{HOSTNAME}:6443
----
+
. Change to the project where you installed the control plane, for example, `istio-system`.
+
[source,terminal]
----
$ oc project istio-system
----
+
. Edit the `ServiceMeshControlPlane` resource.
+
[source,terminal]
----
$ oc edit smcp <smcp_name>
----
+
. As shown in the following example, use `name` to specify the file name for the proxy log. If you do not specify a value for `name`, no log entries are written.
+
[source,yaml]
----
spec:
proxy:
accessLogging:
file:
name: /dev/stdout #file name
----
15 changes: 15 additions & 0 deletions modules/ossm-troubleshooting-smcp.adoc
@@ -0,0 +1,15 @@
// Module included in the following assemblies:
// * service_mesh/v2x/-ossm-troubleshooting-istio.adoc

[id="ossm-troubleshooting-smcp_{context}"]
= Troubleshooting the Service Mesh control plane

If you are experiencing issues while deploying the Service Mesh control plane:

* Ensure that the `ServiceMeshControlPlane` resource is installed in a project that is separate from your services and Operators. This documentation uses the `istio-system` project as an example, but you can deploy your control plane in any project as long as it is separate from the project that contains your Operators and services.

* Ensure that the `ServiceMeshControlPlane` and `Jaeger` custom resources are deployed in the same project. For example, use the `istio-system` project for both.
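
As a first check, you can confirm the control plane status from the command line. A minimal sketch, assuming the control plane is deployed in the `istio-system` project (`smcp` is the short name for the `ServiceMeshControlPlane` resource):

[source,terminal]
----
$ oc get smcp -n istio-system
----

A fully deployed control plane typically reports `ComponentsReady` in the `STATUS` column.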

//* If you selected to install the Elasticsearch Operator in a specific namespace in the cluster instead of selecting *All namespaces in on the cluster (default)*, then OpenShift could not automatically copy the Operator to the istio-system namespace and the Jaeger Operator could not call the Elasticsearch Operator during the installation?

//The steps for deploying the service mesh control plane (SMCP) include verifying the deployment in the OpenShift console.
26 changes: 26 additions & 0 deletions modules/ossm-understanding-versions.adoc
@@ -0,0 +1,26 @@
// Module included in the following assemblies:
// * service_mesh/v1x/upgrading-ossm.adoc ???
// * service_mesh/v2x/upgrading-ossm.adoc
// * service_mesh/v2x/ossm-troubleshooting.adoc

[id="ossm-versions_{context}"]
= Understanding Service Mesh versioning

The {ProductName} 2.0 Operator supports both v1 and v2 service meshes.

* *Operator* version - The current Operator version is {ProductVersion}. This version number only indicates the version of the currently installed Operator. This version number is controlled by the intersection of the *Update Channel* and *Approval Strategy* specified in your Operator subscription. The version of the Operator does not determine which version of the `ServiceMeshControlPlane` resource is deployed. Upgrading to the latest Operator does *not* automatically upgrade your service mesh control plane to the latest version.
+
[IMPORTANT]
====
Upgrading to the latest Operator version does not automatically upgrade your control plane to the latest version.
====
+
* *ServiceMeshControlPlane* version - The same Operator supports multiple versions of the service mesh control plane. The service mesh control plane version controls the architecture and configuration settings that are used to install and deploy {ProductName}. To set or change the service mesh control plane version, you must deploy a new control plane. When you create the service mesh control plane, you can select the version in one of two ways:

** To configure in the Form View, select the version from the *Control Plane Version* menu.

** To configure in the YAML View, set the value for `spec.version` in the YAML file.

* *Control Plane* version - The version parameter specified within the SMCP resource file as `spec.version`. Supported versions are v1.1 and v2.0.

The Operator Lifecycle Manager (OLM) does not manage upgrades from v1 to v2, so the version number of your Operator and your `ServiceMeshControlPlane` (SMCP) might not match unless you have manually upgraded your SMCP.
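
To compare the versions that are actually installed, you can query both from the command line; a hedged sketch, assuming the control plane is deployed in the `istio-system` project:

[source,terminal]
----
$ oc get csv -n openshift-operators
$ oc get smcp -n istio-system -o jsonpath='{.items[0].spec.version}'
----

The first command lists the installed Operator versions; the second prints the `spec.version` value of the deployed `ServiceMeshControlPlane` resource.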
65 changes: 65 additions & 0 deletions modules/ossm-validating-operators.adoc
@@ -0,0 +1,65 @@
// Module included in the following assemblies:
// * service_mesh/v2x/-ossm-troubleshooting-istio.adoc

[id="ossm-validating-operators_{context}"]
= Validating Operator installation

//The Operator installation steps include verifying the Operator status in the OpenShift console.

When you install the {ProductName} Operators, OpenShift automatically creates the following objects as part of a successful Operator installation:

* config maps
* custom resource definitions
* deployments
* pods
* replica sets
* roles
* role bindings
* secrets
* service accounts
* services

.From the OpenShift console

You can verify that the Operator pods are available and running by using the {product-title} console.

. Navigate to *Workloads* -> *Pods*.
. Select the `openshift-operators` namespace.
. Verify that the following pods exist and have a status of `Running`:
** `istio-operator`
** `jaeger-operator`
** `kiali-operator`
. Select the `openshift-operators-redhat` namespace.
. Verify that the `elasticsearch-operator` pod exists and has a status of `Running`.

.From the command line

. Verify the Operator pods are available and running in the `openshift-operators` namespace with the following command:
+
[source,terminal]
----
$ oc get pods -n openshift-operators
----
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
istio-operator-bb49787db-zgr87 1/1 Running 0 15s
jaeger-operator-7d5c4f57d8-9xphf 1/1 Running 0 2m42s
kiali-operator-f9c8d84f4-7xh2v 1/1 Running 0 64s
----
+
. Verify the Elasticsearch Operator with the following command:
+
[source,terminal]
----
$ oc get pods -n openshift-operators-redhat
----
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
elasticsearch-operator-d4f59b968-796vq 1/1 Running 0 15s
----