From bdfa0d5d7e4b3beaea905df5edf6bf6e2d29f498 Mon Sep 17 00:00:00 2001 From: rolfedh Date: Tue, 6 Apr 2021 14:30:31 -0400 Subject: [PATCH] RHDEVDOCS-2900 Tracker for #30976 Issue in file logging/cluster-logging-deploying.adoc Add "OpenShift" to "Elasticsearch Operator" --- jaeger/jaeger_install/rhbjaeger-removing.adoc | 2 +- logging/cluster-logging-deploying.adoc | 10 ++--- logging/cluster-logging-upgrading.adoc | 2 +- .../config/cluster-logging-tolerations.adoc | 2 +- logging/dedicated-cluster-deploying.adoc | 2 +- logging/dedicated-cluster-logging.adoc | 2 +- .../cluster-logging-log-store-status.adoc | 5 +-- modules/cluster-logging-about-collector.adoc | 2 +- modules/cluster-logging-about-logstore.adoc | 8 ++-- modules/cluster-logging-about.adoc | 9 ++-- ...ogging-collector-log-forwarding-about.adoc | 2 +- ...uster-logging-configuring-image-about.adoc | 2 +- modules/cluster-logging-deploy-cli.adoc | 38 ++++++++-------- modules/cluster-logging-deploy-console.adoc | 18 ++++---- .../cluster-logging-deploy-multitenant.adoc | 4 +- ...logging-deploy-storage-considerations.adoc | 2 +- modules/cluster-logging-deploying-about.adoc | 4 +- modules/cluster-logging-elasticsearch-ha.adoc | 2 +- ...uster-logging-elasticsearch-retention.adoc | 2 +- ...cluster-logging-elasticsearch-storage.adoc | 2 +- .../cluster-logging-eventrouter-deploy.adoc | 2 +- ...ster-logging-log-store-status-viewing.adoc | 2 +- modules/cluster-logging-logstore-limits.adoc | 6 +-- ...ter-logging-maintenance-support-about.adoc | 4 +- ...ster-logging-maintenance-support-list.adoc | 2 +- .../cluster-logging-release-notes-5.0.0.adoc | 32 +++++++------- ...uster-logging-troubleshooting-unknown.adoc | 3 +- modules/cluster-logging-uninstall.adoc | 4 +- modules/cluster-logging-updating-logging.adoc | 16 +++---- modules/dedicated-cluster-install-deploy.adoc | 12 ++--- modules/jaeger-config-storage.adoc | 44 +++++++++---------- modules/jaeger-deploy-production-es.adoc | 2 +- modules/jaeger-install-elasticsearch.adoc | 16 +++---- modules/jaeger-install-overview.adoc | 2 +- modules/jaeger-install.adoc | 2 +- modules/jaeger-rn-fixed-issues.adoc | 4 +- modules/jaeger-rn-new-features.adoc | 2 +- modules/metering-install-verify.adoc | 2 +- modules/ossm-install-ossm-operator.adoc | 2 +- modules/ossm-installation-activities.adoc | 2 +- modules/ossm-remove-operators.adoc | 10 ++--- modules/ossm-rn-fixed-issues-1x.adoc | 2 +- modules/ossm-rn-new-features-1x.adoc | 2 +- .../security-monitoring-cluster-logging.adoc | 2 +- service_mesh/v1x/installing-ossm.adoc | 4 +- service_mesh/v2x/installing-ossm.adoc | 4 +- 46 files changed, 150 insertions(+), 155 deletions(-) diff --git a/jaeger/jaeger_install/rhbjaeger-removing.adoc b/jaeger/jaeger_install/rhbjaeger-removing.adoc index 1a24540c5646..eaa8aeebc80d 100644 --- a/jaeger/jaeger_install/rhbjaeger-removing.adoc +++ b/jaeger/jaeger_install/rhbjaeger-removing.adoc @@ -24,4 +24,4 @@ include::modules/jaeger-removing-instance-cli.adoc[leveloffset=+1] * Remove the Jaeger Operator. -* After the Jaeger Operator has been removed, if appropriate, remove the Elasticsearch Operator. +* After the Jaeger Operator has been removed, if appropriate, remove the OpenShift Elasticsearch Operator. 
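For the last step, the console flow is described in the uninstall module later in this patch; if you prefer the CLI, a minimal sketch is to delete the Subscription and then the installed CSV. The subscription name, CSV name, and the `openshift-operators-redhat` namespace here are assumptions taken from the install examples later in this patch, so verify them on your cluster first:

[source,terminal]
----
# Confirm the subscription name and namespace (assumed values shown).
$ oc get subscriptions -n openshift-operators-redhat

# Remove the OpenShift Elasticsearch Operator: delete the subscription, then its CSV.
$ oc delete subscription elasticsearch-operator -n openshift-operators-redhat
$ oc delete clusterserviceversion elasticsearch-operator.5.0.0-202007012112.p0 -n openshift-operators-redhat
----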
diff --git a/logging/cluster-logging-deploying.adoc b/logging/cluster-logging-deploying.adoc index e27a9f1f2c6c..681d12f825c3 100644 --- a/logging/cluster-logging-deploying.adoc +++ b/logging/cluster-logging-deploying.adoc @@ -7,15 +7,15 @@ toc::[] You can install OpenShift Logging by deploying -the Elasticsearch and Cluster Logging Operators. The Elasticsearch Operator -creates and manages the Elasticsearch cluster used by OpenShift Logging. -The Cluster Logging Operator creates and manages the components of the logging stack. +the OpenShift Elasticsearch and Cluster Logging Operators. The OpenShift Elasticsearch Operator +creates and manages the Elasticsearch cluster used by OpenShift Logging. +The Cluster Logging Operator creates and manages the components of the logging stack. The process for deploying OpenShift Logging to {product-title} involves: * Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[OpenShift Logging storage considerations]. -* Installing the Elasticsearch Operator and Cluster Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI]. +* Installing the OpenShift Elasticsearch Operator and Cluster Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI]. // The following include statements pull in the module files that comprise // the assembly. Include any combination of concept, procedure, or reference @@ -32,7 +32,7 @@ include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1] == Post-installation tasks -If you plan to use Kibana, you must xref:#cluster-logging-visualizer-indices_cluster-logging-deploying[manually create your Kibana index patterns and visualizations] to explore and visualize data in Kibana. +If you plan to use Kibana, you must xref:#cluster-logging-visualizer-indices_cluster-logging-deploying[manually create your Kibana index patterns and visualizations] to explore and visualize data in Kibana. include::modules/cluster-logging-visualizer-indices.adoc[leveloffset=+2] diff --git a/logging/cluster-logging-upgrading.adoc b/logging/cluster-logging-upgrading.adoc index 515a266f5db6..29c64c5c15a6 100644 --- a/logging/cluster-logging-upgrading.adoc +++ b/logging/cluster-logging-upgrading.adoc @@ -7,7 +7,7 @@ toc::[] -After updating the {product-title} cluster from 4.6 to 4.7, you can update the Elasticsearch Operator and Cluster Logging Operator from 4.6 to 5.0. +After updating the {product-title} cluster from 4.6 to 4.7, you can update the OpenShift Elasticsearch Operator and Cluster Logging Operator from 4.6 to 5.0. // The following include statements pull in the module files that comprise // the assembly. Include any combination of concept, procedure, or reference diff --git a/logging/config/cluster-logging-tolerations.adoc b/logging/config/cluster-logging-tolerations.adoc index 1d8eee55a0bd..ef13cb3b3e9f 100644 --- a/logging/config/cluster-logging-tolerations.adoc +++ b/logging/config/cluster-logging-tolerations.adoc @@ -8,7 +8,7 @@ toc::[] You can use taints and tolerations to ensure that OpenShift Logging pods run on specific nodes and that no other workload can run on those nodes. 
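The following sketch makes the taint-and-toleration pairing described below concrete. The node name `node1` and the `logging=reserved` key and value are illustrative, not values defined elsewhere in these docs:

[source,terminal]
----
# Taint the node; only pods that tolerate logging=reserved can run on it (illustrative values).
$ oc adm taint nodes node1 logging=reserved:NoExecute
----

A matching toleration in the pod specification then allows the OpenShift Logging pods to schedule onto the tainted node:

[source,yaml]
----
tolerations:
- key: "logging"      # must match the taint key
  operator: "Equal"   # requires the value to match exactly
  value: "reserved"   # must match the taint value
  effect: "NoExecute" # must match the taint effect
----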
-Taints and tolerations are simple `key:value` pair. A taint on a node +Taints and tolerations are simple `key:value` pairs. A taint on a node instructs the node to repel all pods that do not tolerate the taint. The `key` is any string, up to 253 characters and the `value` is any string up to 63 characters. diff --git a/logging/dedicated-cluster-deploying.adoc b/logging/dedicated-cluster-deploying.adoc index c14f7ba2b03c..2a29410ac1dd 100644 --- a/logging/dedicated-cluster-deploying.adoc +++ b/logging/dedicated-cluster-deploying.adoc @@ -1,6 +1,6 @@ :context: dedicated-cluster-deploying [id="dedicated-cluster-deploying"] -= Installing the Cluster Logging Operator and Elasticsearch Operator += Installing the Cluster Logging Operator and OpenShift Elasticsearch Operator include::modules/common-attributes.adoc[] toc::[] diff --git a/logging/dedicated-cluster-logging.adoc b/logging/dedicated-cluster-logging.adoc index d19608842651..0fe2ecc986c8 100644 --- a/logging/dedicated-cluster-logging.adoc +++ b/logging/dedicated-cluster-logging.adoc @@ -8,7 +8,7 @@ toc::[] As a cluster administrator, you can deploy OpenShift Logging to aggregate logs for a range of services. -{product-title} clusters can perform logging tasks using the Elasticsearch +{product-title} clusters can perform logging tasks using the OpenShift Elasticsearch Operator. OpenShift Logging is configured through the Curator tool to retain logs for two days. diff --git a/logging/troubleshooting/cluster-logging-log-store-status.adoc b/logging/troubleshooting/cluster-logging-log-store-status.adoc index 63d7f85e2149..e1e7aa7c43db 100644 --- a/logging/troubleshooting/cluster-logging-log-store-status.adoc +++ b/logging/troubleshooting/cluster-logging-log-store-status.adoc @@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[] toc::[] -You can view the status of the Elasticsearch Operator and for a number of Elasticsearch components. +You can view the status of the OpenShift Elasticsearch Operator and the status of a number of Elasticsearch components. // The following include statements pull in the module files that comprise // the assembly. Include any combination of concept, procedure, or reference @@ -16,6 +16,3 @@ You can view the status of the Elasticsearch Operator and for a number of Elasti include::modules/cluster-logging-log-store-status-viewing.adoc[leveloffset=+1] include::modules/cluster-logging-log-store-status-comp.adoc[leveloffset=+1] - - - diff --git a/modules/cluster-logging-about-collector.adoc b/modules/cluster-logging-about-collector.adoc index d63b6c225dbd..714b389fa143 100644 --- a/modules/cluster-logging-about-collector.adoc +++ b/modules/cluster-logging-about-collector.adoc @@ -14,7 +14,7 @@ By default, the log collector uses the following sources: If you configure the log collector to collect audit logs, it gets them from `/var/log/audit/audit.log`. -The logging collector is a daemon set that deploys pods to each {product-title} node. System and infrastructure logs are generated by journald log messages from the operating system, the container runtime, and {product-title}. Application logs are generated by the CRI-O container engine. Fluentd collects the logs from these sources and forwards them internally or externally as you configure in {product-title}. +The logging collector is a daemon set that deploys pods to each {product-title} node. System and infrastructure logs are generated by journald log messages from the operating system, the container runtime, and {product-title}.
Application logs are generated by the CRI-O container engine. Fluentd collects the logs from these sources and forwards them internally or externally as you configure in {product-title}. The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered *best effort*. diff --git a/modules/cluster-logging-about-logstore.adoc b/modules/cluster-logging-about-logstore.adoc index 29acccde96c4..653190b41979 100644 --- a/modules/cluster-logging-about-logstore.adoc +++ b/modules/cluster-logging-about-logstore.adoc @@ -3,20 +3,20 @@ // * logging/cluster-logging.adoc [id="cluster-logging-about-logstore_{context}"] -= About the log store += About the log store By default, {product-title} uses link:https://www.elastic.co/products/elasticsearch[Elasticsearch (ES)] to store log data. Optionally, you can use the log forwarding features to forward logs to external log stores using Fluentd protocols, syslog protocols, or the {product-title} Log Forwarding API. -The OpenShift Logging Elasticsearch instance is optimized and tested for short term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended you move the data to a third-party storage system. +The OpenShift Logging Elasticsearch instance is optimized and tested for short term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended you move the data to a third-party storage system. -Elasticsearch organizes the log data from Fluentd into datastores, or _indices_, then subdivides each index into multiple pieces called _shards_, which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called _replicas_, which Elasticsearch also spreads across the Elasticsearch nodes. The `ClusterLogging` custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained using a retention policy in the `ClusterLogging` CR. +Elasticsearch organizes the log data from Fluentd into datastores, or _indices_, then subdivides each index into multiple pieces called _shards_, which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called _replicas_, which Elasticsearch also spreads across the Elasticsearch nodes. The `ClusterLogging` custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained using a retention policy in the `ClusterLogging` CR. [NOTE] ==== The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. 
==== -The Cluster Logging Operator and companion Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume. +The Cluster Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume. You can use a `ClusterLogging` custom resource (CR) to increase the number of Elasticsearch nodes, as needed. Refer to the link:https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html[Elasticsearch documentation] for considerations involved in configuring storage. diff --git a/modules/cluster-logging-about.adoc b/modules/cluster-logging-about.adoc index 23f57ae09054..2d2d469e450a 100644 --- a/modules/cluster-logging-about.adoc +++ b/modules/cluster-logging-about.adoc @@ -12,8 +12,8 @@ = About deploying OpenShift Logging ifdef::openshift-enterprise,openshift-webscale,openshift-origin[] -{product-title} cluster administrators can deploy OpenShift Logging using -the {product-title} web console or CLI to install the Elasticsearch +{product-title} cluster administrators can deploy OpenShift Logging using +the {product-title} web console or CLI to install the OpenShift Elasticsearch Operator and Cluster Logging Operator. When the operators are installed, you create a `ClusterLogging` custom resource (CR) to schedule OpenShift Logging pods and other resources necessary to support OpenShift Logging. The operators are @@ -22,15 +22,14 @@ endif::openshift-enterprise,openshift-webscale,openshift-origin[] ifdef::openshift-dedicated[] {product-title} administrators can deploy the Cluster Logging Operator and the -Elasticsearch Operator by using the {product-title} web console and can configure logging in the +OpenShift Elasticsearch Operator by using the {product-title} web console and can configure logging in the `openshift-logging` namespace. Configuring logging will deploy Elasticsearch, Fluentd, and Kibana in the `openshift-logging` namespace. The operators are responsible for deploying, upgrading, and maintaining OpenShift Logging. endif::openshift-dedicated[] The `ClusterLogging` CR defines a complete OpenShift Logging environment that includes all the components -of the logging stack to collect, store and visualize logs. The Cluster Logging Operator watches the OpenShift Logging +of the logging stack to collect, store and visualize logs. The Cluster Logging Operator watches the OpenShift Logging CR and adjusts the logging deployment accordingly. Administrators and application developers can view the logs of the projects for which they have view access. - diff --git a/modules/cluster-logging-collector-log-forwarding-about.adoc b/modules/cluster-logging-collector-log-forwarding-about.adoc index 00d4480c5cfa..b839b8cd755d 100644 --- a/modules/cluster-logging-collector-log-forwarding-about.adoc +++ b/modules/cluster-logging-collector-log-forwarding-about.adoc @@ -12,7 +12,7 @@ Forwarding cluster logs to external third-party systems requires a combination o -- * `elasticsearch`. An external Elasticsearch instance. The `elasticsearch` output can use a TLS connection. -* `fluentdForward`. An external log aggregation solution that supports Fluentd. This option uses the Fluentd *forward* protocols. The `fluentForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a *shared_key* field in a secret. Shared-key authentication can be used with or without TLS. +* `fluentdForward`. 
An external log aggregation solution that supports Fluentd. This option uses the Fluentd *forward* protocols. The `fluentdForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a *shared_key* field in a secret. Shared-key authentication can be used with or without TLS. * `syslog`. An external log aggregation solution that supports the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocols. The `syslog` output can use a UDP, TCP, or TLS connection. diff --git a/modules/cluster-logging-configuring-image-about.adoc b/modules/cluster-logging-configuring-image-about.adoc index 3eb7ca86f86e..619f2fd25ef5 100644 --- a/modules/cluster-logging-configuring-image-about.adoc +++ b/modules/cluster-logging-configuring-image-about.adoc @@ -6,7 +6,7 @@ = Understanding OpenShift Logging component images There are several components in OpenShift Logging, each one implemented with one -or more images. Each image is specified by an environment variable +or more images. Each image is specified by an environment variable defined in the *cluster-logging-operator* deployment in the *openshift-logging* project and should not be changed. You can view the images by running the following command: diff --git a/modules/cluster-logging-deploy-cli.adoc b/modules/cluster-logging-deploy-cli.adoc index 1b7834569af4..af19dc1fdf42 100644 --- a/modules/cluster-logging-deploy-cli.adoc +++ b/modules/cluster-logging-deploy-cli.adoc @@ -5,7 +5,7 @@ [id="cluster-logging-deploy-cli_{context}"] = Installing OpenShift Logging using the CLI -You can use the {product-title} CLI to install the Elasticsearch and Cluster Logging Operators. +You can use the {product-title} CLI to install the OpenShift Elasticsearch and Cluster Logging Operators. .Prerequisites @@ -27,11 +27,11 @@ endif::[] .Procedure -To install the Elasticsearch Operator and Cluster Logging Operator using the CLI: +To install the OpenShift Elasticsearch Operator and Cluster Logging Operator using the CLI: -. Create a Namespace for the Elasticsearch Operator. +. Create a Namespace for the OpenShift Elasticsearch Operator. -.. Create a Namespace object YAML file (for example, `eo-namespace.yaml`) for the Elasticsearch Operator: +.. Create a Namespace object YAML file (for example, `eo-namespace.yaml`) for the OpenShift Elasticsearch Operator: + [source,yaml] ---- @@ -98,9 +98,9 @@ For example: $ oc create -f clo-namespace.yaml ---- -. Install the Elasticsearch Operator by creating the following objects: +. Install the OpenShift Elasticsearch Operator by creating the following objects: -.. Create an Operator Group object YAML file (for example, `eo-og.yaml`) for the Elasticsearch operator: +.. Create an Operator Group object YAML file (for example, `eo-og.yaml`) for the OpenShift Elasticsearch Operator: + [source,yaml] ---- @@ -128,7 +128,7 @@ $ oc create -f eo-og.yaml ---- .. Create a Subscription object YAML file (for example, `eo-sub.yaml`) to -subscribe a Namespace to the Elasticsearch Operator. +subscribe a Namespace to the OpenShift Elasticsearch Operator. + .Example Subscription [source,yaml] ---- @@ -164,7 +164,7 @@ For example: $ oc create -f eo-sub.yaml ---- + -The Elasticsearch Operator is installed to the `openshift-operators-redhat` Namespace and copied to each project in the cluster. +The OpenShift Elasticsearch Operator is installed to the `openshift-operators-redhat` Namespace and copied to each project in the cluster. ..
Verify the Operator installation: + @@ -177,18 +177,18 @@ $ oc get csv --all-namespaces [source,terminal] ---- NAMESPACE NAME DISPLAY VERSION REPLACES PHASE -default elasticsearch-operator.5.0.0-202007012112.p0 Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded -kube-node-lease elasticsearch-operator.5.0.0-202007012112.p0 Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded -kube-public elasticsearch-operator.5.0.0-202007012112.p0 Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded -kube-system elasticsearch-operator.5.0.0-202007012112.p0 Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded -openshift-apiserver-operator elasticsearch-operator.5.0.0-202007012112.p0 Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded -openshift-apiserver elasticsearch-operator.5.0.0-202007012112.p0 Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded -openshift-authentication-operator elasticsearch-operator.5.0.0-202007012112.p0 Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded -openshift-authentication elasticsearch-operator.5.0.0-202007012112.p0 Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded +default elasticsearch-operator.5.0.0-202007012112.p0 OpenShift Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded +kube-node-lease elasticsearch-operator.5.0.0-202007012112.p0 OpenShift Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded +kube-public elasticsearch-operator.5.0.0-202007012112.p0 OpenShift Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded +kube-system elasticsearch-operator.5.0.0-202007012112.p0 OpenShift Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded +openshift-apiserver-operator elasticsearch-operator.5.0.0-202007012112.p0 OpenShift Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded +openshift-apiserver elasticsearch-operator.5.0.0-202007012112.p0 OpenShift Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded +openshift-authentication-operator elasticsearch-operator.5.0.0-202007012112.p0 OpenShift Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded +openshift-authentication elasticsearch-operator.5.0.0-202007012112.p0 OpenShift Elasticsearch Operator 5.0.0-202007012112.p0 Succeeded ... ---- + -There should be an Elasticsearch Operator in each Namespace. The version number might be different than shown. +There should be an OpenShift Elasticsearch Operator in each Namespace. The version number might be different than shown. . Install the Cluster Logging Operator by creating the following objects: @@ -388,8 +388,8 @@ However, an unmanaged deployment does not receive updates until OpenShift Loggin <4> Specify the length of time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, `7d` for seven days. Logs older than the `maxAge` are deleted. You must specify a retention policy for each log source or the Elasticsearch indices will not be created for that source. <5> Specify the number of Elasticsearch nodes. See the note that follows this list. <6> Enter the name of an existing storage class for Elasticsearch storage. For best performance, specify a storage class that allocates block storage. If you do not specify a storage class, {product-title} deploys OpenShift Logging with ephemeral storage only. -<7> Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. 
The default values are `16G` for the memory request and `1` for the CPU request. -<8> Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `256Mi` for the memory request and `100m` for the CPU request. +<7> Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `16G` for the memory request and `1` for the CPU request. +<8> Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `256Mi` for the memory request and `100m` for the CPU request. <9> Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana pods. For more information, see *Configuring the log visualizer*. <10> Settings for configuring the Curator schedule. Curator is used to remove data that is in the Elasticsearch index format prior to {product-title} 4.5 and will be removed in a later release. <11> Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. For more information, see *Configuring Fluentd*. diff --git a/modules/cluster-logging-deploy-console.adoc b/modules/cluster-logging-deploy-console.adoc index 0aa6e0b21520..7daccdfbf478 100644 --- a/modules/cluster-logging-deploy-console.adoc +++ b/modules/cluster-logging-deploy-console.adoc @@ -5,7 +5,7 @@ [id="cluster-logging-deploy-console_{context}"] = Installing OpenShift Logging using the web console -You can use the {product-title} web console to install the Elasticsearch and Cluster Logging Operators. +You can use the {product-title} web console to install the OpenShift Elasticsearch and Cluster Logging Operators. .Prerequisites @@ -27,13 +27,13 @@ endif::[] .Procedure -To install the Elasticsearch Operator and Cluster Logging Operator using the {product-title} web console: +To install the OpenShift Elasticsearch Operator and Cluster Logging Operator using the {product-title} web console: -. Install the Elasticsearch Operator: +. Install the OpenShift Elasticsearch Operator: .. In the {product-title} web console, click *Operators* -> *OperatorHub*. -.. Choose *Elasticsearch Operator* from the list of available Operators, and click *Install*. +.. Choose *OpenShift Elasticsearch Operator* from the list of available Operators, and click *Install*. .. Ensure that the *All namespaces on the cluster* is selected under *Installation Mode*. @@ -60,9 +60,9 @@ scrapes the `openshift-operators-redhat` namespace. .. Click *Install*. -.. Verify that the Elasticsearch Operator installed by switching to the *Operators* → *Installed Operators* page. +.. Verify that the OpenShift Elasticsearch Operator installed by switching to the *Operators* → *Installed Operators* page. -.. Ensure that *Elasticsearch Operator* is listed in all projects with a *Status* of *Succeeded*. +.. Ensure that *OpenShift Elasticsearch Operator* is listed in all projects with a *Status* of *Succeeded*. . Install the Cluster Logging Operator: @@ -225,9 +225,9 @@ However, an unmanaged deployment does not receive updates until OpenShift Loggin <3> Settings for configuring Elasticsearch. 
Using the CR, you can configure shard replication policy and persistent storage. <4> Specify the length of time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, `7d` for seven days. Logs older than the `maxAge` are deleted. You must specify a retention policy for each log source or the Elasticsearch indices will not be created for that source. <5> Specify the number of Elasticsearch nodes. See the note that follows this list. -<6> Enter the name of an existing storage class for Elasticsearch storage. For best performance, specify a storage class that allocates block storage. If you do not specify a storage class, {product-title} deploys cluster logging with ephemeral storage only. -<7> Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `16G` for the memory request and `1` for the CPU request. -<8> Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `256Mi` for the memory request and `100m` for the CPU request. +<6> Enter the name of an existing storage class for Elasticsearch storage. For best performance, specify a storage class that allocates block storage. If you do not specify a storage class, OpenShift Logging uses ephemeral storage. +<7> Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `16G` for the memory request and `1` for the CPU request. +<8> Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `256Mi` for the memory request and `100m` for the CPU request. <9> Settings for configuring Kibana. Using the CR, you can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes. For more information, see *Configuring the log visualizer*. <10> Settings for configuring the Curator schedule. Curator is used to remove data that is in the Elasticsearch index format prior to {product-title} 4.5 and will be removed in a later release. <11> Settings for configuring Fluentd. Using the CR, you can configure Fluentd CPU and memory limits. For more information, see *Configuring Fluentd*. diff --git a/modules/cluster-logging-deploy-multitenant.adoc b/modules/cluster-logging-deploy-multitenant.adoc index b42e5da57c57..cf78c4d60a0d 100644 --- a/modules/cluster-logging-deploy-multitenant.adoc +++ b/modules/cluster-logging-deploy-multitenant.adoc @@ -7,11 +7,11 @@ If you are deploying OpenShift Logging into a cluster that uses multitenant isolation mode, projects are isolated from other projects. As a result, network traffic is not allowed between pods or services in different projects. -Because the Elasticsearch Operator and the Cluster Logging Operator are installed in different projects, you must explicitly allow access between the `openshift-operators-redhat` and `openshift-logging` projects. How you allow this access depends on how you configured multitenant isolation mode. 
+Because the OpenShift Elasticsearch Operator and the Cluster Logging Operator are installed in different projects, you must explicitly allow access between the `openshift-operators-redhat` and `openshift-logging` projects. How you allow this access depends on how you configured multitenant isolation mode. .Procedure -To allow traffic between the Elasticsearch Operator and the Cluster Logging Operator, perform one of the following: +To allow traffic between the OpenShift Elasticsearch Operator and the Cluster Logging Operator, perform one of the following: * If you configured multitenant isolation mode with the OpenShift SDN CNI plug-in set to the *Multitenant* mode, use the following command to join the two projects: + diff --git a/modules/cluster-logging-deploy-storage-considerations.adoc b/modules/cluster-logging-deploy-storage-considerations.adoc index 48c64f1aff76..00d8d6c12f50 100644 --- a/modules/cluster-logging-deploy-storage-considerations.adoc +++ b/modules/cluster-logging-deploy-storage-considerations.adoc @@ -16,7 +16,7 @@ A persistent volume is required for each Elasticsearch deployment configuration. If you use a local volume for persistent storage, do not use a raw block volume, which is described with `volumeMode: block` in the `LocalVolume` object. Elasticsearch cannot use raw block volumes. ==== -The Elasticsearch Operator names the PVCs using the Elasticsearch resource name. +The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name. //// Below are capacity planning guidelines for {product-title} aggregate logging. diff --git a/modules/cluster-logging-deploying-about.adoc b/modules/cluster-logging-deploying-about.adoc index cd2493e871ae..770baf838060 100644 --- a/modules/cluster-logging-deploying-about.adoc +++ b/modules/cluster-logging-deploying-about.adoc @@ -85,7 +85,7 @@ You can configure a persistent storage class and size for the Elasticsearch clus ---- This example specifies each data node in the cluster will be bound to a PVC that -requests "200G" of "gp2" storage. Each primary shard will be backed by a single replica. +requests "200G" of "gp2" storage. Each primary shard will be backed by a single replica. [NOTE] ==== @@ -107,7 +107,7 @@ You can set the policy that defines how Elasticsearch shards are replicated acro * `FullRedundancy`. The shards for each index are fully replicated to every data node. * `MultipleRedundancy`. The shards for each index are spread over half of the data nodes. * `SingleRedundancy`. A single copy of each shard. Logs are always available and recoverable as long as at least two data nodes exist. -* `ZeroRedundancy`. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails. +* `ZeroRedundancy`. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails. //// Log collectors:: diff --git a/modules/cluster-logging-elasticsearch-ha.adoc b/modules/cluster-logging-elasticsearch-ha.adoc index ac05bfd0a299..6310c4c7b5d3 100644 --- a/modules/cluster-logging-elasticsearch-ha.adoc +++ b/modules/cluster-logging-elasticsearch-ha.adoc @@ -43,7 +43,7 @@ to every data node. This provides the highest safety, but at the cost of the hig This provides a good tradeoff between safety and performance. * *SingleRedundancy*. Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. -Better performance than MultipleRedundancy, when using 5 or more nodes. 
You cannot +apply this policy on deployments of a single Elasticsearch node. * *ZeroRedundancy*. Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. diff --git a/modules/cluster-logging-elasticsearch-retention.adoc b/modules/cluster-logging-elasticsearch-retention.adoc index d80da28ed8f8..7190ef3b0230 100644 --- a/modules/cluster-logging-elasticsearch-retention.adoc +++ b/modules/cluster-logging-elasticsearch-retention.adoc @@ -108,7 +108,7 @@ Modifying the `Elasticsearch` CR is not supported. All changes to the retention policies must be made in the `ClusterLogging` CR. ==== + -The Elasticsearch Operator deploys a cron job to roll over indices for each +The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the `pollInterval`. + [source,terminal] diff --git a/modules/cluster-logging-elasticsearch-storage.adoc b/modules/cluster-logging-elasticsearch-storage.adoc index 322e8ddca2ab..e532a1360f19 100644 --- a/modules/cluster-logging-elasticsearch-storage.adoc +++ b/modules/cluster-logging-elasticsearch-storage.adoc @@ -5,7 +5,7 @@ [id="cluster-logging-elasticsearch-storage_{context}"] = Configuring persistent storage for the log store -Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance. +Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance. [WARNING] ==== diff --git a/modules/cluster-logging-eventrouter-deploy.adoc b/modules/cluster-logging-eventrouter-deploy.adoc index ff1a75e39805..6317253c23c1 100644 --- a/modules/cluster-logging-eventrouter-deploy.adoc +++ b/modules/cluster-logging-eventrouter-deploy.adoc @@ -11,7 +11,7 @@ The following Template object creates the service account, cluster role, and clu .Prerequisites -* You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the *cluster-admin* role. +* You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the *cluster-admin* role. * OpenShift Logging must be installed. diff --git a/modules/cluster-logging-log-store-status-viewing.adoc b/modules/cluster-logging-log-store-status-viewing.adoc index b0a0eb128f97..618f5d387c0d 100644 --- a/modules/cluster-logging-log-store-status-viewing.adoc +++ b/modules/cluster-logging-log-store-status-viewing.adoc @@ -115,7 +115,7 @@ status: <1> * Container Terminated for both the log store and proxy containers. * Pod unschedulable. Also, a condition is shown for a number of issues, see *Example condition messages*. -<4> The log store nodes in the cluster, with `upgradeStatus`. +<4> The log store nodes in the cluster, with `upgradeStatus`.
[id="cluster-logging-elasticsearch-status-message_{context}"] diff --git a/modules/cluster-logging-logstore-limits.adoc b/modules/cluster-logging-logstore-limits.adoc index 84596af2fef7..e99ad6c29100 100644 --- a/modules/cluster-logging-logstore-limits.adoc +++ b/modules/cluster-logging-logstore-limits.adoc @@ -6,7 +6,7 @@ = Configuring CPU and memory requests for the log store Each component specification allows for adjustments to both the CPU and memory requests. -You should not have to manually adjust these values as the Elasticsearch +You should not have to manually adjust these values as the OpenShift Elasticsearch Operator sets values sufficient for your environment. [NOTE] @@ -53,8 +53,8 @@ spec: memory: 100Mi ---- <1> Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, -the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `16Gi` for the memory request and `1` for the CPU request. -<2> Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `256Mi` for the memory request and `100m` for the CPU request. +the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `16Gi` for the memory request and `1` for the CPU request. +<2> Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are `256Mi` for the memory request and `100m` for the CPU request. If you adjust the amount of Elasticsearch memory, you must change both the request value and the limit value. diff --git a/modules/cluster-logging-maintenance-support-about.adoc b/modules/cluster-logging-maintenance-support-about.adoc index 1da46f588ae9..6d8b600cc70a 100644 --- a/modules/cluster-logging-maintenance-support-about.adoc +++ b/modules/cluster-logging-maintenance-support-about.adoc @@ -5,9 +5,9 @@ [id="cluster-logging-maintenance-support-about_{context}"] = About unsupported configurations -The supported way of configuring OpenShift Logging is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across {product-title} releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the Elasticsearch Operator and Cluster Logging Operator reconcile any differences. The Operators reverse everything to the defined state by default and by design. +The supported way of configuring OpenShift Logging is by configuring it using the options described in this documentation. Do not use other configurations, as they are unsupported. Configuration paradigms might change across {product-title} releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will disappear because the OpenShift Elasticsearch Operator and Cluster Logging Operator reconcile any differences. 
The Operators reverse everything to the defined state by default and by design. [NOTE] ==== -If you _must_ perform configurations not described in the {product-title} documentation, you _must_ set your Cluster Logging Operator or Elasticsearch Operator to *Unmanaged*. An unmanaged OpenShift Logging environment is _not supported_ and does not receive updates until you return OpenShift Logging to *Managed*. +If you _must_ perform configurations not described in the {product-title} documentation, you _must_ set your Cluster Logging Operator or OpenShift Elasticsearch Operator to *Unmanaged*. An unmanaged OpenShift Logging environment is _not supported_ and does not receive updates until you return OpenShift Logging to *Managed*. ==== diff --git a/modules/cluster-logging-maintenance-support-list.adoc b/modules/cluster-logging-maintenance-support-list.adoc index c9057d4469d0..4b8b852d012c 100644 --- a/modules/cluster-logging-maintenance-support-list.adoc +++ b/modules/cluster-logging-maintenance-support-list.adoc @@ -17,7 +17,7 @@ You must set the Cluster Logging Operator to the unmanaged state to modify the f * the Fluentd daemon set -You must set the Elasticsearch Operator to the unmanaged state to modify the following component: +You must set the OpenShift Elasticsearch Operator to the unmanaged state to modify the following component: * the Elasticsearch deployment files. diff --git a/modules/cluster-logging-release-notes-5.0.0.adoc b/modules/cluster-logging-release-notes-5.0.0.adoc index 2ce8079555f7..c29c8841a405 100644 --- a/modules/cluster-logging-release-notes-5.0.0.adoc +++ b/modules/cluster-logging-release-notes-5.0.0.adoc @@ -19,23 +19,23 @@ With this release, Cluster Logging becomes Red Hat OpenShift Logging, version 5. // https://bugzilla.redhat.com/show_bug.cgi?id=1883444 === Maximum five primary shards per index -With this release, the Elasticsearch Operator (EO) sets the number of primary shards for an index between one and five, depending on the number of data nodes defined for a cluster. +With this release, the OpenShift Elasticsearch Operator (EO) sets the number of primary shards for an index between one and five, depending on the number of data nodes defined for a cluster. Previously, the EO set the number of shards for an index to the number of data nodes. When an index in Elasticsearch was configured with a number of replicas, it created that many replicas for each primary shard, not per index. Therefore, as the index sharded, a greater number of replica shards existed in the cluster, which created a lot of overhead for the cluster to replicate and keep in sync. [discrete] [id="openshift-logging-5-0-updated-eo-name"] // https://bugzilla.redhat.com/show_bug.cgi?id=1898920 -=== Updated Elasticsearch Operator name and maturity level +=== Updated OpenShift Elasticsearch Operator name and maturity level -This release updates the display name of the Elasticsearch Operator and operator maturity level. The new display name and clarified specific use for the Elasticsearch Operator are updated in Operator Hub. +This release updates the display name of the OpenShift Elasticsearch Operator and operator maturity level. The new display name and clarified specific use for the OpenShift Elasticsearch Operator are updated in Operator Hub. 
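On a cluster with the 5.0 release installed, the updated display name is visible anywhere the CSV is listed; for example (the namespace follows the install examples earlier in this patch, and the exact version string on your cluster will differ):

[source,terminal]
----
$ oc get csv -n openshift-operators-redhat
----

.Example output
[source,terminal]
----
NAME                                           DISPLAY                            VERSION                 REPLACES   PHASE
elasticsearch-operator.5.0.0-202007012112.p0   OpenShift Elasticsearch Operator   5.0.0-202007012112.p0              Succeeded
----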
[discrete] [id="openshift-logging-5-0-es-csv-success"] // https://bugzilla.redhat.com/show_bug.cgi?id=1913464 -=== Elasticsearch Operator reports on CSV success +=== OpenShift Elasticsearch Operator reports on CSV success -This release adds reporting metrics to indicate that installing or upgrading the Elasticsearch Operator ClusterServiceVersion (CSV) was successful. Previously, there was no way to determine, or generate an alert, if the CSV installation or upgrade for the Elasticsearch Operator failed. Now, an alert is provided as part of the Elasticsearch Operator. +This release adds reporting metrics to indicate that installing or upgrading the `ClusterServiceVersion` (CSV) object for the OpenShift Elasticsearch Operator was successful. Previously, there was no way to determine, or generate an alert, if installing or upgrading the CSV failed. Now, an alert is provided as part of the OpenShift Elasticsearch Operator. [discrete] [id="openshift-logging-5-0-reduced-cert-warnings"] @@ -63,7 +63,7 @@ The current release adds a connection timeout for deletion jobs, which helps pre // https://bugzilla.redhat.com/show_bug.cgi?id=1920215 === Minimize updates to rollover index templates -With this enhancement, the Elasticsearch Operator only updates its rollover index templates if they have different field values. Index templates have a higher priority than indices. When the template is updated, the cluster prioritizes distributing them over the index shards, impacting performance. To minimize Elasticsearch cluster operations, the operator only updates the templates when the number of primary shards or replica shards changes from what is currently configured. +With this enhancement, the OpenShift Elasticsearch Operator only updates its rollover index templates if they have different field values. Index templates have a higher priority than indices. When the template is updated, the cluster prioritizes distributing them over the index shards, impacting performance. To minimize Elasticsearch cluster operations, the operator only updates the templates when the number of primary shards or replica shards changes from what is currently configured. [id="openshift-logging-5-0-technology-preview"] == Technology Preview features @@ -133,19 +133,19 @@ In the table below, features are marked with the following statuses: * Previously, nodes did not recover from `Pending` status because a software bug did not correctly update their statuses in the Elasticsearch custom resource (CR). The current release fixes this issue, so the nodes can recover when their status is `Pending`. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1887357[*BZ#1887357*]) -* Previously, when the Cluster Logging Operator (CLO) scaled down the number of Elasticsearch nodes in the `clusterlogging` CR to three nodes, it omitted previously-created nodes that had unique IDs.
The OpenShift Elasticsearch Operator rejected the update because it has safeguards that prevent nodes with unique IDs from being removed. Now, when the CLO scales down the number of nodes and updates the Elasticsearch CR, it marks nodes with unique IDs as count 0 instead of omitting them. As a result, users can scale down their cluster to 3 nodes by using the `clusterlogging` CR. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1879150[*BZ#1879150*]) * Previously, the Fluentd collector pod went into a crash loop when the `ClusterLogForwarder` had an incorrectly-configured secret. The current release fixes this issue. Now, the `ClusterLogForwarder` validates the secrets and reports any errors in its status field. As a result, it does not cause the Fluentd collector pod to crash. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1888943[*BZ#1888943*]) -* Previously, if you updated the Kibana resource configuration in the `clusterlogging` instance to `resource{}`, the resulting nil map caused a panic and changed the status of the Elasticsearch Operator to `CrashLoopBackOff`. The current release fixes this issue by initializing the map. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1889573[*BZ#1889573*]) +* Previously, if you updated the Kibana resource configuration in the `clusterlogging` instance to `resource{}`, the resulting nil map caused a panic and changed the status of the OpenShift Elasticsearch Operator to `CrashLoopBackOff`. The current release fixes this issue by initializing the map. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1889573[*BZ#1889573*]) * Previously, the Fluentd collector pod went into a crash loop when the `ClusterLogForwarder` had multiple outputs using the same secret. The current release fixes this issue. Now, multiple outputs can share a secret. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1890072[*BZ#1890072*]) -* Previously, if you deleted a Kibana route, the Cluster Logging Operator (CLO) could not recover or recreate it. Now, the CLO watches the route, and if you delete the route, the Elasticsearch Operator can reconcile or recreate it. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1890825[*BZ#1890825*]) +* Previously, if you deleted a Kibana route, the Cluster Logging Operator (CLO) could not recover or recreate it. Now, the CLO watches the route, and if you delete the route, the OpenShift Elasticsearch Operator can reconcile or recreate it. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1890825[*BZ#1890825*]) * Previously, the Cluster Logging Operator (CLO) would attempt to reconcile the Elasticsearch resource, which depended upon the Red Hat-provided Elastic Custom Resource Definition (CRD). Attempts to list an unknown kind caused the CLO to exit its reconciliation loop.
This happened because the CLO tried to reconcile all of its managed resources whether they were defined or not. The current release fixes this issue. The CLO only reconciles types provided by the OpenShift Elasticsearch Operator if a user defines managed storage. As a result, users can create collector-only deployments of cluster logging by deploying the CLO. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1891738[*BZ#1891738*]) -* Previously, because of an LF GA syslog implementation for RFC 3164, logs sent to remote syslog were not compatible with the legacy behavior. The current release fixes this issue. AddLogSource adds details about log's source details to the "message" field. Now, logs sent to remote syslog are compatible with the legacy behavior. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1891886[*BZ#1891886*]) +* Previously, because of an LF GA syslog implementation for RFC 3164, logs sent to remote syslog were not compatible with the legacy behavior. The current release fixes this issue. `AddLogSource` adds details about the log's source to the `message` field. Now, logs sent to remote syslog are compatible with the legacy behavior. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1891886[*BZ#1891886*]) * Previously, the Elasticsearch rollover pods failed with a `resource_already_exists_exception` error. Within the Elasticsearch rollover API, when the next index was created, the `*-write` alias was not updated to point to it. As a result, the next time the rollover API endpoint was triggered for that particular index, it received an error that the resource already existed. + The current release fixes this issue. Now, when a rollover occurs in the `indexm * Previously, Fluent stopped sending logs even though the logging stack seemed functional. Logs were not shipped to an endpoint for an extended period even when an endpoint came back up. This happened if the max backoff time was too long and the endpoint was down. The current release fixes this issue by lowering the max backoff time, so the logs are shipped sooner. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1894634[*BZ#1894634*]) -* Previously, omitting the Storage size of the Elasticsearch node caused panic in the Elasticsearch Operator code. This panic appeared in the logs as: `Observed a panic: "invalid memory address or nil pointer dereference"` The panic happened because although Storage size is a required field, the software didn't check for it. The current release fixes this issue, so there is no panic if the storage size is omitted. Instead, the storage defaults to ephemeral storage and generates a log message for the user. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1899589[*BZ#1899589*]) +* Previously, omitting the Storage size of the Elasticsearch node caused panic in the OpenShift Elasticsearch Operator code. This panic appeared in the logs as: `Observed a panic: "invalid memory address or nil pointer dereference"` The panic happened because although Storage size is a required field, the software didn't check for it. The current release fixes this issue, so there is no panic if the storage size is omitted. Instead, the storage defaults to ephemeral storage and generates a log message for the user. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1899589[*BZ#1899589*]) * Previously, `elasticsearch-rollover` and `elasticsearch-delete` pods remained in the `Invalid JSON:` or `ValueError: No JSON object could be decoded` error states.
This exception was raised because there was no exception handler for invalid JSON input. The current release fixes this issue by providing a handler for invalid JSON input. As a result, the handler outputs an error message instead of an exception traceback, and the `elasticsearch-rollover` and `elasticsearch-delete` jobs do not remain in those error states. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1899905[*BZ#1899905*]) @@ -161,15 +161,15 @@ The current release fixes this issue. Now, when a rollover occurs in the `indexm * Previously, if you deleted the secret, it was not recreated. Even though the certificates were on a disk local to the operator, they weren't rewritten because they hadn't changed. That is, certificates were only written if they changed. The current release fixes this issue. It rewrites the secret if the certificate changes or is not found. Now, if you delete the master-certs, they are replaced. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1901869[*BZ#1901869*]) -* Previously, if a cluster had multiple custom resources with the same name, the resource would get selected alphabetically when not fully qualified with the API group. As a result, if you installed Red Hat’s OpenShift Elasticsearch Operator alongside the Elastic Elasticsearch Operator, you would see failures when collecting data via a must-gather report. The current release fixes this issue by ensuring must-gathers now use the full API group when gathering information about the cluster's custom resources. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1897731[*BZ#1897731*]) -* An earlier bug fix to address issues related to certificate generation introduced an error. Trying to read the certificates caused them to be regenerated because they were recognized as missing. This, in turn, triggered the OpenShift Elasticsearch Operator to perform a rolling upgrade on the Elasticsearch cluster and, potentially, to have mismatched certificates. This bug was caused by the operator incorrectly writing certificates to the working directory. The current release fixes this issue. Now the operator consistently reads and writes certificates to the same working directory, and the certificates are only regenerated if needed.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1905910[*BZ#1905910*]) * Previously, queries to the root endpoint to retrieve the Elasticsearch version received a 403 response. The 403 response broke any services that used this endpoint in prior releases. This error happened because non-administrative users did not have the `monitor` permission required to query the root endpoint and retrieve the Elasticsearch version. Now, non-administrative users can query the root endpoint for the deployed version of Elasticsearch. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1906765[*BZ#1906765*]) -* Previously, in some bulk insertion situations, the Elasticsearch proxy timed out connections between fluentd and Elasticsearch. As a result, fluentd failed to deliver messages and logged a `Server returned nothing (no headers, no data)` error. The current release fixes this issue: It increases the default HTTP read and write timeouts in the Elasticsearch proxy from five seconds to one minute. It also provides command-line options in the Elasticsearch proxy to control HTTP timeouts in the field. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1908707[*BZ#1908707*]) +* Previously, in some bulk insertion situations, the Elasticsearch proxy timed out connections between fluentd and Elasticsearch. As a result, fluentd failed to deliver messages and logged a `Server returned nothing (no headers, no data)` error. The current release fixes this issue: It increases the default HTTP read and write timeouts in the Elasticsearch proxy from five seconds to one minute. It also provides command-line options in the Elasticsearch proxy to control HTTP timeouts in the field. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1908707[*BZ#1908707*]) -* Previously, in some cases, the {ProductName}/Elasticsearch dashboard was missing from the {product-title} monitoring dashboard because the dashboard configuration resource referred to a different namespace owner and caused the {product-title} to garbage-collect that resource. Now, the ownership reference is removed from the Elasticsearch Operator reconciler configuration, and the logging dashboard appears in the console. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1910259[*BZ#1910259*]) +* Previously, in some cases, the {ProductName}/Elasticsearch dashboard was missing from the {product-title} monitoring dashboard because the dashboard configuration resource referred to a different namespace owner and caused the {product-title} to garbage-collect that resource. Now, the ownership reference is removed from the OpenShift Elasticsearch Operator reconciler configuration, and the logging dashboard appears in the console. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1910259[*BZ#1910259*]) * Previously, the code that uses environment variables to replace values in the Kibana configuration file did not consider commented lines. This prevented users from overriding the default value of server.maxPayloadBytes. The current release fixes this issue by uncommenting the default value of server.maxPayloadBytes within the Kibana configuration file. Now, users can override the value by using environment variables, as documented.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1918876[*BZ#1918876*]) diff --git a/modules/cluster-logging-troubleshooting-unknown.adoc b/modules/cluster-logging-troubleshooting-unknown.adoc index dc73a29a4ae7..c1f3a57a4986 100644 --- a/modules/cluster-logging-troubleshooting-unknown.adoc +++ b/modules/cluster-logging-troubleshooting-unknown.adoc @@ -4,7 +4,7 @@ [id="cluster-logging-troubleshooting-unknown_{context}"] = Troubleshooting a Kubernetes unknown error while connecting to Elasticsearch - + If you are attempting to use an F5 load balancer in front of Kibana with `X-Forwarded-For` enabled, this can cause an issue in which the Elasticsearch `Searchguard` plug-in is unable to correctly accept connections from Kibana. @@ -38,4 +38,3 @@ $ oc edit configmap/elasticsearch <1> . Scale Elasticsearch back up. . Scale up all Fluentd pods. - diff --git a/modules/cluster-logging-uninstall.adoc b/modules/cluster-logging-uninstall.adoc index f642181a1a93..d8904ee27970 100644 --- a/modules/cluster-logging-uninstall.adoc +++ b/modules/cluster-logging-uninstall.adoc @@ -38,13 +38,13 @@ To remove OpenShift Logging: .. Click the Options menu {kebab} next to *Elasticsearch* and select *Delete Custom Resource Definition*. -. Optional: Remove the Cluster Logging Operator and Elasticsearch Operator: +. Optional: Remove the Cluster Logging Operator and OpenShift Elasticsearch Operator: .. Switch to the *Operators* -> *Installed Operators* page. .. Click the Options menu {kebab} next to the Cluster Logging Operator and select *Uninstall Operator*. -.. Click the Options menu {kebab} next to the Elasticsearch Operator and select *Uninstall Operator*. +.. Click the Options menu {kebab} next to the OpenShift Elasticsearch Operator and select *Uninstall Operator*. . Optional: Remove the OpenShift Logging and Elasticsearch projects. diff --git a/modules/cluster-logging-updating-logging.adoc b/modules/cluster-logging-updating-logging.adoc index bd4a43e2c6ee..ec2dba773f37 100644 --- a/modules/cluster-logging-updating-logging.adoc +++ b/modules/cluster-logging-updating-logging.adoc @@ -5,16 +5,16 @@ [id="cluster-logging-updating-logging_{context}"] = Updating OpenShift Logging -After updating the {product-title} cluster, you can update from cluster logging 4.6 to Red Hat OpenShift Logging 5.0 by changing the subscription for the Elasticsearch Operator and the Cluster Logging Operator. +After updating the {product-title} cluster, you can update from cluster logging 4.6 to Red Hat OpenShift Logging 5.0 by changing the subscription for the OpenShift Elasticsearch Operator and the Cluster Logging Operator. When you update: -* You must update the Elasticsearch Operator before updating the Cluster Logging Operator. -* You must update both the Elasticsearch Operator and the Cluster Logging Operator. +* You must update the OpenShift Elasticsearch Operator before updating the Cluster Logging Operator. +* You must update both the OpenShift Elasticsearch Operator and the Cluster Logging Operator. + -Kibana is unusable when the Elasticsearch Operator has been updated but the Cluster Logging Operator has not been updated. +Kibana is unusable when the OpenShift Elasticsearch Operator has been updated but the Cluster Logging Operator has not been updated. + -If you update the Cluster Logging Operator before the Elasticsearch Operator, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod.
When the Cluster Logging Operator pod redeploys, the Kibana CR is created. +If you update the Cluster Logging Operator before the OpenShift Elasticsearch Operator, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, delete the Cluster Logging Operator pod. When the Cluster Logging Operator pod redeploys, the Kibana CR is created. [IMPORTANT] ==== @@ -34,13 +34,13 @@ Upgrade your cluster logging version to 4.6 before updating to Red Hat OpenShift .Procedure -. Update the Elasticsearch Operator: +. Update the OpenShift Elasticsearch Operator: .. From the web console, click *Operators* -> *Installed Operators*. .. Select the `openshift-operators-redhat` project. -.. Click the *Elasticsearch Operator*. +.. Click the *OpenShift Elasticsearch Operator*. .. Click *Subscription* -> *Channel*. @@ -48,7 +48,7 @@ Upgrade your cluster logging version to 4.6 before updating to Red Hat OpenShift .. Wait for a few seconds, then click *Operators* -> *Installed Operators*. + -Verify that the Elasticsearch Operator version is 5.0.x. +Verify that the OpenShift Elasticsearch Operator version is 5.0.x. + Wait for the *Status* field to report *Succeeded*. diff --git a/modules/dedicated-cluster-install-deploy.adoc b/modules/dedicated-cluster-install-deploy.adoc index 145ddc04054d..397ce4b9f39a 100644 --- a/modules/dedicated-cluster-install-deploy.adoc +++ b/modules/dedicated-cluster-install-deploy.adoc @@ -4,18 +4,18 @@ [id="dedicated-cluster-install-deploy"] -= Installing OpenShift Logging and Elasticsearch Operators += Installing OpenShift Logging and OpenShift Elasticsearch Operators You can use the {product-title} console to install OpenShift Logging by deploying instances of -the OpenShift Logging and Elasticsearch Operators. The Cluster Logging Operator -creates and manages the components of the logging stack. The Elasticsearch Operator +the OpenShift Logging and OpenShift Elasticsearch Operators. The Cluster Logging Operator +creates and manages the components of the logging stack. The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging. [NOTE] ==== The OpenShift Logging solution requires that you install both the -Cluster Logging Operator and Elasticsearch Operator. When you deploy an instance -of the Cluster Logging Operator, it also deploys an instance of the Elasticsearch +Cluster Logging Operator and OpenShift Elasticsearch Operator. When you deploy an instance +of the Cluster Logging Operator, it also deploys an instance of the OpenShift Elasticsearch Operator. ==== @@ -29,7 +29,7 @@ production deployments. .Procedure -. Install the Elasticsearch Operator from the OperatorHub: +. Install the OpenShift Elasticsearch Operator from the OperatorHub: .. In the {product-title} web console, click *Operators* -> *OperatorHub*. diff --git a/modules/jaeger-config-storage.adoc b/modules/jaeger-config-storage.adoc index 73ba2ce17640..e9153459ac3a 100644 --- a/modules/jaeger-config-storage.adoc +++ b/modules/jaeger-config-storage.adoc @@ -6,7 +6,7 @@ This REFERENCE module included in the following assemblies: [id="jaeger-config-storage_{context}"] = Jaeger storage configuration options -You configure storage for the Collector, Ingester, and Query services under `spec:storage`. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. +You configure storage for the Collector, Ingester, and Query services under `spec:storage`. 
Multiple instances of each of these components can be provisioned as required for performance and resilience purposes. .Jaeger general storage parameters [options="header"] @@ -42,7 +42,7 @@ Memory storage is only appropriate for development, testing, demonstrations, and |storage: esIndexCleaner: enabled: -|When using Elasticsearch storage, by default a job is created to clean old traces from the index. This parameter enables or disables the index cleaner job. +|When using Elasticsearch storage, by default a job is created to clean old traces from the index. This parameter enables or disables the index cleaner job. |`true`/ `false` |`true` @@ -64,19 +64,19 @@ Memory storage is only appropriate for development, testing, demonstrations, and [id="jaeger-config-auto-provisioning-es_{context}"] == Auto-provisioning an Elasticsearch instance -When the `storage:type` is set to `elasticsearch` but there is no value set for `spec:storage:options:es:server-urls`, the Jaeger Operator uses the Elasticsearch Operator to create an Elasticsearch cluster based on the configuration provided in the `storage` section of the custom resource file. +When the `storage:type` is set to `elasticsearch` but there is no value set for `spec:storage:options:es:server-urls`, the Jaeger Operator uses the OpenShift Elasticsearch Operator to create an Elasticsearch cluster based on the configuration provided in the `storage` section of the custom resource file. .Restrictions * There can be only one Elasticsearch per namespace. -* You cannot share or reuse a {ProductName} logging Elasticsearch instance with Jaeger. The Elasticsearch cluster is meant to be dedicated for a single Jaeger instance. +* You cannot share or reuse a {ProductName} logging Elasticsearch instance with Jaeger. The Elasticsearch cluster is meant to be dedicated to a single Jaeger instance. [NOTE] ==== -If you already have installed Elasticsearch as part of OpenShift logging, the Jaeger Operator can use the installed Elasticsearch Operator to provision storage. +If you have already installed Elasticsearch as part of OpenShift logging, the Jaeger Operator can use the installed OpenShift Elasticsearch Operator to provision storage. ==== -The following configuration parameters are for a _self-provisioned_ Elasticsearch instance, that is an instance created by the Jaeger Operator using the Elasticsearch Operator. You specify configuration options for self-provisioned Elasticsearch under `spec:storage:elasticsearch` in your configuration file. +The following configuration parameters are for a _self-provisioned_ Elasticsearch instance, that is, an instance created by the Jaeger Operator using the OpenShift Elasticsearch Operator. You specify configuration options for self-provisioned Elasticsearch under `spec:storage:elasticsearch` in your configuration file. .Elasticsearch resource configuration parameters [options="header"] |elasticsearch: nodeCount: |Number of Elasticsearch nodes. For high availability, use at least 3 nodes. Do not use 2 nodes, as the “split brain” problem can happen. -|Integer value. For example, Proof of concept = 1, +|Integer value. For example, Proof of concept = 1, Minimum deployment = 3 |1 |elasticsearch: resources: requests: cpu: |Number of central processing units for requests, based on your environment’s configuration. -|Specified in cores or millicores (for example, 200m, 0.5, 1).
For example, Proof of concept = 500m, Minimum deployment = 1 |1Gi @@ -182,13 +182,13 @@ spec: redundancyPolicy: ZeroRedundancy ---- -<1> Persistent storage configuration. In this case AWS `gp2` with `5Gi` size. When no value is specified, Jaeger uses `emptyDir`. The Elasticsearch Operator provisions `PersistentVolumeClaim` and `PersistentVolume` which are not removed with Jaeger instance. You can mount the same volumes if you create a Jaeger instance with the same name and namespace. +<1> Persistent storage configuration. In this case, AWS `gp2` with `5Gi` size. When no value is specified, Jaeger uses `emptyDir`. The OpenShift Elasticsearch Operator provisions the `PersistentVolumeClaim` and `PersistentVolume`, which are not removed with the Jaeger instance. You can mount the same volumes if you create a Jaeger instance with the same name and namespace. [id="jaeger-config-external-es_{context}"] == Connecting to an existing Elasticsearch instance -Jaeger also allows you to use an existing (self-provisioned) Elasticsearch cluster for storage. You do this by specifying the URL of the existing cluster as the `spec:storage:options:es:server-urls` value in your configuration. +Jaeger also allows you to use an existing (external) Elasticsearch cluster for storage. You do this by specifying the URL of the existing cluster as the `spec:storage:options:es:server-urls` value in your configuration. .Restrictions @@ -199,7 +199,7 @@ Jaeger also allows you to use an existing Elasticsearch clust Red Hat does not provide support for your external Elasticsearch instance. You can review the tested integrations matrix on the link:https://access.redhat.com/articles/5381021[Customer Portal]. ==== -The following configuration parameters are for an _external_ Elasticsearch instance, that is, an instance that was not created using the Elasticsearch Operator. You specify configuration options for external Elasticsearch under `spec:storage:options:es` in your configuration file. +The following configuration parameters are for an _external_ Elasticsearch instance, that is, an instance that was not created using the OpenShift Elasticsearch Operator. You specify configuration options for external Elasticsearch under `spec:storage:options:es` in your configuration file. .General ES configuration parameters [options="header"] @@ -232,13 +232,13 @@ The following configuration parameters are for an _external_ Elasticsearch insta |es: sniffer: -|The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. +|The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. |`true`/ `false` |`false` |es: sniffer-tls-enabled: -|Option to enable TLS when sniffing an Elasticsearch Cluster, The client uses the sniffing process to find all nodes automatically. Disabled by default +|Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default. |`true`/ `false` |`false` @@ -250,13 +250,13 @@ The following configuration parameters are for an _external_ Elasticsearch insta |es: username: -|The username required by Elasticsearch.
The basic authentication also loads CA if it is specified. See also `es.password`. | | |es: password: -|The password required by Elasticsearch. See also, `es.username`. +|The password required by Elasticsearch. See also `es.username`. | | @@ -319,7 +319,7 @@ The following configuration parameters are for an _external_ Elasticsearch insta |es: bulk: flush-interval: -|A `time.Duration` after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. +|A `time.Duration` after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. | |200ms @@ -401,7 +401,7 @@ The following configuration parameters are for an _external_ Elasticsearch insta |es-archive: bulk: flush-interval: -|A `time.Duration` after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. +|A `time.Duration` after which bulk requests are committed, regardless of other thresholds. To disable the bulk processor flush interval, set this to zero. | |0s @@ -469,25 +469,25 @@ The following configuration parameters are for an _external_ Elasticsearch insta |es-archive: password: -|The password required by Elasticsearch. See also, `es.username`. +|The password required by Elasticsearch. See also `es-archive.username`. | | |es-archive: server-urls: -|The comma-separated list of Elasticsearch servers. Must be specified as fully qualified URLs, for example, `\http://localhost:9200`. +|The comma-separated list of Elasticsearch servers. Must be specified as fully qualified URLs, for example, `\http://localhost:9200`. | | |es-archive: sniffer: -|The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. +|The sniffer configuration for Elasticsearch. The client uses the sniffing process to find all nodes automatically. Disabled by default. |`true`/ `false` |`false` |es-archive: sniffer-tls-enabled: -|Option to enable TLS when sniffing an Elasticsearch Cluster, The client uses the sniffing process to find all nodes automatically. Disabled by default. +|Option to enable TLS when sniffing an Elasticsearch Cluster. The client uses the sniffing process to find all nodes automatically. Disabled by default. |`true`/ `false` |`false` @@ -541,7 +541,7 @@ The following configuration parameters are for an _external_ Elasticsearch insta |es-archive: username: -|The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also `es-archive.password`. +|The username required by Elasticsearch. The basic authentication also loads CA if it is specified. See also `es-archive.password`. | | diff --git a/modules/jaeger-deploy-production-es.adoc b/modules/jaeger-deploy-production-es.adoc index c7486e57c203..fac07aa070ae 100644 --- a/modules/jaeger-deploy-production-es.adoc +++ b/modules/jaeger-deploy-production-es.adoc @@ -10,7 +10,7 @@ The `production` deployment strategy is intended for production environments, wh .Prerequisites -* The Elasticsearch Operator must be installed. +* The OpenShift Elasticsearch Operator must be installed. * The Jaeger Operator must be installed. * Review the instructions for how to customize the Jaeger installation. * An account with the `cluster-admin` role.
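For orientation, the external-storage parameters in the tables above come together in the Jaeger custom resource as in the following minimal sketch. It is not taken from the patched modules: the instance name, the endpoint URL, and the inline credentials are illustrative assumptions that you would replace with your own values.

[source,yaml]
----
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-prod # assumption: any valid instance name
spec:
  strategy: production
  storage:
    type: elasticsearch # connect to an existing cluster instead of auto-provisioning one
    options:
      es:
        # assumption: replace with the URL of your existing Elasticsearch cluster
        server-urls: http://elasticsearch.example.com:9200
        username: jaeger   # see the es: username parameter above
        password: changeme # see the es: password parameter above
----

Because `server-urls` is set, the Jaeger Operator does not ask the OpenShift Elasticsearch Operator to provision storage; the `es-archive` parameters follow the same pattern under `options`.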
diff --git a/modules/jaeger-install-elasticsearch.adoc b/modules/jaeger-install-elasticsearch.adoc index cb5541f6d797..b74f0dd95d3a 100644 --- a/modules/jaeger-install-elasticsearch.adoc +++ b/modules/jaeger-install-elasticsearch.adoc @@ -5,9 +5,9 @@ [id="jaeger-operator-install-elasticsearch_{context}"] -= Installing the Elasticsearch Operator += Installing the OpenShift Elasticsearch Operator -The default Jaeger deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Jaeger, giving demonstrations, or using Jaeger in a test environment. If you plan to use Jaeger in production, you must install and configure a persistent storage option, in this case, Elasticsearch. +The default Jaeger deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Jaeger, giving demonstrations, or using Jaeger in a test environment. If you plan to use Jaeger in production, you must install and configure a persistent storage option, in this case, Elasticsearch. .Prerequisites * Access to the {product-title} web console. @@ -20,7 +20,7 @@ Do not install Community versions of the Operators. Community Operators are not [NOTE] ==== -If you have already installed the Elasticsearch Operator as part of OpenShift Logging, you do not need to install the Elasticsearch Operator again. The Jaeger Operator will create the Elasticsearch instance using the installed Elasticsearch Operator. +If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Jaeger Operator creates the Elasticsearch instance by using the installed OpenShift Elasticsearch Operator. ==== .Procedure @@ -29,9 +29,9 @@ If you have already installed the Elasticsearch Operator as part of OpenShift Lo . Navigate to *Operators* -> *OperatorHub*. -. Type *Elasticsearch* into the filter box to locate the Elasticsearch Operator. +. Type *Elasticsearch* into the filter box to locate the OpenShift Elasticsearch Operator. -. Click the *Elasticsearch Operator* provided by Red Hat to display information about the Operator. +. Click the *OpenShift Elasticsearch Operator* provided by Red Hat to display information about the Operator. . Click *Install*. @@ -41,10 +41,10 @@ If you have already installed the Elasticsearch Operator as part of OpenShift Lo + [NOTE] ==== -The Elasticsearch installation requires the *openshift-operators-redhat* namespace for the Elasticsearch operator. The other {ProductName} operators are installed in the `openshift-operators` namespace. +The Elasticsearch installation requires the `openshift-operators-redhat` namespace for the OpenShift Elasticsearch Operator. The other {ProductName} Operators are installed in the `openshift-operators` namespace. ==== + -. Select the *Update Channel* that matches your {product-title} installation. For example, if you are installing on {product-title} version 4.6, select the 4.6 update channel. +. Select the *Update Channel* that matches your {product-title} installation. For example, if you are installing on {product-title} version 4.6, select the 4.6 update channel. + [NOTE] ==== @@ -61,4 +61,4 @@ The Manual approval strategy requires a user with appropriate credentials to app . Click *Install*. -. On the *Installed Operators* page, select the `openshift-operators-redhat` project. Wait until you see that the Elasticsearch Operator shows a status of "InstallSucceeded" before continuing. +.
On the *Installed Operators* page, select the `openshift-operators-redhat` project. Wait until you see that the OpenShift Elasticsearch Operator shows a status of "InstallSucceeded" before continuing. diff --git a/modules/jaeger-install-overview.adoc b/modules/jaeger-install-overview.adoc index f3b28944ec8b..70d9a5a18c80 100644 --- a/modules/jaeger-install-overview.adoc +++ b/modules/jaeger-install-overview.adoc @@ -10,7 +10,7 @@ The steps for installing {ProductName} are as follows: * Review the documentation and determine your deployment strategy. -* If your deployment strategy requires persistent storage, install the Elasticsearch Operator via the OperatorHub. +* If your deployment strategy requires persistent storage, install the OpenShift Elasticsearch Operator via the OperatorHub. * Install the Jaeger Operator via the OperatorHub. diff --git a/modules/jaeger-install.adoc b/modules/jaeger-install.adoc index c8fc2517c508..f0e42057c701 100644 --- a/modules/jaeger-install.adoc +++ b/modules/jaeger-install.adoc @@ -14,7 +14,7 @@ By default the Operator is installed in the `openshift-operators` project. .Prerequisites * Access to the {product-title} web console. * An account with the `cluster-admin` role. -* If you require persistent storage, you must also install the Elasticsearch Operator before installing the Jaeger Operator. +* If you require persistent storage, you must also install the OpenShift Elasticsearch Operator before installing the Jaeger Operator. [WARNING] ==== diff --git a/modules/jaeger-rn-fixed-issues.adoc b/modules/jaeger-rn-fixed-issues.adoc index 2956303a33f1..ea8b21a609bc 100644 --- a/modules/jaeger-rn-fixed-issues.adoc +++ b/modules/jaeger-rn-fixed-issues.adoc @@ -15,9 +15,9 @@ Fix - What did we change to fix the problem? Result - How has the behavior changed as a result? Try to avoid “It is fixed” or “The issue is resolved” or “The error no longer presents”. //// -* link:https://issues.redhat.com/browse/TRACING-1725[TRACING-1725] Follow-up to TRACING-1631. Additional fix to ensure that Elasticsearch certificates are properly reconciled when there are multiple Jaeger production instances, using same name but within different namespaces. See also link:https://bugzilla.redhat.com/show_bug.cgi?id=1918920[BZ-1918920]. +* link:https://issues.redhat.com/browse/TRACING-1725[TRACING-1725] Follow-up to TRACING-1631. Additional fix to ensure that Elasticsearch certificates are properly reconciled when there are multiple Jaeger production instances, using the same name but within different namespaces. See also link:https://bugzilla.redhat.com/show_bug.cgi?id=1918920[BZ-1918920]. -* link:https://issues.jboss.org/browse/TRACING-1631[TRACING-1631] Multiple Jaeger production instances, using same name but within different namespaces, causing Elasticsearch certificate issue. When multiple service meshes were installed, all of the Jaeger Elasticsearch instances had the same Elasticsearch secret instead of individual secrets, which prevented the Elasticsearch Operator from communicating with all of the Elasticsearch clusters. +* link:https://issues.jboss.org/browse/TRACING-1631[TRACING-1631] Multiple Jaeger production instances, using the same name but within different namespaces, caused an Elasticsearch certificate issue. When multiple service meshes were installed, all of the Jaeger Elasticsearch instances had the same Elasticsearch secret instead of individual secrets, which prevented the OpenShift Elasticsearch Operator from communicating with all of the Elasticsearch clusters.
* link:https://issues.redhat.com/browse/TRACING-1300[TRACING-1300] Failed connection between Agent and Collector when using Istio sidecar. An update of the Jaeger Operator enabled TLS communication by default between a Jaeger sidecar agent and the Jaeger Collector. diff --git a/modules/jaeger-rn-new-features.adoc b/modules/jaeger-rn-new-features.adoc index a293aed19b99..51e6e4fa4bdc 100644 --- a/modules/jaeger-rn-new-features.adoc +++ b/modules/jaeger-rn-new-features.adoc @@ -11,7 +11,7 @@ Result – If changed, describe the current user experience. [id="jaeger-rn-new-features_{context}"] == New features {ProductName} 1.20.0 -* This release of {ProductName} adds support for using an "external" Elasticsearch cluster to store tracing data, that is, an Elasticsearch instance not installed and created by the Elasticsearch Operator. +* This release of {ProductName} adds support for using an "external" Elasticsearch cluster to store tracing data, that is, an Elasticsearch instance not installed and created by the OpenShift Elasticsearch Operator. * This release adds autoscaling support for the Jaeger Collector and Ingester. //// diff --git a/modules/metering-install-verify.adoc b/modules/metering-install-verify.adoc index a4cf43dd2bc2..8b0c49a69821 100644 --- a/modules/metering-install-verify.adoc +++ b/modules/metering-install-verify.adoc @@ -28,7 +28,7 @@ $ oc --namespace openshift-metering get csv [source,terminal,subs="attributes+"] ---- NAME DISPLAY VERSION REPLACES PHASE -elasticsearch-operator.{product-version}.0-202006231303.p0 Elasticsearch Operator {product-version}.0-202006231303.p0 Succeeded +elasticsearch-operator.{product-version}.0-202006231303.p0 OpenShift Elasticsearch Operator {product-version}.0-202006231303.p0 Succeeded metering-operator.v{product-version}.0 Metering {product-version}.0 Succeeded ---- -- diff --git a/modules/ossm-install-ossm-operator.adoc b/modules/ossm-install-ossm-operator.adoc index 45c95d3a2422..4c5d8bec1b9a 100644 --- a/modules/ossm-install-ossm-operator.adoc +++ b/modules/ossm-install-ossm-operator.adoc @@ -9,7 +9,7 @@ .Prerequisites * Access to the {product-title} web console. -* The Elasticsearch Operator must be installed. +* The OpenShift Elasticsearch Operator must be installed. * The Jaeger Operator must be installed. * The Kiali Operator must be installed. diff --git a/modules/ossm-installation-activities.adoc b/modules/ossm-installation-activities.adoc index abbfd50efe8e..4a3e40f7562c 100644 --- a/modules/ossm-installation-activities.adoc +++ b/modules/ossm-installation-activities.adoc @@ -13,6 +13,6 @@ To install the {ProductName} Operator, you must first install these Operators: * *Jaeger* - based on the open source link:https://www.jaegertracing.io/[Jaeger] project, lets you perform tracing to monitor and troubleshoot transactions in complex distributed systems. * *Kiali* - based on the open source link:https://www.kiali.io/[Kiali] project, provides observability for your service mesh. By using Kiali you can view configurations, monitor traffic, and view and analyze traces in a single console. -After you install the Elasticsearch, Jaeger, and Kiali Operators, then you install the {ProductName} Operator. The {ProductShortName} Operator defines and monitors the `ServiceMeshControlPlane` resources that manage the deployment, updating, and deletion of the {ProductShortName} components. +After you install the OpenShift Elasticsearch, Jaeger, and Kiali Operators, you install the {ProductName} Operator.
The {ProductShortName} Operator defines and monitors the `ServiceMeshControlPlane` resources that manage the deployment, updating, and deletion of the {ProductShortName} components. * *{ProductName}* - based on the open source link:https://istio.io/[Istio] project, lets you connect, secure, control, and observe the microservices that make up your applications. diff --git a/modules/ossm-remove-operators.adoc b/modules/ossm-remove-operators.adoc index 7aea03371047..a242d0abb1a9 100644 --- a/modules/ossm-remove-operators.adoc +++ b/modules/ossm-remove-operators.adoc @@ -6,7 +6,7 @@ [id="ossm-operatorhub-remove-operators_{context}"] = Removing the installed Operators -You must remove the Operators to successfully remove {ProductName}. Once you remove the {ProductName} Operator, you must remove the Kiali Operator, the Jaeger Operator, and the Elasticsearch Operator. +You must remove the Operators to successfully remove {ProductName}. After you remove the {ProductName} Operator, you must remove the Kiali Operator, the Jaeger Operator, and the OpenShift Elasticsearch Operator. [id="ossm-remove-operator-servicemesh_{context}"] == Removing the {ProductName} Operator @@ -84,21 +84,21 @@ This removes the CSV, which in turn removes the pods, Deployments, CRDs, and CRs associated with the Operator. [id="ossm-remove-operator-elasticsearch_{context}"] -== Removing the Elasticsearch Operator +== Removing the OpenShift Elasticsearch Operator -Follow this procedure to remove the Elasticsearch Operator. +Follow this procedure to remove the OpenShift Elasticsearch Operator. .Prerequisites * Access to the {product-title} web console. -* The Elasticsearch Operator must be installed. +* The OpenShift Elasticsearch Operator must be installed. .Procedure . Log in to the {product-title} web console. . From the *Operators* → *Installed Operators* page, scroll or type a keyword into -the *Filter by name* to find the Elasticsearch Operator. Then, click on it. +the *Filter by name* field to find the OpenShift Elasticsearch Operator. Then, click it. . On the right-hand side of the *Operator Details* page, select *Uninstall Operator* from the *Actions* drop-down menu. diff --git a/modules/ossm-rn-fixed-issues-1x.adoc b/modules/ossm-rn-fixed-issues-1x.adoc index 351073ea1b1a..6274c685fc24 100644 --- a/modules/ossm-rn-fixed-issues-1x.adoc +++ b/modules/ossm-rn-fixed-issues-1x.adoc @@ -36,7 +36,7 @@ The following issues have been resolved in the current release: * link:https://issues.jboss.org/browse/MAISTRA-1001[MAISTRA-1001] Closing HTTP/2 connections could lead to segmentation faults in `istio-proxy`. -* link:https://issues.jboss.org/browse/MAISTRA-932[MAISTRA-932] Added the `requires` metadata to add dependency relationship between Jaeger operator and Elasticsearch operator. Ensures that when the Jaeger operator is installed, it automatically deploys the Elasticsearch operator if it is not available. +* link:https://issues.jboss.org/browse/MAISTRA-932[MAISTRA-932] Added the `requires` metadata to add a dependency relationship between the Jaeger Operator and the OpenShift Elasticsearch Operator. This ensures that when the Jaeger Operator is installed, it automatically deploys the OpenShift Elasticsearch Operator if it is not available. * link:https://issues.jboss.org/browse/MAISTRA-862[MAISTRA-862] Galley dropped watches and stopped providing configuration to other components after many namespace deletions and re-creations.
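The console-based removal steps above have a CLI counterpart through Operator Lifecycle Manager resources: deleting an Operator's `Subscription` and its `ClusterServiceVersion` removes the associated pods and deployments. The following is a sketch only; it assumes the OpenShift Elasticsearch Operator lives in the `openshift-operators-redhat` namespace as in the modules above, and the exact subscription and CSV names are assumptions, so list them first and substitute what your cluster reports.

[source,terminal]
----
$ oc get subscriptions -n openshift-operators-redhat            # find the subscription name
$ oc delete subscription <subscription-name> -n openshift-operators-redhat
$ oc get clusterserviceversions -n openshift-operators-redhat   # find the CSV that remains
$ oc delete clusterserviceversion <csv-name> -n openshift-operators-redhat
----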
diff --git a/modules/ossm-rn-new-features-1x.adoc b/modules/ossm-rn-new-features-1x.adoc index 62341b9a5036..d55a8646eccc 100644 --- a/modules/ossm-rn-new-features-1x.adoc +++ b/modules/ossm-rn-new-features-1x.adoc @@ -346,6 +346,6 @@ Other notable changes in this release include the following: * The Kubernetes Container Network Interface (CNI) plug-in is always on. * The control plane is configured for multitenancy by default. Single tenant, cluster-wide control plane configurations are deprecated. -* The Elasticsearch, Jaeger, Kiali, and {ProductShortName} Operators are installed from OperatorHub. +* The OpenShift Elasticsearch, Jaeger, Kiali, and {ProductShortName} Operators are installed from OperatorHub. * You can create and specify control plane templates. * Automatic route creation was removed from this release. diff --git a/modules/security-monitoring-cluster-logging.adoc b/modules/security-monitoring-cluster-logging.adoc index 4f97b8ca636e..8ce294e5f67d 100644 --- a/modules/security-monitoring-cluster-logging.adoc +++ b/modules/security-monitoring-cluster-logging.adoc @@ -14,5 +14,5 @@ access to logs: To save your logs for further audit and analysis, you can enable the `cluster-logging` add-on feature to collect, manage, and view system, container, and audit logs. -You can deploy, manage, and upgrade OpenShift Logging through the Elasticsearch Operator +You can deploy, manage, and upgrade OpenShift Logging through the OpenShift Elasticsearch Operator and Cluster Logging Operator. diff --git a/service_mesh/v1x/installing-ossm.adoc b/service_mesh/v1x/installing-ossm.adoc index 92e9d3db7e69..198f4fd4fefd 100644 --- a/service_mesh/v1x/installing-ossm.adoc +++ b/service_mesh/v1x/installing-ossm.adoc @@ -5,7 +5,7 @@ include::modules/ossm-document-attributes-1x.adoc[] toc::[] -Installing the {ProductShortName} involves installing the Elasticsearch, Jaeger, Kiali and {ProductShortName} Operators, creating and managing a `ServiceMeshControlPlane` resource to deploy the control plane, and creating a `ServiceMeshMemberRoll` resource to specify the namespaces associated with the {ProductShortName}. +Installing the {ProductShortName} involves installing the OpenShift Elasticsearch, Jaeger, Kiali, and {ProductShortName} Operators, creating and managing a `ServiceMeshControlPlane` resource to deploy the control plane, and creating a `ServiceMeshMemberRoll` resource to specify the namespaces associated with the {ProductShortName}. [NOTE] ==== @@ -28,7 +28,7 @@ The {ProductShortName} documentation uses `istio-system` as the example project, The {ProductShortName} installation process uses the link:https://operatorhub.io/[OperatorHub] to install the `ServiceMeshControlPlane` custom resource definition within the `openshift-operators` project. The {ProductName} defines and monitors the `ServiceMeshControlPlane` related to the deployment, update, and deletion of the control plane. -Starting with {ProductName} {ProductVersion}, you must install the Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the {ProductName} Operator can install the control plane. +Starting with {ProductName} {ProductVersion}, you must install the OpenShift Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the {ProductName} Operator can install the control plane.
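Because the control plane installation is blocked until these Operators are present, it can help to confirm that their ClusterServiceVersions report the `Succeeded` phase before creating the `ServiceMeshControlPlane` resource. A minimal sketch, assuming the namespaces used in the modules above (`openshift-operators` for the Jaeger, Kiali, and {ProductShortName} Operators, and `openshift-operators-redhat` for the OpenShift Elasticsearch Operator):

[source,terminal]
----
$ oc get csv -n openshift-operators          # Jaeger, Kiali, and Service Mesh Operators
$ oc get csv -n openshift-operators-redhat   # OpenShift Elasticsearch Operator
----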
include::modules/jaeger-install-elasticsearch.adoc[leveloffset=+1] diff --git a/service_mesh/v2x/installing-ossm.adoc b/service_mesh/v2x/installing-ossm.adoc index 7095d995996f..103bdbaac62f 100644 --- a/service_mesh/v2x/installing-ossm.adoc +++ b/service_mesh/v2x/installing-ossm.adoc @@ -5,7 +5,7 @@ include::modules/ossm-document-attributes.adoc[] toc::[] -Installing the {ProductShortName} involves installing the Elasticsearch, Jaeger, Kiali and {ProductShortName} Operators, creating and managing a `ServiceMeshControlPlane` resource to deploy the control plane, and creating a `ServiceMeshMemberRoll` resource to specify the namespaces associated with the {ProductShortName}. +Installing the {ProductShortName} involves installing the OpenShift Elasticsearch, Jaeger, Kiali, and {ProductShortName} Operators, creating and managing a `ServiceMeshControlPlane` resource to deploy the control plane, and creating a `ServiceMeshMemberRoll` resource to specify the namespaces associated with the {ProductShortName}. [NOTE] ==== @@ -23,7 +23,7 @@ The {ProductShortName} documentation uses `istio-system` as the example project, The {ProductShortName} installation process uses the link:https://operatorhub.io/[OperatorHub] to install the `ServiceMeshControlPlane` custom resource definition within the `openshift-operators` project. The {ProductName} defines and monitors the `ServiceMeshControlPlane` related to the deployment, update, and deletion of the control plane. -Starting with {ProductName} {ProductVersion}, you must install the Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the {ProductName} Operator can install the control plane. +Starting with {ProductName} {ProductVersion}, you must install the OpenShift Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the {ProductName} Operator can install the control plane. include::modules/jaeger-install-elasticsearch.adoc[leveloffset=+1]
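As a companion to the installation overview above, the `ServiceMeshMemberRoll` resource mentioned there takes roughly the following shape. This is a sketch, not a manifest from these modules: the member project name is an illustrative assumption, while the `default` name and the `istio-system` namespace follow the conventions the surrounding documentation already uses.

[source,yaml]
----
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default            # the member roll must be named "default"
  namespace: istio-system  # the example control plane project used in these docs
spec:
  members:
    # assumption: list the projects that should join the mesh
    - bookinfo
----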