RHDEVDOCS-4102 - Logging corrections from Support Engineering Feedback #50781

Merged: 1 commit, Oct 17, 2022
3 changes: 0 additions & 3 deletions logging/cluster-logging-external.adoc
@@ -181,7 +181,6 @@ include::modules/cluster-logging-collector-log-forward-loki.adoc[leveloffset=+1]

include::modules/cluster-logging-troubleshooting-loki-entry-out-of-order-errors.adoc[leveloffset=+2]


[role="_additional-resources"]
.Additional resources

@@ -194,8 +193,6 @@ include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=

include::modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc[leveloffset=+1]

-include::modules/cluster-logging-collector-collecting-ovn-logs.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

4 changes: 2 additions & 2 deletions logging/cluster-logging-release-notes.adoc
@@ -153,7 +153,7 @@ include::modules/cluster-logging-loki-tech-preview.adoc[leveloffset=+2]
* link:https://access.redhat.com/security/cve/CVE-2022-21698[CVE-2022-21698]
** link:https://bugzilla.redhat.com/show_bug.cgi?id=2045880[BZ-2045880]

-//include::modules/cluster-logging-rn-5.3.12.adoc[leveloffset=+1]
+include::modules/cluster-logging-rn-5.3.12.adoc[leveloffset=+1]

include::modules/cluster-logging-rn-5.3.11.adoc[leveloffset=+1]

@@ -926,7 +926,7 @@ This release includes link:https://access.redhat.com/errata/RHBA-2021:3393[RHBA-

* This enhancement enables you to use a username and password to authenticate a log forwarding connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) because a third-party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. For more information, see xref:../logging/cluster-logging-external.adoc#cluster-logging-collector-log-forward-es_cluster-logging-external[Forwarding logs to an external Elasticsearch instance]. (link:https://issues.redhat.com/browse/LOG-1022[LOG-1022])

-* With this update, you can collect OVN network policy audit logs for forwarding to a logging server. For more information, see xref:../logging/cluster-logging-external.html#cluster-logging-collecting-ovn-audit-logs_cluster-logging-external[Collecting OVN network policy audit logs]. (link:https://issues.redhat.com/browse/LOG-1526[LOG-1526])
+* With this update, you can collect OVN network policy audit logs for forwarding to a logging server. (link:https://issues.redhat.com/browse/LOG-1526[LOG-1526])

* By default, the data model introduced in {product-title} 4.5 gave logs from different namespaces a single index in common. This change made it harder to see which namespaces produced the most logs.
+
4 changes: 1 addition & 3 deletions logging/cluster-logging-upgrading.adoc
@@ -17,6 +17,4 @@ To upgrade from cluster logging in {product-title} version 4.6 and earlier to Op

To upgrade from a previous version of OpenShift Logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator to their current versions.

-include::modules/cluster-logging-updating-logging-to-5-0.adoc[leveloffset=+1]
-
-include::modules/cluster-logging-updating-logging-to-5-1.adoc[leveloffset=+1]
+include::modules/cluster-logging-updating-logging-to-current.adoc[leveloffset=+1]
2 changes: 0 additions & 2 deletions logging/cluster-logging.adoc
@@ -36,8 +36,6 @@ For information, see xref:../logging/cluster-logging-deploying.adoc#cluster-logg

include::modules/cluster-logging-json-logging-about.adoc[leveloffset=+2]

-For information, see xref:../logging/cluster-logging.adoc#cluster-logging-json-logging-about_cluster-logging[About JSON Logging].

include::modules/cluster-logging-collecting-storing-kubernetes-events.adoc[leveloffset=+2]

For information, see xref:../logging/cluster-logging-eventrouter.adoc#cluster-logging-eventrouter[About collecting and storing Kubernetes events].
2 changes: 1 addition & 1 deletion modules/cluster-logging-clo-status-comp.adoc
@@ -10,7 +10,7 @@ You can view the status for a number of {logging} components.

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

2 changes: 1 addition & 1 deletion modules/cluster-logging-clo-status.adoc
@@ -10,7 +10,7 @@ You can view the status of your Red Hat OpenShift Logging Operator.

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

Expand Up @@ -49,3 +49,5 @@ kafka 2.7.0
====
Previously, the syslog output supported only RFC-3164. The current syslog output adds support for RFC-5424.
====

+//ENG-Feedback: How can we reformat this to accurately reflect 5.4?
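The practical difference between the two syslog framings noted above can be sketched with a hand-rolled formatter. This is an illustration only: the helper name and field values are invented for the example and are not taken from the collector's code.

```python
from datetime import datetime, timezone

# Hedged sketch of an RFC 5424 syslog line, to contrast with the older
# RFC 3164 style. The RFC 5424 header layout is:
#   <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
def rfc5424_line(pri: int, host: str, app: str, msg: str, ts: datetime) -> str:
    # "-" is the RFC 5424 nil value, used here for PROCID, MSGID,
    # and STRUCTURED-DATA.
    return f"<{pri}>1 {ts.isoformat()} {host} {app} - - - {msg}"

ts = datetime(2022, 10, 17, 12, 0, 0, tzinfo=timezone.utc)
print(rfc5424_line(34, "node1", "collector", "chunk flushed", ts))
# <34>1 2022-10-17T12:00:00+00:00 node1 collector - - - chunk flushed
```

The explicit version field (`1`) and ISO 8601 timestamp are what distinguish an RFC 5424 header from the legacy RFC 3164 `<PRI>Mmm dd hh:mm:ss host tag:` form.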
2 changes: 1 addition & 1 deletion modules/cluster-logging-collector-tolerations.adoc
@@ -25,7 +25,7 @@ tolerations:

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

10 changes: 7 additions & 3 deletions modules/cluster-logging-collector-tuning.adoc
@@ -14,7 +14,7 @@ The {logging-title} includes multiple Fluentd parameters that you can use for tu

Fluentd collects log data in a single blob called a _chunk_. When Fluentd creates a chunk, the chunk is considered to be in the _stage_, where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the _queue_, where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured.

-By default in {product-title}, Fluentd uses the _exponential backoff_ method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the _periodic_ retry method instead, which retries flushing the chunks at a specified interval. By default, Fluentd retries chunk flushing indefinitely. In {product-title}, you cannot change the indefinite retry behavior.
+By default in {product-title}, Fluentd uses the _exponential backoff_ method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the _periodic_ retry method instead, which retries flushing the chunks at a specified interval.

These parameters can help you determine the trade-offs between latency and throughput.
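The two retry schedules described above can be sketched in a few lines. This is a minimal illustration of the scheduling math, not Fluentd's implementation; `retry_wait` mirrors the `retryWait` parameter in seconds.

```python
# Minimal sketch of the exponential_backoff vs. periodic retry schedules.
# Illustration only, not Fluentd's actual code.
def retry_waits(retry_type: str, retry_wait: float, attempts: int) -> list[float]:
    """Return the wait before each of the first `attempts` flush retries."""
    if retry_type == "periodic":
        # Fixed interval between flush retries.
        return [retry_wait] * attempts
    # exponential_backoff: the wait doubles after every failed flush.
    return [retry_wait * (2 ** n) for n in range(attempts)]

print(retry_waits("exponential_backoff", 1.0, 5))  # [1.0, 2.0, 4.0, 8.0, 16.0]
print(retry_waits("periodic", 1.0, 5))             # [1.0, 1.0, 1.0, 1.0, 1.0]
```

With exponential backoff the destination sees rapidly thinning traffic during an outage, while the periodic schedule keeps a steady retry rate; that is the latency/throughput trade-off the parameters control.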

@@ -37,7 +37,7 @@ These parameters are:
[options="header"]
|===

-|Parmeter |Description |Default
+|Parameter |Description |Default

|`chunkLimitSize`
|The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk.
@@ -82,6 +82,10 @@ a|The retry method when flushing fails:
* `periodic`: Retries flushes periodically, based on the `retryWait` parameter.
|`exponential_backoff`

+|`retryTimeOut`
+|The maximum time interval to attempt retries before the record is discarded.
+|`60m`
+
|`retryWait`
|The time in seconds before the next chunk flush.
|`1s`
@@ -138,7 +142,7 @@ spec:
+
[source,terminal]
----
-$ oc get pods -n openshift-logging
+$ oc get pods -l component=collector -n openshift-logging
----

. Check that the new values are in the `fluentd` config map:
21 changes: 10 additions & 11 deletions modules/cluster-logging-deploy-cli.adoc
@@ -10,8 +10,7 @@ You can use the {product-title} CLI to install the OpenShift Elasticsearch and R

.Prerequisites

-* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node
-requires its own storage volume.
+* Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.
+
[NOTE]
====
@@ -140,7 +139,7 @@ spec:
name: "elasticsearch-operator"
----
<1> You must specify the `openshift-operators-redhat` namespace.
-<2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel. See the following note.
+<2> Specify `stable` or `stable-5.<x>` as the channel. See the following note.
<3> `Automatic` allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. `Manual` requires a user with appropriate credentials to approve the Operator update.
<4> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster,
specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).
@@ -241,7 +240,7 @@ spec:
sourceNamespace: openshift-marketplace
----
<1> You must specify the `openshift-logging` namespace.
-<2> Specify `5.0`, `stable`, or `stable-5.<x>` as the channel.
+<2> Specify `stable` or `stable-5.<x>` as the channel.
<3> Specify `redhat-operators`. If your {product-title} cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
+
[source,terminal]
@@ -386,7 +385,7 @@ This creates the {logging} components, the `Elasticsearch` custom resource and c

. Verify the installation by listing the pods in the *openshift-logging* project.
+
-You should see several pods for OpenShift Logging, Elasticsearch, Fluentd, and Kibana similar to the following list:
+You should see several pods for components of the Logging subsystem, similar to the following list:
+
[source,terminal]
----
@@ -401,11 +400,11 @@ cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m
elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s
elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s
elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s
-fluentd-587vb 1/1 Running 0 2m26s
-fluentd-7mpb9 1/1 Running 0 2m30s
-fluentd-flm6j 1/1 Running 0 2m33s
-fluentd-gn4rn 1/1 Running 0 2m26s
-fluentd-nlgb6 1/1 Running 0 2m30s
-fluentd-snpkt 1/1 Running 0 2m28s
+collector-587vb 1/1 Running 0 2m26s
+collector-7mpb9 1/1 Running 0 2m30s
+collector-flm6j 1/1 Running 0 2m33s
+collector-gn4rn 1/1 Running 0 2m26s
+collector-nlgb6 1/1 Running 0 2m30s
+collector-snpkt 1/1 Running 0 2m28s
kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s
----
1 change: 1 addition & 0 deletions modules/cluster-logging-deploying-about.adoc
@@ -155,6 +155,7 @@ spec:
nodeCount: 3
resources:
limits:
+cpu: 200m
memory: 16Gi
requests:
cpu: 200m
2 changes: 1 addition & 1 deletion modules/cluster-logging-elasticsearch-exposing.adoc
@@ -56,7 +56,7 @@ $ oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging --

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

* You must have access to the project to be able to access to the logs.

2 changes: 1 addition & 1 deletion modules/cluster-logging-elasticsearch-ha.adoc
@@ -10,7 +10,7 @@ You can define how Elasticsearch shards are replicated across data nodes in the

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

@@ -16,7 +16,7 @@ When using emptyDir, if log storage is restarted or redeployed, you will lose da

.Prerequisites
//Find & replace the below according to SME feedback.
-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

2 changes: 1 addition & 1 deletion modules/cluster-logging-elasticsearch-storage.adoc
@@ -18,7 +18,7 @@ occur.

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

2 changes: 1 addition & 1 deletion modules/cluster-logging-elasticsearch-tolerations.adoc
@@ -26,7 +26,7 @@ tolerations:

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

2 changes: 1 addition & 1 deletion modules/cluster-logging-eventrouter-deploy.adoc
@@ -123,7 +123,7 @@ parameters:
<3> Creates a ClusterRoleBinding to bind the ClusterRole to the service account.
<4> Creates a config map in the `openshift-logging` project to generate the required `config.json` file.
<5> Creates a deployment in the `openshift-logging` project to generate and configure the Event Router pod.
-<6> Specifies the image, identified by a tag such as `v0.3`.
+<6> Specifies the image, identified by a tag such as `v0.4`.
<7> Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to `100m`.
<8> Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to `128Mi`.
<9> Specifies the `openshift-logging` project to install objects in.
2 changes: 1 addition & 1 deletion modules/cluster-logging-kibana-tolerations.adoc
@@ -16,7 +16,7 @@ that is not on other pods ensures only the Kibana pod can run on that node.

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

2 changes: 1 addition & 1 deletion modules/cluster-logging-log-store-status-viewing.adoc
@@ -10,7 +10,7 @@ You can view the status of your log store.

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

2 changes: 1 addition & 1 deletion modules/cluster-logging-logstore-limits.adoc
@@ -20,7 +20,7 @@ For production use, you should have no less than the default 16Gi allocated to e

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

5 changes: 5 additions & 0 deletions modules/cluster-logging-loki-about.adoc
@@ -12,6 +12,11 @@ Loki is a horizontally scalable, highly available, multi-tenant log aggregation
== Deployment Sizing
Sizing for Loki follows the format of `N<x>._<size>_`, where the value `<N>` is the number of instances and `<size>` specifies performance capabilities.

+[NOTE]
+====
+1x.extra-small is for demo purposes only, and is not supported.
+====
+
.Loki Sizing
[options="header"]
|========================================================================================
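The sizing naming convention described above can be illustrated with a toy parser. This is purely an illustration of the `N<x>._<size>_` convention; the function is hypothetical and not part of the Loki Operator.

```python
import re

# Toy parser for Loki sizing names such as "1x.small": the number before
# "x" is the instance count, the suffix is the size tier. Illustration only.
def parse_sizing(name: str) -> tuple[int, str]:
    m = re.fullmatch(r"(\d+)x\.([a-z-]+)", name)
    if m is None:
        raise ValueError(f"not a recognized sizing name: {name!r}")
    return int(m.group(1)), m.group(2)

print(parse_sizing("1x.extra-small"))  # (1, 'extra-small')
print(parse_sizing("1x.medium"))       # (1, 'medium')
```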
4 changes: 2 additions & 2 deletions modules/cluster-logging-loki-deploy.adoc
@@ -8,7 +8,7 @@ You can use the {product-title} web console to deploy the LokiStack.
.Prerequisites

* {logging-title-uc} Operator 5.5 and later
-* AWS S3 bucket for log storage
+* Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation)

.Procedure

@@ -78,7 +78,7 @@ data:
storage:
schemas:
- version: v12
-effectiveDate: 2022-06-01
+effectiveDate: '2022-06-01'
secret:
name: logging-loki-s3
type: s3
6 changes: 3 additions & 3 deletions modules/cluster-logging-manual-rollout-rolling.adoc
@@ -12,7 +12,7 @@ Also, a rolling restart is recommended if the nodes on which an Elasticsearch po

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

@@ -28,7 +28,7 @@ $ oc project openshift-logging
. Get the names of the Elasticsearch pods:
+
----
-$ oc get pods | grep elasticsearch-
+$ oc get pods -l component=elasticsearch-
----

. Scale down the Fluentd pods so they stop sending new logs to Elasticsearch:
@@ -106,7 +106,7 @@ move on to the next deployment.
+
[source,terminal]
----
-$ oc get pods | grep elasticsearch-
+$ oc get pods -l component=elasticsearch-
----
+
.Example output
@@ -53,9 +53,9 @@ spec:
fluentd: {}
----

-. Verify that the Fluentd pods are redeployed:
+. Verify that the collector pods are redeployed:
+
[source,terminal]
----
-$ oc get pods -n openshift-logging
+$ oc get pods -l component=collector -n openshift-logging
----
2 changes: 1 addition & 1 deletion modules/cluster-logging-uninstall.adoc
@@ -13,7 +13,7 @@ Deleting the `ClusterLogging` CR does not remove the persistent volume claims (P

.Prerequisites

-* The {logging-title} and Elasticsearch must be installed.
+* The Red Hat OpenShift Logging and Elasticsearch Operators must be installed.

.Procedure

4 changes: 2 additions & 2 deletions modules/cluster-logging-updating-logging-to-5-0.adoc
@@ -42,7 +42,7 @@ If you update the operators in the wrong order, Kibana does not update and the K

.. Click *Subscription* -> *Channel*.

-.. In the *Change Subscription Update Channel* window, select *5.0* or *stable-5.x* and click *Save*.
+.. In the *Change Subscription Update Channel* window, select *stable-5.x* and click *Save*.

.. Wait for a few seconds, then click *Operators* -> *Installed Operators*.
+
@@ -60,7 +60,7 @@ Wait for the *Status* field to report *Succeeded*.

.. Click *Subscription* -> *Channel*.

-.. In the *Change Subscription Update Channel* window, select *5.0* or *stable-5.x* and click *Save*.
+.. In the *Change Subscription Update Channel* window, select *stable-5.x* and click *Save*.

.. Wait for a few seconds, then click *Operators* -> *Installed Operators*.
+
6 changes: 3 additions & 3 deletions modules/cluster-logging-updating-logging-to-5-1.adoc
@@ -156,18 +156,18 @@ green open audit-000001
----
====

-.. Verify that the log collector is updated to 5.x:
+.. Verify that the log collector is updated to 5.3:
+
[source,terminal]
----
-$ oc get ds fluentd -o json | grep fluentd-init
+$ oc get ds collector -o json | grep collector
----
+
Verify that the output includes a `fluentd-init` container:
+
[source,terminal]
----
-"containerName": "fluentd-init"
+"containerName": "collector"
----

.. Verify that the log visualizer is updated to 5.x using the Kibana CRD: