86 changes: 80 additions & 6 deletions _topic_maps/_topic_map.yml
@@ -3076,13 +3076,87 @@ Topics:
# File: log6x-visual
# - Name: API reference 6.0
# File: log6x-api-reference
- Name: Logging 5.8
Dir: logging_release_notes
- Name: Logging 5.8 release notes
File: logging-5-8-release-notes
- Name: Support
File: cluster-logging-support
- Name: Troubleshooting logging
Dir: troubleshooting
Topics:
- Name: Viewing Logging status
File: cluster-logging-cluster-status
- Name: Troubleshooting log forwarding
File: log-forwarding-troubleshooting
Contributor:

Possibly missing the following:
- Name: Troubleshooting logging alerts
File: troubleshooting-logging-alerts
- Name: Viewing the status of the Elasticsearch log store
File: cluster-logging-log-store-status

@anpingli (Sep 18, 2025):

Can we create a separate panel for 5.8 and put all the items related to 5.8 (the 5.8 release notes and so on) under that link?
Logging 6.2
Logging 6.1
Logging 6.0
Logging 5.8
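
A minimal sketch of what such a 5.8 panel could look like in _topic_map.yml, assuming a dedicated directory for the 5.8 content; the Dir value and the exact set of topics are illustrative, not taken from this PR:

- Name: Logging 5.8
  Dir: logging-5-8  # assumed directory name
  Topics:
  - Name: Logging 5.8 release notes
    File: logging-5-8-release-notes
  - Name: About Logging
    File: cluster-logging
  - Name: Installing Logging
    File: cluster-logging-deploying
  - Name: Support
    File: cluster-logging-support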


- Name: About Logging
File: cluster-logging
- Name: Installing Logging
File: cluster-logging-deploying
- Name: Updating Logging
File: cluster-logging-upgrading
Distros: openshift-enterprise,openshift-origin
- Name: Visualizing logs
Dir: log_visualization
Topics:
- Name: About log visualization
File: log-visualization
- Name: Log visualization with the web console
File: log-visualization-ocp-console
- Name: Configuring your Logging deployment
Dir: config
Distros: openshift-enterprise,openshift-origin
Topics:
- Name: Configuring CPU and memory limits for Logging components
File: cluster-logging-memory
- Name: Configuring systemd-journald for Logging
File: cluster-logging-systemd
- Name: Log collection and forwarding
Dir: log_collection_forwarding
Topics:
- Name: About log collection and forwarding
File: log-forwarding
- Name: Log output types
File: logging-output-types
- Name: Enabling JSON log forwarding
File: cluster-logging-enabling-json-logging
- Name: Configuring log forwarding
File: configuring-log-forwarding
- Name: Configuring the logging collector
File: cluster-logging-collector
- Name: Collecting and storing Kubernetes events
File: cluster-logging-eventrouter
- Name: Log storage
Dir: log_storage
Topics:
- Name: Installing log storage
File: installing-log-storage
- Name: Configuring the LokiStack log store
File: cluster-logging-loki
- Name: Logging alerts
Dir: logging_alerts
Topics:
- Name: Release notes
File: logging-5-8-release-notes
- Name: Installing Logging
File: cluster-logging-deploying
- Name: Default logging alerts
File: default-logging-alerts
- Name: Custom logging alerts
File: custom-logging-alerts
- Name: Performance and reliability tuning
Dir: performance_reliability
Topics:
- Name: Flow control mechanisms
File: logging-flow-control-mechanisms
- Name: Scheduling resources
Dir: scheduling_resources
Topics:
- Name: Using node selectors to move logging resources
File: logging-node-selectors
- Name: Using tolerations to control logging pod placement
File: logging-taints-tolerations
- Name: Uninstalling Logging
File: cluster-logging-uninstall
# - Name: Exported fields
# File: cluster-logging-exported-fields
# Distros: openshift-enterprise,openshift-origin
# - Name: 5.7 Logging API reference
# File: logging-5-7-reference
# - Name: Configuring the logging collector
# File: cluster-logging-collector
# - Name: Support
26 changes: 0 additions & 26 deletions modules/cluster-logging-collector-limits.adoc
@@ -36,29 +36,3 @@ spec:
# ...
----
<1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
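
Because the diff collapses the surviving example, the following is a rough sketch of the kind of stanza that callout refers to, assuming the legacy ClusterLogging API; the resource values are illustrative, not authoritative defaults:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    type: fluentd        # or vector, depending on the deployment
    resources:
      limits:            # CPU and memory limits for the collector
        memory: 736Mi    # illustrative value
      requests:
        cpu: 100m        # illustrative value
        memory: 736Mi    # illustrative value
----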

////
[source,yaml]
----
$ oc edit ClusterLogging instance

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance"

....

spec:
collection:
logs:
rsyslog:
resources:
limits: <1>
memory: 358Mi
requests:
cpu: 100m
memory: 358Mi
----
<1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
////
2 changes: 0 additions & 2 deletions modules/cluster-logging-collector-log-forward-syslog.adoc
@@ -9,9 +9,7 @@ To configure log forwarding using the *syslog* protocol, you must create a `Clus
.Prerequisites

* You must have a logging server that is configured to receive the logging data using the specified protocol or format.

.Procedure

. Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:
+
[source,yaml]
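----
# Editor's sketch of a ClusterLogForwarder CR with a syslog output
# (assumed legacy logging.openshift.io/v1 API; the output name, URL, and
# syslog settings below are illustrative, not taken from this PR):
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: rsyslog-example            # assumed output name
    type: syslog
    syslog:
      facility: local0
      rfc: RFC5424
      severity: informational
    url: 'tls://rsyslogserver.example.com:514'   # assumed receiver endpoint
  pipelines:
  - name: syslog-pipeline            # assumed pipeline name
    inputRefs:
    - application
    outputRefs:
    - rsyslog-example
----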
7 changes: 4 additions & 3 deletions modules/cluster-logging-deploying-about.adoc
@@ -155,10 +155,11 @@ spec:
nodeCount: 3
resources:
limits:
memory: 32Gi
cpu: 200m
Contributor:

The changes in this file need technical review.

memory: 16Gi
requests:
cpu: 3
memory: 32Gi
cpu: 200m
memory: 16Gi
storage:
storageClassName: "gp2"
size: "200G"
2 changes: 1 addition & 1 deletion modules/cluster-logging-elasticsearch-audit.adoc
@@ -10,7 +10,7 @@ include::snippets/audit-logs-default.adoc[]

.Procedure

To use the Log Forward API to forward audit logs to the internal Elasticsearch instance:
To use the Log Forwarding API to forward audit logs to the internal Elasticsearch instance:

Reviewer comment:

According to https://access.redhat.com/support/policy/updates/openshift_operators, the Elasticsearch Operator in 4.16 and 4.17 is supported for tracing only. I think we needn't remove all the sections that use Elasticsearch as the internal store.


. Create or edit a YAML file that defines the `ClusterLogForwarder` CR object:
+
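A sketch of what that CR might contain for forwarding audit logs to the internal store, assuming the legacy logging.openshift.io/v1 API (the pipeline name is illustrative):
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: audit-to-internal   # assumed pipeline name
    inputRefs:
    - audit
    outputRefs:
    - default                 # the internal Elasticsearch log store
----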
1 change: 0 additions & 1 deletion modules/cluster-logging-kibana-limits.adoc
@@ -2,7 +2,6 @@
//
// * observability/logging/cluster-logging-visualizer.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-kibana-limits_{context}"]
= Configure the CPU and memory limits for the log visualizer

3 changes: 0 additions & 3 deletions modules/cluster-logging-kibana-scaling.adoc
@@ -19,8 +19,6 @@ $ oc -n openshift-logging edit ClusterLogging instance
+
[source,yaml]
----
$ oc edit ClusterLogging instance

Reviewer comment:

This file is about Elasticsearch; it can be removed.


apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
@@ -35,4 +33,3 @@ spec:
replicas: 1 <1>
----
<1> Specify the number of Kibana nodes.

6 changes: 0 additions & 6 deletions modules/cluster-logging-maintenance-support-list-6x.adoc
@@ -1,9 +1,3 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log60-cluster-logging-support.adoc
// * observability/logging/logging-6.1/log61-cluster-logging-support.adoc
// * observability/logging/logging-6.2/log62-cluster-logging-support.adoc

:_mod-docs-content-type: REFERENCE
[id="cluster-logging-maintenance-support-list_{context}"]
= Unsupported configurations
26 changes: 20 additions & 6 deletions modules/cluster-logging-manual-rollout-rolling.adoc
@@ -20,6 +20,7 @@ To perform a rolling cluster restart:

. Change to the `openshift-logging` project:
+
[source,terminal]

Reviewer comment:

This file is about Elasticsearch; it can be removed.

----
$ oc project openshift-logging
----
@@ -46,24 +47,28 @@ $ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flus
+
For example:
+
[source,terminal]
----
$ oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_flush/synced" -XPOST
----
+
.Example output
+
[source,terminal]
----
{"_shards":{"total":4,"successful":4,"failed":0},".security":{"total":2,"successful":2,"failed":0},".kibana_1":{"total":2,"successful":2,"failed":0}}
----

. Prevent shard balancing when purposely bringing down nodes using the {product-title} es_util tool:
. Prevent shard balancing when purposely bringing down nodes using the {product-title}
link:https://github.com/openshift/origin-aggregated-logging/tree/master/elasticsearch#es_util[*es_util*] tool:
+
[source,terminal]
----
$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }'
----
+
For example:
+
[source,terminal]
----
$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "primaries" } }'
----
@@ -79,25 +84,27 @@ $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_uti
.. By default, the {product-title} Elasticsearch cluster blocks rollouts to their nodes. Use the following command to allow rollouts
and allow the pod to pick up the changes:
+
[source,terminal]
----
$ oc rollout resume deployment/<deployment-name>
----
+
For example:
+
[source,terminal]
----
$ oc rollout resume deployment/elasticsearch-cdm-0-1
----
+
.Example output
+
[source,terminal]
----
deployment.extensions/elasticsearch-cdm-0-1 resumed
----
+
A new pod is deployed. After the pod has a ready container, you can
move on to the next deployment.
+
[source,terminal]
----
$ oc get pods -l component=elasticsearch-
----
@@ -113,24 +120,26 @@ elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h

.. After the deployments are complete, reset the pod to disallow rollouts:
+
[source,terminal]
----
$ oc rollout pause deployment/<deployment-name>
----
+
For example:
+
[source,terminal]
----
$ oc rollout pause deployment/elasticsearch-cdm-0-1
----
+
.Example output
+
[source,terminal]
----
deployment.extensions/elasticsearch-cdm-0-1 paused
----
+
.. Check that the Elasticsearch cluster is in a `green` or `yellow` state:
+
[source,terminal]
----
$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true
----
@@ -142,10 +151,13 @@ If you performed a rollout on the Elasticsearch pod you used in the previous com
+
For example:
+
[source,terminal]
----
$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true
----
+
.Example output
[source,json]
----
{
"cluster_name" : "elasticsearch",
@@ -171,12 +183,14 @@ $ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_uti

. After all the deployments for the cluster have been rolled out, re-enable shard balancing:
+
[source,terminal]
----
$ oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }'
----
+
For example:
+
[source,terminal]
----
$ oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query="_cluster/settings" -XPUT -d '{ "persistent": { "cluster.routing.allocation.enable" : "all" } }'
----
4 changes: 0 additions & 4 deletions modules/cluster-logging-must-gather-about.adoc
@@ -1,7 +1,3 @@
// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-support.adoc

:_mod-docs-content-type: CONCEPT
[id="about-must-gather_{context}"]
= About the must-gather tool
17 changes: 7 additions & 10 deletions modules/cluster-logging-must-gather-collecting.adoc
@@ -1,7 +1,3 @@
// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-support.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-must-gather-collecting_{context}"]
= Collecting {logging} data
@@ -16,18 +12,19 @@ To collect {logging} information with `must-gather`:

. Run the `oc adm must-gather` command against the {logging} image:
+
ifndef::openshift-origin[]
Contributor:

The changes in this file require a technical review

If you are using OKD:
+
[source,terminal]
----
$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')
$ oc adm must-gather --image=quay.io/openshift/origin-cluster-logging-operator

Reviewer comment:

Is there a reason we changed the image to quay.io/openshift/origin-cluster-logging-operator?

----
endif::openshift-origin[]
ifdef::openshift-origin[]
+
Otherwise:
+
[source,terminal]
----
$ oc adm must-gather --image=quay.io/openshift/origin-cluster-logging-operator
$ oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')
----
endif::openshift-origin[]
+
The `must-gather` tool creates a new directory that starts with `must-gather.local` within the current directory. For example:
`must-gather.local.4157245944708210408`.
1 change: 0 additions & 1 deletion modules/cluster-logging-troubleshooting-unknown.adoc
@@ -2,7 +2,6 @@
//
// * logging/cluster-logging-troublehsooting.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-troubleshooting-unknown_{context}"]
= Troubleshooting a Kubernetes unknown error while connecting to Elasticsearch

3 changes: 1 addition & 2 deletions modules/cluster-logging-visualizer-launch.adoc
@@ -2,7 +2,6 @@
//
// * observability/logging/cluster-logging-visualizer.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-visualizer-launch_{context}"]
= Launching the log visualizer

@@ -28,7 +27,7 @@ yes
+
[NOTE]
====
The audit logs are not stored in the internal {product-title} Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forward API to configure a pipeline that uses the `default` output for audit logs.
The audit logs are not stored in the internal {product-title} Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the `default` output for audit logs.
====

.Procedure