2 changes: 1 addition & 1 deletion _topic_maps/_topic_map.yml
Original file line number Diff line number Diff line change
@@ -45,7 +45,7 @@ Name: Upgrading logging
Dir: upgrading
Distros: openshift-logging
Topics:
- Name: Upgrading to Logging 6.0
- Name: Upgrading to Logging 6
File: upgrading-to-logging-60
---
Name: Uninstalling logging
270 changes: 270 additions & 0 deletions modules/changes-to-cluster-logging-and-forwarding-in-logging-6.adoc
@@ -0,0 +1,270 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-06-03
:_mod-docs-content-type: CONCEPT

[id="changes-to-cluster-logging-and-forwarding-in-logging-6_{context}"]
= Changes to cluster logging and forwarding in Logging 6

Log collection and forwarding configurations are now specified under the new link:https://github.com/openshift/cluster-logging-operator/blob/master/docs/reference/operator/api_observability_v1.adoc[API], part of the `observability.openshift.io` API group. The following sections highlight the differences from the old API resources.

[NOTE]
====
Vector is the only supported collector implementation.
====

[id="management-resource-allocation-workload-scheduling_{context}"]
== Management, resource allocation, and workload scheduling

Configuration for management state, resource requests and limits, tolerations, and node selection is now part of the new `ClusterLogForwarder` API.

.Logging 5.x configuration
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
spec:
  managementState: "Managed"
  collection:
    resources:
      limits: {}
      requests: {}
    nodeSelector: {}
    tolerations: {}

.Logging 6 configuration
[source,yaml]
----
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  managementState: Managed
  collector:
    resources:
      limits: {}
      requests: {}
    nodeSelector: {}
    tolerations: {}

[id="input-specification_{context}"]
== Input specifications

The input specification is an optional part of the `ClusterLogForwarder` specification. Administrators can continue to use the predefined values `application`, `infrastructure`, and `audit` to collect these sources.
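For example, a pipeline can reference the predefined input values directly through `inputRefs`. The following is a minimal sketch; the pipeline name and the output name `my-output` are placeholders for resources defined elsewhere in the specification:

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
spec:
  pipelines:
  - name: all-sources
    inputRefs:
    - application
    - infrastructure
    - audit
    outputRefs:
    - my-output
----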

Namespace and container inclusions and exclusions have been consolidated into a single field.

.5.x application input with namespace and container includes and excludes
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
  - name: application-logs
    type: application
    application:
      namespaces:
      - foo
      - bar
      includes:
      - namespace: my-important
        container: main
      excludes:
      - container: too-verbose

.6.x application input with namespace and container includes and excludes
[source,yaml]
----
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
  - name: application-logs
    type: application
    application:
      includes:
      - namespace: foo
      - namespace: bar
      - namespace: my-important
        container: main
      excludes:
      - container: too-verbose

[NOTE]
====
`application`, `infrastructure`, and `audit` are reserved words and cannot be used as names when defining an input.
====

Changes to input receivers include:

* Explicit configuration of the type at the receiver level.
* Port settings moved to the receiver level.

.5.x input receivers
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
  - name: an-http
    receiver:
      http:
        port: 8443
        format: kubeAPIAudit
  - name: a-syslog
    receiver:
      type: syslog
      syslog:
        port: 9442

.6.x input receivers
[source,yaml]
----
apiVersion: "observability.openshift.io/v1"
kind: ClusterLogForwarder
spec:
  inputs:
  - name: an-http
    type: receiver
    receiver:
      type: http
      port: 8443
      http:
        format: kubeAPIAudit
  - name: a-syslog
    type: receiver
    receiver:
      type: syslog
      port: 9442

[id="output-specification_{context}"]
== Output specifications

High-level changes to output specifications include:

* URL settings moved to each output type specification.
* Tuning parameters moved to each output type specification.
* Separation of TLS configuration from authentication.
* Explicit configuration of keys and the secret or config map for TLS and authentication.

[id="secrets-and-tls-configuration_{context}"]
== Secrets and TLS configuration

Secrets and TLS configurations are now separated into authentication and TLS configuration for each output. They must be explicitly defined in the specification, rather than relying on secrets that use recognized key names. To continue using existing secrets after an upgrade, administrators must know which keys were previously recognized so that they can reference them explicitly in the TLS and authentication configuration. The examples in this section illustrate how to configure `ClusterLogForwarder` secrets to forward to existing Red{nbsp}Hat managed log storage solutions.

.Logging 6.x output configuration using service account token and config map
[source,yaml]
----
...
spec:
  outputs:
  - lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
    name: my-lokistack
    tls:
      ca:
        configMapName: openshift-service-ca.crt
        key: service-ca.crt
    type: lokiStack
...
----

.Logging 6.x output authentication and TLS configuration using secrets
[source,yaml]
----
...
spec:
  outputs:
  - name: my-output
    type: http
    http:
      url: https://my-secure-output:8080
    authentication:
      password:
        key: pass
        secretName: my-secret
      username:
        key: user
        secretName: my-secret
    tls:
      ca:
        key: ca-bundle.crt
        secretName: collector
      certificate:
        key: tls.crt
        secretName: collector
      key:
        key: tls.key
        secretName: collector
...
----

[id="filters-and-pipeline-configuration_{context}"]
== Filters and pipeline configuration

All attributes of pipelines in previous releases have been converted to filters in this release. Individual filters are defined in the `filters` spec and referenced by a pipeline.

.5.x filters
[source,yaml]
----
...
spec:
  pipelines:
  - name: app-logs
    detectMultilineErrors: true
    parse: json
    labels:
      <key>: <value>
...
----

.6.x filters and pipelines spec
[source,yaml]
----
...
spec:
  filters:
  - name: my-multiline
    type: detectMultilineException
  - name: my-parse
    type: parse
  - name: my-labels
    type: openshiftLabels
    openshiftLabels:
      <key>: <label>
  pipelines:
  - name: app-logs
    filterRefs:
    - my-multiline
    - my-parse
    - my-labels
...
----

[NOTE]
====
`Drop`, `Prune`, and `KubeAPIAudit` filters remain unchanged.
====
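For example, a `drop` filter is declared the same way as in previous releases. The following is an illustrative sketch; the filter name and the namespace pattern are hypothetical:

[source,yaml]
----
...
spec:
  filters:
  - name: drop-test-namespaces
    type: drop
    drop:
    - test:
      - field: .kubernetes.namespace_name
        matches: "test-.*"
...
----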

[id="validation-and-status_{context}"]
== Validation and status

Most validations are now enforced when a resource is created or updated, which provides immediate feedback. This is a departure from previous releases, where all validation occurred after creation and required inspecting the resource status. Some validation still occurs after resource creation, for cases where validation at creation or update time is not possible.

Instances of the `ClusterLogForwarder.observability.openshift.io` resource must satisfy the following conditions before the operator deploys the log collector:

* Resource status conditions: `Authorized`, `Valid`, `Ready`

* Spec validations: `Filters`, `Inputs`, `Outputs`, `Pipelines`

All must evaluate to the status value of `True`.
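As an example, you can inspect these conditions on a deployed instance, or wait for it to become ready. In the following sketch, the instance name `collector` and the `openshift-logging` namespace are placeholders for your own deployment:

[source,terminal]
----
$ oc wait clusterlogforwarder.observability.openshift.io/collector \
  -n openshift-logging --for=condition=Ready --timeout=60s
----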
21 changes: 21 additions & 0 deletions modules/deleting-red-hat-log-visualization.adoc
@@ -0,0 +1,21 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-06-03
:_mod-docs-content-type: PROCEDURE

[id="deleting-red-hat-log-visualization_{context}"]
= Deleting Red{nbsp}Hat log visualization

When updating from Logging 5 to Logging 6, delete the Red{nbsp}Hat log visualization plugin before installing the `UIPlugin` resource.

.Prerequisites
* You have administrator permissions.
* You installed the {oc-first}.

.Procedure

* Delete the logging view plugin by running the following command:
+
[source,terminal]
----
$ oc get consoleplugins logging-view-plugin && oc delete consoleplugins logging-view-plugin
----
21 changes: 21 additions & 0 deletions modules/deleting-red-hat-openshift-logging-5-crd.adoc
@@ -0,0 +1,21 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-06-03
:_mod-docs-content-type: PROCEDURE

[id="deleting-red-hat-openshift-logging-5-crds_{context}"]
= Deleting Red{nbsp}Hat OpenShift Logging 5 CRDs

Delete the Red{nbsp}Hat OpenShift Logging 5 custom resource definitions (CRDs) when upgrading to Logging 6.

.Prerequisites
* You have administrator permissions.
* You installed the {oc-first}.

.Procedure
* Delete the `clusterlogforwarders.logging.openshift.io` and `clusterloggings.logging.openshift.io` CRDs by running the following command:
+
[source,terminal]
----
$ oc delete crd clusterloggings.logging.openshift.io clusterlogforwarders.logging.openshift.io
----
40 changes: 40 additions & 0 deletions modules/deleting-the-clusterlogging-instance.adoc
@@ -0,0 +1,40 @@
:_newdoc-version: 2.18.4
:_template-generated: 2025-06-03
:_mod-docs-content-type: PROCEDURE

[id="deleting-the-clusterlogging-instance_{context}"]
= Deleting the ClusterLogging instance

Delete the `ClusterLogging` instance because it is no longer needed in Logging 6.

.Prerequisites
* You have administrator permissions.
* You installed the {oc-first}.

.Procedure
* Delete the `ClusterLogging` instance by running the following command:
+
[source,terminal]
----
$ oc delete clusterlogging <CR name> -n <namespace>
----

.Verification
. Verify that no collector pods are running by running the following command:
+
[source,terminal]
----
$ oc get pods -l component=collector -n <namespace>
----

. Verify that no `ClusterLogForwarder.logging.openshift.io` custom resources (CRs) exist by running the following command:
+
[source,terminal]
----
$ oc get clusterlogforwarders.logging.openshift.io -A
----

[IMPORTANT]
====
If any `ClusterLogForwarder.logging.openshift.io` CRs are listed, they belong to the old Logging 5.x stack and must be removed. Create a backup of these CRs and delete them before deploying any `ClusterLogForwarder.observability.openshift.io` CR with the new API version.
====
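As a sketch, you can back up any remaining 5.x CRs to a local file before deleting them; the file name here is arbitrary:

[source,terminal]
----
$ oc get clusterlogforwarders.logging.openshift.io -A -o yaml > clusterlogforwarders-5x-backup.yaml
----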