Merged
2 changes: 2 additions & 0 deletions _topic_maps/_topic_map.yml
Original file line number Diff line number Diff line change
Expand Up @@ -32,6 +32,8 @@ Distros: openshift-logging
Topics:
- Name: Configuring log forwarding
File: configuring-log-forwarding
- Name: Configuring the logging collector
File: cluster-logging-collector
- Name: Configuring the log store
File: configuring-the-log-store
#- Name: Configuring LokiStack for OTLP
Expand Down
24 changes: 24 additions & 0 deletions configuring/cluster-logging-collector.adoc
@@ -0,0 +1,24 @@
:_mod-docs-content-type: ASSEMBLY
:context: cluster-logging-collector
[id="cluster-logging-collector"]
= Configuring the logging collector
include::_attributes/common-attributes.adoc[]

toc::[]

{logging-title-uc} collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.
All supported modifications to the log collector can be performed through the `spec.collector` stanza in the `ClusterLogForwarder` custom resource (CR).
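For example, a minimal `ClusterLogForwarder` CR that tunes the collector might look like the following sketch (the `resources` values are illustrative defaults, not sizing recommendations):

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clf_name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name>
  collector:
    resources:
      requests:
        memory: 736Mi
      limits:
        cpu: 100m
        memory: 736Mi
# ...
----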

include::modules/creating-logfilesmetricexporter.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-limits.adoc[leveloffset=+1]

[id="cluster-logging-collector-input-receivers"]
== Configuring input receivers

The {clo} deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. For `ClusterLogForwarder` CR deployments, the service name is in the `<clusterlogforwarder_resource_name>-<input_name>` format.
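For example, an input receiver named `http-receiver` in a `ClusterLogForwarder` CR named `collector` results in a service named `collector-http-receiver`. A minimal input stanza sketch (the port value is illustrative):

[source,yaml]
----
spec:
  inputs:
  - name: http-receiver # service name: <clusterlogforwarder_resource_name>-http-receiver
    type: receiver
    receiver:
      type: http
      port: 8443
----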

include::modules/configuring-the-collector-to-receive-audit-logs-as-an-http-server.adoc[leveloffset=+2]
include::modules/configuring-the-collector-to-listen-for-connections-as-a-syslog-server.adoc[leveloffset=+2]

//include::modules/cluster-logging-collector-tuning.adoc[leveloffset=+1]
4 changes: 3 additions & 1 deletion log_collection_forwarding/cluster-logging-collector.adoc
Expand Up @@ -4,6 +4,8 @@
= Configuring the logging collector
include::_attributes/common-attributes.adoc[]

//This is a duplicate file and should be removed in the future.

toc::[]

{logging-title-uc} collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.
Expand All @@ -24,7 +26,7 @@ The service name is generated based on the following:
* For multi log forwarder `ClusterLogForwarder` CR deployments, the service name is in the format `<ClusterLogForwarder_CR_name>-<input_name>`. For example, `example-http-receiver`.
* For legacy `ClusterLogForwarder` CR deployments, meaning those named `instance` and located in the `openshift-logging` namespace, the service name is in the format `collector-<input_name>`. For example, `collector-http-receiver`.

include::modules/log-collector-http-server.adoc[leveloffset=+2]
//include::modules/log-collector-http-server.adoc[leveloffset=+2]
//include::modules/log-collector-rsyslog-server.adoc[leveloffset=+2]
// uncomment for 5.9 release

Expand Down
24 changes: 12 additions & 12 deletions modules/cluster-logging-collector-limits.adoc
@@ -1,16 +1,16 @@
// Module included in the following assemblies:
//
// * observability/logging/cluster-logging-collector.adoc
// * configuring/cluster-logging-collector.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-collector-limits_{context}"]
= Configure log collector CPU and memory limits

The log collector allows for adjustments to both the CPU and memory limits.
You can adjust both the CPU and memory limits for the log collector by editing the `ClusterLogForwarder` custom resource (CR).

.Procedure

* Edit the `ClusterLogging` custom resource (CR) in the `openshift-logging` project:
* Edit the `ClusterLogForwarder` CR in the `openshift-logging` project:
+
[source,terminal]
----
Expand All @@ -19,20 +19,20 @@ $ oc -n openshift-logging edit ClusterLogging instance
+
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: instance
name: <clf_name> #<1>
namespace: openshift-logging
spec:
collection:
type: fluentd
resources:
limits: <1>
memory: 736Mi
collector:
resources: #<2>
requests:
memory: 736Mi
limits:
cpu: 100m
memory: 736Mi
# ...
----
<1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
<1> Specify a name for the `ClusterLogForwarder` CR.
<2> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
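For example, to give the collector more headroom than the defaults, you might raise only the limits while keeping the requests at their default values (the values below are illustrative, not sizing guidance):

[source,yaml]
----
spec:
  collector:
    resources:
      requests:
        memory: 736Mi
      limits:
        cpu: 500m
        memory: 1Gi
----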
@@ -0,0 +1,139 @@
// Module included in the following assemblies:
//
// * configuring/cluster-logging-collector.adoc


:_newdoc-version: 2.18.4
:_template-generated: 2025-08-05
:_mod-docs-content-type: PROCEDURE

[id="configuring-the-collector-to-listen-for-connections-as-a-syslog-server_{context}"]
= Configuring the collector to listen for connections as a syslog server

You can configure your log collector to collect journal format infrastructure logs by specifying `syslog` as a receiver input in the `ClusterLogForwarder` custom resource (CR).

:feature-name: Syslog receiver input
include::snippets/logging-http-sys-input-support.adoc[]

.Prerequisites

* You have administrator permissions.
* You have installed the {oc-first}.
* You have installed the {clo}.

.Procedure

. Grant the `collect-infrastructure-logs` cluster role to the service account by running the following command:
+
.Example binding command
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector
----

. Modify the `ClusterLogForwarder` CR to add configuration for the `syslog` receiver input:
+
.Example `ClusterLogForwarder` CR
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <clusterlogforwarder_name>
namespace: <namespace>
# ...
spec:
serviceAccount:
name: <service_account_name> # <1>
inputs:
- name: syslog-receiver # <2>
type: receiver
receiver:
type: syslog # <3>
port: 10514 # <4>
outputs:
- name: <output_name>
lokiStack:
authentication:
token:
from: serviceAccount
target:
name: logging-loki
namespace: openshift-logging
tls: # <5>
ca:
key: service-ca.crt
configMapName: openshift-service-ca.crt
type: lokiStack
# ...
pipelines: # <6>
- name: syslog-pipeline
inputRefs:
- syslog-receiver
outputRefs:
- <output_name>
# ...
----
<1> Use the service account that you granted the `collect-infrastructure-logs` permission in the previous step.
<2> Specify a name for your input receiver.
<3> Specify the input receiver type as `syslog`.
<4> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`.
<5> If the TLS configuration is not set, the default certificates are used. For more information, run the command `oc explain clusterlogforwarders.spec.inputs.receiver.tls`.
<6> Configure a pipeline for your input receiver.

. Apply the changes to the `ClusterLogForwarder` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----

. Verify that the collector is listening on the service that has a name in the `<clusterlogforwarder_resource_name>-<input_name>` format by running the following command:
+
[source,terminal]
----
$ oc get svc
----
+
.Example output
[source,terminal,options="nowrap"]
----
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
collector ClusterIP 172.30.85.239 <none> 24231/TCP 33m
collector-syslog-receiver ClusterIP 172.30.216.142 <none> 10514/TCP 2m20s
----
+
In this example output, the service name is `collector-syslog-receiver`.

.Verification

. Extract the certificate authority (CA) certificate file by running the following command:
+
[source,terminal]
----
$ oc extract cm/openshift-service-ca.crt -n <namespace>
----
+
[NOTE]
====
If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.
====

. As an example, use the `curl` command to send logs by running the following command:
+
[source,terminal]
----
$ curl --cacert <openshift_service_ca.crt> collector-syslog-receiver.<namespace>.svc:10514 "test message"
----
+
Replace `<openshift_service_ca.crt>` with the extracted CA certificate file.

////
. As an example, send logs by running the following command:
+
[source,terminal]
----
$ logger --tcp --server collector-syslog-receiver.<ns>.svc:10514 "test message"
----
////
@@ -0,0 +1,111 @@
// Module included in the following assemblies:
//
// * configuring/cluster-logging-collector.adoc

:_newdoc-version: 2.18.4
:_template-generated: 2025-08-05
:_mod-docs-content-type: PROCEDURE

[id="configuring-the-collector-to-receive-audit-logs-as-an-http-server_{context}"]
= Configuring the collector to receive audit logs as an HTTP server

You can configure your log collector to listen for HTTP connections to only receive audit logs by specifying `http` as a receiver input in the `ClusterLogForwarder` custom resource (CR).

:feature-name: HTTP receiver input
include::snippets/logging-http-sys-input-support.adoc[]

.Prerequisites

* You have administrator permissions.
* You have installed the {oc-first}.
* You have installed the {clo}.

.Procedure

. Modify the `ClusterLogForwarder` CR to add configuration for the `http` receiver input:
+
.Example `ClusterLogForwarder` CR
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
name: <clusterlogforwarder_name> #<1>
namespace: <namespace>
# ...
spec:
serviceAccount:
name: <service_account_name>
inputs:
- name: http-receiver #<2>
type: receiver
receiver:
type: http #<3>
port: 8443 #<4>
http:
format: kubeAPIAudit #<5>
outputs:
- name: <output_name>
type: http
http:
url: <url>
pipelines: #<6>
- name: http-pipeline
inputRefs:
- http-receiver
outputRefs:
- <output_name>
# ...
----
<1> Specify a name for the `ClusterLogForwarder` CR.
<2> Specify a name for your input receiver.
<3> Specify the input receiver type as `http`.
<4> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`. The default value is `8443` if this is not specified.
<5> Currently, only the `kube-apiserver` webhook format is supported for `http` input receivers.
<6> Configure a pipeline for your input receiver.

. Apply the changes to the `ClusterLogForwarder` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----

. Verify that the collector is listening on the service that has a name in the `<clusterlogforwarder_resource_name>-<input_name>` format by running the following command:
+
[source,terminal]
----
$ oc get svc
----
+
.Example output
[source,terminal,options="nowrap"]
----
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
collector ClusterIP 172.30.85.239 <none> 24231/TCP 3m6s
collector-http-receiver ClusterIP 172.30.205.160 <none> 8443/TCP 3m6s
----
+
In the example, the service name is `collector-http-receiver`.

.Verification

. Extract the certificate authority (CA) certificate file by running the following command:
+
[source,terminal]
----
$ oc extract cm/openshift-service-ca.crt -n <namespace>
----
+
[NOTE]
====
If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.
====

. As an example, use the `curl` command to send logs by running the following command:
+
[source,terminal]
----
$ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'
----
+
Replace `<openshift_service_ca.crt>` with the extracted CA certificate file.
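Because the receiver expects the `kube-apiserver` webhook format, a typical client is an audit webhook configuration that points at the collector service. The following is a sketch of such a webhook kubeconfig; the file paths and entry names are assumptions for illustration, not values from this procedure:

[source,yaml]
----
apiVersion: v1
kind: Config
clusters:
- name: audit-webhook
  cluster:
    server: https://collector-http-receiver.<namespace>.svc:8443
    certificate-authority: /etc/audit-webhook/service-ca.crt
contexts:
- name: audit-webhook
  context:
    cluster: audit-webhook
    user: ""
current-context: audit-webhook
users: []
----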
13 changes: 8 additions & 5 deletions modules/creating-logfilesmetricexporter.adoc
@@ -1,14 +1,17 @@
// Module included in the following assemblies:
//
// * observability/logging/log_collection_forwarding/cluster-logging-collector.adoc
// * configuring/cluster-logging-collector.adoc

:_mod-docs-content-type: PROCEDURE
[id="creating-logfilesmetricexporter_{context}"]
= Creating a LogFileMetricExporter resource

In {logging} version 5.8 and newer versions, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a `LogFileMetricExporter` custom resource (CR) to generate metrics from the logs produced by running containers.
You must manually create a `LogFileMetricExporter` custom resource (CR) to generate metrics from the logs produced by running containers, because it is not deployed with the collector by default.

If you do not create the `LogFileMetricExporter` CR, you may see a *No datapoints found* message in the {ocp-product-title} web console dashboard for *Produced Logs*.
[NOTE]
====
If you do not create the `LogFileMetricExporter` CR, you might see a *No datapoints found* message in the {ocp-product-title} web console dashboard for the *Produced Logs* field.
====

.Prerequisites

Expand Down Expand Up @@ -53,8 +56,6 @@ $ oc apply -f <filename>.yaml

.Verification

A `logfilesmetricexporter` pod runs concurrently with a `collector` pod on each node.

* Verify that the `logfilesmetricexporter` pods are running in the namespace where you have created the `LogFileMetricExporter` CR, by running the following command and observing the output:
+
[source,terminal]
Expand All @@ -69,3 +70,5 @@ NAME READY STATUS RESTARTS AGE
logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s
logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s
----
+
A `logfilesmetricexporter` pod runs concurrently with a `collector` pod on each node.
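For reference, a minimal `LogFileMetricExporter` CR might look like the following sketch. The API group and version shown here are assumed to be `logging.openshift.io/v1alpha1`; verify them against the CRDs installed in your cluster:

[source,yaml]
----
apiVersion: logging.openshift.io/v1alpha1
kind: LogFileMetricExporter
metadata:
  name: instance
  namespace: openshift-logging
spec: {}
----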