From 761813c58ac53b2ed547becf46603c46bc528c59 Mon Sep 17 00:00:00 2001 From: Ashwin Mehendale Date: Wed, 30 Jul 2025 00:55:02 +0530 Subject: [PATCH] OBSDOCS-948: Follow up changes for http/syslog input docs --- _topic_maps/_topic_map.yml | 2 + configuring/cluster-logging-collector.adoc | 24 +++ .../cluster-logging-collector.adoc | 4 +- modules/cluster-logging-collector-limits.adoc | 24 +-- ...en-for-connections-as-a-syslog-server.adoc | 139 ++++++++++++++++++ ...-receive-audit-logs-as-an-http-server.adoc | 111 ++++++++++++++ modules/creating-logfilesmetricexporter.adoc | 13 +- modules/log-collector-http-server.adoc | 90 ------------ 8 files changed, 299 insertions(+), 108 deletions(-) create mode 100644 configuring/cluster-logging-collector.adoc create mode 100644 modules/configuring-the-collector-to-listen-for-connections-as-a-syslog-server.adoc create mode 100644 modules/configuring-the-collector-to-receive-audit-logs-as-an-http-server.adoc delete mode 100644 modules/log-collector-http-server.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index bc8db8696632..de9f5b56d112 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -32,6 +32,8 @@ Distros: openshift-logging Topics: - Name: Configuring log forwarding File: configuring-log-forwarding +- Name: Configuring the logging collector + File: cluster-logging-collector - Name: Configuring the log store File: configuring-the-log-store #- Name: Configuring LokiStack for OTLP diff --git a/configuring/cluster-logging-collector.adoc b/configuring/cluster-logging-collector.adoc new file mode 100644 index 000000000000..b6b9ffdbe89e --- /dev/null +++ b/configuring/cluster-logging-collector.adoc @@ -0,0 +1,24 @@ +:_mod-docs-content-type: ASSEMBLY +:context: cluster-logging-collector +[id="cluster-logging-collector"] += Configuring the logging collector +include::_attributes/common-attributes.adoc[] + +toc::[] + +{logging-title-uc} collects operations and application logs from 
your cluster and enriches the data with Kubernetes pod and project metadata. +All supported modifications to the log collector can be performed through the `spec.collector` stanza in the `ClusterLogForwarder` custom resource (CR). + +include::modules/creating-logfilesmetricexporter.adoc[leveloffset=+1] + +include::modules/cluster-logging-collector-limits.adoc[leveloffset=+1] + +[id="cluster-logging-collector-input-receivers"] +== Configuring input receivers + +The {clo} deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. For log forwarder `ClusterLogForwarder` CR deployments, the service name is in the `<clusterlogforwarder_name>-<input_name>` format. + +include::modules/configuring-the-collector-to-receive-audit-logs-as-an-http-server.adoc[leveloffset=+2] +include::modules/configuring-the-collector-to-listen-for-connections-as-a-syslog-server.adoc[leveloffset=+2] + +//include::modules/cluster-logging-collector-tuning.adoc[leveloffset=+1] diff --git a/log_collection_forwarding/cluster-logging-collector.adoc b/log_collection_forwarding/cluster-logging-collector.adoc index 5e39d215443c..dde66ad46e53 100644 --- a/log_collection_forwarding/cluster-logging-collector.adoc +++ b/log_collection_forwarding/cluster-logging-collector.adoc @@ -4,6 +4,8 @@ = Configuring the logging collector include::_attributes/common-attributes.adoc[] +//This is a duplicate file and should be removed in the future. + toc::[] {logging-title-uc} collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. @@ -24,7 +26,7 @@ The service name is generated based on the following: * For multi log forwarder `ClusterLogForwarder` CR deployments, the service name is in the format `<clusterlogforwarder_name>-<input_name>`. For example, `example-http-receiver`.
* For legacy `ClusterLogForwarder` CR deployments, meaning those named `instance` and located in the `openshift-logging` namespace, the service name is in the format `collector-<input_name>`. For example, `collector-http-receiver`. -include::modules/log-collector-http-server.adoc[leveloffset=+2] +//include::modules/log-collector-http-server.adoc[leveloffset=+2] //include::modules/log-collector-rsyslog-server.adoc[leveloffset=+2] // uncomment for 5.9 release diff --git a/modules/cluster-logging-collector-limits.adoc b/modules/cluster-logging-collector-limits.adoc index eec48a89b8de..ac8936df62ea 100644 --- a/modules/cluster-logging-collector-limits.adoc +++ b/modules/cluster-logging-collector-limits.adoc @@ -1,16 +1,16 @@ // Module included in the following assemblies: // -// * observability/logging/cluster-logging-collector.adoc +// * configuring/cluster-logging-collector.adoc :_mod-docs-content-type: PROCEDURE [id="cluster-logging-collector-limits_{context}"] = Configure log collector CPU and memory limits -The log collector allows for adjustments to both the CPU and memory limits. +You can adjust both the CPU and memory limits for the log collector by editing the `ClusterLogForwarder` custom resource (CR). .Procedure -* Edit the `ClusterLogging` custom resource (CR) in the `openshift-logging` project: +* Edit the `ClusterLogForwarder` CR in the `openshift-logging` project: + [source,terminal] ---- -$ oc -n openshift-logging edit ClusterLogging instance +$ oc -n openshift-logging edit ClusterLogForwarder <name> + [source,yaml] ---- -apiVersion: logging.openshift.io/v1 -kind: ClusterLogging +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder metadata: - name: instance + name: <name> #<1> namespace: openshift-logging spec: - collection: - type: fluentd - resources: - limits: <1> - memory: 736Mi + collector: + resources: #<2> requests: + memory: 736Mi + limits: cpu: 100m memory: 736Mi # ... ---- -<1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
+<1> Specify a name for the `ClusterLogForwarder` CR. +<2> Specify the CPU and memory limits and requests as needed. The values shown are the default values. diff --git a/modules/configuring-the-collector-to-listen-for-connections-as-a-syslog-server.adoc b/modules/configuring-the-collector-to-listen-for-connections-as-a-syslog-server.adoc new file mode 100644 index 000000000000..f4ac0d86898b --- /dev/null +++ b/modules/configuring-the-collector-to-listen-for-connections-as-a-syslog-server.adoc @@ -0,0 +1,139 @@ +// Module included in the following assemblies: +// +// * configuring/cluster-logging-collector.adoc + + +:_newdoc-version: 2.18.4 +:_template-generated: 2025-08-05 +:_mod-docs-content-type: PROCEDURE + +[id="configuring-the-collector-to-listen-for-connections-as-a-syslog-server_{context}"] += Configuring the collector to listen for connections as a syslog server + +You can configure your log collector to collect journal-format infrastructure logs by specifying `syslog` as a receiver input in the `ClusterLogForwarder` custom resource (CR). + +:feature-name: Syslog receiver input +include::snippets/logging-http-sys-input-support.adoc[] + +.Prerequisites + +* You have administrator permissions. +* You have installed the {oc-first}. +* You have installed the {clo}. + +.Procedure + +. Grant the `collect-infrastructure-logs` cluster role to the service account by running the following command: ++ +.Example binding command +[source,terminal] +---- +$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector +---- + +. Modify the `ClusterLogForwarder` CR to add configuration for the `syslog` receiver input: ++ +.Example `ClusterLogForwarder` CR +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +metadata: + name: <name> + namespace: <namespace> +# ...
+spec: + serviceAccount: + name: <service_account_name> # <1> + inputs: + - name: syslog-receiver # <2> + type: receiver + receiver: + type: syslog # <3> + port: 10514 # <4> + outputs: + - name: <output_name> + lokiStack: + authentication: + token: + from: serviceAccount + target: + name: logging-loki + namespace: openshift-logging + tls: # <5> + ca: + key: service-ca.crt + configMapName: openshift-service-ca.crt + type: lokiStack +# ... + pipelines: # <6> + - name: syslog-pipeline + inputRefs: + - syslog-receiver + outputRefs: + - <output_name> +# ... +---- +<1> Use the service account that you granted the `collect-infrastructure-logs` permission in the previous step. +<2> Specify a name for your input receiver. +<3> Specify the input receiver type as `syslog`. +<4> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`. +<5> If the TLS configuration is not set, the default certificates are used. For more information, run the command `oc explain clusterlogforwarders.spec.inputs.receiver.tls`. +<6> Configure a pipeline for your input receiver. + +. Apply the changes to the `ClusterLogForwarder` CR by running the following command: ++ +[source,terminal] +---- +$ oc apply -f <filename>.yaml +---- + +. Verify that the collector is listening on the service that has a name in the `<clusterlogforwarder_name>-<input_name>` format by running the following command: ++ +[source,terminal] +---- +$ oc get svc +---- ++ +.Example output ++ +[source,terminal,options="nowrap"] +---- +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +collector ClusterIP 172.30.85.239 24231/TCP 33m +collector-syslog-receiver ClusterIP 172.30.216.142 10514/TCP 2m20s +---- ++ +In this example output, the service name is `collector-syslog-receiver`. + +.Verification + +. 
Extract the certificate authority (CA) certificate file by running the following command: ++ +[source,terminal] +---- +$ oc extract cm/openshift-service-ca.crt -n <namespace> +---- ++ +[NOTE] +==== +If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again. +==== + +. As an example, send a test log message to the receiver by running the following `logger` command: ++ +[source,terminal] +---- +$ logger --tcp --server collector-syslog-receiver.<namespace>.svc --port 10514 "test message" +---- ++ +Replace `<namespace>` with the namespace where the collector is deployed. diff --git a/modules/configuring-the-collector-to-receive-audit-logs-as-an-http-server.adoc b/modules/configuring-the-collector-to-receive-audit-logs-as-an-http-server.adoc new file mode 100644 index 000000000000..c5e71b272572 --- /dev/null +++ b/modules/configuring-the-collector-to-receive-audit-logs-as-an-http-server.adoc @@ -0,0 +1,111 @@ +// Module included in the following assemblies: +// +// * configuring/cluster-logging-collector.adoc + +:_newdoc-version: 2.18.4 +:_template-generated: 2025-08-05 +:_mod-docs-content-type: PROCEDURE + +[id="configuring-the-collector-to-receive-audit-logs-as-an-http-server_{context}"] += Configuring the collector to receive audit logs as an HTTP server + +You can configure your log collector to listen for HTTP connections and receive only audit logs by specifying `http` as a receiver input in the `ClusterLogForwarder` custom resource (CR). + +:feature-name: HTTP receiver input +include::snippets/logging-http-sys-input-support.adoc[] + +.Prerequisites + +* You have administrator permissions. +* You have installed the {oc-first}. +* You have installed the {clo}. + +.Procedure + +. 
Modify the `ClusterLogForwarder` CR to add configuration for the `http` receiver input: ++ +.Example `ClusterLogForwarder` CR +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +metadata: + name: <name> #<1> + namespace: <namespace> +# ... +spec: + serviceAccount: + name: <service_account_name> + inputs: + - name: http-receiver #<2> + type: receiver + receiver: + type: http #<3> + port: 8443 #<4> + http: + format: kubeAPIAudit #<5> + outputs: + - name: <output_name> + type: http + http: + url: <output_url> + pipelines: #<6> + - name: http-pipeline + inputRefs: + - http-receiver + outputRefs: + - <output_name> +# ... +---- +<1> Specify a name for the `ClusterLogForwarder` CR. +<2> Specify a name for your input receiver. +<3> Specify the input receiver type as `http`. +<4> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`. The default value is `8443` if this is not specified. +<5> Currently, only the `kube-apiserver` webhook format is supported for `http` input receivers. +<6> Configure a pipeline for your input receiver. + +. Apply the changes to the `ClusterLogForwarder` CR by running the following command: ++ +[source,terminal] +---- +$ oc apply -f <filename>.yaml +---- + +. Verify that the collector is listening on the service that has a name in the `<clusterlogforwarder_name>-<input_name>` format by running the following command: ++ +[source,terminal] +---- +$ oc get svc +---- ++ +.Example output +[source,terminal,options="nowrap"] +---- +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +collector ClusterIP 172.30.85.239 24231/TCP 3m6s +collector-http-receiver ClusterIP 172.30.205.160 8443/TCP 3m6s +---- ++ +In the example, the service name is `collector-http-receiver`. + +.Verification + +. Extract the certificate authority (CA) certificate file by running the following command: ++ +[source,terminal] +---- +$ oc extract cm/openshift-service-ca.crt -n <namespace> +---- ++ +[NOTE] +==== +If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again. +==== + +. 
As an example, use the `curl` command to send logs by running the following command: ++ +[source,terminal] +---- +$ curl --cacert <ca_certificate_file> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<key>":"<value>"}' +---- ++ +Replace `<ca_certificate_file>` with the extracted CA certificate file. diff --git a/modules/creating-logfilesmetricexporter.adoc b/modules/creating-logfilesmetricexporter.adoc index 7139e3d5e62e..148ebb43aaca 100644 --- a/modules/creating-logfilesmetricexporter.adoc +++ b/modules/creating-logfilesmetricexporter.adoc @@ -1,14 +1,17 @@ // Module included in the following assemblies: // -// * observability/logging/log_collection_forwarding/cluster-logging-collector.adoc +// * configuring/cluster-logging-collector.adoc :_mod-docs-content-type: PROCEDURE [id="creating-logfilesmetricexporter_{context}"] = Creating a LogFileMetricExporter resource -In {logging} version 5.8 and newer versions, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a `LogFileMetricExporter` custom resource (CR) to generate metrics from the logs produced by running containers. +You must manually create a `LogFileMetricExporter` custom resource (CR) to generate metrics from the logs produced by running containers, because the exporter is not deployed with the collector by default. -If you do not create the `LogFileMetricExporter` CR, you may see a *No datapoints found* message in the {ocp-product-title} web console dashboard for *Produced Logs*. +[NOTE] +==== +If you do not create the `LogFileMetricExporter` CR, you might see a *No datapoints found* message in the {ocp-product-title} web console dashboard for the *Produced Logs* field. +==== .Prerequisites @@ -53,8 +56,6 @@ $ oc apply -f .yaml .Verification -A `logfilesmetricexporter` pod runs concurrently with a `collector` pod on each node.
- * Verify that the `logfilesmetricexporter` pods are running in the namespace where you have created the `LogFileMetricExporter` CR, by running the following command and observing the output: + [source,terminal] @@ -69,3 +70,5 @@ NAME READY STATUS RESTARTS AGE logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s ---- ++ +A `logfilesmetricexporter` pod runs concurrently with a `collector` pod on each node. diff --git a/modules/log-collector-http-server.adoc b/modules/log-collector-http-server.adoc deleted file mode 100644 index e8c3ddf37aed..000000000000 --- a/modules/log-collector-http-server.adoc +++ /dev/null @@ -1,90 +0,0 @@ -// Module included in the following assemblies: -// -// * observability/logging/log_collection_forwarding/cluster-logging-collector.adoc - - -//This file is for Logging 5.x - -:_mod-docs-content-type: PROCEDURE -[id="log-collector-http-server_{context}"] -= Configuring the collector to receive audit logs as an HTTP server - -You can configure your log collector to listen for HTTP connections and receive audit logs as an HTTP server by specifying `http` as a receiver input in the `ClusterLogForwarder` custom resource (CR). This enables you to use a common log store for audit logs that are collected from both inside and outside of your {ocp-product-title} cluster. - -.Prerequisites - -* You have administrator permissions. -* You have installed the {oc-first}. -* You have installed the {clo}. -* You have created a `ClusterLogForwarder` CR. - -.Procedure - -. Modify the `ClusterLogForwarder` CR to add configuration for the `http` receiver input: -+ --- -.Example `ClusterLogForwarder` CR if you are using a multi log forwarder deployment -[source,yaml] ----- -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: -# ... 
-spec: - serviceAccountName: - inputs: - - name: http-receiver # <1> - receiver: - type: http # <2> - http: - format: kubeAPIAudit # <3> - port: 8443 # <4> - pipelines: # <5> - - name: http-pipeline - inputRefs: - - http-receiver -# ... ----- -<1> Specify a name for your input receiver. -<2> Specify the input receiver type as `http`. -<3> Currently, only the `kube-apiserver` webhook format is supported for `http` input receivers. -<4> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`. The default value is `8443` if this is not specified. -<5> Configure a pipeline for your input receiver. --- -+ --- -.Example `ClusterLogForwarder` CR if you are using a legacy deployment -[source,yaml] ----- -apiVersion: observability.openshift.io/v1 -kind: ClusterLogForwarder -metadata: - name: instance - namespace: openshift-logging -spec: - inputs: - - name: http-receiver # <1> - receiver: - type: http # <2> - http: - format: kubeAPIAudit # <3> - port: 8443 # <4> - pipelines: # <5> - - inputRefs: - - http-receiver - name: http-pipeline -# ... ----- -<1> Specify a name for your input receiver. -<2> Specify the input receiver type as `http`. -<3> Currently, only the `kube-apiserver` webhook format is supported for `http` input receivers. -<4> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`. The default value is `8443` if this is not specified. -<5> Configure a pipeline for your input receiver. --- - -. Apply the changes to the `ClusterLogForwarder` CR by running the following command: -+ -[source,terminal] ----- -$ oc apply -f .yaml -----
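The HTTP receiver procedure in this patch accepts kube-apiserver webhook (`kubeAPIAudit`) payloads over TLS. As an illustrative companion to the `curl` verification step, the following Python sketch builds a minimal audit-style event and shows how a client could post it to the receiver service. This is a sketch under stated assumptions: the service URL, the `openshift-logging` namespace, the `service-ca.crt` file name, and every field value in the hand-built event are hypothetical examples, not values mandated by the procedure.

```python
import json
import ssl
import urllib.request


def build_test_audit_event() -> dict:
    # Minimal event list in the kube-apiserver audit webhook shape.
    # All field values are illustrative, not from a real API server.
    return {
        "kind": "EventList",
        "apiVersion": "audit.k8s.io/v1",
        "items": [
            {
                "level": "Metadata",
                "auditID": "00000000-0000-0000-0000-000000000000",
                "stage": "ResponseComplete",
                "requestURI": "/api/v1/namespaces",
                "verb": "get",
                "user": {"username": "test-user"},
            }
        ],
    }


def send_event(url: str, ca_file: str) -> int:
    # POST the event over TLS, trusting the CA extracted with
    # `oc extract cm/openshift-service-ca.crt`; returns the HTTP status.
    payload = json.dumps(build_test_audit_event()).encode()
    ctx = ssl.create_default_context(cafile=ca_file)
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.status


if __name__ == "__main__":
    # Print the payload; to actually send it from inside the cluster,
    # a call like the following could be used (hypothetical URL):
    # send_event("https://collector-http-receiver.openshift-logging.svc:8443",
    #            "service-ca.crt")
    print(json.dumps(build_test_audit_event(), indent=2))
```

Because only the `kubeAPIAudit` format is supported for `http` receivers, any payload that does not follow this webhook shape may be rejected by the collector.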