You can forward logs to an external Kafka broker in addition to, or instead of, the default Elasticsearch log store.
To configure log forwarding to an external Kafka instance, create a ClusterLogForwarder
custom resource (CR) with an output to that instance and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection.
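At its core, the configuration pairs a Kafka output with a pipeline that references that output by name. The following is a minimal sketch, assuming a hypothetical broker reachable at `kafka.example.com:9093`; the full example in the procedure below shows all of the available options:

```yaml
spec:
  outputs:
    - name: kafka-out                             # arbitrary output name
      type: kafka
      url: tcp://kafka.example.com:9093/my-topic  # tcp is insecure; use tls for secure TCP
  pipelines:
    - name: to-kafka
      inputRefs:
        - application                             # forward application logs
      outputRefs:
        - kafka-out                               # reference the output by name
```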
- Create a `ClusterLogForwarder` CR YAML file similar to the following:

  ```yaml
  apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    name: instance # (1)
    namespace: openshift-logging # (2)
  spec:
    outputs:
      - name: app-logs # (3)
        type: kafka # (4)
        url: tls://kafka.example.devlab.com:9093/app-topic # (5)
        secret:
          name: kafka-secret # (6)
      - name: infra-logs
        type: kafka
        url: tcp://kafka.devlab2.example.com:9093/infra-topic # (7)
      - name: audit-logs
        type: kafka
        url: tls://kafka.qelab.example.com:9093/audit-topic
        secret:
          name: kafka-secret-qe
    pipelines:
      - name: app-topic # (8)
        inputRefs: # (9)
          - application
        outputRefs: # (10)
          - app-logs
        parse: json # (11)
        labels:
          logType: "application" # (12)
      - name: infra-topic # (13)
        inputRefs:
          - infrastructure
        outputRefs:
          - infra-logs
        labels:
          logType: "infra"
      - name: audit-topic
        inputRefs:
          - audit
        outputRefs:
          - audit-logs
          - default # (14)
        labels:
          logType: "audit"
  ```
  1. The name of the `ClusterLogForwarder` CR must be `instance`.
  2. The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
  3. Specify a name for the output.
  4. Specify the `kafka` type.
  5. Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
  6. If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project and must have keys of `tls.crt`, `tls.key`, and `ca-bundle.crt` that point to the certificates they represent. See the example secret-creation command after this list.
  7. Optional: To send an insecure output, use a `tcp` prefix in front of the URL. Also omit the `secret` key and its `name` from this output.
  8. Optional: Specify a name for the pipeline.
  9. Specify which log types to forward using that pipeline: `application`, `infrastructure`, or `audit`.
  10. Specify the output to use with that pipeline for forwarding the logs.
  11. Optional: Forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
  12. Optional: String. One or more labels to add to the logs.
  13. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
      - Optional: A name to describe the pipeline.
      - The `inputRefs` is the log type to forward using that pipeline: `application`, `infrastructure`, or `audit`.
      - The `outputRefs` is the name of the output to use.
      - Optional: String. One or more labels to add to the logs.
  14. Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
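  For reference, a secret of the shape that callout 6 requires can be created from existing certificate files. The following is a minimal sketch, assuming hypothetical local files `client.crt`, `client.key`, and `ca.crt`; substitute the paths to your own certificates:

  ```bash
  # Create the kafka-secret referenced by the tls:// outputs above.
  # client.crt, client.key, and ca.crt are placeholder file names.
  $ oc create secret generic kafka-secret -n openshift-logging \
      --from-file=tls.crt=client.crt \
      --from-file=tls.key=client.key \
      --from-file=ca-bundle.crt=ca.crt
  ```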
- Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in this example:

  ```yaml
  ...
  spec:
    outputs:
      - name: app-logs
        type: kafka
        secret:
          name: kafka-secret-dev
        kafka: # (1)
          brokers: # (2)
            - tls://kafka-broker1.example.com:9093/
            - tls://kafka-broker2.example.com:9093/
          topic: app-topic # (3)
  ...
  ```
  1. Specify a `kafka` key that has `brokers` and `topic` keys.
  2. Use the `brokers` key to specify an array of one or more brokers.
  3. Use the `topic` key to specify the target topic that receives the logs.
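  To confirm that forwarded records reach the target topic, you can read it back with the standard Kafka console consumer. This is a sketch only: it assumes the Kafka command-line tools are available on a host that can reach the broker, and that `client-ssl.properties` is a hypothetical file containing your TLS client settings:

  ```bash
  # Consume from the target topic to verify that forwarded logs arrive.
  $ kafka-console-consumer.sh \
      --bootstrap-server kafka-broker1.example.com:9093 \
      --topic app-topic \
      --from-beginning \
      --consumer.config client-ssl.properties
  ```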
- Create the CR object:

  ```bash
  $ oc create -f <file-name>.yaml
  ```

  The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy:

  ```bash
  $ oc delete pod --selector logging-infra=fluentd
  ```
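  To verify the redeployment, you can list the Fluentd pods with the same selector; all pods should reach the `Running` state:

  ```bash
  $ oc get pods -n openshift-logging --selector logging-infra=fluentd
  ```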