
Forwarding logs to a Kafka broker

You can forward logs to an external Kafka broker in addition to, or instead of, the default Elasticsearch log store.

To configure log forwarding to an external Kafka instance, create a ClusterLogForwarder custom resource (CR) with an output to that instance and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection.

Procedure
  1. Create a ClusterLogForwarder CR YAML file similar to the following:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogForwarder
    metadata:
      name: instance (1)
      namespace: openshift-logging (2)
    spec:
      outputs:
       - name: app-logs (3)
         type: kafka (4)
         url: tls://kafka.example.devlab.com:9093/app-topic (5)
         secret:
           name: kafka-secret (6)
       - name: infra-logs
         type: kafka
         url: tcp://kafka.devlab2.example.com:9093/infra-topic (7)
       - name: audit-logs
         type: kafka
         url: tls://kafka.qelab.example.com:9093/audit-topic
         secret:
           name: kafka-secret-qe
      pipelines:
       - name: app-topic (8)
         inputRefs: (9)
         - application
         outputRefs: (10)
         - app-logs
         parse: json (11)
         labels:
           logType: "application" (12)
       - name: infra-topic (13)
         inputRefs:
         - infrastructure
         outputRefs:
         - infra-logs
         labels:
           logType: "infra"
       - name: audit-topic
         inputRefs:
         - audit
         outputRefs:
         - audit-logs
         - default (14)
         labels:
           logType: "audit"
    1. The name of the ClusterLogForwarder CR must be instance.

    2. The namespace for the ClusterLogForwarder CR must be openshift-logging.

    3. Specify a name for the output.

    4. Specify the kafka type.

    5. Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.

    6. If using a tls prefix, you must specify the name of the secret that the endpoint requires for TLS communication. The secret must exist in the openshift-logging project and must contain the keys tls.crt, tls.key, and ca-bundle.crt, each holding the certificate or key that it represents. For an example of creating this secret, see the sketch after this list.

    7. Optional: To send logs over an insecure connection, use a tcp prefix in front of the URL and omit the secret key from the output.

    8. Optional: Specify a name for the pipeline.

    9. Specify which log types should be forwarded using that pipeline: application, infrastructure, or audit.

    10. Specify the output to use with that pipeline for forwarding the logs.

    11. Optional: Forward structured JSON log entries as JSON objects in the structured field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the structured field and instead sends the log entry to the default index, app-00000x. An example of the forwarded record follows this list.

    12. Optional: String. One or more labels to add to the logs.

    13. Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:

      • Optional: A name to describe the pipeline.

      • inputRefs lists the log types to forward using that pipeline: application, infrastructure, or audit.

      • outputRefs lists the names of the outputs to use.

      • Optional: String. One or more labels to add to the logs.

    14. Optional: Specify default to forward logs to the internal Elasticsearch instance.
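
    For example, assuming the client certificate, key, and CA bundle exist as local files (the angle-bracket file names are placeholders for your own files), a secret like kafka-secret can be created in the openshift-logging project with a command similar to the following:

      $ oc create secret generic kafka-secret -n openshift-logging \
          --from-file=tls.crt=<client-certificate> \
          --from-file=tls.key=<client-key> \
          --from-file=ca-bundle.crt=<ca-bundle>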
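
    For the parse: json option, a log entry whose message is valid JSON, such as {"level":"info","message":"ready"} (an illustrative message), is forwarded with the parsed object in the structured field, similar to this abbreviated record:

      {
        "structured": {
          "level": "info",
          "message": "ready"
        },
        "more fields..."
      }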

  2. Optional: To forward logs from a single output to multiple Kafka brokers, specify an array of Kafka brokers, as shown in this example:

    ...
    spec:
      outputs:
      - name: app-logs
        type: kafka
        secret:
          name: kafka-secret-dev
        kafka:  (1)
          brokers: (2)
            - tls://kafka-broker1.example.com:9093/
            - tls://kafka-broker2.example.com:9093/
          topic: app-topic (3)
    ...
    1. Specify a kafka key that has brokers and topic keys.

    2. Use the brokers key to specify an array of one or more brokers.

    3. Use the topic key to specify the target topic that receives the logs.

  3. Create the CR object:

    $ oc create -f <file-name>.yaml

The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. If the pods do not redeploy, you can delete the Fluentd pods to force them to redeploy.

$ oc delete pod --selector logging-infra=fluentd
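
To confirm that the collector pods are running again, you can list them with the same selector:

$ oc get pod --selector logging-infra=fluentd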