51 changes: 37 additions & 14 deletions modules/cluster-logging-collector-log-forward-kafka.adoc
@@ -29,21 +29,21 @@ spec:
name: kafka-secret <6>
- name: infra-logs
type: kafka
url: tcp://kafka.devlab2.example.com:9093/infra-topic <7>
- name: audit-logs
type: kafka
url: tls://kafka.qelab.example.com:9093/audit-topic
secret:
name: kafka-secret-qe
pipelines:
- name: app-topic <8>
inputRefs: <9>
- application
outputRefs: <10>
- app-logs
labels:
logType: application <11>
- name: infra-topic <12>
inputRefs:
- infrastructure
outputRefs:
@@ -55,7 +55,7 @@ spec:
- audit
outputRefs:
- audit-logs
- default <13>
labels:
logType: audit
----
@@ -65,16 +65,39 @@ spec:
<4> Specify the `kafka` type.
<5> Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
<6> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project and must have keys of *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the certificates they represent.
<7> Optional: To send logs over an insecure connection, use the `tcp` prefix in the URL and omit the `secret` key and its `name` from the output.
<8> Optional: Specify a name for the pipeline.
<9> Specify which log types should be forwarded using that pipeline: `application`, `infrastructure`, or `audit`.
<10> Specify the output to use with that pipeline for forwarding the logs.
<11> Optional: One or more labels to add to the logs.
<12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
** Optional. A name to describe the pipeline.
** The `inputRefs` is the log type to forward using that pipeline: `application`, `infrastructure`, or `audit`.
** The `outputRefs` is the name of the output to use.
** Optional: One or more labels to add to the logs.
<13> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
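
For example, a minimal insecure Kafka output uses the `tcp` prefix and carries no `secret` stanza. This is a sketch only; the output name, broker host, port, and topic are placeholders:
+
[source,yaml]
----
spec:
  outputs:
  - name: insecure-logs                           # hypothetical output name
    type: kafka
    url: tcp://kafka.example.com:9092/test-topic  # tcp prefix; no secret needed
----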

. Optional: To forward a single output to multiple Kafka brokers, specify an array of brokers, as shown in this example:
+
[source,yaml]
----
...
spec:
outputs:
- name: app-logs
type: kafka
secret:
name: kafka-secret-dev
kafka: <1>
brokers: <2>
- tls://kafka-broker1.example.com:9093/
- tls://kafka-broker2.example.com:9093/
topic: app-topic <3>
...
----
<1> Specify a `kafka` key that has `brokers` and `topic` keys.
<2> Use the `brokers` key to specify an array of one or more brokers.
<3> Use the `topic` key to specify the target topic that will receive the logs.
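
. Optional: If an output uses a `tls` URL, create the secret that its `secret.name` field references before creating the CR. A minimal sketch using `oc create secret generic`; the local certificate file names (`client.crt`, `client.key`, `ca.crt`) are placeholders:
+
[source,terminal]
----
$ oc create secret generic kafka-secret \
  --from-file=tls.crt=client.crt \
  --from-file=tls.key=client.key \
  --from-file=ca-bundle.crt=ca.crt \
  -n openshift-logging
----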

. Create the CR object:
+