10 changes: 4 additions & 6 deletions modules/cluster-logging-collector-log-forward-es.adoc
@@ -46,10 +46,9 @@ spec:
outputRefs:
- elasticsearch-secure <10>
- default <11>
parse: json <12>
labels:
myLabel: "myValue" <13>
- name: infrastructure-audit-logs <14>
myLabel: "myValue" <12>
- name: infrastructure-audit-logs <13>
inputRefs:
- infrastructure
outputRefs:
@@ -68,9 +67,8 @@ spec:
<9> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<10> Specify the name of the output to use when forwarding logs with this pipeline.
<11> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
<12> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
<13> Optional: String. One or more labels to add to the logs.
<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
<12> Optional: String. One or more labels to add to the logs.
<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
** A name to describe the pipeline.
** The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
** The `outputRefs` is the name of the output to use.
12 changes: 5 additions & 7 deletions modules/cluster-logging-collector-log-forward-fluentd.adoc
@@ -39,10 +39,9 @@ spec:
outputRefs:
- fluentd-server-secure <9>
- default <10>
parse: json <11>
labels:
clusterId: "C1234" <12>
- name: forward-to-fluentd-insecure <13>
clusterId: "C1234" <11>
- name: forward-to-fluentd-insecure <12>
inputRefs:
- infrastructure
outputRefs:
@@ -55,14 +54,13 @@ spec:
<3> Specify a name for the output.
<4> Specify the `fluentdForward` type.
<5> Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
<6> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of: *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting secret that contains a username and password."
<6> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project and must have the keys *tls.crt*, *tls.key*, and *ca-bundle.crt*, which point to the certificates they represent. A sketch of such a secret follows this callout list.
<7> Optional: Specify a name for the pipeline.
<8> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<9> Specify the name of the output to use when forwarding logs with this pipeline.
<10> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
<11> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
<12> Optional: String. One or more labels to add to the logs.
<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
<11> Optional: String. One or more labels to add to the logs.
<12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
** A name to describe the pipeline.
** The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
** The `outputRefs` is the name of the output to use.
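
A minimal sketch of the kind of secret that the `tls` prefix requires is shown below; the secret name `fluentd-secret` and the placeholder values are illustrative assumptions, not values defined by this module:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: fluentd-secret         # referenced by name from the output definition
  namespace: openshift-logging # the secret must exist in the openshift-logging project
type: Opaque
data:
  tls.crt: <base64-encoded-client-certificate>
  tls.key: <base64-encoded-private-key>
  ca-bundle.crt: <base64-encoded-CA-bundle>
----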
14 changes: 6 additions & 8 deletions modules/cluster-logging-collector-log-forward-kafka.adoc
@@ -43,10 +43,9 @@ spec:
- application
outputRefs: <10>
- app-logs
parse: json <11>
labels:
logType: "application" <12>
- name: infra-topic <13>
logType: "application" <11>
- name: infra-topic <12>
inputRefs:
- infrastructure
outputRefs:
@@ -58,7 +57,7 @@ spec:
- audit
outputRefs:
- audit-logs
- default <14>
- default <13>
labels:
logType: "audit"
----
@@ -72,14 +71,13 @@ spec:
<8> Optional: Specify a name for the pipeline.
<9> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<10> Specify the name of the output to use when forwarding logs with this pipeline.
<11> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
<12> Optional: String. One or more labels to add to the logs.
<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
<11> Optional: String. One or more labels to add to the logs.
<12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
** A name to describe the pipeline.
** The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
** The `outputRefs` is the name of the output to use.
** Optional: String. One or more labels to add to the logs.
<14> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
<13> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.

. Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example:
+
@@ -24,30 +24,28 @@ spec:
pipelines:
- inputRefs: [ myAppLogData ] <3>
outputRefs: [ default ] <4>
parse: json <5>
inputs: <6>
inputs: <5>
- name: myAppLogData
application:
selector:
matchLabels: <7>
matchLabels: <6>
environment: production
app: nginx
namespaces: <8>
namespaces: <7>
- app1
- app2
outputs: <9>
outputs: <8>
- default
...
----
<1> The name of the `ClusterLogForwarder` CR must be `instance`.
<2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
<3> Specify one or more comma-separated values from `inputs[].name`.
<4> Specify one or more comma-separated values from `outputs[]`.
<5> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
<6> Define a unique `inputs[].name` for each application that has a unique set of pod labels.
<7> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
<8> Optional: Specify one or more namespaces.
<9> Specify one or more outputs to forward your log data to. The optional `default` output shown here sends log data to the internal Elasticsearch instance.
<5> Define a unique `inputs[].name` for each application that has a unique set of pod labels.
<6> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
<7> Optional: Specify one or more namespaces.
<8> Specify one or more outputs to forward your log data to. The optional `default` output shown here sends log data to the internal Elasticsearch instance.

. Optional: To restrict the gathering of log data to specific namespaces, use `inputs[].application.namespaces`, as shown in the preceding example.

10 changes: 4 additions & 6 deletions modules/cluster-logging-collector-log-forward-project.adoc
@@ -42,10 +42,9 @@ spec:
- my-app-logs
outputRefs: <10>
- fluentd-server-insecure
parse: json <11>
labels:
project: "my-project" <12>
- name: forward-to-fluentd-secure <13>
project: "my-project" <11>
- name: forward-to-fluentd-secure <12>
inputRefs:
- application
- audit
@@ -66,9 +65,8 @@ spec:
<8> Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance.
<9> The `my-app-logs` input. A sketch of one possible definition for this input follows this callout list.
<10> The name of the output to use.
<11> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
<12> Optional: String. One or more labels to add to the logs.
<13> Configuration for a pipeline to send logs to other log aggregators.
<11> Optional: String. One or more labels to add to the logs.
<12> Configuration for a pipeline to send logs to other log aggregators.
** Optional: Specify a name for the pipeline.
** Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
** Specify the name of the output to use when forwarding logs with this pipeline.
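
For reference, one way the `my-app-logs` input can be defined is sketched below. The sketch borrows the `inputs[].application.namespaces` structure shown earlier in this changeset; the actual input definition for this module is collapsed above, so treat these field values as assumptions:

[source,yaml]
----
inputs:
- name: my-app-logs
  application:
    namespaces:
    - my-project   # collect application logs only from this project
----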
10 changes: 4 additions & 6 deletions modules/cluster-logging-collector-log-forward-syslog.adoc
@@ -51,11 +51,10 @@ spec:
outputRefs: <10>
- rsyslog-east
- default <11>
parse: json <12>
labels:
secure: "true" <13>
secure: "true" <12>
syslog: "east"
- name: syslog-west <14>
- name: syslog-west <13>
inputRefs:
- infrastructure
outputRefs:
@@ -75,9 +74,8 @@ spec:
<9> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<10> Specify the name of the output to use when forwarding logs with this pipeline.
<11> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
<12> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
<13> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
<12> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
** A name to describe the pipeline.
** The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
** The `outputRefs` is the name of the output to use.
18 changes: 8 additions & 10 deletions modules/cluster-logging-collector-log-forwarding-about.adoc
@@ -90,23 +90,22 @@ spec:
outputRefs:
- elasticsearch-secure
- default
parse: json <8>
labels:
secure: "true" <9>
secure: "true" <8>
datacenter: "east"
- name: infrastructure-logs <10>
- name: infrastructure-logs <9>
inputRefs:
- infrastructure
outputRefs:
- elasticsearch-insecure
labels:
datacenter: "west"
- name: my-app <11>
- name: my-app <10>
inputRefs:
- my-app-logs
outputRefs:
- default
- inputRefs: <12>
- inputRefs: <11>
- application
outputRefs:
- kafka-app
@@ -134,15 +133,14 @@ spec:
** The `inputRefs` is the log type, in this example `audit`.
** The `outputRefs` is the name of the output to use, in this example `elasticsearch-secure` to forward to the secure Elasticsearch instance and `default` to forward to the internal Elasticsearch instance.
** Optional: Labels to add to the logs.
<8> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
<9> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
<10> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
<11> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
<8> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
<9> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
<10> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
** A name to describe the pipeline.
** The `inputRefs` is a specific input: `my-app-logs`.
** The `outputRefs` is `default`.
** Optional: String. One or more labels to add to the logs.
<12> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
<11> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
** The `inputRefs` is the log type, in this example `application`.
** The `outputRefs` is the name of the output to use.
** Optional: String. One or more labels to add to the logs.
@@ -12,11 +12,11 @@ If you forward JSON logs to the default Elasticsearch instance managed by OpenSh

You can use the following structure types in the `ClusterLogForwarder` CR to construct index names for the Elasticsearch log store:

* `structuredTypeKey` (string, optional) is the name of a message field. The value of that field, if present, is used to construct the index name.
* `structuredTypeKey` is the name of a message field. The value of that field is used to construct the index name.
** `kubernetes.labels.<key>` is the Kubernetes pod label whose value is used to construct the index name.
** `openshift.labels.<key>` is the `pipeline.label.<key>` element in the `ClusterLogForwarder` CR whose value is used to construct the index name.
** `kubernetes.container_name` uses the container name to construct the index name.
* `structuredTypeName`: (string, optional) If `structuredTypeKey` is not set or its key is not present, OpenShift Logging uses the value of `structuredTypeName` as the structured type. When you use both `structuredTypeKey` and `structuredTypeName` together, `structuredTypeName` provides a fallback index name if the key in `structuredTypeKey` is missing from the JSON log data.
* `structuredTypeName`: If the `structuredTypeKey` field is not set or its key is not present, the `structuredTypeName` value is used as the structured type. When you use both the `structuredTypeKey` field and the `structuredTypeName` field together, the `structuredTypeName` value provides a fallback index name if the key in the `structuredTypeKey` field is missing from the JSON log data.
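
For example, the following sketch shows the two fields together. The `outputDefaults.elasticsearch` placement and the `logFormat` label key are borrowed from the `cluster-logging-forwarding-separate-indices.adoc` module later in this changeset and are assumptions here, not requirements of this module:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat # index name is built from the value of this pod label
      structuredTypeName: nologformat                # fallback when the label is not present
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - default
    parse: json
----

With this configuration, a JSON record from a pod labeled `logFormat=apache` gets an index name constructed from `apache`, and records from pods without that label fall back to the `nologformat` type.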

[NOTE]
====
@@ -28,13 +28,13 @@ pipelines:
parse: json
----

. Optional: Use `structuredTypeKey` to specify one of the log record fields, as described in the documentation about "Configuring JSON log data for Elasticsearch". Otherwise, remove this line.
. Use the `structuredTypeKey` field to specify one of the log record fields.

. Optional: Use `structuredTypeName` to specify a `<name>`, as described in the documentation about "Configuring JSON log data for Elasticsearch". Otherwise, remove this line.
. Use the `structuredTypeName` field to specify a name.
+
[IMPORTANT]
====
To parse JSON logs, you must set either `structuredTypeKey` or `structuredTypeName`, or both `structuredTypeKey` and `structuredTypeName`.
To parse JSON logs, you must set both the `structuredTypeKey` and `structuredTypeName` fields.
====
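+
A minimal sketch of setting both fields, assuming the `outputDefaults.elasticsearch` placement used by the `cluster-logging-forwarding-separate-indices.adoc` module in this changeset; the `logFormat` key and the `nologformat` name are illustrative:
+
[source,yaml]
----
outputDefaults:
  elasticsearch:
    structuredTypeKey: kubernetes.labels.logFormat # take the structured type from this pod label's value
    structuredTypeName: nologformat                # fallback structured type when the label is missing
----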

. For `inputRefs`, specify which log types to forward by using that pipeline, such as `application`, `infrastructure`, or `audit`.
@@ -48,7 +48,7 @@ To parse JSON logs, you must set either `structuredTypeKey` or `structuredTypeNa
$ oc create -f <filename>.yaml
----
+
The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. However, if they do not redeploy, delete the Fluentd pods to force them to redeploy.
The Red Hat OpenShift Logging Operator redeploys the collector pods. However, if they do not redeploy, delete the collector pods to force them to redeploy.
+
[source,terminal]
----
7 changes: 5 additions & 2 deletions modules/cluster-logging-forwarding-separate-indices.adoc
@@ -29,7 +29,9 @@ metadata:
spec:
outputDefaults:
elasticsearch:
enableStructuredContainerLogs: true <1>
structuredTypeKey: kubernetes.labels.logFormat <1>
structuredTypeName: nologformat
enableStructuredContainerLogs: true <2>
pipelines:
- inputRefs:
- application
@@ -38,7 +40,8 @@ spec:
- default
parse: json
----
<1> Enables multi-container outputs.
<1> Uses the value of the key-value pair that is formed by the Kubernetes `logFormat` label.
<2> Enables multi-container outputs.

. Create or edit a YAML file that defines the `Pod` CR object:
+