diff --git a/modules/cluster-logging-collector-log-forward-es.adoc b/modules/cluster-logging-collector-log-forward-es.adoc
index 9d2f24fd1d93..3bb74570436b 100644
--- a/modules/cluster-logging-collector-log-forward-es.adoc
+++ b/modules/cluster-logging-collector-log-forward-es.adoc
@@ -46,10 +46,9 @@ spec:
      outputRefs:
      - elasticsearch-secure <10>
      - default <11>
-     parse: json <12>
      labels:
-       myLabel: "myValue" <13>
-   - name: infrastructure-audit-logs <14>
+       myLabel: "myValue" <12>
+   - name: infrastructure-audit-logs <13>
      inputRefs:
      - infrastructure
      outputRefs:
@@ -68,9 +67,8 @@ spec:
 <9> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 <10> Specify the name of the output to use when forwarding logs with this pipeline.
 <11> Optional: Specify the `default` output to send the logs to the internal Elasticsearch instance.
-<12> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<13> Optional: String. One or more labels to add to the logs.
-<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<12> Optional: String. One or more labels to add to the logs.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
diff --git a/modules/cluster-logging-collector-log-forward-fluentd.adoc b/modules/cluster-logging-collector-log-forward-fluentd.adoc
index 4b542ae9bd09..31a9ca4a7da9 100644
--- a/modules/cluster-logging-collector-log-forward-fluentd.adoc
+++ b/modules/cluster-logging-collector-log-forward-fluentd.adoc
@@ -39,10 +39,9 @@ spec:
      outputRefs:
      - fluentd-server-secure <9>
      - default <10>
-     parse: json <11>
      labels:
-       clusterId: "C1234" <12>
-   - name: forward-to-fluentd-insecure <13>
+       clusterId: "C1234" <11>
+   - name: forward-to-fluentd-insecure <12>
      inputRefs:
      - infrastructure
      outputRefs:
@@ -55,14 +54,13 @@ spec:
 <3> Specify a name for the output.
 <4> Specify the `fluentdForward` type.
 <5> Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
-<6> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of: *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting secret that contains a username and password."
+<6> If you use a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project and must have *tls.crt*, *tls.key*, and *ca-bundle.crt* keys that point to the certificates they represent.
 <7> Optional: Specify a name for the pipeline.
 <8> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 <9> Specify the name of the output to use when forwarding logs with this pipeline.
 <10> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
-<11> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: String. One or more labels to add to the logs.
-<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<11> Optional: String. One or more labels to add to the logs.
+<12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
diff --git a/modules/cluster-logging-collector-log-forward-kafka.adoc b/modules/cluster-logging-collector-log-forward-kafka.adoc
index 1f8ef2fb616f..e2410bea5ad5 100644
--- a/modules/cluster-logging-collector-log-forward-kafka.adoc
+++ b/modules/cluster-logging-collector-log-forward-kafka.adoc
@@ -43,10 +43,9 @@ spec:
      - application
      outputRefs: <10>
      - app-logs
-     parse: json <11>
      labels:
-       logType: "application" <12>
-   - name: infra-topic <13>
+       logType: "application" <11>
+   - name: infra-topic <12>
      inputRefs:
      - infrastructure
      outputRefs:
@@ -58,7 +57,7 @@ spec:
      - audit
      outputRefs:
      - audit-logs
-     - default <14>
+     - default <13>
      labels:
        logType: "audit"
 ----
@@ -72,14 +71,13 @@ spec:
 <8> Optional: Specify a name for the pipeline.
 <9> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 <10> Specify the name of the output to use when forwarding logs with this pipeline.
-<11> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: String. One or more labels to add to the logs.
-<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<11> Optional: String. One or more labels to add to the logs.
+<12> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
 ** Optional: String. One or more labels to add to the logs.
-<14> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
+<13> Optional: Specify `default` to forward logs to the internal Elasticsearch instance.
 . Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example:
 +
diff --git a/modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc b/modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc
index d109e931f7cd..70f86c987113 100644
--- a/modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc
+++ b/modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc
@@ -24,18 +24,17 @@ spec:
   pipelines:
    - inputRefs: [ myAppLogData ] <3>
      outputRefs: [ default ] <4>
-     parse: json <5>
-  inputs: <6>
+  inputs: <5>
    - name: myAppLogData
      application:
        selector:
-         matchLabels: <7>
+         matchLabels: <6>
           environment: production
           app: nginx
-       namespaces: <8>
+       namespaces: <7>
        - app1
        - app2
-  outputs: <9>
+  outputs: <8>
    - default
 ...
 ----
@@ -43,11 +42,10 @@ spec:
 <2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
 <3> Specify one or more comma-separated values from `inputs[].name`.
 <4> Specify one or more comma-separated values from `outputs[]`.
-<5> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<6> Define a unique `inputs[].name` for each application that has a unique set of pod labels.
-<7> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
-<8> Optional: Specify one or more namespaces.
-<9> Specify one or more outputs to forward your log data to. The optional `default` output shown here sends log data to the internal Elasticsearch instance.
+<5> Define a unique `inputs[].name` for each application that has a unique set of pod labels.
+<6> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and a value, not just a key. To be selected, the pods must match all the key-value pairs.
+<7> Optional: Specify one or more namespaces.
+<8> Specify one or more outputs to forward your log data to. The optional `default` output shown here sends log data to the internal Elasticsearch instance.
 
 . Optional: To restrict the gathering of log data to specific namespaces, use `inputs[].name.application.namespaces`, as shown in the preceding example.
diff --git a/modules/cluster-logging-collector-log-forward-project.adoc b/modules/cluster-logging-collector-log-forward-project.adoc
index c76453f52859..96d7901f2149 100644
--- a/modules/cluster-logging-collector-log-forward-project.adoc
+++ b/modules/cluster-logging-collector-log-forward-project.adoc
@@ -42,10 +42,9 @@ spec:
      - my-app-logs
      outputRefs: <10>
      - fluentd-server-insecure
-     parse: json <11>
      labels:
-       project: "my-project" <12>
-   - name: forward-to-fluentd-secure <13>
+       project: "my-project" <11>
+   - name: forward-to-fluentd-secure <12>
      inputRefs:
      - application
      - audit
@@ -66,9 +65,8 @@ spec:
 <8> Configuration for a pipeline to use the input to send project application logs to an external Fluentd instance.
 <9> The `my-app-logs` input.
 <10> The name of the output to use.
-<11> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<12> Optional: String. One or more labels to add to the logs.
-<13> Configuration for a pipeline to send logs to other log aggregators.
+<11> Optional: String. One or more labels to add to the logs.
+<12> Configuration for a pipeline to send logs to other log aggregators.
 ** Optional: Specify a name for the pipeline.
 ** Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 ** Specify the name of the output to use when forwarding logs with this pipeline.
diff --git a/modules/cluster-logging-collector-log-forward-syslog.adoc b/modules/cluster-logging-collector-log-forward-syslog.adoc
index ea7541804bf4..dda4261cc816 100644
--- a/modules/cluster-logging-collector-log-forward-syslog.adoc
+++ b/modules/cluster-logging-collector-log-forward-syslog.adoc
@@ -51,11 +51,10 @@ spec:
      outputRefs: <10>
      - rsyslog-east
      - default <11>
-     parse: json <12>
      labels:
-       secure: "true" <13>
+       secure: "true" <12>
        syslog: "east"
-   - name: syslog-west <14>
+   - name: syslog-west <13>
      inputRefs:
      - infrastructure
      outputRefs:
@@ -75,9 +74,8 @@ spec:
 <9> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 <10> Specify the name of the output to use when forwarding logs with this pipeline.
 <11> Optional: Specify the `default` output to forward logs to the internal Elasticsearch instance.
-<12> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<13> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
-<14> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
+<12> Optional: String. One or more labels to add to the logs. Quote values such as "true" so they are recognized as string values, not as booleans.
+<13> Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type:
 ** A name to describe the pipeline.
 ** The `inputRefs` is the log type to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
 ** The `outputRefs` is the name of the output to use.
diff --git a/modules/cluster-logging-collector-log-forwarding-about.adoc b/modules/cluster-logging-collector-log-forwarding-about.adoc
index 169979ba4ac7..89874eec0dfa 100644
--- a/modules/cluster-logging-collector-log-forwarding-about.adoc
+++ b/modules/cluster-logging-collector-log-forwarding-about.adoc
@@ -90,23 +90,22 @@ spec:
      outputRefs:
      - elasticsearch-secure
      - default
-     parse: json <8>
      labels:
-       secure: "true" <9>
+       secure: "true" <8>
        datacenter: "east"
-   - name: infrastructure-logs <10>
+   - name: infrastructure-logs <9>
      inputRefs:
      - infrastructure
      outputRefs:
      - elasticsearch-insecure
      labels:
        datacenter: "west"
-   - name: my-app <11>
+   - name: my-app <10>
      inputRefs:
      - my-app-logs
      outputRefs:
      - default
-   - inputRefs: <12>
+   - inputRefs: <11>
      - application
      outputRefs:
      - kafka-app
@@ -134,15 +133,14 @@ spec:
 ** The `inputRefs` is the log type, in this example `audit`.
 ** The `outputRefs` is the name of the output to use, in this example `elasticsearch-secure` to forward to the secure Elasticsearch instance and `default` to forward to the internal Elasticsearch instance.
 ** Optional: Labels to add to the logs.
-<8> Optional: Specify whether to forward structured JSON log entries as JSON objects in the `structured` field. The log entry must contain valid structured JSON; otherwise, OpenShift Logging removes the `structured` field and instead sends the log entry to the default index, `app-00000x`.
-<9> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
-<10> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
-<11> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
+<8> Optional: String. One or more labels to add to the logs. Quote values such as "true" so they are recognized as string values, not as booleans.
+<9> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
+<10> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
 ** A name to describe the pipeline.
 ** The `inputRefs` is a specific input: `my-app-logs`.
 ** The `outputRefs` is `default`.
 ** Optional: String. One or more labels to add to the logs.
-<12> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
+<11> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
 ** The `inputRefs` is the log type, in this example `application`.
 ** The `outputRefs` is the name of the output to use.
 ** Optional: String. One or more labels to add to the logs.
diff --git a/modules/cluster-logging-configuration-of-json-log-data-for-default-elasticsearch.adoc b/modules/cluster-logging-configuration-of-json-log-data-for-default-elasticsearch.adoc
index 44ef382c5ed0..e21423adef68 100644
--- a/modules/cluster-logging-configuration-of-json-log-data-for-default-elasticsearch.adoc
+++ b/modules/cluster-logging-configuration-of-json-log-data-for-default-elasticsearch.adoc
@@ -12,11 +12,11 @@ If you forward JSON logs to the default Elasticsearch instance managed by OpenSh
 You can use the following structure types in the `ClusterLogForwarder` CR to construct index names for the Elasticsearch log store:
 
-* `structuredTypeKey` (string, optional) is the name of a message field. The value of that field, if present, is used to construct the index name.
+* `structuredTypeKey` is the name of a message field. The value of that field is used to construct the index name.
 ** `kubernetes.labels.<key>` is the Kubernetes pod label whose value is used to construct the index name.
 ** `openshift.labels.<key>` is the `pipeline.label.<key>` element in the `ClusterLogForwarder` CR whose value is used to construct the index name.
 ** `kubernetes.container_name` uses the container name to construct the index name.
-* `structuredTypeName`: (string, optional) If `structuredTypeKey` is not set or its key is not present, OpenShift Logging uses the value of `structuredTypeName` as the structured type. When you use both `structuredTypeKey` and `structuredTypeName` together, `structuredTypeName` provides a fallback index name if the key in `structuredTypeKey` is missing from the JSON log data.
+* `structuredTypeName`: If the `structuredTypeKey` field is not set or its key is not present, the `structuredTypeName` value is used as the structured type. When you use both the `structuredTypeKey` field and the `structuredTypeName` field together, the `structuredTypeName` value provides a fallback index name if the key in the `structuredTypeKey` field is missing from the JSON log data.
 
 [NOTE]
 ====
diff --git a/modules/cluster-logging-forwarding-json-logs-to-the-default-elasticsearch.adoc b/modules/cluster-logging-forwarding-json-logs-to-the-default-elasticsearch.adoc
index b0ad7359714b..5eb3143f876b 100644
--- a/modules/cluster-logging-forwarding-json-logs-to-the-default-elasticsearch.adoc
+++ b/modules/cluster-logging-forwarding-json-logs-to-the-default-elasticsearch.adoc
@@ -28,13 +28,13 @@ pipelines:
    parse: json
 ----
 
-. Optional: Use `structuredTypeKey` to specify one of the log record fields, as described in the documentation about "Configuring JSON log data for Elasticsearch". Otherwise, remove this line.
+. Use the `structuredTypeKey` field to specify one of the log record fields.
 
-. Optional: Use `structuredTypeName` to specify a `<name>`, as described in the documentation about "Configuring JSON log data for Elasticsearch". Otherwise, remove this line.
+. Use the `structuredTypeName` field to specify a name.
 +
 [IMPORTANT]
 ====
-To parse JSON logs, you must set either `structuredTypeKey` or `structuredTypeName`, or both `structuredTypeKey` and `structuredTypeName`.
+To parse JSON logs, you must set both the `structuredTypeKey` and `structuredTypeName` fields.
 ====
 
 . For `inputRefs`, specify which log types to forward by using that pipeline, such as `application`, `infrastructure`, or `audit`.
@@ -48,7 +48,7 @@ To parse JSON logs, you must set either `structuredTypeKey` or `structuredTypeNa
 $ oc create -f <filename>.yaml
 ----
 +
-The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. However, if they do not redeploy, delete the Fluentd pods to force them to redeploy.
+The Red Hat OpenShift Logging Operator redeploys the collector pods. However, if they do not redeploy, delete the collector pods to force them to redeploy.
 +
 [source,terminal]
 ----
diff --git a/modules/cluster-logging-forwarding-separate-indices.adoc b/modules/cluster-logging-forwarding-separate-indices.adoc
index 9190212d47fe..dc54283cc144 100644
--- a/modules/cluster-logging-forwarding-separate-indices.adoc
+++ b/modules/cluster-logging-forwarding-separate-indices.adoc
@@ -29,7 +29,9 @@ metadata:
 spec:
   outputDefaults:
     elasticsearch:
-      enableStructuredContainerLogs: true <1>
+      structuredTypeKey: kubernetes.labels.logFormat <1>
+      structuredTypeName: nologformat
+      enableStructuredContainerLogs: true <2>
   pipelines:
   - inputRefs:
     - application
@@ -38,7 +40,8 @@ spec:
     - default
     parse: json
 ----
-<1> Enables multi-container outputs.
+<1> Uses the value of the Kubernetes `logFormat` pod label to construct the index name.
+<2> Enables multi-container outputs.
 
 . Create or edit a YAML file that defines the `Pod` CR object:
 +
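For reference, a minimal sketch of the kind of `Pod` object that this step pairs with the `structuredTypeKey: kubernetes.labels.logFormat` setting in the preceding example. The pod name, container image, and the `apache` label value here are hypothetical:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: sample-app # hypothetical pod name
  labels:
    logFormat: apache # value read through structuredTypeKey: kubernetes.labels.logFormat
spec:
  containers:
  - name: app
    image: quay.io/example/apache:latest # hypothetical image
----

With this pairing, structured JSON logs from the pod are indexed under a name derived from the label value (for example, an `app-apache-write` index), while JSON logs from pods that lack a `logFormat` label fall back to the `structuredTypeName` value, `nologformat`.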