2 changes: 1 addition & 1 deletion logging/cluster-logging.adoc
@@ -16,7 +16,7 @@ OpenShift Logging aggregates the following types of logs:

* `application` - Container logs generated by user applications running in the cluster, except infrastructure container applications.
* `infrastructure` - Logs generated by infrastructure components running in the cluster and {product-title} nodes, such as journal logs. Infrastructure components are pods that run in the `openshift*`, `kube*`, or `default` projects.
* `audit` - Logs generated by auditd, the node audit system, which are stored in the */var/log/audit/audit.log* file, and the audit logs from the Kubernetes apiserver and the OpenShift apiserver.

[NOTE]
====
@@ -1,15 +1,13 @@
[id="cluster-logging-configuration-of-json-log-data-for-default-elasticsearch_{context}"]
= Configuring JSON log data for Elasticsearch

- When forwarding JSON logs to an Elasticsearch log store, you must create an index for each format if the JSON log entries _have different formats_.
+ If your JSON logs follow more than one schema, storing them in a single index might cause type conflicts and cardinality problems. To avoid this, you must configure the `ClusterLogForwarder` custom resource (CR) to group each schema into its own output definition. This way, each schema is forwarded to a separate index.

[IMPORTANT]
====
- You must create a separate index for each different JSON log format. Otherwise, forwarding different formats to the same index can cause type conflicts and cardinality problems.
+ If you forward JSON logs to the default Elasticsearch instance managed by OpenShift Logging, it generates new indices based on your configuration. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.
====

To provide a different index for each format, you configure the `ClusterLogForwarder` custom resource (CR), using a structure type to construct the index name.

.Structure types

You can use the following structure types in the `ClusterLogForwarder` CR to construct index names for the Elasticsearch log store:
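As an illustrative sketch (the label key `kubernetes.labels.logFormat` and the fallback name are example values, not taken from this change), a `ClusterLogForwarder` CR that derives the Elasticsearch index name from a structure type might look like:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputDefaults:
    elasticsearch:
      structuredTypeKey: kubernetes.labels.logFormat  # index name is derived from this record field
      structuredTypeName: nologformat                 # fallback index name when the key is missing
  pipelines:
  - name: structured-pipeline
    inputRefs:
    - application
    outputRefs:
    - default
    parse: json
```

Records that carry the same value for the structure-type key land in the same index, so each distinct schema maps to its own index.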
@@ -1,7 +1,14 @@
[id="cluster-logging-forwarding-json-logs-to-the-default-elasticsearch_{context}"]
= Forwarding JSON logs to the Elasticsearch log store

- For the Elasticsearch log store that OpenShift Logging manages, you must create a different index for each format in advance if your JSON log entries _have different formats_. Otherwise, forwarding different formats to the same index can cause type conflicts and cardinality problems.
+ For an Elasticsearch log store, if your JSON log entries _follow different schemas_, configure the `ClusterLogForwarder` custom resource (CR) to group each JSON schema into a single output definition. This way, Elasticsearch uses a separate index for each schema.

[IMPORTANT]
====
Because forwarding different schemas to the same index can cause type conflicts and cardinality problems, you must perform this configuration before you forward data to the Elasticsearch store.

To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas.
====
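One way to group a schema into its own output definition is to select pods by label. A minimal sketch, assuming pods that emit the same schema share a hypothetical `logFormat: apache` label (the input and pipeline names are also illustrative):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
  - name: apache-logs            # groups the pods that share one JSON schema
    application:
      selector:
        matchLabels:
          logFormat: apache
  pipelines:
  - name: apache-pipeline
    inputRefs:
    - apache-logs
    outputRefs:
    - default
    parse: json                  # parse the JSON payload before forwarding
```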

.Procedure

2 changes: 1 addition & 1 deletion modules/cluster-logging-json-log-forwarding.adoc
@@ -1,7 +1,7 @@
[id="cluster-logging-json-log-forwarding_{context}"]
= Parsing JSON logs

- Logs including JSON logs are usually represented as a string inside the `message` field. That makes it hard for users to query specific fields inside a JSON document. OpenShift Logging's Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either Red Hat's managed Elasticsearch or any other third-party system supported by the Log Forwarding API.
+ Logs including JSON logs are usually represented as a string inside the `message` field. That makes it hard for users to query specific fields inside a JSON document. OpenShift Logging's Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either OpenShift Logging-managed Elasticsearch or any other third-party system supported by the Log Forwarding API.
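As a sketch of the before-and-after shape (the exact record layout produced by the collector may differ), the JSON payload arrives as an escaped string in `message`, and parsing adds a structured copy whose fields can be queried directly:

```json
{
  "message": "{\"level\":\"info\",\"user\":\"fred\"}",
  "structured": {
    "level": "info",
    "user": "fred"
  }
}
```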

To illustrate how this works, suppose that you have the following structured JSON log entry.

2 changes: 0 additions & 2 deletions modules/cluster-logging-maintenance-support-list.adoc
@@ -27,8 +27,6 @@ Explicitly unsupported cases include:

* *Throttling log collection*. You cannot throttle the rate at which the log collector reads logs.

- * *Configuring log collection JSON parsing*. You cannot format log messages in JSON.

* *Configuring the logging collector using environment variables*. You cannot use environment variables to modify the log collector.

* *Configuring how the log collector normalizes logs*. You cannot modify default log normalization.