diff --git a/modules/log6x-audit-log-filtering.adoc b/modules/log6x-audit-log-filtering.adoc new file mode 100644 index 000000000000..df9fc45d48e5 --- /dev/null +++ b/modules/log6x-audit-log-filtering.adoc @@ -0,0 +1,118 @@ +// Module included in the following assemblies: +// +// * observability/logging/logging-6.0/log6x-clf.adoc + +:_mod-docs-content-type: CONCEPT +[id="log6x-audit-filtering_{context}"] += Overview of API audit filter +OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, checking stops at the first match. How much data is included in an event is determined by the value of the `level` field: + +* `None`: The event is dropped. +* `Metadata`: Audit metadata is included, request and response bodies are removed. +* `Request`: Audit metadata and the request body are included, the response body is removed. +* `RequestResponse`: All data is included: metadata, request body and response body. The response body can be very large. For example, `oc get pods -A` generates a response body containing the YAML description of every pod in the cluster. + +In logging 5.8 and later, the `ClusterLogForwarder` custom resource (CR) uses the same format as the standard link:https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/#audit-policy[Kubernetes audit policy], while providing the following additional functions: + +Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing `\*` asterisk character. For example, namespace `openshift-\*` matches `openshift-apiserver` or `openshift-authentication`. Resource `\*/status` matches `Pod/status` or `Deployment/status`. + +Default Rules:: Events that do not match any rule in the policy are filtered as follows: +* Read-only system events such as `get`, `list`, `watch` are dropped. +* Service account write events that occur within the same namespace as the service account are dropped. +* All other events are forwarded, subject to any configured rate limits. + +To disable these defaults, either end your rules list with a rule that has only a `level` field or add an empty rule. + +Omit Response Codes:: A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the `OmitResponseCodes` field, a list of HTTP status code for which no events are created. The default value is `[404, 409, 422, 429]`. If the value is an empty list, `[]`, then no status codes are omitted. + +The `ClusterLogForwarder` CR audit policy acts in addition to the {product-title} audit policy. The `ClusterLogForwarder` CR audit filter changes what the log collector forwards, and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store, and a less detailed stream to a remote site. + +[NOTE] +==== +The example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. 
+==== + + +.Example audit policy +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +metadata: + name: + namespace: +spec: + pipelines: + - name: my-pipeline + inputRefs: audit #<1> + filterRefs: my-policy #<2> + outputRefs: default + filters: + - name: my-policy + type: kubeAPIAudit + kubeAPIAudit: + # Don't generate audit events for all requests in RequestReceived stage. + omitStages: + - "RequestReceived" + + rules: + # Log pod changes at RequestResponse level + - level: RequestResponse + resources: + - group: "" + resources: ["pods"] + + # Log "pods/log", "pods/status" at Metadata level + - level: Metadata + resources: + - group: "" + resources: ["pods/log", "pods/status"] + + # Don't log requests to a configmap called "controller-leader" + - level: None + resources: + - group: "" + resources: ["configmaps"] + resourceNames: ["controller-leader"] + + # Don't log watch requests by the "system:kube-proxy" on endpoints or services + - level: None + users: ["system:kube-proxy"] + verbs: ["watch"] + resources: + - group: "" # core API group + resources: ["endpoints", "services"] + + # Don't log authenticated requests to certain non-resource URL paths. + - level: None + userGroups: ["system:authenticated"] + nonResourceURLs: + - "/api*" # Wildcard matching. + - "/version" + + # Log the request body of configmap changes in kube-system. + - level: Request + resources: + - group: "" # core API group + resources: ["configmaps"] + # This rule only applies to resources in the "kube-system" namespace. + # The empty string "" can be used to select non-namespaced resources. + namespaces: ["kube-system"] + + # Log configmap and secret changes in all other namespaces at the Metadata level. + - level: Metadata + resources: + - group: "" # core API group + resources: ["secrets", "configmaps"] + + # Log all other resources in core and extensions at the Request level. + - level: Request + resources: + - group: "" # core API group + - group: "extensions" # Version of group should NOT be included. + + # A catch-all rule to log all other requests at the Metadata level. + - level: Metadata +---- +<1> The log types that are collected. The value for this field can be `audit` for audit logs, `application` for application logs, `infrastructure` for infrastructure logs, or a named input that has been defined for your application. +<2> The name of your audit policy. diff --git a/modules/log6x-content-filter-drop-records.adoc b/modules/log6x-content-filter-drop-records.adoc new file mode 100644 index 000000000000..2e9fe987b954 --- /dev/null +++ b/modules/log6x-content-filter-drop-records.adoc @@ -0,0 +1,108 @@ +// Module included in the following assemblies: +// +// * observability/logging/logging-6.0/log6x-clf.adoc + +:_mod-docs-content-type: PROCEDURE +[id="log6x-content-filter-drop-records_{context}"] += Configuring content filters to drop unwanted log records + +When the `drop` filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. + +.Prerequisites + +* You have installed the {clo}. +* You have administrator permissions. +* You have created a `ClusterLogForwarder` custom resource (CR). + +.Procedure + +. Add a configuration for a filter to the `filters` spec in the `ClusterLogForwarder` CR. 
++ +The following example shows how to configure the `ClusterLogForwarder` CR to drop log records based on regular expressions: ++ +.Example `ClusterLogForwarder` CR +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +metadata: +# ... +spec: + filters: + - name: + type: drop # <1> + drop: # <2> + test: # <3> + - field: .kubernetes.labels."foo-bar/baz" # <4> + matches: .+ # <5> + - field: .kubernetes.pod_name + notMatches: "my-pod" # <6> + pipelines: + - name: # <7> + filterRefs: [""] +# ... +---- +<1> Specifies the type of filter. The `drop` filter drops log records that match the filter configuration. +<2> Specifies configuration options for applying the `drop` filter. +<3> Specifies the configuration for tests that are used to evaluate whether a log record is dropped. +** If all the conditions specified for a test are true, the test passes and the log record is dropped. +** When multiple tests are specified for the `drop` filter configuration, if any of the tests pass, the record is dropped. +** If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. +<4> Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (`a-zA-Z0-9_`), for example, `.kubernetes.namespace_name`. If segments contain characters outside of this range, the segment must be in quotes, for example, `.kubernetes.labels."foo.bar-bar/baz"`. You can include multiple field paths in a single `test` configuration, but they must all evaluate to true for the test to pass and the `drop` filter to be applied. +<5> Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the `matches` or `notMatches` condition for a single `field` path, but not both. +<6> Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the `matches` or `notMatches` condition for a single `field` path, but not both. +<7> Specifies the pipeline that the `drop` filter is applied to. + +. Apply the `ClusterLogForwarder` CR by running the following command: ++ +[source,terminal] +---- +$ oc apply -f .yaml +---- + +.Additional examples + +The following additional example shows how you can configure the `drop` filter to only keep higher priority log records: + +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +metadata: +# ... +spec: + filters: + - name: important + type: drop + drop: + test: + - field: .message + notMatches: "(?i)critical|error" + - field: .level + matches: "info|warning" +# ... +---- + +In addition to including multiple field paths in a single `test` configuration, you can also include additional tests that are treated as _OR_ checks. In the following example, records are dropped if either `test` configuration evaluates to true. However, for the second `test` configuration, both field specs must be true for it to be evaluated to true: + +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +metadata: +# ... +spec: + filters: + - name: important + type: drop + drop: + test: + - field: .kubernetes.namespace_name + matches: "^open" + test: + - field: .log_type + matches: "application" + - field: .kubernetes.pod_name + notMatches: "my-pod" +# ... 
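+# In this example, a record is dropped when the first test matches, that is, the
+# namespace name starts with "open", or when both conditions in the second test are
+# true, that is, the log type is "application" and the pod name is not "my-pod".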
+---- diff --git a/modules/log6x-content-filter-prune-records.adoc b/modules/log6x-content-filter-prune-records.adoc new file mode 100644 index 000000000000..ed92dcdc01f4 --- /dev/null +++ b/modules/log6x-content-filter-prune-records.adoc @@ -0,0 +1,58 @@ +// Module included in the following assemblies: +// +// * observability/logging/logging-6.0/log6x-clf.adoc + +:_mod-docs-content-type: PROCEDURE +[id="log6x-content-filter-prune-records_{context}"] += Configuring content filters to prune log records + +When the `prune` filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. + +.Prerequisites + +* You have installed the {clo}. +* You have administrator permissions. +* You have created a `ClusterLogForwarder` custom resource (CR). + +.Procedure + +. Add a configuration for a filter to the `prune` spec in the `ClusterLogForwarder` CR. ++ +The following example shows how to configure the `ClusterLogForwarder` CR to prune log records based on field paths: ++ +[IMPORTANT] +==== +If both are specified, records are pruned based on the `notIn` array first, which takes precedence over the `in` array. After records have been pruned by using the `notIn` array, they are then pruned by using the `in` array. +==== ++ +.Example `ClusterLogForwarder` CR +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +metadata: +# ... +spec: + filters: + - name: + type: prune # <1> + prune: # <2> + in: [.kubernetes.annotations, .kubernetes.namespace_id] # <3> + notIn: [.kubernetes,.log_type,.message,."@timestamp"] # <4> + pipelines: + - name: # <5> + filterRefs: [""] +# ... +---- +<1> Specify the type of filter. The `prune` filter prunes log records by configured fields. +<2> Specify configuration options for applying the `prune` filter. The `in` and `notIn` fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (`a-zA-Z0-9_`), for example, `.kubernetes.namespace_name`. If segments contain characters outside of this range, the segment must be in quotes, for example, `.kubernetes.labels."foo.bar-bar/baz"`. +<3> Optional: Any fields that are specified in this array are removed from the log record. +<4> Optional: Any fields that are not specified in this array are removed from the log record. +<5> Specify the pipeline that the `prune` filter is applied to. + +. Apply the `ClusterLogForwarder` CR by running the following command: ++ +[source,terminal] +---- +$ oc apply -f .yaml +---- diff --git a/modules/log6x-delivery-tuning.adoc b/modules/log6x-delivery-tuning.adoc new file mode 100644 index 000000000000..c68b25a511d6 --- /dev/null +++ b/modules/log6x-delivery-tuning.adoc @@ -0,0 +1,108 @@ +// Module included in the following assemblies: +// +// * observability/logging/logging-6.0/log6x-clf.adoc + +:_mod-docs-content-type: REFERENCE +[id="log6x-delivery-tuning_{context}"] += Tuning log payloads and delivery + +In {logging} 5.9 and newer versions, the `tuning` spec in the `ClusterLogForwarder` custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs. 
+ +For example, if you need to reduce the possibility of log loss when the collector restarts, or you require collected log messages to survive a collector restart to support regulatory mandates, you can tune your deployment to prioritize log durability. If you use outputs that have hard limitations on the size of batches they can receive, you may want to tune your deployment to prioritize log throughput. + +[IMPORTANT] +==== +To use this feature, your {logging} deployment must be configured to use the Vector collector. The `tuning` spec in the `ClusterLogForwarder` CR is not supported when using the Fluentd collector. +==== + +The following example shows the `ClusterLogForwarder` CR options that you can modify to tune log forwarder outputs: + +.Example `ClusterLogForwarder` CR tuning options +[source,yaml] +---- +apiVersion: "observability.openshift.io/v1" +kind: ClusterLogForwarder +metadata: +# ... +spec: + outputs: + - name: + type: + : + tuning: + delivery: atLeastOnce # <1> + maxWrite: # <2> + compression: none # <3> + minRetryDuration: 1s # <4> + maxRetryDuration: 1s # <5> +# ... +---- +<1> Specify the delivery mode for log forwarding. +** `AtLeastOnce` delivery means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash. +** `AtMostOnce` delivery means that the log forwarder makes no effort to recover logs lost during a crash. This mode gives better throughput, but may result in greater log loss. +<2> Specifying a `compression` configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. The possible values for this configuration are `none` for no compression, `gzip`, `snappy`, `zlib`, or `zstd`. `lz4` compression is also available if you are using a Kafka output. See the table "Supported compression types for tuning outputs" for more information. +<3> Specifies a limit for the maximum payload of a single send operation to the output. +<4> Specifies a minimum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (`ms`), seconds (`s`), or minutes (`m`). +<5> Specifies a maximum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (`ms`), seconds (`s`), or minutes (`m`). 
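+
+For example, a Kafka output that favors durability over throughput might be tuned as follows. This is a minimal sketch rather than a recommended configuration: the forwarder name, namespace, output name, broker URL, topic, and retry durations are placeholder values, and unrelated fields are omitted.
+
+.Example tuning for a Kafka output
+[source,yaml]
+----
+apiVersion: observability.openshift.io/v1
+kind: ClusterLogForwarder
+metadata:
+  name: my-forwarder # placeholder name
+  namespace: openshift-logging # placeholder namespace
+spec:
+  outputs:
+  - name: kafka-example # placeholder output name
+    type: kafka
+    kafka:
+      url: tls://kafka.example.com:9093 # placeholder broker URL
+      topic: app-logs # placeholder topic
+      tuning:
+        delivery: atLeastOnce # favor durability: logs read but not delivered before a crash are re-sent
+        compression: snappy # snappy is supported for Kafka outputs
+        minRetryDuration: 1s
+        maxRetryDuration: 30s
+# ...
+----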
+ +.Supported compression types for tuning outputs +[options="header"] +|=== +|Compression algorithm |Splunk |Amazon Cloudwatch |Elasticsearch 8 |LokiStack |Apache Kafka |HTTP |Syslog |Google Cloud |Microsoft Azure Monitoring + +|`gzip` +|X +|X +|X +|X +| +|X +| +| +| + +|`snappy` +| +|X +| +|X +|X +|X +| +| +| + +|`zlib` +| +|X +|X +| +| +|X +| +| +| + +|`zstd` +| +|X +| +| +|X +|X +| +| +| + +|`lz4` +| +| +| +| +|X +| +| +| +| + +|=== diff --git a/modules/log6x-forwarder-feature.adoc b/modules/log6x-forwarder-feature.adoc new file mode 100644 index 000000000000..bdc60fb2af96 --- /dev/null +++ b/modules/log6x-forwarder-feature.adoc @@ -0,0 +1,15 @@ +// Module included in the following assemblies: +// +// * observability/logging/logging-6.0/log6x-clf.adoc + +:_mod-docs-content-type: CONCEPT +[id="log6x-forwarder-feature_{context}"] += Log forwarder feature + +The log forwarder feature provides the following functionality: + +* Administrators can control which users are allowed to define log collection and which logs they are allowed to collect. +* Users who have the required permissions are able to specify additional log collection configurations. +* Administrators who are migrating from the deprecated Fluentd collector to the Vector collector can deploy a new log forwarder separately from their existing deployment. The existing and new log forwarders can operate simultaneously while workloads are being migrated. + +You can create `ClusterLogForwarder` resources using any name and in any namespace. diff --git a/modules/log6x-input-spec-filter-audit-infrastructure.adoc b/modules/log6x-input-spec-filter-audit-infrastructure.adoc new file mode 100644 index 000000000000..36b70a4f8719 --- /dev/null +++ b/modules/log6x-input-spec-filter-audit-infrastructure.adoc @@ -0,0 +1,60 @@ +// Module included in the following assemblies: +// +// * observability/logging/logging-6.0/log6x-clf.adoc + +:_mod-docs-content-type: PROCEDURE +[id="log6x-input-spec-filter-audit-infrastructure_{context}"] += Filtering the audit and infrastructure log inputs by source + +You can define the list of `audit` and `infrastructure` sources to collect the logs by using the `input` selector. + +.Prerequisites + +* You have installed the {clo}. +* You have administrator permissions. +* You have created a `ClusterLogForwarder` custom resource (CR). + +.Procedure + +. Add a configuration to define the `audit` and `infrastructure` sources in the `ClusterLogForwarder` CR. + ++ +The following example shows how to configure the `ClusterLogForwarder` CR to define `aduit` and `infrastructure` sources: ++ +.Example `ClusterLogForwarder` CR ++ +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +# ... +spec: + inputs: + - name: mylogs1 + infrastructure: + sources: # <1> + - node + - name: mylogs2 + audit: + sources: # <2> + - kubeAPI + - openshiftAPI + - ovn +# ... +---- +<1> Specifies the list of infrastructure sources to collect. The valid sources include: +** `node`: Journal log from the node +** `container`: Logs from the workloads deployed in the namespaces +<2> Specifies the list of audit sources to collect. The valid sources include: +** `kubeAPI`: Logs from the Kubernetes API servers +** `openshiftAPI`: Logs from the OpenShift API servers +** `auditd`: Logs from a node auditd service +** `ovn`: Logs from an open virtual network service + +. 
Apply the `ClusterLogForwarder` CR by running the following command: + ++ +[source,terminal] +---- +$ oc apply -f .yaml +---- \ No newline at end of file diff --git a/modules/log6x-input-spec-filter-labels-expressions.adoc b/modules/log6x-input-spec-filter-labels-expressions.adoc new file mode 100644 index 000000000000..fef1ed332d4b --- /dev/null +++ b/modules/log6x-input-spec-filter-labels-expressions.adoc @@ -0,0 +1,57 @@ +// Module included in the following assemblies: +// +// * observability/logging/logging-6.0/log6x-clf.adoc + +:_mod-docs-content-type: PROCEDURE +[id="log6x-input-spec-filter-labels-expressions_{context}"] += Filtering application logs at input by including the label expressions or a matching label key and values + +You can include the application logs based on the label expressions or a matching label key and its values by using the `input` selector. + +.Prerequisites + +* You have installed the {clo}. +* You have administrator permissions. +* You have created a `ClusterLogForwarder` custom resource (CR). + +.Procedure + +. Add a configuration for a filter to the `input` spec in the `ClusterLogForwarder` CR. ++ +The following example shows how to configure the `ClusterLogForwarder` CR to include logs based on label expressions or matched label key/values: ++ +.Example `ClusterLogForwarder` CR +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +# ... +spec: + inputs: + - name: mylogs + application: + selector: + matchExpressions: + - key: env # <1> + operator: In # <2> + values: [“prod”, “qa”] # <3> + - key: zone + operator: NotIn + values: [“east”, “west”] + matchLabels: # <4> + app: one + name: app1 +# ... +---- +<1> Specifies the label key to match. +<2> Specifies the operator. Valid values include: `In`, `NotIn`, `Exists`, and `DoesNotExist`. +<3> Specifies an array of string values. If the `operator` value is either `Exists` or `DoesNotExist`, the value array must be empty. +<4> Specifies an exact key or value mapping. + +. Apply the `ClusterLogForwarder` CR by running the following command: + ++ +[source,terminal] +---- +$ oc apply -f .yaml +---- \ No newline at end of file diff --git a/modules/log6x-input-spec-filter-namespace-container.adoc b/modules/log6x-input-spec-filter-namespace-container.adoc new file mode 100644 index 000000000000..42a03c16f548 --- /dev/null +++ b/modules/log6x-input-spec-filter-namespace-container.adoc @@ -0,0 +1,57 @@ +// Module included in the following assemblies: +// +// * observability/logging/logging-6.0/log6x-clf.adoc + +:_mod-docs-content-type: PROCEDURE +[id="log6x-input-spec-filter-namespace-container_{context}"] += Filtering application logs at input by including or excluding the namespace or container name + +You can include or exclude the application logs based on the namespace and container name by using the `input` selector. + +.Prerequisites + +* You have installed the {clo}. +* You have administrator permissions. +* You have created a `ClusterLogForwarder` custom resource (CR). + +.Procedure + +. Add a configuration to include or exclude the namespace and container names in the `ClusterLogForwarder` CR. ++ +The following example shows how to configure the `ClusterLogForwarder` CR to include or exclude namespaces and container names: ++ +.Example `ClusterLogForwarder` CR +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +# ... 
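+# The excludes configuration takes precedence over the includes configuration.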
+spec: + inputs: + - name: mylogs + application: + includes: + - namespace: "my-project" # <1> + container: "my-container" # <2> + excludes: + - container: "other-container*" # <3> + namespace: "other-namespace" # <4> +# ... +---- +<1> Specifies that the logs are only collected from these namespaces. +<2> Specifies that the logs are only collected from these containers. +<3> Specifies the pattern of namespaces to ignore when collecting the logs. +<4> Specifies the set of containers to ignore when collecting the logs. + +. Apply the `ClusterLogForwarder` CR by running the following command: + ++ +[source,terminal] +---- +$ oc apply -f .yaml +---- + +[NOTE] +==== +The `excludes` option takes precedence over `includes`. +==== diff --git a/modules/log6x-logging-compatibility-support-matrix.adoc b/modules/log6x-logging-compatibility-support-matrix.adoc new file mode 100644 index 000000000000..8f255e0e2528 --- /dev/null +++ b/modules/log6x-logging-compatibility-support-matrix.adoc @@ -0,0 +1,28 @@ +[id="log6x-logging-compatibility-support-matrix_{context}"] += Compatibility and support matrix + +In the table, components are marked with the following statuses: + +[horizontal] +TP:: Technology Preview +GA:: General Availability + +The components or features in link:https://access.redhat.com/support/offerings/techpreview[Technology Preview] status are experimental and are not intended for production use. + +[NOTE] +==== +To know about compatible {product-title} versions for {logging-uc} releases, see link:https://access.redhat.com/product-life-cycles?product=Red%20Hat%20OpenShift%20Logging[Product Life Cycles]. +==== + +.Components support matrix +[options="header"] +|=== + +| {logging-uc} Version 6+| Component Version + +| Operator | `eventrouter` | `logfilemetricexplorer` | `loki` | `lokistack-gateway` | `opa-openshift` | `vector` + +|6.0 | 0.4 (GA) | v1.1 (GA) | v3.1.0 (GA) | v0.1 (GA) | v0.1 (GA) | v0.37.x (GA) + +|=== + diff --git a/modules/log6x-multiline-except.adoc b/modules/log6x-multiline-except.adoc new file mode 100644 index 000000000000..e8b79afd69b0 --- /dev/null +++ b/modules/log6x-multiline-except.adoc @@ -0,0 +1,124 @@ +// Module included in the following assemblies: +// +// * observability/logging/logging-6.0/log6x-clf.adoc + +:_mod-docs-content-type: PROCEDURE +[id="log6x-multiline-except_{context}"] += Enabling multi-line exception detection + +Enables multi-line error detection of container logs. + +[WARNING] +==== +Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. +==== + +Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. + +.Example java exception +[,text] +---- +java.lang.NullPointerException: Cannot invoke "String.toString()" because "" is null + at testjava.Main.handle(Main.java:47) + at testjava.Main.printMe(Main.java:19) + at testjava.Main.main(Main.java:10) +---- + +* To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the `ClusterLogForwarder` Custom Resource (CR) contains a `detectMultilineErrors` field, with a value of `true`. 
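+In the example that follows, this feature is expressed as a filter with `type: detectMultilineException` that a pipeline references through its `filterRefs` field.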
+ +.Example ClusterLogForwarder CR +[source,yaml] +---- +apiVersion: "observability.openshift.io/v1" +kind: ClusterLogForwarder +metadata: + name: + namespace: +spec: + filters: + - name: + type: detectMultilineException + pipelines: + - inputRefs: + - + name: + filterRefs: + - + outputRefs: + - +---- + +== Details +When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message's content is replaced with the concatenated content of all the message fields in the sequence. + +.Supported languages per collector: +|=== +|Language | Collector + +|Java | ✓ +|JS | ✓ +|Ruby | ✓ +|Python | ✓ +|Golang | ✓ +|PHP | ✓ +|Dart | ✓ +|=== + +== Troubleshooting +When enabled, the collector configuration will include a new section with type: `detect_exceptions` + +.Example vector configuration section +---- +[transforms.detect_exceptions_app-logs] + type = "detect_exceptions" + inputs = ["application"] + languages = ["All"] + group_by = ["kubernetes.namespace_name","kubernetes.pod_name","kubernetes.container_name"] + expire_after_ms = 2000 + multiline_flush_interval_ms = 1000 +---- + +// OBSDOCS-1104 +== Modify log level in collector + +You can modify the log level in the collector by setting it to `debug`. + +.Example ClusterLogForwarder CR +[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +metadata: + name: collector + annotations: + observability.openshift.io/log-level: debug +spec: + managementState: Managed + outputs: + - lokiStack: + authentication: + token: + from: serviceAccount + target: + name: lokistack-dev + namespace: openshift-logging + tuning: + compression: gzip + delivery: atLeastOnce + name: lokistack + tls: + ca: + key: service-ca.crt + configMapName: openshift-service-ca.crt + type: lokiStack + pipelines: + - inputRefs: + - application + - infrastructure + - audit + name: forward-to-lokistack + outputRefs: + - lokistack + serviceAccount: + name: my-loki-sa +---- \ No newline at end of file diff --git a/modules/logging-delivery-tuning.adoc b/modules/logging-delivery-tuning.adoc index 49db092ad44a..866056d19398 100644 --- a/modules/logging-delivery-tuning.adoc +++ b/modules/logging-delivery-tuning.adoc @@ -20,17 +20,21 @@ The following example shows the `ClusterLogForwarder` CR options that you can mo .Example `ClusterLogForwarder` CR tuning options [source,yaml] ---- -apiVersion: logging.openshift.io/v1 +apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: # ... spec: - tuning: - delivery: AtLeastOnce # <1> - compression: none # <2> - maxWrite: # <3> - minRetryDuration: 1s # <4> - maxRetryDuration: 1s # <5> + outputs: + - name: + type: + : + tuning: + delivery: atLeastOnce # <1> + maxWrite: # <2> + compression: none # <3> + minRetryDuration: 1s # <4> + maxRetryDuration: 1s # <5> # ... ---- <1> Specify the delivery mode for log forwarding. diff --git a/observability/logging/cluster-logging.adoc b/observability/logging/cluster-logging.adoc index ea791ff07bd8..738bb4fb90fc 100644 --- a/observability/logging/cluster-logging.adoc +++ b/observability/logging/cluster-logging.adoc @@ -26,6 +26,10 @@ include::modules/logging-architecture-overview.adoc[leveloffset=+1] .Additional resources * xref:../../observability/logging/log_visualization/log-visualization-ocp-console.adoc#log-visualization-ocp-console[Log visualization with the web console] +// this module is to be included in the Logging 6.0 Release Notes. 
See https://docs.openshift.com/pipelines/1.15/about/op-release-notes.html#compatibility-support-matrix_op-release-notes for reference. + +include::modules/log6x-logging-compatibility-support-matrix.adoc[leveloffset=+1] + include::modules/cluster-logging-about.adoc[leveloffset=+1] ifdef::openshift-rosa,openshift-dedicated[] diff --git a/observability/logging/log_storage/about-log-storage.adoc b/observability/logging/log_storage/about-log-storage.adoc index e906087a714f..25257ccec92f 100644 --- a/observability/logging/log_storage/about-log-storage.adoc +++ b/observability/logging/log_storage/about-log-storage.adoc @@ -12,7 +12,9 @@ You can use an internal Loki or Elasticsearch log store on your cluster for stor [id="log-storage-overview-types"] == Log storage types -include::snippets/logging-loki-statement-snip.adoc[] +Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as an alternative to Elasticsearch as a log store for the {logging}. + +Elasticsearch indexes incoming log records completely during ingestion. Loki only indexes a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. include::modules/cluster-logging-about-es-logstore.adoc[leveloffset=+2] diff --git a/observability/logging/logging-6.0/log6x-clf.adoc b/observability/logging/logging-6.0/log6x-clf.adoc index ca8d464d571d..318a02eadda7 100644 --- a/observability/logging/logging-6.0/log6x-clf.adoc +++ b/observability/logging/logging-6.0/log6x-clf.adoc @@ -5,3 +5,184 @@ include::_attributes/common-attributes.adoc[] :context: logging-6x toc::[] + +The `ClusterLogForwarder` CR serves as a single point of configuration for log forwarding, making it easier to manage and maintain log collection and forwarding rules. + +* Defines inputs (sources) for log collection +* Specifies outputs (destinations) for log forwarding +* Configures filters for log processing +* Defines pipelines to route logs from inputs to outputs +* Indicates management state (managed or unmanaged) + +// Needs engineering eval re applicability to 6.0 +include::modules/log6x-forwarder-feature.adoc[leveloffset=+1] + +// This needs to be called out extensively. 
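+
+The following example shows the overall structure of the `ClusterLogForwarder` CR, including the fields that are available for inputs, filters, outputs, and pipelines. Many values are left empty; replace them with values that apply to your deployment.
+
+.Example `ClusterLogForwarder` CR structure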
+[source,yaml] +---- +apiVersion: observability.openshift.io/v1 +kind: ClusterLogForwarder +metadata: + name: +spec: + managementState: + collector: + nodeSelector: + resources: + tolerations: + inputs: + - application: + excludes: + includes: + selector: + tuning: + name: + type: + - infrastructure: + sources: + - container + - node + name: + type: infrastructure + - audit: + sources: + - auditd + - kubeAPI + - openshiftAPI + - ovn + name: + type: audit + - receiver: + http: + port: + tls: + type: + name: + type: receiver + filters: + - name: + type: detectMultilineException + - name: + type: parse + - openShiftLabels: + : + name: + type: openShiftLabels + - drop: + - test: + type: drop + - kubeAPIAudit: + omitResponseCodes: + omitStages: + rules: + name: + type: kubeAPIAudit + - prune: + in: + notin: + name: + type: prune + outputs: + - azureMonitor: + authentication: + sharedKey: + customerId: + name: + type: azureMonitor + - cloudwatch: + authentication: + type: + groupName: {.log_type||"default"} + region: us-east-2 + name: + type: cloudwatch + - elasticsearch: + index: {.log_type|"default"} + url: + name: + type: elasticsearch + - http: + authentication: + username: + password: + token: + url: + name: + type: http + - kafka: + authentication: + sasl: + brokers: + topic: + url: + name: + type: kafka + - loki: + authentication: + labelKeys: + tenantKey: + url: + name: + type: loki + - lokiStack: + authentication: + target: + name: + type: lokiStack + - googleCloudLogging: + authentication: + id: + logId: + name: + type: googleCloudLogging + - splunk: + authentication: + index: '{.log_type|""}' + url: + name: + type: splunk + - syslog + rfc: + url: + name: + type: splunk + - otlp: + authentication: + url: + name: + type: splunk + pipelines: + - inputRefs: + - + - application + - infrastructure + - audit + name: + filterRefs: + - + - + outputRefs: + - + serviceAccount: + name: +---- + +// All of these need to be validated by engineering for 6.0. +include::modules/log6x-audit-log-filtering.adoc[leveloffset=+1] + +include::modules/log6x-content-filter-drop-records.adoc[leveloffset=+1] + +include::modules/log6x-content-filter-prune-records.adoc[leveloffset=+1] + +include::modules/log6x-input-spec-filter-audit-infrastructure.adoc[leveloffset=+1] + +include::modules/log6x-input-spec-filter-labels-expressions.adoc[leveloffset=+1] + +include::modules/log6x-input-spec-filter-namespace-container.adoc[leveloffset=+1] + +include::modules/log6x-multiline-except.adoc[leveloffset=+1] + +[id="log6x-CLF-samples_{context}"] +== Use case samples +// Should be an include, create: modules/log6x-use-metric-export.adoc - Associated JIRA? +* Log File Metric Exporter as a Separate Deployment \ No newline at end of file