118 changes: 118 additions & 0 deletions modules/log6x-audit-log-filtering.adoc
@@ -0,0 +1,118 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-clf.adoc

:_mod-docs-content-type: CONCEPT
[id="log6x-audit-filtering_{context}"]
= Overview of API audit filter
OpenShift API servers generate an audit event for each API call, detailing the request, the response, and the identity of the requester, which leads to large volumes of data. The API audit filter uses rules to exclude non-essential events and to reduce event size, producing a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data included in an event is determined by the value of the `level` field:

* `None`: The event is dropped.
* `Metadata`: Audit metadata is included, request and response bodies are removed.
* `Request`: Audit metadata and the request body are included, the response body is removed.
* `RequestResponse`: All data is included: metadata, request body and response body. The response body can be very large. For example, `oc get pods -A` generates a response body containing the YAML description of every pod in the cluster.

In logging 6.0, the `ClusterLogForwarder` custom resource (CR) uses the same format as the standard link:https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/#audit-policy[Kubernetes audit policy], while providing the following additional functions:

Wildcards:: Names of users, groups, namespaces, and resources can have a leading or trailing `\*` asterisk character. For example, namespace `openshift-\*` matches `openshift-apiserver` or `openshift-authentication`. Resource `\*/status` matches `Pod/status` or `Deployment/status`.

Default Rules:: Events that do not match any rule in the policy are filtered as follows:
* Read-only system events such as `get`, `list`, `watch` are dropped.
* Service account write events that occur within the same namespace as the service account are dropped.
* All other events are forwarded, subject to any configured rate limits.

To disable these defaults, either end your rules list with a rule that has only a `level` field or add an empty rule.

Omit Response Codes:: You can drop events based on the HTTP status code in the response by using the `OmitResponseCodes` field, a list of HTTP status codes for which no events are created. The default value is `[404, 409, 422, 429]`. If the value is an empty list, `[]`, no status codes are omitted.
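
For example, the following minimal sketch assumes the camel-case `omitResponseCodes` spelling used in YAML and that you want to omit only `404` and `409` responses:

[source,yaml]
----
filters:
- name: my-policy
  type: kubeAPIAudit
  kubeAPIAudit:
    # Do not create audit events for requests whose responses return 404 or 409.
    omitResponseCodes: [404, 409]
----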

The `ClusterLogForwarder` CR audit policy acts in addition to the {product-title} audit policy. The `ClusterLogForwarder` CR audit filter changes what the log collector forwards, and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store, and a less detailed stream to a remote site.
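
For example, the following sketch, with hypothetical filter, pipeline, and output names, forwards a full-detail stream to one output and a metadata-only stream to another. The output definitions are omitted:

[source,yaml]
----
filters:
- name: detailed-policy
  type: kubeAPIAudit
  kubeAPIAudit:
    rules:
    - level: RequestResponse # keep request and response bodies
- name: summary-policy
  type: kubeAPIAudit
  kubeAPIAudit:
    rules:
    - level: Metadata # keep metadata only
pipelines:
- name: audit-to-local
  inputRefs: [audit]
  filterRefs: [detailed-policy]
  outputRefs: [local-store] # hypothetical output names
- name: audit-to-remote
  inputRefs: [audit]
  filterRefs: [summary-policy]
  outputRefs: [remote-site]
----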

[NOTE]
====
You must have the `collect-audit-logs` cluster role to collect audit logs. For more information, see link:https://docs.openshift.com/container-platform/4.16/observability/logging/log_collection_forwarding/log-forwarding.html#log-collection-rbac-permissions_log-forwarding[Authorizing log collection RBAC permissions]. The following example is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration.
====

.Example audit policy
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name>
  pipelines:
  - name: my-pipeline
    inputRefs: [audit] # <1>
    filterRefs: [my-policy] # <2>
  filters:
  - name: my-policy
    type: kubeAPIAudit
    kubeAPIAudit:
      # Don't generate audit events for any request in the RequestReceived stage.
      omitStages:
      - "RequestReceived"

      rules:
      # Log pod changes at the RequestResponse level.
      - level: RequestResponse
        resources:
        - group: ""
          resources: ["pods"]

      # Log "pods/log" and "pods/status" at the Metadata level.
      - level: Metadata
        resources:
        - group: ""
          resources: ["pods/log", "pods/status"]

      # Don't log requests to a config map called "controller-leader".
      - level: None
        resources:
        - group: ""
          resources: ["configmaps"]
          resourceNames: ["controller-leader"]

      # Don't log watch requests by "system:kube-proxy" on endpoints or services.
      - level: None
        users: ["system:kube-proxy"]
        verbs: ["watch"]
        resources:
        - group: "" # core API group
          resources: ["endpoints", "services"]

      # Don't log authenticated requests to certain non-resource URL paths.
      - level: None
        userGroups: ["system:authenticated"]
        nonResourceURLs:
        - "/api*" # Wildcard matching.
        - "/version"

      # Log the request body of config map changes in kube-system.
      - level: Request
        resources:
        - group: "" # core API group
          resources: ["configmaps"]
        # This rule only applies to resources in the "kube-system" namespace.
        # The empty string "" can be used to select non-namespaced resources.
        namespaces: ["kube-system"]

      # Log config map and secret changes in all other namespaces at the Metadata level.
      - level: Metadata
        resources:
        - group: "" # core API group
          resources: ["secrets", "configmaps"]

      # Log all other resources in core and extensions at the Request level.
      - level: Request
        resources:
        - group: "" # core API group
        - group: "extensions" # Version of group should NOT be included.

      # A catch-all rule to log all other requests at the Metadata level.
      - level: Metadata
----
<1> The log types that are collected. The value for this field can be `audit` for audit logs, `application` for application logs, `infrastructure` for infrastructure logs, or a named input that has been defined for your application.
<2> The name of your audit policy.
112 changes: 112 additions & 0 deletions modules/log6x-content-filter-drop-records.adoc
@@ -0,0 +1,112 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-clf.adoc

:_mod-docs-content-type: PROCEDURE
[id="log6x-content-filter-drop-records_{context}"]
= Configuring content filters to drop unwanted log records

When the `drop` filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration.

.Prerequisites

* You must have a `serviceAccount` in the same namespace in which you create the `ClusterLogForwarder`. Additionally, for this `ClusterLogForwarder` you must have the `collect-audit-logs`, `collect-application-logs`, and `collect-infrastructure-logs` cluster roles. For more information see link:https://docs.openshift.com/container-platform/4.16/observability/logging/log_collection_forwarding/log-forwarding.html#log-collection-rbac-permissions_log-forwarding[Authorizing log collection RBAC permissions].

.Procedure

. Add a configuration for a filter to the `filters` spec in the `ClusterLogForwarder` CR.
+
The following example shows how to configure the `ClusterLogForwarder` CR to drop log records based on regular expressions:
+
.Example `ClusterLogForwarder` CR
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: <filter_name>
    type: drop # <1>
    drop: # <2>
    - test: # <3>
      - field: .kubernetes.labels."foo-bar/baz" # <4>
        matches: .+ # <5>
      - field: .kubernetes.pod_name
        notMatches: "my-pod" # <6>
  pipelines:
  - name: <pipeline_name> # <7>
    filterRefs: ["<filter_name>"]
# ...
----
<1> Specifies the type of filter. The `drop` filter drops log records that match the filter configuration.
<2> Specifies configuration options for applying the `drop` filter.
<3> Specifies the configuration for tests that are used to evaluate whether a log record is dropped.
** If all the conditions specified for a test are true, the test passes and the log record is dropped.
** When multiple tests are specified for the `drop` filter configuration, if any of the tests pass, the record is dropped.
** If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false.
<4> Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores (`a-zA-Z0-9_`), for example, `.kubernetes.namespace_name`. If segments contain characters outside of this range, the segment must be in quotes, for example, `.kubernetes.labels."foo.bar-bar/baz"`. You can include multiple field paths in a single `test` configuration, but they must all evaluate to true for the test to pass and the `drop` filter to be applied.
<5> Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the `matches` or `notMatches` condition for a single `field` path, but not both.
<6> Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the `matches` or `notMatches` condition for a single `field` path, but not both.
<7> Specifies the pipeline that the `drop` filter is applied to.

. Apply the `ClusterLogForwarder` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----

.Additional examples

The following additional example shows how you can configure the `drop` filter to keep only higher-priority log records:

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: important
    type: drop
    drop:
    - test:
      - field: .message
        notMatches: "(?i)critical|error"
      - field: .level
        matches: "info|warning"
# ...
----

In addition to including multiple field paths in a single `test` configuration, you can include additional tests that are treated as _OR_ checks. In the following example, records are dropped if either `test` configuration evaluates to true. However, for the second `test` configuration, both field specs must be true for it to evaluate to true:

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: important
    type: drop
    drop:
    - test:
      - field: .kubernetes.namespace_name
        matches: "^open"
    - test:
      - field: .log_type
        matches: "application"
      - field: .kubernetes.pod_name
        notMatches: "my-pod"
# ...
----
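
In the preceding example, a record from any namespace whose name starts with `open` is dropped by the first test, and an `application` log record is dropped by the second test only when its pod name does not match `my-pod`.
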
63 changes: 63 additions & 0 deletions modules/log6x-content-filter-prune-records.adoc
@@ -0,0 +1,63 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-clf.adoc

:_mod-docs-content-type: PROCEDURE
[id="log6x-content-filter-prune-records_{context}"]
= Configuring content filters to prune log records

When the `prune` filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low-value fields such as pod annotations.

.Prerequisites

* You must have a `serviceAccount` in the same namespace in which you create the `ClusterLogForwarder`. Additionally, for this `ClusterLogForwarder` you must have the `collect-audit-logs`, `collect-application-logs`, and `collect-infrastructure-logs` cluster roles. For more information see link:https://docs.openshift.com/container-platform/4.16/observability/logging/log_collection_forwarding/log-forwarding.html#log-collection-rbac-permissions_log-forwarding[Authorizing log collection RBAC permissions].

.Procedure

. Add a configuration for a filter to the `prune` spec in the `ClusterLogForwarder` CR.
+
The following example shows how to configure the `ClusterLogForwarder` CR to prune log records based on field paths:
+
[IMPORTANT]
====
If both `in` and `notIn` are specified, records are pruned based on the `notIn` array first, which takes precedence over the `in` array. After records have been pruned by using the `notIn` array, they are then pruned by using the `in` array.
====
+
.Example `ClusterLogForwarder` CR
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: <filter_name>
    type: prune # <1>
    prune: # <2>
      in: [.kubernetes.annotations, .kubernetes.namespace_id] # <3>
      notIn: [.kubernetes,.log_type,.message,."@timestamp"] # <4>
  pipelines:
  - name: <pipeline_name> # <5>
    filterRefs: ["<filter_name>"]
# ...
----
<1> Specify the type of filter. The `prune` filter prunes log records by configured fields.
<2> Specify configuration options for applying the `prune` filter. The `in` and `notIn` fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores (`a-zA-Z0-9_`), for example, `.kubernetes.namespace_name`. If segments contain characters outside of this range, the segment must be in quotes, for example, `.kubernetes.labels."foo.bar-bar/baz"`.
<3> Optional: Any fields that are specified in this array are removed from the log record.
<4> Optional: Any fields that are not specified in this array are removed from the log record.
<5> Specify the pipeline that the `prune` filter is applied to.
+
[NOTE]
====
The filter exempts the `.log_type`, `.log_source`, and `.message` fields.
====
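+
In the preceding example, the `notIn` array is applied first, so only the `.kubernetes`, `.log_type`, `.message`, and `."@timestamp"` fields are kept; the `in` array is then applied to what remains, removing `.kubernetes.annotations` and `.kubernetes.namespace_id`.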

. Apply the `ClusterLogForwarder` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----
18 changes: 18 additions & 0 deletions modules/log6x-create-log-forwarder.adoc
@@ -0,0 +1,18 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-clf.adoc

:_mod-docs-content-type: CONCEPT
[id="log6x-create-log-forwarder_{context}"]
= Creating a log forwarder

The log forwarder provides the following functionality:

* Administrators can control which users are allowed to define log collection and which logs they are allowed to collect.
* Users who have the required permissions are able to specify additional log collection configurations.
* Administrators can deploy separate collectors that operate independently according to their collection requirements.

[NOTE]
====
You must have a `serviceAccount` in the same namespace in which you create the `ClusterLogForwarder`. Additionally, for this `ClusterLogForwarder` you must have the `collect-audit-logs`, `collect-application-logs`, and `collect-infrastructure-logs` cluster roles. For more information see link:https://docs.openshift.com/container-platform/4.16/observability/logging/log_collection_forwarding/log-forwarding.html#log-collection-rbac-permissions_log-forwarding[Authorizing log collection RBAC permissions].
====
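
The following sketch shows the general shape of a minimal `ClusterLogForwarder` CR. The names are placeholders, and the `http` output and its URL are only illustrative; adjust the output type and settings for your log store:

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name> # must have the required collect-* cluster roles
  outputs:
  - name: my-output # illustrative output; configure the type for your log store
    type: http
    http:
      url: https://example.com/logs
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application
    outputRefs:
    - my-output
----
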
62 changes: 62 additions & 0 deletions modules/log6x-input-spec-filter-audit-infrastructure.adoc
@@ -0,0 +1,62 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-clf.adoc

:_mod-docs-content-type: PROCEDURE
[id="log6x-input-spec-filter-audit-infrastructure_{context}"]
= Filtering the audit and infrastructure log inputs by source

You can define the list of `audit` and `infrastructure` sources from which to collect logs by using the `input` selector.

.Prerequisites

* You must have a `serviceAccount` in the same namespace in which you create the `ClusterLogForwarder`. Additionally, for this `ClusterLogForwarder` you must have the `collect-audit-logs`, `collect-application-logs`, and `collect-infrastructure-logs` cluster roles. For more information see link:https://docs.openshift.com/container-platform/4.16/observability/logging/log_collection_forwarding/log-forwarding.html#log-collection-rbac-permissions_log-forwarding[Authorizing log collection RBAC permissions].

.Procedure

. Add a configuration to define the `audit` and `infrastructure` sources in the `ClusterLogForwarder` CR.
+
The following example shows how to configure the `ClusterLogForwarder` CR to define `audit` and `infrastructure` sources:
+
.Example `ClusterLogForwarder` CR
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs1
    type: infrastructure
    infrastructure:
      sources: # <1>
      - node
  - name: mylogs2
    type: audit
    audit:
      sources: # <2>
      - kubeAPI
      - openshiftAPI
      - ovn
# ...
----
<1> Specifies the list of infrastructure sources to collect. The valid sources include:
** `node`: Journal log from the node
** `container`: Logs from the workloads deployed in the namespaces
<2> Specifies the list of audit sources to collect. The valid sources include:
** `kubeAPI`: Logs from the Kubernetes API servers
** `openshiftAPI`: Logs from the OpenShift API servers
** `auditd`: Logs from a node auditd service
** `ovn`: Logs from an open virtual network service

. Apply the `ClusterLogForwarder` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f <filename>.yaml
----