17 changes: 14 additions & 3 deletions configuring/configuring-log-forwarding.adoc
@@ -111,10 +111,21 @@ The order of filterRefs matters, as they are applied sequentially. Earlier filte

Filters are configured in an array under `spec.filters`. They can match incoming log messages based on the values of structured fields and then modify or drop them.
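
For example, the following is a minimal sketch of a `drop` filter, assuming the `drop` filter type available in the `ClusterLogForwarder` API; the filter and pipeline names are illustrative:

[source,yaml]
----
spec:
  filters:
  - name: drop-debug-logs
    type: drop
    drop:
    - test:
      - field: .level # a structured field in the log record
        matches: debug
  pipelines:
  - name: app-logs
    filterRefs:
    - drop-debug-logs # filters are applied in the order listed
# ...
----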

Administrators can configure the following types of filters:

include::modules/enabling-multi-line-exception-detection.adoc[leveloffset=+2]

include::modules/cluster-logging-collector-log-forward-gcp.adoc[leveloffset=+1]

include::modules/logging-forward-splunk.adoc[leveloffset=+1]

include::modules/logging-http-forward.adoc[leveloffset=+1]

include::modules/logging-forwarding-azure.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-log-forward-syslog.adoc[leveloffset=+2]


@@ -135,6 +146,7 @@ On {sts-short}-enabled clusters such as {product-rosa}, {aws-short} roles are pr

* xref:../modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc#cluster-logging-collector-log-forward-sts-cloudwatch_configuring-log-forwarding[Forwarding logs to Amazon CloudWatch from STS enabled clusters]
////

* Creating a secret for CloudWatch with an existing {aws-short} role

* Forwarding logs to Amazon CloudWatch from STS-enabled clusters
@@ -146,7 +158,6 @@ If you do not have an {aws-short} IAM role pre-configured with trust policies, y
* xref:../modules/cluster-logging-collector-log-forward-secret-cloudwatch.adoc#cluster-logging-collector-log-forward-secret-cloudwatch_configuring-log-forwarding[Creating a secret for AWS CloudWatch with an existing AWS role]
* xref:../modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc#cluster-logging-collector-log-forward-sts-cloudwatch[Forwarding logs to Amazon CloudWatch from STS enabled clusters]
////

include::modules/creating-an-aws-role.adoc[leveloffset=+2]
include::modules/cluster-logging-collector-log-forward-secret-cloudwatch.adoc[leveloffset=+2]
include::modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc[leveloffset=+2]
46 changes: 26 additions & 20 deletions modules/cluster-logging-collector-log-forward-gcp.adoc
@@ -6,59 +6,65 @@
[id="cluster-logging-collector-log-forward-gcp_{context}"]
= Forwarding logs to Google Cloud Platform (GCP)

You can forward logs to link:https://cloud.google.com/logging/docs/basic-concepts[Google Cloud Logging].

[IMPORTANT]
====
Forwarding logs to GCP is not supported on Red{nbsp}Hat OpenShift Service on AWS.
====

.Prerequisites

* {clo} has been installed.

.Procedure

. Create a secret using your link:https://cloud.google.com/iam/docs/creating-managing-service-account-keys[Google service account key].
+
[source,terminal,subs="+quotes"]
----
$ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=<your_service_account_key_file.json>
----

. Create a `ClusterLogForwarder` custom resource (CR) YAML file by using the following template:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name> #<1>
  outputs:
  - name: gcp-1
    type: googleCloudLogging
    googleCloudLogging:
      authentication:
        credentials:
          secretName: gcp-secret
          key: google-application-credentials.json
      id:
        type: project
        value: openshift-gce-devel #<2>
      logId: app-gcp #<3>
  pipelines:
  - name: test-app
    inputRefs: #<4>
    - application
    outputRefs:
    - gcp-1
----

<1> The name of your service account.
<2> Set the `id.type` field to `project`, `folder`, `organization`, or `billingAccount`, and set its corresponding value in the `value` field, depending on where you want to store your logs in the link:https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy[GCP resource hierarchy].
<3> Set the value to add to the `logName` field of the link:https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry[log entry]. The value can combine static and dynamic parts. A dynamic part consists of field paths separated by `||`, must be enclosed in single curly brackets `{}`, and must end with a static fallback value. Static values can contain only alphanumeric characters, dashes, underscores, dots, and forward slashes.
<4> Specify the names of inputs, as defined in the `input.name` field, for this pipeline. You can also use the built-in values `application`, `infrastructure`, or `audit`.
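+
For example, as a minimal sketch of the dynamic `logId` syntax described in callout 3, and assuming that incoming records contain a `log_type` field, the following value resolves to the record's log type, with `none` as the static fallback:
+
[source,yaml]
----
googleCloudLogging:
  id:
    type: project
    value: openshift-gce-devel
  logId: '{.log_type||"none"}'
----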

[role="_additional-resources"]
.Additional resources
* link:https://cloud.google.com/billing/docs/concepts[Google Cloud Billing Documentation]
* link:https://cloud.google.com/logging/docs[Google Cloud Logging documentation]
* link:https://cloud.google.com/logging/docs/view/logging-query-language[Google Cloud Logging Query Language Documentation]
modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * configuring/configuring-log-forwarding.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-collector-log-forward-logs-from-application-pods_{context}"]
@@ -16,42 +16,41 @@ To specify the pod labels, you use one or more `matchLabels` key-value pairs. If

. Create or edit a YAML file that defines the `ClusterLogForwarder` CR object. In the file, specify the pod labels by using simple equality-based selectors under `inputs[].application.selector.matchLabels`, as shown in the following example.
+
.Example `ClusterLogForwarder` CR YAML file
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name> #<1>
  outputs:
  - <output_name>
# ...
  inputs:
  - name: exampleAppLogData #<2>
    type: application #<3>
    application:
      includes: #<4>
      - namespace: app1
      - namespace: app2
      selector:
        matchLabels: #<5>
          environment: production
          app: nginx
  pipelines:
  - inputRefs:
    - exampleAppLogData
    outputRefs:
# ...
----
<1> Specify the service account name.
<2> Specify a name for the input.
<3> Specify the type as `application` to collect logs from applications.
<4> Specify the set of namespaces to include when collecting logs.
<5> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.

. Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
.. For each unique combination of pod labels, create an additional entry in the `inputs[]` array, similar to the one shown in the following sketch.
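+
The following is a minimal sketch that assumes a second hypothetical application labeled `app: billing`; the added input name and labels are illustrative only:
+
[source,yaml]
----
  inputs:
  - name: exampleAppLogData
    type: application
    application:
      selector:
        matchLabels:
          environment: production
          app: nginx
  - name: billingAppLogData # hypothetical second input with its own label set
    type: application
    application:
      selector:
        matchLabels:
          environment: production
          app: billing
  pipelines:
  - inputRefs: # both inputs feed the same pipeline
    - exampleAppLogData
    - billingAppLogData
    outputRefs:
# ...
----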
@@ -72,4 +71,4 @@ $ oc create -f <file-name>.yaml
[role="_additional-resources"]
.Additional resources

* link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements[Resources that support set-based requirements].
92 changes: 37 additions & 55 deletions modules/cluster-logging-collector-log-forward-project.adoc
@@ -1,6 +1,6 @@
// Module included in the following assemblies:
//
// * configuring/configuring-log-forwarding.adoc

:_mod-docs-content-type: PROCEDURE
[id="cluster-logging-collector-log-forward-project_{context}"]
Expand All @@ -15,71 +15,53 @@ To configure forwarding application logs from a project, you must create a `Clus
* You must have a logging server that is configured to receive the logging data using the specified protocol or format.

.Procedure

. Create or edit a YAML file that defines the `ClusterLogForwarder` CR:
+
.Example `ClusterLogForwarder` CR
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name>
  outputs:
  - name: <output_name>
    type: <output_type>
  inputs:
  - name: my-app-logs #<1>
    type: application #<2>
    application:
      includes: #<3>
      - namespace: my-project
  filters:
  - name: my-project-labels
    type: openshiftLabels
    openshiftLabels: #<4>
      project: my-project
  - name: cluster-labels
    type: openshiftLabels
    openshiftLabels:
      clusterId: C1234
  pipelines:
  - name: <pipeline_name> #<5>
    inputRefs:
    - my-app-logs
    outputRefs:
    - <output_name>
    filterRefs:
    - my-project-labels
    - cluster-labels
----
<1> Specify the name for the input.
<2> Specify the type as `application` to collect logs from applications.
<3> Specify the set of namespaces and containers to include when collecting logs.
<4> Specify the labels to be applied to log records passing through this pipeline. These labels appear in the `openshift.labels` map in the log record.
<5> Specify a name for the pipeline.
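+
With both `openshiftLabels` filters referenced from the pipeline, each forwarded record carries the configured labels in its `openshift.labels` map, along the lines of the following abbreviated sketch of a log record:
+
[source,yaml]
----
message: "..."
openshift:
  labels:
    project: my-project
    clusterId: C1234
----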

. Apply the `ClusterLogForwarder` CR by running the following command:
+
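[source,terminal]
----
$ oc apply -f <filename>.yaml
----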