2 changes: 1 addition & 1 deletion modules/cluster-logging-collector-log-forward-es.adoc
@@ -64,7 +64,7 @@ spec:
<5> Specify the `elasticsearch` type.
<6> Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the `http` (insecure) or `https` (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
<7> For a secure connection, you can specify an `https` or `http` URL that you authenticate by specifying a `secret`.
<8> For an `https` prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent. Otherwise, for `http` and `https` prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting a secret that contains a username and password."
<8> For an `https` prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a `ca-bundle.crt` key that points to the certificate it represents. Otherwise, for `http` and `https` prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the `openshift-logging` project. For more information, see the following "Example: Setting a secret that contains a username and password."
<9> Optional: Specify a name for the pipeline.
<10> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<11> Specify the name of the output to use when forwarding logs with this pipeline.
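
For context, these callouts describe an Elasticsearch output and pipeline in a `ClusterLogForwarder` custom resource. A minimal sketch, assuming a legacy `instance` resource in the `openshift-logging` namespace and hypothetical names such as `elasticsearch-example` and `es-secret`:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: elasticsearch-example      # hypothetical output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
    secret:
      name: es-secret                # secret with ca-bundle.crt or username/password keys
  pipelines:
  - name: application-logs           # optional pipeline name
    inputRefs:
    - application
    outputRefs:
    - elasticsearch-example
----
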
2 changes: 1 addition & 1 deletion modules/cluster-logging-collector-log-forward-fluentd.adoc
@@ -54,7 +54,7 @@ spec:
<3> Specify a name for the output.
<4> Specify the `fluentdForward` type.
<5> Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
<6> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of: *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent.
<6> If you are using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project and must contain a `ca-bundle.crt` key that points to the certificate it represents.
<7> Optional: Specify a name for the pipeline.
<8> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<9> Specify the name of the output to use when forwarding logs with this pipeline.
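
For context, a minimal sketch of a `fluentdForward` output and pipeline that these callouts could describe, assuming a legacy `instance` resource in the `openshift-logging` namespace and hypothetical names such as `fluentd-server-secure` and `fluentd-secret`:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: fluentd-server-secure       # hypothetical output name
    type: fluentdForward
    url: 'tls://fluentdserver.example.com:24224'
    secret:
      name: fluentd-secret            # secret containing the ca-bundle.crt key
  pipelines:
  - name: forward-to-fluentd          # optional pipeline name
    inputRefs:
    - application
    outputRefs:
    - fluentd-server-secure
----
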
2 changes: 1 addition & 1 deletion modules/cluster-logging-collector-log-forward-kafka.adoc
@@ -67,7 +67,7 @@ spec:
<4> Specify a name for the output.
<5> Specify the `kafka` type.
<6> Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
<7> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent.
<7> If you are using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a `ca-bundle.crt` key that points to the certificate it represents. In legacy implementations, the secret must exist in the `openshift-logging` project.
<8> Optional: To send an insecure output, use a `tcp` prefix in front of the URL. Also omit the `secret` key and its `name` from this output.
<9> Optional: Specify a name for the pipeline.
<10> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
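
For context, a minimal sketch of a Kafka output and pipeline that these callouts could describe, assuming a legacy `instance` resource in the `openshift-logging` namespace and hypothetical names such as `app-logs-kafka` and `kafka-secret`:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: app-logs-kafka                          # hypothetical output name
    type: kafka
    url: tls://kafka.example.com:9093/app-topic   # broker URL, optionally with a topic
    secret:
      name: kafka-secret                          # secret containing the ca-bundle.crt key
  pipelines:
  - name: kafka-pipeline                          # optional pipeline name
    inputRefs:
    - application
    outputRefs:
    - app-logs-kafka
----
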
2 changes: 1 addition & 1 deletion modules/cluster-logging-collector-log-forward-loki.adoc
@@ -53,7 +53,7 @@ spec:
<5> Specify the type as `"loki"`.
<6> Specify the URL and port of the Loki system as a valid absolute URL. You can use the `http` (insecure) or `https` (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. Loki's default port for HTTP(S) communication is 3100.
<7> For a secure connection, you can specify an `https` or `http` URL that you authenticate by specifying a `secret`.
<8> For an `https` prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent. Otherwise, for `http` and `https` prefixes, you can specify a secret that contains a username and password. For more information, see the following "Example: Setting a secret that contains a username and password."
<8> For an `https` prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a `ca-bundle.crt` key that points to the certificates it represents. Otherwise, for `http` and `https` prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the `openshift-logging` project. For more information, see the following "Example: Setting a secret that contains a username and password."
<9> Optional: Specify a metadata key field to generate values for the `TenantID` field in Loki. For example, setting `tenantKey: kubernetes.namespace_name` uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section.
<10> Optional: Specify a list of metadata field keys to replace the default Loki labels. Loki label names must match the regular expression `[a-zA-Z_:][a-zA-Z0-9_:]*`. Illegal characters in metadata keys are replaced with `_` to form the label name. For example, the `kubernetes.labels.foo` metadata key becomes Loki label `kubernetes_labels_foo`. If you do not set `labelKeys`, the default value is: `[log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host]`. Keep the set of labels small because Loki limits the size and number of labels allowed. See link:https://grafana.com/docs/loki/latest/configuration/#limits_config[Configuring Loki, limits_config]. You can still query based on any log record field using query filters.
<11> Optional: Specify a name for the pipeline.
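
For context, a minimal sketch of a Loki output and pipeline that these callouts could describe, assuming a legacy `instance` resource in the `openshift-logging` namespace and hypothetical names such as `loki-example` and `loki-secret`:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: loki-example                       # hypothetical output name
    type: "loki"
    url: https://loki.example.com:3100
    secret:
      name: loki-secret                      # secret containing the ca-bundle.crt key
    loki:
      tenantKey: kubernetes.namespace_name   # optional: namespace name becomes the Loki tenant ID
      labelKeys:
      - kubernetes.labels.foo                # optional: replaces the default Loki labels
  pipelines:
  - name: loki-pipeline                      # optional pipeline name
    inputRefs:
    - application
    outputRefs:
    - loki-example
----
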
4 changes: 2 additions & 2 deletions modules/cluster-logging-collector-log-forward-syslog.adoc
@@ -71,7 +71,7 @@ spec:
<5> Specify the `syslog` type.
<6> Optional: Specify the syslog parameters, listed below.
<7> Specify the URL and port of the external syslog instance. You can use the `udp` (insecure), `tcp` (insecure) or `tls` (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
<8> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project, and must have keys of *tls.crt*, *tls.key*, and *ca-bundle.crt* that point to the respective certificates that they represent.
<8> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a `ca-bundle.crt` key that points to the certificate it represents. In legacy implementations, the secret must exist in the `openshift-logging` project.
<9> Optional: Specify a name for the pipeline.
<10> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
<11> Specify the name of the output to use when forwarding logs with this pipeline.
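
For context, a minimal sketch of a syslog output and pipeline that these callouts could describe, assuming a legacy `instance` resource in the `openshift-logging` namespace and hypothetical names such as `rsyslog-example` and `syslog-secret`:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: rsyslog-example            # hypothetical output name
    type: syslog
    syslog:                          # optional syslog parameters
      facility: local0
      rfc: RFC5424
      severity: informational
    url: 'tls://rsyslogserver.example.com:514'
    secret:
      name: syslog-secret            # secret containing the ca-bundle.crt key
  pipelines:
  - name: syslog-pipeline            # optional pipeline name
    inputRefs:
    - audit
    outputRefs:
    - rsyslog-example
----
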
Expand All @@ -87,7 +87,7 @@ spec:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
$ oc create -f <filename>.yaml
----

[id=cluster-logging-collector-log-forward-examples-syslog-log-source]
20 changes: 7 additions & 13 deletions modules/cluster-logging-collector-log-forwarding-about.adoc
@@ -139,14 +139,9 @@ If your external logging aggregator becomes unavailable and cannot receive logs,
== Supported Authorization Keys
Common key types are provided here. Some output types support additional specialized keys, which are documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. OpenShift Logging does not attempt to verify mismatched combinations of authorization keys.

Transport Layer Security (TLS):: Using a TLS URL ('http://...' or 'ssl://...') without a Secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a Secret and setting the following optional fields:

* `tls.crt`: (string) File name containing a client certificate. Enables mutual authentication. Requires `tls.key`.

* `tls.key`: (string) File name containing the private key to unlock the client certificate. Requires `tls.crt`.
Transport Layer Security (TLS):: Using a TLS URL (`+http://...+` or `+ssl://...+`) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields:

* `passphrase`: (string) Passphrase to decode an encoded TLS private key. Requires `tls.key`.

* `ca-bundle.crt`: (string) File name of a customer CA for server authentication.

Username and Password::
@@ -163,14 +158,13 @@ If missing or empty, the system defaults are used.
== Creating a Secret

You can create a secret in the directory that contains your certificate and key files by using the following command:
[subs="+quotes"]

[source,terminal]
----
$ oc create secret generic -n openshift-logging <my-secret> \
--from-file=tls.key=<your_key_file>
--from-file=tls.crt=<your_crt_file>
--from-file=ca-bundle.crt=<your_bundle_file>
--from-literal=username=<your_username>
--from-literal=password=<your_password>
$ oc create secret generic -n <namespace> <secret_name> \
--from-file=ca-bundle.crt=<your_bundle_file> \
--from-literal=username=<your_username> \
--from-literal=password=<your_password>
----
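
For reference, an equivalent `Secret` manifest to the one that such a command creates, written with `stringData` for readability and using hypothetical values:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: my-secret              # hypothetical secret name
  namespace: openshift-logging
type: Opaque
stringData:
  username: admin              # placeholder credentials
  password: changeme
  # ca-bundle.crt is typically supplied with --from-file rather than inline
----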

[NOTE]