migrate logx about content, with modules and snippets #92351
Merged: gabriel-rh merged 1 commit into openshift:standalone-logging-docs-main from gabriel-rh:standalone-logging-logx-about on Apr 17, 2025
@@ -0,0 +1,65 @@
:_mod-docs-content-type: ASSEMBLY
include::_attributes/common-attributes.adoc[]
[id="log6x-about-6-1"]
= Logging 6.1
:context: logging-6x-6.1

toc::[]

The `ClusterLogForwarder` custom resource (CR) is the central configuration point for log collection and forwarding.

[id="inputs-and-outputs_6-1_{context}"]
== Inputs and outputs

Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster:

* `application`
* `receiver`
* `infrastructure`
* `audit`

You can also define custom inputs based on namespaces or pod labels to fine-tune log selection.

Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings.
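
For illustration, a minimal sketch of a custom `application` input that narrows log selection by namespace and pod label (the input name and label values are hypothetical; complete `ClusterLogForwarder` examples appear in the quick start modules below):

[source,yaml]
----
spec:
  inputs:
  - name: my-app-logs            # hypothetical input name
    type: application
    application:
      includes:
      - namespace: my-namespace  # select logs only from this namespace
      selector:
        matchLabels:
          app: my-app            # and only from pods carrying this label
----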
| [id="receiver-input-type_6-1_{context}"] | ||
| == Receiver input type | ||
| The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: `http` and `syslog`. | ||
|
|
||
| The `ReceiverSpec` field defines the configuration for a receiver input. | ||
|
|
||
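
As a hedged sketch (the names and port numbers are assumptions, not fixed values), `http` and `syslog` receiver inputs take the following general shape:

[source,yaml]
----
spec:
  inputs:
  - name: http-receiver        # hypothetical input name
    type: receiver
    receiver:
      type: http
      port: 8443               # port the collector listens on
      http:
        format: kubeAPIAudit   # format of the incoming payload
  - name: syslog-receiver      # hypothetical input name
    type: receiver
    receiver:
      type: syslog
      port: 10514
----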
| [id="pipelines-and-filters_6-1_{context}"] | ||
| == Pipelines and filters | ||
|
|
||
| Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. | ||
|
|
||
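
For example, a sketch of a pipeline that applies a `drop` filter before forwarding (the filter, pipeline, and output names are hypothetical, and the `.level` match is only illustrative):

[source,yaml]
----
spec:
  filters:
  - name: drop-debug           # hypothetical filter name
    type: drop
    drop:
    - test:
      - field: .level          # drop records whose level field
        matches: debug         # matches "debug"
  pipelines:
  - name: app-to-loki          # hypothetical pipeline name
    inputRefs:
    - application
    filterRefs:
    - drop-debug               # applied before logs reach the output
    outputRefs:
    - default-lokistack
----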
| [id="operator-behavior_6-1_{context}"] | ||
| == Operator behavior | ||
|
|
||
| The Cluster Logging Operator manages the deployment and configuration of the collector based on the `managementState` field of the `ClusterLogForwarder` resource: | ||
|
|
||
| - When set to `Managed` (default), the Operator actively manages the logging resources to match the configuration defined in the spec. | ||
| - When set to `Unmanaged`, the Operator does not take any action, allowing you to manually manage the logging components. | ||
|
|
||
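
A minimal sketch of pausing reconciliation by setting this field (the resource names follow the quick start examples below):

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  managementState: Unmanaged   # Operator stops reconciling; default is Managed
  serviceAccount:
    name: collector
----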
| [id="validation_6-1_{context}"] | ||
| == Validation | ||
| Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The `ClusterLogForwarder` resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. | ||
|
|
||
| [id="quick-start_6-1_{context}"] | ||
| == Quick start | ||
|
|
||
| OpenShift Logging supports two data models: | ||
|
|
||
| * ViaQ (General Availability) | ||
| * OpenTelemetry (Technology Preview) | ||
|
|
||
| You can select either of these data models based on your requirement by configuring the `lokiStack.dataModel` field in the `ClusterLogForwarder`. ViaQ is the default data model when forwarding logs to LokiStack. | ||
|
|
||
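
As a sketch, the selection is made on the `lokiStack` output (the output name here is hypothetical; complete CRs appear in the quick start modules below):

[source,yaml]
----
outputs:
- name: my-loki-output         # hypothetical output name
  type: lokiStack
  lokiStack:
    target:
      name: logging-loki
      namespace: openshift-logging
    dataModel: Otel            # or ViaQ; when unset, ViaQ is currently used
----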
[NOTE]
====
In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry.
====

include::modules/log6x-quickstart-viaq.adoc[leveloffset=+2]

include::modules/log6x-quickstart-opentelemetry.adoc[leveloffset=+2]

@@ -0,0 +1,158 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-about.adoc

:_mod-docs-content-type: PROCEDURE
[id="quick-start-opentelemetry_{context}"]
= Quick start with OpenTelemetry

:FeatureName: The OpenTelemetry Protocol (OTLP) output log forwarder
include::snippets/technology-preview.adoc[]

To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps:

.Prerequisites
* Cluster administrator permissions

.Procedure

. Install the {clo}, {loki-op}, and {coo-first} from OperatorHub.

. Create a `LokiStack` custom resource (CR) in the `openshift-logging` namespace:
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
    - effectiveDate: '2024-10-01'
      version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
----
+
[NOTE]
====
Ensure that the `logging-loki-s3` secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".
====
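+
For reference, a hedged sketch of creating that secret for AWS S3 (the key names follow the {loki-op} S3 secret format and differ for other object stores; all values are placeholders):
+
[source,terminal]
----
$ oc create secret generic logging-loki-s3 -n openshift-logging \
    --from-literal=bucketnames="<bucket_name>" \
    --from-literal=endpoint="https://s3.<region>.amazonaws.com" \
    --from-literal=region="<region>" \
    --from-literal=access_key_id="<access_key_id>" \
    --from-literal=access_key_secret="<access_key_secret>"
----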

. Create a service account for the collector:
+
[source,terminal]
----
$ oc create sa collector -n openshift-logging
----

. Allow the collector's service account to write data to the `LokiStack` CR:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging
----
+
[NOTE]
====
The `ClusterRole` resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.
====

. Allow the collector's service account to collect logs:
+
[source,terminal]
----
$ oc project openshift-logging
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector
----
+
[NOTE]
====
The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your `ClusterLogForwarder` configuration to include them. Assign roles based on the specific log types required for your environment.
====

. Create a `UIPlugin` CR to enable the *Log* section in the *Observe* tab:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
----

. Create a `ClusterLogForwarder` CR to configure log forwarding:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
  annotations:
    observability.openshift.io/tech-preview-otlp-output: "enabled" # <1>
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: loki-otlp
    type: lokiStack # <2>
    lokiStack:
      target:
        name: logging-loki
        namespace: openshift-logging
      dataModel: Otel # <3>
      authentication:
        token:
          from: serviceAccount
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: my-pipeline
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - loki-otlp
----
<1> Enables the `Otel` data model, which is a Technology Preview feature.
<2> Defines the output type as `lokiStack`.
<3> Specifies the OpenTelemetry data model.
+
[NOTE]
====
You cannot use `lokiStack.labelKeys` when `dataModel` is `Otel`. To achieve similar functionality when `dataModel` is `Otel`, refer to "Configuring LokiStack for OTLP data ingestion".
====

.Verification
* Verify that OTLP is functioning correctly by going to *Observe* -> *OpenShift Logging* -> *LokiStack* -> *Writes* in the OpenShift web console, and checking *Distributor - Structured Metadata*.
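
As a supplementary check from the CLI (a hedged sketch, not part of the documented verification), you can confirm that the collector pods are running and inspect the forwarder's reported status:

[source,terminal]
----
$ oc get pods -n openshift-logging
----

[source,terminal]
----
$ oc get clusterlogforwarder collector -n openshift-logging -o yaml
----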

@@ -0,0 +1,146 @@
// Module included in the following assemblies:
//
// * observability/logging/logging-6.0/log6x-about.adoc

:_mod-docs-content-type: PROCEDURE
[id="quick-start-viaq_{context}"]
= Quick start with ViaQ

To use the default ViaQ data model, follow these steps:

.Prerequisites
* You have access to an {product-title} cluster with `cluster-admin` permissions.
* You installed the {oc-first}.
* You have access to a supported object store, such as AWS S3, Google Cloud Storage, {azure-short}, Swift, Minio, or {rh-storage}.

.Procedure

. Install the {clo}, {loki-op}, and {coo-first} from OperatorHub.

. Create a `LokiStack` custom resource (CR) in the `openshift-logging` namespace:
+
[source,yaml]
----
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki
  namespace: openshift-logging
spec:
  managementState: Managed
  size: 1x.extra-small
  storage:
    schemas:
    - effectiveDate: '2024-10-01'
      version: v13
    secret:
      name: logging-loki-s3
      type: s3
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging
----
+
[NOTE]
====
Ensure that the `logging-loki-s3` secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration".
====

. Create a service account for the collector:
+
[source,terminal]
----
$ oc create sa collector -n openshift-logging
----

. Allow the collector's service account to write data to the `LokiStack` CR:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging
----
+
[NOTE]
====
The `ClusterRole` resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually.
====

. Allow the collector's service account to collect logs by running the following commands:
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging
----
+
[source,terminal]
----
$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging
----
+
[NOTE]
====
The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your `ClusterLogForwarder` configuration to include them. Assign roles based on the specific log types required for your environment.
====

. Create a `UIPlugin` CR to enable the *Log* section in the *Observe* tab:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki
----

. Create a `ClusterLogForwarder` CR to configure log forwarding:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector
  outputs:
  - name: default-lokistack
    type: lokiStack
    lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
    tls:
      ca:
        key: service-ca.crt
        configMapName: openshift-service-ca.crt
  pipelines:
  - name: default-logstore
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - default-lokistack
----
+
[NOTE]
====
The `dataModel` field is optional and left unset (`dataModel: ""`) by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying `dataModel: ViaQ` ensures that the configuration remains compatible if the default changes.
====
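+
To pin the data model explicitly, as a sketch, add `dataModel: ViaQ` under the `lokiStack` output from the example above:
+
[source,yaml]
----
outputs:
- name: default-lokistack
  type: lokiStack
  lokiStack:
    dataModel: ViaQ          # explicit; when unset, ViaQ is currently the default
    authentication:
      token:
        from: serviceAccount
    target:
      name: logging-loki
      namespace: openshift-logging
----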

.Verification
* Verify that logs are visible in the *Log* section of the *Observe* tab in the {product-title} web console.

@@ -0,0 +1,12 @@
// When including this file, ensure that {FeatureName} is set immediately before
// the include. Otherwise it will result in an incorrect replacement.

[IMPORTANT]
====
[subs="attributes+"]
{FeatureName} is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope].
====
// Undefine {FeatureName} attribute, so that any mistakes are easily spotted
:!FeatureName:
🤖 [error] OpenShiftAsciiDoc.ModuleContainsContentType: Module is missing the '_mod-docs-content-type' variable.