Merged
15 changes: 6 additions & 9 deletions logging/cluster-logging-deploying.adoc
Original file line number Diff line number Diff line change
@@ -1,20 +1,17 @@
:_content-type: ASSEMBLY
:context: cluster-logging-deploying
[id="cluster-logging-deploying"]
= Installing OpenShift Logging
= Installing the {logging-title}
include::_attributes/common-attributes.adoc[]

toc::[]


You can install OpenShift Logging by deploying
the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators. The OpenShift Elasticsearch Operator
creates and manages the Elasticsearch cluster used by OpenShift Logging.
The Red Hat OpenShift Logging Operator creates and manages the components of the logging stack.
You can install the {logging-title} by deploying the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators. The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by the {logging}. The {logging} Operator creates and manages the components of the logging stack.

The process for deploying OpenShift Logging to {product-title} involves:
The process for deploying the {logging} to {product-title} involves:

* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[OpenShift Logging storage considerations].
* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[{logging-uc} storage considerations].

* Installing the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI].

@@ -34,7 +31,7 @@ include::modules/cluster-logging-deploy-console.adoc[leveloffset=+1]

If you plan to use Kibana, you must xref:#cluster-logging-visualizer-indices_cluster-logging-deploying[manually create your Kibana index patterns and visualizations] to explore and visualize data in Kibana.

If your cluster network provider enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the OpenShift Logging operators].
If your cluster network provider enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the {logging} Operators].


include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1]
@@ -43,7 +40,7 @@ include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1]

If you plan to use Kibana, you must xref:#cluster-logging-visualizer-indices_cluster-logging-deploying[manually create your Kibana index patterns and visualizations] to explore and visualize data in Kibana.

If your cluster network provider enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the OpenShift Logging operators].
If your cluster network provider enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the {logging} Operators].

include::modules/cluster-logging-visualizer-indices.adoc[leveloffset=+2]

4 changes: 2 additions & 2 deletions logging/cluster-logging-eventrouter.adoc
@@ -6,9 +6,9 @@ include::_attributes/common-attributes.adoc[]

toc::[]

The {product-title} Event Router is a pod that watches Kubernetes events and logs them for collection by OpenShift Logging. You must manually deploy the Event Router.
The {product-title} Event Router is a pod that watches Kubernetes events and logs them for collection by the {logging}. You must manually deploy the Event Router.

The Event Router collects events from all projects and writes them to `STDOUT`. Fluentd collects those events and forwards them into the {product-title} Elasticsearch instance. Elasticsearch indexes the events to the `infra` index.
The Event Router collects events from all projects and writes them to `STDOUT`. The collector then forwards those events to the store defined in the `ClusterLogForwarder` custom resource (CR).
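The forwarding target for those events is defined in the `ClusterLogForwarder` CR. As a rough sketch (the output name and endpoint URL below are invented for illustration, not values from this documentation), a CR that forwards infrastructure logs, which include Event Router events, to an external Elasticsearch instance might look like:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: example-elasticsearch      # hypothetical external store
      type: elasticsearch
      url: https://elasticsearch.example.com:9200
  pipelines:
    - name: forward-infra              # hypothetical pipeline name
      inputRefs:
        - infrastructure               # Event Router events are collected as infrastructure logs
      outputRefs:
        - example-elasticsearch
----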

[IMPORTANT]
====
4 changes: 3 additions & 1 deletion logging/cluster-logging-exported-fields.adoc
@@ -6,12 +6,14 @@ include::_attributes/common-attributes.adoc[]

toc::[]

The following fields can be present in log records exported by OpenShift Logging. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings.
The following fields can be present in log records exported by the {logging}. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings.

To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch */_search URL*, to look for a Kubernetes pod name, use `/_search/q=kubernetes.pod_name:name-of-my-pod`.

// The logging system can parse JSON-formatted log entries to external systems. These log entries are formatted as a fluentd message with extra fields such as `kubernetes`. The fields exported by the logging system and available for searching from Elasticsearch and Kibana are documented at the end of this document.

include::modules/cluster-logging-exported-fields-top-level-fields.adoc[leveloffset=0]

include::modules/cluster-logging-exported-fields-kubernetes.adoc[leveloffset=0]

// add modules/cluster-logging-exported-fields-openshift when available
4 changes: 2 additions & 2 deletions logging/cluster-logging-external.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.
By default, the {logging} sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.

To send logs to other log aggregators, you use the {product-title} Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization.

@@ -15,7 +15,7 @@ To send logs to other log aggregators, you use the {product-title} Cluster Log F
To send audit logs to the default internal Elasticsearch log store, use the Cluster Log Forwarder as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-store[Forward audit logs to the log store].
====
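The pattern described in the note above amounts to a single pipeline that reads the `audit` input and writes to the built-in `default` output. A minimal illustration (the pipeline name is a hypothetical choice; see the linked section for the supported procedure):

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
    - name: audit-to-default   # hypothetical pipeline name
      inputRefs:
        - audit
      outputRefs:
        - default              # the internal Elasticsearch log store
----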

When you forward logs externally, the Red Hat OpenShift Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
When you forward logs externally, the {logging} creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.

[IMPORTANT]
====
6 changes: 3 additions & 3 deletions logging/cluster-logging-release-notes.adoc
@@ -8,9 +8,9 @@ toc::[]

include::modules/making-open-source-more-inclusive.adoc[leveloffset=+1]

[id="cluster-logging-supported-versions"]
== Supported Versions
include::modules/cluster-logging-supported-versions.adoc[leveloffset=+1]
[id="cluster-logging-ocp-compatibility"]
== {product-title} compatibility
The {logging-title} is provided as an installable component, with a distinct release cycle from the core {product-title}. The link:https://access.redhat.com/support/policy/updates/openshift#logging[Red Hat OpenShift Container Platform Life Cycle Policy] outlines release compatibility.

// Release Notes by version
[id="cluster-logging-release-notes-5-3-0"]
2 changes: 1 addition & 1 deletion logging/cluster-logging-uninstall.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

You can remove OpenShift Logging from your {product-title} cluster.
You can remove the {logging} from your {product-title} cluster.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
12 changes: 4 additions & 8 deletions logging/cluster-logging-upgrading.adoc
@@ -6,14 +6,9 @@ include::_attributes/common-attributes.adoc[]

toc::[]

.{product-title} version support for Red Hat OpenShift Logging (RHOL)
[options="header"]
|====
| |4.7 |4.8 |4.9
|RHOL 5.0|X |X |
|RHOL 5.1|X |X |
|RHOL 5.2|X |X |X
|====
[id="cluster-logging-supported-versions"]
== Supported Versions
For version compatibility and support information, see the link:https://access.redhat.com/support/policy/updates/openshift#logging[Red Hat OpenShift Container Platform Life Cycle Policy].

To upgrade from cluster logging in {product-title} version 4.6 and earlier to OpenShift Logging 5.x, you update the {product-title} cluster to version 4.7 or 4.8. Then, you update the following operators:

@@ -23,4 +18,5 @@ To upgrade from a previous version of OpenShift Logging to the current version, you update the OpenShift Elasticsearch Operator and the Red Hat OpenShift Logging Operator to their current versions.
To upgrade from a previous version of OpenShift Logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator to their current versions.

include::modules/cluster-logging-updating-logging-to-5-0.adoc[leveloffset=+1]

include::modules/cluster-logging-updating-logging-to-5-1.adoc[leveloffset=+1]
6 changes: 2 additions & 4 deletions logging/cluster-logging-visualizer.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

OpenShift Logging includes a web console for visualizing collected log data. Currently, {product-title} deploys the Kibana console for visualization.
The {logging} includes a web console for visualizing collected log data. Currently, {product-title} deploys the Kibana console for visualization.

Using the log visualizer, you can do the following with your data:

@@ -15,7 +15,7 @@ Using the log visualizer, you can do the following with your data:
* create and view custom dashboards using the *Dashboard* tab.

Use and configuration of the Kibana interface is beyond the scope of this documentation. For more information on using the interface, see the link:https://www.elastic.co/guide/en/kibana/6.8/connect-to-elasticsearch.html[Kibana documentation].

[NOTE]
====
@@ -29,5 +29,3 @@ The audit logs are not stored in the internal {product-title} Elasticsearch inst

include::modules/cluster-logging-visualizer-indices.adoc[leveloffset=+1]
include::modules/cluster-logging-visualizer-kibana.adoc[leveloffset=+1]


8 changes: 4 additions & 4 deletions logging/cluster-logging.adoc
@@ -1,19 +1,19 @@
:_content-type: ASSEMBLY
:context: cluster-logging
[id="cluster-logging"]
= Understanding Red Hat OpenShift Logging
= Understanding the {logging-title}
include::_attributes/common-attributes.adoc[]

toc::[]



ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
As a cluster administrator, you can deploy OpenShift Logging to
As a cluster administrator, you can deploy the {logging} to
aggregate all the logs from your {product-title} cluster, such as node system audit logs, application container logs, and infrastructure logs.
OpenShift Logging aggregates these logs from throughout your cluster and stores them in a default log store. You can xref:../logging/cluster-logging-visualizer.adoc#cluster-logging-visualizer[use the Kibana web console to visualize log data].
The {logging} aggregates these logs from throughout your cluster and stores them in a default log store. You can xref:../logging/cluster-logging-visualizer.adoc#cluster-logging-visualizer[use the Kibana web console to visualize log data].

OpenShift Logging aggregates the following types of logs:
The {logging} aggregates the following types of logs:

* `application` - Container logs generated by user applications running in the cluster, except infrastructure container applications.
* `infrastructure` - Logs generated by infrastructure components running in the cluster and {product-title} nodes, such as journal logs. Infrastructure components are pods that run in the `openshift*`, `kube*`, or `default` projects.
2 changes: 1 addition & 1 deletion logging/config/cluster-logging-collector.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

{product-title} uses Fluentd to collect operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.
{logging-title-uc} collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.

You can configure the CPU and memory limits for the log collector and xref:../../logging/config/cluster-logging-moving-nodes.adoc#cluster-logging-moving[move the log collector pods to specific nodes]. All supported modifications to the log collector can be performed through the `spec.collection.log.fluentd` stanza in the `ClusterLogging` custom resource (CR).
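A minimal sketch of that stanza in the `ClusterLogging` CR (the resource values are illustrative, not recommended defaults, and the exact field names should be checked against the `ClusterLogging` CRD):

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      type: fluentd
      fluentd:
        resources:           # illustrative values only
          limits:
            memory: 736Mi
          requests:
            cpu: 100m
            memory: 736Mi
----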

7 changes: 1 addition & 6 deletions logging/config/cluster-logging-configuring-cr.adoc
@@ -6,16 +6,11 @@ include::_attributes/common-attributes.adoc[]

toc::[]

To configure OpenShift Logging, you customize the `ClusterLogging` custom resource (CR).
To configure the {logging-title}, you customize the `ClusterLogging` custom resource (CR).

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.

include::modules/cluster-logging-about-crd.adoc[leveloffset=+1]





13 changes: 6 additions & 7 deletions logging/config/cluster-logging-configuring.adoc
@@ -6,14 +6,13 @@ include::_attributes/common-attributes.adoc[]

toc::[]

OpenShift Logging is configurable using a `ClusterLogging` custom resource (CR) deployed
{logging-title-uc} is configurable using a `ClusterLogging` custom resource (CR) deployed
in the `openshift-logging` project.

The Red Hat OpenShift Logging Operator watches for changes to `ClusterLogging` CR,
The {logging} Operator watches for changes to the `ClusterLogging` CR,
creates any missing logging components, and adjusts the logging environment accordingly.

The `ClusterLogging` CR is based on the `ClusterLogging` custom resource definition (CRD), which defines a complete OpenShift Logging environment
and includes all the components of the logging stack to collect, store and visualize logs.
The `ClusterLogging` CR is based on the `ClusterLogging` custom resource definition (CRD), which defines a complete {logging} environment and includes all the components of the logging stack to collect, store and visualize logs.

.Sample `ClusterLogging` custom resource (CR)
[source,yaml]
@@ -53,9 +52,9 @@ spec:
resources: null
type: kibana
----
You can configure the following for OpenShift Logging:
You can configure the following for the {logging}:

* You can overwrite the image for each OpenShift Logging component by modifying the appropriate
* You can overwrite the image for each {logging} component by modifying the appropriate
environment variable in the `cluster-logging-operator` Deployment.

* You can specify specific nodes for the logging components using node selectors.
@@ -78,5 +77,5 @@ The Rsyslog log collector is currently a Technology Preview feature.

[IMPORTANT]
====
The logging routes are managed by the Red Hat OpenShift Logging Operator and cannot be modified by the user.
The logging routes are managed by the {logging-title} Operator and cannot be modified by the user.
====
2 changes: 1 addition & 1 deletion logging/config/cluster-logging-log-store.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

{product-title} uses Elasticsearch 6 (ES) to store and organize the log data.
{logging-title-uc} uses Elasticsearch 6 (ES) to store and organize the log data.

You can make modifications to your log store, including:

3 changes: 0 additions & 3 deletions logging/config/cluster-logging-maintenance-support.adoc
@@ -11,6 +11,3 @@ include::modules/cluster-logging-maintenance-support-about.adoc[leveloffset=+1]
include::modules/cluster-logging-maintenance-support-list.adoc[leveloffset=+1]

include::modules/unmanaged-operators.adoc[leveloffset=+1]



5 changes: 2 additions & 3 deletions logging/config/cluster-logging-memory.adoc
@@ -1,13 +1,13 @@
:_content-type: ASSEMBLY
:context: cluster-logging-memory
[id="cluster-logging-memory"]
= Configuring CPU and memory limits for OpenShift Logging components
= Configuring CPU and memory limits for {logging} components
include::_attributes/common-attributes.adoc[]

toc::[]


You can configure both the CPU and memory limits for each of the OpenShift Logging components as needed.
You can configure both the CPU and memory limits for each of the {logging} components as needed.


// The following include statements pull in the module files that comprise
@@ -17,4 +17,3 @@ You can configure both the CPU and memory limits for each of the OpenShift Loggi


include::modules/cluster-logging-cpu-memory.adoc[leveloffset=+1]

2 changes: 1 addition & 1 deletion logging/config/cluster-logging-moving-nodes.adoc
@@ -1,7 +1,7 @@
:_content-type: ASSEMBLY
:context: cluster-logging-moving
[id="cluster-logging-moving"]
= Moving OpenShift Logging resources with node selectors
= Moving {logging} resources with node selectors
include::_attributes/common-attributes.adoc[]

toc::[]
8 changes: 3 additions & 5 deletions logging/config/cluster-logging-storage-considerations.adoc
@@ -1,16 +1,14 @@
:_content-type: ASSEMBLY
:context: cluster-logging-storage
[id="cluster-logging-storage"]
= Configuring OpenShift Logging storage
= Configuring {logging} storage
include::_attributes/common-attributes.adoc[]

toc::[]


Elasticsearch is a memory-intensive application. The default OpenShift Logging installation deploys 16G of memory for both memory requests and memory limits.
The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the
{product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
memory setting, though this is not recommended for production environments.
Elasticsearch is a memory-intensive application. The default {logging} installation deploys 16G of memory for both memory requests and memory limits.
The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the {product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower memory setting, though this is not recommended for production environments.
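The memory figures above map onto the `logStore` stanza of the `ClusterLogging` CR roughly as follows (a fragment for orientation; the node count and CPU request are illustrative assumptions, not values stated in this documentation):

[source,yaml]
----
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3          # illustrative cluster size
      resources:            # 16G default noted above
        limits:
          memory: 16Gi
        requests:
          cpu: 500m         # illustrative assumption
          memory: 16Gi
----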

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
4 changes: 2 additions & 2 deletions logging/config/cluster-logging-tolerations.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

You can use taints and tolerations to ensure that OpenShift Logging pods run
You can use taints and tolerations to ensure that {logging} pods run
on specific nodes and that no other workload can run on those nodes.

Taints and tolerations are simple `key:value` pair. A taint on a node
@@ -15,7 +15,7 @@ instructs the node to repel all pods that do not tolerate the taint.
The `key` is any string, up to 253 characters and the `value` is any string up to 63 characters.
The string must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.

.Sample OpenShift Logging CR with tolerations
.Sample {logging} CR with tolerations
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
3 changes: 1 addition & 2 deletions logging/config/cluster-logging-visualizer.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

{product-title} uses Kibana to display the log data collected by OpenShift Logging.
{product-title} uses Kibana to display the log data collected by the {logging}.

You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.

@@ -18,4 +18,3 @@ You can scale Kibana for redundancy and configure the CPU and memory for your Ki
include::modules/cluster-logging-cpu-memory.adoc[leveloffset=+1]

include::modules/cluster-logging-kibana-scaling.adoc[leveloffset=+1]
