10 changes: 5 additions & 5 deletions logging/cluster-logging-deploying.adoc
@@ -1,19 +1,19 @@
:context: cluster-logging-deploying
[id="cluster-logging-deploying"]
-= Installing cluster logging
+= Installing OpenShift Logging
include::modules/common-attributes.adoc[]

toc::[]


-You can install cluster logging by deploying
+You can install OpenShift Logging by deploying
the Elasticsearch and Cluster Logging Operators. The Elasticsearch Operator
-creates and manages the Elasticsearch cluster used by cluster logging.
+creates and manages the Elasticsearch cluster used by OpenShift Logging.
The Cluster Logging Operator creates and manages the components of the logging stack.

-The process for deploying cluster logging to {product-title} involves:
+The process for deploying OpenShift Logging to {product-title} involves:

-* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[cluster logging storage considerations].
+* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[OpenShift Logging storage considerations].

* Installing the Elasticsearch Operator and Cluster Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI].

2 changes: 1 addition & 1 deletion logging/cluster-logging-eventrouter.adoc
@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

-The {product-title} Event Router is a pod that watches Kubernetes events and logs them for collection by cluster logging. You must manually deploy the Event Router.
+The {product-title} Event Router is a pod that watches Kubernetes events and logs them for collection by OpenShift Logging. You must manually deploy the Event Router.

The Event Router collects events from all projects and writes them to `STDOUT`. Fluentd collects those events and forwards them into the {product-title} Elasticsearch instance. Elasticsearch indexes the events to the `infra` index.

2 changes: 1 addition & 1 deletion logging/cluster-logging-external.adoc
@@ -6,7 +6,7 @@ include::modules/common-attributes.adoc[]
toc::[]


-By default, {product-title} cluster logging sends logs to the default internal Elasticsearch log store, defined in the `ClusterLogging` custom resource. If you want to forward logs to other log aggregators, you can use the {product-title} Log Forwarding API to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. You can send different types of logs to different systems, allowing you to control who in your organization can access each type. Optional TLS support ensures that you can send logs using secure communication as required by your organization.
+By default, OpenShift Logging sends logs to the default internal Elasticsearch log store, defined in the `ClusterLogging` custom resource. If you want to forward logs to other log aggregators, you can use the {product-title} Log Forwarding API to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. You can send different types of logs to different systems, allowing you to control who in your organization can access each type. Optional TLS support ensures that you can send logs using secure communication as required by your organization.

When you forward logs externally, the Cluster Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
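For illustration, a log-forwarding custom resource that sends application logs to an external Fluentd instance might look like the following sketch. The output name, pipeline name, and URL are placeholder values, and the exact schema varies by release, so verify the fields against the API reference for your version.

.Example log forwarding CR (illustrative sketch)
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: remote-fluentd                    # placeholder output name
    type: fluentdForward
    url: 'tls://fluentd.example.com:24224'  # placeholder endpoint
  pipelines:
  - name: forward-app-logs                  # placeholder pipeline name
    inputRefs:
    - application
    outputRefs:
    - remote-fluentd
----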

4 changes: 2 additions & 2 deletions logging/cluster-logging-uninstall.adoc
@@ -1,11 +1,11 @@
:context: cluster-logging-uninstall
[id="cluster-logging-uninstall"]
-= Uninstalling Cluster Logging
+= Uninstalling OpenShift Logging
include::modules/common-attributes.adoc[]

toc::[]

-You can remove cluster logging from your {product-title} cluster.
+You can remove OpenShift Logging from your {product-title} cluster.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
2 changes: 1 addition & 1 deletion logging/cluster-logging-upgrading.adoc
@@ -1,6 +1,6 @@
:context: cluster-logging-upgrading
[id="cluster-logging-upgrading"]
-= Updating cluster logging
+= Updating OpenShift Logging
include::modules/common-attributes.adoc[]

toc::[]
2 changes: 1 addition & 1 deletion logging/cluster-logging-visualizer.adoc
@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

-{product-title} cluster logging includes a web console for visualizing collected log data. Currently, {product-title} deploys the Kibana console for visualization.
+OpenShift Logging includes a web console for visualizing collected log data. Currently, {product-title} deploys the Kibana console for visualization.

Using the log visualizer, you can do the following with your data:

12 changes: 6 additions & 6 deletions logging/cluster-logging.adoc
@@ -1,18 +1,18 @@
:context: cluster-logging
[id="cluster-logging"]
-= Understanding cluster logging
+= Understanding OpenShift Logging
include::modules/common-attributes.adoc[]

toc::[]



ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
-As a cluster administrator, you can deploy cluster logging to
+As a cluster administrator, you can deploy OpenShift Logging to
aggregate all the logs from your {product-title} cluster, such as node system audit logs, application container logs, and infrastructure logs.
-Cluster logging aggregates these logs from throughout your cluster and stores them in a default log store. You can xref:../logging/cluster-logging-visualizer.adoc#cluster-logging-visualizer[use the Kibana web console to visualize log data].
+OpenShift Logging aggregates these logs from throughout your cluster and stores them in a default log store. You can xref:../logging/cluster-logging-visualizer.adoc#cluster-logging-visualizer[use the Kibana web console to visualize log data].

-Cluster logging aggregates the following types of logs:
+OpenShift Logging aggregates the following types of logs:

* `application` - Container logs generated by user applications running in the cluster, except infrastructure container applications.
* `infrastructure` - Logs generated by infrastructure components running in the cluster and {product-title} nodes, such as journal logs. Infrastructure components are pods that run in the `openshift*`, `kube*`, or `default` projects.
@@ -25,10 +25,10 @@ Because the internal {product-title} Elasticsearch log store does not provide se
endif::[]

ifdef::openshift-dedicated[]
-As an administrator, you can deploy cluster logging to
+As an administrator, you can deploy OpenShift Logging to
aggregate logs for a range of {product-title} services.

-Cluster logging runs on worker nodes. As an
+OpenShift Logging runs on worker nodes. As an
administrator, you can monitor resource consumption in the
console and via Prometheus and Grafana. Due to the high workload required for
logging, more worker nodes may be required for your environment.
2 changes: 1 addition & 1 deletion logging/config/cluster-logging-configuring-cr.adoc
@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

-To configure {product-title} cluster logging, you customize the `ClusterLogging` custom resource (CR).
+To configure OpenShift Logging, you customize the `ClusterLogging` custom resource (CR).

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
10 changes: 5 additions & 5 deletions logging/config/cluster-logging-configuring.adoc
@@ -1,17 +1,17 @@
:context: cluster-logging-configuring
[id="cluster-logging-configuring"]
-= Configuring cluster logging
+= Configuring OpenShift Logging
include::modules/common-attributes.adoc[]

toc::[]

-Cluster logging is configurable using a `ClusterLogging` custom resource (CR) deployed
+OpenShift Logging is configurable using a `ClusterLogging` custom resource (CR) deployed
in the `openshift-logging` project.

The Cluster Logging Operator watches for changes to the `ClusterLogging` CR,
creates any missing logging components, and adjusts the logging environment accordingly.

-The `ClusterLogging` CR is based on the `ClusterLogging` custom resource definition (CRD), which defines a complete cluster logging environment
+The `ClusterLogging` CR is based on the `ClusterLogging` custom resource definition (CRD), which defines a complete OpenShift Logging environment
and includes all the components of the logging stack to collect, store and visualize logs.

.Sample `ClusterLogging` custom resource (CR)
@@ -52,9 +52,9 @@ spec:
resources: null
type: kibana
----
-You can configure the following for cluster logging:
+You can configure the following for OpenShift Logging:

-* You can overwrite the image for each cluster logging component by modifying the appropriate
+* You can overwrite the image for each OpenShift Logging component by modifying the appropriate
environment variable in the `cluster-logging-operator` Deployment.

* You can specify specific nodes for the logging components using node selectors.
4 changes: 2 additions & 2 deletions logging/config/cluster-logging-memory.adoc
@@ -1,12 +1,12 @@
:context: cluster-logging-memory
[id="cluster-logging-memory"]
-= Configuring CPU and memory limits for cluster logging components
+= Configuring CPU and memory limits for OpenShift Logging components
include::modules/common-attributes.adoc[]

toc::[]


-You can configure both the CPU and memory limits for each of the cluster logging components as needed.
+You can configure both the CPU and memory limits for each of the OpenShift Logging components as needed.
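As an illustrative sketch only, CPU and memory limits are set per component under the `ClusterLogging` CR. The resource values below are examples, not recommendations; size them for your own workload.

.Example resource limits in a `ClusterLogging` CR (illustrative values)
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    elasticsearch:
      resources:        # example values; tune for your environment
        limits:
          memory: 16Gi
        requests:
          cpu: 200m
          memory: 16Gi
  collection:
    logs:
      fluentd:
        resources:      # example values; tune for your environment
          limits:
            memory: 736Mi
          requests:
            cpu: 100m
            memory: 736Mi
----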


// The following include statements pull in the module files that comprise
6 changes: 2 additions & 4 deletions logging/config/cluster-logging-moving-nodes.adoc
@@ -1,6 +1,6 @@
:context: cluster-logging-moving
[id="cluster-logging-moving"]
-= Moving the cluster logging resources with node selectors
+= Moving OpenShift Logging resources with node selectors
include::modules/common-attributes.adoc[]

toc::[]
@@ -9,13 +9,11 @@ toc::[]



-You can use node selectors to deploy the Elasticsearch, Kibana, and Curator pods to different nodes.
+You can use node selectors to deploy the Elasticsearch, Kibana, and Curator pods to different nodes.
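The node selector approach can be sketched as follows. The `node-role.kubernetes.io/infra` label is an assumed example; substitute a label that actually exists on your target nodes, and confirm the field placement against the `ClusterLogging` schema for your release.

.Example node selectors in a `ClusterLogging` CR (illustrative sketch)
[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    elasticsearch:
      nodeSelector:                        # schedule Elasticsearch pods onto labeled nodes
        node-role.kubernetes.io/infra: ''  # example label
  visualization:
    kibana:
      nodeSelector:                        # schedule Kibana pods onto labeled nodes
        node-role.kubernetes.io/infra: ''  # example label
----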

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.

include::modules/infrastructure-moving-logging.adoc[leveloffset=+1]


4 changes: 2 additions & 2 deletions logging/config/cluster-logging-storage-considerations.adoc
@@ -1,12 +1,12 @@
:context: cluster-logging-storage
[id="cluster-logging-storage"]
-= Configuring cluster logging storage
+= Configuring OpenShift Logging storage
include::modules/common-attributes.adoc[]

toc::[]


-Elasticsearch is a memory-intensive application. The default cluster logging installation deploys 16G of memory for both memory requests and memory limits.
+Elasticsearch is a memory-intensive application. The default OpenShift Logging installation deploys 16G of memory for both memory requests and memory limits.
The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the
{product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
memory setting, though this is not recommended for production environments.
6 changes: 3 additions & 3 deletions logging/config/cluster-logging-tolerations.adoc
@@ -1,11 +1,11 @@
:context: cluster-logging-tolerations
[id="cluster-logging-tolerations"]
-= Using tolerations to control cluster logging pod placement
+= Using tolerations to control OpenShift Logging pod placement
include::modules/common-attributes.adoc[]

toc::[]

-You can use taints and tolerations to ensure that cluster logging pods run
+You can use taints and tolerations to ensure that OpenShift Logging pods run
on specific nodes and that no other workload can run on those nodes.

Taints and tolerations are simple `key:value` pairs. A taint on a node
@@ -14,7 +14,7 @@ instructs the node to repel all pods that do not tolerate the taint.
The `key` is any string of up to 253 characters, and the `value` is any string of up to 63 characters.
The string must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores.

-.Sample cluster logging CR with tolerations
+.Sample OpenShift Logging CR with tolerations
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
2 changes: 1 addition & 1 deletion logging/config/cluster-logging-visualizer.adoc
@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

-{product-title} uses Kibana to display the log data collected by cluster logging.
+{product-title} uses Kibana to display the log data collected by OpenShift Logging.

You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.

2 changes: 1 addition & 1 deletion logging/dedicated-cluster-deploying.adoc
@@ -1,6 +1,6 @@
:context: dedicated-cluster-deploying
[id="dedicated-cluster-deploying"]
-= Installing the Cluster Logging and Elasticsearch Operators
+= Installing the Cluster Logging Operator and Elasticsearch Operator
include::modules/common-attributes.adoc[]

toc::[]
10 changes: 5 additions & 5 deletions logging/dedicated-cluster-logging.adoc
@@ -1,23 +1,23 @@
:context: dedicated-cluster-logging
[id="dedicated-cluster-logging"]
-= Configuring cluster logging in {product-title}
+= Configuring OpenShift Logging in {product-title}
include::modules/common-attributes.adoc[]

-As a cluster administrator, you can deploy cluster logging
+As a cluster administrator, you can deploy OpenShift Logging
to aggregate logs for a range of services.

{product-title} clusters can perform logging tasks using the Elasticsearch
-Operator. Cluster logging is configured through the Curator tool to retain logs
+Operator. OpenShift Logging is configured through the Curator tool to retain logs
for two days.

-Cluster logging is configurable using a `ClusterLogging` custom resource (CR)
+OpenShift Logging is configurable using a `ClusterLogging` custom resource (CR)
deployed in the `openshift-logging` project namespace.

The Cluster Logging Operator watches for changes to the `ClusterLogging` CR, creates
any missing logging components, and adjusts the logging environment accordingly.

The `ClusterLogging` CR is based on the `ClusterLogging` custom resource
-definition (CRD), which defines a complete cluster logging environment and
+definition (CRD), which defines a complete OpenShift Logging environment and
includes all the components of the logging stack to collect, store and visualize
logs.

2 changes: 1 addition & 1 deletion logging/troubleshooting/cluster-logging-alerts.adoc
@@ -1,6 +1,6 @@
:context: cluster-logging-alerts
[id="cluster-logging-alerts"]
-= Understanding cluster logging alerts
+= Understanding OpenShift Logging alerts
include::modules/common-attributes.adoc[]

toc::[]
4 changes: 2 additions & 2 deletions logging/troubleshooting/cluster-logging-cluster-status.adoc
@@ -1,11 +1,11 @@
:context: cluster-logging-cluster-status
[id="cluster-logging-cluster-status"]
-= Viewing cluster logging status
+= Viewing OpenShift Logging status
include::modules/common-attributes.adoc[]

toc::[]

-You can view the status of the Cluster Logging Operator and for a number of cluster logging components.
+You can view the status of the Cluster Logging Operator and for a number of OpenShift Logging components.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
6 changes: 3 additions & 3 deletions logging/troubleshooting/cluster-logging-must-gather.adoc
@@ -9,9 +9,9 @@ toc::[]

When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.

-The xref:../../support/gathering-cluster-data.adoc#gathering-cluster-data[`must-gather` tool] enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the cluster logging components.
+The xref:../../support/gathering-cluster-data.adoc#gathering-cluster-data[`must-gather` tool] enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the OpenShift Logging components.

-For prompt support, supply diagnostic information for both {product-title} and cluster logging.
+For prompt support, supply diagnostic information for both {product-title} and OpenShift Logging.

[NOTE]
====
@@ -23,6 +23,6 @@ include::modules/cluster-logging-must-gather-about.adoc[leveloffset=+1]
[id="cluster-logging-must-gather-prereqs"]
== Prerequisites

-* Cluster logging and Elasticsearch must be installed.
+* OpenShift Logging and Elasticsearch must be installed.

include::modules/cluster-logging-must-gather-collecting.adoc[leveloffset=+1]
8 changes: 4 additions & 4 deletions migration/migrating_3_4/planning-migration-3-to-4.adoc
@@ -157,18 +157,18 @@ endif::[]
Review the following logging changes to consider when transitioning from {product-title} 3.11 to {product-title} {product-version}.

[discrete]
-==== Deploying cluster logging
+==== Deploying OpenShift Logging

-{product-title} 4 provides a simple deployment mechanism for cluster logging, by using a Cluster Logging custom resource.
+{product-title} 4 provides a simple deployment mechanism for OpenShift Logging, by using a Cluster Logging custom resource.

-For more information, see xref:../../logging/cluster-logging-deploying.adoc#cluster-logging-deploying_cluster-logging-deploying[Installing cluster logging].
+For more information, see xref:../../logging/cluster-logging-deploying.adoc#cluster-logging-deploying_cluster-logging-deploying[Installing OpenShift Logging].

[discrete]
==== Aggregated logging data

You cannot transition your aggregate logging data from {product-title} 3.11 into your new {product-title} 4 cluster.

-For more information, see xref:../../logging/cluster-logging.adoc#cluster-logging-about_cluster-logging[About cluster logging].
+For more information, see xref:../../logging/cluster-logging.adoc#cluster-logging-about_cluster-logging[About OpenShift Logging].

[discrete]
==== Unsupported logging configurations
6 changes: 3 additions & 3 deletions modules/cluster-logging-about-components.adoc
@@ -9,13 +9,13 @@ ifeval::["{context}" == "virt-openshift-cluster-monitoring"]
endif::[]

[id="cluster-logging-about-components_{context}"]
-= About cluster logging components
+= About OpenShift Logging components

-The cluster logging components include a collector deployed to each node in the {product-title} cluster
+The OpenShift Logging components include a collector deployed to each node in the {product-title} cluster
that collects all node and container logs and writes them to a log store. You can use a centralized web UI
to create rich visualizations and dashboards with the aggregated data.

-The major components of cluster logging are:
+The major components of OpenShift Logging are:

* collection - This is the component that collects logs from the cluster, formats them, and forwards them to the log store. The current implementation is Fluentd.
* log store - This is where the logs are stored. The default implementation is Elasticsearch. You can use the default Elasticsearch log store or forward logs to external log stores. The default log store is optimized and tested for short-term storage.
4 changes: 2 additions & 2 deletions modules/cluster-logging-about-crd.adoc
@@ -5,10 +5,10 @@
[id="cluster-logging-configuring-crd_{context}"]
= About the `ClusterLogging` custom resource

-To make changes to your cluster logging environment, create and modify the `ClusterLogging` custom resource (CR).
+To make changes to your OpenShift Logging environment, create and modify the `ClusterLogging` custom resource (CR).
Instructions for creating or modifying a CR are provided in this documentation as appropriate.

-The following is an example of a typical custom resource for cluster logging.
+The following is an example of a typical custom resource for OpenShift Logging.

[id="efk-logging-configuring-about-sample_{context}"]
.Sample `ClusterLogging` CR