Merged
6 changes: 3 additions & 3 deletions _unused_topics/cluster-logging-configuring-node-selector.adoc
Original file line number Diff line number Diff line change
@@ -3,13 +3,13 @@
// * logging/cluster-logging-elasticsearch.adoc

[id="cluster-logging-configuring-node-selector_{context}"]
= Specifying a node for cluster logging components using node selectors
= Specifying a node for OpenShift Logging components using node selectors

Each component specification allows the component to target a specific node.
Each component specification allows the component to target a specific node.

.Procedure

. Edit the Cluster Logging Custom Resource (CR) in the `openshift-logging` project:
. Edit the Cluster Logging custom resource (CR) in the `openshift-logging` project:
+
----
$ oc edit ClusterLogging instance
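The node selector itself is set on the component specification inside the CR. As a minimal sketch, assuming the `ClusterLogging` schema exposes a `nodeSelector` field per component (the `node-type: infra` label is hypothetical):

[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
spec:
  visualization:
    kibana:
      nodeSelector:        # assumption: per-component nodeSelector field
        node-type: infra   # hypothetical node label
----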
2 changes: 1 addition & 1 deletion _unused_topics/cluster-logging-elasticsearch-admin.adoc
@@ -11,7 +11,7 @@ administrative operations on Elasticsearch are provided within the

[NOTE]
====
To confirm whether or not your cluster logging installation provides these, run:
To confirm whether or not your OpenShift Logging installation provides these, run:
----
$ oc describe secret elasticsearch -n openshift-logging
----
2 changes: 1 addition & 1 deletion _unused_topics/cluster-logging-exported-fields-docker.adoc
@@ -5,7 +5,7 @@
[id="cluster-logging-exported-fields-container_{context}"]
= Container exported fields

These are the Docker fields exported by the {product-title} cluster logging available for searching from Elasticsearch and Kibana.
These are the Docker fields exported by OpenShift Logging that are available for searching from Elasticsearch and Kibana.
The namespace is for Docker container-specific metadata. The `docker.container_id` field is the Docker container ID.


8 changes: 4 additions & 4 deletions _unused_topics/cluster-logging-uninstall-cluster-ops.adoc
@@ -5,15 +5,15 @@
[id="cluster-logging-uninstall-ops_{context}"]
= Uninstall the infra cluster

You can uninstall the infra cluster from the {product-title} cluster logging.
You can uninstall the infra cluster from OpenShift Logging.
After uninstalling, Fluentd no longer splits logs.

.Procedure

To uninstall the infra cluster:

.
.

.
.

.
.
10 changes: 5 additions & 5 deletions logging/cluster-logging-deploying.adoc
@@ -1,19 +1,19 @@
:context: cluster-logging-deploying
[id="cluster-logging-deploying"]
= Installing cluster logging
= Installing OpenShift Logging
include::modules/common-attributes.adoc[]

toc::[]


You can install cluster logging by deploying
You can install OpenShift Logging by deploying
the Elasticsearch and Cluster Logging Operators. The Elasticsearch Operator
creates and manages the Elasticsearch cluster used by cluster logging.
creates and manages the Elasticsearch cluster used by OpenShift Logging.
The Cluster Logging Operator creates and manages the components of the logging stack.

The process for deploying cluster logging to {product-title} involves:
The process for deploying OpenShift Logging to {product-title} involves:

* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[cluster logging storage considerations].
* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[OpenShift Logging storage considerations].

* Installing the Elasticsearch Operator and Cluster Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI].

2 changes: 1 addition & 1 deletion logging/cluster-logging-eventrouter.adoc
@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

The {product-title} Event Router is a pod that watches Kubernetes events and logs them for collection by cluster logging. You must manually deploy the Event Router.
The {product-title} Event Router is a pod that watches Kubernetes events and logs them for collection by OpenShift Logging. You must manually deploy the Event Router.

The Event Router collects events from all projects and writes them to `STDOUT`. Fluentd collects those events and forwards them into the {product-title} Elasticsearch instance. Elasticsearch indexes the events to the `infra` index.

2 changes: 1 addition & 1 deletion logging/cluster-logging-external.adoc
@@ -6,7 +6,7 @@ include::modules/common-attributes.adoc[]
toc::[]


By default, {product-title} cluster logging sends logs to the default internal Elasticsearch log store, defined in the `ClusterLogging` custom resource. If you want to forward logs to other log aggregators, you can use the {product-title} Log Forwarding API to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. You can send different types of logs to different systems, allowing you to control who in your organization can access each type. Optional TLS support ensures that you can send logs using secure communication as required by your organization.
By default, OpenShift Logging sends logs to the default internal Elasticsearch log store, defined in the `ClusterLogging` custom resource. If you want to forward logs to other log aggregators, you can use the {product-title} Log Forwarding API to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. You can send different types of logs to different systems, allowing you to control who in your organization can access each type. Optional TLS support ensures that you can send logs using secure communication as required by your organization.

When you forward logs externally, the Cluster Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.

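A forwarding configuration can be sketched with a `ClusterLogForwarder` custom resource. The output name and URL below are hypothetical, and the exact schema should be checked against your version:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: remote-elasticsearch          # hypothetical external aggregator
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
  pipelines:
  - name: forward-app-logs
    inputRefs:
    - application                       # container logs from user applications
    outputRefs:
    - remote-elasticsearch
----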
4 changes: 2 additions & 2 deletions logging/cluster-logging-uninstall.adoc
@@ -1,11 +1,11 @@
:context: cluster-logging-uninstall
[id="cluster-logging-uninstall"]
= Uninstalling Cluster Logging
= Uninstalling OpenShift Logging
include::modules/common-attributes.adoc[]

toc::[]

You can remove cluster logging from your {product-title} cluster.
You can remove OpenShift Logging from your {product-title} cluster.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
6 changes: 3 additions & 3 deletions logging/cluster-logging-upgrading.adoc
@@ -1,6 +1,6 @@
:context: cluster-logging-upgrading
[id="cluster-logging-upgrading"]
= Updating cluster logging
= Updating OpenShift Logging
include::modules/common-attributes.adoc[]

toc::[]
@@ -9,14 +9,14 @@ toc::[]

After updating the {product-title} cluster from 4.6 to 4.7, you can then update the Elasticsearch Operator and Cluster Logging Operator from 4.6 to 4.7.

Cluster logging 4.5 introduces a new Elasticsearch version, Elasticsearch 6.8.1, and an enhanced security plug-in, Open Distro for Elasticsearch. The new Elasticsearch version introduces a new Elasticsearch data model, where the Elasticsearch data is indexed only by type: infrastructure, application, and audit. Previously, data was indexed by type (infrastructure and application) and project.
OpenShift Logging 4.5 introduces a new Elasticsearch version, Elasticsearch 6.8.1, and an enhanced security plug-in, Open Distro for Elasticsearch. The new Elasticsearch version introduces a new Elasticsearch data model, where the Elasticsearch data is indexed only by type: infrastructure, application, and audit. Previously, data was indexed by type (infrastructure and application) and project.

[IMPORTANT]
====
Because of the new data model, the update does not migrate existing custom Kibana index patterns and visualizations into the new version. You must re-create your Kibana index patterns and visualizations to match the new indices after updating.
====

Due to the nature of these changes, you are not required to update your cluster logging to 4.6. However, when you update to {product-title} 4.7, you must update cluster logging to 4.7 at that time.
Due to the nature of these changes, you are not required to update OpenShift Logging to 4.6. However, when you update to {product-title} 4.7, you must update OpenShift Logging to 4.7 at that time.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
2 changes: 1 addition & 1 deletion logging/cluster-logging-visualizer.adoc
@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

{product-title} cluster logging includes a web console for visualizing collected log data. Currently, {product-title} deploys the Kibana console for visualization.
OpenShift Logging includes a web console for visualizing collected log data. Currently, {product-title} deploys the Kibana console for visualization.

Using the log visualizer, you can do the following with your data:

12 changes: 6 additions & 6 deletions logging/cluster-logging.adoc
@@ -1,18 +1,18 @@
:context: cluster-logging
[id="cluster-logging"]
= Understanding cluster logging
= Understanding OpenShift Logging
include::modules/common-attributes.adoc[]

toc::[]



ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
As a cluster administrator, you can deploy cluster logging to
As a cluster administrator, you can deploy OpenShift Logging to
aggregate all the logs from your {product-title} cluster, such as node system audit logs, application container logs, and infrastructure logs.
Cluster logging aggregates these logs from throughout your cluster and stores them in a default log store. You can xref:../logging/cluster-logging-visualizer.adoc#cluster-logging-visualizer[use the Kibana web console to visualize log data].
OpenShift Logging aggregates these logs from throughout your cluster and stores them in a default log store. You can xref:../logging/cluster-logging-visualizer.adoc#cluster-logging-visualizer[use the Kibana web console to visualize log data].

Cluster logging aggregates the following types of logs:
OpenShift Logging aggregates the following types of logs:

* `application` - Container logs generated by user applications running in the cluster, except infrastructure container applications.
* `infrastructure` - Logs generated by infrastructure components running in the cluster and {product-title} nodes, such as journal logs. Infrastructure components are pods that run in the `openshift*`, `kube*`, or `default` projects.
@@ -25,10 +25,10 @@ Because the internal {product-title} Elasticsearch log store does not provide se
endif::[]

ifdef::openshift-dedicated[]
As an administrator, you can deploy cluster logging to
As an administrator, you can deploy OpenShift Logging to
aggregate logs for a range of {product-title} services.

Cluster logging runs on worker nodes. As an
OpenShift Logging runs on worker nodes. As an
administrator, you can monitor resource consumption in the
console and via Prometheus and Grafana. Due to the high workload required for
logging, more worker nodes might be required for your environment.
2 changes: 1 addition & 1 deletion logging/config/cluster-logging-configuring-cr.adoc
@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

To configure {product-title} cluster logging, you customize the `ClusterLogging` custom resource (CR).
To configure OpenShift Logging, you customize the `ClusterLogging` custom resource (CR).

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
10 changes: 5 additions & 5 deletions logging/config/cluster-logging-configuring.adoc
@@ -1,17 +1,17 @@
:context: cluster-logging-configuring
[id="cluster-logging-configuring"]
= Configuring cluster logging
= Configuring OpenShift Logging
include::modules/common-attributes.adoc[]

toc::[]

Cluster logging is configurable using a `ClusterLogging` custom resource (CR) deployed
OpenShift Logging is configurable using a `ClusterLogging` custom resource (CR) deployed
in the `openshift-logging` project.

The Cluster Logging Operator watches for changes to `ClusterLogging` CR,
creates any missing logging components, and adjusts the logging environment accordingly.

The `ClusterLogging` CR is based on the `ClusterLogging` custom resource definition (CRD), which defines a complete cluster logging environment
The `ClusterLogging` CR is based on the `ClusterLogging` custom resource definition (CRD), which defines a complete OpenShift Logging environment
and includes all the components of the logging stack to collect, store and visualize logs.

.Sample `ClusterLogging` custom resource (CR)
@@ -52,9 +52,9 @@ spec:
resources: null
type: kibana
----
You can configure the following for cluster logging:
You can configure the following for OpenShift Logging:

* You can overwrite the image for each cluster logging component by modifying the appropriate
* You can overwrite the image for each OpenShift Logging component by modifying the appropriate
environment variable in the `cluster-logging-operator` Deployment.

* You can specify specific nodes for the logging components using node selectors.
4 changes: 2 additions & 2 deletions logging/config/cluster-logging-memory.adoc
@@ -1,12 +1,12 @@
:context: cluster-logging-memory
[id="cluster-logging-memory"]
= Configuring CPU and memory limits for cluster logging components
= Configuring CPU and memory limits for OpenShift Logging components
include::modules/common-attributes.adoc[]

toc::[]


You can configure both the CPU and memory limits for each of the cluster logging components as needed.
You can configure both the CPU and memory limits for each of the OpenShift Logging components as needed.


// The following include statements pull in the module files that comprise
6 changes: 2 additions & 4 deletions logging/config/cluster-logging-moving-nodes.adoc
@@ -1,6 +1,6 @@
:context: cluster-logging-moving
[id="cluster-logging-moving"]
= Moving the cluster logging resources with node selectors
= Moving OpenShift Logging resources with node selectors
include::modules/common-attributes.adoc[]

toc::[]
@@ -9,13 +9,11 @@ toc::[]



You can use node selectors to deploy the Elasticsearch, Kibana, and Curator pods to different nodes.
You can use node selectors to deploy the Elasticsearch, Kibana, and Curator pods to different nodes.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
// assemblies.

include::modules/infrastructure-moving-logging.adoc[leveloffset=+1]


4 changes: 2 additions & 2 deletions logging/config/cluster-logging-storage-considerations.adoc
@@ -1,12 +1,12 @@
:context: cluster-logging-storage
[id="cluster-logging-storage"]
= Configuring cluster logging storage
= Configuring OpenShift Logging storage
include::modules/common-attributes.adoc[]

toc::[]


Elasticsearch is a memory-intensive application. The default cluster logging installation deploys 16G of memory for both memory requests and memory limits.
Elasticsearch is a memory-intensive application. The default OpenShift Logging installation deploys 16G of memory for both memory requests and memory limits.
The initial set of {product-title} nodes might not be large enough to support the Elasticsearch cluster. You must add nodes to the
{product-title} cluster to run with the recommended or higher memory. Each Elasticsearch node can operate with a lower
memory setting, though this is not recommended for production environments.
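As a sketch, the Elasticsearch memory requests and limits could be lowered in the `ClusterLogging` CR for a non-production environment; this assumes the `logStore.elasticsearch.resources` field, and `8Gi` is an illustrative value:

[source,yaml]
----
spec:
  logStore:
    elasticsearch:
      resources:           # assumption: standard resource requirements block
        requests:
          memory: 8Gi      # illustrative non-production value
        limits:
          memory: 8Gi
----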
6 changes: 3 additions & 3 deletions logging/config/cluster-logging-tolerations.adoc
@@ -1,11 +1,11 @@
:context: cluster-logging-tolerations
[id="cluster-logging-tolerations"]
= Using tolerations to control cluster logging pod placement
= Using tolerations to control OpenShift Logging pod placement
include::modules/common-attributes.adoc[]

toc::[]

You can use taints and tolerations to ensure that cluster logging pods run
You can use taints and tolerations to ensure that OpenShift Logging pods run
on specific nodes and that no other workload can run on those nodes.

Taints and tolerations are simple `key:value` pairs. A taint on a node
@@ -14,7 +14,7 @@ instructs the node to repel all pods that do not tolerate the taint.
The `key` is any string, up to 253 characters, and the `value` is any string, up to 63 characters.
The string must begin with a letter or number, and can contain letters, numbers, hyphens, dots, and underscores.
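As an illustrative sketch, a toleration on the Fluentd collector might look like the following; the `logging` taint key and the `tolerationSeconds` value are hypothetical:

[source,yaml]
----
spec:
  collection:
    logs:
      fluentd:
        tolerations:
        - key: "logging"            # hypothetical taint key on the target nodes
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 6000   # illustrative grace period before eviction
----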

.Sample cluster logging CR with tolerations
.Sample OpenShift Logging CR with tolerations
[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
2 changes: 1 addition & 1 deletion logging/config/cluster-logging-visualizer.adoc
@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

{product-title} uses Kibana to display the log data collected by cluster logging.
{product-title} uses Kibana to display the log data collected by OpenShift Logging.

You can scale Kibana for redundancy and configure the CPU and memory for your Kibana nodes.

2 changes: 1 addition & 1 deletion logging/dedicated-cluster-deploying.adoc
@@ -1,6 +1,6 @@
:context: dedicated-cluster-deploying
[id="dedicated-cluster-deploying"]
= Installing the Cluster Logging and Elasticsearch Operators
= Installing the Cluster Logging Operator and Elasticsearch Operator
include::modules/common-attributes.adoc[]

toc::[]
10 changes: 5 additions & 5 deletions logging/dedicated-cluster-logging.adoc
@@ -1,23 +1,23 @@
:context: dedicated-cluster-logging
[id="dedicated-cluster-logging"]
= Configuring cluster logging in {product-title}
= Configuring OpenShift Logging in {product-title}
include::modules/common-attributes.adoc[]

As a cluster administrator, you can deploy cluster logging
As a cluster administrator, you can deploy OpenShift Logging
to aggregate logs for a range of services.

{product-title} clusters can perform logging tasks using the Elasticsearch
Operator. Cluster logging is configured through the Curator tool to retain logs
Operator. OpenShift Logging is configured through the Curator tool to retain logs
for two days.

Cluster logging is configurable using a `ClusterLogging` custom resource (CR)
OpenShift Logging is configurable using a `ClusterLogging` custom resource (CR)
deployed in the `openshift-logging` project namespace.

The Cluster Logging Operator watches for changes to `ClusterLogging` CR, creates
any missing logging components, and adjusts the logging environment accordingly.

The `ClusterLogging` CR is based on the `ClusterLogging` custom resource
definition (CRD), which defines a complete cluster logging environment and
definition (CRD), which defines a complete OpenShift Logging environment and
includes all the components of the logging stack to collect, store and visualize
logs.

2 changes: 1 addition & 1 deletion logging/troubleshooting/cluster-logging-alerts.adoc
@@ -1,6 +1,6 @@
:context: cluster-logging-alerts
[id="cluster-logging-alerts"]
= Understanding cluster logging alerts
= Understanding OpenShift Logging alerts
include::modules/common-attributes.adoc[]

toc::[]
4 changes: 2 additions & 2 deletions logging/troubleshooting/cluster-logging-cluster-status.adoc
@@ -1,11 +1,11 @@
:context: cluster-logging-cluster-status
[id="cluster-logging-cluster-status"]
= Viewing cluster logging status
= Viewing OpenShift Logging status
include::modules/common-attributes.adoc[]

toc::[]

You can view the status of the Cluster Logging Operator and for a number of cluster logging components.
You can view the status of the Cluster Logging Operator and of a number of OpenShift Logging components.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
6 changes: 3 additions & 3 deletions logging/troubleshooting/cluster-logging-must-gather.adoc
@@ -9,9 +9,9 @@ toc::[]

When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support.

The xref:../../support/gathering-cluster-data.adoc#gathering-cluster-data[`must-gather` tool] enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the cluster logging components.
The xref:../../support/gathering-cluster-data.adoc#gathering-cluster-data[`must-gather` tool] enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the OpenShift Logging components.

For prompt support, supply diagnostic information for both {product-title} and cluster logging.
For prompt support, supply diagnostic information for both {product-title} and OpenShift Logging.

[NOTE]
====
@@ -23,6 +23,6 @@ include::modules/cluster-logging-must-gather-about.adoc[leveloffset=+1]
[id="cluster-logging-must-gather-prereqs"]
== Prerequisites

* Cluster logging and Elasticsearch must be installed.
* OpenShift Logging and Elasticsearch must be installed.

include::modules/cluster-logging-must-gather-collecting.adoc[leveloffset=+1]