2 changes: 1 addition & 1 deletion jaeger/jaeger_install/rhbjaeger-removing.adoc
@@ -24,4 +24,4 @@ include::modules/jaeger-removing-instance-cli.adoc[leveloffset=+1]

* Remove the Jaeger Operator.

-* After the Jaeger Operator has been removed, if appropriate, remove the Elasticsearch Operator.
+* After the Jaeger Operator has been removed, if appropriate, remove the OpenShift Elasticsearch Operator.
10 changes: 5 additions & 5 deletions logging/cluster-logging-deploying.adoc
@@ -7,15 +7,15 @@ toc::[]


You can install OpenShift Logging by deploying
-the Elasticsearch and Cluster Logging Operators. The Elasticsearch Operator
-creates and manages the Elasticsearch cluster used by OpenShift Logging.
-The Cluster Logging Operator creates and manages the components of the logging stack.
+the OpenShift Elasticsearch and Cluster Logging Operators. The OpenShift Elasticsearch Operator
+creates and manages the Elasticsearch cluster used by OpenShift Logging.
+The Cluster Logging Operator creates and manages the components of the logging stack.

The process for deploying OpenShift Logging to {product-title} involves:

* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[OpenShift Logging storage considerations].

-* Installing the Elasticsearch Operator and Cluster Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI].
+* Installing the OpenShift Elasticsearch Operator and Cluster Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI].

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
@@ -32,7 +32,7 @@ include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1]

== Post-installation tasks

-If you plan to use Kibana, you must xref:#cluster-logging-visualizer-indices_cluster-logging-deploying[manually create your Kibana index patterns and visualizations] to explore and visualize data in Kibana.
+If you plan to use Kibana, you must xref:#cluster-logging-visualizer-indices_cluster-logging-deploying[manually create your Kibana index patterns and visualizations] to explore and visualize data in Kibana.

include::modules/cluster-logging-visualizer-indices.adoc[leveloffset=+2]

2 changes: 1 addition & 1 deletion logging/cluster-logging-upgrading.adoc
@@ -7,7 +7,7 @@ toc::[]



-After updating the {product-title} cluster from 4.6 to 4.7, you can update the Elasticsearch Operator and Cluster Logging Operator from 4.6 to 5.0.
+After updating the {product-title} cluster from 4.6 to 4.7, you can update the OpenShift Elasticsearch Operator and Cluster Logging Operator from 4.6 to 5.0.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
2 changes: 1 addition & 1 deletion logging/config/cluster-logging-tolerations.adoc
@@ -8,7 +8,7 @@ toc::[]
You can use taints and tolerations to ensure that OpenShift Logging pods run
on specific nodes and that no other workload can run on those nodes.

-Taints and tolerations are simple `key:value` pair. A taint on a node
+Taints and tolerations are simple `key:value` pair. A taint on a node
instructs the node to repel all pods that do not tolerate the taint.

The `key` is any string, up to 253 characters and the `value` is any string up to 63 characters.
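
For context on the hunk above, a minimal sketch of a matching taint and toleration; the node name, key, and value (`node1`, `logging=reserved`) are illustrative assumptions, not content from this PR:

[source,yaml]
----
# Taint a node so that it repels pods lacking a matching toleration
# (hypothetical node name and key/value):
#   oc adm taint nodes node1 logging=reserved:NoSchedule

# Matching toleration in the pod specification:
spec:
  tolerations:
  - key: "logging"       # any string, up to 253 characters
    operator: "Equal"
    value: "reserved"    # any string, up to 63 characters
    effect: "NoSchedule"
----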
2 changes: 1 addition & 1 deletion logging/dedicated-cluster-deploying.adoc
@@ -1,6 +1,6 @@
:context: dedicated-cluster-deploying
[id="dedicated-cluster-deploying"]
-= Installing the Cluster Logging Operator and Elasticsearch Operator
+= Installing the Cluster Logging Operator and OpenShift Elasticsearch Operator
include::modules/common-attributes.adoc[]

toc::[]
2 changes: 1 addition & 1 deletion logging/dedicated-cluster-logging.adoc
@@ -8,7 +8,7 @@ toc::[]
As a cluster administrator, you can deploy OpenShift Logging
to aggregate logs for a range of services.

-{product-title} clusters can perform logging tasks using the Elasticsearch
+{product-title} clusters can perform logging tasks using the OpenShift Elasticsearch
Operator. OpenShift Logging is configured through the Curator tool to retain logs
for two days.

@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

-You can view the status of the Elasticsearch Operator and for a number of Elasticsearch components.
+You can view the status of the OpenShift Elasticsearch Operator and for a number of Elasticsearch components.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
@@ -16,6 +16,3 @@ You can view the status of the Elasticsearch Operator and for a number of Elasti
include::modules/cluster-logging-log-store-status-viewing.adoc[leveloffset=+1]

include::modules/cluster-logging-log-store-status-comp.adoc[leveloffset=+1]
-
-
-
2 changes: 1 addition & 1 deletion modules/cluster-logging-about-collector.adoc
@@ -14,7 +14,7 @@ By default, the log collector uses the following sources:

If you configure the log collector to collect audit logs, it gets them from `/var/log/audit/audit.log`.

-The logging collector is a daemon set that deploys pods to each {product-title} node. System and infrastructure logs are generated by journald log messages from the operating system, the container runtime, and {product-title}. Application logs are generated by the CRI-O container engine. Fluentd collects the logs from these sources and forwards them internally or externally as you configure in {product-title}.
+The logging collector is a daemon set that deploys pods to each {product-title} node. System and infrastructure logs are generated by journald log messages from the operating system, the container runtime, and {product-title}. Application logs are generated by the CRI-O container engine. Fluentd collects the logs from these sources and forwards them internally or externally as you configure in {product-title}.

The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered *best effort*.

8 changes: 4 additions & 4 deletions modules/cluster-logging-about-logstore.adoc
@@ -3,20 +3,20 @@
// * logging/cluster-logging.adoc

[id="cluster-logging-about-logstore_{context}"]
-= About the log store
+= About the log store

By default, {product-title} uses link:https://www.elastic.co/products/elasticsearch[Elasticsearch (ES)] to store log data. Optionally, you can use the log forwarding features to forward logs to external log stores using Fluentd protocols, syslog protocols, or the {product-title} Log Forwarding API.

-The OpenShift Logging Elasticsearch instance is optimized and tested for short term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended you move the data to a third-party storage system.
+The OpenShift Logging Elasticsearch instance is optimized and tested for short term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended you move the data to a third-party storage system.

-Elasticsearch organizes the log data from Fluentd into datastores, or _indices_, then subdivides each index into multiple pieces called _shards_, which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called _replicas_, which Elasticsearch also spreads across the Elasticsearch nodes. The `ClusterLogging` custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained using a retention policy in the `ClusterLogging` CR.
+Elasticsearch organizes the log data from Fluentd into datastores, or _indices_, then subdivides each index into multiple pieces called _shards_, which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called _replicas_, which Elasticsearch also spreads across the Elasticsearch nodes. The `ClusterLogging` custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained using a retention policy in the `ClusterLogging` CR.

[NOTE]
====
The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes.
====

-The Cluster Logging Operator and companion Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume.
+The Cluster Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume.
You can use a `ClusterLogging` custom resource (CR) to increase the number of Elasticsearch nodes, as needed.
Refer to the link:https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html[Elasticsearch documentation] for considerations involved in configuring storage.

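As context for the shard, replica, and retention discussion in this hunk, a minimal `ClusterLogging` CR sketch; the field values (node count, redundancy policy, retention ages, storage class, size) are illustrative assumptions, not content from this PR:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: elasticsearch
    retentionPolicy:                      # how long each log type is retained
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
    elasticsearch:
      nodeCount: 3                        # primary shards track the number of data nodes
      redundancyPolicy: SingleRedundancy  # one replica of each primary shard
      storage:
        storageClassName: gp2             # hypothetical storage class
        size: 200G
----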
9 changes: 4 additions & 5 deletions modules/cluster-logging-about.adoc
@@ -12,8 +12,8 @@
= About deploying OpenShift Logging

ifdef::openshift-enterprise,openshift-webscale,openshift-origin[]
-{product-title} cluster administrators can deploy OpenShift Logging using
-the {product-title} web console or CLI to install the Elasticsearch
+{product-title} cluster administrators can deploy OpenShift Logging using
+the {product-title} web console or CLI to install the OpenShift Elasticsearch
Operator and Cluster Logging Operator. When the operators are installed, you create
a `ClusterLogging` custom resource (CR) to schedule OpenShift Logging pods and
other resources necessary to support OpenShift Logging. The operators are
@@ -22,15 +22,14 @@ endif::openshift-enterprise,openshift-webscale,openshift-origin[]

ifdef::openshift-dedicated[]
{product-title} administrators can deploy the Cluster Logging Operator and the
-Elasticsearch Operator by using the {product-title} web console and can configure logging in the
+OpenShift Elasticsearch Operator by using the {product-title} web console and can configure logging in the
`openshift-logging` namespace. Configuring logging will deploy Elasticsearch,
Fluentd, and Kibana in the `openshift-logging` namespace. The operators are
responsible for deploying, upgrading, and maintaining OpenShift Logging.
endif::openshift-dedicated[]

The `ClusterLogging` CR defines a complete OpenShift Logging environment that includes all the components
-of the logging stack to collect, store and visualize logs. The Cluster Logging Operator watches the OpenShift Logging
+of the logging stack to collect, store and visualize logs. The Cluster Logging Operator watches the OpenShift Logging
CR and adjusts the logging deployment accordingly.

Administrators and application developers can view the logs of the projects for which they have view access.
-
@@ -12,7 +12,7 @@ Forwarding cluster logs to external third-party systems requires a combination o
--
* `elasticsearch`. An external Elasticsearch instance. The `elasticsearch` output can use a TLS connection.

-* `fluentdForward`. An external log aggregation solution that supports Fluentd. This option uses the Fluentd *forward* protocols. The `fluentForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a *shared_key* field in a secret. Shared-key authentication can be used with or without TLS.
+* `fluentdForward`. An external log aggregation solution that supports Fluentd. This option uses the Fluentd *forward* protocols. The `fluentForward` output can use a TCP or TLS connection and supports shared-key authentication by providing a *shared_key* field in a secret. Shared-key authentication can be used with or without TLS.

* `syslog`. An external log aggregation solution that supports the syslog link:https://tools.ietf.org/html/rfc3164[RFC3164] or link:https://tools.ietf.org/html/rfc5424[RFC5424] protocols. The `syslog` output can use a UDP, TCP, or TLS connection.

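For context on the output types listed in this hunk, a minimal `ClusterLogForwarder` sketch that forwards application logs to an external Elasticsearch instance; the URL and secret name are hypothetical:

[source,yaml]
----
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: remote-es
    type: elasticsearch                         # one of the output types listed above
    url: https://elasticsearch.example.com:9200 # hypothetical endpoint
    secret:
      name: remote-es-secret                    # TLS credentials for the connection
  pipelines:
  - name: application-logs
    inputRefs: [ application ]                  # forward application logs
    outputRefs: [ remote-es ]
----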
2 changes: 1 addition & 1 deletion modules/cluster-logging-configuring-image-about.adoc
@@ -6,7 +6,7 @@
= Understanding OpenShift Logging component images

There are several components in OpenShift Logging, each one implemented with one
-or more images. Each image is specified by an environment variable
+or more images. Each image is specified by an environment variable
defined in the *cluster-logging-operator* deployment in the *openshift-logging* project and should not be changed.

You can view the images by running the following command: