15 changes: 9 additions & 6 deletions logging/cluster-logging-deploying.adoc
@@ -1,17 +1,20 @@
:_content-type: ASSEMBLY
:context: cluster-logging-deploying
[id="cluster-logging-deploying"]
-= Installing the {logging-title}
+= Installing OpenShift Logging
include::_attributes/common-attributes.adoc[]

toc::[]


-You can install the {logging-title} by deploying the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators. The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging. The {logging} Operator creates and manages the components of the logging stack.
+You can install OpenShift Logging by deploying
+the OpenShift Elasticsearch and Red Hat OpenShift Logging Operators. The OpenShift Elasticsearch Operator
+creates and manages the Elasticsearch cluster used by OpenShift Logging.
+The Red Hat OpenShift Logging Operator creates and manages the components of the logging stack.

-The process for deploying the {logging} to {product-title} involves:
+The process for deploying OpenShift Logging to {product-title} involves:

-* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[{logging-uc} storage considerations].
+* Reviewing the xref:../logging/config/cluster-logging-storage-considerations#cluster-logging-storage[OpenShift Logging storage considerations].

* Installing the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator using the {product-title} xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-console_cluster-logging-deploying[web console] or xref:../logging/cluster-logging-deploying.adoc#cluster-logging-deploy-cli_cluster-logging-deploying[CLI].
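
For orientation, a minimal sketch of the kind of Subscription object the CLI installation step creates, assuming the typical `openshift-logging` namespace, `cluster-logging` package name, and a `stable` channel (all of which can differ by cluster version):

[source,yaml]
----
# Sketch only: subscribe the Red Hat OpenShift Logging Operator.
# Namespace, channel, and package names are assumed typical values;
# verify them against your cluster version before applying.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: stable
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----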

@@ -31,7 +34,7 @@ include::modules/cluster-logging-deploy-console.adoc[leveloffset=+1]

If you plan to use Kibana, you must xref:#cluster-logging-visualizer-indices_cluster-logging-deploying[manually create your Kibana index patterns and visualizations] to explore and visualize data in Kibana.

-If your cluster network provider enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the {logging} Operators].
+If your cluster network provider enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the OpenShift Logging operators].
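
As one illustration of such an allowance, a NetworkPolicy sketch that admits traffic from the `openshift-operators-redhat` project into `openshift-logging`, assuming the namespace carries the standard `kubernetes.io/metadata.name` label (your network provider may require a different mechanism, such as joining projects):

[source,yaml]
----
# Sketch only: allow ingress to openshift-logging from openshift-operators-redhat.
# Namespace names and the metadata.name label are assumptions; adjust as needed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-openshift-operators-redhat
  namespace: openshift-logging
spec:
  podSelector: {}                # select every pod in the namespace
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-operators-redhat
----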


include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1]
@@ -40,7 +43,7 @@ include::modules/cluster-logging-deploy-cli.adoc[leveloffset=+1]

If you plan to use Kibana, you must xref:#cluster-logging-visualizer-indices_cluster-logging-deploying[manually create your Kibana index patterns and visualizations] to explore and visualize data in Kibana.

-If your cluster network provider enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the {logging} Operators].
+If your cluster network provider enforces network isolation, xref:#cluster-logging-deploy-multitenant_cluster-logging-deploying[allow network traffic between the projects that contain the OpenShift Logging operators].

include::modules/cluster-logging-visualizer-indices.adoc[leveloffset=+2]

4 changes: 2 additions & 2 deletions logging/cluster-logging-eventrouter.adoc
@@ -6,9 +6,9 @@ include::_attributes/common-attributes.adoc[]

toc::[]

-The {product-title} Event Router is a pod that watches Kubernetes events and logs them for collection by the {logging}. You must manually deploy the Event Router.
+The {product-title} Event Router is a pod that watches Kubernetes events and logs them for collection by OpenShift Logging. You must manually deploy the Event Router.

-The Event Router collects events from all projects and writes them to `STDOUT`. The collector then forwards those events to the store defined in the `ClusterLogForwarder` custom resource (CR).
+The Event Router collects events from all projects and writes them to `STDOUT`. Fluentd collects those events and forwards them into the {product-title} Elasticsearch instance. Elasticsearch indexes the events to the `infra` index.
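
For a sense of what the collector picks up, a hypothetical Event Router record, shown in YAML for readability (the actual output is one JSON object per line, and field names can vary by version):

[source,yaml]
----
# Hypothetical shape of a single Event Router record written to STDOUT.
verb: ADDED                         # watch verb, for example ADDED or UPDATED
event:
  metadata:
    name: my-pod.1701d6a5c3f4ab21   # hypothetical event name
    namespace: my-project
  reason: Scheduled
  message: Successfully assigned my-project/my-pod to node-1
  type: Normal
----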

[IMPORTANT]
====
4 changes: 1 addition & 3 deletions logging/cluster-logging-exported-fields.adoc
@@ -6,14 +6,12 @@ include::_attributes/common-attributes.adoc[]

toc::[]

-The following fields can be present in log records exported by the {logging}. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings.
+The following fields can be present in log records exported by OpenShift Logging. Although log records are typically formatted as JSON objects, the same data model can be applied to other encodings.

To search these fields from Elasticsearch and Kibana, use the full dotted field name when searching. For example, with an Elasticsearch `/_search` URL, to look for a Kubernetes pod name, use `/_search?q=kubernetes.pod_name:name-of-my-pod`.
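
For example, a hypothetical log record, shown in YAML, in which the dotted name `kubernetes.pod_name` addresses the nested `pod_name` field:

[source,yaml]
----
# Hypothetical record: dotted search names map onto this nesting.
"@timestamp": "2021-06-01T12:00:00.000000+00:00"
message: my application log line
hostname: node-1.example.com
level: info
kubernetes:
  pod_name: name-of-my-pod          # matches kubernetes.pod_name:name-of-my-pod
  namespace_name: my-project
  container_name: my-container
----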

-// The logging system can parse JSON-formatted log entries to external systems. These log entries are formatted as a fluentd message with extra fields such as `kubernetes`. The fields exported by the logging system and available for searching from Elasticsearch and Kibana are documented at the end of this document.

include::modules/cluster-logging-exported-fields-top-level-fields.adoc[leveloffset=0]

include::modules/cluster-logging-exported-fields-kubernetes.adoc[leveloffset=0]

-// add modules/cluster-logging-exported-fields-openshift when available
4 changes: 2 additions & 2 deletions logging/cluster-logging-external.adoc
@@ -6,7 +6,7 @@ include::_attributes/common-attributes.adoc[]

toc::[]

-By default, the {logging} sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.
+By default, OpenShift Logging sends container and infrastructure logs to the default internal Elasticsearch log store defined in the `ClusterLogging` custom resource. However, it does not send audit logs to the internal store because it does not provide secure storage. If this default configuration meets your needs, you do not need to configure the Cluster Log Forwarder.

To send logs to other log aggregators, you use the {product-title} Cluster Log Forwarder. This API enables you to send container, infrastructure, and audit logs to specific endpoints within or outside your cluster. In addition, you can send different types of logs to various systems so that various individuals can access each type. You can also enable Transport Layer Security (TLS) support to send logs securely, as required by your organization.
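
A minimal sketch of that API, with a hypothetical external Elasticsearch output (the URL, secret, and pipeline names are placeholders):

[source,yaml]
----
# Sketch only: forward application and audit logs to a hypothetical
# external Elasticsearch endpoint.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: remote-elasticsearch       # placeholder output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
    secret:
      name: remote-es-secret         # TLS material, if your endpoint requires it
  pipelines:
  - name: app-and-audit
    inputRefs:
    - application
    - audit
    outputRefs:
    - remote-elasticsearch
----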

@@ -15,7 +15,7 @@ To send logs to other log aggregators, you use the {product-title} Cluster Log F
To send audit logs to the default internal Elasticsearch log store, use the Cluster Log Forwarder as described in xref:../logging/config/cluster-logging-log-store.adoc#cluster-logging-elasticsearch-audit_cluster-logging-store[Forward audit logs to the log store].
====

-When you forward logs externally, the {logging} creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
+When you forward logs externally, the Red Hat OpenShift Logging Operator creates or modifies a Fluentd config map to send logs using your desired protocols. You are responsible for configuring the protocol on the external log aggregator.
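
As a rough sketch of what the operator manages here, assuming the config map is named `fluentd` and lives in the `openshift-logging` project (the name and keys are operator-managed details that can vary by version):

[source,yaml]
----
# Hypothetical shape of the operator-managed collector config map.
# Do not edit it directly; the operator reconciles it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd                      # assumed name
  namespace: openshift-logging
data:
  fluent.conf: |
    # generated Fluentd pipeline, including any configured forwarding outputs
----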

[IMPORTANT]
====