@@ -14,10 +14,6 @@ include::modules/configuring-logging-collector.adoc[leveloffset=+1]

include::modules/creating-logfilesmetricexporter.adoc[leveloffset=+1]

include::modules/log-collector-resources-scheduling.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-pod-location.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-limits.adoc[leveloffset=+1]

[id="cluster-logging-collector-input-receivers"]
4 changes: 1 addition & 3 deletions logging/log_storage/cluster-logging-loki.adoc
@@ -18,8 +18,6 @@ include::modules/logging-loki-restart-hardening.adoc[leveloffset=+1]

include::modules/logging-loki-reliability-hardening.adoc[leveloffset=+1]

include::modules/logging-loki-pod-placement.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources
* link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#podantiaffinity-v1-core[`PodAntiAffinity` v1 core Kubernetes documentation]
@@ -63,4 +61,4 @@ include::modules/logging-loki-memberlist-ip.adoc[leveloffset=+1]
* link:https://loki-operator.dev/docs/howto_connect_grafana.md/[Grafana Dashboard documentation]
* link:https://loki-operator.dev/docs/object_storage.md/[Loki Object Storage documentation]
* link:https://loki-operator.dev/docs/api.md/#loki-grafana-com-v1-IngestionLimitSpec[{loki-op} `IngestionLimitSpec` documentation]
* link:https://grafana.com/docs/loki/latest/operations/storage/schema/#changing-the-schema[Loki Storage Schema documentation]
* link:https://grafana.com/docs/loki/latest/operations/storage/schema/#changing-the-schema[Loki Storage Schema documentation]
7 changes: 6 additions & 1 deletion logging/scheduling_resources/logging-node-selectors.adoc
@@ -10,7 +10,12 @@ toc::[]
include::snippets/about-node-selectors.adoc[]

include::modules/nodes-scheduler-node-selectors-about.adoc[leveloffset=+1]
include::modules/infrastructure-moving-logging.adoc[leveloffset=+1]

include::modules/logging-loki-pod-placement.adoc[leveloffset=+1]

include::modules/log-collector-resources-scheduling.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-pod-location.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_logging-node-selection"]
9 changes: 7 additions & 2 deletions logging/scheduling_resources/logging-taints-tolerations.adoc
@@ -10,10 +10,15 @@ toc::[]
Taints and tolerations allow a node to control which pods should (or should not) be scheduled on it.
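
As a hedged sketch of how the two sides pair up (not part of this change set; the `logging=reserved` key/value, the node name, and the image are placeholders), a taint on a node and a matching toleration in a pod spec look roughly like this:

[source,yaml]
----
# Taint on the node (for example, applied with `oc adm taint nodes <node> logging=reserved:NoSchedule`)
apiVersion: v1
kind: Node
metadata:
  name: example-node-1   # hypothetical node name
spec:
  taints:
  - key: logging
    value: reserved
    effect: NoSchedule
---
# Matching toleration in a pod spec, which allows the pod onto the tainted node
apiVersion: v1
kind: Pod
metadata:
  name: example-collector   # illustrative; real collector pods are managed by the Operator
spec:
  containers:
  - name: collector
    image: registry.example.com/collector:latest   # placeholder image
  tolerations:
  - key: logging
    operator: Equal
    value: reserved
    effect: NoSchedule
----

Pods without the toleration are repelled from the tainted node; pods that carry it remain schedulable there.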

include::modules/nodes-scheduler-taints-tolerations-about.adoc[leveloffset=+1]
include::modules/cluster-logging-logstore-tolerations.adoc[leveloffset=+1]
include::modules/cluster-logging-kibana-tolerations.adoc[leveloffset=+1]

include::modules/logging-loki-pod-placement.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-tolerations.adoc[leveloffset=+1]

include::modules/log-collector-resources-scheduling.adoc[leveloffset=+1]

include::modules/cluster-logging-collector-pod-location.adoc[leveloffset=+1]

[role="_additional-resources"]
[id="additional-resources_cluster-logging-tolerations"]
== Additional resources
12 changes: 5 additions & 7 deletions machine_management/creating-infrastructure-machinesets.adoc
@@ -8,10 +8,9 @@

include::modules/machine-user-provisioned-limitations.adoc[leveloffset=+1]


You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.

In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and {SMProductName} deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. {SMProductName} deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
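
As a rough, illustrative sketch (the names, the availability zone, and the omitted cloud-specific `providerSpec` are assumptions, not part of this change set), one such infrastructure compute machine set per availability zone might look like the following:

[source,yaml]
----
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-infra-us-east-1a        # hypothetical name; create one machine set per zone
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: example-infra-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: example-infra-us-east-1a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""   # marks the resulting node as an infra node
      taints:
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
      # providerSpec (instance type, zone, and so on) omitted; it is cloud-provider specific
----

Repeating the same pattern in each availability zone yields the three machine sets recommended above.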

include::modules/infrastructure-components.adoc[leveloffset=+1]

@@ -22,7 +21,7 @@ To create an infrastructure node, you can xref:../machine_management/creating-in
[id="creating-infrastructure-machinesets-production"]
== Creating infrastructure machine sets for production environments

In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Both OpenShift Logging and {SMProductName} deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. {SMProductName} deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.

[id="creating-infrastructure-machinesets-clouds"]
=== Creating infrastructure machine sets for different clouds
@@ -131,9 +130,8 @@ include::modules/infrastructure-moving-registry.adoc[leveloffset=+2]

include::modules/infrastructure-moving-monitoring.adoc[leveloffset=+2]

include::modules/infrastructure-moving-logging.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources

* See xref:../monitoring/configuring-the-monitoring-stack.adoc#moving-monitoring-components-to-different-nodes_configuring-the-monitoring-stack[the monitoring documentation] for the general instructions on moving {product-title} components.
* xref:../monitoring/configuring-the-monitoring-stack.adoc#moving-monitoring-components-to-different-nodes_configuring-the-monitoring-stack[Moving monitoring components to different nodes]
* xref:../logging/scheduling_resources/logging-node-selectors.adoc#logging-node-selectors[Using node selectors to move logging resources]
* xref:../logging/scheduling_resources/logging-taints-tolerations.adoc#cluster-logging-logstore-tolerations_logging-taints-tolerations[Using taints and tolerations to control logging pod placement]
64 changes: 0 additions & 64 deletions modules/cluster-logging-kibana-tolerations.adoc

This file was deleted.

223 changes: 0 additions & 223 deletions modules/infrastructure-moving-logging.adoc

This file was deleted.

8 changes: 7 additions & 1 deletion post_installation_configuration/cluster-tasks.adoc
@@ -629,7 +629,13 @@ include::modules/infrastructure-moving-registry.adoc[leveloffset=+2]

include::modules/infrastructure-moving-monitoring.adoc[leveloffset=+2]

include::modules/infrastructure-moving-logging.adoc[leveloffset=+2]
[id="custer-tasks-moving-logging-resources"]
=== Moving {logging} resources

For information about moving {logging} resources, see:

* xref:../logging/scheduling_resources/logging-node-selectors.adoc#logging-node-selectors[Using node selectors to move logging resources]
* xref:../logging/scheduling_resources/logging-taints-tolerations.adoc#cluster-logging-logstore-tolerations_logging-taints-tolerations[Using taints and tolerations to control logging pod placement]

include::modules/cluster-autoscaler-about.adoc[leveloffset=+1]
include::modules/cluster-autoscaler-cr.adoc[leveloffset=+2]