Commit 4f9f50c

Merge pull request #74072 from abrennan89/manualCP413
[enterprise-4.13] OBSDOCS-761: Update schedule and placement logging docs
2 parents: e8a008c + 2182a2b

8 files changed: 26 additions, 305 deletions

logging/log_collection_forwarding/cluster-logging-collector.adoc (0 additions, 4 deletions)

@@ -14,10 +14,6 @@ include::modules/configuring-logging-collector.adoc[leveloffset=+1]
 
 include::modules/creating-logfilesmetricexporter.adoc[leveloffset=+1]
 
-include::modules/log-collector-resources-scheduling.adoc[leveloffset=+1]
-
-include::modules/cluster-logging-collector-pod-location.adoc[leveloffset=+1]
-
 include::modules/cluster-logging-collector-limits.adoc[leveloffset=+1]
 
 [id="cluster-logging-collector-input-receivers"]

logging/log_storage/cluster-logging-loki.adoc (1 addition, 3 deletions)

@@ -18,8 +18,6 @@ include::modules/logging-loki-restart-hardening.adoc[leveloffset=+1]
 
 include::modules/logging-loki-reliability-hardening.adoc[leveloffset=+1]
 
-include::modules/logging-loki-pod-placement.adoc[leveloffset=+1]
-
 [role="_additional-resources"]
 .Additional resources
 * link:https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#podantiaffinity-v1-core[`PodAntiAffinity` v1 core Kubernetes documentation]
@@ -60,4 +58,4 @@ include::modules/logging-loki-memberlist-ip.adoc[leveloffset=+1]
 * link:https://loki-operator.dev/docs/howto_connect_grafana.md/[Grafana Dashboard documentation]
 * link:https://loki-operator.dev/docs/object_storage.md/[Loki Object Storage documentation]
 * link:https://loki-operator.dev/docs/api.md/#loki-grafana-com-v1-IngestionLimitSpec[{loki-op} `IngestionLimitSpec` documentation]
-* link:https://grafana.com/docs/loki/latest/operations/storage/schema/#changing-the-schema[Loki Storage Schema documentation]
+* link:https://grafana.com/docs/loki/latest/operations/storage/schema/#changing-the-schema[Loki Storage Schema documentation]

logging/scheduling_resources/logging-node-selectors.adoc (6 additions, 1 deletion)

@@ -10,7 +10,12 @@ toc::[]
 include::snippets/about-node-selectors.adoc[]
 
 include::modules/nodes-scheduler-node-selectors-about.adoc[leveloffset=+1]
-include::modules/infrastructure-moving-logging.adoc[leveloffset=+1]
+
+include::modules/logging-loki-pod-placement.adoc[leveloffset=+1]
+
+include::modules/log-collector-resources-scheduling.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-pod-location.adoc[leveloffset=+1]
 
 [role="_additional-resources"]
 [id="additional-resources_logging-node-selection"]

logging/scheduling_resources/logging-taints-tolerations.adoc (7 additions, 2 deletions)

@@ -10,10 +10,15 @@ toc::[]
 Taints and tolerations allow the node to control which pods should (or should not) be scheduled on them.
 
 include::modules/nodes-scheduler-taints-tolerations-about.adoc[leveloffset=+1]
-include::modules/cluster-logging-logstore-tolerations.adoc[leveloffset=+1]
-include::modules/cluster-logging-kibana-tolerations.adoc[leveloffset=+1]
+
+include::modules/logging-loki-pod-placement.adoc[leveloffset=+1]
+
 include::modules/cluster-logging-collector-tolerations.adoc[leveloffset=+1]
 
+include::modules/log-collector-resources-scheduling.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-pod-location.adoc[leveloffset=+1]
+
 [role="_additional-resources"]
 [id="additional-resources_cluster-logging-tolerations"]
 == Additional resources
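As background for the tolerations modules being regrouped above, a minimal sketch of the taints-and-tolerations mechanism the chapter describes. All names here (the node, taint key, pod, and image) are hypothetical, not taken from the modules themselves:

```yaml
# Hypothetical setup: the node was tainted with
#   oc adm taint nodes example-node collector=ok:NoSchedule
# so only pods carrying a matching toleration can be scheduled onto it.
apiVersion: v1
kind: Pod
metadata:
  name: example-collector
spec:
  tolerations:
  - key: "collector"
    operator: "Equal"
    value: "ok"
    effect: "NoSchedule"   # tolerate the matching NoSchedule taint
  containers:
  - name: collector
    image: registry.example.com/collector:latest
```

Without the `tolerations` stanza, the scheduler would refuse to place this pod on the tainted node, which is how the logging docs steer collector and log store pods onto (or away from) dedicated nodes.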

machine_management/creating-infrastructure-machinesets.adoc (5 additions, 7 deletions)

@@ -8,10 +8,9 @@ toc::[]
 
 include::modules/machine-user-provisioned-limitations.adoc[leveloffset=+1]
 
-
 You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.
 
-In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and {SMProductName} deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
+In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. {SMProductName} deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
 
 include::modules/infrastructure-components.adoc[leveloffset=+1]
 
@@ -22,7 +21,7 @@ To create an infrastructure node, you can xref:../machine_management/creating-in
 [id="creating-infrastructure-machinesets-production"]
 == Creating infrastructure machine sets for production environments
 
-In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. Both OpenShift Logging and {SMProductName} deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
+In a production deployment, it is recommended that you deploy at least three compute machine sets to hold infrastructure components. {SMProductName} deploys Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different compute machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.
 
 [id="creating-infrastructure-machinesets-clouds"]
 === Creating infrastructure machine sets for different clouds
@@ -131,9 +130,8 @@ include::modules/infrastructure-moving-registry.adoc[leveloffset=+2]
 
 include::modules/infrastructure-moving-monitoring.adoc[leveloffset=+2]
 
-include::modules/infrastructure-moving-logging.adoc[leveloffset=+2]
-
 [role="_additional-resources"]
 .Additional resources
-
-* See xref:../monitoring/configuring-the-monitoring-stack.adoc#moving-monitoring-components-to-different-nodes_configuring-the-monitoring-stack[the monitoring documentation] for the general instructions on moving {product-title} components.
+* xref:../monitoring/configuring-the-monitoring-stack.adoc#moving-monitoring-components-to-different-nodes_configuring-the-monitoring-stack[Moving monitoring components to different nodes]
+* xref:../logging/scheduling_resources/logging-node-selectors.adoc#logging-node-selectors[Using node selectors to move logging resources]
+* xref:../logging/scheduling_resources/logging-taints-tolerations.adoc#cluster-logging-logstore-tolerations_logging-taints-tolerations[Using taints and tolerations to control logging pod placement]
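The paragraphs changed above recommend one compute machine set per availability zone for infrastructure components. A heavily abridged sketch of what such a machine set can look like, with the name, namespace, and all provider-specific fields being assumptions rather than anything stated in this commit:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: infra-zone-a            # hypothetical: one machine set per availability zone
  namespace: openshift-machine-api
spec:
  replicas: 1
  template:
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""   # mark resulting nodes as infra nodes
      taints:
      - key: node-role.kubernetes.io/infra
        value: reserved
        effect: NoSchedule                    # keep ordinary workloads off infra nodes
      # selector and provider-specific fields (instance type, zone, image, and so on)
      # are omitted; see the per-cloud sections included by this assembly
```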

modules/cluster-logging-kibana-tolerations.adoc (0 additions, 64 deletions)

This file was deleted.

modules/infrastructure-moving-logging.adoc (0 additions, 223 deletions)

This file was deleted.

post_installation_configuration/cluster-tasks.adoc (7 additions, 1 deletion)

@@ -629,7 +629,13 @@ include::modules/infrastructure-moving-registry.adoc[leveloffset=+2]
 
 include::modules/infrastructure-moving-monitoring.adoc[leveloffset=+2]
 
-include::modules/infrastructure-moving-logging.adoc[leveloffset=+2]
+[id="custer-tasks-moving-logging-resources"]
+=== Moving {logging} resources
+
+For information about moving {logging} resources, see:
+
+* xref:../logging/scheduling_resources/logging-node-selectors.adoc#logging-node-selectors[Using node selectors to move logging resources]
+* xref:../logging/scheduling_resources/logging-taints-tolerations.adoc#cluster-logging-logstore-tolerations_logging-taints-tolerations[Using taints and tolerations to control logging pod placement]
 
 include::modules/cluster-autoscaler-about.adoc[leveloffset=+1]
 include::modules/cluster-autoscaler-cr.adoc[leveloffset=+2]
