Commit e96e4b6

OSDOCS#15340:OSDOCS#15339: Cluster autoscaler features in HCP
1 parent 62b14f3 commit e96e4b6

File tree

5 files changed: +303 -0 lines changed

hosted_control_planes/hcp-machine-config.adoc

Lines changed: 10 additions & 0 deletions
@@ -14,6 +14,7 @@ You can reference any `machineconfiguration.openshift.io` resources in the `node
====

In {hcp}, the `MachineConfigPool` CR does not exist. A node pool contains a set of compute nodes. You can handle a machine configuration by using node pools.
You can manage your workloads in your hosted cluster by using the cluster autoscaler.

[NOTE]
====
@@ -35,3 +36,12 @@ include::modules/hcp-configure-ntp.adoc[leveloffset=+1]

* xref:../installing/install_config/installing-customizing.adoc#installation-special-config-butane_installing-customizing[Creating machine configs with Butane]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14/html-single/clusters/index#create-host-inventory-cli-steps[Creating a host inventory]

include::modules/scale-up-down-autoscaler-hcp.adoc[leveloffset=+1]

include::modules/scale-up-autoscaler-hcp.adoc[leveloffset=+1]

include::modules/priority-expander-autoscaler-hcp.adoc[leveloffset=+1]

include::modules/balance-ignored-labels-autoscaler-hcp.adoc[leveloffset=+1]

modules/balance-ignored-labels-autoscaler-hcp.adoc

Lines changed: 98 additions & 0 deletions
@@ -0,0 +1,98 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="balance-ignored-labels-autoscaler-hcp_{context}"]
= Balancing ignored labels in a hosted cluster

After you scale up your node pools, you can use `balancingIgnoredLabels` to evenly distribute the machines across node pools.

.Prerequisites

* You have created the `HostedCluster` and `NodePool` resources.

.Procedure

. Add the `node.group.balancing.ignored` label to each of the relevant node pools by using the same label value. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
--type=merge \
--patch='{"spec": {"nodeLabels": {"node.group.balancing.ignored": "<label_value>"}}}'
----

. Enable cluster autoscaling for your hosted cluster by running the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
hostedcluster <hosted_cluster_name> \
--type=merge \
--patch='{"spec": {"autoscaling": {"balancingIgnoredLabels": ["node.group.balancing.ignored"]}}}'
----
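+
The merge patch produces an `autoscaling` stanza on the `HostedCluster` resource similar to the following sketch. The excerpt is illustrative, assumes the `hypershift.openshift.io/v1beta1` API version, and shows only the field set by the previous command:
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
# ...
  autoscaling:
    balancingIgnoredLabels:
    - node.group.balancing.ignored
# ...
----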

. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
--type=json \
--patch='[{"op": "remove", "path": "/spec/replicas"}]'
----

. Enable cluster autoscaling to configure the minimum and maximum node counts for your node pools. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
--type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
----
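+
After the patch, the `NodePool` resource carries the autoscaling bounds instead of a fixed replica count. The following sketch assumes the `hypershift.openshift.io/v1beta1` API version and shows only the relevant fields:
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: <node_pool_name>
  namespace: <hosted_cluster_namespace>
spec:
# ...
  autoScaling:
    min: 1
    max: 3
# ...
----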

. Generate the `kubeconfig` file by running the following command:
+
[source,terminal]
----
$ hcp create kubeconfig \
--name <hosted_cluster_name> \
--namespace <hosted_cluster_namespace> > nested.config
----

. After scaling up the node pools, check that all worker nodes are in the `Ready` status by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig nested.config get nodes -l 'hypershift.openshift.io/nodePool=<node_pool_name>'
----

. Confirm that the new nodes contain the `node.group.balancing.ignored` label by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig nested.config get nodes \
-l 'hypershift.openshift.io/nodePool=<node_pool_name>' \
-o jsonpath='{.items[*].metadata.labels}' | grep "node.group.balancing.ignored"
----

.Verification

* Verify that the number of nodes provisioned by each node pool is evenly distributed. For example, if you created three node pools with the same label value, the node counts might be 3, 2, and 3, rather than all of the new nodes landing in a single node pool. Run the following command for each node pool:
+
[source,terminal]
----
$ oc --kubeconfig nested.config get nodes -l 'hypershift.openshift.io/nodePool=<node_pool_name>'
----
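+
To compare the counts across pools, you can loop over the pool names. The following sketch assumes three hypothetical pools, `<node_pool_name1>`, `<node_pool_name2>`, and `<node_pool_name3>`, as placeholders for your own node pool names and prints the node count for each:
+
[source,terminal]
----
$ for pool in <node_pool_name1> <node_pool_name2> <node_pool_name3>; \
  do echo -n "$pool: "; \
  oc --kubeconfig nested.config get nodes \
  -l "hypershift.openshift.io/nodePool=$pool" --no-headers | wc -l; \
  done
----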

modules/priority-expander-autoscaler-hcp.adoc

Lines changed: 87 additions & 0 deletions
@@ -0,0 +1,87 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="priority-expander-autoscaler-hcp_{context}"]
= Setting the priority expander in a hosted cluster

You can define the priority for your node pools and create high-priority machines before low-priority machines by using the priority expander in your hosted cluster.

.Prerequisites

* You have created the `HostedCluster` and `NodePool` resources.

.Procedure

. To define the priority for your node pools, create a config map named `priority-expander-configmap.yaml` in your hosted cluster. The entries under each value are regular expressions, and node pools whose names match an expression under a higher value receive a higher priority. In the following example configuration, `<node_pool_name2>` has a higher priority than `<node_pool_name1>`:
+
.Example `priority-expander-configmap.yaml` file
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
# ...
data:
  priorities: |-
    10:
    - ".*<node_pool_name1>.*"
    100:
    - ".*<node_pool_name2>.*"
# ...
----

. Generate the `kubeconfig` file by running the following command:
+
[source,terminal]
----
$ hcp create kubeconfig --name <hosted_cluster_name> --namespace <hosted_cluster_namespace> > nested.config
----

. Create the `ConfigMap` object by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig nested.config create -f priority-expander-configmap.yaml
----

. Enable cluster autoscaling by setting the priority expander for your hosted cluster. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
hostedcluster <hosted_cluster_name> \
--type=merge \
--patch='{"spec": {"autoscaling": {"scaling": "ScaleUpOnly", "maxPodGracePeriod": 60, "expanders": ["Priority"]}}}'
----
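+
The patch results in an `autoscaling` stanza on the `HostedCluster` resource similar to the following sketch, which assumes the `hypershift.openshift.io/v1beta1` API version and shows only the fields set by this command:
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
# ...
  autoscaling:
    scaling: ScaleUpOnly
    maxPodGracePeriod: 60
    expanders:
    - Priority
# ...
----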

. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
--type=json \
--patch='[{"op": "remove", "path": "/spec/replicas"}]'
----

. Enable cluster autoscaling to configure the minimum and maximum node counts for your node pools. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
--type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
----

.Verification

* After you apply new workloads, verify that the worker nodes associated with the high-priority node pool are scaled up first. Run the following command to check the status of the worker nodes:
+
[source,terminal]
----
$ oc --kubeconfig nested.config get nodes -l 'hypershift.openshift.io/nodePool=<node_pool_name>'
----

modules/scale-up-autoscaler-hcp.adoc

Lines changed: 53 additions & 0 deletions
@@ -0,0 +1,53 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="scale-up-autoscaler-hcp_{context}"]
= Scaling up workloads in a hosted cluster

To scale up workloads in your hosted cluster, you can use the `ScaleUpOnly` behavior.

.Prerequisites

* You have created the `HostedCluster` and `NodePool` resources.

.Procedure

. Enable cluster autoscaling for your hosted cluster by setting the scaling behavior to `ScaleUpOnly`. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> --type=merge --patch='{"spec": {"autoscaling": {"scaling": "ScaleUpOnly", "maxPodGracePeriod": 60}}}'
----
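+
In YAML form, the `ScaleUpOnly` configuration from the previous patch looks roughly like the following excerpt. This is a sketch that assumes the `hypershift.openshift.io/v1beta1` API version and shows only the fields set by this command:
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
# ...
  autoscaling:
    scaling: ScaleUpOnly
    maxPodGracePeriod: 60
# ...
----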

. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> nodepool <node_pool_name> --type=json --patch='[{"op": "remove", "path": "/spec/replicas"}]'
----

. Enable cluster autoscaling to configure the minimum and maximum node counts for your node pools. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> nodepool <node_pool_name> \
--type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
----

.Verification

. Verify that all worker nodes are in the `Ready` status by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
----

. Verify that the worker nodes are scaled up successfully by checking the node count for your node pools. Run the following command:
+
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes -l 'hypershift.openshift.io/nodePool=<node_pool_name>'
----

modules/scale-up-down-autoscaler-hcp.adoc

Lines changed: 55 additions & 0 deletions
@@ -0,0 +1,55 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="scale-up-down-autoscaler-hcp_{context}"]
= Scaling up and down workloads in a hosted cluster

To scale up and down the workloads in your hosted cluster, you can use the `ScaleUpAndScaleDown` behavior.

.Prerequisites

* You have created the `HostedCluster` and `NodePool` resources.

.Procedure

. Enable cluster autoscaling for your hosted cluster by setting the scaling behavior to `ScaleUpAndScaleDown`. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
hostedcluster <hosted_cluster_name> \
--type=merge \
--patch='{"spec": {"autoscaling": {"scaling": "ScaleUpAndScaleDown", "maxPodGracePeriod": 60, "scaleDown": {"utilizationThresholdPercent": 50}}}}'
----
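+
The resulting `autoscaling` stanza on the `HostedCluster` resource resembles the following sketch, which assumes the `hypershift.openshift.io/v1beta1` API version and shows only the fields set by this command. Note how `utilizationThresholdPercent` nests under `scaleDown`:
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
# ...
  autoscaling:
    scaling: ScaleUpAndScaleDown
    maxPodGracePeriod: 60
    scaleDown:
      utilizationThresholdPercent: 50
# ...
----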

. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
--type=json \
--patch='[{"op": "remove", "path": "/spec/replicas"}]'
----

. Enable cluster autoscaling to configure the minimum and maximum node counts for your node pools. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
--type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
----
+
The worker nodes scale up when you add workloads and scale down when you delete workloads.
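+
For example, a test workload similar to the following hypothetical manifest can create enough pending pods to trigger a scale-up. The name, namespace, image, replica count, and resource requests are illustrative assumptions, not values from this procedure. Apply it with the hosted cluster kubeconfig, for example with `oc --kubeconfig <hosted_cluster_name>.kubeconfig apply -f <file_name>`, and delete it afterward to let the autoscaler scale the node pool back down:
+
[source,yaml]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: autoscaler-test            # hypothetical workload name
  namespace: default
spec:
  replicas: 10                     # enough replicas to exceed the current capacity
  selector:
    matchLabels:
      app: autoscaler-test
  template:
    metadata:
      labels:
        app: autoscaler-test
    spec:
      containers:
      - name: sleep
        image: registry.access.redhat.com/ubi9/ubi-minimal   # any small image works
        command: ["sleep", "infinity"]
        resources:
          requests:
            cpu: "1"               # per-pod CPU request that forces new nodes
            memory: 500Mi
----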

.Verification

* To verify that all worker nodes are in the `Ready` status, run the following command:
+
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
----
