diff --git a/hosted_control_planes/hcp-machine-config.adoc b/hosted_control_planes/hcp-machine-config.adoc
index f0a4cb0e5a0c..e463060b48a0 100644
--- a/hosted_control_planes/hcp-machine-config.adoc
+++ b/hosted_control_planes/hcp-machine-config.adoc
@@ -14,6 +14,7 @@ You can reference any `machineconfiguration.openshift.io` resources in the `node
 ====
 In {hcp}, the `MachineConfigPool` CR does not exist. A node pool contains a set of compute nodes. You can handle a machine configuration by using node pools.
+You can manage your workloads in your hosted cluster by using the cluster autoscaler.
 
 [NOTE]
 ====
@@ -35,3 +36,12 @@ include::modules/hcp-configure-ntp.adoc[leveloffset=+1]
 * xref:../installing/install_config/installing-customizing.adoc#installation-special-config-butane_installing-customizing[Creating machine configs with Butane]
 * link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14/html-single/clusters/index#create-host-inventory-cli-steps[Creating a host inventory]
+
+
+include::modules/scale-up-down-autoscaler-hcp.adoc[leveloffset=+1]
+
+include::modules/scale-up-autoscaler-hcp.adoc[leveloffset=+1]
+
+include::modules/priority-expander-autoscaler-hcp.adoc[leveloffset=+1]
+
+include::modules/balance-ignored-labels-autoscaler-hcp.adoc[leveloffset=+1]
diff --git a/modules/balance-ignored-labels-autoscaler-hcp.adoc b/modules/balance-ignored-labels-autoscaler-hcp.adoc
new file mode 100644
index 000000000000..52a94f92e2a7
--- /dev/null
+++ b/modules/balance-ignored-labels-autoscaler-hcp.adoc
@@ -0,0 +1,98 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-machine-config.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="balance-ignored-labels-autoscaler-hcp_{context}"]
+= Balancing ignored labels in a hosted cluster
+
+When the cluster autoscaler scales up your node pools, you can use the `balancingIgnoredLabels` field to distribute the machines evenly across node pools.
+
+.Prerequisites
+
+* You have created the `HostedCluster` and `NodePool` resources.
+
+.Procedure
+
+. Add the `node.group.balancing.ignored` label to each of the relevant node pools by using the same label value. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> \
+  nodepool <nodepool_name> \
+  --type=merge \
+  --patch='{"spec": {"nodeLabels": {"node.group.balancing.ignored": ""}}}'
+----
+
+. Add the label to the `balancingIgnoredLabels` list in the `HostedCluster` resource by running the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> \
+  hostedcluster <hosted_cluster_name> \
+  --type=merge \
+  --patch='{"spec": {"autoscaling": {"balancingIgnoredLabels": ["node.group.balancing.ignored"]}}}'
+----
+
+. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> \
+  nodepool <nodepool_name> \
+  --type=json \
+  --patch='[{"op": "remove", "path": "/spec/replicas"}]'
+----
+
+. Enable autoscaling for your node pools by configuring the minimum and maximum node counts. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> \
+  nodepool <nodepool_name> \
+  --type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
+----
+
+. Generate the `kubeconfig` file by running the following command:
++
+[source,terminal]
+----
+$ hcp create kubeconfig \
+  --name <hosted_cluster_name> \
+  --namespace <hosted_cluster_namespace> > nested.config
+----
+
+. After the node pools scale up, check that all compute nodes are in the `Ready` status by running the following command:
++
+[source,terminal]
+----
+$ oc --kubeconfig nested.config get nodes -l 'hypershift.openshift.io/nodePool=<nodepool_name>'
+----
+
+. Confirm that the new nodes contain the `node.group.balancing.ignored` label by running the following command:
++
+[source,terminal]
+----
+$ oc --kubeconfig nested.config get nodes \
+  -l 'hypershift.openshift.io/nodePool=<nodepool_name>' \
+  -o jsonpath='{.items[*].metadata.labels}' | grep "node.group.balancing.ignored"
+----
+
+.Verification
+
+* Verify that the number of nodes provisioned by each node pool is evenly distributed. For example, if you created three node pools with the same label value, the node counts might be 3, 2, and 3. Run the following command:
++
+[source,terminal]
+----
+$ oc --kubeconfig nested.config get nodes -l 'hypershift.openshift.io/nodePool=<nodepool_name>'
+----
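+
+For reference, after the previous patches, the relevant fields of each labeled `NodePool` resource look approximately like the following trimmed excerpt. The name, namespace, and field values are placeholders taken from the commands in this procedure:
+
+[source,yaml]
+----
+apiVersion: hypershift.openshift.io/v1beta1
+kind: NodePool
+metadata:
+  name: <nodepool_name>
+  namespace: <hosted_cluster_namespace>
+spec:
+  # The shared label that you configured the cluster autoscaler to ignore.
+  nodeLabels:
+    node.group.balancing.ignored: ""
+  # The cluster autoscaler manages the node count within these bounds,
+  # so spec.replicas is not set.
+  autoScaling:
+    min: 1
+    max: 3
+# ...
+----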
diff --git a/modules/priority-expander-autoscaler-hcp.adoc b/modules/priority-expander-autoscaler-hcp.adoc
new file mode 100644
index 000000000000..cdac70d0dbc6
--- /dev/null
+++ b/modules/priority-expander-autoscaler-hcp.adoc
@@ -0,0 +1,86 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-machine-config.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="priority-expander-autoscaler-hcp_{context}"]
+= Setting the priority expander in a hosted cluster
+
+By using the priority expander in your hosted cluster, you can define a priority for each node pool so that the cluster autoscaler creates high-priority machines before low-priority machines.
+
+.Prerequisites
+
+* You have created the `HostedCluster` and `NodePool` resources.
+
+.Procedure
+
+. To define the priority for your node pools, create a config map definition in a file named `priority-expander-configmap.yaml`. Node pools that match a higher number receive a higher priority. See the following example configuration:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cluster-autoscaler-priority-expander
+  namespace: kube-system
+# ...
+data:
+  priorities: |-
+    10:
+      - ".*<low_priority_nodepool_name>.*"
+    100:
+      - ".*<high_priority_nodepool_name>.*"
+# ...
+----
+
+. Generate the `kubeconfig` file by running the following command:
++
+[source,terminal]
+----
+$ hcp create kubeconfig --name <hosted_cluster_name> --namespace <hosted_cluster_namespace> > nested.config
+----
+
+. Create the `ConfigMap` object in the hosted cluster by running the following command:
++
+[source,terminal]
+----
+$ oc --kubeconfig nested.config create -f priority-expander-configmap.yaml
+----
+
+. Enable cluster autoscaling and set the priority expander for your hosted cluster by running the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> \
+  hostedcluster <hosted_cluster_name> \
+  --type=merge \
+  --patch='{"spec": {"autoscaling": {"scaling": "ScaleUpOnly", "maxPodGracePeriod": 60, "expanders": ["Priority"]}}}'
+----
+
+. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> \
+  nodepool <nodepool_name> \
+  --type=json \
+  --patch='[{"op": "remove", "path": "/spec/replicas"}]'
+----
+
+. Enable autoscaling for your node pools by configuring the minimum and maximum node counts. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> \
+  nodepool <nodepool_name> \
+  --type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
+----
+
+.Verification
+
+* After you apply new workloads, verify that the compute nodes associated with the high-priority node pool are scaled up first. Run the following command to check the status of the compute nodes:
++
+[source,terminal]
+----
+$ oc --kubeconfig nested.config get nodes -l 'hypershift.openshift.io/nodePool=<nodepool_name>'
+----
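+
+For reference, after the previous patch, the `autoscaling` stanza of the `HostedCluster` resource looks approximately like the following trimmed excerpt. The field values are taken from the patch in this procedure, and the metadata values are placeholders:
+
+[source,yaml]
+----
+apiVersion: hypershift.openshift.io/v1beta1
+kind: HostedCluster
+metadata:
+  name: <hosted_cluster_name>
+  namespace: <hosted_cluster_namespace>
+spec:
+  autoscaling:
+    scaling: ScaleUpOnly
+    maxPodGracePeriod: 60
+    # The Priority expander reads its priorities from the
+    # cluster-autoscaler-priority-expander config map in the kube-system
+    # namespace of the hosted cluster.
+    expanders:
+    - Priority
+# ...
+----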
diff --git a/modules/scale-up-autoscaler-hcp.adoc b/modules/scale-up-autoscaler-hcp.adoc
new file mode 100644
index 000000000000..219c7431424d
--- /dev/null
+++ b/modules/scale-up-autoscaler-hcp.adoc
@@ -0,0 +1,53 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-machine-config.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="scale-up-autoscaler-hcp_{context}"]
+= Scaling up workloads in a hosted cluster
+
+To scale up the compute nodes for the workloads in your hosted cluster without scaling them down, you can use the `ScaleUpOnly` behavior.
+
+.Prerequisites
+
+* You have created the `HostedCluster` and `NodePool` resources.
+
+.Procedure
+
+. Enable cluster autoscaling for your hosted cluster by setting the scaling behavior to `ScaleUpOnly`. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> --type=merge --patch='{"spec": {"autoscaling": {"scaling": "ScaleUpOnly", "maxPodGracePeriod": 60}}}'
+----
+
+. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n clusters nodepool <nodepool_name> --type=json --patch='[{"op": "remove", "path": "/spec/replicas"}]'
+----
+
+. Enable autoscaling for your node pools by configuring the minimum and maximum node counts. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> nodepool <nodepool_name> \
+  --type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
+----
+
+.Verification
+
+. Verify that all compute nodes are in the `Ready` status by running the following command:
++
+[source,terminal]
+----
+$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
+----
+
+. Verify that the compute nodes are scaled up successfully by checking the node count for your node pools. Run the following command:
++
+[source,terminal]
+----
+$ oc --kubeconfig nested.config get nodes -l 'hypershift.openshift.io/nodePool=<nodepool_name>'
+----
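+
+To exercise the scale-up behavior, you can apply a workload that requests more resources than the current nodes provide, for example with `oc --kubeconfig <hosted_cluster_name>.kubeconfig apply -f <file_name>`. The following example `Deployment` is only an illustration; the name, namespace, image, replica count, and resource requests are placeholder values that you can adjust for your environment:
+
+[source,yaml]
+----
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: autoscaler-scale-up-test
+  namespace: default
+spec:
+  replicas: 10
+  selector:
+    matchLabels:
+      app: autoscaler-scale-up-test
+  template:
+    metadata:
+      labels:
+        app: autoscaler-scale-up-test
+    spec:
+      containers:
+      - name: pause
+        # Any small container image works; this image is only an example.
+        image: registry.k8s.io/pause:3.9
+        resources:
+          requests:
+            # Request enough CPU and memory so that the pods do not all fit
+            # on the existing nodes, which causes the autoscaler to add nodes.
+            cpu: 500m
+            memory: 256Mi
+----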
diff --git a/modules/scale-up-down-autoscaler-hcp.adoc b/modules/scale-up-down-autoscaler-hcp.adoc
new file mode 100644
index 000000000000..6f964649fe36
--- /dev/null
+++ b/modules/scale-up-down-autoscaler-hcp.adoc
@@ -0,0 +1,53 @@
+// Module included in the following assemblies:
+//
+// * hosted_control_planes/hcp-machine-config.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="scale-up-down-autoscaler-hcp_{context}"]
+= Scaling up and down workloads in a hosted cluster
+
+To scale the compute nodes up and down as the workloads in your hosted cluster change, you can use the `ScaleUpAndScaleDown` behavior. The compute nodes scale up when you add workloads and scale down when you delete workloads.
+
+.Prerequisites
+
+* You have created the `HostedCluster` and `NodePool` resources.
+
+.Procedure
+
+. Enable cluster autoscaling for your hosted cluster by setting the scaling behavior to `ScaleUpAndScaleDown`. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> \
+  hostedcluster <hosted_cluster_name> \
+  --type=merge \
+  --patch='{"spec": {"autoscaling": {"scaling": "ScaleUpAndScaleDown", "maxPodGracePeriod": 60, "scaleDown": {"utilizationThresholdPercent": 50}}}}'
+----
+
+. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> \
+  nodepool <nodepool_name> \
+  --type=json \
+  --patch='[{"op": "remove", "path": "/spec/replicas"}]'
+----
+
+. Enable autoscaling for your node pools by configuring the minimum and maximum node counts. Run the following command:
++
+[source,terminal]
+----
+$ oc patch -n <hosted_cluster_namespace> \
+  nodepool <nodepool_name> \
+  --type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
+----
+
+.Verification
+
+* To verify that all compute nodes are in the `Ready` status, run the following command:
++
+[source,terminal]
+----
+$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
+----
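+
+After the previous patch, the `autoscaling` stanza of the `HostedCluster` resource resembles the following trimmed excerpt. The values come from the patch in this procedure, and the metadata values are placeholders:
+
+[source,yaml]
+----
+apiVersion: hypershift.openshift.io/v1beta1
+kind: HostedCluster
+metadata:
+  name: <hosted_cluster_name>
+  namespace: <hosted_cluster_namespace>
+spec:
+  autoscaling:
+    scaling: ScaleUpAndScaleDown
+    # Maximum pod graceful termination period, in seconds, that is honored
+    # during scale-down.
+    maxPodGracePeriod: 60
+    scaleDown:
+      # Nodes below this utilization percentage become candidates for removal.
+      utilizationThresholdPercent: 50
+# ...
+----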