10 changes: 10 additions & 0 deletions hosted_control_planes/hcp-machine-config.adoc
@@ -14,6 +14,7 @@ You can reference any `machineconfiguration.openshift.io` resources in the `node
====

In {hcp}, the `MachineConfigPool` CR does not exist. A node pool contains a set of compute nodes. You can handle a machine configuration by using node pools.
You can also use the cluster autoscaler to scale the compute nodes in your hosted cluster to match your workloads.

[NOTE]
====
@@ -35,3 +36,12 @@ include::modules/hcp-configure-ntp.adoc[leveloffset=+1]

* xref:../installing/install_config/installing-customizing.adoc#installation-special-config-butane_installing-customizing[Creating machine configs with Butane]
* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.14/html-single/clusters/index#create-host-inventory-cli-steps[Creating a host inventory]


include::modules/scale-up-down-autoscaler-hcp.adoc[leveloffset=+1]

include::modules/scale-up-autoscaler-hcp.adoc[leveloffset=+1]

include::modules/priority-expander-autoscaler-hcp.adoc[leveloffset=+1]

include::modules/balance-ignored-labels-autoscaler-hcp.adoc[leveloffset=+1]
98 changes: 98 additions & 0 deletions modules/balance-ignored-labels-autoscaler-hcp.adoc
@@ -0,0 +1,98 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="balance-ignored-labels-autoscaler-hcp_{context}"]
= Balancing ignored labels in a hosted cluster

You can use the `balancingIgnoredLabels` field to make the cluster autoscaler ignore the specified node labels when it compares node pools, so that machines are distributed evenly across the node pools when the node pools scale up.
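
The following example is a minimal sketch of the fields that this procedure sets through `oc patch` commands. The API version and the placeholder names are assumptions for illustration only; adapt them to your environment.

[source,yaml]
----
# Sketch only: fields set by the patches in this procedure.
apiVersion: hypershift.openshift.io/v1beta1 # assumed API version
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  autoscaling:
    balancingIgnoredLabels:
    - node.group.balancing.ignored
# ...
---
apiVersion: hypershift.openshift.io/v1beta1 # assumed API version
kind: NodePool
metadata:
  name: <node_pool_name>
  namespace: <hosted_cluster_namespace>
spec:
  nodeLabels:
    node.group.balancing.ignored: "<label_name>"
  autoScaling:
    min: 1
    max: 3
# ...
----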

.Prerequisites

* You have created the `HostedCluster` and `NodePool` resources.

.Procedure

. Add the `node.group.balancing.ignored` label with the same label value to each of the relevant node pools by running the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
--type=merge \
--patch='{"spec": {"nodeLabels": {"node.group.balancing.ignored": "<label_name>"}}}'
----

. Configure the cluster autoscaler for your hosted cluster to ignore the `node.group.balancing.ignored` label when balancing node pools by running the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
hostedcluster <hosted_cluster_name> \
--type=merge \
--patch='{"spec": {"autoscaling": {"balancingIgnoredLabels": ["node.group.balancing.ignored"]}}}'
----

. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
--type=json \
--patch='[{"op": "remove", "path": "/spec/replicas"}]'
----

. Enable autoscaling for your node pools by configuring the minimum and maximum node counts. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
    nodepool <node_pool_name> \
--type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
----

. Generate the `kubeconfig` file by running the following command:
+
[source,terminal]
----
$ hcp create kubeconfig \
--name <hosted_cluster_name> \
--namespace <hosted_cluster_namespace> > nested.config
----

. After the node pools scale up, check that all compute nodes are in the `Ready` status by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig nested.config get nodes -l 'hypershift.openshift.io/nodePool=<node_pool_name>'
----

. Confirm that the new nodes contain the `node.group.balancing.ignored` label by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig nested.config get nodes \
-l 'hypershift.openshift.io/nodePool=<node_pool_name>' \
-o jsonpath='{.items[*].metadata.labels}' | grep "node.group.balancing.ignored"
----

.Verification

* Verify that the nodes are distributed evenly across the node pools. For example, if you created three node pools with the same label value, the node counts might be 3, 2, and 3. Run the following command:
+
[source,terminal]
----
$ oc --kubeconfig nested.config get nodes -l 'hypershift.openshift.io/nodePool=<node_pool_name>'
----
86 changes: 86 additions & 0 deletions modules/priority-expander-autoscaler-hcp.adoc
@@ -0,0 +1,86 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="priority-expander-autoscaler-hcp_{context}"]
= Setting the priority expander in a hosted cluster

You can define the priority for your node pools and create high-priority machines before low-priority machines by using the priority expander in your hosted cluster.
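
The following example is a minimal sketch of the `HostedCluster` autoscaling fields that this procedure sets with an `oc patch` command. The API version and placeholder names are assumptions for illustration only.

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1 # assumed API version
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  autoscaling:
    scaling: ScaleUpOnly
    maxPodGracePeriod: 60
    expanders:
    - Priority
# ...
----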

.Prerequisites

* You have created the `HostedCluster` and `NodePool` resources.

.Procedure

. To define the priority for your node pools, create a `ConfigMap` object definition in a file named `priority-expander-configmap.yaml` for your hosted cluster. Node pools that match a higher number receive a higher priority. See the following example configuration:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
name: cluster-autoscaler-priority-expander
namespace: kube-system
# ...
data:
priorities: |-
10:
- ".*<node_pool_name1>.*"
100:
- ".*<node_pool_name2>.*"
# ...
----

. Generate the `kubeconfig` file by running the following command:
+
[source,terminal]
----
$ hcp create kubeconfig --name <hosted_cluster_name> --namespace <hosted_cluster_namespace> > nested.config
----

. Create the `ConfigMap` object by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig nested.config create -f priority-expander-configmap.yaml
----

. Enable cluster autoscaling by setting the priority expander for your hosted cluster. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
hostedcluster <hosted_cluster_name> \
--type=merge \
--patch='{"spec": {"autoscaling": {"scaling": "ScaleUpOnly", "maxPodGracePeriod": 60, "expanders": ["Priority"]}}}'
----

. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
    --type=json \
--patch='[{"op": "remove", "path": "/spec/replicas"}]'
----

. Enable autoscaling for your node pools by configuring the minimum and maximum node counts. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
    nodepool <node_pool_name> \
--type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
----

.Verification

* After you apply new workloads, verify that the compute nodes associated with the higher-priority node pool are scaled up first. Run the following command to check the status of the compute nodes:
+
[source,terminal]
----
$ oc --kubeconfig nested.config get nodes -l 'hypershift.openshift.io/nodePool=<node_pool_name>'
----
53 changes: 53 additions & 0 deletions modules/scale-up-autoscaler-hcp.adoc
@@ -0,0 +1,53 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="scale-up-autoscaler-hcp_{context}"]
= Scaling up workloads in a hosted cluster

To scale up the compute nodes in your hosted cluster when you add workloads, you can use the `ScaleUpOnly` behavior. With this behavior, the compute nodes scale up when you add workloads but do not scale down when you delete workloads.
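
The following example is a minimal sketch of the autoscaling fields that this procedure sets on the `HostedCluster` and `NodePool` resources through `oc patch` commands. The API version and placeholder names are assumptions for illustration only.

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1 # assumed API version
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  autoscaling:
    scaling: ScaleUpOnly
    maxPodGracePeriod: 60
# ...
---
apiVersion: hypershift.openshift.io/v1beta1 # assumed API version
kind: NodePool
metadata:
  name: <node_pool_name>
  namespace: <hosted_cluster_namespace>
spec:
  autoScaling:
    min: 1
    max: 3
# ...
----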

.Prerequisites

* You have created the `HostedCluster` and `NodePool` resources.

.Procedure

. Enable cluster autoscaling for your hosted cluster by setting the scaling behavior to `ScaleUpOnly`. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> hostedcluster <hosted_cluster_name> --type=merge --patch='{"spec": {"autoscaling": {"scaling": "ScaleUpOnly", "maxPodGracePeriod": 60}}}'
----

. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> nodepool <node_pool_name> --type=json --patch='[{"op": "remove", "path": "/spec/replicas"}]'
----

. Enable autoscaling for your node pools by configuring the minimum and maximum node counts. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> nodepool <node_pool_name> \
--type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
----

.Verification

. Verify that all compute nodes are in the `Ready` status by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
----

. Verify that the compute nodes are scaled up successfully by checking the node count for your node pools. Run the following command:
+
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes -l 'hypershift.openshift.io/nodePool=<node_pool_name>'
----
53 changes: 53 additions & 0 deletions modules/scale-up-down-autoscaler-hcp.adoc
@@ -0,0 +1,53 @@
// Module included in the following assemblies:
//
// * hosted_control_planes/hcp-machine-config.adoc

:_mod-docs-content-type: PROCEDURE
[id="scale-up-down-autoscaler-hcp_{context}"]
= Scaling up and down workloads in a hosted cluster

To scale up and down the workloads in your hosted cluster, you can use the `ScaleUpAndScaleDown` behavior. The compute nodes scale up when you add workloads and scale down when you delete workloads.
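
The following example is a minimal sketch of the `HostedCluster` autoscaling fields that this procedure sets with an `oc patch` command. The API version and placeholder names are assumptions for illustration only.

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1 # assumed API version
kind: HostedCluster
metadata:
  name: <hosted_cluster_name>
  namespace: <hosted_cluster_namespace>
spec:
  autoscaling:
    scaling: ScaleUpAndScaleDown
    maxPodGracePeriod: 60
    scaleDown:
      utilizationThresholdPercent: 50
# ...
----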

.Prerequisites

* You have created the `HostedCluster` and `NodePool` resources.

.Procedure

. Enable cluster autoscaling for your hosted cluster by setting the scaling behavior to `ScaleUpAndScaleDown`. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
hostedcluster <hosted_cluster_name> \
--type=merge \
--patch='{"spec": {"autoscaling": {"scaling": "ScaleUpAndScaleDown", "maxPodGracePeriod": 60, "scaleDown": {"utilizationThresholdPercent": 50}}}}'
----

. Remove the `spec.replicas` field from the `NodePool` resource to allow the cluster autoscaler to manage the node count. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
nodepool <node_pool_name> \
--type=json \
--patch='[{"op": "remove", "path": "/spec/replicas"}]'
----

. Enable autoscaling for your node pools by configuring the minimum and maximum node counts. Run the following command:
+
[source,terminal]
----
$ oc patch -n <hosted_cluster_namespace> \
    nodepool <node_pool_name> \
--type=merge --patch='{"spec": {"autoScaling": {"max": 3, "min": 1}}}'
----

.Verification

* To verify that all compute nodes are in the `Ready` status, run the following command:
+
[source,terminal]
----
$ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
----