2 changes: 2 additions & 0 deletions _topic_maps/_topic_map.yml
@@ -3392,6 +3392,8 @@ Topics:
File: cnf-understanding-low-latency
- Name: Tuning nodes for low latency with the performance profile
File: cnf-tuning-low-latency-nodes-with-perf-profile
- Name: Tuning hosted control planes for low latency with the performance profile
File: cnf-tuning-low-latency-hosted-cp-nodes-with-perf-profile
- Name: Provisioning real-time and low latency workloads
File: cnf-provisioning-low-latency-workloads
- Name: Debugging low latency tuning
184 changes: 184 additions & 0 deletions modules/apply-performance-profile-hosted-cluster.adoc
@@ -0,0 +1,184 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/cnf-tuning-low-latency-hosted-cp-nodes-with-perf-profile.adoc

:_mod-docs-content-type: PROCEDURE
[id="apply-performance-profile-hosted-cluster_{context}"]
= Configuring low-latency tuning in a hosted cluster

You can use the Node Tuning Operator to set low latency with the performance profile on the nodes in your hosted cluster. In {hcp}, you can configure low-latency tuning by creating config maps that contain `Tuned` objects and referencing those config maps in your node pools. In this case, the `Tuned` object is a `PerformanceProfile` object that defines the performance profile that you want to apply to the nodes in a node pool.

.Procedure

. Export the management cluster `kubeconfig` file by running the following command:
+
[source,terminal]
----
$ export MGMT_KUBECONFIG=<path_to_mgmt_kubeconfig>
----

. Create the `ConfigMap` object in the management cluster by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" apply -f my-hosted-cp-performance-profile.yaml
----
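+
The following is a minimal sketch of what the `my-hosted-cp-performance-profile.yaml` file can contain. It wraps a `PerformanceProfile` object in a `ConfigMap` object whose name matches the name that you later reference in the `spec.tuningConfig` field of the `NodePool` object. The `tuning` data key and the CPU and NUMA values shown here are assumptions that you must adapt to your cluster and hardware:
+
[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: performance # must match the name referenced in spec.tuningConfig of the NodePool
  namespace: clusters
data:
  tuning: |
    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: perfprofile-democluster
    spec:
      cpu:
        isolated: "1-3" # example isolated CPU set for latency-sensitive workloads
        reserved: "0"   # example CPUs reserved for housekeeping tasks
      numa:
        topologyPolicy: single-numa-node
----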

. Edit the `NodePool` object in the `clusters` namespace by running the following command, and add the `spec.tuningConfig` field that references the name of the performance profile that you created:
+
[source,terminal]
----
$ oc edit np -n clusters
----
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  annotations:
    hypershift.openshift.io/nodePoolCurrentConfig: 2f752a2c
    hypershift.openshift.io/nodePoolCurrentConfigVersion: 998aa3ce
    hypershift.openshift.io/nodePoolPlatformMachineTemplate: democluster-us-east-1a-3dff55ec
  creationTimestamp: "2025-04-09T09:41:55Z"
  finalizers:
  - hypershift.openshift.io/finalizer
  generation: 1
  labels:
    hypershift.openshift.io/auto-created-for-infra: democluster
  name: democluster-us-east-1a
  namespace: clusters
  ownerReferences:
  - apiVersion: hypershift.openshift.io/v1beta1
    kind: HostedCluster
    name: democluster
    uid: af77e390-c289-433c-9d29-3aee8e5dc76f
  resourceVersion: "53056"
  uid: 11efa47c-5a7b-476c-85cf-a274f748a868
spec:
  tuningConfig:
  - name: performance
  arch: amd64
  clusterName: democluster
  management:
----
+
[NOTE]
====
You can reference the same profile in multiple node pools. In {hcp}, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the `Tuned` custom resources to distinguish them. After you make the changes, the system detects that a configuration change is required and starts a rolling update of the nodes in that pool to apply the new configuration.
====

.Verification

. List all node pools across all namespaces by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A
----
+
.Example output
[source,terminal]
----
NAMESPACE   NAME                     CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
clusters    democluster-us-east-1a   democluster   1               1               False         False        4.17.0    False             True
----
+
[NOTE]
====
The `UPDATINGCONFIG` field indicates whether the node pool is in the process of updating its configuration. During this update, the `UPDATINGCONFIG` field in the node pool's status becomes `True`. The new configuration is considered fully applied only when the `UPDATINGCONFIG` field returns to `False`.
====
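+
For example, to watch the node pool until the rolling update finishes and the `UPDATINGCONFIG` field returns to `False`, you can run the following command. This watch command is an optional convenience and is not required by the procedure:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -n clusters -w
----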

. List all config maps in the `clusters-democluster` namespace by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get cm -n clusters-democluster
----
+
.Example output
[source,terminal]
----
NAME                                               DATA   AGE
aggregator-client-ca                               1      69m
auth-config                                        1      68m
aws-cloud-config                                   1      68m
aws-ebs-csi-driver-trusted-ca-bundle               1      66m
...                                                1      67m
kubelet-client-ca                                  1      69m
kubeletconfig-performance-democluster-us-east-1a   1      22m
...
ovnkube-identity-cm                                2      66m
performance-democluster-us-east-1a                 1      22m
...
tuned-performance-democluster-us-east-1a           1      22m
----
+
The output shows that a kubelet configuration, `kubeletconfig-performance-democluster-us-east-1a`, and a performance profile, `performance-democluster-us-east-1a`, have been created. The Node Tuning Operator syncs the `Tuned` objects into the hosted cluster. You can verify which `Tuned` objects are defined and which profiles are applied to each node, as shown in the optional check at the end of this procedure.

. List available secrets on the management cluster by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get secrets -n clusters
----
+
.Example output
[source,terminal]
----
NAME                              TYPE                      DATA   AGE
builder-dockercfg-25qpp           kubernetes.io/dockercfg   1      128m
default-dockercfg-mkvlz           kubernetes.io/dockercfg   1      128m
democluster-admin-kubeconfig      Opaque                    1      127m
democluster-etcd-encryption-key   Opaque                    1      128m
democluster-kubeadmin-password    Opaque                    1      126m
democluster-pull-secret           Opaque                    1      128m
deployer-dockercfg-8lfpd          kubernetes.io/dockercfg   1      128m
----

. Extract the `kubeconfig` file for the hosted cluster by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get secret <secret_name> -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
----
+
.Example
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get secret democluster-admin-kubeconfig -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
----

. Export the hosted cluster `kubeconfig` file by running the following command:
+
[source,terminal]
----
$ export HC_KUBECONFIG=<path_to_hosted-cluster-kubeconfig>
----

. Verify that the kubeletconfig is mirrored in the hosted cluster by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get cm -n openshift-config-managed | grep kubelet
----
+
.Example output
[source,terminal]
----
kubelet-serving-ca                                 1   79m
kubeletconfig-performance-democluster-us-east-1a   1   15m
----

. Verify that the `single-numa-node` policy is set on the hosted cluster by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get cm kubeletconfig-performance-democluster-us-east-1a -o yaml -n openshift-config-managed | grep single
----
+
.Example output
[source,terminal]
----
topologyManagerPolicy: single-numa-node
----
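
. Optional: Check which `Tuned` objects the Node Tuning Operator synced to the hosted cluster and which tuned profile is applied to each node. The following commands are an additional check that is not part of the original procedure; they assume the standard Node Tuning Operator resources in the `openshift-cluster-node-tuning-operator` namespace:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get tuned -n openshift-cluster-node-tuning-operator
----
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get profile -n openshift-cluster-node-tuning-operator
----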
105 changes: 105 additions & 0 deletions modules/cnf-gathering-data-about-hosted-cluster-using-must-gather.adoc
@@ -0,0 +1,105 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/cnf-tuning-low-latency-hosted-cp-nodes-with-perf-profile.adoc

:_mod-docs-content-type: PROCEDURE
[id="gathering-data-about-your-hosted-cluster-using-must-gather_{context}"]
= Gathering data about your hosted control planes cluster for the PPC

The Performance Profile Creator (PPC) tool requires `must-gather` data. As a cluster administrator, run the `oc adm must-gather` command to capture information about your hosted cluster.

.Prerequisites

* You have `cluster-admin` role access to the management cluster.
* You installed the {oc-first}.

.Procedure

. Export the management cluster `kubeconfig` file by running the following command:
+
[source,terminal]
----
$ export MGMT_KUBECONFIG=<path_to_mgmt_kubeconfig>
----

. List all node pools across all namespaces by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A
----
+
.Example output
[source,terminal]
----
NAMESPACE   NAME                     CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
clusters    democluster-us-east-1a   democluster   1               1               False         False        4.17.0    False             True
----
+
The output shows the following information:
+
* The namespace in the management cluster where the `NodePool` resource is defined, for example `clusters`.
* The name of the `NodePool` resource, for example `democluster-us-east-1a`.
* The `HostedCluster` resource that the `NodePool` resource belongs to, for example `democluster`.

. On the management cluster, run the following command to list available secrets:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get secrets -n clusters
----
+
.Example output
[source,terminal]
----
NAME                              TYPE                      DATA   AGE
builder-dockercfg-25qpp           kubernetes.io/dockercfg   1      128m
default-dockercfg-mkvlz           kubernetes.io/dockercfg   1      128m
democluster-admin-kubeconfig      Opaque                    1      127m
democluster-etcd-encryption-key   Opaque                    1      128m
democluster-kubeadmin-password    Opaque                    1      126m
democluster-pull-secret           Opaque                    1      128m
deployer-dockercfg-8lfpd          kubernetes.io/dockercfg   1      128m
----

. Extract the `kubeconfig` file for the hosted cluster by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get secret <secret_name> -n <cluster_namespace> -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
----
+
.Example
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get secret democluster-admin-kubeconfig -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
----

. To create a `must-gather` bundle for the hosted cluster, open a separate terminal window and run the following commands:

.. Export the hosted cluster `kubeconfig` file:
+
[source,terminal]
----
$ export HC_KUBECONFIG=<path_to_hosted_cluster_kubeconfig>
----
+
.Example
[source,terminal]
----
$ export HC_KUBECONFIG=~/hostedcpkube/hosted-cluster-kubeconfig
----

.. Navigate to the directory where you want to store the `must-gather` data.

.. Gather the troubleshooting data for your hosted cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" adm must-gather
----

.. Create a compressed file from the `must-gather` directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:
+
[source,terminal]
----
$ tar -czvf must-gather.tar.gz must-gather.local.1203869488012141147
----
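
After you gather the data, you typically pass the `must-gather` directory to the PPC to generate the performance profile. The following command is a hedged sketch of that next step and is not part of this procedure: the `--node-pool-name` argument, the reserved CPU count, and the image tag are assumptions that you must verify against the PPC documentation for your version:

[source,terminal]
----
$ podman run --entrypoint performance-profile-creator \
    -v <path_to_must_gather>:/must-gather:z \
    registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.17 \
    --must-gather-dir-path /must-gather \
    --node-pool-name democluster-us-east-1a \
    --reserved-cpu-count 2 \
    --rt-kernel true > my-hosted-cp-performance-profile.yaml
----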