From b0ddd23aabbff4c2922ff288ef9a179a5d98f97a Mon Sep 17 00:00:00 2001 From: Ronan Hennessy Date: Tue, 29 Jul 2025 15:54:59 +0100 Subject: [PATCH] TELCODOCS-2281: Adding siteconfig to clusterinstance API docs --- _topic_maps/_topic_map.yml | 2 + .../ztp-migrate-clusterinstance.adoc | 39 +++ modules/ztp-active-ocp-version.adoc | 77 ++++++ modules/ztp-clusterinstance-cleanup.adoc | 81 ++++++ .../ztp-clusterinstance-troubleshooting.adoc | 87 ++++++ .../ztp-creating-argocd-clusterinstance.adoc | 173 ++++++++++++ modules/ztp-enable-siteconfig-addon.adoc | 53 ++++ .../ztp-migrate-clusterinstance-overview.adoc | 71 +++++ modules/ztp-migrating-sno-clusterinstnce.adoc | 255 ++++++++++++++++++ ...ztp-preparing-migrate-clusterinstance.adoc | 15 ++ modules/ztp-site-converter-ref.adoc | 29 ++ 11 files changed, 882 insertions(+) create mode 100644 edge_computing/ztp-migrate-clusterinstance.adoc create mode 100644 modules/ztp-active-ocp-version.adoc create mode 100644 modules/ztp-clusterinstance-cleanup.adoc create mode 100644 modules/ztp-clusterinstance-troubleshooting.adoc create mode 100644 modules/ztp-creating-argocd-clusterinstance.adoc create mode 100644 modules/ztp-enable-siteconfig-addon.adoc create mode 100644 modules/ztp-migrate-clusterinstance-overview.adoc create mode 100644 modules/ztp-migrating-sno-clusterinstnce.adoc create mode 100644 modules/ztp-preparing-migrate-clusterinstance.adoc create mode 100644 modules/ztp-site-converter-ref.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 68629793c6ac..79c4a9f00cd8 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -3458,6 +3458,8 @@ Topics: File: ztp-deploying-far-edge-sites - Name: Manually installing a single-node OpenShift cluster with GitOps ZTP File: ztp-manual-install +- Name: Migrating from SiteConfig CRs to ClusterInstance CRs + File: ztp-migrate-clusterinstance - Name: Recommended single-node OpenShift cluster configuration for vDU application workloads File: ztp-reference-cluster-configuration-for-vdu - Name: Validating cluster tuning for vDU application workloads diff --git a/edge_computing/ztp-migrate-clusterinstance.adoc b/edge_computing/ztp-migrate-clusterinstance.adoc new file mode 100644 index 000000000000..ddd61c739107 --- /dev/null +++ b/edge_computing/ztp-migrate-clusterinstance.adoc @@ -0,0 +1,39 @@ +:_mod-docs-content-type: ASSEMBLY +[id="ztp-migrate-clusterinstance"] += Migrating from SiteConfig CRs to ClusterInstance CRs +include::_attributes/common-attributes.adoc[] +:context: ztp-migrate-clusterinstance + +toc::[] + +You can incrementally migrate {sno} clusters from `SiteConfig` custom resources (CRs) to `ClusterInstance` CRs. During migration, the existing and new pipelines run in parallel, so you can migrate one or more clusters at a time in a controlled and phased manner. + +[IMPORTANT] +==== +* The `SiteConfig` CR is deprecated from {product-title} version 4.18 and will be removed in a future version. + +* The `ClusterInstance` CR is available from {rh-rhacm-first} version 2.12 or later. 
+
====

include::modules/ztp-migrate-clusterinstance-overview.adoc[leveloffset=+1]

include::modules/ztp-creating-argocd-clusterinstance.adoc[leveloffset=+1]

include::modules/ztp-active-ocp-version.adoc[leveloffset=+1]

include::modules/ztp-migrating-sno-clusterinstnce.adoc[leveloffset=+1]

[role="_additional-resources"]
.Additional resources

* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/multicluster_engine_operator_with_red_hat_advanced_cluster_management/siteconfig-intro#enable[Enabling the SiteConfig operator]

//include::modules/ztp-clusterinstance-site-converter.adoc[leveloffset=+1]

include::modules/ztp-site-converter-ref.adoc[leveloffset=+2]

include::modules/ztp-clusterinstance-cleanup.adoc[leveloffset=+1]

//include::modules/ztp-enable-siteconfig-addon.adoc[leveloffset=+1]

include::modules/ztp-clusterinstance-troubleshooting.adoc[leveloffset=+1]
diff --git a/modules/ztp-active-ocp-version.adoc b/modules/ztp-active-ocp-version.adoc
new file mode 100644
index 000000000000..1ea9d9260f7f
--- /dev/null
+++ b/modules/ztp-active-ocp-version.adoc
@@ -0,0 +1,77 @@
// Module included in the following assemblies:
//
// * edge_computing/ztp-migrate-clusterinstance.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-active-ocp-version_{context}"]
= Transitioning the active-ocp-version ClusterImageSet

The `active-ocp-version` `ClusterImageSet` is an optional {ztp-first} convention used in {ztp} deployments.
It provides a single, central definition of the {product-title} release image to use when provisioning clusters.
By default, this resource is synchronized to the hub cluster from the `site-configs/resources/` folder.

If your deployment uses an `active-ocp-version` `ClusterImageSet` CR, you must migrate it to the `resources/` folder in the new directory that contains `ClusterInstance` CRs.
This prevents synchronization conflicts because both Argo CD applications cannot manage the same resource.

.Prerequisites

* You have completed the procedure to create the parallel Argo CD pipeline for `ClusterInstance` CRs.
* The Argo CD application points to the folder in your Git repository that will contain the new `ClusterInstance` CRs and associated cluster resources. In this example, the `clusters-v2` Argo CD application points to the `site-configs-v2/` folder.
* Your Git repository contains an `active-ocp-version.yaml` manifest in the `resources/` folder.

.Procedure

. Copy the `resources/` folder from the `site-configs/` directory into the new `site-configs-v2/` directory:
+
[source,bash]
----
$ cp -r site-configs/resources site-configs-v2/
----

. Remove the reference to the `resources/` folder from the `site-configs/kustomization.yaml` file.
This ensures that the old `clusters` Argo CD application no longer manages the `active-ocp-version` resource.
+
.Example updated `site-configs/kustomization.yaml` file
[source,yaml]
----
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - pre-reqs/
  #- resources/
generators:
  - hub-1/sno1.yaml
  - hub-1/sno2.yaml
  - hub-1/sno3.yaml
----

. Add the `resources/` folder to the `site-configs-v2/kustomization.yaml` file.
This step transfers ownership of the `ClusterImageSet` to the new `clusters-v2` application.
+
.Example updated `site-configs-v2/kustomization.yaml` file
[source,yaml]
----
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - resources/
----

. Commit and push the changes to the Git repository.

.Verification

. In Argo CD, verify that the `clusters-v2` application is *Healthy* and *Synced*.

. If the `active-ocp-version` `ClusterImageSet` resource in the original `clusters` Argo CD application is out of sync, you can remove the Argo CD application label by running the following command:
+
[source,bash]
----
$ oc label clusterimageset active-ocp-version app.kubernetes.io/instance-
----
+
.Example output
[source,bash]
----
clusterimageset.hive.openshift.io/active-ocp-version unlabeled
----
diff --git a/modules/ztp-clusterinstance-cleanup.adoc b/modules/ztp-clusterinstance-cleanup.adoc
new file mode 100644
index 000000000000..8e7c7b59cc9e
--- /dev/null
+++ b/modules/ztp-clusterinstance-cleanup.adoc
@@ -0,0 +1,81 @@
// Module included in the following assemblies:
//
// * edge_computing/ztp-migrate-clusterinstance.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-clusterinstance-cleanup_{context}"]
= Deleting the Argo CD pipeline post-migration

After you migrate all {sno} clusters from using `SiteConfig` CRs to `ClusterInstance` CRs, you can delete the original Argo CD application and related resources that managed the `SiteConfig` CRs.

[NOTE]
====
Only delete the Argo CD application and related resources after you have confirmed that all clusters are successfully managed by the new Argo CD application that uses `ClusterInstance` CRs. Additionally, if the Argo CD project was used only for the migrated clusters' Argo CD application, you can also delete this project.
====

.Prerequisites
* You have logged in to the hub cluster as a user with `cluster-admin` privileges.
* All {sno} clusters have been successfully migrated to use `ClusterInstance` CRs and are managed by another Argo CD application.

.Procedure

. Delete the original Argo CD application that managed the `SiteConfig` CRs by running the following command:
+
[source,bash]
----
$ oc delete applications.argoproj.io clusters -n openshift-gitops
----
+
* Replace `clusters` with the name of your original Argo CD application.

. Delete the original Argo CD project by running the following command:
+
[source,bash]
----
$ oc delete appproject ztp-app-project -n openshift-gitops
----
+
* Replace `ztp-app-project` with the name of your original Argo CD project.

.Verification

. Confirm that the original Argo CD project is deleted by running the following command:
+
[source,bash]
----
$ oc get appproject -n openshift-gitops
----
+
.Example output
[source,bash]
----
NAME                 AGE
default              6d20h
policy-app-project   2d22h
ztp-app-project-v2   44h
----
+
* The original Argo CD project in this example, `ztp-app-project`, is not present in the output.

. Confirm that the original Argo CD application is deleted by running the following command:
+
[source,bash]
----
$ oc get applications.argoproj.io -n openshift-gitops
----
+
.Example output
[source,bash]
----
NAME          SYNC STATUS   HEALTH STATUS
clusters-v2   Synced        Healthy
policies      Synced        Healthy
----
+
* The original Argo CD application in this example, `clusters`, is not present in the output.
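
. Optionally, before you remove any further resources, confirm that the migrated clusters are still managed through their `ClusterInstance` CRs. For example, listing the `ClusterInstance` CRs on the hub cluster, as in the migration verification earlier in this assembly, shows the provisioning status for each migrated cluster:
+
[source,bash]
----
$ oc get clusterinstance -A
----
+
* Every migrated cluster is expected to report a `Completed` provisioning status. The exact list of clusters depends on your environment.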
+ + + + + + diff --git a/modules/ztp-clusterinstance-troubleshooting.adoc b/modules/ztp-clusterinstance-troubleshooting.adoc new file mode 100644 index 000000000000..02ec20dad5bb --- /dev/null +++ b/modules/ztp-clusterinstance-troubleshooting.adoc @@ -0,0 +1,87 @@ +// Module included in the following assemblies: +// +// * edge_computing/ztp-migrate-clusterinstance.adoc + +:_mod-docs-content-type: PROCEDURE +[id="ztp-clusterinstance-troubleshooting_{context}"] += Troubleshooting the migration to ClusterInstance CRs + +Consider the following troubleshooting steps if you encounter issues during the migration from `SiteConfig` CRs to `ClusterInstance` CRs. + +.Procedure + +* Verify that the SiteConfig Operator rendered all the required deployment resources by running the following command: ++ +[source,bash] +---- +$ oc -n get clusterinstances -ojson | jq .status.manifestsRendered +---- ++ +.Example output +[source,json] +---- +[ + { + "apiGroup": "extensions.hive.openshift.io/v1beta1", + "kind": "AgentClusterInstall", + "lastAppliedTime": "2025-01-13T11:10:52Z", + "name": "sno1", + "namespace": "sno1", + "status": "rendered", + "syncWave": 1 + }, + { + "apiGroup": "metal3.io/v1alpha1", + "kind": "BareMetalHost", + "lastAppliedTime": "2025-01-13T11:10:53Z", + "name": "sno1.example.com", + "namespace": "sno1", + "status": "rendered", + "syncWave": 1 + }, + { + "apiGroup": "hive.openshift.io/v1", + "kind": "ClusterDeployment", + "lastAppliedTime": "2025-01-13T11:10:53Z", + "name": "sno1", + "namespace": "sno1", + "status": "rendered", + "syncWave": 1 + }, + { + "apiGroup": "agent-install.openshift.io/v1beta1", + "kind": "InfraEnv", + "lastAppliedTime": "2025-01-13T11:10:53Z", + "name": "sno1", + "namespace": "sno1", + "status": "rendered", + "syncWave": 1 + }, + { + "apiGroup": "agent-install.openshift.io/v1beta1", + "kind": "NMStateConfig", + "lastAppliedTime": "2025-01-13T11:10:53Z", + "name": "sno1.example.com", + "namespace": "sno1", + "status": "rendered", + "syncWave": 1 + }, + { + "apiGroup": "agent.open-cluster-management.io/v1", + "kind": "KlusterletAddonConfig", + "lastAppliedTime": "2025-01-13T11:10:53Z", + "name": "sno1", + "namespace": "sno1", + "status": "rendered", + "syncWave": 2 + }, + { + "apiGroup": "cluster.open-cluster-management.io/v1", + "kind": "ManagedCluster", + "lastAppliedTime": "2025-01-13T11:10:53Z", + "name": "sno1", + "status": "rendered", + "syncWave": 2 + } +] +---- diff --git a/modules/ztp-creating-argocd-clusterinstance.adoc b/modules/ztp-creating-argocd-clusterinstance.adoc new file mode 100644 index 000000000000..55ec973f2f3f --- /dev/null +++ b/modules/ztp-creating-argocd-clusterinstance.adoc @@ -0,0 +1,173 @@ +// Module included in the following assemblies: +// +// * edge_computing/ztp-migrate-clusterinstance.adoc + +:_mod-docs-content-type: PROCEDURE +[id="ztp-creating-argocd-clusterinstance_{context}"] += Preparing a parallel Argo CD pipeline for ClusterInstance CRs + +Create a parallel Argo CD project and application to manage the new `ClusterInstance` CRs and associated cluster resources. + +.Prerequisites + +* You have logged in to the hub cluster as a user with `cluster-admin` privileges. +* You have configured your {ztp} environment successfully. +* You have installed and configured the Assisted Installer service successfully. +* You have access to the Git repository that contains your {sno} cluster configurations. + +.Procedure + +. Create YAML files for the parallel Argo project and application: + +.. 
 Create a YAML file that defines the `AppProject` resource:
+
.Example `ztp-app-project-v2.yaml` file
[source,yaml]
----
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: ztp-app-project-v2
  namespace: openshift-gitops
spec:
  clusterResourceWhitelist:
  - group: hive.openshift.io
    kind: ClusterImageSet
  - group: cluster.open-cluster-management.io
    kind: ManagedCluster
  - group: ""
    kind: Namespace
  destinations:
  - namespace: '*'
    server: '*'
  namespaceResourceWhitelist:
  - group: ""
    kind: ConfigMap
  - group: ""
    kind: Namespace
  - group: ""
    kind: Secret
  - group: agent-install.openshift.io
    kind: InfraEnv
  - group: agent-install.openshift.io
    kind: NMStateConfig
  - group: extensions.hive.openshift.io
    kind: AgentClusterInstall
  - group: hive.openshift.io
    kind: ClusterDeployment
  - group: metal3.io
    kind: BareMetalHost
  - group: metal3.io
    kind: HostFirmwareSettings
  - group: agent.open-cluster-management.io
    kind: KlusterletAddonConfig
  - group: cluster.open-cluster-management.io
    kind: ManagedCluster
  - group: siteconfig.open-cluster-management.io
    kind: ClusterInstance <1>
  sourceRepos:
  - '*'
----
<1> The new project allows the `ClusterInstance` CR from the `siteconfig.open-cluster-management.io` API group instead of the `SiteConfig` CR.

.. Create a YAML file that defines the `Application` resource:
+
.Example `clusters-v2.yaml` file
[source,yaml]
----
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: clusters-v2
  namespace: openshift-gitops
spec:
  destination:
    namespace: clusters-sub
    server: https://kubernetes.default.svc
  ignoreDifferences:
  - group: cluster.open-cluster-management.io
    kind: ManagedCluster
    managedFieldsManagers:
    - controller
  project: ztp-app-project-v2 <1>
  source:
    path: site-configs-v2 <2>
    repoURL: http://infra.5g-deployment.lab:3000/student/ztp-repository.git
    targetRevision: main
  syncPolicy:
    syncOptions:
    - CreateNamespace=true
    - PrunePropagationPolicy=background
    - RespectIgnoreDifferences=true
----
<1> The `project` field must match the name of the `AppProject` resource created in the previous step.
<2> The `path` field must match the root folder in your Git repository that will contain the `ClusterInstance` CRs and associated resources.
+
[NOTE]
====
By default, `auto-sync` is enabled. However, synchronization only occurs when you push configuration data for the cluster to the new configuration folder, which in this example is the `site-configs-v2/` folder.
====

. Create and commit a root folder in your Git repository that will contain the `ClusterInstance` CRs and associated resources, for example:
+
[source,bash]
----
$ mkdir site-configs-v2
$ touch site-configs-v2/.gitkeep
$ git add site-configs-v2/.gitkeep
$ git commit -s -m "Creates cluster-instance folder"
$ git push origin main
----
+
* The `.gitkeep` file is a placeholder to ensure that the empty folder is tracked by Git.
+
[NOTE]
====
You only need to create and commit the root `site-configs-v2/` folder during pipeline setup.
You will mirror the complete `site-configs/` folder structure into `site-configs-v2/` during the cluster migration procedure.
====

. Apply the `AppProject` and `Application` resources to the hub cluster by running the following commands:
+
[source,bash]
----
$ oc apply -f ztp-app-project-v2.yaml
$ oc apply -f clusters-v2.yaml
----

.Verification

. Verify that the original Argo CD project, `ztp-app-project`, and the new Argo CD project, `ztp-app-project-v2`, are present on the hub cluster by running the following command:
+
[source,bash]
----
$ oc get appprojects -n openshift-gitops
----
+
.Example output
[source,bash]
----
NAME                 AGE
default              46h
policy-app-project   42h
ztp-app-project      18h
ztp-app-project-v2   14s
----

. Verify that the original Argo CD application, `clusters`, and the new Argo CD application, `clusters-v2`, are present on the hub cluster by running the following command:
+
[source,bash]
----
$ oc get applications.argoproj.io -n openshift-gitops
----
+
.Example output
[source,bash]
----
NAME          SYNC STATUS   HEALTH STATUS
clusters      Synced        Healthy
clusters-v2   Synced        Healthy
policies      Synced        Healthy
----
diff --git a/modules/ztp-enable-siteconfig-addon.adoc b/modules/ztp-enable-siteconfig-addon.adoc
new file mode 100644
index 000000000000..1f17f3f7bdaf
--- /dev/null
+++ b/modules/ztp-enable-siteconfig-addon.adoc
@@ -0,0 +1,53 @@
// Module included in the following assemblies:
//
// * edge_computing/ztp-migrate-clusterinstance.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-clusterinstance-components_{context}"]
= Enabling the SiteConfig add-on for migration

The SiteConfig Operator reconciles the `ClusterInstance` custom resource (CR). To deploy the SiteConfig Operator, you must enable the SiteConfig add-on in {rh-rhacm-first}.

.Prerequisites

* You have logged in to the hub cluster as a user with `cluster-admin` privileges.
* You have configured your {ztp} environment successfully.
* You have deployed {rh-rhacm-first} version 2.12 or later.

.Procedure

* Enable the SiteConfig add-on by running the following command:
+
[source,bash]
----
$ oc -n <namespace> patch multiclusterhubs.operator.open-cluster-management.io multiclusterhub --type json --patch '[{"op": "add", "path":"/spec/overrides/components/-", "value": {"name":"siteconfig","enabled": true}}]'
----
+
* Replace `<namespace>` with the namespace where {rh-rhacm} is installed, for example `open-cluster-management`.
+
.Example output
[source,bash]
----
multiclusterhub.operator.open-cluster-management.io/multiclusterhub patched
----

.Verification

* Check the status of the SiteConfig Operator by running the following command:
+
[source,bash]
----
$ oc -n <namespace> get po | grep siteconfig
----
+
.Example output
[source,bash]
----
siteconfig-controller-manager-6c864fb6b9-kvbv9   2/2     Running   0          43s
----

diff --git a/modules/ztp-migrate-clusterinstance-overview.adoc b/modules/ztp-migrate-clusterinstance-overview.adoc
new file mode 100644
index 000000000000..7bf3192d9e01
--- /dev/null
+++ b/modules/ztp-migrate-clusterinstance-overview.adoc
@@ -0,0 +1,71 @@
// Module included in the following assemblies:
//
// * edge_computing/ztp-migrate-clusterinstance.adoc

:_mod-docs-content-type: CONCEPT
[id="ztp-migrate-clusterinstance-overview_{context}"]
= Overview of migrating from SiteConfig CRs to ClusterInstance CRs

The `ClusterInstance` CR provides a more unified and generic approach to defining clusters and is the preferred method for managing cluster deployments in the {ztp} workflow.
The SiteConfig Operator, which manages the `ClusterInstance` custom resource (CR), is a fully developed controller shipped as an add-on within {rh-rhacm-first}.

[IMPORTANT]
====
The SiteConfig Operator only reconciles updates for `ClusterInstance` objects. 
The controller does not monitor or manage deprecated `SiteConfig` objects. +==== + +The migration from `SiteConfig` CRs to `ClusterInstance` CRs provides several improvements, such as enhanced scalability and a clear separation of cluster parameters from the cluster deployment method. For more information about these improvements, and the SiteConfig Operator, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/multicluster_engine_operator_with_red_hat_advanced_cluster_management/index#siteconfig-intro[SiteConfig]. + +The migration process involves the following high-level steps: + +. Set up the parallel pipeline by preparing a new Git folder structure in your repository and creating the corresponding Argo CD project and application. + +. To migrate the clusters incrementally, first remove the associated `SiteConfig` CR from the old pipeline. Then, add a corresponding `ClusterInstance` CR to the new pipeline. ++ +[NOTE] +==== +By using the `prune=false` sync policy in the initial Argo CD application, the resources managed by this pipeline remain intact even after you remove the target cluster from this application. This approach ensures that the existing cluster resources remain operational during the migration process. +==== + +.. Optionally, use the `siteconfig-converter` tool to automatically convert existing `SiteConfig` CRs to `ClusterInstance` CRs. + +. When you complete the cluster migration, delete the original Argo project and application and clean up any related resources. + +The following sections describe how to migrate an example cluster, `sno1`, from using a `SiteConfig` CR to a `ClusterInstance` CR. + +The following Git repository folder structure is used as a basis for this example migration: +[source,text] +---- +├── site-configs/ +│ ├── kustomization.yaml +│ ├── hub-1/ +│ │ └── kustomization.yaml +│ │ ├── sno1.yaml +│ │ ├── sno2.yaml +│ │ ├── sno3.yaml +│ │ ├── extra-manifest/ +│ │ │ ├── enable-crun-master.yaml +│ │ │ └── enable-crun-worker.yaml +│ ├── pre-reqs/ +│ │ ├── kustomization.yaml +│ │ ├── sno1/ +│ │ │ ├── bmc-credentials.yaml +│ │ │ ├── kustomization.yaml +│ │ │ └── pull-secret.yaml +│ │ ├── sno2/ +│ │ │ ├── bmc-credentials.yaml +│ │ │ ├── kustomization.yaml +│ │ │ └── pull-secret.yaml +│ │ └── sno3/ +│ │ ├── bmc-credentials.yaml +│ │ ├── kustomization.yaml +│ │ └── pull-secret.yaml +│ ├── reference-manifest/ +│ │ └── 4.20/ +│ ├──resources/ +│ │ ├── active-ocp-version.yaml +│ │ └── kustomization.yaml + +└── site-policies/ #Policies and configurations implemented for the clusters +... +---- diff --git a/modules/ztp-migrating-sno-clusterinstnce.adoc b/modules/ztp-migrating-sno-clusterinstnce.adoc new file mode 100644 index 000000000000..993cc30af6c8 --- /dev/null +++ b/modules/ztp-migrating-sno-clusterinstnce.adoc @@ -0,0 +1,255 @@ +// Module included in the following assemblies: +// +// * edge_computing/ztp-migrate-clusterinstance.adoc + +:_mod-docs-content-type: PROCEDURE +[id="ztp-migrating-sno-clusterinstance_{context}"] += Performing the migration from SiteConfig CR to ClusterInstance CR + +Migrate a {sno} cluster from using a `SiteConfig` CR to a `ClusterInstance` CR by removing the `SiteConfig` CR from the old pipeline, and adding a corresponding `ClusterInstance` CR to the new pipeline. + +.Prerequisites + +* You have logged in to the hub cluster as a user with `cluster-admin` privileges. 
+* You have set up the parallel Argo CD pipeline, including the Argo CD project and application, that will manage the cluster using the `ClusterInstance` CR. +* The Argo CD application managing the original `SiteConfig` CR pipeline is configured with the sync policy `prune=false`. This setting ensures that resources remain intact after you remove the target cluster from this application. +* You have access to the Git repository that contains your {sno} cluster configurations. +* You have {rh-rhacm-first} version 2.12 or later installed in the hub cluster. +* The SiteConfig Operator is installed and running in the hub cluster. +* You have installed Podman and you have access to the registry.redhat.io container image registry. + +.Procedure + +. Mirror the `site-configs` folder structure to the new `site-configs-v2` directory that will contain the `ClusterInstance` CRs, for example: ++ +[source,text] +---- +site-configs-v2/ +├── hub-1/ <1> +│ └── extra-manifest/ +├── pre-reqs/ +│ └── sno1/ <2> +├── reference-manifest/ +│ └── 4.20/ +└── resources/ +---- +<1> The `hub-1/` folder will contain the `ClusterInstance` CR for each cluster. +<2> Mirror the target cluster, in this example `sno1`, to include the required pre-requisite resources such as the image registry pull secret, the baseboard management controller credentials, and so on. + +. Remove the target cluster from the original Argo CD application by commenting out the resources in the related files in Git: + +.. Comment out the target cluster from the `site-configs/kustomization.yaml` file, for example: ++ +[source,bash] +---- +$ cat site-configs/kustomization.yaml +---- ++ +.Example updated `site-configs/kustomization.yaml` file +[source,yaml] +---- +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - pre-reqs/ + #- resources/ +generators: + #- hub-1/sno1.yaml + - hub-1/sno2.yaml + - hub-1/sno3.yaml +---- + +.. Comment out the target cluster from the `site-configs/pre-reqs/kustomization.yaml` file. +This removes the `site-configs/pre-reqs/sno1` folder, which also requires migration and has resources such as the image registry pull secret, the baseboard management controller credentials, and so on, for example: ++ +[source,bash] +---- +$ cat site-configs/pre-reqs/kustomization.yaml +---- ++ +.Example updated `site-configs/pre-reqs/kustomization.yaml` file +[source,yaml] +---- +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + #- sno1/ + - sno2/ + - sno3/ +---- + +. Commit the changes to the Git repository. ++ +[NOTE] +==== +After you commit the changes, the original Argo CD application reports an `OutOfSync` sync status because the Argo CD application still attempts to monitor the status of the taget cluster's resources. However, because the sync policy is set to `prune=false`, the Argo CD application does not delete any resources. +==== + +. To ensure that the original Argo CD application no longer manages the cluster resources, you can remove the Argo CD application label from the resources by running the following command: ++ +[source,bash] +---- +$ for cr in bmh,hfs,clusterdeployment,agentclusterinstall,infraenv,nmstateconfig,configmap,klusterletaddonconfig,secrets; do oc label $cr app.kubernetes.io/instance- --all -n sno1; done && oc label ns sno1 app.kubernetes.io/instance- && oc label managedclusters sno1 app.kubernetes.io/instance- +---- ++ +The Argo CD application label is removed from all resources in the `sno1` namespace and the sync status returns to `Synced`. + +. 
Create the `ClusterInstance` CR for the target cluster by using the `siteconfig-converter` tool packaged with the `ztp-site-generate` container image:
+
[NOTE]
====
The `siteconfig-converter` tool cannot translate earlier versions of the `AgentClusterInstall` resource that use the following deprecated fields in the `SiteConfig` CR:

* `apiVIP`
* `ingressVIP`
* `manifestsConfigMapRef`

To resolve this issue, do one of the following:

* Create a custom cluster template that includes these fields. For more information about creating custom templates, see link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.13/html/multicluster_engine_operator_with_red_hat_advanced_cluster_management/siteconfig-intro#create-custom-templates[Creating custom templates with the SiteConfig operator].
* Suppress the creation of the `AgentClusterInstall` resource by adding it to the `suppressedManifests` list in the `ClusterInstance` CR, or by using the `-s` flag in the `siteconfig-converter` tool. You must remove the resource from the `suppressedManifests` list when reinstalling the cluster.
====

.. Pull the `ztp-site-generate` container image by running the following command:
+
[source,bash,subs="attributes+"]
----
$ podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:{product-version}
----

.. Run the `siteconfig-converter` tool interactively through the container by running the following command:
+
[source,bash,subs="attributes+"]
----
$ podman run -v "${PWD}":/resources:Z,U -it registry.redhat.io/openshift4/ztp-site-generate-rhel8:{product-version} siteconfig-converter -d /resources/<output_directory> /resources/<siteconfig_file>
----
+
* Replace `<output_directory>` with the output directory for the generated files.
* Replace `<siteconfig_file>` with the path to the target `SiteConfig` CR file.
+
.Example output
[source,bash]
----
Successfully read SiteConfig: sno1/sno1
Converted cluster 1 (sno1) to ClusterInstance: /resources/output/sno1.yaml
WARNING: extraManifests field is not supported in ClusterInstance and will be ignored. Create one or more configmaps with the exact desired set of CRs for the cluster and include them in the extraManifestsRefs.
WARNING: Added default extraManifest ConfigMap 'extra-manifests-cm' to extraManifestsRefs. This configmap is created automatically.
Successfully converted 1 cluster(s) to ClusterInstance files in /resources/output: sno1.yaml
Generating ConfigMap kustomization files...
+Using ConfigMap name: extra-manifests-cm, namespace: sno1, manifests directory: extra-manifests +Generating ConfigMap kustomization files with name: extra-manifests-cm, namespace: sno1, manifests directory: extra-manifests +Generating extraManifests for SiteConfig: /resources/sno1.yaml +Using absolute path for input file: /resources/sno1.yaml +Running siteconfig-generator from directory: /resources +Found extraManifests directory: /resources/output/extra-manifests/sno1 +Moved sno1_containerruntimeconfig_enable-crun-master.yaml to /resources/output/extra-manifests/sno1_containerruntimeconfig_enable-crun-master.yaml +Moved sno1_containerruntimeconfig_enable-crun-worker.yaml to /resources/output/extra-manifests/sno1_containerruntimeconfig_enable-crun-worker.yaml +Moved 2 extraManifest files from /resources/output/extra-manifests/sno1 to /resources/output/extra-manifests +Removed directory: /resources/output/extra-manifests/sno1 +--- Kustomization.yaml Generator --- +Scanning directory: /resources/output/extra-manifests +Found and adding: extra-manifests/sno1_containerruntimeconfig_enable-crun-master.yaml +Found and adding: extra-manifests/sno1_containerruntimeconfig_enable-crun-worker.yaml +------------------------------------ +kustomization-configMapGenerator-snippet.yaml generated successfully at: /resources/output/kustomization-configMapGenerator-snippet.yaml +Content: +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +configMapGenerator: + - files: + - extra-manifests/sno1_containerruntimeconfig_enable-crun-master.yaml + - extra-manifests/sno1_containerruntimeconfig_enable-crun-worker.yaml + name: extra-manifests-cm + namespace: sno1 +generatorOptions: + disableNameSuffixHash: true + +------------------------------------ +---- ++ +[NOTE] +==== +The `ClusterInstance` CR requires the extra manifests to be defined in a `ConfigMap` resource. + +To meet this requirement, the `siteconfig-converter` tool generates a `kustomization.yaml` snippet. The generated snippet uses Kustomize's `configMapGenerator` to automatically package your manifest files into the required `ConfigMap` resource. You must merge this snippet into your original `kustomization.yaml` file to ensure that the `ConfigMap` resource is created and managed alongside your other cluster resources. +==== + +. Configure the new Argo CD application to manage the target cluster by referencing it in the new pipelines `Kustomization` files, for example: ++ +[source,bash] +---- +$ cat site-configs-v2/kustomization.yaml +---- ++ +.Example updated `site-configs-v2/kustomization.yaml` file +[source,yaml] +---- +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - resources/ + - pre-reqs/ + - hub-1/sno1.yaml +---- ++ +[source,bash] +---- +$ cat site-configs-v2/pre-reqs/kustomization.yaml +---- ++ +.Example updated `site-configs-v2/pre-reqs/kustomization.yaml` file +[source,yaml] +---- +apiVersion: kustomize.config.k8s.io/v1beta1 +kind: Kustomization +resources: + - sno1/ +---- + +. Commit the changes to the Git repository. + +.Verification + +. 
 Verify that the `ClusterInstance` CR is successfully deployed and that the provisioning status is `Completed` by running the following command:
+
[source,bash]
----
$ oc get clusterinstance -A
----
+
.Example output
[source,bash]
----
NAME                                                          PAUSED   PROVISIONSTATUS   PROVISIONDETAILS         AGE
clusterinstance.siteconfig.open-cluster-management.io/sno1             Completed         Provisioning completed   27s
----
+
At this point, the new Argo CD application that uses the `ClusterInstance` CR is managing the `sno1` cluster. You can continue to migrate one or more clusters at a time by repeating these steps until all target clusters are migrated to the new pipeline.

. Verify that the folder structure and files in the `site-configs-v2/` directory contain the migrated resources for the `sno1` cluster, for example:
+
[source,text]
----
site-configs-v2/
├── hub-1/
│   └── sno1.yaml <1>
├── extra-manifest/
│   ├── enable-crun-worker.yaml <2>
│   └── enable-crun-master.yaml
├── kustomization.yaml <3>
├── pre-reqs/
│   └── sno1/
│       ├── bmc-credentials.yaml
│       ├── namespace.yaml
│       └── pull-secret.yaml
├── reference-manifest/
│   └── 4.20/
└── resources/
    ├── active-ocp-version.yaml
    └── kustomization.yaml
----
<1> The `ClusterInstance` CR for the `sno1` cluster.
<2> The tool automatically generates the extra manifests referenced by the `ClusterInstance` CR. After generation, the file names might change. You can rename the files to match the original naming convention in the associated `kustomization.yaml` file.
<3> The tool generates a `kustomization.yaml` file snippet to create the `ConfigMap` resource that specifies the extra manifests. You can merge the generated `kustomization` snippet with your original `kustomization.yaml` file.
diff --git a/modules/ztp-preparing-migrate-clusterinstance.adoc b/modules/ztp-preparing-migrate-clusterinstance.adoc
new file mode 100644
index 000000000000..803b08fddfa1
--- /dev/null
+++ b/modules/ztp-preparing-migrate-clusterinstance.adoc
@@ -0,0 +1,15 @@
// Module included in the following assemblies:
//
// * edge_computing/ztp-migrate-clusterinstance.adoc

:_mod-docs-content-type: PROCEDURE
[id="ztp-preparing-migrate-clusterinstance_{context}"]
= Preparing for the migration from SiteConfig CRs to ClusterInstance CRs

To prepare for the migration from `SiteConfig` CRs to `ClusterInstance` CRs, you must complete the following steps:

* Remove the target cluster from the original Argo CD application that manages the `SiteConfig` CRs.

* Prepare the Git repository by creating a directory for the migrated clusters that contains the `ClusterInstance` CRs and associated resources.

* Optionally, use the `siteconfig-converter` tool to convert existing `SiteConfig` CRs to `ClusterInstance` CRs at scale.
diff --git a/modules/ztp-site-converter-ref.adoc b/modules/ztp-site-converter-ref.adoc
new file mode 100644
index 000000000000..f39ea754ebb3
--- /dev/null
+++ b/modules/ztp-site-converter-ref.adoc
@@ -0,0 +1,29 @@
// Module included in the following assemblies:
//
// * edge_computing/ztp-migrate-clusterinstance.adoc

:_mod-docs-content-type: REFERENCE
[id="ztp-site-converter-ref_{context}"]
= Reference flags for the siteconfig-converter tool

The following table describes the flags for the `siteconfig-converter` tool.

[cols="1,1,4", options="header"]
|===
|Flag |Type |Description

|`-d` |string |Define the output directory for the converted `ClusterInstance` custom resources (CRs). This flag is required.

|`-t` |string |Define a comma-separated list of template references for clusters in namespace/name format. The default value is `open-cluster-management/ai-cluster-templates-v1`.

|`-n` |string |Define a comma-separated list of template references for nodes in namespace/name format. The default value is `open-cluster-management/ai-node-templates-v1`.

|`-m` |string |Define a comma-separated list of `ConfigMap` names to use for extra manifest references.

|`-s` |string |Define a comma-separated list of manifest names to suppress at the cluster level.

|`-w` |boolean |Write conversion warnings as comments at the head of the converted YAML files. The default value is `false`.

|`-c` |boolean |Copy comments from the original `SiteConfig` CRs to the converted `ClusterInstance` CRs. The default value is `false`.

|===
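
The following example shows one way to combine several of these flags with the container invocation used in the migration procedure. The directory and file names are illustrative and depend on your repository layout: the command converts the `site-configs/hub-1/sno1.yaml` `SiteConfig` CR, writes conversion warnings as comments into the generated file, and suppresses the `AgentClusterInstall` manifest:

[source,bash,subs="attributes+"]
----
$ podman run -v "${PWD}":/resources:Z,U -it registry.redhat.io/openshift4/ztp-site-generate-rhel8:{product-version} \
  siteconfig-converter -d /resources/output -w -s AgentClusterInstall /resources/site-configs/hub-1/sno1.yaml
----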