147 changes: 113 additions & 34 deletions modules/ztp-adding-new-content-to-gitops-ztp.adoc
@@ -4,71 +4,150 @@

:_content-type: PROCEDURE
[id="ztp-adding-new-content-to-gitops-ztp_{context}"]
= Adding custom content to the {gitops-shortname} ZTP pipeline

Perform the following procedure to add new content to the {ztp} pipeline.

.Procedure

. Create a subdirectory named `source-crs` in the directory containing the `kustomization.yaml` file for the `PolicyGenTemplate` custom resource (CR).
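+
For example, assuming the repository layout shown in the next step, you might create the subdirectories from the root of your Git repository. The `example/policygentemplates` path and the `custom-crs` grouping are illustrative, not required:
+
[source,terminal]
----
$ mkdir -p example/policygentemplates/source-crs/custom-crs
----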

. Add your custom CRs to the `source-crs` subdirectory, as shown in the following example:
+
[source,text]
----
example
└── policygentemplates
    ├── dev.yaml
    ├── kustomization.yaml
    ├── mec-edge-sno1.yaml
    ├── sno.yaml
    └── source-crs <1>
        ├── PaoCatalogSource.yaml
        ├── PaoSubscription.yaml
        ├── custom-crs
        │   ├── apiserver-config.yaml
        │   └── disable-nic-lldp.yaml
        └── elasticsearch
            ├── ElasticsearchNS.yaml
            └── ElasticsearchOperatorGroup.yaml
----
<1> The `source-crs` subdirectory must be in the same directory as the `kustomization.yaml` file.
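+
Any valid {product-title} CR can be used as a custom source CR. For example, a minimal sketch of what the `custom-crs/apiserver-config.yaml` file in the tree above might contain; the `APIServer` audit profile shown here is illustrative only:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    profile: WriteRequestBodies
----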
+
[IMPORTANT]
====
To use your own resources, ensure that the custom CR names differ from the names of the default source CRs provided in the ZTP container.
====
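+
The `kustomization.yaml` file in the same directory continues to list the `PolicyGenTemplate` CRs as generators; the `source-crs` subdirectory itself does not get an entry. A minimal sketch, assuming the file names from the tree above:
+
[source,yaml]
----
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
- dev.yaml
- mec-edge-sno1.yaml
- sno.yaml
----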


. Update the required `PolicyGenTemplate` CRs to include references to the content that you added in the `source-crs` directory, as shown in the following example:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "group-dev"
  namespace: "ztp-clusters"
spec:
  bindingRules:
    dev: "true"
  mcp: "master"
  sourceFiles:
    # These policies/CRs come from the internal container image
    # Cluster Logging
    - fileName: ClusterLogNS.yaml
      remediationAction: inform
      policyName: "group-dev-cluster-log-ns"
    - fileName: ClusterLogOperGroup.yaml
      remediationAction: inform
      policyName: "group-dev-cluster-log-operator-group"
    - fileName: ClusterLogSubscription.yaml
      remediationAction: inform
      policyName: "group-dev-cluster-log-sub"
    # Local Storage Operator
    - fileName: StorageNS.yaml
      remediationAction: inform
      policyName: "group-dev-lso-ns"
    - fileName: StorageOperGroup.yaml
      remediationAction: inform
      policyName: "group-dev-lso-operator-group"
    - fileName: StorageSubscription.yaml
      remediationAction: inform
      policyName: "group-dev-lso-sub"
    # These are custom local policies that come from the source-crs directory in the Git repo
    # Performance Addon Operator
    - fileName: PaoSubscriptionNS.yaml
      remediationAction: inform
      policyName: "group-dev-pao-ns"
    - fileName: PaoSubscriptionCatalogSource.yaml
      remediationAction: inform
      policyName: "group-dev-pao-cat-source"
      spec:
        image: <image_URL_here>
    - fileName: PaoSubscription.yaml
      remediationAction: inform
      policyName: "group-dev-pao-sub"
    # Elasticsearch Operator
    - fileName: elasticsearch/ElasticsearchNS.yaml <1>
      remediationAction: inform
      policyName: "group-dev-elasticsearch-ns"
    - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml
      remediationAction: inform
      policyName: "group-dev-elasticsearch-operator-group"
    # Custom Resources
    - fileName: custom-crs/apiserver-config.yaml <1>
      remediationAction: inform
      policyName: "group-dev-apiserver-config"
    - fileName: custom-crs/disable-nic-lldp.yaml
      remediationAction: inform
      policyName: "group-dev-disable-nic-lldp"
----
<1> Set `fileName` to include the path to the custom CR relative to the `source-crs` parent directory, for example `<subdirectory>/<filename>`.

. Commit the `PolicyGenTemplate` change in Git, and then push to the Git repository that is monitored by the GitOps ZTP Argo CD policies application.
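+
For example, a typical sequence; the paths, commit message, branch, and remote names are placeholders for your own repository configuration:
+
[source,terminal]
----
$ git add example/policygentemplates
$ git commit -m "Add custom source CRs and update PolicyGenTemplate CRs"
$ git push origin main
----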

. Update the `ClusterGroupUpgrade` CR to include the changed `PolicyGenTemplate` CR and save it as `cgu-test.yaml`, as shown in the following example:
+
[source,yaml]
----
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: custom-source-cr
  namespace: ztp-clusters
spec:
  managedPolicies:
  - group-dev-config-policy
  enable: true
  clusters:
  - cluster1
  remediationStrategy:
    maxConcurrency: 2
    timeout: 240
----

. Apply the updated `ClusterGroupUpgrade` CR by running the following command:
+
[source,terminal]
----
$ oc apply -f cgu-test.yaml
----

.Verification

* Check that the updates have succeeded by running the following command:
+
[source,terminal]
----
$ oc get cgu -A
----
+
.Example output
+
[source,terminal]
----
NAMESPACE      NAME               AGE   STATE        DETAILS
ztp-clusters   custom-source-cr   6s    InProgress   Remediating non-compliant policies
ztp-install    cluster1           19h   Completed    All clusters are compliant with all the managed policies
----
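+
Optionally, inspect the compliance state of the individual policies generated from the updated `PolicyGenTemplate` CRs. A sketch, assuming the policies were created in the `ztp-clusters` namespace as in the example above:
+
[source,terminal]
----
$ oc get policies -n ztp-clusters
----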
@@ -19,11 +19,6 @@ include::modules/ztp-using-pgt-to-update-source-crs.adoc[leveloffset=+1]

include::modules/ztp-adding-new-content-to-gitops-ztp.adoc[leveloffset=+1]

include::modules/ztp-configuring-pgt-compliance-eval-timeouts.adoc[leveloffset=+1]

include::modules/ztp-creating-a-validator-inform-policy.adoc[leveloffset=+1]