This guide focuses on setting up a CI/CD pipeline that spans clusters on IBM Cloud and IBM Cloud Satellite.
- Prerequisites
- Architecture Diagram
- Location B - Production Environment
- Location A - Dev Environment
- Trigger the Pipeline
- (Optional) Connect to Postgres DB
- Resources
## Prerequisites

- Each classic virtual server (worker node) for the OpenShift cluster needs at least 4 vCPU, 16 GB RAM, and 3 disks: 100 GB for boot, at least 25 GB for the /var/data disk, and a free unmounted, unpartitioned disk (100 GB is sufficient).
- Shared Storage for the pipeline. Check: Setting up local file storage on Red Hat OpenShift on IBM Cloud Satellite
- Set up Image Registry for Red Hat OpenShift Cluster on IBM Cloud Satellite
## Location B - Production Environment

- Create the `prod-env` project

```shell
oc new-project prod-env
```
- Create a service account and give it the required permissions

```shell
oc create sa pipeline-starter -n prod-env
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/location%20b/pipeline-starter-clusterrole.yaml -n prod-env
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/location%20b/pipeline-starter-rolebinding.yaml -n prod-env
```
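The two manifests referenced above define the ClusterRole and RoleBinding that authorize `pipeline-starter` to start pipelines in `prod-env` remotely. As an illustration of the RBAC pattern only (not the actual contents of those files, which live at the URLs above), such a role might look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pipeline-starter
rules:
  # Allow starting and observing Tekton pipeline runs
  - apiGroups: ["tekton.dev"]
    resources: ["pipelineruns"]
    verbs: ["create", "get", "list", "watch"]
```

The RoleBinding then binds this role to the `pipeline-starter` service account, scoped to the `prod-env` namespace.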
- Obtain the `pipeline-starter` authentication token

```shell
oc sa get-token pipeline-starter -n prod-env
```

On OpenShift 4.11 and later, where `oc sa get-token` is deprecated, `oc create token pipeline-starter -n prod-env` can be used instead (tokens created this way are short-lived by default).
- Get the server URL from the OpenShift cluster's Overview page: copy the Cluster service URL from the Networking section and save it for a later step, when you edit the pipeline.
- Create tasks

```shell
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/tasks/apply-manifest-task.yaml -n prod-env
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/tasks/test-task.yaml -n prod-env
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/tasks/update-deployment-task.yaml -n prod-env
```
- Create Pipeline

```shell
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/location%20b/pipeline-b.yaml -n prod-env
```
- Create PVC

From the Administrator perspective in the web console, go to Storage and open the PersistentVolumeClaims section. Click Create PersistentVolumeClaim, then fill in the details as shown in the screenshot below.
## Location A - Dev Environment

- Create the `dev-env` project

```shell
oc new-project dev-env
```
- Create the `pipeline-starter` secret in the `dev-env` project to access the `prod-env` project, replacing `INSERT_TOKEN_HERE` with the token obtained in Location B

```shell
oc create secret generic --from-literal=openshift-token=INSERT_TOKEN_HERE pipeline-starter -n dev-env
```
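For reference, the command above is equivalent to applying a manifest like the following sketch (the token value is the one obtained from the `prod-env` cluster):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pipeline-starter
  namespace: dev-env
type: Opaque
stringData:
  # The key name must match what the remote-pipeline task expects
  openshift-token: INSERT_TOKEN_HERE
```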
- Create tasks

```shell
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/location%20a/execute-remote-pipeline-task.yaml -n dev-env
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/tasks/apply-manifest-task.yaml -n dev-env
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/tasks/test-task.yaml -n dev-env
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/tasks/update-deployment-task.yaml -n dev-env
```
- Create Pipeline

```shell
oc create -f https://raw.githubusercontent.com/nerdingitout/sat-cicd/main/location%20a/pipeline-a.yaml -n dev-env
```
- Create PVC

From the Administrator perspective in the web console, go to Storage and open the PersistentVolumeClaims section. Click Create PersistentVolumeClaim, then fill in the details as shown in the screenshot below.
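The claim can also be created from the CLI. A minimal sketch, assuming the claim is named `source-pvc` (the claimName bound when the pipeline is started later); the access mode, size, and default storage class are assumptions, so match whatever the screenshot specifies:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: source-pvc
  namespace: dev-env
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Save this as `source-pvc.yaml` and apply it with `oc apply -f source-pvc.yaml`.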
- Make sure to edit the `openshift-server-url` parameter of the `execute-remote-pipeline` task in the pipeline YAML (line 101). Set its value to the remote cluster URL (Location B) so the pipeline can connect to it. The following lines are for reference:
```yaml
    - name: execute-remote-pipeline
      params:
        - name: APP_NAME
          value: $(params.deployment-name)
        - name: url
          value: $(params.git-url)
        - name: pipeline-name
          value: prod-pipeline
        - name: pipeline-namespace
          value: prod-env
        - name: openshift-server-url
          value: INSERT_OPENSHIFT_URL_HERE
        - name: openshift-token-secret
          value: pipeline-starter
```
## Trigger the Pipeline

- Run the following command to trigger the pipeline in Location A, which in turn triggers the pipeline in Location B. Make sure to change the values indicated below according to your project, pipeline, and deployment details.

```shell
tkn pipeline start <pipeline-name> -w name=shared-workspace,claimName=source-pvc -p deployment-name=<deployment-name> -p git-url=<git-url> --use-param-defaults
```
## (Optional) Connect to Postgres DB

- This step is applied once, and only for the backend application `form-bff`. Connect to your Postgres database by creating a secret in each environment using the following command. Make sure to replace the values with the right credentials for each variable; leave PORT set to 8080.

```shell
oc create secret generic postgredb-secret --from-literal=DB_USER=<add-db-user-here> --from-literal=DB_PASSWORD=<add-db-password-here> --from-literal=DB_HOST=<add-db-host-here> --from-literal=DB_PORT=<add-db-port-here> --from-literal=DB_NAME=<add-db-name-here> --from-literal=PORT=8080
```
- Then set the secret you created as environment variables for your application

```shell
oc set env --from=secret/postgredb-secret deployment/form-bff
```
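For reference, `oc set env --from=secret/...` expands each key of the secret into its own environment variable backed by a `secretKeyRef`. The resulting container spec looks roughly like the following sketch (the container name and the two keys shown are illustrative; the remaining keys are expanded the same way):

```yaml
spec:
  containers:
    - name: form-bff
      env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: postgredb-secret
              key: DB_USER
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgredb-secret
              key: DB_PASSWORD
        # ...and similarly for DB_HOST, DB_PORT, DB_NAME, and PORT
```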
## Resources

- https://piotrminkowski.com/2021/08/05/kubernetes-ci-cd-with-tekton-and-argocd/
- https://dzone.com/articles/cicd-pipeline-spanning-multiple-openshift-clusters
- https://github.com/noseka1/execute-remote-pipeline
- https://containerjournal.com/features/standardizing-multi-cloud-k8s-deployments-with-tekton/
- https://cloud.ibm.com/docs/satellite?topic=satellite-hosts