⚠️ Although in working order, this should be considered more of a "POC" for the time being. You can watch the video if you're just looking for a quickstart.
This is a "self contained" and "self bootstrapping" OpenShift CI/CD Demo, that uses OpenShift Pipelines (Tekton) in tandem with OpenShift GitOps (Argo CD) to deliver a cloud native CI/CD solution.
Once deployed, you will have the following:
- OpenShift GitOps cluster-scoped deployment installed
- Tekton installed
- Custom cluster configurations
  - This includes creating a `developer` user.
The OpenShift GitOps system will then install the following:
- "Tenant" Argo CD instance under the
welcome-gitops
namespace- This is using OpenShift oAuth
- A GitTea instances to house all the manifests
- It also imports all the needed repos
- Sample
welcome-app
application in the following namespaceswelcome-dev
welcome-prod
- Tekton Pipeline under
welcome-pipeline
set up with the following- Builds application from source (hosted on the GitTea service)
- Image gets stored in the internal registry
- Tekton will "write back" to the GitTea instance for ArgoCD to act on
⚠️ I've tested this on OCP 4.8.2
To deploy this demo, you need to have the following:

- An "empty" OpenShift 4.8 cluster installed
- Cluster-admin access to the cluster
- The `oc`, `kubectl`, and `helm` CLIs installed
  - Not required, but `kustomize` is useful (for debugging)
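If you want to double-check the CLIs before starting, the standard version commands are enough; nothing demo-specific is assumed here:

```
oc version
kubectl version --client
helm version
```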
Once you have a cluster ready, log in as `cluster-admin` and add the following helm repo (and update the repo definitions):
```
helm repo add redhat-demos https://redhat-developer-demos.github.io/helm-repo
helm repo update
```
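To confirm the chart is actually visible after the update, a quick search against the repo should list it (the chart name matches the install command in the next step):

```
helm search repo redhat-demos
```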
Deploy this demo with the following command:
🚨 RHPDS users, please read the RHPDS notes in the troubleshooting section before installing.
```
helm install ocpcicd redhat-demos/openshift-cicd-demo
```
If you have issues, please see the troubleshooting section.
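The chart hands most of the work off to the operators and to Argo CD, so it can take several minutes before everything in the overview above exists. A rough way to watch it come up, assuming the namespace names listed earlier in this README:

```
# Release status of the chart itself
helm status ocpcicd

# Watch the OpenShift GitOps (cluster Argo CD) pods come up
oc get pods -n openshift-gitops -w

# Once things settle, the demo namespaces from the overview should exist
oc get ns welcome-gitops welcome-dev welcome-prod welcome-pipeline scm
```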
Once deployed, you will need the following information.
Console route can be found using:
```
oc get route console -n openshift-console -o jsonpath='{.spec.host}{"\n"}'
```
Use the following credentials:

- username: `developer`
- password: `openshift`
The tenant Argo CD instance is running in the `welcome-gitops` namespace. To reach the Web UI:

```
oc get route welcome-argocd-server -n welcome-gitops -o jsonpath='{.spec.host}{"\n"}'
```
Click on "LOGIN WITH OPENSHIFT". And login with the following credentials.
- username: `developer`
- password: `openshift`
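If you'd rather check the tenant instance from the CLI, its Applications are just custom resources in the `welcome-gitops` namespace (this assumes the standard Argo CD `Application` CRD installed by OpenShift GitOps):

```
# List the Argo CD Applications managed by the tenant instance
oc get applications.argoproj.io -n welcome-gitops
```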
The Git service route can be found by:

```
oc get route gitea -n scm -o jsonpath='{.spec.host}{"\n"}'
```
Use the following credentials for the demo:

- username: `developer`
- password: `openshift`
You will see two repos: `welcome-app` and `welcome-deploy`.

- The `welcome-app` repo is the code repo that builds the application
- The `welcome-deploy` repo is the repo with the deployment manifests
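If you want to poke at the repos locally, you can clone them over the Gitea route using the `developer` credentials above. This is only a sketch: adjust `http`/`https` to however the route is exposed on your cluster, and note that the `welcome-deploy` path assumes it lives under the same `developer` account as `welcome-app`.

```
GITEA_HOST=$(oc get route gitea -n scm -o jsonpath='{.spec.host}')

# Clone the application source and the deployment manifests
git clone http://developer:openshift@${GITEA_HOST}/developer/welcome-app.git
git clone http://developer:openshift@${GITEA_HOST}/developer/welcome-deploy.git
```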
The application is deployed to two namespaces: `welcome-dev` and `welcome-prod`.
You can find the routes using the following:

Dev:

```
oc get route welcome-app -n welcome-dev -o jsonpath='{.spec.host}{"\n"}'
```

Prod:

```
oc get route welcome-app -n welcome-prod -o jsonpath='{.spec.host}{"\n"}'
```
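To hit the running application from a terminal instead of a browser, something like this works (again, adjust the scheme to match how the routes are exposed):

```
# Dev and prod endpoints of the sample app
curl -s http://$(oc get route welcome-app -n welcome-dev -o jsonpath='{.spec.host}')
curl -s http://$(oc get route welcome-app -n welcome-prod -o jsonpath='{.spec.host}')
```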
Make a commit to the `index.php` file at the following URL:

```
echo $(oc get route gitea -n scm -o jsonpath='{.spec.host}')/developer/welcome-app
```
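If you prefer the command line over the Gitea web UI, a rough equivalent is below. It reuses the `developer` credentials and the route above; the exact change to `index.php` is just an example, any commit will do.

```
GITEA_HOST=$(oc get route gitea -n scm -o jsonpath='{.spec.host}')
git clone http://developer:openshift@${GITEA_HOST}/developer/welcome-app.git
cd welcome-app

# Any change works; here we just append a comment to index.php
echo "<?php // trigger a new pipeline run ?>" >> index.php

git commit -am "Trigger pipeline"
git push
```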
This should fire off a build whose progress you can watch in the `welcome-pipeline` namespace. It's probably easier to just show you, so watch this video:
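If you'd rather follow the build from a terminal, the PipelineRuns live in that namespace as ordinary custom resources (the `tkn` CLI is optional and only shown as an alternative):

```
# Watch PipelineRuns get created and progress
oc get pipelineruns -n welcome-pipeline -w

# Or, with the tkn CLI installed, follow the logs of the latest run
tkn pipelinerun logs --last -f -n welcome-pipeline
```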
Common issues that may arise are listed in this section, in no particular order.
If you don't want to use the helm chart, you can deploy this repo directly by running the following (the loop retries until all of the resources apply cleanly, since some of them depend on CRDs that only exist part-way through the bootstrap):
```
until kubectl apply -k https://github.com/RedHatWorkshops/openshift-cicd-demo/bootstrap/overlays/base.cluster/
do
  sleep 5
done
```
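Once the loop exits cleanly, you can wait for the GitOps components to become available before moving on. A minimal sketch, assuming the same `openshift-gitops` namespace used elsewhere in this README:

```
# Block until the OpenShift GitOps deployments report Available
kubectl wait --for=condition=Available --timeout=300s deployment --all -n openshift-gitops
```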
For more information about the helm repo, visit the chart repo.
Currently, the demo does not fit within the default limits set on RHPDS clusters. Until I can get this fixed, you'll need to apply the following workaround.
Remove the project request template settings:
```
oc patch project.config.openshift.io cluster --type=json -p '[{"op": "remove", "path": "/spec/projectRequestMessage"}]'
oc patch project.config.openshift.io cluster --type=json -p '[{"op": "remove", "path": "/spec/projectRequestTemplate"}]'
```
Then, delete the template itself.
```
oc delete template project-request -n openshift-config
```
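You can confirm the settings are gone by inspecting the cluster project configuration; its `spec` should no longer reference a request template or message:

```
oc get project.config.openshift.io cluster -o jsonpath='{.spec}{"\n"}'
```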
Part of this demo replaces the OAuth configuration inside of OpenShift. This isn't normally a problem on "new" clusters; however, RHPDS clusters are set up with some configuration (hence not technically "new"). Part of this configuration is setting up the `opentlc-mgr` account as an admin.

The helm chart deletes this account. Create a backup admin account in case you need it for the demo.
First, create a service account:

```
oc create sa backupadmin -n default
```
Add the `cluster-admin` role to this service account:

```
oc adm policy add-cluster-role-to-user cluster-admin -z backupadmin -n default
```
Get the login token for this account:

```
oc serviceaccounts get-token backupadmin -n default
```
❗ SAVE THIS TOKEN. This is what you will use to log in to the cluster once the `opentlc-mgr` account is gone.

```
oc login --token=<token> <api address>
```
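Since the `<api address>` won't be easy to look up once you're locked out, it can help to capture both the API URL and the token while you still have access. A small sketch:

```
# Save these somewhere safe while you are still logged in
API_URL=$(oc whoami --show-server)
TOKEN=$(oc serviceaccounts get-token backupadmin -n default)

# Later, once the opentlc-mgr account is gone:
oc login --token=${TOKEN} ${API_URL}
```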
The OpenShift GitOps installation (aka the "Cluster Argo CD") can be reached by the following route:

```
oc get route openshift-gitops-server -n openshift-gitops -o jsonpath='{.spec.host}{"\n"}'
```
The admin password can be found by:

```
oc extract secret/openshift-gitops-cluster -n openshift-gitops --to=-
```
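`oc extract` dumps every key in the secret; if you only want the password itself, it is stored under the `admin.password` key in a default OpenShift GitOps install, so you can pull just that field:

```
oc get secret openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d
```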
In case you need them, the Gitea admin credentials are:

- giteaAdminUser: `gitea-admin`
- giteaAdminPassword: `openshift`
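These admin credentials also work against the Gitea REST API, which can be handy for scripting. A sketch using the standard repo search endpoint; adjust the scheme and drop `-k` if your route uses a trusted certificate:

```
GITEA_HOST=$(oc get route gitea -n scm -o jsonpath='{.spec.host}')

# List the repositories visible to the admin account
curl -sk -u gitea-admin:openshift https://${GITEA_HOST}/api/v1/repos/search
```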