This repository contains resources to deploy and practice OpenShift Pipelines.
- 1. Introduction
- 2. Installation of the operator
- 3. Useful Tasks for the pipelines
- 4. Creating pipelines
- 5. Running pipelines
- 6. Triggers
- 7. Secrets for authentication
- Annex A: Installing Tekton CLI
- Annex B: Tutorial
- Annex C: Default ClusterTasks
- Annex D: Retrieve the etc-pki-entitlement secret
- Annex E: Run Tekton Pipelines on Quay Image Mirroring Notifications
Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details.
What do we understand as Cloud Native CI/CD?

- **Containers**: Built for container apps and runs on Kubernetes.
- **Serverless**: Runs serverless with no CI/CD engine to manage and maintain.
- **DevOps**: Designed with microservices and distributed teams in mind.
To get more details, you can check the official documentation.
Installing the operator is as easy as creating a `Subscription` in the `openshift-operators` project. The Pipelines Operator is installed for All namespaces:

```bash
oc apply -f 00-installation/1-subscription.yaml
```
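The subscription manifest itself is not reproduced in this document. As a sketch, `00-installation/1-subscription.yaml` typically looks like the following; the channel and catalog source values are assumptions, so check the OperatorHub entry in your cluster for the exact values:

```yaml
# Hypothetical sketch of 00-installation/1-subscription.yaml.
# Channel and catalog source names are assumptions; verify them
# against the OperatorHub entry in your cluster.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator-rh
  namespace: openshift-operators
spec:
  channel: latest
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```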
In order to separate the pipeline executions from the operator itself, we will create another namespace where the pipelines will be executed:

```bash
oc process -f 00-installation/2-project.yaml | oc apply -f -
```
If you have Grafana installed in the cluster, you can create the following dashboards to monitor the status of pipeline executions. There are two dashboards obtained from different sources:

- `tekton-overview.json` from the Grafana dashboards marketplace. This chart tries to provide a full overview.
- `tekton-dashboard.json` from the jenkins-x-charts organization. This chart is focused on TaskRuns and PipelineRuns.
```bash
oc process -f https://raw.githubusercontent.com/alvarolop/quarkus-observability-app/main/openshift/grafana/grafana-04-dashboard.yaml \
    -p DASHBOARD_GZIP="$(cat 00-installation/tekton-overview.json | gzip | base64 -w0)" \
    -p DASHBOARD_NAME=tekton-overview \
    -p CUSTOM_FOLDER_NAME="Openshift Pipelines" | oc apply -f -
```

```bash
oc process -f https://raw.githubusercontent.com/alvarolop/quarkus-observability-app/main/openshift/grafana/grafana-04-dashboard.yaml \
    -p DASHBOARD_GZIP="$(cat 00-installation/tekton-dashboard.json | gzip | base64 -w0)" \
    -p DASHBOARD_NAME=tekton-dashboard \
    -p CUSTOM_FOLDER_NAME="Openshift Pipelines" | oc apply -f -
```
In general, Tekton exposes several relevant metrics. All of them are detailed in the upstream documentation.
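The level of detail of those metrics can be tuned through the operator's `TektonConfig` resource. The following is only a sketch: the field names are taken from the upstream `config-observability` options and should be verified against your installed operator version:

```yaml
# Sketch: tuning Tekton metrics granularity via TektonConfig.
# Field names follow the upstream config-observability options;
# verify them against your operator version before applying.
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    metrics.taskrun.level: task
    metrics.taskrun.duration-type: histogram
    metrics.pipelinerun.level: pipeline
    metrics.pipelinerun.duration-type: histogram
```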
This section contains several tasks that can be useful for the pipelines.
| Task | New | Description |
|---|---|---|
| | ✔️ | List files from a mounted workspace. |
| | ✔️ | Download the latest release of a GitHub release from a given organization/repository. |
| | ✔️ | Change permissions of a file. |
| | ❌ | Patched version of the Skopeo copy task. The feature to bulk-copy several images in the same Pipeline execution is currently affected by this bug. |
| | ❌ | Patched version of the Buildah task where the Task also mounts the |
In order to create all the tasks, just execute the following command. It will create all the `Task` objects in the `ocp-pipelines` project:

```bash
oc apply -k openshift/tasks
```
| Pipeline | Description |
|---|---|
| | Copy one container image from one container repository to another. |
| | Copy several images listed in a ConfigMap from one container repository to another. |
| | This pipeline builds an image by cloning a git repo and building the image using its Dockerfile and the default |
| | This pipeline modifies the default |
| | This pipeline also builds a container image and includes a binary downloaded from the GH releases section of any repository (By default, it installs |
| | Pipeline that copies the |
```bash
oc apply -k openshift/pipelines
```
```bash
oc create -f pipelineruns/11-copy-image.yaml
oc create -f pipelineruns/12-copy-bulk-images.yaml
oc create -f pipelineruns/21-build-image.yaml
oc create -f pipelineruns/22-build-image-entitlements.yaml
oc create -f pipelineruns/31-copy-entitlements.yaml
oc create -f pipelineruns/23-build-image-download.yaml
```
If you don’t want to use the previous pipeline, you can do it manually with the following command:
```bash
oc get secret etc-pki-entitlement -n openshift-config-managed -o yaml | \
  yq 'del(.metadata.creationTimestamp, .metadata.uid, .metadata.resourceVersion, .metadata.namespace, .metadata.managedFields)' | \
  oc create -n pipelines -f -
```
Triggers capture external events, such as a Git pull request, and process them to extract key pieces of information. Triggers consist of four different CRDs that work together:

- The **TriggerBinding** resource extracts the fields from an event payload and stores them as parameters.
- The **TriggerTemplate** resource defines a blueprint for the resources that must be created when an event is received.
- The **Trigger** resource combines the TriggerBinding and TriggerTemplate resources and, optionally, the interceptors event processor.
- The **EventListener** resource provides an endpoint, or an event sink, that listens for incoming HTTP-based events with a JSON payload.
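To make the relationship between these resources concrete, here is a minimal sketch of how they wire together. All names (`example-*`), the payload field, and the pipeline parameters are hypothetical, not taken from this repository:

```yaml
# Hypothetical minimal wiring of TriggerBinding, TriggerTemplate
# and EventListener. All names and payload fields are illustrative.
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: example-binding
spec:
  params:
    - name: git-revision
      value: $(body.head_commit.id)  # extracted from the JSON payload
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: example-template
spec:
  params:
    - name: git-revision
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: example-run-
      spec:
        pipelineRef:
          name: example-pipeline  # hypothetical pipeline name
        params:
          - name: revision
            value: $(tt.params.git-revision)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: example-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - name: example-trigger
      bindings:
        - ref: example-binding
      template:
        ref: example-template
```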
The following example uses a Kubernetes CronJob to implement a basic cron trigger that runs every minute. This works by using a cron job that emits an HTTP request to the EventListener Service endpoint.
```bash
oc process -f 40-triggers/1-copy-images.yaml | oc apply -f -
```
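The template applied above is not reproduced here. A minimal sketch of such a CronJob could look like the following; the container image and listener name are assumptions, relying on the Tekton convention that the EventListener Service is named `el-<listener-name>` and listens on port 8080:

```yaml
# Hypothetical sketch of a CronJob that fires the EventListener
# every minute. Image and listener/service names are assumptions.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron-trigger
spec:
  schedule: "*/1 * * * *"   # every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: http-request
              image: registry.access.redhat.com/ubi9/ubi-minimal
              command:
                - curl
                - -s
                - -X
                - POST
                - -H
                - "Content-Type: application/json"
                - -d
                - "{}"
                - http://el-example-listener:8080
```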
In many practical use cases, you might need to pull from private Git repositories or push to an external container registry such as Quay.io. In this section, we will summarize how to create the `Secrets` to configure all the credentials.
To use the `registry.redhat.io` registry, you need a Red Hat login. To consume container images from registry.redhat.io in shared environments such as OpenShift, it is recommended that an administrator use a Registry Service Account (also referred to as authentication tokens) instead of an individual's Customer Portal credentials. Service Accounts are managed via the Registry Service Account management application.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: $SECRET_NAME
data:
  .dockerconfigjson: $DOCKER_CONFIG_FILE_CONTENT
type: kubernetes.io/dockerconfigjson
```
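As a sketch, the `$DOCKER_CONFIG_FILE_CONTENT` value is simply the base64-encoded content of the Docker config file generated for your Registry Service Account. The file content and token below are placeholders:

```shell
# Encode a Docker config file for the .dockerconfigjson field.
# The content below is a placeholder; use the config file
# generated for your Registry Service Account instead.
cat > /tmp/docker-config.json <<'EOF'
{"auths": {"registry.redhat.io": {"auth": "PLACEHOLDER_TOKEN"}}}
EOF
DOCKER_CONFIG_FILE_CONTENT=$(base64 -w0 < /tmp/docker-config.json)
echo "$DOCKER_CONFIG_FILE_CONTENT"
```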
Once you create the file with its contents, you can apply it to the cluster like this:
```bash
oc apply -f secrets/rh-registry-sa.yaml -n ocp-pipelines
oc secrets link -n ocp-pipelines pipeline rh-registry-sa
```
For more information, check the full KCS article.
To clone a private repository in the pipeline, the `pipeline` Service Account will need to be able to authenticate against the repository. There are two main options for this authentication: a username and token (or a PAT if using GitHub), or an SSH private key.
```bash
oc create secret generic git-ssh-key-secret --type=kubernetes.io/ssh-auth --from-file=ssh-privatekey=$LOCATION_PRIVATE_KEY -n pipelines
oc annotate secret git-ssh-key-secret tekton.dev/git-0="$GIT_PRIVATE_URL"
oc secrets link pipeline git-ssh-key-secret
```
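The first two commands above can also be expressed declaratively. A sketch of the resulting Secret, with the annotation value and key material left as placeholders:

```yaml
# Declarative equivalent of the SSH-auth secret created above.
# The git server URL and key material are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: git-ssh-key-secret
  annotations:
    tekton.dev/git-0: "$GIT_PRIVATE_URL"
type: kubernetes.io/ssh-auth
stringData:
  ssh-privatekey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    <private key content>
    -----END OPENSSH PRIVATE KEY-----
```

Note that the Secret still has to be linked to the `pipeline` Service Account with `oc secrets link`.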
```bash
oc create secret generic gh-pat-secret -n pipelines \
    --type=kubernetes.io/basic-auth \
    --from-literal=username=$GITHUB_USERNAME \
    --from-literal=password=$GITHUB_PAT
oc annotate secret gh-pat-secret tekton.dev/git-0="$GIT_PRIVATE_URL"
oc secrets link pipeline gh-pat-secret -n pipelines
```
For more information about PAT creation and configuration, you can follow the instructions in the following workshop guidelines.
Robot accounts are a way to access repositories without requiring a human user account. A robot account has its own credentials, generated by Quay and linked to an Organization. To create a Robot Account and get its credentials, you have to access the Quay web console. For this repository, we are going to use my personal Quay organization, which is located at: https://quay.io/user/alopezme.
Using an admin account, access the organization, go to the Robot Accounts section, and click on `Create Robot Account`. After creating the account, click on it to directly download the Kubernetes secret definition that you have to apply in your namespace.
Once you create the file with its contents, you can apply it to the cluster like this:
```bash
oc apply -f secrets/quay-alopezme-pull-secret.yaml -n ocp-pipelines
oc secrets link -n ocp-pipelines pipeline quay-alopezme-pull-secret
```
For more information, you can access the documentation of the on-premise installation of Quay.
To get the most out of OpenShift Pipelines, you will need to download and install the `tkn` command-line tool. You can download it from the Tekton documentation or directly from your OpenShift cluster.
If you want a tutorial to learn OpenShift Pipelines, I recommend this tutorial from Red Hat Scholars.
The OpenShift Pipelines Operator configures several ClusterTasks by default. Here you can find a summary of them for documentation purposes:
```text
$ tkn clustertasks list
NAME                        DESCRIPTION           AGE
argocd-task-sync-and-wait   This task syncs (de...  2 days ago
buildah                     Buildah task builds...  2 days ago
git-cli                     This task can be us...  2 days ago
git-clone                   These Tasks are Git...  2 days ago
helm-upgrade-from-repo      These tasks will in...  2 days ago
helm-upgrade-from-source    These tasks will in...  2 days ago
jib-maven                   This Task builds Ja...  2 days ago
kn                          This Task performs ...  2 days ago
kn-apply                    This task deploys a...  2 days ago
kubeconfig-creator          This Task do a simi...  2 days ago
maven                       This Task can be us...  2 days ago
openshift-client            This task runs comm...  2 days ago
pull-request                This Task allows a ...  2 days ago
s2i-dotnet                  s2i-dotnet task fet...  2 days ago
s2i-go                      s2i-go task clones ...  2 days ago
s2i-java                    s2i-java task clone...  2 days ago
s2i-nodejs                  s2i-nodejs task clo...  2 days ago
s2i-perl                    s2i-perl task clone...  2 days ago
s2i-php                     s2i-php task clones...  2 days ago
s2i-python                  s2i-python task clo...  2 days ago
s2i-ruby                    s2i-ruby task clone...  2 days ago
skopeo-copy                 Skopeo is a command...  2 days ago
tkn                         This task performs ...  2 days ago
trigger-jenkins-job         The following task ...  2 days ago
```
In some cases, the Insights Operator cannot retrieve the `etc-pki-entitlement` secret correctly. In such cases, it is possible to download the certificate from the Red Hat management console. If this is your case…
>> Click Here <<
In this section, we are going to explore the proposal by siamaksade in Red Hat’s Blog: Keep Your Applications Secure With Automatic Rebuilds and in his GitHub repository quay-mirror-pipeline. For more information:
>> Click Here <<