# Installing Tekton Pipelines
Use this page to add the component to an existing Kubernetes cluster.
- Installing Tekton Pipelines
- Installing Tekton Pipelines on OpenShift/MiniShift
You need a Kubernetes cluster running version 1.11 or later. If you don't have an existing cluster:

```bash
# Example cluster creation command on GKE
gcloud container clusters create $CLUSTER_NAME \
  --zone=$CLUSTER_ZONE
```
Grant cluster-admin permissions to the current user:
```bash
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)
```
See Role-based access control for more information.
The versions of Tekton Pipelines available are:
- Officially released versions
- Nightly releases, published every night
- `HEAD`: to install the most recent, unreleased code in the repo, see the development guide
## Installing Tekton Pipelines
To add the Tekton Pipelines component to an existing cluster:
```bash
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
```
(Previous versions are available at `previous/$VERSION_NUMBER`, e.g. https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.2.0/release.yaml.)
Run the `kubectl get` command to monitor the Tekton Pipelines components until all of the components show a `STATUS` of `Running`:

```bash
kubectl get pods --namespace tekton-pipelines
```
Tip: Instead of running the `kubectl get` command multiple times, you can append the `--watch` flag to view the components' status updates in real time. Use CTRL + C to exit watch mode.
You are now ready to create and run Tekton Pipelines.
## Installing Tekton Pipelines on OpenShift/MiniShift
The `tekton-pipelines-controller` service account needs the `anyuid` security context constraint in order to run the webhook pod. See Security Context Constraints for more information.
First, log in as a user with `cluster-admin` privileges. The following example uses the default `system:admin` user:

```bash
# For MiniShift: oc login -u admin:admin
oc login -u system:admin
```
Run the following commands to set up the project/namespace, and to install Tekton Pipelines:
```bash
oc new-project tekton-pipelines
oc adm policy add-scc-to-user anyuid -z tekton-pipelines-controller
oc apply --filename https://storage.googleapis.com/tekton-releases/latest/release.yaml
```
See here for an overview of the `oc` command-line tool for OpenShift.
Run the `oc get` command to monitor the Tekton Pipelines components until all of the components show a `STATUS` of `Running`:

```bash
oc get pods --namespace tekton-pipelines --watch
```
## Configuring Tekton Pipelines
### How are resources shared between tasks
The PVC option can be configured using a ConfigMap with the name `config-artifact-pvc` and the following attributes:

- `size`: the size of the volume (`5Gi` by default)
- `storageClassName`: the storage class of the volume (the cluster's default storage class by default). The possible values depend on the cluster configuration and the underlying infrastructure provider.
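For example, a minimal `config-artifact-pvc` ConfigMap might look like the following sketch (the `size` and `storageClassName` values here are illustrative; pick values that match your cluster):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-pvc
data:
  size: 10Gi                  # illustrative; defaults to 5Gi if unset
  storageClassName: standard  # illustrative; valid values depend on your cluster
```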
The GCS storage bucket or the S3 bucket can be configured using a ConfigMap with the name `config-artifact-bucket` with the following attributes:

- `location`: the address of the bucket (for example `gs://mybucket` or `s3://mybucket`)
- `bucket.service.account.secret.name`: the name of the secret that will contain the credentials for the service account with access to the bucket
- `bucket.service.account.secret.key`: the key in the secret with the required service account JSON
- `bucket.service.account.field.name`: the name of the environment variable to use when specifying the secret path. Defaults to `GOOGLE_APPLICATION_CREDENTIALS`. Set to `BOTO_CONFIG` if using S3 instead of GCS.

It is recommended to configure the bucket with a retention policy after which files will be deleted.
Note: When using an S3 bucket, the bucket must be located in the us-east-1 region. This limitation comes from using gsutil with a Boto configuration behind the scenes to access the S3 bucket.
A typical configuration to use an S3 bucket is shown below:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tekton-storage
type: kubernetes.io/opaque
stringData:
  boto-config: |
    [Credentials]
    aws_access_key_id = AWS_ACCESS_KEY_ID
    aws_secret_access_key = AWS_SECRET_ACCESS_KEY
    [s3]
    host = s3.us-east-1.amazonaws.com
    [Boto]
    https_validate_certificates = True
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
data:
  location: s3://mybucket
  bucket.service.account.secret.name: tekton-storage
  bucket.service.account.secret.key: boto-config
  bucket.service.account.field.name: BOTO_CONFIG
```
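For comparison, a GCS configuration could look like the following sketch. The secret name `gcs-creds` and the key `service_account.json` are illustrative; use whatever names hold your service account's JSON key:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gcs-creds               # illustrative name
type: kubernetes.io/opaque
stringData:
  service_account.json: |      # illustrative key; paste the service account's JSON key here
    { "type": "service_account", ... }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
data:
  location: gs://mybucket
  bucket.service.account.secret.name: gcs-creds
  bucket.service.account.secret.key: service_account.json
  # bucket.service.account.field.name is omitted: it defaults to GOOGLE_APPLICATION_CREDENTIALS
```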
Both options provide the same functionality to the pipeline. The choice depends on the infrastructure used: for example, on some Kubernetes platforms the creation of a persistent volume can be slower than uploading or downloading files to a bucket, and if the cluster runs in multiple zones, access to the persistent volume can fail.
### Overriding default ServiceAccount used for TaskRun and PipelineRun
The `config-defaults` ConfigMap can be used to override the default service account, e.g. to override the default service account (`default`) with `tekton`, apply the following:

```yaml
# config-defaults.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
data:
  default-service-account: "tekton"
```
The `_example` key lists the keys that can be overridden and their default values.
The release Task can be used for creating a custom release of Tekton Pipelines. This can be useful for advanced users that need to configure the container images built and used by the Pipelines components.