Installing Tekton Pipelines

Use this page to add Tekton Pipelines to an existing Kubernetes cluster.

Prerequisites

  1. A Kubernetes cluster version 1.11 or later (if you don't have an existing cluster):

    # Example cluster creation command on GKE
    gcloud container clusters create $CLUSTER_NAME \
      --zone=$CLUSTER_ZONE
  2. Grant cluster-admin permissions to the current user:

    kubectl create clusterrolebinding cluster-admin-binding \
      --clusterrole=cluster-admin \
      --user=$(gcloud config get-value core/account)

    See Role-based access control for more information.
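
    To confirm that the binding grants your user cluster-admin rights, a quick check such as the one below can help; the command is standard kubectl and should print "yes":

    # Prints "yes" when the current user is allowed to perform any action on any resource
    kubectl auth can-i '*' '*' --all-namespaces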

Versions

The versions of Tekton Pipelines available are:

Installing Tekton Pipelines

To add the Tekton Pipelines component to an existing cluster:

  1. Run the kubectl apply command to install Tekton Pipelines and its dependencies:

    kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

    (Previous versions are available at previous/$VERSION_NUMBER, e.g. https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.2.0/release.yaml.)

  2. Run the kubectl get command to monitor the Tekton Pipelines components until all of the components show a STATUS of Running:

    kubectl get pods --namespace tekton-pipelines

    Tip: Instead of running the kubectl get command multiple times, you can append the --watch flag to view the components' status updates in real time. Use CTRL + C to exit watch mode.
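
    For example, the same command with the flag appended:

    kubectl get pods --namespace tekton-pipelines --watch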

You are now ready to create and run Tekton Pipelines. A minimal smoke test is sketched below.
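
The manifest below defines a Task with a single step that prints a message, plus a TaskRun that executes it. It is a hedged sketch: the names echo-hello and echo-hello-run are arbitrary, and the tekton.dev/v1alpha1 API version is assumed to match the release line referenced above (newer releases use tekton.dev/v1beta1).

# A minimal Task with one step that echoes a message
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: echo-hello
spec:
  steps:
    - name: echo
      image: ubuntu
      command:
        - echo
      args:
        - "hello from Tekton"
---
# A TaskRun that executes the Task above
apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: echo-hello-run
spec:
  taskRef:
    name: echo-hello

Save the manifest to a file, apply it with kubectl apply --filename, and follow progress with kubectl get taskrun echo-hello-run --watch.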

Installing Tekton Pipelines on OpenShift/MiniShift

The tekton-pipelines-controller service account needs the anyuid security context constraint in order to run the webhook pod.

See Security Context Constraints for more information.

  1. First, log in as a user with cluster-admin privileges. The following example uses the default system:admin user (admin:admin for MiniShift):

    # For MiniShift: oc login -u admin:admin
    oc login -u system:admin
  2. Run the following commands to set up the project/namespace, and to install Tekton Pipelines:

    oc new-project tekton-pipelines
    oc adm policy add-scc-to-user anyuid -z tekton-pipelines-controller
    oc apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

    See the OpenShift documentation for an overview of the oc command-line tool.

  3. Run the oc get command to monitor the Tekton Pipelines components until all of the components show a STATUS of Running:

    oc get pods --namespace tekton-pipelines --watch

Configuring Tekton Pipelines

How are resources shared between tasks

Pipelines need a way to share resources between tasks. The options are a persistent volume (PVC), an S3 bucket, or a GCS storage bucket.

The PVC option can be configured using a ConfigMap named config-artifact-pvc with the following attributes (a sketch follows the list):

  • size: the size of the volume (5Gi by default)
  • storageClassName: the storage class of the volume (the cluster's default storage class by default). The possible values depend on the cluster configuration and the underlying infrastructure provider.
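
For example, a config-artifact-pvc ConfigMap might look like the sketch below. The size and storage class values are illustrative, and the ConfigMap is assumed to live in the tekton-pipelines namespace created by the install.

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-pvc
  namespace: tekton-pipelines   # namespace where Tekton Pipelines is installed
data:
  size: 10Gi                    # overrides the 5Gi default
  storageClassName: standard    # illustrative; depends on the cluster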

The GCS storage bucket or the S3 bucket can be configured using a ConfigMap named config-artifact-bucket with the following attributes:

  • location: the address of the bucket (for example gs://mybucket or s3://mybucket)
  • bucket.service.account.secret.name: the name of the secret containing the credentials for the service account with access to the bucket
  • bucket.service.account.secret.key: the key in the secret that holds the required service account JSON
  • bucket.service.account.field.name: the name of the environment variable to use when specifying the secret path. Defaults to GOOGLE_APPLICATION_CREDENTIALS. Set to BOTO_CONFIG if using S3 instead of GCS.

It is recommended to configure the bucket with a retention policy so that files are deleted after some period of time.

Note: When using an S3 bucket, the bucket must be located in the us-east-1 region. This limitation comes from using gsutil with a boto configuration behind the scenes to access the S3 bucket.

A typical configuration for using an S3 bucket is shown below:

apiVersion: v1
kind: Secret
metadata:
  name: tekton-storage
type: Opaque
stringData:
  boto-config: |
    [Credentials]
    aws_access_key_id = AWS_ACCESS_KEY_ID
    aws_secret_access_key = AWS_SECRET_ACCESS_KEY
    [s3]
    host = s3.us-east-1.amazonaws.com
    [Boto]
    https_validate_certificates = True
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
data:
  location: s3://mybucket
  bucket.service.account.secret.name: tekton-storage
  bucket.service.account.secret.key: boto-config
  bucket.service.account.field.name: BOTO_CONFIG
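
Assuming the manifest above is saved to a file such as s3-artifact-bucket.yaml (the file name is arbitrary), it can be applied with:

kubectl apply --filename s3-artifact-bucket.yaml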

Both options provide the same functionality to the pipeline. The choice depends on the infrastructure used: on some Kubernetes platforms, creating a persistent volume can be slower than uploading or downloading files to a bucket, and if the cluster runs in multiple zones, access to the persistent volume can fail.

Overriding default ServiceAccount used for TaskRun and PipelineRun

The ConfigMap config-defaults can be used to override the default service account. For example, to change the default service account from default to tekton, apply the following:

### config-defaults.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults          # name read by the Tekton Pipelines controller
  namespace: tekton-pipelines    # namespace where Tekton Pipelines is installed
data:
  default-service-account: "tekton"
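
Assuming the manifest is saved as config-defaults.yaml, it can be applied with:

kubectl apply --filename config-defaults.yaml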

NOTE: The _example key in the config-defaults ConfigMap lists the keys that can be overridden and their default values.

Custom Releases

The release Task can be used for creating a custom release of Tekton Pipelines. This can be useful for advanced users who need to configure the container images built and used by the Pipelines components.


Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License.