Developing

Getting started

  1. Ramp up on kubernetes and CRDs
  2. Create a GitHub account
  3. Set up GitHub access via SSH
  4. Set up your development environment
  5. Create and checkout a repo fork
  6. Set up a Kubernetes cluster
  7. Configure kubectl to use your cluster
  8. Set up a docker repository you can push to

Then you can iterate (including running the controllers with ko).

Ramp up

Welcome to the project!! You may find it helpful to ramp up on some of the technology this project is built on. This project extends Kubernetes (aka k8s) with Custom Resource Definitions (CRDs); the upstream Kubernetes documentation on CRDs is a good place to start.

At this point, you may find it useful to return to the Tekton Pipeline docs.

Environment Setup

You must install these tools:

  1. git: For source control

  2. go: The language Tekton Pipelines is built in. You need go version v1.15 or higher.

Your $GOPATH setting is critical for ko apply to function properly: a successful run will typically involve building and pushing images instead of only configuring Kubernetes resources.

To run your controllers with ko you'll need to set these environment variables (we recommend adding them to your .bashrc):

  1. GOPATH: If you don't have one, simply pick a directory and add export GOPATH=...
  2. $GOPATH/bin on PATH: This is so that tooling installed via go get will work properly.
  3. KO_DOCKER_REPO: The docker repository to which developer images should be pushed (e.g. gcr.io/[gcloud-project]). You can also run a local registry and set KO_DOCKER_REPO to reference it (e.g. at localhost:5000/mypipelineimages); a sketch of this setup follows below.
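
For instance, you can stand up a throwaway local registry with Docker (a minimal sketch; the container name, port, and image path are arbitrary):

# Run a local registry container on port 5000
docker run -d --name local-registry -p 5000:5000 registry:2
# Point ko at it
export KO_DOCKER_REPO='localhost:5000/mypipelineimages'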

.bashrc example:

export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-name'

Make sure to configure authentication for your KO_DOCKER_REPO if required. To be able to push images to gcr.io/<project>, you need to run this once:

gcloud auth configure-docker

Additionally, if your KO_DOCKER_REPO registry is private, you should also create a secret and add it to the tekton-pipelines-controller and tekton-pipelines-webhook ServiceAccounts so they can pull images.

  • Create a secret
kubectl create secret docker-registry ${SECRET_NAME} \
  --docker-username=${USERNAME} \
  --docker-password=${PASSWORD} \
  --docker-email=me@here.com \
  --namespace=tekton-pipelines
  • Add it to the ServiceAccounts

Because you will install tekton-pipelines with ko later, you first need to change the ServiceAccount definitions as follows:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-pipelines-controller
  namespace: tekton-pipelines
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: default
    app.kubernetes.io/part-of: tekton-pipelines
imagePullSecrets:
  - name: ${SECRET_NAME}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-pipelines-webhook
  namespace: tekton-pipelines
  labels:
    app.kubernetes.io/component: webhook
    app.kubernetes.io/instance: default
    app.kubernetes.io/part-of: tekton-pipelines
imagePullSecrets:
  - name: ${SECRET_NAME}
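
If the release is already installed, you can instead patch the secret into the existing ServiceAccounts (a sketch using standard kubectl; adjust the secret name to match the one created above):

kubectl patch serviceaccount tekton-pipelines-controller -n tekton-pipelines \
  -p "{\"imagePullSecrets\": [{\"name\": \"${SECRET_NAME}\"}]}"
kubectl patch serviceaccount tekton-pipelines-webhook -n tekton-pipelines \
  -p "{\"imagePullSecrets\": [{\"name\": \"${SECRET_NAME}\"}]}"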

After setting GOPATH and putting $GOPATH/bin on your PATH, you must then install these tools:

  1. ko: For development. ko version v0.5.1 or higher is required for Tekton Pipelines to work correctly.

  2. kubectl: For interacting with your kube cluster
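
For example, ko can be installed with the Go toolchain, and both tools checked against the version requirements above (a sketch, assuming a module-aware Go 1.16+ install; ko releases are also published on GitHub):

# Install ko into $GOPATH/bin (already on your PATH)
go install github.com/google/ko@latest
# Verify versions
ko version
kubectl version --client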

The user you are using to interact with your k8s cluster must be a cluster admin to create role bindings:

# Using gcloud to get your current user
USER=$(gcloud config get-value core/account)
# Make that user a cluster admin
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user="${USER}"

Install in custom namespace

  1. To install into a different namespace you can use this script:
#!/usr/bin/env bash
set -e

# Set your target namespace here
TARGET_NAMESPACE=new-target-namespace

# Rename the Namespace object and rewrite every namespace: field, then apply
ko resolve -f config | sed -e '/kind: Namespace/!b;n;n;s/:.*/: '"${TARGET_NAMESPACE}"'/' | \
    sed "s/namespace: tekton-pipelines$/namespace: ${TARGET_NAMESPACE}/" | \
    kubectl apply -f-

# Tell the controllers which namespace they are running in
kubectl set env deployments --all SYSTEM_NAMESPACE=${TARGET_NAMESPACE} -n ${TARGET_NAMESPACE}
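
You can then check that the components came up in the target namespace:

kubectl get pods -n ${TARGET_NAMESPACE}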

Checkout your fork

The Go tools require that you clone the repository to the src/github.com/tektoncd/pipeline directory in your GOPATH.

To check out this repository:

  1. Create your own fork of this repo
  2. Clone it to your machine:
mkdir -p ${GOPATH}/src/github.com/tektoncd
cd ${GOPATH}/src/github.com/tektoncd
git clone git@github.com:${YOUR_GITHUB_USERNAME}/pipeline.git
cd pipeline
git remote add upstream git@github.com:tektoncd/pipeline.git
git remote set-url --push upstream no_push

Adding the upstream remote sets you up nicely for regularly syncing your fork.
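
For example, a routine sync of your fork might look like this (a sketch, assuming the default branch is master and your fork is the origin remote):

git fetch upstream
git checkout master
git rebase upstream/master
git push origin master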

Kubernetes cluster

The recommended configuration is:

  • Kubernetes version 1.17 or later
  • 4 vCPU nodes (n1-standard-4)
  • Node autoscaling, up to 3 nodes
  • API scopes for cloud-platform

To set up a cluster using Minikube:
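
For example (a sketch; the flags mirror the recommended configuration above, and the exact version is illustrative):

minikube start --cpus=4 --memory=8192 --kubernetes-version=v1.17.0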

To set up a cluster with Docker Desktop, enable Kubernetes in the Docker Desktop settings.

To set up a cluster with GKE:

  1. Install the required tools and set up your GCP project. You may find it useful to save the ID of the project in an environment variable (e.g. PROJECT_ID).

  2. Create a GKE cluster (the example below pins --cluster-version=1.17, but you can use any version 1.17 or later):

    export PROJECT_ID=my-gcp-project
    export CLUSTER_NAME=mycoolcluster
    
    gcloud container clusters create $CLUSTER_NAME \
     --enable-autoscaling \
     --min-nodes=1 \
     --max-nodes=3 \
     --scopes=cloud-platform \
     --enable-basic-auth \
     --no-issue-client-certificate \
     --project=$PROJECT_ID \
     --region=us-central1 \
     --machine-type=n1-standard-4 \
     --image-type=cos \
     --num-nodes=1 \
     --cluster-version=1.17

    Note that the --scopes argument to gcloud container clusters create controls what GCP resources the cluster's default service account has access to; for example, to give the default service account full access to your GCR registry, you can add storage-full to your --scopes arg (e.g. --scopes=cloud-platform,storage-full).

  3. Grant cluster-admin permissions to the current user:

    kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value core/account)
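
If kubectl is not already pointed at the new cluster (gcloud normally configures this during cluster creation), you can fetch credentials explicitly before running the command above (a sketch reusing the variables from step 2):

gcloud container clusters get-credentials $CLUSTER_NAME \
  --region=us-central1 \
  --project=$PROJECT_ID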

Iterating

While iterating on the project, you may need to:

  1. Install/Run everything
  2. Verify it's working by looking at the logs
  3. Update your (external) dependencies with: ./hack/update-deps.sh.
  4. Update your type definitions with: ./hack/update-codegen.sh.
  5. Update your OpenAPI specs with: ./hack/update-openapigen.sh.
  6. Add new CRD types
  7. Add and run tests
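
As a concrete example of steps 3-5, a typical refresh from the repository root is:

./hack/update-deps.sh
./hack/update-codegen.sh
./hack/update-openapigen.sh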

To make changes to these CRDs, you will probably interact with the API type definitions (pkg/apis), the reconcilers (pkg/reconciler), and the generated clients (pkg/client).

Install Pipeline

You can stand up a version of this controller on-cluster (to your kubectl config current-context):

ko apply -f config/

Redeploy controller

As you make changes to the code, you can redeploy your controller with:

ko apply -f config/controller.yaml

Tear it down

You can clean up everything with:

ko delete -f config/

Accessing logs

To look at the controller logs, run:

kubectl -n tekton-pipelines logs $(kubectl -n tekton-pipelines get pods -l app=tekton-pipelines-controller -o name)

To look at the webhook logs, run:

kubectl -n tekton-pipelines logs $(kubectl -n tekton-pipelines get pods -l app=tekton-pipelines-webhook -o name)
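
To stream either log as it is produced, you can add the standard -f flag and target the deployment directly (a sketch):

kubectl -n tekton-pipelines logs -f deployment/tekton-pipelines-controller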

To look at the logs for individual TaskRuns or PipelineRuns, see docs on accessing logs.

Adding new types

If you need to add a new CRD type, you will need to add:

  1. A yaml definition in config/
  2. Add the type to the cluster roles in config/
  3. Add Go structs for the types in pkg/apis/pipeline/v1alpha1, e.g. condition_types.go. These should implement the Defaultable and Validatable interfaces, as they are needed for the webhook in the next step.
  4. Register it with the webhook
  5. Add the new type to the list of known types

See the API compatibility policy.