# The prow cluster

The prow cluster is where we run Prow, which currently handles most of tektoncd's CI automation, though we are gradually moving this over to our own dogfooding infrastructure.

See the community docs for more on Prow and the PR process, and see Prow's own docs.

## Prow secrets

Secrets which have been applied to the prow cluster but are not committed here are:

- GitHub personal access tokens:
  - `bot-token-github` in the default namespace
  - `bot-token-github` in the github-admin namespace
  - `hmac-token`, the token used to validate GitHub webhook payloads
  - `oauth-token`, a GitHub access token for tekton-robot, used by Prow itself as well as by containers started by Prow via the Prow config. See the GitHub secret Prow docs.
- GCP secrets:
  - `test-account`, a token for the service account prow-account@tekton-releases.iam.gserviceaccount.com. This account can interact with GCP resources such as uploading Prow results to GCS (done directly from the containers started by Prow, configured in config.yaml) and interacting with boskos clusters.
  - Nightly release secret: `nightly-account`, a token for the nightly-release GCP service account
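
None of these secrets are committed, so re-creating the cluster means re-creating them by hand. A minimal sketch of the kubectl commands involved, assuming the token material has been saved to local files (the file names here are hypothetical; the key names follow Prow's conventions but should be checked against prow.yaml):

```bash
# Hypothetical local files holding the token material; substitute the real values.
kubectl create secret generic hmac-token --from-file=hmac=./hmac-token.txt
kubectl create secret generic oauth-token --from-file=oauth=./oauth-token.txt
kubectl create secret generic test-account --from-file=service-account.json=./prow-account.json
```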

## Creating the Prow cluster

If you need to re-create the Prow cluster (which includes the boskos instance running inside it), you will need to:

1. Create a new cluster
2. Create the necessary secrets
3. Apply the new Prow and Boskos
4. Set up ingress
5. Update GitHub webhook(s)

### Creating the cluster

To create a cluster of the right size, using the same GCP project:

```bash
export PROJECT_ID=tekton-releases
export CLUSTER_NAME=tekton-plumbing

gcloud container clusters create $CLUSTER_NAME \
  --scopes=cloud-platform \
  --enable-basic-auth \
  --issue-client-certificate \
  --project=$PROJECT_ID \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --image-type=cos \
  --num-nodes=8 \
  --cluster-version=latest
```

Note that us-central1-a is a zone, so it is passed via `--zone` (consistent with the get-credentials commands later in this doc).
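
Once the cluster exists, point kubectl at it before applying anything, mirroring the get-credentials commands used later in this document:

```bash
gcloud container clusters get-credentials $CLUSTER_NAME --zone us-central1-a --project $PROJECT_ID
```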

### Start it

Apply the Prow and boskos configuration:

```bash
# Deploy boskos
kubectl apply -f boskos/boskos.yaml # Must be applied first to create the namespace
kubectl apply -f boskos/boskos-config.yaml
kubectl apply -f boskos/storage-class.yaml

# Deploy GitHub Proxy
kubectl apply -f prow/gce-ssd-retain_storageclass.yaml
kubectl apply -f prow/ghproxy.yaml

# Deploy Prow
kubectl apply -f prow/prowjob-schemaless_customresourcedefinition.yaml
kubectl apply -f prow/prow.yaml
kubectl apply -f prow/cherrypicker_deployment.yaml
kubectl apply -f prow/cherrypicker_service.yaml

# Deploy daemonset to configure fs.inotify.max_user_[watches,instances] via sysctl.
# This is to deal with kind having issues like https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
kubectl apply -f prow/tune-sysctls_daemonset.yaml

# Create Prow's configuration
kubectl create configmap config --from-file=config.yaml=prow/config.yaml
kubectl create configmap plugins --from-file=plugins.yaml=prow/plugins.yaml
```
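
As a quick sanity check after applying everything, verify that the pods come up and that both ConfigMaps exist (boskos lives in its own namespace, created by boskos.yaml):

```bash
# All Prow and boskos pods should eventually reach Running
kubectl get pods --all-namespaces

# The config and plugins ConfigMaps should both be present in the default namespace
kubectl get configmap config plugins
```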

### Ingress

To get ingress working properly, you must:

- Install and configure cert-manager. cert-manager can be installed via Helm using this guide; a sketch follows this list.
- Apply the ingress resource and update the prow.tekton.dev DNS configuration.
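
A minimal sketch of a Helm-based cert-manager install, following cert-manager's upstream chart (the exact flags and version should come from the guide linked above):

```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
```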

To apply the ingress resource:

```bash
# Apply the ingress resource, configured to use `prow.tekton.dev`
kubectl apply -f prow/ingress.yaml
```

To see the IP of the ingress in the new cluster:

```bash
kubectl get ingress ing
```
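
To pull out just the address (assuming a standard Ingress status; depending on the load balancer the field may be `ip` or `hostname`):

```bash
kubectl get ingress ing -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```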

You should be able to navigate to this endpoint in your browser and see the Prow landing page.

Then you can update https://prow.tekton.dev to point at the cluster's ingress address. (It is not clear who has access to this domain name registration; someone in the Linux Foundation? dlorenc@ can provide more info.)

### Update GitHub webhook

You will need to configure GitHub's webhook(s) to point at the ingress of the new Prow cluster (or you can use the domain name).

For tektoncd this is configured at the org level.

- github.com/tektoncd -> Settings -> Webhooks -> http://some-ingress-ip/hook

Update the value of the webhook with http://ingress-address/hook (see the Ingress section above for how to obtain the ingress IP). The settings should look roughly like the sketch below.
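
The content type and secret requirements come from Prow's webhook handling: payloads must be JSON, and the secret must match the `hmac-token` described above.

```
Payload URL:  http://<ingress-address>/hook
Content type: application/json
Secret:       <the value stored in the hmac-token secret>
Events:       Send me everything
```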

### OAuth Setup

OAuth Setup is done following the official guide. The "Prow" OAuth GitHub application is defined in the tektoncd GitHub org.

## Updating Prow itself

Prow has been installed by taking the starter.yaml and modifying it for our needs.

Updating (e.g. bumping the versions of the images being used) requires:

1. If you are feeling cautious and motivated, manually back up the config values by hand (see prow.yaml to see which values will be changed).

2. Manually update the image values and apply any other config changes found in the starter.yaml to our prow.yaml.

3. Update the utility_images in our config.yaml if the version of the plank component has changed (the shape of that stanza is sketched after this list).

4. Apply the new configuration with:

   ```bash
   # Step 1: Configure kubectl to use the cluster; doesn't have to be via
   # gcloud, but gcloud makes it easy
   gcloud container clusters get-credentials prow --zone us-central1-a --project tekton-releases

   # Step 2: Update Prow itself
   kubectl apply -f prow/prow.yaml

   # Step 3: Update the configuration used by Prow
   kubectl create configmap config --from-file=config.yaml=prow/config.yaml --dry-run=client -o yaml | kubectl replace -f -

   # Step 4: Remember to configure kubectl to connect to your regular cluster!
   gcloud container clusters get-credentials ...
   ```
5. Verify that the changes work by opening a PR and manually inspecting the logs of each check, in case Prow has gotten into a state where failures are being reported as successes.
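
For reference, the utility_images stanza mentioned in step 3 sits under plank's decoration config in config.yaml. A hedged sketch of its shape, following upstream Prow's schema (the image tags are placeholders, not current values; older Prow releases use `default_decoration_config`, singular, instead):

```yaml
plank:
  default_decoration_configs:
    "*":
      utility_images:
        clonerefs: "gcr.io/k8s-prow/clonerefs:<tag>"    # placeholder tag
        initupload: "gcr.io/k8s-prow/initupload:<tag>"  # placeholder tag
        entrypoint: "gcr.io/k8s-prow/entrypoint:<tag>"  # placeholder tag
        sidecar: "gcr.io/k8s-prow/sidecar:<tag>"        # placeholder tag
```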

These values have been removed from the original starter.yaml:

- The ConfigMap values plugins and config, because they are generated from config.yaml and plugins.yaml
- The Services which were manually configured with a ClusterIP and other routing information (deck, tide, hook)
- The Ingress ing - configuration for this is in ingress.yaml
- The statusreconciler Deployment, etc. - created #54 to investigate adding this
- The Role values that give pod permissions in the default namespace as well as test-pods - the intention seems to be that test-pods be used to run the pods themselves, but we don't currently have that configured in our config.yaml

## Tekton Pipelines with Prow

Tekton Pipelines is also installed in the prow cluster so that Prow can trigger the execution of PipelineRuns.

Prow supports the Tekton Pipelines v1alpha1 API, which is available up to Pipelines v0.13.1:

```bash
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.13.1/release.yaml
```

See also Tekton Pipelines installation instructions.
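
For context on how Prow triggers PipelineRuns: upstream Prow's pipeline controller runs jobs declared with `agent: tekton-pipeline` and an inline `pipeline_run_spec` in config.yaml. A minimal sketch, with hypothetical job and pipeline names:

```yaml
presubmits:
  tektoncd/plumbing:
  - name: example-pipeline-check    # hypothetical job name
    agent: tekton-pipeline          # handled by Prow's pipeline controller
    pipeline_run_spec:              # a v1alpha1 PipelineRunSpec
      pipelineRef:
        name: example-pipeline      # hypothetical Pipeline installed in this cluster
```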

## Updating Prow configuration

Changes to config.yaml are automatically applied to the Prow cluster by a Tekton task that runs in the dogfooding cluster.

To apply the configuration "manually":

```bash
# Step 1: Configure kubectl to use the cluster; doesn't have to be via
# gcloud, but gcloud makes it easy
gcloud container clusters get-credentials prow --zone us-central1-a --project tekton-releases

# Step 2: Update the configuration used by Prow
kubectl create configmap config --from-file=config.yaml=prow/config.yaml --dry-run=client -o yaml | kubectl replace -f -

# Step 3: Remember to configure kubectl to connect to your regular cluster!
gcloud container clusters get-credentials ...
```
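
Optionally, the configuration can be validated before it is applied using upstream Prow's checkconfig tool. A minimal sketch, assuming Docker is available locally (the image tag is an example, not necessarily the version this cluster runs):

```bash
# Validate config.yaml and plugins.yaml against Prow's schema
docker run --rm -v "$PWD/prow:/prow:ro" gcr.io/k8s-prow/checkconfig:latest \
  --config-path=/prow/config.yaml \
  --plugin-config=/prow/plugins.yaml
```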