Using Google Container Builder, you can go from pushing code to having a new container built and uploaded into Google Container Registry, but you'll still need to upgrade your deployment manually to point to the new version.
This package automatically updates deployments based on successful Container Builder builds.
The design is fairly simple:

- It's a Python script that listens to the Google PubSub topic `cloud_builds`, which Container Builder populates with status messages as it starts and completes builds.
- When it sees a message with a status of `SUCCESS`, it uses the Kubernetes API to `PATCH` the existing deployment configuration to use the new version.
This works well enough, but I'm only using it for my toy software, so you may want to be a bit more rigorous in your evaluation!
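To make the `PATCH` step concrete, here is a minimal sketch of how a deployment can be pointed at a new image through the Kubernetes API, assuming access through `kubectl proxy`. The function names are my own for illustration, not the actual `ci.py` internals; the patch body shape and the strategic-merge-patch content type come from the Kubernetes API.

```python
import json
import urllib.request

def build_patch(container_name, image):
    """Strategic-merge patch body that points one container at a new image."""
    return {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": container_name, "image": image}
                    ]
                }
            }
        }
    }

def patch_deployment(api_base, namespace, deployment, container_name, image):
    """PATCH a deployment through the Kubernetes API, e.g. api_base
    pointing at a local `kubectl proxy` such as http://localhost:8001."""
    url = ("%s/apis/extensions/v1beta1/namespaces/%s/deployments/%s"
           % (api_base, namespace, deployment))
    body = json.dumps(build_patch(container_name, image)).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        method="PATCH",
        headers={"Content-Type": "application/strategic-merge-patch+json"},
    )
    return urllib.request.urlopen(req)
```

Because the patch only names the fields that change, the rest of the deployment spec is left untouched by the merge.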
Run locally
You can play around with this script by running it locally and accessing your cluster over the Kubernetes proxy. First, run kubectl proxy in some other terminal:
```shell
git clone git@github.com:lethain/gke_ci.git
cd gke_ci
virtualenv env
. ./env/bin/activate
pip install -r requirements.txt
python ci.py GKE-PROJECT-ID --loc http://localhost:8001
```
Then trigger a build on Google Container Builder and you're good to go.
There are two ways to deploy: either fork the repository and add it to your continuous build pipeline, or check out the repository, build it once, and upload it to your private registry.
In both cases, you'll need to create a deployment.yaml to configure the deployment and do some setup on Google to create resources:
First, you'll need to create a service account for consuming the PubSub messages; perhaps name it service-account-gke-ci. Give it the 'Pub/Sub Admin' role.
Download the secrets for that service account, and upload them:
```shell
kubectl create secret generic gke-ci --from-file ./file-with-secrets.json
```
Create a PubSub subscription for the cloud_builds topic and give it a name of your choice.
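For context, each message on that subscription carries the build resource as base64-encoded JSON, including a `status` field and the list of built images. Here is a hypothetical sketch of extracting the deployable images from one message (field handling assumed, not the actual `ci.py` code):

```python
import base64
import json

def extract_successful_images(pubsub_message):
    """Given one message from the cloud_builds topic, return the list of
    images to deploy, or an empty list if the build didn't succeed.

    Assumes the build resource arrives base64-encoded in the message's
    `data` field, with `status` and `images` keys.
    """
    build = json.loads(base64.b64decode(pubsub_message["data"]).decode("utf-8"))
    if build.get("status") != "SUCCESS":
        return []
    return build.get("images", [])
```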
Finally, create your deployment.yaml:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gke-ci
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gke-ci
    spec:
      containers:
      - image: gcr.io/larson-deployment/gke-ci:0.1
        env:
        - name: "GOOGLE_APPLICATION_CREDENTIALS"
          value: "/var/run/secret/cloud.google.com/file-with-secrets.json"
        volumeMounts:
        - name: "service-account"
          mountPath: "/var/run/secret/cloud.google.com"
        imagePullPolicy: Always
        name: gke-ci
        command: ["/usr/bin/python"]
        args: ["ci.py", "gke_ci"]
      volumes:
      - name: "service-account"
        secret:
          secretName: gke-ci
```
Then provision it via:
```shell
kubectl apply -f deployment.yaml
```
After that, you should be good to go!
CI for your CI
The easiest way to get the container into your private repo is
to make a private fork of this repository,
mirror that to Google Source Repository, and actually have it
self-upgrade! I think, conceptually, even in that case it would not miss
triggering other deploys when it upgrades itself, although that might require
removing the "try-finally" block in the
run function to fully eliminate the
chance of dropping an in-flight message.
Anyway, it's pretty remarkable in my mind to have a CI system that deploys itself!
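To make that try/finally trade-off concrete, here is a hypothetical sketch of a message handler that acknowledges in `finally` (the names and the injected `deploy`/`ack` callables are assumptions for illustration, not the real `ci.py` API):

```python
def process_message(message, deploy, ack):
    """Handle one build notification.

    Acking in `finally` guarantees the message is consumed exactly once,
    even if `deploy` kills the current process -- e.g. when the tool
    redeploys itself mid-handling. The cost is that a message acked this
    way but never fully deployed is lost; removing the try/finally would
    instead let PubSub redeliver it to the upgraded instance.
    """
    try:
        if message.get("status") == "SUCCESS":
            deploy(message)
    finally:
        ack(message)
```

The test of this behavior is that `ack` runs even when `deploy` blows up partway through.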
Build the container yourself via the following (I've had some trouble getting these instructions to work; I actually deploy using the third method described below):
```shell
export GP="your-project"
git clone git@github.com:lethain/gke_ci.git
cd gke_ci
gcloud docker -a
docker build -t gcr.io/$GP/gke-ci .
docker tag CONTAINER_ID gcr.io/$GP/gke-ci:0.1
gcloud docker -- push gcr.io/$GP/gke-ci
```
Your deployment should detect the image and upgrade appropriately.