This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[stable/concourse] Upgrade to Concourse 3.8.0.
Make necessary improvements to Concourse worker lifecycle management.

Add additional fatal errors emitted as of Concourse 3.8.0 that should
trigger a restart, and remove "unknown volume" as one such error, since it
happens normally when running multiple concourse-web pods.

Try to start workers with a clean slate by cleaning up previous
incarnations of a worker. Call retire-worker before starting. Also
clear the concourse-work-dir before starting.

Call retire-worker in a loop and don't exit that loop until the old
worker is gone. This allows us to remove the fixed worker.postStopDelaySeconds
duration.

Add a note about persistent volumes being necessary.
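
For reference, picking up this chart release with a Helm 2 client looks roughly like the following; the release name `my-release` is an assumption, not part of this commit:

```console
$ helm repo update
$ helm upgrade my-release stable/concourse --version 0.11.0
```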
Will Tran committed Jan 3, 2018
1 parent 1f7e0c6 commit b72c88c
Showing 4 changed files with 26 additions and 25 deletions.
4 changes: 2 additions & 2 deletions stable/concourse/Chart.yaml
@@ -1,6 +1,6 @@
name: concourse
-version: 0.10.7
-appVersion: 3.6.0
+version: 0.11.0
+appVersion: 3.8.0
description: Concourse is a simple and scalable CI system.
icon: https://avatars1.githubusercontent.com/u/7809479
keywords:
13 changes: 7 additions & 6 deletions stable/concourse/README.md
@@ -55,7 +55,7 @@ $ kubectl scale statefulset my-release-worker --replicas=3

### Restarting workers

-If a worker isn't taking on work, you can restart the worker with `kubectl delete pod`. This will initiate a graceful shutdown by "retiring" the worker, with some waiting time before the worker starts up again to ensure concourse doesn't try looking for old volumes on the new worker. The values `worker.postStopDelaySeconds` and `worker.terminationGracePeriodSeconds` can be used to tune this.
+If a worker isn't taking on work, you can restart the worker with `kubectl delete pod`. This will initiate a graceful shutdown by "retiring" the worker, to ensure Concourse doesn't try looking for old volumes on the new worker. The value `worker.terminationGracePeriodSeconds` can be used to provide an upper limit on graceful shutdown time before forcefully terminating the container.
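
As a concrete example — the release name `my-release` is an assumption, and the `--set` override is optional — restarting the first worker and raising the grace period looks like this:

```console
$ kubectl delete pod my-release-worker-0
$ helm upgrade my-release stable/concourse --set worker.terminationGracePeriodSeconds=120
```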

### Worker Liveness Probe

@@ -68,7 +68,7 @@ The following tables lists the configurable parameters of the Concourse chart an
| Parameter | Description | Default |
| ----------------------- | ---------------------------------- | ---------------------------------------------------------- |
| `image` | Concourse image | `concourse/concourse` |
-| `imageTag` | Concourse image version | `3.3.2` |
+| `imageTag` | Concourse image version | `3.8.0` |
| `imagePullPolicy` |Concourse image pull policy | `Always` if `imageTag` is `latest`, else `IfNotPresent` |
| `concourse.username` | Concourse Basic Authentication Username | `concourse` |
| `concourse.password` | Concourse Basic Authentication Password | `concourse` |
@@ -124,8 +124,7 @@ The following tables lists the configurable parameters of the Concourse chart an
| `worker.minAvailable` | Minimum number of workers available after an eviction | `1` |
| `worker.resources` | Concourse Worker resource requests and limits | `{requests: {cpu: "100m", memory: "512Mi"}}` |
| `worker.additionalAffinities` | Additional affinities to apply to worker pods. E.g: node affinity | `nil` |
-| `worker.postStopDelaySeconds` | Time to wait after graceful shutdown of worker before starting up again | `60` |
-| `worker.terminationGracePeriodSeconds` | Upper bound for graceful shutdown, including `worker.postStopDelaySeconds` | `120` |
+| `worker.terminationGracePeriodSeconds` | Upper bound for graceful shutdown to allow the worker to drain its tasks | `60` |
| `worker.fatalErrors` | Newline delimited strings which, when logged, should trigger a restart of the worker | *See [values.yaml](values.yaml)* |
| `worker.updateStrategy` | `OnDelete` or `RollingUpdate` (requires Kubernetes >= 1.7) | `RollingUpdate` |
| `worker.podManagementPolicy` | `OrderedReady` or `Parallel` (requires Kubernetes >= 1.7) | `Parallel` |
@@ -203,7 +202,7 @@ concourse:
< Insert the contents of your concourse-keys/worker_key.pub file >
```
-Alternativelly, you can provide those keys to `helm install` via parameters:
+Alternatively, you can provide those keys to `helm install` via parameters:


```console
@@ -241,6 +240,8 @@ persistence:
size: "20Gi"
```
+It is highly recommended to use Persistent Volumes for Concourse Workers; otherwise container images managed by the Worker are stored in an `emptyDir` volume on the node's disk. This will interfere with k8s ImageGC and the node's disk will fill up as a result. This will be fixed in a future release of k8s: https://github.com/kubernetes/kubernetes/pull/57020
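
A quick way to confirm that worker storage is in fact backed by persistent volumes rather than `emptyDir` is to inspect the claims and volumes in the release's namespace; a minimal check, independent of release name:

```console
$ kubectl get pvc   # worker claims should be listed and Bound
$ kubectl get pv    # each claim should be bound to a real PersistentVolume
```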

### Ingress TLS

If your cluster allows automatic creation/retrieval of TLS certificates (e.g. [kube-lego](https://github.com/jetstack/kube-lego)), please refer to the documentation for that mechanism.
@@ -326,5 +327,5 @@ credentialManager:
## initial periodic token issued for concourse
## ref: https://www.vaultproject.io/docs/concepts/tokens.html#periodic-tokens
##
-clientToken: PERIODIC_VAULT_TOKEN
+clientToken: PERIODIC_VAULT_TOKEN
```
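
The periodic token itself can be minted ahead of time with the Vault CLI. A minimal sketch, assuming a Vault policy named `concourse` already exists and a recent Vault client; the policy name and period are illustrative only:

```console
$ vault token create -policy=concourse -period=24h
```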
14 changes: 10 additions & 4 deletions stable/concourse/templates/worker-statefulset.yaml
@@ -34,8 +34,11 @@ spec:
- -c
- |-
cp /dev/null /concourse-work-dir/.liveness_probe
+rm -rf /concourse-work-dir/*
+while ! concourse retire-worker --name=${HOSTNAME} | grep -q worker-not-found; do
+  sleep 5
+done
concourse worker --name=${HOSTNAME} | tee -a /concourse-work-dir/.liveness_probe
-sleep ${POST_STOP_DELAY_SECONDS}
livenessProbe:
exec:
command:
@@ -56,9 +59,12 @@ spec:
preStop:
exec:
command:
-- "/bin/sh"
-- "-c"
-- "concourse retire-worker --name=${HOSTNAME}"
+- /bin/sh
+- -c
+- |-
+  while ! concourse retire-worker --name=${HOSTNAME} | grep -q worker-not-found; do
+    sleep 5
+  done
env:
- name: CONCOURSE_TSA_HOST
valueFrom:
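To see the new lifecycle in action — assuming a release named `my-release` and a `fly` target named `my-target`, both of which are assumptions — deleting a worker pod should now retire the old worker before its replacement registers:

```console
$ kubectl delete pod my-release-worker-0   # preStop runs the retire loop above
$ fly -t my-target workers                 # the old worker drops out, then the new one appears
```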
20 changes: 7 additions & 13 deletions stable/concourse/values.yaml
@@ -13,7 +13,7 @@ image: concourse/concourse
## Concourse image version.
## ref: https://hub.docker.com/r/concourse/concourse/tags/
##
-imageTag: "3.5.0"
+imageTag: "3.8.0"

## Specify a imagePullPolicy: 'Always' if imageTag is 'latest', else set to 'IfNotPresent'.
## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -449,24 +449,18 @@ worker:
# value: "value"
# effect: "NoSchedule"

-## Time to delay after the worker process shuts down. This inserts time between shutdown and startup
-## to avoid errors caused by a worker restart.
-postStopDelaySeconds: 60

-## Time to allow the pod to terminate before being forcefully terminated. This should include
-## postStopDelaySeconds, and should additionally provide time for the worker to retire, e.g.
-## = postStopDelaySeconds + max time to allow the worker to drain its tasks. See
-## https://concourse.ci/worker-internals.html for worker lifecycle semantics.
-terminationGracePeriodSeconds: 120
+## Time to allow the pod to terminate before being forcefully terminated. This should provide time for
+## the worker to retire, i.e. drain its tasks. See https://concourse.ci/worker-internals.html for worker
+## lifecycle semantics.
+terminationGracePeriodSeconds: 60

## If any of the strings are found in logs, the worker's livenessProbe will fail and trigger a pod restart.
## Specify one string per line, exact matching is used.
##
## "guardian.api.garden-server.create.failed" appears when the worker's filesystem has issues.
-## "unknown handle" appears if a worker didn't cleanly restart.
fatalErrors: |-
guardian.api.garden-server.create.failed
-unknown handle
+guardian.api.garden-server.run.failed
+baggageclaim.api.volume-server.create-volume-async.failed-to-create
## Strategy for StatefulSet updates (requires Kubernetes 1.6+)
## Ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset
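Because these strings are matched against the worker's log output (tee'd into `.liveness_probe` in the statefulset above), a candidate pattern can be sanity-checked against a running worker before it is added to `worker.fatalErrors`. A rough check, assuming a release named `my-release`:

```console
$ kubectl logs my-release-worker-0 | grep -c "baggageclaim.api.volume-server.create-volume-async.failed-to-create"
```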
