Rob Bygrave edited this page May 19, 2018 · 2 revisions

Docker, Kubernetes and Continuous Delivery

In a world of Continuous Delivery of Docker containers into Kubernetes clusters, some things that have traditionally been less important become more important. These include:

  • Fast Startup
  • Well behaved Shutdown
  • Ability to operate well with resource constraints (CPU & Memory quota and limits)

Fast Startup

With CI/CD we are looking to roll out releases into production (and other environments) much more frequently than in the past. When rolling out a new version of a service with Blue/Green deployment (or similar) it is very beneficial to have fast startup times. Conversely, a service with very slow startup in a large cluster with many pods can make a version rollout very slow and resource expensive.

For example, if a service takes 1 minute to start and we roll out a release into a cluster with 5 pods this could take 5 minutes depending on the rollout policy.

Slow startup becomes a larger issue with more services and larger clusters in production. We are also doing Continuous Integration, and our build pipelines are looking to roll out releases for testing all the time.

Fast startup of services is important

Shutdown

In the CI/CD world we are looking to roll out releases without any impact to the users (or upstream services). We can do this via Blue/Green deployment or similar, which Kubernetes provides based on health checks.

Just as we start up more frequently we are also shutting down much more frequently, and as such we need to make sure we do this nicely, allowing active requests to complete.

When our applications are requested to shut down, they need to:

  • Change the Health check response
  • Delay the shutdown to allow any active requests to complete (with a reasonable timeout)
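The two steps above can be sketched in plain Java. This is a minimal illustration, not a specific framework's API — the class and method names here are hypothetical:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Minimal sketch of well-behaved shutdown: on SIGTERM, first fail the
 * health check so the load balancer stops routing new traffic to this pod,
 * then wait a grace period so in-flight requests can complete.
 */
public class GracefulShutdown {

  private final AtomicBoolean healthy = new AtomicBoolean(true);

  /** The health check endpoint would return 200 while this is true, 503 otherwise. */
  public boolean healthy() {
    return healthy.get();
  }

  /** Typically wired to a JVM shutdown hook (Runtime.getRuntime().addShutdownHook). */
  public void shutdown(long gracePeriodMillis) {
    healthy.set(false);           // health check now reports "not ready"
    try {
      // allow active requests to drain before stopping the server
      TimeUnit.MILLISECONDS.sleep(gracePeriodMillis);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    // ... then stop the HTTP server and close resources
  }
}
```

The grace period should be shorter than the pod's `terminationGracePeriodSeconds`, otherwise Kubernetes will kill the container before the drain completes.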

Docker friendly build - War and Uber Jars are not cool

A feature of Docker is layered file systems. When we build a War or Uber Jar our 'layer' is our application code plus all the dependencies. Typically the dependencies are vastly larger than our actual code.

Old Fashioned Classpath - lib/*

From a Docker build perspective we are a lot better off using a plain old classpath where we add our dependencies (lib/*) and then add our application code. As long as the dependencies do not change, Docker will use the cached layer for the dependencies.

An image built this way has a much smaller diff (only our application code). This means a faster build, faster pushes and pulls to the Docker registry, and much less disk space used.
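As a sketch of such a layered build (the base image, paths, and main class are illustrative assumptions, not from a particular project):

```dockerfile
FROM openjdk:8-jre-alpine

# Dependencies change rarely - this large layer is cached between builds
COPY target/lib /app/lib

# Application code changes every build - only this small layer is rebuilt
COPY target/classes /app/classes

ENTRYPOINT ["java", "-cp", "/app/classes:/app/lib/*", "com.example.Main"]
```

The key point is the ordering: the rarely-changing `lib` layer is copied before the frequently-changing application code, so a rebuild only invalidates the final small layer.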

Docker friendly - resource limits

A colleague describes this as being a good neighbour which I think is a good description. Each of our services (docker containers) should run with limits set for CPU and memory. This means that they will play nicely with all the other services that are running in the cluster and will not hog resources.

Be a Good Neighbour - play nicely with other services in the cluster and don't hog resources
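In Kubernetes these limits are declared on the container spec. A minimal fragment (the values here are illustrative, to be tuned per service):

```yaml
# Container spec fragment - request a baseline and cap the maximum
resources:
  requests:
    cpu: "250m"       # a quarter of a CPU guaranteed for scheduling
    memory: "256Mi"
  limits:
    cpu: "500m"       # throttled above half a CPU
    memory: "512Mi"   # OOM-killed above this
```

Note that for a JVM service the memory limit needs to account for heap plus off-heap usage, so the `-Xmx` setting should sit comfortably below the container limit.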

As developers running laptops with 8 CPUs and 32G RAM we sometimes don't notice when our services are a bit too slow and a bit too hungry. Right now I think Spring Boot falls into this category, with seemingly so much focus on auto-everything for developer convenience and little regard for startup time and resource consumption.

We need to consider limiting or completely removing classpath scanning.
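The alternative to classpath scanning is explicit registration: components are wired up directly in code at startup, so there is no need to walk every jar on the classpath looking for annotations. A tiny illustrative sketch (the `Registry` class here is hypothetical, not a real DI framework):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Tiny illustration of explicit registration: components are registered
 * directly at startup rather than discovered by scanning the classpath.
 * Startup cost is proportional to the number of components registered,
 * not the number of classes on the classpath.
 */
public class Registry {

  private final Map<Class<?>, Object> beans = new HashMap<>();

  public <T> void register(Class<T> type, T instance) {
    beans.put(type, instance);
  }

  public <T> T get(Class<T> type) {
    return type.cast(beans.get(type));
  }
}
```

Compile-time dependency injection (generating this wiring at build time) achieves the same effect without the hand-written registration.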