
Continuous Deployment


Our deployment process is fully automated in three stages through Travis. Whenever a developer pushes changes to one of the registered GitHub repos, Continuous Integration (CI) and Continuous Deployment (CD) are triggered. If CI finishes without errors, deployment to the experimental environment is started.
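To give an impression of what such a configuration could look like, here is a hedged sketch of a .travis.yml. The language, test command, image name and deploy script are placeholders; the real files live in the individual service repositories:

language: node_js                                            # placeholder; depends on the service
sudo: required
services:
  - docker
script:
  - npm test                                                 # stage 1: run the CI tests
after_success:
  - docker build -t slidewiki/exampleservice .               # stage 2: build the Docker image
  - docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
  - docker push slidewiki/exampleservice
  - ./deployment/deploy.sh                                   # stage 3: trigger deployment on the target server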

What’s running on the experimental server:

Our whole infrastructure is based on Docker. We're running MongoDB and an Nginx reverse-proxy server as Docker containers, as well as all the services we are developing.

Nginx Reverse-Proxy Server:

We're using a Docker container provided by Jason Wilder, called jwilder/nginx-proxy. This is an automated Nginx that listens for other containers that define an environment variable called VIRTUAL_HOST. Whenever such a container is found, the nginx-proxy container regenerates its Nginx configuration. As a result, the new container is registered under the subdomain specified in VIRTUAL_HOST and all incoming requests for that subdomain are routed to it. This also supports service scaling, given that a second container with the same subdomain is started.

How WE started this image (it is started once and runs permanently, outside of the CD pipeline):

docker run -d --restart="always" -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

Did you notice that we haven't touched the Nginx configuration a single time? That's really great and keeps things very simple.
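To illustrate the mechanism: any service container started with a VIRTUAL_HOST variable is picked up automatically. The image name and subdomain below are made-up placeholders:

docker run -d -e VIRTUAL_HOST=myservice.example.org --name myservice someorg/myservice

nginx-proxy notices the new container, regenerates its configuration and from then on forwards requests for myservice.example.org to that container.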

MongoDB

We're using MongoDB as our main database, which is also containerized, based on the official mongo image.

How WE started this image (it is started once and runs permanently, outside of the CD pipeline):

docker run -d --restart="always" --name mongodb mongo

Did you notice that we haven't bound any ports to the container? That's intended!
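Instead, other containers are linked to the mongodb container and reach it over Docker's internal network on the default port 27017, so the database is never exposed to the outside world. A quick, hypothetical way to verify the connection from a throwaway container:

docker run --rm --link mongodb:mongodb mongo mongo --host mongodb --eval 'db.stats()'

The second mongo here is the shell from the official image, connecting to the database container under the hostname mongodb.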

Our Services

We will run a lot of different services, for example slidewiki-platform and the deck-service. CD is, as mentioned earlier, handled through Travis. Travis builds a Docker image and subsequently pushes this image to our Docker Hub organization. In the next stage, a bash script runs against our target server (through the Docker daemon, which is exposed over the network; see the setup). This script pulls the new image, stops the currently running container and starts a new container based on the new image - so remember to build the services stateless!

The script looks like this:

#!/bin/bash

docker-compose pull # pull new image
docker-compose stop # stop the running container
echo y | docker-compose rm # remove the stopped container
docker-compose up -d # start a new container based on the pulled image
docker rmi $(docker images | grep "<none>" | awk '{print $3}') # remove dangling (untagged) images
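The script operates on the Docker daemon of the target server, which is exposed over the network (see above). A hedged sketch of what such a remote invocation could look like; the host name, port, certificate path and script name are placeholders:

export DOCKER_HOST=tcp://experimental.example.org:2376    # remote Docker daemon
export DOCKER_TLS_VERIFY=1                                 # require TLS
export DOCKER_CERT_PATH=~/.docker/experimental             # client certificates
./deploy.sh                                                # the script above; docker-compose honours DOCKER_HOST as well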

We're using docker-compose here because it allows us to specify the container configuration in files like these. There you'll find all the needed environment variables, as well as the links to the database and the other containers we need.
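A minimal sketch of such a docker-compose.yml; the service name, image and subdomain are made up for illustration, the real files are maintained per service:

exampleservice:
  image: slidewiki/exampleservice:latest
  restart: always
  environment:
    - VIRTUAL_HOST=exampleservice.experimental.example.org   # picked up by nginx-proxy
  external_links:
    - mongodb:mongodb                                         # the database container started above

Because the mongodb container was started outside of compose, it is referenced via external_links rather than links.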

Whenever we're certain that a service has reached a good quality level, it is published to our beta environment (by pushing slidewiki-platform to a specific branch). The process is the same as before. Just have a look at the next picture to get an impression:

Deployment Environments

Do we have an infrastructure picture?

Of course we have one:

Deployment structure

Why all this Docker overhead?

Because it sandboxes applications AND we can exchange the deployment server in less than 30 minutes. To do this (see the sketch after this list):

  1. Install Docker and docker-compose on the new server
  2. Start the Nginx reverse-proxy and MongoDB containers
  3. Exchange the deployment URL inside Snap-CI (keep an eye on authentication)
  4. Grab a cold beer. You're done!
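A rough sketch of steps 1 and 2 on a fresh Debian/Ubuntu host (installing docker-compose via pip is just one option):

curl -fsSL https://get.docker.com | sh                               # step 1: install Docker
pip install docker-compose                                           # step 1: install docker-compose
docker run -d --restart="always" -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy    # step 2: reverse proxy
docker run -d --restart="always" --name mongodb mongo                # step 2: database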

Future Plans

  • Automatic deployment to three environments: experimental, beta, stable
    • deployments to beta and stable have to be triggered manually
  • Enable HTTPS with subdomain certificates or one certificate
    • Let's Encrypt is suitable for multiple certificates
    • automatic certificate creation while deploying