
Raspberry Pi Docker Swarm with Let's Encrypt

Jun 10, 2017

Following these steps, I built a four-node Raspberry Pi cluster that runs web services in Docker Swarm, secured with SSL certificates from Let's Encrypt.

Join me on mankind's quest to craft the most complex blogging platform possible.

Pi stack

Which of these rascals just served you this article? The answer will surprise you.

Why would you do this weird thing?

  • For fun.
  • To learn about Docker, Let's Encrypt, and maintaining web services.
  • To "save money" on hobby hosting costs. (Depending on how much you value your time.)

You should not do this weird thing if...

  • You suffer monetarily or emotionally from downtime.
  • You have people depending on your web services.
  • It is not a hobby of yours to fix things that are not remotely broken.

Parts list

  • Raspberry Pis. I used four Raspberry Pi 3 Model Bs. It doesn't matter how many. The more you have, the more services you can host. Don't use just one. That would be sad.
  • MicroSD cards. These SanDisks haven't let me down yet.
  • Power source. I used this USB charging station.
  • Micro USB cables. Like these.
  • Networking switch and ethernet cables. (Optional. You could try using wifi. Note that the Pi's ethernet adapter can only handle 100 Mbps, so don't spend extra on a gigabit switch.)
  • A fancy case or something, plus maybe some zip ties. (Optional. Perhaps you just scatter your gadgets all over your desk.)

Intangibles list

  • Home internet connection that allows forwarding ports 80 and 443. (Port forwarding optional. I'll mention a way around this if your ISP hates fun.)
  • One or more hot domain names.
  • Time.
  • Patience.

A word about Docker on Raspberry Pi

If you are excited about the prospect of booting up Docker on your Pis and going to town with all the millions of Docker images available, let me throw a wet blanket on that thought.

Yes, you can run Docker just fine on Raspberry Pi, but keep this in mind: Raspberry Pis run on an ARM processor whose architecture (ARMv8 on the Raspberry Pi 3) is different from the x86-64 architecture on which modern Intel and AMD CPUs are built.

When a Docker image is built on a machine with x86-64 architecture, it will only run properly on machines with the same architecture.

(This is an oversimplification and there are tools to target different architectures, but that is outside the scope of this article.)

Because nearly everyone builds Docker images on x86-64 machines, nearly all published Docker images are incompatible with Raspberry Pis.
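
You can see the mismatch for yourself. On the Pi, the kernel reports an ARM architecture, and an image built only for x86-64 dies at startup:

# On the Raspberry Pi:
$ uname -m
armv7l

# Trying to run an x86-64-only image typically fails with something like:
# standard_init_linux.go: exec format error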

So how do you get working images for Raspberry Pi? Two ways:

  • Find images that were expressly built for an ARM architecture. You'll often see images tagged with modifiers like arm or armhf or rpi. Luckily there are enough utility images published this way to give you a head start - stuff like ubuntu or alpine.
  • Roll up your sleeves and build them yourself!

How to write Dockerfiles for Raspberry Pi

  1. The image mentioned in the FROM line of your Dockerfile must be able to run on your Raspberry Pi. (Something like hypriot/rpi-node:boron.)
  2. The Dockerfile must produce an image capable of running on Raspberry Pi.

Remember to run docker build ... on your Raspberry Pi.

This sounds simple, but it can get complex! Note that this can be a recursive exercise if the base image you need to start FROM is not itself built with the ARM architecture in mind.
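
Putting both rules together, a minimal Pi-friendly Dockerfile might look like this sketch, using the ARM base image mentioned above (the app details are hypothetical):

# Dockerfile
FROM hypriot/rpi-node:boron
WORKDIR /app
COPY . .
RUN npm install --production
CMD ["node", "index.js"]

# Then build it on the Pi itself:
$ docker build -t you/your-app:armhf .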

Step two sounds like a no-brainer, but you might hit more snags than you think. Plenty of Dockerfiles do stuff like this:

RUN curl -sSLO https://raw.githubusercontent.com/ahaurw01/some-project/v13/some-program_x86-64.zip

Judging by the name, that binary will not run on the Raspberry Pi. Hopefully the project also publishes an ARM-compatible binary. Otherwise you need to build that tool yourself or find an alternative.

If you go to the trouble of building an image that runs on Raspberry Pi, share it on Docker Hub! I suggest naming or tagging your images in a way that makes it clear that they were built with the ARM architecture in mind.

Building your Pi cluster

I followed this howchoo.com article. Tyler does a great job of explaining what you need to do.

As you are setting things up, I recommend adding an ssh key to your Pis for GitHub or whatever source control provider you use. This will allow you to clone your repos on a Pi when you need to build images.

I definitely made use of his ProTip™ of using ApplePi Baker to copy images onto your other SD cards. I suggest doing this after you have set up Docker, ssh keys, and any other customizations on your first Pi.

When booting up the rest of the Pis after copying the image, don't forget to adjust the hostname and the password if you like.

I only made one node a manager instead of two. You do you.
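
Once the swarm is assembled, a quick sanity check from a manager should show every Pi as Ready and Active:

# Run on a manager node.
$ docker node ls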

If you also get annoyed typing sudo before every Docker command, follow these steps to allow the pi user to invoke docker without using sudo.

# These commands to be run on your Raspberry Pi
$ sudo groupadd docker
$ sudo gpasswd -a $USER docker
# Restart your ssh session and you should be able to run docker without sudo.

Expose yourself

DNS settings

If you don't have a domain, skip this step. Just plan on pasting your IP address into your browser every time you want to hit your website.

Say you registered the domain sassy-cat-pics.com. If you want to host a site on http://sassy-cat-pics.com or https://sassy-cat-pics.com, your Pi cluster needs to service requests made to port 80 and port 443.

Find out your household's IP address by running curl ipinfo.io in a terminal or just Googling "what is my IP".

Configure your DNS provider with an A record pointing sassy-cat-pics.com to your IP address (using your own domain, of course). You can configure multiple A records if you want multiple subdomains to be handled by your cluster, e.g. fat-and.sassy-cat-pics.com or disapproving.sassy-cat-pics.com.

Test your DNS configuration by running dig sassy-cat-pics.com in a terminal. It might take some time for DNS settings to propagate across the internet. Depends how many cats are stuck in the tubes at the moment.
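
If the record is live, dig should echo your public IP back. The address below is a placeholder:

$ dig +short sassy-cat-pics.com
203.0.113.42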

Consumer internet access is usually provided via a dynamic public IP address instead of a static one, meaning your household's public IP address can change over time. You can deal with this in one of several ways:

  • Don't worry about it if you don't plan on hosting a particular web service for very long.
  • Manually change your DNS A records if your public IP changes.
  • Use a service like No-IP or Dyn DNS. Some of these can even integrate with your router at home.
  • Write a service for your cluster to periodically update your DNS settings. This requires your DNS provider to expose an API of some kind. (A sketch follows this list.)
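
If your provider has an API, that updater can be a small cron'd shell script. Here's a minimal sketch; the endpoint and token are hypothetical stand-ins for your provider's real API:

#!/bin/sh
# Look up the current public IP and push it to the DNS provider.
IP=$(curl -s ipinfo.io/ip)
curl -s -X PUT "https://api.example-dns.com/zones/sassy-cat-pics.com/records/A" \
    -H "Authorization: Bearer $DNS_API_TOKEN" \
    -d "{\"content\": \"$IP\"}"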

Forward your ports

After setting up your DNS records, your home's router will be hit with requests for sassy-cat-pics.com, but it doesn't know where to send them within your network.

Log into your router and find the port forwarding settings. Forward TCP requests for ports 80 and 443 to the IP address of your primary swarm manager.

What if my ISP blocks requests to ports 80 or 443?

Not all is lost. There are tools like ngrok or PageKite that might fit your needs. But I've successfully used manual ssh tunneling to make this work, which is much more fun. The basic gist is this:

  1. Fire up a cheap cloud VM, like a t2.nano on EC2. This costs about $5 a month. You can get even lower by purchasing a reserved instance.
  2. Set up a DNS A record for your sassy-cat-pics.com domain to point at the public IP of your cloud VM.
  3. Install HAProxy or NGINX or your favorite reverse proxy on the VM. Set up Let's Encrypt on the VM for your domains.
  4. Have HAProxy or NGINX reverse proxy all requests to localhost:1337 or some other port. Be sure to set Host, X-Forwarded-For, and X-Forwarded-Proto headers.
  5. From one of your Pis, open an ssh tunnel along these lines. This causes the cloud VM to listen on port 1337 and forward requests to port 80 on your Pi.
ssh -R \*:1337:localhost:80 -nNT vm-user@sassy-cat-pics.com
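
One gotcha: by default sshd binds remote forwards to the VM's loopback interface only, silently ignoring the * above. That's fine if your reverse proxy targets localhost:1337; to bind other interfaces, enable GatewayPorts on the VM:

# On the cloud VM: let remote forwards choose their bind address.
$ echo 'GatewayPorts clientspecified' | sudo tee -a /etc/ssh/sshd_config
$ sudo systemctl restart sshd # The service may be named 'ssh' on Debian/Ubuntu.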

Riff off of the rest of this tutorial to set up your Docker Flow Proxy, just without the Let's Encrypt companion. I believe in you.

Testing it out

Time to start up a web service on your cluster to see if things are working. Try this out:

$ docker service create \
    --name sassy-cat \
    --replicas 3 \
    --publish 80:80 \
    ahaurw01/sassy-cat:armhf

This runs three tasks within a new service and publishes port 80 on every node of the swarm; a request to port 80 on any node is routed to one of the three tasks, regardless of whether that node is running one.

This means that the manager node - the one to which your router is forwarding port 80 - will send those port 80 requests to one of these tasks.
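
You can verify the mesh from inside your network by hitting any node directly, even one that isn't running a task. The IP below is a placeholder for one of your Pis; you should get a 200 back:

$ curl -sI http://192.168.1.50 | head -n 1
HTTP/1.1 200 OK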

Check to see if it is running properly.

$ docker service ps sassy-cat

Now hit it in your browser!

$ open http://sassy-cat-pics.com # Or whatever your domain is.

When that cat has looked at you long enough, you can tear down the service.

$ docker service rm sassy-cat

Docker Flow Proxy and friends

Docker Flow Proxy is a tool that load balances your Docker services from a single entry point and offers simple configuration. It will...

  • service requests to ports 80 and 443 from the public.
  • route requests to your services based on rules you define when creating them.

Docker Flow Swarm Listener is a tool that listens for Docker swarm events and reconfigures the proxy in response. It will...

  • pay attention to when you create and destroy services.
  • tell Docker Flow Proxy about these changes so it can reconfigure itself.

Docker Flow Let's Encrypt is a tool that creates and renews Let's Encrypt certificates and integrates with Docker Flow Proxy. It will...

  • use the official certbot tool to interact with Let's Encrypt.
  • create SSL certificates for the domains you want.
  • renew these certificates over time.

These pieces combined form a nice little framework for us to deploy load balanced encrypted web services. There is already great documentation available for these tools, but I will provide the setup needed for your Raspberry Pi cluster here.

Remember that thing about building images for ARM?

I have forked the above repos to be able to build Raspberry Pi-compatible images.

And I have published images on Docker Hub.

Or if you don't trust me - none taken - then you can always build these images yourself from my forks.
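
Building them yourself is the standard routine on one of your Pis. The clone URL below is an assumption based on the image name; use the actual fork links above:

$ git clone https://github.com/ahaurw01/docker-flow-proxy.git
$ cd docker-flow-proxy
$ docker build -t ahaurw01/docker-flow-proxy:armhf .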

Create Docker Flow services

ssh onto your primary manager and run these commands.

# Create a network through which these services will talk.
$ docker network create \
    --driver overlay \
    --subnet=192.168.5.0/24 \
    swarm-overlay

# Create the Docker Flow Proxy service.
$ docker service create \
    --name proxy \
    --network swarm-overlay \
    --publish mode=host,target=80,published=80 \
    --publish mode=host,target=443,published=443 \
    -e MODE=swarm \
    -e LISTENER_ADDRESS=swarm-listener \
    --constraint "node.id==`docker node inspect --format '{{ .ID }}' self`" \
    --replicas 1 \
    ahaurw01/docker-flow-proxy:armhf

# Create the Docker Flow Swarm Listener service.
$ docker service create \
    --name swarm-listener \
    --network swarm-overlay \
    --mount "type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock" \
    -e DF_NOTIFY_CREATE_SERVICE_URL=http://proxy:8080/v1/docker-flow-proxy/reconfigure \
    -e DF_NOTIFY_REMOVE_SERVICE_URL=http://proxy:8080/v1/docker-flow-proxy/remove \
    --constraint "node.id==`docker node inspect --format '{{ .ID }}' self`" \
    --replicas 1 \
    ahaurw01/docker-flow-swarm-listener:armhf

# Confirm that these services are up and running properly
# before moving forward.
$ docker service ps proxy
$ docker service logs -f proxy
$ docker service ps swarm-listener
$ docker service logs -f swarm-listener
$ open http://sassy-cat-pics.com # You should see a styled 503 message.

# Create the Docker Flow Let's Encrypt service.
# Substitute your own domains here. You can have as many
# DOMAIN_X environment variables as you want.
# Each certificate can cover up to 100 names (a Let's Encrypt limit).
$ sudo mkdir -p /etc/letsencrypt # Volume where certs will be saved.
$ docker service create \
    --name letsencrypt-companion \
    --network swarm-overlay \
    --mount "type=bind,source=/etc/letsencrypt,target=/etc/letsencrypt" \
    -e DOMAIN_1="('sassy-cat-pics.com' 'www.sassy-cat-pics.com' 'super.sassy-cat-pics.com')" \
    -e DOMAIN_2="('pleased-pupper-pics.com' 'www.pleased-pupper-pics.com' 'perfectly.pleased-pupper-pics.com')" \
    -e CERTBOT_EMAIL=your@email.com \
    -e PROXY_ADDRESS=proxy \
    -e CERTBOT_CRON_RENEW="('0 3 * * *' '0 15 * * *')" \
    --label com.df.servicePath=/.well-known/acme-challenge \
    --label com.df.notify=true \
    --label com.df.distribute=true \
    --label com.df.port=80 \
    --constraint "node.id==`docker node inspect --format '{{ .ID }}' self`" \
    --replicas 1 \
    ahaurw01/docker-flow-letsencrypt:armhf

# Check that your certs are falling into place.
$ docker service ps letsencrypt-companion
$ docker service logs -f letsencrypt-companion

Deploy some services

Fire up a sample service again to see if your proxy is working and certs are in place.

$ docker service create \
    --name sassy-cat \
    --network swarm-overlay \
    --label com.df.serviceDomain=sassy-cat-pics.com \
    --label com.df.notify=true \
    --label com.df.distribute=true \
    --label com.df.port=80 \
    --replicas 3 \
    ahaurw01/sassy-cat:armhf

You should now be able to visit http://sassy-cat-pics.com and https://sassy-cat-pics.com and see that mean mug.

Docker Flow Proxy has lots of fun options that allow you to customize the proxying. One I like is the ability to force https.

--label com.df.httpsOnly=true
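
You can set it at creation time like the other com.df labels, or tack it onto a running service. The swarm listener should notice the change and reconfigure the proxy:

$ docker service update \
    --label-add com.df.httpsOnly=true \
    sassy-cat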

Stacks instead of raw services

Docker swarm supports using Docker Compose-style files for defining groupings of services. The above example in a stack file can look like this.

# docker-compose-stack.yml
version: "3"

services:
  sassy-cat:
    image: ahaurw01/sassy-cat:armhf
    networks:
      - swarm-overlay
      - default
    deploy:
      replicas: 3
      labels:
        - com.df.serviceDomain=sassy-cat-pics.com
        - com.df.notify=true
        - com.df.distribute=true
        - com.df.port=80

networks:
  swarm-overlay:
    external: true
  default:
    external: false

Use this command to deploy the stack.

$ docker stack deploy -c docker-compose-stack.yml sassy-cat
$ docker stack services sassy-cat

Next steps

Check out these resources to make the most of your new cluster.

Try out Portainer as a front end to your cluster status.

$ docker service create \
    --name portainer \
    --network swarm-overlay \
    --label com.df.serviceDomain=admin.sassy-cat-pics.com \
    --label com.df.notify=true \
    --label com.df.distribute=true \
    --label com.df.port=9000 \
    --label com.df.httpsOnly=true \
    --constraint 'node.role == manager' \
    --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
    portainer/portainer:linux-arm-1.13.2 \
    -H unix:///var/run/docker.sock

Notes and tips

Seeing clients' IP addresses on requests

We noted earlier that publishing a service's ports to the swarm allows every node to service requests, regardless of whether it is running a task for that service.

An unfortunate side effect of this feature is that the originating IP address your service sees is not the IP address of the actual client across the internet. It instead looks something like 10.255.0.1, which belongs to an internal network that helps with this routing.

The workaround is to publish the proxy's ports using mode=host, which binds the container's ports 80 and 443 directly to ports 80 and 443 of the node on which the service is running. The implication of this approach is that you cannot run more than one replica of the proxy per node.

On top of this, you must configure the proxy to set a header indicating the IP address of the initial client. Set this label on your swarm service:

com.df.addReqHeader="X-Forwarded-For %[src]"
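
For example, adding it to the demo service from earlier (the listener relays the change to the proxy):

$ docker service update \
    --label-add 'com.df.addReqHeader=X-Forwarded-For %[src]' \
    sassy-cat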

If you don't care about any of this, then you can feel free to not use mode=host, remove the constraint, and run more than one replica of the proxy.

In any case, you should keep the defined settings for the swarm listener and Let's Encrypt companion.

Let's Encrypt usage limits

Every time you start the Let's Encrypt companion service it will try to create new certs, regardless of whether you've created them before.

You might hit usage limits if you restart it a lot. Let's Encrypt allows you to make up to five identical cert creation requests within a week. More if those requests include new or different subdomains.

If you are just testing, you can add an environment variable of CERTBOTMODE=staging to the companion service.
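
For example, flipping a running companion into staging mode (or pass -e CERTBOTMODE=staging when creating it):

$ docker service update \
    --env-add CERTBOTMODE=staging \
    letsencrypt-companion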

Let's Encrypt failures

You may see some failures from the Let's Encrypt script.

Check your work to make sure that you have spelled out the domains properly in the letsencrypt-companion service definition.

Check your ability to dig sassy-cat-pics.com or whatever your domain is.

If you just updated DNS settings, it may take a while for those changes to be seen by everyone across the internet. Just because you are able to resolve your domain name to the proper IP doesn't mean it isn't cached with an old value somewhere else on the internet.
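
To see what the rest of the internet sees, ask a public resolver directly instead of your local one:

$ dig +short @8.8.8.8 sassy-cat-pics.com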

If you think everything is set up properly but Let's Encrypt is still not able to reach your domain, then go watch some cat videos and try later.
