Please note: because this demo uses Powerstrip, which is only meant for prototyping Docker extensions, we do not recommend this configuration for anything approaching production usage. When Docker extensions become official, Flocker will support them. Until then, this is just a proof-of-concept.
We recently showed how you could use Docker Swarm to migrate a database container and its volume between hosts using only the native Docker Swarm CLI. Today we are going to show you how to do the same thing using only Kubernetes.
Kubernetes is great at orchestrating containers and Flocker is great at managing data volumes attached to containers.
Ideally, we want to use both systems together so we can orchestrate AND migrate containers. That is the aim of this demo: to show how, using Powerstrip, we can extend Docker with tools like Flocker while still using orchestration tools like Kubernetes.
We also need to network our Kubernetes cluster - an ideal tool for this is Weave, which allows us to allocate an IP address per container. In this example, we have fully integrated the Weave network into Kubernetes so each container is allocated an IP address from the Weave bridge.
This demo is the classic Kubernetes guestbook app that uses PHP and Redis.
The aim is to migrate the Redis container AND its data using nothing other than Kubernetes primitives.
We have labelled the two minions spinning and ssd to represent the types of disk they have. The Redis server is first allocated onto the node with the spinning disk and then migrated (along with its data) onto the node with an SSD drive. This represents a real-world migration where we realise that our database server needs a faster disk.
First you need to install Virtualbox and Vagrant:
We’ll use Virtualbox to supply the virtual machines that our Kubernetes cluster will run on.
We’ll use Vagrant to simulate our application stack locally. You could also run this demo on AWS or Rackspace with minimal modifications.
The first step is to clone this repo and start the 3 VMs.
$ git clone https://github.com/binocarlos/powerstrip-k8s-demo
$ cd powerstrip-k8s-demo
$ vagrant up
The next step is to SSH into the master node.
$ vagrant ssh master
We can now use the kubectl command to control our Kubernetes cluster:
master$ kubectl get nodes
NAME                LABELS              STATUS
democluster-node1   disktype=spinning   Ready
democluster-node2   disktype=ssd        Ready
Notice how we have labelled node1 with disktype=spinning and node2 with disktype=ssd. We will use these labels together with a nodeSelector for the Redis master pod. The nodeSelector is what decides which node the Redis container is scheduled onto.
The first step is to spin up the two services. Services are Kubernetes' way of dynamically routing around the cluster - you can read more about services here.
master$ kubectl create -f /vagrant/examples/guestbook/redis-master-service.json
master$ kubectl create -f /vagrant/examples/guestbook/frontend-service.json
We can check that those services were registered:
master$ kubectl get services
The next step is to start the redis master - we use a replication controller which has a nodeSelector set to disktype=spinning.
master$ kubectl create -f /vagrant/examples/guestbook/redis-master-controller.json
Once we have done this, we run kubectl get pods and wait for the redis-master to move from status Pending to status Running.

NOTE: it may take a short while before all pods enter the Running state.
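Rather than re-running kubectl get pods by hand, a small polling helper can do the waiting for you. This is a convenience sketch, not part of the demo repo - the kubectl invocation in the comment is the one from the text:

```shell
# Poll a command until its output matches a pattern.
# Note: no timeout handling, for brevity.
wait_for() {
  cmd=$1
  pattern=$2
  until $cmd | grep -q "$pattern"; do
    sleep 2
  done
}

# In the demo you would use it like:
#   wait_for "kubectl get pods" "redis-master.*Running"
```

The same helper works for every "wait for Running" step below.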
Now we start the PHP replication controller - this will start 3 PHP containers which all link to the redis-master service:
master$ kubectl create -f /vagrant/examples/guestbook/frontend-controller.json
Once we have run this, we run kubectl get pods and wait for our PHP pods to be in the Running state.

NOTE: it may take a short while before all pods enter the Running state.
Notice how the redis-master has been allocated onto node1 (democluster-node1):
master$ kubectl get pods | grep name=redis-master
redis-master-pod 10.2.2.8 redis-master dockerfile/redis democluster-node1/172.16.255.251 app=redis,name=redis-master Running About an hour
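The fifth column of that listing is the host. If you only want the node name a pod landed on, you can cut it out of the output - a small sketch against the sample line above (column positions may vary between kubectl versions):

```shell
# Sample line from "kubectl get pods", as shown above.
line='redis-master-pod 10.2.2.8 redis-master dockerfile/redis democluster-node1/172.16.255.251 app=redis,name=redis-master Running'

# Field 5 is host/hostIP; strip the IP after the slash to keep
# just the node name.
node=$(echo "$line" | awk '{print $5}' | cut -d/ -f1)
echo "$node"   # prints "democluster-node1"
```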
The next step is to load the app in your browser using the following address:
http://172.16.255.251:8000
This will load the guestbook application - make a couple of entries, clicking Submit after each entry.
Now it's time to tell Kubernetes to move the Redis container and its data onto node2 (the one with an SSD drive).
To do this, we change the nodeSelector for the pod template in the replication controller (from spinning to ssd):
master$ kubectl get rc redis-master -o yaml | sed 's/spinning/ssd/' | kubectl update -f -
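This pipeline works on the assumption that the only occurrence of the word spinning in the controller definition is the nodeSelector value, so the sed substitution is safe. A minimal illustration of the substitution step, on a trimmed stand-in for the real YAML:

```shell
# A trimmed stand-in for the replication controller YAML -- the
# real definition has many more fields.
yaml='nodeSelector:
  disktype: spinning'

# The same substitution the demo pipeline applies before piping
# the result back into "kubectl update -f -".
echo "$yaml" | sed 's/spinning/ssd/'
```

If the word appeared anywhere else in the file (an image name, a comment), you would want a more targeted edit than a blanket sed.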
Then, we delete the redis-master pod. The replication controller will spin up another redis-master using the modified nodeSelector, which means it will end up on node2 (with the SSD drive).
master$ kubectl delete pod -l name=redis-master
Once we have done this, we run kubectl get pods and wait for the redis-master to move from status Pending to status Running.
Notice how the redis-master has been allocated onto node2 (democluster-node2):
master$ kubectl get pods | grep name=redis-master
redis-master-pod 10.2.3.9 redis-master dockerfile/redis democluster-node2/172.16.255.252 app=redis,name=redis-master Running About an hour
Now, load the app in your browser using the same address:
http://172.16.255.251:8000
It should have loaded the entries you made originally - this means that Flocker has migrated the data onto another server!
Note: it sometimes takes 10 seconds for the service layer to connect the PHP containers to Redis - if the data does not appear, wait 10 seconds and then refresh.
The key part of this demonstration is the usage of Flocker to migrate data from one server to another. To make Flocker work natively with Kubernetes, we've used Powerstrip. Powerstrip is an open-source project we started to prototype Docker extensions.
This demo uses the Flocker extension prototype (powerstrip-flocker). Once the official Docker extensions mechanism is released, Powerstrip will go away and you’ll be able to use Flocker directly with Kubernetes (or Docker Swarm, or Apache Mesos) to perform database migrations.
We have installed Powerstrip and powerstrip-flocker on each host. This means that when Kubernetes starts a container with volumes, powerstrip-flocker is able to prepare/migrate the required data volumes before Docker starts the container.
The two nodes are joined by the Kubernetes master. This runs the various other parts of Kubernetes (kube-controller, kube-scheduler, kube-apiserver, etc.). It also runs the flocker-control-service.
Kubernetes is a powerful orchestration tool and we have shown that you can extend its default behaviour using Powerstrip adapters (and soon official Docker extensions).
This demo made use of local storage for your data volumes. Local storage is fast and cheap and with Flocker, it’s also portable between servers and even clouds.
We are also working on adding support for block storage so you can use that with your application.
If you vagrant halt the cluster, you will need to restart it using this command:
$ make boot
This will vagrant up and then run sudo bash /vagrant/install.sh boot, which spins up all the required services.
There is a script that can automate the steps of the demo:
$ vagrant ssh master
master$ sudo bash /vagrant/demo.sh up
master$ sudo bash /vagrant/demo.sh switch
master$ sudo bash /vagrant/demo.sh down
To run the acceptance tests:
$ make test
This will vagrant up and then run bash test.sh.

test.sh uses vagrant ssh -c "" style commands to run through the following tests:
- a basic data migration using powerstrip-flocker
- launch the frontend and redis-master services and replication controllers
- write some data to the guestbook
- rewrite the redis-master rc
- kill the redis-master pod
- wait for the redis-master pod to be scheduled onto node2
- check that the migrated data is present on node2