Deploy Mesosphere on Apache CloudStack the immutable infrastructure way


Deploy Mesosphere on Apache CloudStack



First, we build a Mesos image (a CloudStack template) containing the necessary Mesosphere packages, using Packer. We use the same CloudStack template for both master and slave nodes, even though the slaves require only a subset of the master's software. To do this, we start from a base Ubuntu 14.04 image and install the Mesosphere packages on it.

Once Packer has created a Mesos base template for us, we use Terraform to build the Mesosphere cluster, creating the master and slave Mesos VMs from that template. We use simple scripts derived from a Mesosphere tutorial as our guide to configure Mesos. Note that this differs from using cloud-init to drive the installation and configuration of the Mesos nodes; our approach is closer to immutable infrastructure. To upgrade packages, just build a new template with Packer containing the upgraded packages, delete the old nodes, and provision fresh ones.

Packer install

Create an Ubuntu base VM in a guest network in CloudStack and install Packer on it; we'll call this VM the Packer VM. Packer and its CloudStack builder plugin can each be installed from their upstream project pages. Note that if you are building from scratch, as of the end of June 2015 the build fails. To build successfully:

```shell
export GOPATH=$HOME/go
mkdir -p $GOPATH
export PATH=$PATH:$GOPATH/bin
# assumes the Packer 0.7.5 binaries end up in $GOPATH/bin
sudo apt-get -y install mercurial git bzr
go get -u
go get -u
cd $GOPATH/src/
make -C $GOPATH/src/ updatedeps dev
git checkout tags/v0.7.5
cd $GOPATH/src/
make dev
```

Build Mesos Image using Packer

Copy mesostack.json from this repository to the Packer VM.

Edit the builders section of the Packer template (mesostack.json) and fill in values for the following:

  • `hypervisor`: the hypervisor type; xenserver has been tested.
  • `service_offering_id`: the service offering Packer uses to instantiate a new (Mesos) VM.
  • `template_id`: the CloudStack template id of the base Ubuntu template.
  • `zone_id`: the zone where Packer will create the Mesos VM.
  • `network_ids`: the network where Packer will create the Mesos VM. This must be the same network as the Packer VM.
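For illustration, a filled-in builders section might look like the sketch below. All IDs are placeholders, and the exact key names should be checked against the mesostack.json shipped in this repository:

```json
{
  "builders": [
    {
      "type": "cloudstack",
      "hypervisor": "xenserver",
      "service_offering_id": "00000000-0000-0000-0000-000000000000",
      "template_id": "11111111-1111-1111-1111-111111111111",
      "zone_id": "22222222-2222-2222-2222-222222222222",
      "network_ids": ["33333333-3333-3333-3333-333333333333"]
    }
  ]
}
```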

We need to let Packer know the credentials for the CloudStack cloud. On the Packer VM:

```shell
# The builder also needs your API credentials; the key values below
# are placeholders for your own.
export CLOUDSTACK_API_URL="http://cloudstack.local:8080/client/api"
export CLOUDSTACK_API_KEY="<your API key>"
export CLOUDSTACK_SECRET_KEY="<your secret key>"
```

Then execute on the Packer VM:

```shell
packer validate mesostack.json
packer build mesostack.json
```

If this works, you will have a brand new template called `Ubuntu1404_mesos`.

Build the Mesosphere cluster in an isolated network

We will use Terraform to deploy the Mesos template and create a Mesosphere cluster, following an excellent community tutorial as our guide.


  • Any hypervisor (tested: XenServer 6.5)
  • terraform >= v0.6.1
  • API and secret keys for your CloudStack account
  • An ssh keypair (using cloudmonkey: `cloudmonkey create sshkeypair name=ubuntu`)
  • The Ubuntu1404_mesos template built by Packer as above


You can execute terraform from anywhere, as long as the CloudStack API is reachable. Check out (or copy) the contents of the terraform folder in this repository to where you will be working with terraform.

Update the variables file by filling in values for the variables described below.

The available variables that can be configured are:

  • cs_access_key: CloudStack API key
  • cs_secret_key: CloudStack secret key
  • cs_key_name: the SSH key name to use for the instances
  • cs_ssh_private_key_file: path to the SSH private key file. This should have been generated by `create sshkeypair` above.
  • cs_ssh_user: the SSH user (default: ubuntu)
  • num_masters: the number of Mesos masters. Zookeeper and Marathon run on these nodes as well (default: 3)
  • num_slaves: the number of Mesos slaves

Other variables may be configured in the cloudstack/ folder. Of particular interest are the instance types (service offerings) for the master and slave nodes (default: t1.medium) and the IP address defaults (172.16.0.x/24).
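As a sketch, a filled-in variables file might look like the following. The file name and the exact variable names are assumptions based on the list above, and all values are placeholders:

```hcl
# terraform.tfvars (illustrative values only)
cs_access_key           = "APIKEY"
cs_secret_key           = "SECRETKEY"
cs_key_name             = "ubuntu"
cs_ssh_private_key_file = "/home/you/.ssh/id_ubuntu"
cs_ssh_user             = "ubuntu"
num_masters             = "3"
num_slaves              = "2"
```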


Here are some step-by-step instructions for deploying the Mesos cluster via Terraform:

  1. Run the following commands in the folder containing the Terraform configuration:

```shell
terraform get -update
terraform apply
```

This will deploy the cluster.

Upon success, terraform will print the public IP of the cluster:


  public_ip =

This public IP has port forwards for ssh into the masters and slaves, as well as for the Web UIs of Mesos and Marathon.

  • Ports 1222, 1223, 1224, etc are forwarded to ssh port of master nodes
  • Ports 2222, 2223, 2224, etc are forwarded to ssh port of slave nodes
  • Ports 5050 (Mesos) and 8080 (Marathon) are forwarded to the first master node
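Given the forwarding scheme above, the ssh port for a given node can be computed directly. A small helper (the port bases follow the list above; the key path and `<public_ip>` are placeholders):

```shell
# Compute the forwarded ssh port for master/slave node N (1-indexed),
# based on the scheme above: masters start at 1222, slaves at 2222.
master_ssh_port() { echo $((1222 + $1 - 1)); }
slave_ssh_port()  { echo $((2222 + $1 - 1)); }

# e.g. to reach the second slave:
echo "ssh -i ~/.ssh/id_ubuntu -p $(slave_ssh_port 2) ubuntu@<public_ip>"
```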

Unfortunately, it isn't guaranteed that the first master node will be the 'leading master' of the cluster. The Web UI (on port 5050) will therefore try to redirect to the private IP of the leading master, which is unreachable from outside the Mesos cluster network, so you may have to play around with the port forwarding to reach the right master.

You can add additional slave nodes to grow your cluster by increasing the num_slaves variable and then running:

```shell
terraform plan
terraform apply
```

How the Terraform template works

  • Creates a new isolated guest network.
  • Creates egress rules to allow all traffic out of this network.
  • Creates the required number of master VMs from the Packer-built template, with the ssh keypair specified.
  • Acquires a public IP for the isolated network.
  • Creates ssh port forwards on this public IP to each of the master VMs.
  • Opens the firewall for the ssh port forwards.
  • As part of each ssh port forwarding resource, uses the 'file' provisioner to scp the config scripts into the master VMs.
  • As part of each ssh port forwarding resource, uses the 'remote-exec' provisioner to execute the copied scripts.
  • Creates port forwarding rules on the public IP for ports 5050 (Mesos master) and 8080 (Marathon).
  • Creates the required number of slave VMs from the Packer-built template, with the ssh keypair specified.
  • Creates ssh port forwards on the public IP to each of the slave VMs.
  • Opens the firewall for the ssh port forwards.
  • As part of each ssh port forwarding resource, uses the 'file' provisioner to scp the config scripts into the slave VMs.
  • As part of each ssh port forwarding resource, uses the 'remote-exec' provisioner to execute the copied scripts on the slave VMs.
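As a rough sketch of the pattern described above, a master's ssh port forward with its provisioners might look like the following. The resource and attribute names here are assumptions for illustration; the .tf files in this repository and the CloudStack provider documentation are authoritative:

```hcl
# Illustrative only: forward a per-master ssh port, then copy and run
# the config scripts once the node is reachable through it.
resource "cloudstack_port_forward" "master_ssh" {
  count     = "${var.num_masters}"
  ipaddress = "${cloudstack_ipaddress.public.ipaddress}"

  forward {
    protocol        = "tcp"
    private_port    = 22
    public_port     = "${1222 + count.index}"
    virtual_machine = "${element(cloudstack_instance.master.*.name, count.index)}"
  }

  # Copy the Mesos config scripts into the node...
  provisioner "file" {
    source      = "scripts/"
    destination = "/tmp/scripts"
  }

  # ...and execute them.
  provisioner "remote-exec" {
    inline = ["sh /tmp/scripts/configure-master.sh"]
  }
}
```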