
Using as a customer

When you buy/activate an account on a service you will get:

  • an api base url: like
  • an api username: like kratos
  • an api password: like deimos

With those parameters you will be able to configure your services using the HTTP api.

In this quickstart we will use the 'curl' command, but your supplier could give you a more user-friendly interface (like a web-based one) built over the api
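As a sketch of what such a client does under the hood, here is how an authenticated api request could be built with Python's standard library. The base url and the credentials are hypothetical placeholders (use the ones provided by your supplier); the api uses HTTP Basic authentication with your api username and password:

```python
import base64
import urllib.request

# Hypothetical placeholder values: substitute the base url and
# credentials your supplier gave you.
API_BASE = "https://api.example.com"
USERNAME = "kratos"
PASSWORD = "deimos"

def api_request(path, method="GET", body=None):
    """Build (but do not send) an HTTP Basic-auth request against the api."""
    token = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    req = urllib.request.Request(API_BASE + path, data=body, method=method)
    req.add_header("Authorization", "Basic " + token)
    if body is not None:
        req.add_header("Content-Type", "application/json")
    return req

# The request object can then be sent with urllib.request.urlopen(req).
req = api_request("/me/")
```

This is equivalent to what curl does for you when you pass the credentials in the url.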

The reference (Python) implementation of the api (by Riccardo Magliocchetti) is the following one:

while a web interface (Django) is available at (by 20Tab S.r.l.)

Let's start: get your personal data

The first step is ensuring your personal data are correct:

  "company": "God of war S.r.l.",
  "containers": [30001, 30004, 30007, 30009],
  "uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "vat": "01234567890"

Update personal data

It looks like our company value is wrong, let's update it with a POST request

curl -X POST -d '{"company": "God of War 4 S.p.a."}'

Update password

As our password has been generated by the supplier and so (very probably) is no longer private, let's change it with another POST request

curl -X POST -d '{"password": "deimos17"}'


In the first api call we made in this quickstart we saw that the returned object has a "containers" array.

This array is the list of containers mapped to our account. Containers are the "virtual systems/jails" you will use for hosting your applications.
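For illustration, pulling the "containers" array out of the personal-data response shown earlier could look like this (the values are the sample ones from this quickstart, not from a live account):

```python
import json

# Sample response body shaped like the /me output shown above.
me = json.loads("""
{
  "company": "God of war S.r.l.",
  "containers": [30001, 30004, 30007, 30009],
  "uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
  "vat": "01234567890"
}
""")

# Each entry is the numeric uid of a container mapped to the account;
# the uid is also used as the ssh login user later in this quickstart.
for uid in me["containers"]:
    print(uid)
```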

We can get a more verbose list of our containers with:


But most of the time you just want to view a single one:

  "uid": 30009,
  "ip": "",
  "server_address": "",
  "hostname": "changeme",
  "storage": 1000,
  "uuid": "aaaaaaaa-49ff-4349-8d27-705ca239bb95",
  "server": "fooserver",
  "note": "",
  "quota_threshold": 90,
  "distro_name": null,
  "memory": 500,
  "distro": null,
  "ssh_keys": [],
  "name": "changeme",
  "linked_to": [],
  "jid": "",
  "jid_destinations": ""

This is the response you generally get from a just-created container. The storage and memory attributes are in megabytes, and define the resources of the container.

server_address is the ip address to which you need to point (via DNS) the domains you want to map to this container.

A container is not started (read: you cannot ssh into it) until you assign it an ssh key and a 'distro'


The supplier allows you to choose from a pool of Linux distributions used as the rootfs of your containers (each container can have its distro).

Each distro has an id you can assign to the container object.

To get the list of distros:

    "id": 1,
    "name": "Precise Pangolin - Ubuntu 12.04 LTS (64 bit)"
    "id": 2,
    "name": "Saucy Salamander - Ubuntu 13.10 (64 bit)"

We want to use Saucy (id 2) so let's assign it to the container 30009:

curl -X POST -d '{"distro": 2}'

SSH keys

To access your container you need ssh keys (there is no, and never will be, support for simple password access)

To set an ssh public key:

curl -X POST -d '{"ssh_keys": ["ssh-rsa ........."]}'

you can assign multiple keys in one shot:

curl -X POST -d '{"ssh_keys": ["ssh-rsa .........", "ssh-rsa ........."]}'
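If you script this step, building the JSON body for the ssh_keys POST is straightforward. A small sketch (the key strings are shortened placeholders, not real keys):

```python
import json

# Placeholder key material: in practice read the content of your
# ~/.ssh/*.pub files.
keys = [
    "ssh-rsa AAAAB3Nza... user@laptop",
    "ssh-rsa AAAAB3Nza... user@desktop",
]

# The api expects the keys in an "ssh_keys" array, as in the curl
# examples above.
payload = json.dumps({"ssh_keys": keys})
```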

After a few seconds your instance will start and you will be able to ssh into it:

ssh 30009@server_address

'server_address' is the value returned by your request for container data; in our case it will be


Every web application needs a domain to be accessed.

Domains are mapped to a customer, so multiple containers can use them (to implement clustering, load balancing or high availability)

You may ask your supplier to map a domain to your account, or add it yourself if you have access to its dns zone. In both cases you need to map the A record of the domain to the container server address.

To get the list of domains mapped to your account:

    "uuid": "00000000-8c29-4290-babd-b24d1100e006",
    "id": 1,
    "name": ""
    "uuid": "00000000-72a7-4e53-8939-d9ca4a748bef",
    "id": 15,
    "name": ""

If you have write access to your domain DNS zone you can add it to your account.

Just get the 'uuid' of your account (we have seen it in the first api call example) and add a TXT record to your zone in the form of

TXT        uwsgi:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee

IMPORTANT!!! The uuid is the one of your account, NOT the container one. A domain can be used by multiple containers, which is why it makes more sense to pair it with the account.
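The record value is just the literal prefix "uwsgi:" followed by the account uuid; a one-liner sketch:

```python
# The ACCOUNT uuid, as returned by the first api call of this
# quickstart (NOT the container uuid).
account_uuid = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

# Value to put in the TXT record of your zone.
txt_value = "uwsgi:" + account_uuid
```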

To check your zone correctness:

host -t TXT

once the zone is updated you can try adding the domain to your account:

curl -X POST -d '{"name":""}'

if successful you will get a 202 (Accepted) response

You can even delete a domain from your account:

curl -X DELETE -d '{"name":""}'

Remember that and are two different objects, so if you need your app to respond to both, you need to add both.

The first deploy

We will try to deploy a python WSGI app, a perl PSGI app, and a ruby Rack app

From the ssh shell, let's create 3 files (in the home):

def application(environ, start_response):
    start_response('200 OK', [('Content-Type','text/plain')])
    return ["Hello i am python"]

my $app = sub {
    my ($env) = @_;
    return [200, ['Content-Type' => 'text/plain'], ["Hello i am perl"]];
};

class MyApp
  def call(env)
    [200, {'Content-Type' => 'text/plain'}, ["Hello i am ruby"]]
  end
end

run MyApp.new


Once a container is spawned, a dedicated Emperor will start monitoring the "vassals" directory for uWSGI config files.

Let's deploy the perl one:


plugin = 0:psgi
psgi = $(HOME)/
domain =

and visit your domain.

If all goes well you should see the hello message, otherwise check the logs/ directory (in your home) for errors.

Now let's deploy the python one


plugin = 0:python
wsgi-file = $(HOME)/
domain =

and visit your domain multiple times (use curl this time for a better experience; you will soon discover why...)

You will note that the previous perl instance continues to answer, in round-robin fashion, alongside your python one.

Yes, multiple vassals can serve the same domain, and load balancing will be automatically enabled. This allows lots of tricks and (more importantly) true high-availability reloads when updating code!

Finally, we deploy the ruby app


plugin = 0:rack
rack = $(HOME)/
domain =

and as expected, the domain will load balance between the three instances.

If you are a uWSGI user, you may wonder what the 'domain' option is (it is not part of the standard uWSGI options).

It is a custom option automatically generated by the container Emperor, defined in this way:

declare-option2 = domain=socket=/run/$1_%I.socket;subscribe2=server=/run/subscribe/http,key=$1,addr=$(HOME)/run/$1_%I.socket,sign=SHA1:$(HOME)/etc/uwsgi.key;chmod-socket=666

Pretty complex, as it needs to configure the secured subscription system and deal with different mount namespaces.

The custom options are all defined in the /opt/unbit/uwsgi/shortcuts.ini file (it is automatically merged with your config)

Wildcard domains

You can subscribe your vassal to a wildcard domain (like * using the dotsplit syntax:

domain =

to support it you need to add the to your account.

If you subscribe to a specific domain, it will win over the wildcard one (and you do not need to add it to your account)


HTTPS support is totally customer-governed via SNI. Subscription packets inform the HTTP router on how to configure SNI contexts.

let's create a self-signed certificate:

openssl genrsa -out foobar.key 2048
openssl req -new -key foobar.key -out foobar.csr
openssl x509 -req -days 365 -in foobar.csr -signkey foobar.key -out foobar.crt

now you can instruct the http router to load it using ssl-domain instead of domain:

ssl-domain = $(HOME)/foobar.key $(HOME)/foobar.crt

The HTTPS request var is set to 'on' if a specific request is over SSL. You can use it to force HTTPS for a domain:

ssl-domain = $(HOME)/foobar.key $(HOME)/foobar.crt
plugin = router_redirect
route-if-not = equal:${HTTPS};on redirect-permanent:https://${HTTP_HOST}${REQUEST_URI}

wildcard/dotsplit SNI subscriptions are supported too

IMPORTANT: mixing ssl-domain with domain for the same name must be avoided; ssl-domain automatically registers the non-ssl record too

Client-certificates HTTPS/SNI authentication

You can authenticate your https clients via certificates. To do so you need a certificate authority for signing your clients' certificates.

You can create a new CA pretty easily (well, it is only a pair of key and cert)

openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt

Now you can sign your clients csr's with:

openssl x509 -req -days 365 -in foobar.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out foobar.crt

foobar.csr is the filename of the csr sent by your client.

Now to enable client certificate authentication for a domain:

ssl-ca-domain = $(HOME)/foobar.key $(HOME)/foobar.crt !$(HOME)/ca.crt

The '!' prefix tells the server to disallow access to non-certificate-authenticated clients.

If you want to also support clients not supplying a certificate, you can remove the '!' prefix.

The ${HTTPS_DN} request var contains the certificate DN (if any).


Every server of the infrastructure is part of a "Legion".

Containers mapped to different servers in the same Legion can build a cluster.

Each Legion has a dedicated ip in failover mode: only a single member of the Legion, named the "Lord", receives requests on that ip. When the Lord dies, a new member takes over the ip address, and so on.

The Lord load-balances requests to the instances subscribed to it.

To enable load balancing for a domain, just DNS-map it to the Legion ip.

The legion ip is reported as "legion_address" in the container api data. If there is no legion_address attribute (or it is null) in your responses, it means your server is not part of a Legion (ask your supplier for an upgrade)

Once the domain is DNS-mapped, just change your vassal config from:

domain =

to:
cluster-domain =

HTTPS/SNI over clustering is a bit complex, as the http router may not have access to the certificate files. For this reason the subscription packet includes the certificate/key blobs. While it currently works, the key embedded in the subscription packet should be encrypted, and this is currently not supported (we are already working on it)

Linking containers

Every container has 2 network interfaces: lo and uwsgi0

lo is the classic loopback interface while uwsgi0 is mapped to a class.

uwsgi0 is used for inter-container communication.

By default containers are isolated (even containers of the same customer cannot exchange data), but you can link them to other containers (even owned by different customers)

Linking is a 2-step operation: both containers involved need to agree on it (if only one peer configures linking, it will not work until the other links too).

Supposing 2 containers:

30009 with ip

30008 with ip

we want them to communicate with each other on the network

curl -X POST -d '{"link": 30008}'


curl -X POST -d '{"link": 30009}'

to unlink a container just run

curl -X POST -d '{"unlink": 30009}'

The links of a container are shown in the "linked_to" attribute of the container api

Rebooting containers

to reboot a container without making any change to it, just pass "reboot":1

curl -X POST -d '{"reboot":1}'

Technically, any update to the container object will trigger a reboot (remember it!)


You can assign tags (or 'labels' if you tend to use 'tag' as a social thing) to containers and domains.

Tagging is a handy way to "group" your items. For example you may want to group containers and domains related to the same sub-customer or project.

Tags are related to customers, so every customer will have its distinct set.

To get the list of currently defined tags run


To create a tag

curl -X POST -d '{"name":"foobar"}'

To delete a tag (ID is the id of the tag as returned by the previous calls)

curl -X DELETE

Once you have your set of tags you can start mapping them to containers or domains using the 'tags' (array) attribute:

curl -X POST -d '{"tags":["foobar"]}'


curl -X POST -d '{"tags":["foobar"]}'

Now you can filter containers and domains by-tag, using the 'tags' QUERY_STRING attribute:


The call will return ONLY containers tagged with 'foobar' or 'zeus'

as well as


will return domains tagged with 'foobar'
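Assuming multiple tags are passed as a single comma-separated value of the 'tags' query string attribute (an assumption suggested by the "'foobar' or 'zeus'" example above), the filter query could be built like this:

```python
from urllib.parse import urlencode

# Join the tags into one comma-separated 'tags' value and url-encode
# it (the comma becomes %2C).
tags = ["foobar", "zeus"]
query = urlencode({"tags": ",".join(tags)})
```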


Would it not be amazing if you could "partition" your container diskspace and assign each partition to a customer?

Would it not be even better to enforce disk usage limits on a single vassal/app?

What about single-file-contained apps ?

Loopboxes allow you to "mount" loop block devices in your container.

Loop block devices are a common UNIX feature: you use a file as a block device. When you download an .iso file, you can directly mount it thanks to loop block devices.

You can create all the loopboxes you need in a container, and the system will take care of mapping them to a loop block device on the server and of mounting them in your container.

Confused? Let's see an example.

You want the customer 'zeus' (using the container 30017) to be confined in a 100MB virtual disk. You first need to create a 100MB zero-filled file:

dd if=/dev/zero of=zeus001 bs=512 count=200000

(512 as the block size is only a convention; as block devices are generally managed in sectors, you are free to choose the approach you like most)
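A quick arithmetic check of the sizing: 200000 blocks of 512 bytes give roughly the 100MB virtual disk we wanted:

```python
# dd writes count * bs bytes of zeros.
size_bytes = 512 * 200000          # 102,400,000 bytes
size_mb = size_bytes / (1024 * 1024)  # about 97.7 MiB, close enough to "100MB"
```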

now we can simply 'format' the file as if it were a hard disk or a usb key:

mkfs.ext4 zeus001

(only the ext4 filesystem is supported)

Finally we create a directory mountpoint:

mkdir zeus

Let's mount zeus001 to zeus/ in container 30017 via the api:

curl -X POST -d '{"container":30017,"filename":"zeus001","mountpoint":"zeus"}'

Now check your logs/emperor.log logfile to get notifications about zeus001 mount status.

To get the list of your loopboxes:


To get info (including tags) on a loopbox:


where id is the "id" field/attribute of a loopbox

When you want to destroy a loopbox (destroying a loopbox will only unmount it from your container; the image file remains untouched), simply call the DELETE method on it:

curl -X DELETE

You can assign tags to a loopbox too:

curl -X POST -d '{"tags":["foobar"]}'

When working with loopboxes you need to take care about the following rules:

  • deleting the image file will unmount the loopbox
  • resizing the image file will unmount the loopbox (it will be remounted 30 seconds later if all is right with the new size)
  • if you resize an image you can "fix" it with the resize2fs command (an fsck on the image file could be required)
  • the lost+found directory is owned by root
  • image files lower than 1MB are not mounted
  • always make a backup copy of an image file before resizing it
  • updating loopbox fields (except for tags) is not allowed. This is for avoiding mess (mainly race conditions) with the mount namespace. You can delete and recreate a loopbox easily
  • all the paths are relative to the container's home; paths cannot start with a / or contain './' and '../' sequences
  • You cannot mount an image file in one of the directories managed by the Emperor (like 'vassals' or 'etc')
  • ext4 is the only supported filesystem, with POSIX acls and extended attributes enabled
  • all the loopbox-related messages are logged to logs/emperor.log (it is hardcoded)
  • The 'loopback' wording is generally wrong, the real name of the technology is loop block devices.
  • you can create readonly loopboxes adding "ro":true attribute in the POST api call
  • When using loopboxes with vassals/apps you should ensure they are mounted; the --wait-for-mountpoint or --wait-for-dir options can be useful for suspending a uWSGI instance while waiting for a loopbox to be mounted (30 seconds at most if all is correct)

In the 'zeus' example our vassal should be something like:

; suspend the instance until /containers/30017/zeus is mounted
wait-for-mountpoint = $(HOME)/zeus


The logs/emperor.log file is created as the default log file (and rotated when it reaches 100MB in size).

Each vassal can log whatever (and however) it needs


Each container runs in a limited environment. Keeping an eye on such limits avoids problems in the long term (although an alarm system, see below, warns you when dangerous situations are near).

To check your diskspace:

quota -s

To check used memory (in bytes):

cat /run/cgroup/memory.usage_in_bytes 

To check max memory (in bytes, maps to the container api memory attribute)

cat /run/cgroup/memory.limit_in_bytes 
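Both files contain a plain decimal byte count, so converting to megabytes (to compare with the container's memory attribute) is trivial. A small sketch with a sample value; on a container you would read the string from the cgroup file instead:

```python
def cgroup_mb(raw):
    """Convert a value read from the cgroup memory files (a decimal
    byte count, as printed by the cat commands above) to megabytes."""
    return int(raw.strip()) / (1024 * 1024)

# Sample value: 524288000 bytes is the limit of a 500MB container,
# matching the "memory": 500 attribute seen in the container api data.
limit_mb = cgroup_mb("524288000\n")
```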


The container Emperor automatically sets a series of alarms.

Currently, you will get an alarm when your container quota is low (you can set the threshold with the quota_threshold item of the container api) and when an OOM (out of memory) event is triggered.

The alarm is broadcast to all of the connected container shells and optionally to a jabber/xmpp account, a pushover app (, a pushbullet device ( or a slack ( channel

To enable jabber/xmpp alarm just set "jid", "jid_secret" and "jid_destinations" attributes of the container api.

jid and jid_secret are the credentials the Emperor will use to log in to a jabber/xmpp server, while jid_destinations is the comma-separated list of jids that will receive the alarms.

Note: for trouble with google accounts, check your gmail account and visit this link

To enable pushover support just set the "pushover_user" and "pushover_token" values, and optionally the "pushover_sound" one

To enable pushbullet support just set the "pushbullet_token" field

To enable slack support, set the "slack_webhook" field with the integration api url generated in the slack interface

You will get at most one alarm every 60 seconds; if you want to raise this value, set the "alarm_freq" field

Additionally, each alarm is stored as a persistent record in the Customer's info. You can access those records with


You have (by default, but it depends on your supplier) 100 alarm slots for each container. You can use those slots to store any kind of alarm (exceptions, tracebacks, logs...). When you reach the limit of records per container, the oldest record is deleted.

The alarms api is pretty big, check for a detailed description and example usages.


You will find deployment snippets for various technologies (like SQL/NOSQL servers, cache daemons and other tools) on