The Clair Backend Infrastructure

The Clair Stack[1] is the infrastructure-as-code implementation of the Clair Platform, a system that collects measurements from networked CO2 sensors for indoor air-quality monitoring. It is developed and run by the Clair Berlin Initiative, a non-profit, open-source initiative that helps operators of public spaces lower the risk of SARS-CoV-2 transmission among their patrons.

The Clair Stack consists of several Python applications, some of which share a PostgreSQL DBMS. For ease of development, we packaged the applications proper, the DBMS, and the pgAdmin database administration service into docker containers, so that the entire setup can be run locally. Our goal with the present infrastructure setup is to minimize the difference between development, staging, production and other environments.

This repository contains the docker setup and all configuration necessary to deploy and run the entire Clair Stack. Furthermore, this repository includes git submodules for individual applications of the stack, to provide for a seamless development experience.

We use docker in swarm mode, docker contexts, and docker stack deploy to deploy the stack defined in docker-compose.yml and its extension files docker-compose.X.yml.

docker-compose up does not work with these docker-compose files because the Traefik reverse proxy we use reads its configuration from labels attached to the deploy sections of the services, which are ignored by docker-compose.
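Conceptually, deployment boils down to sourcing the selected environment file and invoking docker stack deploy against the configured context. The following is a minimal dry-run sketch of that flow, not the repository's actual scripts; the function name deploy_clair and the stack name clair are assumptions:

```shell
# Hypothetical sketch of what the deploy tooling does: source the environment
# file, then run docker stack deploy against the configured context.
# Echoes the command instead of executing it, so this is a dry run.
deploy_clair() {
  env_file="$1"
  set -a                      # export every variable assigned while sourcing
  . "$env_file"
  set +a
  echo docker --context "$DOCKER_CONTEXT" stack deploy \
    -c docker-compose.yml $DOCKER_STACK_DEPLOY_ARGS clair
}
```

Called as `deploy_clair environments/dev.env`, it prints the full docker stack deploy command that would run, including any extension files listed in DOCKER_STACK_DEPLOY_ARGS.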


The Clair stack comprises the following services:

  • reverse_proxy: Traefik reverse proxy.
  • managair_server: Django application, business layer models, public API.
  • static_frontend: An nginx image that serves the Clair frontend.
  • ingestair: A second instance of the managair container; provides an internal ingestion endpoint for measurement samples (potentially public in the future).
  • clairchen_forwarder: A TTN application that receives uplink messages of Clairchen devices, decodes them, and forwards their samples to the ingestair.
  • ers_forwarder: The same for ERS devices.
  • db: The PostgreSQL database management system (DBMS).
  • redis: A redis store, used by Django's task queue.



The extension used for the development environment adds the following service:

  • pgadmin: A pgAdmin instance to inspect and manipulate the databases.


The docker-compose.tls.yml extension adds Traefik labels to enable automatic TLS encryption (HTTPS) using Let's Encrypt (LE). Enable it only for swarms that the LE servers can reach over the internet.

Configuration via environments

All configuration is handled through environment variables. For each deployment target, all environment variables are grouped in a target-specific environment file located in the environments/ folder. Upon deployment, the configuration of the Clair stack is sourced from the selected environment file.

The first three environment variables in each file control the overall deployment:

  • DOCKER_CONTEXT: The docker context to use. The context defines the docker swarm on which the system is to be deployed. If you want to deploy the stack for local development work, you need to initialize a local context first, which will typically be named default. Use docker context ls to see all available contexts.
  • CLAIR_DOMAIN: The domain used by Traefik and Django to configure their routes (localhost, or similar).
  • DOCKER_STACK_DEPLOY_ARGS: Optional additional arguments to docker stack deploy, mainly used to add extension files; e.g., DOCKER_STACK_DEPLOY_ARGS="-c docker-compose.tls.yml".

All remaining variables affect one or more services.
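An environment file is a plain list of shell variable assignments. The following is an illustrative sketch; the values are examples, not the repository's actual defaults, so check environments/dev.env for the real ones:

```shell
# environments/dev.env (illustrative values only)
DOCKER_CONTEXT=default                 # local swarm for development
CLAIR_DOMAIN=localhost                 # domain Traefik and Django route on
# Pull in extra compose files; e.g., enable TLS on internet-facing swarms:
DOCKER_STACK_DEPLOY_ARGS="-c docker-compose.tls.yml"
```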


Secrets

Some of the containers depend on various credentials; e.g., to access the TTN applications. We use docker secrets to securely transmit and store these credentials. All secrets are meant to be placed in files in the secrets subdirectory. The secrets in use can be found in the secrets sections of the docker-compose(.X).yml files.

The secrets subdirectory is ignored by git. Never commit any secrets to a git repository!
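Creating a secret then amounts to writing a file into the secrets subdirectory. The file name ttn_api_key and its value below are made-up examples; use the names listed in the compose files' secrets sections:

```shell
# Each secret is a plain file under secrets/; docker swarm mounts it into the
# containers that declare it. The file name and value are hypothetical.
mkdir -p secrets
printf '%s' 'NNSXS.EXAMPLE-TTN-KEY' > secrets/ttn_api_key
chmod 600 secrets/ttn_api_key        # keep credentials readable by you only
```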

Development setup

To set up the Clair backend for development on your local machine, proceed as follows:

  1. Install docker
  2. Activate swarm mode:
    docker swarm init
  3. Clone the present repository onto your local machine:
    git clone
  4. Check out the submodules (learn more about git submodules):
    git submodule init && git submodule update
  5. If your local docker context is not named default, or your local domain should be named something other than localhost, adjust the DOCKER_CONTEXT and CLAIR_DOMAIN environment variables in environments/dev.env.
  6. Create volumes:
    tools/ environments/dev.env
  7. Deploy the development stack locally:
    tools/ environments/dev.env
  8. Load example data from fixtures:
    tools/ environments/dev.env

The entire backend stack will launch in DEVELOPMENT mode. Pending database migrations will be executed automatically.

Development access to the managair application

The managair_server application is a Django web application. In DEVELOPMENT mode, it runs under Django's built-in development web server, which supports hot reloads upon code changes. To this end, the local codebase is bind-mounted into the application's docker container. Whenever you change code in the managair git submodule locally, the application restarts inside the container.

All managair endpoints are available on your local machine at localhost:8888:

  • To log in from a local web browser, open the preliminary login site at localhost:8888/dashboard.
  • The Django admin UI is available at localhost:8888/admin. If you preloaded the test data, you can log in as user admin with password admin.
  • The browsable REST API is available at localhost:8888/api/v1/.

Development access to the database

To directly inspect and access the PostgreSQL database, a pgAdmin container is included in the stack. You can access its UI at localhost:8889 and log in as the default user with password admin.


Deployment

As long as no solid continuous deployment system is in place, we deploy manually using docker context use X and docker stack deploy.

Since there is a substantial risk of inadvertently causing damage by not resetting the docker context on your system, it is highly recommended to use the respective tools in the tools subdirectory. All these tools expect a valid environment file as their first (and usually only) argument and warn you before you make changes to any docker context other than the default.
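The guard these tools implement can be sketched roughly as follows; this is an assumed behaviour for illustration, not the scripts' actual code:

```shell
# Ask for confirmation before touching any context other than "default".
# Pass -y as the second argument to skip the prompt (needed for piped stdin).
confirm_context() {
  context="$1"
  skip="$2"
  if [ "$context" != "default" ] && [ "$skip" != "-y" ]; then
    printf 'About to modify docker context "%s". Continue? [y/N] ' "$context"
    read -r answer
    [ "$answer" = "y" ] || return 1
  fi
  return 0
}
```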

Deployment Utilities

  • env: Deploy the Clair stack to DOCKER_CONTEXT.
  • env: Remove the Clair stack from DOCKER_CONTEXT.

Initial Setup Utilities

  • env: Create the external volumes used by the Clair stack.
  • env [fixture]...: Load sample data from internal json files.

Development Utilities

  • env [-y] arg...: Access the script of the managair_server container. All arguments are passed on.

Add -y to skip confirmation in case of non-default docker contexts. This is needed for piping to stdin, as in loaddata (see below), since the prompt leads to a broken pipe.


tools/ environments/dev.env createsuperuser
tools/ environments/dev.env makemigrations

Miscellaneous

  • env service: Fetch and follow the log output for one of the stack's services:

tools/ environments/livland.env managair_server dump.json

Convert a mongo export of the obsolete ingestair database to a fixture which can be loaded from stdin.

docker exec -i clair_mongo.X.YYYYYY mongoexport --db clair --collection base_sample --jsonFormat canonical > samples_mongo.json

tools/ samples_mongo.json | tools/ environments/livland.env -y loaddata --format=json -


The following is a summary of typical administrative tasks for a production environment.

Insert samples into the database

You may need to add samples to the core database that were not previously present. The simplest way to do so is via Django fixtures. Developed as a means to populate a Django database for testing, fixtures can also be used to load additional data into a table.

The most common administrative use for the Clair Stack arises when a TTN forwarder disconnects from the Things Network for any reason, so that sensor data no longer arrives at the Clair Stack. If the corresponding TTN application has the Data Storage Integration enabled, it acts as a backup from which you can recover lost data for up to seven days. Use the command-line tool clair-generate-fixtures-from-storage that comes with the Clair-TTN application to retrieve the missing samples from the TTN storage integration and turn them into a JSON file that can be read in as a Django fixture.

To otherwise import samples, you need to prepare them according to Django's fixture format. To actually trigger the import, pipe the resulting fixtures file into the Django management wrapper discussed previously:

cat <fixtures_file> | ./tools/ <environment_file> -y loaddata --format=json -

Note that the -y flag overrides the safety prompt when you run the command on a production environment. The trailing dash - is required for Django to pick up the stream from stdin.

Upon import, the DBMS enforces uniqueness constraints on samples - you cannot re-import a sample that is already present in the database. Therefore, you need to manually clean the fixtures in advance so that they do not contain duplicates.
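If your fixtures file happens to contain one serialized object per line, a whole-line dedupe is a quick first pass. Note this is an assumption about the file layout: standard Django fixtures are a single JSON array, in which case you would instead need to dedupe by primary key with a proper JSON tool.

```shell
# Example input: three serialized samples, one per line, with a duplicate.
printf '%s\n' '{"pk": 1, "model": "core.sample"}' \
              '{"pk": 2, "model": "core.sample"}' \
              '{"pk": 1, "model": "core.sample"}' > samples_raw.jsonl
# Drop exact duplicate lines while preserving order.
awk '!seen[$0]++' samples_raw.jsonl > samples_deduped.jsonl
```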

Restart a single service

There's currently no tool to restart a single service. However, you can scale a service down and up again like this:

docker context use staging
docker service scale clair_ers_forwarder_v3=0
docker service scale clair_ers_forwarder_v3=1

Update a service

Individual services can be updated using the respective tool in the tools subdirectory. It takes pairs of service_name and image_name as arguments; service_name is the name of the service as defined in the Compose file.


tools/ environments/dev.env managair_server clairberlin/managair:2.0.0 ingestair clairberlin/managair:2.0.0



  [1] The Clair Platform and the Clair-Berlin initiative are now part of the CO2-Monitoring (COMo) project, funded by a grant from the Senate Chancellery of the Governing Mayor of Berlin.

