Docker files and compose script for timings API


This repo provides docker-compose support for the Node/Express based TIMINGS API only! It is not the API itself, but merely a collection of scripts and configuration files for running the API in a Docker-based environment. For details about the API itself, please check out the repo here:

Also, see the FAQ section in the Wiki for more help & tips:


System requirements

Step 1. Clone this repo

Clone this repo to a folder of your choice:

$ git clone
Cloning into 'timings-docker'...

$ cd timings-docker

Step 2. Create a custom config file

It is recommended that you create a custom config file. You can copy the sample config file (./timings-docker/timings/config/.config_sample.js) and:

  • save it in a location of your choice (example: /etc/perfconfig.js)
  • edit the file according to your needs - see also here:
  • update ./timings-docker/docker-compose.yml file and uncomment + edit the volumes section to map your config file to the container's /src/.config.js file:
  # - /your/custom/config.js:/src/.config.js  <<< uncomment & update this line!
  - ./timings/logs:/src/logs

If you don't use a custom config file, the API will use default values. Settings such as the ElasticSearch host (ES_HOST), Kibana host (KB_HOST), etc. don't need to be included because they are already defined in the docker-compose yaml file.
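For orientation, the relevant parts of docker-compose.yml might look like the sketch below. The service names and variable names (`ES_HOST`, `KB_HOST`) come from this document and the log output further down; the exact contents of the file shipped with the repo may differ:

```yaml
services:
  timings:
    environment:
      # Defaults already provided by the compose file; only override
      # these if your Elasticsearch/Kibana hosts differ:
      - ES_HOST=elasticsearch
      - KB_HOST=kibana
    volumes:
      # - /your/custom/config.js:/src/.config.js   # uncomment to mount your config
      - ./timings/logs:/src/logs
```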

Step 3. Prepping the docker host

Before you can run the API you may have to make a few modifications to your docker host:

Add user to docker group [Linux only]

You have to add your user account to the docker group and then log out & back in again. If you don't do this, you have to run docker-compose with sudo, which is not recommended! You can use the following command:

$ sudo usermod -aG docker ${USER}
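A quick, non-destructive way to verify your group membership (a sketch; after running usermod you still need to log out & back in before the new group shows up):

```shell
# 'id -nG' lists the groups of the current user; 'docker' should be
# among them once the usermod change has taken effect.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group OK"
else
  echo "not yet in docker group (run usermod, then log out & back in)"
fi
```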

Custom elasticsearch data directory

This is optional. By default, the elasticsearch container will use the timings-docker/elasticsearch/data directory on the docker host to store its data. If you want to use a different location, edit the volumes section of the timings-docker/docker-compose.yml file and point it to the desired location:

      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data [ <<-- edit this line ]
      - ./elasticsearch/logs:/var/log/elasticsearch

Set permissions for the elasticsearch data directory [Linux only]

You need to set the required permissions on the elasticsearch/data directory by running these commands:

$ sudo chown 1000:1000 ./elasticsearch/data
$ sudo chmod 775 ./elasticsearch/data
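Putting the commands above together, a sketch for a Linux host. DATA_DIR mirrors the default host path from docker-compose.yml; adjust it if you remapped the volume in the previous step:

```shell
# Default host path from docker-compose.yml; change if you remapped it.
DATA_DIR=./elasticsearch/data
mkdir -p "$DATA_DIR"
chmod 775 "$DATA_DIR"
# UID/GID 1000 is the elasticsearch user inside the official image;
# chown needs root, so the fallback keeps the script going without it.
sudo -n chown 1000:1000 "$DATA_DIR" 2>/dev/null || echo "chown skipped (no root)"
stat -c '%a' "$DATA_DIR"   # prints 775
```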

Step 4. Starting up the API

You should now be able to run the environment by running docker-compose up from the timings-docker folder.

NOTE: The first time you install this, or whenever you use the pull / --build argument(s), Docker will (re-)build the containers! The output will look different, and the entire process will take a bit longer to complete.

You should use docker-compose pull && docker-compose up --build every time one of the docker images is updated. This ensures you're getting the latest timings container!

$ docker-compose up
Starting elasticsearch ... done
Starting kibana ... done
Starting timings ... done
Attaching to elasticsearch, kibana, timings
elasticsearch    | [2018-03-12T16:11:33,741][INFO ][o.e.n.Node               ] [] initializing ...
timings          | debug: [Elasticsearch] - elasticsearch:9200 - [WAIT] waiting for Elasticsearch healthy state ...
timings          | debug: [timings API] - LTDV-MVERKERK:80 - [READY] v1.1.9 - using config: [defaults] - Could not find or access config file, or no file provided

... [ELK messages]

timings          | debug: [Elasticsearch] - elasticsearch:9200 - [PORTCHECK] - port 9200 is live ...
timings          | debug: [Elasticsearch] - elasticsearch:9200 - [HEALTHCHECK] - status is [GREEN] ...
timings          | debug: [Elasticsearch] - elasticsearch:9200 - [READY] - [Elasticsearch v.5.6.2] is up!
timings          | debug: [Elasticsearch] - elasticsearch:9200 - [UPGRADE] checking if upgrades are needed ...
timings          | debug: [Elasticsearch] - elasticsearch:9200 - [UPGRADE] upgrading Kibana items ... [Force: undefined - New: true - Upgr: false]

... [more ELK messages]

timings          | debug: [Elasticsearch] - elasticsearch:9200 - [IMPORT] - imported/updated 48 Kibana item(s)!
timings          | debug: [Elasticsearch] - elasticsearch:9200 - Template [cicd-perf] exists/created: true

... [log continues ...]

The example above shows the main log output messages of the [timings] service that you should look for! During the first startup, you should see the line that says imported/updated [xx] Kibana item(s)! This confirms that the API waited for Elasticsearch to reach a healthy state and then imported the main Kibana dashboards and visualizations into Elasticsearch.
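You can reproduce the API's startup checks by hand. This sketch assumes the containers run on the local host with the default ports; the fallbacks keep it from aborting while Elasticsearch is still starting:

```shell
# 1) Is port 9200 live, and is the cluster status "green"?
curl -s --max-time 5 http://localhost:9200/_cluster/health \
  || echo "elasticsearch not reachable yet"
# 2) The imported Kibana items are stored in the .kibana index (ES 5.x):
curl -s --max-time 5 'http://localhost:9200/.kibana/_count' \
  || echo "kibana index not reachable yet"
```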

Step 5. Test the apps

After the containers have started, you can test the apps by browsing to the following:

App            Link
timings        http://your_server
elasticsearch  http://your_server:9200
kibana         http://your_server:5601
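A quick smoke test of all three endpoints from the table above (replace your_server with your docker host's name or IP; the fallbacks simply report services that are not up yet):

```shell
HOST=your_server   # <-- change to your docker host
for url in "http://$HOST" "http://$HOST:9200" "http://$HOST:5601"; do
  # -s: quiet, -o /dev/null: discard body, -w: print only the HTTP status
  code=$(curl -s --max-time 5 -o /dev/null -w '%{http_code}' "$url") \
    && echo "$url -> HTTP $code" \
    || echo "$url -> not reachable"
done
```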

Step 6. Validate Kibana

Now go and check out Kibana to make sure everything looks A-OK! Navigate to your Kibana server and go to:

Management -> Saved Objects (should see a number of dashboards and visualizations)

Kibana Saved Objects - after API startup & import


Visualize (should see a list of visualizations)

Kibana Saved Objects - after API startup & import


Dashboard -> TIMINGS-Dashboard (should see an empty dashboard)

Since you probably haven't submitted any test results to the API yet, the main dashboard (http://{kibana host}/app/kibana#/dashboard/TIMINGS-Dashboard) is working but still empty:

Kibana empty Dashboard

Time to start running your tests and submitting data to the API, and your dashboard should start showing some data!