# DaCHS on Docker
This repository contains the images/dockerfiles for GAVO DaCHS.

The DaCHS software provides data access services through two daemons running in the background: a DBMS (PostgreSQL) server and the DaCHS server itself, which is responsible for data management and the user interface.
This (GitHub) repository offers a `docker-compose.yml` file, which is the recommended way of running DaCHS.
## How to use it
The recommended way of running DaCHS on Docker is through `docker-compose`.
The `docker-compose.yml` file calls the `chbrandt/dachs:server` and `chbrandt/dachs:postgres` images to compose the service, accessible through http://localhost (port 80). The containers are named `dachs` and `postgres`, respectively, and each of them has a data volume associated (see [Compose data volumes][compose-data-volumes]).
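The compose setup can be sketched roughly as follows. This is an illustration only, not the actual file: the volume names and the PostgreSQL data path are assumptions, and the `docker-compose.yml` in this repository is authoritative.

```yaml
# Illustrative sketch of the compose setup -- see docker-compose.yml for the real thing.
version: '2'
services:
  postgres:
    image: chbrandt/dachs:postgres
    volumes:
      - postgres-data:/var/lib/postgresql   # data path is an assumption
  dachs:
    image: chbrandt/dachs:server
    ports:
      - "80:80"
    links:
      - postgres
    volumes:
      - dachs-data:/var/gavo
volumes:
  postgres-data:
  dachs-data:
```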
```
$ docker-compose up
```

Wait a few seconds and the web interface should show up at http://localhost.
If you want further details about running DaCHS on Docker without Compose, take a look at the section Further ways of running DaCHS below.
After the service has started, you can control DaCHS by running commands through Docker's `exec` interface.
For example, to restart the `dachs` server, we can do:

```
$ docker exec -it dachs gavo serve restart
```

This command line means:

- `gavo serve restart`: the command we want to run inside the container;
- `dachs`: the name of the container in which to run the command;
- `docker exec -it`: ask Docker to execute the command in an interactive session (`-i`) with a pseudo-terminal attached (`-t`).
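If you run such commands often, a small shell wrapper can save typing. The sketch below is our addition (the `dachs_cmd` name is hypothetical, not part of DaCHS); it simply forwards any command to the running `dachs` container:

```shell
# Hypothetical helper: run an arbitrary command inside the "dachs" container.
dachs_cmd() {
    docker exec -t dachs "$@"
}

# Usage, e.g.:
#   dachs_cmd gavo serve restart
```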
## Compose data volumes
The compose file creates two data volumes, one for each container.
The data volume associated with the `dachs` server mounts at `/var/gavo`; the volume associated with the `postgres` server mounts at PostgreSQL's data directory.
The volumes are used to expose and persist the data: using data volumes keeps the data even when the parent containers go down, are restarted, or upgraded.
You can manage the data inside the volumes through another, generic, container by mounting the volumes from the containers directly using `--volumes-from`:

```
$ docker run -it --rm --volumes-from dachs debian
```

This will mount the volumes from the running `dachs` container into the new container.
The new container, which runs from a `debian` image, will have the content of `/var/gavo` at its disposal.
You can now make the modifications/additions you want and exit.
The container will be removed after use (`--rm`).
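If you only need a one-shot look at the data, you do not even need an interactive session: a throwaway container can run a single command against the volumes and be removed right after. A sketch (the function name is our addition):

```shell
# Hypothetical helper: list the DaCHS data directory via a throwaway
# debian container that mounts the volumes of the running "dachs" container.
list_gavo_contents() {
    docker run --rm --volumes-from dachs debian ls -l /var/gavo
}
```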
## Complete workflow example
Let us now start a DaCHS service from scratch and publish the famous ARIHIP dataset.
- Run docker-compose:

  ```
  [host] $ docker-compose -f docker-compose.yml up
  ```

  Verify that DaCHS is running by opening http://localhost in your browser (the service should start within ~30 seconds at most). If it is, proceed; otherwise, email me.
- Run a companion image to manage data inside the volumes:

  ```
  [host] $ docker run -it --rm --name temp \
               --volumes-from dachs \
               debian:jessie
  ```
- From inside the `temp` container, download and save the data:

  ```
  [at-temp] $ apt-get update
  [at-temp] $ apt-get install curl
  [at-temp] $ mkdir -p /var/gavo/inputs && cd /var/gavo/inputs
  [at-temp] $ mkdir arihip && cd arihip
  [at-temp] $ curl -O http://svn.ari.uni-heidelberg.de/svn/gavo/hdinputs/arihip/q.rd
  [at-temp] $ mkdir data && cd data
  [at-temp] $ curl -O http://dc.g-vo.org/arihip/q/cone/static/data.txt.gz
  ```

  We can now exit from the `temp` container.
- Finally, we just need to run the import and publish commands and restart the server:

  ```
  [from-host] $ docker exec -t dachs gavo import /var/gavo/inputs/arihip/q.rd
  [from-host] $ docker exec -t dachs gavo publish /var/gavo/inputs/arihip/q.rd
  [from-host] $ docker exec -t dachs gavo serve restart
  ```
You should now have the ARIHIP dataset available through the web interface at http://localhost.

To test data persistence, you can shut down the containers and then restart them to see the very same content at http://localhost:

```
$ docker-compose down
$ docker-compose up
```
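The import step assumes a fixed layout under the inputs directory (`inputs/arihip/q.rd` plus the data file beside it). A small sanity check along these lines can catch a misplaced file before `gavo import` does; the function name and the parameterized root directory are our additions (the root stands in for `/var/gavo` inside the `dachs` container):

```shell
# Check that the ARIHIP files sit where "gavo import" will look for them.
# "$1" stands in for /var/gavo inside the dachs container.
check_arihip_layout() {
    root="$1"
    for f in "$root/inputs/arihip/q.rd" "$root/inputs/arihip/data/data.txt.gz"; do
        if [ ! -f "$f" ]; then
            echo "missing: $f"
            return 1
        fi
    done
    echo "layout ok"
}
```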
## Further ways of running DaCHS
DaCHS on Docker comes in two flavors:
- the all-in-one image, where gavo-dachs and postgres run together in the same container
- a pair of images, where gavo-dachs and postgres run separately but linked through a Docker network
The first option is provided by `chbrandt/dachs:latest`.
It is exactly what a default install procedure (`apt-get install gavodachs-server`) provides.
The goal here is just to provide a straightforward way of getting DaCHS working on your system (Linux, macOS, Windows).

To run this image, just type:

```
(host)$ docker run -it -p 80:80 chbrandt/dachs:latest
```

Usual DaCHS/DB management applies then.
The second option is provided by the images `chbrandt/dachs:server` and `chbrandt/dachs:postgres`.
This way of running the suite fits the Docker philosophy better, where each container is meant to run one process.

You run these images together like this:

```
(host)$ docker run -dt --name postgres chbrandt/dachs:postgres
(host)$ docker run -dt --name dachs --link postgres -p 80:80 chbrandt/dachs:server
```

After a few seconds, once postgres and dachs have initialized, you should see the DaCHS HTTP interface at http://localhost.
If you then want to connect to the `dachs` container, to manage your data for example, you can type:

```
(host)$ docker exec -it dachs bash
```
This second option, the pair of images, can also be run using Docker Compose.
`dachs:data` is here to provide a starting point: it is an example of inserting data as volumes into the framework. Its contents can be seen in its Dockerfile.

- Note-1: the `postgres` container must be named "postgres".
- Note-2: the `server` container exposes port "80".
- OBS: the lines below call `dachs:data` just as an example of adding data volumes.
Before actually running the (dachs) server, we need to think about the data to be published.
DaCHS maintains its datasets under `/var/gavo/inputs`.
There isn't, though, a unique way of handling this with Docker; one may prefer, for example, to download the data from a central repository from inside the (`dachs`) container.
Another way of inserting datasets into Docker-DaCHS, better aligned with Docker practices, is through a Docker volume. Here is an example of how to do it:
```
(host)$ mkdir -p arihip/data
(host)$ cd arihip && curl -O http://svn.ari.uni-heidelberg.de/svn/gavo/hdinputs/arihip/q.rd
(host)$ cd data && curl -O http://dc.g-vo.org/arihip/q/cone/static/data.txt.gz
(host)$ cd ../..
(host)$ docker run -d --name arihip -v $PWD/arihip:/var/gavo/inputs/arihip debian
```

`debian` can be substituted by any other image, as you wish.
And then you could run:

```
(host)$ docker run -it -p 80:80 --volumes-from arihip chbrandt/dachs:latest   # container initiates
[inside container] $ gavo imp arihip/q.rd
[inside container] $ gavo pub arihip/q.rd
[inside container] $ service dachs reload
```
For any doubt, comment or error, please file an issue on GitHub.