

Pierre Forcioli-Conti edited this page Jun 15, 2018 · 12 revisions

To deploy to a server, you will need to get comfortable with the wonderful world of Docker!

Creating Docker images

We build our images on Docker Hub, triggered by commits to the main repo. The Dockerfile that defines the image is here.

To check the status of the Docker Hub build, go here.

Deploying to server

We use Docker Compose to tie the workbenchdata container together with a postgres container and to configure a few things. The workbench-docker repository gets cloned to the server; it contains the docker-compose.yml file that specifies how everything is stitched together. It also defines the Docker volumes that persist files from three places: the database, the media directory where uploaded files and saved data versions are stored, and the importedmodules directory where module files copied from GitHub are stored. All other files are erased when the Docker image is updated.
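The real docker-compose.yml lives in the workbench-docker repo. As an illustrative sketch only — the service names, image name, and mount paths below are assumptions, not copied from the real file — the three persistent volumes described above map out roughly like this:

```yaml
# Illustrative sketch only -- not the real workbench-docker/docker-compose.yml.
version: '3'
services:
  workbench-web:
    image: workbenchdata/cjworkbench        # image name is an assumption
    ports:
      - "8000:8000"
    volumes:
      - media:/app/media                     # uploaded files and saved data versions
      - importedmodules:/app/importedmodules # module files copied from GitHub
    depends_on:
      - database
  database:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql/data      # the database itself

# Named volumes survive container updates; everything else is erased.
volumes:
  dbdata:
  media:
  importedmodules:
```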

To set up a server, first install Docker Compose. Then clone workbench-docker:

git clone

To bring up the server, first you will need an .env file in your home directory. This contains settings for all sorts of environment variables, including secret keys of various types. It must never be committed to a repo. Then, to start the server, change to the workbench-docker directory and run


This pulls all necessary containers (including the database) and starts the server processes. The same command can be used to update to the latest container versions; in that case, it will stop the server and restart it after downloading the new images (this takes 30 seconds or so).
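Neither the exact environment variables nor the start command are recorded on this page. As a sketch only, assuming the standard Docker Compose workflow (the variable names below are illustrative, not the real keys):

```shell
# ~/.env -- illustrative variable names only; never commit this file.
#   CJW_SECRET_KEY=<secret>
#   CJW_DB_PASSWORD=<secret>

# Start -- or update to the latest images -- from the checkout:
cd workbench-docker
docker-compose pull     # fetch the latest container versions
docker-compose up -d    # (re)start everything in the background
```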

You can see the running containers with docker ps. The server is started inside the workbench-web container via a startup script, and runs on port 8000 by default.
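For a quick health check after starting the server (the curl probe is just an assumption that the root URL responds on port 8000):

```shell
docker ps                                                        # lists workbench-web, the postgres container, etc.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/  # expect a 2xx/3xx status code
```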

Logging into staging and production servers

If you are on the Workbench team, you will need the cjworkbench private ssh key. Once this is copied to your ~/.ssh directory, you can log into staging and update to the latest version like this

ssh -i ~/.ssh/cjworkbench
cd workbench-docker

and production like this

ssh -i ~/.ssh/cjworkbench
cd workbench-docker

Updating modules

All internal modules are updated automatically when the server is updated. External modules (imported from GitHub) can be updated by running the following command in the workbench-docker directory:


You can add a --force flag to the end of the reimport-all-modules command to force a reload even if the version is unchanged.
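The command itself is not captured above. Assuming reimport-all-modules is a script at the top of the workbench-docker checkout, the invocation would be something like:

```shell
cd workbench-docker
./reimport-all-modules           # re-import all external modules
./reimport-all-modules --force   # re-import even if the version is unchanged
```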

Managing the server Docker container

Django manage

You can run Django manage commands inside the container. The first time you bring up a server, you will need to manually create an admin user:

docker exec -i -t workbench-web python manage.py createsuperuser

You can also run arbitrary Python code from within the context of the server process by doing something like

docker exec -i -t workbench-web python manage.py shell -c "from django.contrib.auth.models import User; User.objects.get(email='').delete()"
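Other standard Django management commands can be run the same way, for example:

```shell
docker exec -i -t workbench-web python manage.py migrate         # apply pending database migrations
docker exec -i -t workbench-web python manage.py showmigrations  # list migrations and their status
```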

Shelling into the server process

You can get a shell on the server container by running the servershell script in the workbench-docker directory.

This is the easiest way to debug on the server. From here you can edit source files, restart the Django server, and so on. Note that the container does not ship with an editor by default; you can install one with sudo apt-get install nano. Any changes you make to files inside this container -- like installing that editor, or editing Python files to fix a bug -- will disappear when you update the server, so git commit any changes that need to be permanent.
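servershell is presumably a thin wrapper around docker exec; an assumed equivalent (this guesses that bash is available in the image):

```shell
cd workbench-docker
./servershell                   # script from the repo; opens a shell in the container
# roughly equivalent to:
docker exec -i -t workbench-web bash
```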

Viewing server logs

You can view the server logs continuously with the taillogs script in the workbench-docker directory.
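taillogs is presumably another thin wrapper; an assumed equivalent using the standard Compose log command:

```shell
cd workbench-docker
./taillogs                # script from the repo
# roughly equivalent to:
docker-compose logs -f    # follow log output from all services
```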
