ScummVM dockerized BuildBot
To set up the whole thing:

- As root:

```sh
apt install docker.io git make m4
useradd -Um -s /bin/bash buildbot
usermod -aG docker buildbot
apt install \
    python3-autobahn \
    python3-cryptography \
    python3-dateutil \
    python3-docker \
    python3-future \
    python3-jinja2 \
    python3-jwt \
    python3-migrate \
    python3-sqlalchemy \
    python3-twisted \
    python3-venv \
    python3-yaml
```

- As the buildbot user (`su - buildbot`):

```sh
python3 -m venv --system-site-packages buildbot-master
git clone repo
```

- Install the systemd unit and adapt the paths and user inside it:

```sh
cp contrib/buildbot.service /etc/systemd/system/buildbot.service
```

- As root:

```sh
systemctl daemon-reload && systemctl enable buildbot && systemctl start buildbot
```
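As a reference, a minimal unit file could look like the sketch below. The real one ships in `contrib/buildbot.service`; the paths and the virtualenv location here are assumptions to adapt to your setup:

```ini
# Hypothetical sketch of buildbot.service -- adapt paths and user
[Unit]
Description=ScummVM BuildBot master
After=network.target docker.service

[Service]
User=buildbot
WorkingDirectory=/home/buildbot
ExecStart=/home/buildbot/buildbot-master/bin/buildbot start --nodaemon master
Restart=on-failure

[Install]
WantedBy=multi-user.target
```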
The Buildbot master is located in `master` and spread across several files:

- `steps.py`: defines all the custom steps needed by ScummVM,
- `config.py`: contains configuration that directly depends on the local setup and administrator choices,
- `builds.py`: configures all the build steps for each project. There is one class per project (ScummVM, ScummVM tools) and each variant (master, stable, whatever) is registered using that class,
- `platforms.py`: describes all the platform-specific configurations. These can be specialized depending on the build run,
- `workers.py`: defines the various worker types used by BuildBot. There are currently two of them: fetcher and builder. The fetcher is responsible for fetching sources and triggering the actual builds, while builders are instantiated based on the build platform,
- `ui.py`: contains all the user-interface-related configuration,
- `master.cfg`: the file loaded by BuildBot; it defines the `BuildmasterConfig` object based on the other files.
New platforms get defined in `platforms.py`. New projects (GSoC for example) are added to `builds.py`.
Workers get started on demand and stopped when they are no longer needed. This avoids idle containers consuming memory and CPU cycles. Workers use a local network created at buildbot startup.
All the data generated by the build processes is stored in `buildbot-data` at the root of the repository: the sources, build objects, packages and the ccache directory. All of this is created at buildbot startup.
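The resulting layout looks roughly like this (the exact directory names are assumptions based on the description above):

```
buildbot-data/
├── src/       # fetched sources
├── builds/    # build objects
├── packages/  # generated packages
└── ccache/    # shared compiler cache
```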
Docker images are started read-only to avoid storing modifications and to keep the build process reproducible.
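Taken together, launching a worker container is roughly equivalent to a `docker run` like the one printed below. This is a dry-run sketch that only prints the command; the image name, network name and mount point are hypothetical, and the master actually drives Docker through its API:

```sh
# Dry-run sketch: print the docker invocation for a disposable, read-only worker.
cmd="docker run --rm --read-only --network buildbot-net \
  -v /home/buildbot/buildbot-data/ccache:/data/ccache \
  scummvm/buildbot-debian-x86-64"
printf '%s\n' "$cmd"
```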
`make master` installs buildbot and generates a
Workers run using Docker. All the worker images are defined through Dockerfiles in `workers`. A Dockerfile ending with the `.m4` extension is preprocessed with the GNU m4 preprocessor. While its syntax is quite... oldish, it doesn't clash with the Dockerfile syntax the way the C preprocessor does, and it is widely available.
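For illustration, such a template might pull shared instructions in through m4's include mechanism. The included file name and its contents below are invented for the example; the real shared snippets live under `common`:

```m4
dnl Hypothetical Dockerfile.m4 -- expand with: m4 Dockerfile.m4 > Dockerfile
include(`common/base-worker.m4')dnl expands to shared FROM/RUN instructions
RUN apt-get update && apt-get install -y --no-install-recommends ccache
```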
Each worker has its image data located in its own directory, so `debian-x86-64` data resides in `workers/debian-x86-64`.
M4 Dockerfiles include parts from the `common` directory to avoid repeating the same instructions over and over. This gives more latitude for creating images than deriving everything from a single base image containing all the buildbot tools.
To create a worker for a new platform, one should first create a toolchain with everything ready in it. This separates the toolchain creation process from its instantiation with the buildbot tools for ScummVM use. To comply with the Makefile rules, a worker needs a toolchain with the same name, or no toolchain at all. You don't need a toolchain if the worker can easily pull all the libraries it needs directly from repositories (as the Debian platforms do).
To create a custom worker, the Dockerfile should:

- create a new image with the same base as the toolchain one (to have matching host libraries),
- install buildbot in it (if the base image is Debian, you can use
- copy the `PREFIX` directory from the toolchain,
- define the same environment as in the toolchain (the `PATH` can be adjusted to make the build process easier),
- finish the buildbot configuration (using
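A minimal sketch of such a Dockerfile, assuming a hypothetical `scummvm/toolchain-example` image that installed everything under `/opt/toolchain`:

```dockerfile
# Hypothetical worker Dockerfile following the steps above; all names are examples.
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends python3-pip \
    && pip3 install buildbot-worker
# Copy the prebuilt toolchain out of its image (multi-stage COPY)
COPY --from=scummvm/toolchain-example /opt/toolchain /opt/toolchain
# Reproduce the toolchain environment; prepending to PATH eases the build process
ENV PATH="/opt/toolchain/bin:${PATH}"
```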
`make workers` just runs `docker build` on every directory in `workers`, using GNU m4 when needed. It handles file modifications and the toolchain dependency.
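In spirit (this is not the actual Makefile, which also tracks timestamps and toolchain dependencies), the target behaves like the dry-run loop below, which only prints the commands it would execute; the image tag scheme is hypothetical:

```sh
# Dry-run sketch of `make workers`: print, don't execute, the build commands.
mkdir -p workers/debian-x86-64 workers/example-m4   # stand-in worker directories
: > workers/example-m4/Dockerfile.m4                # this worker uses m4 preprocessing
for dir in workers/*/ ; do
    name=${dir%/}; name=${name##*/}
    # .m4 Dockerfiles get expanded by GNU m4 before docker build
    [ -f "${dir}Dockerfile.m4" ] && echo "m4 ${dir}Dockerfile.m4 > ${dir}Dockerfile"
    echo "docker build -t buildbot-${name} ${dir}"
done
```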
A toolchain is a collection of the compiler, binutils and libraries needed to compile ScummVM. They are installed in a specified prefix and shouldn't pollute the image filesystem.
There is one common image, `toolchains/common`, to help generate toolchains. It just contains the scripts and no operating system. Toolchain generation images copy files from this base image when needed (like in
When a custom toolchain has to be built, the Dockerfile should:

- build or install a compiler, a libc and binutils at the prefix location,
- define the environment with all the newly installed binaries,
- install prebuilt libraries if available,
- copy the build rules for missing libraries from `common/toolchain` or the local build context,
- run the rules.
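Sketched as a Dockerfile, with every concrete name (target triple, prefix, common image, script) invented for the example:

```dockerfile
# Hypothetical toolchain Dockerfile for a cross-compilation target.
FROM debian:bookworm-slim
ENV PREFIX=/opt/example-toolchain
# 1. compiler, libc and binutils at the prefix (packaged cross tools here)
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc-mips-linux-gnu binutils-mips-linux-gnu
# 2. environment pointing at the newly installed binaries
ENV PATH="${PREFIX}/bin:${PATH}"
# 3./4. copy the build rules for missing libraries from the common image...
COPY --from=toolchains-common /scripts/build-libs.sh /tmp/build-libs.sh
# 5. ...and run them
RUN /tmp/build-libs.sh
```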
`make toolchains` just runs `docker build` on every directory in `toolchains`, using GNU m4 when needed.
Many parts of this repository come from Colin Snover's work at https://github.com/csnover/scummvm-buildbot.
Thanks to him.
There are still many things to do:
- trigger build using Git polling (or GitHub push notifications),
- create all platforms images and add them to master configuration,
- add back the IRC bot (IRC bot provided by buildbot is currently not secure enough),
- other things I must have forgotten...