Testing Servo on Taskcluster
When a pull request is reviewed and the appropriate command is given,
Homu creates a merge commit of
master and the PR’s branch, and pushes it to the auto branch.
One or more CI systems (through their own means) get notified of this push by GitHub,
start testing the merge commit, and use the GitHub Status API to report results.
Through a webhook, Homu gets notified of changes to these statuses.
If all of the required statuses are reported successful,
Homu pushes its merge commit to the master branch
and goes on to testing the next pull request in its queue.
Taskcluster − GitHub integration
Taskcluster is very flexible and not necessarily tied to GitHub,
but it does have an optional GitHub integration service that you can enable
on a repository as a GitHub App.
When enabled, this service gets notified for every push, pull request, or GitHub release.
It then schedules some tasks based on reading
.taskcluster.yml in the corresponding commit.
This file contains templates for creating one or more tasks, but the logic it can support is fairly limited. So a common pattern is to have it run only a single initial task, called a decision task, which can run complex logic based on code and data in the repository to build an arbitrary task graph.
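As a rough illustration of what a decision task submits, here is a sketch of a minimal Taskcluster task definition built in Python. All identifiers here (provisioner, worker type, image, metadata) are made up for the example and are not Servo’s actual configuration:

```python
from datetime import datetime, timedelta, timezone

def decision_task_definition(task_group_id, now=None):
    """Build a minimal task definition, as a decision task might before
    submitting it to Taskcluster's Queue service.

    Every identifier below is illustrative, not Servo's real config."""
    now = now or datetime.now(timezone.utc)
    return {
        "taskGroupId": task_group_id,
        "provisionerId": "example-provisioner",   # assumption
        "workerType": "example-docker-worker",    # assumption
        "created": now.isoformat(),
        "deadline": (now + timedelta(hours=2)).isoformat(),
        "payload": {
            "image": "example/decision:latest",   # assumption
            "command": ["python3", "decision_task.py"],
            "maxRunTime": 3600,  # seconds
        },
        "metadata": {
            "name": "Decision task (sketch)",
            "description": "Decides what other tasks to schedule.",
            "owner": "someone@example.org",
            "source": "https://example.org/repository",
        },
    }

task = decision_task_definition("example-group-id")
```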
Servo’s decision task
.taskcluster.yml schedules a single task
that runs the Python 3 script etc/ci/decision_task.py.
It is called a decision task as it is responsible for deciding what other tasks to schedule.
The Docker image that runs the decision task
is hosted on Docker Hub.
It is built by Docker Hub automated builds based on a
taskcluster-bootstrap-docker-images GitHub repository.
Hopefully, this image does not need to be modified often
as it only needs to clone the repository and run Python.
In-tree Docker images
Similar to Firefox, Servo’s decision task supports running other tasks
in Docker images built on-demand, based on
Dockerfiles in the main repository.
Modifying a Dockerfile and relying on those new changes
can be done in the same pull request or commit.
To avoid rebuilding images on every pull request,
they are cached based on a hash of the source Dockerfile.
For now, to support this hashing, we require
Dockerfiles to be self-contained (with one exception).
Images are built without a context,
so instructions like
COPY cannot be used because there is nothing to copy from.
The exception is that the decision task adds support for a non-standard include directive:
if a Dockerfile’s first line is
% include followed by a filename,
that line is replaced with the content of that file.
etc/taskcluster/docker/build.dockerfile starts like so:
```
% include base.dockerfile

RUN \
    apt-get install -qy --no-install-recommends \
    # […]
```
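The expansion itself is simple enough to sketch in a few lines of Python, together with the hashing used for caching described above. This is an illustrative reimplementation, not the actual code from the decision task script:

```python
import hashlib
import pathlib

def expand_dockerfile(path):
    """If the first line is `% include <filename>`, replace it with the
    content of that file (resolved relative to the Dockerfile)."""
    path = pathlib.Path(path)
    first_line, _, rest = path.read_text().partition("\n")
    if first_line.startswith("% include "):
        included = path.parent / first_line[len("% include "):].strip()
        return included.read_text() + "\n" + rest
    return path.read_text()

def image_cache_key(path):
    """Cache images by a hash of the fully-expanded Dockerfile, so that
    a change to an included file also invalidates the cache."""
    return hashlib.sha256(expand_dockerfile(path).encode()).hexdigest()
```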
web-platform-tests (WPT) is large enough that running all of it takes a long time. So it supports chunking: multiple chunks of the test suite can be run in parallel on different machines. As of this writing, Servo’s current Buildbot setup for this has each machine start by compiling its own copy of Servo. On Taskcluster with a decision task, we can instead have a single build task save its resulting binary executable as an artifact, together with multiple testing tasks that each depend on the build task (wait until it successfully finishes before they can start) and start by downloading the artifact that was saved earlier.
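The resulting task graph can be sketched as plain data. Task ids, artifact paths, and commands below are illustrative (`--total-chunks`/`--this-chunk` mirror WPT’s chunking options, but the real tasks are defined in the decision task script):

```python
def wpt_task_graph(chunks):
    """Sketch: one build task uploads the Servo binary as an artifact;
    each WPT chunk task depends on it and starts by downloading that
    artifact instead of compiling its own copy of Servo."""
    build_id = "build-linux"  # illustrative task id
    tasks = {
        build_id: {
            "dependencies": [],
            "artifacts": ["target/release/servo"],  # assumed path
            "command": ["./mach", "build", "--release"],
        }
    }
    for i in range(1, chunks + 1):
        tasks[f"wpt-chunk-{i}"] = {
            "dependencies": [build_id],  # wait for the build to finish
            "command": [
                "./mach", "test-wpt",
                f"--total-chunks={chunks}", f"--this-chunk={i}",
            ],
        }
    return tasks
```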
The logic for all this is in etc/ci/decision_task.py
and can be modified in any pull request.
Taskcluster automatically saves the
stdio output of a task as an artifact,
and has special support for seeing and streaming that output while the task is still running.
Servo’s decision task additionally looks for
*.log arguments to its tasks’ commands,
assumes they instruct a program to create a log file with that name,
and saves those log files as individual artifacts.
For example, WPT tasks have a log artifact
that is typically the most relevant output when such a task fails.
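That heuristic can be sketched as follows, assuming artifacts are published under a `public/` prefix (an assumption for illustration, not necessarily Servo’s actual naming):

```python
def log_artifacts(command):
    """Scan a task's command for arguments ending in `.log`, assume
    each names a log file the command will create, and return artifact
    definitions for them."""
    return [
        {"name": "public/" + arg.rsplit("/", 1)[-1], "path": arg}
        for arg in command
        if arg.endswith(".log")
    ]
```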
Scopes and roles
Scopes are what Taskcluster calls permissions. They control access to everything.
A running task has a set of scopes allowing it access to various functionality and APIs. It can grant those scopes (and at most those) to sub-tasks that it schedules (if it has the scope allowing it to schedule new tasks in the first place).
For example, when Taskcluster-GitHub schedules tasks based on the
.taskcluster.yml file in a push to the
auto branch of this repository,
those tasks are granted the scope assume:repo:github.com/servo/servo:branch:auto.
Scopes that start with
assume: are special:
they expand to the scopes defined in the matching roles.
In this case, the
repo:github.com/servo/servo:branch:* role matches.
Servo admins have scope
auth:update-role:repo:github.com/servo/* which allows them
to edit that role in the web UI and grant more scopes to these tasks
(provided they have the new scope themselves).
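Scope satisfaction with a trailing `*` wildcard can be sketched as follows; this simplified version ignores recursive `assume:` role expansion, which the real service performs:

```python
def scope_satisfies(granted, required):
    """A granted scope satisfies a required one if they are equal, or
    if the granted scope ends in `*` and the part before the `*` is a
    prefix of the required scope."""
    if granted.endswith("*"):
        return required.startswith(granted[:-1])
    return granted == required
```

Under this rule, the role name repo:github.com/servo/servo:branch:* covers pushes to any branch, including auto.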
Dedicated roles centralize the set of scopes granted to the decision task.
This avoids maintaining them separately in the
repo:github.com/servo/servo:branch:* role and in the
hook-id:project-servo/daily role.
Only a restricted base role is granted to tasks executed when a pull request is opened.
These tasks are less trusted because they run before the code has been reviewed,
and anyone can open a PR.
project-servo/daily hook in Taskcluster’s Hooks service
is used to run some tasks automatically every 24 hours.
Here as well, we use a decision task.
The decision_task.py script can differentiate this from a GitHub push
based on the
$TASK_FOR environment variable.
Daily tasks can also be triggered manually.
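A sketch of how the script might branch on that variable; the exact set of `TASK_FOR` values here is an assumption for illustration:

```python
import os

def decision_entry_point(environ=os.environ):
    """Dispatch on the $TASK_FOR environment variable to decide which
    task graph to build. The specific values are assumptions."""
    task_for = environ["TASK_FOR"]
    if task_for == "github-push":
        return "schedule CI tasks for a push"
    if task_for == "daily":
        return "schedule daily tasks"
    raise ValueError(f"Unexpected TASK_FOR value: {task_for!r}")
```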
Scopes available to the daily decision task need to be both requested in the hook definition
and granted through the hook-id:project-servo/daily role.
Because they do not have something similar to GitHub statuses that link to them,
daily tasks are indexed under a dedicated route in Taskcluster’s Index service.
AWS EC2 workers
Tasks scheduled with the
servo-docker-worker worker type run in a Linux environment,
in a Docker container, on an AWS EC2 virtual machine.
These machines are short-lived “spot instances”. They are started automatically as needed by the AWS provisioner when the existing capacity is insufficient to execute queued tasks. They terminate themselves after being idle without work for a while, or unconditionally after a few days. Because these workers are short-lived, we don’t need to worry about evicting old entries from Cargo’s or rustup’s download cache, for example.
Servo admins can view and edit the worker type definition which configures the provisioner, in particular with the types of EC2 instances to be used.
Other worker types
See the README.md files for:
- Windows, also short-lived workers on EC2
- macOS, Mac Minis hosted by Macstadium
- Non-virtualized Linux, hosted by Packet.net
Taskcluster − Treeherder integration
Self-service, Bugzilla, and IRC
Taskcluster is designed to be “self-service” as much as possible,
with features like in-tree configuration files
or the web UI for modifying the worker type definitions.
However some changes like adding a new worker type still require Taskcluster admin access.
For those, file requests on Bugzilla under Taskcluster :: Service Request.
For asking for help less formally, try the
#taskcluster channel on Mozilla IRC.
We try to keep as much as possible of our Taskcluster configuration in this repository. To modify those, submit a pull request.
- The .taskcluster.yml file, for starting decision tasks in reaction to GitHub events
- The etc/ci/decision_task.py file, defining what other tasks to schedule
However some configuration needs to be handled separately. Modifying those requires Servo-project-level administrative access.
- The aws-provisioner/servo-docker-worker worker type definition, for EC2 instances configuration
- The project-servo/daily hook definition, for starting daily decision tasks
- The hook-id:project-servo/daily role, for scopes granted to those tasks
- The repo:github.com/servo/servo:branch:* role, for scopes granted to tasks responding to a GitHub push to the repository (including by Homu)