

Vox Pupuli Tasks - The Webapp for community management




As collaborators at Vox Pupuli, we basically have two main kinds of tasks: reviewing open pull requests and yak shaving the Puppet modules themselves.

We currently have a few tools for those jobs.

Reviewing open Pull Requests

Collaborators review a lot of code across many pull requests. But there are even more pull requests that are open yet don't need any attention. A collaborator spends a lot of time figuring out which pull requests actually need attention.

One of the goals of this project is to provide a proper UI that displays filtered pull requests. Some examples:

It's not required to review code in a pull request if a merge conflict exists. If the PR is properly labeled, we can exclude it from the UI. The service gets notifications from GitHub for each activity on a PR. If a conflict appears, a label will be added. It will also automatically be removed if the conflict disappears after a rebase.

Instead of dealing with all open PRs over and over, collaborators can spend their time reviewing pull requests that actually need it.

Some more examples are documented as open issues, in particular issue 4.

Yak shaving Puppet modules

This is the second big task for collaborators: updating dependencies in metadata.json files, allowing new Puppet versions, dropping legacy operating systems. There are many tasks that collaborators do from time to time, and this project tries to make them as easy as possible, or even to automate them where suitable.


You can currently access a beta version. This is an MVP that we redeploy with enhancements every few days at the moment. The app uses GitHub's OAuth for authentication:

login screenshot

The application is developed by Robert 'flipez' Müller and Tim 'bastelfreak' Meusel. The current OAuth app is registered to their personal account but will soon be migrated to the Vox Pupuli GitHub organisation. You do not need to grant the application access to any repository; this OAuth setup is only used to authenticate the user.

In the future it will be possible to restrict the login, or certain features, to people who are members of specific GitHub organisations or teams. We didn't want to reimplement a whole user management system, so we rely on GitHub OAuth.

After the login, you see the following page:

startpage screenshot

Each yak shaving task is a row. It's prefixed with the number of modules that are in this category. For example:

single yak shaving action

You can click on each PR, and the app displays all the information that GitHub provides. It will also list open pull requests, and it will be possible to filter this list. The filtering will also work for all open pull requests in a namespace. The design and scope are currently being discussed and implemented in Issue #4.

Besides being an OAuth application, this Ruby on Rails website is also a registered GitHub App. This means that GitHub sends notifications for user interactions to the Rails app. The app gets information about every new pull request, new label, new code or comment in a pull request, and much more. We currently store those notifications in a self-hosted Sentry. The data displayed in the frontend comes from polling the GitHub API and from analysing the notifications. In the future we will add more automation to the app, which will be based on the notifications. Use cases for automation are discussed and developed at:

Other open issues might also be good candidates for live interactions based on notifications. Please comment on the open issues or raise new ones if you have crazy ideas.

Existing Automation

We aim to automate different use cases. Each use case gets a dedicated milestone on GitHub to track the related issues and pull requests.

Merge Conflicts - Milestone 1

At the moment, the application handles appearing and disappearing merge conflicts. Since PR #35 went live, we are able to detect if a pull request went from a mergeable into a non-mergeable state. In this case we check if the label merge-conflicts is present in the repository, and then add it to the pull request.

bot adds label

Our bot account also adds a comment to the pull request. GitHub does not notify the author when a label is added, but it does for comments.

bot adds comment
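The label-plus-comment flow described above can be sketched roughly as follows. The method name, constant, and comment text are hypothetical, and `client` stands for any Octokit-style GitHub client; the app's actual code may differ:

```ruby
# Hypothetical sketch of the merge-conflict handling described above.
MERGE_CONFLICT_LABEL = 'merge-conflicts'.freeze

def handle_mergeable_change(client, repo, pr_number, mergeable:)
  if mergeable
    # The conflict disappeared (e.g. after a rebase): remove the label again
    client.remove_label(repo, pr_number, MERGE_CONFLICT_LABEL)
  else
    # The PR became non-mergeable: label it, and leave a comment so the
    # author gets a notification (GitHub does not notify on label changes)
    client.add_labels_to_an_issue(repo, pr_number, [MERGE_CONFLICT_LABEL])
    client.add_comment(repo, pr_number,
                       'This pull request has merge conflicts. Please rebase against the target branch.')
  end
end
```

Any object that responds to the three Octokit-style methods works here, which also makes the logic easy to test without touching the GitHub API.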

Sync GitHub Labels

People heavily depend on labels for their daily collaborator work. To ensure that they can use the correct labels, this app tracks an authoritative list of:

  • Label names
  • Their colour
  • Their description

The application ensures that all those labels are configured in all repositories.

This is all managed in one YAML file.

Update the file to automatically update all labels in a repository. This app does not remove labels that aren't in the YAML file; it just ensures that all labels from the YAML file are present in the repositories.
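A minimal sketch of that ensure-present logic, assuming a YAML layout with `name`, `color` and `description` keys (the real file in this repo may be structured differently, and the sample labels below are illustrative):

```ruby
require 'yaml'

# Assumed YAML structure for the authoritative label list.
LABELS_YAML = <<~YAML
  - name: merge-conflicts
    color: 'ee0701'
    description: This PR has merge conflicts that need to be resolved
  - name: needs-tests
    color: 'fbca04'
    description: Please add tests for the proposed change
YAML

# Return the desired labels that are missing from a repository.
# `existing_names` is the list of label names already present there;
# labels not in the YAML are deliberately left alone.
def missing_labels(desired, existing_names)
  desired.reject { |label| existing_names.include?(label['name']) }
end
```

Each missing label could then be created with an Octokit-style `add_label(repo, name, color, description: ...)` call.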

Future work: the YAML file already supports aliases for labels. The idea is that the app searches repositories for labels matching an alias; matching labels could then be renamed to the correct one.

The work for this feature is/was tracked in issue #131.

Configure your module repo

The old list of repos to ignore is still active but will soon be replaced with the workflow below.

By default, VPT takes care of every repo in the voxpupuli group matching /^puppet-(?!lint)/.
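A quick illustration of that pattern: the negative lookahead excludes puppet-lint (and its check plugins) while matching every other puppet- repository.

```ruby
# The pattern from the README: "puppet-" prefix, but not "puppet-lint…"
VPT_REPO_PATTERN = /^puppet-(?!lint)/

def vpt_managed?(repo_name)
  !!(repo_name =~ VPT_REPO_PATTERN)
end
```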

You can configure VPT on a repository level to override some of the default behavior via the .sync.yml file.

Example configuration of the current possibilities:

  enabled: false # vpt will ignore this repository completely if false. default: true
    needs_rebase: false # vpt will post a comment if a pull request gets conflicts with the target branch if true, default: true
    tests_failed: false # vpt will post a comment if the ci tests enter a failed state if true, default: true
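How such overrides might be merged with the defaults, as a sketch: the flat key layout and the helper name `vpt_config` are assumptions for illustration, not VPT's actual implementation.

```ruby
require 'yaml'

# Defaults taken from the README comments; the flat layout is an assumption.
VPT_DEFAULTS = {
  'enabled'      => true, # process this repository at all
  'needs_rebase' => true, # comment when a PR gets merge conflicts
  'tests_failed' => true  # comment when CI enters a failed state
}.freeze

# Merge a repository's .sync.yml overrides over the defaults,
# ignoring any keys VPT does not know about.
def vpt_config(sync_yml_content)
  overrides = YAML.safe_load(sync_yml_content) || {}
  VPT_DEFAULTS.merge(overrides.slice(*VPT_DEFAULTS.keys))
end
```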

Local Setup

To start the app locally, do the following (this assumes that you have Ruby, Bundler and Yarn available, and that Redis is running):

git clone
cd vox-pupuli-tasks
bundle config set path 'vendor/bundle'
bundle config set with 'development test'
bundle install
yarn install --frozen-lockfile --non-interactive
export SECRET_KEY_BASE=$(bundle exec rails secret)
bundle exec rails assets:precompile
# removing the credentials.yml.enc file is required unless you were given the matching master.key by the developers
rm config/credentials.yml.enc
bundle exec rails credentials:edit
docker-compose up -d postgres
# only required for local debugging
docker-compose up -d jaeger
# db:create will fail if the database already exists, go to the next step if that is the case
RAILS_ENV=development bundle exec rails db:create
RAILS_ENV=development bundle exec rails db:migrate
bundle exec sidekiq
# in a new shell
bundle exec rails s -b ''

Secrets are stored as an encrypted yaml file. You can edit them by doing:

bundle exec rails credentials:edit

This only works properly if one of the developers sent you the /config/master.key file.

For a basic development setup you need at least these values:

# Used as the base secret for all MessageVerifiers in Rails, including the one protecting cookies.
secret_key_base: <existing value from `rails secret`>

    host: localhost
    port: 6379
    db: 9

Foreman will take care of the actual Rails application, and it will also start Sidekiq.

Connect to the local jaeger instance to see traces of what's going on.

Dry Run

Experimental! It is very likely that not all write requests are skipped yet!

You can use the environment variable DRY_RUN to skip write requests to GitHub. Whenever a write is skipped because of the flag, this is logged to log/dry_run.log.
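A sketch of such a guard, with a hypothetical helper name; the app's real dry-run plumbing may look different:

```ruby
require 'logger'

# Wrap a GitHub write in a guard that honours the DRY_RUN environment
# variable: the block only runs when DRY_RUN is unset, otherwise the
# skipped write is logged instead.
def with_write_guard(description, logger: Logger.new($stdout))
  if ENV['DRY_RUN']
    logger.info("DRY_RUN: skipped #{description}")
    :skipped
  else
    yield
  end
end
```

A call site would then look like `with_write_guard('add merge-conflicts label') { ... }`, with the actual API call inside the block.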

Production Setup

The production setup is a homage to microservices:

poo logo

The setup is deployed as Docker microservices. This repository contains a docker-compose.yaml for this.

We deploy multiple containers:


This is a web interface for Elasticsearch. The service is available at localhost:9001. We highly suggest deploying an nginx with proper TLS certificates in front of it. To access the Elasticsearch container, you can use this URL:


The docker-compose.yaml sets elasticsearch as the FQDN for the container.


We use semantic logger to ship all Rails log data to Elasticsearch. Logs are important, and writing them to a file inside a container is a bad idea.


Kibana is our frontend for elasticsearch. It's available at localhost on port 5601.


ToDo: Describe how we forward errors to Sentry

GitHub App Setup

As mentioned in the usage section, this Ruby on Rails application can be registered as a GitHub App. To do this, a few things need to be configured.

User authorization callback URL

The full URL to redirect to after a user authorizes an installation. For our instance this is

Request user authorization (OAuth) during installation

Requests that the installing user grants access to their identity during installation of the application.

This allows us to validate if a user is in a specific GitHub organisation or Team.

Webhook URL

Events will POST to this URL. For our instance this is

As of 2021-06, webhooks can only be set up for organisations or individual repos. Set up the webhook for your organisation as described in the linked docs. Make sure to select the content type "application/json" and "Send me everything". As "Secret", create a secure random string and also configure it via rails credentials:edit as:

    webhook_secret: <secure random string>
    client_id: <from your OAuth registration>
    client_secret: <from your OAuth registration>
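GitHub signs every webhook delivery with this shared secret using HMAC-SHA256 and sends the result in the X-Hub-Signature-256 header, so the receiving app can verify that a POST really came from GitHub. A self-contained sketch of that check (helper names are ours, not the app's):

```ruby
require 'openssl'

# Constant-time string comparison, to avoid leaking the expected
# signature through timing differences.
def secure_equals?(a, b)
  return false unless a.bytesize == b.bytesize
  diff = 0
  a.bytes.zip(b.bytes) { |x, y| diff |= x ^ y }
  diff.zero?
end

# Validate GitHub's X-Hub-Signature-256 header for a webhook delivery.
def valid_webhook_signature?(secret, payload_body, signature_header)
  return false if signature_header.nil?
  expected = 'sha256=' +
             OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('sha256'), secret, payload_body)
  secure_equals?(expected, signature_header)
end
```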


Sadly, we require Administration access with Read and write. It will allow us to add labels to a project.

issue perms

We need Read and write access to issues because we add/remove labels on pull requests and also comment on them. More information can be found in the GitHub developer docs. (For GitHub, a pull request is a special kind of issue; that's why pull request permissions are handled on the issue endpoints.)

issue perms

The same applies to pull requests. More information can be found in the GitHub developer docs.

pr perms


We also need to tell GitHub which events we would like to receive:


API docs for:

Contribution and Development

We have a helpful rake task available to run the Ruby linter. It will inform you about style guide violations. Please execute it before you open a pull request:

bundle exec rake rubocop

This will execute the linter. You can also tell it to fix things automatically (this works often, but not for all issues):

bundle exec rake rubocop:auto_correct

We constantly improve our codebase. We have adjusted a few RuboCop cops to relax the default configuration. Sometimes we also need to merge important changes that violate the current RuboCop config. For such situations we need to run:

bundle exec rubocop --auto-gen-config

Add/Drop new Operating system checks

Among all the things we validate is a check for the supported operating systems in a Puppet module's metadata.json file, since new OS versions are released from time to time and old ones go end-of-life. This is checked via puppet_metadata.
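For illustration, a check along those lines can be written against the metadata.json directly. This is a sketch with an assumed helper name, not the puppet_metadata gem's actual API:

```ruby
require 'json'

# Does a module's metadata.json declare support for a given OS release?
# metadata.json uses an "operatingsystem_support" array, each entry
# naming an OS and its supported releases.
def supports_os?(metadata_json, os, release)
  metadata = JSON.parse(metadata_json)
  entry = (metadata['operatingsystem_support'] || [])
          .find { |e| e['operatingsystem'] == os }
  return false unless entry
  (entry['operatingsystemrelease'] || []).include?(release)
end
```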


The following flowchart displays what happens when we receive a GitHub notification about a pull request. The diagram was created with diagrams.net (draw.io); you can import vpt.drawio from the /images/ directory.



This project is licensed under the GNU Affero General Public License version 3.

Docker tricks

Start just the rails console while all containers are off:

docker-compose run --no-deps web bundle exec rails console

Start all containers:

docker-compose up -d

Start the rails console while containers are running:

docker-compose exec web bundle exec rails c

Delete the sidekiq queue (execute in irb session, returns true if jobs got deleted, otherwise false):


This project is sponsored by Hetzner Cloud. They provide us with free cloud instances to host the application.


Are you interested as well in sponsoring parts of the Vox Pupuli organisation? Get in touch with the Project Management Committee.

Prepare a release

We use the GitHub changelog generator to generate our changelog. To prepare a new release, write the desired version into our Rakefile. Afterwards, export a GitHub API token as the CHANGELOG_GITHUB_TOKEN environment variable. Now you can generate the changelog with bundle exec rake changelog. Propose that as a pull request.

If it gets approved and merged, you can create a git tag and push it. Our CI platform will take care of pushing a matching docker image.