
Contributing to Concourse

It takes a lot of work from a lot of people to build a great CI system. We really appreciate any and all contributions we receive, and are dedicated to helping out anyone that wants to be a part of Concourse's development.

This doc will go over the basics of developing Concourse and testing your changes.

If you run into any trouble, feel free to hang out and ask for help in Discord! We'll grant you the @contributors role on request (just ask in #introductions), which will allow you to chat in the #contributors channel, where you can ask for help or get feedback on something you're working on.

Contribution process

  • Fork this repo into your GitHub account.

  • Install the development dependencies and follow the instructions below for running and developing Concourse.

  • Commit your changes and push them to a branch on your fork.

    • Don't forget to write tests; pull requests without tests are unlikely to be merged. For instructions on writing and running the various test suites, see Testing your changes.

    • All commits must have a signature certifying agreement to the DCO. For more information, see Signing your work.

    • Write release notes by adding to the file in the release-notes/ directory! For formatting and style examples, see previous release notes in the same directory.

    • Optional: check out our Go style guide!

  • When you're ready, submit a pull request!

Development dependencies

You'll need a few things installed in order to build, test and run Concourse during development:

  • Go (1.11+, for module support)
  • Yarn (for building the web UI assets)
  • Docker and docker-compose (for running Concourse locally)

Concourse uses Go 1.11's module system, so make sure the repo is not cloned under your $GOPATH.

Running Concourse

To build and run Concourse from source, run the following in the root of this repo:

$ yarn install
$ yarn build
$ docker-compose up

Concourse will be running at localhost:8080.

Building fly and targeting your local Concourse

To build and install the fly CLI from source, run:

$ go install ./fly

This will install a fly executable to your $GOPATH/bin, so make sure that's on your $PATH!

Once fly is built, you can get a test pipeline running like this:

Log in to the locally-running Concourse instance targeted as dev:

$ fly -t dev login -c http://localhost:8080 -u test -p test

Create an example pipeline that runs a hello world job every minute:

$ fly -t dev set-pipeline -p example -c examples/hello-world-every-minute.yml

Unpause the example pipeline:

$ fly -t dev unpause-pipeline -p example

Developing Concourse

Concourse's source code is structured as a monorepo containing Go source code for the server components and Elm/Less source code for the web UI.

Currently, the top-level folders are cleverly (read: confusingly) named, because they were originally separate components living in their own Git repos with silly air-traffic-themed names.

directory description
/atc The "brain" of Concourse: pipeline scheduling, build tracking, resource checking, and web UI/API server. One half of concourse web.
/fly The fly CLI.
/testflight The acceptance test suite, exercising pipeline and fly features. Runs against a single Concourse deployment.
/web The Elm source code and other assets for the web UI, which gets built and then embedded into the concourse executable and served by the ATC's web server.
/go-concourse A Go client library for using the ATC API, used internally by fly.
/skymarshal Adapts Dex into an embeddable auth component for the ATC, plus the auth flag specifications for fly and concourse web.
/tsa A custom-built SSH server responsible for securely authenticating and registering workers. The other half of concourse web.
/worker The concourse worker library code for registering with the TSA, periodically reaping containers/volumes, etc.
/cmd This is mainly glue code to wire the ATC, TSA, BaggageClaim, and Garden into the single concourse CLI.
/topgun Another acceptance suite which covers operator-level features and technical aspects of the Concourse runtime. Deploys its own Concourse clusters, runs tests against them, and tears them down.

Rebuilding to test your changes

After making any changes, you can try them out by rebuilding and recreating the web and worker containers:

$ docker-compose up --build -d

This can be run in a separate terminal while the original docker-compose up command is still running.

In certain cases, when a change is made to the underlying development image (e.g. a Go upgrade from 1.11 to 1.12), you will need to pull the latest version of the concourse/dev image so that the web and worker containers can be built locally from the fresh image:

$ docker pull concourse/dev
$ docker-compose up --build -d

If you're working on a dependency that doesn't live under this repository (for instance, baggageclaim), you'll need to add a replace directive to go.mod pointing at the exact revision your copy of the module lives at:

# after pushing to the `sample` branch in your baggageclaim fork,
# resolve that branch to a module version (the module path below is a
# placeholder for your fork)
$ go mod download -json github.com/<your-user>/baggageclaim@sample | jq '.Version'
"v1.3.6-0.20190315100745-09d349f19891"

# with that version, append a replace directive to `go.mod`
$ echo 'replace github.com/concourse/baggageclaim => github.com/<your-user>/baggageclaim v1.3.6-0.20190315100745-09d349f19891' \
  >> ./go.mod

# run the usual build
$ docker-compose up --build -d

Working on the web UI

Concourse is written in Go, but the web UI is written in Elm and Less.

After making changes to web/, run the following to rebuild the web UI assets:

$ yarn build

When new assets are built locally, they will automatically propagate to the web container without requiring a restart.

Debugging with dlv

With Concourse already running during local development, it's possible to attach dlv to either the web or worker instance, allowing you to set breakpoints and inspect the current state of either one.

To trace a running web instance:

$ ./hack/trace web

To trace a running worker instance:

$ ./hack/trace worker

To attach an IDE debugger to a running instance, pass the --listen flag followed by a port; dlv will then start in headless mode, listening on the specified port.

To debug a running web instance:

$ ./hack/trace web --listen 2345

To debug a running worker instance:

$ ./hack/trace worker --listen 2345

After this is done, the final step is to connect your IDE to the debugger with the following parameters:

  • host: localhost
  • port: 2345

For GoLand, you can do so by going to Run | Edit Configurations… | + | Go Remote and filling in the parameters.

Connecting to Postgres

If you want to poke around the database, you can connect to the db node using the following parameters:

  • host: localhost
  • port: 6543
  • username: dev
  • password: (blank)
  • database: concourse

A utility script is provided to connect via psql (or pgcli if installed):

$ ./hack/db
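Those same parameters can be assembled into a standard connection URL for other tools or one-off scripts. Here's a sketch using only the standard library; sslmode=disable is an assumption that holds for the local dev container, not a general recommendation:

```go
package main

import (
	"fmt"
	"net/url"
)

// devDSN builds a connection URL from the docker-compose dev database
// parameters listed above: user dev, blank password, port 6543,
// database concourse.
func devDSN() string {
	u := url.URL{
		Scheme:   "postgres",
		User:     url.User("dev"), // blank password
		Host:     "localhost:6543",
		Path:     "/concourse",
		RawQuery: "sslmode=disable",
	}
	return u.String()
}

func main() {
	fmt.Println(devDSN())
}
```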

To reset the database, you'll need to stop everything and then blow away the db container:

$ docker-compose stop # or Ctrl+C the running session
$ docker-compose rm db
$ docker-compose up -d

Adding migrations

Concourse database migrations live under atc/db/migration/migrations. They are generated using Concourse's own built-in migration library. The migration file names are of the following format:

<migration_version>_<migration_name>.(up|down).(sql|go)

The migration version number is the timestamp at which the migration files were created. This ensures that migrations always run in order. There is a utility provided to generate migration files, located at atc/db/migration/cli.

To generate a migration:

  1. Build the CLI:

$ cd atc/db/migration
$ go build -o mig ./cli

  2. Run the generate command. It takes the migration name, the file type (SQL or Go) and, optionally, the directory in which to put the migration files (by default, new migrations are placed in ./migrations):

$ ./mig generate -n my_migration_name -t sql

This should generate two files for you:

<migration_version>_my_migration_name.up.sql
<migration_version>_my_migration_name.down.sql

Now that the migration files have been created in the right format, you can fill in the up and down migrations in these files. On startup, concourse web will look for any new migrations in atc/db/migration/migrations and will run them in order.

Testing your changes

Any new feature or bug fix should have tests written for it. If there are no tests, it is unlikely that your pull request will be merged, especially if it's for a substantial feature.

There are a few different test suites in Concourse:

  • unit tests: Unit tests live throughout the codebase (foo_test.go alongside foo.go), and should probably be written for any contribution.

  • testflight/: This suite is the "core Concourse" acceptance tests suite, exercising pipeline logic and fly execute. A new test should be added to testflight for most features that are exposed via pipelines or fly.

  • web/elm/tests/: These test the various Elm functions in the web UI code in isolation. For the most part, the tests for web/elm/src/<module name>.elm will be in web/elm/tests/<module name>Tests.elm. We have been finding it helpful to test the update and view functions pretty exhaustively, leaving the models free to be refactored.

  • web/wats/: This suite specifically covers the web UI, and runs against a real Concourse cluster just like testflight. This suite is still in its early stages and we're working out a unit testing strategy as well, so expectations are low for PRs, though we may provide guidance and require coverage on a case-by-case basis.

  • topgun/: This suite is more heavyweight and exercises behavior that may be more visible to operators than end-users. We typically do not expect pull requests to add to this suite.

If you need help figuring out the testing strategy for your change, ask in Discord!

Concourse uses Ginkgo as its test framework and suite runner of choice for Go code. You'll need to install the ginkgo CLI to run the unit tests and testflight:

$ go get github.com/onsi/ginkgo/ginkgo

We use Counterfeiter to generate fakes for our unit tests. You may need to regenerate fakes if you add or modify an interface. To do so, install counterfeiter as follows:

$ go get -u github.com/maxbrunsfeld/counterfeiter/v6

You can then generate the fakes by running

$ go generate ./...

in the directory where the interface is located.
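For reference, the usual pattern is a go:generate directive placed next to the interface, which is what go generate ./... picks up. The interface below is hypothetical, but the directive form matches typical counterfeiter usage:

```go
package main

import "fmt"

// Worker is a hypothetical interface for illustration. The go:generate
// directive below is what `go generate ./...` executes; counterfeiter
// typically writes the fake into a *fakes subpackage.
//go:generate counterfeiter . Worker
type Worker interface {
	Name() string
}

// gardenWorker is a trivial concrete implementation, just so the
// example runs on its own.
type gardenWorker struct{ name string }

func (w gardenWorker) Name() string { return w.name }

func main() {
	var w Worker = gardenWorker{name: "worker-1"}
	fmt.Println(w.Name())
}
```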

Running unit tests

Concourse is a ton of code, so it's faster to just run the tests for the component you're changing.

To run the tests for the package you're in, run:

$ ginkgo -r -p

This will run the tests for all packages found in the current working directory, recursively (-r), running all examples within each package in parallel (-p).

You can also pass the path to a package as an argument, rather than cd-ing into it.

Note that running go test ./... will break, as the tests currently assume only one package is running at a time (the ginkgo default). The go test default is to run each package in parallel, so tests that allocate ports for test servers and such will collide with each other.
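The port-collision problem described above is usually avoided by letting the kernel pick a free port instead of hard-coding one. A small sketch (not Concourse code) of the standard trick:

```go
package main

import (
	"fmt"
	"net"
)

// freePort asks the kernel for an unused TCP port by listening on
// port 0. Tests that hard-code ports collide when multiple packages
// run in parallel (the `go test ./...` default); binding port 0
// avoids that.
func freePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := freePort()
	if err != nil {
		panic(err)
	}
	fmt.Println(port)
}
```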

Running elm tests

You can run yarn test from the root of the repo or elm-test from the web/elm directory. They are pretty snappy so you can comfortably run the whole suite on every change.

Elm static analysis

Running yarn analyse will run many checks across the codebase and report unused imports and variables, potential optimizations, etc. Powered by elm-analyse. If you add the -s flag it will run a server at localhost:3000 which allows for easier browsing, and even some automated fixes!

Elm formatting

Run yarn format to format the elm code according to the official Elm Style Guide. Powered by elm-format.

Elm benchmarking

Run yarn benchmark.

Running the acceptance tests (testflight)

The testflight package contains tests that run against a real live Concourse. By default, it will run against localhost:8080, i.e. the docker-compose up'd Concourse.

If you've already got Concourse running via docker-compose up, you should be able to just run the acceptance tests by running ginkgo the same way you would run it for unit tests:

$ ginkgo -r -p testflight

Note: because testflight actually runs real workloads, you may want to limit the parallelism if you're on a machine with more than, say, 8 cores. This can be done by specifying --nodes:

$ ginkgo -r --nodes=4 testflight

Running the web acceptance tests (web/wats)

Run yarn test from the web/wats directory. They use puppeteer to run a headless Chromium. A handy fact is that in most cases if a test fails, a screenshot taken at the moment of the failure will be at web/wats/failure.png.

Running Kubernetes tests

Kubernetes-related tests are all end-to-end, living under topgun/k8s. They require access to a real Kubernetes cluster, with access granted through a properly configured ~/.kube/config file.

The tests require a few environment variables to be set:

  • CONCOURSE_IMAGE_TAG or CONCOURSE_IMAGE_DIGEST: the tag or digest to use when deploying Concourse in the k8s cluster
  • CONCOURSE_IMAGE_NAME: the name of the image to use when deploying Concourse to the Kubernetes cluster
  • CHARTS_DIR: location in the filesystem where a copy of the Concourse Helm chart exists.

With those set, go to topgun/k8s and run Ginkgo:

$ ginkgo .

A note on topgun

The topgun/ suite is quite heavyweight and we don't currently expect most contributors to run or modify it. It's also kind of hard for mere mortal external contributors to run anyway. So for now, ignore it.

Signing your work

Concourse has joined other open-source projects in adopting the Developer Certificate of Origin process for contributions. The purpose of the DCO is simply to determine that the content you are contributing is appropriate for submitting under the terms of our open-source license (Apache v2).

The content of the DCO is as follows:

Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.
(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.

This is also available at developercertificate.org.

All commits require a Signed-off-by: signature indicating that the author has agreed to the DCO. This must be done using your real name, and must be done on each commit. This line can be automatically appended via git commit -s.

Your commit should look something like this in git log:

commit 8a0a135f8d3362691235d057896e6fc2a1ca421b (HEAD -> master)
Author: Alex Suraci <>
Date:   Tue Dec 18 12:06:07 2018 -0500

    document DCO process

    Signed-off-by: Alex Suraci <>

If you forgot to add the signature, you can run git commit --amend -s. Note that you will have to force-push (push -f) after amending if you've already pushed commits without the signature.
