Bazel Continuous Integration
Want to test your project on Bazel CI? Simply file a request via this link!
Bazel uses Buildkite for continuous integration. The user interface and the orchestration of CI builds are fully managed by Buildkite, but Bazel brings its own CI machines. The buildkite folder contains all the scripts and configuration files necessary to set up Bazel's CI on Buildkite.
Bazel on Buildkite 101
When you first log into Buildkite you are presented with a list of pipelines. A pipeline is a template of steps that are executed either in sequence or in parallel and that all need to succeed in order for the pipeline to succeed. The Bazel organisation has dozens of pipelines. Here are a select few:
- The bazel postsubmit pipeline builds and tests each commit to Bazel's repository on all supported platforms.
- The bazel presubmit pipeline is triggered on every pull request to Bazel.
- The rules_go postsubmit pipeline is triggered on every commit to the rules_go repository.
- The TensorFlow pipeline builds and tests TensorFlow at `HEAD` every four hours.
When you click on a pipeline you can see the last few builds of this pipeline. Clicking on a build then gives you access to the details of the build. For example, the below image shows a failed build step on Ubuntu 16.04.
One can see which tests failed by clicking on the Test section. In the below example, `//src/test/shell/bazel:external_path_test` was flaky as it failed in 1 out of 5 runs.
You can view the failed test attempt's `test.log` file in the Artifacts tab.
Bazel accepts contributions via pull requests. Contributions by members of the bazelbuild organisation as well as members of individual repositories (i.e. rule maintainers) are whitelisted automatically and will immediately be built and tested on Buildkite.
An external contribution, however, first needs to be verified by a project member and therefore will display a pending status named Verify Pull Request.
A member can verify a pull request by clicking on Details, followed by Verify Pull Request.
Please vet external contributions carefully as they can execute arbitrary code on our CI machines.
Build and Test Results
After a pull request has been built and tested, the results will be displayed as a status message on the pull request. A detailed view is available when clicking on the corresponding Details link. Click here for an example.
Presubmit for downstream projects
You can preview the effect of an unmerged commit on downstream projects. See Testing Local Changes With All Downstream Projects.
Bazel downstream projects are red? Use culprit finder to find out which Bazel commit broke them!
First, check whether the project is green with the latest Bazel release. If not, it was probably broken by its own commits rather than by Bazel.
If a project is green with a Bazel release but red with Bazel nightly, some Bazel commit broke it, and culprit finder can help!
Create a "New Build" in the Culprit Finder project with the following environment variables:
- PROJECT_NAME (The project name must exist in DOWNSTREAM_PROJECTS in bazelci.py)
- TASK_NAME (The task name must exist in the project's config file, e.g. macos_latest). For the old config syntax, where the platform name is essentially the task name, you can set PLATFORM_NAME instead of TASK_NAME.
- GOOD_BAZEL_COMMIT (A full Bazel commit hash; Bazel built at this commit still works for this project)
- BAD_BAZEL_COMMIT (A full Bazel commit hash; Bazel built at this commit fails with this project)
- (Optional) NEEDS_CLEAN (Set NEEDS_CLEAN to run `bazel clean --expunge` before each build; this helps reduce flakiness)
- (Optional) REPEAT_TIMES (Set REPEAT_TIMES to run the build multiple times to detect flaky build failures; if at least one run fails, the commit is considered bad)
For example:

```
PROJECT_NAME=rules_go
PLATFORM_NAME=ubuntu1404
GOOD_BAZEL_COMMIT=b6ea3b6caa7f379778e74da33d1bd0ff6477f963
BAD_BAZEL_COMMIT=91eb3d207714af0ab1e5812252a0f10f40d6e4a8
```
Note: The Bazel commits can only be set to commits after 63453bdbc6b05bd201375ee9e25b35010ae88aab. Culprit Finder needs to download Bazel at a specific commit, and we did not prebuild Bazel binaries before this commit.
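Under the hood this is an ordinary bisection over the commit range between GOOD_BAZEL_COMMIT and BAD_BAZEL_COMMIT. A minimal Python sketch of that logic, including how REPEAT_TIMES treats a single failure as a bad commit (the function names are illustrative, not taken from bazelci.py):

```python
def is_bad(build_passes, commit, repeat_times=1):
    """A commit counts as bad if at least one of the repeated builds fails."""
    return any(not build_passes(commit) for _ in range(repeat_times))

def find_culprit(commits, build_passes, repeat_times=1):
    """Bisect `commits` (ordered oldest to newest, with a known-good commit
    at the start and a known-bad commit at the end) and return the first
    bad commit."""
    lo, hi = 0, len(commits) - 1  # invariant: lo is good, hi is bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(build_passes, commits[mid], repeat_times):
            hi = mid
        else:
            lo = mid
    return commits[hi]
```

With n commits in the range this needs only about log2(n) builds, which is why providing a tight GOOD/BAD range speeds up the search.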
Configuring a Pipeline
Each pipeline is configured via a YAML file. This file either lives in `$PROJECT_DIR/.bazelci/presubmit.yml` (for presubmits) or in an arbitrary location whose path or URL is passed to the CI script (as configured in the Buildkite settings of the respective pipeline). Projects should store the postsubmit configuration in their own repository, but we keep some configurations for downstream projects in https://github.com/bazelbuild/continuous-integration/tree/master/buildkite/pipelines.
The most important piece of the configuration file is the `tasks` dictionary. Each task has a unique key, a platform, and usually some build and/or test targets:
```yaml
---
tasks:
  ubuntu_build_only:
    platform: ubuntu1604
    build_targets:
    - "..."
  windows:
    platform: windows
    build_targets:
    - "..."
    test_targets:
    - "..."
```
If there is exactly one task per platform, you can omit the `platform` field and use its value as the task ID instead. The following code snippet is equivalent to the previous one:
```yaml
---
tasks:
  ubuntu1604:
    build_targets:
    - "..."
  windows:
    build_targets:
    - "..."
    test_targets:
    - "..."
```
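Conceptually, the CI script simply falls back to the task ID when no explicit platform is given. A tiny illustrative sketch (the function name is hypothetical, not the real bazelci.py API):

```python
def get_platform_for_task(task_id, task_config):
    """If the task config has no explicit 'platform' field, the task ID
    itself is interpreted as the platform name."""
    return task_config.get("platform", task_id)

# An explicit platform wins; otherwise the key doubles as the platform.
print(get_platform_for_task("ubuntu1604", {"build_targets": ["..."]}))
```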
Setting Environment Variables
You can set environment variables for each individual task via the `environment` field:
```yaml
---
tasks:
  ubuntu1804:
    environment:
      CC: clang
    build_targets:
    - "..."
```
Running Commands, Shell Scripts or Binary Targets
The presubmit configuration allows you to specify a list of shell commands that are executed at the beginning of every job. Simply add the `batch_commands` (Windows) or `shell_commands` (all other platforms) field.
You can even run executable targets via the `run_targets` field.
The following example demonstrates all of these features:
```yaml
---
tasks:
  ubuntu1804:
    shell_commands:
    - rm -f obsolete_file
    run_targets:
    - "//whatever"
    build_targets:
    - "..."
  windows:
    batch_commands:
    - powershell -Command "..."
    build_targets:
    - "..."
```
Using Specific Build & Test Flags
The `build_flags` and `test_flags` fields contain lists of flags that should be used when building or testing, respectively:
```yaml
---
tasks:
  ubuntu1404:
    build_flags:
    - "--define=ij_product=clion-latest"
    build_targets:
    - "..."
    test_flags:
    - "--define=ij_product=clion-latest"
    test_targets:
    - ":clwb_tests"
```
Specifying a Display Name
Each task may have an optional display name that can include emojis. This feature is especially useful if you have several tasks that run on the same platform but use different Bazel binaries. Simply set the `name` field:
```yaml
---
tasks:
  windows:
    name: "some :emoji:"
    build_targets:
    - "..."
```
Most existing configurations use the legacy format with a `platforms` dictionary:
```yaml
---
platforms:
  ubuntu1404:
    build_targets:
    - "..."
    test_targets:
    - "..."
```
The new format expects a `tasks` dictionary instead:
```yaml
---
tasks:
  arbitrary_id:
    platform: ubuntu1404
    build_targets:
    - "..."
    test_targets:
    - "..."
```
In this case we can omit the `platform` field since there is a 1:1 mapping between tasks and platforms. Consequently, the format looks almost identical to the old one:
```yaml
---
tasks:
  ubuntu1404:
    build_targets:
    - "..."
    test_targets:
    - "..."
```
The CI script still supports the legacy format, too.
Using a specific version of Bazel
The CI uses Bazelisk to also support older versions of Bazel. You can specify a Bazel version for each pipeline (or even for individual platforms) in the pipeline YAML configuration:
```yaml
---
bazel: 0.20.0
tasks:
  windows:
    build_targets:
    - "..."
  macos:
    build_targets:
    - "..."
  ubuntu1404:
    bazel: 0.18.0
    build_targets:
    - "..."
[...]
```
In this example, the jobs on Windows and macOS would use 0.20.0, whereas the job on Ubuntu would run 0.18.0.
The CI supports several magic version values such as `latest`. Please see the Bazelisk documentation for more details.
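The task-level `bazel` field overrides the pipeline-level one. A hedged sketch of this resolution order (not the actual implementation; the `latest` fallback when neither field is set is an assumption):

```python
def effective_bazel_version(config, task_id, default="latest"):
    """Task-level 'bazel' wins over the pipeline-level one; the 'latest'
    fallback when neither is set is an assumption for this sketch."""
    task = config.get("tasks", {}).get(task_id, {})
    return task.get("bazel") or config.get("bazel") or default

# Mirrors the example configuration above.
config = {
    "bazel": "0.20.0",
    "tasks": {
        "windows": {},
        "ubuntu1404": {"bazel": "0.18.0"},
    },
}
```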
macOS: Using a specific version of Xcode
We upgrade the CI machines to the latest version of Xcode shortly after it is released, and this version is then used as the default Xcode version. If required, you can specify a fixed Xcode version to test against in your pipeline config.
Warning: We might have to run jobs that specify an explicit Xcode version on separate, slower machines, so we advise you not to use this feature unless necessary.
The general policy is to not specify a fixed Xcode version number, so that we can update the default version more easily and don't have to update every single CI configuration file out there.
However, if you know that you need to test against multiple versions of Xcode or that newer versions frequently break you, you can use this feature.
```yaml
tasks:
  # Test against the latest released Xcode version.
  macos:
    build_targets:
    - "..."
  # Ensure that we're still supporting Xcode 10.1.
  macos_xcode_10_1:
    platform: macos
    xcode_version: "10.1"
    build_targets:
    - "..."
```
Take care to quote the version number, otherwise YAML will interpret it as a floating point number.
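The reason is that an unquoted scalar such as 10.1 is parsed as a YAML float rather than a string, which can silently change the value. Plain Python floats show the same effect:

```python
# An unquoted Xcode version in YAML is read as a number, not a string:
# trailing zeros vanish, so "10.10" becomes indistinguishable from "10.1".
assert float("10.10") == float("10.1")
assert str(float("10.10")) == "10.1"

# Quoting the version keeps the exact text intact.
assert "10.10" != "10.1"
```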
Running Buildifier on CI
For each pipeline you can enable Buildifier to check all `WORKSPACE`, `BUILD`, `BUILD.bazel` and `.bzl` files for lint warnings and formatting violations. Simply add the following code to the top of the particular pipeline configuration:
```yaml
---
buildifier: latest
[...]
```
As a consequence, every future build for this pipeline will contain an additional "Buildifier" step that runs the latest version of Buildifier both in "lint" and "check" mode. Alternatively you can specify a particular Buildifier version such as "0.20.0".
The `buildifier` field also accepts a dictionary, which additionally lets you select the lint warnings to check via the `warnings` field:

```yaml
---
buildifier:
  version: latest
  warnings: "positional-args,duplicated-name"
[...]
```
Using multiple Workspaces in a single Pipeline
Some projects may contain one or more `WORKSPACE` files in subdirectories, in addition to their top-level `WORKSPACE` file. All of these workspaces can be tested in a single pipeline by using the `working_directory` task property.
Consider the configuration for a project that contains a second `WORKSPACE` file in the `examples_dir` directory:
```yaml
---
tasks:
  production_code:
    name: "My Project"
    platform: ubuntu1804
    test_targets:
    - //...
  examples:
    name: Examples
    platform: ubuntu1804
    working_directory: examples_dir
    test_targets:
    - //...
```
Validating changes to pipeline configuration files
You can set the top-level `validate_config` option to ensure that changes to pipeline configuration files in the `.bazelci` directory will be validated.
With this option, every build for a commit that touches a configuration file will contain an additional validation step for each modified configuration file.
```yaml
---
validate_config: 1
tasks:
  macos:
    build_targets:
    - "..."
```
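Conceptually, the validation step only needs to pick out the modified files under `.bazelci`. A minimal sketch of that selection (the function name and the `*.yml` pattern are assumptions, not the real logic from the CI script):

```python
import fnmatch

def config_files_to_validate(changed_files):
    """Return the modified pipeline configs under .bazelci/; each of these
    gets its own validation step in the build. The '*.yml' pattern is an
    assumption for this sketch."""
    return [f for f in changed_files
            if fnmatch.fnmatch(f, ".bazelci/*.yml")]
```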
Exporting JSON profiles of builds and tests
Bazel's JSON Profile is a useful tool to investigate the performance of Bazel. You can configure your pipeline to export these JSON profiles on builds and tests using the `include_json_profile` task property:
```yaml
---
tasks:
  ubuntu1604:
    include_json_profile:
    - build
    - test
    build_targets:
    - "..."
    test_targets:
    - "..."
```
When `include_json_profile` is specified with `build`, builds are carried out with the extra JSON profile flags; similarly for `test`. Other values are ignored.
The exported JSON profiles are available as artifacts after each run.
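The behavior above amounts to a small filter over the configured values. A sketch of how the extra flags might be derived per phase (the exact flag names are an assumption and not taken from bazelci.py; `--generate_json_trace_profile` exists in recent Bazel versions, but check your version's documentation):

```python
VALID_PHASES = {"build", "test"}

def json_profile_flags(include_json_profile, phase, out="profile.json"):
    """Return extra Bazel flags for a phase if JSON profiling was requested.
    Values other than 'build' and 'test' are ignored, matching the text.
    The specific flags here are an assumption, not the real implementation."""
    if phase in VALID_PHASES and phase in include_json_profile:
        return ["--generate_json_trace_profile", "--profile=" + out]
    return []
```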