BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.
- Automatic garbage collection
- Extendable frontend formats
- Concurrent dependency resolution
- Efficient instruction caching
- Build cache import/export
- Nested build job invocations
- Distributable workers
- Multiple output formats
- Pluggable architecture
- Execution without root privileges
Read the proposal from https://github.com/moby/moby/issues/32925
Introductory blog post: https://blog.mobyproject.org/introducing-buildkit-17e056cc5317
The following command builds the binaries and installs them:
$ make && sudo make install
You can also use make binaries-all to prepare all binary variants.
Starting the buildkitd daemon:
buildkitd --debug --root /var/lib/buildkit
The buildkitd daemon supports two worker backends: OCI (runc) and containerd.
By default, the OCI (runc) worker is used.
You can set --oci-worker=false --containerd-worker=true to use the containerd worker.
We are open to adding more backends.
BuildKit builds are based on a binary intermediate format called LLB that is used for defining the dependency graph for processes running as part of your build. tl;dr: LLB is to Dockerfile what LLVM IR is to C.
- Marshaled as Protobuf messages
- Concurrently executable
- Efficiently cacheable
- Vendor-neutral (i.e. non-Dockerfile languages can be easily implemented)
See solver/pb/ops.proto for the format definition.
Currently, the following high-level languages have been implemented for LLB:
- Dockerfile (See Exploring Dockerfiles)
- (open a PR to add your own language)
For understanding the basics of LLB, the examples/buildkit* directory contains scripts that define how to build different configurations of BuildKit itself and its dependencies using the client package. Running one of these scripts generates a protobuf definition of a build graph. Note that the script itself does not execute any steps of the build.
You can use buildctl debug dump-llb to see what data is in this definition. Add --dot to generate a dot layout.
go run examples/buildkit0/buildkit.go | buildctl debug dump-llb | jq .
To start building, use the buildctl build command. The example script accepts a --with-containerd flag to choose whether containerd binaries and support should be included in the end result as well.
go run examples/buildkit0/buildkit.go | buildctl build
buildctl build will show an interactive progress bar by default while the build job is running. It will also show you the path to the trace file that contains all information about the timing of the individual steps and logs.
Different versions of the example scripts show different ways of describing the build definition for this project to show the capabilities of the library. New versions have been added when new features have become available.
- ./examples/buildkit0 - uses only exec operations, defines a full stage per component.
- ./examples/buildkit1 - cloning git repositories has been separated for extra concurrency.
- ./examples/buildkit2 - uses git sources directly instead of running git clone, allowing better performance and much safer caching.
- ./examples/buildkit3 - allows using local source files for separate components, e.g. ./buildkit3 --runc=local | buildctl build --local runc-src=some/local/path
- ./examples/dockerfile2llb - can be used to convert a Dockerfile to LLB for debugging purposes.
- ./examples/gobuild - shows how to use nested invocation to generate LLB for Go package internal dependencies.
Frontends are components that run inside BuildKit and convert any build definition to LLB. There is a special frontend called gateway (gateway.v0) that allows using any image as a frontend.
During development, Dockerfile frontend (dockerfile.v0) is also part of the BuildKit repo. In the future, this will be moved out, and Dockerfiles can be built using an external image.
Building a Dockerfile with buildctl:
buildctl build --frontend=dockerfile.v0 --local context=. --local dockerfile=.
buildctl build --frontend=dockerfile.v0 --local context=. --local dockerfile=. --frontend-opt target=foo --frontend-opt build-arg:foo=bar
--local exposes local source files from the client to the builder. context and dockerfile are the names the Dockerfile frontend looks up for the build context and the Dockerfile location.
For people familiar with the docker build command, there is an example wrapper utility in ./examples/build-using-dockerfile that allows building Dockerfiles with BuildKit using a similar syntax:
go build ./examples/build-using-dockerfile && sudo install build-using-dockerfile /usr/local/bin
build-using-dockerfile -t myimage .
build-using-dockerfile -t mybuildkit -f ./hack/dockerfiles/test.Dockerfile .
# build-using-dockerfile will automatically load the resulting image to Docker
docker inspect myimage
During development, an external version of the Dockerfile frontend is pushed to https://hub.docker.com/r/tonistiigi/dockerfile that can be used with the gateway frontend. The source for the external frontend is currently located in
./frontend/dockerfile/cmd/dockerfile-frontend, but it will move out of this repository in the future (#163). For automatic builds from the master branch of this repository, the tonistiigi/dockerfile:master image can be used.
buildctl build --frontend=gateway.v0 --frontend-opt=source=tonistiigi/dockerfile --local context=. --local dockerfile=.
buildctl build --frontend gateway.v0 --frontend-opt=source=tonistiigi/dockerfile --frontend-opt=context=git://github.com/moby/moby --frontend-opt build-arg:APT_MIRROR=cdn-fastly.deb.debian.org
By default, the build result and intermediate cache will only remain internal to BuildKit. An exporter needs to be specified to retrieve the result.
Exporting resulting image to containerd
The containerd worker needs to be used:
buildctl build ... --exporter=image --exporter-opt name=docker.io/username/image
ctr --namespace=buildkit images ls
Push resulting image to registry
buildctl build ... --exporter=image --exporter-opt name=docker.io/username/image --exporter-opt push=true
If credentials are required, buildctl will attempt to read the Docker configuration file.
Exporting build result back to client
The local exporter will copy the files directly to the client. This is useful if BuildKit is being used for building something other than container images.
buildctl build ... --exporter=local --exporter-opt output=path/to/output-dir
Exporting built image to Docker
# exported tarball is also compatible with OCI spec
buildctl build ... --exporter=docker --exporter-opt name=myimage | docker load
Exporting OCI Image Format tarball to client
buildctl build ... --exporter=oci --exporter-opt output=path/to/output.tar
buildctl build ... --exporter=oci > output.tar
View build cache
buildctl du -v
Show enabled workers
buildctl debug workers -v
Running containerized buildkit
BuildKit can also be used by running the buildkitd daemon inside a Docker container and accessing it remotely. The client tool buildctl is also available for Mac and Windows.
To run daemon in a container:
docker run -d --privileged -p 1234:1234 tonistiigi/buildkit --addr tcp://0.0.0.0:1234
export BUILDKIT_HOST=tcp://0.0.0.0:1234
buildctl build --help
The tonistiigi/buildkit image can be built locally using the Dockerfile in this repository.
BuildKit supports opentracing for the buildkitd gRPC API and buildctl commands. To capture a trace to Jaeger, set the JAEGER_TRACE environment variable to the collection address.
docker run -d -p6831:6831/udp -p16686:16686 jaegertracing/all-in-one:latest
export JAEGER_TRACE=0.0.0.0:6831
# restart buildkitd and buildctl so they know JAEGER_TRACE
# any buildctl command should be traced to http://127.0.0.1:16686/
Supported runc version
During development, BuildKit is tested with the version of runc that is being used by the containerd repository. Please refer to runc.md for more information.
Running BuildKit without root privileges
Please refer to the documentation on rootless mode.
This runs all unit and integration tests in a containerized environment. Locally, every package can be tested separately with standard Go tools, but integration tests are skipped if the local user doesn't have enough permissions or if worker binaries are not installed.
# test a specific package only
make test TESTPKGS=./client
# run a specific test with all worker combinations
make test TESTPKGS=./client TESTFLAGS="--run /TestCallDiskUsage -v"
# run all integration tests with a specific worker
# supported workers: oci, oci-rootless, containerd, containerd-1.0
make test TESTPKGS=./client TESTFLAGS="--run //worker=containerd -v"
Updating vendored dependencies:
# update vendor.conf
make vendor
Validating your updates before submission: