
How to build deckhouse images

Denys Romanenko edited this page Feb 27, 2024 · 3 revisions

Distroless tips

When building DKP components, using distroless base images is the priority. In addition, every component must be buildable without internet access. Let's look at examples of how to achieve this.

Building base DEV-images

The main task is to prepare build images with all the necessary dev packages preinstalled. A dedicated utility automates this process: https://github.com/deckhouse/deckhouse/blob/main/tools/dev_images/create_dev_images.sh.

When create_dev_images.sh runs, it automatically collects the BASE_ images from image_versions.yml and builds the corresponding DEV image using a Dockerfile from the Dockerfiles folder. After the build, the resulting DEV image is pushed to the Deckhouse registry, and the script prints the image name and tag to the console. You should then manually add them to image_versions.yml.
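
The manual follow-up step can be sketched as follows. Both the variable name and the image reference below are made up for illustration; use the actual name and tag the script prints:

```shell
# Hypothetical follow-up to create_dev_images.sh: the script prints the pushed
# image name and tag, which you then record in image_versions.yml by hand.
# BASE_ALT_DEV and the registry/digest here are placeholders, not real values.
cd "$(mktemp -d)"
echo 'BASE_ALT_DEV: "registry.example.com/base_images/dev-alt:p10@sha256:0000"' >> image_versions.yml
tail -n 1 image_versions.yml
```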

In general, a dev image containing everything you need can be built for every base image. However, in a few places there are conflicts between mutually exclusive versions of packages available in the distribution. To resolve such a conflict, the required packages are downloaded into the dev image but not installed; they are installed from the downloaded copy when the artifact is built. This method does not require internet access. Example:

# Build luarocks assets
FROM $BASE_ALT_DEV as luarocks-builder
ARG SOURCE_REPO
ENV SOURCE_REPO=${SOURCE_REPO}
RUN apt-get install -y lua5.1-devel \
   && git clone --branch 0.4.1 ${SOURCE_REPO}/starwing/lua-protobuf \
   && cd lua-protobuf/ \
   && luarocks-5.1 make rockspecs/lua-protobuf-scm-1.rockspec
RUN cd / && \
   git clone --branch 7-3 ${SOURCE_REPO}/luarocks-sorces/lua-iconv \
   && cd lua-iconv/ \
   && luarocks-5.1 install lua-iconv-7-3.src.rock

Artifacts building

The main requirement for building artifacts is that they can be built without internet access. The exact process varies depending on what is being built.

To build artifacts, use the special dev images, which contain all the build requirements.

Source code storage

All repositories needed for the build are forked to fox.flant.com/deckhouse/3p. The last part of the path mirrors the path on GitHub. For example, github.com/containerd/containerd -> fox.flant.com/deckhouse/3p/containerd/containerd.

The base path to the repositories is stored in the CI variable {{ .SOURCE_REPO }}. ALL source code must be taken from this path, without exception.
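
The mapping from an upstream GitHub path to its mirror is purely mechanical, as this small shell sketch illustrates (the SOURCE_REPO value is hardcoded here only for demonstration):

```shell
# Map an upstream GitHub repository to its fork under SOURCE_REPO:
# the path after github.com/ is reused verbatim.
SOURCE_REPO="https://fox.flant.com/deckhouse/3p"
upstream="github.com/containerd/containerd"
mirror="${SOURCE_REPO}/${upstream#github.com/}"
echo "$mirror"   # https://fox.flant.com/deckhouse/3p/containerd/containerd
```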

Go applications building

To build a Go application without internet access, you need to set up a Go module proxy. How to set it up is described here. The proxy address must be specified in the CI variable {{ .GOPROXY }}. Also, to speed up the build, connect the local cache:

- fromPath: ~/go-pkg-cache
  to: /go/pkg

Note that GOPROXY cannot resolve dependencies when there is no internet access. Therefore, when libraries need to be updated during the build, you should follow the update instructions yourself on the code base you want to build, and turn the resulting diffs of the go.mod and go.sum files into a patch that is included in the build.

Example:

go get -u google.golang.org/grpc@v1.57.1
git diff

Put the resulting patch into patches/go-mod.patch and use it in the build. An example of a build that follows the rules above:

artifact: {{ .ModuleName }}/coredns-artifact
# use dev-image
from: {{ .Images.BASE_GOLANG_21_ALPINE_DEV }}
git:
- add: /{{ $.ModulePath }}/modules/042-{{ $.ModuleName }}/images/{{ $.ImageName }}/patches
  to: /patches
  stageDependencies:
    install:
      - '**/*'
mount:
# connect to local cache
  - fromPath: ~/go-pkg-cache
    to: /go/pkg
shell:
  install:
    - mkdir -p /src
    - cd /src
# clone source code from SOURCE_REPO
    - git clone --depth 1 -b v1.11.1 {{ $.SOURCE_REPO }}/coredns/coredns.git .
    - find /patches -name '*.patch' | xargs git apply --verbose
# export Go build variables. Set up GOPROXY
    - export GO_VERSION=${GOLANG_VERSION} GOPROXY={{ $.GOPROXY }} GOOS=linux GOARCH=amd64 CGO_ENABLED=0
    - go build -ldflags='-extldflags "-static" -s -w -X github.com/coredns/coredns/coremain.GitCommit=v1.11.1' -o /coredns
# use deckhouse:deckhouse user as binary owner
    - chown 64535:64535 /coredns
    - chmod 0700 /coredns
---
image: {{ .ModuleName }}/{{ .ImageName }}
fromImage: common/distroless
import:
- artifact: {{ .ModuleName }}/coredns-artifact
  add: /coredns
  to: /coredns
  before: setup
docker:
  ENTRYPOINT: ["/coredns"]
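
The record-then-apply patch round trip used above can be exercised locally. The following sketch uses a dummy go.mod in a throwaway repository; it is an illustration of the mechanism, not the real coredns sources:

```shell
# Local demonstration of the go.mod patch round trip: record a diff, restore
# the clean tree, then re-apply the patch the same way the install stage does.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
printf 'module example\n\ngo 1.21\n' > go.mod
git add go.mod
git -c user.email=ci@example.com -c user.name=ci commit -qm init
printf 'require google.golang.org/grpc v1.57.1\n' >> go.mod
git diff > go-mod.patch            # this is the patch shipped with the build
git checkout -- go.mod             # back to the pristine source
git apply --verbose go-mod.patch   # same command pattern as in the werf stage
grep grpc go.mod
```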

Python applications building

To install Python modules, you need to prepare wheels and put them in the {{ .SOURCE_REPO }}/python-modules/wheels repository. That repository contains instructions on how to prepare the wheels. Use the prepared wheels in the build:

image: {{ $.ModuleName }}/{{ $.ImageName }}
fromImage: common/shell-operator
import:
- artifact: tini-artifact
  add: /tini/tini-static
  to: /sbin/tini
  before: install
- artifact: {{ $.ModuleName }}/{{ $.ImageName }}-binary-artifact
  add: /usr/bin
  to: /usr/bin
  before: install
  includePaths:
  - python3
  - python3.9
- artifact: {{ $.ModuleName }}/{{ $.ImageName }}-binary-artifact
  add: /usr/lib/python3
  to: /usr/lib/python3
  before: install
- artifact: {{ $.ModuleName }}/{{ $.ImageName }}-binary-artifact
  add: /usr/lib64/python3
  to: /usr/lib64/python3
  before: install
- artifact: {{ $.ModuleName }}/{{ $.ImageName }}-binary-artifact
  add: /usr/lib64/python3.9
  to: /usr/lib64/python3.9
  before: install
- artifact: {{ $.ModuleName }}/{{ $.ImageName }}-binary-artifact
  add: /usr/local/lib/python3
  to: /usr/local/lib/python3
  before: install
- artifact: {{ $.ModuleName }}/{{ $.ImageName }}-binary-artifact
  add: /usr/local/lib64/python3
  to: /usr/local/lib64/python3
  before: install
- artifact: {{ $.ModuleName }}/falco-artifact
  add: /falco-package
  to: /
  includePaths:
  - usr/bin/
  - usr/share/
  before: install
git:
- add: /{{ $.ModulePath }}modules/650-{{ $.ModuleName }}/images/{{ $.ImageName }}/hooks
  to: /hooks
  stageDependencies:
    install:
    - '**/*'
docker:
  ENV:
    SHELL_OPERATOR_HOOKS_DIR: "/hooks"
    LOG_TYPE: json
    PYTHONPATH: "/hooks"
  ENTRYPOINT: ["tini", "--", "/shell-operator"]
  CMD: ["start"]
---
artifact: {{ $.ModuleName }}/{{ $.ImageName }}-binary-artifact
from: {{ $.Images.BASE_ALT_DEV }}
git:
- add: /{{ $.ModulePath }}modules/650-{{ $.ModuleName }}/images/{{ $.ImageName }}/requirements.txt
  to: /requirements.txt
  stageDependencies:
    install:
    - '**/*'
shell:
  install:
  - export SOURCE_REPO={{ .SOURCE_REPO }}
  - git clone --depth 1 {{ $.SOURCE_REPO }}/python-modules/wheels /wheels
  - pip3 install -f file:///wheels --no-index -r /requirements.txt
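
The general shape of wheel preparation can be sketched locally. The tiny package below stands in for real dependencies, and the sketch assumes pip and setuptools are already installed; the authoritative instructions live in the wheels repository itself:

```shell
# Build a wheel for a tiny local package into wheels/, which can then be
# consumed offline via `pip3 install -f file://... --no-index`, as in the
# werf stage above. The demo package exists only for illustration.
set -e
work=$(mktemp -d)
mkdir -p "$work/demo"
cat > "$work/demo/setup.py" <<'EOF'
from setuptools import setup
setup(name="demo", version="0.1", py_modules=["demo"])
EOF
echo 'VALUE = 42' > "$work/demo/demo.py"
pip3 wheel --no-build-isolation --no-deps -w "$work/wheels" "$work/demo" >/dev/null
ls "$work/wheels"
```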

Rust applications building

Rust has a built-in vendoring system, so to build without internet access you need to prepare a vendor folder beforehand and put it in {{ .SOURCE_REPO }} (there is an example repository for the log-shipper build). Since the vendored dependencies depend on the software version, the prepared vendor folder is put into the branch of the repository that matches that version. The vendor folder is then added during the build:

artifact: {{ $.ModuleName }}/{{ $.ImageName }}-artifact
from: {{ $.Images.BASE_ALT_DEV }}
git:
- add: /{{ $.ModulePath }}modules/460-{{ $.ModuleName }}/images/{{ $.ImageName }}/patches
  to: /patches
  stageDependencies:
    install:
    - '**/*'
shell:
  install:
  - source "$HOME/.cargo/env"
  # Install librdkafka-dev >=2.0 because bundled version (1.9.2) has bugs with CA certificates location.
  # https://github.com/confluentinc/librdkafka/commit/f8830a28652532009e3f16854cb9d5004d9de06b
  - git clone --depth 1 --branch v2.0.2 {{ $.SOURCE_REPO }}/confluentinc/librdkafka.git /librdkafka
  - cd /librdkafka
  - ./configure
  - make
  - make install
  - export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig/"
  - cd /
  - git clone --depth 1 --branch v0.31.0 {{ $.SOURCE_REPO }}/vectordotdev/vector.git
  - cd /vector
  - git clone --depth 1 --branch v0.31.0 {{ $.SOURCE_REPO }}/vectordotdev/vector-deps.git /vector/vendor
  - find /patches -name '*.patch' | xargs git apply --verbose
  - |
    cat <<EOF >> .cargo/config.toml
    [source.crates-io]
    replace-with = "vendored-sources"
    [source."git+https://github.com/Azure/azure-sdk-for-rust.git?rev=b4544d4920fa3064eb921340054cd9cc130b7664"]
    git = "https://github.com/Azure/azure-sdk-for-rust.git"
    rev = "b4544d4920fa3064eb921340054cd9cc130b7664"
    replace-with = "vendored-sources"
    [source."git+https://github.com/MSxDOS/ntapi.git?rev=24fc1e47677fc9f6e38e5f154e6011dc9b270da6"]
    git = "https://github.com/MSxDOS/ntapi.git"
    rev = "24fc1e47677fc9f6e38e5f154e6011dc9b270da6"
    replace-with = "vendored-sources"
    [source."git+https://github.com/tokio-rs/tracing?rev=e0642d949891546a3bb7e47080365ee7274f05cd"]
    git = "https://github.com/tokio-rs/tracing"
    rev = "e0642d949891546a3bb7e47080365ee7274f05cd"
    replace-with = "vendored-sources"
    [source."git+https://github.com/vectordotdev/aws-sdk-rust?rev=3d6aefb7fcfced5fc2a7e761a87e4ddbda1ee670"]
    git = "https://github.com/vectordotdev/aws-sdk-rust"
    rev = "3d6aefb7fcfced5fc2a7e761a87e4ddbda1ee670"
    replace-with = "vendored-sources"
    [source."git+https://github.com/vectordotdev/chrono.git?tag=v0.4.26-no-default-time-1"]
    git = "https://github.com/vectordotdev/chrono.git"
    tag = "v0.4.26-no-default-time-1"
    replace-with = "vendored-sources"
    [source."git+https://github.com/vectordotdev/heim.git?branch=update-nix"]
    git = "https://github.com/vectordotdev/heim.git"
    branch = "update-nix"
    replace-with = "vendored-sources"
    [source."git+https://github.com/vectordotdev/nix.git?branch=memfd/gnu/musl"]
    git = "https://github.com/vectordotdev/nix.git"
    branch = "memfd/gnu/musl"
    replace-with = "vendored-sources"
    [source."git+https://github.com/vectordotdev/tokio?branch=tokio-util-0.7.4-framed-read-continue-on-error"]
    git = "https://github.com/vectordotdev/tokio"
    branch = "tokio-util-0.7.4-framed-read-continue-on-error"
    replace-with = "vendored-sources"
    [source.vendored-sources]
    directory = "vendor"
    EOF
  - |
    cargo build \
    --release \
    -j $(($(nproc) /2)) \
    --offline \
    --no-default-features \
    --features "api,api-client,enrichment-tables,sources-host_metrics,sources-internal_metrics,sources-file,sources-kubernetes_logs,transforms,sinks-prometheus,sinks-blackhole,sinks-elasticsearch,sinks-file,sinks-loki,sinks-socket,sinks-console,sinks-vector,sinks-kafka,sinks-splunk_hec,unix,rdkafka?/dynamic-linking,rdkafka?/gssapi-vendored"
  - strip target/release/vector
  - cp target/release/vector /usr/bin/vector
  - export LD_LIBRARY_PATH="/usr/local/lib"
  - /binary_replace.sh -i /usr/bin/vector -o /relocate
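
The config.toml appended above uses Cargo's source-replacement mechanism. For a project without git dependencies, the setup produced by `cargo vendor` reduces to this minimal fragment:

```toml
# .cargo/config.toml - redirect all crates.io downloads to the local vendor/ dir
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

Each `[source."git+…"]` block in the build above does the same thing for one git dependency, pinned to a specific rev, branch, or tag.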

C and other language applications building

When building for other languages, the general approach remains the same: we prepare a vendor directory, put it into {{ .SOURCE_REPO }}, and use it in the build. There are no general instructions here; it all depends on how the software build is organized. Here is an example of such vendoring.

Build example:

{{- $falcoVersion := "0.35.1" }}
---
image: {{ $.ModuleName }}/{{ $.ImageName }}
from: {{ $.Images.BASE_ALT }}
import:
- artifact: {{ $.ModuleName }}/{{ $.ImageName }}-artifact
  add: /falco-package
  to: /
  includePaths:
  - usr/bin/
  - usr/share/
  - etc/
  before: install
shell:
  beforeInstall:
  - rm -df /lib/modules
  - ln -s $HOST_ROOT/lib/modules /lib/modules
  install:
  - "sed -i 's/time_format_iso_8601: false/time_format_iso_8601: true/' /etc/falco/falco.yaml"
docker:
  CMD: ["/usr/bin/falco"]
---
artifact: {{ $.ModuleName }}/{{ $.ImageName }}-artifact
from: {{ $.Images.BASE_ALT_DEV }}
shell:
  install:
  - git clone --branch {{ $falcoVersion }} --depth 1 {{ .SOURCE_REPO }}/falcosecurity/falco.git
  - mkdir -p /falco/build
  - cd /falco/build
  - git clone --branch {{ $falcoVersion }} --depth 1 {{ .SOURCE_REPO }}/falcosecurity/falco-deps.git .
  - tar -zxvf grpc.tar.gz
  - rm -f /usr/bin/clang
  - ln -s /usr/bin/clang-15 /usr/bin/clang
  - cmake -DCMAKE_BUILD_TYPE=release -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_DRIVER=OFF -DBUILD_BPF=OFF -DBUILD_FALCO_MODERN_BPF=ON -DBUILD_WARNINGS_AS_ERRORS=OFF -DFALCO_VERSION="{{ $falcoVersion }}" -DUSE_BUNDLED_DEPS=ON /falco
  - sed -i "s/DEB;RPM;TGZ/TGZ/" ./CPackConfig.cmake
  - make package -j4
  - mkdir -p /falco-package
  - tar -zxvf falco-{{ $falcoVersion }}-x86_64.tar.gz --strip-components 1 -C /falco-package

Final image building

ALL final images must be based either on the common/distroless image or on the BASE_ALT image (the reason is explained below). The pre-built binary from the artifact is copied into this image. Example:

image: {{ .ModuleName }}/{{ .ImageName }}
fromImage: common/distroless
import:
- artifact: {{ $.ModuleName }}/{{ $.ImageName }}-artifact
  add: /kube-rbac-proxy
  to: /kube-rbac-proxy
  before: setup
docker:
  EXPOSE: "8080"
---
artifact: {{ .ModuleName }}/{{ .ImageName }}-artifact
from: {{ $.Images.BASE_GOLANG_20_ALPINE_DEV }}
git:
- add: /{{ $.ModulePath }}modules/000-{{ $.ModuleName }}/images/{{ $.ImageName }}/patches
  to: /patches
  stageDependencies:
    install:
    - '**/*'
mount:
- fromPath: ~/go-pkg-cache
  to: /go/pkg
shell:
  beforeInstall:
  - git clone --depth 1 --branch v0.11.0 {{ .SOURCE_REPO }}/brancz/kube-rbac-proxy.git /src
  install:
  - cd /src
  - find /patches -name '*.patch' | xargs git apply --verbose
  - export GOPROXY={{ .GOPROXY }} GOOS=linux GOARCH=amd64 CGO_ENABLED=0
  - make build
  - cp /src/_output/kube-rbac-proxy-linux-$(go env GOARCH) /kube-rbac-proxy
  - chown 64535:64535 /kube-rbac-proxy
  - chmod 0755 /kube-rbac-proxy

If we need access to a file inside a container via kubectl exec, we need to add that file's directory to the PATH environment variable in the helm template that uses the image.

If the binary in the final image needs additional executable files, we take them from the BASE_ALT_DEV image and copy them using a special script, /relocate.sh, which is included in the BASE_ALT_DEV image. This script copies the specified binaries together with the libraries they are linked against. Example:

image: {{ $.ModuleName }}/{{ $.ImageName }}
fromImage: common/distroless
import:
- artifact: {{ $.ModuleName }}/{{ $.ImageName }}-artifact
  add: /go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver/bin/aws-ebs-csi-driver
  to: /bin/aws-ebs-csi-driver
  before: setup
- artifact: {{ $.ModuleName }}/{{ $.ImageName }}-binaries-artifact
  add: /relocate
  to: /
  before: install
  includePaths:
  - '**/*'
docker:
  ENTRYPOINT: ["/bin/aws-ebs-csi-driver"]
---
artifact: {{ $.ModuleName }}/{{ $.ImageName }}-artifact
from: {{ $.Images.BASE_GOLANG_20_ALPINE_DEV }}
shell:
  install:
  - export GO_VERSION=${GOLANG_VERSION}
  - export GOPROXY={{ $.GOPROXY }}
  - mkdir -p /go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver
  - git clone --depth 1 --branch v1.3.0 {{ $.SOURCE_REPO }}/kubernetes-sigs/aws-ebs-csi-driver.git /go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver
  - cd /go/src/github.com/kubernetes-sigs/aws-ebs-csi-driver
  - make linux/amd64
mount:
- fromPath: ~/go-pkg-cache
  to: /go/pkg
---
{{- $csiBinaries := "/bin/chmod /bin/mount /bin/mkdir /bin/rmdir /bin/umount /bin/findmnt /bin/lsblk /sbin/badblocks /sbin/blockdev /sbin/blk* /sbin/dumpe2fs /sbin/e2* /sbin/findfs /sbin/fsck* /sbin/fstrim /sbin/mke2fs /sbin/mkfs* /sbin/resize2fs /usr/sbin/parted /usr/sbin/xfs*" }}
---
artifact: {{ $.ModuleName }}/{{ $.ImageName }}-binaries-artifact
from: {{ $.Images.BASE_ALT_DEV }}
shell:
  setup:
  - /binary_replace.sh -i "{{ $csiBinaries }}" -o /relocate
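
The idea behind such a relocation script can be sketched in a few lines of shell: copy the binary itself plus every shared library `ldd` reports. This is a simplified stand-in for the real script, not its actual contents, and it assumes a glibc-based Linux system:

```shell
# Copy a binary and the shared libraries it is linked against into $out,
# preserving absolute paths - a simplified stand-in for the relocation script.
relocate() {
  local bin="$1" out="$2"
  install -D "$bin" "$out$bin"
  # ldd lines look like "libc.so.6 => /lib/.../libc.so.6 (0x...)";
  # grab every absolute path and copy it under $out.
  for lib in $(ldd "$bin" | grep -o '/[^ )]*'); do
    install -D "$lib" "$out$lib"
  done
}
relocate /bin/ls /tmp/relocate
ls /tmp/relocate/bin
```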

In rare cases (e.g. deckhouse-controller), when an image needs a full-featured environment with all the usual components (bash, grep, etc.), it is allowed to use BASE_ALT as the final image.

Binaries in the final image must run as the deckhouse:deckhouse user, except in special cases (for example, the ingress nginx controller).