GitHub Action to build and push container images with Kaniko.
This action runs Kaniko on Kubernetes, so it neither relies on Docker to build the action itself (as, e.g., aevea/action-kaniko does) nor to execute Kaniko (as, e.g., int128/kaniko-action does). Hence, it can be used with self-hosted runners that are themselves scheduled in container environments such as Kubernetes.
This action is also compatible with Docker's official actions such as docker/login-action or docker/metadata-action.
To use this action, first set up access to a Kubernetes cluster. A minimal example to build a Docker image is given below:

```yaml
jobs:
  build:
    steps:
      - uses: actions/checkout@v3
      - uses: azure/setup-kubectl@v3
      - uses: azure/k8s-set-context@v3
      - uses: heinrichreimer/kaniko-action@v1
```
This action requires access to a Kubernetes cluster and the `kubectl` executable. Set up `kubectl` in your workflow using the azure/setup-kubectl action:

```yaml
- uses: azure/setup-kubectl@v3
```
It is recommended to run the Kaniko pods in a dedicated namespace. We provide a Kubernetes YAML file that creates a namespace `kaniko`, a corresponding service account, and role bindings that allow the service account to create pods within that namespace. Apply it like this:

```shell
kubectl apply -f kaniko-setup.yml
```
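For orientation, the resources created by `kaniko-setup.yml` might look roughly like the following sketch. The names and the exact permissions are assumptions; refer to the actual file in this repository:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kaniko
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kaniko
  namespace: kaniko
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kaniko
  namespace: kaniko
rules:
  # Allow the service account to manage the build pods …
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list", "watch", "delete"]
  # … and to read their logs.
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kaniko
  namespace: kaniko
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kaniko
subjects:
  - kind: ServiceAccount
    name: kaniko
    namespace: kaniko
```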
Now, get your Kubernetes cluster URL and service account secret. Store the cluster URL in a GitHub secret called `KUBERNETES_URL`. Find your cluster URL by running:

```shell
kubectl config view --minify -o 'jsonpath={.clusters[0].cluster.server}' && echo
```

Store the Kubernetes secret in a GitHub secret called `KUBERNETES_SECRET`. To print your secret, run:

```shell
kubectl get secret kaniko -n kaniko -o yaml
```
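The printed secret is a service account token secret. Roughly, it looks like the following sketch, where the `data` values are truncated placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kaniko
  namespace: kaniko
  annotations:
    kubernetes.io/service-account.name: kaniko
type: kubernetes.io/service-account-token
data:
  # Base64-encoded cluster CA certificate and service account token (placeholders).
  ca.crt: LS0tLS1CRUdJTi...
  token: ZXlKaGJHY2lPaU...
```

Copy the entire YAML output into the GitHub secret.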
Then use the azure/k8s-set-context action to set up `kubectl` to authenticate with the service account you just created:

```yaml
- uses: azure/k8s-set-context@v3
  with:
    cluster-type: generic
    method: service-account
    k8s-url: '${{ secrets.KUBERNETES_URL }}'
    k8s-secret: '${{ secrets.KUBERNETES_SECRET }}'
```
Push the built image to a container registry:

```yaml
jobs:
  build:
    steps:
      - uses: actions/checkout@v3
      - uses: azure/setup-kubectl@v3
      - uses: azure/k8s-set-context@v3
        with:
          ...
      - uses: docker/login-action@v1
        with:
          registry: registry.example.com
          username: foo
          password: bar
      - uses: heinrichreimer/kaniko-action@v1
        with:
          push: true
          tags: registry.example.com/my-image
```
Include metadata about the repository the image is built from, using the docker/metadata-action:

```yaml
jobs:
  build:
    steps:
      - uses: actions/checkout@v3
      - uses: azure/setup-kubectl@v3
      - uses: azure/k8s-set-context@v3
        with:
          ...
      - uses: docker/metadata-action@v3
        id: metadata
        with:
          images: registry.example.com/my-image
      - uses: heinrichreimer/kaniko-action@v1
        with:
          tags: ${{ steps.metadata.outputs.tags }}
          labels: ${{ steps.metadata.outputs.labels }}
```
Kaniko supports layer caching with a remote repository such as GHCR or Amazon ECR. Refer to the Kaniko documentation for details. To enable caching, just set a cache repository:

```yaml
jobs:
  build:
    steps:
      - uses: actions/checkout@v3
      - uses: azure/setup-kubectl@v3
      - uses: azure/k8s-set-context@v3
        with:
          ...
      - uses: docker/login-action@v1
        with:
          registry: registry.example.com
          username: foo
          password: bar
      - uses: heinrichreimer/kaniko-action@v1
        with:
          cache: true
          cache-repository: registry.example.com/my-image/cache
```
This action outputs the digest of the built image. For example, `${{ steps.image.outputs.digest }}` would evaluate to something like `sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef` in the following example:

```yaml
jobs:
  build:
    steps:
      - uses: actions/checkout@v3
      - uses: azure/setup-kubectl@v3
      - uses: azure/k8s-set-context@v3
        with:
          ...
      - uses: heinrichreimer/kaniko-action@v1
        id: image
      - run: echo ${{ steps.image.outputs.digest }}
```
The digest can be used to construct an image URI if you want to deploy your image. For example, `ghcr.io/${{ github.repository }}@${{ steps.image.outputs.digest }}` would evaluate to something like `ghcr.io/username/repository@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef`.
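This URI construction can be sketched in plain shell, using hypothetical placeholder values standing in for the action's output and the GitHub context:

```shell
# Placeholder values (assumptions) standing in for
# ${{ steps.image.outputs.digest }} and ${{ github.repository }}:
digest="sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
repository="username/repository"

# Append the digest to the registry path to pin the image by content.
image="ghcr.io/${repository}@${digest}"
echo "${image}"
```

Deploying by digest rather than by tag ensures the exact image that was just built is the one that runs.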
This action runs the Kaniko executor via the `go run` command. The exact inputs and outputs are given below.
This action supports the inputs below. See also the flags of the Kaniko executor.

| Name | Description | Corresponding flag |
|---|---|---|
| `context` \* | Path to the build context. Defaults to the workspace | - |
| `file` \* | Path to the Dockerfile. Defaults to `Dockerfile`, which must be located in the context. If set, this action passes the relative path to Kaniko, same as the behavior of `docker build` | `--dockerfile` |
| `build-args` \* | List of build args | `--build-arg` |
| `labels` \* | List of metadata for an image | `--label` |
| `push` \* | Push an image to the registry. Defaults to `false` | `--no-push` |
| `tags` \* | List of tags | `--destination` |
| `target` \* | Target stage to build | `--target` |
| `cache` | Enable caching layers | `--cache` |
| `cache-repository` | Repository for storing cached layers | `--cache-repo` |
| `cache-ttl` | Cache timeout | `--cache-ttl` |
| `push-retry` | Number of retries for the push of an image | `--push-retry` |
| `registry-mirror` | Use registry mirror(s) | `--registry-mirror` |
| `verbosity` | Set the logging level | `--verbosity` |
| `kaniko-version` | Version of the Kaniko executor. Defaults to `1.19.2` | - |
| `kaniko-args` | Extra args to the Kaniko executor | - |

\* These inputs are compatible with docker/build-push-action.
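Since the starred inputs mirror docker/build-push-action, multi-value inputs such as `build-args` and `tags` can presumably be given as newline-separated lists, as in that action. The following step is an illustrative sketch under that assumption, with made-up values:

```yaml
- uses: heinrichreimer/kaniko-action@v1
  with:
    push: true
    build-args: |
      NODE_ENV=production
      VERSION=1.2.3
    tags: |
      registry.example.com/my-image:latest
      registry.example.com/my-image:v1.2.3
```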
| Name | Description | Example |
|---|---|---|
| `digest` | Image digest | `sha256:abcdef...` |
We can build a multi-architecture image, such as `amd64` and `arm64`, on self-hosted runners in GitHub Actions. For details, see @int128's fantastic docker-manifest-create-action. Here is an example to build and push a container image to the GitHub Container Registry:
```yaml
jobs:
  build:
    steps:
      - uses: actions/checkout@v3
      - uses: azure/setup-kubectl@v3
      - uses: azure/k8s-set-context@v3
        with:
          ...
      - uses: docker/metadata-action@v3
        id: metadata
        with:
          images: ghcr.io/${{ github.repository }}
      - uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: heinrichreimer/kaniko-action@v1
        with:
          push: true
          tags: ${{ steps.metadata.outputs.tags }}
          labels: ${{ steps.metadata.outputs.labels }}
          cache: true
          cache-repository: ghcr.io/${{ github.repository }}/cache
```
To build and push a container image to Amazon ECR, use:

```yaml
jobs:
  build:
    steps:
      - uses: actions/checkout@v3
      - uses: azure/setup-kubectl@v3
      - uses: azure/k8s-set-context@v3
        with:
          ...
      - uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::ACCOUNT:role/ROLE
      - uses: aws-actions/amazon-ecr-login@v1
        id: ecr
      - uses: docker/metadata-action@v4
        id: metadata
        with:
          images: ${{ steps.ecr.outputs.registry }}/${{ github.repository }}
      - uses: heinrichreimer/kaniko-action@v1
        with:
          push: true
          tags: ${{ steps.metadata.outputs.tags }}
          labels: ${{ steps.metadata.outputs.labels }}
          cache: true
          cache-repository: ${{ steps.ecr.outputs.registry }}/${{ github.repository }}/cache
```
Here is an example workflow to build and deploy an application:

```yaml
jobs:
  deploy:
    steps:
      - uses: actions/checkout@v3
      - uses: azure/setup-kubectl@v3
      - uses: azure/k8s-set-context@v3
        with:
          ...
      - uses: aws-actions/amazon-ecr-login@v1
        id: ecr
      - uses: docker/metadata-action@v4
        id: metadata
        with:
          images: ${{ steps.ecr.outputs.registry }}/${{ github.repository }}
      - uses: heinrichreimer/kaniko-action@v1
        id: build
        with:
          push: true
          tags: ${{ steps.metadata.outputs.tags }}
          labels: ${{ steps.metadata.outputs.labels }}
          cache: true
          cache-repository: ${{ steps.ecr.outputs.registry }}/${{ github.repository }}/cache
      - run: kustomize edit set image myapp=${{ steps.ecr.outputs.registry }}/${{ github.repository }}@${{ steps.build.outputs.digest }}
      - run: kustomize build | kubectl apply -f -
```
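For context, `kustomize edit set image` updates the `images` field of the `kustomization.yaml` in the working directory. A hypothetical manifest it might produce is sketched below; the name `myapp` matches the example above, while the registry, repository, and digest values are illustrative placeholders:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
images:
  # Rewrites every reference to the image "myapp" in the resources
  # to the freshly built image, pinned by digest.
  - name: myapp
    newName: 123456789012.dkr.ecr.us-east-1.amazonaws.com/username/repository
    digest: sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```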
To build this package and contribute to its development, you need to install Yarn. Install the package and test dependencies:

```shell
yarn install
```

Verify your changes against the test suite:

```shell
yarn format-check  # Code format
yarn lint          # Lint errors
yarn test          # Unit tests
```

Please also add tests for your newly developed code. This package can be built with:

```shell
yarn package
```
If you hit any problems using this package, please file an issue. I'm happy to help!
This repository is released under the Apache 2.0 license.