
kaniko should only be run inside of a container #1542

Open
valador opened this issue Jan 8, 2021 · 12 comments
Labels
area/cli (bugs related to kaniko CLI), kind/bug (Something isn't working), priority/p2 (High impact feature/bug. Will get a lot of users happy), work-around-available

Comments


valador commented Jan 8, 2021

Actual behavior
Trying to build the example fails with:
"kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue"
Tested with a git context as well; same error.
I run the example with k3s in an LXC container: unprivileged fails with this error, privileged works fine.

Expected behavior
The build succeeds and prints "hello".

To Reproduce
k3s kubernetes

Additional Information

  • Dockerfile
FROM ubuntu
ENTRYPOINT ["/bin/bash", "-c", "echo hello"]
  • Build Context
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dockerfile
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  hostPath:
    path: /home/dev/test/src # src contains only the Dockerfile
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dockerfile-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: local-storage
---
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args: ["--dockerfile=/workspace/Dockerfile",
            "--context=dir://workspace",
            "--no-push"] 
    volumeMounts:
      - name: dockerfile-storage
        mountPath: /workspace
  restartPolicy: Never
  volumes:
    - name: dockerfile-storage
      persistentVolumeClaim:
        claimName: dockerfile-claim

  • Kaniko Image (fully qualified with digest)
    gcr.io/kaniko-project/executor:latest
Description Yes/No
  • Please check if this is a new feature you are proposing
  • Please check if the build works in docker but not in kaniko
  • Please check if this error is seen when you use the --cache flag
  • Please check if your dockerfile is a multistage dockerfile
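
The error message quoted above also mentions a --force flag. As a riskier alternative workaround, it can be appended to the pod's args from the spec above (a sketch only; this disables kaniko's container check entirely, so use it only if you understand the consequences):

```yaml
    args: ["--dockerfile=/workspace/Dockerfile",
           "--context=dir://workspace",
           "--no-push",
           "--force"]   # skip kaniko's "am I in a container?" check
```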
@debdolph

Any progress? I'm running into this error after updating to Debian bullseye with Docker 20.10.5.

@qalinn

qalinn commented May 19, 2021

Any update on this?

@rawyler

rawyler commented Jun 3, 2021

I've encountered the same issue when running a kaniko job on a Docker-based gitlab-runner, but for me it was failing even with privileged = true.

I suspect I started seeing these after the change in c2a919a#diff-cb441f1bc0f59b82d3fec60db5a4e6924c446fce5d868e14f0e2ad4a88d987ed, which detects what containerized environment kaniko runs in.

So as a hack I've added environment = ["container=docker"] to my gitlab-runner config.toml, and I'm able to build Docker images with kaniko again.
This environment variable is read by the container detection code: https://github.com/GoogleContainerTools/kaniko/blame/57ea150cade8689a311a1db4bba23ee705b06728/vendor/github.com/genuinetools/bpfd/proc/proc.go#L138

anthr76 added a commit to anthr76/infra that referenced this issue Jun 11, 2021
GoogleContainerTools/kaniko#1542

Signed-off-by: anthr76 <hello@anthonyrabbito.com>
github-actions bot pushed a commit to veloren/veloren that referenced this issue Jul 19, 2021
"kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue" error
applied as described here GoogleContainerTools/kaniko#1542

it's also done in veloren-docker-ci: https://gitlab.com/veloren/veloren-docker-ci/-/commit/c8aa8ac857292cf28e37dfd009c96c43ab02d206?merge_request_iid=50

We didn't have that problem in the veloren repo until now.
@davinkevin

I have the same problem. I can't identify precisely what changed (Kubernetes version, GitLab runner…), but since I fully reinstalled k3s, I've been hitting the same error.

vliaskov added a commit to vliaskov/examples that referenced this issue Sep 29, 2021
…ble.

This is a hack/workaround for the currently unsolved kaniko issue of refusing
to build images on non-container/bare-metal environments, see:
GoogleContainerTools/kaniko#1542

Workaround for fuseml/fuseml#252
@nickbroon

Might this be related to #1592 ?

@JonasGroeger
Contributor

We have 2 runners (Debian 10 and Debian 11) with otherwise identical configuration. The Debian 10 runner worked fine but the Debian 11 runner had this issue. The cgroups version was identical (v2).

The workaround of @rawyler worked. Here's a little more context:

$ cat /etc/gitlab-runner/config.toml
[[runners]]
  url = "..."
  environment = ["container=docker"]

@Floppy012

We have 2 runners (Debian 10 and Debian 11) with otherwise identical configuration.

I have exactly the same issue (also using Gitlab Runner). I just want to provide a little more information:

Node A
root@***:~# uname -a
Linux *** 4.19.0-18-amd64 #1 SMP Debian 4.19.208-1 (2021-09-29) x86_64 GNU/Linux

root@***:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster

root@***:~# docker -v
Docker version 20.10.11, build dea9396

root@***:~# grep cgroup /proc/filesystems 
nodev   cgroup
nodev   cgroup2

Node B
root@***:~# uname -a
Linux *** 5.10.0-11-amd64 #1 SMP Debian 5.10.92-1 (2022-01-18) x86_64 GNU/Linux

root@***:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 11 (bullseye)
Release:        11
Codename:       bullseye

root@***:~# docker -v
Docker version 20.10.12, build e91ed57

root@***:~# grep cgroup /proc/filesystems 
nodev   cgroup
nodev   cgroup2

@imjasonh
Collaborator

imjasonh commented Feb 2, 2022

Might this be related to #1592 ?

This sounds like a likely culprit. If anybody has experience or interest or time to investigate, it would be much appreciated.

@int128

int128 commented Mar 13, 2022

I got the same error when trying docker run on GitHub Actions.

Here is my script and result.

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: |
          docker run --rm \
            -v $(pwd)/helloworld:/workspace:ro \
            gcr.io/kaniko-project/executor:v1.8.0 \
            --context dir:///workspace/ \
            --no-push \
            --verbosity debug
$ docker run --rm \
    -v $(pwd)/helloworld:/workspace:ro \
    gcr.io/kaniko-project/executor:v1.8.0 \
    --context dir:///workspace/ \
    --no-push \
    --verbosity debug
Unable to find image 'gcr.io/kaniko-project/executor:v1.8.0' locally
v1.8.0: Pulling from kaniko-project/executor
...
Digest: sha256:ff98af876169a488df4d70418f2a60e68f9e304b2e68d5d3db4c59e7fdc3da3c
Status: Downloaded newer image for gcr.io/kaniko-project/executor:v1.8.0
DEBU[0000] Getting source context from dir:///workspace/ 
DEBU[0000] Build context located at /workspace/         
DEBU[0000] Copying file /workspace/Dockerfile to /kaniko/Dockerfile 
kaniko should only be run inside of a container, run with the --force flag if you are sure you want to continue
Error: Process completed with exit code 1.

@int128

int128 commented Mar 13, 2022

I found the workaround at #1542 (comment).

I added -e container=docker to the docker args, and it finally works!

      - run: |
          docker run --rm \
            -v $(pwd)/helloworld:/workspace:ro \
            -e container=docker \
            gcr.io/kaniko-project/executor:v1.8.0 \
            --context dir:///workspace/ \
            --no-push \
            --verbosity debug

@superherointj

superherointj commented Mar 23, 2022

For running in a Kubernetes cluster, you have to declare an environment variable named container with the value kube.

To configure a gitlab-runner that runs on Kubernetes, use:
$ cat /etc/gitlab-runner/config.toml

[[runners]]
  executor = "kubernetes"
  environment = ["container=kube"]

For other runtimes, see options.
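
On plain Kubernetes (without gitlab-runner), the same variable can be set directly on the kaniko container. A sketch extending the pod spec from the issue description:

```yaml
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    env:
    - name: container   # read by kaniko's container detection
      value: kube
```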

@aaron-prindle added the kind/bug, area/cli, priority/p2, and work-around-available labels May 29, 2023
@stanhu

stanhu commented Jun 13, 2023

We ran into this error with Kaniko v1.3.0 and Google COS cos-105-17412-101-24.

Updating to Kaniko v1.11.0 fixed the problem.
