
chown: Value too large for defined data type #656

Closed
vineethvijay opened this issue May 8, 2019 · 23 comments · Fixed by #1254
Labels
area/container For all bugs related to the kaniko container area/released-image fixed-needs-verfication in progress kind/bug Something isn't working priority/p2 High impact feature/bug. Will get a lot of users happy

Comments


vineethvijay commented May 8, 2019

Kaniko build fails (randomly) with Value too large for defined data type

Dockerfile snippet:

FROM <repo>ubuntu:latest
WORKDIR /opt/docker
ADD opt /opt
RUN ["chown", "-R", "test_nobody:test_nobody", "."]

Build error :

INFO[0021] Taking snapshot of files...                  
INFO[0023] RUN ["chown", "-R", "test_nobody:test_nobody", "."] 
INFO[0023] cmd: chown                                   
INFO[0023] args: [-R test_nobody:test_nobody .]         
  chown: .: Value too large for defined data type
  error building image: error building stage: waiting for process to exit: exit status 1

Kaniko : gcr.io/kaniko-project/executor:debug
Build Env: Jenkins - Kubernetes
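For context on the error string itself, it is the libc message for a specific errno; a quick way to confirm this (a sketch, assuming python3 is available on the host):

```shell
# "Value too large for defined data type" is strerror(EOVERFLOW), the errno a
# 32-bit binary gets when stat()/chown() meets an inode number, file size, or
# timestamp that does not fit its 32-bit struct fields.
python3 -c 'import errno, os; print(errno.EOVERFLOW, os.strerror(errno.EOVERFLOW))'
# → 75 Value too large for defined data type   (on Linux/glibc)
```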


@vineethvijay vineethvijay changed the title Build fail : chown: .: Value too large for defined data type Build fail : Value too large for defined data type May 11, 2019

abergmeier commented May 28, 2019

I guess this may be due to Busybox being built with Bazel. Does Busybox need glibc for this to work?

@steerben

The kaniko debug image is defined by https://github.com/GoogleContainerTools/kaniko/blob/master/deploy/Dockerfile_debug

This Dockerfile is multi-stage: it prepares busybox in one of the earlier stages and copies it over to the final kaniko debug image.

This intermediate step depends on the distroless repository, which sets up busybox via a Bazel build.
That Bazel build references the busybox binary defined in the distroless repo's WORKSPACE file.
The referenced busybox binary was added two years ago: https://busybox.net/downloads/binaries/1.27.1-i686/busybox
and is a 32-bit, not a 64-bit, build (note the i686 in the URL).

According to https://bugs.busybox.net/show_bug.cgi?id=11651#c2 this should not matter for file sizes, since busybox "uses off_t, not ints, for file sizes everywhere."

Yet many point out that switching from the 32-bit uclibc busybox to an amd64 glibc-based build fixes their issue.
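One way to verify the 32-bit claim is to read the ELF class byte of the shipped binary; a sketch (the in-image path is /busybox/busybox, the local binary here is only illustrative):

```shell
# Byte 4 of an ELF header encodes the class: 01 = 32-bit, 02 = 64-bit.
# Against the kaniko debug image this would be, e.g.:
#   docker run --rm --entrypoint=/busybox/sh gcr.io/kaniko-project/executor:debug \
#     -c 'od -An -tx1 -j4 -N1 /busybox/busybox'
od -An -tx1 -j4 -N1 /bin/sh   # same check against any local ELF binary
```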

@abergmeier

So is switching to amd64 glibc variant an option?

@ravenpride

I started using kaniko some time ago and ran straight into this issue. I'm building on Kubernetes using the kaniko debug image, if that matters. Is a fix in sight? If not, I will have to build a kaniko image of my own using glibc, but I would prefer to use the official images ;-)

Any feedback is appreciated.

@MohammedFadin

I'm having the same issue


ravenpride commented Jul 10, 2019

Meanwhile I've built an Alpine-Linux-based kaniko image that works quite well.

Maybe this might help you, too.

https://gitlab.com/griffinplus/gitlab-kaniko

EDIT: If you use /kaniko/executor instead of the kaniko-build wrapper, the image behaves just the same as the original image.


StepanKuksenko commented Sep 20, 2019

I have the same issue :(
kaniko 0.12.0,
GKE 1.14.3

I had to create my own kaniko build...

UPDATE:
a custom kaniko image doesn't resolve the issue. I have to use a bare-metal k8s cluster; kaniko doesn't work in GKE :(((

@donmccasland donmccasland added area/container For all bugs related to the kaniko container priority/p2 High impact feature/bug. Will get a lot of users happy labels Sep 24, 2019

marcosimioni commented Nov 22, 2019

FYI, I keep getting "Value too large for defined data type" even when I just do an ls, but only on my k8s cluster running Linux 5.0 / gcc 7.4. My local Docker runs Linux 4.15 / gcc 5.4 and works just fine.

Tested both with distroless (related to GoogleContainerTools/distroless#225) and kaniko-debug, just to confirm - see below.

user@ubuntu:~$ docker run --entrypoint=/busybox/sh --rm -it gcr.io/kaniko-project/executor:debug-e0e59e619c03da1e60e9e9520aee5cc741000e3d
/ # ls
busybox  dev      etc      kaniko   proc     sys
/ # cat /proc/version
Linux version 4.15.0-70-generic (buildd@lgw01-amd64-057) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12)) #79~16.04.1-Ubuntu SMP Tue Nov 12 14:01:10 UTC 2019
/ # uname -a
Linux d6ea16280785 4.15.0-70-generic #79~16.04.1-Ubuntu SMP Tue Nov 12 14:01:10 UTC 2019 x86_64 GNU/Linux
/ # exit

user@ubuntu:~$ kubectl exec -it kaniko-debug-test /busybox/sh
/ # ls
ls: can't open '.': Value too large for defined data type
/ # cat /proc/version
Linux version 5.0.0-29-generic (buildd@lgw01-amd64-039) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #31~18.04.1-Ubuntu SMP Thu Sep 12 18:29:21 UTC 2019
/ # uname -a
Linux kaniko-debug-test 5.0.0-29-generic #31~18.04.1-Ubuntu SMP Thu Sep 12 18:29:21 UTC 2019 x86_64 GNU/Linux
/ # 

And by the way, @ravenpride's Alpine-based image worked like a charm, thanks for putting that together!


gebi commented Dec 10, 2019

Using kaniko is currently a bit unfortunate, because it still fails most builds (for us) on k8s clusters.

The reason, as written above, is a broken busybox/uclibc coming from distroless (which is i686 and breaks on hitting 64-bit inodes).

The fix for us was to just add an additional layer on top of gcr.io/kaniko-project/executor:debug, like:

FROM gcr.io/kaniko-project/executor:debug
MAINTAINER Michael Gebetsroither <mgebetsroither@mgit.at>

COPY --from=amd64/busybox:1.31.0 /bin/busybox /busybox/busybox

Would be nice if this problem could finally be fixed after so many months.

@cvgw cvgw self-assigned this Dec 23, 2019
@cvgw cvgw added the kind/bug Something isn't working label Dec 23, 2019
@tejal29 tejal29 added this to the GA Release v1.0.0 milestone Jan 10, 2020

tejal29 commented Jan 10, 2020

@gebi Sorry about the lack of support. We are a small team and trying to re-vamp our support level.

Looks like this was fixed on distroless GoogleContainerTools/distroless#437 in Nov 2019.

We did a release in Dec and this should be fixed now.
Can you please confirm with the v0.15.0 version?

Tejal


Silthias commented Jan 17, 2020

I'm encountering the same issue on a Mac, having tried the v0.15.0 image indicated above.

docker run -it --entrypoint="/busybox/sh" gcr.io/kaniko-project/executor:debug-v0.15.0
Unable to find image 'gcr.io/kaniko-project/executor:debug-v0.15.0' locally
debug-v0.15.0: Pulling from kaniko-project/executor
56794e083a92: Pull complete
1c0b7e34e3d0: Pull complete
8f7bb327a6ac: Pull complete
36d852826fe5: Pull complete
0a2adab3f462: Pull complete
bd1fdd317c06: Pull complete
6dd6c383d6aa: Pull complete
Digest: sha256:966acfdd2e11b8c82d25310ff7acf453b2bb97b806cb6da8e6ef0546bdc09128
Status: Downloaded newer image for gcr.io/kaniko-project/executor:debug-v0.15.0
/ # ls
ls: can't open '.': Value too large for defined data type

However, it works correctly when running as a container inside my Kubernetes cluster.


cvgw commented Jan 17, 2020

@Silthias What does your Dockerfile look like? Do you happen to be copying /? That will cause the issue you posted. See #960 for more details.


lmakarov commented Feb 6, 2020

Ran into this issue trying to build images with kaniko in a GitLab managed EKS Kubernetes cluster.

https://gitlab.com/griffinplus/gitlab-kaniko did not work, since it does not seem to have the amazon-ecr-credential-helper embedded and I do need to push to AWS ECR.

The quick and easy workaround from @gebi did the trick. Thanks @gebi!

I've implemented it here https://github.com/lmakarov/kaniko and using lmakarov/kaniko as the build image in GitLab for the time being.

build:
  stage: build
  image:
    #name: gcr.io/kaniko-project/executor:debug
    name: lmakarov/kaniko # See https://github.com/lmakarov/kaniko
    entrypoint: [""]
  tags:
    - kubernetes
  variables:
    IMAGE_URI: <aws-account-id>.dkr.ecr.us-east-1.amazonaws.com/<ecr-repo>:${CI_COMMIT_SHORT_SHA}
  before_script:
    - echo '{"credsStore":"ecr-login"}' > /kaniko/.docker/config.json # Enable amazon-ecr-credential-helper
  script:
    # Build and push image using Kaniko in K8s
    - /kaniko/executor --context ${CI_PROJECT_DIR} --dockerfile ${CI_PROJECT_DIR}/Dockerfile --destination ${IMAGE_URI}
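The before_script line simply drops a Docker client config that delegates auth to the ECR credential helper; the same file can be written and sanity-checked outside CI like this (the /tmp path is illustrative — in the job it is /kaniko/.docker/config.json):

```shell
# Write the Docker config that enables amazon-ecr-credential-helper and
# confirm it is valid JSON before handing it to kaniko.
mkdir -p /tmp/kaniko-docker
echo '{"credsStore":"ecr-login"}' > /tmp/kaniko-docker/config.json
python3 -m json.tool /tmp/kaniko-docker/config.json
```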


jeunii commented Feb 27, 2020

Does anyone have a somewhat similar issue in a GKE cluster where the build goes through but the built image misses chown-ing a folder? I'm referring to #1079


tejal29 commented Mar 6, 2020

@jeunii I will try to reproduce your issue on GKE.


skaymakca commented Mar 7, 2020

I'm running into the same issue (running ls) on a Rancher/RKE kubernetes cluster running docker 19.3.5. The builds otherwise run fine.

The issue doesn't exist on a development Ubuntu 18.04 VM running docker 19.03.7.

I tried it on the debug and debug-v0.18.0 tagged images.

@cvgw cvgw removed their assignment Mar 27, 2020
@tstromberg tstromberg changed the title Build fail : Value too large for defined data type chown: Value too large for defined data type Mar 29, 2020
@tstromberg

I've run into this as well with local Docker, but have not been able to reproduce it since.

I concur with the others that the root cause is likely commands not being able to handle system calls that return 64-bit values.


tstromberg commented Mar 29, 2020

My attempts to reproduce the issue have so far failed, but I definitely ran into this the first time I used Kaniko with Docker. I tried creating a 167 GB file:

# dd if=/dev/zero bs=4096 seek=40960000 count=1 of=test

But ls was able to process it properly. Maybe I got unlucky with inode numbers before?
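The dd invocation above creates a sparse file whose apparent size overflows a 32-bit off_t; a self-contained version of the experiment (path is illustrative):

```shell
# Sparse file: (40960000 + 1) * 4096 bytes ≈ 167 GB apparent size, almost no
# disk actually used. A 32-bit stat() without large-file support would return
# EOVERFLOW here; a 64-bit (or _FILE_OFFSET_BITS=64) build handles it fine.
dd if=/dev/zero bs=4096 seek=40960000 count=1 of=/tmp/bigfile 2>/dev/null
ls -l /tmp/bigfile
rm -f /tmp/bigfile
```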

@mg-christian-axelsson

Just to add another data point: I ran into the issue in a shell session in gcr.io/kaniko-project/executor:debug-v0.19.0 running inside a Kubernetes cluster on top of Ubuntu 18.04 LTS:

/ # ls
ls: can't open '.': Value too large for defined data type

Pod is created by the following spec:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-test
spec:
  containers:
  - name: builder
    image: gcr.io/kaniko-project/executor:debug-v0.19.0
    command: ["sh", "-c", "trap : TERM INT; (while true; do sleep 1000; done) & wait"]
  restartPolicy: Never


den-is commented May 4, 2020

Erm... encountered the same issue.
Fascinating that this issue is still not solved by providing official x64 binaries.

The workaround from @gebi (and GoogleContainerTools/distroless#225 (comment)) has worked perfectly fine.

@tejal29 tejal29 mentioned this issue May 8, 2020

tejal29 commented May 8, 2020

@gebi Trying to fix this here #1254
