error creating zfs mount: no such file or directory #37207
Issue also seen on Ubuntu 16.04.

`uname -a`: Linux XXXX 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

`df -h /app/docker`: Filesystem Size Used Avail Use% Mounted on (output truncated)

`apt-cache policy zfsutils-linux`: zfsutils-linux: (output truncated)

`docker version`: Client: … Server: (output truncated)

`docker info`: Containers: 1 … WARNING: No swap limit support (output truncated)
Same issue. Output of `docker info` (truncated): Containers: 96
I'm having the same problem. The build command:

The exact error:

The Dockerfile:
This is now happening on every second build. If you need help reproducing it, I can probably assist.
@bsutton I have not seen this issue for a while. Maybe it is worth trying with recent versions of Docker and containerd.io (the latter probably being more important).
Sorry, I'm not even certain what containerd.io is (I'm new to Docker).
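For context: on installations from Docker's apt repository, containerd (the container runtime Docker delegates to) is shipped by the `containerd.io` package. A hedged sketch for checking both versions, with fallbacks for when either tool is missing:

```shell
#!/bin/sh
# Print the Docker daemon and containerd versions, if installed.
# On Docker's apt repo, containerd is shipped by the "containerd.io" package.
docker version --format 'docker: {{.Server.Version}}' 2>/dev/null \
    || echo "docker: not available"
containerd --version 2>/dev/null \
    || echo "containerd: not available"
```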
@stephan2012 I can confirm that this is still happening with containerd.io 1.4.3 and docker-ce 20.10.3 (from Docker's official repo for Debian buster).

This seems to be a race condition that happens when Docker's root is on a ZFS volume and the build is multi-stage, containing a `COPY --from=...`. If the source stage takes more time to build than the destination stage (and is not already in cache), this triggers the bug. The reason it seems to work every n-th time is that the race condition is not triggered if the previous build step is already in cache.

The way to reproduce this is:

1. use a ZFS installation, meaning that data-root is on a ZFS volume (even if daemon.json does not explicitly set `"storage-driver": "zfs"`)
2. take a moderately complex multi-stage build, with `COPY --from=...` commands (any such build should trigger the bug, as long as the source stage takes more time to build than the destination stage)
3. perform a build with `--no-cache`

See also moby/buildkit#1758; it is the same bug, IMHO.
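To make the described pattern concrete, here is a hypothetical minimal setup: a multi-stage Dockerfile whose source stage ("builder") is made artificially slower than the final stage. The `sleep`, the stage name, and the file paths are illustrative assumptions, not taken from any reporter's build:

```shell
#!/bin/sh
# Write a minimal multi-stage Dockerfile where the source stage ("builder")
# deliberately takes longer than the final stage, matching the race
# described above. The sleep, stage name, and file paths are illustrative.
dir=${TMPDIR:-/tmp}/zfs-race-demo
mkdir -p "$dir"
cat > "$dir/Dockerfile" <<'EOF'
FROM alpine AS builder
RUN sleep 5 && echo payload > /artifact
FROM alpine
COPY --from=builder /artifact /artifact
EOF
echo "wrote $dir/Dockerfile"
# To try to trigger the bug on a ZFS-backed daemon:
#   DOCKER_BUILDKIT=1 docker build --no-cache "$dir"
```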
So can this be worked around by putting a sleep or similar in the source stage?
@bsutton If anything, the sleep should go in the destination stage, so that it can wait for the source stage to be ready. But it's not reliable. Sometimes you even get a successful build, but the image does not contain some of the files! So I would advise against it.

As I posted on the other issue moby/buildkit#1758, this is a minimal script that triggers the bug (thanks @jaen):

```sh
#!/bin/sh
cd `mktemp -d`
echo test-1 > test-1
echo test-2 > test-2
cat > Dockerfile <<EOF
FROM alpine as builder1
COPY test-1 /test-1
FROM alpine as builder2
COPY test-2 /test-2
FROM alpine
COPY --from=builder1 /test-1 /stuff/test-1
COPY --from=builder2 /test-2 /stuff/test-2
EOF
TAR=`mktemp`
tar -cf $TAR .
env DOCKER_BUILDKIT=1 docker build - < $TAR
```

I don't know why the TAR step triggers it more easily, but it does. Sometimes you get this error:
Other times you get this:
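Because the failure is intermittent (see the "every second build" reports above), a single successful build proves little. A hedged sketch of a loop runner for any repro script (the `./repro.sh` name in the usage comment is an assumption):

```shell
#!/bin/sh
# Run a (possibly flaky) build script repeatedly; one successful run does
# not prove the race is gone. Usage: run_until_failure <script> <attempts>
run_until_failure() {
    script=$1
    attempts=$2
    i=1
    while [ "$i" -le "$attempts" ]; do
        echo "attempt $i"
        sh "$script" || { echo "failed on attempt $i"; return 1; }
        i=$((i + 1))
    done
    echo "all $attempts attempts succeeded"
}
# e.g.: run_until_failure ./repro.sh 10
```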
PS: the only workaround I found for this issue is to not use the zfs storage driver at all (explicitly set
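For illustration, pinning the storage driver is done in `/etc/docker/daemon.json`. In this sketch the file is written to the current directory, and `overlay2` is an assumed choice of driver; overlay2 requires `/var/lib/docker` to live on a non-ZFS filesystem:

```shell
#!/bin/sh
# Sketch: pin the storage driver explicitly instead of letting Docker
# auto-detect "zfs". Written to the current directory for illustration;
# the real location is /etc/docker/daemon.json. "overlay2" is an assumed
# choice and needs /var/lib/docker on a non-ZFS filesystem.
cat > daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
# then: sudo cp daemon.json /etc/docker/daemon.json && sudo systemctl restart docker
```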
I'm having the same issue. Does anyone know how I can set up the dev environment to work on a PR for this? I identified that the error is thrown in this file, but I have no idea how to debug it =/ Any tips?
+1 I am going to move my docker mount onto a zvol + zfs + overlay. This is not the only bug I experienced :( |
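A sketch of the zvol approach mentioned above: carve an ext4-formatted zvol out of the pool and mount it at `/var/lib/docker`, so Docker auto-selects overlay2 instead of the zfs driver. The pool name `rpool`, volume name `docker`, and 64G size are assumptions, and the commands are echoed rather than executed so the sketch is safe to run:

```shell
#!/bin/sh
# Move Docker's data root onto an ext4-formatted zvol. Pool name "rpool",
# volume name "docker", and the 64G size are assumptions.
POOL=rpool
VOL=docker
SIZE=64G

# Echo each command instead of running it; remove the echo to execute.
run() { echo "+ $*"; }

run zfs create -s -V "$SIZE" "$POOL/$VOL"           # sparse zvol
run mkfs.ext4 "/dev/zvol/$POOL/$VOL"                # format with ext4
run mount "/dev/zvol/$POOL/$VOL" /var/lib/docker    # mount as Docker root
```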
+1 Also experiencing this issue. Running Arch Linux with kernel version

Output for
The fix discussed in moby/buildkit#1758 works. Backporting to docker.io 20.10.12 is straightforward. |
I'm hitting this error in
Description

`docker build` reports `no such file or directory` when running `docker build` on a ZFS filesystem.

Steps to reproduce the issue:

Execute `docker build` when `/var/lib/docker` is mounted on a ZFS filesystem.

Describe the results you received:

Describe the results you expected:

No errors

Additional information you deem important (e.g. issue happens only occasionally):

Intermittent error. Usually `docker build` succeeds after three to four retries. Probably a race condition.

Output of `docker version`:

Output of `docker info`:

Additional environment details (AWS, VirtualBox, physical, etc.):