
docker: run failure on named volumes, error setting label #2497

Closed
sixcorners opened this Issue Aug 20, 2018 · 5 comments


sixcorners commented Aug 20, 2018

Issue Report

Bug

Container Linux Version

$ cat /etc/os-release
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1871.0.0
VERSION_ID=1871.0.0
BUILD_ID=2018-08-15-2242
PRETTY_NAME="Container Linux by CoreOS 1871.0.0 (Rhyolite)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"

Environment

intel nuc nuc6i5syh

Expected Behavior

Container runs

Actual Behavior

Container remains in the "Created" state

Reproduction Steps

  1. docker swarm init
  2. docker service create --mount type=volume,source=test,destination=/test ubuntu

Other Information

$ docker start $(docker ps -laq)
Error response from daemon: error setting label on mount source '': no such file or directory
Error: failed to start containers: 3432e46f5f82

Other people have reported the bug here:
docker/cli#1234

lucab (Member) commented Aug 20, 2018

@sixcorners thanks for the report. This looks like an SELinux-related issue on the daemon side of docker-ce. Can you please report it directly to the moby/moby upstream project?

Anyway, I can confirm this is also visible on a plain beta without any swarm setup. I haven't looked at the moby+selinux code, but stracing the dockerd daemon shows that at some point an empty string makes it into the filesystem logic:

mount("shm", "/var/lib/docker/containers/fd9cfcac7e55c826cb0cd4e6424e5b0c6eda351eb64a63a1a7d8c55fe3d08ee6/mounts/shm", "tmpfs", MS_NOSUID|MS_NODEV|MS_NOEXEC, "mode=1777,size=67108864,context="...) = 0
fchownat(AT_FDCWD, "/var/lib/docker/containers/fd9cfcac7e55c826cb0cd4e6424e5b0c6eda351eb64a63a1a7d8c55fe3d08ee6/mounts/shm", 0, 0, 0) = 0
lstat("", 0xc42009da38)                 = -1 ENOENT (No such file or directory)
lsetxattr("", "security.selinux", "system_u:object_r:svirt_lxc_file_t:s0", 37, 0) = -1 ENOENT (No such file or directory)
lstat("/var/lib/docker/containers/fd9cfcac7e55c826cb0cd4e6424e5b0c6eda351eb64a63a1a7d8c55fe3d08ee6/mounts", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
lstat("/var/lib/docker/containers/fd9cfcac7e55c826cb0cd4e6424e5b0c6eda351eb64a63a1a7d8c55fe3d08ee6/mounts/secrets", 0xc42009dbd8) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/proc/self/mountinfo", O_RDONLY|O_CLOEXEC) = 23
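The strace above shows the failure mode: an empty mount-source path reaches the labeling step, so the lstat/lsetxattr pair fails with ENOENT before any SELinux context can be applied. A minimal Go sketch of that behavior (my own illustration, not moby's actual code; the relabel helper is hypothetical):

```go
package main

import (
	"fmt"
	"os"
)

// relabel is a hypothetical stand-in for docker's SELinux relabeling
// path: stat the mount source before applying a security context. If
// the source string is empty, the stat fails with ENOENT, matching
// lstat("") = -1 ENOENT in the strace output above.
func relabel(path string) error {
	if _, err := os.Lstat(path); err != nil {
		return fmt.Errorf("error setting label on mount source '%s': %w", path, err)
	}
	// Real code would now call lsetxattr(path, "security.selinux", ...).
	return nil
}

func main() {
	// The empty-string case seen in the strace; prints an error
	// ending in "no such file or directory".
	fmt.Println(relabel(""))
}
```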

lucab changed the title from "docker swarm services fail to create containers with named volumes" to "docker: run failure on named volumes, error setting label" on Aug 20, 2018

lucab (Member) commented Aug 20, 2018

For reference, the whole command and its failure output:

$ docker run --rm --name test --mount source=test,destination=/test alpine

/run/torcx/bin/docker: Error response from daemon: error setting label on mount source '': no such file or directory.
ERRO[0000] error waiting for container: context canceled
sixcorners commented Aug 22, 2018

Weird. When I tried to reproduce it without swarm, with just -v testing:/test, it didn't work.

icedream commented Aug 22, 2018

I can confirm this issue has been present since the Docker upgrade to 18.06.0; both the Alpha and Beta channels currently have this bug. I have only tried this in swarm mode, but it's good to know it also exists in non-swarm setups.

dm0- (Member) commented Sep 6, 2018

I've built a torcx image with the upstream fix at:
http://builds.developer.core-os.net/torcx/pkgs/amd64-usr/docker/c0f963f620a30dae112917e7396b8f2b89268c13f23055234e2006a2cdeb4096410bbc7d3c7b8d49b8556746898fa2f015d6b12196fd6fb008a508912448bf05/docker:18.06.torcx.tgz

You can test it by saving the image to /var/lib/torcx/store/docker:test.torcx.tgz, running the following as root, and rebooting:

$ sed s/com.coreos.cl/test/ < /usr/share/torcx/profiles/vendor.json > /etc/torcx/profiles/test.json
$ echo test > /etc/torcx/next-profile

(Remember to delete /etc/torcx/next-profile after testing to resume using Docker from /usr and get automatic updates.)

This should be included in next week's releases if there are no issues with it.
