Permissions on tmpfs mounts reset after container restart #138

Open

charlvanniekerk opened this issue Oct 18, 2017 · 12 comments

@charlvanniekerk

  • This is a bug report
  • This is a feature request
  • I searched existing issues before opening this one

Expected behavior

The tmpfs mount's permissions should remain exactly as they were initially set.

Actual behavior

On the first start of the container, the permissions are correctly set. After a restart, they are always reset to 755.

Steps to reproduce the behavior

$ docker run --name test --tmpfs /test debian stat -c %a /test && docker start -a test && docker rm test
1777
755
test
$ docker run --name test --mount type=tmpfs,destination=/test,tmpfs-mode=1777 debian stat -c %a /test && docker start -a test && docker rm test
1777
755
test
$ docker run --name test --mount type=tmpfs,destination=/test,tmpfs-mode=0444 debian stat -c %a /test && docker start -a test && docker rm test
444
755
test

Output of docker version:

Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:18 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:40:56 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 17.09.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-97-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 992.1MiB
Name: ubuntu-xenial
ID: I4LX:P2UV:RBV5:CF5A:H6Y2:UN2T:ISCF:ETL7:XRT3:J7J5:6O5B:WKGY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.)

Package: virtualbox-5.1
Version: 5.1.30-118389~Debian~stretch
@trentbarry

Is this one being looked at? We could really use a fix. Setting up a container with a non-root user and a read-only filesystem, using a tmpfs mount for configuration-file changes made by the startup script, works great... until a docker restart of the container, when the permissions on the tmpfs mount change. As a workaround, I currently have a shell script that squirrels away the container's logs, force-removes the container, and starts a new one, as sketched below. I would much prefer to be able to use Docker's restart behavior together with the more secure options of a read-only filesystem, a non-root user, and the tmpfs mount type.
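
For reference, a minimal sketch of that kind of replace-instead-of-restart workaround (container name, image, and paths here are hypothetical):

#!/bin/sh
# Squirrel away the logs before destroying the container.
docker logs app > "app-logs-$(date +%s).txt" 2>&1
# Recreate instead of restarting, so the tmpfs mount is set up
# fresh with the permissions requested at creation time.
docker rm -f app
docker run -d --name app --read-only --user 1000:1000 \
  --tmpfs /etc/app \
  example/app:latest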

@struanb

struanb commented Nov 18, 2020

It seems like a really basic issue of correctness that (a) the tmpfs mode set on launch should be the same after the container is restarted; (b) the tmpfs mode should, when not overridden, always be the default specified in the documentation (which says "Defaults to 1777").

$ docker run -it --name xxx --mount=type=tmpfs,dst=/xyzzy debian:latest bash
root@adc81b0e72df:/# ls -ld /xyzzy/
drwxrwxrwt 2 root root 40 Nov 18 20:58 /xyzzy/
root@adc81b0e72df:/# exit

$ docker start xxx
xxx

$ docker exec -it xxx bash
root@adc81b0e72df:/# ls -ld /xyzzy/
drwxr-xr-x 2 root root 40 Nov 18 20:59 /xyzzy/

Is there a fix pending for this?

@vaind

vaind commented Jan 31, 2021

Is this really so complicated to fix? It's been sitting here for three years...

@tmuncks

tmuncks commented Apr 25, 2022

This is a weird one, and one that shouldn't be too hard to fix (in my own naive understanding of the issue at least).

When mounting a tmpfs volume (or any other volume) inside a container, the resulting permissions of that mounted filesystem should be completely independent of the permissions or ownership of the mountpoint on which it is mounted. This is how mountpoints normally work, so I can't think of any good reason to do things differently here.
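
For illustration, this is what plain Linux does outside Docker (paths here are arbitrary): a fresh tmpfs mount takes the mode from its mount options, regardless of the mode of the directory underneath:

$ mkdir -m 0700 /tmp/mnt
$ sudo mount -t tmpfs -o mode=1777 tmpfs /tmp/mnt
$ stat -c %a /tmp/mnt    # prints 1777; the 0700 on the underlying dir is shadowed
$ sudo umount /tmp/mnt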

So how can we help getting this fixed?

Brikaa referenced this issue in engineer-man/piston Jun 29, 2022
Permissions on the jobs directory allowed anyone to write into the directory - this commit simply allows only the `node` user to `rwx` on the jobs directory.
@AIGeneratedUsername

AIGeneratedUsername commented Apr 3, 2023

This is also a problem for volumes. Example:

services:

  # By default, bind mounted files are owned by a root user. The below is
  # a hack to bind mount folder with keeping non-root user permissions.
  # https://github.com/docker/compose/issues/3270#issuecomment-1245819741
  backend-init:
    image: alpine:3.17.3
    user: root
    group_add:
      - '1000'
    volumes:
      - ${HOME}/.ssh:/tmp/.ssh:ro
      - ssh-copy:/tmp/ssh_copy/.ssh
    # Note: '&&' needs a shell, so run the copy and chown via sh -c
    command: sh -c 'cp -r /tmp/.ssh /tmp/ssh_copy && chown -R 1000:1000 /tmp/ssh_copy'

  backend:
    volumes:
      - ssh-copy:/home/nonroot/.ssh
    command:
      - /bin/bash
      - '-c'
      - |
          echo ${DOCKER_USER_PASSWORD} | sudo -S chown -R 1000:1000 ~/.ssh
          /bin/bash
    depends_on:
      backend-init:
        condition: service_completed_successfully


volumes:
  ssh-copy:

This successfully chowns everything to UID 1000, but after a container restart the ownership is reset to 0 (root) again.

@xorinzor

xorinzor commented Apr 21, 2023

Running into this issue now too when attempting to use Varnish on Unraid using Docker.

Initial docker container creation goes fine and it runs, but as soon as you stop & restart the container it no longer has the correct permissions and fails to start.

Would really like to see a fix for this.

Docker version 20.10.21, build baeda1f

@xorinzor

Update: I got it to work for the Varnish container, thanks to the comment by @kolorafa in moby/moby#20437.

Once the container was running, I had a look at the directory I was mounting via tmpfs to see which user and group it used.
Then I looked in /etc/passwd to get the user and group IDs.

Next I just added ,uid=<user-id>,gid=<group-id> after :exec, which caused the tmpfs to be created with the correct permissions, and these persist after a restart of the container.

Resulting command looked like --tmpfs /var/lib/varnish/varnishd:exec,uid=1000,gid=1000
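
For reference, a fuller sketch of that invocation (image tag and the detach flag are illustrative; the uid/gid must match the user the service runs as):

$ docker run -d --name varnish \
    --tmpfs /var/lib/varnish/varnishd:exec,uid=1000,gid=1000 \
    varnish:stable
$ docker restart varnish    # per the comment above, ownership persists across the restart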

@cpuguy83
Collaborator

cpuguy83 commented Jun 22, 2023

This looks like a runc bug.

As an example (using ctr for simplicity to mount a filesystem):

$ mkdir -p /tmp/test/rootfs
$ sudo ctr image mount docker.io/library/busybox:latest /tmp/test/rootfs
$ sudo runc spec --bundle /tmp/test # creates a default runc spec
$ sudo mv /tmp/test/config.json /tmp/test/config.json.orig
$ sudo jq '.process += { "args": ["stat", "-c", "%a", "/tmp/test"]} | .mounts += [{ "destination": "/tmp/test", "type": "tmpfs", "source": "tmpfs", "options": ["mode=0444"] }]' /tmp/test/config.json.orig > /tmp/test/config.json
$ sudo runc run --bundle /tmp/test
444
$ sudo runc run --bundle /tmp/test
755
  • Note: to properly clean this up run: sudo ctr image unmount /tmp/test/rootfs && sudo ctr snapshot rm /tmp/test/rootfs

This means it can actually be reproduced by pre-creating the destination dir for the tmpfs.
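
A docker-level sketch of that repro, assuming the pre-existing directory's mode wins as described (the tmpfs-repro image name is made up here):

$ cat > Dockerfile <<'EOF'
FROM debian
# Pre-create the tmpfs destination with a non-default mode
RUN mkdir -m 0700 /pre
EOF
$ docker build -t tmpfs-repro .
$ docker run --rm --mount type=tmpfs,destination=/pre,tmpfs-mode=1777 \
    tmpfs-repro stat -c %a /pre
# expected to print 700 rather than the requested 1777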

@cpuguy83
Collaborator

Opened opencontainers/runc#3911 to get the runc maintainers' thoughts on a desirable fix for this.

@cpuguy83
Collaborator

FWIW, you should be able to pre-create the directory in the container image with the perms you want and it will... in theory... work for you.
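
A minimal sketch of that pre-create workaround (image name made up; in theory the mode baked into the image is what the tmpfs ends up with):

$ cat > Dockerfile <<'EOF'
FROM debian
# Bake the mountpoint into the image with the perms you want
RUN mkdir -p /test && chmod 1777 /test
EOF
$ docker build -t tmpfs-premade .
$ docker run --name test --tmpfs /test tmpfs-premade stat -c %a /test
$ docker start -a test    # in theory prints the same mode after the restart
$ docker rm test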

@PikachuEXE

PikachuEXE commented Dec 13, 2023

Still got this issue with runc v1.1.10-0-g18a0cb0

Update 1: pre-creating the directory in the container image works, but it needs perms 777:
PikaSer-Cosmos/likecoin-chain-tx-indexer-pika@a42f415

@LiorRaines

I faced the same issue: after I ran docker-compose restart, I couldn't write files to the tmpfs.
@xorinzor's solution worked for me, thank you.
