Permissions on tmpfs mounts reset after container restart #138
Is this one being looked at? Could really use a fix. Setting up a container with a non-root user and a read-only filesystem, using a tmpfs mount for the configuration-file changes made by the startup script, works great... until a `docker restart` of the container, at which point the permissions on the tmpfs mount change. As a workaround I currently have a shell script that squirrels away the logs from the container, force-removes it, and starts a new container. I would really prefer to use Docker's restart behavior together with the more secure options: a read-only filesystem, a non-root user, and the tmpfs mount type.
It seems like a really basic issue of correctness that (a) the tmpfs mode set on launch should be the same after the container is restarted; (b) the tmpfs mode should, when not overridden, always be the default specified in the documentation (which says "Defaults to 1777").
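As a quick aside on what those two modes actually mean, Python's `stat` module can render them symbolically; this is just an illustration of the modes discussed above, not part of the bug report:

```python
import stat

# The documented tmpfs default, mode 1777: world-writable with the
# sticky bit set -- the same mode as /tmp.
documented_default = 0o1777
assert documented_default & stat.S_ISVTX  # sticky bit is set
assert stat.filemode(stat.S_IFDIR | documented_default) == "drwxrwxrwt"

# The mode the mount is reset to after a restart, 755: only the owner
# can write, so a non-root container user loses write access.
reset_mode = 0o755
assert stat.filemode(stat.S_IFDIR | reset_mode) == "drwxr-xr-x"
```

The difference between `rwt` and `r-x` in the last column is exactly why startup scripts that write to the mount fail after a restart.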
Is there a fix pending for this?
Is this really so complicated to fix? It's been sitting here for three years...
This is a weird one, and one that shouldn't be too hard to fix (in my own naive understanding of the issue, at least). When mounting a [...] So how can we help get this fixed?
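For anyone who wants a minimal reproduction with plain Docker, a sketch along the lines of the original report (the path, image, and mode here are illustrative assumptions, not taken from the reporter's setup):

```console
$ docker run -d --name repro --read-only \
    --tmpfs /cfg:mode=0770 busybox sleep infinity
$ docker exec repro stat -c '%a' /cfg
770
$ docker restart repro
$ docker exec repro stat -c '%a' /cfg
755
```

Per the reports in this thread, the mode given at `docker run` survives the first start but is reset to `755` after `docker restart`.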
This is also a problem for volumes. Example:

```yaml
services:
  # By default, bind-mounted files are owned by root. The below is
  # a hack to bind mount a folder while keeping non-root user permissions.
  # https://github.com/docker/compose/issues/3270#issuecomment-1245819741
  backend-init:
    image: alpine:3.17.3
    user: root
    group_add:
      - '1000'
    volumes:
      - ${HOME}/.ssh:/tmp/.ssh:ro
      - ssh-copy:/tmp/ssh_copy/.ssh
    # A string command with `&&` needs a shell to interpret it.
    command: sh -c "cp -r /tmp/.ssh /tmp/ssh_copy && chown -R 1000:1000 /tmp/ssh_copy"
  backend:
    volumes:
      - ssh-copy:/home/nonroot/.ssh
    command:
      - /bin/bash
      - '-c'
      - |
        echo ${DOCKER_USER_PASSWORD} | sudo -S chown -R 1000:1000 ~/.ssh
        /bin/bash
    depends_on:
      backend-init:
        condition: service_completed_successfully

volumes:
  ssh-copy:
```

This code chowns permissions successfully to [...]
Running into this issue now too when attempting to use Varnish on Unraid with Docker. The initial container creation goes fine and it runs, but as soon as you stop and restart the container it no longer has the correct permissions and fails to start. Would really like to see a fix for this.
Update: I got it to work for the varnish container thanks to the comment by @kolorafa in issue moby/moby#20437. Once the container was running I had a look in the directory I was mounting via tmpfs to see which user and group were used. Next I just added [...] The resulting command looked like [...]
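The linked comment's approach is to pass `uid=`/`gid=` (and optionally `mode=`) as tmpfs mount options so the mount is created with the right ownership regardless of restarts. A sketch of what such a command can look like; the mount path, ids, and image here are illustrative assumptions, not the commenter's actual values:

```console
$ docker run -d --name varnish-test \
    --tmpfs /var/lib/varnish:uid=1000,gid=1000,mode=0755 \
    varnish
```

Note that `uid=` and `gid=` are kernel tmpfs options accepted by the `--tmpfs` flag's option string; the `--mount type=tmpfs` syntax only exposes `tmpfs-size` and `tmpfs-mode`.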
This looks like a runc bug. As an example (using ctr for simplicity to mount a filesystem):

```console
$ mkdir -p /tmp/test/rootfs
$ sudo ctr image mount docker.io/library/busybox:latest /tmp/test/rootfs
$ sudo runc spec --bundle /tmp/test   # creates a default runc spec
$ sudo mv /tmp/test/config.json /tmp/test/config.json.orig
$ sudo jq '.process += { "args": ["stat", "-c", "%a", "/tmp/test"]} | .mounts += [{ "destination": "/tmp/test", "type": "tmpfs", "source": "tmpfs", "options": ["mode=0444"] }]' /tmp/test/config.json.orig > /tmp/test/config.json
$ sudo runc run --bundle /tmp/test
444
$ sudo runc run --bundle /tmp/test
755
```

This means it can actually be reproduced by pre-creating the destination dir for the tmpfs.
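The jq edit above makes two changes to the OCI runtime spec; the same edit can be sketched in Python, which may be easier to read (the spec dict here is a minimal stand-in for the full `runc spec` output, containing only the fields being touched):

```python
import json

# Minimal stand-in for a default OCI spec as produced by `runc spec`.
spec = {"process": {"args": ["sh"]}, "mounts": []}

# 1. Make the container print the mode of the mount point, then exit.
spec["process"]["args"] = ["stat", "-c", "%a", "/tmp/test"]

# 2. Add a tmpfs mount with an explicit non-default mode.
spec["mounts"].append({
    "destination": "/tmp/test",
    "type": "tmpfs",
    "source": "tmpfs",
    "options": ["mode=0444"],
})

print(json.dumps(spec, indent=2))
```

On the first `runc run` the `mode=0444` option takes effect; on the second run the pre-existing destination directory's mode (`755`) wins, which is the bug.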
Opened opencontainers/runc#3911 to get the runc maintainers' thoughts on a desirable fix for this.
FWIW, you should be able to pre-create the directory in the container image with the perms you want and it will... in theory... work for you.
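A minimal sketch of that workaround, assuming a service that needs a writable `/app/config` owned by uid 1000 (the base image, path, and ids are illustrative, not from the thread):

```dockerfile
FROM alpine:3.17
# Pre-create the tmpfs mount point with the ownership and mode the
# service needs; per the comment above, the directory baked into the
# image should -- in theory -- give the mount sane perms across restarts.
RUN mkdir -p /app/config \
    && chown 1000:1000 /app/config \
    && chmod 0770 /app/config
USER 1000
```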
Still got this issue with [...] Update 1: [...]
I faced the same issue: after I ran `docker-compose restart`, I couldn't write files to the tmpfs.
Expected behavior

The `tmpfs` mount's permissions should be exactly the same as when they were initially set.

Actual behavior

On the first start of the container, the permissions are correctly set. After a restart they are always reset back to `755`.

Steps to reproduce the behavior
Output of `docker version`:

Output of `docker info`:

Additional environment details (AWS, VirtualBox, physical, etc.)