This repository has been archived by the owner on Jan 1, 2021. It is now read-only.

1.3.0 - Only root can write to OSX volumes / Can't change permissions within #581

Open
lynxnathan opened this issue Oct 20, 2014 · 230 comments

Comments

@lynxnathan

Currently only root inside the container is able to modify mounted volumes. Also, permissions changed from within the container have no effect. I'm unsure whether the phrase "cannot be managed dynamically" in the blog post refers to any of this.

I would suggest updating the readme to let users know about the existence of native support for volumes and also its limitations (especially if this is one of them).

@SvenDowideit SvenDowideit added this to the 1.3.1 milestone Oct 20, 2014
@mnapoli

mnapoli commented Oct 20, 2014

I think I'm having the same issue (latest versions of everything, installed today).

In the container, in a directory shared with the host (i.e. mounted through the virtualbox VM):

  • root user can write files
  • www-data user can't, even in a directory with 777 permissions

Is that normal, or did I do something wrong?
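
A minimal way to reproduce this symptom (a sketch; /Users/you/project is an assumed path shared into the boot2docker VM and bind-mounted into the container):

docker run --rm -v /Users/you/project:/data ubuntu:14.04 bash -c '
  chmod 777 /data                                         # appears to succeed but has no effect on the vboxsf mount
  su -s /bin/sh -c "touch /data/from-www-data" www-data   # fails with Permission denied
  touch /data/from-root                                   # works: only root maps through the share
'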

@lukaswelte

So that means if, for example, I create a cache folder that writes new files and folders to disk during execution, this won't work right now?

Because I'm experiencing the same problem as @mnapoli.

@bradgessler

I wrote up a simple test case. Let me know if there's anything else I can do to make this bug easier to fix.

First, I create a Dockerfile:

FROM ubuntu:14.04

# Adding user
RUN adduser --system --ingroup staff mugzy

# Setup repos dir.
RUN mkdir /data
RUN chown mugzy:staff -R /data
VOLUME /data

CMD touch /data/file && ls -al /data
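
For completeness, the image is built from this Dockerfile before the runs below; a sketch (the tag is my own placeholder, and the image ID c638bc43bfe7 used later comes from this build):

app[master] → docker build -t volume-test .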

Then create a host volume that I'll mount into the Docker instance at runtime:

app[master] → mkdir ./data

When I run the container, I see root:root ownership on the docker /data volume. I'd expect this to be what I set in the Dockerfile above, mugzy:staff:

app[master] → docker run -i -v $PWD/data:/data c638bc43bfe7
total 4
drwxr-xr-x  1 1000 staff  102 Oct 21  2014 .
drwxr-xr-x 62 root root  4096 Oct 21 01:54 ..
-rw-r--r--  1 1000 staff    0 Oct 21 01:49 file

Obviously when I then run the docker instance as mugzy, the volume has incorrect permissions and I can't write to disk:

app[master] → docker run -i -v $PWD/data:/data -u mugzy c638bc43bfe7
total 4
drwxr-xr-x  1 1000 staff  102 Oct 21  2014 .
drwxr-xr-x 62 root root  4096 Oct 21 01:54 ..
-rw-r--r--  1 1000 staff    0 Oct 21 01:49 file
touch: cannot touch '/data/file': Permission denied

I'd imagine if people are trying to boot services in Docker on Mac, they'd want to be able to set up a volume that a service user can write to for persistence.

@vmaatta

vmaatta commented Oct 22, 2014

Mounting a VBox share with UID 999 will allow a user created inside the container to write, since such users usually end up with UID 999. This is just a fragile hack on top of a hack, but it does allow writing…

sudo mount -t vboxsf -o uid=999,gid=50 your-other-share-name /some/mount/location

@lukaswelte

So can I use this command to set the /Users share to be writeable?
Or do I have to create a special share for every application that needs to be able to write in a folder?

@vmaatta

vmaatta commented Oct 22, 2014

/Users is a bit special as it's mounted by the boot2docker script (any share matching the names listed in the release notes); changing it requires customising the code. I just created a custom share anyway, as I don't want to share the whole /Users structure with containers.

  1. Override the default /Users share on boot2docker start:
boot2docker --vbox-share=$(pwd)/share-location=share-name up
  2. boot2docker ssh in and mount the custom share:
sudo mount -t vboxsf -o uid=999,gid=50 share-name [SHARE-FULL-PATH]/share-location
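
Put together, the workaround looks roughly like this (a sketch; share-name, share-location, and the uid/gid values are just the examples from above):

# on the Mac host: start the VM with a custom share instead of relying on the default /Users one
boot2docker --vbox-share=$(pwd)/share-location=share-name up

# inside the VM (after boot2docker ssh): mount it with a uid/gid the container user can write as
sudo mount -t vboxsf -o uid=999,gid=50 share-name [SHARE-FULL-PATH]/share-location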

@lukaswelte

Thanks for the workaround.
Although it would be great to have that behavior built into the boot2docker script (I don't want my coworkers to have to do much extra work).

@SvenDowideit
Contributor

@tianon ?

@tianon
Contributor

tianon commented Oct 23, 2014

This is bizarre - do we need to tweak something about the way we mount the share?

There's nothing special about what we do: https://github.com/boot2docker/boot2docker/blob/master/rootfs/rootfs/etc/rc.d/automount-shares (just uid=...,gid=... as mentioned here)

@tianon tianon modified the milestone: 1.3.1 Oct 23, 2014
@tianon
Contributor

tianon commented Oct 23, 2014

Indeed, I've just tested here:

docker@boot2docker:~$ ls -lAF /
...
drwxr-xr-x    1 docker   staff        12288 Oct 23 16:37 Users/
...
docker@boot2docker:~$ echo test > /Users/test
docker@boot2docker:~$ cat /Users/test
test
docker@boot2docker:~$ rm -v /Users/test
rm: remove '/Users/test'? y
docker@boot2docker:~$

So I definitely need more information about what's going wrong if we're going to figure out how to fix it. 😄

@mnapoli

mnapoli commented Oct 23, 2014

@tianon Now try with a different user and it will not work.

The big problem is that usually, Nginx or MySQL or whatever will run as a different user (www-data, mysql…) so it's impossible to use Docker at all (at least in those situations).

@tianon
Contributor

tianon commented Oct 23, 2014

Right, but how can we fix it?

Permissions with volume mounts are one of those "gotchas" that's always a sticky point, so I don't see a good fix for the general case (besides Docker handling the volume sharing more directly, and thus smoothing permissions issues somehow), especially since once you've got it working for "just this one container", you're going to want to spin up another, and it will probably have different UIDs altogether (think mysql + postgres + wordpress + stuff specific to your own development, etc).

@vmaatta

vmaatta commented Oct 23, 2014

@tianon, in your example above you're running as the docker user in the VM. It'll work just fine there, but the container and added users are a different story.

  • Outside the container, in the VM, the vboxsf is mounted as UID=1000, GID=50, i.e. docker:staff
  • Inside a running container that UID is not in use by default, but if you just useradd a new user, 1000 will be the first ID. OK so far.
  • If you add both a new group and a new user assigned to the freshly created group, inside the container, you'll likely end up with UID / GID 999:999 -> Quite a few containers in the Hub do just that: they add a group and a user for the process to run as. Here's an example from the postgres container*: RUN groupadd -r postgres && useradd -r -g postgres postgres. That'll end up with 999:999.

Here's some testing I did with my custom uid=999,gid=50 vboxsf:

Test "postgres" container

root@9aa1b6f15b1c:/# groupadd -r postgres && useradd -r -g postgres postgres
root@9aa1b6f15b1c:/# cat /etc/group | grep post
postgres:x:999:

root@9aa1b6f15b1c:/tmp/test# mkdir root-dir
root@9aa1b6f15b1c:/tmp/test# su -c 'mkdir user-dir' postgres
root@9aa1b6f15b1c:/tmp/test# su -c 'touch user-file' postgres
root@9aa1b6f15b1c:/tmp/test# su -c 'echo "test" > user-file' postgres
root@9aa1b6f15b1c:/tmp/test# su -c 'cat user-file' postgres
test
root@9aa1b6f15b1c:/tmp/test# su -c 'ln -s user-file user-link' postgres
root@9aa1b6f15b1c:/tmp/test# su -c 'ln user-file user-hard-link' postgres
ln: failed to create hard link `user-hard-link' => `user-file': Operation not permitted

New container

root@7a795aa575df:/# useradd test-user
root@7a795aa575df:/# cat /etc/passwd
…
test-user:x:1000:1000::/home/test-user:/bin/sh
root@7a795aa575df:/tmp/test# su - test-user
No directory, logging in with HOME=/
$ cd /tmp/test
$ touch test-user-test-file
touch: cannot touch `test-user-test-file': Permission denied

VM

docker@boot2docker:~$ ls -lAF /Users/vmaatta/projects/data/writetest/
total 8
drwxr-xr-x    1 999      staff           68 Oct 23 20:44 root-dir/
drwxr-xr-x    1 999      staff           68 Oct 23 20:45 user-dir/
-rw-r--r--    1 999      staff            5 Oct 23 20:48 user-file
lrwxr-xr-x    1 999      staff            9 Oct 23 20:49 user-link -> user-file

Now, as you said, it's very difficult to come up with a general fix. And this is actually not that different from the situation of running docker on a Linux host without boot2docker or any other virtualisation layer. Issues with volume folder rights are a challenge there as well.

Currently the vboxsf mount is UID/GID 1000:50. That's docker:staff in the VM; in a container, UID 1000 maps to no user at all (or to the first user you add) and GID 50 to whatever group has that ID. I changed this to 999:50, which matches the new-group-and-user scenario by UID. The GID is still 50, so the VM's docker user keeps access, and the container's root user is fine too. The web server I mount a volume for follows the new-group-and-user scenario as well, so it works for me.

I don't know… maybe there's a better / more general UID/GID combination, but I've seen 999:999 mentioned a couple of times already in the documentation of containers on the Hub. No surprise, as they add both a group and a user. But YMMV, and that's why I've just done this in bootlocal.sh instead of submitting a pull request.

And maybe we need something completely different [from vboxsf] to solve this.

  * Postgres is a bit of a bad example now, as initdb dies on hard links, but oh well… there's nothing we can do about that here, unfortunately.

@tianon
Contributor

tianon commented Oct 23, 2014

Yeah, in the general Linux case this is easier because the permissions actually can be munged, generally speaking. With "vboxsf", we have to choose one mapping, and no matter what we pick we're going to alienate a non-trivial number of people, so we defaulted to "docker:staff" to at least make the reasons for the default choice clear and obvious.

Maybe making the exact uid/gid configurable via the persistent storage "profile" file is the way to go, but that's really just pushing the already bad situation down on our users (however, with the benefit that they can actually get themselves to the workaround without a huge amount of effort, compared to where we're at now with hacks in bootlocal, etc).
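
Purely to illustrate that idea (these variable names are hypothetical, not an existing boot2docker option), the profile approach might look like:

# /var/lib/boot2docker/profile  (hypothetical knobs; not an implemented feature)
SHARES_UID=999
SHARES_GID=50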

@SvenDowideit
Contributor

tbh, the right place to do all this is in the docker client - which would be the one to create a new share every time, and then the VM mount would be specific to the container - aaaand, for that we need help writing code :)

@lukaswelte

Is there any news on this?
I have no really good idea how to solve it, but this issue is holding me back from switching to boot2docker, because I just cannot run my apache2 docker container.

@vmaatta

vmaatta commented Oct 30, 2014

I might be wrong but I don't think there is really much that can be done on the boot2docker side very soon. Every container's needs are going to be different and getting the automatic vbox share working with them all is quite difficult.

If the apache2 container does not need to make hard links on the bind-mounted volume, you should be able to use a custom share with access rights suitable for the apache user. Above you can find my override of the default share, but you could also add another one just for apache:

  1. Add a share location to the VM: VBoxManage sharedfolder add "boot2docker-vm" --name "apacheshare" --hostpath "/Users/username/shares/data/apache". You could just as well do that via the VirtualBox UI.
  2. Mount it in the VM after bootup: mount -t vboxsf -o uid=999,gid=50 apacheshare /Users/username/shares/data/apache. You'll need to find the suitable UID that works for the apache2 user, if there is one.

For multiple bind mounts, and different containers, there might need to be multiple different shares depending on the needed UIDs.

I've added my version of number 2 to the bootlocal.sh in the VM so it's done automatically on boot. I don't have the script handy now but I can add it here later.

@SvenDowideit
Contributor

yup, @vmaatta has it about right - you could do this by hand, but you should probably consider that you might be better off working out a different way to achieve it - like using volume containers.

@lukaswelte

@vmaatta It would be awesome if you could share your script.
I am not that familiar with VirtualBox.

@SvenDowideit Volume containers are not a real option, as we use fig for the development process and only some people use Macs. It would make the process more painful for the non-Mac users.

@vmaatta

vmaatta commented Oct 30, 2014

@lukaswelte Here's my /mnt/sda1/var/lib/boot2docker/bootlocal.sh. You'll need to adapt it to your needs.

#!/bin/sh

# if the default /Users share was auto-mounted (so /Users/username exists), drop it first
if [ -e /Users/username ]; then
    umount /Users
fi

# remount just the projects share, owned by uid 999 / gid 50
mkdir -p /Users/username/projects
mount -t vboxsf -o uid=999,gid=50 projects /Users/username/projects
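
If the script isn't executable yet, something like this from the host makes it so and applies it on the next boot (a sketch):

boot2docker ssh "sudo chmod +x /mnt/sda1/var/lib/boot2docker/bootlocal.sh"
boot2docker restart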

@SvenDowideit I'm probably missing the point but… Volume containers are great and should always be used where they make sense. But with regards to bind mounting something from the host they don't change anything. They suffer from the same problems any other container does.

@SvenDowideit
Contributor

@vmaatta yeah - you get around that atm by creating your data container like:

docker run --name data -v /data busybox chmod 777 /data

and then you need to copy that data to your local machine using another container.
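
To flesh that out (a sketch; the image name and paths are examples, not from the comment above):

# run the app against the data container's volume
docker run --volumes-from data my-app-image

# copy the volume's contents back to the host with a throwaway container
docker run --rm --volumes-from data -v $(pwd):/backup busybox cp -a /data /backup/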

I use samba containers for a reason :)

@ababushkin

Has anyone worked out a simple - StackOverflow style - solution to this problem yet?

I've tried all the conventional solutions, such as:

  1. Creating and configuring the permissions script (as per @motin's comment)
  2. Manually running usermod -u 1000 mysql in a custom Dockerfile (that inherits from this one)

The only 'feasible' solution I see is a custom project created by @dgraziotin, which deviates from the main MySQL / MariaDB docker images (https://github.com/dgraziotin/osx-docker-mysql). This hardly seems like an optimal solution, especially if Docker is to get even more rapid adoption throughout the community.
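
For reference, option 2 above amounts to something like this (a sketch; the base image tag is an assumption):

FROM mysql:5.6
# remap the mysql user to UID 1000 so it matches the owner of the vboxsf mount;
# note that files inside the image still owned by the old UID are not re-chowned
RUN usermod -u 1000 mysql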

@ernestom

ernestom commented Sep 2, 2016

@ababushkin using docker-machine-nfs worked for me.
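
For anyone else trying it, the basic invocation is roughly this (a sketch; "default" is whatever your docker-machine VM is named, and the available flags vary by version):

$ docker-machine-nfs default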

@yosifkit
Contributor

yosifkit commented Sep 2, 2016

With docker-library/mysql#161 you should be able to run mysql as the owner of the directory in question:

docker run -d -e MYSQL_ROOT_PASSWORD=foobar1234 --user 1000:50 -v /Users/my-user/mysql-data/:/var/lib/mysql/ mysql:5.7

This will fix the permissions problem, but I cannot guarantee that mysql will always run on a VirtualBox Shared Folder. MongoDB, for example, cannot run on the shared folder, since the file system does not support everything it needs.
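
To find the numeric owner to pass to --user, you can check the directory from inside the VM (a sketch; docker-machine users would use docker-machine ssh instead):

$ boot2docker ssh ls -ldn /Users/my-user/mysql-data/
# the numeric owner and group shown (e.g. 1000 and 50) are the values to pass to --user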

@motin

motin commented Sep 5, 2016

@ababushkin I have read reports from users of native Docker for Mac that they no longer need any workaround. I created the permissions script to make it work on Docker Toolbox.

@dgraziotin

@ababushkin I confirm that with Docker v1.12.0 for Mac, there is no need to use my ugly workaround anymore :-) @motin

@ababushkin

@dgraziotin @motin Ouch, looks like I'm still running Docker v1.11.0.

I'll upgrade and give it a whirl.

@yosifkit thanks for the tip!

@ababushkin

ababushkin commented Sep 5, 2016

@ernestom did you notice any performance improvement for disk sync when using that solution? Symfony runs really poorly for me at the moment. I have the same issue when using Vagrant and VirtualBox shared volumes and worked around it by using an NFS mount.

Update: I just noticed that the new version of docker has its own VM and a new OSX dedicated file system layer. I'll try this out to see if there are still performance issues :)

@ernestom

ernestom commented Sep 5, 2016

@ababushkin I didn't notice any significant impact in performance with NFS for Docker, and I've been using it for years on my vagrant/vbox projects without issues.

@krasilich

@ababushkin I have been using Docker for Mac for a couple of months now, and unfortunately the performance issues are still there for me. My case: the project lives on the local machine, is mounted inside a container, and runs inside the container. For example, with a Symfony project I run some heavy, high-I/O app/console command and wait tens of minutes or even hours for it to complete, instead of up to 10 minutes when running purely locally.

@ababushkin

@ernestom @krasilich That's a bummer.

@ernestom do you have a boilerplate docker-compose file that's using the NFS solution by any chance?

@dend

dend commented Oct 24, 2016

So just to follow up here, the issue seems to be gone on the Mac if you install the native Docker beta (use the beta channel here). That obviously doesn't help much for automated scenarios, but it works well for local dev.

@ababushkin

@dend Yup, that's correct, permission issues are fixed with Docker for Mac. You don't need to install the beta version; the stable release fixes it as well.

Unfortunately, the performance issues have not been fixed. At this stage I've been getting around them by doing smarter builds of my images (e.g. only bind-mounting volumes that don't need lots of read/write operations from the app).
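
One shape that approach can take (a sketch, not the commenter's actual setup): bake the code into the image and bind-mount only the paths that really need to be shared with the host.

FROM php:7.0-apache
# bake the application into the image instead of bind-mounting the whole source tree
COPY . /var/www/html/
# at run time, bind-mount only what actually needs host access, e.g.:
#   docker run -p 8080:80 -v $(pwd)/uploads:/var/www/html/uploads my-app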

@vschoener

I've just tried Docker for Mac again with a web project, and the ownership issue is still here.
Files are mounted as root:root, and even if I change the ownership to www-data in an entrypoint, new files are still created as the root user.

Any ideas? I tried using USER www-data before starting the apache process, but as you know, www-data doesn't have the privilege to start the apache service. I feel stuck :(

dmitryfleytman pushed a commit to rbld/rebuild that referenced this issue May 29, 2017
Due to a bug in Docker Machine for MAC, only container root
can access files on mounted volumes.

See original issue discussion:
  boot2docker/boot2docker#581

Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
@TyIsI

TyIsI commented Sep 28, 2017

Ran into this issue as well.

@vmaatta was right in his breakdown, and I'd like to add that the "issue" is the -r option in groupadd/useradd versus adding users without that specific option. The -r option creates system users, which by default (set in /etc/adduser.conf) "start" at UID/GID 999, the last value in the range, allocated downwards.

root@9b3da358b593:/# egrep SYSTEM /etc/adduser.conf
# FIRST_SYSTEM_[GU]ID to LAST_SYSTEM_[GU]ID inclusive is the range for UIDs
FIRST_SYSTEM_UID=100
LAST_SYSTEM_UID=999
FIRST_SYSTEM_GID=100
LAST_SYSTEM_GID=999

Regular users, by contrast, are added starting at UID/GID 1000 (matching the UID of the boot2docker docker user), which is fine for a single user. It also means that if another user were added in a container, that user (UID 1001) wouldn't be able to access files through vboxsf.

Right now, I don't know how this could be solved easily, but I'm going to look into this.

Recap: Images create system users that don't match the UID of the docker user in boot2docker.

@TyIsI

TyIsI commented Sep 28, 2017

Example work-around for rabbitmq:

Dockerfile:

FROM rabbitmq:3-management

RUN usermod -u 1000 rabbitmq
RUN groupmod -g 1000 rabbitmq

docker build -t rabbitmq-test .

docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 -v `pwd`/data/rabbitmq/:/var/lib/rabbitmq rabbitmq-test

@yosifkit
Contributor

@TyIsI, many of the images provided by Docker Official Images (like rabbitmq) were modified to allow running as a different user so that you would not need to create or modify the user in the image. See docker-library/rabbitmq#60 and the other PRs linked from there. What this means is that in most instances when using boot2docker on a Mac, you can do something like the following:

$ docker run -d -v /Users/myuser/rabbitdir/:/var/lib/rabbitmq/ --user 1000:50 rabbitmq:3-management

Some notable exceptions that don't work with the VirtualBox shared folder are MongoDB (docker-library/mongo#74 (comment)) and MariaDB 10.1 (docker-library/percona#42 (comment)).
