"chown: operation not permitted" error mounting host volume for Postgres. #31

Closed
dradtke opened this Issue Jun 23, 2015 · 15 comments

dradtke commented Jun 23, 2015

We have a Docker project that uses a Postgres container (postgres:9.3 image) with /var/lib/postgresql/data mounted to the local filesystem, and it works fine on Linux, but errors out on Mac with "chown: operation not permitted". boot2docker was running into permissions issues there as well, which is why we first tried dinghy, though boot2docker's errors were more confusing.
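For context, a minimal compose snippet along these lines reproduces the error (service name and host path are illustrative, not from the original project):

```yaml
# docker-compose.yml (v1 syntax, current at the time of this thread)
db:
  image: postgres:9.3
  volumes:
    # Host-mounted data directory; on dinghy's NFS share the image's
    # entrypoint fails here with "chown: operation not permitted"
    - ./pgdata:/var/lib/postgresql/data
```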

Owner

codekitchen commented Jun 23, 2015

Yeah, the official postgres container refuses to start up unless its data directory is owned by the internal container postgres uid. Originally dinghy supported this, but it caused quite a few other problems, so I ended up enabling NFS user squashing as the "lesser of two evils" solution. See the discussion in #15 and d1cf4d9
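For reference, NFS user squashing is configured in the exports file; on OS X a line along these lines maps every client user to the host user (values are illustrative; the exact line dinghy generates is in the referenced commit):

```
# /etc/exports on the Mac host (illustrative UID/GID and VM address).
# -mapall squashes all client UIDs/GIDs to the host user, which is why
# chown inside the container cannot take effect on the share.
/Users -alldirs -mapall=501:20 <vm-ip>
```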

I still wouldn't be opposed to making this a configuration option, so people can disable user squashing if they prefer that behavior. I'd be happy to accept a PR if you want to pursue that, see the referenced commit.

Internally at my company, though, we ended up deciding not to store our postgres volumes on the host OSX volume anyway. We just let the db disappear if we delete the container, and focus on having a solid dev bootstrapping script to populate the db with data.

dradtke commented Jun 25, 2015

I started a recent project using the bootstrapping-script technique, but that won't be an option for much longer: we need to let the client start doing data entry into that database, and once he's started, tearing it down isn't an option. Since only one person on our team uses a Mac, his workaround for the time being is to run Ubuntu in a VM, but I may investigate a little further when I get a chance.

Thanks for the input. I'll take a look at that discussion to see what I can learn from it.

Owner

codekitchen commented Jun 25, 2015

Another option would be to put the data on a docker volume mounted to a path in the dinghy linux VM, rather than on the host-mounted NFS volume, but I've never really explored that.
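A sketch of that approach (untested, service name illustrative): declare the data path as a plain container volume with no host binding, so Docker auto-creates it on the dinghy VM's filesystem instead of the NFS share:

```yaml
db:
  image: postgres:9.3
  volumes:
    # No host path on the left side: Docker creates this volume inside
    # the VM, so the postgres entrypoint's chown succeeds and the data
    # survives container removal (until the volume itself is deleted)
    - /var/lib/postgresql/data
```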

stevevega commented Jul 17, 2015

The official mongo and mysql images also use chown, so they don't work with dinghy out of the box either.

Contributor

rosskevin commented Sep 23, 2015

I just bumped into this with a mysql mount. I don't need the data to stick around, but I am seeing degraded performance in our CI build (requests to mysql take about 2x as long on average). I was wondering if a host volume mount would make a difference, but this doesn't look like an easy experiment...

vultron81 commented Sep 23, 2015

+1 for the config option

Owner

codekitchen commented Sep 25, 2015

2x compared to what baseline? Native on the host?

Have you tried specifying it as VM-based volume, rather than as part of the docker layered filesystem? I'd be surprised if NFS is faster than that, though I haven't benchmarked it.

Contributor

rosskevin commented Sep 25, 2015

2x vs native host.

I finally figured out that COPYing the project files (and the log dirs that get written to) into the image in the Dockerfile got us on par with the host (about 10% slower). So really, if you have a lot of I/O, I've learned to just go ahead and copy those target files into the image. I'm now working on extracting results easily, which is the only pain point. Using a host volume still has the problem on ubuntu of files being written as the root user, so that route to extraction is about as easy/portable as anything I've run across.
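The COPY approach looks roughly like this (base image and paths are illustrative, not from rosskevin's project):

```dockerfile
FROM ruby:2.2
WORKDIR /project
# Bake the source and writable log dirs into the image instead of
# bind-mounting them over NFS; I/O then hits the container filesystem
# rather than the networked share
COPY . /project
RUN mkdir -p log tmp
```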

Owner

codekitchen commented Sep 25, 2015

Yep. You'll get even better performance by using a docker volume on the VM, rather than writing logs/data directly to the container.

Contributor

rosskevin commented Sep 25, 2015

When you say docker volume on the VM, what do you mean?

I'm currently using a busybox container volume for gem caching, i.e. `docker create -v #{@gems_volume_path} --name #{@gems_volume_name} busybox`.

I was using the compose volume `.:/target` and, as a host mount, it was giving me bad performance.

You mean something different, correct? Interested to hear, as I need this to be as fast as possible, but hopefully portable. We run our dev env on OSX and our CI env on ubuntu.

Are you referring to something like this?
https://rubyplus.com/articles/2431

Owner

codekitchen commented Sep 26, 2015

Well, let me back up a bit. Are you using the official mysql image for this? If so, it's already a volume on the VM, due to the VOLUME definition in the Dockerfile: https://github.com/docker-library/mysql/blob/5836bc9af9deb67b68c32bebad09a0f7513da36e/5.6/Dockerfile#L40

Specifying a volume like that, or in the docker create params, without linking it to a path on the NFS mount, will auto-create a volume on the VM's filesystem. This gives maximum performance: NFS has some overhead due to being networked, and writing to the container's root filesystem has overhead due to the layered nature of that filesystem.
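Concretely, the relevant line in the linked mysql Dockerfile is just:

```dockerfile
# Any container started from this image gets an auto-created volume on
# the VM at this path, unless the user binds a host directory over it
VOLUME /var/lib/mysql
```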

Contributor

rosskevin commented Sep 28, 2015

I'm using the mysql image for this, but I found out that mysql wasn't the slowdown; it was my web container, which was mounting `.:/project` via compose. Adding the source code in the Dockerfile via COPY put it on the image, and it was as fast as expected, though the code sticks around. I'm using what I think is the VOLUME equivalent for gems; I'll investigate that further. Thanks for your help. If you're curious about the rails front, I'm tracking this here.

Owner

codekitchen commented Jan 4, 2016

I'm going to close this as "won't fix", though I look forward to docker adding user mapping support in the future.

snario commented Apr 19, 2016

@codekitchen is the configuration option still possible?

elmar-hinz commented Apr 20, 2017

My conclusion is not to use Docker containers for the DB at all.

  • Storing data inside containers is ugly; they are throw-away programs by concept.
  • Mounting the DB data directory into containers either breaks the DB on crashes or runs into too many access-rights issues.