File access in mounted volumes extremely slow #77

schmunk42 opened this Issue Aug 2, 2016 · 125 comments


schmunk42 commented Aug 2, 2016 edited

continued from

Expected behavior

File-system performance in the container not significantly slower than on the host or docker-machine.

Actual behavior

Running a composer (PHP) update in a container takes much longer than on the host. In the container we usually hit timeouts when updating huge git repos like twbs/bootstrap.

Information & Steps to reproduce the behavior

See link above

Vanuan commented Sep 4, 2016 edited

Same for npm install, bundle install, browserify, etc.



Ok, that's why a Symfony application is very slow in a Docker container!
I load a page in ~ 150 ms when I'm running my local Symfony server (on my host) VS ~ 2000 ms when the config files have been serialized in the cache directory within a Docker container.
It seems the I/O in the vendor and cache/logs directories is very slow...

lfv89 commented Sep 6, 2016 edited

Same here with rails and its friends (rails, rake, rspec...)

Please let me know what information I can provide to help you guys out.

mduller commented Sep 7, 2016

Same here as well. We're looking into using IntelliJ IDEA and git on the Mac and compiling & running our product in a container, from the shared filesystem. Development cycles are considerably longer and our devs not happy with the proposed switch. Currently evaluating workarounds to still enable the switch to Docker and in particular Docker for Mac.


For those who are looking for a workaround and didn't read the whole thread, here is the solution:


Same with PHP (Symfony, Nette, Composer deps...). I can confirm it is a usable workaround.

iwaffles commented Sep 8, 2016

We're experiencing this as well. We have our API dockerized and sometimes endpoints time out on our local machines (only in docker for mac). We're using Rails 5.


In my team, some developers use a Mac, others use Linux... Docker sync is not the solution for me because the project configuration will not be the same for all environments. I hope that the OSXFS issues will be fixed quickly.


@MartialGeek Off topic:
But you should create a Makefile to handle the environment properly.


@wadjeroudi Thank you, I will read that ASAP ;)


Same issue here, it takes 30+ seconds to run my babel build using a mounted volume vs. 5 seconds using the container's fs.


👍 Same issue with Git in Docker, very slow! Unusable compared to Docker Machine (with NFS).
Docker for Mac was released too fast! It needs more review!


Whole team is having issues with Magento, 3 minute page loads vs 8 seconds using docker-sync

BenMcH commented Sep 15, 2016

I'm also experiencing this issue. Hitting the cache takes no less than 1 second for each file.

@aleksandra-tarkowska aleksandra-tarkowska referenced this issue in openmicroscopy/devspace Sep 15, 2016

deploy with ansible #50

apahne commented Sep 17, 2016

Will this issue eventually be addressed?

Vanuan commented Sep 17, 2016

Here's the latest reply from Docker Team:

It's been quite a while since then. We all hope that they have something new to share.


Just checked on the latest Docker for Mac update: Version 1.12.1 (build: 12133)

It still seems to be an issue.

so0k commented Sep 19, 2016

Latest posts on the alternatives are interesting - - which points to (minimal changes to compose file for developer allow a significant speed improvement for mounted source files)


Yeah, I've gone the Unison approach (also mentioned a couple of times in the original thread): see here

using this container:

if anyone needs an example to copy and wants to use Unison too.

@justincormack justincormack referenced this issue in docker/docker Sep 21, 2016

Current RC very low IO Speed? #24316


Same issue with Magento. Major performance issues. We are moving to Unison now as well.

BenMcH commented Sep 22, 2016

I just got around to testing the bg-sync workaround with our in-house rails app and it worked perfectly! Thanks for the suggestion @so0k


@BenMcH Hello, I am trying to use bg-sync for a rails app too, but I have a problem :) Can you point me in the right direction? I opened an issue in the bg-sync repo - cweagans/docker-bg-sync#2

motin commented Sep 28, 2016 edited

Anyone have some current performance stats with the latest stable and beta versions for some reproducible usage scenarios?
It would be great to understand if the performance is 10% of native or closer to 80% or whatever, to understand if a workaround is worth implementing, and if there is any noticeable difference between the latest stable and beta versions.

Vanuan commented Sep 28, 2016

Though it also depends on the network latency, I find npm install a good test:

docker run \
    -v `pwd`:/src \
    -v `pwd`/tmp/node_modules/:/src/node_modules \
    -v `pwd`/tmp/npm_cache/:/root/ \
    -v `pwd`/tmp/tmp/:/tmp/ \
    node:4.4.3-slim \
    sh -c 'cd /src && npm config set loglevel info && npm install react babel webpack && echo "npm install completed..."'

The 2nd run, after caching, doesn't involve much networking so it would reveal the differences in disk usage.
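As a side note, the repeated runs can be timed with a small POSIX-sh helper; `bench` is a hypothetical name, not something used elsewhere in this thread:

```shell
#!/bin/sh
# bench: run a command N times and print each run's wall-clock time,
# so the cached (2nd and later) runs of the npm test above can be
# compared between a bind mount and the container filesystem.
bench() {
  runs=$1; shift
  i=1
  while [ "$i" -le "$runs" ]; do
    start=$(date +%s)
    "$@" > /dev/null 2>&1
    end=$(date +%s)
    echo "run $i: $((end - start))s"
    i=$((i + 1))
  done
}
```

e.g. `bench 3 docker run ...` wrapped around the command above.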


FYI, we stopped using unison or rsync. We decided to switch the Mac back to VirtualBox with
No more performance problems.


@wadjeroudi 👍! This is the only alternative, you're right!

Briones commented Oct 6, 2016

@wadjeroudi does the alternative work with Docker for Mac, or only with the older Docker version?

It's incredible that the performance issue has existed for months now and they can't fix it yet...

o5 commented Oct 6, 2016

@Briones look at README.

Activates NFS for an existing boot2docker box created through Docker Machine.


It works with "both". You just don't use the Docker for Mac engine anymore, only a VM created with docker-machine, and use docker as a client.

lfv89 commented Oct 7, 2016 edited

I know that the docker team already recognizes this issue, since they put it in the "Known Issues" list.

What I still don't know, though, is whether they are already working on this bug or whether they think an official fix is not a priority right now. Does anybody know anything about it? It is a frustrating bug that has been going on for quite a while now; an official word from them would be much appreciated.

Briones commented Oct 7, 2016

@lfv89 Yep, they are supposedly working on that, according to this message in the Docker Forum:
But although this is supposed to be a priority, it does not seem to be one... I'm actually using the old Docker version, not Docker for Mac, with Dinghy in order to improve performance, but it is slow even that way.


This issue has the highest number of +1 reactions of all, which means that this is the most painful one amongst the users of Docker for Mac.

It is surprising not to hear any official comments and to see this discussion still going about half a year after the first release. I remember trying this app shortly after the announcement, around May, and switching back to Docker Toolbox after a few hours. Looks like I'll need to do the same again after my second attempt to give Docker for Mac a chance this morning 😄

Poor Mac users... This makes me wonder what OS docker developers are on themselves.

samoht commented Oct 15, 2016

@kachkaev we are still working on improving the performance of volume sharing. The roadmap is still pretty much the same as the one @dsheets detailed in
The things that you can do to help -- e.g. give us a minimal reproduction test-case -- still hold. Thanks.

bopm commented Oct 15, 2016 edited

@samoht so it took you three months to finally mention the fact that there is a problem and give some explanation in a forum. Not in the product description, not here, not somewhere else where everyone who uses macOS can see it before investing time into trying to adopt Docker for their needs. And it took you another three months to give those users nothing at all, just leaving them in the dark. So currently it's not a question of whether you have some kind of roadmap; actually, it's a problem of trust, because how can users rely on your product after something like that? Your current approach is toxic and you need to accept the fact that your product is not reliable on macOS. Because from responses like yours, it seems like you think that's the users' problem. You probably know how users solve problems that require them to do the product owner's work.

samoht commented Oct 15, 2016 edited

@bopm we are continuously improving the product with the resources that we have. The team here is very aware of that issue, and we are working hard to fix it; but it will not all work magically and we will not have a solution which will work for everyone immediately: we need to prioritise between short-term gain which will solve some problems for some users (what we have done so far) and bigger chunks of work which will have a bigger performance impact -- e.g. write a new kernel module to limit context switches in the VM that we ship -- which is a much bigger project and which should come to completion in the next month or so.

I am very sorry that you are suffering from performance issues. We tried to list in the applications that we know do heavy file-system polling instead of using file-system notifications (inotify). Depending on your use-case, the other great solutions provided by the community might fit you better.

bopm commented Oct 15, 2016 edited

@samoht I think that you are making a really great product. Keep up the good work. But that's not enough in cases like this one, because it's not about the quality of the product but about the quality of user support for it.
It's not the users' problem to figure out that there is a BIG problem hidden in plain sight. It's not the users' responsibility to understand which solution is reliable while you look for a permanent solution, and which is an ugly hack. It's your responsibility to list this problem in and make a curated list of temporary solutions for a while in that section. Because if you don't do that, it creates the feeling that you don't care. That you are not here for us.

samoht commented Oct 15, 2016 edited

Yes, you are perfectly right. Do you think we should keep a more detailed list of issues/workarounds in ? If yes, I will see with our documentation team what's the best way to organise this (btw, the docs are now open-source so anyone in the community can open a PR to add some links to).

Also we are working on trying to improve the user support for the product, so thanks for your feedback!

bopm commented Oct 15, 2016

Speaking about this list in the documentation: a typical user will never reach those docs on his own, while they are important for all users. You are providing users with an application. It needs to mention that list on the first run.

And it's definitely good to make it community driven. But it's your team who needs to provide the expert review for those things.

But generally, be here for us. Six months of nothing is really ugly.

samoht commented Oct 16, 2016 edited

But generally, be here for us. Six months of nothing is really ugly.

This is not very fair. The GA release of Docker for Mac was announced at the end of July (so 2.5 months ago). Since then, we've continuously shipped improvements to the filesystem sharing feature, although we have always prioritised fixing semantics/consistency problems first, as we know that losing or corrupting data is something that nobody really likes. And we keep improving the performance too. All of this is generally documented in the changelog shown in but also in the app during auto-update.

Also, I would like to clarify that a lot of users report that file-sharing usually works great -- but as people have noticed on this thread, there are some use-cases where osxfs has pathological performance issues. We know about this, and the team here is working very hard on fixing these issues. Having precise benchmarks instead of saying "it's slow" will really help us to help you.

mduller commented Oct 17, 2016

Having precise benchmarks instead of saying "it's slow" will really help us to help you.

I have the following simple experiment to reproduce the issue, showing an almost 18x slower operation on the mounted FS vs. the container's own FS: extracting the Linux kernel sources. It should be straightforward to reproduce by running the commands shown below.
For reference, the experiment shown below was run on a MacBook Pro 13" with 2.9GHz Core i5 and 8GB RAM. 2GB RAM and 2 cores were assigned to Docker for Mac and no other containers were active during the experiment.

michaelsmbp:tmp mduller$ uname -v
Darwin Kernel Version 15.6.0: Mon Aug 29 20:21:34 PDT 2016; root:xnu-3248.60.11~1/RELEASE_X86_64
michaelsmbp:tmp mduller$ docker version
Client:
 Version:      1.12.2
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   bb80604
 Built:        Tue Oct 11 05:27:08 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.2
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   bb80604
 Built:        Tue Oct 11 05:27:08 2016
 OS/Arch:      linux/amd64
 Experimental: true
michaelsmbp:tmp mduller$ ftp
Connected to
220 Welcome to
331 Please specify the password.
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
200 Switching to Binary mode.
250 Directory successfully changed.
250 Directory successfully changed.
250 Directory successfully changed.
250 Directory successfully changed.
local: linux-4.8.2.tar.gz remote: linux-4.8.2.tar.gz
229 Entering Extended Passive Mode (|||30814|).
150 Opening BINARY mode data connection for linux-4.8.2.tar.gz (139976871 bytes).
100% |**********************************************************************|   133 MiB    7.92 MiB/s    00:00 ETA
226 Transfer complete.
139976871 bytes received in 00:16 (7.91 MiB/s)
221 Goodbye.
michaelsmbp:tmp mduller$ docker run -it -v /tmp:/mountedtmp centos:latest bash -l
[root@10a6e17474ed /]# for i in `seq 1 5`; do mkdir /mountedtmp/testextract$i; cd /mountedtmp/testextract$i; time tar xzf /mountedtmp/linux-4.8.2.tar.gz; done

real    3m43.536s
user    0m5.830s
sys 0m16.480s

real    3m32.625s
user    0m6.380s
sys 0m9.370s

real    3m32.240s
user    0m6.090s
sys 0m10.680s

real    3m23.184s
user    0m6.220s
sys 0m10.710s

real    3m32.727s
user    0m6.200s
sys 0m13.200s

### ===> AVG real: 213s

[root@10a6e17474ed testextract5]# for i in `seq 1 5`; do mkdir /tmp/testextract$i; cd /tmp/testextract$i; time tar xzf /mountedtmp/linux-4.8.2.tar.gz; done

real    0m9.426s
user    0m5.040s
sys 0m4.870s

real    0m13.248s
user    0m5.600s
sys 0m5.680s

real    0m14.047s
user    0m5.680s
sys 0m6.520s

real    0m11.732s
user    0m4.830s
sys 0m4.440s

real    0m11.633s
user    0m4.850s
sys 0m4.390s

### ===> AVG real:  12s

Hope this helps with squashing this performance issue. It is the one issue preventing adoption of Docker-based development on all our developers' Macs.

mduller commented Oct 17, 2016

For reference, extracting the archive natively on the Mac on average also takes ~12s real time.
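The AVG lines above can be recomputed mechanically from the raw `time` output; this is a sketch assuming the `real<TAB>3m43.536s` format shown in the transcript, and `avg_real` is a hypothetical helper name:

```shell
#!/bin/sh
# avg_real: average the "real XmY.YYYs" lines printed by the shell's
# time keyword, as in the tar-extraction transcript above.
avg_real() {
  awk '/^real/ { split($2, t, "m"); total += t[1] * 60 + t[2]; n++ }
       END { printf "avg real: %.0fs over %d runs\n", total / n, n }'
}
```

Piping the five mounted-volume timings through it prints `avg real: 213s over 5 runs`, matching the AVG noted above.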



Test script

docker pull bwits/docker-git-alpine

time git clone clone-osx-native
time docker run --rm bwits/docker-git-alpine clone /clone-container-no-volume
time docker run --rm -v /git bwits/docker-git-alpine clone clone-container-data-volume
time docker run --rm -v $(pwd):/git bwits/docker-git-alpine clone clone-container-host-volume

Docker for Mac

tobias in ~/Webserver/TESTING/osxfs-benchmark-r1 λ time git clone clone-osx-native
Klone nach 'clone-osx-native' ...
git clone clone-osx-native  5,49s user 1,70s system 43% cpu 16,659 total

tobias in ~/Webserver/TESTING/osxfs-benchmark-r1 λ time docker run --rm bwits/docker-git-alpine clone /clone-container-no-volume
Cloning into '/clone-container-no-volume'...
docker run --rm bwits/docker-git-alpine clone  /clone-container-no-volume  0,01s user 0,01s system 0% cpu 17,804 total

tobias in ~/Webserver/TESTING/osxfs-benchmark-r1 λ time docker run --rm -v /git bwits/docker-git-alpine clone clone-container-data-volume
Cloning into 'clone-container-data-volume'...
docker run --rm -v /git bwits/docker-git-alpine clone    0,01s user 0,01s system 0% cpu 17,455 total

tobias in ~/Webserver/TESTING/osxfs-benchmark-r1 λ time docker run --rm -v $(pwd):/git bwits/docker-git-alpine clone clone-container-host-volume
Cloning into 'clone-container-host-volume'...
docker run --rm -v $(pwd):/git bwits/docker-git-alpine clone    0,01s user 0,01s system 0% cpu 54,244 total
tobias in ~/Webserver/TESTING/osxfs-benchmark-r1 λ

VirtualBox docker-machine

tobias in ~/Webserver/TESTING/vbox-benchmark-2 λ time git clone clone-osx-native
Klone nach 'clone-osx-native' ...
git clone clone-osx-native  5,62s user 1,84s system 46% cpu 16,153 total

tobias in ~/Webserver/TESTING/vbox-benchmark-2 λ time docker run --rm bwits/docker-git-alpine clone clone-container-no-volume
Cloning into 'clone-container-no-volume'...
docker run --rm bwits/docker-git-alpine clone  clone-container-no-volume  0,04s user 0,02s system 0% cpu 17,963 total

tobias in ~/Webserver/TESTING/vbox-benchmark-2 λ time docker run --rm -v /git bwits/docker-git-alpine clone clone-container-data-volume
Cloning into 'clone-container-data-volume'...
docker run --rm -v /git bwits/docker-git-alpine clone    0,03s user 0,01s system 0% cpu 20,598 total

tobias in ~/Webserver/TESTING/vbox-benchmark-2 λ time docker run --rm -v $(pwd):/git bwits/docker-git-alpine clone clone-container-host-volume
Cloning into 'clone-container-host-volume'...
docker run --rm -v $(pwd):/git bwits/docker-git-alpine clone    0,03s user 0,01s system 0% cpu 48,817 total

@schmunk42 explain your benchmarks please. I don't get the difference.


@wadjeroudi For the last section "Cloning into 'clone-container-host-volume'..." under both options - the script does a lot of git operations on a host volume, which is about 3x slower than the other solutions.

The is a very simple benchmark with just one git clone operation.


Is this dup of #668?

cilefen commented Oct 20, 2016

Well, this one was open first...


;-) There has been a working fix in there for a while. No need for 3rd-party solutions (at least, for many of us it seems).

cilefen commented Oct 20, 2016 edited

#668 seems to work for some reported use cases but check out #77 (comment) and #77 (comment).

(edit) Is #668 about mounted data volumes?

samoht commented Oct 20, 2016 edited

#668 is about I/O performance of the container filesystem (e.g. not using -v). This issue (#77) is about I/O performance of shared mounts (e.g. using -v). Please try to not mix-up threads.

Thanks @mduller and @schmunk42, I have added your benchmarks to our suite, we will try to report here when we make progress on the performance of your use-cases.

schmunk42 commented Oct 20, 2016 edited

@samoht I dug a bit more and found something interesting... benchmarks could be improved with it.

When seeing the very slow performance with composer, there's an internal call to git remote update --prune origin.

Running a benchmark with the above command on a fresh repo makes almost no difference - that's why it's hard to create a benchmark. A fresh repo is about 100 MB.
But running this from my day-to-day clone in the cache, which is about 500 MB, shows the following:

in Container

$ time docker run --rm -v ~/.composer/cache/vcs/ bwits/docker-git-alpine remote update --prune origin
Fetching origin
docker run --rm -v  bwits/docker-git-alpine remote update --prune origin  0,01s user 0,01s system 0% cpu 2:17,89 total

on Host

$ time git -C ~/.composer/cache/vcs/ remote update --prune origin 
Fordere an von origin
git -C ~/.composer/cache/vcs/ remote     0,56s user 0,76s system 35% cpu 3,747 total

one more time in Container

$ time docker run --rm -v ~/.composer/cache/vcs/ bwits/docker-git-alpine remote update --prune origin
Fetching origin
docker run --rm -v  bwits/docker-git-alpine remote update --prune origin  0,01s user 0,01s system 0% cpu 2:32,77 total

💥 In the container this is about 40x slower: 2 minutes and 20 seconds vs. 3.7 seconds.

The main difference is the huge number of .git/objects and pack files in the large repo.

Just for debugging - I tried various garbage collection and pruning for the git repo, but that did not really help.
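One way to see how much the object store contributes: every file under .git/objects means another stat/open round-trip over the shared mount, so counting them gives a rough proxy for the overhead. A sketch, where `object_count` is a hypothetical helper run against a repo root:

```shell
#!/bin/sh
# object_count: count the files under .git/objects -- every loose
# object or pack file is a candidate for a slow round-trip over osxfs.
object_count() {
  find "$1/.git/objects" -type f 2>/dev/null | wc -l
}
```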


Link to a zip of my 500 MB repo:

motin commented Oct 20, 2016 edited

Just a note here on an important use-case that is much slower with osxfs than previously with Docker Machine: running a database service which has its data files stored in a host volume for persistence reasons.

I don't have reproducible benchmarks yet, but over the last weeks I have noticed a general slowdown in database operations; especially dumping and loading data takes much longer than before (2-5x slower), making local development slower.

Journerist commented Oct 20, 2016 edited

same issue here...

this issue blocks my current target to dockerize every service. I need to synchronise a big git repository that will also contain *.class files that will change frequently while working. These files should be available on my

  • application docker image
  • containerised eclipse application
  • host system

There will be other docker images that require access to these files.

Right now there is a compose file that contains container definitions with volume mount definitions.

Building the whole project takes about 45 seconds on my host system (i7 at 4ghz). In the container it takes more than 5 minutes.

I will try to use docker toolbox as a workaround.

Performance results for:

docker run --rm -it -v $(pwd):$(pwd) -w $(pwd) alpine /bin/sh
time dd if=/dev/zero of=speedtest bs=1024 count=100000

Docker for Mac:
~20 seconds

Docker Toolbox on Mac:
~4 seconds

Docker for Windows:
~0.3 seconds

pretty bad
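The same `dd` workload can be parameterised by target directory to make the comparison repeatable; `write_bench` is a hypothetical helper, and the workload matches the command above:

```shell
#!/bin/sh
# write_bench: write ~100 MB in 1 KiB blocks into the given directory,
# print dd's summary line, and remove the test file again.
write_bench() {
  target=$1
  dd if=/dev/zero of="$target/speedtest" bs=1024 count=100000 2>&1 | tail -n 1
  rm -f "$target/speedtest"
}

# Compare e.g.:
#   write_bench /tmp          # container filesystem
#   write_bench /mountedtmp   # osxfs bind mount
```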


This is a fundamental performance problem with osxfs, full stop. As others have noted, anyone who is attempting to use DfM for a development environment of any significant size must be hitting this -- I can't imagine how they are not.

I encountered similar slowness with docker-machine and VirtualBox via vboxfs. It looked like a show-stopper until I found docker-machine-nfs. Solved. Then, since DfM was "the future", I began testing it, but it was even worse. I thought we'd fall back to docker-machine until I found d4m-nfs. Again, problem solved.

Do you see the pattern?

NFS is orders of magnitude faster for sharing files between the Mac and either hosted VM solution. I'm not an OS or FS engineer -- there may be essential functionality in vboxfs and osxfs required by other use cases. But we're talking about at least an order of magnitude, and even more in degenerate cases, of performance impact.

Kudos to @samoht for your recent diligence in responding, and from your comments here it's clear this is important to the team.

On the other hand, this is a long standing issue. This thread was opened 3/30, and I still have no idea what the prognosis is. Perhaps it's merely communication or maybe I haven't found the right place to look, so I'll just leave with a few questions:

  1. Does the DfM team understand why the NFS performance is extraordinarily better than osxfs?
  2. Is there any reason that NFS can't, or should not, be used instead of osxfs?
  3. Would it be possible to provide NFS as an alternative solution to osxfs so those of us who require it can simply choose/configure it from within DfM instead of well-written, but ultimately hacky, scripting solutions like those above?
  4. You (@samoht) stated, "I would like to clarify that a lots of users report that file-sharing usually works great...". Are any of those known cases using DfM as a solution for running development environments with FS mappings to share source code between containers and OS X?
  5. Is there an estimate when this particular issue/use-case will be fixed?

If the official DfM team won't be able to address this issue in the very near future I can see a case for adding DfM to docker-machine-nfs. What say you? Will you be able to provide any joy on this, or do we need to stick with the hacked NFS usage?

motin commented Oct 27, 2016

Thank you @goneflyin so much for pointing us to d4m-nfs!

This is incredible!

Benchmark - dd if=/dev/zero of=speedtest bs=1024 count=100000

* On host
100000+0 records in
100000+0 records out
102400000 bytes transferred in 0.307839 secs (332641246 bytes/sec)

real    0m0.327s
user    0m0.020s
sys 0m0.296s

* Within Docker for Mac without a host volume
100000+0 records in
100000+0 records out
real    0m 2.06s
user    0m 0.01s
sys 0m 2.04s

* Within Docker for Mac within a host volume
100000+0 records in
100000+0 records out
real    0m 31.69s
user    0m 0.11s
sys 0m 3.66s

* Within Docker for Mac within a host volume using NFS
100000+0 records in
100000+0 records out
real    0m 1.73s
user    0m 0.00s
sys 0m 0.60s

NFS is 18x faster than osxfs - and even faster than (or at least on par with) not using a host volume at all, which blows the file-synchronization workarounds out of consideration!

This gist currently includes the benchmark from @Journerist; feel free to add more if necessary, though the above already tells the story imo.

To reproduce this benchmark on your own machine, step into a temporary folder and run:

git clone
git clone bench-77

Ping me when D4M uses NFS by default, or when osxfs is within the same order of magnitude as NFS; then I'm finally going back to full-speed local development. :)

@motin motin referenced this issue in IFSight/d4m-nfs Oct 27, 2016

Include benchmarks in readme #7

ilg-ul commented Oct 27, 2016

Although I did not run any benchmarks, I also confirm that running long builds inside a mounted folder takes ages to complete. The builds I'm talking about are GNU ARM Eclipse OpenOCD and GNU ARM Eclipse QEMU.

Vanuan commented Oct 27, 2016 edited

While NFS is a lot faster, I think we should not forget why osxfs was introduced:

  • graceful handling of file ownership
  • supporting filesystem events

It would be great if NFS supported fsevents, but it doesn't.

motin commented Oct 28, 2016 edited

@Vanuan Yes, something like the following should be added to the Docker for Mac readme:

Pros and cons osxfs vs NFS:

Pros osxfs:

  • graceful handling of file ownership
  • supporting filesystem events

Cons osxfs:

  • 10-20x slower than NFS

This way developers can choose based on what is most important for them. For me using persistent databases and sharing my application's source code, speed is way more important than ownership handling and fsevents, but there are probably others who prioritize differently.

Including a migration guide for people coming from docker-machine, where NFS is recommended (since docker-machine users have survived with vboxfs, they will probably prefer NFS over osxfs), or mentioning these pros and cons on , would also be a good idea.


@motin @Vanuan

I wanted to respond to the valid concerns raised by @Vanuan myself, but @motin said it much more clearly than I could have!

You're absolutely right -- and I'm glad you mentioned those two specific features, fsevents and file ownership. For some use cases, these are of primary concern. For me and @motin (and many others from what I've gathered), they aren't particularly relevant when sharing large sets of files with the host for development purposes.

In addition to itemizing the pros and cons in the appropriate place on the Docker For Mac site -- and the docker toolbox page seems a likely candidate -- I'd like to see some of the obvious use cases that lead one to one choice or the other. For example:

  • For Drupal or Rails development (where the entire source tree gets slurped up on most code executions) NFS is likely the better choice due to speed
  • Running a responsive test runner (e.g. guard for Ruby) likely requires osxfs due to its reliance on fsevents.

I'm sure there are others, but I'm not entirely clear on what they are. For my own, the speed seemed such an obvious barrier I couldn't imagine other use cases that wouldn't be affected by it. Nonetheless, unless/until DfM addresses the speed issue, I think adding support for using NFS as an official alternative is still the best option on the table.


@goneflyin You can already use nfs mounts using docker volume, as the default local driver supports nfs mounts, eg docker volume create --driver local --opt type=nfs --opt o=addr=,rw --opt device=:/path/to/dir --name foo (see for details). Note you currently need to supply the IP address, and you may need o=username=foo,password=bar,addr=.... You do need to set these up per project, or just create one volume for your whole home directory.
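For docker-compose users, the same local-driver NFS options can be expressed in the compose file. This is a sketch of such a config; the `192.168.99.1` address and `/Users/me/project` export path are placeholders you would replace with your own host address and exported directory:

```yaml
version: "2"
services:
  web:
    image: alpine
    command: ls /src
    volumes:
      - code:/src
volumes:
  code:
    driver: local                  # default driver, supports type=nfs
    driver_opts:
      type: nfs
      o: "addr=192.168.99.1,rw"    # placeholder host address
      device: ":/Users/me/project" # placeholder exported path
```

As @justincormack notes, the volume has to be set up per project (or once for a whole directory tree) and the address must be supplied explicitly.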

WolfgangFahl commented Oct 30, 2016 edited

The design decisions here do not seem to fit the needs of the docker users.
Having a native Docker on OSX sounds like a great idea. I'd have expected not only osxfs but also NFS and CIFS as options for mounting without much hassle. For CIFS e.g. I found which states:
Yes, Docker is preventing you from mounting a remote volume inside the container as a security measure.

I find this very strange - the host machine is my "own house"; why would someone want to give me extra protection for a door which used to be under my own control?
Please make things simpler. I am pretty sure simpler will also be faster.
And please take this seriously - there is no milestone, no project, no assignee on this bug. That looks troubling.


@WolfgangFahl that post tells you exactly how to do the mount; you add the extra flags to docker run. You can mount NFS and CIFS from containers or volume drivers as you wish. If you wish to disable all the security protections on the software on your machine, you may do that.

WolfgangFahl commented Oct 30, 2016 edited

Thanks justin. I figured out how to use CIFS. The performance ratio in my case is:

  • 20 MBytes/sec via CIFS for a simple copy of a multi-mbyte file
  • 60 MBytes/sec via osxfs for a simple copy of a multi-mbyte file

restoring a 280 MByte MySQL Backup:

  • 5 mins 30 secs via CIFS
  • 7 mins 13 secs via osxfs

and this was a CIFS mount to a server on the network and not the local machine!


Thanks @justincormack, that's an approach I wasn't aware of. I did try it, but without success. It's likely due to the fact I already have NFS mounted to the Linux instance in xhyve. I haven't explored it any further as of now.

Since I've already said my piece, I'll be brief: using docker to create NFS volumes on a container-by-container basis is just a workaround for the fact that Docker for Mac's current solution, osxfs, falls short. We already have fairly decent workarounds with the aforementioned and docker-machine-nfs scripts. What we still need is an appropriate solution built into Docker for Mac directly.

It may be difficult, and it may take time - that's fine. We'll be patient, but some more transparency would be really helpful. My original questions above are still unanswered, but basically: Do you know what the problem is with the performance? Can it be fixed? Will it be fixed? If so, will it be soon or are we talking 3-6 months out? If it won't, can the generalized NFS mounting solutions be incorporated as official alternatives within DfM?

Again, thanks very much for the suggestion on docker volumes with nfs! I'll hang on to it and I'm sure it'll be useful for me at some point in the future.


I took a look at the d4m-nfs script; it should be possible to make it into a privileged container that is set to auto-restart every time you start Docker, so its use is pretty transparent.

bramswenson commented Nov 7, 2016 edited

The latest d4m beta seems to be much improved for our use cases (so far). 1.12.3-beta29.3

BenMcH commented Nov 7, 2016

@bramswenson it's still slow for me.

1.12.3-beta29.3 still has this issue.

TomFrost commented Nov 9, 2016

@Vanuan Regarding NFS not forwarding fsevents/inotify events, I created fs_eventbridge as a solution. The included install script is specifically for VMs created with docker-machine, but the app compiles and runs wonderfully on d4m. The sticking point is that there is no known way to auto-start fs_eventbridge when d4m starts up or reboots.

@samoht Boot2docker supported a /var/lib/boot2docker/ file that the VM would run immediately on startup. Moby does not appear to have a similar feature, at least not that I've found. Does something like that exist? If not, I can submit an enhancement ticket for it. That would allow me/the community to get NFS shares and fsevents forwarding started on VM boot, and would make the participants in this issue quite happy until this ticket is fully resolved, one way or the other.

I know the Docker team has invested quite a bit of time in osxfs, but I agree with @goneflyin that it seems senseless not to leverage the performance (and negligible CPU hit compared to osxfs) of NFS, either through an over-the-top solution like fs_eventbridge or by patching additional features into NFS itself, even if it started life as an option alongside osxfs.

lox commented Nov 10, 2016

There have been similar attempts in the past @TomFrost, I think it just ends up being too difficult to do well and it creates complexity for the end user.

Some fiery discussion here guard/listen#258.


@lox Similar attempts at what? If you're referring to event forwarding out of band with NFS, that's working quite nicely right now on top of docker-machine. See DevBox, which sets up and configures a Docker implementation on Mac using NFS and event forwarding with fs_eventbridge.

Your link refers to a TCP broadcast, whereas this is simply a lightweight client-to-server stream. The official implementation could use a UNIX socket, UDP, whatever; the point is that using stock NFS with out-of-band inotify is entirely possible, effective, and far faster than osxfs. And it works today, with excellent benchmarks.

lox commented Nov 10, 2016 edited

Yup, I know what you are talking about @TomFrost. I've implemented the same thing myself a number of ways. It just ends up being an extra piece of complexity with strange edge cases of its own (e.g. how do you handle deletes?).

It's faster than osxfs now, but it's also much more limited in future direction and lacks the ability to handle things like sockets (which are planned and will be huge).

sylus commented Nov 10, 2016 edited

Is it possible at this point in time to at least get an answer to some of @goneflyin's questions?

In particular, has the problem at least been fully identified, and has an approximate timeline for a fix been suggested? As this issue is ~~a year~~ 8-9 months old looking at the forums, it would be nice to see some concrete plans to address it.

If there are no concrete plans at this point, then I agree that, at a minimum, to make Docker for Mac usable as a development environment we should look more seriously at NFS for the time being.

samoht commented Nov 10, 2016 edited

@sylus how can this issue be over a year old when we released our first stable version 3 months ago? :-)

The team is working on improving the performance of osxfs: Beta30 (released today) contains some improvements to the latency of osxfs which will improve some use-cases (but not all; we are aware of that).

The implementation of a new kernel module in the VM to bypass kernel/userspace context switches is well on its way, and this will speed up the data path quite a lot. We still hope to ship it by the end of the month, but as we always prioritise fixing data integrity issues first, this could slip if we get new bug reports.

All of the reproducible benchmarks published on that thread are taken into account; thanks for providing them.


@samoht Is there somewhere we can find the up-to-date release notes and release binaries? I thought I remembered them being tracked in this repo, but I could be mistaken. Thanks!

not found here:

samoht commented Nov 10, 2016

@bramswenson indeed, the release docs for Beta30 seem not to have been updated yet (they will be shortly). The release notes are:

* New
    - Better support for Split DNS VPN configurations

* Upgrades
    - Docker Compose 1.9.0-rc4
    - Linux kernel 4.4.30

* Bug fixes and minor changes
    - HyperKit: code cleanup and minor fixes (#5688)
    - VPNKit: improvements to DNS handling (#5750)
    - Improvements to Logging and Diagnostics
    - osxfs: switch to libev/kqueue to improve latency (#5629)

And as usual, the binary is available if you click on "Check for Updates" in your 🐳 menu, otherwise at

sylus commented Nov 10, 2016 edited

@samoht, I was referring to the forum issue @ which was created on March 1st. Although that issue could possibly have been repurposed, and if so my apologies! Additionally, March 1st would mean 8-9 months, so either way my count was off and I have corrected my previous statement! Apologies again! :)

I really appreciate you taking the time to answer and it is indeed a great relief to know that this is still on the roadmap and being actively worked on. Thanks again for taking the time and I hope over the course of the next few months to be able to contribute back to this awesome project!

eduwass commented Nov 11, 2016

Does anyone have benchmarks of the improvements with the latest beta release?


@eduwass Vagrant is still faster:
8-10 vs 1-2 seconds (Symfony2 dev env)

eduwass commented Nov 11, 2016

Thx @beshkenadze, I also found a couple of other answers from users in the forum thread.

Looks like I'm also sticking with d4m-nfs for now, still wishing that osxfs gets a notable performance update.

kachkaev commented Nov 11, 2016 edited

@beshkenadze I was able to make Symfony projects acceptably fast in dev using the Docker app for OSX. The trick was mounting the project's folder the following way:

# docker-compose.yml
version: "2"
services:
  web:
    build: .
    volumes:
      - ./:/var/www/symfony
      - web_var-cache:/var/www/symfony/var/cache
      # - web_var-logs:/var/www/symfony/var/logs
      # - web_var-sessions:/var/www/symfony/var/sessions
      - web_vendor:/var/www/symfony/vendor
      - composer-cache:/var/www/.composer/cache

volumes:
  web_var-cache:
  # web_var-logs:
  # web_var-sessions:
  web_vendor:
  composer-cache:
    external:
      name: composer-cache # shared between multiple dockerized symfony projects

This reduced page load time from 10-12 to 1-3 secs, because most of the file IO (cache + vendor) now happens within virtual volumes. A big drawback of this is that I can no longer explore these two important parts of my project; they are somewhere far away in a directory that's only accessible by root. One can also put logs and sessions into Docker volumes, but I haven't spotted any further speed boost from doing this.

The performance of Docker for Mac is still a big issue for me. CPU fans spin like crazy while Docker is on and I'm working on a Symfony project :'(


I have a multi-container app that I boot up using docker-compose. The bottleneck is the file synchronisation of my working directory, which also contains compiled and generated files after some build steps.

A friend of mine asked me why I don't put everything in one container to avoid file synchronisation. He doesn't understand Docker very well. At first I thought this violates a lot of best practices, but there is an element of truth in his approach. I don't really need these files on my host. I only need them because different containers need a shared directory.

So what about Docker-in-Docker to fix our OS-related issue? I'll give it a try as soon as possible and post some Mac-related benchmarks, but I think this should work.

xero88 commented Nov 14, 2016

@kachkaev when I try your solution I get :

Fatal error: require(): Failed opening required '/var/www/html/app/../vendor/autoload.php' (include_path='.:/usr/local/lib/php') in /var/www/html/app/autoload.php on line 7


@xero88 You need to run composer install in the container to "copy/download" the files into your host volume. Otherwise it's empty in the container.

xero88 commented Nov 14, 2016

@schmunk42 thank you, it's working now. I went from 14 seconds to 11 seconds with this solution.

Journerist commented Nov 14, 2016 edited

I finally made it, using Docker for Mac:

# run dind and attach
docker run --privileged -d  docker:dind
docker exec -it $(docker ps -q) /bin/sh

# in the dind container: create a directory that will be mounted (this is your work dir, e.g. your git repository)
mkdir testdir
cd testdir

# create a container within the dind container and mount a volume
docker run --rm -it -v `pwd`:`pwd` -w `pwd` alpine  /bin/sh

# run the benchmark
/testdir # time dd if=/dev/zero of=speedtest bs=1024 count=100000
100000+0 records in
100000+0 records out
real    0m 0.21s
user    0m 0.02s
sys 0m 0.19s

I wonder why there isn't simply an integration of the FUSE library. I'd assume that you'd then at least be able to use e.g. an ext2-formatted disk with native performance, since the library should be pretty much compatible with the standard Linux approach. For my use case this would be sufficient.

Any thoughts?

lox commented Nov 20, 2016

Dunno what FUSE API you've been using @WolfgangFahl, but there is nothing simple about it. It's a very broad API, and trying to map it to Mac filesystems isn't easy. Even if you tried to map it to an ext2-formatted disk, you are still going through several layers of either virtualization or networked filesystem to get your data into Docker. This is a non-trivial problem on a lot of levels.


@lox - I had hoped that there could be some sort of direct ext2/ext2 mapping, so that there would be a kernel-level driver inside the Linux that Docker uses, avoiding all the layers, improving performance, and making things much simpler by going 1:1 between the APIs. Too bad if that's not feasible.

lox commented Nov 20, 2016

There is. Use Linux as your host.

lox commented Nov 20, 2016

I strongly suggest anyone posting in this issue read over

WolfgangFahl commented Nov 20, 2016 edited

My hope was that and are much more helpful, since I think the current approach's complexity is not leading to a satisfying result in quite a few use cases.

@geerlingguy geerlingguy referenced this issue in docksal/docksal Nov 23, 2016

Remove dependency on Virtualbox #21


Does anybody know the status of this issue in the stable/beta release? Thanks


@KravetsAndriy well, I'm using version 1.12.3-beta30.1; speed is improved a little bit.

fozcode commented Nov 30, 2016

To those seeing very slow web page load times in development projects: I just found that most of my slowness was down to a single library (django-mediagenerator) that manages caching of static assets. In development mode it was walking several directories on every single request to look at file modification times and/or calculate a hash of the file contents.

I modified the library to receive filesystem events (via inotify/kqueue) instead of constantly scanning for changes, and a 2-minute web page load time went down to 10 seconds, which I can live with. I know of a number of other packages that perform this kind of crude filesystem scanning in development mode, so I'm hoping this might help someone out.
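As a rough illustration of the event-driven cache idea (this is a made-up Python sketch, not the actual django-mediagenerator patch; class and method names are hypothetical):

```python
import hashlib


class AssetHashCache:
    """Cache content hashes so a dev server doesn't re-hash assets on
    every request; entries are invalidated by filesystem events instead
    of directory scans."""

    def __init__(self):
        self._hashes = {}

    def get_hash(self, path):
        # Hash only on a cache miss; events keep the cache fresh.
        if path not in self._hashes:
            with open(path, 'rb') as f:
                self._hashes[path] = hashlib.md5(f.read()).hexdigest()
        return self._hashes[path]

    def on_fs_event(self, path):
        # Hook this up to an inotify/kqueue/FSEvents watcher callback.
        self._hashes.pop(path, None)
```

The win is that the expensive directory walk disappears from the request path entirely; over osxfs, where every stat() is a round trip, that matters far more than on a native filesystem.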


Too long and too unstable to continue using Docker for Mac (stable or beta).
All my teams will roll back to standard local development.
Docker for Linux is the only way to make it work properly.

o5 commented Dec 1, 2016 edited

@jordscream: Have you heard of Docker Toolbox? My teams (macOS/win/lin developers) use it and it works. Docker for Mac should be the better tool; unfortunately it isn't yet. This issue has been known for quite a while, but I hope it will be fixed.

Luukyb commented Dec 1, 2016

@jordscream @o5 I switched to dinghy, which is based on a VM like Docker Toolbox, but with NFS support. So far I'm very happy with the behaviour and speed, better than Toolbox and Docker for Mac IMO. I would recommend dinghy while the two main issues of Docker for Mac remain (this one, and #371).

o5 commented Dec 1, 2016

@Luukyb I forgot to mention Docker Machine NFS. It's tricky, but definitely better than the local environment.

@whitecolor whitecolor referenced this issue in docker/for-win Dec 3, 2016

Shared Volumes Slow #188


@Luukyb @o5 Thanks for this information.
I used this setup before: docker-machine + docker-nfs, and it worked.

I was very happy when Docker announced Docker for Mac, because docker-machine can be slow sometimes. We are still in VM mode with containers inside (Host + VM + Docker + NFS: 3 layers). Sometimes it crashes, or the docker-machine stops. It can be fine for a simple project, but when you have a complex project with 10 independent containers... it becomes slow and really unstable.

Docker is a powerful tool which helps a lot. I don't deny it, and thanks to Docker it has noticeably changed the way developers and devops work, for the better.

But when you compare before Docker and after Docker, you realize that you lose a lot of time configuring, fixing, and restarting your projects... Eventually you get busy and tired of working like this.
When you roll back to local-host development, it's finally not so bad...

This comment may seem useless, and some people could say: if you are not happy, just leave it. I'm just expressing a general feeling at the big company where I work.

So Docker, may the force be with you. I will rejoin the dark Star when you are really ready!


@Vanuan Vanuan referenced this issue Dec 7, 2016

Sooooo slow... #1018


@hoatle Your comments here and on docker/for-win were removed as they hit my internal spam filter 👾. If you are a user of Docker for Mac or Docker for Windows and have something to add to the discussion then please feel free to comment in a more constructive manner.


Not meant as a competing product, but we too have created a custom VM setup with Vagrant, because neither Docker for Mac, docker-machine, nor the existing NFS addons could solve our problems :(

Vanuan commented Dec 9, 2016

Is there still a plan to open source Docker for Mac?


One more Mac user experiencing slow filesystem problems in development. Any update on this?

dn5 commented Dec 23, 2016

Seriously, can we get an update here? This is getting out of hand.

barat commented Dec 28, 2016

Got my first Mac from the company ... and now I'm sad that this is still in progress ... The situation is even more tragic considering that on Win10 Pro it works OK ... I was surprised that it doesn't here ...
Oh well ... hopefully there is Vagrant; I'll run the Docker project inside it and wait for the fix ...
Fingers crossed that it'll happen soon.


@barat Although this issue definitely needs to be resolved, there are quite a few workarounds that can make DfM a usable dev environment. As I mentioned in this thread in September (#77 (comment)), I've ported my Docker dev env to include a Unison container that syncs the host codebase into the container to negate this issue. Others are using d4m-nfs, and I'm sure a couple of others are mentioned on the original forums thread as well.

As with most open source tech, you've got to take the pros with the cons and decide if it's right for your needs. If I can be of any assistance with using DfM in this way, feel free to ping me directly.

barat commented Dec 28, 2016

@joemewes - thanks for the reply ... My intention wasn't to be offensive ... I have one older project which uses a Makefile to orchestrate Docker containers, and most of the team is using Ubuntu, so instead of making some Mac-only workarounds (plus rewriting everything to docker-compose), I'll just run this Docker project in a Vagrant VM with an NFS share :) Hope someday there will be a "built in" solution :)
BTW, I plan to test d4m-nfs, because it seems to be the most "transparent" and may work with containers like "docker run ... -v pwd:/app:rw" :)


@barat cool cool, no offence spotted at all. Just wanted to make sure you knew there were options. :)


I have this issue with the Docker for Mac beta channel. I installed the beta to get rid of [#189].

But given just HOW extremely slow it is, I have to go back. The 'workaround' from [#668] did not help me.

barat commented Jan 10, 2017

No stable/beta release has a fix for this yet ... What I found is that docker-sync gives the best performance, but it requires an additional docker-sync.yml and a little tuning of your current docker-compose.yml.
If your app doesn't have that many files, then docker-for-mac-nfs with the image-hacking trick is good as well :)
Hope that someday Apple will give us a tool to deal with it, or the Docker team will find a universal workaround. Until that happens, we need to hack a little.
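For the curious, a minimal docker-sync.yml is roughly shaped like this (a sketch from memory; the volume name, paths, and strategy are placeholders, so check the docker-sync docs for the exact options your version supports):

```yaml
# docker-sync.yml — hypothetical minimal config
version: "2"
syncs:
  app-sync:                    # becomes an external volume your compose file mounts
    src: './'
    sync_strategy: 'unison'    # or 'rsync' / 'native_osx' depending on version
    sync_excludes: ['.git', 'vendor']
```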


I'd really like to see this solved within Docker for Mac instead of by a third-party solution.

docteurklein commented Jan 23, 2017 edited

I just noticed that the number of files (recursively) present in a shared (host) volume has a ridiculous impact on performance!

In my example I went from a folder with 86K files to a folder with 80 files, and got a 15x speed improvement.

I'm not sure I saw this comparison in this thread or anywhere else on the interwebs, so I thought it would be worth sharing!

EDIT: it surely has something to do with the fact that most of the 86K files are required by the framework I use (Symfony), which reads a lot of files.
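A quick way to see how the cost scales with file count is to time a stat() walk of the mounted folder from inside the container. This is a hypothetical helper, nothing Symfony-specific:

```python
import os
import time


def time_stat_walk(path):
    """Time a full stat() walk of a tree. Over osxfs every stat() is a
    round trip to the host, so the total cost scales with the number of
    files, which is why trimming 86K files down to 80 helps so much."""
    start = time.time()
    count = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            os.stat(os.path.join(root, name))
            count += 1
    return count, time.time() - start
```

Running it against the same tree on the host and inside the container makes the per-file overhead of the shared mount visible directly.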

cilefen commented Jan 23, 2017

True. As far as I understand it, the problem is all about the number of files.

entwu commented Feb 2, 2017

I'm eagerly waiting for performance fixes. Sadly I can't use any of the workarounds, because I'm building projects for whole teams (mainly Linux machines).
Any approximate release date?

barat commented Feb 2, 2017

docker-sync allows you to keep docker-compose.yml plus e.g. docker-compose.mac.yml / docker-compose.linux.yml for the "hacks". You can then run it like:
docker-compose -f docker-compose.yml -f docker-compose.mac.yml up

tseho commented Feb 2, 2017

@barat I use a similar approach.
We keep the files docker-compose.override.yml.osx.dist and docker-compose.override.yml.unix.dist versioned; it's up to the developer to enable the correct one.

I find it easier since docker-compose will automatically look for docker-compose.override.yml.

And of course, our OSX version includes the configuration for docker-sync.
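As a sketch, the macOS override can swap the slow bind mount for a docker-sync volume (the service and volume names here are hypothetical, not from our actual setup):

```yaml
# docker-compose.override.yml (macOS variant)
version: "2"
services:
  web:
    volumes:
      - app-sync:/var/www/app:nocopy   # replaces the ./:/var/www/app bind mount

volumes:
  app-sync:
    external: true   # created and kept in sync by docker-sync
```

On Linux the .unix.dist override keeps the plain bind mount, so the base compose file stays identical for everyone.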

entwu commented Feb 16, 2017

@barat It's not enough: many of our containers are one-time runs only (situational, like building the app) and they are defined in a Makefile, not in docker-compose, which makes this still unusable for me.


@entwu My team orchestrates docker runs in Makefiles too, because the docker-compose workflow felt more cumbersome than helpful. We ended up moving to DevLab and haven't looked back: scriptable tasks like Makefiles, but automatic dependency linking like Compose. That would let you try out docker-sync for one-off tasks.

But with that said, I'm not a huge fan of the docker-sync solution either, as I still don't like the idea of my Mac-based engineers running different containers for dev and test than Linux-based engineers. An officially supported option to use NFS instead of osxfs -- even if it's off by default and not overly advertised -- would be such a wonderful solution right now. The performance just is not going to be beaten, for folks who don't actively need fsevents propagation.

dnephin commented Feb 16, 2017

Right, docker-compose was never designed to be a build automation tool. It's designed to easily create and teardown isolated environments for development and testing.

dobi is a build automation tool I've been working on. It seems to be similar in spirit to DevLab. It doesn't currently support any optimizations for OSX, but mounts are defined separately from tasks, which I think would make it easy to introduce.

Vanuan commented Feb 16, 2017

docker-compose+bash is fine for development


I have created a workaround with docker-sync for my Symfony project (PHP + NGINX + MySQL) with optimal settings. Before using docker-sync the page took 30 seconds to load; now it's 1 second.

ciekawy commented Feb 22, 2017

I wonder if there is a chance for a docker run -v one-liner equivalent...

entwu commented Feb 22, 2017

@Arkowsky No use for me

genei09 commented Feb 22, 2017

So, not wanting to have additional dependencies installed, I started on a Python script that uses rsync and fswatch/inotify. However, it didn't really work for our needs. Git is very noisy, and the tools used by the developers use the git repo to detect changes, then run targeted unit tests and compilation.

Furthermore, fsevents and inotify aren't the most expedient or consistent things to base operations on. I consistently encountered files being modified but not reported by the system. Where I ended up was running two separate watch/sync operations: one for .git/ directories and one for the actual repository contents. For a native operation of 2 seconds, it took 6 seconds. Under heavy use it was 12 seconds. With native osxfs the operation took 20 seconds. I haven't run the experiment with Docker for Mac 1.13.1 yet, but a similar test was floating between 3x and 4x native.
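The watch-and-sync idea can be sketched as a polling pass that copies only files whose mtime changed since the last pass. This is a crude Python stand-in for the fswatch/inotify + rsync pipeline described above (the function and its arguments are hypothetical, not the actual script):

```python
import os
import shutil


def sync_changed(src_dir, dst_dir, mtimes):
    """One-way sync pass: copy only files whose mtime changed since the
    previous pass, remembering mtimes in the caller-supplied dict.
    A real implementation would trigger passes from fs events and also
    handle deletions, which this sketch deliberately ignores."""
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, src_dir)
            dst = os.path.join(dst_dir, rel)
            mtime = os.path.getmtime(src)
            if mtimes.get(rel) != mtime:
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
                mtimes[rel] = mtime
```

As noted above, the hard parts are exactly what this skips: missed events, deletions, and the noisy .git/ directory.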

@joemewes joemewes referenced this issue in docker/ Feb 22, 2017

Feedback for: docker-for-mac/ #1923

Vanuan commented Feb 22, 2017 edited

Any chance of it being revealed at DockerCon? That would be a nice surprise.
