
Shared Volumes Slow #188

Open
dimasnake opened this issue Nov 1, 2016 · 125 comments

@dimasnake commented Nov 1, 2016

Expected behavior

File access in volumes should be comparable to access times on non-volumes, similar to Linux installations of Docker.

Actual behavior

File access in volumes is many times slower than on non-volumes.

Information

Version: 1.12.3-beta29.2 (8280)
Channel: Beta
Sha1: 902414df0cea7fdc85b87f0077b0106c3af9f64c
Started on: 2016/11/01 21:19:46.408
Resources: C:\Program Files\Docker\Docker\Resources
OS: Windows 10 Pro
Edition: Professional
Id: 1607
Build: 14393
BuildLabName: 14393.351.amd64fre.rs1_release_inmarket.161014-1755

Steps to reproduce the behavior

Get on the commandline of a lightweight docker container

root@a6b2e82c167b:/# dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 0.569183 s, 180 MB/s

and mount a volume:

root@a6b2e82c167b:/var/www# dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 5.11662 s, 20.0 MB/s

That's about 9 times slower.

@friism commented Nov 1, 2016

Thanks for reporting. The volume mounts are implemented using an SMB share mounted over the guest/host network.

Out of curiosity, what's your development use-case that requires greater than 20 MB/s transfer-rate when using volumes?

@dimasnake (author) commented Nov 1, 2016

I use Docker for local web development, with nginx, php-fpm, and mysql containers. Website pages load very slowly, 5-10 seconds.

Is it possible to increase volume speed?

@xdesbieys commented Nov 2, 2016

I have the same problem.

Inside the container:

root@63d3c3f00862:/# dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 0.208075 s, 492 MB/s

Inside the container's shared folder:

root@63d3c3f00862:/var/lib/mysql# dd if=/dev/zero of=test.dat bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 11.572 s, 8.8 MB/s

Docker info:

PS C:\Users\????\Docker> docker info
Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 11
Server Version: 1.12.3
Storage Driver: overlay2
 Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.27-moby
Operating System: Alpine Linux v3.4
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.919 GiB
Name: moby
ID: DBIQ:SFE4:ZUSS:AEDJ:4KJC:ODIJ:W4L7:33D7:QUB7:OE4R:YEIE:UANK
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 41
 Goroutines: 78
 System Time: 2016-11-02T12:20:15.4120124Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
WARNING: No kernel memory limit support
Experimental: true
Insecure Registries:
 127.0.0.0/8

Docker volume inspect:

PS C:\Users\????\Docker> docker volume inspect 7c27e585d8d4f55c34a34f6a47d5c0687f0851fc55765b096183f6ee327ea609
[
    {
        "Name": "7c27e585d8d4f55c34a34f6a47d5c0687f0851fc55765b096183f6ee327ea609",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/7c27e585d8d4f55c34a34f6a47d5c0687f0851fc55765b096183f6ee327ea609/_data",
        "Labels": null,
        "Scope": "local"
    }
]

Docker compose:

version: "2"
services:
    mysql:
        image: mysql:latest
        volumes:
            - ./mysql:/var/lib/mysql
        environment:
            - MYSQL_ROOT_PASSWORD=password
            - MYSQL_DATABASE=database
            - MYSQL_USER=user
            - MYSQL_PASSWORD=password
        ports:
            - "3306:3306"
    php:
        build: ./php
        image: php:fpm
        ports:
            - "9000:9000"
        links:
            - mysql
    nginx:
        build: ./nginx
        image: nginx:latest
        volumes:
            - ./nginx/website.conf:/etc/nginx/conf.d/website.conf
        ports:
            - "80:80"
        links:
            - php
    phpmyadmin:
        image: phpmyadmin/phpmyadmin
        environment:
            - PMA_HOST=mysql
            - MYSQL_USERNAME=user
            - MYSQL_ROOT_PASSWORD=password
        ports:
            - "8181:80"
        links:
            - mysql

Is it possible to use NFS or another file-sharing system?
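On the NFS question: the built-in local volume driver can mount NFS shares directly, provided an NFS server is reachable from the Docker VM. A hedged sketch in compose syntax; the server address and export path below are hypothetical, and this is not a supported Docker for Windows configuration out of the box:

```yaml
version: "2"
services:
    mysql:
        image: mysql:latest
        volumes:
            - mysql-data:/var/lib/mysql
volumes:
    mysql-data:
        driver: local
        driver_opts:
            type: nfs
            # hypothetical NFS server reachable from the Moby VM
            o: "addr=192.168.1.50,rw,nfsvers=4"
            device: ":/exports/mysql"
```

Whether this outperforms the default SMB mount depends on the NFS server and on how small-file-heavy the workload is.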

@BlueBeN82 commented Nov 5, 2016

Same use case and behavior for me.

@fmasa commented Nov 22, 2016

@friism One solution would be to allow modification of the rsize and wsize parameters of the SMB mount. PHP projects usually consist of hundreds of small files (which have to be loaded on every request), and a large rsize degrades performance for this use case.

@dimasnake Try beta 21 if you have the installer; performance is much better.
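The small-file pattern described above is easy to measure directly. A minimal sketch (counts and paths are illustrative): unlike a single streaming dd, this loop is dominated by per-file round trips, which is what hurts PHP autoloading. Run it once on the container filesystem and once inside a mounted volume to compare:

```shell
#!/bin/sh
# Micro-benchmark: many small files instead of one large streaming write.
# Point dir at a mounted-volume path (e.g. under /var/www) to compare.
set -e
dir=$(mktemp -d)
start=$(date +%s)
i=0
while [ "$i" -lt 500 ]; do
    # one 1 KiB file per iteration, mirroring the dd usage in this thread
    dd if=/dev/zero of="$dir/f$i.dat" bs=1024 count=1 2>/dev/null
    i=$((i + 1))
done
# read everything back so the timing covers both directions
cat "$dir"/*.dat > /dev/null
echo "500 small files written and read in $(( $(date +%s) - start ))s"
```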

@dimasnake (author) commented Nov 22, 2016

@fmasa I don't have the installer. Where can I download beta 21?

@saschanaz commented Nov 25, 2016

Here is a beta version of Docker for Windows, but really, beta 21? The latest beta has the version tag 2016-11-10 1.12.3-beta30.

@fmasa commented Nov 25, 2016

@saschanaz Yes. Take a look at the end of the tag (beta30)

@dopee commented Nov 26, 2016

@dimasnake @saschanaz

I found it!
Build 5971 is beta21
https://download.docker.com/win/beta/1.12.0.5971/InstallDocker.msi

Gonna restart my machine and try it.

@dopee commented Nov 26, 2016

Can't get it working. Returned to the latest beta (30.1).
So frustrating: performance at work on Ubuntu is almost ten times better, even on older hardware.

@fmasa commented Nov 26, 2016

I read in another issue that the installer auto-updates to the latest version. :/ So once you've updated, you're stuck with that version.

@dimasnake (author) commented Nov 28, 2016

@fmasa @dopee No difference between beta 21 and beta 30.

@dgageot dgageot changed the title Incredibly slow (near unusable) on Docker container Shared Volumes Slow Dec 3, 2016

@whitecolor commented Dec 3, 2016

This issue now has a more appropriate name.

There is also a similar issue in Docker for Mac: docker/for-mac#77

I also experience this with git, for example: when I attach a folder as a volume and run git operations on it against a fairly large code base, the operations are significantly slower than on the dev machine.

To reproduce it, just take any image with git installed, attach a big repo, and try running git status.

@xdrew commented Dec 4, 2016

Facing this issue while developing a Symfony app.

@fmasa commented Dec 5, 2016

@dimasnake Are you sure you have beta 21 installed? I found out that the installer auto-updates to the latest version no matter what.

@friism Is there any roadmap for this issue?

@dimasnake (author) commented Dec 6, 2016

@fmasa I disabled auto-update. I have 1.12.0-beta21 (build: 5971). Is performance better for you on beta 21? Please show the output of dd if=/dev/zero of=test.dat bs=1024 count=100000 inside a Docker container.

@friism commented Dec 7, 2016

@fmasa we're aware of the problem. Note that volume I/O performance will likely always be slower than pure in-container performance and pure on-host performance because the host-mounted volume filesystem is mounted over a network.

We're interested in making that overhead as low as possible, of course, but we're not keen on adding a lot of toggles that users have to change depending on what software they happen to be running in containers right now.

@fmasa commented Dec 8, 2016

@friism Some performance hit on mounted volumes is expected, but for this particular use case it's just too much for DfW to be usable on a daily basis.
I don't see a way to optimize SMB shares without some sort of toggles; different stacks have different requirements. Why not give us an option to tweak some parameters in config files, if not in the GUI? You're cutting a not-insignificant part of the community out of DfW, because the only viable workaround right now is Docker Machine or some nasty hacks.
I don't want to flame or hate, just trying to give you some feedback from the PHP world :)

@trickreich commented Aug 7, 2018

Yes, same here!

I thought WSL could maybe be another option, until I read this article: https://www.phoronix.com/scan.php?page=article&item=windows10-wsl-docker&num=2

@luckydonald commented Aug 7, 2018

It's usable from macOS.
There are fewer differences between macOS and Linux, so it runs a lot better than on Windows at least, as it doesn't have to fake as much (file permissions, cAsEiNsEnSiTivE file systems, ...).
But yeah, on Windows it's a deal breaker. I'm the only Mac user in the company, so we can't use it.

@trickreich commented Aug 7, 2018

@luckydonald I switched from OSX to Linux because it's not usable! It's also terribly slow.
docker/for-mac#77

@BlueBeN82 commented Aug 7, 2018

Has anyone tried mounting the volume as delegated or cached? https://docs.docker.com/docker-for-mac/osxfs-caching/
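For reference, the cached/delegated flags from that page are consistency options on the bind mount itself. They are an osxfs (Docker for Mac) feature; other platforms accept but ignore them. A sketch in compose syntax, with illustrative paths:

```yaml
services:
    php:
        image: php:fpm
        volumes:
            # cached: the host's view is authoritative; container reads may lag
            - ./src:/var/www/html:cached
            # delegated: the container's view is authoritative; host sees writes later
            - ./var/cache:/var/www/var/cache:delegated
```

On Docker for Windows these flags change nothing; they are shown here only to explain what the linked page offers.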

@kamil-rydel commented Aug 26, 2018

@trickreich @raupie @er1z There is a possible solution for both Windows and OSX: Vagrant. It lets you configure shared folders using rsync, which has far better performance.

Docker for Windows pretty much didn't detect file changes at all, or took ~30 seconds to reload; using rsync it dropped to below 6 seconds, and I am pretty sure I can optimize this further in Webpack (a Node.js build tool). This is with 5 containers and 3 different volumes.

In my case I work with Unity and use Docker for my server-side stack, so it's pretty important to be able to run both on Windows.

Edit: It's necessary to cache the vendor folders, though. In my case I exclude node_modules/ from rsync, so there are only a few MB of files left, and instead create a symlink to a local node_modules/ in the container. Every container installs its own dependencies, and you have to rebuild the images every time you add new packages.

Edit 2: With further optimization of the Vagrant (rsync) + Docker configuration, primarily avoiding any unnecessary I/O, and after updating to the newest Webpack, I was able to lower the build times to 200 ms. I am actually surprised how well it runs right now. So if you are on Windows, Vagrant is a viable option, but you have to be very careful: keep all of your vendor files/cache in your containers and only rsync your source code.
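The setup above can be expressed in a few Vagrantfile lines. A sketch under the assumptions in the comment; the box name and exclude list are illustrative, and vendor directories stay out of the rsync so they live container-side:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"   # illustrative box
  config.vm.provision "docker"        # installs the Docker engine in the VM

  # One-way rsync from host to VM; heavy vendor dirs are excluded and
  # installed inside the containers instead.
  config.vm.synced_folder ".", "/vagrant",
    type: "rsync",
    rsync__exclude: ["node_modules/", ".git/"]
end
```

Running vagrant rsync-auto in a separate terminal watches the host folder and re-syncs on change.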

@er1z commented Sep 2, 2018

rsync is just a workaround, not a real solution. The only environment where Docker works flawlessly is Linux; the Windows version is just a "dummy" thing because of I/O. This won't change unless someone creates a way to expose Windows-host data to the VM, and as time shows, that's an almost impossible task to complete.

@jarkt commented Sep 7, 2018

To really speed things up you have to avoid shared volumes. With a shared volume I have to wait more than 10 seconds, sometimes more than 30, for a page to load. Without a shared volume the page is delivered in less than 200 milliseconds.

My setup in a Symfony project looks like this: there is a web container and a tools container. The tools container is used as a shell for command-line tools like the Symfony console or Composer. For this I use a shared volume to sync changes (especially the vendor folder) back to the Windows filesystem.
The web container, however, doesn't use a shared volume. Instead, changes are copied into the container with "docker cp". This is a job for the IDE. I use IntelliJ, which has a "File Watchers" feature: every change made in the IDE is copied into the container instantly. If you change a file in the tools container, the File Watcher notices the change when you focus the IDE and copies it into the web container. For the logs and cache directories I still use a Docker volume that isn't synchronized with Windows; maybe that's not even necessary.

I have set these windows environment variables:

DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=C:\Users\<username>\.docker\machine\machines\default
DOCKER_HOST=tcp://192.168.99.100:2376

The file watcher config looks like this:

Program: C:\Program Files\Docker Toolbox\docker.exe
Arguments: cp $FilePath$ [containername]:/project/$FilePathRelativeToProjectRoot$

But first you have to copy the whole project folder into the container. Initialize the container with:

docker cp <projectfolder>/. [containername]:/project

You also have to do this whenever the filesystem gets out of sync; usually it's only needed once.

@tomasbruckner commented Oct 29, 2018

@jarkt Great advice! I am experimenting with this option right now, but what I am missing in your File Watcher config is the container name in the docker cp command. Shouldn't it be more like

Arguments: cp $FilePath$ <containerName>:/project/$FilePathRelativeToProjectRoot$

? Otherwise, how does the docker cp command know where to copy the file?

@er1z commented Oct 29, 2018

Without syncing the cache back, the Symfony plugin cannot operate as effectively as it could with access to the fresh container configuration.

@jarkt commented Oct 29, 2018

@tomasbruckner Yes, you're right; it was a display issue. GitHub seems to filter out content between < and > in comments.

That said, what I described is no longer my current setup. I now use SFTP sync. Here is an example; note the comments:

docker network create symfony
docker volume create symfony_tmp
docker volume create symfony_npm
docker volume create symfony_project

# /tmp for symfony cache and log files - not synced with host system:
volume_tmp="-v symfony_tmp:/tmp/"
# /project/node_modules for npm/yarn - not synced with host system:
volume_npm="-v symfony_npm:/project/node_modules/"
# /project for tools container - synced with host system:
volume_project_shared="-v $(cd "$(dirname "$0")/.." || exit; pwd):/project/"
# /project for web and sftp containers - not synced with host system:
volume_project_web="-v symfony_project:/project/"
volume_project_sftp="-v symfony_project:/home/project/upload/"

docker_api="-v /var/run/docker.sock:/var/run/docker.sock"
network="--net symfony"

db_params="\
	-e MYSQL_ROOT_PASSWORD=secret123 \
	-e MYSQL_DATABASE=symfony \
	-e MYSQL_USER=symfony \
	-e MYSQL_PASSWORD=secret123 \
"
sftp_users="-e SFTP_USERS=project:secret123:::upload"

docker create $network     --name symfony_db      $db_params                       -p 3306:3306               mariadb:10
docker create $network     --name symfony_dbadmin -e PMA_HOST=symfony_db           -p 8080:80                 phpmyadmin/phpmyadmin
docker create $network     --name symfony_web     $volume_npm $volume_tmp $volume_project_web  -p 80:8080     symfony/web
docker create $network -it --name symfony_tools   $volume_npm $volume_tmp $volume_project_shared $docker_api  symfony/tools
# Build container for file watching in /project/assets for automatic rebuild:
docker create $network -it --name symfony_builder --volumes-from=symfony_web                                  symfony/tools \
	/bin/bash -c "yarn install && yarn encore dev --watch --watch-poll=500"
docker create              --name symfony_sftp    $sftp_users $volume_project_sftp -p 2222:22 --entrypoint "" atmoz/sftp \
	/bin/bash -c "/entrypoint chown project /home/project/upload && /entrypoint"

This is a lot to describe, but maybe it's good for inspiration. The important part is which folder is shared with which container. "symfony" is the project name, and the "web" and "tools" images are not public; you have to build them with everything you need, such as a web server, Node.js, and yarn.
The relevant piece is the symfony_sftp container: connect via SFTP to synchronize the project folder. I'm not sure whether you can sync in both directions with WinSCP, but you can use IntelliJ for it.
Hope it helps someone.

@filipesilva commented Nov 22, 2018

I see this issue when running git commands inside the container on a reasonably large git repository (12k commits, 4k files).

Running git status on a Windows 10 host takes ~0.08 seconds, while running it in a mounted volume in a windowsservercore:1803 container takes around 40s. It also massively increases CPU usage while running.

@docker-desktop-robot (collaborator) commented Feb 20, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@vasekboch commented Feb 20, 2019

/remove-lifecycle stale

This is still an up-to-date issue.

@jiangdongzi commented Feb 28, 2019

Same issue, so frustrating!

@kitingChris commented Mar 21, 2019

Windows colleagues on my team reported this issue recently too!

@jva91 commented Mar 27, 2019

Same issue. Is there any chance it will be fixed for Windows?

@fchastanet commented Mar 27, 2019

I've got bad performance because my Windows user is on a corporate network.
I found a solution: by reconfiguring the Docker for Windows shared-volume settings I get better performance. The idea is to create a local user and share the C drive with that user.

Hope it helps

@ZielonyBuszmen commented Mar 31, 2019

I reinstalled Docker completely (including deleting the VM), and I still have the same problems with slow volumes.

@rgembalik commented Apr 19, 2019

Just to add to the discussion and possible solutions:

I am fine with files being copied slowly, or not being available at once in the container. However, file access from within the container should not be slowed down (at least not this much), and it should not get even slower as more files are mounted in the volume (files which are not changing most of the time).

So if some middle-ground solution is possible, I'd go for faster file access. It is especially a problem for dev environments when using things like npm (node_modules) and composer (vendor). I hoped this would be achieved with the :cached volume mode, but to be honest it does not alleviate the issue; it just makes things a bit faster (which is not enough on larger projects).

Yes, you can structure projects so as not to mount everything, but that's not always an option.

@tomasfejfar commented Apr 19, 2019

Cached mounts solved most of the problems on Mac; hopefully that will land in DfW as well. I have high hopes this will improve with Microsoft pushing FOSS and embracing Linux for developer tooling. Their own developers also use Docker and feel the pain, IMHO. They did some impressive work making MSSQL Server run on Linux with their kernel proxy layer.

@peter-gribanov commented Apr 19, 2019

Example of concrete numbers, if anyone is interested:

[three benchmark screenshots in the original comment]

@biskyt commented Apr 30, 2019

My use case is similar to most people's here: a large git project for an Apache PHP web application. The file access speeds make Docker almost unusable. All git operations take upwards of 1 minute if run inside the container, composer and npm updates take long enough that you might as well go and make a cup of tea, and many web pages in the project take up to 1 minute to load.

Running git operations on the host is obviously much quicker, but unfortunately the makeup of the project means that helper scripts need to run many git operations, plus composer and npm, inside the container; otherwise there are too many other issues (including permission problems) when running directly on the Windows host.

I may have a look at creating a volume using the cifs driver to see whether manually adjusting rsize/wsize is possible and makes a difference.

This really is a fundamental problem with Docker for Windows though.

P.S. I have also tried LCOW mode, but that was even slower!
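The cifs experiment described above can be sketched with the built-in local driver; whether the Windows SMB server honors the requested rsize/wsize is a separate question. The share path, credentials, and sizes below are hypothetical:

```yaml
volumes:
    projectdata:
        driver: local
        driver_opts:
            type: cifs
            # 10.0.75.1 is the host side of the DockerNAT switch; share name is hypothetical
            device: "//10.0.75.1/C/Users/me/project"
            o: "username=docker,password=secret,vers=3.02,rsize=61440,wsize=65536"
```

This bypasses the mount options Docker for Windows chooses for its own SMB share, at the cost of managing credentials yourself.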

@er1z commented May 6, 2019

I may have a look at creating a volume using the cifs driver

AFAIR DfW already does. :)

Changing r/wsize accomplishes almost nothing; that's not the way. I wonder if my life will be long enough to see an I/O bridge with fair performance. :D

@jva91 commented May 7, 2019

https://devblogs.microsoft.com/commandline/announcing-wsl-2/

Microsoft is announcing WSL 2, which means Docker can run natively on Windows without Hyper-V or VirtualBox. I think this will fix the shared-volume issues.

@glen-84 commented May 7, 2019

@jva91 Where does it say that Hyper-V will not be required?

From Wikipedia:

In 2019, Microsoft announced a completely redesigned WSL architecture (WSL 2) to use lightweight Hyper-V VMs hosting actual (customized) Linux kernel images, claiming full syscall compatibility.

@jva91 commented May 7, 2019

https://blogs.windows.com/buildingapps/2019/05/06/developing-people-centered-experiences-with-microsoft-365/#oWEZCmHglMpJEftl.97

Windows Subsystem for Linux 2 (WSL 2) is the next version of WSL and is based on a Linux 4.19 kernel shipping in Windows. This same kernel technology is used for Azure, and in both cases helps to reduce Linux boot time and streamline memory use. WSL 2 also improves filesystem I/O performance, Linux compatibility, and can run Docker containers natively so that a VM is no longer needed for containers on Windows. The first WSL 2 preview will be available later this year.

@glen-84 commented May 7, 2019

A separate VM for containers is not needed, but if I understand correctly, WSL 2 itself will run inside a "lightweight utility virtual machine" (which is probably Hyper-V).

@tomasbruckner commented May 11, 2019

More info from Microsoft devs about Docker in WSL 2 - https://youtu.be/lwhMThePdIo?t=757
