This repository has been archived by the owner on Jan 23, 2020. It is now read-only.

Not able to share cloudstor azure named volumes across multiple containers on same host #68

Open
kevinlisota opened this issue Jul 23, 2018 · 12 comments

Comments

@kevinlisota

We are using the cloudstor azure driver to persist files across multiple containers and hosts.

Previously, we used the Microsoft Azure docker driver, which has been abandoned in favor of cloudstor.

We have two containers, one nginx and one php, that run on two hosts. We do not use swarm mode.

The latest cloudstor driver is installed, and the driver works fine in a single container. However, when trying to mount the same named volume in a second container on the same host, we get this error:

ERROR: for nginx  Cannot start service nginx: error while mounting volume '/var/lib/docker/plugins/56f927ed3c5aa4a507db1f1671fb219c78a393ad721f82fe01cc46c2c33364ce/propagated-mount/cloudstor/uploads-production': VolumeDriver.Mount: mount failed: exit status 32
output="mount error(16): Resource busy\nRefer to the mount.cifs(8) manual page (e.g. man mount.cifs)\nmount: mounting //foobar.file.core.windows.net/foobar-production on /mnt/cloudstor/uploads-production failed: Resource busy\n"

Our docker-compose file, shown below, is what brings up the two containers on each host. The cloudstor azure volume was created using docker volume create and does propagate across hosts; again, we do not use swarm.

version: '3.6'
services:
  nginx:
    container_name: nginx
    image: foobar.azurecr.io/foobar/nginx:v3.0
    depends_on:
        - php
    ports:
        - "80:80"
        - "443:443"
    sysctls:
        - net.core.somaxconn=1024
    restart: always
    volumes:
        - /datadrive/foobar-production:/var/www/foobar.com/
        - uploads-production:/var/www/foobar.com/wp-content/uploads/
        - /mnt/fastcgicache:/fastcgicache
  php:
    container_name: php
    image: foobar.azurecr.io/foobar/php:v3.0
    ports:
        - "9000:9000"
    sysctls:
        - net.core.somaxconn=1024
    restart: always
    volumes:
        - /datadrive/foobar-production:/var/www/foobar.com/
        - uploads-production:/var/www/foobar.com/wp-content/uploads/
volumes:
    uploads-production:
        external: true

The /datadrive is a local mount which works fine. uploads-production is an Azure File share meant to be shared across containers and hosts using the cloudstor Azure driver.
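
For reference, a cloudstor volume like this is typically created with a command along the following lines; the share option here is only inferred from the error output above, so the exact command used may have differed:

 docker volume create -d cloudstor:azure -o share=foobar-production uploads-production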

@alexvy86

I'm just starting to dabble with Cloudstor so I might be off here, but something I read somewhere (can't find it now) made me think that volumes cannot be shared as read/write by several containers. One container can have the volume mounted read/write, and the others need to mount it read-only. If in your case only one of the containers needs to write to the volume and the other only reads, maybe you want to try that? Not sure if/how the read-only option can be set in a compose file, though.
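
For what it's worth, compose does let you mark a named-volume mount read-only with a ':ro' suffix on the mapping. A minimal sketch, reusing the volume name and mount path from the original report (whether this actually avoids the mount error is untested):

services:
  php:
    volumes:
      # single writer container keeps the read/write mount
      - uploads-production:/var/www/foobar.com/wp-content/uploads/
  nginx:
    volumes:
      # every other container mounts the same volume read-only
      - uploads-production:/var/www/foobar.com/wp-content/uploads/:ro
volumes:
  uploads-production:
    external: true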

@krufab

krufab commented Aug 17, 2018

According to the documentation (Docker for Azure persistent data volumes, checked on 2018-07-17), there is no mention of one replica having r/w access to a volume while the others have it mounted read-only.

I have the same issue. Only 1 replica starts mounting the shared volume, while all the others fail with:
VolumeDriver.Mount: mount failed: exit status 32 output="mount error(16): Resource busy\nRefer to the mount.cifs(8) manual page (e.g. man mount.cifs)\nmount: mounting //XXXXXXXX.file.core.windows.net/uploaded-files on /mnt/cloudstor/uploaded-files failed: Resource busy\n"

As a side note, the issue might be due to the fact that I'm running a single-node docker swarm with some services replicated.
In this thread, it is stated that it works when there are multiple nodes, each running one service that uses the cloudstor volume.

@akomlik

akomlik commented Oct 11, 2018

I observe the exact same issue using docker-18.06.1 on top of Ubuntu 16.
I can mount the exact same CIFS volume on the docker host using the mount.cifs command, but it fails to mount when launching a container.

Amazingly, I can mount the same CIFS volume from multiple containers on a single host when using the "Docker for Azure v18.03.0" product. What gives?
I tried downgrading to 18.03.0-ce on the Ubuntu hosts to no avail.
Does something in the Docker for Azure code, or in the Alpine images it uses, keep it from hitting this bug?
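
For comparison, a manual Azure Files mount on the host looks roughly like this (storage account name, share name, and key are placeholders):

 sudo mount -t cifs //<storage-account>.file.core.windows.net/<share> /mnt/test \
   -o vers=3.0,username=<storage-account>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino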

@ngrewe

ngrewe commented Oct 24, 2018

We've also run into this limitation and worked around it by creating multiple cloudstor volumes backed by the same Azure Files share. This can be done by specifying the volume option 'share' when creating the volume.
This works well if you have two (or more) different kinds of services that end up being co-scheduled on the same node, but it unfortunately won't help you if you just have two instances of the same service sharing a node.

Also be very careful when managing such volumes. I have a vague recollection that if you delete one of the volumes, cloudstor will happily delete the underlying share regardless of whether other volumes still reference it. (see also #71)

@akomlik

akomlik commented Oct 24, 2018

I considered that workaround of using a unique volume name (you can actually use a template to add the task ID to the name) while pointing to the same share name.
The problem I had was that I could not launch a container and use the '--mount-from' option with this.
Thanks for the heads-up on the deletion gotcha!

I ended up taking a different approach, using "pseudo" local volumes that actually point to the CIFS share.
Essentially, I'm no longer using the cloudstor driver at all.

Example of volume declaration:
logs:
  name: "${DOCKER_STACK}_logs"
  driver: local
  driver_opts:
    type: cifs
    o: vers=2.1,file_mode=0640,dir_mode=0750,uid=0,gid=0,mfsymlinks,${STORAGE_CREDS}
    device: "//${STORAGE_HOST_IP}/shared-vol-2"
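
The same kind of local CIFS volume can presumably also be created outside a compose file with docker volume create; the volume name, account name, and key below are placeholders:

 docker volume create \
   --driver local \
   --opt type=cifs \
   --opt device=//<storage-account>.file.core.windows.net/shared-vol-2 \
   --opt o=vers=2.1,file_mode=0640,dir_mode=0750,uid=0,gid=0,mfsymlinks,username=<storage-account>,password=<storage-account-key> \
   mystack_logs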

When configuring the swarm (I use Terraform), I pre-create a dozen shares in the storage account with names like shared-vol-0..9, then use those names in the compose file.
Works fine so far.
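
The commenter does this pre-creation with Terraform; as a rough equivalent sketch using the Azure CLI instead (storage account name and key are placeholders):

 for i in $(seq 0 9); do
   az storage share create \
     --name "shared-vol-$i" \
     --account-name <storage-account> \
     --account-key <storage-account-key>
 done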

@RichAngal

@ngrewe Do you have any further documentation on how to do this (creating multiple cloudstor volumes backed by the same Azure Files share)?

@grkrao

grkrao commented Nov 26, 2018

I also have the same issue. Does any of you have documentation on how persistent volumes are used on a multi-host docker swarm platform?

VolumeDriver.Mount: mount failed: exit status 32 output="WARNING: 'file_mode' not expressed in octal.\nmount error(16): Resource busy\nRefer to the mount.cifs(8) manual page (e.g. man mount.cifs)\nmount: mounting //azcedockereedev.file.core.windows.net/nginxshare on /mnt/cloudstor/testdemovol_nginxshare failed: Resource busy\n"

@ngrewe

ngrewe commented Nov 30, 2018

@ngrewe Do you have any further documentation on how to do this?

It's reasonably easy to create volumes like this:

 docker volume create -d cloudstor:azure -o share=test-share test-share1
 docker volume create -d cloudstor:azure -o share=test-share test-share2

Now different containers can mount different volumes (from cloudstor's perspective), which point to the same Azure Files share.
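
Applied to a compose file, each service then mounts its own volume name while both resolve to the same share. A rough sketch, reusing the volume names from the commands above and the mount path from the original report:

services:
  nginx:
    volumes:
      - test-share1:/var/www/foobar.com/wp-content/uploads/
  php:
    volumes:
      - test-share2:/var/www/foobar.com/wp-content/uploads/
volumes:
  test-share1:
    external: true
  test-share2:
    external: true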

@SimonSimCity

Is there still no usable workaround available except for the risky one @ngrewe provided?

@akomlik were you able to track down why it works using the Docker for Azure v18.03.0 template?

@akomlik

akomlik commented May 30, 2019

@SimonSimCity, no. I spent some time on it and gave up trying to figure out how they made it work.
My solution has worked fine for us so far.

@mohag

mohag commented Jul 1, 2019

The problem with the workaround is that it does not work if you sometimes use local volumes (e.g. when running on only one node) and sometimes cloudstor: the local volumes will point to different storage, while the cloudstor ones would all use the same storage.

@Juanfree

Juanfree commented Oct 1, 2019

Hi,

We are trying to connect 2 or more containers to 1 named volume using cloudstor, but we have the same problem... Does someone have a solution for this?

Thanks in advance
