Not able to share cloudstor azure named volumes across multiple containers on same host #68
I'm just starting to dabble in Cloudstor so I might be off here, but something I read somewhere (can't find it now) made me think that volumes cannot be shared read/write between several containers. One container can have the volume mounted read/write, and the others need to mount it read-only. If in your case only one of the containers needs to write to the volume and the other one only reads, maybe you want to try that? Not sure if/how the read-only option can be set in a compose file, though.
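For reference, Compose's short volume syntax does support a read-only flag by appending `:ro` to the mount. A minimal sketch (service names, images, and paths here are illustrative, not from this thread):

```yaml
version: "3.3"
services:
  writer:
    image: nginx:alpine
    volumes:
      - shared-data:/data          # read/write mount
  reader:
    image: nginx:alpine
    volumes:
      - shared-data:/data:ro       # read-only mount via the :ro flag
volumes:
  shared-data:
    driver: "cloudstor:azure"
```

Whether cloudstor itself honors this split is a separate question; the `:ro` flag only controls how the mount is exposed inside the container.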
According to the Docker for Azure persistent data volumes documentation (checked on 2018-07-17), there is no mention of one replica having r/w access to a volume while the others mount it read-only. I have the same issue: only one replica starts and mounts the shared volume, while all the others fail with: As a side note, the issue might be due to the fact that I'm running a single-node docker swarm with some services replicated.
I observe the same exact issue using docker-18.06.1 on top of Ubuntu 16. Amazingly, I can mount the same CIFS volume from multiple containers on a single host when using the "Docker for Azure v18.03.0" product. What gives?
We've also been stumbling onto this limitation and worked around it by creating multiple cloudstor volumes backed by the same Azure Files share. This can be done by specifying the volume option `share` when creating the volume. Also be very careful when managing such volumes: I have a vague recollection that if you delete one of the volumes, cloudstor will happily delete the underlying share regardless of whether there are other volumes still referencing it. (see also #71)
I considered that workaround of using unique volume names (you can actually use a template to add the task ID to the name) while pointing them at the same share name. I ended up using a different approach: "pseudo" local volumes that actually point to a CIFS share. When configuring the swarm (I use Terraform) I pre-create a dozen shares in the storage account. Example of volume declaration:
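A compose-style sketch of such a "pseudo" local volume backed by an Azure Files CIFS share might look like the following (the storage account, share name, and credentials below are placeholders, not values from this thread):

```yaml
volumes:
  nginxshare:
    driver: local
    driver_opts:
      type: cifs
      # Placeholder storage account and share; replace with your own.
      device: "//mystorageaccount.file.core.windows.net/nginxshare"
      # SMB 3.0 with the storage account name as user and an access key as password.
      o: "vers=3.0,username=mystorageaccount,password=STORAGE_ACCOUNT_KEY,dir_mode=0777,file_mode=0777"
```

Because this uses the built-in `local` driver with CIFS mount options, it sidesteps cloudstor entirely while still landing on the same Azure Files storage.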
@ngrewe Do you have any further documentation on how to do this? |
I also have the same issue. Does any of you have documentation on how persistent volumes are used on a multi-host docker swarm platform?

```
VolumeDriver.Mount: mount failed: exit status 32 output="WARNING: 'file_mode' not expressed in octal.\nmount error(16): Resource busy\nRefer to the mount.cifs(8) manual page (e.g. man mount.cifs)\nmount: mounting //azcedockereedev.file.core.windows.net/nginxshare on /mnt/cloudstor/testdemovol_nginxshare failed: Resource busy\n"
```
It's reasonably easy to create volumes like this:

```shell
docker volume create -d cloudstor:azure -o share=test-share test-share1
docker volume create -d cloudstor:azure -o share=test-share test-share2
```

Now different containers can mount different volumes (from cloudstor's perspective), which point to the same Azure Files share.
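The same idea can be expressed declaratively in a compose file by passing the `share` driver option, so each service gets its own cloudstor volume backed by one share (volume and share names here mirror the commands above and are illustrative):

```yaml
volumes:
  test-share1:
    driver: "cloudstor:azure"
    driver_opts:
      share: test-share
  test-share2:
    driver: "cloudstor:azure"
    driver_opts:
      share: test-share
```

Keep in mind the caveat mentioned earlier in this thread: deleting any one of these volumes may delete the underlying share that the others still depend on.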
@SimonSimCity, no. I spent some time on it but gave up trying to figure out how they made it work.
The problem with the workaround is that it breaks if you sometimes use local volumes (e.g. when running on only one node) and sometimes cloudstor: the local volumes point to different storage, while the cloudstor ones all use the same share.
Hi, we are trying to connect two or more containers to one named volume using cloudstor, but we hit the same problem. Does anyone have a solution for this? Thanks in advance.
We are using the cloudstor azure driver to persist files across multiple containers and hosts.
Previously, we used the Microsoft Azure docker driver, which has been abandoned in favor of cloudstor.
We have two containers, one nginx and one php, that run on two hosts. We do not use swarm mode.
The latest cloudstor driver is installed, and the driver works fine in a single container. However, when trying to mount the same named volume in a second container on the same host, we get this error:
Our docker compose file, which brings up the two containers on each host, looks like this. The cloudstor azure volume was created using `docker volume create` and does propagate across hosts. We do not use swarm. The /datadrive is a local mount which works fine. `uploads-production` is an Azure File share meant to be shared across containers and hosts using the cloudstor Azure driver.
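The actual compose file was not captured in this thread. As a hedged illustration of the setup described (nginx and php containers sharing one cloudstor volume; image names and mount paths are assumptions):

```yaml
version: "3.3"
services:
  nginx:
    image: nginx:alpine
    volumes:
      - uploads-production:/var/www/uploads
  php:
    image: php:7-fpm
    volumes:
      - uploads-production:/var/www/uploads
volumes:
  uploads-production:
    # Declared external because it was pre-created with
    # `docker volume create -d cloudstor:azure uploads-production`.
    external: true
```

Mounting the same named cloudstor volume in both services is exactly the pattern that fails with the "Resource busy" error reported above.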