I know this is a duplicate of #29219, but I would like to make the case for this feature again. I think it could be very useful, so please do not close this outright, but follow my argumentation in favor of this feature below.
That said, maybe there are better ways to do what I am trying to do that I am currently not aware of (I am still a newbie when it comes to Swarm) - in that case, please enlighten me.
I am looking for a way to handle persistent data in a Swarm.
I have a number of stateless services, which I can scale using "docker service scale".
However, even though these services are stateless, there is a bit of information that I would like to preserve (for the sake of the example, assume this information is log files), even if I scale the service up and down and the service instances ("tasks") end up on different hosts each time.
This information is stored in, say, a folder /xyz inside the container. Since scaling the service up and down can make the instances end up on different hosts, using a plain local bind mount is not an option - instance n might end up on host A, but after scaling down and later back up to scale=n, it might run on a different host B, and so it would not find the data from its "previous life". Worse, if two instances end up on the same host, they would use the same host path and try to "share" the files found there, which does not work.
My idea was to have a network share which I mount to, e.g., /share on each of my hosts.
I could then use a bind mount like this with "service create":
--mount type=bind,src=/share/{{.Task.Slot}},dst=/xyz
Note the use of the "Task.Slot" template in the src definition... the idea is that each task gets a unique directory under /share, whose name is derived from the task's "slot".
This however does not work since the directories /share/1, /share/2 etc. do not exist initially. Of course I could pre-create a reasonable amount of such directories, but what if I scale higher than anticipated?
If there was an option for "--mount" to create the "src" directory if it does not exist, my problem would be solved.
Say, I start my service and set scale to 3:
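A sketch of what that could look like - the service name "myservice" and the image "myimage" are made-up placeholders, not part of the original report:

```shell
# Hypothetical example - "myservice" and "myimage" are placeholders.
docker service create \
  --name myservice \
  --replicas 3 \
  --mount type=bind,src=/share/{{.Task.Slot}},dst=/xyz \
  myimage
```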
For each instance, Swarm would create the directory under /share according to the instance's "slot number". Since /share is on a network share, I would now have three directories: /share/1, /share/2, and /share/3.
Now I scale down to 1, and back up to 3. Whichever host the instance with slot #3 ends up on, it would mount the same directory /share/3 and would find the data from its "previous life".
Without the ability to create the src directories, I need to do a somewhat nasty workaround.
I could pass the slot ID in as an environment variable:
-e "slot={{.Task.Slot}}"
and would mount the /share folder with
--mount type=bind,src=/share,dst=/someOtherDir
Then, in my startup script, I would need to use the "slot" environment variable to create a directory named after the slot under /someOtherDir, i.e.,
mkdir /someOtherDir/${slot}
(e.g., assuming I am the instance whose slot is #3, I would create the directory /someOtherDir/3)
and would then have to symlink this directory to /xyz, because this is where my service expects its stuff.
All that hassle would not be necessary if "--mount" had an option (e.g., ensureSrc=true) that, if set, creates the src directory if it does not already exist.
So you suggest using a named volume... but if I wanted such a "dynamically named" named volume to be located on an NFS share, I would have to mount my NFS share into the directory in which Docker puts the named volumes, right?
But what if I don't want all volumes on that Docker host to be on that NFS share?
Would an NFS volume driver be an alternative?
@jgoeres You can use the local driver to mount an NFS volume, and there are also many third-party drivers designed around NFS that you can use. The nice thing about this is that you don't have to do anything to the host except make sure the NFS kernel module is loaded.
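For reference, creating a named volume backed by NFS via the local driver looks roughly like this - the server address 10.0.0.5 and the export path /exports/data are made-up placeholders:

```shell
# Sketch: NFS-backed named volume using the built-in local driver.
# The server address and export path below are placeholders.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.5,rw \
  --opt device=:/exports/data \
  mynfsvolume
```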