
[1.13.0-rc5] external volumes created when deploying a stack #29976

Open
cghislai opened this Issue Jan 8, 2017 · 7 comments

cghislai commented Jan 8, 2017

If I have a compose file with the following volumes declared:

volumes:
  my-volume:
    external:
      name: my-volume-name

and if no volume named my-volume-name exists,
then deploying the stack using docker stack deploy will create the volume.

Expected behavior is a failure of the service tasks with a message like volume 'my-volume-name' declared as external but not found. Create external volumes using 'docker volume create'.

I suspect this is a regression.

docker info:

Containers: 45
 Running: 15
 Paused: 0
 Stopped: 30
Images: 760
Server Version: 1.13.0-rc5
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 686
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: e1n2zuijycthql63w6h2n4vsz
 Is Manager: true
 ClusterID: 1szzbo8wm61n365hoy41ihebj
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 91.121.79.188
 Manager Addresses:
  91.121.79.188:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 51371867a01c467f08af739783b8beafc154c4d7
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.8.0-32-generic
Operating System: Ubuntu 16.10
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.58 GiB
Name: cghislaiA
ID: DFG6:IV26:QJCR:GW5X:OWRX:IOJ3:IMVJ:BPC2:GZ4Y:MUVX:TLHW:ADBC
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
justincormack commented Jan 8, 2017

cghislai commented Jan 8, 2017

@vdemeester vdemeester added this to the 1.13.0 milestone Jan 9, 2017

dnephin commented Jan 9, 2017

In swarm mode volumes are created by swarmkit when they are defined by a service. There's no API call to create or inspect them directly.

Because of this limitation, stack deploy can only set a Mount definition on a service. There isn't any way (that I know of) to tell swarmkit to not create the mount.

So I don't think this is a regression (swarmkit has had this behaviour since 1.12 I believe), and I don't think there is anything we can do about this for 1.13.

The only option I can see would be to have swarmkit support another field on the Mount spec which says "don't create this if it doesn't exist", however I'm not sure that will actually work in practice. As service tasks are scheduled on new nodes, I think it's necessary for swarmkit to create the volume on the local node.

cc @cpuguy83 @aluzzardi who might know more about volumes in swarm mode.
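Until swarmkit supports such a field, the only place to enforce this seems to be client-side, before the deploy. Below is a minimal sketch of such a guard; it is not part of any Docker tooling, handles only the flat compose layout from the report, and a wrapper would still have to check each reported name with `docker volume inspect` before running `docker stack deploy`:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// ExternalVolumeNames scans a compose file for volumes declared with an
// `external:` key inside the top-level `volumes:` block and returns their
// `name:` values. Line-based on purpose: no YAML library, and only the
// flat layout shown in the report is handled.
func ExternalVolumeNames(compose string) []string {
	var names []string
	inVolumes, wantName := false, false
	sc := bufio.NewScanner(strings.NewReader(compose))
	for sc.Scan() {
		line := sc.Text()
		trimmed := strings.TrimSpace(line)
		switch {
		case trimmed == "volumes:" && !strings.HasPrefix(line, " "):
			inVolumes = true // entered the top-level volumes: block
		case inVolumes && trimmed != "" && !strings.HasPrefix(line, " "):
			inVolumes = false // another top-level key: left the block
		case inVolumes && trimmed == "external:":
			wantName = true
		case wantName && strings.HasPrefix(trimmed, "name:"):
			names = append(names, strings.TrimSpace(strings.TrimPrefix(trimmed, "name:")))
			wantName = false
		}
	}
	return names
}

func main() {
	compose := `volumes:
  my-volume:
    external:
      name: my-volume-name
`
	// Each of these names would then be verified with `docker volume inspect`
	// before deploying the stack.
	fmt.Println(ExternalVolumeNames(compose)) // [my-volume-name]
}
```

A wrapper built around this could fail fast on missing external volumes instead of letting swarmkit silently create them.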

@dnephin dnephin removed this from the 1.13.0 milestone Jan 9, 2017

@dnephin dnephin removed their assignment Jan 9, 2017

cpuguy83 commented Jan 11, 2017

Right, there is currently no way to avoid auto-creating the volume being mounted.
I believe the original intention was to require driver details to be supplied if a name is given but the volume doesn't exist... not sure how this didn't happen. (phew, that's a lot of negatives)
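In compose terms, that intention would distinguish a volume carrying driver details (safe for the daemon to auto-create) from a bare external name (which should fail if absent). A hypothetical fragment illustrating the two cases:

```yaml
volumes:
  managed-volume:
    driver: local            # driver details given: auto-creation is well-defined
  my-volume:
    external:
      name: my-volume-name   # name only: should error if no such volume exists
```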

cghislai commented Jun 5, 2017

Let me rephrase to make sure I understand the intention correctly.
For volumes not marked as external in the compose file, a MountSpec containing a DriverConfig for the 'local' driver should be created. If they are marked as external, no DriverConfig should be present in the spec.
When a node is asked to start a container, it should check that volumes which do not yet exist carry a valid DriverConfig, similar to what is already done for bind mounts in func (c *containerAdapter) checkMounts() error. If not, an error should be raised.
Is that correct?
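What that check could look like, sketched with simplified stand-ins for the daemon/swarmkit types (the real Mount, DriverConfig, and volume-store types are more involved; this only illustrates the proposed rule, and the error text is taken from the expected behavior above):

```go
package main

import "fmt"

// Simplified stand-ins for the real swarmkit/daemon types; only the fields
// needed to illustrate the proposed rule are modeled.
type DriverConfig struct{ Name string }

type Mount struct {
	Type         string        // "volume", "bind", ...
	Source       string        // volume name
	DriverConfig *DriverConfig // nil would mean "external" under the proposal
}

// VolumeStore abstracts "does this volume already exist on the node?".
type VolumeStore interface {
	Exists(name string) bool
}

// checkVolumeMounts mirrors what checkMounts does for bind mounts: a volume
// mount without a DriverConfig is treated as external and must already exist
// on the node; otherwise the task errors out instead of the daemon silently
// creating a local volume.
func checkVolumeMounts(store VolumeStore, mounts []Mount) error {
	for _, m := range mounts {
		if m.Type != "volume" || m.DriverConfig != nil {
			continue // non-volume mounts and driver-managed volumes are fine
		}
		if !store.Exists(m.Source) {
			return fmt.Errorf("volume '%s' declared as external but not found. Create external volumes using 'docker volume create'", m.Source)
		}
	}
	return nil
}

// mapStore is a toy VolumeStore for demonstration.
type mapStore map[string]bool

func (s mapStore) Exists(name string) bool { return s[name] }

func main() {
	store := mapStore{"existing-vol": true}
	fmt.Println(checkVolumeMounts(store, []Mount{{Type: "volume", Source: "my-volume-name"}}))
}
```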

lsl commented Jun 9, 2017

Watching this, and since there doesn't seem to be a clear use case specified:

If a mount point or volume doesn't exist on the predetermined node, I want my database container to fail to create.

Most database containers will happily boot up with an empty database in this situation and act as if everything is fine (accepting connections, etc.).

I think this workflow makes sense when compared with other "external" resources like networks?

acidoxee commented Sep 18, 2018

Is there any update on this behavior?

In my case, I'm having trouble with external volumes in swarm mode being overlooked. When updating my stack configuration, for instance, it sometimes happens that existing volumes (which are remote OpenStack volumes) take a while to unmount, so the re-deployed services of the stack create local volumes instead.

It would be great if there were a way to prevent this behavior, since it currently forces me to delete the stack, prune the local volumes, and then redeploy the stack while carefully checking that the remote volumes are available again.
