
docker stack deploy --compose-file not idempotent when using private images #29676

Closed

marascha opened this issue Dec 23, 2016 · 30 comments

@marascha

Description
I have the following compose file comprising two services based on images from our private registry:

version: '3.0'
services:
  myservice1:
    image: PRIVATE-REGISTRY-URL/service1:latest
  myservice2:
    image: PRIVATE-REGISTRY-URL/service2:latest

As far as I know, docker stack deploy is designed to be idempotent, so I expect the services in the stack to remain unchanged when the stack is redeployed and nothing has changed in the compose file or in the image versions.

However, I'm facing the problem that all services get restarted every time I deploy the stack using docker stack deploy --compose-file. Interestingly, this is not the case when I first create a bundle using docker-compose bundle and then deploy the resulting bundle file using docker stack deploy --bundle-file.

The problem does not occur when only public images are used in the compose file, so I guess it could have something to do with the communication between the Docker engine and our registry.

Any ideas what could be causing this problem?

Output of docker version:
Docker version 1.13.0-rc4, build 88862e7

Output of docker info:
Containers: 31
 Running: 0
 Paused: 0
 Stopped: 31
Images: 75
Server Version: 1.13.0-rc4
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 339
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: 6btv07f0tr601zqddji46fn3e
 Is Manager: true
 ClusterID: eiisduhprkfnyr5rjsb5pakfc
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 192.168.1.2
 Manager Addresses:
  192.168.1.2:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 51371867a01c467f08af739783b8beafc154c4d7
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-57-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.684 GiB
Name: samub
ID: RTIN:ARA2:OTVX:WYVE:HI3G:LKXR:6KJF:CEVY:G7OB:TX3Z:LWYS:EMPH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

@thaJeztah
Member

There's a difference between the way bundle files (created with docker-compose bundle) and the docker-compose integration (--compose-file) work.

Deploying from a bundle file

When running docker-compose bundle, docker-compose resolves the immutable identifier (digest) for each image, and "bakes" that into the bundle file that is produced. For example, the following docker-compose file;

version: "2.1"
services:
  web:
    image: nginx:alpine

Produces this bundle file:

{
  "Services": {
    "web": {
      "Image": "nginx@sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4", 
      "Networks": [
        "default"
      ]
    }
  }, 
  "Version": "0.1"
}

Docker Compose does this by looking at the local images on the daemon that it is connecting to.
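
For reference, you can inspect the local cache to see which digest docker-compose would bake in (a sketch; the digest here matches the bundle shown above):

$ docker inspect --format '{{ .RepoDigests }}' nginx:alpine

[nginx@sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4]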

While this works, the image version that is deployed now depends on the local image cache, which may be outdated. For example, when generating the above bundle, my local image cache was out of date, and nginx:alpine actually wasn't the latest version;

$ docker pull nginx:alpine
alpine: Pulling from library/nginx
Digest: sha256:c04a2d23900ac8772b0e5703720421b449da97176a886ed910b44c270e5e4170
Status: Downloaded newer image for nginx:alpine

Generating the bundle again shows that the version is updated;

{
  "Services": {
    "web": {
      "Image": "nginx@sha256:c04a2d23900ac8772b0e5703720421b449da97176a886ed910b44c270e5e4170", 
      "Networks": [
        "default"
      ]
    }
  }, 
  "Version": "0.1"
}

If the specified image has not been pushed to a registry, and therefore doesn't
have a digest (e.g., when testing an image that was built locally), docker-compose bundle
will produce an error;

$ docker-compose bundle

ERROR: Some images are missing digests.

The following images need to be pulled:

    web

Use `docker-compose pull web` to pull them.
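
A sketch of how to resolve the missing digests (assuming a standard compose workflow):

# for an image that exists in a registry, pull it to record its digest:
$ docker-compose pull web
# for an image that was built locally, push it so it gets a digest:
$ docker-compose push web
$ docker-compose bundle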

Deploying from a docker-compose file

When deploying the compose file through --compose-file, the mechanics are a bit different;

  • If a service's image uses an immutable identifier/digest ("nginx@sha256:c04a2d23900ac8772b0e5703720421b449da97176a886ed910b44c270e5e4170"), the service is created using that identifier. If the specified image digest is present in the local image cache, no communication with the registry takes place. This allows faster deployments, and is useful for "air-gapped" networks.
  • If the image uses a tag, the image's immutable identifier is resolved by querying the registry. This resolution takes place on the manager node that the service is created on.
  • If an image is only present locally and doesn't have a digest, a warning is printed, but the local image is used. This allows for testing services locally in a development environment, for example, when you want to test an image that you built locally before pushing it to a registry. This workflow is intended for single-node swarms (Docker for Desktop).

This flow provides a "freshness" guarantee (i.e., foobar:latest is checked against the registry, and is actually :latest, not a stale version that was present locally), while keeping the author in control: specify an immutable identifier to "pin" to a specific version, or a "rolling" tag (image:2, or image:2.1) to allow updating the image on re-deploy.
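
For example, both styles side by side (a sketch; the service names are illustrative, the digest is taken from the example above):

version: "3.0"
services:
  pinned:
    # pinned to an immutable identifier; unchanged on every re-deploy
    image: "nginx@sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4"
  rolling:
    # a tag; re-resolved against the registry on every deploy
    image: nginx:alpine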

To see this working, I'll recreate the situation where nginx:alpine is outdated;

$ docker pull nginx@sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4

sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4: Pulling from library/nginx
Digest: sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4
Status: Image is up to date for nginx@sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4

Tag the old image as if it's the latest version;

$ docker tag nginx@sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4 nginx:alpine

$ docker inspect --format='{{ .RepoDigests }}' nginx:alpine

[nginx@sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4]

Deploy the stack using the docker-compose.yml file;

$ docker stack deploy --compose-file=docker-compose.yml bundleexample

And inspect the service specs to see which image is deployed;

$ docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' bundleexample_web

nginx:alpine@sha256:c04a2d23900ac8772b0e5703720421b449da97176a886ed910b44c270e5e4170

On the other hand, changing the docker-compose file to

version: "3.0"
services:
  web:
    image: "nginx@sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4"

And re-deploying;

$ docker stack deploy --compose-file=docker-compose.yml bundleexample
Updating service bundleexample_web (id: l1t6znfwdoef7s0p47o3v9gv2)

Pins the service to the old version;

$ docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' bundleexample_web

nginx@sha256:aee97412fee873bd3d8fc2331b80862d7bd58913f7b12740cae8515edc1a66e4

Roadmap

Compose integration is only the first step, additional enhancements will be added
in future versions, so feedback on this feature is welcome!

@marascha
Author

Thanks a lot for your informative answer. The difference between bundle and the docker-compose integration is now clear to me.

This flow provides a "freshness" guarantee (i.e., foobar:latest is checked against the registry, and is actually :latest, and not a stale version that was present locally)...

That's exactly what we need in our workflow: defining a compose file with multiple services tagged with latest, and having only those services restarted whose images have changed since the last deployment. All unchanged services in the stack should not be restarted.

So our requirement is fulfilled only if deploying compose files with images referenced by tags is idempotent. By idempotent I mean that services in the stack are not restarted when the stack is redeployed and the latest tag in the registry still points to the same digest as at the previous deployment. Let's stick to your example to make things clearer:

version: "3.0"
services:
  web:
    image: nginx:alpine

docker stack deploy --compose-file=docker-compose.yml bundleexample

docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' bundleexample_web
nginx:alpine@sha256:c04a2d23900ac8772b0e5703720421b449da97176a886ed910b44c270e5e4170
docker stack ps bundleexample
ID            NAME                 IMAGE         NODE        DESIRED STATE  CURRENT STATE               ERROR  PORTS
htf56q0h1fqv  bundleexample_web.1  nginx:alpine  samub  Running        Running about a minute ago

Now, when I redeploy the stack using docker stack deploy --compose-file=docker-compose.yml bundleexample, I expect the following steps to be done:

  1. The manager node queries the registry and resolves the image's immutable identifier, which is nginx@sha256:c04a2d23900ac8772b0e5703720421b449da97176a886ed910b44c270e5e4170.
  2. The manager compares this identifier with the service's current image identifier (the one we get by calling docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' bundleexample_web) and realizes that the image has not changed (a manual version of this comparison is sketched below).
  3. As the other service attributes have not changed either, the manager 'ignores' the stack update request for this service, and the service continues to run.
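
A manual version of roughly the same comparison (a sketch, combining commands used elsewhere in this thread):

$ docker pull nginx:alpine
$ docker inspect --format '{{ index .RepoDigests 0 }}' nginx:alpine
$ docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Image }}' bundleexample_web
# if both commands report the same digest, the service should be left alone on re-deploy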

This behavior is exactly what I see when redeploying the stack after a couple of minutes using

docker stack deploy --compose-file=docker-compose.yml bundleexample
Updating service bundleexample_web (id: 76kcrcupy1hyid5o6bncs8ccf)

Calling the stack ps command shows that the service has not been restarted:

docker stack ps bundleexample
ID            NAME                 IMAGE         NODE        DESIRED STATE  CURRENT STATE          ERROR  PORTS
htf56q0h1fqv  bundleexample_web.1  nginx:alpine  samub  Running        Running 7 minutes ago

So everything works as expected. My problem is that this behavior seems to be different when I use images from our private registry. For example:

version: '3.0'
 services:
   myservice1:
      image: PRIVATE-REGISTRY-URL/service1:latest

With this definition the service gets restarted every time I call docker stack deploy --compose-file=docker-compose.yml bundleexample, although no newer version of the image has been pushed to the registry (the latest tag is identified by the same digest as in previous deployments).

So it seems to me that the process of resolving image identifiers (the manager querying the registry) and restarting only changed services is somehow different when using a private registry. I know this sounds odd, but that's what I'm experiencing in our environment.

@thaJeztah
Member

So it seems to me that the process of resolving image identifiers (manager queries the registry) and restarting only changed services is somehow different when using a private registry. I know this sounds odd, but that's what I'm experiencing in our environment.

That's interesting. Do you know what kind of registry you're running? Also, have you tried to diff the service definition before/after doing the update (e.g., storing the docker service inspect output before and after)?

It may be worth running the daemon in debug mode to see if the log files show what it's doing.
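
A sketch of both suggestions (stack and service names taken from the example earlier in the thread; daemon.json is one common way to enable debug):

$ docker service inspect bundleexample_web > before.json
$ docker stack deploy --compose-file=docker-compose.yml bundleexample
$ docker service inspect bundleexample_web > after.json
$ diff before.json after.json

# enable daemon debug logging (merge into any existing config, then restart the daemon)
$ echo '{ "debug": true }' | sudo tee /etc/docker/daemon.json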

@marascha
Author

I don't know much about our Docker registry and don't have direct access to it either, but I can ask our admins in a couple of days. What would be interesting to know? The version of the registry?

I tried a diff on a service that gets restarted on redeployments (interestingly, it does not get restarted on every redeployment, but maybe every third time or so). Only the following elements were not identical between the old and new version of the service:

"Version": {
            "Index": 17145
        },
...
"UpdatedAt": "2016-12-24T16:53:00.337962145Z",
...
 "StartedAt": "2016-12-24T16:52:26.464216596Z",
 "CompletedAt": "2016-12-24T16:53:00.337921255Z",
...

Everything else (e.g., TaskTemplate/ContainerSpec/Image) was identical.
I repeated the experiment in debug mode, but the logs don't reveal anything helpful.

@thaJeztah
Member

I don't know much about our Docker registry and don't have direct access to it either, but I can ask our admins in a couple of days. What would be interesting to know? The version of the registry?

No worries, I was curious if it had issues resolving the digest, but your "diff" of the service definition seems to show that is not the problem.

I'll try to reproduce this if I have a bit of time this weekend, and /cc @dnephin @vdemeester in case this is a known issue or there's something I overlooked.

@jameshy

jameshy commented Dec 24, 2016

I ran into this too and have been trying to debug it; here's what I found:

If you have a docker-compose.yml like this:

version: '3'
services:
    nginx:
        image: nginx
        environment:
            A: data
            B: data

Deploying it using
docker stack deploy --compose-file ./docker-compose.yml service
causes the container to restart (not every time, but most times).

However, if you only have a single environment variable, it does not cause the container to restart.

version: '3'
services:
    nginx:
        image: nginx
        environment:
            A: data

Output of docker version:

Client:
 Version:      1.13.0-rc4
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   88862e7
 Built:        Fri Dec 16 22:52:06 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0-rc4
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   88862e7
 Built:        Fri Dec 16 22:52:06 2016
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

Containers: 11
 Running: 2
 Paused: 0
 Stopped: 9
Images: 15
Server Version: 1.13.0-rc4
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 45
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: active
 NodeID: yp2hr8rs1bkl940z0vthvmbn5
 Is Manager: true
 ClusterID: 3v0ouvcjjzglji8o6n185itep
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 172.31.6.27
 Manager Addresses:
  172.31.6.27:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 51371867a01c467f08af739783b8beafc154c4d7
init version: 949e6fa
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 999 MiB
Name: ip-172-31-6-27
ID: 74IN:O6OA:ML67:ECMY:RNBA:FDPM:KR42:3RNM:UVLR:FT2I:AXAS:TSN5
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: jameshy
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

@thaJeztah
Member

Thanks @jameshy, that could play a role (perhaps a different reason for this to happen); if the env vars are converted into a "map" at some point in the code base, their order can be randomized (a feature of the Go language; https://nathanleclaire.com/blog/2014/04/27/a-surprising-feature-of-golang-that-colored-me-impressed/). A different order would result in the service being updated.

I'll have a look for that as well
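
One way to observe this is to compare the Env list in the service spec across deploys (a sketch, using @jameshy's stack and service names):

$ docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Env }}' service_nginx
$ docker stack deploy --compose-file ./docker-compose.yml service
$ docker service inspect --format '{{ .Spec.TaskTemplate.ContainerSpec.Env }}' service_nginx
# if the order of [A=data B=data] differs between the two inspects,
# the spec is considered changed and the tasks are restarted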

@marascha
Author

marascha commented Dec 25, 2016

I removed all environment variables from my compose file and now everything works as expected (the services do not get restarted when I redeploy the compose file).

@thaJeztah
Member

Thanks @marascha, yes, it looks like it's due to Go "randomizing" key/value pairs then. We'll look into this.

@marascha
Author

marascha commented Jan 7, 2017

@thaJeztah first of all, happy new year :) I was wondering if there have been any updates on this issue? #29732 hasn't been updated for a while. Do you think the fix will make it into the next RC?

@byrnedo

byrnedo commented Jan 23, 2017

I'm also experiencing the environment var map issue.

@vdemeester
Member

@marascha @byrnedo it should be in 1.13.1 (we put it into the milestone)

@vdemeester
Member

Fixed in 1.13.1 and master, closing

@saamalik

saamalik commented Feb 24, 2017

@thaJeztah I'm running into the exact opposite problem: stacks re-deployed using a compose file with locally built images do not update.

The compose file uses a locally built image via myapp:latest. I can easily deploy the stack using docker stack deploy --compose-file docker-compose.yaml dev1. If I run the same docker stack deploy --compose-file docker-compose.yaml dev1 command again, I get a bunch of warnings about not being able to pin the image:

Updating service dev1_myapp (id: zi9n4b4u1fl2et4q8p7qjyz9d)
unable to pin image myappi:latest to digest: errors:
denied: requested access to the resource is denied
unauthorized: authentication required

Otherwise the container is not updated, because the myapp:latest image didn't change. However, I did expect that running docker stack deploy --compose-file docker-compose.yaml dev1 after building a newer myapp image (re-tagged latest) would trigger the shutdown of the old container and the start of a new one. Instead nothing happens (other than the warning messages shown above).

The only workaround to get the new containers started is to either run docker rm -f <container-id> or docker service scale dev1_myapp=0 && docker service scale dev1_myapp=1. Is this a bug, or am I not understanding how stack works with compose files?

Using Docker for Mac (v1.13.1) in Swarm mode (no registries). Thanks!

@theirix

theirix commented Mar 5, 2017

Same for me with Docker for Linux (17.03.0-ce, started in experimental mode) without a registry (only local images). Adding --with-registry-auth to both the stack deploy and service update commands didn't help. The workaround by @saamalik helped, thanks!

@aluzzardi
Member

/cc @nishanttotla

@nishanttotla
Contributor

@saamalik the warnings you get on a second deployment are suspect and likely shouldn't be there. Also, if the image isn't updated, can you tell me why you expect docker stack deploy --compose-file docker-compose.yaml dev1 to trigger a shutdown of the old container?

@thaJeztah
Member

@nishanttotla I think what's happening is that it falls back to using image:tag; if the image:tag doesn't change (but that image is actually updated, and tagged under the same name), the service isn't updated.

This makes sense for "deploy" (where using a fixed tag is bad practice), but for local development it is different behavior than standalone Compose, which will always recreate.
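
The fallback is visible in the image metadata: a locally built image that was never pushed has no repo digest to pin to (a sketch; the image name is illustrative):

$ docker build -t myapp:latest .
$ docker inspect --format '{{ .RepoDigests }}' myapp:latest

[]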

@nishanttotla
Contributor

@thaJeztah ah, okay, it makes sense now, thanks for clarifying. This isn't surprising, because the image ID (which gets updated when the image is updated) isn't part of the service spec. This is a known issue.

While there are workarounds, like

  • using the image ID to create the service (not sure if compose allows this)
  • updating the image tag on rebuild
  • using --force

I think it might be useful to have a flag or another way to disable digest pinning, especially for local workflows. This will be easier to do after #32384. What do you think?
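
For example, the --force workaround from the list above (a sketch, using the service name from @saamalik's example):

$ docker service update --force dev1_myapp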

@thiagocaiubi

thiagocaiubi commented May 3, 2017

Hi @thaJeztah @marascha. I want to share something.

Here's how I'm using docker stack:

  • I'm using variable interpolation for version and digest:
image: awesome-image:${VERSION}@${DIGEST}
  • I was logging in to AWS ECR just before docker stack deploy:
export VERSION=v0.0.1
export DIGEST=sha256:awesomedigest
aws ecr get-login | bash
docker stack deploy --compose-file compose-file.yml --with-registry-auth awesome-stack

With this setup, every time I called my deploy script, all my stack services were restarted with fresh containers.

After lots of experiments I noticed that logging in to ECR was causing that behavior. I've added an extra step that tries to pull the image first; only if that fails does it run the ECR login. So I can say for sure that after 12 hours (the ECR session TTL) my containers will be restarted if I run my deploy script.

Why would authenticating to a private registry cause containers to restart if the image is pinned with both a version and a digest?

Let me know if you want more details about this scenario.
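
A minimal sketch of that pull-first step (names are the ones from my example above; the exact login command may differ per environment):

# try pulling with the current session; only re-login when the pull fails
docker pull "awesome-image:${VERSION}@${DIGEST}" || aws ecr get-login | bash
docker stack deploy --compose-file compose-file.yml --with-registry-auth awesome-stack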

@thaJeztah
Member

@thiagocaiubi services are re-deployed if anything in their definition changes. IIRC, AWS uses short-lived authentication information, so each time you run get-login you get new credentials; passing --with-registry-auth updates the authentication information in the service definition, and if that changes, the services are re-deployed.

(at least that's what I think is happening)

@ifourmanov

@thaJeztah are there any plans for fixing this behaviour? Effectively it's yet another blocker for using stack/swarm on AWS.

@oppianmatt

I'm running into the same problem as the original issue, except using an env_file in docker-compose instead of environment variables.

The content of the env_file doesn't change, but its modification time does: the file is rewritten with the current set of environment parameters, which can be identical, yet the rewrite still changes the file's timestamp.

This causes services to redeploy even if nothing has changed.

> docker system info
Containers: 17
 Running: 4
 Paused: 0
 Stopped: 13
Images: 89
Server Version: 17.06.0-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 769
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: 1q9vn8g0glx4xudw4fns897ao
 Is Manager: true
 ClusterID: kwxnn7a600ffov97vox6e8m55
 Managers: 1
 Nodes: 5
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Root Rotation In Progress: false
 Node Address: x.x.x.x
 Manager Addresses:
  x.x.x.x:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfb82a876ecc11b5ca0977d1733adbe58599088a
runc version: 2d41c047c83e09a6d61d464906feb2a2f3c52aa4
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-78-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.757GiB
Name: xx
ID: RP23:QPHI:D4A4:55UO:3XST:GFLN:GCSB:77BN:UE5A:3UPP:HM26:GQEU
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: xx
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

@sandys

sandys commented Jan 9, 2018

I'm facing a similar issue when using locally built images. Is there a workaround for this? I'm encountering downtime on every deploy.


image uncontainer/stunnel:latest could not be accessed on a registry to record
its digest. Each node will access uncontainer/stunnel:latest independently,
possibly leading to different nodes running different
versions of the image.

Why is docker not calculating the digest locally?

@thaJeztah
Member

why is docker not calculating the digest locally ?

When deploying a stack, tasks can land on any node in the swarm; those nodes won't have the image you built locally, so docker resolves the image from the registry, pins the service definition to that digest, and on the node where the task (container) is deployed, pulls the image using that digest. That way it's guaranteed that all instances of the service run exactly the same version of the image.

You can use --resolve-image=never, which will skip resolving the digest and deploy the service using the image:name; but if an image (image:name) exists in the registry, and the node where the task is deployed is able to communicate with the registry, it will docker pull that image (which may be an older version of the image if you did not push the latest version).

@sandys

sandys commented Jan 9, 2018

@thaJeztah thanks for that. Is there a way to let "stack deploy" know to pin the service to image:name, such that the idempotency still works?

Will the "reasonable" answer be to have a local docker registry running ?

@thaJeztah
Member

Using --resolve-image=never (and an image that's only available locally) will pin the service to image:tag;

docker image build -t example:latest -<<EOF
FROM nginx:alpine
LABEL mylabel=foo
EOF


docker stack deploy --resolve-image=never -c- mystack <<EOF
version: '3'
services:
  app:
    image: example:latest
EOF

Re-deploying the stack won't re-deploy the services;

docker stack deploy --resolve-image=never -c- mystack <<EOF
version: '3'
services:
  app:
    image: example:latest
EOF

And verify that the service is not re-deployed:

docker service ps mystack_app

ID                  NAME                IMAGE               NODE                    DESIRED STATE       CURRENT STATE                    ERROR               PORTS
bws7zgpcdh1o        mystack_app.1       example:latest      linuxkit-025000000001   Running             Running less than a second ago                       

However, if the local image was updated, it also won't re-deploy the service (because there's no change in the service definition), so to force an update of the service, you'd have to either:

  • update a property of the service so that it needs to be re-deployed
  • change the tag of the image (:v1.0.0 -> :v1.0.1)
  • use docker service update --force --no-resolve-image

The last option forces a re-deploy, but keeps the service "pinned" to image:tag (due to --no-resolve-image); here I build a new version of the image:

docker image build -t example:latest -<<EOF
FROM nginx:alpine
LABEL mylabel=my-new-version-of-the-image
EOF

Then, update the service with --force and --no-resolve-image:

docker service update --force --no-resolve-image mystack_app

Which forces a re-deploy of the service's instance:

docker service ps mystack_app

ID                  NAME                IMAGE               NODE                    DESIRED STATE       CURRENT STATE                     ERROR               PORTS
a1wbde2iubyq        mystack_app.1       example:latest      linuxkit-025000000001   Running             Running less than a second ago                        
ycivo81pog1t         \_ mystack_app.1   example:latest      linuxkit-025000000001   Shutdown            Shutdown less than a second ago        

@sandys

sandys commented Jan 9, 2018 via email

@thaJeztah
Member

You're welcome! There are still some things that need to be looked into; the problem is a bit that docker stack deploy is designed for deploying your stack (i.e., to "a swarm cluster"), which means it should not depend on the local image cache.

However, that design doesn't fit people coming from docker-compose (which is designed for local development) who want to use docker stack deploy for local development (or for testing a local image before pushing/deploying), so possibly a new option needs to be added for that scenario.

@sandys

sandys commented Jan 9, 2018 via email
