When a new watchtower image is detected, the old instance manages to spawn two new instances before being killed off by the new container(s).
```
time="2019-04-06T12:43:55Z" level=info msg="First run: 2019-04-06 12:44:25 +0000 UTC m=+30.189896205"
time="2019-04-06T12:44:37Z" level=info msg="Found new containrrr/watchtower:latest image (sha256:a5a429658e9e194351f45c4d102442a125c1544366b1f9af8dc8d0cf45525e09)"
time="2019-04-06T12:44:53Z" level=info msg="Creating /AjWwhTHctcuAxhxKQFDaFpLSjFbcXoEF"
time="2019-04-06T12:44:53Z" level=error msg="Error response from daemon: Conflict. The container name \"/AjWwhTHctcuAxhxKQFDaFpLSjFbcXoEF\" is already in use by container \"25317ddf47a28160107306a98f3a827318c346c2efa9b2437beaee59224b3210\". You have to remove (or rename) that container to be able to reuse that name."
time="2019-04-06T12:45:08Z" level=info msg="Found new containrrr/watchtower:latest image (sha256:a5a429658e9e194351f45c4d102442a125c1544366b1f9af8dc8d0cf45525e09)"
time="2019-04-06T12:45:26Z" level=error msg="Error response from daemon: No such container: 29b9ccb29b900d467d489d636fd18cc0c45969c385b933f0f45e79c847d90681"
```
The bug has been confirmed to be present in v2tec/watchtower:0.2.1 and v2tec/watchtower:0.3.0 as well, so this is nothing new. It is annoying, however, especially if you have notifications enabled, due to the amount of spam generated at the moment of upgrade.
```
CONTAINER ID   IMAGE                     COMMAND                  CREATED          STATUS          PORTS   NAMES
055fafae2726   v2tec/watchtower:latest   "/watchtower --inter…"   16 minutes ago   Up 16 minutes           XVlBzgbaiCMRAjWwhTHctcuAxhxKQFDa
07f043515501   v2tec/watchtower:latest   "/watchtower --inter…"   16 minutes ago   Up 16 minutes           watchtower
```
If anyone has input on what happens here, or a good way to solve it, that would be greatly appreciated. Otherwise, I'm thinking of adding some kind of dormant/sleep mode that forces the old instance to stop polling for updates as soon as it has created the new watchtower instance.
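One possible shape for that dormant mode, sketched very roughly (the `Updater` type and its fields are hypothetical, not anything in watchtower today): once the old instance has started its replacement, flip a flag so that every later scheduled run is a no-op while it waits to be stopped.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Updater is a hypothetical sketch of the proposed dormant mode: after
// the old instance spawns its replacement, it stops polling for updates,
// so it can never spawn a second replacement before being killed.
type Updater struct {
	dormant atomic.Bool
	spawned int // stand-in counter for replacement containers started
}

// RunOnce models one scheduled update pass.
func (u *Updater) RunOnce(selfIsStale bool) {
	if u.dormant.Load() {
		return // replacement already started; just wait to be stopped
	}
	if selfIsStale {
		u.spawned++ // stand-in for starting the new watchtower container
		u.dormant.Store(true)
	}
}

func main() {
	u := &Updater{}
	u.RunOnce(true) // detects its own stale image, spawns a replacement
	u.RunOnce(true) // dormant now: no second spawn
	fmt.Println(u.spawned) // → 1
}
```

With the flag in place, the double-spawn in the logs above would be impossible even if the old instance survives several more polling intervals.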
First off, thanks for the cleanup here -- it's appreciated. :) This bug is actually one of a couple that finally drove me to roll my own (less flexible) update scripts for now. I would be interested in getting it sorted out though, and I'm willing to help test.
After looking things over, here is what I see...
The `com.centurylinklabs.watchtower` label is used to identify actual Watchtower containers in the following workflow:
1. Attempt to pull new images for monitored containers (potentially including Watchtower) and check whether container images are stale (old) compared to what is pulled (`client.IsContainerStale(container)`)
2. If a Watchtower container is stale, rename it to a random alphanumeric string rather than stopping it (`client.RenameContainer(container, randName())`)
3. Start a new Watchtower container using the previous Watchtower container's name (`client.StartContainer(container)`)
4. Clean up previous Watchtower images if configured (`client.RemoveImage(container)`)
5. Finally, allow the new Watchtower container to stop any older Watchtower containers on startup, and potentially clean up old images (`CheckPrereqs(client container.Client, cleanup bool)`)
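To make the rename/restart portion of that workflow concrete, here is a minimal sketch against an in-memory stand-in for the client (the `Client`, `Container`, and `NewClient` names here are illustrative only; the real code talks to the Docker API):

```go
package main

import "fmt"

// Container is a minimal stand-in for a running container.
type Container struct {
	Name  string
	Image string
}

// Client is an in-memory stand-in for watchtower's Docker client.
type Client struct {
	containers map[string]*Container // keyed by container name
}

func NewClient() *Client {
	return &Client{containers: map[string]*Container{
		"watchtower": {Name: "watchtower", Image: "containrrr/watchtower:latest"},
	}}
}

// randName stands in for the random alphanumeric name generator;
// fixed here so the sketch is deterministic.
func randName() string { return "AjWwhTHctcuAxhxKQFDaFpLSjFbcXoEF" }

// SelfUpdate mirrors steps 2-3 above for a stale Watchtower
// container named orig.
func (c *Client) SelfUpdate(orig string) error {
	old, ok := c.containers[orig]
	if !ok {
		return fmt.Errorf("no such container: %s", orig)
	}
	// Step 2: rename the stale instance instead of stopping it.
	tmp := randName()
	delete(c.containers, orig)
	old.Name = tmp
	c.containers[tmp] = old
	// Step 3: start the replacement under the original name.
	c.containers[orig] = &Container{Name: orig, Image: old.Image}
	// Steps 4-5 (image cleanup, CheckPrereqs) fall to the new instance.
	return nil
}

func main() {
	c := NewClient()
	if err := c.SelfUpdate("watchtower"); err != nil {
		panic(err)
	}
	fmt.Println(len(c.containers)) // → 2: the renamed old instance plus the new one
}
```

The sketch makes the hand-off visible: after `SelfUpdate`, both the renamed old instance and the new one exist, which is exactly why the new instance's startup cleanup matters so much.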
What I'm noticing is that the "new" Watchtower container doesn't seem to stop and remove the old Watchtower container(s) via the early `CheckPrereqs()` call. I'm testing with a 0.3.2 image that should be outputting your new log line ("Found multiple running watchtower instances. Cleaning up"), which I never see, even in debug mode. So the original container is still around (under the alphanumeric temp name), which goes on to cause trouble (e.g. it sees itself as a stale instance to update again, and so on). In the end, things stabilize for me with two updated Watchtower containers, both named alphanumeric strings.
It seems that if `CheckPrereqs()` were functioning properly and removing the old Watchtower containers on startup, the whole issue might be solved. Maybe this is what you were noticing as well?