swarm state incorrect after docker daemon crashed in swarm manager #26223
Labels
area/swarm
kind/bug
version/1.12
This issue was found in the scenario described in #26193.
Output of `docker version`:

Output of `docker info` (in swarm manager):

Output of `docker node ls` (in swarm manager):
Additional environment details (AWS, VirtualBox, physical, etc.):
AWS, Red Hat Enterprise Linux 7.2 (HVM), SSD Volume Type - ami-d1315fb1
Steps to reproduce the issue:
I have a swarm (swarm mode) with three managers: one master and two slaves.
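A three-manager setup like the one above can be created roughly as follows (a sketch; the advertise address and the join token are placeholders, not values from this report):

```shell
# On the first node (becomes the initial leader); 10.0.0.1 is a placeholder IP
docker swarm init --advertise-addr 10.0.0.1

# Print the join command that adds further nodes as managers
docker swarm join-token manager

# On each of the other two nodes, run the printed command, e.g.:
docker swarm join --token <manager-token> 10.0.0.1:2377
```

With three managers the swarm can tolerate the loss of one manager and still keep a Raft quorum, which is why the failover described below is expected to work.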
Describe the results you received:
The docker daemon crashed on the swarm manager (master) node.
When the master went down, the swarm promoted one of the slaves to be the new master, but the swarm state does not seem correct: when I deployed a new service (`docker service create`), the service's tasks hung in the New state.
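The behavior can be observed with commands like the following (a sketch; the service name `web` and the `nginx` image are hypothetical, not from this report):

```shell
# On the old master: simulate the daemon crash
sudo kill -9 "$(pidof dockerd)"

# On one of the surviving managers: confirm a new leader was elected
# (the MANAGER STATUS column should show "Leader" on a different node)
docker node ls

# Deploy a test service; in this bug its tasks never leave the New state
docker service create --name web --replicas 2 nginx

# Inspect task state; CURRENT STATE stays "New" instead of reaching "Running"
docker service ps web
```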
Describe the results you expected:
The swarm should continue working as normal after the failover.
Additional information you deem important (e.g. issue happens only occasionally):