Adding a node twice will destroy the swarm #34722
Comments
@Fank is the node that you put into drain mode the leader?
I'm not sure, but I don't think so, because it was offline for hours, so it should have had the status "Down".
@Fank Could you share the daemon logs for all the nodes? Thanks!
Sorry, but I think they are gone; logrotate removed them.
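If the hosts run systemd, the rotated files are often not the only copy: the daemon log can usually still be read back from the journal. This assumes the default docker.service unit name; the time window is a placeholder:

journalctl -u docker.service --no-pager --since "<start>" --until "<end>"   # dump the daemon log for the window of the incident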
The swarm cluster has been destroyed, either by adding a new node once or for no apparent reason. Even after restarting Docker several times, we get the same message: the nodes report themselves as working, but their status is "Down", and in fact they are not working. I wonder why node10 lost its manager status; it later got its manager status back, and node11 was demoted, yet the nodes still show "Down" and still do not work.
Can you tell me how many IT companies are using docker-ce for their live services?
I see you have 2 manager nodes. You need an odd number of managers to elect a leader in a quorum: at least 3 managers are required to sustain the failure of 1 manager node.
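For reference, this follows from the Raft majority rule used by swarm mode: a swarm with N managers needs floor(N/2) + 1 of them reachable to elect a leader, so it tolerates at most ceil(N/2) - 1 manager failures:

managers (N)   quorum needed   manager failures tolerated
1              1               0
2              2               0
3              2               1
5              3               2

With 2 managers, losing either one therefore leaves the swarm without a leader.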
It happened even when we had 3 manager nodes, on Docker 17.06.2-ce.
We had the same issue a couple of days ago with 3 managers. It happened while I was recycling the nodes to upgrade to docker-ce 17.09. I recycled all the non-manager nodes one by one and then started working on the managers. The problem appeared when the last manager was recycled.
@yunghoy what version and configuration are you running? (i.e., at least make sure to post the output of docker version and docker info.)
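A minimal set of commands for collecting that state; the --format selectors below are one reasonable choice, not something requested verbatim in the thread:

docker version
docker info --format '{{.Swarm.LocalNodeState}}'     # swarm membership state of this node
docker info --format '{{.Swarm.ControlAvailable}}'   # true when this node is a manager
docker node ls                                       # on a manager: status and role of every node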
The same issue here. I have only 3 nodes, all of which hold both roles, manager and worker.
Description
Steps to reproduce the issue:
1. Drain a node:
   docker node update --availability drain node01
2. Try to join the node to the swarm again (see the sketch after these steps for the usual clean way to do this):
   docker swarm join --token SWMTKN-1-2x0u0zht9x3us2bcxzr3melvavpzfh82jfbzpxh0sriyqt5sou-54viclw2nj7lywfo8bkcz7suq 192.168.85.102:2377
   This returned:
   Error response from daemon: Timeout was reached before node was joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
3. The node then lost its network connection (this may have been my fault, but I can no longer tell). From that point on, every swarm command such as docker node ls returned:
   Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.
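A sketch of the usual way to re-add a node that still has a stale entry in the member list, assuming the cluster still has a quorum of managers; <stale-node-id> and <join-token> are placeholders:

# on a healthy manager: demote the stale entry if it was a manager, then remove it
docker node demote <stale-node-id>
docker node rm <stale-node-id>
# on the host being re-added: discard any old swarm state, then join fresh
docker swarm leave --force
docker swarm join --token <join-token> 192.168.85.102:2377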
Describe the results you received:
Only receiving
Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.
when executing swarm commands.

Describe the results you expected:
Existing "drain" node will be overwritten, instead of beeing added.
Additional information you deem important (e.g. issue happens only occasionally):
Maybe related to #34384
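For completeness: once a swarm has permanently lost its leader, the documented last resort is to rebuild the cluster from a surviving manager. This is disruptive and must be run on a single node only:

# on the most up-to-date surviving manager
docker swarm init --force-new-cluster --advertise-addr 192.168.85.102:2377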
Output of docker version:

Output of docker info:

Additional environment details (AWS, VirtualBox, physical, etc.):
3 physical hosts (HPE ProLiant blades)