
docker swarm init --force-new-cluster --autolock fails #36852

Open

Vratislav opened this issue Apr 13, 2018 · 1 comment

Comments

@Vratislav

Description

After experiencing #36851, we tried to restore our autolocked swarm using docker swarm init --advertise-addr eth1 --force-new-cluster --autolock. The command fails with:

Error response from daemon: Swarm is encrypted and needs to be unlocked before it can be used. Please use "docker swarm unlock" to unlock it.

So we tried: docker swarm unlock

And the response to this is:

Error: This node is not part of a swarm

The node is suddenly neither part of the old swarm nor of a new one.
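
For anyone hitting the same state, the daemon's view of the local swarm can be checked with docker info; these checks are an illustrative addition, not output we captured at the time:

    # prints inactive, pending, active, locked or error for the local node
    docker info --format '{{.Swarm.LocalNodeState}}'
    # prints true while the daemon still considers this node a manager
    docker info --format '{{.Swarm.ControlAvailable}}'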

Steps to reproduce the issue:

  1. On a running manager, invoke docker swarm init --force-new-cluster --autolock

Describe the results you received:

Error response from daemon: Swarm is encrypted and needs to be unlocked before it can be used. Please use "docker swarm unlock" to unlock it.

Describe the results you expected:

A new swarm is created from the current cluster state, as advertised.

Output of docker version:

Client:
 Version:       18.03.0-ce
 API version:   1.37
 Go version:    go1.9.4
 Git commit:    0520e24
 Built:         Wed Mar 21 23:09:15 2018
 OS/Arch:       linux/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.03.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   0520e24
  Built:        Wed Mar 21 23:13:03 2018
  OS/Arch:      linux/amd64
  Experimental: false

Output of docker info:

Containers: 9003
 Running: 8
 Paused: 0
 Stopped: 8995
Images: 255
Server Version: 18.03.0-ce
Storage Driver: overlay
 Backing Filesystem: xfs
 Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: vzwcm7ogtykylyaavtx5bxhip
 Is Manager: true
 ClusterID: todqdbbylj0efvqys1qg47i2f
 Managers: 3
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: true
 Root Rotation In Progress: false
 Node Address: 192.168.24.104
 Manager Addresses:
  192.168.24.104:2377
  192.168.24.105:2377
  192.168.24.106:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-693.21.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.623GiB
Name: swarm-dev1.xx.xxxxxx.cz
ID: CTWX:4TBD:WIUJ:TOJE:CD3N:NSVR:FLOD:IVDT:64HH:NAGC:WV77:3WDD
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
@rgeraskin

Hello. I have the same issue with any locked swarm cluster.

For example, because of this issue we cannot recover from losing quorum in a swarm cluster that has the autolock feature enabled.
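
For reference, the documented way to recover from a lost quorum (https://docs.docker.com/engine/swarm/admin_guide/) is to run the following on a surviving manager (the address is a placeholder):

    docker swarm init --force-new-cluster --advertise-addr <node-ip>:2377

With autolock enabled this is exactly the command that fails, as the steps below show.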

Steps to reproduce:

/ # docker swarm init --autolock
Swarm initialized: current node (o38p6qfmxc8yyo1uywsmynq9h) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-02wj856nejpyievagxqb58iyqejclong3lkjr6vtg59a22pphx-8flo6xx36nta2ajyyuhubzyyc 172.26.0.2:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

    SWMKEY-1-w+DvLCriFbeHbkdQMQsxFlfZabubC+SmsE9YHI0oLpU

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.
/ # docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
o38p6qfmxc8yyo1uywsmynq9h *   a1b753b98d2f        Ready               Active              Leader              19.03.11
/ # docker swarm init --force-new-cluster
Error response from daemon: Swarm is encrypted and needs to be unlocked before it can be used. Please use "docker swarm unlock" to unlock it.
/ # docker swarm unlock
Error: This node is not part of a swarm
/ # docker swarm init --force-new-cluster
Error response from daemon: Swarm is encrypted and needs to be unlocked before it can be used. Please use "docker swarm unlock" to unlock it.
/ #
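
After the failed --force-new-cluster attempts the daemon reports the swarm as inactive (see Swarm: inactive in the docker info output below); a one-line check for that state, added here for illustration, is:

    docker info --format '{{.Swarm.LocalNodeState}}'    # prints "inactive" on this node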

Output of docker version:

Client: Docker Engine - Community
 Version:           19.03.11
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        42e35e61f3
 Built:             Mon Jun  1 09:09:53 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.11
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       42e35e61f3
  Built:            Mon Jun  1 09:16:24 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
/ #

Output of docker info:

Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.11
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.3.0-51-generic
 Operating System: Alpine Linux v3.12 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 981.3MiB
 Name: a1b753b98d2f
 ID: XTNH:W2QP:GD5F:IFCH:HRSG:LK7B:RQ7Z:EURG:DSFI:ERT4:VDXT:GCGH
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

WARNING: No swap limit support

logs: https://pastebin.com/tzPQgw83

Docker versions 17.12.1, 18.09.9, and 19.03.11 are affected.
