
Docker 1.12.2 joining a new swarm downgrades network to local scope on manager #27796

Open
coryleeio opened this issue Oct 26, 2016 · 4 comments
Labels: area/networking, area/swarm, kind/bug, version/1.12

Comments


coryleeio commented Oct 26, 2016

Description

Steps to reproduce the issue:
On manager:
docker swarm init....

On workers:
docker swarm join.....

On manager:
$ docker network create --opt encrypted --driver overlay foobar
$ docker service create --constraint node.hostname==A --name a --network foobar nginx:1.11.5-alpine
$ docker service create --constraint node.hostname==B --name b --network foobar nginx:1.11.5-alpine
$ docker service create --constraint node.hostname==C --name c --network foobar nginx:1.11.5-alpine

On all nodes:
$ docker network ls | grep foobar
=> 7o1bj557ovsr foobar overlay swarm

Testing here with exec, every container can curl every other container, as expected. The network was created on each machine as it was needed; everything is peachy.
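For reference, the connectivity test was roughly the following (the wget-based check and the `--filter name=a` lookup are my sketch of it, not the exact commands used):

```shell
# On node A: find the local task container for service "a",
# then fetch the other services by name over the overlay network.
# nginx:1.11.5-alpine ships busybox wget, so no curl is needed.
CID=$(docker ps -q --filter name=a)
docker exec "$CID" wget -qO- http://b >/dev/null && echo "a -> b ok"
docker exec "$CID" wget -qO- http://c >/dev/null && echo "a -> c ok"
```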

On all nodes:
docker swarm leave --force

On all nodes:
$ docker network ls | grep foobar
=> 7o1bj557ovsr foobar overlay swarm

Still looks good... though why is the network still there after leaving the swarm?

On manager:
docker swarm init

On workers:
docker swarm join....

On manager:
$ docker network ls | grep foobar
=> 7o1bj557ovsr foobar overlay local

On workers:
$ docker network ls | grep foobar
=> 7o1bj557ovsr foobar overlay swarm

The IDs still match, but the network on the manager got downgraded to local scope.
The swarm-scoped network still exists on the worker nodes.
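The downgrade can be confirmed directly, rather than by eyeballing the `docker network ls` output; a quick check on each node:

```shell
# Should print "swarm" on every node; after the rejoin the manager
# prints "local" while the workers still print "swarm".
docker network inspect -f '{{.Scope}}' foobar
```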

Now I run my example on the newly created swarm:
$ docker network create --opt encrypted --driver overlay foobar

But, of course, I get a "network already exists" error.

So I remove the foobar network:
$ docker network rm foobar
=> foobar

On manager:
$ docker network ls | grep foobar
=> nothing
On workers:
$ docker network ls | grep foobar
=> 7o1bj557ovsr foobar overlay swarm

The foobar network still exists on the workers as a swarm-scoped overlay network. The local-scoped copy was deleted from the manager, but the deletion did not propagate to the workers because, of course, the network was local-scoped.

On the manager, I run my example again:
$ docker network create --opt encrypted --driver overlay foobar
$ docker service create --constraint node.hostname==A --name a --network foobar nginx:1.11.5-alpine
$ docker service create --constraint node.hostname==B --name b --network foobar nginx:1.11.5-alpine
$ docker service create --constraint node.hostname==C --name c --network foobar nginx:1.11.5-alpine

On manager:
docker network ls | grep foobar
=> 19wdbjr7nj4k foobar overlay swarm
On worker:
docker network ls | grep foobar
=> 7o1bj557ovsr foobar overlay swarm

Note that the IDs differ, but the containers launch happily and connect to two different networks that are both named foobar. No error is shown about the newly created foobar network failing to propagate to the workers due to a name collision, as one might expect.

(docker ps shows each container running happily on each node, but they won't be able to communicate, since they are on different networks that share the same name, scope, and driver but have different IDs.)
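Running the same inspect on every node makes the split obvious, since the full IDs differ even though name, driver, and scope all match:

```shell
# Run on each node and compare the output: identical names,
# disjoint networks.
docker network inspect -f '{{.Id}} {{.Name}} {{.Driver}} {{.Scope}}' foobar
```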

Describe the results you received:
The swarm-scoped network was downgraded to local scope on the manager, so its deletion was never propagated to the worker nodes. Also, after leaving the swarm on a worker node, the swarm-scoped network was still present, even though all of the swarm's containers had been removed.

Describe the results you expected:
For the swarm-scoped networks to be removed when leaving the swarm, or for an error indicating that the foobar network couldn't be propagated to a worker because a network of the same name already exists there.
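As a stopgap until this is fixed, removing the stale copy on each worker before recreating the network should avoid the name collision (a workaround sketch, not something verified in this report):

```shell
# On each worker that still lists the old network:
docker network rm foobar        # fails if containers are still attached
docker network ls | grep foobar # should now print nothing
```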

Output of docker version:

Client:
 Version:      1.12.2
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   bb80604
 Built:        Tue Oct 11 18:19:35 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.2
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   bb80604
 Built:        Tue Oct 11 18:19:35 2016
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 4
Server Version: 1.12.2
Storage Driver: aufs
 Root Dir: /vault/docker/aufs
 Backing Filesystem: extfs
 Dirs: 16
 Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null bridge host overlay
Swarm: active
 NodeID: av6fskaruu92s9k3exh9wu736
 Is Manager: true
 ClusterID: 5zq4ckc9he3s7n86rra4un7yv
 Managers: 1
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 10.47.1.110
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-98-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.676 GiB
Name: manager.milkyway.visiblehealth.com
ID: NANA:HC6F:JYFY:US4A:YU7Q:UKON:ZVC6:6Y4F:6CZF:QAPN:SBNA:J3JD
Docker Root Dir: /vault/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8

Additional environment details (AWS, VirtualBox, physical, etc.):
AWS
Ubuntu 14.04.5 LTS

@cpuguy83 added the area/networking, area/swarm, and kind/bug labels on Oct 26, 2016
@cpuguy83 (Member)

ping @mrjana

@thaJeztah (Member)

Thanks for a very well-written bug report, @coryleeio

ping @mrjana @aboch PTAL

@coryleeio (Author)

sure, thanks =]

@aluzzardi (Member)

/cc @mavenugo
