
Requests to arbitrary swarm nodes are not redirected to containers #23877

Closed
Umkus opened this issue Jun 23, 2016 · 8 comments · Fixed by #24237
Labels: area/networking, area/swarm, priority/P1, version/1.12

@Umkus

Umkus commented Jun 23, 2016

Output of docker version:

Client:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:35:33 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:45:29 2016
 OS/Arch:      linux/amd64

Output of docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 1.12.0-rc2
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 6
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: host null bridge overlay
Swarm: active
 NodeID: 8arpft1x5v47sdg91szan8us7
 IsManager: Yes
 Managers: 1
 Nodes: 3
 CACertHash: sha256:8c14026edbbf1619476f4182a787320b9c6a90253e621030f1d9d498b6cbf91e
Runtimes: default
Default Runtime: default
Security Options: seccomp
Kernel Version: 4.4.13-boot2docker
Operating System: Boot2Docker 1.12.0-rc2 (TCL 7.1); HEAD : 52952ef - Fri Jun 17 21:01:09 UTC 2016
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 995.9 MiB
Name: node1
ID: KLEC:VGXM:RFC5:7GNL:6GAN:5S72:U6KI:QPWU:CN5I:JODB:7ABE:DIHE
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 38
 Goroutines: 125
 System Time: 2016-06-22T22:47:04.233777253Z
 EventsListeners: 0
Username: umkus
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox
Insecure Registries:
 127.0.0.0/8

Additional environment details (AWS, VirtualBox, physical, etc.):
I'm using docker-machine (version 0.8.0-rc1, build fffa6c9) to create three nodes and am basically following the video demo. The demo states that if I start a single service with a port mapping, I should be able to access the chosen port on any node of the swarm, which does not seem to be the case.

Steps to reproduce the issue:

  1. docker-machine create --driver virtualbox node1
  2. docker-machine create --driver virtualbox node2
  3. docker-machine create --driver virtualbox node3
  4. docker-machine ssh node1 docker swarm init
  5. docker-machine ssh node2 docker swarm join $(docker-machine ip node1):2377
  6. docker-machine ssh node3 docker swarm join $(docker-machine ip node1):2377
  7. eval $(docker-machine env node1)
  8. docker service create --name vote -p 8080:80 instavote/vote
  9. curl -s -o /dev/null -I -w "%{http_code}\n" 192.168.99.10{0..2}:8080

Describe the results you received:
Two out of three requests fail. Only the request to the node where the service actually runs returns 200.

200
000
000

Describe the results you expected:
All three requests return a 200 response code.

200
200
200
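
For reference, one way to confirm which node the single task landed on (the docker service ps name assumes the final 1.12 CLI; early release candidates used docker service tasks):

$ docker service ps vote    # NODE column shows where the task is scheduled
$ docker node ls            # all three nodes should be listed as Ready

With the routing mesh working, port 8080 should answer on every node regardless of where the task is scheduled.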
@vdemeester
Member

/cc @mavenugo @mrjana

@cpuguy83
Member

Hmm... is the ip_vs module loaded in boot2docker?
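
One way to check, reusing the node name from the reproduction steps:

$ docker-machine ssh node1 "lsmod | grep ip_vs"   # no output means the module is not loaded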

@cpuguy83
Member

cpuguy83 commented Jun 24, 2016

Oh, you probably need to accept the nodes into the cluster.

$ docker $(docker-machine config) node accept <node id> # for each node

@Umkus
Author

Umkus commented Jun 24, 2016

@cpuguy83 Yes, the ip_vs module is loaded. And I did run docker swarm join from each node as per my reproduction steps; docker node ls already showed the nodes' membership as "accepted". I ran the command you mentioned for good measure, but it didn't help.
I did, however, try to reproduce this with two actual VPS nodes, and it worked flawlessly there!
So this might actually be a boot2docker issue.

@cpuguy83
Member

@Umkus Thanks for checking

@tiborvass added the area/swarm and priority/P1 (Important: a top priority and a must-have for the next release) labels and removed the priority/P2 (normal/default priority) label on Jun 27, 2016
@beenanner

beenanner commented Jun 28, 2016

@Umkus I ran into this issue while testing with VirtualBox locally as well, and got it to work by passing the swarm manager's IP address via --listen-addr when initializing the manager. I think it will work if you run the following during setup.

docker swarm init --listen-addr $(docker-machine ip node1):2377
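
As a sanity check after re-initializing the swarm this way (reusing the commands from the original reproduction steps):

$ docker node ls                                                          # all three nodes should report STATUS Ready
$ curl -s -o /dev/null -I -w "%{http_code}\n" 192.168.99.10{0..2}:8080    # should now print 200 three times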

@mrjana
Contributor

mrjana commented Jun 29, 2016

@Umkus @beenanner Yeah, the most likely problem on both boot2docker and a plain VirtualBox VM is that the primary host interface, the one holding the default gateway, is the NAT interface. If you don't pass an explicit --listen-addr to choose a second address that lets the nodes reach each other over the host-only network, the gossip channel won't work. What you are seeing is a classic symptom of a broken gossip channel: each node only knows about the instances running on itself. So maybe the answer is to fail swarm init when the host has more than one interface and --listen-addr is not given.
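
A minimal way to see the ambiguity described above on a VirtualBox/boot2docker node (interface names and the 10.0.2.15 NAT address are typical VirtualBox defaults, not guaranteed):

$ docker-machine ssh node1 ip -4 addr show
# eth0: inet 10.0.2.15/24       <- NAT interface with the default gateway; not reachable from the other VMs
# eth1: inet 192.168.99.100/24  <- host-only interface; the address the other nodes can actually reach

$ docker-machine ssh node1 docker swarm init --listen-addr 192.168.99.100:2377
# same idea as beenanner's suggestion: bind the swarm control/gossip traffic to the host-only address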

@tiborvass
Contributor

Related to #23828
