Swarm: No inter-container alias/hostname resolution #31238

Closed
harshjv opened this issue Feb 21, 2017 · 6 comments

harshjv commented Feb 21, 2017

Containers in the stack can't connect to other containers using their network aliases.

This is my docker-compose.yml file:

version: "3"

services:
  mongo:
    image: mongo
    ports:
      - 27017:27017
    networks:
      mynet:
        aliases:
          - mongo
    deploy:
      placement:
        constraints: [node.role == manager]

  redis:
    image: redis
    ports:
      - 6379:6379
    networks:
      mynet:
        aliases:
          - redis
    deploy:
      placement:
        constraints: [node.role == worker]

  app:
    image: node
    ports:
      - 3030:3030
    networks:
      - mynet
    depends_on:
      - redis
      - mongo
    deploy:
      mode: replicated
      replicas: 2
      placement:
        constraints: [node.role == worker]

networks:
  mynet:

daemon.json

{ "userland-proxy": false }

Description

root@mongo-container:/# ping mongo # works
root@mongo-container:/# ping redis # doesn't work

root@redis-container:/# ping redis # works
root@redis-container:/# ping mongo # doesn't work

root@app-container:/# ping mongo # doesn't work
root@app-container:/# ping redis # doesn't work

Steps to reproduce the issue:

  1. Deploy the stack: docker stack deploy -c docker-compose.yml app
  2. Exec into the running containers and try to ping the other services by their aliases (see the sketch after this list).
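
A minimal way to check resolution from inside a running task; the container ID below is a placeholder (Swarm generates the actual task/container names), and nslookup may not be present in every image:

# list the stack's tasks to find which node each container runs on
docker stack ps app

# on that node, open a shell in the container (ID is an example)
docker exec -it <container-id> sh

# inside the container, query Docker's embedded DNS (127.0.0.11) directly
nslookup redis 127.0.0.11
ping -c 1 mongo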

Describe the results you received:
Containers are not able to resolve other containers' aliases/hostnames.

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 06:50:14 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.1
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   092cba3
 Built:        Wed Feb  8 06:50:14 2017
 OS/Arch:      linux/amd64
 Experimental: true

Output of docker info:

Containers: 2
 Running: 2
 Paused: 0
 Stopped: 0
Images: 14
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 41
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
Swarm: active
 NodeID: 0wf5g7qztz9n0mqqn29nwpn6o
 Is Manager: true
 ClusterID: ukgmh2ci2x922oj0wc83b4sjr
 Managers: 1
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 10.132.52.172
 Manager Addresses:
  10.132.52.172:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1
runc version: 9df8b306d01f59d3a8029be411de015b7304dd8f
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-63-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 992.4 MiB
Name: bounty1
ID: VLXO:ZKIE:I2O7:LYGV:IFZD:UK33:TKEG:VYFZ:PLPE:3KJB:GD6N:C5XZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 10.132.52.172:5000
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):
Platform: DigitalOcean, 1GB, 3 nodes

harshjv changed the title from "Swarm: No inter-container alias/hostname" to "Swarm: No inter-container alias/hostname resolution" on Feb 21, 2017
@dimaspivak

Seeing something very similar in my Compose-less use case of running Hadoop clusters in Docker containers. Using --network-alias and attaching a container to a user-defined bridge network at docker run time lets me resolve other containers on the same bridge network without a problem (e.g. a docker run with --network-alias node-1 --network cluster lets me ping that container from another one on the network using node-1.cluster). When I try to move this setup over to an overlay network, though, DNS fails unless I specify a container name explicitly, which I don't want to do because I want a single machine to be able to host containers that are resolvable (within their respective networks) as node-1.cluster and node-1.cluster2.

It looks like --network-alias isn't being consulted by the embedded DNS in the overlay network. @sanimej and @mavenugo, you guys previously helped resolve an issue I had with this on bridge networks, so perhaps you can shine some light on what's going on here?
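
A rough sketch of the two setups described above (image and network names are placeholders; the overlay is created as attachable so plain docker run containers can join it):

# user-defined bridge network: the alias resolves fine
docker network create cluster
docker run -d --network cluster --network-alias node-1 some/hadoop-image
docker run --rm --network cluster busybox ping -c 1 node-1.cluster

# attachable overlay network: same flags, but the alias reportedly does not resolve
docker network create -d overlay --attachable cluster-overlay
docker run -d --network cluster-overlay --network-alias node-1 some/hadoop-image
docker run --rm --network cluster-overlay busybox ping -c 1 node-1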

aboch (Contributor) commented Mar 1, 2017

@dimaspivak You may be hitting #31015

@dimaspivak

Yep, that looks to be it, @aboch. Adding even a gibberish container name at docker run-time sorta fixes the issue, but then makes a mess of things for my particular use case because, when --name is specified, reverse DNS resolution ends up using that name over the network alias (see #20847 for context). A big no-no for us enterprisers running Hadoop.
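
A sketch of that workaround and its side effect as described above (names and image are placeholders):

# giving the container any explicit --name makes the alias resolve on the overlay...
docker run -d --name throwaway --network cluster-overlay --network-alias node-1 some/hadoop-image
docker run --rm --network cluster-overlay busybox nslookup node-1

# ...but a reverse (PTR) lookup of that container's IP then returns "throwaway"
# rather than "node-1", which is what breaks Hadoop's hostname checks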

@theindra

I have docker 18.03.0-ce installed and still experience the same issue.
What can I do here?

@thaJeztah (Member)

@theindra could you open a new issue with details / reproduction steps?

@theindra

Sorry... I found the issue. I recreated the network as an overlay network and then it worked. Before, it was a bridge network.
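
For anyone hitting the same thing, a minimal sketch of what that looks like (the network name is an example); an overlay network spans the whole swarm, while a plain bridge network only exists on a single host:

docker network rm mynet
docker network create -d overlay --attachable mynet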
