
Service can't be reached by defined hostname #2925

Closed
brunoborges opened this issue Feb 15, 2016 · 45 comments

Comments

@brunoborges

Given the following docker-compose.yml file:

version: '2'

networks: 
  mynet:
    driver: bridge

services:

  master:
    image: busybox
    command: top
    hostname: amasterservice
    networks: 
      - mynet

  slave: 
    image: busybox
    command: top
    networks:
      - mynet

The following ping fails:

$ docker exec -ti test_slave_1 ping amasterservice
ping: bad address 'amasterservice'
@brunoborges
Author

Adding depends_on (from slave to master) does not fix the problem.

@jake-low

Looking at docker inspect test_master_1, I see two aliases:

...
"Aliases": [
    "master",
    "268ef7e1e5"
],
...

The first comes from the service name, and the second is the container's short ID. I think that by default Compose doesn't create aliases based on hostnames.
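
To check this yourself, a command along these lines works (a sketch; .NetworkSettings.Networks is the standard inspect path):

$ docker inspect --format '{{json .NetworkSettings.Networks}}' test_master_1
# each attached network's entry carries the "Aliases" array shown above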

May I ask what you're trying to accomplish? Why can't the hostname also be "master", and why does the slave need to ping the master by hostname rather than by service name?

@brunoborges
Author

Hi @jake-low, I think the question is why the hostname isn't included under aliases. What is the reasoning behind this decision?

@jake-low

My guess would be because hostname isn't guaranteed to be unique, and aliases are required to be. How would Compose handle this situation?

version: '2'
services:
  foo:
    image: busybox
    command: top
    hostname: notunique
  bar: 
    image: busybox
    command: top
    hostname: notunique
  test:
    image: busybox
    command: ping notunique

Service name and container ID are both guaranteed to be unique -- the service name because YAML doesn't allow duplicate keys, and the container ID because Docker generates a unique hash automatically.

@dnephin

dnephin commented Feb 16, 2016

There was some discussion early on about using hostname instead of container_name as the default name used by docker, but that didn't happen (I'm not sure why).

We add aliases for the service name and short ID to be backwards compatible and to handle the common use cases. We're also adding support for defining your own network aliases in #2829.

As far as I know, it's not all that uncommon for the internal hostname to be unresolvable from the outside.

We could probably make it another alias, but I'm not sure how we'd deal with conflicts with the service names.
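
For reference, a sketch of what the original example could look like once #2829 lands, assuming the aliases key under a service's network entry proposed there:

version: '2'

networks:
  mynet:
    driver: bridge

services:
  master:
    image: busybox
    command: top
    networks:
      mynet:
        aliases:
          - amasterservice   # other containers on mynet could then resolve this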

@jake-low

This may be a decision better left to the Engine team, as there's currently an incongruity there that needs to be solved.

When you run a container with docker run, no alias gets created for it. However, if you give it a --name, both its hostname (a random SHA) and its container name are resolvable.

It's strange that, even though Docker gives every container a unique ID and nickname, neither is by default DNS-resolvable (based on my tests; someone feel free to correct me).

@brunoborges
Author

As a Docker Compose user, I expect that whenever I set a hostname for a service, that hostname should resolve to any instance of that service. The same should apply to the service name.

If these things should not work the same way, and a user should only rely on the service name, then hostname should not be allowed in the compose file. It just creates confusion.

@jake-low

That's a tall order. DNS load balancing is crude and has many unusual failure cases due to DNS's hierarchical organization and client caching behaviour [1]. This is why application-level (so-called Layer 7) load balancing schemes have become popular [2].

Tutum provides DNS-level load balancing (mapped to services, not hostnames), as seen here (last paragraph), but as the rest of the article discusses, this feature is recommended for use as a second-level load balancer. Primary load balancing duties have been assigned to a purpose-built container.

@brunoborges
Author

If the goal of Docker Compose is to make it easier for users to deploy a composition of services using Docker images and containers, then the file definition should be crystal clear.

When a user defines a hostname for a service, they will expect that hostname to be resolvable from other services, especially when these services participate in the same network (as per my example).

Since a service can have multiple instances, it is only logical that when I define a hostname for a service, that hostname resolves to all of its instances (thus using the new DNS feature in the latest version).

If none of this can be implemented today, or should not be implemented at all, then it is a matter of clarifying the documentation to state either:

  1. hostname should carry a restriction like container_name, where only one instance of the service can exist.
  2. hostname should not be allowed in the compose file, because it adds no value and only creates confusion in users' minds.

@dnephin

dnephin commented Feb 16, 2016

We should probably add a warning to the docs about using hostname. I think it is rarely useful.

@brunoborges
Author

We should probably add a warning to the docs about using hostname. I think it is rarely useful.

I think it should not be allowed at all.

@dnephin

dnephin commented Feb 16, 2016

That is not backwards compatible, and while it is "rarely" useful, there are still cases where it is useful in its current form, so removing it completely is not really an option.

@brunoborges
Author

That is not backwards compatible, and while it is "rarely" useful, there are still cases where it is useful in its current form, so removing it completely is not really an option.

Then the feature should work the way first-time Compose users expect, by adding that information to the list of aliases.

Could you please be more specific about where, although rare, the current form is useful?

@jake-low

That is not backwards compatible, and while it is "rarely" useful, there are still cases where it is useful in its current form, so removing it completely is not really an option.

Agreed; I use it in a couple of cases where legacy applications expect to be able to resolve themselves by their own hostname (in these cases matching hostname to the service name in the composition).

I've also used it in Kerberos environments, when the hostname on a host needs to be set to an FQDN (not necessarily unique) so that services running there can use their service principals to authenticate themselves.

IMHO this feature does work "as expected" because it works the same way it does in Docker engine. Running a container with the --hostname argument does one thing: sets the hostname. Aliases are defined by the user (with the caveat I mentioned earlier).
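
A minimal sketch of that first case (the image name is hypothetical): matching the hostname to the service name keeps self-resolution working for the legacy app without affecting anyone else's DNS:

services:
  legacyapp:
    image: example/legacyapp   # hypothetical legacy application
    hostname: legacyapp        # the app can resolve its own name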

@brunoborges
Author

IMHO this feature does work "as expected" because it works the same way it does in Docker engine. Running a container with the --hostname argument does one thing: sets the hostname. Aliases are defined by the user (with the caveat I mentioned earlier).

Not really. With an overlay network, the hostname set in a container is reachable from another container. So I disagree that it works as "expected". In fact, what users expect (think about new users) is that when a hostname is set, especially in the case of Docker Compose, that hostname can be used by other containers.

@jake-low

With an overlay network, the hostname set in a container is reachable from another container.

That's not true.

$ docker run -d --hostname testing --net my_overlay_network alpine sleep 300
12e9d172d1964c2ad843f3bb4b61556eb75c0680ec95d8559fe77e617cf1371b
$ docker run --rm --net my_overlay_network alpine ping testing
ping: bad address 'testing'

What you're saying is true if you set the --name of the container, not the --hostname.

$ docker run -d --name testing --net my_overlay_network alpine sleep 300
ac8ec87ec8c2893588a8d271401deb07ca934d3d5590e6726ea17cea5166876e
$ docker run --rm --net my_overlay_network alpine ping testing
PING testing (10.0.9.3): 56 data bytes
64 bytes from 10.0.9.3: seq=0 ttl=64 time=0.525 ms
64 bytes from 10.0.9.3: seq=1 ttl=64 time=0.495 ms
64 bytes from 10.0.9.3: seq=2 ttl=64 time=0.590 ms
^C
--- testing ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.495/0.536/0.590 ms

Proof that this is a swarm, and an overlay network:

$ docker network inspect my_overlay_network
[
    {
        "Name": "my_overlay_network",
        "Id": "102259bd950d46e3d3b1694f09708dc3833d4cda80a525cde1fbd081ce5bbabd",
        "Scope": "global",
        "Driver": "overlay",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.9.0/24"
                }
            ]
        },
        "Containers": {
            "1c1609f004bff373d2c26dd269d37411214bfb05c73607620bab286621585d27": {
                "Name": "testing",
                "EndpointID": "c2c5b645e36b52952f7744d02030b3e06e1637a781be55a93ab2eebb63849429",
                "MacAddress": "02:42:0a:00:09:02",
                "IPv4Address": "10.0.9.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
$ docker info
Containers: 5
 Running: 4
 Paused: 0
 Stopped: 1
Images: 4
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
 mhs-demo0: 192.168.99.102:2376
  └ Status: Healthy
  └ Containers: 3
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.1.17-boot2docker, operatingsystem=Boot2Docker 1.10.1 (TCL 6.4.1); master : b03e158 - Thu Feb 11 22:34:01 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-02-17T01:32:30Z
 mhs-demo1: 192.168.99.103:2376
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.1.17-boot2docker, operatingsystem=Boot2Docker 1.10.1 (TCL 6.4.1); master : b03e158 - Thu Feb 11 22:34:01 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-02-17T01:32:04Z
Plugins: 
 Volume: 
 Network: 
Kernel Version: 4.1.17-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 2.043 GiB
Name: mhs-demo0

@brunoborges
Author

@jake-low indeed, thanks for checking. I missed that I was also setting --name in my shell scripts.

@hacknaked

Any update on this issue?

@mafrosis

mafrosis commented Jun 23, 2017

It's long been an annoyance for me that compose only creates the DNS container name aliases when using docker-compose up, and not when using docker-compose run.

The --name trick solves it, but it feels like this shouldn't be necessary.
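
For reference, the trick looks something like this (project and service names are hypothetical):

$ docker-compose run --name myproject_web_1 web sh
# --name makes the one-off container resolvable on the project
# network by that name, as an up-created container would be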

@Ocramius

compose only creates the DNS container name aliases when using docker-compose up, and not when using docker-compose run.

This burnt almost 2 days of development on my end (with very cryptic failures).

Do you by chance know if docker-compose exec is also affected?

@slothkong

May I ask what the best practice is for resolving this hostname issue in 2018? Should hostname be specified in the compose file?

@barrer

barrer commented Sep 21, 2018

@slothkong

I solved this problem. When using driver: bridge, I use the --add-host hostname:172.x.x.x parameter in the docker run command.

The docker run command also needs to specify the IP (--ip) and network name (--net) for the container.
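
Put together, the workaround looks roughly like this (the subnet and addresses are made up for the sketch):

$ docker network create --driver bridge --subnet 172.25.0.0/16 mynet
$ docker run -d --net mynet --ip 172.25.0.10 --name master busybox top
$ docker run -d --net mynet --add-host amasterservice:172.25.0.10 --name slave busybox top
# slave can now reach 172.25.0.10 as "amasterservice" via its hosts file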

@strobox

strobox commented Dec 17, 2018

@slothkong
Also, be careful to refer to the service name, not the image name (as I did :-( ):

    mongodb:
        image: mongo

Mistake: MONGODB_URL=mongodb://mongo:27017
Correct: MONGODB_URL=mongodb://mongodb:27017
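
In context, the distinction looks like this (the app service and its environment variable are illustrative):

services:
  mongodb:               # service name -- this is what resolves
    image: mongo         # image name -- this does NOT resolve
  app:
    image: example/app   # hypothetical
    environment:
      - MONGODB_URL=mongodb://mongodb:27017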

@pauldraper

pauldraper commented Jan 8, 2019

After using docker for years, this docker-compose bug is a surprise.

docker run --hostname myhostname --name myname --network=mybridgenetwork --rm myimage

and, as expected, the container is resolvable on mybridgenetwork by myhostname but not by myname.

Surprisingly, in docker-compose, the container would be resolvable by myname but not myhostname on the network.


Side note: these are two orthogonal concepts: container names are meant for container management and are scoped to a Docker instance; hostnames are meant for network connectivity and are scoped to a network.

@yuriy1999

@slothkong
Also, be careful to refer to the service name, not the image name (as I did :-( ):

    mongodb:
        image: mongo

Mistake: MONGODB_URL=mongodb://mongo:27017
Correct: MONGODB_URL=mongodb://mongodb:27017

thanks bro))

@wahello

wahello commented Sep 23, 2019

What is the latest solution for DNS resolution of services with hostname or domainname in the same compose file?

@stale

stale bot commented May 13, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@brunoborges
Author

Still doesn't work...

@stale

stale bot commented May 13, 2020

This issue has been automatically marked as not stale anymore due to the recent activity.

@Bessonov

activity occurs

@iraklisg

This unexpected behavior still exists. The best solution I have found is this SO answer

@bayeslearner

bayeslearner commented Nov 8, 2020

It's 2020 and the issue still persists. I need hostname with domainname to work. This is NOT a feature but a bug!!!
Compose file version 3.4

Containers are spun up by docker-compose up.

ambari-server:
    image: hdp/ambari-server
    networks:
     - dev
    hostname: ambari
    domainname: dev
...   

works:

  • can be pinged by service name from other containers, i.e. ambari-server
  • can ping itself by service name, i.e. ambari-server
  • hostname command returns the set hostname, i.e. ambari
  • can ping itself by <hostname>.<domainname>, i.e. ambari.dev

doesn't work:

  • cannot be reached from other containers by <hostname>.<domainname>, i.e. ambari.dev
  • cannot be reached from other containers by <hostname>, i.e. ambari

A workaround is to use <hostname>.<domainname> as the service name, but this brings up a whole bunch of other issues, e.g. this is not allowed in swarmkit (moby/swarmkit#2437)

@bayeslearner

I think the following summarizes the design philosophy and the solution to this problem:

1. The hostname and domainname keys are for modifying the container's own hosts file. They don't help with DNS resolution except for the container itself.
2. Using network aliases is the recommended solution (see the sketch below). You have to add them to every container, but that seems to be the most straightforward way to solve the problem.
3. A container name can, as a matter of fact, be thought of as just another network alias.
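
Applied to the ambari-server example above, the alias approach would look something like this (a sketch using the network-scoped aliases key):

ambari-server:
    image: hdp/ambari-server
    hostname: ambari
    domainname: dev
    networks:
      dev:
        aliases:
          - ambari
          - ambari.dev   # now reachable from other containers on dev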


@mindon

mindon commented Jan 21, 2021

version: '3.9'
services:
    a:
        entrypoint: ["ping", "a-service"]
        image: library/busybox
        container_name: a-service
        hostname: a-service
        networks:
             - share-net
        depends_on:
             b:
                 condition: service_started
        #links:
        #    - b:b-service
    b:
        entrypoint: ["ping", "b-service"]
        image: library/busybox
        container_name: b-service
        hostname: b-service
        networks:
             - share-net

networks:
     share-net:
          external: true

docker exec a-service ping b-service
got "ping: bad address 'b-service'"

docker exec a-service ping 172.18.0.5
it's OK (the network itself is fine):
PING 172.18.0.5 (172.18.0.5): 56 data bytes
64 bytes from 172.18.0.5: seq=0 ttl=64 time=0.140 ms

docker exec a-service nslookup b-service
got "nslookup: write to '127.0.0.11': Connection refused"

docker exec a-service more /etc/hosts
found 172.18.0.6 a-service, but no entry for b-service

the internal container service names fail to resolve:
DNS isn't working, and no hosts entries are appended

It doesn't work with links either.

docker-compose version 1.28.0
Docker version 19.03.6

It may be an OS-related issue; with almost the same docker & compose versions, it works fine on another OS ... #5991
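
One extra check that may narrow this down (a diagnostic sketch, assuming the standard embedded-DNS setup on user-defined networks):

$ docker exec a-service cat /etc/resolv.conf
# on a user-defined network this should contain:
#   nameserver 127.0.0.11
# the "Connection refused" from nslookup suggests that embedded
# resolver isn't answering inside this container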

@brunoborges
Author

Finally! :D

@stale

stale bot commented Aug 29, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@GenaANTG

GenaANTG commented Oct 17, 2021

You're misunderstanding some important things.
The Docker Compose hostname is not a DNS hostname; it is the container's internal hostname.

This example is bad:

docker exec -ti test_slave_1 ping amasterservice
ping: bad address 'amasterservice'

These examples are right:

docker exec -ti slave ping master
docker exec -ti master ping slave

Docker uses container names for communication between containers on the same network, not internal hostnames.
The internal hostname is available only in the local container scope.

You should use container names to communicate, not the internal container hostnames.

@zdima

zdima commented Nov 11, 2021

docker exec -ti slave ping master
docker exec -ti master ping slave

@GenaANTG, I agree with the theory and that is what I would expect.
However, I have one BIG issue.
I am using Docker version 19.03.10, build 9424aea and docker-compose version 1.27.4, build 40524192.
With the same original docker-compose.yml:

$ docker network inspect nettest_mynet 
[
    {
        "Name": "nettest_mynet",
        "Id": "e3d1a40eb6b921352fd023c3cbeeb4ee182de357dcf45e40d3b980e154c3f805",
        "Created": "2021-11-10T21:31:15.546140068-05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.30.0.0/16",
                    "Gateway": "172.30.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "48842326ff09bc66b6cf40ec64a048afa8b571865cc9f3a21ea2612b672820ff": {
                "Name": "nettest_master_1",
                "EndpointID": "12601355e1bd60616933436c6aba601f907845e6c72f78f53ca4a3fd04bcbfdb",
                "MacAddress": "02:42:ac:1e:00:03",
                "IPv4Address": "172.30.0.3/16",
                "IPv6Address": ""
            },
            "72694d5194de7859a0139c842355f82eb3760b7ccc4ed9ade728cb10e76fc485": {
                "Name": "nettest_slave_1",
                "EndpointID": "166eb4f68adabe868b84971ac871b922702b60f9de5e5190b963ee1ad9e8aa85",
                "MacAddress": "02:42:ac:1e:00:02",
                "IPv4Address": "172.30.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "mynet",
            "com.docker.compose.project": "nettest",
            "com.docker.compose.version": "1.27.4"
        }
    }
]

I expect the ping to use IP 172.30.0.3 for master and 172.30.0.2 for slave. But both resolve to the docker host's IP:

$ docker exec -ti nettest_slave_1 ping nettest_master_1
PING nettest_master_1 (10.0.4.100): 56 data bytes
64 bytes from 10.0.4.100: seq=0 ttl=64 time=0.072 ms

$ docker exec -ti nettest_slave_1 ping nettest_master_1
PING nettest_master_1 (10.0.4.100): 56 data bytes
64 bytes from 10.0.4.100: seq=0 ttl=64 time=0.072 ms

What am I missing, and how do I make the container names resolve to their internal network IPs?

Thanks

@GenaANTG

GenaANTG commented Nov 11, 2021

@zdima Note that you pinged the master container from the slave twice.

  1. First of all, create a user-defined network (driver: bridge, network: github):
    docker network create --driver bridge github

  2. Run a few alpine containers with the --network flag:
    docker run -dit --name alpine1 --network github alpine /bin/ash
    docker run -dit --name alpine2 --network github alpine /bin/ash
    docker run -dit --name alpine3 --network github alpine /bin/ash

  3. Check that those containers exist in the same network:
    docker network inspect github

  4. Try to ping in different ways:
    docker exec -ti alpine1 ping alpine2
    docker exec -ti alpine2 ping alpine3
    docker exec -ti alpine3 ping alpine1

Everything should work as expected.

This article explains everything: https://docs.docker.com/network/network-tutorial-standalone/

@GenaANTG

GenaANTG commented Nov 11, 2021

[Screenshot: terminal session, 2021-11-11]

❯ docker version
Client:
 Cloud integration: 1.0.17
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:55:20 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:31 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
❯ docker-compose version
docker-compose version 1.29.2, build 5becea4c
docker-py version: 5.0.0
CPython version: 3.9.0
OpenSSL version: OpenSSL 1.1.1h  22 Sep 2020

@zdima

zdima commented Nov 11, 2021

@GenaANTG
Yes, the manual steps indeed create the environment I would expect.
The issue is with aliasing when using docker-compose:
the service name is not mapped to the internal network IP when the containers are created by docker-compose.

docker-compose.yml :

version: '3.3'

networks: 
  mynet:
    driver: bridge

services:

  master:
    image: busybox
    command: top
    hostname: themaster
    networks: 
      - mynet

  slave: 
    image: busybox
    command: top
    hostname: theslave
    networks: 
      - mynet
docker@docker [ ~/nettest ]$ docker-compose up -d 
Creating network "nettest_mynet" with driver "bridge"
Creating nettest_master_1 ... done
Creating nettest_slave_1  ... done
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ docker-compose ps
      Name         Command   State   Ports
------------------------------------------
nettest_master_1   top       Up           
nettest_slave_1    top       Up           
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ docker network inspect nettest_mynet 
[
    {
        "Name": "nettest_mynet",
        "Id": "b17c8fd7a70a4ebc119f3616ed93416da8df740ace22d2086cde41e9556fe35e",
        "Created": "2021-11-11T09:23:21.194411157-05:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "192.168.144.0/20",
                    "Gateway": "192.168.144.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "aa236d054ea9278807bb50e042f1e310dd5ab357ffab29bf9da9ead558745e5c": {
                "Name": "nettest_slave_1",
                "EndpointID": "86fcf7cfba54203ff1a3dd175813ba231684742ca3b5c719e3cdeac82ec31760",
                "MacAddress": "02:42:c0:a8:90:03",
                "IPv4Address": "192.168.144.3/20",
                "IPv6Address": ""
            },
            "c44a39bda46ff10772340834167d5e3e65b4963758a157fee8388a95063b520b": {
                "Name": "nettest_master_1",
                "EndpointID": "da01b49bf2d67c46fdd70e6bdbdf1ed71ab21c7f08657382e58cdeea33e39930",
                "MacAddress": "02:42:c0:a8:90:02",
                "IPv4Address": "192.168.144.2/20",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "mynet",
            "com.docker.compose.project": "nettest",
            "com.docker.compose.version": "1.27.4"
        }
    }
]
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ docker exec -ti nettest_slave_1 ping master -c 2
PING master (10.0.4.100): 56 data bytes
64 bytes from 10.0.4.100: seq=0 ttl=64 time=0.063 ms
64 bytes from 10.0.4.100: seq=1 ttl=64 time=0.099 ms

--- master ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.063/0.081/0.099 ms
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ docker exec -ti nettest_slave_1 ping nettest_master_1 -c 2
PING nettest_master_1 (10.0.4.100): 56 data bytes
64 bytes from 10.0.4.100: seq=0 ttl=64 time=0.121 ms
64 bytes from 10.0.4.100: seq=1 ttl=64 time=0.078 ms

--- nettest_master_1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.078/0.099/0.121 ms
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ docker exec -ti nettest_slave_1 ping 192.168.144.2 -c 2
PING 192.168.144.2 (192.168.144.2): 56 data bytes
64 bytes from 192.168.144.2: seq=0 ttl=64 time=0.114 ms
64 bytes from 192.168.144.2: seq=1 ttl=64 time=0.124 ms

--- 192.168.144.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.114/0.119/0.124 ms
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ docker exec -ti nettest_slave_1 nslookup 192.168.144.2 
Server:		127.0.0.11
Address:	127.0.0.11:53

Non-authoritative answer:
2.144.168.192.in-addr.arpa	name = nettest_master_1.nettest_mynet

docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ 
docker@docker [ ~/nettest ]$ docker exec -ti nettest_slave_1 ping nettest_master_1.nettest_mynet -c 2
PING nettest_master_1.nettest_mynet (192.168.144.2): 56 data bytes
64 bytes from 192.168.144.2: seq=0 ttl=64 time=0.075 ms
64 bytes from 192.168.144.2: seq=1 ttl=64 time=0.116 ms

--- nettest_master_1.nettest_mynet ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.075/0.095/0.116 ms

I am expecting the service name master to resolve to the internal network IP 192.168.144.2.

Even the container name nettest_master_1 resolves to an external IP :(
Is that expected?
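
One way to see which resolver answers the forward lookup (a diagnostic sketch; Docker's embedded DNS for user-defined networks listens on 127.0.0.11):

$ docker exec -ti nettest_slave_1 nslookup master
# an answer served by 127.0.0.11 comes from the embedded DNS;
# anything else means an upstream resolver (e.g. a LAN DNS that
# knows 10.0.4.100) is handling the name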

@GenaANTG

GenaANTG commented Nov 11, 2021

@zdima Try running docker-compose exec instead of the docker exec command. I will check it a bit later.

@zdima

zdima commented Nov 12, 2021

No difference.

docker@docker [ ~/nettest ]$ docker-compose exec slave ping master -c 2
PING master (10.0.4.100): 56 data bytes
64 bytes from 10.0.4.100: seq=0 ttl=64 time=0.054 ms
64 bytes from 10.0.4.100: seq=1 ttl=64 time=0.092 ms

--- master ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.054/0.073/0.092 ms

Even when I open the container's shell, the "linked" container does not resolve to the local network IP.

@dincho

dincho commented Mar 9, 2023

What a mess
