in swarm, sometimes DNS resolve to container id and not name #34882

Open
sp-7indigo opened this issue Sep 18, 2017 · 15 comments

Comments

@sp-7indigo

Description
Reverse IP lookups randomly return the container ID instead of the container name.
Steps to reproduce the issue:
From inside a container, reverse-resolve (dig -x) a task IP a few times.

Describe the results you received:

root@couchdb:/opt/couchdb# dig +noall +answer -x 10.0.9.5 
5.9.0.10.in-addr.arpa.	600	IN	PTR	couchdb.3.mhml1gvvl65lw64wxi2r6k0ez.couchdb-network.
root@couchdb:/opt/couchdb# dig +noall +answer -x 10.0.9.5 
5.9.0.10.in-addr.arpa.	600	IN	PTR	couchdb.3.mhml1gvvl65lw64wxi2r6k0ez.couchdb-network.
root@couchdb:/opt/couchdb# dig +noall +answer -x 10.0.9.5 

5.9.0.10.in-addr.arpa.	600	IN	PTR	**ceb1f81bba19.couchdb-network.**
root@couchdb:/opt/couchdb# dig +noall +answer -x 10.0.9.5 

5.9.0.10.in-addr.arpa.	600	IN	PTR	couchdb.3.mhml1gvvl65lw64wxi2r6k0ez.couchdb-network.
root@couchdb:/opt/couchdb# dig +noall +answer -x 10.0.9.5 
5.9.0.10.in-addr.arpa.	600	IN	PTR	couchdb.3.mhml1gvvl65lw64wxi2r6k0ez.couchdb-network.
root@couchdb:/opt/couchdb# dig +noall +answer -x 10.0.9.5 
5.9.0.10.in-addr.arpa.	600	IN	PTR	couchdb.3.mhml1gvvl65lw64wxi2r6k0ez.couchdb-network.
root@couchdb:/opt/couchdb# dig +noall +answer -x 10.0.9.5 

Describe the results you expected:
I expect the reverse lookup to always resolve to the container name.

Output of docker version:

Client:
 Version:      17.06.2-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   cec0b72
 Built:        Tue Sep  5 20:00:17 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.06.2-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   cec0b72
 Built:        Tue Sep  5 19:59:11 2017
 OS/Arch:      linux/amd64
 Experimental: false
@cjbearman

I am also seeing this issue and was about to raise a defect before I found this one.

I would like to add my voice to this issue, since it breaks one of the few mechanisms available within service containers for introspecting their environment (other than, of course, calling out to the Docker API).

Consider the following: I need to monitor my services and provide reasonable, consistent statistics on usage and behavior on a per-instance basis. As such, I need to contact each individual service instance (task).

I can identify the IP addresses of the tasks for a service using the "tasks.<service>" DNS name:

[root@dev_node-1 mnt]# dig +noall +answer tasks.lab1_das.lab1
tasks.lab1_das.lab1.	600	IN	A	10.0.12.9
tasks.lab1_das.lab1.	600	IN	A	10.0.12.61

So I can see the IPs of the two containers within the lab1_das service. However, what are the instance numbers? I was hoping to retrieve these using reverse DNS, but I hit the same problem described above.

[root@dev_node-1 mnt]# dig +noall +answer -x 10.0.12.9
9.12.0.10.in-addr.arpa.	600	IN	PTR	lab1_das.1.ftnllsan0awehevz3hltdcilb.lab1.
[root@dev_node-1 mnt]# dig +noall +answer -x 10.0.12.9
9.12.0.10.in-addr.arpa.	600	IN	PTR	lab1_das.1.ftnllsan0awehevz3hltdcilb.lab1.
[root@dev_node-1 mnt]# dig +noall +answer -x 10.0.12.9
9.12.0.10.in-addr.arpa.	600	IN	PTR	f712644ed1f8.lab1.

In about 9/10 queries I get the PTR record that includes the task number (lab1_das.1.ftnllsan0awehevz3hltdcilb.lab1), which is useful to me.
In about 1/10 queries I get the other record (f712644ed1f8.lab1), which has no useful meaning (IMHO).
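
For what it's worth, a rough sketch of the lookup I'm trying to do (names are taken from the dig output above; the slot extraction simply assumes the PTR answer follows the service.slot.taskid.network pattern, which is exactly what breaks when the container-ID record comes back):

for ip in $(dig +short tasks.lab1_das.lab1); do
    ptr=$(dig +short -x "$ip")
    slot=$(echo "$ptr" | cut -d. -f2)   # instance number, or garbage when the ID record is returned
    echo "$ip -> $ptr (slot: $slot)"
done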

The environment is 17.06.2-ce running on CentOS 7. Details:

docker info
Containers: 17
 Running: 15
 Paused: 0
 Stopped: 2
Images: 16
Server Version: 17.06.2-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local rexray
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: 2eync7dfl92wtsw429t3gf6me
 Is Manager: true
 ClusterID: kwxc6d1ymy0yvz0u8qfjbkmiz
 Managers: 5
 Nodes: 9
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Root Rotation In Progress: false
 Node Address: 173.36.60.98
 Manager Addresses:
  173.36.60.81:2377
  173.36.60.91:2377
  173.36.60.93:2377
  173.36.60.96:2377
  173.36.60.98:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-514.26.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.42GiB
Name: caa-dev-swarm01.cisco.com
ID: 6WAF:QLIP:CFVN:F752:YQBZ:X76S:3P4Q:DCAW:E6TQ:PWFV:7WMI:MUA5
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: bridge-nf-call-ip6tables is disabled

@TincaTibo

We are seeing the same behavior: at random times (or so it seems), reverse DNS resolution in Swarm returns the container ID and not the name.

We are tracking network traffic and using reverse DNS to avoid monitoring some specific overloaded nodes. When the name changes, the tracking system falls over because of the overload.

  • We are using many stacks in the same Swarm
  • This is a random occurrence, appearing even on a stable swarm cluster that has been up for days
  • This is a regression and did not happen on previous versions (currently in use: 17.06.2-ce)

@overmike

I see the same issue on Docker for Mac 18.02.0-ce-rc1.
Is there any update on the issue?

@cgaspar-deshaw

I can reliably reproduce this as well, and it causes HBase to fail. Please let me know what additional information / logs would help.

@dimovnike

I can reproduce this too on version 18.05.0-ce, build f150324. Is there a fix or a workaround for this?

@olljanat
Contributor

olljanat commented Nov 10, 2018

FYI, I did some investigation on this one and found that:
a) the issue can easily be reproduced with the following commands:

docker network create --internal --attachable --driver overlay --scope swarm foo-network
docker run -it --name bar-container --hostname bar-container --rm --network foo-network nicolaka/netshoot
for i in {1..1000}
do
  nslookup $(hostname -i) | grep "arpa" | grep -v "bar-container"
done

About 100-150 out of 1000 iterations fail.
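
For reference, a hedged variant of the loop above that just counts the mismatches instead of printing them (same container and network as in the reproduction):

fails=0
for i in $(seq 1 1000); do
  # a "failure" is a PTR answer for our own IP that does not contain the container name
  if nslookup $(hostname -i) | grep "arpa" | grep -qv "bar-container"; then
    fails=$((fails + 1))
  fi
done
echo "mismatched PTR answers: $fails / 1000"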

b) I can see from the daemon debug log that DNS records are first added for the container name and then for its ID:

time="2018-11-10T18:34:55.673330114+02:00" level=debug msg="9797a65ff5133f41155c7319f15bd8626637dab825403d381b2d4f59bd611490 (l4004lu).addSvcRecords(bar-container, 10.0.1.2, <nil>, true) addServiceInfoToCluster sid:9797a65ff5133f41155c7319f15bd8626637dab825403d381b2d4f59bd611490"
time="2018-11-10T18:34:55.673400404+02:00" level=debug msg="9797a65ff5133f41155c7319f15bd8626637dab825403d381b2d4f59bd611490 (l4004lu).addSvcRecords(abd5fca4a6dc, 10.0.1.2, <nil>, true) addServiceInfoToCluster sid:9797a65ff5133f41155c7319f15bd8626637dab825403d381b2d4f59bd611490"

c) If I disable the code at https://github.com/docker/libnetwork/blob/master/service_common.go#L65-L68, the problem disappears.

So to fix this we need to figure out why/where the container ID is added as an alias, and/or make sure that it is not used for PTR records.

EDIT: Created PR which disables adding PTR records for aliases moby/libnetwork#2299

@fcrisciani
Contributor

@cjbearman
why not use templates: https://docs.docker.com/engine/swarm/services/#create-services-using-templates
You can set an env variable and use that as the identity instead of trying to resolve it yourself.
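
A rough sketch of that approach (the service name and image are just examples; the placeholders are the documented template ones):

docker service create \
  --name das \
  --hostname "{{.Task.Name}}" \
  --env "TASK_SLOT={{.Task.Slot}}" \
  --env "TASK_NAME={{.Task.Name}}" \
  nicolaka/netshoot sleep 1d

Each task then sees its own slot number in $TASK_SLOT and can report it itself, with no reverse lookup involved.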

@cjbearman

@fcrisciani
Good suggestion on the use of templates here. Templates were added to swarm at some point after this bug was originally reported and after my comment. I should have posted a follow-up comment after I found that new feature.

Certainly, using the .Service.ID template within the target container allows the container itself to know its instance number and report it in a communications stream, for purposes such as monitoring / logging per my original comment.

Specifically, my monitoring code will now resolve the hosts using forward DNS resolution as described in my comment, but forgo the reverse resolution and let the target container instance report its instance number as part of the data stream I receive from it.

@TincaTibo

As mentioned in the linked issue, I'm capturing network communication between the containers in the Swarm to analyse and troubleshoot system behavior.
I need to be able to associate each IP with a name for better machine and human processing, e.g.:

  • identifying the service that talks
  • aggregating flows from all replicas of the same service

Having the IP constantly change name when reverse-resolving it is a pain. I overcame it by setting acceptance rules on the name resolution result (see the sketch below), but this is an ugly hack.
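
A hypothetical sketch of that kind of acceptance rule, just to illustrate: it assumes task PTR names look like service.slot.taskid.network while the unwanted ones look like containerid.network, and retries until it gets the former.

resolve_task_name() {
    ip="$1"
    for attempt in 1 2 3 4 5; do
        name=$(dig +short -x "$ip" | head -n1)
        case "$name" in
            *.*.*.*.*) echo "$name"; return 0 ;;   # enough labels: looks like a task name
        esac
        sleep 1                                    # retry, hoping for the task-name record
    done
    echo "$name"                                   # give up and return whatever we got last
    return 1
}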

I would dearly appreciate resolution of this issue :)

@dimovnike

Also for identifying the tasks on the same node. A fix would be much appreciated.

@olljanat
Contributor

@TincaTibo just to clarify: moby/libnetwork#2299 is a fix for this issue, not the linked issue ;)

But now the Docker maintainers need to make sure that it does not create other issues.

@TincaTibo

@olljanat Thanks, got it!

@olljanat
Contributor

@fcrisciani / @thaJeztah FYI this can be closed as moby/libnetwork#2299 is merged.

@AHelper

AHelper commented Apr 26, 2019

Is this change supposed to be in Docker 18.09.1-ce? I'm still getting multiple PTR records. dockerd logs around container creation:

DEBU[2019-04-26T16:49:38.715560543Z] addServiceInfoToCluster START for test_test 142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f
DEBU[2019-04-26T16:49:38.716038450Z] event published                               ns=moby topic="/containers/create" type=containerd.events.ContainerCreate
INFO[2019-04-26T16:49:38.730873778Z] shim containerd-shim started                  address="/containerd-shim/moby/b773ff336a96193877b590c75130c1ea79eeef271b77a453537226a59242bb79/shim.sock" debug=true pid=27322
DEBU[2019-04-26T16:49:38.744229583Z] Assigning addresses for endpoint gateway_2914defca2dd's interface on network docker_gwbridge
DEBU[0000] registering ttrpc server
DEBU[0000] serving api on unix socket                    socket="[inherited from parent]"
DEBU[2019-04-26T16:49:38.753924132Z] Programming external connectivity on endpoint gateway_2914defca2dd (0721a48d867884a1026ceeb97e762c962eb08366d6032c6429a15e935c61beab)
DEBU[2019-04-26T16:49:38.754294737Z] Assigning addresses for endpoint gateway_36235bb10422's interface on network docker_gwbridge
DEBU[2019-04-26T16:49:38.754315538Z] RequestAddress(LocalDefault/172.18.0.0/16, <nil>, map[])
DEBU[2019-04-26T16:49:38.754377939Z] Request address PoolID:172.18.0.0/16 App: ipam/default/data, ID: LocalDefault/172.18.0.0/16, DBIndex: 0x0, Bits: 65536, Unselected: 65525, Sequence: (0xffc00000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:10 Serial:false PrefAddress:<nil>
DEBU[2019-04-26T16:49:38.805842929Z] addServiceBinding from addServiceInfoToCluster START for test_test 142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f p:0xc00494e200 nid:ikfyixyhrd9alxzjk2rjj96xk skey:{vqtps2mox5h8r9tvktk6jxwua }
DEBU[2019-04-26T16:49:38.811070309Z] Assigning addresses for endpoint gateway_36235bb10422's interface on network docker_gwbridge
DEBU[2019-04-26T16:49:38.823076793Z] Programming external connectivity on endpoint gateway_36235bb10422 (51fe9d2b40f0dba5a91fa42a7dbe1a5b87b773caa0cdc0ccd91a060e99c7a226)
DEBU[2019-04-26T16:49:38.823795304Z] addEndpointNameResolution 142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f test_test add_service:false sAliases:[test] tAliases:[f29abf168e60]
DEBU[2019-04-26T16:49:38.823834105Z] addContainerNameResolution 142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f test_test.1.ug04kbjwvs8n0mvfma3hwci35
DEBU[2019-04-26T16:49:38.823842305Z] 142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f (ikfyixy).addSvcRecords(test_test.1.ug04kbjwvs8n0mvfma3hwci35, 10.0.11.6, <nil>, true) addServiceBinding sid:142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f
DEBU[2019-04-26T16:49:38.823857405Z] 142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f (ikfyixy).addSvcRecords(f29abf168e60, 10.0.11.6, <nil>, true) addServiceBinding sid:142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f
DEBU[2019-04-26T16:49:38.823864605Z] 142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f (ikfyixy).addSvcRecords(tasks.test_test, 10.0.11.6, <nil>, false) addServiceBinding sid:vqtps2mox5h8r9tvktk6jxwua
DEBU[2019-04-26T16:49:38.823869905Z] 142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f (ikfyixy).addSvcRecords(tasks.test, 10.0.11.6, <nil>, false) addServiceBinding sid:vqtps2mox5h8r9tvktk6jxwua
DEBU[2019-04-26T16:49:38.823874205Z] addServiceBinding from addServiceInfoToCluster END for test_test 142760e8f108e982440e8793dd88e86a4f19e7ed9a84352d2fc248f2683ce20f

Dockerd logs for resolution:

DEBU[2019-04-26T16:49:49.430502829Z] IP To resolve 3.11.0.10
DEBU[2019-04-26T16:49:49.430544730Z] [resolver] lookup for IP 3.11.0.10: name bfd2d691c4ed.test_default
--- 8< ---
DEBU[2019-04-26T16:49:49.521987156Z] IP To resolve 3.11.0.10
DEBU[2019-04-26T16:49:49.522028956Z] [resolver] lookup for IP 3.11.0.10: name test_test.4.76rxl56uro2b7wzhivgfw85hv.test_default

docker info:

Containers: 10
 Running: 10
 Paused: 0
 Stopped: 0
Images: 7
Server Version: 18.09.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
 NodeID: sc6l45ko3c2i8r56gna5sy5wr
 Is Manager: true
 ClusterID: i76b6kb3qkpta7oxtyuzs5msr
 Managers: 1
 Nodes: 1
 Default Address Pool: 10.0.0.0/8
 SubnetSize: 24
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 10
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 192.168.1.100
 Manager Addresses:
  192.168.1.100:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9754871865f7fe2f4e74d43e2fc7ccd237edcbce
runc version: 6635b4f0c6af3810594d2770f662f34ddc15b40d
init version: v0.18.0 (expected: fec3683b971d9c3ef73f284f176672c44b448662)
Kernel Version: 4.19.34-0-virt
Operating System: Alpine Linux v3.9
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 11.17GiB
Name: alpine
ID: UDWV:TOZX:CUHM:2KC5:EK2B:J2YG:4YHJ:JY4I:F7IE:6WOS:KLIH:TKMG
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: http://x.x.x.x:x
HTTPS Proxy: http://x.x.x.x:x
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Minimal reproduction on a swarm-enabled Docker instance:

docker-compose.yaml

version: "3.5"
services:
  test:
    image: test
    hostname: "{{.Task.Name}}.test_default"
    environment:
      SERVICE_NAME: test
    deploy:
      replicas: 10

Dockerfile for 'test' image:

FROM alpine:3.9
ADD log.sh /
CMD ["/bin/sh", "/log.sh"]

log.sh:

#!/bin/sh

while true; do
    echo "Resolving..."
    nslookup tasks.$SERVICE_NAME
    sleep 1
done
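
For completeness, a hedged variant of log.sh that also reverse-resolves each task IP (the awk parsing assumes busybox nslookup's "Address N: <ip> <name>" output format), so the flapping PTR answers show up directly in the service logs:

#!/bin/sh

while true; do
    echo "Resolving..."
    for ip in $(nslookup tasks.$SERVICE_NAME 2>/dev/null | awk '/^Address/ && NF >= 4 {print $3}'); do
        name=$(nslookup $ip 2>/dev/null | awk -v ip="$ip" '$3 == ip && NF >= 4 {print $4}')
        echo "$ip -> $name"
    done
    sleep 1
done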

docker ps output after docker stack deploy -c docker-compose.yaml test:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
cbaa4deb1366        test:latest         "/bin/sh /log.sh"   9 minutes ago       Up 9 minutes                            test_test.3.pta0qgi9z4mojrkemt8lf45yp
874688be09ca        test:latest         "/bin/sh /log.sh"   9 minutes ago       Up 9 minutes                            test_test.6.p0nzrbt0a21hdtlrqz5bfefra
a951cf96db62        test:latest         "/bin/sh /log.sh"   9 minutes ago       Up 9 minutes                            test_test.8.8gooi5gpd7x4d9ib08bmcdaz8
d54b899d93b0        test:latest         "/bin/sh /log.sh"   9 minutes ago       Up 9 minutes                            test_test.4.7enzbfsltfx4b0sr6pzxqqcvp
5e11ccde9f7e        test:latest         "/bin/sh /log.sh"   9 minutes ago       Up 9 minutes                            test_test.7.xojzc2x8oy6ae9hqzf0y8l2kb
1b011f8bcf3b        test:latest         "/bin/sh /log.sh"   9 minutes ago       Up 9 minutes                            test_test.10.fef5lwgy59s5b5hdue9vpg5w6
74ada816b424        test:latest         "/bin/sh /log.sh"   9 minutes ago       Up 9 minutes                            test_test.5.0d1w0bcwd9xjyvg4gf6awhlao
b665c597af01        test:latest         "/bin/sh /log.sh"   9 minutes ago       Up 9 minutes                            test_test.1.i5b6saiccwufb19iaqyleesb4
e0ccacea03de        test:latest         "/bin/sh /log.sh"   9 minutes ago       Up 9 minutes                            test_test.9.95ubli8xvaxijix5s8m0qjfem
a4fa0c6ca24a        test:latest         "/bin/sh /log.sh"   9 minutes ago       Up 9 minutes                            test_test.2.88axff239tuw4byntty0xcx2g

Latest output from docker logs cbaa4deb1366:

Resolving...
nslookup: can't resolve '(null)': Name does not resolve

Name:      tasks.test
Address 1: 10.0.11.206 test_test.3.pta0qgi9z4mojrkemt8lf45yp.test_default
Address 2: 10.0.11.205 test_test.6.p0nzrbt0a21hdtlrqz5bfefra.test_default
Address 3: 10.0.11.200 test_test.1.i5b6saiccwufb19iaqyleesb4.test_default
Address 4: 10.0.11.198 test_test.2.88axff239tuw4byntty0xcx2g.test_default
Address 5: 10.0.11.201 test_test.5.0d1w0bcwd9xjyvg4gf6awhlao.test_default
Address 6: 10.0.11.204 test_test.4.7enzbfsltfx4b0sr6pzxqqcvp.test_default
Address 7: 10.0.11.207 a951cf96db62.test_default
Address 8: 10.0.11.202 test_test.10.fef5lwgy59s5b5hdue9vpg5w6.test_default
Address 9: 10.0.11.199 test_test.9.95ubli8xvaxijix5s8m0qjfem.test_default
Address 10: 10.0.11.203 test_test.7.xojzc2x8oy6ae9hqzf0y8l2kb.test_default
Resolving...
nslookup: can't resolve '(null)': Name does not resolve

Name:      tasks.test
Address 1: 10.0.11.207 test_test.8.8gooi5gpd7x4d9ib08bmcdaz8.test_default
Address 2: 10.0.11.202 test_test.10.fef5lwgy59s5b5hdue9vpg5w6.test_default
Address 3: 10.0.11.205 test_test.6.p0nzrbt0a21hdtlrqz5bfefra.test_default
Address 4: 10.0.11.200 test_test.1.i5b6saiccwufb19iaqyleesb4.test_default
Address 5: 10.0.11.199 e0ccacea03de.test_default
Address 6: 10.0.11.206 test_test.3.pta0qgi9z4mojrkemt8lf45yp.test_default
Address 7: 10.0.11.203 test_test.7.xojzc2x8oy6ae9hqzf0y8l2kb.test_default
Address 8: 10.0.11.198 a4fa0c6ca24a.test_default
Address 9: 10.0.11.201 74ada816b424.test_default
Address 10: 10.0.11.204 test_test.4.7enzbfsltfx4b0sr6pzxqqcvp.test_default
Resolving...

nslookup: can't resolve '(null)': Name does not resolve
Name:      tasks.test
Address 1: 10.0.11.201 test_test.5.0d1w0bcwd9xjyvg4gf6awhlao.test_default
Address 2: 10.0.11.204 test_test.4.7enzbfsltfx4b0sr6pzxqqcvp.test_default
Address 3: 10.0.11.205 test_test.6.p0nzrbt0a21hdtlrqz5bfefra.test_default
Address 4: 10.0.11.200 test_test.1.i5b6saiccwufb19iaqyleesb4.test_default
Address 5: 10.0.11.206 test_test.3.pta0qgi9z4mojrkemt8lf45yp.test_default
Address 6: 10.0.11.203 test_test.7.xojzc2x8oy6ae9hqzf0y8l2kb.test_default
Address 7: 10.0.11.207 test_test.8.8gooi5gpd7x4d9ib08bmcdaz8.test_default
Address 8: 10.0.11.198 test_test.2.88axff239tuw4byntty0xcx2g.test_default
Address 9: 10.0.11.202 test_test.10.fef5lwgy59s5b5hdue9vpg5w6.test_default
Address 10: 10.0.11.199 test_test.9.95ubli8xvaxijix5s8m0qjfem.test_default

Dig also agrees with nslookup:

/ # dig -x 10.0.11.199

; <<>> DiG 9.12.3-P4 <<>> -x 10.0.11.199
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34865
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;199.11.0.10.in-addr.arpa.      IN      PTR

;; ANSWER SECTION:
199.11.0.10.in-addr.arpa. 600   IN      PTR     test_test.9.95ubli8xvaxijix5s8m0qjfem.test_default.

;; Query time: 0 msec
;; SERVER: 127.0.0.11#53(127.0.0.11)
;; WHEN: Fri Apr 26 14:58:19 UTC 2019
;; MSG SIZE  rcvd: 130

/ # dig -x 10.0.11.199

; <<>> DiG 9.12.3-P4 <<>> -x 10.0.11.199
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 148
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;199.11.0.10.in-addr.arpa.      IN      PTR

;; ANSWER SECTION:
199.11.0.10.in-addr.arpa. 600   IN      PTR     e0ccacea03de.test_default.

;; Query time: 0 msec
;; SERVER: 127.0.0.11#53(127.0.0.11)
;; WHEN: Fri Apr 26 14:58:20 UTC 2019
;; MSG SIZE  rcvd: 105

@olljanat
Contributor

@AHelper 18.09 means the code freeze was in September 2018. moby/libnetwork#2299 was merged in November, so it will be in the next version, 19.03. You can see its target dates at https://github.com/docker/docker-ce/milestone/32
