
SRV records of headless service for StatefulSet do not work normally #85759

Closed
scopher128 opened this issue Nov 30, 2019 · 6 comments

Labels
kind/bug Categorizes issue or PR as related to a bug. sig/apps Categorizes an issue or PR as relevant to SIG Apps.

@scopher128

What happened:
I want to resolve the StatefulSet pod hostnames to their IP addresses from within containers, e.g.:

Name: web-0.nginx
Address 1: 10.244.1.6

Name: web-1.nginx
Address 1: 10.244.2.6

These should be the DNS records that the headless Service creates for the StatefulSet Pods.
But currently nslookup can only resolve nginx; it cannot resolve web-0.nginx or web-1.nginx.

Refer to:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
https://www.bogotobogo.com/DevOps/Docker/Docker_Kubernetes_StatefulSet.php

What you expected to happen:
Name: web-0.nginx
Address 1: 10.244.1.6

Name: web-1.nginx
Address 1: 10.244.2.6
The headless Service should create per-Pod DNS records for the StatefulSet, so that web-0.nginx and web-1.nginx resolve.

How to reproduce it (as minimally and precisely as possible):
Follow the steps at https://www.bogotobogo.com/DevOps/Docker/Docker_Kubernetes_StatefulSet.php
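
For reference, the setup in the linked tutorials amounts to a headless Service plus a StatefulSet roughly like the following (a sketch modeled on the Kubernetes basic StatefulSet tutorial; image and port values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None        # headless: no virtual IP; DNS returns the Pod IPs
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"   # must match the headless Service name above
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
```

With this in place, each Pod should get a stable DNS name of the form web-{0..N-1}.nginx.default.svc.cluster.local.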

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): v1.16
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
    NAME="CentOS Linux"
    VERSION="7 (Core)"
    ID="centos"
    ID_LIKE="rhel fedora"
    VERSION_ID="7"
    PRETTY_NAME="CentOS Linux 7 (Core)"
    ANSI_COLOR="0;31"
    CPE_NAME="cpe:/o:centos:centos:7"
    HOME_URL="https://www.centos.org/"
    BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

  • Kernel (e.g. uname -a):
    Linux controller-1 4.19.78-1.el7.centos.ncir.1.x86_64 #1 SMP Fri Oct 18 23:53:12 EEST 2019 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools:

  • Network plugin and version (if this is a network-related bug):

  • Others:

@scopher128 scopher128 added the kind/bug Categorizes issue or PR as related to a bug. label Nov 30, 2019
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Nov 30, 2019
@neolit123
Member

/sig apps

@k8s-ci-robot k8s-ci-robot added sig/apps Categorizes an issue or PR as relevant to SIG Apps. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Dec 1, 2019
@nokia-t1zhou

Adding some output. I can only get the following SRV records from kube-dns:

[root@zhou-test-4-6d557b5bc-jx4vb /]# dig @10.254.0.254 srv nginx.cran1.svc.local.net

; <<>> DiG 9.11.5-P1-RedHat-9.11.5-2.P1.fc29 <<>> @10.254.0.254 srv nginx.cran1.svc.local.net
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1857
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 3

;; QUESTION SECTION:
;nginx.cran1.svc.local.net.     IN      SRV

;; ANSWER SECTION:
nginx.cran1.svc.local.net. 30   IN      SRV     10 33 0 3933336132336463.nginx.cran1.svc.local.net.
nginx.cran1.svc.local.net. 30   IN      SRV     10 33 0 3164343262343332.nginx.cran1.svc.local.net.
nginx.cran1.svc.local.net. 30   IN      SRV     10 33 0 3335653630646635.nginx.cran1.svc.local.net.

;; ADDITIONAL SECTION:
3933336132336463.nginx.cran1.svc.local.net. 30 IN A 10.244.0.125
3164343262343332.nginx.cran1.svc.local.net. 30 IN A 10.244.0.123
3335653630646635.nginx.cran1.svc.local.net. 30 IN A 10.244.0.124

;; Query time: 0 msec
;; SERVER: 10.254.0.254#53(10.254.0.254)
;; WHEN: Wed Nov 27 07:25:48 UTC 2019
;; MSG SIZE  rcvd: 277

@chrisohaver
Contributor

CoreDNS uses the endpoint hostname instead of a numerical id. It also has the endpoint_pod_names option:

  • endpoint_pod_names uses the pod name of the pod targeted by the endpoint as
    the endpoint name in A records, e.g.,
    endpoint-name.my-service.namespace.svc.cluster.local. in A 1.2.3.4
    By default, the endpoint-name name selection is as follows: Use the hostname
    of the endpoint, or if hostname is not set, use the dashed form of the endpoint
    IP address (e.g., 1-2-3-4.my-service.namespace.svc.cluster.local.)
    If this directive is included, then name selection for endpoints changes as
    follows: Use the hostname of the endpoint, or if hostname is not set, use the
    pod name of the pod targeted by the endpoint. If there is no pod targeted by
    the endpoint, use the dashed IP address form.
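
To enable this, the option goes inside the `kubernetes` plugin block of the Corefile in the CoreDNS ConfigMap. A sketch, assuming the default kubeadm-style Corefile and the `cluster.local` zone (adjust the zone and plugin list to your cluster):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            endpoint_pod_names          # use pod names for endpoints lacking a hostname
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
```

After editing the ConfigMap, the CoreDNS Pods need to reload the configuration (they watch the file, or can be restarted) for the new endpoint names to appear.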

@nokia-t1zhou

I searched for the differences between kube-dns and CoreDNS:
https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/
It seems that only CoreDNS supports generating SRV records with the endpoint hostname for a headless service.

@nokia-t1zhou

After deploying CoreDNS with the "endpoint_pod_names" option in the CoreDNS ConfigMap, everything is OK now.

[root@zhou-test-4-54d87b9999-k9686 /]# dig @192.168.3.100 srv nginx.default.svc.cluster.local

; <<>> DiG 9.11.5-P1-RedHat-9.11.5-2.P1.fc29 <<>> @192.168.3.100 srv nginx.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51783
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 16546ea248f0d483 (echoed)
;; QUESTION SECTION:
;nginx.default.svc.cluster.local. IN    SRV

;; ANSWER SECTION:
nginx.default.svc.cluster.local. 5 IN   SRV     0 33 80 zhou-0.nginx.default.svc.cluster.local.
nginx.default.svc.cluster.local. 5 IN   SRV     0 33 80 zhou-1.nginx.default.svc.cluster.local.
nginx.default.svc.cluster.local. 5 IN   SRV     0 33 80 zhou-2.nginx.default.svc.cluster.local.

;; ADDITIONAL SECTION:
zhou-0.nginx.default.svc.cluster.local. 5 IN A  192.168.180.197
zhou-2.nginx.default.svc.cluster.local. 5 IN A  192.168.180.199
zhou-1.nginx.default.svc.cluster.local. 5 IN A  192.168.180.198

;; Query time: 0 msec
;; SERVER: 192.168.3.100#53(192.168.3.100)
;; WHEN: Tue Dec 03 03:16:53 UTC 2019
;; MSG SIZE  rcvd: 501

Then I can get the A record using the name "zhou-0.nginx.default.svc.cluster.local":

[root@zhou-test-4-54d87b9999-k9686 /]# dig @192.168.3.100 zhou-0.nginx.default.svc.cluster.local

; <<>> DiG 9.11.5-P1-RedHat-9.11.5-2.P1.fc29 <<>> @192.168.3.100 zhou-0.nginx.default.svc.cluster.local
; (1 server found)
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58772
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: a2985d5b77c9bb17 (echoed)
;; QUESTION SECTION:
;zhou-0.nginx.default.svc.cluster.local.        IN A

;; ANSWER SECTION:
zhou-0.nginx.default.svc.cluster.local. 5 IN A  192.168.180.197

;; Query time: 0 msec
;; SERVER: 192.168.3.100#53(192.168.3.100)
;; WHEN: Tue Dec 03 03:17:24 UTC 2019
;; MSG SIZE  rcvd: 133

Thank you very much.

@scopher128
Author

Thank you so much.
