
Only one A record set for headless service with pods having a single hostname #116

Open
stroop23 opened this issue Jun 26, 2017 · 10 comments

@stroop23

stroop23 commented Jun 26, 2017

/kind bug

What happened
When a headless service is created to point to pods that share a single hostname (which happens, for example, when the hostname field is set in the pod template of a Deployment/ReplicaSet):

  • Only one A record is returned for the service DNS name
  • A pod DNS name is generated from this hostname, and it points to only a single pod

What was expected to happen

  • Return A records for all available endpoints on the service DNS name
  • It is unclear what the correct behaviour is for the pod DNS name: either also return multiple A records, or do not create the record at all.

This seems to be caused by the following code:
https://github.com/kubernetes/dns/blob/master/pkg/dns/dns.go#L490

There, endpointName will be identical for every pod in the service that shares the same hostname, so each pod's entry in subCache is overwritten by the next.
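
To illustrate the suspected mechanism, here is a minimal, self-contained Go sketch (not the actual kube-dns code; subCache and endpointName merely echo the identifiers in pkg/dns/dns.go, 10.56.0.140 is taken from the reproduction below, and 10.56.0.141 is a hypothetical second pod IP). Keying the record map by hostname alone means pods that share a hostname collide, and the last one written wins:

package main

import "fmt"

func main() {
    // Two pods from the same Deployment, both with hostname "depl-1-host".
    pods := []struct{ hostname, ip string }{
        {"depl-1-host", "10.56.0.140"},
        {"depl-1-host", "10.56.0.141"},
    }

    subCache := map[string]string{}
    for _, p := range pods {
        endpointName := p.hostname
        // Assigning to the same key replaces the previous entry,
        // so only the last pod's A record survives.
        subCache[endpointName] = p.ip
    }

    fmt.Println(subCache) // map[depl-1-host:10.56.0.141] -- one record, not two
}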

How to reproduce

Apply the following spec:

apiVersion: v1
kind: List
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: depl-1
  spec:
    replicas: 2
    template:
      metadata:
        labels:
          app: depl-1
      spec:
        hostname: depl-1-host
        subdomain: depl-1-service
        containers:
        - name: test
          args:
          - bash
          stdin: true
          tty: true
          image: debian:jessie
- apiVersion: v1
  kind: Service
  metadata:
    name: depl-1-service
  spec:
    clusterIP: None
    selector:
      app: depl-1
    ports:
    - port: 5000

Resolving the hostnames returns only a single A record.

# host depl-1-host.depl-1-service.default.svc.cluster.local
depl-1-host.depl-1-service.default.svc.cluster.local has address 10.56.0.140
# host depl-1-service.default.svc.cluster.local
depl-1-service.default.svc.cluster.local has address 10.56.0.140

PTR records ARE being created for all the pods, all resolving back to the single hostname. This is expected behaviour.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 30, 2017
@thockin
Member

thockin commented Jan 2, 2018

This is a real bug that I expect people to hit. :(
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Jan 2, 2018
@thockin thockin added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 2, 2018
@StephanX

Yep, just ran into this today while trying to use a headless service to permit direct access to pods in a ReplicaSet. I tried working around it by omitting the hostname from the pod definition, but kube-dns just ignored the pod altogether.

For what it's worth, our use case is to provide a mechanism to pass instructions to individual pods within the ReplicaSet (to poll status, instruct a pod to quiesce, etc.), and absent a way to address the pod via DNS, we have to hack around the problem (in our case, by having each pod publish its IP address to our database). Looking forward to seeing this resolved.

@krmayankk

Oh my god, I need this feature. I am happy to help if someone can give me pointers.

@krmayankk

/assign @krmayankk

@krmayankk

I need this feature urgently and would like to help fix it. Who would be the right person to engage with to fix this, @thockin?

@johnbelamaric
Member

I believe we verified that this already works in CoreDNS. Check out https://github.com/coredns/deployment/tree/master/kubernetes to see how to deploy it.

@krmayankk

Thanks @johnbelamaric. Could you point to the code where this feature was implemented in CoreDNS? We are using 1.9, where CoreDNS is alpha. I would love to move to CoreDNS once it is a seamless replacement for kube-dns.

@johnbelamaric
Member

johnbelamaric commented May 17, 2018 via email

@chrisohaver
Contributor

It's not a special case per se; the code continues to look for all endpoints that match the query name and doesn't stop when it finds the first match.
I just added a test for it: coredns/coredns#1811
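
For contrast with the overwrite sketch above, here is a minimal illustration of the collect-all-matches behaviour described (hypothetical code, not CoreDNS's actual implementation; the IPs are the same illustrative values as before). Grouping endpoint IPs into a slice per name preserves every pod that shares the hostname:

package main

import "fmt"

func main() {
    // Same two illustrative endpoints as in the earlier sketch.
    endpoints := []struct{ hostname, ip string }{
        {"depl-1-host", "10.56.0.140"},
        {"depl-1-host", "10.56.0.141"},
    }

    // Group endpoint IPs by DNS name instead of keeping one value per name.
    records := map[string][]string{}
    for _, e := range endpoints {
        // Appending rather than assigning keeps every endpoint
        // that shares the query name.
        records[e.hostname] = append(records[e.hostname], e.ip)
    }

    // A query for depl-1-host now yields both A records.
    fmt.Println(records["depl-1-host"]) // [10.56.0.140 10.56.0.141]
}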
