
Custom DNS entries for kube-dns #55

Closed

morallo opened this issue Feb 14, 2017 · 27 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@morallo

morallo commented Feb 14, 2017

I'm re-creating the issue here as suggested by @bowei.

Kubernetes version (use kubectl version): v1.5.2

Environment:

  • Cloud provider or hardware configuration: bare metal
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
  • Kernel (e.g. uname -a): 3.16.7
  • Install tools: docker-multinode.

Current status: entries in /etc/hosts of the nodes are not used by kube-dns. There is no straightforward way to replicate this custom DNS configuration for the cluster.

What I would like: Some way to easily define custom DNS entries used by kube-dns on a cluster-wide level, without deploying an additional DNS server.

Already considered solutions:

Possible solutions:

  • Implement a special ConfigMap to declare custom entries at the cluster-wide level and make kube-dns look it up (a hypothetical sketch follows this list).
  • kube-dns imports the node's /etc/hosts entries, like it does for /etc/resolv.conf. This is not very elegant and doesn't scale, but it replicates a capability that exists in non-containerized system administration.
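
A purely hypothetical sketch of the first option (this is not an existing kube-dns API; the ConfigMap name and the hosts key are made up for illustration):

```yaml
# Hypothetical only: a cluster-wide "hosts file" that kube-dns would watch
# and answer from directly, in addition to Service records.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns-custom-entries   # made-up name, no such API exists today
  namespace: kube-system
data:
  hosts: |
    10.0.0.11 node1.cluster.int.domain.corp
    10.0.0.12 node2.cluster.int.domain.corp
```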
@morallo
Author

morallo commented Feb 14, 2017

Answering @bowei:

  • If they are in the cluster.local domain, it might not be a good idea given the potential for name clash.

It's not the case; they are in their own domain. By name clash, do you mean something like the possibility of sinkholing "google.com" for your cluster, or unintended clashes?

  • If they are in a separate domain (e.g. acme.local.), there is a proposal coming that will allow you to designate optional stub domains that have their own custom name servers. In that case, you can run your own dnsmasq for that domain and it will be incorporated into the namespace.

This is my specific use case:

  • A pool of Debian Jessie servers used for testing deployments of distributed applications. No internal DNS server; names are managed manually through /etc/hosts (don't judge!).
  • Kubernetes cluster created using kube-deploy/docker-multinode
  • Kafka server running on nodeX.cluster.int.domain.corp:9092, outside k8s.
  • I can define a Service pointing to an external Endpoint (Kafka), and point the container apps inside k8s to that Service. However, Kafka replies with a list of peers/nodes for the app to poll from, and those names are in the full *.cluster.int.domain.corp domain.
  • I would need to change the Kafka server configuration to not use subdomains for this to work.

What I ended up doing is deploying dnsmasq on one of the nodes and adding its address to /etc/resolv.conf on the master node, so that kube-dns picks it up as an upstream server.

However, IMHO this has two disadvantages compared to the feature I described:

  • You need to deploy a DNS server.
  • It needs to be managed outside the cluster; adding new names requires host-level admin access.

In my mind, this feature is intended for testing environments, just like a "cluster-wide /etc/hosts". As there are several workarounds, maybe the use case is not so common and doesn't justify the effort.
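
For reference, a minimal sketch of that dnsmasq workaround, with illustrative hostnames and IPs (the config file path and values are assumptions, not taken from the thread):

```
# /etc/dnsmasq.conf on the node running dnsmasq (illustrative values)
# Answer the custom names directly ...
address=/nodeX.cluster.int.domain.corp/10.0.0.42
# ... or keep maintaining a plain hosts-style file and let dnsmasq read it
addn-hosts=/etc/hosts.cluster
# Forward everything else upstream
server=8.8.8.8
```

The master node's /etc/resolv.conf then lists the dnsmasq host, and kube-dns inherits it as an upstream nameserver.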

@cmluciano

Does externalName work for your use case? https://kubernetes.io/docs/user-guide/services/#without-selectors
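
For context, a minimal sketch of such a Service (the names are illustrative); it creates a CNAME-style alias inside the cluster:

```yaml
# Illustrative: resolves "kafka" (in this namespace) to an external DNS name.
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  type: ExternalName
  externalName: nodeX.cluster.int.domain.corp
```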

@thockin
Member

thockin commented Feb 14, 2017 via email

@morallo
Author

morallo commented Feb 22, 2017

Sorry for my late reply.

@cmluciano: externalName does not work because the containers need to resolve the name, and they can't reach our internal DNS.

@thockin: the stub DNS server is what I ended up implementing. However, I still think there is a valid use case for having quick custom DNS profiles.

Feel free to close the issue if you think the feature is not interesting.

@JorritSalverda

I'm all for extending kube-dns's API so you can do CRUD on DNS records yourself, or for adding the service aliases mentioned in kubernetes/kubernetes#39792, because that's what I'm really after.

@jamesgetx

jamesgetx commented Apr 19, 2017

We hit the same case as morallo mentioned above and hope kube-dns can support both k8s Services and custom DNS rules.

@johnbelamaric
Member

You can do this with CoreDNS: http://coredns.io

@johnbelamaric
Member

FYI, I just put out a blog post that shows how to do it: https://blog.coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/

@bowei
Member

bowei commented May 8, 2017

You can also use the built-in mechanism with 1.6: http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html
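
For reference, that mechanism is driven by a ConfigMap named kube-dns in kube-system; a minimal sketch (the domain and IPs are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"cluster.int.domain.corp": ["10.0.0.53"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
```

Queries for the stub domain go to its designated nameserver; other non-cluster names go to the listed upstreams.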

@valentin2105

To make an external service resolvable in the cluster, you can manually create a Service and an Endpoints object that point to an external IPv4 address; the name then resolves with the correct namespace and cluster domain.

(I use it for GlusterFS).
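
A minimal sketch of that pattern, with illustrative names and addresses:

```yaml
# Service with no selector, paired with a manually managed Endpoints object
# of the same name; resolves as external-kafka.<namespace>.svc.<cluster-domain>.
apiVersion: v1
kind: Service
metadata:
  name: external-kafka
spec:
  ports:
    - port: 9092
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-kafka
subsets:
  - addresses:
      - ip: 10.0.0.42   # the external host (illustrative)
    ports:
      - port: 9092
```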

@asarkar

asarkar commented Oct 20, 2017

We have a similar issue with Couchbase. If we deploy a ClusterIP-type Service, the pods don't get assigned DNS entries and are forced to use IPs. On restart the IP changes, and Couchbase considers the node in error.
On the other hand, if we use a headless Service, the pods have DNS entries and I can tell Couchbase to use those. Restart is no problem; however, client connections fail on Couchbase restart because the headless Service returns the pod IPs, which the clients hold on to.

All we need is kube-dns to use a constant entry for pods fronted by ClusterIP Services. If it wants to use hostname.servicename, that's OK too, because I can set the hostname.

The situation as of now is completely hopeless. I simply can't get Couchbase working in Kubernetes.

@manigandham

manigandham commented Oct 20, 2017

@asarkar

That seems like an issue with the Couchbase client; doesn't it discover all the endpoints of the database and retry connecting to another IP? If not, you can just create another Service pointing to the same Deployment.

For example, have a couchbase-internal headless service for the pods to connect to each other, then create a couchbase-public ClusterIP service for your clients to connect to the database.

You can also use a StatefulSet so that pods are always numbered in order and will keep the same name on restarts.
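
A rough sketch of the two-Service idea, assuming the pods carry a label like app: couchbase and use port 8091 (both assumptions for illustration):

```yaml
# Headless Service: gives each pod its own DNS record, for peer discovery.
apiVersion: v1
kind: Service
metadata:
  name: couchbase-internal
spec:
  clusterIP: None
  selector:
    app: couchbase
  ports:
    - port: 8091
---
# Regular ClusterIP Service: one stable name and virtual IP for clients.
apiVersion: v1
kind: Service
metadata:
  name: couchbase-public
spec:
  selector:
    app: couchbase
  ports:
    - port: 8091
```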

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 18, 2018
@arno01

arno01 commented Jan 18, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 18, 2018
@ghost

ghost commented Jan 20, 2018

Though it is possible to add entries to hosts via a ConfigMap, it would be nice and natural to import the node's hosts file into kube-dns.

@irvifa
Member

irvifa commented Feb 28, 2018

Hi, I'm currently using a stubDomain and would like to know: if kube-dns fails to resolve a name, where can we see the logs for this? I think it can have an impact on service reliability.

@Frodox

Frodox commented May 27, 2018

Any ideas/progress on this one?

@krmayankk

@asarkar @manigandham would a StatefulSet help you? Its pods get stable DNS entries, and you can still use a Service to round-robin across the pods.
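
For illustration, a StatefulSet bound to a headless Service (names assumed, matching the sketch above) gives each pod a stable DNS name of the form <pod>.<service>.<namespace>.svc.<cluster-domain>:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: couchbase
spec:
  serviceName: couchbase-internal   # must reference an existing headless Service
  replicas: 3
  selector:
    matchLabels:
      app: couchbase
  template:
    metadata:
      labels:
        app: couchbase
    spec:
      containers:
        - name: couchbase
          image: couchbase:community   # illustrative image
# Pods are named couchbase-0, couchbase-1, couchbase-2 and resolve as e.g.
# couchbase-0.couchbase-internal.default.svc.cluster.local across restarts.
```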

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed enhancement labels Jun 5, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2018
@Frodox

Frodox commented Sep 3, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 3, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 2, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 1, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@akshaysin

/reopen
Was this feature ever added?

@k8s-ci-robot
Contributor

@akshaysin: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen
Was this feature ever added ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@johnbelamaric
Member

@akshaysin Not in kube-dns. You can do it with CoreDNS, probably most easily via the hosts plugin (which, IIRC, auto-reloads when the hosts file changes, so you can stick it in a ConfigMap and distribute new entries that way).
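
A rough sketch of that approach: add an extra server block to the CoreDNS Corefile (the kube-system/coredns ConfigMap), either with inline records as below or with a hosts file mounted from a ConfigMap. The zone name and addresses here are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        # ... keep the existing default plugins (kubernetes, forward, cache, ...)
    }
    cluster.int.domain.corp:53 {
        hosts {
            10.0.0.11 node1.cluster.int.domain.corp
            10.0.0.12 node2.cluster.int.domain.corp
            fallthrough
        }
    }
```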
