Custom DNS entries for kube-dns #55

Open
morallo opened this Issue Feb 14, 2017 · 18 comments


morallo commented Feb 14, 2017

I'm re-creating the issue here, as suggested by @bowei.

Kubernetes version (use kubectl version): v1.5.2

Environment:

  • Cloud provider or hardware configuration: bare metal
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
  • Kernel (e.g. uname -a): 3.16.7
  • Install tools: docker-multinode.

Current status: entries in the nodes' /etc/hosts are not used by kube-dns, and there is no straightforward way to replicate this custom DNS configuration for the cluster.

What I would like: Some way to easily define custom DNS entries used by kube-dns on a cluster-wide level, without deploying an additional DNS server.

Possible solutions:

  • Implement a special ConfigMap to declare custom entries at the cluster-wide level and make kube-dns look it up (a hypothetical sketch follows this list).
  • Have kube-dns import the node's /etc/hosts entries, as it already does for /etc/resolv.conf. This is not very elegant and doesn't scale, but it replicates a capability that exists in non-containerized system administration.
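
To make the first option concrete, here is a purely hypothetical sketch; the customEntries key does not exist in kube-dns today, and the hosts-file syntax is just an illustration:

```yaml
# Hypothetical feature sketch -- kube-dns does not support this today.
# The idea: a cluster-wide "hosts file" that kube-dns would watch and serve.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  customEntries: |          # invented key, for illustration only
    10.0.0.10 node1.cluster.int.domain.corp
    10.0.0.11 node2.cluster.int.domain.corp
```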

morallo referenced this issue in kubernetes/kubernetes Feb 14, 2017

Closed

Custom DNS entries for kube-dns #41328

morallo commented Feb 14, 2017

Answering @bowei:

  • If they are in the cluster.local domain, it might not be a good idea given the potential for name clash.

It's not the case; they are in their own domain. By "name clash", do you mean something like the possibility of sinkholing "google.com" for your cluster, or unintended clashes?

  • If they are in a separate domain (e.g. acme.local.), there is a proposal coming that will allow you to designate optional stub domains that have their own custom name servers. In that case, you can run your own dnsmasq for that domain and it will be incorporated into the namespace.

This is my specific use case:

  • A pool of Debian Jessie servers used for testing deployment of distributed applications. No internal DNS server, managed manually through /etc/hosts (don't judge!).
  • Kubernetes cluster created using kube-deploy/docker-multinode
  • Kafka server running at nodeX.cluster.int.domain.corp:9092, outside k8s.
  • I can define a Service pointing to an external Endpoints object (kafka) and point the container apps inside k8s to that Service. However, Kafka replies with a list of peers/nodes for the app to poll from, and those revert to the full *.cluster.int.domain.corp domain.
  • I would need to change the Kafka server configuration to not use subdomains for this to work.

What I ended up doing was deploying dnsmasq on one of the nodes and adding its address to /etc/resolv.conf on the master node, so that kube-dns picks it up as an upstream server.
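
For reference, a minimal sketch of the dnsmasq side of that workaround; the file path, names, and addresses are placeholders for this setup:

```
# /etc/dnsmasq.d/cluster.conf on the node running dnsmasq (placeholders)
address=/node1.cluster.int.domain.corp/10.0.0.10
address=/node2.cluster.int.domain.corp/10.0.0.11
```

With a nameserver line for that node added to the master's /etc/resolv.conf, kube-dns inherits it as an upstream resolver.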

However, IMHO this has two disadvantages compared to the feature I described:

  • You need to deploy a DNS server.
  • It has to be managed outside the cluster; adding new names requires host-level admin access.

In my mind, this feature is intended for testing environments, as a kind of "cluster-wide /etc/hosts". Since there are several workarounds, maybe the use case is not common enough to justify the effort.

Member

cmluciano commented Feb 14, 2017

Does externalName work for your use case? https://kubernetes.io/docs/user-guide/services/#without-selectors
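
For context, an ExternalName Service publishes a CNAME inside the cluster; a minimal sketch, reusing the Kafka hostname from the setup described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka
spec:
  type: ExternalName
  # Pods resolving kafka.<namespace>.svc.cluster.local receive a CNAME
  # to this name, which their resolver must still be able to resolve.
  externalName: nodeX.cluster.int.domain.corp
```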

Member

thockin commented Feb 14, 2017

bowei added the enhancement label Feb 17, 2017

morallo commented Feb 22, 2017

Sorry for my late reply.

@cmluciano: externalName does not work because the containers need to resolve the name, and they can't reach our internal DNS.

@thockin: the stub DNS server is what I ended up implementing. However, I still think there is a valid use case for having quick custom DNS profiles.

Feel free to close the issue if you think the feature is not interesting.
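
For readers landing here: the stub-domain mechanism is configured through the kube-dns ConfigMap (Kubernetes 1.6+). A minimal sketch, where the domain and the 10.0.0.53 nameserver address are placeholders for the setup described above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  # Queries for *.cluster.int.domain.corp are forwarded to the custom
  # nameserver (e.g. the dnsmasq instance described earlier).
  stubDomains: |
    {"cluster.int.domain.corp": ["10.0.0.53"]}
```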

JorritSalverda commented Mar 13, 2017

I'm all for extending KubeDNS's API so you can do CRUD on DNS records yourself, or for adding the service aliases mentioned in kubernetes/kubernetes#39792, because that's what I'm really after.

jamesgetx commented Apr 19, 2017

We hit the same case morallo mentioned above, and we hope kube-dns can support both k8s Services and custom DNS rules.

johnbelamaric commented Apr 19, 2017

You can do this with CoreDNS: http://coredns.io


johnbelamaric commented May 8, 2017

FYI, just put out a blog that shows how to do it. https://blog.coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
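
As a rough illustration of the approach (not necessarily the exact technique the blog post uses), newer CoreDNS versions can serve custom records from inline entries in the hosts plugin; the zone, names, and addresses below are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    # Serve A records for a custom zone from inline hosts entries.
    cluster.int.domain.corp:53 {
        hosts {
            10.0.0.10 node1.cluster.int.domain.corp
            10.0.0.11 node2.cluster.int.domain.corp
        }
    }
```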

Member

bowei commented May 8, 2017

valentin2105 commented Sep 15, 2017

To make an external service resolvable in the cluster, you can manually create a Service and an Endpoints object that point to an external IPv4 address, and it will be resolvable with the correct namespace and cluster domain.

(I use it for GlusterFS.)
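
A minimal sketch of that pattern; the name, address, and port (24007 is the usual GlusterFS daemon port) are placeholders:

```yaml
# A Service with no selector, paired with a manually managed Endpoints
# object of the same name that points at the external server.
apiVersion: v1
kind: Service
metadata:
  name: gluster
spec:
  ports:
  - port: 24007
---
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster        # must match the Service name
subsets:
- addresses:
  - ip: 10.0.0.20      # external server's IP (placeholder)
  ports:
  - port: 24007
```

Pods can then reach the external server as gluster.<namespace>.svc.cluster.local.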


asarkar commented Oct 20, 2017

We have a similar issue with Couchbase. If we deploy a ClusterIP-type Service, the pods don't get assigned DNS entries and are forced to use IPs. On restart the IP changes, and Couchbase considers the node to be in error.
On the other hand, if we use a headless Service, the pods have DNS names and I can tell Couchbase to use those. Restart is no problem; however, client connections fail on Couchbase restart, because the headless Service returns the pod IPs, which the clients hold on to.

All we need is for KubeDNS to serve a constant entry for pods fronted by ClusterIP Services. If it wants to use hostname.servicename, that's OK too, because I can set the hostname.

The situation as of now is completely hopeless. I simply can't get Couchbase working in Kubernetes.

manigandham commented Oct 20, 2017

@asarkar

That seems like an issue with the Couchbase client: does it not discover all the endpoints of the database and retry connecting to another IP? If not, you can just create another service pointing to the same deployment.

For example, have a couchbase-internal headless service for the pods to connect to each other, then create a couchbase-public ClusterIP service for your clients to connect to the database.

You can also use a StatefulSet so that pods are always numbered in order and will keep the same name on restarts.
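
A minimal sketch of that two-Service pattern, assuming pods labeled app: couchbase and the standard 8091 Couchbase port (both are assumptions):

```yaml
# Headless: DNS returns the pod IPs directly, for peer discovery.
apiVersion: v1
kind: Service
metadata:
  name: couchbase-internal
spec:
  clusterIP: None
  selector:
    app: couchbase
  ports:
  - port: 8091
---
# ClusterIP: a stable virtual IP for clients, backed by the same pods.
apiVersion: v1
kind: Service
metadata:
  name: couchbase-public
spec:
  selector:
    app: couchbase
  ports:
  - port: 8091
```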

fejta-bot commented Jan 18, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale


arno01 commented Jan 18, 2018

/remove-lifecycle stale

k1-hedayati commented Jan 20, 2018

Though it is possible to add hosts entries via a ConfigMap, it would be nice and natural to import the node's hosts file into kube-dns.
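
A related (but per-pod, not cluster-wide) alternative is the hostAliases field in the pod spec (Kubernetes 1.7+), which writes entries into the pod's own /etc/hosts; names and addresses here are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hosts-demo            # placeholder name
spec:
  # These entries are written into this pod's /etc/hosts at startup.
  hostAliases:
  - ip: "10.0.0.10"
    hostnames:
    - "node1.cluster.int.domain.corp"
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```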

irvifa commented Feb 28, 2018

Hi, I'm currently using a stubDomain and would like to know where we can see logs when kube-dns fails to resolve a name, since I think this has an impact on service reliability.
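
One place to start, assuming a standard kube-dns deployment (the label and container names can vary by install):

```
# Logs from the kubedns and dnsmasq containers of the kube-dns pods:
kubectl -n kube-system logs -l k8s-app=kube-dns -c kubedns
kubectl -n kube-system logs -l k8s-app=kube-dns -c dnsmasq
```

Per-query logging typically requires adding --log-queries to the dnsmasq container's arguments.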

Frodox commented May 27, 2018

Any ideas/progress on this one?

krmayankk commented May 30, 2018

@asarkar @manigandham would a StatefulSet help you? Its pods get constant DNS entries, and you can still use a Service to round-robin across any of the pods.

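For completeness, a minimal sketch of the StatefulSet naming behaviour, reusing the hypothetical couchbase-internal headless Service from above:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: couchbase
spec:
  serviceName: couchbase-internal   # headless Service that owns the pod DNS records
  replicas: 3
  selector:
    matchLabels:
      app: couchbase
  template:
    metadata:
      labels:
        app: couchbase
    spec:
      containers:
      - name: couchbase
        image: couchbase:community   # placeholder image
```

Each pod then gets a stable name of the form couchbase-0.couchbase-internal.<namespace>.svc.cluster.local that survives restarts.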
