
Feature request: support zone transfers in Kubernetes middleware #660

Closed
ae6rt opened this issue May 5, 2017 · 14 comments
ae6rt commented May 5, 2017

As a Kubernetes cluster operator, I want to perform zone transfers of cluster.local, or whatever cluster domain is otherwise configured. This feature is not supported in CoreDNS-007, which at this writing is the latest CoreDNS release.

Context

Kubernetes cluster version:

$ k version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Pods running in my cluster

$ pods
NAMESPACE     NAME                      READY     STATUS    RESTARTS   AGE       IP             NODE
default       centos7                   1/1       Running   2          2h        10.7.119.136   lxc016
kube-system   coredns-512496995-4638p   1/1       Running   0          2h        10.7.119.135   lxc015

CoreDNS version:

$ k -n kube-system exec coredns-512496995-4638p -- /coredns -version
CoreDNS-007

Corefile

Consider this Corefile, which configures the pod above:

.:53 {
    errors
    log stdout
    health
    kubernetes cluster.local {
      transfer to *
      cidrs 169.254.8.0/21
    }
    proxy . /etc/resolv.conf 
    cache 30
}

Prove DNS works in the cluster:

$ k exec centos7 nslookup kubernetes
Server:		169.254.8.53
Address:	169.254.8.53#53

Name:	kubernetes.default.svc.cluster.local
Address: 169.254.8.1

Attempt a zone transfer:

$ k exec centos7 dig cluster.local axfr

; <<>> DiG 9.9.4-RedHat-9.9.4-29.el7_2.3 <<>> cluster.local axfr
;; global options: +cmd
cluster.local.		5	IN	SOA	ns.dns.cluster.local. hostmaster.cluster.local. 1494000263 7200 1800 86400 60
; Transfer failed.

Observe the same failure (an NXDOMAIN answer to the AXFR query) in the CoreDNS container logs:

$ k -n kube-system logs coredns-512496995-4638p
.:53
2017/05/05 13:41:57 [INFO] CoreDNS-007
CoreDNS-007
10.7.119.136 - [05/May/2017:13:42:26 +0000] "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR 70 362.92µs
10.7.119.136 - [05/May/2017:13:42:32 +0000] "A IN api.twilio.com.default.svc.cluster.local. udp 58 false 512" NXDOMAIN 111 221.227µs
10.7.119.136 - [05/May/2017:13:42:32 +0000] "A IN api.twilio.com.svc.cluster.local. udp 50 false 512" NXDOMAIN 103 186.635µs
10.7.119.136 - [05/May/2017:13:42:32 +0000] "A IN api.twilio.com.cluster.local. udp 46 false 512" NXDOMAIN 99 190.527µs
10.7.119.136 - [05/May/2017:13:42:32 +0000] "A IN api.twilio.com.lxc016.qa.xoom.com. udp 51 false 512" NXDOMAIN 99 1.02957ms
10.7.119.136 - [05/May/2017:13:42:32 +0000] "A IN api.twilio.com.qa.xoom.com. udp 44 false 512" NXDOMAIN 92 846.828µs
10.7.119.136 - [05/May/2017:13:42:32 +0000] "A IN api.twilio.com.xoom.com. udp 41 false 512" NXDOMAIN 104 5.387866ms
10.7.119.136 - [05/May/2017:13:42:32 +0000] "A IN api.twilio.com. udp 32 false 512" NOERROR 445 42.750638ms
10.7.119.136 - [05/May/2017:13:42:32 +0000] "A IN api.twilio.com. tcp 32 false 65535" NOERROR 709 1.268809ms
10.7.119.136 - [05/May/2017:13:42:39 +0000] "AXFR IN cluster.local. tcp 43 false 65535" NXDOMAIN 96 180.064µs
10.7.119.136 - [05/May/2017:13:42:47 +0000] "A IN webserver.2.xoomapi. udp 37 false 512" NOERROR 277 2.827296ms
10.7.119.136 - [05/May/2017:16:03:42 +0000] "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR 70 317.948µs
10.7.119.136 - [05/May/2017:16:04:23 +0000] "AXFR IN cluster.local. tcp 43 false 65535" NXDOMAIN 96 208.056µs

miekg commented May 5, 2017 via email

@johnbelamaric

We have watches on all the K8s resources of interest. We can keep a serial number of some sort, or perhaps construct one out of the individual resource serial numbers.


miekg commented May 5, 2017 via email

@johnbelamaric

But only change it when something comes across the watches? Or we don't care?


miekg commented May 5, 2017 via email

@johnbelamaric

Of course, sure. Do we support notifies today? Given the frequency of changes that can happen in K8s, that would be a concern.


miekg commented May 6, 2017

Yes, we can do notifies; the code is in the file middleware.

There is no generic notify layer/middleware that handles this, though - it might also not be worth the complexity.

@miekg miekg added this to the 008 milestone May 30, 2017
@johnbelamaric johnbelamaric modified the milestones: 008, 009 May 30, 2017
@johnbelamaric johnbelamaric removed this from the 009 milestone Jun 26, 2017

miekg commented Jun 26, 2017

I might actually be tempted to work on this (modulo 8 other PRs)


miekg commented Jun 26, 2017

tentatively for 009, but maybe 010 would be better


miekg commented Aug 21, 2017

I've been playing with some code today; it's far from finished, but I should have something tangible within a few days.


miekg commented Aug 21, 2017

See #963 which shows the gist of it. Interesting questions this poses:

We may be able to put this in the secondary middleware and use the same tricks as we did for autopath and federation?

@miekg miekg added this to the 1.1.0 milestone Aug 24, 2017

stale bot commented Oct 23, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Oct 23, 2017

miekg commented Dec 1, 2017

This is happening in #1259 (and possible follow-up PRs).

@johnbelamaric

Done in #1259! There is still follow-up work on how to handle records provided via the fallthrough mechanism, but the basic K8s records are now available via AXFR.
