coredns 1.0.6 #1504
I think we need decisions on a couple of things, and whether or not we hold the release for them. These are:

I would like to see these finalized for the release, and then we can use 1.0.6 in the k8s 1.10 release for the beta of CoreDNS as default. That means we need to act pretty quickly on this.

Reload

Opinions? I am OK with any of these. It's nice to have the plugin structure, but I did originally also think about a CLI flag. But are we just kicking the can down the road by not dealing with global plugins?

Upstream Behavior
upstream and the proxy fix are the most critical, the rest is less important IMO
That's my understanding. I will verify.
I was mistaken about kube-dns behavior. kube-dns does not honor stub-domains when looking up CNAMEs. I also found an open issue, kubernetes/dns#131, that describes the same behavior. IMO, this is a bug, but since it's a pre-existing behavior, there is no urgency to fix it.
John made some good points, that (finally) made me understand the issue. I think it is good to fix, but if the immediate time pressure is off we could do just 1.0.6 (I mean, releases are cheap).

/Miek
also seeing all the
That's fine, but all those issues still apply with signal-based reloads. Having the plugin encourages its use a lot more, though.
I think all the things that should be in are in; should we be good to go for a 1.0.6?
I think we need to sort out #1532 first...
Yes, very soon. @fturib just filed one more issue he found; it looks like someone made a backwards-incompatible change in the K8s API, so we are going to work around it (#1532). @chrisohaver should have it done today or tomorrow. It's a pain to test because the issue only occurs when running against a k8s built off of k8s master, not in any released version.
yeah, saw it. I wish we could easily release and update the CoreDNS in k8s, separate from their release cadence.
Not sure if this will be fixed in proxy or not, but for what it's worth... I am running coredns to proxy requests arriving on an ipv6 address/interface to an older tinydns authoritative server, to which I have not bothered to apply this patch since it does not apply cleanly and I was 1) lazy and 2) wanted to play with coredns.

I am running coredns in a chroot as non-root, and allow it to bind to port 53 via:

setcap cap_net_bind_service=+ep coredns

Here is the config:

. {

With that background out of the way, here's the problem I am seeing... When I fire it up it works perfectly, for about 5 minutes or so, and then it gets nothing but SERVFAIL responses. The "fix" is to restart coredns; then it works fine again for another 5 minutes or so, and then it's back to all SERVFAIL.
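The actual Corefile was truncated in this thread (only `. {` survives). A minimal sketch of the kind of setup described, proxying everything to a local tinydns instance, might look like the following; the bind address and the upstream address are assumptions for illustration, not the reporter's real values:

```
# hypothetical Corefile sketch; addresses are made up for illustration
. {
    bind ::1               # listen on an IPv6 address (assumed)
    proxy . 127.0.0.1:53   # forward everything to tinydns (assumed address)
    errors
}
```

The `setcap cap_net_bind_service=+ep coredns` step grants the binary just the capability to bind privileged ports, so the server itself can keep running as a non-root user.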
Can you add
Sure - I now have this:

. {

Will have something shortly. Not sure if it matters or not but, again, for full background: coredns gets somewhere between 20-200 queries per second on this setup.
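This updated config is also truncated above; presumably it adds logging so the SERVFAILs can be traced. A hedged sketch of what such a debugging Corefile could look like (the plugin choice and upstream address are assumptions):

```
. {
    proxy . 127.0.0.1:53   # assumed tinydns upstream
    log                    # log each query to stdout
    errors                 # log errors, e.g. the cause of SERVFAIL answers
}
```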
Results are in:

unreachable backend: no upstream host

However, I can dig the upstream host (which runs on the same machine) while it is issuing that error. Restarting coredns fixes it. I will just add a cron job to kill coredns once a minute for now.
Is this 1.0.5 or master? This may already be fixed in master; there are upstream health check problems in 1.0.5.
1.0.5 - guess I will just update when 1.0.6 gets released and see what happens.
Is there any way to disable upstream health checks?
Ok - had a chance to play with this a bit more and figured out what is going on. Tinydns by default...
So without the "." in the data file, tinydns never replies to requests for which it has no data, which coredns treats as a timeout. When I add the ".", tinydns replies immediately with NXDOMAIN, so coredns does not get a timeout. My guess is that coredns marks the upstream as down and stops trying once it gets too many timeouts? However, most of the queries sent upstream do not time out even when I do not have the "." in the data file. Therefore, I concur there are upstream health check problems in 1.0.5.

Now I guess the question is whether this problem is present in master - I will download and build master, try it out, and let you know.
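The behavior guessed at above (an upstream gets marked down after too many timed-out queries) can be sketched as a simple failure counter. This is a hypothetical toy model of that logic, not CoreDNS's actual implementation; the class and method names are made up:

```python
class UpstreamHost:
    """Toy model of max_fails-style health marking for one upstream."""

    def __init__(self, addr, max_fails=3):
        self.addr = addr
        self.max_fails = max_fails  # 0 disables failure-based marking
        self.fails = 0

    def record_timeout(self):
        # every query that times out counts as one failure
        self.fails += 1

    def record_success(self):
        # any reply (even NXDOMAIN) is a success and resets the counter
        self.fails = 0

    def is_down(self):
        # with max_fails == 0 the host is never considered down
        return self.max_fails > 0 and self.fails >= self.max_fails

host = UpstreamHost("127.0.0.1:53")
for _ in range(3):
    host.record_timeout()
print(host.is_down())  # → True: a burst of timeouts marks the host down
```

This also illustrates why answering NXDOMAIN instead of staying silent helps: the reply resets the failure count before the threshold is hit.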
Scratch that plan...

go build github.com/coredns/coredns
[ Quoting <notifications@github.com> in "Re: [coredns/coredns] coredns 1.0.6..." ]

> Scratch that plan...
> go build github.com/coredns/coredns
> # github.com/coredns/coredns/core/dnsserver
> go/src/github.com/coredns/coredns/core/dnsserver/register.go:33:13: cannot use newContext (type func() caddy.Context) as type func(*caddy.Instance) caddy.Context in field value

See #1540.

Use 'make' or manually check out Caddy v0.10.10.

Also, as I've stated in other issues, I'm not happy with the health checking in proxy. Can you try the forward plugin? (You still need to compile it yourself, I think.)

Happy to give you a binary, though.
Thanks for the offer of the binary. I was able to get things working fine by setting max_fails to 0. I also ended up leaving tinydns responding to unknown hosts with NXDOMAIN, since I think this is a better default in any case; it doesn't leave resolvers hanging. Overall I really like coredns - a great tool!
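For reference, a hedged sketch of how that workaround might look in a Corefile (the upstream address is an assumption; the idea is that `max_fails 0` stops the proxy plugin from ever marking the upstream as failed):

```
. {
    proxy . 127.0.0.1:53 {
        max_fails 0   # never mark the upstream down on timeouts
    }
}
```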
@miekg: do you have an idea when 1.0.6 will be released? The code freeze for K8s is this Friday (2/26). We would need this new version, with the fixes, for integration in v1.10 as BETA.
yeah, we can release. Want to get #1543 in as well, but that should be OK to do today, and then kick off a release.
and released |
Thanks!!!
There are some important fixes (esp. in proxy); kick off a new release.