Kube-dns add-on should accept option ndots for SkyDNS or document ConfigMap alternative subPath #33554
Comments
Why would resolving SRV records fail? Suppose a hostname with X dots is queried and X is below the ndots threshold; it will have the search paths appended before it is sent to the name server. For the SRV record
Given that the search paths are:
Please correct me if I missed something. |
Though SRV records may not fail, #14051 (comment) describes some scenarios where lowering it "won't work". That doesn't actually seem to be the case; it just changes the behavior from first resolving all search domains and then trying the absolute domain, to attempting the absolute domain first and then trying the search domains. cc @thockin (linked your comment above)
Our clusters use cluster domains along the lines of:
We have an existing application that lives outside our clusters at:
The 3 dots mean we'll attempt the 6 cluster search domains before actually trying the absolute domain (which is ultimately resolved by the regional DNS servers outside the k8s cluster). That's 12 DNS queries (A and AAAA for each search domain) which ultimately fail, since
If instead we could configure
In any case, it would give the user the option to decide which is more important for their usage: attempting absolute domains first or utilizing the search domains. |
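To put numbers on the comment above, here is a schematic sketch in plain YAML (every domain name is a made-up example, not taken from this cluster) of the search list and ndots a pod typically ends up with, and the lookups a three-dot external name triggers:

```yaml
# Schematic only - not a Kubernetes object. All names are made-up examples:
# a pod in namespace "default", cluster domain "cluster.local", and a node
# resolv.conf contributing three corporate suffixes.
searches:
  - default.svc.cluster.local
  - svc.cluster.local
  - cluster.local
  - dc1.corp.example.net
  - corp.example.net
  - example.net
ndots: 5
# Resolving the external name "db.team.example.com" (3 dots < ndots 5),
# the resolver first tries every search suffix, each as both A and AAAA:
#   db.team.example.com.default.svc.cluster.local.
#   db.team.example.com.svc.cluster.local.
#   db.team.example.com.cluster.local.
#   db.team.example.com.dc1.corp.example.net.
#   db.team.example.com.corp.example.net.
#   db.team.example.com.example.net.
# That is 12 queries that can only fail, before the absolute
# "db.team.example.com." is finally attempted.
```

Lowering ndots, or using fully qualified names with a trailing dot, removes that fan-out at the cost of search-path convenience.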
I don't think that behavior is consistent. The resolvers I have seen never try search if the query has >= ndots dots.
|
From the resolv.conf man page:
This implies search domains should still be respected even if the absolute domain is attempted. It does seem that not everything respects that, though. Seems like something still best left to the user to determine for themselves? |
We had a rough proposal from someone to add a DNS policy for
I would accept such a PR...
|
tl;dr: It seems like
I wouldn't want to just go from
Do you have the
With
|
So we cannot depend on this behavior. Unfortunately, resolv.conf behavior
The "proposal" was an offhand remark in a bug or email or something. If
|
This conversation definitely filled in the gaps I had in understanding ndots in action, thank you for the details - I've updated the bug description. This doesn't affect the relevance of the bug, though. |
@thockin the hardcoded ndots:5 is still a pain point, regardless of the implementation details of the DNS stack. This keeps this issue relevant. |
NOTE: this is more of a docs issue; see #35525 (comment) |
This issue snuck up and bit me. I was surprised two DNS pods couldn’t handle the load of a 7 node/81 pod cluster when the node running the DNS containers did a GC to delete old docker images. I’m finding it hard to accept a design that causes a single connection to
I’ve started testing the configmap workaround for my cluster. There is a chance the dnsmasq
I’m curious what the real-world use cases are for having ndots set to 5. I’m a little new to k8s, but I have traditionally always configured critical software to use FQDNs when possible. The only use case I can think of is if you are trying to find a service that is in your namespace but you don’t know what namespace you are in. What am I missing? When do you have to rely on the search list? And more importantly, am I going to break something if I run the majority of my services with ndots set to 1? Can we make sure no Kubernetes components are built requiring ndots = 5, which would then potentially restrict users running with it set to 1? |
cc @bowei |
I apologize for the girth of this, but I have a lot to say :) This is a tradeoff between automagic and performance. There are a number of considerations that went into this design. I can explain them, but of course reasonable people can disagree.
= ergo the name of a Service is
This explains how we got to ndots = 5.
We did not change ndots to 6 because
This is getting out of hand. I'd very much like to revisit some of the assumptions and the schema. The problem, of course, is how to make a transition once we have a better schema. Consider an alternative:
That is a better, safer, more appropriate schema that only requires 2 search paths. And in fact, you could argue that it only REQUIRES $zone, while the other is sugar. People love sugar. This still leaves, pathologically, ndots = 6.
If we exposed $zone through the downward API (as we do $namespace), then maybe we don't need so much magic. I'm reticent to require $zone to access the kube-master, but maybe we can get away with ndots = 3 (kubernetes.s.default) or ndots = 4 (petname.kubernetes.s.default). That's not much better.
We could mitigate some of the perf penalties by always trying names as upstream FQDNs first, but that means that all intra-cluster lookups get slower. Which do we expect more frequently? I'll argue intra-cluster names, if only because the TTL is so low. So that's the wrong tradeoff. Also, that is a client-side thing, so we'd have to implement server-side logic to do search expansion. But namespace is really variable, so it's some hybrid. Blech. OTOH, good caching means that external lookups are slow the first time but fast subsequently. So that's where we've been focused.
The schema change would be nice (uses fewer search domains, but is a little more verbose), but requires some serious ballet to transition, and we have not figured that out. Now, we could make a case for a new DNSPolicy value that cuts down on search paths and ndots. We could even make a case for per-namespace defaults that override the global API defaults. We can't make a global change, and I doubt we can make a per-cluster change because ~every example out there will break. @johnbelamaric (spec)
First off -- I'm totally in favor of moving to a schema where the "class" is under namespace, i.e. $service.s.$ns.$zone. (Note that this is the schema we picked for GCE: $host.c.$project.internal.)
Second -- SRV records are uncommon enough that it is probably okay to make that use case less smooth. Something similar can be set for petsets. Anything using petsets like that will require some elbow grease already. We could expose the zone and namespace as env variables to smooth this over.
That means we have 2 cases we care about:
- same-namespace service: $otherservice -> $otherservice.s.$namespace.$zone
- cross-namespace service: $otherservice.s.$othernamespace -> $otherservice.s.$othernamespace.$zone
That means ndots needs to be 3, right? Can we have it both ways here? Can we have a local DNS cache that can cache and answer these queries super fast? If we make this per-node (in the proxy or kubelet or a new binary {ug}) then it is faster and cheaper, and config scales with cluster size. |
I think we *could* get to ndots=3 in this way, yes. The transition is the hard part.
The current thinking (@bowei) is (ideally) a smallish per-node cache, which only sends $zone-suffixed names to kube-dns, does not trigger conntrack for DNS traffic, and allows other stub-domains to branch off. This does mean that DNS will not get original client IPs, but I think we can live with that.
|
Yes, reading through this thread I found myself reaching the same place in my head. ndots=3 seems like the right value assuming we also switch
I think that's probably the right place to focus.
You mean kube-dns won't see the client IPs? Yeah, that could be interesting in the context of the multi-tenant DNS discussions that have popped up before. I guess so long as each tenant gets its own cache and that cache forwards to the right tenant kube-dns service... |
@thockin it's interesting you've mentioned the per-node cache in front of the KubeDNS app. That's how Kargo currently configures DNS (see the
PS. The transition is always the hard part, but that is not a problem for properly organized change management (deprecation rules) and docs, right? |
For the transition, as long as we don't have a namespace that is "svc" or "pod", the server can differentiate between the new schema and the old one, so both can be active at the same time. We could use a different DnsPolicy on the client side with the new search and ndots and, eventually, with good lead time, make it the default. |
Thanks everyone for continuing this discussion so we can figure out our best options. Here are two thoughts:
|
@thockin @caseydavenport kube-dns not seeing the client IPs can be mitigated by having the local cache append it (and/or other data) as an EDNS0 option. |
If and when that becomes necessary. |
Just as a point of reference: my relatively small cluster was running 1,246 packets per second for all DNS-related traffic with the default settings. After I implemented the config map workaround for most of the pods to set ndots to 1, the same cluster is now running at 109 pps for DNS traffic. |
This is firmly @bowei's territory - I'm just making trouble now.
I think we're open to a proposal to either add a DNSPolicy for ndots=1 or to make a more expansive change and make more params configurable. Need a volunteer to write the proposal...
|
@tonylambiris Could you elaborate why |
Frankly, I don't understand the reluctance to allow this to be configurable per site (or, better yet, per container). "We're smart enough to solve this for everyone" is not realistic. There are a number of assumptions in the SkyDNS design which, while undoubtedly true for the given developer's environment, are clearly not true for many of us who are trying to get work done, only to run into this issue. |
/assign |
I am having this exact problem and wish to change ndots to 1. @jkemp101 mentioned a configmap workaround. What is that? Where can I find it? Or is there another workaround to set ndots = 1? |
@shivangipatwardhan2018 Just create a
|
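For anyone else landing here, the workaround in question roughly amounts to shipping your own resolv.conf in a ConfigMap and mounting that single key over /etc/resolv.conf with subPath. A minimal sketch follows; the nameserver IP, search list, and object names are placeholders you would replace with your cluster's DNS service IP, domain, and namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: resolv-conf
data:
  resolv.conf: |
    nameserver 10.3.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local
    options ndots:1
---
apiVersion: v1
kind: Pod
metadata:
  name: ndots-workaround-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: resolv-conf
          mountPath: /etc/resolv.conf   # overrides just this one file
          subPath: resolv.conf
  volumes:
    - name: resolv-conf
      configMap:
        name: resolv-conf
```

Note that a subPath mount is a point-in-time copy, so later edits to the ConfigMap do not propagate into running pods, and you are now maintaining the search list yourself instead of letting kubelet generate it.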
This feature is in 1.9 as alpha |
I am going to close this as we now have |
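The feature being referred to is presumably the pod-level DNS config API (dnsPolicy: None plus dnsConfig), which the previous comment notes shipped as alpha in 1.9. A minimal sketch of using it to drop ndots to 1; the nameserver IP and search list are placeholders for your cluster's values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ndots-one-demo
spec:
  dnsPolicy: "None"            # kubelet will not generate resolv.conf for this pod
  dnsConfig:
    nameservers:
      - 10.3.0.10              # cluster DNS service IP (placeholder)
    searches:
      - default.svc.cluster.local
      - svc.cluster.local
      - cluster.local
    options:
      - name: ndots
        value: "1"
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```

With ndots:1, single-label names (e.g. myservice) still go through the search list first, while any dotted name is tried as an absolute name before the search suffixes.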
BUG REPORT:
Kubernetes version (use kubectl version): Kubernetes v1.3.5+coreos.0,
Kube-dns add-on v19 from gcr.io/google_containers/kubedns-amd64:1.7
Environment:
uname -a: 4.4.0-38-generic
What happened:
ndots:5 is hardcoded into the containers' base /etc/resolv.conf by kubelet when running with the --cluster_dns, --cluster_domain, and --resolv-conf=/etc/resolv.conf flags.
What you expected to happen:
ndots should be configurable via the kubedns app definition or a configmap, to allow users to choose whether skydns should attempt absolute domains first or utilize the search domains, e.g.:
AFAICT, DNS SRV records expect ndots:7 and thus will fail to resolve via skydns (or maybe not! #33554 (comment))
Also, this might affect DNS performance by generating undesired additional resolve queries for the configured search subdomains before actually trying the absolute domain, when the number of dots in the initial query is below the given ndots threshold.
How to reproduce it (as minimally and precisely as possible):
Deploy the kube-dns cluster add-on, then check the containers' /etc/resolv.conf within pods.
Anything else do we need to know:
This is more of a docs issue; see #35525 (comment) for details, and please address it in the docs.