Deploying to GKE #42
You might be able to get the service CIDR from … The CIDRs are only needed by coredns for reverse zone lookups. That is, if you make the CIDRs too wide, the only negative effect is that you'll be masking some reverse lookups that would otherwise be sent to the proxy server.
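For reference, these CIDRs are the extra zone arguments to the `kubernetes` plugin in the Corefile, which CoreDNS turns into reverse zones; a minimal sketch, with made-up service and pod ranges (your cluster's will differ):

```
.:53 {
    kubernetes cluster.local 10.0.0.0/24 10.244.0.0/16 {
        pods insecure
    }
    proxy . /etc/resolv.conf
    cache 30
}
```

Reverse lookups for addresses outside those ranges don't match the `kubernetes` zones and are handled by the `proxy` plugin instead.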
FTR, I believe that kube-dns masks ALL reverse zone lookups.
We should figure out something similar. I think it adds complexity to have to configure these CIDRs.
I am facing the same problem. I have deployed a Kubernetes cluster on Azure. My clusterSubnet value is "10.244.0.0/16", and each agent node is assigned a pod CIDR from this subnet with a /24 range.
Agreed. If you define 0.0.0.0/0 as the subnet, then it has this effect. We could make this the default behavior if no CIDRs are defined, in effect defaulting to behave like kube-dns.
Actually, this may be complex to do in coredns itself because of the general way reverse zones are implemented (it's not a kubernetes-plugin-specific feature). Making this the default behavior globally would not work. So, we should do this just as a deployment default.
I was wrong about that. I tested, and it appears that kube-dns actually does a fall-through of some kind to the upstream DNS for reverse zone lookups of IPs that don't exist in the cluster. So, as coredns works now, putting "0.0.0.0/0" in the config is not equivalent to kube-dns behavior.
Related: coredns/coredns#1074; specifically, the …
@miekg I think we need to do something about this. Having to update the Corefile for every pod CIDR is a big hassle, especially since there is no API to get those CIDRs (which in fact come out of the CNI IPAM plugin in the most general case). I think the …
I like Chris' suggestion better in the comment above: #42 (comment). Just use 0/0 (or 0.0.0.0/0) and make that do the right thing?
To make that do the right thing, we need the …
"making that do the right thing" would require a fall-through that only applies to that domain. |
yes? Or is this too implicit and we need/want something in the config? Remind me: why does regular fallthrough not work here? |
Because regular fallthrough would fall everything through, not just the reverse lookups in the 0/0 zone.
Seems too implicit to me. Regular fallthrough might work, but it will fall through all queries, not just PTR. We really just want PTR to fall through in the default case.
OK, right. I would suggest that … and document this next to wherever we documented 'fallthrough' (for plugin authors).
@chrisohaver do you have time to implement this? |
We could add domains to the fallthrough command, and only fall through for those domains, e.g. … It would need to check other "smaller" domains within …, since we'd want to fall through for all reverse zones except for those in 10.0.0.0/8.
Or… we could add the ability to list zones for …, which is equivalent to what you suggest but is more flexible.
It's an echo. LOL
I would like to implement this, but I don't have time.
Giving fallthrough a list of zones is an excellent idea. Should not be that hard to do.
I'm on call this weekend, which means I can prepare a PR for this (I hope).
That's ok, I'll take it.
OK, with the latest master you can do 0.0.0.0/0 and `fallthrough in-addr.arpa ip6.arpa` to make this work as desired. Pondering doing a 1.0.3 just because.
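For anyone following along, the resulting deployment-default Corefile can then own all reverse zones and fall through to the upstream only for non-cluster PTR records; a sketch (cluster domain, upstream, and extra options assumed):

```
.:53 {
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    proxy . /etc/resolv.conf
    cache 30
}
```

Since the `kubernetes` plugin claims `in-addr.arpa` and `ip6.arpa` outright, no per-cluster CIDRs are needed, and the scoped fallthrough sends unmatched reverse lookups on to `proxy`.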
Let’s get the integration tests first to make sure it’s all clean. Then it makes sense to me.
1.0.3 is out with `fallthrough in-addr.arpa ip6.arpa` in it.
This is awesome. Thank you all.
Thanks folks for all this work. Unfortunately I have a new problem trying to deploy CoreDNS to Container Engine, but I'll try to make a ticket with them somehow. They run the k8s addons with `addonmanager.kubernetes.io/mode: Reconcile`, so you can't switch `kube-dns`'s selector to point to coredns. Is this even so with Alpha clusters?
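For context: the addon manager's mode is set as a label on the addon manifests, and a `Reconcile`-mode addon is periodically re-applied from its on-disk definition, undoing manual edits. A sketch of the relevant fragment (the surrounding manifest is illustrative):

```yaml
# Fragment of a managed addon manifest (e.g. the kube-dns Service).
# The addon manager re-applies manifests labeled Reconcile, reverting
# manual changes such as repointing the selector at CoreDNS.
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
```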
I can't see it in the Addons mentioned or anything, but will test an alpha cluster this weekend. Maybe when they add 1.9 support they'll add it.
Maybe this just needs to be documented, but do you folks have any ideas on how I'd go ahead finding my service and pod CIDRs for container engine?
I found this feature request but couldn't see any tips from it: kubernetes/kubernetes#25533
I looked at the related issues and none of them looked specifically like they'd have the info I'd need.
I guess I could derive a liberal CIDR from the existing service and pod IPs, but that sounds like I'm asking for trouble. The only idea I've had so far is `gcloud compute routes list`, which does return a bunch of ranges for the cluster that would all fit in a /16.
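If it helps, one way to sanity-check that a list of per-node routes collapses into a single block is Python's `ipaddress` module; the ranges below are made up for illustration (in practice they'd come from the `gcloud compute routes list` output):

```python
import ipaddress

# Hypothetical per-node pod CIDRs pulled from the cluster's routes.
ranges = ["10.244.0.0/24", "10.244.1.0/24", "10.244.7.0/24"]
nets = [ipaddress.ip_network(r) for r in ranges]

# Widen the first range's prefix until one supernet covers every route.
supernet = nets[0]
while not all(n.subnet_of(supernet) for n in nets):
    supernet = supernet.supernet()

print(supernet)  # -> 10.244.0.0/21, comfortably inside a /16
```

With the supernet in hand, it could serve as the pod CIDR in the Corefile; erring on the wide side only masks some upstream reverse lookups, as noted earlier in the thread.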