
Deploying to GKE #42

Closed
zoidyzoidzoid opened this issue Dec 27, 2017 · 32 comments

@zoidyzoidzoid

Maybe this just needs to be documented, but do you folks have any ideas on how I'd go about finding my service and pod CIDRs for Container Engine?

I found this feature request but couldn't see any tips from it: kubernetes/kubernetes#25533

I looked at the related issues and none of them looked specifically like they'd have the info I'd need.

I guess I could get a liberal CIDR from the existing service and pod IPs, but that sounds like I'm asking for trouble. The only idea I've had so far is gcloud compute routes list which does return a bunch of ranges for the cluster that would all fit in a /16.
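For what it's worth, the cluster description itself appears to expose both ranges on recent gcloud releases (as clusterIpv4Cidr for pods and servicesIpv4Cidr for services). A hedged offline sketch, parsing a saved sample instead of calling `gcloud container clusters describe` against a live cluster (the sample values are invented):

```shell
# Offline sketch: parse the pod and service ranges out of a saved
# `gcloud container clusters describe` output (values are invented).
desc='clusterIpv4Cidr: 10.244.0.0/14
servicesIpv4Cidr: 10.0.0.0/20'
pod_cidr="$(echo "$desc" | awk '/clusterIpv4Cidr/ {print $2}')"
svc_cidr="$(echo "$desc" | awk '/servicesIpv4Cidr/ {print $2}')"
echo "pods=$pod_cidr services=$svc_cidr"
```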

@chrisohaver
Member

chrisohaver commented Dec 27, 2017

You might be able to get the service cidr from kubectl cluster-info dump. Pod cidr is even more obscure, since it depends on the pod network add-on used.
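For example, the apiserver's --service-cluster-ip-range flag often shows up somewhere in that dump. A sketch that greps a saved sample line rather than a live cluster (the flag value here is invented):

```shell
# Sketch: extract the service CIDR from a saved line of
# `kubectl cluster-info dump` output (sample text is invented).
sample='command: kube-apiserver --service-cluster-ip-range=10.96.0.0/12 --etcd-servers=...'
svc_cidr="$(echo "$sample" | grep -o 'service-cluster-ip-range=[0-9./]*' | cut -d= -f2)"
echo "$svc_cidr"
```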

The CIDRs are only needed by CoreDNS for reverse zone lookups. That is, if you make them too wide, the only negative effect is that you'll be masking some reverse lookups that would otherwise be sent to the proxy server.
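Concretely, a Corefile of that era might carry the CIDRs like this (a sketch; the zone names, CIDRs, and proxy upstream are illustrative, not taken from any real cluster):

```
.:53 {
    kubernetes cluster.local 10.96.0.0/12 10.244.0.0/16 {
        pods insecure
    }
    # Reverse lookups outside the CIDRs above go to the proxy instead.
    proxy . /etc/resolv.conf
}
```

If the CIDRs are wider than the real ranges, the only cost is that some non-cluster PTR queries get answered (with NXDOMAIN) here instead of being proxied upstream.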

@chrisohaver
Member

FTR, I believe that kube-dns masks ALL reverse zone lookups.

@johnbelamaric
Member

johnbelamaric commented Dec 28, 2017 via email

@nagarjunac

I am facing the same problem. I have deployed a Kubernetes cluster on Azure. My clusterSubnet value is "10.244.0.0/16", and each agent node is assigned a pod CIDR from this subnet with a /24 range.
I have 3 agent nodes as of now and their pod CIDRs are 10.244.6.0/24, 10.244.5.0/24 and 10.244.4.0/24.
What CIDR should I provide while deploying the CoreDNS in place of the pod CIDR?
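One way to see what a given CIDR buys you: each CIDR in the plugin's zone list corresponds to a reverse zone, so the covering /16 can stand in for all the per-node /24s. A small shell sketch (my own illustration, using that /16) of the CIDR-to-zone mapping:

```shell
# Illustrative: map a /16 CIDR to the in-addr.arpa reverse zone it covers.
cidr="10.244.0.0/16"
ip="${cidr%/*}"        # drop the prefix length -> 10.244.0.0
o1="${ip%%.*}"         # first octet  -> 10
rest="${ip#*.}"
o2="${rest%%.*}"       # second octet -> 244
echo "${o2}.${o1}.in-addr.arpa."   # the reverse zone for the whole /16
```

So one option is to list the covering 10.244.0.0/16 rather than each node's /24.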

@chrisohaver
Member

We should figure out something similar. I think it adds complexity to have to configure these CIDRs.

Agreed. If you define 0.0.0.0/0 as the subnet, then it has this effect. We could make this the default behavior when no CIDRs are defined, in effect defaulting to behave like kube-dns.

@chrisohaver
Member

We could make this the default behavior ...

Actually, this may be complex to do in coredns itself because of the general way reverse zones are implemented (it's not a kubernetes plugin specific feature). Making this the default behavior globally would not work. So, we should do this just as a deployment default.

@chrisohaver
Member

I believe that kube-dns masks ALL reverse zone lookups.

I was wrong about that. I tested, and it appears that kube-dns actually does a fall-through of some kind to the upstream DNS for reverse zone lookups of IPs that don't exist in the cluster. So, as coredns works now, putting "0.0.0.0/0" in the config is not equivalent to kube-dns behavior.

@chrisohaver
Member

chrisohaver commented Jan 2, 2018

Related: coredns/coredns#1074 - specifically, the ptr_fallthrough suggestion (which was not adopted).

@johnbelamaric
Member

@miekg I think we need to do something about this. Having to update the Corefile for every pod CIDR is a big hassle. Especially since there is no API to get those CIDRs (which in fact come out of the CNI IPAM plugin in the most general case). I think the ptr_fallthrough option makes sense.

@miekg
Member

miekg commented Jan 5, 2018

I like Chris' suggestion better in the comment above: #42 (comment)

Just use 0/0 (or 0.0.0.0/0) and make that do the right thing?

@johnbelamaric
Member

To make that do the right thing we need the ptr_fallthrough and the 0/0. Just the 0/0 isn't enough. Or we could always do the equivalent of PTR fallthrough if 0/0 is in the CIDR list.

@chrisohaver
Member

"making that do the right thing" would require a fall-through that only applies to that domain.

@miekg
Member

miekg commented Jan 5, 2018

Or we could always do the equivalent of PTR fallthrough if 0/0 is in the CIDR list.

yes? Or is this too implicit and we need/want something in the config?

Remind me: why does regular fallthrough not work here?

@chrisohaver
Member

Because regular fallthrough would fall everything through, not just the reverse lookups in the 0/0 zone.

@johnbelamaric
Member

Seems too implicit to me.

Regular fallthrough might work, but it would fall through all queries, not just PTR. We really just want PTR to fall through in the default case.

@miekg
Member

miekg commented Jan 5, 2018

ok right. I would suggest that reverse_fallthrough is a better name for that, and it should then match on ip6.arpa and in-addr.arpa instead of checking the qtype.

And document this next to wherever we documented 'fallthrough' (for plugin authors).

@johnbelamaric
Member

@chrisohaver do you have time to implement this?

@chrisohaver
Member

We could add domains to the fallthrough command... and only fallthrough for those domains...

e.g. fallthrough 0.0.0.0/0 would only fall through for queries in the in-addr.arpa. domain.

It would need to check other "smaller" domains within 0.0.0.0/0 ...
for example, for:

plugin 10.0.0.0/8 0.0.0.0/0 {
  fallthrough 0.0.0.0/0
}

we'd want to fall through for all reverse zones except for those in 10.0.0.0/8 ...

@johnbelamaric
Member

Or...we could add the ability to list zones for fallthrough:

kubernetes cluster.local 0/0 {
  fallthrough ip6.arpa in-addr.arpa
}

Which is equivalent to what you suggest but is more flexible.

@johnbelamaric
Member

It's an echo. LOL

@chrisohaver
Member

I would like to implement this, but I don't have time.

@miekg
Member

miekg commented Jan 5, 2018 via email

@miekg
Member

miekg commented Jan 6, 2018

I'm oncall this weekend; which means I can prepare a PR for this (I hope)

@johnbelamaric
Member

That's ok, I'll take it.

@johnbelamaric
Member

OK, with the latest master you can do 0.0.0.0/0 and fallthrough in-addr.arpa ip6.arpa to make this work as desired.
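For readers landing here later, a minimal Corefile using the new behavior might look like this (a sketch; the proxy upstream and other options are illustrative):

```
.:53 {
    kubernetes cluster.local 0.0.0.0/0 {
        # Claim all reverse zones, but pass PTR queries the cluster
        # can't answer to the next plugin instead of returning NXDOMAIN.
        fallthrough in-addr.arpa ip6.arpa
    }
    proxy . /etc/resolv.conf
}
```

This matches kube-dns's observed behavior of forwarding unknown reverse lookups upstream, without listing any pod or service CIDRs.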

@miekg
Member

miekg commented Jan 7, 2018 via email

@johnbelamaric
Member

johnbelamaric commented Jan 7, 2018 via email

@miekg
Member

miekg commented Jan 11, 2018

1.0.3 is out with this in it

@miekg miekg closed this as completed Jan 11, 2018
@zoidyzoidzoid
Author

This is awesome. Thank you all.

@zoidyzoidzoid
Author

Thanks folks for all this work. Unfortunately I have another problem with trying to deploy CoreDNS to Container Engine, but I'll try to make a ticket with them somehow.

They run the k8s addons with addonmanager.kubernetes.io/mode: Reconcile so you can't switch kube-dns's selector to point to coredns.
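For context, the managed manifest carries roughly this label (the fragment below is illustrative, not GKE's actual manifest). With mode Reconcile, the addon manager reverts any manual edits, whereas EnsureExists-managed addons tolerate user changes:

```yaml
# Fragment of a GKE-managed addon manifest (illustrative).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile  # manual edits are reverted
```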

@miekg
Member

miekg commented Jan 26, 2018 via email

@zoidyzoidzoid
Author

I can't see it in the addons mentioned or anything, but I'll test on an alpha cluster this weekend. Maybe when they add 1.9 support they'll add it.
