This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

Add support for CoreDNS as DNS provider #1345

Merged

Conversation

etiennetremel
Contributor

Add support for CoreDNS as DNS provider. Fix #1075
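
(For context, a minimal sketch of how a user might opt into CoreDNS in cluster.yaml after this change. The key and value names below are assumptions based on the PR title and the KubeDns config struct touched in the diff, not copied from the docs.)

kubeDns:
  # Assumed option: selects the cluster DNS implementation; "kube-dns" presumably remains the default.
  provider: coredns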

@k8s-ci-robot added the cncf-cla: yes (PR author has signed the CNCF CLA) and size/L (100-499 changed lines, ignoring generated files) labels on May 31, 2018
@codecov-io

Codecov Report

Merging #1345 into master will increase coverage by 0.02%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master    #1345      +/-   ##
==========================================
+ Coverage   37.42%   37.45%   +0.02%     
==========================================
  Files          68       68              
  Lines        4283     4285       +2     
==========================================
+ Hits         1603     1605       +2     
  Misses       2454     2454              
  Partials      226      226
Impacted Files                         Coverage Δ
core/controlplane/config/config.go     63.87% <100%> (+0.08%) ⬆️


@@ -817,6 +820,7 @@ type KubeDnsAutoscaler struct {
}

type KubeDns struct {
Contributor

Yeah, this should retain the name for backward compatibility for now.
But how about renaming this to ClusterLocalDns in the future?
KubeDns sounds as though it is specific to kube-dns, which is just one of the cluster-local DNS server implementations available to Kubernetes.
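
(For context, a rough sketch of the shape under discussion, assuming the PR adds a provider selector to the existing KubeDns struct; the field names and yaml tags are illustrative, not copied from core/controlplane/config/config.go.)

// Sketch only: illustrative, not the actual struct definition.
type KubeDns struct {
	// Provider selects the cluster-local DNS implementation, e.g. "kube-dns" or "coredns" (assumed values).
	Provider string `yaml:"provider"`
	// Existing kube-dns settings such as the autoscaler stay under the KubeDns name for backward compatibility.
	Autoscaler KubeDnsAutoscaler `yaml:"autoscaler"`
}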

@mumoshu mumoshu added this to the v0.11.0 milestone Jun 6, 2018
Contributor

@mumoshu mumoshu left a comment

I appreciate your effort to keep compatibility with kube-dns. Looks great overall! Thank you for your contribution @etiennetremel 👍

@mumoshu mumoshu merged commit 668473e into kubernetes-retired:master Jun 6, 2018
@cknowles
Contributor

cknowles commented Jul 8, 2018

If a user switches provider on an existing cluster, do both deployments run side by side, leaving the old one to be deleted manually? We may want to add a note about that.

- --default-params={"linear":{"coresPerReplica":{{ .KubeDns.Autoscaler.CoresPerReplica }},"nodesPerReplica":{{ .KubeDns.Autoscaler.NodesPerReplica }},"min":{{ .KubeDns.Autoscaler.Min}}}}
- --logtostderr=true
Contributor

Was logtostderr intentionally removed here?

Contributor

No, at least for me! Good catch.
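
(A sketch of the fix implied by this exchange: the autoscaler container args in the CoreDNS variant of the template with --logtostderr=true restored. The args: key is assumed surrounding context; the flag lines themselves are taken from the quoted diff above.)

args:
  - --default-params={"linear":{"coresPerReplica":{{ .KubeDns.Autoscaler.CoresPerReplica }},"nodesPerReplica":{{ .KubeDns.Autoscaler.NodesPerReplica }},"min":{{ .KubeDns.Autoscaler.Min}}}}
  - --logtostderr=true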

@cknowles
Contributor

I was trying to figure out whether #792 is still really needed when using CoreDNS. There seems to be some discussion about that in kubernetes/kubernetes#32749; it might be that we could deploy CoreDNS as a DaemonSet and reduce the number of components involved.

@mumoshu
Contributor

mumoshu commented Feb 14, 2019

@c-knowles Hey!

I got a chance to revisit this, and it seems the momentum is toward using CoreDNS as a node-local DNS caching agent, as you said.

See the sig-network proposal at https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/0030-nodelocal-dns-cache.md, which is implemented upstream in kubernetes/kubernetes#70555.
