Make CoreDNS default in kube-up and update CoreDNS version/manifest in kube-up and kubeadm #69883
Conversation
/hold until 1.2.3 is pushed to gcr.io (#69880)
/hold
/kind feature
@chrisohaver thanks for the update to CoreDNS 1.2.3. Added one minor comment.
resources:
- nodes
verbs:
- get
I'm pretty sure these should be under rules:
See http://people.redhat.com/jrivera/openshift-docs_preview/openshift-online/glusterfs-review/rest_api/apis-rbac.authorization.k8s.io/v1.ClusterRole.html#object-schema
I guess this applies to the rest of the files changed in this diff.
@neolit123, they are under rules:, as one more list item. Am I missing something here?
Never mind, I got confused by the indentation.
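For reference, a minimal sketch of how that fragment sits inside rules: in the CoreDNS ClusterRole (the nodes/get entry is one more list item; the first rule is assumed from the stock CoreDNS manifest and shown only for context, not quoted from this diff):

rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get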
We've identified a bug in CoreDNS 1.2.3 that makes it invalid for Kubernetes deployment.
The latest commit is intended to run the latest CoreDNS release (1.2.4) through the e2e presubmit tests prior to pushing the CoreDNS image to gcr.io. If all is OK, we will kick off the push to gcr.io, and I'll update the image locations in the manifests here (from staging-k8s.gcr.io to k8s.gcr.io).
I suspect the e2e test nodes don't have access to staging-k8s.gcr.io, hence the failure to pull images and the subsequent test failure. Will test now on k8s.gcr.io, since the latest version appears to be already promoted.
Force-pushed d4689f0 to 0cfb4bb
I've squashed the commits. Commit 0cfb4bb now points to the k8s.gcr.io repo.
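To illustrate the kind of manifest change being described, a sketch of the image field switching registries (the field layout and tag are assumed from the stock CoreDNS deployment manifest, not quoted from this diff):

containers:
- name: coredns
  image: k8s.gcr.io/coredns:1.2.4  # was staging-k8s.gcr.io/coredns:1.2.4 during presubmit testing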
Yes, the image is up.
There has been quite a bit of confusion about this part over the last couple of cycles: "default in what?"
/test pull-kubernetes-integration
/hold cancel
/test pull-kubernetes-e2e-kops-aws
/assign @bowei @timothysc
/assign @thockin
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: chrisohaver, thockin. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
How about merging this PR to the other release branches (v1.11 and v1.12)? Does this PR qualify to be backported to previous releases?
I don't believe we can port the entire change back, i.e. retroactively making CoreDNS the default cluster DNS for kube-up installations. But we could push newer default versions/manifests of CoreDNS into kube-up/kubeadm for older versions. I'd think this would be reserved for security fixes and major bug fixes, although I'm not personally familiar with the policy for support on past versions of k8s. A question for the sig-lifecycle team, I think?
nope, not really. |
good |
What this PR does / why we need it:
Makes CoreDNS the default for kube-up.
Updates the manifest and CoreDNS version for kube-up and kubeadm.
KEP: https://github.com/kubernetes/community/blob/master/keps/sig-network/0012-20180518-coredns-default-proposal.md
Feature: kubernetes/enhancements#566
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #
Special notes for your reviewer:
This version of CoreDNS (1.2.4) contains improvements that reduce memory usage at scale. Based on the 500-node and 2000-node e2e scale test results, we have linearly projected that memory use in the 5000-node scale test should be below the prescribed resource limit. But since the 5000-node scale test is not available as a presubmit, we cannot confirm this until the PR is merged.
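As a sketch of that projection (m500 and m2000 stand in for the measured memory peaks from the 500-node and 2000-node runs; they are placeholders, not reported numbers), a straight line fit through the two measurements gives:

m(n) ≈ m500 + (n - 500) * (m2000 - m500) / (2000 - 500)
m(5000) = m500 + 3 * (m2000 - m500)

So the 5000-node estimate is the 500-node usage plus three times the growth observed between the two runs.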
Release note: