CoreDNS fails to start if Corefile migration fails #88725
Labels
area/kubeadm
kind/bug
Categorizes issue or PR as related to a bug.
sig/cluster-lifecycle
Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.
What happened:
CoreDNS failed to start in a 1.17 deployment when it tried to fall back to a backed-up Corefile. It exited with the error message: Caddyfile via flag: open /etc/coredns/Corefile: no such file or directory
What you expected to happen:
CoreDNS starts normally with the backed-up Corefile.
How to reproduce it (as minimally and precisely as possible):
Deploy a k8s 1.17 cluster and trigger a CoreDNS Corefile migration in a way that makes it fail. In our case this happened in a three-master setup using the 'old' method of running 'kubeadm init' on each master. In addition, the CoreDNS docker image in use was tagged '1.6.7-1', which apparently caused the Corefile migration version check to fail.
Anything else we need to know?:
This seems to be caused by
kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go
Line 453 in 39ed64e
Both 'key' and 'path' are set to 'Corefile-backup' when this function is called. Shouldn't 'path' always stay 'Corefile', since that is the filename the container expects to find the config file under?
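If that diagnosis is right, the fix would be to remount the backup ConfigMap key under the filename the container actually reads. A minimal sketch of the intended ConfigMap volume entry in the coredns Deployment (the ConfigMap and volume names here follow the stock coredns manifest, but treat them as assumptions):

```yaml
# Sketch only: mount the backed-up key under the expected filename.
volumes:
  - name: config-volume
    configMap:
      name: coredns
      items:
        - key: Corefile-backup   # ConfigMap key holding the backed-up config
          path: Corefile         # must stay "Corefile": CoreDNS reads /etc/coredns/Corefile
```

With 'path' left as 'Corefile-backup', the file is mounted as /etc/coredns/Corefile-backup, and CoreDNS fails with exactly the "open /etc/coredns/Corefile: no such file or directory" error reported above.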
Environment:
Kubernetes version (use
kubectl version
):Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-16T17:51:57Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-16T17:47:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration:
Custom deployment in Openstack environment
OS (e.g:
cat /etc/os-release
):NAME="SLES"
VERSION="15-SP1"
VERSION_ID="15.1"
PRETTY_NAME="SUSE Linux Enterprise Server 15 SP1"
Kernel (e.g.
uname -a
):4.12.14-197.29-default
Install tools:
kubeadm
Network plugin and version (if this is a network-related bug):
Others: