
Coredns fails to start if Corefile migration fails #88725

Closed
anttitapio opened this issue Mar 2, 2020 · 3 comments · Fixed by #88811
Assignees
Labels
area/kubeadm kind/bug Categorizes issue or PR as related to a bug. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.

Comments

@anttitapio

What happened:
Coredns failed to start in a 1.17 deployment when it tried to load the backed-up Corefile, exiting with the error message: `Caddyfile via flag: open /etc/coredns/Corefile: no such file or directory`

What you expected to happen:
Coredns starts normally with the backed-up Corefile

How to reproduce it (as minimally and precisely as possible):
Deploy a k8s 1.17 cluster, trigger a coredns Corefile migration, and make the migration fail. In our case this was triggered in a three-master setup using the 'old' method of running 'kubeadm init' on each master. In addition, the coredns docker image in use was tagged '1.6.7-1', which apparently caused the Corefile migration version check to fail.

Anything else we need to know?:
This seems to be caused by

patch := fmt.Sprintf(`{"spec":{"template":{"spec":{"volumes":[{"name": "config-volume", "configMap":{"name": "coredns", "items":[{"key": "%s", "path": "%s"}]}}]}}}}`, coreDNSCorefileName, coreDNSCorefileName)

Both 'key' and 'path' are set to 'Corefile-backup' when this function is called. Should the 'path' always stay as 'Corefile' because that is where the container expects to find the config file?

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-16T17:51:57Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-16T17:47:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration:
Custom deployment in an OpenStack environment

  • OS (e.g: cat /etc/os-release):
    NAME="SLES"
    VERSION="15-SP1"
    VERSION_ID="15.1"
    PRETTY_NAME="SUSE Linux Enterprise Server 15 SP1"

  • Kernel (e.g. uname -a):
    4.12.14-197.29-default

  • Install tools:
    kubeadm

  • Network plugin and version (if this is a network-related bug):

  • Others:

@anttitapio anttitapio added the kind/bug Categorizes issue or PR as related to a bug. label Mar 2, 2020
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Mar 2, 2020
@anttitapio
Author

/sig cluster-lifecycle

@k8s-ci-robot k8s-ci-robot added sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Mar 2, 2020
@rajansandeep
Contributor

/assign

@neolit123
Member

/area kubeadm
