
kubeadm init failed, scheduler.conf file does not exist #2401

Closed
chenhong0129 opened this issue Mar 9, 2021 · 7 comments
Labels
triage/needs-information Indicates an issue needs more information in order to work on it.

Comments


chenhong0129 commented Mar 9, 2021

Versions

kubeadm version (use kubeadm version): v1.15.12

Environment:

  • Kubernetes version (use kubectl version): v1.15.12
  • Cloud provider or hardware configuration: CAS 8C 32G
  • OS (e.g. from /etc/os-release): CentOS Linux release 7.6.1810 (Core)
  • Kernel (e.g. uname -a): 3.10.0-957.1.3.el7.x86_64
  • Others: Docker v18.09.6

What happened?

“kubeadm init --config=./kubeadm_config.yml --ignore-preflight-errors all”
When I execute the above command to create a cluster, it fails with the following output:

  • kubeadm init --config=./kubeadm_config.yml --ignore-preflight-errors all --skip-phases addon/coredns
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [WARNING Hostname]: hostname "centosmaster03" could not be reached
    [WARNING Hostname]: hostname "centosmaster03": lookup centosmaster03 on [::1]:53: read udp [::1]:57980->[::1]:53: read: connection refused
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Using existing ca certificate authority
    [certs] Using existing apiserver certificate and key on disk
    [certs] Using existing apiserver-kubelet-client certificate and key on disk
    [certs] Using existing front-proxy-ca certificate authority
    [certs] Using existing front-proxy-client certificate and key on disk
    [certs] Using existing etcd/ca certificate authority
    [certs] Using existing etcd/server certificate and key on disk
    [certs] Using existing etcd/peer certificate and key on disk
    [certs] Using existing etcd/healthcheck-client certificate and key on disk
    [certs] Using existing apiserver-etcd-client certificate and key on disk
    [certs] Using the existing "sa" key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
    [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x126499b]

goroutine 1 [running]:
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.validateKubeConfig(0x1885697, 0xf, 0x189069f, 0x17, 0xc0002c6300, 0xc000160800, 0xc000161800)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:227 +0x19b
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.createKubeConfigFileIfNotExists(0x1885697, 0xf, 0x189069f, 0x17, 0xc0002c6300, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:248 +0x108
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.createKubeConfigFiles(0x1885697, 0xf, 0xc00024e000, 0xc00050da40, 0x1, 0x1, 0x1939c88, 0xc00050da60)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:107 +0x142
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.CreateKubeConfigFile(0x189069f, 0x17, 0x1885697, 0xf, 0xc00024e000, 0x1, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:80 +0xe2
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runKubeConfigFile.func1(0x17fce40, 0xc0001ca3f0, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/kubeconfig.go:143 +0x1b7
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1(0xc0005faf00, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235 +0x11a
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll(0xc000103b00, 0xc00050dc58, 0x0, 0x5)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:426 +0x6e
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run(0xc000103b00, 0xc0003b32c0, 0x0, 0x5, 0xc000515d08, 0x1)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:208 +0x14e
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1(0xc0004bc780, 0xc0003b32c0, 0x0, 0x5)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:144 +0x190
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0004bc780, 0xc0003b3270, 0x5, 0x5, 0xc0004bc780, 0xc0003b3270)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:760 +0x2ae
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0005a2000, 0xc00000e010, 0x1ae0b40, 0xc00000e018)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x2ec
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:794
k8s.io/kubernetes/cmd/kubeadm/app.Run(0x0, 0x1e3)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50 +0x210
main.main()
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:29 +0x33

  • kube::check_cmd_result 2 'init kubeadm failed'

What you expected to happen?

kubeadm init to succeed, or an explanation of why this happens.

How to reproduce it (as minimally and precisely as possible)?

If you execute it again, it succeeds; the failure is occasional.

Anything else we need to know?

@fabriziopandini
Member

/triage need-information

I agree that kubeadm should not panic, but there is something strange here: it seems that kubeadm was executed twice on this machine (or most of the files were already in place), and the error was probably in processing the existing scheduler.conf file.
@chenhong0129 could you provide instructions on how to recreate the error?

Also, let me suggest avoiding --ignore-preflight-errors all; it is bad practice and exposes you to several problems that are hard to detect afterwards.
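
For illustration, here is a minimal sketch (not kubeadm's actual code) of how an existing but empty or truncated kubeconfig such as /etc/kubernetes/scheduler.conf can trigger exactly this class of nil-pointer panic when its current context is dereferenced without a guard; it assumes k8s.io/client-go is available on the module path:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Loading an empty or truncated file does not fail: it returns a
        // Config with empty maps and an empty CurrentContext.
        config, err := clientcmd.LoadFromFile("/etc/kubernetes/scheduler.conf")
        if err != nil {
            fmt.Println("load error:", err)
            return
        }

        // Contexts is a map[string]*Context, so a missing key yields nil.
        ctx := config.Contexts[config.CurrentContext]

        // Dereferencing ctx without a nil check panics with
        // "invalid memory address or nil pointer dereference",
        // the same failure class seen in the trace above.
        fmt.Println("cluster of current context:", ctx.Cluster)
    }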

@k8s-ci-robot
Contributor

@fabriziopandini: The label(s) triage/need-information cannot be applied, because the repository doesn't have them.

In response to this:

/triage need-information



@fabriziopandini fabriziopandini added the triage/needs-information Indicates an issue needs more information in order to work on it. label Mar 9, 2021
@neolit123
Member

this panic was fixed some time ago.
please try 1.18+ and verify if the same problem happens.

until then i'm going to close this.
thanks
/close

@k8s-ci-robot
Contributor

@neolit123: Closing this issue.

In response to this:

/close


@chenhong0129
Author


Could you please provide the issue ID for this panic? I need to report it to my manager. Thank you very much.

@neolit123
Member

i think this was the fix:
kubernetes/kubernetes#79165
it was added in 1.16.
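
For reference, the kind of guard such a fix typically introduces looks roughly like the sketch below (illustrative only, not kubeadm's real validateKubeConfig; see the linked PR for the actual change): verify that the current context and its cluster entry exist before dereferencing them, and return an error instead of panicking.

    package main

    import (
        "fmt"

        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // validateExistingKubeConfig is a hypothetical helper: it checks that the
    // current context and its cluster entry exist before they are used, so an
    // invalid existing file produces an error rather than a panic.
    func validateExistingKubeConfig(config *clientcmdapi.Config) error {
        currentContext, ok := config.Contexts[config.CurrentContext]
        if !ok || currentContext == nil {
            return fmt.Errorf("kubeconfig has no context named %q", config.CurrentContext)
        }
        cluster, ok := config.Clusters[currentContext.Cluster]
        if !ok || cluster == nil {
            return fmt.Errorf("kubeconfig has no cluster entry named %q", currentContext.Cluster)
        }
        return nil
    }

    func main() {
        // An empty Config models an existing but blank scheduler.conf.
        if err := validateExistingKubeConfig(clientcmdapi.NewConfig()); err != nil {
            fmt.Println("invalid kubeconfig:", err) // reported instead of panicking
        }
    }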

@chenhong0129
Author


Thanks!
