kubeadm init failed: scheduler.conf file does not exist #2401
Comments
/triage needs-information
I agree that kubeadm should not panic, but something strange is going on here: it seems that kubeadm was executed twice on this machine (or the majority of the files were already in place), and the error probably occurred while processing the existing scheduler.conf file.
This panic was fixed some time ago, so I'm going to close this.
@neolit123: Closing this issue.
Could you please provide the issue ID of this panic? I need to report it to my lead. Thank you very much.
I think this was the fix:
Thanks! |
Versions
kubeadm version (use "kubeadm version"): v1.15.12
Environment:
kubectl version: v1.15.12
uname -a:
What happened?
"kubeadm init --config=./kubeadm_config.yml --ignore-preflight-errors all"
When I executed the above command to create a cluster, it failed with the following output:
[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[WARNING Hostname]: hostname "centosmaster03" could not be reached
[WARNING Hostname]: hostname "centosmaster03": lookup centosmaster03 on [::1]:53: read udp [::1]:57980->[::1]:53: read: connection refused
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x126499b]
goroutine 1 [running]:
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.validateKubeConfig(0x1885697, 0xf, 0x189069f, 0x17, 0xc0002c6300, 0xc000160800, 0xc000161800)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:227 +0x19b
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.createKubeConfigFileIfNotExists(0x1885697, 0xf, 0x189069f, 0x17, 0xc0002c6300, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:248 +0x108
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.createKubeConfigFiles(0x1885697, 0xf, 0xc00024e000, 0xc00050da40, 0x1, 0x1, 0x1939c88, 0xc00050da60)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:107 +0x142
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.CreateKubeConfigFile(0x189069f, 0x17, 0x1885697, 0xf, 0xc00024e000, 0x1, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:80 +0xe2
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runKubeConfigFile.func1(0x17fce40, 0xc0001ca3f0, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/kubeconfig.go:143 +0x1b7
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1(0xc0005faf00, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235 +0x11a
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll(0xc000103b00, 0xc00050dc58, 0x0, 0x5)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:426 +0x6e
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run(0xc000103b00, 0xc0003b32c0, 0x0, 0x5, 0xc000515d08, 0x1)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:208 +0x14e
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1(0xc0004bc780, 0xc0003b32c0, 0x0, 0x5)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:144 +0x190
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0004bc780, 0xc0003b3270, 0x5, 0x5, 0xc0004bc780, 0xc0003b3270)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:760 +0x2ae
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0005a2000, 0xc00000e010, 0x1ae0b40, 0xc00000e018)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x2ec
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:794
k8s.io/kubernetes/cmd/kubeadm/app.Run(0x0, 0x1e3)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50 +0x210
main.main()
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:29 +0x33
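The trace points at validateKubeConfig (kubeconfig.go:227), and the SIGSEGV at a small offset address (0x18) is consistent with dereferencing a nil pointer obtained from a map lookup, e.g. a cluster entry missing from a partially written kubeconfig. The sketch below is an assumption about the failure mode, using simplified stand-in types (not the real clientcmd api.Config), and shows the defensive nil check that turns the panic into a normal error:

```go
package main

import "fmt"

// Simplified stand-ins for kubeconfig types; names are illustrative only.
type Cluster struct {
	Server string
}

type Config struct {
	CurrentContext string
	Contexts       map[string]string   // context name -> cluster name
	Clusters       map[string]*Cluster // cluster name -> cluster entry
}

// validateServer mimics the suspected crash pattern: a map lookup for a
// cluster entry that is absent returns nil, and dereferencing it without
// a check panics with "invalid memory address or nil pointer dereference".
// The nil check below is the kind of guard that avoids the panic.
func validateServer(cfg *Config) (string, error) {
	clusterName := cfg.Contexts[cfg.CurrentContext]
	cluster := cfg.Clusters[clusterName] // nil if the entry is missing
	if cluster == nil {
		return "", fmt.Errorf("no cluster %q found in kubeconfig", clusterName)
	}
	return cluster.Server, nil
}

func main() {
	// A half-written kubeconfig: the context exists, the cluster entry does not.
	cfg := &Config{
		CurrentContext: "system:kube-scheduler@kubernetes",
		Contexts:       map[string]string{"system:kube-scheduler@kubernetes": "kubernetes"},
		Clusters:       map[string]*Cluster{}, // empty: entry never got written
	}
	if _, err := validateServer(cfg); err != nil {
		fmt.Println("error instead of panic:", err)
	}
}
```

With the guard in place, a truncated or incomplete kubeconfig produces a readable validation error instead of crashing the whole init workflow.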
What you expected to happen?
kubeadm init to succeed, or an explanation of why this happens.
How to reproduce it (as minimally and precisely as possible)?
If you execute the same command again, it succeeds. The failure is only occasional.
Anything else we need to know?