kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory #4
Comments
@kamal2222ahmed Edit the two provisioning scripts: remove ">/dev/null 2>&1" from every line where it appears, then run vagrant up again. This time you will see the output of each command during provisioning, which will help you identify where it is failing.
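The suggestion above can be scripted. A minimal sketch, assuming the provisioning scripts are named bootstrap.sh and bootstrap_kmaster.sh (adjust the names to match your clone):

```shell
# Strip the output-suppressing redirections from the provisioning
# scripts so each command's output is visible on the next `vagrant up`.
# File names below are assumptions; edit to match the actual repo.
for f in bootstrap.sh bootstrap_kmaster.sh; do
  if [ -f "$f" ]; then
    # -i.bak edits in place and keeps a backup copy with a .bak suffix
    sed -i.bak 's| >/dev/null 2>&1||g' "$f"
  fi
done
```

After the scripts are verbose, destroy and re-provision the VMs (`vagrant destroy -f && vagrant up`) and read the output to find the first failing step.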
I can reproduce this issue. It is caused by the image pulls timing out during this step:
$ kubeadm init --apiserver-advertise-address=172.42.42.100 --pod-network-cidr=192.168.0.0/16
W0224 15:28:18.634562 31429 validation.go:28] Cannot validate kubelet config - no validator is available
W0224 15:28:18.634619 31429 validation.go:28] Cannot validate kube-proxy config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.17.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.17.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.17.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.17.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
In some countries (e.g. China), Google services are unreachable, so these image pulls fail. I also have to use this repo instead of Google's:
cat >>/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
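The yum mirror above only fixes the package installs; the kubeadm preflight image pulls still go to k8s.gcr.io. A sketch of redirecting those as well, assuming kubeadm v1.17's v1beta2 config API and Aliyun's image mirror (the registry name is an assumption; verify it serves the tags you need):

```yaml
# kubeadm.yaml -- pass with: kubeadm init --config kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.3
imageRepository: registry.aliyuncs.com/google_containers  # mirror of k8s.gcr.io
networking:
  podSubnet: 192.168.0.0/16
```

Equivalently, add --image-repository registry.aliyuncs.com/google_containers to the existing kubeadm init command line.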
The same bug happened to me; it was fixed by changing the Kubernetes version in bootstrap.sh.
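The version change mentioned above could look like this. A hypothetical sketch: both the original install line and the pinned release are assumptions, so adjust both to match the actual script and an available package version:

```shell
# Pin the Kubernetes packages in bootstrap.sh to a known-good release
# instead of installing whatever is latest. The install line being
# replaced and the 1.17.3 version are assumptions.
if [ -f bootstrap.sh ]; then
  sed -i.bak \
    's/yum install -y kubeadm kubelet kubectl/yum install -y kubeadm-1.17.3 kubelet-1.17.3 kubectl-1.17.3/' \
    bootstrap.sh
fi
```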
How do I solve this?
Describe the bug
==> kmaster: Running provisioner: shell...
kmaster: Running: /var/folders/jr/kc1rdmj10jb4p1hrw77zttq00000gn/T/vagrant-shell20190425-22062-18oa9b9.sh
kmaster: [TASK 1] Update /etc/hosts file
kmaster: [TASK 2] Install docker container engine
kmaster: [TASK 3] Enable and start docker service
kmaster: [TASK 4] Disable SELinux
kmaster: [TASK 5] Stop and Disable firewalld
kmaster: [TASK 6] Add sysctl settings
kmaster: [TASK 7] Disable and turn off SWAP
kmaster: [TASK 8] Add yum repo file for kubernetes
kmaster: [TASK 9] Install Kubernetes (kubeadm, kubelet and kubectl)
kmaster: [TASK 10] Enable and start kubelet service
kmaster: [TASK 11] Enable ssh password authentication
kmaster: [TASK 12] Set root password
==> kmaster: Running provisioner: shell...
kmaster: Running: /var/folders/jr/kc1rdmj10jb4p1hrw77zttq00000gn/T/vagrant-shell20190425-22062-jr3ng8.sh
kmaster: [TASK 1] Initialize Kubernetes Cluster
kmaster: [TASK 2] Copy kube admin config to Vagrant user .kube directory
kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory
kmaster: [TASK 3] Deploy flannel network
kmaster: -bash: kubectl: command not found
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
How To Reproduce
vagrant up
Environment (please complete the following information):
Mac/vagrant/virtualbox