
kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory #4

Closed
kamal2222ahmed opened this issue Apr 25, 2019 · 4 comments

Comments

@kamal2222ahmed

Describe the bug

==> kmaster: Running provisioner: shell...
kmaster: Running: /var/folders/jr/kc1rdmj10jb4p1hrw77zttq00000gn/T/vagrant-shell20190425-22062-18oa9b9.sh
kmaster: [TASK 1] Update /etc/hosts file
kmaster: [TASK 2] Install docker container engine
kmaster: [TASK 3] Enable and start docker service
kmaster: [TASK 4] Disable SELinux
kmaster: [TASK 5] Stop and Disable firewalld
kmaster: [TASK 6] Add sysctl settings
kmaster: [TASK 7] Disable and turn off SWAP
kmaster: [TASK 8] Add yum repo file for kubernetes
kmaster: [TASK 9] Install Kubernetes (kubeadm, kubelet and kubectl)
kmaster: [TASK 10] Enable and start kubelet service
kmaster: [TASK 11] Enable ssh password authentication
kmaster: [TASK 12] Set root password
==> kmaster: Running provisioner: shell...
kmaster: Running: /var/folders/jr/kc1rdmj10jb4p1hrw77zttq00000gn/T/vagrant-shell20190425-22062-jr3ng8.sh
kmaster: [TASK 1] Initialize Kubernetes Cluster
kmaster: [TASK 2] Copy kube admin config to Vagrant user .kube directory
kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory
kmaster: [TASK 3] Deploy flannel network
kmaster: -bash: kubectl: command not found
kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
kmaster: /tmp/vagrant-shell: line 19: kubeadm: command not found
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
How To Reproduce

vagrant up

Environment (please complete the following information):

Mac/vagrant/virtualbox


@justmeandopensource
Owner

@kamal2222ahmed
I just cloned this repo and tested. It worked perfectly fine on my Linux workstation.
In the shell provisioning script, I have redirected the output of individual commands to /dev/null.
You could delete those redirection and see what actually is going on.

Edit the below two files

  • bootstrap.sh
  • bootstrap_kmaster.sh

Remove ">/dev/null 2>&1" from every line where it appears in these two files, then run vagrant up again. This time you will see the full output of each command during provisioning, which will help you identify where it is failing.
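Those redirections can also be stripped with a single sed expression instead of editing by hand. A minimal sketch, demonstrated on one sample line (for the actual scripts you would apply the same expression in place with sed -i.bak):

```shell
# Strip the ">/dev/null 2>&1" redirection so command output becomes visible.
# Demonstrated on a sample line; for the real scripts, run e.g.:
#   sed -i.bak 's| *>/dev/null 2>&1||g' bootstrap.sh bootstrap_kmaster.sh
echo 'yum install -y -q docker >/dev/null 2>&1' \
  | sed 's| *>/dev/null 2>&1||g'
```

The -i.bak variant keeps a backup copy of each original script, so the change is easy to revert after debugging.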

@taohexxx

I can reproduce this issue. It's caused by a kubeadm init failure.

$ kubeadm init --apiserver-advertise-address=172.42.42.100 --pod-network-cidr=192.168.0.0/16
W0224 15:28:18.634562   31429 validation.go:28] Cannot validate kubelet config - no validator is available
W0224 15:28:18.634619   31429 validation.go:28] Cannot validate kube-proxy config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.17.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.17.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.17.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.17.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

In some countries (e.g. China), Google services are unavailable. Using kubeadm init --image-repository=registry.aliyuncs.com/google_containers in bootstrap*.sh solved it.
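Concretely, the kubeadm init line in bootstrap_kmaster.sh would become something like this (the advertise-address and CIDR flags are taken from the log above; only --image-repository is added):

```shell
# kubeadm init with a mirror registry instead of the default k8s.gcr.io;
# requires root on the kmaster VM, so it belongs in the provisioning script.
kubeadm init --apiserver-advertise-address=172.42.42.100 \
  --pod-network-cidr=192.168.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers
```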

I also had to use this repo instead of Google's in bootstrap.sh:

cat >/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
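With the mirror configured, connectivity can be checked before re-running vagrant up by pre-pulling the images, as the preflight output above itself suggests ("kubeadm config images pull"):

```shell
# Pre-pull the control-plane images from the mirror; if this succeeds,
# kubeadm init should no longer fail at the ImagePull preflight check.
kubeadm config images pull \
  --image-repository=registry.aliyuncs.com/google_containers
```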

@velan-techlab

The same bug happened:
kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory

It is fixed by changing the Kubernetes version in bootstrap.sh:
yum install -y -q kubeadm-1.18.0 kubelet-1.18.0 kubectl-1.18.0

@satya666-cyber

kmaster: cp: cannot stat ‘/etc/kubernetes/admin.conf’: No such file or directory

How can I solve this?
