OS (e.g. from /etc/os-release): Ubuntu 16.04.1 LTS
Kernel (e.g. uname -a): Linux 4.4.0-45-generic
What happened?
kubeadm init stalled at "[init] This might take a minute or longer if the control plane images have to be pulled.". docker ps showed that kube-apiserver could not start, and the kubelet.service logs confirmed that no connection could be made to the API.
What you expected to happen?
I expected the cluster to initialize correctly, as described in the documentation.
How to reproduce it (as minimally and precisely as possible)?
The issue stems from using the following addition in the kubeadm configuration:
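A minimal sketch of such an addition, with the encryption config placed at an assumed path outside the directories that kubeadm mounts into the API server container (the v1alpha1 field names are from kubeadm 1.9; the exact path is illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
apiServerExtraArgs:
  # path assumed for illustration; any location outside the
  # automatically mounted directories reproduces the failure
  experimental-encryption-provider-config: /etc/kubernetes/encryption.yaml
```

Because /etc/kubernetes itself is not mounted into the kube-apiserver container, the API server cannot read the referenced file and fails to start.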
Only the following directories are (automatically) mounted in the static pod manifest kube-apiserver.yaml:
/etc/kubernetes/pki
/etc/ssl/certs
/etc/kubernetes/cloud-config (file)
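These mounts appear in the generated static pod manifest as hostPath volumes; a sketch of the relevant excerpt (volume names and exact layout vary by kubeadm version):

```yaml
# excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml (abridged)
volumeMounts:
- mountPath: /etc/kubernetes/pki
  name: k8s-certs
  readOnly: true
- mountPath: /etc/ssl/certs
  name: ca-certs
  readOnly: true
volumes:
- hostPath:
    path: /etc/kubernetes/pki
  name: k8s-certs
- hostPath:
    path: /etc/ssl/certs
  name: ca-certs
```

Any file path passed to the API server must resolve under one of these mounts, since kube-apiserver runs against the container's filesystem, not the host's.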
Moving the encryption file to /etc/kubernetes/pki/encryption.yaml and updating the configuration solved the issue.
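With the file under a mounted directory, the same extra argument works; a sketch assuming the same v1alpha1 field names:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
apiServerExtraArgs:
  # /etc/kubernetes/pki is one of the automatically mounted directories
  experimental-encryption-provider-config: /etc/kubernetes/pki/encryption.yaml
```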
Anything else we need to know?
Supporting every "potential file/directory" argument in apiServerExtraArgs is probably infeasible, but it would be nice if the documentation mentioned this pitfall ("Do not put files outside these directories if kube-apiserver needs access to them").
It would also be nice to know the best practice for this case: since kubeadm reset removes all files in /etc/kubernetes/pki, it is not necessarily the ideal place for config files you want to keep between configuration attempts.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Versions

kubeadm version (use kubeadm version):

Environment:
- Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: azure
- OS (e.g. from /etc/os-release): Ubuntu 16.04.1 LTS
- Kernel (e.g. uname -a): Linux 4.4.0-45-generic