
Marking the master by adding the taints 'error marking master: timed out waiting for the condition' #1227

Closed
joshuacox opened this issue Nov 11, 2018 · 44 comments
Labels: kind/bug

joshuacox commented Nov 11, 2018

What keywords did you search in kubeadm issues before filing this one?

error marking master: timed out waiting for the condition

#1092

#937

#1087

#715

kubernetes/kubernetes#45727

Is this a BUG REPORT or FEATURE REQUEST?

/kind bug

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:51:33Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):
kubectl version --kubeconfig=/etc/kubernetes/admin.conf
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:

ubuntu xenial on baremetal

  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • Kernel (e.g. uname -a):
Linux testymaster1 4.4.0-131-generic #157-Ubuntu SMP Thu Jul 12 15:51:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Others:

What happened?

kubeadm init --config /etc/kubernetes/kubeadmcfg.yaml
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [testymaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.23.238 10.0.23.238 127.0.0.1 10.0.23.241 10.0.23.242 10.0.23.243 10.0.23.238 10.0.23.239 10.0.23.240 10.0.23.244 10.0.23.245 10.0.23.246]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"                                                                                                                      
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"                                                                                                    
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"                                                                                                                      
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 22.506258 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster                                                                                                          
[markmaster] Marking the node testymaster1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node testymaster1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition

What you expected to happen?

Master to initialize without issue.

How to reproduce it (as minimally and precisely as possible)?

https://gist.github.com/joshuacox/4505fbeceb2e394900a24c3cae14131c

run the above like so:

bash etcd-test6.sh 10.0.0.6 10.0.0.7 10.0.0.8

at this point you should have a healthy etcd cluster running on three hosts

then on a separate host (10.0.0.9) run the steps detailed here:

https://kubernetes.io/docs/setup/independent/high-availability/#external-etcd

with this config:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerCertSANs:
- "127.0.0.1"
- '10.0.0.6'
- '10.0.0.7'
- '10.0.0.8'
- '10.0.0.9'
controlPlaneEndpoint: "10.0.0.9"
etcd:
  external:
      endpoints:
      - https://10.0.0.6:2379
      - https://10.0.0.7:2379
      - https://10.0.0.8:2379
      caFile: /etc/kubernetes/pki/etcd/ca.crt
      certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
      keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: 10.244.0.0/16
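
Before kicking off kubeadm init it is worth confirming that every external etcd endpoint in that config is actually reachable from the prospective master with the client certs it references; a minimal sketch (assuming the cert paths above are already in place on the master):

# check each external etcd endpoint from the master-to-be, using the same
# CA/cert/key the config above hands to the apiserver
for ep in https://10.0.0.6:2379 https://10.0.0.7:2379 https://10.0.0.8:2379; do
  echo "== $ep"
  curl --cacert /etc/kubernetes/pki/etcd/ca.crt \
       --cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
       --key /etc/kubernetes/pki/apiserver-etcd-client.key \
       "$ep/health"
  echo
done
# each endpoint should answer with {"health": "true"}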

Anything else we need to know?

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
cbc9036b0675        51a9c329b7c5           "kube-apiserver --..."   23 minutes ago      Up 23 minutes                           k8s_kube-apiserver_kube-apiserver-testymaster1_kube-system_c55b3dd53dd51e69d2acd3a6aa486e32_0
aeebe73a2c98        d6d57c76136c           "kube-scheduler --..."   23 minutes ago      Up 23 minutes                           k8s_kube-scheduler_kube-scheduler-testymaster1_kube-system_ee7b1077c61516320f4273309e9b4690_0
58fc131c3b50        15548c720a70           "kube-controller-m..."   23 minutes ago      Up 23 minutes                           k8s_kube-controller-manager_kube-controller-manager-testymaster1_kube-system_690790d9ba49d9118c24c004854af4db_0
4f628d299b8e        k8s.gcr.io/pause:3.1   "/pause"                 23 minutes ago      Up 23 minutes                           k8s_POD_kube-scheduler-testymaster1_kube-system_ee7b1077c61516320f4273309e9b4690_0
2fe08cdd58c9        k8s.gcr.io/pause:3.1   "/pause"                 23 minutes ago      Up 23 minutes                           k8s_POD_kube-controller-manager-testymaster1_kube-system_690790d9ba49d9118c24c004854af4db_0
85638811980c        k8s.gcr.io/pause:3.1   "/pause"                 23 minutes ago      Up 23 minutes                           k8s_POD_kube-apiserver-testymaster1_kube-system_c55b3dd53dd51e69d2acd3a6aa486e32_0
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf, 20-etcd-service-manager.conf
   Active: active (running) since Sun 2018-11-11 20:09:14 UTC; 19min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 4731 (kubelet)
    Tasks: 60
   Memory: 40.2M
      CPU: 59.614s
   CGroup: /system.slice/kubelet.service
           └─4731 /usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true

Nov 11 20:27:54 testymaster1 kubelet[4731]: I1111 20:27:54.434903    4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:27:58 testymaster1 kubelet[4731]: I1111 20:27:58.434922    4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:00 testymaster1 kubelet[4731]: I1111 20:28:00.737709    4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:10 testymaster1 kubelet[4731]: I1111 20:28:10.788482    4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:18 testymaster1 kubelet[4731]: I1111 20:28:18.434933    4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:20 testymaster1 kubelet[4731]: I1111 20:28:20.828593    4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:30 testymaster1 kubelet[4731]: I1111 20:28:30.877710    4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:40 testymaster1 kubelet[4731]: I1111 20:28:40.924675    4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:28:50 testymaster1 kubelet[4731]: I1111 20:28:50.974638    4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Nov 11 20:29:01 testymaster1 kubelet[4731]: I1111 20:29:01.024980    4731 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach

journalctl -xeu kubelet

https://gist.github.com/joshuacox/3c0b4aa2b66d1172067a32e6e064f948

docker logs cbc9036b0675 the kube api container logs:

https://gist.github.com/joshuacox/ab29412c1653e2b1fd2fa06cdd0ae2e2
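
For context, the [markmaster] step is just kubeadm repeatedly trying to patch the Node object with the master label and taint until it times out, so a quick way to see what it is stuck on (a sketch, reusing the hostname and kubeconfig from the output above) is to watch the node from a second shell while init is running:

# does the node object exist / ever become Ready while kubeadm is waiting?
kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide
# what labels/taints/conditions does it actually have?
kubectl --kubeconfig=/etc/kubernetes/admin.conf describe node testymaster1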

@k8s-ci-robot added the kind/bug label on Nov 11, 2018
@timothysc
Member

/assign @timothysc

@timothysc
Member

/assign @rdodev @liztio

@joshuacox
Author

not to be distracting, but there is a page full of rabbit holes here as well

eventually, if I can get it all figured out, I'd like to rewrite a bit of the scripts and docs on this page and this one.

@omegazeng

omegazeng commented Nov 13, 2018

I fixed this problem by disabling etcd TLS.
cat kubeadm-config.yaml

    apiVersion: kubeadm.k8s.io/v1alpha3
    kind: ClusterConfiguration
    kubernetesVersion: stable
    apiServerCertSANs:
    - "10.20.0.13" # node 2 ip addr
    - "10.20.0.14" # node 3 ip addr
    controlPlaneEndpoint: "lb.xxx.yyy:6443"
    etcd:
        external:
            endpoints:
            - http://10.20.0.11:2379
            - http://10.20.0.13:2379
            - http://10.20.0.14:2379
              #caFile: /etc/kubernetes/pki/etcd/ca.crt
              #certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
              #keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
    networking:
        # This CIDR is a calico default. Substitute or remove for your CNI provider.
        podSubnet: "192.168.0.0/16"

docker 18.06.1-ce
k8s v1.12.2

@rdodev

rdodev commented Nov 13, 2018

@joshuacox a long shot, but can you try explicitly adding the port to the endpoint in the ClusterConfig?

controlPlaneEndpoint: "10.0.0.9:PORT"

Or alternatively, if you're using 1.12, try InitConfig+ClusterConfig. For example:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: PUBLICIP
  bindPort: PORT
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: HOSTNAME
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerCertSANs:
 # all the relevant SAN hosts here
certificatesDir: /etc/kubernetes/pki
clusterName: CLUSTER_NAME
controlPlaneEndpoint: ""
etcd:
##etcd config here
kubernetesVersion: KUBE_VERSION
networking:
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12

@joshuacox
Author

@rdodev which port? 6443?

And for the InitConfig+ClusterConfig, is that all in one file, e.g. /etc/kubernetes/kubeadmcfg.yaml? And does that go on the masters or the etcd hosts? Or perhaps just the initial master?

@rdodev

rdodev commented Nov 13, 2018

Hey @joshuacox

Yes. First, to triage with minimal changes, just add :6443 to that config param and run kubeadm init.

If that still doesn't work, then yes: take that snippet, fill in the pertinent variables, put it all in one kubeadm-config.yaml, and then run kubeadm init --config pointing to it.

@joshuacox
Author

while I haven't duplicated the entire run from baremetal yet, I can quickly provision a new cluster on KVM hosts.

cat /etc/kubernetes/kubeadmcfg.yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerCertSANs:
- "127.0.0.1"
- '10.0.23.214'
- '10.0.23.219'
- '10.0.23.215'
- '10.0.23.210'
- '10.0.23.211'
- '10.0.23.212'
- '10.0.23.216'
- '10.0.23.217'
- '10.0.23.218'
controlPlaneEndpoint: "10.0.23.210:6443"
etcd:
  external:
      endpoints:
      - https://10.0.23.214:2379
      - https://10.0.23.219:2379
      - https://10.0.23.215:2379
      caFile: /etc/kubernetes/pki/etcd/ca.crt
      certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
      keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: 10.244.0.0/16
kubeadm init --config /etc/kubernetes/kubeadmcfg.yaml
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [extetcdmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.23.210 10.0.23.210 127.0.0.1 10.0.23.214 10.0.23.219 10.0.23.215 10.0.23.210 10.0.23.211 10.0.23.212 10.0.23.216 10.0.23.217 10.0.23.218]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"                                                                                                                      
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"                                                                                                    
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"                                                                                                                      
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 21.507034 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster                                                                                                          
[markmaster] Marking the node extetcdmaster1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node extetcdmaster1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]                                                                                                                       
error marking master: timed out waiting for the condition

I'll try with the init_cluster stuff next.

@joshuacox
Author

same results with the init_cluster config

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: 10.0.23.210
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: extetcdetcd1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerCertSANs:
- "127.0.0.1"
- '10.0.23.214'
- '10.0.23.219'
- '10.0.23.215'
- '10.0.23.210'
- '10.0.23.211'
- '10.0.23.212'
- '10.0.23.216'
- '10.0.23.217'
- '10.0.23.218'
controlPlaneEndpoint: "10.0.23.210:6443"
etcd:
  external:
      endpoints:
      - https://10.0.23.214:2379
      - https://10.0.23.219:2379
      - https://10.0.23.215:2379
      caFile: /etc/kubernetes/pki/etcd/ca.crt
      certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
      keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: 10.244.0.0/16
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
        [WARNING Hostname]: hostname "extetcdetcd1" could not be reached
        [WARNING Hostname]: hostname "extetcdetcd1" lookup extetcdetcd1 on 10.0.23.1:53: no such host
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [extetcdetcd1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.23.210 10.0.23.210 127.0.0.1 
10.0.23.214 10.0.23.219 10.0.23.215 10.0.23.210 10.0.23.211 10.0.23.212 10.0.23.216 10.0.23.217 10.0.23.218]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 22.006916 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node extetcdetcd1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node extetcdetcd1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition

@rdodev

rdodev commented Nov 13, 2018

@joshuacox however the warnings do reveal:

        [WARNING Hostname]: hostname "extetcdetcd1" could not be reached
        [WARNING Hostname]: hostname "extetcdetcd1" lookup extetcdetcd1 on 10.0.23.1:53: no such host

Which would explain the inability to bootstrap the master.
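
If that name is supposed to resolve to the node itself, one quick local workaround is to pin it in /etc/hosts before re-running init; a sketch only (the address is the advertiseAddress from the config, and whether the preflight lookup consults /etc/hosts depends on the resolver in use):

# make the node's own hostname resolvable locally
echo "10.0.23.210 extetcdetcd1" >> /etc/hosts
getent hosts extetcdetcd1   # should now print the address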

@joshuacox
Author

@rdodev that is new with the init cluster config, but it might explain some of the successful clusters in the past: my Google Fiber router eventually learns the names of the VMs and will serve DNS for them once they have been around long enough for whatever event triggers the router to learn the name for a particular MAC address. Spawning a fresh cluster exposes that problem. I was under the impression that Kubernetes had its own internal DNS?

@rdodev

rdodev commented Nov 13, 2018

@joshuacox to clarify: so is the master in your home network and the etcd servers elsewhere? Perhaps I misunderstood the scenario.

@joshuacox
Author

@rdodev they are all VMs on my home network and can communicate with each other just fine; I'm still waiting on the Google router to learn the hostnames. I guess I need to set up an internal DNS server, or assign them publicly resolvable hostnames that point to internal addresses. But that seems excessive for just a test cluster.

@rdodev

rdodev commented Nov 13, 2018

@joshuacox instead of DNS for the etcd cluster, can you just use IPs?

@joshuacox
Author

@rdodev I'm not really certain where that is set. Is it the name: line in the init cluster config?

@joshuacox
Author

joshuacox commented Nov 13, 2018

@rdodev that was a mistake; the name should indeed have been extetcdmaster1, not extetcdetcd1. Correcting it still leads to the taint step failing:

[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [extetcdmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.23.210 10.0.23.210 127.0.0.1 10.0.23.214 10.0.23.219 10.0.23.215 10.0.23.210 10.0.23.211 10.0.23.212 10.0.23.216 10.0.23.217 10.0.23.218]                                                                                               
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"                                                                                            
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"                                                                          
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"                                                                                            
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"                                                                                               
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 24.005968 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace                                                                                                          
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster                                                                                
[markmaster] Marking the node extetcdmaster1 as master by adding the label "node-role.kubernetes.io/master=''"                                                                                                      
[markmaster] Marking the node extetcdmaster1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]                                                                                             
error marking master: timed out waiting for the condition

and the corrected config:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: 10.0.23.210
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: extetcdmaster1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerCertSANs:
- "127.0.0.1"
- '10.0.23.214'
- '10.0.23.219'
- '10.0.23.215'
- '10.0.23.210'
- '10.0.23.211'
- '10.0.23.212'
- '10.0.23.216'
- '10.0.23.217'
- '10.0.23.218'
controlPlaneEndpoint: "10.0.23.210:6443"
etcd:
  external:
      endpoints:
      - https://10.0.23.214:2379
      - https://10.0.23.219:2379
      - https://10.0.23.215:2379
      caFile: /etc/kubernetes/pki/etcd/ca.crt
      certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
      keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: 10.244.0.0/16

@rdodev

rdodev commented Nov 13, 2018

@joshuacox it isn't clear to me from your original post: did you set up the external etcd cluster using these instructions? https://kubernetes.io/docs/setup/independent/setup-ha-etcd-with-kubeadm/

@joshuacox
Author

joshuacox commented Nov 13, 2018

yes I converted them into a single script:

https://gist.github.com/joshuacox/4505fbeceb2e394900a24c3cae14131c

in addition to that I am integrating them into kubash, of which I have a branch here

Both allow me to repeat the entire procedure pretty quickly with something like:

kubash yaml2cluster -n testy ~/.kubash/examples/testy-cluster.yaml && kubash -n testy -y provision && kubash -n testy --verbosity=105 etcd_ext

or instead of the last step, using the smaller bash script:

tar zcf - scripts/etcd-test.sh|  ssh root@10.0.0.6 'tar zxvf -;cd scripts; bash etcd-test.sh 10.0.0.6 10.0.0.7 10.0.0.8'

or for even less typing:

scripts/tester extetcd 

which will tear down the extetcd cluster, build it from scratch, and run the extetcd method.

@rdodev

rdodev commented Nov 13, 2018

@joshuacox Thanks a bunch for all the setup info. Let me look into this / try to repro, and I will get back to you.

@rdodev

rdodev commented Nov 14, 2018

@joshuacox are you in K8s Slack? Might be easier for quick comm.

@mcastelino

Not sure if this is helpful or related, but I ran into this same issue when using Kubernetes on Clear Linux with VMs created using virt-manager. The issue was that the hostname was not resolving.

nslookup myhostname would not resolve.

Adding the hosts to /etc/hosts and ensuring nsswitch.conf uses it did not help.

The DNS server (dnsmasq) that handles the VMs had to provide the resolution. Once I ensured that name resolution worked properly, by making sure the upstream DNS server resolved the hostnames, things started working.
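
For anyone hitting the same thing with a standalone dnsmasq in front of the VMs, the fix can be as small as adding a host record per node and restarting the service; an illustrative sketch only (the name, address, and drop-in path are placeholders rather than values from this issue, and libvirt's own dnsmasq is configured through the network XML instead):

# give the VM a fixed A record in the dnsmasq instance that serves the guests
echo "host-record=myhostname,192.168.122.10" >> /etc/dnsmasq.d/k8s-vms.conf
systemctl restart dnsmasq
nslookup myhostname   # should now resolve from inside the guests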

@joshuacox
Author

@mcastelino not entirely unrelated, especially given the discussion about hostnames. Of note, in my situation I am using bridged networking, so it is the router providing resolution in my home setup, not dnsmasq from KVM/libvirt/virt-manager.

@joshuacox
Author

Just to make sure that all the certs worked and the networking was good, I ran the docker etcdctl test command from the primary master that failed to mark itself:

root@extetcdmaster1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.0.23.214:2379 cluster-health
member 1a3ca09cf567d334 is healthy: got healthy result from https://10.0.23.215:2379
member 5a25d004511f496e is healthy: got healthy result from https://10.0.23.219:2379
member 9f536c972b739e17 is healthy: got healthy result from https://10.0.23.214:2379
cluster is healthy
root@extetcdmaster1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.0.23.215:2379 cluster-health
member 1a3ca09cf567d334 is healthy: got healthy result from https://10.0.23.215:2379
member 5a25d004511f496e is healthy: got healthy result from https://10.0.23.219:2379
member 9f536c972b739e17 is healthy: got healthy result from https://10.0.23.214:2379
cluster is healthy
root@extetcdmaster1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.0.23.219:2379 cluster-health
member 1a3ca09cf567d334 is healthy: got healthy result from https://10.0.23.215:2379
member 5a25d004511f496e is healthy: got healthy result from https://10.0.23.219:2379
member 9f536c972b739e17 is healthy: got healthy result from https://10.0.23.214:2379
cluster is healthy
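
For completeness, the same check can be run through the v3 API, which is what the apiserver itself speaks; roughly like this (flag names differ from the v2 form above):

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes \
  -e ETCDCTL_API=3 quay.io/coreos/etcd:v3.2.18 etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/peer.crt \
  --key /etc/kubernetes/pki/etcd/peer.key \
  --endpoints https://10.0.23.214:2379,https://10.0.23.215:2379,https://10.0.23.219:2379 \
  endpoint health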

@joshuacox
Author

joshuacox commented Nov 18, 2018

Looks like maybe a permissions issue? Here are the logs from a scheduler container running on a master instance after it fails to mark itself as master:

E1118 15:24:08.851240       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1118 15:24:08.852972       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:178: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1118 15:24:08.853795       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1118 15:24:08.855062       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1118 15:24:09.847470       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1118 15:24:09.848437       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1118 15:24:09.849649       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1118 15:24:09.850702       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1118 15:24:09.851748       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1118 15:24:09.852623       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1118 15:24:09.854439       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1118 15:24:09.855419       1 reflector.go:134] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:178: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1118 15:24:09.856799       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1118 15:24:09.857736       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I1118 15:24:11.719441       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I1118 15:24:11.819795       1 controller_utils.go:1034] Caches are synced for scheduler controller
I1118 15:24:11.819966       1 leaderelection.go:187] attempting to acquire leader lease  kube-sys
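
Those "forbidden" errors usually mean the apiserver never finished creating its RBAC bootstrap roles (or the scheduler is running with the wrong identity), so a few things worth checking against the failing master, as a sketch:

# did the bootstrap ClusterRole/ClusterRoleBinding for the scheduler get created?
kubectl --kubeconfig=/etc/kubernetes/admin.conf get clusterroles | grep kube-scheduler
kubectl --kubeconfig=/etc/kubernetes/admin.conf get clusterrolebindings | grep kube-scheduler
# which post-start hooks is the apiserver still failing on?
kubectl --kubeconfig=/etc/kubernetes/admin.conf get --raw '/healthz?verbose'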

@joshuacox
Author

joshuacox commented Nov 21, 2018

finally have a successful method here:

prep the etcd nodes by running this script on the primary etcd node:

https://gist.github.com/joshuacox/9df2a029b04e63443b62c2824cf5fb95

 tar zcf - scripts/etcd-test.sh|  ssh root@10.0.23.218 'tar zxvf -;cd scripts; bash etcd-test.sh 10.0.23.218 10.0.23.219 10.0.23.220'; 

and then initialize a master; this script can be run on any host that has been keyed for ssh access to both the master and the primary etcd node

https://gist.github.com/joshuacox/f0f0b25e51df5638f3778d80d4af8c63

bash scripts/final_master.sh 10.0.23.215 10.0.23.218

EDIT: leaving this open while I do some testing to ensure that this is not anomalous
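
For anyone reading along, the part of the linked docs that trips people up is copying the etcd CA and the apiserver's etcd client cert/key from the primary etcd node onto the new master before kubeadm init; presumably final_master.sh does something equivalent to this sketch (with 10.0.23.218 as the primary etcd node and 10.0.23.215 as the master, as in the invocation above):

# copy the etcd CA plus the apiserver-etcd client cert/key to the master,
# preserving the paths the master's kubeadm config expects
ssh root@10.0.23.218 'tar czf - -C /etc/kubernetes/pki etcd/ca.crt apiserver-etcd-client.crt apiserver-etcd-client.key' \
  | ssh root@10.0.23.215 'mkdir -p /etc/kubernetes/pki && tar xzf - -C /etc/kubernetes/pki'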

@joshuacox
Author

I've repeated this a few times now on bare metal and in VMs

@neolit123
Member

we have plans to improve both the way etcd is handled and the way an HA setup is created, removing some of the manual steps. This is on the roadmap for future releases.

@blieberman

blieberman commented Nov 27, 2018

finally have a successful method here:

prep the etcd nodes by running this script on the primary etcd node:

https://gist.github.com/joshuacox/9df2a029b04e63443b62c2824cf5fb95

 tar zcf - scripts/etcd-test.sh|  ssh root@10.0.23.218 'tar zxvf -;cd scripts; bash etcd-test.sh 10.0.23.218 10.0.23.219 10.0.23.220'; 

and then initialize a master; this script can be run on any host that has been keyed for ssh access to both the master and the primary etcd node

https://gist.github.com/joshuacox/f0f0b25e51df5638f3778d80d4af8c63

bash scripts/final_master.sh 10.0.23.215 10.0.23.218

EDIT: leaving this open while I do some testing to ensure that this is not anomalous

I am running into a similar issue bootstrapping a cluster with kubeadm. Can you elaborate further on how you resolved it? All other tickets related to this issue were closed and pointed to this one.

With a working external etcd cluster, my kubeadm configuration is as follows:

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
  - 127.0.0.1
  - kubernetes.default
  - kubernetes.default.svc.cluster.local
  - kubeapi-lb.example.com
controlPlaneEndpoint: "kubeapi-lb.example.com:6443"
etcd:
  external:
    endpoints:
      - https://10.9.2.60:2379
      - https://10.9.3.67:2379
      - https://10.9.2.33:2379
    caFile: /etcd/kubernetes/pki/etcd/ca.pem
    certFile: /etcd/kubernetes/pki/etcd/client.pem
    keyFile: /etcd/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: "10.100.0.1/24"
bootstrapTokens:
- groups:
    - "system:bootstrappers:kubeadm:default-node-token"
  token: "redacted"
  ttl: "0"
  usages:
    - signing
    - authentication
clusterName: "data-nva"
nodeRegistration:
  name: "kubemaster-01"
  criSocket: "/var/run/dockershim.sock"
  taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master

failure

# /usr/bin/kubeadm init --config kubeadm-config.yaml
...
...
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 32.040906 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node $mynode as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node $mynode as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition

environment details

# docker --version
Docker version 18.06.1-ce, build e68fc7a215d7133c34aa18e3b72b4a21fd0c6136
# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
# cat /etc/*release*
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
Amazon Linux release 2 (Karoo)
cpe:2.3:o:amazon:amazon_linux:2

@cyclamen

@blieberman
me neither!
You can use kubeadm init --config ..... -v265 to see some logs
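
That is the standard klog verbosity flag, so even a modest level already shows every API round trip kubeadm makes while it waits; for example (the config filename is just the one from the comment above):

kubeadm init --config kubeadm-config.yaml --v=5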

@joshuacox
Author

Don't forget to test the master connection to the etcd stack:

root@extetcdmaster1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.18 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.0.23.219:2379 cluster-health

here's the final script I was using to provision etcd, master, and node:

https://gist.github.com/joshuacox/95aad9bee0c7e49e735ec3ec553b24ca

or in a more robust manner, my full script:

https://kubash.org/

@hreidar

hreidar commented Mar 4, 2019

Hi, I'm having a similar problem with k8s version 1.13.4

cluster nodes

k8s-c3-lb - 10.10.10.76
k8s-c3-e1 - 10.10.10.90
k8s-c3-e2 - 10.10.10.91
k8s-c3-e3 - 10.10.10.92
k8s-c3-m1 - 10.10.10.93
k8s-c3-m2 - 10.10.10.94
k8s-c3-m3 - 10.10.10.95
k8s-c3-w1 - 10.10.10.96
k8s-c3-w2 - 10.10.10.97
k8s-c3-w3 - 10.10.10.98

node info

root@k8s-c3-m1:~# docker --version
Docker version 18.06.1-ce, build e68fc7a
root@k8s-c3-m1:~# 
root@k8s-c3-m1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:35:32Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-c3-m1:~#
root@k8s-c3-m1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@k8s-c3-m1:~#  
root@k8s-c3-m1:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:        16.04
Codename:       xenial
root@k8s-c3-m1:~#

nginx lb config

root@k8s-c3-lb:~# cat nginx.conf
worker_processes 4;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

error_log /var/log/nginx/error.log info;

stream {
  upstream k8s-c3 {
    server 10.10.10.93:6443;
    server 10.10.10.94:6443;
    server 10.10.10.95:6443;
  }
  server {
    listen 6443;
    proxy_pass k8s-c3;
  }
}
root@k8s-c3-lb:~#
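
Since every kubeadm health poll below goes through 10.10.10.76:6443, it is worth checking by hand, as a sketch, that the stream proxy really reaches an apiserver, and comparing with a direct hit on the first master:

# through the load balancer (what kubeadm is polling)
curl -k 'https://10.10.10.76:6443/healthz?verbose'
# directly against k8s-c3-m1, bypassing nginx
curl -k 'https://10.10.10.93:6443/healthz?verbose'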

kubeadm config on etcd nodes

root@k8s-c3-e1:~# cat kubeadmcfg.yaml 
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "10.10.10.90"
        peerCertSANs:
        - "10.10.10.90"
        extraArgs:
            initial-cluster: k8s-c3-e1=https://10.10.10.90:2380,k8s-c3-e2=https://10.10.10.91:2380,k8s-c3-e3=https://10.10.10.92:2380
            initial-cluster-state: new
            name: k8s-c3-e1
            listen-peer-urls: https://10.10.10.90:2380
            listen-client-urls: https://10.10.10.90:2379
            advertise-client-urls: https://10.10.10.90:2379
            initial-advertise-peer-urls: https://10.10.10.90:2380
root@k8s-c3-e1:~#

etcd check from master

root@k8s-c3-m1:~# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --ca-file /etc/kubernetes/pki/etcd/ca.crt --endpoints https://10.10.10.90:2379 cluster-health
member 2855b88ffd64a219 is healthy: got healthy result from https://10.10.10.91:2379
member 54861c1657ba1b20 is healthy: got healthy result from https://10.10.10.92:2379
member 6fc6fbb1e152a287 is healthy: got healthy result from https://10.10.10.90:2379
cluster is healthy
root@k8s-c3-m1:~#

kubeadm config on master

root@k8s-c3-m1:~# cat /root/kubeadmcfg.yaml 
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "127.0.0.1"
  - "10.10.10.90"
  - "10.10.10.91"
  - "10.10.10.92"
  - "10.10.10.76"
controlPlaneEndpoint: "10.10.10.76:6443"
etcd:
    external:
        endpoints:
        - https://10.10.10.90:2379
        - https://10.10.10.91:2379
        - https://10.10.10.92:2379
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
root@k8s-c3-m1:~#

kubeadm output

root@k8s-c3-m1:~# kubeadm init --config /root/kubeadmcfg.yaml -v 256
I0304 14:52:28.103162    1391 initconfiguration.go:169] loading configuration from the given file
I0304 14:52:28.107089    1391 interface.go:384] Looking for default routes with IPv4 addresses
I0304 14:52:28.107141    1391 interface.go:389] Default route transits interface "eth0"
I0304 14:52:28.107440    1391 interface.go:196] Interface eth0 is up
I0304 14:52:28.107587    1391 interface.go:244] Interface "eth0" has 1 addresses :[10.10.10.93/24].
I0304 14:52:28.107695    1391 interface.go:211] Checking addr  10.10.10.93/24.
I0304 14:52:28.107724    1391 interface.go:218] IP found 10.10.10.93
I0304 14:52:28.107759    1391 interface.go:250] Found valid IPv4 address 10.10.10.93 for interface "eth0".
I0304 14:52:28.107791    1391 interface.go:395] Found active IP 10.10.10.93 
I0304 14:52:28.107979    1391 version.go:163] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable.txt
I0304 14:52:29.493555    1391 feature_gate.go:206] feature gates: &{map[]}
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
I0304 14:52:29.494477    1391 checks.go:572] validating Kubernetes and kubeadm version
I0304 14:52:29.494609    1391 checks.go:171] validating if the firewall is enabled and active
I0304 14:52:29.506263    1391 checks.go:208] validating availability of port 6443
I0304 14:52:29.506767    1391 checks.go:208] validating availability of port 10251
I0304 14:52:29.507110    1391 checks.go:208] validating availability of port 10252
I0304 14:52:29.507454    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0304 14:52:29.507728    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0304 14:52:29.507959    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0304 14:52:29.508140    1391 checks.go:283] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0304 14:52:29.508316    1391 checks.go:430] validating if the connectivity type is via proxy or direct
I0304 14:52:29.508504    1391 checks.go:466] validating http connectivity to first IP address in the CIDR
I0304 14:52:29.508798    1391 checks.go:466] validating http connectivity to first IP address in the CIDR
I0304 14:52:29.509053    1391 checks.go:104] validating the container runtime
I0304 14:52:29.749661    1391 checks.go:130] validating if the service is enabled and active
I0304 14:52:29.778962    1391 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0304 14:52:29.779324    1391 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0304 14:52:29.779573    1391 checks.go:644] validating whether swap is enabled or not
I0304 14:52:29.779818    1391 checks.go:373] validating the presence of executable ip
I0304 14:52:29.780044    1391 checks.go:373] validating the presence of executable iptables
I0304 14:52:29.780251    1391 checks.go:373] validating the presence of executable mount
I0304 14:52:29.780465    1391 checks.go:373] validating the presence of executable nsenter
I0304 14:52:29.780674    1391 checks.go:373] validating the presence of executable ebtables
I0304 14:52:29.780925    1391 checks.go:373] validating the presence of executable ethtool
I0304 14:52:29.781018    1391 checks.go:373] validating the presence of executable socat
I0304 14:52:29.781221    1391 checks.go:373] validating the presence of executable tc
I0304 14:52:29.781415    1391 checks.go:373] validating the presence of executable touch
I0304 14:52:29.781647    1391 checks.go:515] running all checks
I0304 14:52:29.838382    1391 checks.go:403] checking whether the given node name is reachable using net.LookupHost
I0304 14:52:29.838876    1391 checks.go:613] validating kubelet version
I0304 14:52:29.983771    1391 checks.go:130] validating if the service is enabled and active
I0304 14:52:30.011507    1391 checks.go:208] validating availability of port 10250
I0304 14:52:30.011951    1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/etcd/ca.crt
I0304 14:52:30.012301    1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/apiserver-etcd-client.crt
I0304 14:52:30.012360    1391 checks.go:307] validating the existence of file /etc/kubernetes/pki/apiserver-etcd-client.key
I0304 14:52:30.012408    1391 checks.go:685] validating the external etcd version
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0304 14:52:30.238175    1391 checks.go:833] image exists: k8s.gcr.io/kube-apiserver:v1.13.4
I0304 14:52:30.378446    1391 checks.go:833] image exists: k8s.gcr.io/kube-controller-manager:v1.13.4
I0304 14:52:30.560185    1391 checks.go:833] image exists: k8s.gcr.io/kube-scheduler:v1.13.4
I0304 14:52:30.745876    1391 checks.go:833] image exists: k8s.gcr.io/kube-proxy:v1.13.4
I0304 14:52:30.930200    1391 checks.go:833] image exists: k8s.gcr.io/pause:3.1
I0304 14:52:31.096902    1391 checks.go:833] image exists: k8s.gcr.io/coredns:1.2.6
I0304 14:52:31.097108    1391 kubelet.go:71] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0304 14:52:31.256217    1391 kubelet.go:89] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0304 14:52:31.530165    1391 certs.go:113] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-c3-m1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.10.93 10.10.10.76 127.0.0.1 10.10.10.90 10.10.10.91 10.10.10.92 10.10.10.76]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Using existing etcd/ca keyless certificate authority
[certs] External etcd mode: Skipping etcd/server certificate authority generation
[certs] External etcd mode: Skipping etcd/peer certificate authority generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate authority generation
[certs] Using existing apiserver-etcd-client certificate and key on disk
I0304 14:52:33.267470    1391 certs.go:113] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I0304 14:52:33.995630    1391 certs.go:72] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0304 14:52:34.708619    1391 kubeconfig.go:92] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0304 14:52:35.249743    1391 kubeconfig.go:92] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0304 14:52:35.798270    1391 kubeconfig.go:92] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0304 14:52:36.159920    1391 kubeconfig.go:92] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0304 14:52:36.689060    1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.701499    1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0304 14:52:36.701545    1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.703214    1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0304 14:52:36.703259    1391 manifests.go:97] [control-plane] getting StaticPodSpecs
I0304 14:52:36.704327    1391 manifests.go:113] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0304 14:52:36.704356    1391 etcd.go:97] [etcd] External etcd mode. Skipping the creation of a manifest for local etcd
I0304 14:52:36.704377    1391 waitcontrolplane.go:89] [wait-control-plane] Waiting for the API server to be healthy
I0304 14:52:36.705892    1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0304 14:52:36.707216    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:36.711008    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 3 milliseconds
I0304 14:52:36.711030    1391 round_trippers.go:444] Response Headers:
I0304 14:52:36.711077    1391 request.go:779] Got a Retry-After 1s response for attempt 1 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:37.711365    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:37.715841    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 4 milliseconds
I0304 14:52:37.715880    1391 round_trippers.go:444] Response Headers:
I0304 14:52:37.715930    1391 request.go:779] Got a Retry-After 1s response for attempt 2 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:38.716182    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:38.717826    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:38.717850    1391 round_trippers.go:444] Response Headers:
I0304 14:52:38.717897    1391 request.go:779] Got a Retry-After 1s response for attempt 3 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:39.718135    1391 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:39.719946    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:39.719972    1391 round_trippers.go:444] Response Headers:
I0304 14:52:39.720022    1391 request.go:779] Got a Retry-After 1s response for attempt 4 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:40.720273    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:40.722069    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:40.722093    1391 round_trippers.go:444] Response Headers:
I0304 14:52:40.722136    1391 request.go:779] Got a Retry-After 1s response for attempt 5 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:41.722440    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:41.724033    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 1 milliseconds
I0304 14:52:41.724058    1391 round_trippers.go:444] Response Headers:
I0304 14:52:41.724103    1391 request.go:779] Got a Retry-After 1s response for attempt 6 to https://10.10.10.76:6443/healthz?timeout=32s
I0304 14:52:42.724350    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:52.725613    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s  in 10001 milliseconds
I0304 14:52:52.725683    1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.226097    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:53.720051    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 493 milliseconds
I0304 14:52:53.720090    1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.720103    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:53.720115    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:53.720125    1391 round_trippers.go:447]     Content-Length: 879
I0304 14:52:53.720135    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:53 GMT
I0304 14:52:53.720197    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[-]poststarthook/start-kube-apiserver-admission-initializer failed: reason withheld
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[-]autoregister-completion failed: reason withheld
healthz check failed
I0304 14:52:53.726022    1391 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:53.739616    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 13 milliseconds
I0304 14:52:53.739690    1391 round_trippers.go:444] Response Headers:
I0304 14:52:53.739705    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:53.739717    1391 round_trippers.go:447]     Content-Length: 858
I0304 14:52:53.740058    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:53 GMT
I0304 14:52:53.740083    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:53.740342    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[-]poststarthook/start-kube-apiserver-admission-initializer failed: reason withheld
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:54.226068    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:54.232126    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 6 milliseconds
I0304 14:52:54.232149    1391 round_trippers.go:444] Response Headers:
I0304 14:52:54.232161    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:54.232172    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:54.232182    1391 round_trippers.go:447]     Content-Length: 816
I0304 14:52:54.232192    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:54 GMT
I0304 14:52:54.232234    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:54.726154    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:54.734050    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 7 milliseconds
I0304 14:52:54.734091    1391 round_trippers.go:444] Response Headers:
I0304 14:52:54.734111    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:54 GMT
I0304 14:52:54.734129    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:54.734146    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:54.734163    1391 round_trippers.go:447]     Content-Length: 774
I0304 14:52:54.734250    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:55.226158    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:55.231693    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 500 Internal Server Error in 5 milliseconds
I0304 14:52:55.231734    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.231754    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.231772    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:55.231789    1391 round_trippers.go:447]     X-Content-Type-Options: nosniff
I0304 14:52:55.231805    1391 round_trippers.go:447]     Content-Length: 774
I0304 14:52:55.231998    1391 request.go:942] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
I0304 14:52:55.726404    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/healthz?timeout=32s'
I0304 14:52:55.733705    1391 round_trippers.go:438] GET https://10.10.10.76:6443/healthz?timeout=32s 200 OK in 7 milliseconds
I0304 14:52:55.733746    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.733766    1391 round_trippers.go:447]     Content-Type: text/plain; charset=utf-8
I0304 14:52:55.733792    1391 round_trippers.go:447]     Content-Length: 2
I0304 14:52:55.733809    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.733888    1391 request.go:942] Response Body: ok
[apiclient] All control plane components are healthy after 19.026898 seconds
I0304 14:52:55.736342    1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0304 14:52:55.738400    1391 uploadconfig.go:114] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0304 14:52:55.741686    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config'
I0304 14:52:55.751480    1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 9 milliseconds
I0304 14:52:55.751978    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.752324    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.752367    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.752586    1391 round_trippers.go:447]     Content-Length: 1423
I0304 14:52:55.752696    1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"519f6c23-3e69-11e9-8dd7-0050569c544c","resourceVersion":"13121","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"ClusterConfiguration":"apiServer:\n  certSANs:\n  - 127.0.0.1\n  - 10.10.10.90\n  - 10.10.10.91\n  - 10.10.10.92\n  - 10.10.10.76\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  external:\n    caFile: /etc/kubernetes/pki/etcd/ca.crt\n    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n    endpoints:\n    - https://10.10.10.90:2379\n    - https://10.10.10.91:2379\n    - https://10.10.10.92:2379\n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: \"\"\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  k8s-c3-m1:\n    advertiseAddress: 10.10.10.93\n    bindPort: 6443\n  k8s-c3-m2:\n    advertiseAddress: 10.10.10.94\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.756813    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n  certSANs:\n  - 127.0.0.1\n  - 10.10.10.90\n  - 10.10.10.91\n  - 10.10.10.92\n  - 10.10.10.76\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  external:\n    caFile: /etc/kubernetes/pki/etcd/ca.crt\n    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n    endpoints:\n    - https://10.10.10.90:2379\n    - https://10.10.10.91:2379\n    - https://10.10.10.92:2379\n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: \"\"\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  k8s-c3-m1:\n    advertiseAddress: 10.10.10.93\n    bindPort: 6443\n  k8s-c3-m2:\n    advertiseAddress: 10.10.10.94\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.757443    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Content-Type: application/json" -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps'
I0304 14:52:55.913083    1391 round_trippers.go:438] POST https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps 409 Conflict in 155 milliseconds
I0304 14:52:55.913243    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.913271    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.913290    1391 round_trippers.go:447]     Content-Length: 218
I0304 14:52:55.913335    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.913438    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"kubeadm-config\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm-config","kind":"configmaps"},"code":409}
I0304 14:52:55.914863    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","creationTimestamp":null},"data":{"ClusterConfiguration":"apiServer:\n  certSANs:\n  - 127.0.0.1\n  - 10.10.10.90\n  - 10.10.10.91\n  - 10.10.10.92\n  - 10.10.10.76\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  external:\n    caFile: /etc/kubernetes/pki/etcd/ca.crt\n    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n    endpoints:\n    - https://10.10.10.90:2379\n    - https://10.10.10.91:2379\n    - https://10.10.10.92:2379\n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: \"\"\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  k8s-c3-m1:\n    advertiseAddress: 10.10.10.93\n    bindPort: 6443\n  k8s-c3-m2:\n    advertiseAddress: 10.10.10.94\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.915123    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config'
I0304 14:52:55.923538    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 8 milliseconds
I0304 14:52:55.924120    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.924437    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.924810    1391 round_trippers.go:447]     Content-Length: 1423
I0304 14:52:55.925107    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.925521    1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"519f6c23-3e69-11e9-8dd7-0050569c544c","resourceVersion":"13121","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"ClusterConfiguration":"apiServer:\n  certSANs:\n  - 127.0.0.1\n  - 10.10.10.90\n  - 10.10.10.91\n  - 10.10.10.92\n  - 10.10.10.76\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 10.10.10.76:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  external:\n    caFile: /etc/kubernetes/pki/etcd/ca.crt\n    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt\n    endpoints:\n    - https://10.10.10.90:2379\n    - https://10.10.10.91:2379\n    - https://10.10.10.92:2379\n    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.13.4\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: \"\"\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  k8s-c3-m1:\n    advertiseAddress: 10.10.10.93\n    bindPort: 6443\n  k8s-c3-m2:\n    advertiseAddress: 10.10.10.94\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0304 14:52:55.926346    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.926823    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles'
I0304 14:52:55.946643    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 409 Conflict in 19 milliseconds
I0304 14:52:55.947026    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.947441    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.947798    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.948105    1391 round_trippers.go:447]     Content-Length: 298
I0304 14:52:55.948447    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"roles.rbac.authorization.k8s.io \"kubeadm:nodes-kubeadm-config\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:nodes-kubeadm-config","group":"rbac.authorization.k8s.io","kind":"roles"},"code":409}
I0304 14:52:55.949132    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.949653    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config'
I0304 14:52:55.960370    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:nodes-kubeadm-config 200 OK in 10 milliseconds
I0304 14:52:55.960920    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.961216    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.961507    1391 round_trippers.go:447]     Content-Length: 464
I0304 14:52:55.961789    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.962002    1391 request.go:942] Response Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm%3Anodes-kubeadm-config","uid":"51a356c9-3e69-11e9-8dd7-0050569c544c","resourceVersion":"559","creationTimestamp":"2019-03-04T10:36:07Z"},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubeadm-config"]}]}
I0304 14:52:55.964418    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:55.965022    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings'
I0304 14:52:55.983782    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 409 Conflict in 18 milliseconds
I0304 14:52:55.983847    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.983890    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.983920    1391 round_trippers.go:447]     Content-Length: 312
I0304 14:52:55.983948    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.984007    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rolebindings.rbac.authorization.k8s.io \"kubeadm:nodes-kubeadm-config\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:nodes-kubeadm-config","group":"rbac.authorization.k8s.io","kind":"rolebindings"},"code":409}
I0304 14:52:55.984330    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:55.984464    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:nodes-kubeadm-config'
I0304 14:52:55.994138    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:nodes-kubeadm-config 200 OK in 9 milliseconds
I0304 14:52:55.994193    1391 round_trippers.go:444] Response Headers:
I0304 14:52:55.994497    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:55.994878    1391 round_trippers.go:447]     Content-Length: 678
I0304 14:52:55.995094    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:55 GMT
I0304 14:52:55.995377    1391 request.go:942] Response Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:nodes-kubeadm-config","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm%3Anodes-kubeadm-config","uid":"51a61bf8-3e69-11e9-8dd7-0050569c544c","resourceVersion":"560","creationTimestamp":"2019-03-04T10:36:07Z"},"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:bootstrappers:kubeadm:default-node-token"},{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:nodes"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:nodes-kubeadm-config"}}
I0304 14:52:56.001421    1391 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0304 14:52:56.002891    1391 uploadconfig.go:128] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
I0304 14:52:56.005261    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n  anonymous:\n    enabled: false\n  webhook:\n    cacheTTL: 2m0s\n    enabled: true\n  x509:\n    clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n  mode: Webhook\n  webhook:\n    cacheAuthorizedTTL: 5m0s\n    cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: true\nenableDebuggingHandlers: true\nenforceNodeAllocatable:\n- pods\neventBurst: 10\neventRecordQPS: 5\nevictionHard:\n  imagefs.available: 15%\n  memory.available: 100Mi\n  nodefs.available: 10%\n  nodefs.inodesFree: 5%\nevictionPressureTransitionPeriod: 5m0s\nfailSwapOn: true\nfileCheckFrequency: 20s\nhairpinMode: promiscuous-bridge\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 20s\nimageGCHighThresholdPercent: 85\nimageGCLowThresholdPercent: 80\nimageMinimumGCAge: 2m0s\niptablesDropBit: 15\niptablesMasqueradeBit: 14\nkind: KubeletConfiguration\nkubeAPIBurst: 10\nkubeAPIQPS: 5\nmakeIPTablesUtilChains: true\nmaxOpenFiles: 1000000\nmaxPods: 110\nnodeLeaseDurationSeconds: 40\nnodeStatusReportFrequency: 1m0s\nnodeStatusUpdateFrequency: 10s\noomScoreAdj: -999\npodPidsLimit: -1\nport: 10250\nregistryBurst: 10\nregistryPullQPS: 5\nresolvConf: /etc/resolv.conf\nrotateCertificates: true\nruntimeRequestTimeout: 2m0s\nserializeImagePulls: true\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 4h0m0s\nsyncFrequency: 1m0s\nvolumeStatsAggPeriod: 1m0s\n"}}
I0304 14:52:56.005580    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps'
I0304 14:52:56.026664    1391 round_trippers.go:438] POST https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps 409 Conflict in 20 milliseconds
I0304 14:52:56.026763    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.026798    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.026852    1391 round_trippers.go:447]     Content-Length: 228
I0304 14:52:56.026931    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.027084    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps \"kubelet-config-1.13\" already exists","reason":"AlreadyExists","details":{"name":"kubelet-config-1.13","kind":"configmaps"},"code":409}
I0304 14:52:56.027551    1391 request.go:942] Request Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n  anonymous:\n    enabled: false\n  webhook:\n    cacheTTL: 2m0s\n    enabled: true\n  x509:\n    clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n  mode: Webhook\n  webhook:\n    cacheAuthorizedTTL: 5m0s\n    cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: true\nenableDebuggingHandlers: true\nenforceNodeAllocatable:\n- pods\neventBurst: 10\neventRecordQPS: 5\nevictionHard:\n  imagefs.available: 15%\n  memory.available: 100Mi\n  nodefs.available: 10%\n  nodefs.inodesFree: 5%\nevictionPressureTransitionPeriod: 5m0s\nfailSwapOn: true\nfileCheckFrequency: 20s\nhairpinMode: promiscuous-bridge\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 20s\nimageGCHighThresholdPercent: 85\nimageGCLowThresholdPercent: 80\nimageMinimumGCAge: 2m0s\niptablesDropBit: 15\niptablesMasqueradeBit: 14\nkind: KubeletConfiguration\nkubeAPIBurst: 10\nkubeAPIQPS: 5\nmakeIPTablesUtilChains: true\nmaxOpenFiles: 1000000\nmaxPods: 110\nnodeLeaseDurationSeconds: 40\nnodeStatusReportFrequency: 1m0s\nnodeStatusUpdateFrequency: 10s\noomScoreAdj: -999\npodPidsLimit: -1\nport: 10250\nregistryBurst: 10\nregistryPullQPS: 5\nresolvConf: /etc/resolv.conf\nrotateCertificates: true\nruntimeRequestTimeout: 2m0s\nserializeImagePulls: true\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 4h0m0s\nsyncFrequency: 1m0s\nvolumeStatsAggPeriod: 1m0s\n"}}
I0304 14:52:56.027830    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13'
I0304 14:52:56.036853    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13 200 OK in 8 milliseconds
I0304 14:52:56.036900    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.037253    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.037291    1391 round_trippers.go:447]     Content-Length: 2133
I0304 14:52:56.037554    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.037755    1391 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.13","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.13","uid":"51a9de57-3e69-11e9-8dd7-0050569c544c","resourceVersion":"561","creationTimestamp":"2019-03-04T10:36:07Z"},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n  anonymous:\n    enabled: false\n  webhook:\n    cacheTTL: 2m0s\n    enabled: true\n  x509:\n    clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n  mode: Webhook\n  webhook:\n    cacheAuthorizedTTL: 5m0s\n    cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: true\nenableDebuggingHandlers: true\nenforceNodeAllocatable:\n- pods\neventBurst: 10\neventRecordQPS: 5\nevictionHard:\n  imagefs.available: 15%\n  memory.available: 100Mi\n  nodefs.available: 10%\n  nodefs.inodesFree: 5%\nevictionPressureTransitionPeriod: 5m0s\nfailSwapOn: true\nfileCheckFrequency: 20s\nhairpinMode: promiscuous-bridge\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 20s\nimageGCHighThresholdPercent: 85\nimageGCLowThresholdPercent: 80\nimageMinimumGCAge: 2m0s\niptablesDropBit: 15\niptablesMasqueradeBit: 14\nkind: KubeletConfiguration\nkubeAPIBurst: 10\nkubeAPIQPS: 5\nmakeIPTablesUtilChains: true\nmaxOpenFiles: 1000000\nmaxPods: 110\nnodeLeaseDurationSeconds: 40\nnodeStatusReportFrequency: 1m0s\nnodeStatusUpdateFrequency: 10s\noomScoreAdj: -999\npodPidsLimit: -1\nport: 10250\nregistryBurst: 10\nregistryPullQPS: 5\nresolvConf: /etc/resolv.conf\nrotateCertificates: true\nruntimeRequestTimeout: 2m0s\nserializeImagePulls: true\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 4h0m0s\nsyncFrequency: 1m0s\nvolumeStatsAggPeriod: 1m0s\n"}}
I0304 14:52:56.038255    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.038523    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles'
I0304 14:52:56.052414    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 409 Conflict in 13 milliseconds
I0304 14:52:56.052512    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.052572    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.052603    1391 round_trippers.go:447]     Content-Length: 296
I0304 14:52:56.052685    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.052955    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"roles.rbac.authorization.k8s.io \"kubeadm:kubelet-config-1.13\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:kubelet-config-1.13","group":"rbac.authorization.k8s.io","kind":"roles"},"code":409}
I0304 14:52:56.053398    1391 request.go:942] Request Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.053646    1391 round_trippers.go:419] curl -k -v -XPUT  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:kubelet-config-1.13'
I0304 14:52:56.061599    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm:kubelet-config-1.13 200 OK in 7 milliseconds
I0304 14:52:56.061691    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.061723    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.061779    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.061808    1391 round_trippers.go:447]     Content-Length: 467
I0304 14:52:56.061917    1391 request.go:942] Response Body: {"kind":"Role","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/kubeadm%3Akubelet-config-1.13","uid":"51abee39-3e69-11e9-8dd7-0050569c544c","resourceVersion":"562","creationTimestamp":"2019-03-04T10:36:07Z"},"rules":[{"verbs":["get"],"apiGroups":[""],"resources":["configmaps"],"resourceNames":["kubelet-config-1.13"]}]}
I0304 14:52:56.062370    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:nodes"},{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.062564    1391 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json, */*" -H "Content-Type: application/json" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings'
I0304 14:52:56.076620    1391 round_trippers.go:438] POST https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 409 Conflict in 13 milliseconds
I0304 14:52:56.076664    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.076902    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.076938    1391 round_trippers.go:447]     Content-Length: 310
I0304 14:52:56.077092    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.077299    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"rolebindings.rbac.authorization.k8s.io \"kubeadm:kubelet-config-1.13\" already exists","reason":"AlreadyExists","details":{"name":"kubeadm:kubelet-config-1.13","group":"rbac.authorization.k8s.io","kind":"rolebindings"},"code":409}
I0304 14:52:56.077657    1391 request.go:942] Request Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","creationTimestamp":null},"subjects":[{"kind":"Group","name":"system:nodes"},{"kind":"Group","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.077940    1391 round_trippers.go:419] curl -k -v -XPUT  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" -H "Content-Type: application/json" 'https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:kubelet-config-1.13'
I0304 14:52:56.084893    1391 round_trippers.go:438] PUT https://10.10.10.76:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm:kubelet-config-1.13 200 OK in 6 milliseconds
I0304 14:52:56.084937    1391 round_trippers.go:444] Response Headers:
I0304 14:52:56.085395    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:52:56.085635    1391 round_trippers.go:447]     Content-Length: 675
I0304 14:52:56.085675    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:52:56 GMT
I0304 14:52:56.086357    1391 request.go:942] Response Body: {"kind":"RoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:kubelet-config-1.13","namespace":"kube-system","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/kubeadm%3Akubelet-config-1.13","uid":"51ad932c-3e69-11e9-8dd7-0050569c544c","resourceVersion":"563","creationTimestamp":"2019-03-04T10:36:07Z"},"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:nodes"},{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:bootstrappers:kubeadm:default-node-token"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"kubeadm:kubelet-config-1.13"}}
I0304 14:52:56.086694    1391 uploadconfig.go:133] [upload-config] Preserving the CRISocket information for the control-plane node
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-c3-m1" as an annotation
...
I0304 14:53:16.587525    1391 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" 'https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1'
I0304 14:53:16.597510    1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1 404 Not Found in 9 milliseconds
I0304 14:53:16.597872    1391 round_trippers.go:444] Response Headers:
I0304 14:53:16.597909    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:53:16.598117    1391 round_trippers.go:447]     Content-Length: 188
I0304 14:53:16.598141    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:53:16 GMT
I0304 14:53:16.598332    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-c3-m1\" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
[kubelet-check] Initial timeout of 40s passed.
...
I0304 14:53:17.111508    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-c3-m1\" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
I0304 14:54:56.095649    1391 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.13.4 (linux/amd64) kubernetes/c27b913" -H "Accept: application/json, */*" 'https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1'
I0304 14:54:56.101815    1391 round_trippers.go:438] GET https://10.10.10.76:6443/api/v1/nodes/k8s-c3-m1 404 Not Found in 6 milliseconds
I0304 14:54:56.101895    1391 round_trippers.go:444] Response Headers:
I0304 14:54:56.101926    1391 round_trippers.go:447]     Content-Type: application/json
I0304 14:54:56.101945    1391 round_trippers.go:447]     Content-Length: 188
I0304 14:54:56.101996    1391 round_trippers.go:447]     Date: Mon, 04 Mar 2019 14:54:56 GMT
I0304 14:54:56.102074    1391 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-c3-m1\" not found","reason":"NotFound","details":{"name":"k8s-c3-m1","kind":"nodes"},"code":404}
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
root@k8s-c3-m1:~#

docker ps

root@k8s-c3-m1:~# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
d6e9af6f2585        dd862b749309           "kube-scheduler --ad…"   28 minutes ago      Up 28 minutes                           k8s_kube-scheduler_kube-scheduler-k8s-c3-m1_kube-system_4b52d75cab61380f07c0c5a69fb371d4_1
76bcca06bb0c        40a817357014           "kube-controller-man…"   28 minutes ago      Up 28 minutes                           k8s_kube-controller-manager_kube-controller-manager-k8s-c3-m1_kube-system_3a2670bb8847c2036740fe0f0a3de429_1
74c9b34ec00d        fc3801f0fc54           "kube-apiserver --au…"   About an hour ago   Up About an hour                        k8s_kube-apiserver_kube-apiserver-k8s-c3-m1_kube-system_6fb1fd1d468dedcf6a62eff4d392685e_0
e68bbbc0967e        k8s.gcr.io/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-scheduler-k8s-c3-m1_kube-system_4b52d75cab61380f07c0c5a69fb371d4_0
0d6e0d0040cf        k8s.gcr.io/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-controller-manager-k8s-c3-m1_kube-system_3a2670bb8847c2036740fe0f0a3de429_0
29f7974ae280        k8s.gcr.io/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-apiserver-k8s-c3-m1_kube-system_6fb1fd1d468dedcf6a62eff4d392685e_0
root@k8s-c3-m1:~#

systemctl

root@k8s-c3-m1:~# systemctl status kubelet          
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf, 20-etcd-service-manager.conf
   Active: active (running) since Mon 2019-03-04 14:52:31 GMT; 55min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 1512 (kubelet)
    Tasks: 17
   Memory: 42.1M
      CPU: 2min 45.226s
   CGroup: /system.slice/kubelet.service
           └─1512 /usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true

Mar 04 15:46:26 k8s-c3-m1 kubelet[1512]: I0304 15:46:26.450692    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:27 k8s-c3-m1 kubelet[1512]: I0304 15:46:27.566498    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:36 k8s-c3-m1 kubelet[1512]: I0304 15:46:36.519582    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:46 k8s-c3-m1 kubelet[1512]: I0304 15:46:46.621611    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.566111    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.568601    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:56 k8s-c3-m1 kubelet[1512]: I0304 15:46:56.706182    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:06 k8s-c3-m1 kubelet[1512]: I0304 15:47:06.778864    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:16 k8s-c3-m1 kubelet[1512]: I0304 15:47:16.852441    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:47:26 k8s-c3-m1 kubelet[1512]: I0304 15:47:26.893380    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
root@k8s-c3-m1:~#

journalctl -xeu kubelet

root@k8s-c3-m1:~# journalctl -xeu kubelet
Mar 04 15:38:53 k8s-c3-m1 kubelet[1512]: I0304 15:38:53.135568    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:03 k8s-c3-m1 kubelet[1512]: I0304 15:39:03.215031    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:13 k8s-c3-m1 kubelet[1512]: I0304 15:39:13.290469    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:23 k8s-c3-m1 kubelet[1512]: I0304 15:39:23.367081    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:26 k8s-c3-m1 kubelet[1512]: I0304 15:39:26.566426    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:33 k8s-c3-m1 kubelet[1512]: I0304 15:39:33.431954    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:33 k8s-c3-m1 kubelet[1512]: I0304 15:39:33.566201    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:43 k8s-c3-m1 kubelet[1512]: I0304 15:39:43.498836    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:39:53 k8s-c3-m1 kubelet[1512]: I0304 15:39:53.570568    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:03 k8s-c3-m1 kubelet[1512]: I0304 15:40:03.655276    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:08 k8s-c3-m1 kubelet[1512]: I0304 15:40:08.566616    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:13 k8s-c3-m1 kubelet[1512]: I0304 15:40:13.756879    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:23 k8s-c3-m1 kubelet[1512]: I0304 15:40:23.821072    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:33 k8s-c3-m1 kubelet[1512]: I0304 15:40:33.904937    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:34 k8s-c3-m1 kubelet[1512]: I0304 15:40:34.566237    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:41 k8s-c3-m1 kubelet[1512]: I0304 15:40:41.566373    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:43 k8s-c3-m1 kubelet[1512]: I0304 15:40:43.980238    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:40:54 k8s-c3-m1 kubelet[1512]: I0304 15:40:54.049829    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:04 k8s-c3-m1 kubelet[1512]: I0304 15:41:04.120501    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:14 k8s-c3-m1 kubelet[1512]: I0304 15:41:14.188172    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:24 k8s-c3-m1 kubelet[1512]: I0304 15:41:24.257331    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:29 k8s-c3-m1 kubelet[1512]: I0304 15:41:29.566046    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:34 k8s-c3-m1 kubelet[1512]: I0304 15:41:34.336272    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:44 k8s-c3-m1 kubelet[1512]: I0304 15:41:44.421498    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:51 k8s-c3-m1 kubelet[1512]: I0304 15:41:51.566118    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:41:54 k8s-c3-m1 kubelet[1512]: I0304 15:41:54.510862    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:04 k8s-c3-m1 kubelet[1512]: I0304 15:42:04.602424    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:11 k8s-c3-m1 kubelet[1512]: I0304 15:42:11.566156    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:14 k8s-c3-m1 kubelet[1512]: I0304 15:42:14.672348    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:24 k8s-c3-m1 kubelet[1512]: I0304 15:42:24.739645    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:34 k8s-c3-m1 kubelet[1512]: I0304 15:42:34.809602    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:44 k8s-c3-m1 kubelet[1512]: I0304 15:42:44.569874    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:44 k8s-c3-m1 kubelet[1512]: I0304 15:42:44.878417    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:54 k8s-c3-m1 kubelet[1512]: I0304 15:42:54.949520    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:42:57 k8s-c3-m1 kubelet[1512]: I0304 15:42:57.566517    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:05 k8s-c3-m1 kubelet[1512]: I0304 15:43:05.031910    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:15 k8s-c3-m1 kubelet[1512]: I0304 15:43:15.131797    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:25 k8s-c3-m1 kubelet[1512]: I0304 15:43:25.199036    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:29 k8s-c3-m1 kubelet[1512]: I0304 15:43:29.566339    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:35 k8s-c3-m1 kubelet[1512]: I0304 15:43:35.311614    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:45 k8s-c3-m1 kubelet[1512]: I0304 15:43:45.376789    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:55 k8s-c3-m1 kubelet[1512]: I0304 15:43:55.452387    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:43:57 k8s-c3-m1 kubelet[1512]: I0304 15:43:57.566088    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:05 k8s-c3-m1 kubelet[1512]: I0304 15:44:05.502619    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:15 k8s-c3-m1 kubelet[1512]: I0304 15:44:15.582590    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:24 k8s-c3-m1 kubelet[1512]: I0304 15:44:24.567123    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:25 k8s-c3-m1 kubelet[1512]: I0304 15:44:25.622999    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:35 k8s-c3-m1 kubelet[1512]: I0304 15:44:35.669595    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:45 k8s-c3-m1 kubelet[1512]: I0304 15:44:45.742763    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:49 k8s-c3-m1 kubelet[1512]: I0304 15:44:49.566491    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:55 k8s-c3-m1 kubelet[1512]: I0304 15:44:55.812636    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:44:58 k8s-c3-m1 kubelet[1512]: I0304 15:44:58.566265    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:05 k8s-c3-m1 kubelet[1512]: I0304 15:45:05.890388    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:15 k8s-c3-m1 kubelet[1512]: I0304 15:45:15.971426    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:26 k8s-c3-m1 kubelet[1512]: I0304 15:45:26.043344    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:36 k8s-c3-m1 kubelet[1512]: I0304 15:45:36.117636    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:36 k8s-c3-m1 kubelet[1512]: I0304 15:45:36.566338    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:46 k8s-c3-m1 kubelet[1512]: I0304 15:45:46.190995    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:51 k8s-c3-m1 kubelet[1512]: I0304 15:45:51.566093    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:45:56 k8s-c3-m1 kubelet[1512]: I0304 15:45:56.273010    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:06 k8s-c3-m1 kubelet[1512]: I0304 15:46:06.346175    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Mar 04 15:46:16 k8s-c3-m1 kubelet[1512]: I0304 15:46:16.384087    1512 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
root@k8s-c3-m1:~# 

Any ideas?

@joshuacox
Author

I just released kubash 1.13.4 and have tested both the stacked and external-etcd methods using 1.13.4. I'd gladly gather any other info from a running cluster if you'd like.

@hreidar

hreidar commented Mar 5, 2019

Hi, I updated the kubeadm output above after finding out that my load balancer, which runs in Docker, was not configured to use host network_mode. Not sure if that mattered, but better safe than sorry.
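For reference, a minimal sketch of running the API load balancer with host networking in Docker (the image, config path, and version are assumptions, not taken from this setup; in docker-compose the equivalent is network_mode: host):

docker run -d --name k8s-api-lb \
  --network host \
  --restart always \
  -v /etc/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy:1.9
# haproxy.cfg would bind :6443 on the VIP host and forward to the master nodes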

@rdodev @timothysc any idea what my problem is here? Should I open up a new issue for this?

@hreidar

hreidar commented Mar 5, 2019

@joshuacox can you share what exactly fixed your issue?

@joshuacox
Author

@hreidar I fixed it by changing the final script to this:

https://gist.github.com/joshuacox/95aad9bee0c7e49e735ec3ec553b24ca

I suggest you script out everything so you can reproduce the error consistently; if we can then reproduce your error as well, we are far more likely to be able to identify the issue.

@hreidar

hreidar commented Mar 5, 2019

Ok, here are the steps I have written down so far...

node preparation

# note! - docker needs to be installed on all nodes (it is on my 16.04 template VMs)

# install misc tools
apt-get update && apt-get install -y apt-transport-https curl

# install required k8s tools
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

# turn off swap
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# create systemd config for kubelet
cat << _EOF_ > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
Restart=always
_EOF_

# reload systemd and restart kubelet
systemctl daemon-reload
systemctl restart kubelet
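
# (optional sanity check, not in the original steps) confirm the kubelet drop-in took effect
systemctl status kubelet --no-pager
journalctl -u kubelet -n 20 --no-pager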

config creation for kubelet and etcd

### on all etcd nodes
# create required variables
declare -A ETCDINFO
ETCDINFO=([k8s-c3-e1]=10.10.10.90 [k8s-c3-e2]=10.10.10.91 [k8s-c3-e3]=10.10.10.92)
mapfile -t ETCDNAMES < <(for KEY in ${!ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t ETCDIPS < <(for KEY in ${ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
declare -A MASTERINFO
MASTERINFO=([k8s-c3-m1]=10.10.10.93 [k8s-c3-m2]=10.10.10.94 [k8s-c3-m3]=10.10.10.95)
mapfile -t MASTERNAMES < <(for KEY in ${!MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t MASTERIPS < <(for KEY in ${MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
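
# (illustrative, assuming the IPs above) the mapfile/sort trick pairs names and IPs by IP order:
#   ETCDNAMES=(k8s-c3-e1 k8s-c3-e2 k8s-c3-e3)     ETCDIPS=(10.10.10.90 10.10.10.91 10.10.10.92)
#   MASTERNAMES=(k8s-c3-m1 k8s-c3-m2 k8s-c3-m3)   MASTERIPS=(10.10.10.93 10.10.10.94 10.10.10.95)
# so index i in a NAMES array always corresponds to index i in the matching IPS array below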

# create clusterConfig for etcd
cat << EOF > /root/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${ETCDINFO[$HOSTNAME]}"
        peerCertSANs:
        - "${ETCDINFO[$HOSTNAME]}"
        extraArgs:
            initial-cluster: ${ETCDNAMES[0]}=https://${ETCDIPS[0]}:2380,${ETCDNAMES[1]}=https://${ETCDIPS[1]}:2380,${ETCDNAMES[2]}=https://${ETCDIPS[2]}:2380
            initial-cluster-state: new
            name: ${HOSTNAME}
            listen-peer-urls: https://${ETCDINFO[$HOSTNAME]}:2380
            listen-client-urls: https://${ETCDINFO[$HOSTNAME]}:2379
            advertise-client-urls: https://${ETCDINFO[$HOSTNAME]}:2379
            initial-advertise-peer-urls: https://${ETCDINFO[$HOSTNAME]}:2380
EOF

generate and distribute certs

### run only on one etcd node (k8s-c3-e1)
# generate the main certificate authority (creates two files in /etc/kubernetes/pki/etcd/)
kubeadm init phase certs etcd-ca

# create certificates
kubeadm init phase certs etcd-server --config=/root/kubeadmcfg.yaml 
kubeadm init phase certs etcd-peer --config=/root/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/root/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/root/kubeadmcfg.yaml
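
# (optional check, not in the original steps) the phases above should roughly leave:
#   /etc/kubernetes/pki/apiserver-etcd-client.crt and .key
#   /etc/kubernetes/pki/etcd/{ca,server,peer,healthcheck-client}.crt and .key
ls -R /etc/kubernetes/pki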

# copy cert files from k8s-c3-e1 to the other etcd nodes
scp -rp /etc/kubernetes/pki ubuntu@${ETCDIPS[1]}: && \
ssh -t ubuntu@${ETCDIPS[1]} "sudo mv pki /etc/kubernetes/ && \
sudo chown -R root.root /etc/kubernetes/pki"

scp -rp /etc/kubernetes/pki ubuntu@${ETCDIPS[2]}: && \
ssh -t ubuntu@${ETCDIPS[2]} "sudo mv pki /etc/kubernetes/ && \
sudo chown -R root.root /etc/kubernetes/pki"

# copy cert files from k8s-c3-e1 to the master nodes
scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[0]}: && \
ssh -t ubuntu@${MASTERIPS[0]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"

scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[1]}: && \
ssh -t ubuntu@${MASTERIPS[1]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"

scp -rp /etc/kubernetes/pki ubuntu@${MASTERIPS[2]}: && \
ssh -t ubuntu@${MASTERIPS[2]} "sudo mv pki /etc/kubernetes/ && \
sudo find /etc/kubernetes/pki -not -name ca.crt \
-not -name apiserver-etcd-client.crt \
-not -name apiserver-etcd-client.key \
-type f -delete && \
sudo chown -R root.root /etc/kubernetes/pki"

### run on the other etcd nodes
# create certificates
kubeadm init phase certs etcd-server --config=/root/kubeadmcfg.yaml 
kubeadm init phase certs etcd-peer --config=/root/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/root/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/root/kubeadmcfg.yaml

# create the etcd static pod manifest
kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml


### run only on one etcd node (k8s-c3-e1)
# check if cluster is running
docker run --rm -it \
--net host \
-v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl \
--cert-file /etc/kubernetes/pki/etcd/peer.crt \
--key-file /etc/kubernetes/pki/etcd/peer.key \
--ca-file /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${ETCDIPS[0]}:2379 cluster-health
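
# (alternative sketch using the etcd v3 API; the flag names differ from the v2-style ones above)
docker run --rm -it \
--net host \
-e ETCDCTL_API=3 \
-v /etc/kubernetes:/etc/kubernetes quay.io/coreos/etcd:v3.2.24 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${ETCDIPS[0]}:2379 endpoint health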

config and init master nodes

### run on all master nodes
# create required variables
declare -A ETCDINFO
ETCDINFO=([k8s-c3-e1]=10.10.10.90 [k8s-c3-e2]=10.10.10.91 [k8s-c3-e3]=10.10.10.92)
mapfile -t ETCDNAMES < <(for KEY in ${!ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t ETCDIPS < <(for KEY in ${ETCDINFO[@]}; do echo "${ETCDINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
declare -A MASTERINFO
MASTERINFO=([k8s-c3-m1]=10.10.10.93 [k8s-c3-m2]=10.10.10.94 [k8s-c3-m3]=10.10.10.95)
mapfile -t MASTERNAMES < <(for KEY in ${!MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
mapfile -t MASTERIPS < <(for KEY in ${MASTERINFO[@]}; do echo "${MASTERINFO[$KEY]}:::$KEY"; done | sort | awk -F::: '{print $2}')
VIP=10.10.10.76

# create clusterConfig for master nodes
cat << EOF > /root/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "127.0.0.1"
  - "${ETCDIPS[0]}"
  - "${ETCDIPS[1]}"
  - "${ETCDIPS[2]}"
  - "${VIP}"
controlPlaneEndpoint: "${VIP}:6443"
etcd:
    external:
        endpoints:
        - https://${ETCDIPS[0]}:2379
        - https://${ETCDIPS[1]}:2379
        - https://${ETCDIPS[2]}:2379
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF
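
# (sanity check, not in the original steps) the external etcd files referenced above must already exist here
ls -l /etc/kubernetes/pki/etcd/ca.crt \
      /etc/kubernetes/pki/apiserver-etcd-client.crt \
      /etc/kubernetes/pki/apiserver-etcd-client.key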

### run only on the first master node (k8s-c3-m1)
# init the first master node
service kubelet stop && \
kubeadm init --config /root/kubeadmcfg.yaml

... and I'm stuck at the master init step :-)
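
A minimal diagnostic sketch for when init hangs at this point (run from a second shell; the port and paths are just the defaults used above):

journalctl -u kubelet -f
docker ps | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler'
curl -k https://127.0.0.1:6443/healthz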

@joshuacox
Author

Is this an external etcd setup? Why don't you include that flag?

https://gist.github.com/joshuacox/95aad9bee0c7e49e735ec3ec553b24ca#file-final_node-sh-L42

@hreidar

hreidar commented Mar 5, 2019

I was not aware of its existence. Is this the exact command?

kubeadm init  --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml,ExternalEtcdVersion --config /etc/kubernetes/kubeadmcfg.yaml

It gives me an error:

[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10251]: Port 10251 is in use
        [ERROR Port-10252]: Port 10252 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

@joshuacox
Author

joshuacox commented Mar 5, 2019

Are you certain this system is clean? Those ports being in use indicates you already have a (partially?) running cluster.

EDIT: perhaps

kubeadm reset

further EDIT: also, it appears that was old code, and you are correct about the command; at least per the docs, it sounds like they implemented a switch on the external block in the config. Indeed, my own code in kubash no longer has those flags; I have a running 1.13.4 cluster using this line to implement my kubeadm init, which has none of those flags.
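
For reference, a hedged sketch of that reset-and-retry sequence. Note that on an external-etcd master, kubeadm reset also clears /etc/kubernetes/pki, so the etcd CA and apiserver-etcd-client certs copied over earlier would have to be copied again before re-running init:

kubeadm reset -f
# re-copy /etc/kubernetes/pki/etcd/ca.crt and apiserver-etcd-client.crt/.key from the etcd node here
kubeadm init --config /root/kubeadmcfg.yaml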

@hreidar

hreidar commented Mar 5, 2019

You are right, I did forget to reset, but I'm using the external block in my manifest. I'm trying to follow the official documentation as closely as I can, but I'm stuck on initializing a master node, as shown in my previous posts.

@hreidar

hreidar commented Mar 6, 2019

What is this error telling me?

error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition

Is kubeadm not able to talk to Docker via /var/run/dockershim.sock?
It seems to be trying to annotate a node that does not exist.
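
A couple of quick checks for that theory (the socket path and kubeconfig location are the kubeadm/dockershim defaults, assumed rather than confirmed here):

# does the kubelet's CRI socket exist?
ls -l /var/run/dockershim.sock
# has a Node object been registered at all? (the CRI socket is written as an annotation on it)
kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -o wide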

@joshuacox how is your Docker setup? Which cgroup driver are you using for Docker and the kubelet?

@joshuacox
Author

docker info
Containers: 19
 Running: 17
 Paused: 0
 Stopped: 2
Images: 31
Server Version: 17.03.3-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 103
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6c463891b1ad274d505ae3bb738e530d1df2b3c7
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-142-generic
Operating System: Ubuntu 16.04.6 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 2.119 GiB
Name: thalhalla-master1
ID: JYXS:H6MM:FFLN:ILI3:2LRX:WOKR:AUC6:VTJH:A5W6:DGPD:WPO3:6KNF
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

@hreidar

hreidar commented Mar 6, 2019

Ok, it seems to be a similar setup, but I'm going to try your version of Docker.

root@k8s-c3-m1:~# docker info
Containers: 6
 Running: 6
 Paused: 0
 Stopped: 0
Images: 7
Server Version: 18.09.3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 27
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: e6b3f5632f50dbc4e9cb6288d911bf4f5e95b18e
runc version: 6635b4f0c6af3810594d2770f662f34ddc15b40d
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-112-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.859GiB
Name: k8s-c3-m1
ID: EQ42:4KQG:5Z42:GQ67:OUU5:SPUA:P6VB:OM7P:S5XF:VLER:5DZI:DU4S
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: No swap limit support
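
Both docker info outputs show Cgroup Driver: cgroupfs, so at least the two setups match. If the driver ever needs to be switched to systemd, a minimal sketch (the daemon.json contents are the commonly documented ones, assumed rather than taken from this thread):

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
# the kubelet's cgroup driver then has to match as well (cgroupDriver: systemd in the
# KubeletConfiguration passed to kubeadm, or --cgroup-driver=systemd on older setups)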

@hreidar

hreidar commented Mar 6, 2019

No luck. It seems that etcd is empty, and the only resource I can list from the k8s API after this failed step is a ClusterIP service.

root@k8s-c3-m1:~# kubectl get all --kubeconfig /etc/kubernetes/admin.conf                
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d4h
root@k8s-c3-m1:~#
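
Note that kubectl get all only looks at the default namespace, so the kubernetes ClusterIP service being the only result is expected even on a healthy cluster; the control-plane components live in kube-system. A couple of further checks might narrow it down (a sketch; the kubeconfig path is just the kubeadm default used above):

kubectl get nodes --kubeconfig /etc/kubernetes/admin.conf
kubectl get pods -n kube-system -o wide --kubeconfig /etc/kubernetes/admin.conf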

I think I need to open a new issue to try to get a developer to look at this. The info in the logs is not making any sense to me.
