
kubeadm: Improve resiliency in CreateOrMutateConfigMap #85763

Merged
merged 1 commit into kubernetes:master on Nov 30, 2019

Conversation

ereslibre
Contributor

@ereslibre ereslibre commented Nov 30, 2019

What type of PR is this?
/kind bug

What this PR does / why we need it:
CreateOrMutateConfigMap was not resilient when trying to Create
the ConfigMap. If this operation returned an unknown error, the whole
operation would fail, because the code was strict about which error it
expected right afterwards: if the error returned by the Create call
was an IsAlreadyExists error, it would work fine. However, if an
unexpected error (such as an EOF) happened, the call would fail.

We are seeing this error especially when running control plane node
joins in an automated fashion, where things happen at a relatively
high pace.

It was especially easy to reproduce with kind, with several control
plane instances. E.g.:

```
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1130 11:43:42.788952     887 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s  in 1013 milliseconds
Post https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s: unexpected EOF
unable to create ConfigMap
k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient.CreateOrMutateConfigMap
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient/idempotency.go:65
```

This change makes the logic more resilient to unknown errors. It will
retry in the face of unknown errors until one of the expected outcomes
happens: either IsAlreadyExists, in which case we will mutate the
ConfigMap, or no error, in which case the ConfigMap has been created.

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

kubeadm: retry `kubeadm-config` ConfigMap creation or mutation if the apiserver is not responding. This will improve resiliency when joining new control plane nodes.

/priority important-longterm
/cc @kubernetes/sig-cluster-lifecycle-pr-reviews

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. kind/bug Categorizes issue or PR as related to a bug. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Nov 30, 2019
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ereslibre

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. area/kubeadm labels Nov 30, 2019
@ereslibre
Contributor Author

With several control plane nodes it was very easy to reproduce the EOF problem with kind. With this patch applied, I have been unable to reproduce it anymore.

@ereslibre
Contributor Author

/assign @neolit123 @rosti @fabriziopandini @yastij

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed release-note-none Denotes a PR that doesn't merit a release note. labels Nov 30, 2019
@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Nov 30, 2019
@k8s-ci-robot k8s-ci-robot added lgtm "Looks good to me", indicates that a PR is ready to be merged. and removed lgtm "Looks good to me", indicates that a PR is ready to be merged. labels Nov 30, 2019
@ereslibre
Contributor Author

@neolit123 sorry, I did a last-minute push to simplify a little more where we set lastError (no need for the extra if check that I had added)

Member

@neolit123 neolit123 left a comment


/lgtm

@k8s-ci-robot k8s-ci-robot added lgtm "Looks good to me", indicates that a PR is ready to be merged. and removed lgtm "Looks good to me", indicates that a PR is ready to be merged. labels Nov 30, 2019
@neolit123
Member

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 30, 2019
@ereslibre
Contributor Author

As an aside: I think this is a backport candidate. It's a very focused patch that makes control plane join much more stable. Can we justify a backport here?

CreateOrMutateConfigMap was not resilient when trying to Create
the ConfigMap. If this operation returned an unknown error, the whole
operation would fail, because the code was strict about which error it
expected right afterwards: if the error returned by the Create call
was an IsAlreadyExists error, it would work fine. However, if an
unexpected error (such as an EOF) happened, the call would fail.

We are seeing this error especially when running control plane node
joins in an automated fashion, where things happen at a relatively
high pace.

It was especially easy to reproduce with kind, with several control
plane instances. E.g.:

```
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1130 11:43:42.788952     887 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s  in 1013 milliseconds
Post https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s: unexpected EOF
unable to create ConfigMap
k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient.CreateOrMutateConfigMap
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient/idempotency.go:65
```

This change makes the logic more resilient to unknown errors. It will
retry in the face of unknown errors until one of the expected outcomes
happens: either `IsAlreadyExists`, in which case we will mutate the
ConfigMap, or no error, in which case the ConfigMap has been created.
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 30, 2019
@neolit123
Member

As an aside: I think this is a backport candidate. It's a very focused patch that makes control plane join much more stable. Can we justify a backport here?

backports are critical-urgent or important-soon (nowadays) in terms of priority label.

as much as it's worth having the backoff, I personally haven't seen reports about this particular flake.

my vote is 50/50 on backport.
worth asking @rosti and @fabriziopandini for their votes.

in my opinion we should have the refactor we are discussing here in 1.18:
#85763 (comment)

@neolit123
Member

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 30, 2019
@ereslibre
Contributor Author

as much as it's worth having the backoff, I personally haven't seen reports about this particular flake.

That's interesting. I could see it literally every time with kind when creating more than one control plane instance.

worth asking @rosti and @fabriziopandini for their votes.

Absolutely, 👍

in my opinion we should have the refactor we are discussing here in 1.18: #85763 (comment)

👍

@neolit123
Member

neolit123 commented Nov 30, 2019

That's interesting. I could see it literally every time with kind when creating more than one control plane instance.

is that in parallel? as in joining more than one CP in parallel.

@ereslibre
Contributor Author

/retest

@neolit123
Member

we no longer have HA clusters based on kind (only kinder), but kinder also uses serial join of CP nodes, and this flake has not been seen there.
https://k8s-testgrid.appspot.com/sig-cluster-lifecycle-kubeadm#kubeadm-kinder-master

i also haven't seen it in recent kind experiments.

@k8s-ci-robot k8s-ci-robot merged commit 1ca289d into kubernetes:master Nov 30, 2019
@k8s-ci-robot k8s-ci-robot added this to the v1.18 milestone Nov 30, 2019
@ereslibre
Contributor Author

ereslibre commented Nov 30, 2019

I realized I had kind with a local patch that lowered timeouts (for testing other things), which made the problem appear very frequently. I ran it just now (without this PR -- with kindest/node:v1.16.3), with 3 control plane instances and 1 worker:

Output

~ > kind create cluster -v10 --retain --config ~/.kind/3-masters-1-worker.yaml
Creating cluster "kind" ...
DEBUG: docker/images.go:58] Image: kindest/node:v1.16.3 present locally
 ✓ Ensuring node image (kindest/node:v1.16.3) 🖼
 ✓ Preparing nodes 📦 
 ✓ Configuring the external load balancer ⚖️ 
DEBUG: config/config.go:90] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.2:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.5
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.5
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.2:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.5
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBUG: config/config.go:90] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.2:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.3
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.2:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBUG: config/config.go:90] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.2:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.4
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.2:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBUG: config/config.go:90] Using kubeadm config:
apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.2:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs: null
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.6
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.6
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.6
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.2:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.6
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
 ✓ Writing configuration 📜 
DEBUG: kubeadminit/init.go:73] I1130 23:03:40.055687     144 initconfiguration.go:190] loading configuration from "/kind/kubeadm.conf"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration
I1130 23:03:40.074966     144 feature_gate.go:216] feature gates: &{map[]}
[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
I1130 23:03:40.076691     144 checks.go:578] validating Kubernetes and kubeadm version
I1130 23:03:40.076754     144 checks.go:167] validating if the firewall is enabled and active
I1130 23:03:40.090983     144 checks.go:202] validating availability of port 6443
I1130 23:03:40.091373     144 checks.go:202] validating availability of port 10251
I1130 23:03:40.091541     144 checks.go:202] validating availability of port 10252
I1130 23:03:40.091696     144 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1130 23:03:40.091941     144 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1130 23:03:40.092011     144 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1130 23:03:40.092038     144 checks.go:287] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1130 23:03:40.092065     144 checks.go:433] validating if the connectivity type is via proxy or direct
I1130 23:03:40.092109     144 checks.go:472] validating http connectivity to first IP address in the CIDR
I1130 23:03:40.092127     144 checks.go:472] validating http connectivity to first IP address in the CIDR
I1130 23:03:40.092137     144 checks.go:103] validating the container runtime
I1130 23:03:40.102389     144 checks.go:377] validating the presence of executable crictl
I1130 23:03:40.102600     144 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1130 23:03:40.102674     144 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1130 23:03:40.102719     144 checks.go:650] validating whether swap is enabled or not
	[WARNING Swap]: running with swap on is not supported. Please disable swap
I1130 23:03:40.102778     144 checks.go:377] validating the presence of executable ip
I1130 23:03:40.102826     144 checks.go:377] validating the presence of executable iptables
I1130 23:03:40.102876     144 checks.go:377] validating the presence of executable mount
I1130 23:03:40.102896     144 checks.go:377] validating the presence of executable nsenter
I1130 23:03:40.102944     144 checks.go:377] validating the presence of executable ebtables
I1130 23:03:40.102975     144 checks.go:377] validating the presence of executable ethtool
I1130 23:03:40.102998     144 checks.go:377] validating the presence of executable socat
I1130 23:03:40.103032     144 checks.go:377] validating the presence of executable tc
I1130 23:03:40.103060     144 checks.go:377] validating the presence of executable touch
I1130 23:03:40.103089     144 checks.go:521] running all checks
I1130 23:03:40.116427     144 checks.go:407] checking whether the given node name is reachable using net.LookupHost
I1130 23:03:40.116585     144 checks.go:619] validating kubelet version
I1130 23:03:40.157520     144 checks.go:129] validating if the service is enabled and active
I1130 23:03:40.164630     144 checks.go:202] validating availability of port 10250
I1130 23:03:40.164699     144 checks.go:202] validating availability of port 2379
I1130 23:03:40.164730     144 checks.go:202] validating availability of port 2380
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1130 23:03:40.164761     144 checks.go:250] validating the existence and emptiness of directory /var/lib/etcd
I1130 23:03:40.170505     144 checks.go:839] image exists: k8s.gcr.io/kube-apiserver:v1.16.3
I1130 23:03:40.175899     144 checks.go:839] image exists: k8s.gcr.io/kube-controller-manager:v1.16.3
I1130 23:03:40.181232     144 checks.go:839] image exists: k8s.gcr.io/kube-scheduler:v1.16.3
I1130 23:03:40.194790     144 checks.go:839] image exists: k8s.gcr.io/kube-proxy:v1.16.3
I1130 23:03:40.200391     144 checks.go:839] image exists: k8s.gcr.io/pause:3.1
I1130 23:03:40.205977     144 checks.go:839] image exists: k8s.gcr.io/etcd:3.3.15-0
I1130 23:03:40.211380     144 checks.go:839] image exists: k8s.gcr.io/coredns:1.6.2
I1130 23:03:40.211401     144 kubelet.go:61] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1130 23:03:40.225263     144 kubelet.go:79] Starting the kubelet
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1130 23:03:40.286259     144 certs.go:104] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.6 172.17.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1130 23:03:40.958111     144 certs.go:104] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
I1130 23:03:41.473106     144 certs.go:104] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane localhost] and IPs [172.17.0.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1130 23:03:42.364503     144 certs.go:70] creating a new public/private key files for signing service account users
[certs] Generating "sa" key and public key
I1130 23:03:42.553419     144 kubeconfig.go:79] creating kubeconfig file for admin.conf
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
I1130 23:03:42.703960     144 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1130 23:03:42.781631     144 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1130 23:03:42.889988     144 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1130 23:03:43.284859     144 manifests.go:91] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1130 23:03:43.290433     144 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I1130 23:03:43.290464     144 manifests.go:91] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1130 23:03:43.291736     144 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I1130 23:03:43.291755     144 manifests.go:91] [control-plane] getting StaticPodSpecs
I1130 23:03:43.292376     144 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I1130 23:03:43.292909     144 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1130 23:03:43.292926     144 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1130 23:03:43.293767     144 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf
I1130 23:03:43.295307     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1130 23:03:43.796943     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:44.298660     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:44.796385     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:03:45.296358     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:03:45.798058     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:46.298349     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 2 milliseconds
I1130 23:03:46.797305     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:47.297005     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:47.796304     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:03:48.297399     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:48.795917     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:03:49.297056     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:49.796370     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:03:50.296717     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:03:50.798256     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 2 milliseconds
I1130 23:03:51.300359     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:51.797583     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:52.298568     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 2 milliseconds
I1130 23:03:52.796404     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:03:53.297120     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:53.802695     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 6 milliseconds
I1130 23:03:54.297621     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:54.797241     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:55.297821     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:55.796066     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:03:56.298019     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 2 milliseconds
I1130 23:03:56.795954     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:03:57.295859     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:03:57.796770     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:58.297042     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:58.797577     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:59.296992     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:03:59.796381     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:00.297013     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:04:00.797013     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:04:01.300333     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 2 milliseconds
I1130 23:04:01.797361     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:04:02.297260     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:04:02.796204     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:03.297156     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:04:03.798338     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 2 milliseconds
I1130 23:04:04.296249     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:04.796025     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:05.295993     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:05.797935     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:06.298079     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 2 milliseconds
I1130 23:04:06.796944     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:04:07.297537     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:04:07.798459     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 2 milliseconds
I1130 23:04:08.296205     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:08.795946     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:09.296565     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:09.796202     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:10.295977     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:10.795928     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:11.295934     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:11.796422     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:12.296170     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:12.795896     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:13.297308     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:04:13.796070     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:14.297252     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:04:14.797418     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 1 milliseconds
I1130 23:04:15.296101     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s  in 0 milliseconds
I1130 23:04:15.830548     144 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=32s 200 OK in 34 milliseconds
[apiclient] All control plane components are healthy after 32.536311 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1130 23:04:15.830975     144 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I1130 23:04:15.853434     144 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 17 milliseconds
I1130 23:04:15.865683     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 10 milliseconds
I1130 23:04:15.877425     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 9 milliseconds
I1130 23:04:15.879346     144 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
I1130 23:04:15.890937     144 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 8 milliseconds
I1130 23:04:15.900863     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 9 milliseconds
I1130 23:04:15.910124     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 8 milliseconds
I1130 23:04:15.910526     144 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
I1130 23:04:15.910571     144 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kind-control-plane" as an annotation
I1130 23:04:16.412339     144 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 404 Not Found in 1 milliseconds
I1130 23:04:16.916529     144 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 404 Not Found in 5 milliseconds
I1130 23:04:17.419417     144 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 8 milliseconds
I1130 23:04:17.460756     144 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 28 milliseconds
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kind-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
I1130 23:04:17.970915     144 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 6 milliseconds
I1130 23:04:17.986541     144 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-control-plane 200 OK in 12 milliseconds
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1130 23:04:17.994774     144 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef 404 Not Found in 7 milliseconds
I1130 23:04:18.008053     144 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/secrets 201 Created in 12 milliseconds
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1130 23:04:18.022191     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 11 milliseconds
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1130 23:04:18.037505     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 12 milliseconds
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1130 23:04:18.049650     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 10 milliseconds
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1130 23:04:18.050315     144 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig
I1130 23:04:18.053280     144 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf
I1130 23:04:18.053349     144 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I1130 23:04:18.055159     144 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I1130 23:04:18.064812     144 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps 201 Created in 9 milliseconds
I1130 23:04:18.065282     144 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I1130 23:04:18.073715     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles 201 Created in 8 milliseconds
I1130 23:04:18.083292     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings 201 Created in 9 milliseconds
I1130 23:04:18.090552     144 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-dns 404 Not Found in 6 milliseconds
I1130 23:04:18.097782     144 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/coredns 404 Not Found in 5 milliseconds
I1130 23:04:18.106040     144 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 7 milliseconds
I1130 23:04:18.121137     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterroles 201 Created in 11 milliseconds
I1130 23:04:18.131127     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 8 milliseconds
I1130 23:04:18.146485     144 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 12 milliseconds
I1130 23:04:18.189067     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/apps/v1/namespaces/kube-system/deployments 201 Created in 15 milliseconds
I1130 23:04:18.199072     144 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/services 201 Created in 7 milliseconds
[addons] Applied essential addon: CoreDNS
I1130 23:04:18.203113     144 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/serviceaccounts 201 Created in 3 milliseconds
I1130 23:04:18.364739     144 request.go:538] Throttling request took 158.891702ms, request: POST:https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps
I1130 23:04:18.381586     144 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 16 milliseconds
I1130 23:04:18.445156     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/apps/v1/namespaces/kube-system/daemonsets 201 Created in 26 milliseconds
I1130 23:04:18.454121     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings 201 Created in 8 milliseconds
I1130 23:04:18.456984     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 2 milliseconds
I1130 23:04:18.463125     144 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings 201 Created in 5 milliseconds
[addons] Applied essential addon: kube-proxy
I1130 23:04:18.464031     144 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf
I1130 23:04:18.465001     144 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities 
and service account keys on each node and then running the following as root:

  kubeadm join 172.17.0.2:6443 --token <value withheld> \
    --discovery-token-ca-cert-hash sha256:c72b270a9d7f8f98cb43db678dd5e0752e035adebce7a1b77e8d9e85114f42b8 \
    --control-plane 	  

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.0.2:6443 --token <value withheld> \
    --discovery-token-ca-cert-hash sha256:c72b270a9d7f8f98cb43db678dd5e0752e035adebce7a1b77e8d9e85114f42b8 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
DEBUG: kubeadmjoin/join.go:133] I1130 23:04:29.504992     568 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName
I1130 23:04:29.505016     568 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
[preflight] Running pre-flight checks
I1130 23:04:29.506162     568 preflight.go:90] [preflight] Running general checks
I1130 23:04:29.506215     568 checks.go:250] validating the existence and emptiness of directory /etc/kubernetes/manifests
I1130 23:04:29.506273     568 checks.go:287] validating the existence of file /etc/kubernetes/kubelet.conf
I1130 23:04:29.506291     568 checks.go:287] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1130 23:04:29.506303     568 checks.go:103] validating the container runtime
I1130 23:04:29.512952     568 checks.go:377] validating the presence of executable crictl
I1130 23:04:29.512988     568 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1130 23:04:29.513045     568 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1130 23:04:29.513078     568 checks.go:650] validating whether swap is enabled or not
	[WARNING Swap]: running with swap on is not supported. Please disable swap
I1130 23:04:29.513144     568 checks.go:377] validating the presence of executable ip
I1130 23:04:29.513190     568 checks.go:377] validating the presence of executable iptables
I1130 23:04:29.513234     568 checks.go:377] validating the presence of executable mount
I1130 23:04:29.513254     568 checks.go:377] validating the presence of executable nsenter
I1130 23:04:29.513283     568 checks.go:377] validating the presence of executable ebtables
I1130 23:04:29.513314     568 checks.go:377] validating the presence of executable ethtool
I1130 23:04:29.513337     568 checks.go:377] validating the presence of executable socat
I1130 23:04:29.513376     568 checks.go:377] validating the presence of executable tc
I1130 23:04:29.513391     568 checks.go:377] validating the presence of executable touch
I1130 23:04:29.513428     568 checks.go:521] running all checks
I1130 23:04:29.526525     568 checks.go:407] checking whether the given node name is reachable using net.LookupHost
I1130 23:04:29.526761     568 checks.go:619] validating kubelet version
I1130 23:04:29.566773     568 checks.go:129] validating if the service is enabled and active
I1130 23:04:29.574122     568 checks.go:202] validating availability of port 10250
I1130 23:04:29.574374     568 checks.go:433] validating if the connectivity type is via proxy or direct
I1130 23:04:29.574407     568 join.go:433] [preflight] Discovering cluster-info
I1130 23:04:29.574450     568 token.go:199] [discovery] Trying to connect to API Server "172.17.0.2:6443"
I1130 23:04:29.574927     568 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
I1130 23:04:29.580425     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 5 milliseconds
I1130 23:04:29.581042     568 token.go:202] [discovery] Failed to connect to API Server "172.17.0.2:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1130 23:04:34.581356     568 token.go:199] [discovery] Trying to connect to API Server "172.17.0.2:6443"
I1130 23:04:34.584580     568 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
I1130 23:04:34.633289     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 48 milliseconds
I1130 23:04:34.642822     568 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.2:6443"
I1130 23:04:34.643362     568 token.go:205] [discovery] Successfully established connection with API Server "172.17.0.2:6443"
I1130 23:04:34.643432     568 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I1130 23:04:34.643452     568 join.go:447] [preflight] Fetching init configuration
I1130 23:04:34.643459     568 join.go:485] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I1130 23:04:34.658472     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 13 milliseconds
I1130 23:04:34.668775     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 8 milliseconds
I1130 23:04:34.673827     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.16 200 OK in 3 milliseconds
I1130 23:04:34.675561     568 interface.go:384] Looking for default routes with IPv4 addresses
I1130 23:04:34.675579     568 interface.go:389] Default route transits interface "eth0"
I1130 23:04:34.675716     568 interface.go:196] Interface eth0 is up
I1130 23:04:34.675770     568 interface.go:244] Interface "eth0" has 1 addresses :[172.17.0.4/16].
I1130 23:04:34.675828     568 interface.go:211] Checking addr  172.17.0.4/16.
I1130 23:04:34.675850     568 interface.go:218] IP found 172.17.0.4
I1130 23:04:34.675863     568 interface.go:250] Found valid IPv4 address 172.17.0.4 for interface "eth0".
I1130 23:04:34.675870     568 interface.go:395] Found active IP 172.17.0.4 
I1130 23:04:34.675939     568 preflight.go:101] [preflight] Running configuration dependant checks
I1130 23:04:34.676858     568 checks.go:578] validating Kubernetes and kubeadm version
[preflight] Running pre-flight checks before initializing the new control plane instance
I1130 23:04:34.676882     568 checks.go:167] validating if the firewall is enabled and active
I1130 23:04:34.687153     568 checks.go:202] validating availability of port 6443
I1130 23:04:34.687378     568 checks.go:202] validating availability of port 10251
I1130 23:04:34.687480     568 checks.go:202] validating availability of port 10252
I1130 23:04:34.687540     568 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1130 23:04:34.687568     568 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1130 23:04:34.687581     568 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1130 23:04:34.687594     568 checks.go:287] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1130 23:04:34.687610     568 checks.go:433] validating if the connectivity type is via proxy or direct
I1130 23:04:34.687634     568 checks.go:472] validating http connectivity to first IP address in the CIDR
I1130 23:04:34.687651     568 checks.go:472] validating http connectivity to first IP address in the CIDR
I1130 23:04:34.687662     568 checks.go:202] validating availability of port 2379
I1130 23:04:34.687694     568 checks.go:202] validating availability of port 2380
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1130 23:04:34.687722     568 checks.go:250] validating the existence and emptiness of directory /var/lib/etcd
I1130 23:04:34.694794     568 checks.go:839] image exists: k8s.gcr.io/kube-apiserver:v1.16.3
I1130 23:04:34.702005     568 checks.go:839] image exists: k8s.gcr.io/kube-controller-manager:v1.16.3
I1130 23:04:34.708103     568 checks.go:839] image exists: k8s.gcr.io/kube-scheduler:v1.16.3
I1130 23:04:34.713917     568 checks.go:839] image exists: k8s.gcr.io/kube-proxy:v1.16.3
I1130 23:04:34.720958     568 checks.go:839] image exists: k8s.gcr.io/pause:3.1
I1130 23:04:34.727141     568 checks.go:839] image exists: k8s.gcr.io/etcd:3.3.15-0
I1130 23:04:34.734168     568 checks.go:839] image exists: k8s.gcr.io/coredns:1.6.2
I1130 23:04:34.734206     568 controlplaneprepare.go:211] [download-certs] Skipping certs download
I1130 23:04:34.734219     568 certs.go:39] creating PKI assets
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.4 172.17.0.2 127.0.0.1]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane2 localhost] and IPs [172.17.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane2 localhost] and IPs [172.17.0.4 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
I1130 23:04:36.016242     568 certs.go:70] creating a new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1130 23:04:36.201597     568 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1130 23:04:36.556568     568 manifests.go:91] [control-plane] getting StaticPodSpecs
I1130 23:04:36.562536     568 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I1130 23:04:36.562553     568 manifests.go:91] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1130 23:04:36.563368     568 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I1130 23:04:36.563380     568 manifests.go:91] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1130 23:04:36.563881     568 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I1130 23:04:36.564291     568 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf
I1130 23:04:36.564841     568 local.go:75] [etcd] Checking etcd cluster health
I1130 23:04:36.564850     568 local.go:78] creating etcd client that connects to etcd pods
I1130 23:04:36.571487     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 6 milliseconds
I1130 23:04:36.571795     568 etcd.go:107] etcd endpoints read from pods: https://172.17.0.6:2379
I1130 23:04:36.578438     568 etcd.go:156] etcd endpoints read from etcd: https://172.17.0.6:2379
I1130 23:04:36.578461     568 etcd.go:125] update etcd endpoints: https://172.17.0.6:2379
I1130 23:04:36.591228     568 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1130 23:04:36.592024     568 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf
I1130 23:04:36.592373     568 kubelet.go:133] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
I1130 23:04:36.604017     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.16 200 OK in 4 milliseconds
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1130 23:04:36.609924     568 kubelet.go:150] [kubelet-start] Starting the kubelet
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I1130 23:04:37.179567     568 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1130 23:04:37.679502     568 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1130 23:04:37.692922     568 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1130 23:04:37.694493     568 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node
I1130 23:04:37.694505     568 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kind-control-plane2" as an annotation
I1130 23:04:38.377725     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 182 milliseconds
I1130 23:04:38.697927     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 3 milliseconds
I1130 23:04:39.206730     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 11 milliseconds
I1130 23:04:39.716381     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 21 milliseconds
I1130 23:04:40.199428     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 4 milliseconds
I1130 23:04:40.696712     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:41.208157     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 13 milliseconds
I1130 23:04:41.703884     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 8 milliseconds
I1130 23:04:42.209452     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 13 milliseconds
I1130 23:04:42.700897     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:43.201562     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 6 milliseconds
I1130 23:04:43.700768     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:44.200853     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:44.701603     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 6 milliseconds
I1130 23:04:45.196259     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:45.700552     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:46.200574     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:46.700358     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:47.210062     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 14 milliseconds
I1130 23:04:47.696213     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:48.201679     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 6 milliseconds
I1130 23:04:48.696496     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:49.196617     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:49.708763     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 2 milliseconds
I1130 23:04:50.210344     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 9 milliseconds
I1130 23:04:50.720310     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 25 milliseconds
I1130 23:04:51.198949     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 4 milliseconds
I1130 23:04:51.706681     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 11 milliseconds
I1130 23:04:52.200382     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:52.696123     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:53.203518     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 8 milliseconds
I1130 23:04:53.724561     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 29 milliseconds
I1130 23:04:54.202205     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 7 milliseconds
I1130 23:04:54.702831     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 7 milliseconds
I1130 23:04:55.196753     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:55.702343     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 7 milliseconds
I1130 23:04:56.201978     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 6 milliseconds
I1130 23:04:56.701358     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 6 milliseconds
I1130 23:04:57.203081     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 8 milliseconds
I1130 23:04:57.739796     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 200 OK in 45 milliseconds
I1130 23:04:57.769722     568 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 200 OK in 26 milliseconds
I1130 23:04:57.770519     568 local.go:127] creating etcd client that connects to etcd pods
I1130 23:04:57.779554     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 8 milliseconds
I1130 23:04:57.779768     568 etcd.go:107] etcd endpoints read from pods: https://172.17.0.6:2379
I1130 23:04:57.786586     568 etcd.go:156] etcd endpoints read from etcd: https://172.17.0.6:2379
I1130 23:04:57.786609     568 etcd.go:125] update etcd endpoints: https://172.17.0.6:2379
I1130 23:04:57.786617     568 local.go:136] Adding etcd member: https://172.17.0.4:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
I1130 23:04:57.796823     568 local.go:142] Updated etcd member list: [{kind-control-plane2 https://172.17.0.4:2380} {kind-control-plane https://172.17.0.6:2380}]
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
 ✗ Joining more control-plane nodes 🎮
ERROR: failed to create cluster: failed to join node with kubeadm: command "docker exec --privileged kind-control-plane2 kubeadm join --config /kind/kubeadm.conf --ignore-preflight-errors=all --v=6" failed with error: exit status 1

Output:
I1130 23:04:29.504992     568 join.go:363] [preflight] found NodeName empty; using OS hostname as NodeName
I1130 23:04:29.505016     568 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
[preflight] Running pre-flight checks
I1130 23:04:29.506162     568 preflight.go:90] [preflight] Running general checks
I1130 23:04:29.506215     568 checks.go:250] validating the existence and emptiness of directory /etc/kubernetes/manifests
I1130 23:04:29.506273     568 checks.go:287] validating the existence of file /etc/kubernetes/kubelet.conf
I1130 23:04:29.506291     568 checks.go:287] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1130 23:04:29.506303     568 checks.go:103] validating the container runtime
I1130 23:04:29.512952     568 checks.go:377] validating the presence of executable crictl
I1130 23:04:29.512988     568 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1130 23:04:29.513045     568 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1130 23:04:29.513078     568 checks.go:650] validating whether swap is enabled or not
	[WARNING Swap]: running with swap on is not supported. Please disable swap
I1130 23:04:29.513144     568 checks.go:377] validating the presence of executable ip
I1130 23:04:29.513190     568 checks.go:377] validating the presence of executable iptables
I1130 23:04:29.513234     568 checks.go:377] validating the presence of executable mount
I1130 23:04:29.513254     568 checks.go:377] validating the presence of executable nsenter
I1130 23:04:29.513283     568 checks.go:377] validating the presence of executable ebtables
I1130 23:04:29.513314     568 checks.go:377] validating the presence of executable ethtool
I1130 23:04:29.513337     568 checks.go:377] validating the presence of executable socat
I1130 23:04:29.513376     568 checks.go:377] validating the presence of executable tc
I1130 23:04:29.513391     568 checks.go:377] validating the presence of executable touch
I1130 23:04:29.513428     568 checks.go:521] running all checks
I1130 23:04:29.526525     568 checks.go:407] checking whether the given node name is reachable using net.LookupHost
I1130 23:04:29.526761     568 checks.go:619] validating kubelet version
I1130 23:04:29.566773     568 checks.go:129] validating if the service is enabled and active
I1130 23:04:29.574122     568 checks.go:202] validating availability of port 10250
I1130 23:04:29.574374     568 checks.go:433] validating if the connectivity type is via proxy or direct
I1130 23:04:29.574407     568 join.go:433] [preflight] Discovering cluster-info
I1130 23:04:29.574450     568 token.go:199] [discovery] Trying to connect to API Server "172.17.0.2:6443"
I1130 23:04:29.574927     568 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
I1130 23:04:29.580425     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 5 milliseconds
I1130 23:04:29.581042     568 token.go:202] [discovery] Failed to connect to API Server "172.17.0.2:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I1130 23:04:34.581356     568 token.go:199] [discovery] Trying to connect to API Server "172.17.0.2:6443"
I1130 23:04:34.584580     568 token.go:74] [discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
I1130 23:04:34.633289     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 48 milliseconds
I1130 23:04:34.642822     568 token.go:109] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.17.0.2:6443"
I1130 23:04:34.643362     568 token.go:205] [discovery] Successfully established connection with API Server "172.17.0.2:6443"
I1130 23:04:34.643432     568 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I1130 23:04:34.643452     568 join.go:447] [preflight] Fetching init configuration
I1130 23:04:34.643459     568 join.go:485] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I1130 23:04:34.658472     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 13 milliseconds
I1130 23:04:34.668775     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 8 milliseconds
I1130 23:04:34.673827     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.16 200 OK in 3 milliseconds
I1130 23:04:34.675561     568 interface.go:384] Looking for default routes with IPv4 addresses
I1130 23:04:34.675579     568 interface.go:389] Default route transits interface "eth0"
I1130 23:04:34.675716     568 interface.go:196] Interface eth0 is up
I1130 23:04:34.675770     568 interface.go:244] Interface "eth0" has 1 addresses :[172.17.0.4/16].
I1130 23:04:34.675828     568 interface.go:211] Checking addr  172.17.0.4/16.
I1130 23:04:34.675850     568 interface.go:218] IP found 172.17.0.4
I1130 23:04:34.675863     568 interface.go:250] Found valid IPv4 address 172.17.0.4 for interface "eth0".
I1130 23:04:34.675870     568 interface.go:395] Found active IP 172.17.0.4 
I1130 23:04:34.675939     568 preflight.go:101] [preflight] Running configuration dependant checks
I1130 23:04:34.676858     568 checks.go:578] validating Kubernetes and kubeadm version
[preflight] Running pre-flight checks before initializing the new control plane instance
I1130 23:04:34.676882     568 checks.go:167] validating if the firewall is enabled and active
I1130 23:04:34.687153     568 checks.go:202] validating availability of port 6443
I1130 23:04:34.687378     568 checks.go:202] validating availability of port 10251
I1130 23:04:34.687480     568 checks.go:202] validating availability of port 10252
I1130 23:04:34.687540     568 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1130 23:04:34.687568     568 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1130 23:04:34.687581     568 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1130 23:04:34.687594     568 checks.go:287] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1130 23:04:34.687610     568 checks.go:433] validating if the connectivity type is via proxy or direct
I1130 23:04:34.687634     568 checks.go:472] validating http connectivity to first IP address in the CIDR
I1130 23:04:34.687651     568 checks.go:472] validating http connectivity to first IP address in the CIDR
I1130 23:04:34.687662     568 checks.go:202] validating availability of port 2379
I1130 23:04:34.687694     568 checks.go:202] validating availability of port 2380
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1130 23:04:34.687722     568 checks.go:250] validating the existence and emptiness of directory /var/lib/etcd
I1130 23:04:34.694794     568 checks.go:839] image exists: k8s.gcr.io/kube-apiserver:v1.16.3
I1130 23:04:34.702005     568 checks.go:839] image exists: k8s.gcr.io/kube-controller-manager:v1.16.3
I1130 23:04:34.708103     568 checks.go:839] image exists: k8s.gcr.io/kube-scheduler:v1.16.3
I1130 23:04:34.713917     568 checks.go:839] image exists: k8s.gcr.io/kube-proxy:v1.16.3
I1130 23:04:34.720958     568 checks.go:839] image exists: k8s.gcr.io/pause:3.1
I1130 23:04:34.727141     568 checks.go:839] image exists: k8s.gcr.io/etcd:3.3.15-0
I1130 23:04:34.734168     568 checks.go:839] image exists: k8s.gcr.io/coredns:1.6.2
I1130 23:04:34.734206     568 controlplaneprepare.go:211] [download-certs] Skipping certs download
I1130 23:04:34.734219     568 certs.go:39] creating PKI assets
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kind-control-plane2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.17.0.4 172.17.0.2 127.0.0.1]
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kind-control-plane2 localhost] and IPs [172.17.0.4 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kind-control-plane2 localhost] and IPs [172.17.0.4 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
I1130 23:04:36.016242     568 certs.go:70] creating a new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1130 23:04:36.201597     568 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1130 23:04:36.556568     568 manifests.go:91] [control-plane] getting StaticPodSpecs
I1130 23:04:36.562536     568 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I1130 23:04:36.562553     568 manifests.go:91] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1130 23:04:36.563368     568 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I1130 23:04:36.563380     568 manifests.go:91] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1130 23:04:36.563881     568 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I1130 23:04:36.564291     568 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf
I1130 23:04:36.564841     568 local.go:75] [etcd] Checking etcd cluster health
I1130 23:04:36.564850     568 local.go:78] creating etcd client that connects to etcd pods
I1130 23:04:36.571487     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 6 milliseconds
I1130 23:04:36.571795     568 etcd.go:107] etcd endpoints read from pods: https://172.17.0.6:2379
I1130 23:04:36.578438     568 etcd.go:156] etcd endpoints read from etcd: https://172.17.0.6:2379
I1130 23:04:36.578461     568 etcd.go:125] update etcd endpoints: https://172.17.0.6:2379
I1130 23:04:36.591228     568 kubelet.go:107] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1130 23:04:36.592024     568 loader.go:375] Config loaded from file:  /etc/kubernetes/bootstrap-kubelet.conf
I1130 23:04:36.592373     568 kubelet.go:133] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
I1130 23:04:36.604017     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.16 200 OK in 4 milliseconds
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1130 23:04:36.609924     568 kubelet.go:150] [kubelet-start] Starting the kubelet
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I1130 23:04:37.179567     568 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1130 23:04:37.679502     568 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1130 23:04:37.692922     568 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1130 23:04:37.694493     568 kubelet.go:168] [kubelet-start] preserving the crisocket information for the node
I1130 23:04:37.694505     568 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kind-control-plane2" as an annotation
I1130 23:04:38.377725     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 182 milliseconds
I1130 23:04:38.697927     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 3 milliseconds
I1130 23:04:39.206730     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 11 milliseconds
I1130 23:04:39.716381     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 21 milliseconds
I1130 23:04:40.199428     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 4 milliseconds
I1130 23:04:40.696712     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:41.208157     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 13 milliseconds
I1130 23:04:41.703884     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 8 milliseconds
I1130 23:04:42.209452     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 13 milliseconds
I1130 23:04:42.700897     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:43.201562     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 6 milliseconds
I1130 23:04:43.700768     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:44.200853     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:44.701603     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 6 milliseconds
I1130 23:04:45.196259     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:45.700552     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:46.200574     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:46.700358     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:47.210062     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 14 milliseconds
I1130 23:04:47.696213     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:48.201679     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 6 milliseconds
I1130 23:04:48.696496     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:49.196617     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:49.708763     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 2 milliseconds
I1130 23:04:50.210344     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 9 milliseconds
I1130 23:04:50.720310     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 25 milliseconds
I1130 23:04:51.198949     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 4 milliseconds
I1130 23:04:51.706681     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 11 milliseconds
I1130 23:04:52.200382     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 5 milliseconds
I1130 23:04:52.696123     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:53.203518     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 8 milliseconds
I1130 23:04:53.724561     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 29 milliseconds
I1130 23:04:54.202205     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 7 milliseconds
I1130 23:04:54.702831     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 7 milliseconds
I1130 23:04:55.196753     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 1 milliseconds
I1130 23:04:55.702343     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 7 milliseconds
I1130 23:04:56.201978     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 6 milliseconds
I1130 23:04:56.701358     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 6 milliseconds
I1130 23:04:57.203081     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 404 Not Found in 8 milliseconds
I1130 23:04:57.739796     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 200 OK in 45 milliseconds
I1130 23:04:57.769722     568 round_trippers.go:443] PATCH https://172.17.0.2:6443/api/v1/nodes/kind-control-plane2 200 OK in 26 milliseconds
I1130 23:04:57.770519     568 local.go:127] creating etcd client that connects to etcd pods
I1130 23:04:57.779554     568 round_trippers.go:443] GET https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 8 milliseconds
I1130 23:04:57.779768     568 etcd.go:107] etcd endpoints read from pods: https://172.17.0.6:2379
I1130 23:04:57.786586     568 etcd.go:156] etcd endpoints read from etcd: https://172.17.0.6:2379
I1130 23:04:57.786609     568 etcd.go:125] update etcd endpoints: https://172.17.0.6:2379
I1130 23:04:57.786617     568 local.go:136] Adding etcd member: https://172.17.0.4:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
I1130 23:04:57.796823     568 local.go:142] Updated etcd member list: [{kind-control-plane2 https://172.17.0.4:2380} {kind-control-plane https://172.17.0.6:2380}]
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
I1130 23:04:57.797526     568 etcd.go:372] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.6:2379 https://172.17.0.4:2379]) are available 1/8
{"level":"warn","ts":"2019-11-30T23:05:06.833Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.17.0.4:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
I1130 23:05:06.833108     568 etcd.go:377] [etcd] Attempt timed out
I1130 23:05:06.833115     568 etcd.go:369] [etcd] Waiting 5s until next retry
I1130 23:05:11.833248     568 etcd.go:372] [etcd] attempting to see if all cluster endpoints ([https://172.17.0.6:2379 https://172.17.0.4:2379]) are available 2/8
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1130 23:05:11.857552     568 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps  in 0 milliseconds
Post https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps: EOF
unable to create ConfigMap
k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient.CreateOrMutateConfigMap
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient/idempotency.go:65
k8s.io/kubernetes/cmd/kubeadm/app/phases/uploadconfig.UploadConfiguration
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/uploadconfig/uploadconfig.go:93
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runUpdateStatusPhase
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join/controlplanejoin.go:170
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:236
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:424
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:209
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdJoin.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:169
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:830
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:200
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1337
error uploading configuration
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runUpdateStatusPhase
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join/controlplanejoin.go:171
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:236
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:424
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:209
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdJoin.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:169
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:830
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:200
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1337
error execution phase control-plane-join/update-status
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:237
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:424
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:209
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdJoin.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/join.go:169
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:830
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:200
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1337

Stack Trace: 
sigs.k8s.io/kind/pkg/errors.WithStack
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/errors/errors.go:51
sigs.k8s.io/kind/pkg/exec.(*LocalCmd).Run
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/exec/local.go:116
sigs.k8s.io/kind/pkg/internal/cluster/providers/docker.(*nodeCmd).Run
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/internal/cluster/providers/docker/node.go:130
sigs.k8s.io/kind/pkg/exec.CombinedOutputLines
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/exec/helpers.go:67
sigs.k8s.io/kind/pkg/internal/cluster/create/actions/kubeadmjoin.runKubeadmJoin
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/internal/cluster/create/actions/kubeadmjoin/join.go:132
sigs.k8s.io/kind/pkg/internal/cluster/create/actions/kubeadmjoin.joinSecondaryControlPlanes
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/internal/cluster/create/actions/kubeadmjoin/join.go:86
sigs.k8s.io/kind/pkg/internal/cluster/create/actions/kubeadmjoin.(*Action).Execute
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/internal/cluster/create/actions/kubeadmjoin/join.go:56
sigs.k8s.io/kind/pkg/internal/cluster/create.Cluster
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/internal/cluster/create/create.go:136
sigs.k8s.io/kind/pkg/cluster.(*Provider).Create
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/cluster/provider.go:100
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.runE
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:86
sigs.k8s.io/kind/pkg/cmd/kind/create/cluster.NewCommand.func1
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/pkg/cmd/kind/create/cluster/createcluster.go:52
github.com/spf13/cobra.(*Command).execute
	/home/ereslibre/projects/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
	/home/ereslibre/projects/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914
github.com/spf13/cobra.(*Command).Execute
	/home/ereslibre/projects/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
sigs.k8s.io/kind/cmd/kind/app.Run
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/cmd/kind/app/main.go:53
sigs.k8s.io/kind/cmd/kind/app.Main
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/cmd/kind/app/main.go:35
main.main
	/home/ereslibre/projects/go/src/sigs.k8s.io/kind/cmd/kind/main.go:24
runtime.main
	/usr/lib/go/src/runtime/proc.go:203
runtime.goexit
	/usr/lib/go/src/runtime/asm_amd64.s:1357

Also, on kinder we are doing https://github.com/kubernetes/kubeadm/blob/f4019dfc8ffd9bac687757231422e649acf3e96b/kinder/pkg/cluster/manager/actions/kubeadm-join.go#L102 after every new control plane join. In addition, in kinder the load balancer is reconfigured after every join, instead of being preconfigured from the beginning as in kind. This might have an impact on the reproducibility of the issue.

Seen in the wild as well: kubernetes-sigs/kind#1020 -- it was supposed to be a full hard disk, but not really conclusive IMO.

@ereslibre ereslibre deleted the create-or-mutate-configmap-resiliency branch November 30, 2019 23:07
@ereslibre
Copy link
Contributor Author

After seeing that this rarely happens in the wild, I don't think we should backport this one unless we get more reports.

@rosti
Copy link
Contributor

rosti commented Dec 2, 2019

My stance on backports is that we need to do it if it's a bug and it's P0. We may consider doing it if it's a bug + P1. But P2 is definitely not high enough.

@fabriziopandini
Copy link
Member

It seems the consensus is for not backporting now.
BTW, I think that we need to arrange a code walkthrough and add retries to everything that accesses the api-server/etcd during join
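The retry-everything idea above can be sketched with a small stdlib-only helper. This is not the actual `idempotency.go` code; the `createOrMutate` function, `errTransient`, and the backoff values are hypothetical, but the shape matches the fix in this PR: any error from the Create call (not just IsAlreadyExists) falls through to a retried Mutate, so a transient failure such as an unexpected EOF no longer aborts the join.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errTransient is a stand-in for a transient network error (e.g. an
// unexpected EOF from the api-server behind a load balancer).
var errTransient = errors.New("unexpected EOF")

// createOrMutate mirrors the shape of kubeadm's CreateOrMutateConfigMap
// after this PR: if Create fails for any reason, fall through to a
// retried Mutate instead of bailing out on unknown errors.
func createOrMutate(create, mutate func() error, attempts int, backoff time.Duration) error {
	if err := create(); err == nil {
		return nil
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = mutate(); lastErr == nil {
			return nil
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("unable to create or mutate ConfigMap: %w", lastErr)
}

func main() {
	// Simulate the reported failure: Create dies with an unexpected EOF,
	// the first Mutate also fails transiently, the second one succeeds.
	calls := 0
	err := createOrMutate(
		func() error { return errTransient },
		func() error {
			calls++
			if calls < 2 {
				return errTransient
			}
			return nil
		},
		5, time.Millisecond,
	)
	fmt.Println(err == nil, calls) // true 2
}
```

In the real code the create/mutate closures would wrap client-go calls against the ConfigMap; the point is only that the error handling is tolerant of failure modes other than IsAlreadyExists.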

@neolit123
Copy link
Member

neolit123 commented Dec 6, 2019

BTW, I think that we need to arrange a code walkthrough and add retries to everything that accesses the api-server/etcd during join

we briefly mentioned this here:
#85763 (comment)

tracking issue: kubernetes/kubeadm#1606

@mbert
Copy link
Contributor

mbert commented Mar 20, 2020

I have been experiencing something that looks related to this here: in some reproducible scenarios when setting up a multi control plane cluster, the first kubeadm join --control-plane will always fail, while subsequent master nodes can join just fine. Is there any workaround available? Or do I have to wait for 1.18.0? It kind of blocks me in my work ATM...

mbert added a commit to mbert/kubeadm2ha that referenced this pull request Mar 20, 2020
…ill now run as static pods on the master nodes.

Note that with some clusters using the nginx/keepalived setup (regardless of this commit), joining the first secondary master node currently always fails while all subsequent masters can be joined just fine. A fix for this is expected with Kubernetes 1.18.0, see also kubernetes/kubernetes#85763
@neolit123
Copy link
Member

neolit123 commented Mar 20, 2020

I have been experiencing something that looks related to this here: in some reproducible scenarios when setting up a multi control plane cluster, the first kubeadm join --control-plane will always fail, while subsequent master nodes can join just fine. Is there any workaround available? Or do I have to wait for 1.18.0? It kind of blocks me in my work ATM...

first join should not always fail pre-1.18. it is possible for the api-server (from the init node) to not be ready for a while, and an aggressive subsequent join would then fail; adding some timeout before the join, or making sure the primary node is Ready, should resolve that.

recently in Cluster API we saw an issue where the second CP join failed with a ConfigMap update error, but it was caused by a load balancer blackout when a new member was added to the LB config. so waiting for a while, or for the right time to join, makes sense in higher level infrastructure.

for kubeadm we are adding retries where we can and when we can. the timeline is not clear.

Labels
approved, area/kubeadm, cncf-cla: yes, kind/bug, lgtm, priority/important-longterm, release-note, sig/cluster-lifecycle, size/M