kubeadm join failed: unable to fetch the kubeadm-config ConfigMap #1596

@omegazeng

Description

Is this a request for help?

Yes.

What keywords did you search in kubeadm issues before filing this one?

kubeadm join
unable to fetch the kubeadm-config ConfigMap
controlPlaneEndpoint

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):

kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:20:34Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    VMware VMs: 32 vCPUs, 32 GB memory
  • OS (e.g. from /etc/os-release):
cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • Kernel (e.g. uname -a):
uname -a
Linux master-172-16-10-138 3.10.0-957.12.2.el7.x86_64 #1 SMP Tue May 14 21:24:32 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Others:

What happened?

Create a join token on the master node:

kubeadm token create --print-join-command
kubeadm join k8s-master.HIDE.xyz:6443 --token xxx.yyyyyy     --discovery-token-ca-cert-hash sha256:zzzzzzzzzzzzzzzzzzzzzz

Add a new worker node:

kubeadm join k8s-master.HIDE.xyz:6443 --token xxx.yyyyyy   -v 3  --discovery-token-ca-cert-hash sha256:zzzzzzzzzzzzzzzzzzzzzz
I0605 10:27:54.668924   24149 join.go:367] [preflight] found NodeName empty; using OS hostname as NodeName
I0605 10:27:54.669026   24149 initconfiguration.go:105] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I0605 10:27:54.669137   24149 preflight.go:90] [preflight] Running general checks
I0605 10:27:54.669201   24149 checks.go:254] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0605 10:27:54.669248   24149 checks.go:292] validating the existence of file /etc/kubernetes/kubelet.conf
I0605 10:27:54.669257   24149 checks.go:292] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0605 10:27:54.669268   24149 checks.go:105] validating the container runtime
I0605 10:27:54.745250   24149 checks.go:131] validating if the service is enabled and active
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0605 10:27:54.839995   24149 checks.go:341] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0605 10:27:54.840049   24149 checks.go:341] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0605 10:27:54.840077   24149 checks.go:653] validating whether swap is enabled or not
I0605 10:27:54.840105   24149 checks.go:382] validating the presence of executable ip
I0605 10:27:54.840129   24149 checks.go:382] validating the presence of executable iptables
I0605 10:27:54.840156   24149 checks.go:382] validating the presence of executable mount
I0605 10:27:54.840171   24149 checks.go:382] validating the presence of executable nsenter
I0605 10:27:54.840188   24149 checks.go:382] validating the presence of executable ebtables
I0605 10:27:54.840206   24149 checks.go:382] validating the presence of executable ethtool
I0605 10:27:54.840238   24149 checks.go:382] validating the presence of executable socat
I0605 10:27:54.840260   24149 checks.go:382] validating the presence of executable tc
I0605 10:27:54.840274   24149 checks.go:382] validating the presence of executable touch
I0605 10:27:54.840299   24149 checks.go:524] running all checks
I0605 10:27:54.873438   24149 checks.go:412] checking whether the given node name is reachable using net.LookupHost
I0605 10:27:54.874069   24149 checks.go:622] validating kubelet version
I0605 10:27:54.936806   24149 checks.go:131] validating if the service is enabled and active
I0605 10:27:54.946640   24149 checks.go:209] validating availability of port 10250
I0605 10:27:54.946833   24149 checks.go:292] validating the existence of file /etc/kubernetes/pki/ca.crt
I0605 10:27:54.946849   24149 checks.go:439] validating if the connectivity type is via proxy or direct
I0605 10:27:54.946885   24149 join.go:427] [preflight] Discovering cluster-info
I0605 10:27:54.947024   24149 token.go:200] [discovery] Trying to connect to API Server "k8s-master.HIDE.xyz:6443"
I0605 10:27:54.947792   24149 token.go:75] [discovery] Created cluster-info discovery client, requesting info from "https://k8s-master.HIDE.xyz:6443"
I0605 10:28:04.982049   24149 token.go:141] [discovery] Requesting info from "https://k8s-master.HIDE.xyz:6443" again to validate TLS against the pinned public key
I0605 10:28:15.008018   24149 token.go:164] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "k8s-master.HIDE.xyz:6443"
I0605 10:28:15.008056   24149 token.go:206] [discovery] Successfully established connection with API Server "k8s-master.HIDE.xyz:6443"
I0605 10:28:15.008090   24149 join.go:441] [preflight] Fetching init configuration
I0605 10:28:15.008104   24149 join.go:474] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://master.HIDE.xyz:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp: lookup master.HIDE.xyz on 114.114.114.114:53: no such host

#1447 (comment)
I know "master.HIDE.xyz" can not resolve, because I have changed controlPlaneEndpoint "master.HIDE.xyz" to "k8s-master.HIDE.xyz" and delete A recored for "master.HIDE.xyz".
but why fetch the kubeadm-config ConfigMap from https://master.HiDE.xyz:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config
not https://k8s-master.HiDE.xyz:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config
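A guess at where the stale name comes from: the kubeadm-config ConfigMap below already carries the new controlPlaneEndpoint, but during join the discovery step builds its bootstrap kubeconfig from the cluster-info ConfigMap in the kube-public namespace, and the server URL embedded there is the one recorded at init time. If that is the case here, the commands below (which assume the stale URL really is in cluster-info) should show and fix it:

# Check the server: field inside the kubeconfig embedded in cluster-info;
# after discovery, kubeadm join talks to that address, not the one on the CLI.
kubectl -n kube-public get cm cluster-info -o yaml

# If it still shows https://master.HIDE.xyz:6443, change it to the new
# endpoint and rerun kubeadm join:
kubectl -n kube-public edit cm cluster-info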

kubeadm-config

kubectl -n kube-system get cm kubeadm-config -oyaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - k8s-master.HIDE.xyz
      - master.HIDE.xyz
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta1
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: k8s-master.HIDE.xyz:6443
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      external:
        caFile: /etc/ssl/etcd/etcd-root-ca.pem
        certFile: /etc/ssl/etcd/etcd.pem
        endpoints:
        - https://172.16.10.136:2379
        - https://172.16.10.137:2379
        - https://172.16.10.138:2379
        keyFile: /etc/ssl/etcd/etcd-key.pem
    imageRepository: gcr.azk8s.cn/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: v1.14.2
    networking:
      dnsDomain: cluster.local
      podSubnet: 192.168.0.0/16
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
  ClusterStatus: |
    apiEndpoints:
      master-172-16-10-136:
        advertiseAddress: 172.16.10.136
        bindPort: 6443
      master-172-16-10-137:
        advertiseAddress: 172.16.10.137
        bindPort: 6443
      master-172-16-10-138:
        advertiseAddress: 172.16.10.138
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterStatus
kind: ConfigMap
metadata:
  creationTimestamp: "2019-01-17T12:30:52Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "32801430"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: ba5c5c09-1a53-11e9-b46c-000c296fd64f

What you expected to happen?

The worker node should be added to the cluster.

How to reproduce it (as minimally and precisely as possible)?

Rerun kubeadm join against a cluster whose controlPlaneEndpoint was renamed after init (sketched below).
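
A fuller reproduction, reconstructed from the report above (the endpoint rename after init is the key step; hostnames and token values are placeholders):

# Cluster was initialized with controlPlaneEndpoint: master.HIDE.xyz:6443,
# then the endpoint was changed to k8s-master.HIDE.xyz:6443 and the old
# A record was deleted.

# On a master, print a join command (it uses the new name):
kubeadm token create --print-join-command

# On a fresh worker, run it; discovery succeeds against the new name,
# but fetching the kubeadm-config ConfigMap then fails via the old one:
kubeadm join k8s-master.HIDE.xyz:6443 --token xxx.yyyyyy \
    --discovery-token-ca-cert-hash sha256:zzzzzzzzzzzzzzzzzzzzzz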

Anything else we need to know?

kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://k8s-master.HIDE.xyz:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

kubeadm config view
apiServer:
  certSANs:
  - k8s-master.HIDE.xyz
  - master.HIDE.xyz
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: k8s-master.HIDE.xyz:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    caFile: /etc/ssl/etcd/etcd-root-ca.pem
    certFile: /etc/ssl/etcd/etcd.pem
    endpoints:
    - https://172.16.10.136:2379
    - https://172.16.10.137:2379
    - https://172.16.10.138:2379
    keyFile: /etc/ssl/etcd/etcd-key.pem
imageRepository: gcr.azk8s.cn/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.14.2
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
$ kubectl get no -owide
NAME                   STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
master-172-16-10-136   Ready    master   138d    v1.14.2   172.16.10.136   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
master-172-16-10-137   Ready    master   138d    v1.14.2   172.16.10.137   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
master-172-16-10-138   Ready    master   138d    v1.14.2   172.16.10.138   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
worker-172-16-10-139   Ready    <none>   138d    v1.14.2   172.16.10.139   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
worker-172-16-10-140   Ready    <none>   138d    v1.14.2   172.16.10.140   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
worker-172-16-10-141   Ready    <none>   138d    v1.14.2   172.16.10.141   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
worker-172-16-10-142   Ready    <none>   63d     v1.14.2   172.16.10.142   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
worker-172-16-10-143   Ready    <none>   63d     v1.14.2   172.16.10.143   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
worker-172-16-10-145   Ready    <none>   63d     v1.14.2   172.16.10.145   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
worker-172-16-10-169   Ready    <none>   4d11h   v1.14.2   172.16.10.169   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
worker-172-16-10-170   Ready    <none>   4d11h   v1.14.2   172.16.10.170   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
worker-172-16-10-171   Ready    <none>   4d11h   v1.14.2   172.16.10.171   <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6

$ kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}

$ kubectl cluster-info
Kubernetes master is running at https://k8s-master.HIDE.xyz:6443
affable-ibex-kubernetes-dashboard is running at https://k8s-master.HIDE.xyz:6443/api/v1/namespaces/kube-system/services/https:affable-ibex-kubernetes-dashboard:https/proxy
KubeDNS is running at https://k8s-master.HIDE.xyz:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Thanks!
