What happened:
I used yurtadm init to install an OpenYurt cluster as this document described, and it ran successfully:
[root@host130 openyurt]# _output/local/bin/linux/amd64/yurtadm init --apiserver-advertise-address 192.168.152.130 --openyurt-version latest --passwd 123
I0607 02:43:04.861578 8656 init.go:188] Check and install sealer
I0607 02:43:05.015962 8656 init.go:198] Sealer v0.6.1 already exist, skip install.
I0607 02:43:05.015997 8656 init.go:236] generate Clusterfile for openyurt
I0607 02:43:05.016417 8656 init.go:228] init an openyurt cluster
2022-06-07 02:43:05 [INFO] [local.go:238] Start to create a new cluster
2022-06-07 02:49:35 [INFO] [kube_certs.go:234] APIserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local host130:host130 kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.152.130:192.168.152.130]}
2022-06-07 02:49:35 [INFO] [kube_certs.go:254] Etcd altnames : {map[host130:host130 localhost:localhost] map[127.0.0.1:127.0.0.1 192.168.152.130:192.168.152.130 ::1:::1]}, commonName : host130
2022-06-07 02:51:03 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "admin.conf" kubeconfig file
2022-06-07 02:51:03 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
2022-06-07 02:51:03 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "scheduler.conf" kubeconfig file
2022-06-07 02:51:03 [INFO] [kubeconfig.go:267] [kubeconfig] Writing "kubelet.conf" kubeconfig file
2022-06-07 02:58:11 [INFO] [init.go:228] start to init master0...
2022-06-07 02:59:44 [INFO] [init.go:233] W0607 02:58:12.182775 9521 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "shutdownGracePeriod"
W0607 02:58:12.364950 9521 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.8
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
[WARNING Hostname]: hostname "host130" could not be reached
[WARNING Hostname]: hostname "host130": lookup host130 on 192.168.152.2:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0607 02:58:23.970999 9521 kubeconfig.go:242] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.152.130:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0607 02:58:24.038237 9521 kubeconfig.go:242] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.152.130:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 77.504407 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
72bea8ed6ab6e7c166a1a45520f5109937ffc056a6c4b7c8da959c45215ba9cd
[mark-control-plane] Marking the node host130 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node host130 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4zcwso.b7df7slikommdbxp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join apiserver.cluster.local:6443 --token 4zcwso.b7df7slikommdbxp \
--discovery-token-ca-cert-hash sha256:b4ebf15be698c9b275fe066929bcbf45204de47137ae19550b30abdb214597ed \
--control-plane --certificate-key 72bea8ed6ab6e7c166a1a45520f5109937ffc056a6c4b7c8da959c45215ba9cd
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join apiserver.cluster.local:6443 --token 4zcwso.b7df7slikommdbxp \
--discovery-token-ca-cert-hash sha256:b4ebf15be698c9b275fe066929bcbf45204de47137ae19550b30abdb214597ed
2022-06-07 02:59:44 [INFO] [init.go:183] join command is: kubeadm join apiserver.cluster.local:6443 --token 4zcwso.b7df7slikommdbxp \
--discovery-token-ca-cert-hash sha256:b4ebf15be698c9b275fe066929bcbf45204de47137ae19550b30abdb214597ed \
--control-plane --certificate-key 72bea8ed6ab6e7c166a1a45520f5109937ffc056a6c4b7c8da959c45215ba9cd
2022-06-07 03:03:52 [INFO] [local.go:248] Succeeded in creating a new cluster, enjoy it!
node info:
[root@host130 ~]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
host130 Ready master 20m v1.19.8 192.168.152.130 <none> CentOS Linux 7 (Core) 3.10.0-1127.el7.x86_64 docker://19.3.14
But the yurthub pod kept restarting (this machine's hostname is "host130"):
[root@host130 openyurt]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-b4bf78944-clwm6 1/1 Running 0 21m
kube-system coredns-b4bf78944-ct6tr 1/1 Running 0 21m
kube-system etcd-host130 1/1 Running 0 25m
kube-system kube-apiserver-host130 1/1 Running 0 25m
kube-system kube-controller-manager-host130 1/1 Running 0 25m
kube-system kube-flannel-ds-8r4wr 1/1 Running 0 24m
kube-system kube-proxy-r2wtd 1/1 Running 0 21m
kube-system kube-scheduler-host130 1/1 Running 0 25m
kube-system yurt-app-manager-67f95668df-lggns 1/1 Running 0 24m
kube-system yurt-controller-manager-7c7bf76c77-4pkqh 1/1 Running 0 24m
kube-system yurt-hub-host130 0/1 CrashLoopBackOff 4 22m
kube-system yurt-tunnel-server-65bbc86566-7jdc5 1/1 Running 0 24m
yurt-hub-host130's log:
[root@host130 openyurt]# kubectl logs yurt-hub-host130 -n kube-system
yurthub version: projectinfo.Info{GitVersion:"-9873e10", GitCommit:"9873e10", BuildDate:"2022-06-03T02:07:48Z", GoVersion:"go1.17.1", Compiler:"gc", Platform:"linux/amd64"}
I0606 19:32:22.438123 1 start.go:60] FLAG: --access-server-through-hub="true"
I0606 19:32:22.438171 1 start.go:60] FLAG: --add_dir_header="false"
I0606 19:32:22.438178 1 start.go:60] FLAG: --alsologtostderr="false"
I0606 19:32:22.438186 1 start.go:60] FLAG: --bind-address="127.0.0.1"
I0606 19:32:22.438191 1 start.go:60] FLAG: --cert-mgr-mode="hubself"
I0606 19:32:22.438194 1 start.go:60] FLAG: --disabled-resource-filters="[]"
I0606 19:32:22.438201 1 start.go:60] FLAG: --disk-cache-path="/etc/kubernetes/cache/"
I0606 19:32:22.438205 1 start.go:60] FLAG: --dummy-if-ip=""
I0606 19:32:22.438208 1 start.go:60] FLAG: --dummy-if-name="yurthub-dummy0"
I0606 19:32:22.438210 1 start.go:60] FLAG: --enable-dummy-if="true"
I0606 19:32:22.438216 1 start.go:60] FLAG: --enable-iptables="true"
I0606 19:32:22.438219 1 start.go:60] FLAG: --enable-node-pool="true"
I0606 19:32:22.438221 1 start.go:60] FLAG: --enable-resource-filter="true"
I0606 19:32:22.438225 1 start.go:60] FLAG: --gc-frequency="120"
I0606 19:32:22.438230 1 start.go:60] FLAG: --heartbeat-failed-retry="3"
I0606 19:32:22.438233 1 start.go:60] FLAG: --heartbeat-healthy-threshold="2"
I0606 19:32:22.438236 1 start.go:60] FLAG: --heartbeat-timeout-seconds="2"
I0606 19:32:22.438240 1 start.go:60] FLAG: --help="false"
I0606 19:32:22.438243 1 start.go:60] FLAG: --hub-cert-organizations=""
I0606 19:32:22.438246 1 start.go:60] FLAG: --join-token="2d96hl.l2hkfrihj88pguup"
I0606 19:32:22.438250 1 start.go:60] FLAG: --kubelet-ca-file="/etc/kubernetes/pki/ca.crt"
I0606 19:32:22.438253 1 start.go:60] FLAG: --kubelet-client-certificate="/var/lib/kubelet/pki/kubelet-client-current.pem"
I0606 19:32:22.438256 1 start.go:60] FLAG: --kubelet-health-grace-period="40s"
I0606 19:32:22.438261 1 start.go:60] FLAG: --lb-mode="rr"
I0606 19:32:22.438265 1 start.go:60] FLAG: --log-flush-frequency="5s"
I0606 19:32:22.438269 1 start.go:60] FLAG: --log_backtrace_at=":0"
I0606 19:32:22.438275 1 start.go:60] FLAG: --log_dir=""
I0606 19:32:22.438279 1 start.go:60] FLAG: --log_file=""
I0606 19:32:22.438281 1 start.go:60] FLAG: --log_file_max_size="1800"
I0606 19:32:22.438284 1 start.go:60] FLAG: --logtostderr="true"
I0606 19:32:22.438287 1 start.go:60] FLAG: --max-requests-in-flight="250"
I0606 19:32:22.438293 1 start.go:60] FLAG: --node-name="host130"
I0606 19:32:22.438296 1 start.go:60] FLAG: --nodepool-name=""
I0606 19:32:22.438298 1 start.go:60] FLAG: --one_output="false"
I0606 19:32:22.438301 1 start.go:60] FLAG: --profiling="true"
I0606 19:32:22.438306 1 start.go:60] FLAG: --proxy-port="10261"
I0606 19:32:22.438309 1 start.go:60] FLAG: --proxy-secure-port="10268"
I0606 19:32:22.438312 1 start.go:60] FLAG: --root-dir="/var/lib/yurthub"
I0606 19:32:22.438316 1 start.go:60] FLAG: --serve-port="10267"
I0606 19:32:22.438319 1 start.go:60] FLAG: --server-addr="https://apiserver.cluster.local:6443"
I0606 19:32:22.438329 1 start.go:60] FLAG: --skip_headers="false"
I0606 19:32:22.438332 1 start.go:60] FLAG: --skip_log_headers="false"
I0606 19:32:22.438335 1 start.go:60] FLAG: --stderrthreshold="2"
I0606 19:32:22.438337 1 start.go:60] FLAG: --v="2"
I0606 19:32:22.438340 1 start.go:60] FLAG: --version="false"
I0606 19:32:22.438345 1 start.go:60] FLAG: --vmodule=""
I0606 19:32:22.438348 1 start.go:60] FLAG: --working-mode="cloud"
I0606 19:32:22.438417 1 options.go:182] dummy ip not set, will use 169.254.2.1 as default
I0606 19:32:22.438449 1 config.go:208] yurthub would connect remote servers: https://apiserver.cluster.local:6443
I0606 19:32:22.438636 1 restmapper.go:86] initialize an empty DynamicRESTMapper
I0606 19:32:22.440720 1 filter.go:94] Filter servicetopology registered successfully
I0606 19:32:22.440743 1 filter.go:94] Filter masterservice registered successfully
I0606 19:32:22.440750 1 filter.go:94] Filter discardcloudservice registered successfully
I0606 19:32:22.440754 1 filter.go:94] Filter endpoints registered successfully
I0606 19:32:22.440979 1 filter.go:74] prepare list/watch to sync node(host130) for cloud working mode
I0606 19:32:22.441097 1 filter.go:72] Filter servicetopology initialize successfully
I0606 19:32:22.442686 1 filter.go:72] Filter masterservice initialize successfully
I0606 19:32:22.442764 1 filter.go:74] prepare list/watch to sync node(host130) for cloud working mode
I0606 19:32:22.442781 1 filter.go:72] Filter endpoints initialize successfully
I0606 19:32:22.442978 1 approver.go:198] current filter setting: map[kube-proxy/endpointslices/list:servicetopology kube-proxy/endpointslices/watch:servicetopology kube-proxy/services/list:discardcloudservice kube-proxy/services/watch:discardcloudservice kubelet/services/list:masterservice kubelet/services/watch:masterservice nginx-ingress-controller/endpoints/list:endpoints nginx-ingress-controller/endpoints/watch:endpoints] after init
I0606 19:32:22.443049 1 start.go:70] yurthub cfg: &config.YurtHubConfiguration{LBMode:"rr", RemoteServers:[]*url.URL{(*url.URL)(0xc0001b5a70)}, YurtHubServerAddr:"127.0.0.1:10267", YurtHubCertOrganizations:[]string{}, YurtHubProxyServerAddr:"127.0.0.1:10261", YurtHubProxyServerSecureAddr:"127.0.0.1:10268", YurtHubProxyServerDummyAddr:"169.254.2.1:10261", YurtHubProxyServerSecureDummyAddr:"169.254.2.1:10268", GCFrequency:120, CertMgrMode:"hubself", KubeletRootCAFilePath:"/etc/kubernetes/pki/ca.crt", KubeletPairFilePath:"/var/lib/kubelet/pki/kubelet-client-current.pem", NodeName:"host130", HeartbeatFailedRetry:3, HeartbeatHealthyThreshold:2, HeartbeatTimeoutSeconds:2, MaxRequestInFlight:250, JoinToken:"2d96hl.l2hkfrihj88pguup", RootDir:"/var/lib/yurthub", EnableProfiling:true, EnableDummyIf:true, EnableIptables:true, HubAgentDummyIfName:"yurthub-dummy0", StorageWrapper:(*cachemanager.storageWrapper)(0xc0003aee80), SerializerManager:(*serializer.SerializerManager)(0xc0003aeec0), RESTMapperManager:(*meta.RESTMapperManager)(0xc0003aef40), TLSConfig:(*tls.Config)(nil), SharedFactory:(*informers.sharedInformerFactory)(0xc000147360), YurtSharedFactory:(*externalversions.sharedInformerFactory)(0xc000147400), WorkingMode:"cloud", KubeletHealthGracePeriod:40000000000, FilterManager:(*filter.Manager)(0xc00012c7e0), CertIPs:[]net.IP{net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff, 0xa9, 0xfe, 0x2, 0x1}, net.IP{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0xff, 0xff, 0x7f, 0x0, 0x0, 0x1}}}
I0606 19:32:22.443108 1 start.go:85] 1. register cert managers
I0606 19:32:22.443125 1 certificate.go:60] Registered certificate manager hubself
I0606 19:32:22.443129 1 start.go:90] 2. create cert manager with hubself mode
I0606 19:32:22.444516 1 cert_mgr.go:148] apiServer name https://apiserver.cluster.local:6443 not changed
I0606 19:32:22.444576 1 cert_mgr.go:260] /var/lib/yurthub/pki/ca.crt file already exists, check with server
I0606 19:32:22.455678 1 cert_mgr.go:317] /var/lib/yurthub/pki/ca.crt file matched with server's, reuse it
I0606 19:32:22.455737 1 cert_mgr.go:171] use /var/lib/yurthub/pki/ca.crt ca file to bootstrap yurthub
I0606 19:32:22.455867 1 cert_mgr.go:353] yurthub bootstrap conf file already exists, skip init bootstrap
W0606 19:32:22.456074 1 filestore_wrapper.go:49] unexpected error occurred when loading the certificate: no cert/key files read at "/var/lib/yurthub/pki/yurthub-current.pem", ("", "") or ("/var/lib/yurthub/pki", "/var/lib/yurthub/pki"), will regenerate it
I0606 19:32:22.456101 1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Certificate rotation is enabled
I0606 19:32:22.456149 1 cert_mgr.go:481] yurthub config file already exists, skip init config file
I0606 19:32:22.456164 1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:22.456205 1 certificate_manager.go:270] kubernetes.io/kube-apiserver-client: Rotating certificates
I0606 19:32:27.456768 1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:32.457813 1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:37.457794 1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:42.456567 1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:47.456661 1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:52.456599 1 certificate.go:83] waiting for preparing client certificate
I0606 19:32:52.458017 1 cert_mgr.go:471] avoid tcp conn leak, close old tcp conn that used to rotate certificate
I0606 19:32:52.458112 1 connrotation.go:110] forcibly close 0 connections on apiserver.cluster.local:6443 for hub certificate manager dialer
I0606 19:32:52.460037 1 connrotation.go:151] create a connection from 192.168.152.130:38962 to apiserver.cluster.local:6443, total 1 connections in hub certificate manager dialer
I0606 19:32:57.456410 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:02.457056 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:07.456956 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:12.457429 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:17.456387 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:22.457187 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:27.456754 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:32.457666 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:37.457018 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:42.456498 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:47.456568 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:52.457048 1 certificate.go:83] waiting for preparing client certificate
I0606 19:33:57.456429 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:02.458012 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:07.457251 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:12.456692 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:17.456393 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:22.457189 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:27.456752 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:32.457642 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:37.456297 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:42.456465 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:47.456978 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:52.456956 1 certificate.go:83] waiting for preparing client certificate
I0606 19:34:57.456770 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:02.456274 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:07.456317 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:12.457742 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:17.456341 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:22.456346 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:27.458130 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:32.456513 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:37.456810 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:42.456690 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:47.456764 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:52.458030 1 certificate.go:83] waiting for preparing client certificate
I0606 19:35:57.456544 1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:02.456434 1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:07.457276 1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:12.456343 1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:17.457643 1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:22.457454 1 certificate.go:83] waiting for preparing client certificate
I0606 19:36:22.457539 1 certificate.go:83] waiting for preparing client certificate
E0606 19:36:22.457547 1 certificate.go:87] client certificate preparation failed, timed out waiting for the condition
F0606 19:36:22.457561 1 start.go:73] run yurthub failed, could not create certificate manager, timed out waiting for the condition
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
/go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:1026 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x2dd08e0, 0x3, {0x0, 0x0}, 0xc000407b90, 0x0, {0x235aa6d, 0xc000456060}, 0x0, 0x0)
/go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:975 +0x63d
k8s.io/klog/v2.(*loggingT).printf(0x1c07404, 0x65fd18, {0x0, 0x0}, {0x0, 0x0}, {0x1c19b36, 0x11}, {0xc000456060, 0x2, ...})
/go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:753 +0x1e5
k8s.io/klog/v2.Fatalf(...)
/go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:1514
github.com/openyurtio/openyurt/cmd/yurthub/app.NewCmdStartYurtHub.func1(0xc00013e500, {0x1c07d28, 0x5, 0x5})
/build/cmd/yurthub/app/start.go:73 +0x5a5
github.com/spf13/cobra.(*Command).execute(0xc00013e500, {0xc000134130, 0x5, 0x5})
/go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0xc00013e500)
/go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
/go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:902
main.main()
/build/cmd/yurthub/yurthub.go:33 +0xaf
goroutine 18 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
/go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:1169 +0x6a
created by k8s.io/klog/v2.init.0
/go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:420 +0xfb
goroutine 48 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0007b14e0, 0x0)
/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x10)
/usr/local/go/src/sync/cond.go:56 +0x8c
golang.org/x/net/http2.(*pipe).Read(0xc0007b14c8, {0xc00046e000, 0x200, 0x200})
/go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/pipe.go:65 +0xeb
golang.org/x/net/http2.transportResponseBody.Read({0xc000287bb8}, {0xc00046e000, 0x534ace, 0xc000287c20})
/go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:2110 +0x77
encoding/json.(*Decoder).refill(0xc00046a000)
/usr/local/go/src/encoding/json/stream.go:165 +0x17f
encoding/json.(*Decoder).readValue(0xc00046a000)
/usr/local/go/src/encoding/json/stream.go:140 +0xbb
encoding/json.(*Decoder).Decode(0xc00046a000, {0x1998b40, 0xc000450090})
/usr/local/go/src/encoding/json/stream.go:63 +0x78
k8s.io/apimachinery/pkg/util/framer.(*jsonFrameReader).Read(0xc00043e270, {0xc00046c000, 0x400, 0x400})
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/framer/framer.go:152 +0x19c
k8s.io/apimachinery/pkg/runtime/serializer/streaming.(*decoder).Decode(0xc0004420f0, 0x0, {0x1e90720, 0xc000448240})
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/runtime/serializer/streaming/streaming.go:77 +0xa7
k8s.io/client-go/rest/watch.(*Decoder).Decode(0xc000456040)
/go/pkg/mod/k8s.io/client-go@v0.22.3/rest/watch/decoder.go:49 +0x4f
k8s.io/apimachinery/pkg/watch.(*StreamWatcher).receive(0xc000448200)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/watch/streamwatcher.go:105 +0x11c
created by k8s.io/apimachinery/pkg/watch.NewStreamWatcher
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/watch/streamwatcher.go:76 +0x135
goroutine 86 [syscall, 4 minutes]:
os/signal.signal_recv()
/usr/local/go/src/runtime/sigqueue.go:169 +0x98
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:24 +0x19
created by os/signal.Notify.func1.1
/usr/local/go/src/os/signal/signal.go:151 +0x2c
goroutine 87 [chan receive, 4 minutes]:
k8s.io/apiserver/pkg/server.SetupSignalContext.func1()
/go/pkg/mod/k8s.io/apiserver@v0.22.3/pkg/server/signal.go:48 +0x2b
created by k8s.io/apiserver/pkg/server.SetupSignalContext
/go/pkg/mod/k8s.io/apiserver@v0.22.3/pkg/server/signal.go:47 +0xe7
goroutine 47 [select, 4 minutes]:
golang.org/x/net/http2.awaitRequestCancel(0xc0001adb00, 0xc000115620)
/go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:318 +0xfa
golang.org/x/net/http2.(*clientStream).awaitRequestCancel(0xc0007b14a0, 0x0)
/go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:344 +0x2b
created by golang.org/x/net/http2.(*clientConnReadLoop).handleResponse
/go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:2056 +0x638
goroutine 80 [select, 4 minutes]:
k8s.io/client-go/tools/watch.UntilWithoutRetry({0x1eab020, 0xc0007a6000}, {0x1e909c8, 0xc000740c60}, {0xc00055f9d0, 0x1, 0xc8})
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/until.go:73 +0x2f0
k8s.io/client-go/tools/watch.UntilWithSync({0x1eab020, 0xc0007a6000}, {0x1e931a0, 0xc000659ae8}, {0x1e8eb00, 0xc00000ab40}, 0x0, {0xc0005c19d0, 0x1, 0x1})
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/until.go:151 +0x268
k8s.io/client-go/util/certificate/csr.WaitForCertificate({0x1eab020, 0xc0007a6000}, {0x1efd8f0, 0xc0004202c0}, {0xc00079e070, 0x9}, {0xc0007891a0, 0x24})
/go/pkg/mod/k8s.io/client-go@v0.22.3/util/certificate/csr/csr.go:225 +0x96c
k8s.io/client-go/util/certificate.(*manager).rotateCerts(0xc000504000)
/go/pkg/mod/k8s.io/client-go@v0.22.3/util/certificate/certificate_manager.go:486 +0x5b9
k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x40ce54, 0xc000286cc8})
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:217 +0x1b
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x1eaafe8, 0xc00013c008}, 0x46af53)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:230 +0x7c
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0x1a0ca40)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:223 +0x39
k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff({0x77359400, 0x4000000000000000, 0x3fb999999999999a, 0x5, 0x0}, 0x2dd0440)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:418 +0x5f
k8s.io/client-go/util/certificate.(*manager).Start.func1()
/go/pkg/mod/k8s.io/client-go@v0.22.3/util/certificate/certificate_manager.go:353 +0x3f8
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fdc6d3836a0)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0, {0x1e72580, 0xc0004cbc80}, 0x1, 0xc00039a240)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/util/certificate.(*manager).Start
/go/pkg/mod/k8s.io/client-go@v0.22.3/util/certificate/certificate_manager.go:321 +0x18f
goroutine 106 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00073a450, 0x1)
/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0xc0005b6f60)
/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/tools/watch.(*eventProcessor).takeBatch(0xc00073f320)
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/informerwatcher.go:64 +0xa5
k8s.io/client-go/tools/watch.(*eventProcessor).run(0x0)
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/informerwatcher.go:51 +0x25
created by k8s.io/client-go/tools/watch.NewIndexerInformerWatcher
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/informerwatcher.go:140 +0x31f
goroutine 103 [IO wait]:
internal/poll.runtime_pollWait(0x7fdc6d42a5c8, 0x72)
/usr/local/go/src/runtime/netpoll.go:229 +0x89
internal/poll.(*pollDesc).wait(0xc0004cd080, 0xc000710000, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0004cd080, {0xc000710000, 0x902, 0x902})
/usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0004cd080, {0xc000710000, 0x8fd, 0xc000655a40})
/usr/local/go/src/net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc00040c018, {0xc000710000, 0xc000710000, 0x5})
/usr/local/go/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc000792048, {0xc000710000, 0x0, 0x409b8d})
/usr/local/go/src/crypto/tls/conn.go:777 +0x3d
bytes.(*Buffer).ReadFrom(0xc0001a0278, {0x1e6f060, 0xc000792048})
/usr/local/go/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0001a0000, {0x7fdc6d3c31d8, 0xc00043e060}, 0x902)
/usr/local/go/src/crypto/tls/conn.go:799 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc0001a0000, 0x0)
/usr/local/go/src/crypto/tls/conn.go:606 +0x112
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0xc0001a0000, {0xc00071f000, 0x1000, 0xc000286cc0})
/usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
bufio.(*Reader).Read(0xc0005a96e0, {0xc0006eaab8, 0x9, 0x18})
/usr/local/go/src/bufio/bufio.go:227 +0x1b4
io.ReadAtLeast({0x1e6ee80, 0xc0005a96e0}, {0xc0006eaab8, 0x9, 0x9}, 0x9)
/usr/local/go/src/io/io.go:328 +0x9a
io.ReadFull(...)
/usr/local/go/src/io/io.go:347
golang.org/x/net/http2.readFrameHeader({0xc0006eaab8, 0x9, 0x8c21ce}, {0x1e6ee80, 0xc0005a96e0})
/go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/frame.go:237 +0x6e
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0006eaa80)
/go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/frame.go:492 +0x95
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000286fa0)
/go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:1821 +0x165
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0006f8780)
/go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:1743 +0x79
created by golang.org/x/net/http2.(*Transport).newClientConn
/go/pkg/mod/golang.org/x/net@v0.0.0-20210520170846-37e1c6afe023/http2/transport.go:695 +0xb45
goroutine 107 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00064d248, 0x1)
/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0xc00078f580)
/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc00064d220, 0xc00073f440)
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/delta_fifo.go:525 +0x1f6
k8s.io/client-go/tools/cache.(*controller).processLoop(0xc0006fefc0)
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/controller.go:183 +0x36
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fdc6d399a00)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xae71c8, {0x1e72580, 0xc00073f4a0}, 0x1, 0xc00039afc0)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006ff028, 0x3b9aca00, 0x0, 0x40, 0x7fdc6d23f1b0)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*controller).Run(0xc0006fefc0, 0xc00039afc0)
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/controller.go:154 +0x2fb
k8s.io/client-go/tools/watch.NewIndexerInformerWatcher.func4()
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/informerwatcher.go:146 +0x8d
created by k8s.io/client-go/tools/watch.NewIndexerInformerWatcher
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/watch/informerwatcher.go:143 +0x3d1
goroutine 108 [chan receive, 4 minutes]:
k8s.io/client-go/tools/cache.(*controller).Run.func1()
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/controller.go:130 +0x28
created by k8s.io/client-go/tools/cache.(*controller).Run
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/controller.go:129 +0x105
goroutine 109 [select, 4 minutes]:
k8s.io/client-go/tools/cache.(*Reflector).watchHandler(0xc0006eab60, {0x0, 0x0, 0x2dd0440}, {0x1e909f0, 0xc000448200}, 0xc000563ba0, 0xc0007a6180, 0xc00039afc0)
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:468 +0x1b6
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc0006eab60, 0xc00039afc0)
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:428 +0x6b6
k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:221 +0x26
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fdc6d399a00)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000715740, {0x1e72560, 0xc000657220}, 0x1, 0xc00039afc0)
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:156 +0xb6
k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0006eab60, 0xc00039afc0)
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:220 +0x237
k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:56 +0x22
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/go/pkg/mod/k8s.io/apimachinery@v0.22.3/pkg/util/wait/wait.go:71 +0x88
goroutine 29 [select, 4 minutes]:
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func2()
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:373 +0x139
created by k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch
/go/pkg/mod/k8s.io/client-go@v0.22.3/tools/cache/reflector.go:367 +0x3a5
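Judging from the goroutine dump above, yurthub is blocked in client-go's csr.WaitForCertificate, i.e. it is waiting for its client-certificate CSR to be approved and times out after about four minutes. I assume the pending CSRs on the cluster would show this; I did not capture that output, but it could be checked with:
kubectl get csr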
yurt-hub.yaml:
[root@host130 manifests]# cat /etc/kubernetes/manifests/yurt-hub.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
k8s-app: yurt-hub
name: yurt-hub
namespace: kube-system
spec:
volumes:
- name: hub-dir
hostPath:
path: /var/lib/yurthub
type: DirectoryOrCreate
- name: kubernetes
hostPath:
path: /etc/kubernetes
type: Directory
- name: pem-dir
hostPath:
path: /var/lib/kubelet/pki
type: Directory
containers:
- name: yurt-hub
image: openyurt/yurthub:latest
imagePullPolicy: IfNotPresent
volumeMounts:
- name: hub-dir
mountPath: /var/lib/yurthub
- name: kubernetes
mountPath: /etc/kubernetes
- name: pem-dir
mountPath: /var/lib/kubelet/pki
command:
- yurthub
- --v=2
- --server-addr=https://apiserver.cluster.local:6443
- --node-name=$(NODE_NAME)
- --join-token=2d96hl.l2hkfrihj88pguup
- --working-mode=cloud
livenessProbe:
httpGet:
host: 127.0.0.1
path: /v1/healthz
port: 10267
initialDelaySeconds: 300
periodSeconds: 5
failureThreshold: 3
resources:
requests:
cpu: 150m
memory: 150Mi
limits:
memory: 300Mi
securityContext:
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
hostNetwork: true
priorityClassName: system-node-critical
priority: 2000001000
kubeconfig:
[root@host130 openyurt]# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQ0FYRFRJeU1EWXdOakU0TkRrek5Wb1lEekl4TWpJd05URXpNVGcwT1RNMVdqQVZNUk13RVFZRApWUVFERXdwcmRXSmxjbTVsZEdWek1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCnVXSlIvZ1h4NnhsbE5jQ2NTVVBYYzl3NFZxSExLU0t0aGYxblZxTkk1N2lmalNMNW9UQTBvTTJzWUZIakJQTXoKcGlMSlJ6THdULzlRWmFDeHExOWphMUIzbk5OL2d5a014bXRObHNLY3BkRXk3T1pGSWhxZGg0aTVIQWxIR0RsUApUT2RjRktST091aEQyRjdybnQ5RzBUZWp0V2lFenlTYTh2QVE2dDhncUMvQ1hTWWc2eXNJRy9xekppT0xDaXJQCjJxbVV1VDRvZnpndHJFdE8wcy80NDZIdEdTbWI1VWF1NEU2bUdObXRyenhXekJlVWVWWEZEWXRYREtiQnVUdG8KQjcxamM0YXdqV2pyWGRmOVRRUmlzd09jMmp4QmZwV2ptelAwamJDcXFjSkE2aWt2cmc2ZWZrQ202ZGJYUHlUQgpVc0tib3FJZHVOMWtHVFlyNXFJQXZ3SURBUUFCbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQXFRd0R3WURWUjBUCkFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVSajZYaGNEdk9IVFJBVlRFUmp4Z3BFUnVPUU13RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFKejN0WGRNS29FMW1oNlRGUkR1KzNkRis3QU1jeUtpWENiQ3c2WGlKTDNDTkt6awpRNmVIVFYwRnk2dHgxVXFCUXRSK1dFZ0xkR2NTb2NWL2lobUFBRlpIaitXRXFUQXhLS1ZnbW5jZE4xZ29oN0ZKCmVsNGg4WU5kUkxFUzAyN1NrR21DMWRsRU85Zmdxbm1Db21DNlhzeHY0aEQ0ZnN3MHltVXdIVnFuYkdpRXA5Q3EKcVhMMDVzLzZ2cU5nNHIzSktRclROMG5pQUdFVHdRbEl2R3FxV3Qyak5VT3FicXRVbEpxL0o4Zkcra2Z4TFF2cQowVzdJMUhaM1N0SWh4TmNyY2ZOMHdrejJmTDg0eDlWS2JRUzQzWC9QNEdoRjRBV01vVW9uNCs5cnFSSDF0S1JGCkpEUDZGT2kvaVFWYmJRSDhmcFhReTU1VDU4NzYrZTlEczdtclJOaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://apiserver.cluster.local:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGVENDQWYyZ0F3SUJBZ0lJVWZ6RlRHdlMvbGd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWdGdzB5TWpBMk1EWXhPRFE1TXpWYUdBOHlNVEl5TURVeE16RTROVEV3TTFvdwpOREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhHVEFYQmdOVkJBTVRFR3QxWW1WeWJtVjBaWE10CllXUnRhVzR3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRREtyTzhrT2pGYlI2bjUKb01rNU9vUHFqNnlUVS9KT2pmRjlJb0hMaitpTlFQT2JOWURHN2l4RHdpYVdCTXVjRXNxVCtMVXNWcnZtOTZodwp3S0hXbkIrNWJKYUh6QU5xMnhiYm5QTUlnYk9veEh5YU9iZnNVdUVJY3dWQlB5RXVvN0t3bWdYYnhMZ3dsblhqClYzMGxqVlNxRGxMUkkyVWEraitWQ0JWLzlHMmJSUEdyRE5aU1M5Q2dLaXppUFE5WG1Xb0E5b3J4WHE0a2kzNVkKM0IweS9mTDRjQW8xaWVBTlh4MFY3SzdoV3hEeldyRzVPMEhjV1VXQVpPYWZBcDVuNEIvd3QzYUZIOWNESXlNdQovcFMxSnhFQU5aTlJpQnY5Wlhyc05yUktPdTJnNWdkNnFjaFV3WTVnUkJ6MlIvemZrMGZUaDArSWFjYWtRbkh5Ck5ja25pdjZYQWdNQkFBR2pTREJHTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUYKQlFjREFqQWZCZ05WSFNNRUdEQVdnQlJHUHBlRndPODRkTkVCVk1SR1BHQ2tSRzQ1QXpBTkJna3Foa2lHOXcwQgpBUXNGQUFPQ0FRRUFHb1Z0V0djZEZ1M1ZyZVpGVyswYWFwaWZzSFlOaFpTWm1EeUhKZkhZaEpnRzY0M3J2TEVyCklHRUozSm1XenNOajhQRTF6aXQ3Q0ZBd29FcXBYWnEydXVmVlJHS3MrTEo5YnlKR3VpQjFmZ1liTTg2QVJRM3MKZ2V0TTNXSlFpVDdENGJoZkM4M0VMNkRJUEZJdHp3UEpxSTFFcFZ4a04ycmY4cG9RdTNNVEd2eHhhRldrU01SUwpzUWZMYXc5UENhOWRBU21iMmkyaTBCVmZVOEdqQWVsZDltdDFTVFB4eEJ2aVJ5aVB0elIvcTZRS3ViTmFKVWoyCk5PMG9uemtJbTBWR2xyVlBURkNROENPaHVnZGs1c0s4YnUrNW9hZEJsMklMMzZSMVQ5UnpQTDNjOGxKdmpkbGsKV0czS1dpZmhVLzZxU3hJNGFEd2lBTFJQb3RHcmE3ajlXUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeXF6dkpEb3hXMGVwK2FESk9UcUQ2bytzazFQeVRvM3hmU0tCeTQvb2pVRHpteldBCnh1NHNROEltbGdUTG5CTEtrL2kxTEZhNzV2ZW9jTUNoMXB3ZnVXeVdoOHdEYXRzVzI1enpDSUd6cU1SOG1qbTMKN0ZMaENITUZRVDhoTHFPeXNKb0YyOFM0TUpaMTQxZDlKWTFVcWc1UzBTTmxHdm8vbFFnVmYvUnRtMFR4cXd6VwpVa3ZRb0NvczRqMFBWNWxxQVBhSzhWNnVKSXQrV053ZE12M3krSEFLTlluZ0RWOGRGZXl1NFZzUTgxcXh1VHRCCjNGbEZnR1RtbndLZVorQWY4TGQyaFIvWEF5TWpMdjZVdFNjUkFEV1RVWWdiL1dWNjdEYTBTanJ0b09ZSGVxbkkKVk1HT1lFUWM5a2Y4MzVOSDA0ZFBpR25HcEVKeDhqWEpKNHIrbHdJREFRQUJBb0lCQVFDZkdUM28zRjJlWUJWSQpSalZ2M1VWc3ZqZ2t0d05CTXgvY3NWZmVhaXVOcHUwVWE5MlpTNkluMXFMZnBRZ0lqcC9EcExya0FYb2poMG9NCnFNcmlZMUJzQ0pmcUpmYVF6VWVXUWhCdUh4TGZhczY5YW8yODBCcWl2VmZrcmgvb01zeTA0Vk96L3lydnlVemwKbCtvL3JrQkY5bFNBcEI1Y0hSSUlkWDRiSWM5ZzBFcFRqcFpib2tGQ0xRakZHR1o1RW1iMkpYVmxzUG5PaGxPSAp2aCtBWFNWdmZIdWpZRjJVZEVLTWtMK1NEQVhsejRsektDelVUcHlUaEJ1bVlmUERyUEJFNmJ2OXNjMWJ3eXpqCm9EQjBBK0RHL2d2d3RySUtyZnA1ZmVuVnFEdE1VOEtEcWxFOHFNcGtLM1ZOUUFodXJuOFQzSXZXMHVlM2x6MkEKNUJmbXIyd2hBb0dCQU8wei9FNFBjamVpcUdxTjlXRjdxV3VCTlRnVlJLVWFOaGN4aGhQOFVUK0VIL01rM1ZZWQpUNVVENExpUEJTcHZYRW12RUdPcEN2UGFDY3grSGhMTXJrOFp6c2lXWVJsQ0hFWHFiNlc0UnlBQTZnb2cxaVhNCnAyNHpkUUhkRm1Ieml1WmNZUVRsWnpTbzUwbmFEUHNrTTlwa1Y2OUZMbXJrV1JrdlAxVkVIR2xwQW9HQkFOcTgKZ2FFWnhqZGQvZWhxWTA3ODB6aml3ZVNwMmUwRWV5YTdtdjBNc1czbE9JU0tNemVQSHdxL25jVmM1WmNndThlOAppRWQ3YnFPbExaaEtOTXlMeTRtWHNVK1pSRTNpT1lIN2ZuWllycGd6RlFYdFZldU4rL24wVzdyeWtrT0FUQlBWCndpOTlySXBSWkszM0Y1NU9XaGdrYitPRnlkSTZXVFF0Tys5VTF5Zi9Bb0dCQU16WmRtMmJuVkk2NFNPVWtYT3MKcmpXdmtseHEwYXVjSlZhR2FIcGVEM1RCRUM2Vmlhak91ZnZCSzVOM3dFaFRmK29LakNibFdCWWNHUlpIWElWegp5cDE1ZGtGNHpVWlk5NzNScHJZQm5Uc2dUdjZNT1NUUHgxQytrN0FXVlR3bWJiQmYyMUcxSkJvd08vNWxsNHhVClNZdXoySjMvS3dVWlMzRWFncUdLZnRieEFvR0JBTGJDcG5UYXVqbHN1VWZHREcvazR2ODJ4OWFzN0Q4VGJXcHgKZWhCUTJMYi92UGRSR1hZa2lVVkwwU0VrZTFpSXF4MDZNNHUyWUQwdk9DZDBhU1UyOEx0b0dXaHVvUm1LR1k2MwplWFNjcUZUVzZZdm9QOC91OUVobW1YWmNVMFUvSDFHN1d1S2ZXTmpCSlNRTnZwZ3cweW8wMTUvOUd5SWlTb0pFCkFUMzVYMFExQW9HQVdRSWtUKzc5ZlR3UDVLNWZqZkVYY2ZtRjRNb21sckVSMEtTdmVMbWgyUThQclJHMWZMZW0KNHJJQWdBSFluL2t3Vk5IM2dOUnBPWURYR05LQjB3Rk56S0RybVpWbTJRODNnOHppWkR4bys2Tk1sNTEyOUxscQpLM0tJQmowWjJsemcxbFVjbGlIY3h3UmhWNDBpeE5GZksxYUp6NUNpc1g4dXVwK1JCalF4K01FPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
What you expected to happen:
yurthub runs successfully instead of being restarted all the time.
Environment:
- OpenYurt version: v0.7.0
- OS (e.g: cat /etc/os-release): CentOS 7
- Kernel (e.g. uname -a): Linux host130 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
others:
Before running yurtadm init to install, I cleaned up the environment following [this article](FAQ | sealer) and deleted the /var/lib/kubelet, /var/lib/yurthub, and /var/lib/yurttunnel-server directories.
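For reference, the directory cleanup was essentially the following (a sketch; the full cleanup followed the sealer FAQ and may have included more steps):
rm -rf /var/lib/kubelet /var/lib/yurthub /var/lib/yurttunnel-server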
/kind bug