
kube-apiserver log always has TLS handshake error #70411

Closed
leon0306 opened this issue Oct 30, 2018 · 50 comments

@leon0306

/triage support

In my cluster, the kube-apiserver log always shows TLS handshake errors like this:

I1030 06:26:29.191023       1 log.go:172] http: TLS handshake error from 10.15.4.118:51084: read tcp 10.15.4.253:6443->10.15.4.118:51084: read: connection reset by peer
I1030 06:31:50.354020       1 log.go:172] http: TLS handshake error from 10.15.4.118:55268: read tcp 10.15.4.253:6443->10.15.4.118:55268: read: connection reset by peer
I1030 06:31:50.354090       1 log.go:172] http: TLS handshake error from 10.15.4.119:37746: read tcp 10.15.4.253:6443->10.15.4.119:37746: read: connection reset by peer
I1030 06:35:20.467731       1 log.go:172] http: TLS handshake error from 10.15.4.118:57980: read tcp 10.15.4.253:6443->10.15.4.118:57980: read: connection reset by peer
I1030 06:36:20.498157       1 log.go:172] http: TLS handshake error from 10.15.4.119:38722: read tcp 10.15.4.253:6443->10.15.4.119:38722: read: connection reset by peer
I1030 06:37:41.540767       1 log.go:172] http: TLS handshake error from 10.15.4.118:59816: read tcp 10.15.4.253:6443->10.15.4.118:59816: read: connection reset by peer
I1030 06:37:41.540837       1 log.go:172] http: TLS handshake error from 10.15.4.119:39020: read tcp 10.15.4.253:6443->10.15.4.119:39020: read: connection reset by peer

10.15.4.118 & 10.15.4.119 is LB server by using Haproxy.

10.15.4.253~255 is my master server, the same error is reported on all three machines. Kube-controller-manager & Kube-scheduler is no error log appears.

Guys please help me to resolve this !

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Oct 30, 2018
@leon0306
Author

/sig network
/sig node
/sig cluster-ops

@k8s-ci-robot k8s-ci-robot added sig/network Categorizes an issue or PR as relevant to SIG Network. sig/node Categorizes an issue or PR as relevant to SIG Node. sig/cluster-ops and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Oct 30, 2018
@anfernee
Member

anfernee commented Nov 1, 2018

I think the client-side logs would provide more info in this case.

@leon0306
Author

leon0306 commented Nov 2, 2018

@anfernee Do you mean the load balancer's logs, or the kubelet logs on the nodes?

@anfernee
Member

anfernee commented Nov 2, 2018

Right, the logs from 10.15.4.118 and 10.15.4.119.

@leon0306
Author

leon0306 commented Nov 5, 2018

@anfernee there are no error logs on the load balancer servers. HAProxy runs in Docker.
The HAProxy configuration file is as follows:

global
  log 127.0.0.1 local0
  log-tag haproxy
  user root
  group root
  daemon
  pidfile /root/haproxy.pid

defaults
  mode tcp
  log global
  retries 3
  timeout connect 5s
  timeout client 30s
  timeout server 30s
  timeout check 2s
  no option transparent
  
listen admin_stats
  mode http
  bind 0.0.0.0:1080
  log global
  stats refresh 30s
  stats uri     /haproxy-status
  stats realm   Haproxy\ Statistics
  stats auth   ***:***
  stats admin if TRUE

frontend dnc-k8s
  bind 0.0.0.0:6443
  mode tcp
  log global
  default_backend dnc-k8s-proxy

backend dnc-k8s-proxy
  mode tcp
  balance roundrobin
  server 10.15.4.253 10.15.4.253:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server 10.15.4.254 10.15.4.254:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server 10.15.4.255 10.15.4.255:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3

I don't understand which process is accessing the API and causing the TLS errors.

@anfernee
Member

anfernee commented Nov 5, 2018

Can you reproduce it with your own client, like curl?

@leon0306
Author

leon0306 commented Nov 6, 2018

@anfernee Do you mean like this?

curl -X GET https://10.15.4.253:6443/healthz -k
ok

If not, can you give me an example? Thanks a lot.

@anfernee
Member

anfernee commented Nov 6, 2018

Right. It looks like your master is fine, which means something is wrong with your HAProxy.
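
For a fuller check than curl -k, the complete client-certificate handshake can be exercised as well. A minimal sketch, assuming kubeadm's default certificate paths under /etc/kubernetes/pki (adjust to your PKI layout):

# Verify the serving cert against the cluster CA and authenticate with a
# client certificate; kubeadm's apiserver-kubelet-client pair is assumed here.
curl --cacert /etc/kubernetes/pki/ca.crt \
     --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt \
     --key /etc/kubernetes/pki/apiserver-kubelet-client.key \
     https://10.15.4.253:6443/healthz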

@victorvarza

victorvarza commented Nov 6, 2018

This can be solved by switching the LB health check from TCP to SSL.
Here are more details about this issue: kubernetes-retired/kube-aws#295
Try adding "option ssl-hello-chk" as described in the docs: https://www.haproxy.com/documentation/aloha/10-0/traffic-management/lb-layer7/health-checks/.
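
For illustration, this is where that option would go in the backend posted earlier (addresses copied from that config). Note that, as the next comment shows, ssl-hello-chk sends an SSLv3 hello, which newer apiservers reject:

backend dnc-k8s-proxy
  mode tcp
  balance roundrobin
  option ssl-hello-chk  # SSL-hello probe instead of a bare TCP connect
  server 10.15.4.253 10.15.4.253:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server 10.15.4.254 10.15.4.254:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server 10.15.4.255 10.15.4.255:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3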

@leon0306
Author

leon0306 commented Nov 6, 2018

@victorvarza

I tried adding an SSL check to my HAProxy, but I got errors like this:

I1106 08:23:51.231176       1 log.go:172] http: TLS handshake error from 10.15.4.118:3010: tls: client offered an unsupported, maximum protocol version of 300
I1106 08:24:01.234126       1 log.go:172] http: TLS handshake error from 10.15.4.118:3134: tls: client offered an unsupported, maximum protocol version of 300
I1106 08:24:11.236613       1 log.go:172] http: TLS handshake error from 10.15.4.118:3250: tls: client offered an unsupported, maximum protocol version of 300
I1106 08:24:21.239310       1 log.go:172] http: TLS handshake error from 10.15.4.118:3368: tls: client offered an unsupported, maximum protocol version of 300
I1106 08:24:31.241379       1 log.go:172] http: TLS handshake error from 10.15.4.118:3482: tls: client offered an unsupported, maximum protocol version of 300
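
(Protocol version 300 here is the SSL 3.0 record version: the ssl-hello-chk probe speaks SSLv3, which the Go TLS stack in the apiserver rejects; a later comment explains the same thing.)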

After that, I switched back to a TCP port check and still got errors like this:

I1106 08:58:02.681476       1 log.go:172] http: TLS handshake error from 10.15.4.118:41744: read tcp 10.15.4.127:6443->10.15.4.118:41744: read: connection reset by peer
I1106 08:58:38.699099       1 log.go:172] http: TLS handshake error from 10.15.4.118:42458: read tcp 10.15.4.127:6443->10.15.4.118:42458: read: connection reset by peer
I1106 08:59:11.345815       1 log.go:172] http: TLS handshake error from 10.15.4.119:42576: read tcp 10.15.4.127:6443->10.15.4.119:42576: read: connection reset by peer

configuration file:

global
  log 127.0.0.1 local0 err
  user root
  group root
  daemon
  pidfile /root/haproxy.pid

defaults
  log global
  timeout connect 60s
  timeout client 60s
  timeout server 60s
  timeout check 10s
  
# tcp options
  option dontlognull
  option splice-response
  option http-keep-alive
  option clitcpka
  option srvtcpka
  option tcp-smart-accept
  option tcp-smart-connect
  option contstats

frontend uat-k8s
  bind *:9443
  mode tcp
  log global
  default_backend uat-k8s-proxy

backend uat-k8s-proxy
  mode tcp
  balance roundrobin
  option tcp-check
  server 10.15.4.127 10.15.4.127:6443 check
  server 10.15.4.128 10.15.4.128:6443 check
  server 10.15.4.209 10.15.4.209:6443 check

Do you have any suggestions?

@leon0306
Author

leon0306 commented Nov 6, 2018

@victorvarza

I1106 08:59:11.345815       1 log.go:172] http: TLS handshake error from 10.15.4.119:42576: read tcp 10.15.4.127:6443->10.15.4.119:42576: read: connection reset by peer

10.15.4.127 is the kube-apiserver; 10.15.4.119 is HAProxy.

I don't think this error is caused by a health check.
It seems like the connection between the kube-apiserver and HAProxy triggers the error. Have you seen an error like that before?

@leon0306
Author

leon0306 commented Nov 7, 2018

When I disable the HAProxy health check, I still get the error in the kube-apiserver log.
I think the health check is not the root cause of this error.
Has anyone ever seen this problem?

@victorgolda

I'm facing this too.

@dawidmalina

We are facing this as well. We've tried with this configuration:

    # use httpchk to validate backend availability
    option httpchk GET /healthz HTTP/1.1\r\nHost:\ 10.10.0.10\r\nConnection:\ Close
    http-check expect string ok
    server 10-10-0-6 10.10.0.6:443 inter 1s fastinter 1s check check-ssl verify none

but no luck. What's strange is that even if I disable `check check-ssl verify none`, `http-check expect`, and `option httpchk`, I still see this error in the API logs.

@mindw

mindw commented Jan 1, 2019

Alas, a long-standing issue: #43784.

@wgliang
Contributor

wgliang commented Feb 20, 2019

Have the same issue.
/priority important-soon

@k8s-ci-robot k8s-ci-robot added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Feb 20, 2019
@thockin
Member

thockin commented Mar 8, 2019

#43784

@Ashraf-Hassan

I wonder why this issue is closed. I have been jumping from one link to another and I cannot find the fix. I have 3 masters behind HAProxy and I am suffering from exactly the same problem. Below is the config for the API pod:

spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=10.245.10.10
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,PodNodeSelector
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://10.245.10.2:2379,https://10.245.10.3:2379,https://10.245.10.3:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.13.4

And I am getting this error:

http: TLS handshake error from 10.245.10.5:60478: tls: first record does not look like a TLS handshake

where 10.245.10.5 is the LB IP.

@ghost

ghost commented Apr 23, 2019

Same here. Seeing a lot of these after upgrading to k8s 1.12.5.

@frittentheke

@boxuan666 ?

@yogarajrafay

I am seeing this issue. Can anybody share how it was resolved?

@invidian
Member

The same happens if the kube-apiserver is behind an AWS load balancer, which does TCP health checks by default. Configuring it to use HTTPS does not fully resolve the problem either: if one uses --anonymous-auth=false, the health probes will fail, since AWS provides no way to set an authentication header on the health checks.

Maybe the log level of this message could be changed?
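
For illustration, switching an existing NLB target group's health check to HTTPS /healthz might look like the sketch below (the target-group ARN is a placeholder, and as noted above the probe only succeeds without credentials when anonymous auth is enabled):

# Point the health check at the apiserver's /healthz over HTTPS, so the
# probe completes a real TLS handshake instead of a bare TCP connect.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/apiserver/TG-ID \
  --health-check-protocol HTTPS \
  --health-check-path /healthz \
  --health-check-port 6443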

@perrefe

perrefe commented Apr 22, 2020

I'm facing this issue on version v1.18.2 installed via kubelet. I'm using the same HAProxy instance (1.5.18) and config as the balancer for version v1.17.3, which has no problems.

@rubenst2013

Same for me on 1.18.2.
Seeing this error after creating a fresh cluster with 3 masters and 2 workers with Vagrant/VirtualBox and kubeadm.

The first two masters seem to be able to communicate properly. On the third master and the two workers I had to restart the kubelet, which then shows the mentioned TLS handshake errors.

The cluster was created a few hours ago. The join tokens are valid for 24 hrs.

@sumitKash

I am on kubeadm version v1.17.4.

I am getting the TLS handshake error, which is causing my kube-apiserver to restart.

Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0430 09:13:29.046060       1 server.go:596] external host was not specified, using 192.168.0.109
I0430 09:13:29.046715       1 server.go:150] Version: v1.17.4
I0430 09:13:29.682184       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0430 09:13:29.682326       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0430 09:13:29.689415       1 client.go:361] parsed scheme: "endpoint"
I0430 09:13:29.689634       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}]
[... the "parsed scheme" / "ccResolverWrapper" pair repeats dozens of times while the storage backends are initialized ...]
I0430 09:13:29.831465       1 master.go:267] Using reconciler: lease
I0430 09:13:30.254297       1 rest.go:115] the default service ipfamily for this cluster is: IPv4
W0430 09:13:31.664953       1 genericapiserver.go:409] Skipping API batch/v2alpha1 because it has no resources.
[... similar "Skipping API ..." warnings for discovery.k8s.io/v1alpha1, node.k8s.io/v1alpha1, rbac.authorization.k8s.io/v1alpha1, scheduling.k8s.io/v1alpha1, storage.k8s.io/v1alpha1, apps/v1beta2, apps/v1beta1 ...]
I0430 09:13:38.182360       1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt
I0430 09:13:38.182592       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I0430 09:13:38.183630       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key
I0430 09:13:38.184672       1 secure_serving.go:178] Serving securely on [::]:6443
[... the usual controller start-up messages follow (DynamicServingCertificateController, CRDFinalizer, cluster_authentication_trust_controller, APIServiceRegistrationController, OpenAPI AggregationController, AvailableConditionController, autoregister, DiscoveryController, NamingConditionController, EstablishingController, crd-autoregister, ...) ...]
I0430 09:13:38.211990       1 log.go:172] http: TLS handshake error from 192.168.5.30:35814: EOF
I0430 09:13:38.272928       1 log.go:172] http: TLS handshake error from 192.168.5.30:35728: EOF
I0430 09:13:38.561863       1 log.go:172] http: TLS handshake error from 192.168.5.30:35730: EOF
[... roughly a hundred more "http: TLS handshake error from 192.168.5.30:<port>: EOF" lines ...]
E0430 09:13:38.553859       1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.0.109, ResourceVersion: 0, AdditionalErrorMsg:
I0430 09:13:38.734400       1 shared_informer.go:204] Caches are synced for crd-autoregister
I0430 09:13:38.762650       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0430 09:13:38.834662       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I0430 09:13:38.834755       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0430 09:13:38.858416       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0430 09:13:38.859250       1 cache.go:39] Caches are synced for autoregister controller
E0430 09:13:38.921389       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E0430 09:13:38.921765       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I0430 09:13:39.190561       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0430 09:14:56.859520       1 dynamic_cafile_content.go:181] Shutting down request-header::/etc/kubernetes/pki/front-proxy-ca.crt
[... the apiserver then shuts down all of its controllers ...]
I0430 09:14:56.865291       1 secure_serving.go:222] Stopped listening on [::]:6443
E0430 09:14:56.883346       1 controller.go:183] Get https://localhost:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:6443: connect: connection refused
Above is the log of the kube-apiserver.

Here 192.168.5.30 is the IP address of the HAProxy API load balancer.

The HAProxy config:

frontend kubernetes
  bind 192.168.5.30:8443
  option tcplog
  mode tcp
  default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
  mode tcp
  balance roundrobin
  option tcp-check
  server master-1 192.168.5.11:6443 check fall 3 rise 2
  server master-2 192.168.5.12:6443 check fall 3 rise 2

Can someone please guide me to a solution for this issue?

@danieleagle

danieleagle commented May 12, 2020

I'm having the exact same issue as @sumitKash. I tried with both CentOS 8 and Ubuntu 18.04 with the same results. What am I missing? I'm using the option to let Kubernetes manage/create all the certificates. The kube-api-server pod keeps restarting over and over.

Edit 1: When I don't specify a config file (e.g. kubeadm init --config /tmp/kubeadm-config.yaml) and instead do everything via command line (e.g. kubeadm init --control-plane-endpoint "LoadBalancerFQDN:6443" --upload-certs), it works fine. I suspect there is an option in the configuration file that is causing the issue.

Edit 2: I removed the option to disable anonymous authentication from the kubeadm-config.yaml file and it worked. I'm still getting TLS handshake errors but the kube-api-server is still up and not restarting.
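
For reference, a minimal sketch of the config-file equivalent of the working command line (kubeadm's v1beta2 API; the FQDN is a placeholder, and the commented-out extraArgs block marks where an anonymous-auth override would live):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: "LoadBalancerFQDN:6443"
# apiServer:
#   extraArgs:
#     anonymous-auth: "false"   # removing this setting let the apiserver stay up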

@zadm

zadm commented May 12, 2020

Hello,

I'm facing the same issue without using a load balancer in front.
I'm trying to integrate Kubernetes with GitLab, and I get this error:
log.go:172] http: TLS handshake error from 34.74.90.67:6325: local error: tls: bad record MAC\n","stream":"stderr","time":"2020-05-12T23:31:17.946990982Z"}

Does anyone have a fix for this?

Thanks in advance

@invidian
Member

Since there isn't much pushback against fixing this and I see a simple way to resolve it, I created #91050 to track the actionable work.

@zadm

zadm commented May 13, 2020

Hello,

Thank you for your response.

Is that the reason why the Kubernetes integration fails?

I have installed GitLab locally and I'm facing the same issue.
The error in GitLab is: "There was a problem authenticating with your cluster. Please ensure your CA Certificate and Token are valid."

As I said in my last comment, the same credentials work fine with curl.
What could be the issue?

Regards

@dajianderichang

Having the same issue.
/priority important-soon

Did you ever resolve this? My cluster is working normally, but these log messages are unsettling.

@mariusmotea

I'm also receiving a lot of "tls: client offered an unsupported, maximum protocol version" records. I discovered these are generated by the HAProxy health check ssl-hello-chk, which uses only SSLv3, while the apiserver requires TLS. I switched to check-ssl, which uses the OpenSSL implementation, and disabled verification because the certificates are self-signed, generated by kubeadm.
Here are my changes:

backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    #option ssl-hello-chk
    balance     roundrobin
        server PZN-BU-ADM-01 10.x.x.11:6443 check check-ssl verify none
        server PZN-BU-ADM-02 10.x.x.12:6443 check check-ssl verify none
        server PZN-BU-ADM-03 10.x.x.13:6443 check check-ssl verify none
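
With this setup HAProxy completes a genuine TLS handshake (check-ssl) and sends its GET /healthz probe inside it, so the apiserver no longer sees half-open connections at handshake time; verify none is needed because the kubeadm-generated certificates are self-signed.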

@kfirfer

kfirfer commented Aug 7, 2020

Happens to me as well, and I'm not using HAProxy.

Fresh install with kubeadm v1.18.6, with 2 control plane nodes.

From APIserver pods:

[kube-apiserver-nuc02] I0807 23:19:15.894341       1 log.go:172] http: TLS handshake error from 172.16.137.160:50864: EOF 
[kube-apiserver-nuc01] I0807 23:19:15.921473       1 log.go:172] http: TLS handshake error from 192.168.5.135:24189: EOF

From etcd pods:

[etcd-nuc02] 2020-08-07 23:19:15.896327 I | embed: rejected connection from "172.16.137.160:40178" (error "EOF", ServerName "") 
[etcd-nuc01] 2020-08-07 23:19:15.923966 I | embed: rejected connection from "192.168.5.135:13800" (error "EOF", ServerName "") 
[etcd-nuc02] 2020-08-07 23:19:15.920069 I | embed: rejected connection from "172.16.137.160:40200" (error "tls: client didn't provide a certificate", ServerName "") 
[etcd-nuc01] 2020-08-07 23:19:15.934204 I | embed: rejected connection from "192.168.5.135:26652" (error "tls: client didn't provide a certificate", ServerName "")

@StrikerExtreem

One of my Raspberry Pi nodes was updated from kernel 4.19 to kernel 5.4 on Raspbian; after that, the TLS handshake errors occurred. Reverting to 4.19 seems to solve it. Maybe this will help the investigation.

@wangweihong

I'm also receiving a lot of "tls: client offered an unsupported, maximum protocol version" records. I discovered these are generated by the HAProxy health check ssl-hello-chk, which uses only SSLv3, while the apiserver requires TLS. I switched to check-ssl, which uses the OpenSSL implementation, and disabled verification because the certificates are self-signed, generated by kubeadm.
Here are my changes:

backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    #option ssl-hello-chk
    balance     roundrobin
        server PZN-BU-ADM-01 10.x.x.11:6443 check check-ssl verify none
        server PZN-BU-ADM-02 10.x.x.12:6443 check check-ssl verify none
        server PZN-BU-ADM-03 10.x.x.13:6443 check check-ssl verify none

This solution works in my environment. Thanks a lot!

@demisx

demisx commented Aug 11, 2020

@wangweihong Can you please share where these changes should be made? We are getting the same errors after upgrading to v1.18.6.

@mariusmotea

@demisx These changes were made in the HAProxy configuration used for the API load balancer.

@demisx

demisx commented Aug 13, 2020

@demisx These changes were made in the HAProxy configuration used for the API load balancer.

Thank you. My cluster is created and managed by kops. If anyone knows how to make these changes via kops so this error goes away, I'd highly appreciate it.

@medined

medined commented Aug 23, 2020

In my cluster, I was seeing the "TLS handshake error" in my apiserver logs. I resolved the issue by changing the load balancer health check from TCP to HTTPS. This change is discussed at kubernetes-sigs/kubespray#6487.

@ahan-ai

ahan-ai commented Oct 11, 2020

I got the same issue. Any solution?

@invidian
Member

This has been fixed since v1.19.0 (eabb362).

@perezjasonr

In my cluster, I was seeing the "TLS handshake error" in my apiserver logs. I resolved the issue by changing the load balancer health check from TCP to HTTPS. This change is discussed at kubernetes-sigs/kubespray#6487.

This appears to have worked for me. Thank you.

This has been fixed since v1.19.0 (eabb362).

Well, to clarify: by "fixed" we mean it's going to trace level so it doesn't flood, right? Technically the issue is still happening; it just won't fill up the kube-apiserver logs, it seems.

@invidian
Member

Well, to clarify: by "fixed" we mean it's going to trace level so it doesn't flood, right? Technically the issue is still happening; it just won't fill up the kube-apiserver logs, it seems.

Yeah, it won't flood by default. But given that the reason for this log message heavily depends on the environment you run on, there is nothing to fix in kube-apiserver itself.

@juliohm1978

Just noticed our apiserver logs flooded with these messages today.

In my case, the nodes serving the apiserver each also run an instance of keepalived, managing a couple of virtual IP addresses for load balancing and failover. The health check in this particular LB setup runs a netcat check against port 6443.

nc -vz 127.0.0.1 6443

The apiserver logs those health checks as failed TLS handshakes. True enough, since the health check does not bother to initiate a proper TLS connection. My best guess is that these error messages will just go away once we upgrade to 1.19 (#91277).
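
In the meantime, a check script that completes a real handshake avoids the noise. A minimal sketch for keepalived (script path and timings are illustrative; the script is wired into the vrrp_instance via track_script):

vrrp_script chk_apiserver {
    # curl -k completes the TLS handshake without verifying the cert,
    # unlike nc, which opens and immediately drops a raw TCP connection
    script "/usr/bin/curl -sfk -o /dev/null https://127.0.0.1:6443/healthz"
    interval 3
    fall 3
    rise 2
}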

@bbellrose1

Was there ever any resolution to this? I see my logs flooded with these errors. I am not running HAProxy. I have a 6-node cluster: 3 control-plane and 3 worker nodes. I see the TLS errors on the worker nodes. The health of the cluster seems to be OK. I'm using certs generated by Kubernetes during kubeadm init, and all certs are valid...

kubelet[2849943]: I0119 08:48:11.744715 2849943 log.go:181] http: TLS handshake error
