kube-apiserver log always has TLS handshake error #70411
/sig network
I think client-side logs would provide more info in this case.
@anfernee do you mean the load balancer's logs, or the kubelet logs on the nodes?
Right, the logs from 10.15.4.118 and 10.15.4.119.
@anfernee there are no error logs on the load balancer server. HAProxy is running in Docker.
I don't understand which process has been accessing the API and causing the TLS error.
Can you use your own client, like …?
@anfernee you mean like this?
If not, can you give me an example here? Thanks a lot, buddy.
Right. It looks like your master is fine, which means something is wrong with your HAProxy.
This can be solved by switching the LB health check from TCP to SSL.
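For reference, here is a rough sketch of what such a change could look like in an HAProxy backend. This is an illustration only, not a config from this thread: the server names are hypothetical, and the addresses are the master IPs mentioned later in the issue. `check-ssl verify none` makes the health probe complete a real TLS handshake (without validating the apiserver certificate) instead of a bare TCP connect:

```
backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    default-server inter 10s fall 3 rise 2
    # check-ssl: probe via TLS handshake instead of plain TCP connect
    # verify none: the probe only tests liveness, so skip cert validation
    server master1 10.15.4.253:6443 check check-ssl verify none
    server master2 10.15.4.254:6443 check check-ssl verify none
    server master3 10.15.4.255:6443 check check-ssl verify none
```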
I tried to add an SSL check to my HAProxy, but I got an error like this:
After that, I switched back to the TCP port check and still got the error:
Configuration file:
Do you have any suggestions?
10.15.4.127 is the kube-apiserver and 10.15.4.119 is HAProxy. I don't think this error is caused by a health check:
when I disable HAProxy's health check, I still get the error in the kube-apiserver log.
I'm facing this too.
We are facing this as well. We've tried this configuration:
but no luck. What's strange, though, is that if I disable …
Alas, a long-standing issue: #43784.
Have the same issue. |
I wonder why this issue is closed. I have been jumping from one link to another and cannot find a fix. I have 3 masters behind HAProxy and am suffering from exactly the same problem. Below is the config for the API pod:
Same here. Seeing this a lot after upgrading to k8s 1.12.5.
@boxuan666 ? |
I am seeing this issue too. Can anybody share how it was resolved?
The same happens if kube-apiserver is behind an AWS load balancer, which does TCP health checks by default. Configuring it to use HTTPS still does not fully resolve the problem, as if one uses … Maybe the log level of this message could be changed?
I'm facing this issue on version v1.18.2 installed via kubelet. I'm using the same HAProxy instance (1.5.18) and config as the balancer used for version v1.17.3, which has no problems.
Same for me on 1.18.2. The first two masters seem to communicate properly. On the third master and the two workers I had to restart the kubelet, which then shows the mentioned TLS handshake errors. The cluster was created a few hours ago, and the join tokens are valid for 24 hours.
I am on kubeadm version v1.17.4 and getting the TLS handshake error, which is causing my kube-apiserver to restart.
Here 192.168.5.30 is the API LB IP address of HAProxy. The HAProxy config: `frontend kubernetes backend kubernetes-master-nodes` … Can someone please guide me to a solution for this issue?
I'm having the exact same issue as @sumitKash. I tried both CentOS 8 and Ubuntu 18.04 with the same results. What am I missing? I'm using the option to let Kubernetes manage/create all the certificates. The kube-apiserver pod keeps restarting over and over.
Edit 1: When I don't specify a config file (e.g. kubeadm init --config /tmp/kubeadm-config.yaml) and instead pass everything on the command line (e.g. kubeadm init --control-plane-endpoint "LoadBalancerFQDN:6443" --upload-certs), it works fine. I suspect an option in the configuration file is causing the issue.
Edit 2: I removed the option that disables anonymous authentication from the kubeadm-config.yaml file and it worked. I'm still getting TLS handshake errors, but the kube-apiserver stays up and no longer restarts.
Hello, I'm facing the same issue without using a load balancer in front. Does anyone have a fix for this, please? Thanks in advance.
Since there is not much pushback against fixing this, and I see a simple way to resolve it, I created #91050 to track the actionable work.
Hello, thank you for your response. Is that the reason why the Kubernetes integration fails? I have installed GitLab locally and am facing the same issue. As I said in my last comment, the same credentials work fine with curl. Regards.
Have you resolved this? My cluster works fine, but this log message is unsettling.
I also receive a lot of …
Happens to me as well. Fresh install with kubeadm v1.18.6, with 2 control-plane nodes.
From the apiserver pods:
From the etcd pods:
One of my Raspberry Pi nodes was updated from kernel 4.19 to kernel 5.4 on Raspbian; after that, TLS handshake errors occurred. Reverting to 4.19 seems to solve it. Maybe this will help with investigating.
This solution works in my environment, Thanks a lot! |
@wangweihong Can you please share where these changes should be made? We are getting the same errors after upgrading to v1.18.6 |
@demisx these changes were made in the HAProxy configuration used for the API load balancer.
In my cluster, I was seeing the "TLS handshake error" in my apiserver logs. I resolved the issue by changing the load balancer health check from TCP to HTTPS. This change is discussed at kubernetes-sigs/kubespray#6487. |
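To illustrate the difference between the two kinds of probes, here is a minimal Python sketch, not taken from any of the tools in this thread: a TCP check merely connects and disconnects (which an HTTPS endpoint like the apiserver logs as a failed TLS handshake), while a TLS-aware check completes the handshake the server expects.

```python
import socket
import ssl


def tcp_check(host, port, timeout=2.0):
    """Plain TCP health check: open a connection and close it.
    An HTTPS server logs this as a TLS handshake error, since the
    peer disappears before a ClientHello ever arrives."""
    with socket.create_connection((host, port), timeout=timeout):
        return True


def tls_check(host, port, timeout=2.0):
    """TLS-aware health check: completes a real handshake, so the
    server sees an ordinary client. Certificate verification is
    skipped because this probe only tests liveness."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version() is not None
```

A probe like `tls_check` is roughly what an "SSL" or "HTTPS" health-check mode in a load balancer does under the hood, which is why switching the check mode stops the log noise.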
I got the same issue, any solution? |
This has been fixed since v1.19.0 (eabb362). |
This appears to have worked for me. Thank you.
Well, to clarify: by "fixed" we mean the message moves to trace level so it doesn't flood the logs, right? Technically the handshake errors still happen; they just won't fill up the kube-apiserver logs anymore.
Yeah, it won't flood by default. But given that the reason for this log message depends heavily on the environment you run in, there is nothing to fix in kube-apiserver itself.
Just noticed our apiserver logs flooded with these messages today. In my case, the nodes serving the apiserver each also run an instance of keepalived, which manages a couple of virtual IP addresses for load balancing and failover. The health check in this particular LB setup runs netcat against port 6443.
The apiserver logs those health checks as failed TLS handshakes, which is true enough, since the health check never initiates a proper TLS connection. My best guess is these error messages will simply go away once we upgrade to 1.19 (#91277).
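The server side of this can be reproduced without a cluster at all. The following is a small illustrative sketch (not kube-apiserver code) of what any TLS server sees when a probe speaks plain HTTP, or nothing, to an HTTPS port. It uses Python's in-memory BIOs, so no sockets or certificates are needed:

```python
import ssl

# A server-side TLS object backed by in-memory buffers instead of
# a real socket.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
incoming = ssl.MemoryBIO()   # bytes "received" from the client
outgoing = ssl.MemoryBIO()   # bytes the server would send back
server = ctx.wrap_bio(incoming, outgoing, server_side=True)

# A health check that sends plain HTTP to the HTTPS port delivers
# bytes that are not a valid TLS ClientHello.
incoming.write(b"GET /healthz HTTP/1.0\r\n\r\n")
try:
    server.do_handshake()
except ssl.SSLError as exc:
    # This failure is what the apiserver reports as
    # "http: TLS handshake error".
    print("TLS handshake error:", exc)
```

The same failure mode occurs when the probe connects and immediately disconnects: the handshake then fails with an EOF instead of a parse error, matching the messages quoted in this thread.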
Was there ever any resolution to this? I see my logs flooded with these errors. I am not running HAProxy. I have a 6-node cluster, 3 control-plane and 3 worker nodes, and I see the TLS errors on the worker nodes. The health of the cluster seems fine. I'm using certs generated by Kubernetes during kubeadm init, and all certs are valid... kubelet[2849943]: I0119 08:48:11.744715 2849943 log.go:181] http: TLS handshake error
/triage support
In my cluster, the kube-apiserver log always shows TLS handshake errors like this:
10.15.4.118 and 10.15.4.119 are LB servers running HAProxy.
10.15.4.253–255 are my master servers, and the same error is reported on all three machines. kube-controller-manager and kube-scheduler show no error logs.
Can anyone please help me resolve this?