Recommended liveness check for kube-apiserver #43784
Temporary workaround for kubernetes/kubernetes#43784 until we can find a better solution.
You can include credentials in the liveness check, or allow unauthenticated requests (until a healthz-only option is available).
And I came so close! Probably going to enable the insecure port, for the time being at least, in kops. Is there an effort to implement an unprotected healthz endpoint? Can I help?
@justinsb are you setting …?
We always set …
Is this a holdover from 1.5.0? From 1.5.1 on, the defaults are reasonable (anonymous auth is disabled by default in 1.5.x, and only enabled in 1.6 when using authorizers that require explicit ACL grants to anonymous users).
It's a safety gate, yes. We never want anonymous authn to the API; even if authz should catch it, it's too easy to make a mistake.
It's certainly up to the deployer, but I expect that to cause difficulties as auth discovery gets built out (…).
kops v1.6.0 references this issue
But it's not clear what action, if any, I should take as part of my configuration. Is that line kops telling me that they're using a workaround, or that I need to change my configuration?
There are currently two ways to allow an unauthenticated health check:
kops disables anonymous requests to the TLS port, so it must enable the unsecured port if it wants to make anonymous health checks.
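As a sketch of what that approach looks like in practice (the port value and flag below are illustrative assumptions, not kops's actual configuration), a probe pointed at the legacy localhost port needs no credentials at all:

```yaml
# Sketch only: assumes kube-apiserver runs with --insecure-port=8080
# (the legacy localhost port, which bypasses authn/authz entirely).
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 15
  timeoutSeconds: 15
```

The trade-off is exactly the one debated in this thread: the insecure port skips all authentication and authorization, so it must never be exposed beyond localhost.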
I sympathize with the kops point. Neither of these options seems particularly safe: enabling the insecure port isn't viable, and anonymous auth seems to have its own subtleties and risks for mistakes. https://gist.github.com/erictune/9dc7ae4b22505b9a8c20ad9cd03a45cc
That was from the 1.5 timeframe, when that option was added. By 1.6, all authorizers were updated to require explicitly granting permissions to anonymous requests. The cautions about granting permissions to …
As of Kubernetes 1.10, the insecure flags will be deprecated: kubernetes/kubernetes#59018. Currently there is no way to allow unauthenticated health checks (requests to kube-apiserver's /healthz endpoint) other than allowing anonymous requests (which we do not want). Related issue: kubernetes/kubernetes#43784. We are now going to use the basic authentication credentials which the kubelet uses to reach the /healthz endpoint.
Can't use NLB because:
* During bootstrap a single instance is in service
* NLB cannot direct traffic to the originating instance
* It preserves IPs

Can't use ALB because:
* Health check is HTTP/HTTPS
* apiserver will reject health checks w/ 401 (unauthorized)
* kubernetes/kubernetes#43784
This is still an issue on v1.17.3. Are there any plans, e.g. a separate port for health checks?
As a workaround I did:

```yaml
livenessProbe:
  failureThreshold: 8
  httpGet:
    host: xxxx
    path: /healthz
    port: 6443
    scheme: HTTPS
    httpHeaders:
    - name: Authorization
      value: Bearer TOKEN
  initialDelaySeconds: 15
  timeoutSeconds: 15
```

So that the kubelet is able to check /healthz.
A quick question, if anyone can help answer. Even with the default setup (--anonymous-auth=true), I'm seeing a 403 code:
But actually the payload is valid:
I can see … This is k8s v1.17.
Oh, never mind. I found that curl -I uses an HTTP HEAD request, which is forbidden by k8s. For anyone who gets as confused as I did, this is what you want:
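The command itself didn't survive in the comment above; a plausible reconstruction (the address and port are assumptions, and `-k` skips TLS verification, so use it only for a quick local sanity check) is:

```shell
# Sketch: use a GET (curl's default), not HEAD (curl -I), against /healthz.
url="https://127.0.0.1:6443/healthz"   # assumed apiserver address and port
echo "curl -sk ${url}"                 # printed here; run it on a control-plane node
```

With anonymous auth enabled this should return "ok"; with it disabled, a 401/403 Status object comes back instead.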
Hi, is there any way to add this to kubeadm-config, so that you don't need to change the kube-apiserver yaml on every upgrade?
I'm running into the "accessible health checks" issue while using RKE2, a distribution that has a goal/feature of passing the CIS Kubernetes benchmark. The CIS control for anonymous auth doesn't take into consideration the usage of the …
Any ideas about adding to kubeadm-config? |
What happens when the token expires though? Do you refresh the token out of band? |
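One possible answer to the expiry question (all names here are hypothetical, and this assumes kubectl 1.24+, where `kubectl create token` exists) is to mint a time-bound token out of band and re-render the probe header before it lapses:

```shell
# Sketch: mint a bounded ServiceAccount token for the probe.
# "healthz-probe" is an illustrative ServiceAccount name, not a real default.
sa="healthz-probe"
cmd="kubectl -n kube-system create token ${sa} --duration=720h"
echo "$cmd"   # printed here; run with cluster-admin credentials, then update the manifest
```

This still leaves rotation as a manual (or cron-driven) chore, which is part of why the thread keeps asking for an unauthenticated healthz option.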
I have an ugly workaround for the kube-apiserver health check that is based on tcpSocket probes. Here is a script that rewrites the manifest (yq as a dependency, but it can be replaced):
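The script itself was lost above; a hypothetical reconstruction using mikefarah's yq v4 (the expression, container index, and path are assumptions about what the original did) would be:

```shell
# Sketch: swap the httpGet liveness probe for a tcpSocket check, which needs
# no credentials because it only opens a TCP connection to the secure port.
manifest="/etc/kubernetes/manifests/kube-apiserver.yaml"   # kubeadm's default static-pod path
expr='del(.spec.containers[0].livenessProbe.httpGet) | .spec.containers[0].livenessProbe.tcpSocket.port = 6443'
echo yq -i "$expr" "$manifest"   # printed here; running it requires yq and root on the node
```

Note the trade-off: a tcpSocket probe only proves the port accepts connections, not that the apiserver is actually healthy.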
Disabling anonymous access causes the API server to crash and the cluster to be unstable (see kubernetes/kubernetes#43784). Signed-off-by: Francesco Ilario <filario@redhat.com>
This issue is tagged api-machinery, but maybe it should be tagged to be handled on the kubeadm side instead? I won't ping …
Hi! The issue still exists in 1.28.2. When … is set, my workaround was:
Only after this did the api-server come up and the probes finish with success. Maybe it will be useful for someone.
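The concrete steps were elided above; one common shape for this kind of workaround (all names below are illustrative, not necessarily what the commenter used) is to let a dedicated ServiceAccount GET the health endpoints and put its token in the probe's Authorization header:

```yaml
# Hypothetical sketch: RBAC for a probe-only ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: healthz-reader          # illustrative name
rules:
- nonResourceURLs: ["/healthz", "/livez", "/readyz"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: healthz-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: healthz-reader
subjects:
- kind: ServiceAccount
  name: healthz-probe           # illustrative name
  namespace: kube-system
```

This keeps --anonymous-auth=false while granting only the non-resource health URLs to one identity.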
Without creating an SA with a token, mTLS also works.
Tried that; unfortunately, the common …
For 1.6, what is the healthz endpoint / recommended liveness check for a secure setup (RBAC, kubeadm discovery not enabled, insecure port disabled)?

```shell
curl https://127.0.0.1/healthz
```

is returning a 401 for me.

Edit: What I originally called "kubeadm discovery not enabled" is now more commonly / correctly called "setting --anonymous-auth=false on apiserver".