Describe the bug
The internal and external DNS checks try to query the k8s API but lack the required privileges.
The check still completes, but it apparently tries to do more than its service account allows.
$ k -n kuberhealthy logs dns-status-internal-1606308364
...
time="2020-11-25T12:46:11Z" level=debug msg="Getting pod: dns-status-internal-1606308364 in order to get its node information"
time="2020-11-25T12:46:11Z" level=error msg="Error waiting for node to reach minimum age: pods \"dns-status-internal-1606308364\" is forbidden: User \"system:serviceaccount:kuberhealthy:default\" cannot get resource \"pods\" in API group \"\" in the namespace \"kuberhealthy\""
...
time="2020-11-25T12:46:11Z" level=debug msg="Getting pod: dns-status-internal-1606308364 in order to get its node information"
time="2020-11-25T12:46:11Z" level=error msg="Error waiting for kube proxy to be ready: error getting kuberhealthy pod: pods \"dns-status-internal-1606308364\" is forbidden: User \"system:serviceaccount:kuberhealthy:default\" cannot get resource \"pods\" in API group \"\" in the namespace \"kuberhealthy\""
...
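For what it's worth, the missing permission can be confirmed directly with kubectl's impersonation support (running this requires impersonate rights on the cluster):

$ kubectl auth can-i get pods -n kuberhealthy --as=system:serviceaccount:kuberhealthy:default
no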
Steps To Reproduce
Deploy Kuberhealthy with the internal and external DNS checks enabled.
Expected behavior
No error messages in the log.
Screenshots
Versions
Cluster OS: Ubuntu 20.04
Kubernetes Version: 1.18.8
Kuberhealthy Release or build: 2.3.1
Additional context
I'm not quite sure whether this API access is actually needed.
A possible solution would be to add a dedicated service account, like the one used for the daemonset check. That service account could be granted the privileges needed to query the API, as sketched below.
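A minimal sketch of what that could look like, assuming a dedicated service account per check; all names here are illustrative, not something the project ships:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: dns-status-sa        # illustrative name
  namespace: kuberhealthy
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dns-status-pod-reader
  namespace: kuberhealthy
rules:
  # only the permission the error messages above complain about
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dns-status-pod-reader
  namespace: kuberhealthy
subjects:
  - kind: ServiceAccount
    name: dns-status-sa
    namespace: kuberhealthy
roleRef:
  kind: Role
  name: dns-status-pod-reader
  apiGroup: rbac.authorization.k8s.io

The check's pod spec would then have to reference it via serviceAccountName so the checker pod stops running as kuberhealthy:default.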
It looks like this is caused by our nodecheck package trying to auto-detect which node the checker pod lives on in order to be sure the node is old enough to run the check properly (sometimes there is a race condition right as nodes start up).
The code that fetches the pod information is here. Instead of using that code, we could modify nodecheck to accept the node the pod runs on as a parameter; the node name could then be derived from the Downward API via a spec change on all checks that looks like this (a sketch; the env var name is illustrative):
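spec:
  containers:
    - name: main
      env:
        # Downward API: inject the name of the node this pod was scheduled to.
        # NODE_NAME is an illustrative variable name, not a final choice.
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName

With the node name injected this way, nodecheck would not need to GET its own pod from the API at all, which would remove the need for the extra RBAC permission.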