kube-apiserver won't start with OIDC flags #40109
Could someone from @kubernetes/sig-auth-misc and/or someone from @kubernetes/sig-api-machinery-misc please help with triage. Thanks!
@calebamiles please assign to me. It looks like you added the following flags.
What was the error message?
There was no error message; the apiserver container just didn't restart. It's like it didn't even try to get restarted — there's no crashed or errored-out docker container in the output of
Sorry if I'm showing my ignorance of kops, but what you posted was your updated static pod, right? It sounds like your change might have resulted in an invalid manifest. Does your kubelet have any relevant logs?
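(Not part of the thread, but relevant to the "invalid manifest" theory above: the kubelet silently skips static pod manifests it cannot parse, which would match the "no error, no container" symptom. A minimal sketch of a pre-flight syntax check, assuming the manifest is the JSON variant the kubelet also accepts; `validate_manifest` is a hypothetical helper, not part of any Kubernetes tooling.)

```python
import json

def validate_manifest(text):
    """Return (True, None) if text parses as a JSON Pod manifest,
    else (False, reason). A rough pre-flight check only; it does not
    validate the Pod schema the way the apiserver would."""
    try:
        doc = json.loads(text)
    except ValueError as err:
        return False, str(err)
    if not isinstance(doc, dict) or doc.get("kind") != "Pod":
        return False, "parsed, but top-level object is not a Pod"
    return True, None

good = '{"kind": "Pod", "metadata": {"name": "kube-apiserver"}}'
bad = '{"kind": "Pod", "metadata": '  # e.g. a truncated hand edit
print(validate_manifest(good))    # -> (True, None)
print(validate_manifest(bad)[0])  # -> False
```

For YAML manifests, the same idea works with `yaml.safe_load` from PyYAML instead of `json.loads`.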
Yeah, that's the static pod manifest. kubelet logs don't reveal anything useful. There are logs for it tearing down the old container and the associated resources, and then it immediately kills the new container without any reason why. Here's a snip of that part in action:
Seems odd it wouldn't write anything to
@DMXRoid @ericchiang I think I created a similar issue here: kubernetes/kubeadm#106. You can see from the video posted (https://www.opentest.co/share/aa1d5390d82f11e6a7cb6f2926412bab) that the issue seems to have appeared after 1.5.0.
Per @lilnate22's comments, I don't think this is an issue with the OIDC flags; it's probably something to do with the way kubelets launch static pods. Maybe we can close this issue and move to a more directed one like kubernetes/kubeadm#106?
@ericchiang I've tried to get useful logs out of everything, to no avail. Other than those two lines about killing the pod, there's no indication that anything at all is going on. |
@calebamiles can you close this one? |
@DMXRoid can you try with other kube-apiserver flags? I tried
Closing per request. |
I am facing a similar error. After configuring the parameters, I use
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): oidc
Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug
Kubernetes version (use kubectl version): 1.5.2
Environment:
- Kernel (uname -a): 4.4.41-k8s
What happened:
When adding the relevant OIDC flags (oidc-issuer-url, oidc-client-id, oidc-username-claim) to the kube-apiserver.manifest file, the kube-apiserver container is unable to restart, and there's just a dead container as a result.
There's a before/after of my config files at:
https://gist.github.com/DMXRoid/93ac2a4dcb91428c91efc5d9d8dc3048
What you expected to happen:
For kube-apiserver to restart as usual, with support for OIDC authentication enabled.
How to reproduce it (as minimally and precisely as possible):
Add any of those flags to the kube-apiserver command; anecdotally, oidc-username-claim triggers the problem less often than the other two.
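(For reference, the flags in question sit in the static pod's command list roughly like this. This is a hypothetical excerpt, not the reporter's actual configuration — the real before/after is in the gist above — and the issuer URL and client ID values are placeholders.)

```yaml
# Hypothetical excerpt of kube-apiserver.manifest; values are placeholders.
spec:
  containers:
  - name: kube-apiserver
    command:
    - /usr/local/bin/kube-apiserver
    - --oidc-issuer-url=https://accounts.example.com  # placeholder issuer
    - --oidc-client-id=kubernetes                     # placeholder client ID
    - --oidc-username-claim=email
```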
Anything else we need to know: Nope