
kube-apiserver won't start with OIDC flags #40109

Closed
DMXRoid opened this issue Jan 18, 2017 · 13 comments
Labels
area/apiserver, kind/bug, sig/api-machinery, sig/auth

Comments


DMXRoid commented Jan 18, 2017

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): oidc


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug

Kubernetes version (use kubectl version): 1.5.2

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Debian Jessie (8.6)
  • Kernel (e.g. uname -a): 4.4.41-k8s
  • Install tools: kops
  • Others:

What happened:
When adding the relevant OIDC flags (oidc-issuer-url, oidc-client-id, oidc-username-claim) to the kube-apiserver.manifest file, the kube-apiserver container is unable to restart, and there's just a dead container as a result.

There's a before/after of my config files at:
https://gist.github.com/DMXRoid/93ac2a4dcb91428c91efc5d9d8dc3048

What you expected to happen:
For kube-apiserver to restart as usual, with OIDC authentication support enabled

How to reproduce it (as minimally and precisely as possible):
Add any of those flags to the kube-apiserver command, although oidc-username-claim anecdotally results in the problem less often than the other two.
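
For concreteness, the change boils down to appending the three flags to the existing kube-apiserver command line in the manifest, roughly as in the sketch below; the issuer URL and client ID are placeholders, all other kops-generated flags are elided, and the real before/after is in the gist above.

# Sketch only: OIDC flags appended to the existing kube-apiserver invocation.
# Issuer URL and client ID are placeholders; every other flag kops generates is elided.
kube-apiserver \
  ... \
  --oidc-issuer-url=https://accounts.google.com \
  --oidc-client-id=xxx-yyy.apps.googleusercontent.com \
  --oidc-username-claim=email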

Anything else we need to know: Nope

@calebamiles added the area/apiserver, kind/bug, sig/api-machinery, and sig/auth labels on Jan 19, 2017
@calebamiles (Contributor)

Could someone from @kubernetes/sig-auth-misc and/or @kubernetes/sig-api-machinery-misc please help with triage? Thanks!

@ericchiang (Contributor)

@calebamiles please assign to me.

It looks like you added the following flags.

 --oidc-issuer-url=https://accounts.google.com --oidc-client-id=xxx-yyy.apps.googleusercontent.com --oidc-username-claim=email

What was the error message?

DMXRoid (Author) commented Jan 19, 2017

There was no error message; the apiserver container just didn't restart. It's as if it didn't even try to restart: there's no crashed or errored-out docker container in the output of docker ps -a, just the old (pre-refresh) container, which gets cleared out after a minute or so. I didn't see anything relevant in any of the other kube log files or container logs either.
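
For anyone retracing this, the checks above amount to roughly the following (the grep pattern and the docker inspect fields are just one way to do it, assuming the Docker runtime in use here):

# List every container, including exited ones, and look for the apiserver
docker ps -a | grep kube-apiserver

# If a dead container does show up, its exit code and any runtime error are visible via inspect
docker inspect -f '{{.State.ExitCode}} {{.State.Error}}' <container-id>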

@ericchiang (Contributor)

Sorry if I'm showing my ignorance of kops, but what you posted was your updated static pod, right? It sounds like your change might have resulted in an invalid manifest. Does your kubelet have any relevant logs?
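
Assuming kubelet runs as a systemd unit named kubelet on your nodes, something like this should surface any manifest-parsing complaints, if there are any:

# Dump kubelet logs and filter for anything about the apiserver pod or its manifest
journalctl -u kubelet --no-pager | grep -i -E 'apiserver|manifest|static pod'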

DMXRoid (Author) commented Jan 19, 2017

Yeah, that's the static pod manifest. The kubelet logs don't reveal anything useful: there are entries for tearing down the old container and its associated resources, and then it immediately kills the new container without giving any reason. Here's a snippet of that part in action:

Jan 19 18:37:48 ip-172-27-178-239 kubelet[1771]: I0119 18:37:48.374119    1771 docker_manager.go:1605] Container "5f629926df9d3de66d231e6b9fee30b4908af72fba913fa97d48f0fbf38407f9 kube-apiserver kube-system/kube-apiserver-ip-172-27-178-239.ec2.internal" exited after 30.097010134s
Jan 19 18:37:48 ip-172-27-178-239 kubelet[1771]: E0119 18:37:48.374529    1771 event.go:208] Unable to write event: 'Post http://127.0.0.1:8080/api/v1/namespaces/kube-system/events: dial tcp 127.0.0.1:8080: getsockopt: connection refused' (may retry after sleeping)
Jan 19 18:37:48 ip-172-27-178-239 kubelet[1771]: I0119 18:37:48.375916    1771 docker_manager.go:1564] Killing container "406aac83154850011cb8c4357b4464740ff09058d312803ce5d878e21e29bd6d kube-system/kube-apiserver-ip-172-27-178-239.ec2.internal" with 30 second grace period
Jan 19 18:37:48 ip-172-27-178-239 kubelet[1771]: I0119 18:37:48.573961    1771 docker_manager.go:1605] Container "406aac83154850011cb8c4357b4464740ff09058d312803ce5d878e21e29bd6d kube-system/kube-apiserver-ip-172-27-178-239.ec2.internal" exited after 198.023659ms

So the 5f6299 container (the old one) goes down, and then 406aac is killed immediately afterwards.

@ericchiang (Contributor)

Container "406aac83154850011cb8c4357b4464740ff09058d312803ce5d878e21e29bd6d kube-system/kube-apiserver-ip-172-27-178-239.ec2.internal" exited after 198.023659ms

Seems odd that it wouldn't write anything to /var/log/kube-apiserver.log before exiting. If there's no error message, I don't know how much help I can be. Can you please try to fiddle with your logging to get something out of the API server? Misconfigurations will print to stderr.
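
One way to catch that stderr, assuming the Docker runtime, is to pull the exited container's output before the kubelet garbage-collects it, and to keep watching the host log file. The k8s_ name prefix below is how the kubelet's Docker integration names its containers, so adjust the filter if yours differ:

# Grab stdout/stderr from the most recent (possibly exited) apiserver container
docker logs $(docker ps -a -q --filter name=k8s_kube-apiserver | head -n 1)

# ...and keep an eye on the host-mounted log file
tail -f /var/log/kube-apiserver.log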


nfons commented Jan 24, 2017

@DMXRoid @ericchiang I think this is similar to an issue I created here: kubernetes/kubeadm#106

As you can see from the video posted at https://www.opentest.co/share/aa1d5390d82f11e6a7cb6f2926412bab, it seems the issue appears after 1.5.0.

@ericchiang (Contributor)

Per @lilnate22's comments, I don't think this is an issue with the OIDC flags; it's probably something to do with the way kubelets launch static pods. Maybe we can close this issue and move to a more focused one like kubernetes/kubeadm#106?

DMXRoid (Author) commented Jan 24, 2017

@ericchiang I've tried to get useful logs out of everything, to no avail. Other than those two lines about killing the pod, there's no indication that anything at all is going on.
That said, the behavior here does seem very similar to the issue you linked, except that kubelet does restart the pod when I remove the OIDC flags, and it didn't choke when I added another flag (--runtime-config=batch/v2alpha1). Also, I'm running 1.5.2, not 1.6. The resulting behavior seems similar enough that I'd buy that they have the same root cause, so feel free to close this in favor of the kubeadm bug.

@ericchiang (Contributor)

@calebamiles can you close this one?


nfons commented Jan 24, 2017

@DMXRoid can you try with other kube-apiserver flags? I tried --service-node-port-range.

@calebamiles (Contributor)

Closing per request.


spnzig commented Aug 7, 2017

I am facing a similar error. After configuring the parameters, I run kubectl --user=name@gmail.com get nodes, but I get the error "You must be logged into the server (the server has asked for the client to provide credentials)". Could you please suggest how I can proceed with this? @ericchiang @calebamiles
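
Not an authoritative answer, but for comparison, wiring an OIDC user into kubeconfig for this era of kubectl usually looks roughly like the sketch below. Every value is a placeholder, and the id-token and refresh-token have to be obtained from the identity provider out of band; that particular error generally means the API server either never received a usable id_token or couldn't map its claims to a user it recognizes.

# Sketch: register an OIDC user in kubeconfig (all values are placeholders;
# the id-token and refresh-token must come from your identity provider).
kubectl config set-credentials name@gmail.com \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://accounts.google.com \
  --auth-provider-arg=client-id=xxx-yyy.apps.googleusercontent.com \
  --auth-provider-arg=client-secret=<client-secret> \
  --auth-provider-arg=id-token=<id-token> \
  --auth-provider-arg=refresh-token=<refresh-token>

# Then retry with that user
kubectl --user=name@gmail.com get nodes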
