Getting errors when attempting token authentication from kubelet #10297
I'm having difficulty getting token-based authentication configured without degrading the behavior of the kubelets.
I've got a master and 3 nodes, running processes like so:
== master (10.0.0.2) ==
== nodes (10.0.0.3, 10.0.0.4, 10.0.0.7) ==
My token files are here, with both auth file and kubeconfig variations:
== known_tokens.csv ==
== kubernetes_auth ==
== kubelet_config ==
== kube_proxy_config ==
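(For reference, since the actual file contents weren't preserved in this copy: the static token file the apiserver reads is a CSV of `token,user,uid`, optionally followed by a quoted group list. A minimal sketch with placeholder values:)

```
abcdef0123456789deadbeef00000000,kubelet,kubelet
```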
For kube-proxy, everything seems to behave the same regardless of whether I use the auth-file or the kubeconfig variation.
For kubelet, I'm having more difficulty. I seem to be able to get things to work if I use non-https, non-secure port, and no auth_path or kubeconfig.
However, when using https + the secure port for the kubelet, things break in the various ways detailed below.
(As a baseline question -- can token-based authentication be used with either "http + insecure port" or "https + secure port"? If not, that might chop the following table in half right away.)
Below is a summary of the flag combinations I've attempted for the kubelet.
Can you help clarify which combination I should be aiming for in the first place, and weigh in on what appears to be going awry? And for that matter, is the "Steady data flow" output correct, or is that a red herring and still not behaving correctly?
Note that "yes" and "no" for auth_path and kubeconfig are shorthand for the presence or absence of those flags.
== Error getting node: ==
== Rejected event, then no data flow: ==
== Initial event, then no data flow: ==
== Certificate errors: ==
== Steady data flow: ==
For the insecure port, the apiserver doesn't do any authentication checking. So it's both insecure in the sense that the traffic is unencrypted and also in the sense that all requests are accepted without any extra credentials (this is why it's configured to only listen on localhost by default).
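(As an illustration of the two-port setup described above — flag values here are placeholders, not the original configuration, and exact flag spellings have varied across releases — an apiserver might be started with a localhost-only insecure port alongside an authenticated secure port like so:)

```shell
# Illustrative only: the insecure port is unauthenticated and unencrypted,
# so it is bound to localhost; the secure port does TLS plus token auth.
kube-apiserver \
  --insecure-bind-address=127.0.0.1 \
  --insecure-port=8080 \
  --secure-port=6443 \
  --token-auth-file=/srv/kubernetes/known_tokens.csv
```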
Since you set
For GCE, we use a
It seems like what you are missing in your kubeconfig files is the context which pulls together the cluster definition with a user definition and the specification of the current context. Can you try adding the following to the bottom of your kubelet's kubeconfig file:
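(Concretely, that stanza might look like the following — the cluster and user names here are placeholders and must match the names already defined under `clusters:` and `users:` in your kubeconfig:)

```yaml
contexts:
- context:
    cluster: local     # must match a name under "clusters:"
    user: kubelet      # must match a name under "users:"
  name: kubelet-context
current-context: kubelet-context
```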
and see if that helps?
We’re going through old support issues and asking everyone to direct their questions to Stack Overflow.
We are trying to consolidate the channels to which questions for help/support are posted so that we can improve our efficiency in responding to your requests, and to make it easier for you to find answers to frequently asked questions and how to address common use cases.
We regularly see messages posted in multiple forums, with the full response thread only in one place or, worse, spread across multiple forums. Also, the large volume of support issues on GitHub is making it difficult for us to use issues to identify real bugs.
The Kubernetes team scans Stack Overflow on a regular basis, and will try to ensure your questions don't go unanswered.
Updating the closed ticket, in case anyone finds this while looking for documentation on how to use tokens for cluster authentication.
First -- adding the context info (per @roberthbailey's advice) was a key breakthrough -- thanks Robert!
Now, to recap:
Per a buried example, tokens can be generated using:
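(The exact command wasn't preserved in this copy; one common sketch for generating a 32-character random token, assuming `/dev/urandom` and the usual coreutils, is:)

```shell
# Read random bytes, base64-encode them, strip characters that are awkward
# in CSV files and flags, and keep the first 32.
dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null
```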
Next, I generated a token
I couldn't find any doc location which contained a token-usage walkthrough, so I figured I'd at least share the set of touchpoints that worked for us. For posterity. :-)
(EDIT: I initially included a separate token for a user matching each process, e.g. one token for a "kubelet" user, one for a "kube-scheduler" user, etc., but then realized that a single token suffices -- per-process naming is not necessary. It's still possible, in which case each process would load its own auth config file with its own token, but it's not essential to the core example.)
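(Putting the pieces together, a minimal kubelet kubeconfig along these lines would look roughly like the following — the server address, port, names, and token are all placeholders for this sketch:)

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://10.0.0.2:6443
    insecure-skip-tls-verify: true   # or point certificate-authority at the cluster CA
  name: local
users:
- name: kubelet
  user:
    token: REPLACE_WITH_GENERATED_TOKEN   # must appear in known_tokens.csv
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
```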