Cannot authenticate with service-account default token #22351
Update: deleting the default token secret and letting Kubernetes create a new secret solved the problem. Still, it is a bit concerning that Kubernetes allows invalid token secrets to stick around. |
@erimatnor, how did you delete the default token secret? |
To get the default token secret, execute: To delete it (for example, in the kube-system namespace) so it regenerates, execute: It isn't necessary to restart the kubelet service; the token will be created automatically. |
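The commands referenced above were lost in the thread; a sketch of what they likely look like, assuming the kube-system namespace and a placeholder secret name (the `default-token-xxxxx` suffix differs per cluster):

```shell
# Find the default service account token secret (name suffix varies per cluster):
kubectl get secrets --namespace=kube-system

# Delete it; the token controller recreates it with a freshly signed token:
kubectl delete secret default-token-xxxxx --namespace=kube-system
```

Note that pods still mounting the old token may need to be recreated to pick up the new one.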
This hit me today. @rondinelisaad's steps fixed the issue, but why would the token get corrupted or invalidated? |
if you have changed the service account token signing key, existing tokens in etcd will no longer validate |
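The effect described above can be illustrated with a toy sketch. Kubernetes actually signs service account tokens as RSA-signed JWTs, not HMACs, but the consequence of rotating the signing key is the same: every previously issued token stops verifying.

```python
import hashlib
import hmac

def sign(payload: bytes, key: bytes) -> bytes:
    # Conceptual stand-in for the controller manager signing a token.
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(payload: bytes, sig: bytes, key: bytes) -> bool:
    # Conceptual stand-in for the API server verifying a token.
    return hmac.compare_digest(sign(payload, key), sig)

old_key, new_key = b"old-signing-key", b"new-signing-key"
subject = b"system:serviceaccount:default:default"
token_sig = sign(subject, old_key)

# Verification succeeds only while the verifier still holds the old key:
assert verify(subject, token_sig, old_key)
# After the key is rotated, every existing token fails verification:
assert not verify(subject, token_sig, new_key)
```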
@liggitt Which key is used to sign the service accounts? When running multiple API Servers, does the TLS private key have to be the same for all of them? We have seen in one of our HA clusters that requests to one of the API servers constantly fails with the verification error when using the service account from within a pod. |
The service account public key has to be the same for all. In an HA setup that means you need to explicitly give it to each apiserver (recommended) or make all the apiservers use the same serving cert/private TLS key (not recommended) |
The service account token private key given to the controller manager is used to sign the tokens. |
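A sketch of the flags involved in the HA setup described above (the file paths are illustrative assumptions, not from the thread):

```shell
# Signing side: the controller manager holds the private key.
kube-controller-manager \
  --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
  ...

# Verification side: every API server gets the matching public key.
kube-apiserver \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub \
  ...
```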
@liggitt thanks! |
@liggitt what do you mean by "...explicitly give it to each apiserver..."? How? I have not seen any documentation mention this seemingly huge flaw in HA deployments. |
distribute the public key you want the api servers to use to verify service account tokens as part of distributing the configuration/options. |
I misinterpreted the CoreOS instructions about the api-server and controller-manager configurations. The CLI parameter --service-account-private-key-file= must point at the same key for all api-servers and controller-managers. Now I understand. Thanks. |
1. What is --service-account-private-key-file= for? 2. What is the use of the key generated when --admission-control=ServiceAccount is set? Without it, the apiserver.crt and apiserver.key files are not generated. |
@EamonZhang As a convenience, you can provide a private key to both, and the public key portion of it will be used by the api server to verify token signatures. As a further convenience, the private key of the api server's serving certificate is used to verify service account tokens if you don't specify one. |
I understand clearly now. Thanks. |
@liggitt: so that does make it more complicated to deploy. With the other components (client auth, e.g.), each master server has its own public/private keypair; as long as the cert is signed by the same CA, all is good. With this, it looks like it must be the same key (not just signed by the same CA), which means we need to worry about private key distribution. Is that correct? |
It is a private key, not a certificate. Only the controller manager needs the private key, which it uses to sign the tokens. The masters only need the public key portion in order to verify the tokens signed by the controller manager. |
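Concretely, the distribution problem reduces to deriving the public half from the signing key and copying only that to the masters; for example with openssl (filenames are illustrative):

```shell
# Generate the signing keypair once (kept only by the controller manager):
openssl genrsa -out sa.key 2048

# Derive the public half, which is safe to distribute to every API server:
openssl rsa -in sa.key -pubout -out sa.pub
```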
That was my question. Why is everything else proper CA structure, but here it is a raw private key? It means I need to distribute the key to all master nodes in the cluster. I already have to distribute a CA key (so the masters can sign their server certs); it would be good to have this follow the same philosophy: as long as your token is a cert signed by the CA, you are good. |
You can't use a certificate for token auth. |
That's what I don't get. Why not? Everything else uses certs and validates using a CA, and this even has the (API) infra in place. Why does the controller not follow the same pattern and, instead of issuing a token, issue a cert and have the API server validate it with a CA? |
Hi, I have a similar issue: kube-dashboard gets the CrashLoopBackOff error after the pod has been up for 100+ days. I ran kubectl delete secret default-token-9shsv --namespace=kube-system, but it didn't seem to resolve the kube-dashboard CrashLoopBackOff. |
Did you try to delete the pod? It could still have the old token mounted maybe |
@liggitt it's interesting that one should use a "pure" key pair for signing and verifying service account tokens rather than a certificate (with an expiration date). I guess that implies this key should never expire, and thus never be changed. But what if one wants to change this key anyway, for whatever reason? Let's say it's simply sane to rotate keys every now and then. In that case, it seems changing the key pair will not automatically regenerate service account tokens. In the wonderful and magical world of Kubernetes, where most things happen automatically, this looks like a bug to me. Could you perhaps shed some light on the rationale behind this, on how it is intended to be handled, or on what the best practice is for this particular key pair (for signing service account tokens)? |
I have a pod that uses the default service account token to speak to the API server. However, sometimes this token cannot be used to authenticate with the API server, and the logs give this error:
The same thing happens when manually reading the token from the service account secret with kubectl and using the token to curl the API server. It appears that the token signature is invalid: either the token is bad, or the API server has changed its signing key and not updated the token.
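One way to debug a failure like this is to decode the token (a JWT) without verifying it, to see which namespace and service account it claims to belong to. A sketch in Python using only the standard library; the token below is fabricated so the example is self-contained:

```python
import base64
import json

def decode_jwt_unverified(token: str):
    """Base64url-decode a JWT's header and payload.

    This does NOT verify the signature; it only lets you inspect what
    the token claims (issuer, namespace, service account name).
    """
    def b64url_decode(part: str) -> bytes:
        # Restore the padding that JWTs strip from base64url segments.
        return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

    header_b64, payload_b64, _sig = token.split(".")
    return (json.loads(b64url_decode(header_b64)),
            json.loads(b64url_decode(payload_b64)))

# Fabricated token, built here only to make the example runnable:
def b64url(d: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(d).encode()).rstrip(b"=").decode()

token = ".".join([
    b64url({"alg": "RS256", "typ": "JWT"}),
    b64url({"iss": "kubernetes/serviceaccount",
            "kubernetes.io/serviceaccount/namespace": "kube-system"}),
    "sig-placeholder",
])
header, payload = decode_jwt_unverified(token)
print(header["alg"], payload["iss"])
```

If the claims look right but the API server still rejects the token, the signature is the likely culprit, which points at a changed signing key as described above.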