Remove cached exec credentials when saving a kubeconfig #724

Conversation

@waynr (Contributor) commented on Dec 23, 2019

We have a customer who reported the following unauthorized error:

error: You must be logged in to the server (Unauthorized)

The error appeared when they attempted to use a kubeconfig configured to exec doctl kubernetes cluster kubeconfig exec-credential --version=v1beta1 --context=personal <cluster-id>, which obtains an auth token from doctl that kubectl then passes to the cluster's apiserver for authentication.
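For context on the mechanism: kubectl runs the configured exec command, reads a client.authentication.k8s.io ExecCredential object as JSON from its stdout, and sends status.token to the apiserver as a bearer token. The Go sketch below illustrates the shape of that output; the token value and expiry are placeholders, and this is not doctl's actual implementation.

package main

import (
	"encoding/json"
	"os"
	"time"
)

// Shape of the v1beta1 ExecCredential object an exec plugin prints.
type ExecCredentialStatus struct {
	Token               string    `json:"token"`
	ExpirationTimestamp time.Time `json:"expirationTimestamp"`
}

type ExecCredential struct {
	APIVersion string               `json:"apiVersion"`
	Kind       string               `json:"kind"`
	Status     ExecCredentialStatus `json:"status"`
}

func main() {
	// kubectl parses this JSON from the plugin's stdout and uses
	// status.token as the bearer token; expirationTimestamp tells it
	// how long the credential may be reused.
	cred := ExecCredential{
		APIVersion: "client.authentication.k8s.io/v1beta1",
		Kind:       "ExecCredential",
		Status: ExecCredentialStatus{
			Token:               "placeholder-token", // illustrative only
			ExpirationTimestamp: time.Now().Add(time.Hour),
		},
	}
	json.NewEncoder(os.Stdout).Encode(cred)
}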

Our internal logging suggests that the token being passed to their cluster is not valid for that particular cluster.

This token is generated during an initial call to doctl kubernetes cluster kubeconfig exec-credential and cached locally in ~/.config/doctl/cache (at least on my Linux laptop) for subsequent exec-credential calls. The idea is to avoid retrieving or generating a token on every Kubernetes API request, since doing so would effectively double API usage and put customers with automated use cases at risk of API limit exhaustion.
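To make the caching behavior concrete, here is a rough Go sketch of the reuse-until-expired pattern described above. The cache file layout, field names, and helpers are assumptions for illustration, not doctl's real code.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

type cachedCredential struct {
	Token     string    `json:"token"`
	ExpiresAt time.Time `json:"expiresAt"`
}

// cachePath is a hypothetical per-cluster cache file layout,
// e.g. ~/.config/doctl/cache/exec-credential-<cluster-id>.json
func cachePath(clusterID string) string {
	home, _ := os.UserHomeDir()
	return filepath.Join(home, ".config", "doctl", "cache",
		"exec-credential-"+clusterID+".json")
}

// getToken returns the cached token while it is still valid locally;
// otherwise it calls fetch (standing in for the DigitalOcean API call)
// and caches the fresh credential for subsequent invocations.
func getToken(clusterID string, fetch func() (cachedCredential, error)) (string, error) {
	path := cachePath(clusterID)
	if data, err := os.ReadFile(path); err == nil {
		var c cachedCredential
		if json.Unmarshal(data, &c) == nil && time.Now().Before(c.ExpiresAt) {
			return c.Token, nil // cache hit: no API request made
		}
	}
	c, err := fetch()
	if err != nil {
		return "", err
	}
	if data, err := json.Marshal(c); err == nil {
		_ = os.WriteFile(path, data, 0600)
	}
	return c.Token, nil
}

func main() {
	token, err := getToken("cluster-uuid", func() (cachedCredential, error) {
		// placeholder for the real token-minting API request
		return cachedCredential{Token: "fresh-token", ExpiresAt: time.Now().Add(time.Hour)}, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(token)
}

The property relevant to this bug is that getToken keeps returning the cached token until it expires locally, even if the token has already been deleted server-side.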

I was only able to reproduce the problem reported by the customer in the following scenario:

  • starting with empty ~/.kube/config
  • starting with empty ~/.config/doctl/config.yaml
  • starting with empty ~/.config/doctl/cache
  • using kubectl v1.17.0
  • using doctl v1.36.0

Then running the following series of commands:

  • Initialize my local doctl config
$ doctl --context personal auth init # enter manually-created auth token
  • Update my ~/.kube/config to authenticate with my cluster using doctl kubernetes cluster kubeconfig exec-credential
$ doctl --context personal kubernetes cluster kubeconfig save <cluster-name>
  • Verify that I can access the cluster:
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   5d1h
kube-node-lease   Active   5d1h
kube-public       Active   5d1h
kube-system       Active   5d1h
  • Log in to https://cloud.digitalocean.com
  • Delete the auth token generated by doctl kubernetes cluster kubeconfig exec-credential when I first ran kubectl get namespaces; such tokens have names that look like doks-<cluster-uuid>-<expiration-datetime>
  • Wait a few minutes
  • Verify that I can no longer access the cluster:
$ kubectl get namespaces
error: You must be logged in to the server (Unauthorized)

I can see that the cached token continues to be used by running doctl kubernetes cluster kubeconfig exec-credential <cluster-name>.

The fix here does not directly address the Unauthorized error; instead, it gives the customer a way to invalidate the cache by re-running doctl --context personal kubernetes cluster kubeconfig save <cluster-name>, which, with this change in place, always removes the cached token if it exists.
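Conceptually, the change amounts to deleting any cached credential for the cluster during the save path, so the next exec-credential invocation has to mint a fresh token. A minimal sketch, again with hypothetical names, reusing the cache layout assumed in the earlier sketch:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// execCredentialCachePath mirrors the hypothetical per-cluster cache
// layout from the sketch above.
func execCredentialCachePath(clusterID string) string {
	home, _ := os.UserHomeDir()
	return filepath.Join(home, ".config", "doctl", "cache",
		"exec-credential-"+clusterID+".json")
}

// invalidateCachedExecCredential removes the cached token, if any, so
// the next exec-credential call is forced to request a new one.
func invalidateCachedExecCredential(clusterID string) error {
	err := os.Remove(execCredentialCachePath(clusterID))
	if os.IsNotExist(err) {
		return nil // nothing cached: already effectively invalidated
	}
	return err
}

func main() {
	// Hypothetically called from the `kubeconfig save` handler before
	// writing the cluster's kubeconfig entry.
	if err := invalidateCachedExecCredential("cluster-uuid"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}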

@Verolop (Contributor) left a comment

LGTM. Thanks for taking the time to reproduce the issue in such detail!

@Verolop merged commit 332ad96 into digitalocean:master on Jan 9, 2020