
If persistHome is enabled, the token in .kube/config isn't renewed #22924

Open
batleforc opened this issue Apr 16, 2024 · 4 comments
Labels
kind/bug: Outline of a bug; must adhere to the bug report template.
severity/P1: Has a major impact to usage or development of the system.
sprint/next

Comments


batleforc commented Apr 16, 2024

Describe the bug

Hello,
I have set up two kinds of environments, one based on the UDI and one that I built. With both images and the persistHome option enabled, I end up with a kubeconfig containing an outdated token after 12 hours (the token lifetime configured in the IdP).

This bug has been reproduced on Kubernetes (K3s, MicroK8s, kubeadm) and will be tested on OpenShift.

Workaround: deleting the /home/user/.kube folder and restarting the workspace fixes it.

Che version

7.84@latest

Steps to reproduce

  1. Set up Eclipse Che with the persistHome option set to true (I hit the bug with both PerUser and PerWorkspace storage)
  2. Start a workspace
  3. Wait long enough for your token to no longer be valid (one way to check is sketched below)
  4. Run kubectl get pod
  5. Enjoy the error.
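
For step 3, a quick way to confirm the persisted token really has expired. This is a minimal sketch, assuming the token is an OIDC JWT, the default /home/user/.kube/config path, and Node >= 15.7 (for base64url decoding); the regex extraction is naive and just for illustration:

```typescript
import * as fs from 'fs';

// Read the persisted kubeconfig and pull out the bearer token.
// Real kubeconfigs are YAML; this regex is only good enough for a spot check.
const raw = fs.readFileSync('/home/user/.kube/config', 'utf8');
const match = raw.match(/token:\s*(\S+)/);

if (match) {
  // OIDC tokens are JWTs: the second dot-separated segment is a
  // base64url-encoded JSON payload carrying the `exp` claim (Unix seconds).
  const payload = JSON.parse(
    Buffer.from(match[1].split('.')[1], 'base64url').toString('utf8'),
  );
  console.log('token expired:', payload.exp * 1000 < Date.now());
}
```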

Expected behavior

Well, I expect my token to be renewed each time I start a workspace.

Runtime

Kubernetes (vanilla)

Screenshots

(screenshot of the kubectl authentication error)

Installation method

chectl/latest, chectl/next

Environment

Windows, Linux

Eclipse Che Logs

No response

Additional context

No response

batleforc added the kind/bug label Apr 16, 2024
@AObuchow

@batleforc thanks for reporting. I believe this is a Che Dashboard issue, as the Dashboard's backend is responsible for injecting the kubeconfig into the workspace pod. However, I believe this injection only happens if the kubeconfig file doesn't exist in the pod's filesystem. When persistUserHome is enabled, the kubeconfig file is stored on the PVC and thus persists across restarts.

The required fix would probably be to re-create the kubeconfig file on workspace startup if a certain amount of time has passed since the workspace was last started (I'm not sure we can actually track this), or to just always re-inject/overwrite the kubeconfig file on workspace startup.
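
A minimal sketch of the write-only-if-absent behavior described above (the path and function name are illustrative assumptions, not the actual Dashboard code):

```typescript
import * as fs from 'fs';
import * as path from 'path';

const KUBECONFIG_PATH = '/home/user/.kube/config'; // assumed mount point

// Sketch of injection that only writes when the file is missing. With
// persistUserHome enabled, the file survives on the PVC, so after the first
// start the early return is always taken and the token on disk keeps aging.
function injectKubeconfig(freshKubeconfig: string): void {
  if (fs.existsSync(KUBECONFIG_PATH)) {
    return; // stale token is never refreshed
  }
  fs.mkdirSync(path.dirname(KUBECONFIG_PATH), { recursive: true });
  fs.writeFileSync(KUBECONFIG_PATH, freshKubeconfig);
}
```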

@batleforc

If there is no other kubeconfig mounted through a secret/configmap, wouldn't it work to check whether the file matches the expected template, test whether the token is still valid, and update it if not?

@AObuchow

If there is no other kubeconfig mounted through a secret/configmap, wouldn't it work to check whether the file matches the expected template, test whether the token is still valid, and update it if not?

That seems like a much better idea than my suggestions, +1 :)
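
A minimal sketch of that check on workspace startup (the function names and the probe endpoint are illustrative assumptions, not Che Dashboard API; a real implementation would trust the cluster CA instead of skipping TLS verification):

```typescript
import * as fs from 'fs';
import * as https from 'https';

const KUBECONFIG_PATH = '/home/user/.kube/config'; // assumed location

// Probe the API server with the persisted token; a 401 means it has expired.
function tokenStillValid(apiServer: string, token: string): Promise<boolean> {
  return new Promise((resolve) => {
    const req = https.request(
      `${apiServer}/apis`,
      {
        headers: { Authorization: `Bearer ${token}` },
        // Illustration only: a real implementation should verify the
        // cluster CA rather than skip TLS verification.
        rejectUnauthorized: false,
      },
      (res) => resolve(res.statusCode !== 401),
    );
    req.on('error', () => resolve(false));
    req.end();
  });
}

// On workspace startup: keep the persisted kubeconfig only while its token
// still authenticates; otherwise overwrite it with a freshly minted one.
async function ensureFreshKubeconfig(
  apiServer: string,
  persistedToken: string,
  freshKubeconfig: string,
): Promise<void> {
  if (fs.existsSync(KUBECONFIG_PATH) && (await tokenStillValid(apiServer, persistedToken))) {
    return;
  }
  fs.writeFileSync(KUBECONFIG_PATH, freshKubeconfig);
}
```

This avoids clobbering a kubeconfig the user has customized when its token still works, and only falls back to re-injection when authentication actually fails.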

@batleforc

I forgot to include this, but the problem has also been reproduced in the latest version of DevSpaces on OpenShift.

AObuchow added the severity/P1 label May 17, 2024