
Kubectl error "You must be logged in to the server" #30230

Closed
Nicolas-TS opened this issue Nov 25, 2020 · 19 comments

@Nicolas-TS

Nicolas-TS commented Nov 25, 2020

SURE-3029
SURE-3609
SURE-3394

Hi everyone!

What kind of request is this (question/bug/enhancement/feature request):

Question or bug

Steps to reproduce (least amount of steps as possible):

  1. Add an LDAP user (FreeIPA) to a project (as owner, member, ...)
  2. Run a kubectl command (through the Rancher UI or using the kubeconfig file): it works as expected (see the example command below the result)
  3. After 24 hours, when the user executes a kubectl command, the following error is displayed:
error: You must be logged in to the server (the server has asked for the client to provide credentials)

Result:
error: You must be logged in to the server (the server has asked for the client to provide credentials)
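
For clarity, a rough sketch of the command from step 2 (the kubeconfig path is only an example):

# Example only: run kubectl with the kubeconfig downloaded from the Rancher UI.
kubectl --kubeconfig ~/Downloads/my-cluster.yaml get nodes
# After ~24 hours the same command starts failing with the error shown above.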

Other details that may be helpful:

  • LDAP Authentication : FreeIPA
  • Certificates signed by an internal PKI (the CA cert is imported during the IPA connector configuration)
  • Rancher upgraded from 2.1.8 to 2.4.5
  • Problem still exists with a Rancher 2.4.5 fresh install
  • Identified workaround: the user has to delete their tokens so that they are regenerated when they get a new kubeconfig file
  • No problem with local users

Environment information

  • Rancher version (rancher/rancher or rancher/server image tag, or shown at the bottom left in the UI): Rancher 2.4.5
  • Installation option (single install/HA): Single Install

Cluster information

  • Cluster type (Hosted/Infrastructure Provider/Custom/Imported): vSphere cluster
  • Machine type (cloud/VM/metal) and specifications (CPU/memory): Virtual Machines
  • Kubernetes version (use kubectl version): v1.18.3
  • Docker version (use docker version): 19.3.11

Thank you for your help !

Nicolas

@Nicolas-TS
Author

Hello,

Additional information :

Just noticed through the v3/tokens API that the tokens are not expired, but enabled is set to false:

"authProvider": "local",
"baseType": "token",
"clusterId": null,
"created": "2020-11-24T14:59:46Z",
"createdTS": 1606229986000,
"creatorId": null,
"current": false,
"description": "Kubeconfig token",
"enabled": false,
"expired": false,
"expiresAt": "",
"groupPrincipals": null,
"id": "kubeconfig-u-5oz67ae73y",

The "enabled" field goes from "true" to "false" after ~24 hours. I didn't have this behavior in Rancher 2.1.8.

@papanito

papanito commented Jan 4, 2021

Identified workaround : User has to delete his tokens to regenerate them when he gets his new kubeconfig file

definitely helped me ;-)

@nikhilno1

I am facing the same problem. I configured AD yesterday and today I am hitting this issue.
How do I delete the user's tokens, as mentioned in the workaround?

Identified workaround : User has to delete his tokens to regenerate them when he gets his new kubeconfig file

@nikhilno1

Found it. You have to click on the Avatar and select "API & Keys". Thanks for the workaround.
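
For anyone who prefers the API over the UI, the same cleanup can be sketched roughly like this (token id, server URL, and credentials are placeholders):

# Sketch: delete the stale kubeconfig token via the Rancher v3 API.
curl -sk -X DELETE \
  -H "Authorization: Bearer token-xxxxx:<secret>" \
  "https://<rancher-server>/v3/tokens/kubeconfig-u-xxxxx"
# Then download a fresh kubeconfig for the cluster from the Rancher UI.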

@nikhilno1

Hello, in my setup I have to keep doing this workaround every hour or so (delete the key, fetch and update the new kubeconfig file). This is quite annoying. Any idea why this is happening and how I can fix it?

@Nicolas-TS
Author

Hi,

I don't have a technical solution for version 2.4.5.

We upgraded Rancher (to 2.5.3) and now use another LDAP directory (OpenLDAP) for authentication.

We no longer have this issue (tokens are still valid after 24 hours).

@nikhilno1
Copy link

I am on the latest version (v2.5.5) and even removed the Active Directory configuration, but I am still facing the problem.
A few minutes after applying the workaround (deleting the keys and generating a new kubeconfig), the problem appears again.
When I look at the v3/tokens API, I can see the enabled flag has been changed to false:
"enabled": false,

If I try to change it back to true, I get a 404 error:

{
"baseType": "error",
"code": "NotFound",
"message": "no store found",
"status": 404,
"type": "error"
}

I have tried many options (tokens, CLI authentication), but the same problem occurs.
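
For reference, the re-enable attempt described above looks roughly like this (same placeholders as before); it is what returns the "no store found" 404:

# Sketch: attempt to flip the token back to enabled (fails with the 404 above).
curl -sk -X PUT \
  -H "Authorization: Bearer token-xxxxx:<secret>" \
  -H "Content-Type: application/json" \
  -d '{"enabled": true}' \
  "https://<rancher-server>/v3/tokens/kubeconfig-u-xxxxx"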

@michalgar

Hi, we are experiencing the same problem.
The workaround of deleting the API token works, but only temporarily.
Did anyone manage to solve it? Any insights on this, please?

@nikhilno1

nikhilno1 commented May 28, 2021 via email

@stale

stale bot commented Sep 1, 2021

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 60 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

@stale stale bot added the status/stale label Sep 1, 2021
@StrongMonkey StrongMonkey self-assigned this Sep 13, 2021
@stale stale bot removed the status/stale label Sep 13, 2021
@StrongMonkey StrongMonkey added this to the v2.x - Backlog milestone Sep 13, 2021
@StrongMonkey
Contributor

reopen for investigation

@SheilaghM SheilaghM modified the milestones: v2.x - Backlog, v2.5.11 Sep 30, 2021
@Jono-SUSE-Rancher Jono-SUSE-Rancher modified the milestones: v2.5.11, v2.5.12 Oct 26, 2021
@StrongMonkey
Contributor

@SheilaghM This needs more investigation, as we need to reproduce it in our setup. As mentioned before, this is caused by our refresh logic not being able to recognize users, so the tokens that belong to those users are disabled.

@gauravbodar

We are facing the exact same issue. However, we have removed and reinstalled RKE for the Rancher host and are now trying to import the existing clusters, but all of the existing Kubernetes clusters are locked and throw the above error, so we are not sure how to import them into Rancher again.

@deniseschannon deniseschannon changed the title [2.4.5] Kubectl error "You must be logged in to the server" Kubectl error "You must be logged in to the server" Dec 4, 2021
@cbron cbron modified the milestones: v2.5.13, v2.6.4 - Triaged Dec 7, 2021
@zube zube bot removed the [zube]: To Triage label Dec 7, 2021
MbolotSuse added a commit that referenced this issue Jan 6, 2022
@MbolotSuse
Contributor

Has anyone experienced this issue with 2.6.3?

I'm thinking that this is caused by an integer overflow in v2.5 of the go-ldap package. We used that version in Rancher up until 2.6.3, where we switched to v3.4. v3.4 has a larger destination integer (there's still potential for overflow, but the result code would have to be substantially higher), so the overflows shouldn't happen (according to IANA, 4096 is the current highest error code, so uint16 should be sufficient).

The reasoning here is that, because of this overflow, Rancher isn't seeing the errors, so it's disabling the kubeconfig tokens (since it thinks that the errored searches indicate that the user lost access). It would help to know if people are still experiencing the issue on 2.6.3, as that would indicate the issue is elsewhere.
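
To illustrate the overflow idea with plain arithmetic (a sketch, not the actual go-ldap code): a result code that doesn't fit in the destination integer keeps only its low bits, so a hypothetical code of 4096 truncated to 8 bits reads as 0, which LDAP defines as success, while a 16-bit destination preserves it:

# Illustration only: truncating a hypothetical LDAP result code of 4096.
echo $(( 4096 & 0xFF ))    # 8-bit destination  -> 0 (looks like LDAP "success")
echo $(( 4096 & 0xFFFF ))  # 16-bit destination -> 4096 (error is preserved)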

@samjustus
Collaborator

samjustus commented Feb 15, 2022

This was solved in 2.5.12 and 2.6.3; we will reopen if we see it again.

@Nox-404

Nox-404 commented Mar 14, 2022

Hello,
I'm facing the same issue with v2.4.5. Is there a way to backport this fix?

@lukibahr

lukibahr commented Apr 19, 2022

I'm facing the same issue in 2.6.3.

@hoerup

hoerup commented Apr 21, 2022

Hmm, I forgot to write back here, but even after upgrading to 2.5.12 we kept running into the issue.
But then I got the idea of granting privileges directly to individual users instead of via LDAP groups, and the issue disappeared!

@Nox-404

Nox-404 commented Apr 21, 2022

@hoerup I'll try it out tomorrow then.

Thx for the tips
