
Unexpected behavior when using multiple contexts in kubeconfig #936

Closed
oz123 opened this issue Sep 17, 2020 · 8 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/usability Categorizes an issue or PR as relevant to SIG Usability.

Comments


oz123 commented Sep 17, 2020

When using a config with multiple contexts, one might see a surprising message if there is no context set.
Let me demonstrate:

$ k config get-contexts 
CURRENT   NAME               CLUSTER            AUTHINFO         NAMESPACE
          coldsweet          microk8s-cluster   admin-microk8s   coldsweet
          devops             devops-calico      admin            
          devops-cilium      devops-cilium      admin-cilium     
*         microk8s-context   microk8s-cluster   admin-microk8s   

Here, I have a current context set, so kubectl get pods or any other command works fine.
If there is no current-context, kubectl says it can't connect to localhost (that is reasonable if you know Kubernetes; more on that below).

$ sed -i '/current-context/d' ~/.kube/config
$ k config get-contexts
CURRENT   NAME               CLUSTER            AUTHINFO         NAMESPACE
          coldsweet          microk8s-cluster   admin-microk8s   coldsweet
          devops             devops-calico      admin            
          devops-cilium      devops-cilium      admin-cilium     
          microk8s-context   microk8s-cluster   admin-microk8s   
$ k get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

What would you like to be added:
The above message is baffling when you encounter it for the first time. A better message would be something like this:

Multiple contexts found but no context is set. Which cluster shall I use? Please set an active context

Why is this needed:
I give Kubernetes workshops where students use multiple clusters. As we go through setting up ~/.kube/config for use with multiple clusters, some students will rush to try things out and forget to set a context. This results in frustration and funny facial expressions staring at the screen. We then explain that kubectl uses localhost by default, and that older versions of Kubernetes were not secured on localhost. A much better UX would be to simply tell the user that no context is set.

I'm willing to send a PR for improving this UI. This should not be hard to fix (I have already submitted code to k8s/k8s).

/sig usability

@oz123 oz123 added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 17, 2020
@k8s-ci-robot k8s-ci-robot added the sig/usability Categorizes an issue or PR as relevant to SIG Usability. label Sep 17, 2020

oz123 commented Sep 18, 2020

I believe #737 is related. The author solved his own problem; however, I am guessing that's exactly the same behavior the workshop participants experience.


oz123 commented Sep 18, 2020

Validation of the configuration is done here:

https://github.com/kubernetes/kubernetes/blob/ba35704b510f918254c3ba826fb63608f6ed2dd6/staging/src/k8s.io/client-go/tools/clientcmd/validation.go#L152

There is already a validator that checks that the context foo exists:

$ grep foo ~/.kube/config
current-context: foo
$ k get pods
Error in configuration: context was not found for specified context: foo

To fix this issue, a validator that checks that the configuration has current-context set should be added.
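
For illustration, here is a rough sketch of what such a check could look like, written in the style of the existing helpers in clientcmd/validation.go. The function name, its placement, and the error wording are assumptions made for this sketch, not necessarily what the eventual PR implements.

```go
// Hypothetical sketch only: the function name and error text are
// illustrative and do not claim to match any merged code.
package clientcmd

import (
	"errors"

	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// validateMissingCurrentContext returns an error when the kubeconfig defines
// one or more contexts but current-context is empty, instead of letting
// kubectl silently fall back to the localhost:8080 default.
func validateMissingCurrentContext(config clientcmdapi.Config) error {
	if len(config.Contexts) > 0 && config.CurrentContext == "" {
		return errors.New("no current-context is set in the configuration; " +
			"select one with `kubectl config use-context <name>`")
	}
	return nil
}
```

Presumably such a check would be wired into the existing validation entry point in validation.go so that it is reported alongside the other configuration errors.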


noahi commented Sep 18, 2020

Your suggested error content looks good to me. Is there a situation where localhost is desired?

@knight42 knight42 removed their assignment Sep 20, 2020
oz123 added a commit to oz123/kubernetes that referenced this issue Sep 23, 2020
The current behavior is to ignore this and try to connect to
localhost:8080. [This is quite confusing, as reported in this issue][1].

This commit improves the UX by telling users they need to explicitly
choose a context when multiple contexts exist:

```
$ kubectl get pods
error: Multiple contexts found. However, no current-context is set in the configuration
```

[1]: kubernetes/kubectl#936
@eddiezane

/priority backlog

@k8s-ci-robot k8s-ci-robot added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Oct 14, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 12, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 13, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
