Prompt for Username and Password when running "kubectl get all" #164

Closed
gmflau opened this issue Aug 15, 2018 · 16 comments
Labels
kind/help Request for help

Comments

@gmflau

gmflau commented Aug 15, 2018

I used the following command to create my EKS cluster:
% eksctl create cluster --name=eks-c5-xlarge-5 --nodes=5 --node-type=c5.xlarge

Then I was prompted to enter a Username and Password when trying to access the cluster with $ kubectl get all. I have no idea what the Username and Password for the cluster are.

@richardcase richardcase added the kind/help Request for help label Aug 16, 2018
@richardcase
Contributor

@gmflau - I'm guessing you weren't prompted for a username/password when you ran eksctl create cluster?

Could you confirm what version you are using? (eksctl version)

Also, can you confirm that kubectl is using your eksctl context? If you run kubectl config current-context, you should see a context name ending in eksctl.io.
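For reference, here's a quick way to run those checks together (a minimal sketch; the exact context name will depend on your user and cluster):

$ eksctl version
$ kubectl config current-context
$ kubectl config view --minify

kubectl config view --minify prints only the active context's cluster and user entries, which makes it easy to spot when the active context isn't the eksctl-generated one.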

@gmflau
Author

gmflau commented Aug 16, 2018

It did not prompt for a username/password when running the "eksctl create cluster" command. The output from the command was as follows:
gilbertlau:~$ eksctl create cluster --name=eks-c5-xlarge-5 --nodes=5 --node-type=c5.xlarge
2018-08-15T12:21:27-07:00 [ℹ] setting availability zones to [us-west-2c us-west-2b us-west-2a]
2018-08-15T12:21:27-07:00 [ℹ] importing SSH public key "/Users/gilbertlau/.ssh/id_rsa.pub" as "eksctl-eks-c5-xlarge-5-83:c8:52:95:cc:f8:0b:3a:1c:12:6d:88:69:87:ca:65"
2018-08-15T12:21:27-07:00 [ℹ] creating EKS cluster "eks-c5-xlarge-5" in "us-west-2" region
2018-08-15T12:21:27-07:00 [ℹ] creating ServiceRole stack "EKS-eks-c5-xlarge-5-ServiceRole"
2018-08-15T12:21:27-07:00 [ℹ] creating VPC stack "EKS-eks-c5-xlarge-5-VPC"
2018-08-15T12:34:10-07:00 [✔] created control plane "eks-c5-xlarge-5"
2018-08-15T12:34:10-07:00 [ℹ] creating DefaultNodeGroup stack "EKS-eks-c5-xlarge-5-DefaultNodeGroup"
2018-08-15T12:37:51-07:00 [✔] created DefaultNodeGroup stack "EKS-eks-c5-xlarge-5-DefaultNodeGroup"
2018-08-15T12:37:51-07:00 [✔] all EKS cluster "eks-c5-xlarge-5" resources has been created
2018-08-15T12:37:51-07:00 [✔] saved kubeconfig as "/Users/gilbertlau/.kube/config"
2018-08-15T12:37:51-07:00 [ℹ] the cluster has 0 nodes
2018-08-15T12:37:51-07:00 [ℹ] waiting for at least 5 nodes to become ready
2018-08-15T12:38:13-07:00 [ℹ] the cluster has 5 nodes
2018-08-15T12:38:13-07:00 [ℹ] node "ip-192-168-125-xxx.us-west-2.compute.internal" is ready
2018-08-15T12:38:13-07:00 [ℹ] node "ip-192-168-144-yyy.us-west-2.compute.internal" is ready
2018-08-15T12:38:13-07:00 [ℹ] node "ip-192-168-157-zzz.us-west-2.compute.internal" is ready
2018-08-15T12:38:13-07:00 [ℹ] node "ip-192-168-217-aaa.us-west-2.compute.internal" is ready
2018-08-15T12:38:13-07:00 [ℹ] node "ip-192-168-93-bbb.us-west-2.compute.internal" is not ready
2018-08-15T12:38:14-07:00 [✖] parsing kubectl version string "0.0.0": Version string empty
2018-08-15T12:38:14-07:00 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries
2018-08-15T12:38:14-07:00 [✔] EKS cluster "eks-c5-xlarge-5" in "us-west-2" region is ready

$ eksctl version
2018-08-16T11:11:36-07:00 [ℹ] versionInfo = map[string]string{"gitCommit":"dca39f69e893b89d67156635c483b3f3e8236407", "gitTag":"0.1.0", "builtAt":"2018-08-02T13:58:30Z"}

$ kubectl config current-context
gilbert.lau@eks-c5-xlarge-5.us-west-2.eksctl.io

But when I ran $ kubectl get all, I got the following prompt:
Please enter Username:

@richardcase
Contributor

@gmflau - thanks for sending that through. It sounds like you may have an old version of kubectl. Could you check the version you have by running:

kubectl version

@errordeveloper
Contributor

2018-08-15T12:38:14-07:00 [✖] parsing kubectl version string "0.0.0": Version string empty

That line indicates that there is certainly something odd with kubectl. Is it an alias or do you have some kind of wrapper?

You need version 1.10.x, the latest is 1.10.7.
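A hedged way to check whether kubectl resolves to a real, current binary rather than an alias or wrapper (assuming a typical bash/zsh shell on macOS or Linux):

$ type -a kubectl           # reports aliases, shell functions and every kubectl on PATH
$ which -a kubectl          # lists all kubectl binaries on PATH, in order
$ kubectl version --client  # prints the client version only, no cluster auth needed

If the first entry isn't the binary you expect, that would explain the "0.0.0" version string eksctl reported.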

@gmflau
Author

gmflau commented Aug 21, 2018 via email

@errordeveloper
Contributor

@gmflau did you manage to upgrade and re-test? Can we close the issue?

@richardcase
Contributor

@gmflau - did upgrading kubectl help with this issue?

@gmflau
Author

gmflau commented Sep 5, 2018 via email

@richardcase
Contributor

Thanks @gmflau. Closing.

@tedmiston
Contributor

I experienced this issue today too.

Leaving these commands here for reference for anyone else who has kubectl installed both via Homebrew and Docker for Mac.

$ kubectl version
Please enter Username: 
$ kubectl config current-context
(output matches expected for eksctl)

In my case, kubernetes-cli in Homebrew was up-to-date on v1.11.x but which kubectl was pointing to an outdated v1.9.2 alias set by Docker for Mac edge.
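To see which binary actually wins on the PATH, something like the following should work (a minimal sketch; checks only, no changes made):

$ which -a kubectl
$ ls -l /usr/local/bin/kubectl

If the ls output shows /usr/local/bin/kubectl symlinked into /Applications/Docker.app/Contents/Resources/bin/kubectl, the Docker for Mac copy is shadowing the Homebrew one.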

$ brew doctor
...
Warning: You have unlinked kegs in your Cellar
Leaving kegs unlinked can lead to build-trouble and cause brews that depend on
those kegs to fail to run properly once built. Run `brew link` on these:
  kubernetes-cli
...

I'd been hanging on to an old build of DfM edge, as Kubernetes support is somewhat unstable when upgrading. I'm trying the latest DfM edge build now.

I'm not sure about the best way to handle having 2 different local Kube versions like this, but as a temporary fix for now I've overwritten the alias that currently points to the DfM kubectl.

$ brew link --overwrite --dry-run kubernetes-cli
Would remove:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
$ brew link --overwrite kubernetes-cli
Linking /usr/local/Cellar/kubernetes-cli/1.11.3... 191 symlinks created

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-10T11:44:36Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

I wonder if I should restrict my client version to 1.10.x to match what's available on EKS. Anyone have thoughts on this?

@errordeveloper
Contributor

I have my Homebrew installed under ${HOME}/Library/Local/Homebrew, so my PATH is set to
${HOME}/Library/Local/Homebrew/bin:${PATH} (i.e. $HOME/Library/Local/Homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin).
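In shell-profile terms that's roughly the following (a sketch assuming a bash/zsh profile; adjust the prefix to wherever your Homebrew lives):

export PATH="${HOME}/Library/Local/Homebrew/bin:${PATH}"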

By the way, I use stable Docker for Mac (18.06.0-ce-mac70), and it worked just fine. I recall having to go through multiple resets/reinstalls when doing upgrades on the unstable channel, but I believe most of those issues got fixed. In any case, the stable version comes with kubectl 1.10.3, so if you upgrade, you shouldn't have the compatibility issue any more.

Generally speaking, to your question about managing kubectl versions: most of the time staying within a few minor versions works with no problem at all (once in a while you miss a handy new flag, but it's not a deal breaker most of the time). The reason EKS depends on 1.10 is the addition of client auth plugins, and that kind of hard compatibility requirement is not something I've seen happen very often. So I wouldn't worry long term.

Perhaps we could improve the Homebrew package to let users know about the caveat of Docker for Mac and Homebrew both using /usr/local/bin, but it would have to be added to the upstream formula.

@tedmiston
Contributor

tedmiston commented Sep 11, 2018

Thanks, I'll give the stable channel a shot again.

Do you think it makes sense to add a note about this in the docs (README.md)? I'm happy to send a PR if so.

@errordeveloper
Contributor

errordeveloper commented Sep 12, 2018 via email

@moudidx

moudidx commented Jun 26, 2019

I had the same issue; it was caused by a bad kubectl cluster context. Make sure KUBECONFIG points to ~/.kube/config in your .bashrc, then verify your context in the ~/.kube/config file.
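A minimal sketch of that check (assuming the default kubeconfig location):

$ echo 'export KUBECONFIG="$HOME/.kube/config"' >> ~/.bashrc
$ source ~/.bashrc
$ kubectl config current-context
$ kubectl config view --minify

If the active context points at a user entry with no credentials, kubectl may fall back to prompting for a username/password.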

@satyasure

I had a similar issue today.

  1. All kubectl commands were prompting for a username and password.
  2. kubectl had been working fine a few minutes before.
  3. I looked at what had changed recently.
  4. A new context had been created, and a new user had been created and then deleted.
  5. During the deletion, the .kube/config file was edited with the vi editor.
  6. I had to delete the context and user and export the config file, as shown below:
    -bash-4.2$ kubectl version
    Please enter Username: ^C
    -bash-4.2$ kubectl config -h
    Modify kubeconfig files using subcommands like "kubectl config set current-context my-context"

The loading order follows these rules:

  1. If the --kubeconfig flag is set, then only that file is loaded. The flag may only be set once and no merging takes
    place.
  2. If $KUBECONFIG environment variable is set, then it is used as a list of paths (normal path delimiting rules for
    your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When
    a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the
    last file in the list.
  3. Otherwise, ${HOME}/.kube/config is used and no merging takes place.

Available Commands:
current-context Displays the current-context
delete-cluster Delete the specified cluster from the kubeconfig
delete-context Delete the specified context from the kubeconfig
get-clusters Display clusters defined in the kubeconfig
get-contexts Describe one or many contexts
rename-context Renames a context from the kubeconfig file.
set Sets an individual value in a kubeconfig file
set-cluster Sets a cluster entry in kubeconfig
set-context Sets a context entry in kubeconfig
set-credentials Sets a user entry in kubeconfig
unset Unsets an individual value in a kubeconfig file
use-context Sets the current-context in a kubeconfig file
view Display merged kubeconfig settings or a specified kubeconfig file

Usage:
kubectl config SUBCOMMAND [options]

Use "kubectl --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
-bash-4.2$ kubectl config delete-context k8s
warning: this removed your active context, use "kubectl config use-context" to select a different one
deleted context k8s from ~/.kube/conf
-bash-4.2$ kubectl version
Error in configuration: context was not found for specified context: k8s
-bash-4.2$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
          kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
-bash-4.2$ kubectl config set-context kubernetes-admin@kubernetes
Context "kubernetes-admin@kubernetes" modified.
-bash-4.2$ kubectl version
Error in configuration: context was not found for specified context: k8s
-bash-4.2$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
-bash-4.2$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:38:50Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:30:47Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Finally, it's working fine.
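In short, the recovery boiled down to pointing kubectl back at a context that actually exists in the kubeconfig (a condensed sketch of the same steps; the context names come from the session above):

$ kubectl config get-contexts
$ kubectl config delete-context k8s                       # drop the broken context
$ kubectl config use-context kubernetes-admin@kubernetes  # switch to a valid one
$ kubectl version                                         # should no longer prompt for a username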

@tchatua

tchatua commented Dec 23, 2022

> (quoted gmflau's original report and eksctl/kubectl output from earlier in this thread)

Hi.
I had the same issue and solved it by re-running:

kops update cluster --name <cluster_name> --yes --admin

Make sure to export your S3 state store first (export KOPS_STATE_STORE=s3://<your_s3_bucket>).
