
Unable to connect to the server: getting credentials: exec: exit status 255 #747

Closed
b0nete opened this issue Oct 28, 2019 · 25 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@b0nete

b0nete commented Oct 28, 2019

Hello, I'm getting this error when I try to get pods or any other resource in my k8s cluster with kubectl.

kubectl get pods

'credentialScope'

'credentialScope'

'credentialScope'

'credentialScope'

'credentialScope'
Unable to connect to the server: getting credentials: exec: exit status 255

The error only appears with EKS; a GKE cluster works fine.
I've uninstalled aws-cli, aws-iam-authenticator, and kubectl and reinstalled them, but the error continues.

aws-cli is working fine; I can list clusters, S3 buckets, etc. with it.
I also deleted ~/.kube/config and recreated it with aws eks update-kubeconfig --name NAME, but it still doesn't work.

Versions:
aws-cli 1.16.254
aws-iam-authenticator-bin 0.4.0
kubectl 1.15.2

I hope this helps, thanks.

@justenspring

justenspring commented Oct 28, 2019

Exact same issue here, took the same remedial steps without any luck. I can also run aws cli commands successfully in our multiple accounts, but receive this EKS error on every cluster.

In my case, IAM is through SAML federated auth on a role. I'm not aware of any changes made in our environment over the weekend, but this worked on Friday so I'm not sure what to make of it.

Versions:
aws-cli 1.16.203
aws-iam-authenticator-bin 0.4.0
kubectl 1.16

Tested on a different machine (this time Linux instead of macOS) and it worked, so the issue is definitely specific to my local machine. Would love to figure out how to get more detailed error messages on this.

@tmat-s

tmat-s commented Oct 29, 2019

I ran into the same issue.

In my case, removing aws-iam-authenticator from PATH and recreating kubeconfig via aws eks update-kubeconfig seem to have resolved the issue.

kubectl can interact with EKS cluster without aws-iam-authenticator now.

Versions:
MacOS 10.14.2
aws-cli 1.16.268
kubectl v1.16.2

@justenspring

Dang, I just tried those steps but no luck, still getting the 'credentialScope'...exit status 255 error.

Do you know if there's any way to figure out more detailed info on what the root error is?

(Forgot to add: I'm also on macOS 10.14.)

@tmat-s

tmat-s commented Oct 30, 2019

You can get more detailed logs by adding the -v 8 option to kubectl.
In my case, the initial connection to the EKS endpoint seems to have failed with a 255 error.
According to this issue and the official documentation, error 255 can be caused by a syntax error in kubeconfig.

The 255 indicates the command failed and there were errors thrown by either the CLI or by the service the request was made to.

So, at least in my case, it was definitely an aws-related issue.
Have you tried deleting (or renaming) $HOME/.kube completely, then recreating the kubeconfig?

@b0nete
Author

b0nete commented Oct 30, 2019


I tried removing $HOME/.kube and recreating the config with aws eks update-kubeconfig, but the error persisted.
I was able to solve the problem by changing the following lines in kubeconfig, as this post describes:
https://itnext.io/kubernetes-part-4-aws-eks-authentification-aws-iam-authenticator-and-aws-iam-5985d5af9317

Change this:

...
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - mobilebackend-dev-eks-0-cluster
      command: aws
...

to this:

...
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
        - token
        - -i
        - mobilebackend-dev-eks-0-cluster
      command: aws-iam-authenticator
...
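For context (my addition, not part of the thread): kubectl treats both commands above as client-go exec credential plugins and expects them to print an ExecCredential JSON object on stdout; exit status 255 means the plugin failed before producing one. A minimal sketch of the expected shape, with a placeholder token:

```python
# Sketch of the ExecCredential object kubectl expects the exec plugin
# ("aws eks get-token" or "aws-iam-authenticator token -i ...") to print.
# The token value below is a placeholder, not a real credential.
import json

sample = {
    "kind": "ExecCredential",
    "apiVersion": "client.authentication.k8s.io/v1alpha1",
    "status": {
        "token": "k8s-aws-v1.PLACEHOLDER",
    },
}

print(json.dumps(sample, indent=2))
```

If the plugin exits non-zero instead of printing this object, kubectl surfaces the "getting credentials: exec: exit status 255" message seen above.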

@ac-hibbert

For me this seems to be related to the botocore version (which is pulled in as a dependency of awscli; I'm guessing it just installs the latest version). It works okay with this version:

$ aws --version
aws-cli/1.16.259 Python/3.6.8 Linux/4.15.0-1051-aws botocore/1.12.249

But not with this:

$ aws --version
aws-cli/1.16.259 Python/3.6.8 Linux/4.15.0-1051-aws botocore/1.13.5
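To make the version boundary reported above explicit (a sketch of my own, not part of the original comment): the working install carries botocore 1.12.249 and the failing one 1.13.5, which compare correctly only as integer tuples, not as strings:

```python
# Compare dotted version strings as integer tuples, so "1.13.5" sorts
# after "1.12.249" (plain string comparison would get this wrong).
def version_tuple(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

working = "1.12.249"  # botocore version from the working install above
failing = "1.13.5"    # botocore version from the failing install above

assert version_tuple(working) < version_tuple(failing)
print(version_tuple(failing))  # (1, 13, 5)
```

A check like this can help confirm whether a given machine's botocore falls on the broken side of the boundary.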

@akefirad

For me upgrading to the latest version of AWS CLI solved the issue.

@justenspring

@tmat-s thank you for that verbose flag, I really appreciate it.

It seems the solution that worked for me today was the one @ac-hibbert and @akefirad mentioned (thank you both!): updating the aws-cli and botocore versions. Which is odd, since that hadn't worked for me last week, but I'm happy to see it working again.

@danielschlegel

Which version is the working version of the AWS CLI?

@akefirad

akefirad commented Dec 2, 2019

Which version is the working version of the AWS CLI?

aws-cli/1.16.276 Python/3.6.9 Linux/5.0.0-36-generic botocore/1.13.12

@irajhedayati

irajhedayati commented Dec 10, 2019

I had the same issue. The version was:
aws-cli/1.16.215 Python/2.7.16 Darwin/18.7.0 botocore/1.13.34

Upgraded to the latest and it was fixed:
aws-cli/1.16.299 Python/2.7.16 Darwin/18.7.0 botocore/1.13.35

@leonardorifeli

For me upgrading to the latest version of AWS CLI solved the issue: https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-configure-cli.html

@BlueShells

Ran into the same issue on Mac.
After upgrading everything, I found it was a permission issue with a file in the ~/.aws folder:

PermissionError: [Errno 13] Permission denied: '/Users/abc/.aws/cli/cache/*.json'

Changing the owner to the user that runs the kubectl command fixed it.
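A small sketch (my addition) of how to spot cache files the current user cannot read, which is what triggers the PermissionError above; the glob pattern mirrors the path in the error message:

```python
# List files matching a glob pattern that the current user cannot read.
# An unreadable file under ~/.aws/cli/cache produces the PermissionError
# shown above when the CLI tries to load its cached credentials.
import glob
import os

def unreadable_files(pattern: str) -> list:
    expanded = os.path.expanduser(pattern)
    return [path for path in glob.glob(expanded)
            if not os.access(path, os.R_OK)]

# Any paths printed here are candidates for chown/chmod (or deletion):
print(unreadable_files("~/.aws/cli/cache/*.json"))
```

This only diagnoses the problem; the fix is still to chown the offending files (or delete the cache folder, as suggested later in the thread).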

@brianpursley
Member

/kind bug

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Apr 29, 2020
@brianpursley
Member

This might be resolved; we just need someone to try to reproduce the issue to confirm.

@pocman

pocman commented May 26, 2020

hi @brianpursley
I can reproduce this issue with:

➜  ~ aws --version
aws-cli/1.16.244 Python/3.6.9 Linux/4.15.0-101-generic botocore/1.16.16

@franklin216

For me, updating the version was not possible because of admin rights, so I tried something else.

After deleting the folders 'cli' and 'boto' in C:\Users\username\.aws, everything worked like before.
Maybe deleting 'cli' alone would have been enough.

@JnMik

JnMik commented Jul 14, 2020

Same issue; upgrading to this version fixed it:

aws-cli/1.18.97 Python/2.7.17 Linux/5.3.0-7642-generic botocore/1.17.20

Upgraded by simply running pip install awscli.

@eddiezane
Member

/assign
/priority backlog

@k8s-ci-robot k8s-ci-robot added the priority/backlog Higher priority than priority/awaiting-more-evidence. label Jul 22, 2020
@eddiezane
Member

I think this is safe to close as it appears to be closely tied to different versions of the aws cli and iam authenticator.

If anyone stumbles on this from a search please make sure you have the latest versions of all tools mentioned above.

If the issue persists, please reopen with details on versions and reproduction steps/examples.

/close

@k8s-ci-robot
Contributor

@eddiezane: Closing this issue.

In response to this:

I think this is safe to close as it appears to be closely tied to different versions of the aws cli and iam authenticator.

If anyone stumbles on this from a search please make sure you have the latest versions of all tools mentioned above.

If the issue persists, please reopen with details on versions and reproduction steps/examples.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@codewithrajranjan

Please delete the folder below; it worked for me:

~/.aws/cli/cache

@poindexter-1

Upgrading the aws-cli and botocore versions fixes the issue.
Working versions: aws-cli/1.18.165 Python/3.6.4 Darwin/19.6.0 botocore/1.19.5

@jonathanmdr

This problem occurred for me after updating macOS from Big Sur to Monterey.

I solved it using the following command:

xcode-select --install

@dnz-siliang

We ran into similar issues with the same error message when deploying an EKS cluster via CDK.

The solution is to pass a kubectl environment as below:

    kubectl_environment={
        'AWS_STS_REGIONAL_ENDPOINTS': 'regional',
    },

Because we restrict AWS services to a specific region, the connection failed when STS used the default US region.

Below is the kubectl Lambda layer code generated by CDK:

    # "log in" to the cluster
    cmd = [ 'aws', 'eks', 'update-kubeconfig',
        '--role-arn', role_arn,
        '--name', cluster_name,
        '--kubeconfig', kubeconfig
    ]

This is "missing" the region config, so the kubectl environment variable above is needed to work around it.
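To make the workaround concrete (a sketch of my own, based only on the documented STS endpoint naming): with AWS_STS_REGIONAL_ENDPOINTS=regional, the SDK targets the per-region STS endpoint instead of the legacy global one, which is what matters when access is restricted to a single region:

```python
# STS endpoint selection: the legacy global endpoint is a single shared
# hostname, while "regional" mode targets sts.<region>.amazonaws.com.
def sts_endpoint(region: str = None) -> str:
    if region is None:
        return "sts.amazonaws.com"          # legacy global endpoint
    return f"sts.{region}.amazonaws.com"    # regional endpoint

print(sts_endpoint())             # sts.amazonaws.com
print(sts_endpoint("us-east-2"))  # sts.us-east-2.amazonaws.com
```

If network policy only allows traffic to one region's endpoints, the global hostname is unreachable and token retrieval fails with the exit status 255 seen above.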
