Unable to connect to the server: getting credentials: exec: exit status 255 #747
Exact same issue here, took the same remedial steps without any luck. I can also run AWS CLI commands successfully in our multiple accounts, but receive this EKS error on every cluster. In my case, IAM is through SAML federated auth on a role. I'm not aware of any changes made in our environment over the weekend, but this worked on Friday, so I'm not sure what to make of it. Versions:
Tested on a different machine (this time Linux instead of macOS) and it worked, so the issue is definitely specific to my local machine. Would love to figure out how to get more detailed error messages on this.
I ran into the same issue. In my case, after removing aws-iam-authenticator from PATH and recreating the kubeconfig, kubectl can now interact with the EKS cluster without aws-iam-authenticator. Versions:
Dang, I just tried those steps but no luck; still getting the 'credentialScope'...exit status 255 error. Do you know if there's any way to figure out more detailed info on what the root error is? (Forgot to add, I'm also on macOS 10.14.)
You can get more detailed logs by adding
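The exact flag did not survive the quote above. As a hedged example (resource and cluster names are placeholders), kubectl's standard `-v` flag and the AWS CLI's standard `--debug` flag both expose what is happening behind the `exit status 255`:

```shell
# -v=10 makes kubectl log every API request, including the exec
# credential plugin invocation that is failing here.
kubectl get pods -v=10 2>&1 | head -n 40
# --debug makes the AWS CLI log the underlying botocore/STS traffic
# behind "aws eks get-token" (cluster name is a placeholder).
aws --debug eks get-token --cluster-name my-cluster 2>&1 | head -n 40
```

Piping through `head` keeps the (very verbose) output readable; drop it to see the full trace.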
So, at least in my case, it was definitely an aws-related issue.
I tried removing $HOME/.kube and recreating the config with aws eks update-kubeconfig, but the error persisted. Change this:
for this:
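The actual before/after snippets did not survive above. As a hedged reconstruction (cluster and user names are placeholders, and the apiVersion varies by client version), the commonly reported change of this shape swaps the aws-iam-authenticator exec entry in the kubeconfig for the AWS CLI's built-in token command, which is what newer `aws eks update-kubeconfig` generates:

```yaml
# Before (placeholder names):
#   exec:
#     command: aws-iam-authenticator
#     args: ["token", "-i", "my-cluster"]
# After:
users:
- name: my-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "my-cluster"]
```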
For me this seems to be related to the botocore version (which is pulled in as a dependency of awscli; I am guessing it just installs the latest version). It works okay with this version:
$ aws --version
But not with this:
$ aws --version
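The version outputs were lost above. A quick way to check which awscli/botocore pair you are actually running (the commands are standard; the sample output in the comments is illustrative):

```shell
# The CLI prints its bundled botocore version on one line, e.g.
# "aws-cli/1.x Python/3.x ... botocore/1.x".
aws --version 2>&1 | head -n 1
# pip shows the independently installed botocore package, which can
# drift out of the range awscli expects:
pip show botocore 2>&1 | head -n 2
```

If the two disagree, upgrading both together (e.g. `pip install --upgrade awscli botocore`) keeps them in lockstep.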
For me upgrading to the latest version of AWS CLI solved the issue. |
@tmat-s thank you for that verbose flag, I really appreciate it. It seems the solution that worked for me today was the one @Hibbert and @akefirad mentioned (thank you both!): updating the aws-cli and botocore versions. Which is odd, since that hadn't worked for me last week, but I'm happy to see it working again.
Which version is the working version of the AWS CLI? |
I had the same issue. The version was:
Upgraded to the latest and it's fixed:
For me upgrading to the latest version of AWS CLI solved the issue: https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-configure-cli.html |
Ran into the same issue on Mac.
Change the owner to the user that runs the kubectl command.
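A minimal sketch of that ownership fix, assuming the default ~/.kube location (prepend sudo if a root-run tool created the files):

```shell
KUBE_DIR="${HOME}/.kube"    # assumption: default kubeconfig location
mkdir -p "${KUBE_DIR}"      # no-op if it already exists
# Hand the directory back to the user who actually runs kubectl.
chown -R "$(id -un)" "${KUBE_DIR}"
ls -ld "${KUBE_DIR}"
```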
/kind bug
This might be resolved; we just need someone to try to reproduce the issue to confirm.
hi @brianpursley
For me, updating the version was not possible because of admin rights, so I tried something else. After deleting the folders 'cli' and 'boto' in C:\Users\username\.aws, everything worked like before.
Same issue; upgrading to this fixed it:
aws-cli/1.18.97 Python/2.7.17 Linux/5.3.0-7642-generic botocore/1.17.20
Simply running pip install awscli was enough.
/assign |
I think this is safe to close, as it appears to be closely tied to different versions of the aws cli and iam authenticator. If anyone stumbles on this from a search, please make sure you have the latest versions of all tools mentioned above. If the issue persists, please reopen with details on versions and reproduction steps/examples. /close
@eddiezane: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Please delete the below folder. It worked for me:
~/.aws/cli/cache
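A sketch of that cleanup, assuming the default cache location. The CLI caches assumed-role credentials in this folder and recreates it on the next call, so deleting it is safe:

```shell
# Stale cached STS credentials here can break "aws eks get-token"
# even while other aws commands still work.
rm -rf "${HOME}/.aws/cli/cache"
```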
Upgrading the aws-cli and botocore versions fixes the issue:
This problem occurred for me after updating macOS. I solved it using the following command line:
xcode-select --install
We have hit similar issues before, with the same error message, when deploying an EKS cluster via CDK. The solution is to set the kubectl environment as below:
Since we limit AWS services to a specific region, STS failed to connect when it used the default US region. Below is the kubectl lambda layer code generated by CDK:
It is "missing" the region config, so the kubectl environment variable above is needed to work around it.
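The CDK-generated code and the exact environment settings did not survive above. As a hedged illustration (user, cluster, and region names are placeholders), the workaround amounts to pinning the region on the exec credential plugin in the kubeconfig so STS never falls back to the default US endpoint:

```yaml
users:
- name: my-eks-user                 # placeholder
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "my-cluster"]
      env:
      - name: AWS_DEFAULT_REGION    # pin to the region you allow
        value: eu-west-1            # example value
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional             # use the regional STS endpoint
```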
Hello, I'm having this error when trying to get pods or any resource in my k8s cluster with kubectl.
The error only appears with EKS; a GKE cluster works fine.
I've uninstalled aws-cli, aws-iam-authenticator, and kubectl and reinstalled them, but the error continues.
aws-cli is working fine; I can list the clusters, S3 buckets, etc. with it.
I also deleted ~/.kube/config and recreated it with aws eks update-kubeconfig --name NAME, but it still doesn't work.
Versions:
aws-cli 1.16.254
aws-iam-authenticator-bin 0.4.0
kubectl 1.15.2
I hope this helps, thanks.