gcloud container get-credentials not authenticating service account #30617
@cjcullen Redirecting to CJ.
Previously, gcloud would have configured kubectl to use the cluster's static client certificate to authenticate. Now, gcloud is configuring kubectl to use the service account's credentials. kubectl is just using the Application Default Credentials library, and it looks like this is part of the ADC flow for using a JSON-key service account. I'll see if there is a way that we could make this nicer. If you don't want to add the …
Several in #google-containers have discovered the client certificate … (sent by email on Mon, Aug 15, 2016, in reply to CJ Cullen's comment)
So this issue caused us to be unable to deploy any of our services for ~24 hrs while we struggled to find a cause and, eventually, this thread. The default behaviour of gcloud changed in v121: get-credentials now generates a kubeconfig entry that authenticates via the active account's application default credentials rather than the cluster's static client certificate. Our service account file is stored, encrypted, within our repositories, so we are able to generate a kubeconfig entry; however, as the service account is not in the standard application default credentials location (we actually delete the service account key after activating it in gcloud), kubectl then fails to authenticate because it's unable to find the service account. The biggest issue here is that the default behaviour changed without much warning or particularly visible documentation. As a workaround we've added the use_client_certificate setting.
I've found some info on this change in the gcloud release notes (https://cloud.google.com/sdk/release_notes), my bad. I'm not sure how best to handle this switch, and/or whether kubectl itself could print a more useful error message.
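For anyone landing here, a minimal sketch of the legacy-certificate workaround mentioned in this thread (cluster name and zone are placeholders):

```shell
# Opt back into the cluster's static client certificate instead of
# application-default credentials (the pre-v121 gcloud behaviour).
gcloud config set container/use_client_certificate True

# Re-fetch credentials so the kubeconfig entry is regenerated
# with the certificate instead of the ADC auth-provider.
gcloud container clusters get-credentials my-cluster --zone us-central1-b
```

Note this keeps long-lived cluster credentials around, which the maintainers discourage later in the thread in favour of OAuth2 identities.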
@munnerz you're not alone in the sentiment. I created this issue after pulling my hair out for hours trying to figure out where the error was coming from. The cryptic error message yields no help and I couldn't find anything on Google.
Uggh sorry about this.
There's some meager documentation here, but we could have done a better job communicating this change. Please continue this thread with any more questions you have, because it will help us get better documentation.
As for guidance, using Google OAuth2 identities is probably safer (access can be revoked, and only temporary credentials are passed around). If you need per-cluster permissions, we don't support that yet in GKE, so you might be better off using the per-cluster credentials. And we won't take away the ability to use the per-cluster credentials on your existing clusters. We'd like to encourage using OAuth2 credentials when possible, but you should use whatever works best for your workflow.
Some questions: …
@Cloven https://cloud.google.com/compute/docs/access/iam (took a while, but I found this). We use this in deployment scripts by authenticating gcloud with a service account, configuring kubectl, and then running kubectl commands to deploy new releases. So if I understood this correctly, to use the new OAuth2 style of authentication all I have to do is point the environment variable at the private key, and it'll use the new authentication style behind the scenes for me. Is that correct?
- Container Engine Admin: permissions for all GKE operations.
- Container Engine Cluster Admin: permission to create/update/delete clusters, but not to create Kubernetes resources inside of them.
- Container Engine Developer: permission on all Kubernetes resources inside of clusters, but not to create/update/delete clusters.
- Container Engine Viewer: read-only permission to everything (except secrets).
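If it helps, granting one of these roles to a deploy service account looks roughly like this (the project and service-account names here are made up for illustration):

```shell
# Grant the Container Engine Developer role (roles/container.developer)
# to a deployment service account on a project.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:deployer@my-project.iam.gserviceaccount.com" \
  --role="roles/container.developer"
```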
@Draiken that's correct. Kubectl will use the ApplicationDefaultCredentials library, which should now be authenticating as the same service account that gcloud is using.
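Under that assumption, the CI deployment flow described above looks something like this sketch (key path, cluster name, and zone are placeholders):

```shell
# Point Application Default Credentials at the service-account key
# so kubectl can find it, then authenticate gcloud with the same key.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/secrets/deploy-key.json"
gcloud auth activate-service-account \
    --key-file="$GOOGLE_APPLICATION_CREDENTIALS"

# Regenerate the kubeconfig entry; kubectl then authenticates via ADC.
gcloud container clusters get-credentials my-cluster --zone us-central1-b
kubectl get pods
```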
@cjcullen, thanks, sounds about right. Glad to hear the docs are catching up. Also looking forward to more finely-grained IAM permissions; e.g., I'd like to differentially secure dev, staging, and prod clusters under one project.
Here are some docs. Feel free to comment here on what you think is missing/could be better: …
Per-cluster granularity is also on the roadmap :)
Hi @cjcullen, as others said, it's nice to see that IAM is working now. Could we hope for per-cluster / per-namespace granularity? That would be awesome!
@rvrignaud Both are in our plans.
@cjcullen thanks so much for your comments! I think I'm still a little confused about what the right solution for authenticating kubectl is. If you have some time, maybe you could see if there's something obvious that I'm doing wrong? Here's what I've tried to get my CI to use a Google service account and not rely on the client certificate. (I'm going to lay it all out here in this comment, but I also have my CI script and the failed build open for others to see.) Using the latest gcloud, I run … I then tried tossing in … To avoid using the client certificate, what's left for me to do? I'm not sure which environment variable "point the environment variable to the private key" refers to, if it's not GOOGLE_APPLICATION_CREDENTIALS.
@jmhodges I would expect that flow to work. The "server has asked for credentials" error means that the credentials that kubectl is using are not getting authenticated for some reason (but it looks like your gcloud commands are working fine with them). I'll take a closer look tomorrow.
Great! Thanks for looking at it! I think this is the problem a lot of folks have been having. We've been getting a regular stream of folks in the #google-containers channel on the Kubernetes Slack who have this problem (all solved with use_client_certificate).
Oh, and for further hopeful debugging help: I first got a red build (with no deploy code changes) on Aug 12th, but that's not necessarily the first day it would have been broken. That seems to be about when other folks started noticing it. |
So I created a service account with the container dev role, downloaded its JSON key, and authed it with the latest gcloud. I confirmed that it was limited to inside-container operations by trying to create a new cluster with it, and was properly denied: …
I then switched accounts to the account owner account, created a new cluster from the command line as my account owner, and then switched back to the service account with 'gcloud config set account gkerunner@amorphous-horse.iam.gserviceaccount.com'. I then downloaded the cluster credential:
I then confirmed that I could deploy new images to the cluster using the service account:
and checking the logs and dashboard confirmed that the deployment and associated pod were up and running successfully. I then tried to replicate @jmhodges' issue by replicating the exact command that his CI script used, on the theory that perhaps there was a permissioning issue specifically involving just the patch subcommand of kubectl:
and to my faint surprise everything seemed to work fine.
So I don't know what that means, but either I'm misunderstanding a step, or I was able to complete the entire process without granting additional abilities or using the 'legacy' certificate flow. I had been all excited because I thought maybe, aha, kubectl patch (which also operates on nodes) might be under more severe permission restrictions, but ... apparently not. @jmhodges, what happens if you try using the set image/rollout flow (http://kubernetes.io/docs/user-guide/kubectl/kubectl_rollout/), which might maybe theoretically possibly be what you perhaps want anyway, instead of the patch flow?
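For reference, the repro flow described above boils down to something like the following sketch (the service-account email is from the comment; the cluster, deployment, and image names are invented):

```shell
# Authenticate as the limited container-dev service account.
gcloud auth activate-service-account \
    gkerunner@amorphous-horse.iam.gserviceaccount.com \
    --key-file=key.json
gcloud config set account gkerunner@amorphous-horse.iam.gserviceaccount.com

# Fetch cluster credentials as that account.
gcloud container clusters get-credentials test-cluster --zone us-central1-b

# Deploy a new image and watch the rollout (the suggested alternative
# to the `kubectl patch` flow).
kubectl set image deployment/my-deploy \
    my-container=gcr.io/amorphous-horse/app:v2
kubectl rollout status deployment/my-deploy
```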
@Cloven: It's very possible that your …
@jmhodges: Try …
@cjcullen Thanks for the idea! The …
Build with that in it is at: https://travis-ci.org/jmhodges/howsmyssl/builds/155478643. (There's a bit of extra noise in there because …) And I triple-checked my assumptions: the email address of the svc account listed in the build (howsmyssl-travis-deploy@personal-sites-1295.iam.gserviceaccount.com) matches the one I have in my project's IAM with the "Container Engine Developer" role, and the last-modified date of the key file in the git commit log matches the creation date of the svc account in IAM. So, I'm pretty confident I've got the right account.
Oh, and here's the patch I put in play: jmhodges/howsmyssl@297b2b2 (you can find it linked in the Travis build page, but it's a little hard to find there)
/sig cluster-lifecycle |
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Astonished to see this issue has been open for more than 2 years now! Google Cloud, wake up.
Hi, in my K8s config file it fills the fields: … Am I missing something, or does this happen to others? I am authenticated with a service account that has the roles K8s Engine Admin and K8s Engine Cluster Admin.
@damien75 Generally speaking, you should not set … If you want to use your own account permissions, use:

```shell
gcloud auth application-default login
gcloud container clusters get-credentials clusterName
```

If you want to use a service account file, use:

```shell
gcloud auth activate-service-account [ACCOUNT] --key-file=KEY_FILE
gcloud container clusters get-credentials clusterName
```

You can also set the … In both cases, you should make sure that …
If you have a region-wide cluster, pass in --region instead of --zone.
I discovered this thread when searching for a solution to some unrelated use of Kubernetes, where kubectl suddenly wouldn't be able to connect to the cluster, throwing: …
Here is the method I used to work around it, in case it can help someone else. Not wanting to use the older, more insecure authentication method, I deleted the kubeconfig file and re-authenticated with …
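Roughly, that reset looks like this (cluster name and zone are placeholders):

```shell
# Throw away the stale kubeconfig entirely...
rm ~/.kube/config

# ...then re-authenticate and regenerate it from scratch.
gcloud auth login
gcloud container clusters get-credentials my-cluster --zone us-central1-b
```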
In my case, I get this error after a gcloud service account recreation (delete/create with the same SA email, username@projectXXX.iam.gserviceaccount.com). Creating the SA with a different email fixed my issue.
@mboret I had the same issue today, re-using the same email for a service account. Lucky that I saw your comment. Changed the name and it worked. Thanks.
For anyone who still gets this: on my end, the person I was helping with this problem had multiple accounts …
Seems like a legacy option (kubernetes/kubernetes#30617 (comment))
I ran into the same issue but the comment from @cjcullen (09/29/2016) fixed it for me! Thanks a lot! |
@rochdev While this is the recommended solution for new clusters on GKE, this seems to have issues when the clusters are in different projects or different accounts, and we use two different gcloud accounts to do a … Because the …
A hack (not recommended) would be to allow the same gcloud account to have access to both clusters and then do a …
Am I missing something here? What is the recommended way to fetch kubeconfigs for multiple GKE clusters running on different GCP accounts or projects, with no single service account that has access to both of them?
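One way to keep the two clusters' credentials from clobbering each other is to write each into its own kubeconfig file and merge them via the KUBECONFIG path list (file, cluster, project, and zone names below are made up):

```shell
# Fetch each cluster's credentials into a separate kubeconfig file.
KUBECONFIG="$HOME/.kube/config-proj-a" \
  gcloud container clusters get-credentials cluster-a \
  --project proj-a --zone us-central1-b
KUBECONFIG="$HOME/.kube/config-proj-b" \
  gcloud container clusters get-credentials cluster-b \
  --project proj-b --zone europe-west1-b

# Merge both files for kubectl; switch clusters by context.
export KUBECONFIG="$HOME/.kube/config-proj-a:$HOME/.kube/config-proj-b"
kubectl config get-contexts
```

This doesn't by itself solve the token-refresh problem when the two accounts share one active gcloud configuration; separate named configurations (`gcloud config configurations create ...`) are the usual answer there.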
Kubernetes version:

```
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.3", GitCommit:"c6411395e09da356c608896d3d9725acab821418", GitTreeState:"clean", BuildDate:"2016-07-22T20:29:38Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
```

Environment (`uname -a`):

```
Linux 5ab86176e0e5 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 21:10:42 UTC 2016 x86_64 GNU/Linux
```

What happened: After successfully authenticating with a service account on gcloud using …, the cluster credentials were downloaded using: … Even though no errors are raised, when attempting to run `kubectl version` I get this error: …

What you expected to happen: kubectl should be configured to use the cluster properly.

How to reproduce it (as minimally and precisely as possible): …

Anything else we need to know: After looking into the configuration environment, I observed the user is not properly configured in `.kube/config`: … After looking in the gcloud docs, I found this instruction: … After exporting the variable and running `get-credentials` once again, running `kubectl version` worked, and then running `kubectl config view` correctly displayed the user as authenticated.

Note that this workflow was running perfectly in the past. My guess is that kubectl is no longer correctly detecting the gcloud service account that is authenticated; it's only looking for the environment variable. Either way, something seems broken :)
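The environment-variable workaround described in the issue body can be sketched end-to-end as follows (key path, cluster name, and zone are placeholders):

```shell
# Authenticate gcloud with the service-account key.
gcloud auth activate-service-account --key-file=/path/to/key.json

# kubectl locates the same key through Application Default Credentials
# via this variable; without it, authentication fails.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json

# Regenerate the kubeconfig entry and verify connectivity.
gcloud container clusters get-credentials my-cluster --zone us-central1-b
kubectl version
```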