
Authentication from "gcp" provider #210

Closed
jeremywadsack opened this issue Nov 11, 2016 · 18 comments
Comments

@jeremywadsack
Contributor

jeremywadsack commented Nov 11, 2016

I'm trying to use kubeclient with my local ~/.kube/config file with GKE. Following the instructions in the README I expected this to work as follows:

kubeconfig = File.join(ENV["HOME"], ".kube", "config")
config = Kubeclient::Config.read(kubeconfig)
@k8s_client = Kubeclient::Client.new(
  config.context.api_endpoint,
  config.context.api_version,
  {
    ssl_options: config.context.ssl_options,
    auth_options: config.context.auth_options
  }
)

However, this failed because the Config class expects the users section of the configuration file to contain either a token parameter or a username and password for the appropriate user.

When I look at my ~/.kube/config file it has the following structure for the user auth (key partially redacted):

  user:
    auth-provider:
      config:
        access-token: ya29.C*******W
        expiry: 2016-11-11T13:11:05.459378162-08:00
      name: gcp

I was able to change Config#fetch_user_auth_options to read the access token instead with the following patch:

diff --git a/lib/kubeclient/config.rb b/lib/kubeclient/config.rb
index 710dda4..5838edd 100644
--- a/lib/kubeclient/config.rb
+++ b/lib/kubeclient/config.rb
@@ -115,6 +115,8 @@ module Kubeclient
       options = {}
       if user.key?('token')
         options[:bearer_token] = user['token']
+      elsif user.key?('auth-provider') && user['auth-provider'].key?('config') && user['auth-provider']['config'].key?('access-token')
+        options[:bearer_token] = user['auth-provider']['config']['access-token']
       else
         %w(username password).each do |attr|
           options[attr.to_sym] = user[attr] if user.key?(attr)
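Extracted as a standalone helper, the fallback logic of that patch looks roughly like this (the function name is hypothetical; the real method in the gem is `Config#fetch_user_auth_options`):

```ruby
# Hypothetical standalone sketch of the patched fallback logic:
# prefer a top-level token, then an auth-provider access-token,
# then username/password. This only mirrors the shape of
# Config#fetch_user_auth_options; it is not the gem's actual code.
def auth_options_for(user)
  options = {}
  if user.key?('token')
    options[:bearer_token] = user['token']
  elsif user.dig('auth-provider', 'config', 'access-token')
    options[:bearer_token] = user['auth-provider']['config']['access-token']
  else
    %w(username password).each do |attr|
      options[attr.to_sym] = user[attr] if user.key?(attr)
    end
  end
  options
end
```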

However, this only works until the token expires. To get a new token I have to run a kubectl command and then re-create my Kubeclient::Client.

I dug around on this but I can't figure out how kubeclient picks up a new auth token when the stored one has expired. I'd love to get this working but could use some pointers from anyone who understands this better.

@simon3z
Collaborator

simon3z commented Nov 14, 2016

I was able to change Config#fetch_user_auth_options to read the access token instead with the following patch:

@jeremywadsack can you send a PR for this fix?

I dug around on this but I can't figure out how kubeclient is picking up the new auth token when the one it has stored expired. I'd love to get this working but could use some pointers from anyone who understands this better.

You need to re-authenticate with the server using oauth.

@jeremywadsack
Contributor Author

I'm happy to submit a patch but the token is only good for at most an hour. I'd rather submit a PR that covers the OAuth re-authentication but I don't know how kubectl does that without a web page. Is there another token somewhere that we can use?

@simon3z
Collaborator

simon3z commented Nov 14, 2016

I'm happy to submit a patch but the token is only good for at most an hour. I'd rather submit a PR that covers the OAuth re-authentication but I don't know how kubectl does that without a web page. Is there another token somewhere that we can use?

@jeremywadsack for never-expiring tokens you should use service accounts (at least that's what we do in OpenShift). You may have better luck getting these questions answered on the kubernetes repository (there's nothing specific to this gem).

@jeremywadsack
Contributor Author

@simon3z: Right. We use service accounts on the servers. I was trying to set this up to run/test in development, where we have kubectl installed. I guess I was expecting to mimic the behavior of kubectl: I don't have to use a service account with it, and it renews the token whenever it expires.

I'll go ahead and submit a PR for this as is, with the caveat that you will need to run a kubectl command every hour to renew the token (I can add an exception message to that effect).

I'll dig into kubectl further and check with kubernetes folks about that.

@moolitayer
Collaborator

@jeremywadsack looking at both patches and the google doc you attached, I still don't understand the flow 😔

Can you please give a high level flow of what you are trying to achieve?

@jeremywadsack
Contributor Author

@moolitayer Sorry if that wasn't clear.

On my dev environment, I'd like to be able to use kubeclient to run (test) Job scaling. Approximately the following:

our_jobs = jobs_client.get_jobs(label_selector: "resque-kubernetes=job")
finished = our_jobs.select { |job| job.spec.completions == job.status.succeeded }

finished.each do |job|
  jobs_client.delete_job(job.metadata.name, job.metadata.namespace)
end

job = Kubeclient::Resource.new(manifest)
jobs_client.create_job(job)

I have already authorized gcloud with my Google credentials, which configures kubectl so that I can make calls whenever I need to. I don't need a separate service account, and I don't need to go through a browser-based OAuth2 flow. Ideally I'd like kubeclient to work with the same credentials that are already on the system and not have to add new credentials or a new flow to use this.

PR #211 was the first attempt at this. I saw from the README that I could authorize kubeclient using my ~/.kube/config file so I tried that. But it wasn't working. In digging into the code I realized that the auth details in my config were not supported by the code so I tried to add support for that but ran into the issue that the token in ~/.kube/config is only good for an hour. I would vote to close that in favor of #213.

PR #213 was the second attempt, which uses Google's default application credentials. I think it's the correct solution to this, because it uses the same source credentials that kubectl does, which is a published spec (rather than the configuration file which may not be). With this, I just tell the client to use default credentials and it works:

jobs_client = Kubeclient::Client.new(
  config.context.api_endpoint + "/apis/batch",
  config.context.api_version,
  {
    ssl_options: config.context.ssl_options,
    auth_options: { default_credentials: true }
  }
)

It has the additional advantage that on GKE I think it will work the same way (because the default application credentials are already installed), without having to specify the location of the bearer_token_file (which might change).

Does that help clarify things?

@garethr

garethr commented Jan 16, 2017

Just a note that we (myself and @kenazk) ran into this issue. If anyone else hits it and needs a workaround until the linked PRs work their way through into a release, you can ask GCP to give you the client certificate rather than the new OAuth token. Simply run the following command:

gcloud config set container/use_client_certificate True

And then get the credentials again:

gcloud container clusters get-credentials <CLUSTER-NAME>

The .kube/config file should now be compatible with kubeclient.

@simon3z
Collaborator

simon3z commented Jan 18, 2017

Thanks @garethr

@jeremywadsack
Contributor Author

BTW, in our testing of #213, I discovered that it also only works if you've installed default application credentials.

gcloud auth application-default login

So perhaps, just updating the README with @garethr's suggestion is a simpler approach to solving this, if you are not keen on having this gem support GSC default authentication credentials generally.

@simon3z
Collaborator

simon3z commented Jan 18, 2017

@garethr @jeremywadsack can you send a PR for the README?

If there's anything that we can do to facilitate an external/3rd-party integration for gsc please let us know.

@jeremywadsack
Contributor Author

@simon3z by "external/3rd-party integration for gsc" do you mean a separate gem that combined kubeclient and custom code to achieve what #213 does?

In that case we'd need some "plug-in" support that allows defining custom options, validating those options, and acting on them.

@simon3z
Collaborator

simon3z commented Jan 19, 2017

@simon3z by "external/3rd-party integration for gsc" do you mean a separate gem that combined kubeclient and custom code to achieve what #213 does?

Yes.

In that case we'd need some "plug-in" support that allows defining custom options, validating those options, and acting on them.

Don't you just need to wrap the calls with something that catches the specific error code, re-authenticates and repeats the request?
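The wrap-and-retry idea above could be sketched like this. In real use `error_class` would be kubeclient's HTTP error class and `reauthenticate` would re-read ~/.kube/config and rebuild the client; both are stand-ins here, and the `error_code` accessor is an assumption about the error object.

```ruby
# Minimal sketch of "catch the auth failure, re-authenticate, retry once".
# error_class and reauthenticate are injected so the wrapper stays generic;
# this is illustrative, not part of the kubeclient API.
def with_reauth(error_class, reauthenticate)
  yield
rescue error_class => e
  # Only handle 401 (expired/invalid credentials); re-raise anything else.
  raise unless e.respond_to?(:error_code) && e.error_code == 401
  reauthenticate.call
  yield # retry once with fresh credentials
end
```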

@stevenaldinger

Thanks @jeremywadsack and @garethr. I've been super excited about your puppet module but hadn't been able to get it working yet because of the 401s; I didn't dive into kubeclient until this morning. 👍

@cben
Collaborator

cben commented Apr 27, 2018

I believe this is covered by the just-merged #213 🎆

@cben cben closed this as completed Apr 27, 2018
@jeremywadsack
Contributor Author

@cben #213 adds support for reading Google's Application Default Credentials, but that's separate from the "gcp" provider for .kube/config. I was confused about this when I first opened these issues so I want to be clear about my current understanding.

When a Google customer has used gcloud to configure kubectl for a GKE cluster, it adds the gcp provider to ~/.kube/config.

gcloud no longer generates the Application Default Credentials. At the moment, for someone with a gcloud-configured cluster, that's an option for using kubectl, but they would need to run a command to generate it if they haven't already.

The ~/.kube/config file may include a bearer token (I saw a machine where it didn't include the token, perhaps because it had never connected to GKE with kubectl). If it has a token, the token is good for one hour after it was created (at the latest, whenever kubectl last ran), and could therefore be expired. The expiration date is also provided in the config. It appears that the config now includes details about how to get a new token using gcloud:

- name: gke_my-gcp-project_us-west1-b_production
  user:
    auth-provider:
      config:
        access-token: ya29.Gl2qBf_gF6vY8F4-...-Q1gZA
        cmd-args: config config-helper --format=json
        cmd-path: /Users/jeremywadsack/google-cloud-sdk/bin/gcloud
        expiry: 2018-04-27T04:10:15Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp

Given the details above, how would you feel about a change to Kubeclient::Config that adds a handler for the gcp provider? The handler would use the cmd-path, cmd-args, expiry-key, and token-key values to generate a new bearer token that is good for an hour.
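Such a handler could be sketched roughly as follows. The method names and the simple key-path parser are illustrative (not Kubeclient API); the real token-key values use a JSONPath-like '{.credential.access_token}' syntax, of which only the plain dotted form is handled here.

```ruby
require 'json'
require 'open3'

# Hypothetical sketch of a gcp auth-provider handler: run the
# configured credential-helper command and dig the token out of
# its JSON output using the configured token-key path.
def gcp_provider_token(provider_config)
  out, status = Open3.capture2(
    provider_config['cmd-path'], *provider_config['cmd-args'].split
  )
  raise 'credential helper command failed' unless status.success?
  creds = JSON.parse(out)
  dig_path(creds, provider_config['token-key'])
end

# Resolve a '{.a.b}'-style key path against parsed JSON.
# Only handles plain dotted paths, not full JSONPath.
def dig_path(data, key)
  keys = key.delete('{}').split('.').reject(&:empty?)
  data.dig(*keys)
end
```

The same `dig_path` call with expiry-key would recover the expiration timestamp, so the client could know when a refresh is due.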

Then the instructions in the README would work for gcp providers, with the only change being a caveat that the access may only be good for an hour.

config = Kubeclient::Config.read('/path/to/.kube/config')
Kubeclient::Client.new(
  config.context.api_endpoint,
  config.context.api_version,
  {
    ssl_options: config.context.ssl_options,
    auth_options: config.context.auth_options
  }
)

I wouldn't be surprised if this kind of token expiration applied in other environments as well. Maybe there's a way we can expose the token expiration on either the Config or the Client, but I don't know how to discover it in other environments.

@lucasmazza
Contributor

Sorry for bumping a 1-year-old issue, but my team stumbled on the same issue as @jeremywadsack with the gcp provider and the shopify/kubernetes-deploy commands, which use kubeclient instead of invoking kubectl.

@cben would it still be desirable to support this? I might take a shot at implementing it, taking some notes from gcp.go.

@jeremywadsack
Contributor Author

@lucasmazza, see #394 (completed) for a step in that direction, at least when Application Default Credentials are present (e.g. in development). Is the issue you had related to refreshing an expired token? See #393 for discussion. #400 adds notes about how to renew manually. I would support a PR towards auto-renew (although I'm not a maintainer).

@cben
Collaborator

cben commented Jul 31, 2020

@jeremywadsack @lucasmazza can we close this? Has it been covered by #410?
(Renewal is not implemented yet but is tracked in #393.)

@cben cben closed this as completed Aug 16, 2020