
Can't add a GKE cluster #14

Closed
dobegor opened this issue Jan 24, 2020 · 13 comments · Fixed by #19
Labels
enhancement New feature or request

Comments

@dobegor commented Jan 24, 2020

I can't add a GKE cluster. kubenav doesn't let me save the kubeconfig file (it says it's invalid), and I can't add the cluster manually, because the GKE kubeconfig uses the gcloud command line tool to obtain login info.

There's a scope to access Google Cloud with OAuth2, though.

@ricoberger (Member)

Hi and thanks for trying kubenav. Can you check whether your kubeconfig file uses the certificate-authority field instead of certificate-authority-data? If so, you can copy the content of the referenced file into the Certificate Authority Data field. The same applies to the other file-based fields (a sketch follows the list):

  • certificate-authority -> certificate-authority-data
  • client-certificate -> client-certificate-data
  • client-key -> client-key-data
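
If you edit the kubeconfig itself rather than the kubenav form, note that the *-data fields conventionally hold the base64-encoded content of the referenced file. A minimal sketch (the paths are placeholders, not from this thread):

base64 < /path/to/ca.crt | tr -d '\n'      # -> certificate-authority-data
base64 < /path/to/client.crt | tr -d '\n'  # -> client-certificate-data
base64 < /path/to/client.key | tr -d '\n'  # -> client-key-data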

I will also focus on OIDC support for GKE, EKS and AKS, since it's currently the most requested feature. Hopefully I'll get it into the next release, but I can't promise it.

ricoberger added the enhancement (New feature or request) label on Jan 24, 2020
@dobegor (Author) commented Jan 24, 2020

The thing is, there are no client-certificate and client-key fields; there's only an access-token field, and it doesn't work if I paste it into the "Token" field in kubenav.

@dobegor (Author) commented Jan 24, 2020

My kubeconfig looks like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <omitted>
    server: <omitted>
  name: gke_sparkbackend_us-east1-c_smartback
contexts:
- context:
    cluster: gke_sparkbackend_us-east1-c_smartback
    user: gke_sparkbackend_us-east1-c_smartback
  name: gke_sparkbackend_us-east1-c_smartback
current-context: gke_sparkbackend_us-east1-c_smartback
kind: Config
preferences: {}
users:
- name: gke_sparkbackend_us-east1-c_smartback
  user:
    auth-provider:
      config:
        access-token: <omitted>
        cmd-args: config config-helper --format=json
        cmd-path: /Users/dobegor/google-cloud-sdk 16.26.14/bin/gcloud
        expiry: "2020-01-24T17:04:29Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
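
For context, the gcp auth-provider above refreshes access-token by running cmd-path with cmd-args, and token-key/expiry-key are lookups into that command's JSON output. A sketch (not part of the original comment) to inspect that JSON yourself:

# Prints the credential JSON that token-key ({.credential.access_token})
# and expiry-key ({.credential.token_expiry}) point into.
gcloud config config-helper --format=json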

@ricoberger (Member)

Hi and thanks for the example kubeconfig. While I'm working on OIDC support, you can try the following two solutions. I couldn't test the first one, but the second one should definitely work.

Solution 1: Use Bearer Token from OIDC

Run kubectl get ns -v 10, then there should be a line in the output similar to the following one:

curl -k -v -XGET  -H "Authorization: Bearer <TOKEN>" -H "Accept: application/json;as=Table;v=v1beta1;g=meta.k8s.io, application/json" -H "User-Agent: kubectl/v1.17.1 (darwin/amd64) kubernetes/d224476" '<URL>/api/v1/namespaces?limit=500'

If the authorization header includes a bearer token, you can use this token within kubenav.
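
A small sketch (not from the original comment) to pull the header out of the verbose output, which kubectl writes to stderr:

kubectl get ns -v=10 2>&1 | grep -o 'Authorization: Bearer [^"]*' | head -n 1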

Solution 2: Service Account

Run the following, which will create a kubenav namespace and a kubenav service account that has the rights to do everything in the cluster:

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubenav

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubenav
  namespace: kubenav

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubenav
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubenav
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubenav
subjects:
  - kind: ServiceAccount
    name: kubenav
    namespace: kubenav
EOF

Get the corresponding secret for the service account:

kubectl get sa --namespace kubenav kubenav -o yaml
kubectl get secret --namespace kubenav kubenav-token-lsxc5 -o yaml

Use the values from the ca.crt and token fields within the manual cluster configuration. The value from the token field must be base64-decoded (echo -n "<TOKEN>" | base64 --decode).
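
If you'd rather not copy the values out of the YAML by hand, here is a sketch of the same steps with jsonpath (assuming the token secret was auto-created for the service account, as it was on Kubernetes versions of that time):

# Name of the auto-generated token secret (differs per cluster).
SECRET=$(kubectl get sa kubenav --namespace kubenav -o jsonpath='{.secrets[0].name}')
# ca.crt, used as-is per the note above.
kubectl get secret "$SECRET" --namespace kubenav -o jsonpath='{.data.ca\.crt}'
# token, base64-decoded for kubenav.
kubectl get secret "$SECRET" --namespace kubenav -o jsonpath='{.data.token}' | base64 --decode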

@dobegor (Author) commented Jan 27, 2020

Awesome, thanks for such a detailed answer!
I was able to add a GKE cluster using Service Account successfully.

@ricoberger (Member)

Nice to hear and thanks for the feedback.

@jicowan commented Feb 3, 2020

I cannot add an EKS cluster either.
Sample kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {cert-data}
    server: https://ABCDEFGHIJKLM0123456789.gr7.us-west-2.eks.amazonaws.com
  name: server.us-west-2.eksctl.io
contexts:
- context:
    cluster: server.us-west-2.eksctl.io
    user: kubernetes@server.us-west-2.eksctl.io
  name: kubernetes@server.us-west-2.eksctl.io
current-context: kubernetes@server.us-west-2.eksctl.io
kind: Config
preferences: {}
users:
- name: kubernetes@server.us-west-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - server
      command: aws-iam-authenticator
      env: null
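
For reference, the exec block above just shells out to aws-iam-authenticator, so the same short-lived bearer token can be generated by hand. A sketch (not from the original comment; requires jq, and "server" is the cluster name from the args above):

# The authenticator prints an ExecCredential JSON; .status.token is the bearer token.
aws-iam-authenticator token -i server | jq -r .status.token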

@ricoberger (Member)

Hi @jicowan, thanks for the example kubeconfig file. I'm currently working on the GKE and EKS integration, which will hopefully make it into the next release.

For now you can try the workaround via a service account.

@mattdornfeld

Hi @ricoberger, I would very much appreciate out-of-the-box support for GKE kubeconfigs. The options in the manual configuration won't work well for GKE, since clusters disable client certificates by default and Google recommends they stay disabled: https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster.

@ricoberger (Member)

Short update from my side: I added support for GKE via OIDC (see #19). Currently only the approval of the Google OAuth consent screen and extensive testing are missing.

If I understand it correctly, the approval can take some time, so I'll tackle EKS next.

@ricoberger (Member)

Update:

  • EKS clusters are now also supported. For those you have to provide an access key ID, a secret key and a region. The credentials are needed to import the clusters and to generate the bearer tokens for the Kubernetes API requests.
  • Google did not approve the OAuth consent screen, because the required API endpoint is not allowed for external applications. I will adjust the code so everyone can provide their own Google Client ID, and I will add a help section with detailed setup instructions.
  • I will submit the new app versions over the weekend, so they should be available by the middle of next week.
  • I will close this ticket. If you have any recommendations for improvements, or problems with the GKE and EKS implementation, please open a new issue.

Thanks for your patience.

@nixiam commented Jul 18, 2022

> I was able to add a GKE cluster using Service Account successfully.

> Hi and thanks for the example kubeconfig. […] Solution 2: Service Account […] Use the values from the ca.crt and token fields within the manual cluster configuration.

Hi, I tried to follow your Solution 2: I applied the service account YAML and got the secret, but could you share a filled-in manual configuration for kubenav? I tried but it doesn't work. Thanks!

@ricoberger (Member)

Hi @nixiam, sorry that I missed your question.

Can you please have a look at the following page to see which value must be used in which field: https://docs.kubenav.io/mobile/manual/

Besides the certificate authority data and the token values, you just have to provide the server URL in the manual configuration; then it should be working. A sketch of the mapping follows.
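
A sketch of the mapping, using only the fields mentioned in this thread (see the linked page for the exact labels in your app version):

Server:                     <API server URL from the kubeconfig>
Certificate Authority Data: <ca.crt value from the service account secret>
Token:                      <base64-decoded token value from the secret>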

If you have any further questions, please let me know.
