
Error when using k8sec without running any other kubectl command first #27

Closed
thomas-hilaire opened this issue Sep 12, 2017 · 3 comments

thomas-hilaire commented Sep 12, 2017

Hello,

I'm using k8sec on my CD server to update my Kubernetes secrets before deploying. I run into an issue with k8sec when no other kubectl command has been run beforehand.

When I run this command:
$ k8sec set --base64 MY_SECRET KEY=$VAL

I get the following error:

Failed to get current secret. name=MY_SECRET KEY: Get https://kube.master.ip/api/v1/namespaces/default/secrets: error executing access token command "/google-cloud-sdk/bin/gcloud ": exit status 2

If I run a kubectl command first (like kubectl get secrets), my k8sec set command succeeds.
Note that I also get this error sometimes on my local machine.
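
In other words, a sequence roughly like the following works on the CD server (the secret name and value are the same placeholders as above):

$ kubectl get secrets > /dev/null    # any kubectl call first
$ k8sec set --base64 MY_SECRET KEY=$VAL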

Thanks for your tool!

dtan4 closed this as completed Sep 18, 2017

dwcaraway commented Dec 7, 2017

I found this error initially confusing as well. For anyone encountering the ephemeral-token problem with Google Kubernetes Engine, you'll see a message like the one below. This particular message is from a Mac with gcloud/kubectl and k8sec installed via Homebrew.

$ k8sec list

Failed to retrieve secrets.: Get https://<api address>/api/v1/namespaces/default/secrets: error executing access token command "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud ": exit status 2

Just run any kubectl command to download the ephemeral token again, e.g.

$ kubectl get secrets
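
As far as I can tell, running any kubectl command makes the gcp auth provider cache a fresh access token in the kubeconfig, and k8sec then reuses that cached token instead of executing the gcloud helper itself.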

@dtan4 thanks for building a great tool. Kubernetes secrets are awkward to work with otherwise, and the environment-variable file support fits well with 12-factor apps.


bholzer commented Dec 28, 2017

I stumbled here with a use case that exhibited the same problem. In my case, I am using Helm to manage Kubernetes deployments, and I am trying to automate those deployments from Google's Cloud Build (Container Builder) service.

I was using one step to get the credentials and save them to a shared volume, so that a subsequent step, running a container without gcloud or kubectl, could authenticate to the cluster.

The broken version looked like this:

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/base_image:$BRANCH_NAME', '.']
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: '/bin/bash'
  args: ['-c', 'gcloud container clusters get-credentials cluster-1 --zone us-west1-a --project $PROJECT_ID && cp ~/.kube/config /workspace/kubeconfig']
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'run', '-e', 'KUBECONFIG=/root/app/kubeconfig', '-v', '/workspace:/root/app', '--entrypoint', '/bin/sh', 'linkyard/docker-helm',
    '-c', '/bin/helm init -c && /bin/helm install /root/app/k8s/helm --debug --set image="gcr.io/${PROJECT_ID}/base_image:${BRANCH_NAME}"'
  ]
images: ['gcr.io/$PROJECT_ID/base_image:$BRANCH_NAME']
timeout: 900s

The generated kubeconfig file didn't contain the credentials themselves, but instead a set of instructions for fetching them later.
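
Concretely, the user entry in that kubeconfig looks roughly like this (the cluster name and gcloud path are illustrative); resolving the token means executing the cmd-path helper, which fails in a container that doesn't ship gcloud:

users:
- name: gke_my-project_us-west1-a_cluster-1
  user:
    auth-provider:
      name: gcp
      config:
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        cmd-args: config config-helper --format=json
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'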

To fix this, I just called kubectl version after getting the credentials; that makes the auth provider fetch an access token and cache it in the kubeconfig before the file is copied:

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/base_image:$BRANCH_NAME', '.']
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: '/bin/bash'
  args: ['-c', 'gcloud container clusters get-credentials cluster-1 --zone us-west1-a --project $PROJECT_ID && kubectl version && cp ~/.kube/config /workspace/kubeconfig']
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'run', '-e', 'KUBECONFIG=/root/app/kubeconfig', '-v', '/workspace:/root/app', '--entrypoint', '/bin/sh', 'linkyard/docker-helm',
    '-c', '/bin/helm init -c && /bin/helm install /root/app/k8s/helm --debug --set image="gcr.io/${PROJECT_ID}/base_image:${BRANCH_NAME}"'
  ]
images: ['gcr.io/$PROJECT_ID/base_image:$BRANCH_NAME']
timeout: 900s
