Docker image for issuing kubectl commands in a Codeship Pro build
codeship/kubectl container

A container with the kubectl binary, plus guidance for configuring credentials (including secrets) so you can issue commands to your cluster from a Codeship Pro step.

Distill your k8s configurations into a single file

kubectl config view --flatten > kubeconfigdata # add --minify flag to reduce info to current context
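To sanity-check that the flattened file works on its own (assuming kubectl is installed locally), point KUBECONFIG at it and list the contexts:

KUBECONFIG=kubeconfigdata kubectl config get-contexts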

Copy the contents into an env var file using the codeship/env-var-helper container

docker run --rm -it -v $(pwd):/files codeship/env-var-helper cp kubeconfigdata:/root/.kube/config k8s-env

Check out the codeship/env-var-helper README for more information.

Encrypt the file, then remove the plaintext files and/or add them to .gitignore

jet encrypt k8s-env k8s-env.encrypted
rm kubeconfigdata k8s-env
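If you would rather keep the plaintext files around locally, a minimal .gitignore addition to keep them out of the repository could look like this:

# .gitignore — never commit the plaintext kube config or env file
kubeconfigdata
k8s-env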

Configure the service and steps for the build, using the following as guidance

# codeship-services.yml
kubectl:
  build:
    image: codeship/kubectl
    dockerfile: Dockerfile
  encrypted_env_file: k8s-env.encrypted

# codeship-steps.yml
- name: check response to kubectl config
  service: kubectl
  command: kubectl config view
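Once the config check passes, a follow-up step would typically apply your manifests. The deploy/ path below is only an illustration, and it assumes the kubectl service can see your manifest files (for example via a volume in codeship-services.yml):

# codeship-steps.yml (additional deploy step, hypothetical manifest path)
- name: apply manifests
  service: kubectl
  command: kubectl apply -f deploy/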

Deploying to EKS?

The same workflow outlined above works with EKS; just use the codeship/eks-kubectl image, which comes with aws-iam-authenticator and an AWS-vendored copy of kubectl preinstalled.
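A services entry for the EKS variant might look like the sketch below; it assumes you build from the Dockerfile.eks in this repo and that your encrypted env file also carries whatever AWS credentials aws-iam-authenticator needs:

# codeship-services.yml (EKS variant, sketch)
kubectl:
  build:
    image: codeship/eks-kubectl
    dockerfile: Dockerfile.eks
  encrypted_env_file: k8s-env.encrypted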