terraform-provider-kubectl

A Terraform provider to run various kubectl operations across multiple clusters in a single terraform apply.

Use-cases:

  • Integrate kubectl run(s) into a Terraform-managed DAG of resources (see the sketch after this list)
    • Annotate or label K8s resources created by other Terraform resources like eksctl_cluster of the eksctl provider
  • AssumeRole before running kubectl
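
As a minimal sketch of the DAG use-case, the below wires kubectl_ensure to a cluster created by another Terraform resource. The eksctl_cluster resource and its kubeconfig_path attribute are illustrative assumptions about whichever provider provisions your cluster, not part of this provider:

resource "eksctl_cluster" "primary" {
  // snip: hypothetical resource that provisions the cluster
}

resource "kubectl_ensure" "meta" {
  // Referencing the cluster's (assumed) kubeconfig output makes
  // Terraform run kubectl only after the cluster has been created.
  kubeconfig = eksctl_cluster.primary.kubeconfig_path

  namespace = "kube-system"
  resource  = "configmap"
  name      = "aws-auth"

  labels = {
    "key1" = "one"
  }
}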

Prerequisites

  • kubectl (if you prefer not to let the provider install it for you)

Installation

For Terraform 0.12:

Install the terraform-provider-kubectl binary under .terraform/plugins/${OS}_${ARCH}, so that the binary is at e.g. ${WORKSPACE}/.terraform/plugins/darwin_amd64/terraform-provider-kubectl.

For Terraform 0.13 and later:

The provider is available on the Terraform Registry, so you can just add the following to your .tf file for installation:

terraform {
  required_providers {
    kubectl = {
      source = "mumoshu/kubectl"
      version = "VERSION"
    }
  }
}

Please replace VERSION with the version number of the provider without the v prefix, like 0.1.0.
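
For example, pinning to a hypothetical version 0.1.0, the block would read:

terraform {
  required_providers {
    kubectl = {
      source  = "mumoshu/kubectl"
      version = "0.1.0"
    }
  }
}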

Examples

There is nothing to configure for the provider itself, so you first declare it like:

provider "kubectl" {}

The only supported resource is kubectl_ensure.

resource "kubectl_ensure" "meta" {
  kubeconfig = var.kubeconfig

  namespace = "kube-system"
  resource = "configmap"
  name = "aws-auth"

  labels = {
    "key1" = "one"
    "key2" = "two"
  }

  annotations = {
    "key3" = "three"
    "key4" = "four"
  }
}

See the labels and annotations example for more details.
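
For instance, to let Helm 3 adopt a resource it didn't create, you would typically set the managed-by label and release annotations shown below. This is a sketch of the general Helm 3 adoption convention rather than anything specific to this provider, and the release name is a placeholder:

resource "kubectl_ensure" "adopt" {
  kubeconfig = var.kubeconfig

  namespace = "kube-system"
  resource  = "configmap"
  name      = "aws-auth"

  labels = {
    // Helm 3 refuses to adopt resources lacking this label
    "app.kubernetes.io/managed-by" = "Helm"
  }

  annotations = {
    // Placeholders: use your actual release name and namespace
    "meta.helm.sh/release-name"      = "my-release"
    "meta.helm.sh/release-namespace" = "kube-system"
  }
}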

Advanced Features

Declarative binary version management

terraform-provider-kubectl has a built-in package manager called shoal. With it, you can specify the following kubectl_ensure attributes to let the provider install the required executable binaries on demand:

  • version for installing kubectl

version uses the Go runtime and go-git, so it should work without any external dependency.

With the below example, the provider installs the latest kubectl v1.18.x, so you don't need to install it beforehand. This should be handy when you're using this provider on Terraform Cloud, whose runtime environment cannot be customized by the user.

resource "kubectl_ensure" "meta" {
  version = ">= 1.18.0, < 1.19.0"

  // snip

Please see this example for more details.

AWS authentication and AssumeRole support

Given any combination of the aws_region, aws_profile, and aws_assume_role attributes, the provider obtains the following environment variables and provides them to every kubectl command it runs:

  • aws_region attribute: AWS_DEFAULT_REGION
  • aws_profile attribute: AWS_PROFILE
  • aws_assume_role block: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN obtained by calling sts:AssumeRole

resource "kubectl_ensure" "meta" {
  aws_region = var.region
  aws_profile = var.profile
  aws_assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/${var.role_name}"
  }
  // snip

Those environment variables flow from the provider to kubectl, then to client-go, and finally to the aws exec credentials provider, which reads them to call aws eks get-token, which in turn calls sts:GetCallerIdentity (cf. aws sts get-caller-identity) for authentication.

See https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html for more information on how authentication works on EKS.

Develop

If you wish to build this provider yourself, follow these instructions:

cd terraform-provider-kubectl
go build
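
To exercise a locally built binary without publishing it, one option on Terraform 0.14+ is a dev_overrides block in your CLI configuration file (e.g. ~/.terraformrc). This is a minimal sketch; the path is a placeholder for the directory containing your freshly built terraform-provider-kubectl binary:

provider_installation {
  dev_overrides {
    // Directory containing the locally built provider binary
    "mumoshu/kubectl" = "/path/to/terraform-provider-kubectl"
  }

  // Install all other providers from the registry as usual
  direct {}
}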

Alternatives

This project was published in October 2020. At that time, there was already an existing, well-known, and awesome project called gavinbunney/terraform-provider-kubectl.

The reason I created this one was that I wanted to patch K8s resources with labels and annotations, so that the aws-auth configmap created by eksctl could be adopted and managed by Helm.

I may gradually add other features that gavinbunney's kubectl provider covers, such as applying K8s manifests. But if you think this provider isn't what you want, I definitely recommend checking out gavinbunney's as an alternative!
