
Feature Request: equivalent of kubectl patch #723

Open
zapman449 opened this issue Jan 2, 2020 · 89 comments

Comments

@zapman449

Terraform Version

Terraform v0.12.18

Affected Resource(s)

n/a (request for new resource)

In AWS EKS, clusters come "pre-configured" with several things running in the kube-system namespace. We need to patch those pre-configured components while retaining any "upstream" changes that may be made (for example, to set HTTP_PROXY variables).

kubectl provides the patch keyword to handle this use-case.

The kubernetes provider for terraform should do the same.

Proposed example (this would add the proxy-environment-variables ConfigMap to the existing envFrom list, which already contains aws-node-environment-variable-additions, for the container named aws-node):

resource "kubernetes_patch" "aws-node" {
  kind = "DaemonSet"
  metadata {
    name      = "aws-node"
    namespace = "kube-system"
  }
  spec {
    template {
      spec {
        container {
          name = "aws-node"
          env_from {
            config_map_ref {
              name = "proxy-environment-variables"
            }
          }
          env_from {
            config_map_ref {
              name = "aws-node-environment-variable-additions"
            }
          }
        }
      }
    }
  }
}
@antonosmond

I have 2 additional use cases for the same feature, both on EKS.

  1. If you want to utilise node taints & tolerations for all your nodes, any EKS managed k8s resources e.g. coredns must be patched to tolerate the taints.

  2. Fargate on EKS. If you want to run a nodeless cluster and use Fargate to run everything, some EKS managed resources e.g. coredns prevent this via an annotation e.g.

  annotations:
    eks.amazonaws.com/compute-type: ec2

The annotation must be removed.
The ability to patch resources would solve both these use cases and many others.

@oleksandrsemak

It would also be nice to have an equivalent of kubectl taint node, since patching coredns to tolerate taints will not work without tainting the nodes first.


@stoimendhristov

I am currently trying to update an existing ConfigMap and simply add more rules to it, but once the ConfigMap is created it seems it cannot be referred to in order to be updated.

Any thoughts?

Thanks

@eugene-burachevskiy

When we set up an EKS cluster with terraform and use tainted on-demand nodes for all system services, we have to patch CoreDNS first to make all subsequently installed apps work. For now we can't patch the existing EKS CoreDNS with terraform, so we have to install a 3rd-party CoreDNS helm chart at the beginning.

Ability to patch existing deployments would be really great.
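Until a native patch resource exists, the tolerations can be added with the same null_resource/local-exec pattern used elsewhere in this thread. A minimal sketch; the toleration key/value are placeholders, not the actual setup, and kubectl is assumed to be configured for the cluster:

```hcl
# Hypothetical interim workaround: strategic-merge-patch tolerations onto CoreDNS.
resource "null_resource" "coredns_tolerations" {
  provisioner "local-exec" {
    command = <<EOT
kubectl -n kube-system patch deployment coredns --type strategic \
  -p '{"spec":{"template":{"spec":{"tolerations":[{"key":"dedicated","value":"system","effect":"NoSchedule"}]}}}}'
EOT
  }
}
```

Note that patching a list like tolerations typically replaces the whole list, so any existing entries should be repeated in the payload.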

@hijakebye

+1 Would love this for some of our enterprise EKS Fargate Deployments

@ivan-sukhomlyn

It would be nice to have such a Terraform resource to patch the EKS aws-node DaemonSet with a custom ServiceAccount, for example when using the IRSA approach for Pod authorization.

@aareet aareet added the acknowledged Issue has undergone initial review and is in our work queue. label May 27, 2020
@vide

vide commented May 28, 2020

This is also needed to patch EKS clusters hit by kubernetes/kubernetes#61486

@blawlor

blawlor commented Jul 28, 2020

This feature would feed very well into things like custom CNI on EKS

@memory

memory commented Aug 5, 2020

This would also help for management of service meshes such as linkerd or istio, where one might want to add annotations to control mesh proxy injection into the kube-system or default namespace.

This request is actually being made in different forms in several issues now, see also:

#238
hashicorp/terraform#22754

@memory

memory commented Aug 5, 2020

For anyone else who's running into this, we've for the moment worked around it with a truly awful abuse of the null resource and local provisioner:

resource "null_resource" "k8s_patcher" {
  triggers = {
    // fire any time the cluster is updated in a way that changes its endpoint or auth
    endpoint = google_container_cluster.default.endpoint
    ca_crt   = google_container_cluster.default.master_auth[0].cluster_ca_certificate
    token    = data.google_client_config.provider.access_token
  }

  # download kubectl and patch the default namespace
  provisioner "local-exec" {
    command = <<EOH
cat >/tmp/ca.crt <<EOF
${base64decode(google_container_cluster.default.master_auth[0].cluster_ca_certificate)}
EOF
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x ./kubectl
./kubectl \
  --server="https://${google_container_cluster.default.endpoint}" \
  --token="${data.google_client_config.provider.access_token}" \
  --certificate-authority=/tmp/ca.crt \
  patch namespace default \
  -p '{"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"linkerd.io/inject":"enabled"},"name":"default"}}'
EOH
  }
}

@Haptr3c

Haptr3c commented Aug 24, 2020

I tweaked @memory's null_resource workaround to work with the aws provider. This should save anyone looking to run fargate-only EKS a bit of time.

resource "aws_eks_fargate_profile" "coredns" {
  cluster_name           = aws_eks_cluster.main.name
  fargate_profile_name   = "coredns"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution_role.arn
  subnet_ids             = var.private_subnets.*.id
  selector {
    namespace = "kube-system"
    labels = {
      k8s-app = "kube-dns"
    }
  }
}

resource "null_resource" "k8s_patcher" {
  depends_on = [ aws_eks_fargate_profile.coredns ]
  triggers = {
    // fire any time the cluster is updated in a way that changes its endpoint or auth
    endpoint = aws_eks_cluster.main.endpoint
    ca_crt   = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    token    = data.aws_eks_cluster_auth.cluster.token
  }
  provisioner "local-exec" {
    command = <<EOH
cat >/tmp/ca.crt <<EOF
${base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)}
EOF
apk --no-cache add curl && \
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.9/2020-08-04/bin/linux/amd64/aws-iam-authenticator && chmod +x ./aws-iam-authenticator && \
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x ./kubectl && \
mkdir -p $HOME/bin && mv ./aws-iam-authenticator $HOME/bin/ && export PATH=$PATH:$HOME/bin && \
./kubectl \
  --server="${aws_eks_cluster.main.endpoint}" \
  --certificate-authority=/tmp/ca.crt \
  --token="${data.aws_eks_cluster_auth.cluster.token}" \
  patch deployment coredns \
  -n kube-system --type json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
EOH
  }
}

@kvaidas

kvaidas commented Jan 27, 2021

The following might also be a viable workaround:

resource "local_file" "kubeconfig" {
  filename = pathexpand("~/.kube/config")
  content = <<-CONFIG
    apiVersion: v1
    kind: Config
    clusters:
    - name: clustername
      cluster:
        server: ${aws_eks_cluster.this.endpoint}
        certificate-authority-data: ${aws_eks_cluster.this.certificate_authority.0.data}
    contexts:
    - name: contextname
      context:
        cluster: clustername
        user: username
    current-context: contextname
    users:
    - name: username
      user:
        token: ${data.aws_eks_cluster_auth.this-auth.token}
  CONFIG
}

Might work quicker since the token should only be requested once and then reused for any kubectl commands.

Also doesn't depend on having aws-cli installed.


@johanferguth

Same question for me :) :
If I want to allow users other than me to manage an AWS EKS cluster, I have to edit the aws-auth ConfigMap. It would be very useful to patch this ConfigMap after a deployment rather than replace it entirely.
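As an interim workaround, the local-exec approach from earlier comments can merge-patch aws-auth. A sketch with a placeholder IAM user ARN; note that a merge patch overwrites the whole mapUsers key rather than appending to it, so the payload must contain every entry you want to keep:

```hcl
# Hypothetical workaround: merge-patch the aws-auth ConfigMap with an extra user.
resource "null_resource" "aws_auth_map_users" {
  provisioner "local-exec" {
    command = <<EOT
kubectl -n kube-system patch configmap aws-auth --type merge \
  -p '{"data":{"mapUsers":"- userarn: arn:aws:iam::111122223333:user/example\n  username: example\n  groups:\n    - system:masters\n"}}'
EOT
  }
}
```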

@ams0

ams0 commented Feb 12, 2021

Adding to the list, patching argocd-cm ConfigMap to add a private repository. I bootstrap AKS+ArgoCD and I'd like to use a private repos for the apps.

@GolubevV

Same issue here: we need to patch coredns for a taints/tolerations setup, and the aws-node daemonset for some parameter tweaking (like the IP warm target and enabling external SNAT).

It would be really nice to have all resources provisioned in one shot by terraform, without workarounds like the local-exec provisioner, which does not work on TFE out of the box due to missing kubectl.

@holgerson97

This is also relevant when you want to deploy an EKS cluster running only Fargate: you need to patch the existing CoreDNS deployment in order to run it on Fargate.

@shanoor

shanoor commented Mar 29, 2021

Also needed to simply edit the coredns-custom configmap that is created by default in AKS.

@adamrushuk

Adding to the list, patching argocd-cm ConfigMap to add a private repository. I bootstrap AKS+ArgoCD and I'd like to use a private repos for the apps.

I've got a similar requirement, so until there is a better method, I'm using a template and null resource:

# argocd-cm patch
# https://registry.terraform.io/providers/hashicorp/template/latest/docs/data-sources/file
data "template_file" "argocd_cm" {
  template = file(var.argocd_cm_yaml_path)
  vars = {
    tenantId    = data.azurerm_client_config.current.tenant_id
    appClientId = azuread_service_principal.argocd.application_id
  }
}

# https://www.terraform.io/docs/provisioners/local-exec.html
resource "null_resource" "argocd_cm" {
  triggers = {
    yaml_contents = filemd5(var.argocd_cm_yaml_path)
    sp_app_id     = azuread_service_principal.argocd.application_id
  }

  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    environment = {
      KUBECONFIG = var.aks_config_path
    }
    command = <<EOT
      kubectl patch configmap/argocd-cm --namespace argocd --type merge --patch "${data.template_file.argocd_cm.rendered}"
    EOT
  }

  depends_on = [
    local_file.kubeconfig,
    null_resource.argocd_configure
  ]
}

@jamesanto

jamesanto commented May 19, 2022

Expanding on @cmanfre4's answer, we could probably simplify it to a single job with this command:

["/bin/sh", "-c", "compute_type=$(kubectl get deployments.app/coredns -n kube-system -o jsonpath='{.spec.template.metadata.annotations.eks\\.amazonaws\\.com/compute-type}'); [ ! -z \"$compute_type\" ] && kubectl patch deployments.app/coredns -n kube-system --type json -p='[{\"op\":\"remove\", \"path\": \"/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type\"}]' && kubectl rollout restart deployments.app/coredns -n kube-system"]

It does 2 things:

  • patches only if required (to avoid errors)
  • restarts in the same job if patched

@bryantbiggs

Just FYI - if you patch CoreDNS on EKS, you'll want to stop the EKS API from managing the CoreDNS deployment by using preserve = true. If not, the next time the EKS API updates the addon, it will remove your patch and cause your DNS to fail.
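In Terraform this corresponds to the conflict-resolution setting on the aws_eks_addon resource; a sketch, assuming a cluster resource named aws_eks_cluster.main:

```hcl
resource "aws_eks_addon" "coredns" {
  cluster_name = aws_eks_cluster.main.name
  addon_name   = "coredns"

  # PRESERVE keeps hand-applied field changes (like a removed annotation)
  # when the EKS API rolls out an addon update.
  resolve_conflicts = "PRESERVE"
}
```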

@gnuletik

Thanks @jkroepke!

I was able to remove the default storage class from an EKS cluster with https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/annotations.

resource "kubernetes_annotations" "default-storageclass" {
  api_version = "storage.k8s.io/v1"
  kind        = "StorageClass"
  force       = "true"

  metadata {
    name = "gp2"
  }
  annotations = {
    "storageclass.kubernetes.io/is-default-class" = "false"
  }
}

@FernandoMiguel


That's a very interesting way of doing this.

@dudicoco

dudicoco commented May 31, 2022

I have a very simple workaround in the form of a bootstrap script which deletes the relevant default resources:

#!/usr/bin/env bash

set -euo pipefail

while test $# -gt 0; do
  case "$1" in
  -h | --help)
    echo " "
    echo "options:"
    echo "-h, --help            show brief help"
    echo "--context             specify kube context"
    exit 0
    ;;
  --context)
    shift
    if test $# -gt 0; then
      context=$1
    else
      echo "no kube context specified"
      exit 1
    fi
    shift
    ;;
  *)
    break
    ;;
  esac
done

for kind in daemonset clusterRole clusterRoleBinding serviceAccount; do
  echo "deleting $kind/aws-node"
  kubectl --context "$context" --namespace kube-system delete $kind aws-node
done

for kind in customResourceDefinition; do
  echo "deleting $kind/eniconfigs.crd.k8s.amazonaws.com"
  kubectl --context "$context" --namespace kube-system delete $kind eniconfigs.crd.k8s.amazonaws.com
done

for kind in daemonset serviceAccount; do
  echo "deleting $kind/kube-proxy"
  kubectl --context "$context" --namespace kube-system delete $kind kube-proxy
done

for kind in configMap; do
  echo "deleting $kind/kube-proxy-config"
  kubectl --context "$context" --namespace kube-system delete $kind kube-proxy-config
done

for kind in deployment serviceAccount configMap; do
  echo "deleting $kind/coredns"
  kubectl --context "$context" --namespace kube-system delete $kind coredns
done

for kind in service; do
  echo "deleting $kind/kube-dns"
  kubectl --context "$context" --namespace kube-system delete $kind kube-dns
done

for kind in storageclass; do
  echo "deleting $kind/gp2"
  kubectl --context "$context" delete $kind gp2
done

@adiii717

I've got a similar requirement to update the ArgoCD password, and this worked for me

resource "null_resource" "argocd_update_pass" {
  provisioner "local-exec" {
    interpreter = ["/bin/bash", "-c"]
    command     = <<EOT
    kubectl patch secret -n argocd argocd-secret -p '{"stringData": { "admin.password": "'$(htpasswd -bnBC 10 "" ${data.azurerm_key_vault_secret.argocd-password.value} | tr -d ':\n')'"}}'  --kubeconfig ./temp/kube-config.yaml;
    EOT
  }
  depends_on = [
    helm_release.argocd,
    local_file.kube_config
  ]
}

resource "local_file" "kube_config" {
  content  = azurerm_kubernetes_cluster.aks.kube_config_raw
  filename = "${path.module}/temp/kube-config.yaml"
}

@michelzanini

michelzanini commented Jun 20, 2022

For those of you whose use-case is patching labels, annotations, and ConfigMap entries: v2.10.0 of the provider brought support for doing this using Server-Side Apply & Field Manager.

Other use-cases on our radar for resources where Terraform will partially manage a Kubernetes resource:

  • Adding container environment variables
  • Setting taints and tolerations

If you have another use-case please share it.

For some context on why we haven't added a completely generic patch resource see this discussion.


Thank you for this!
I think this resolves a few use cases around annotations, labels and config maps that people have in this thread.

I think the top most wanted use cases missing are:

  • Be able to change env variables inside daemonsets/deployments/etc
  • Be able to change annotations/labels inside the daemonset/deployment spec template (as opposed to the top-level annotations themselves)

I believe looking at this thread these are the top priority ones to do first.
It was also mentioned, but seems to be less common, the following use-cases:

  • Change Taints and Tolerations
  • Change Affinity

I wonder if it would not be better to close this issue and open specific focused issues for these 4 use-cases.

Keep up the good work!
Thanks.

@simonvanderveldt

If you have another use-case please share it.

@jrhouston We have another use case. We're running on GKE with Calico enabled, which gives us a lot of readiness/liveness probe failures because the timeout is set to 1s. This is fixed in Calico (see projectcalico/calico#5122 (comment)), but the version of Calico that includes the fix isn't available on (stable) GKE yet. So we want to apply a patch to increase the timeout to match the value set by newer Calico versions.

Also, whilst I understand Terraform's desire to keep the implementation simple, conceptually matching kubectl will probably be simpler for most users to understand, and there'd be no need for a dozen or so specific resources.
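For reference, until such a resource exists the timeout can be bumped with a JSON patch via the local-exec pattern from earlier comments. A sketch in which the daemonset name, container index, and timeout value are all assumptions about the GKE/Calico setup:

```hcl
# Hypothetical workaround: raise the liveness probe timeout on calico-node.
resource "null_resource" "calico_probe_timeout" {
  provisioner "local-exec" {
    command = <<EOT
kubectl -n kube-system patch daemonset calico-node --type json \
  -p '[{"op":"replace","path":"/spec/template/spec/containers/0/livenessProbe/timeoutSeconds","value":10}]'
EOT
  }
}
```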

@Thibault-Brocheton

Here's a use case:
I'm using EKS created by the terraform-aws-modules/terraform-aws-eks module.
I have different types of self-managed node groups in my cluster: some small EC2 instances called "admins" handling system pods (coredns, autoscaler, ALB controller, ...) and some large EC2 instances called "applications" that handle my business applications.
I'm looking to automatically update the coredns deployment, created by EKS, so its nodeSelector targets my admin nodes.

I would love some kind of kubernetes_node_selector resource to do this patch, instead of having to work around it with a bash command or manually import coredns after my EKS creation.
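Until something like that exists, the interim route is the same local-exec pattern used earlier in this thread; a sketch where the node label key/value are placeholders:

```hcl
# Hypothetical workaround: point CoreDNS at the "admins" node group.
resource "null_resource" "coredns_node_selector" {
  provisioner "local-exec" {
    command = <<EOT
kubectl -n kube-system patch deployment coredns --type strategic \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"node-group":"admins"}}}}}'
EOT
  }
}
```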

@b-a-t

b-a-t commented Sep 20, 2022

This is an issue for us as well since we frequently do work in AWS EKS where other users need to be added to aws-auth configmap, but this is not currently possible without external dependencies (kubectl).

On top of this, since release 18 of terraform-aws-modules/terraform-aws-eks, aws-auth isn't managed by the module anymore, most of the workarounds are based on exec/kubectl which is not something everyone can do.

Well, parameters to the module like manage_aws_auth_configmap suggest otherwise...

@StephanX

Thanks @cmanfre4 for the tip. I repurposed your solution to replace my default EKS gp2 storage class (which is unencrypted by default). I also added a variable cluster_bootstrap so that the job only runs the first time, while the replacement gp2 storage class is still managed by terraform:

terraform apply -var='cluster_bootstrap=true'

resource "kubernetes_service_account" "replace_storage_class_gp2" {
  metadata {
    name      = "replace-storage-class-gp2"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role" "replace_storage_class_gp2" {
  metadata {
    name = "replace-storage-class-gp2"
  }

  rule {
    api_groups     = ["storage.k8s.io" ]
    resources      = ["storageclasses"]
    resource_names = ["gp2"]
    verbs          = ["get", "delete"]
  }
}

resource "kubernetes_cluster_role_binding" "replace_storage_class_gp2" {
  metadata {
    name      = "replace-storage-class-gp2"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.replace_storage_class_gp2.metadata[0].name
  }
  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.replace_storage_class_gp2.metadata[0].name
    namespace = "kube-system"
  }
}

resource "kubernetes_job" "replace_storage_class_gp2" {
  count = var.cluster_bootstrap ? 1 : 0
  depends_on = [
    kubernetes_cluster_role_binding.replace_storage_class_gp2
  ]
  metadata {
    name      = "replace-storage-class-gp2"
    namespace = "kube-system"
  }
  spec {
    template {
      metadata {}
      spec {
        service_account_name = kubernetes_service_account.replace_storage_class_gp2.metadata[0].name
        container {
          name    = "replace-storage-class-gp2"
          image   = "bitnami/kubectl:latest"
          command = ["/bin/sh", "-c", "kubectl delete storageclass gp2"]
        }
        restart_policy = "Never"
      }
    }
  }
  wait_for_completion = true
  timeouts {
    create = "5m"
  }
}

resource "kubernetes_storage_class" "gp2" {
  metadata {
    name = "gp2"
  }
  storage_provisioner = "kubernetes.io/aws-ebs"
  reclaim_policy      = "Delete"
  parameters = {
    encrypted = "true"
    fsType    = "ext4"
    type      = "gp2"
  }
  depends_on = [
    kubernetes_job.replace_storage_class_gp2
  ]
}

@ArieLevs

ArieLevs commented Nov 3, 2022

@bryantbiggs @michelzanini check out version 2.15.0, released two days ago.
It now contains the kubernetes_env resource, which can be used exactly for your use case.
I've successfully tested this updating the AWS CNI plugin (daemonset).
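For anyone landing here, a minimal sketch of kubernetes_env against the aws-node DaemonSet (the environment variable name and value are just examples, not a recommendation):

```hcl
# Manage a single env var on the EKS-provisioned aws-node DaemonSet.
resource "kubernetes_env" "aws_node" {
  api_version = "apps/v1"
  kind        = "DaemonSet"
  metadata {
    name      = "aws-node"
    namespace = "kube-system"
  }
  container = "aws-node"

  env {
    name  = "AWS_VPC_K8S_CNI_EXTERNALSNAT"
    value = "true"
  }
}
```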

@djmcgreal

Hi. I’d like to patch imagePullSecrets into a ServiceAccount.
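There's no dedicated resource for that yet; a hedged sketch of the local-exec route (the secret name regcred and the default ServiceAccount are placeholders, and a merge patch replaces the whole imagePullSecrets list):

```hcl
# Hypothetical workaround: attach an image pull secret to a ServiceAccount.
resource "null_resource" "sa_image_pull_secret" {
  provisioner "local-exec" {
    command = <<EOT
kubectl -n default patch serviceaccount default --type merge \
  -p '{"imagePullSecrets":[{"name":"regcred"}]}'
EOT
  }
}
```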

@peteneville

Hi, I would like to patch a meshconfig and add an ingressGateway.

@benemon

benemon commented Mar 9, 2023

Would very much appreciate the ability to patch. When bootstrapping Red Hat OpenShift clusters, there are a large number of Day 1 configuration elements - authn / authz, storage, registry configs etc - where the workflow revolves around patching existing Cluster Resources into the state required.

@alicancakil
Copy link

Hello @ArieLevs
How can I use kubernetes_env to do something like this?

kubectl patch deployment coredns \
    -n kube-system \
    --type json \
    -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'

I looked at the docs but couldn't get it working. Would you be able to kindly provide an example?

@z0rc

z0rc commented Mar 14, 2023

@alicancakil your specific example can be solved by addon's optional configuration. See https://aws.amazon.com/blogs/containers/amazon-eks-add-ons-advanced-configuration/. https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_addon already supports passing configuration values to addon.

@ArieLevs

@alicancakil the kubernetes_env resource will only add environment values to a supported api resource.
I'm not sure the provider supports annotation removal (a classic patch) yet, but maybe you can use the kubernetes_annotations resource; something like this may help:

resource "kubernetes_annotations" "coredns" {
  api_version = "apps/v1"
  kind        = "Deployment"
  metadata {
    name = "coredns"
    namespace = "kube-system"
  }
  # These annotations will be applied to the Pods created by the Deployment
  template_annotations = {
    "eks.amazonaws.com/compute-type" = ""
  }

  force = true
}

This will clear the value of eks.amazonaws.com/compute-type.

@bryantbiggs

bryantbiggs commented Mar 14, 2023

to create a fully serverless EKS Fargate based cluster, you only need to use the addon configuration like @z0rc mentioned. Here is an example https://github.com/clowdhaus/eks-reference-architecture/blob/f37390db1b38d154979cc1aeb4d72ab53929e847/serverless/eks.tf#L13-L15

@partcyborg

We have another use case

When setting up the GKE identity service for enabling OIDC authentication on the k8s API, the setup instructions require you to edit a pre-existing ClientConfig resource (a CRD provided by GKE) and fill in a bunch of fields. There does not appear to be any way to configure this using terraform other than the null_resource hack.

@BBBmau

BBBmau commented Apr 21, 2023

We recently merged and released the ability to patch initContainers with #2067, let us know if there are any issues when using the new patch attribute.

@framctr

framctr commented Jul 11, 2023

Another use case would be to be able to patch an existing resource created by an operator. For example, if I deploy rancher managed Prometheus and then I want to change the configuration of the Prometheus resource.

@WyriHaximus

To add another use case to the list: Patch a priority class name on deployments/statefulsets

@gorkemgoknar

gorkemgoknar commented Oct 11, 2023

Another use case, edit configurations of EKS provided kube-proxy via patching configmap (not via eks addon)

@jarrettprosser

Yet another use case: I'd like to be able to patch the default storageclass on an AKS cluster to add tags to the created volumes. That would require adding

parameters:
  tags: some-tag=some-value

Patching would be preferable to creating new storageclasses, as there are already 7 by default.

@littlejo

I created a resource to patch daemonset in my provider:
https://registry.terraform.io/providers/littlejo/cilium/latest/docs/resources/kubeproxy_free
