
Changing a ConfigMap's data causes a replacement #1567

Open

ghost opened this issue May 6, 2021 · 13 comments
Labels
kind/enhancement Improvements or new features

Comments

ghost commented May 6, 2021

Changing a ConfigMap's data causes a replacement of that resource, which in turn replaces resources that depend on it (e.g. a Deployment using envFrom) and causes downtime.

Expected behavior

The ConfigMap should be updated in place.

Steps to reproduce

Running pulumi up twice with the following code:

import * as k8s from "@pulumi/kubernetes";

const testConfigMap = new k8s.core.v1.ConfigMap(
  'test',
  {
    data: {foo: `${Date.now()}`},
  }
);

will result in a replacement of that ConfigMap:

 +-  └─ kubernetes:core/v1:ConfigMap  test                                       replace     [diff: ~data]
ghost added the kind/bug (Some behavior is incorrect or out of spec) label May 6, 2021
lblackstone self-assigned this May 6, 2021
@lblackstone (Member)

@pierlucg-xs Is the Deployment being replaced, or just updated?

Pulumi's k8s provider intentionally treats ConfigMap resources as immutable. I would expect this to trigger a rollout (update) in dependent Deployments rather than a replacement.

ghost (Author) commented May 6, 2021

Yes, here's how I reproduced it with K3S:

import * as k8s from "@pulumi/kubernetes";

// kubeconfig is defined elsewhere (e.g. pulled from stack config)
const provider = new k8s.Provider("k3s", {
  kubeconfig,
  enableDryRun: false,
});

const configMap = new k8s.core.v1.ConfigMap(
  "configmap",
  {
    data: { foo: `${Date.now()}` },
    metadata: {
      name: "configmap",
    },
  },
  { provider: provider }
);

const deployment = new k8s.apps.v1.Deployment(
  "deployment",
  {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: {
      labels: {
        app: "bar",
      },
      name: "bar",
      namespace: "default",
    },
    spec: {
      replicas: 1,
      selector: {
        matchLabels: {
          app: "bar",
        },
      },
      template: {
        metadata: {
          labels: {
            app: "bar",
          },
        },
        spec: {
          containers: [
            {
              envFrom: [{ configMapRef: { name: configMap.metadata.name } }],
              image: "nginxdemos/hello",
              name: "demo-service",
              ports: [{ containerPort: 8080 }],
            },
          ],
        },
      },
    },
  },
  { provider: provider }
);

Second pulumi up:

     Type                              Name               Plan        Info
   pulumi:pulumi:Stack               k3s-configmaptest              
+-  ├─ kubernetes:apps/v1:Deployment  deployment         replace     
+-  └─ kubernetes:core/v1:ConfigMap   configmap          replace     [diff: ~data]

Pulumi's k8s provider intentionally treats ConfigMap resources as immutable

Any ideas why? AFAIK ConfigMaps have a PATCH API.

lblackstone (Member) commented May 6, 2021

Ah, OK. You're seeing the replacement because you're manually specifying the ConfigMap name rather than using auto-naming:

metadata: {
  name: "configmap",
},

If you remove that and let Pulumi auto-name the ConfigMap, the Deployment will update rather than replace.
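For illustration, here's a minimal sketch of the auto-named variant, adapted from the repro above (the explicit provider option is omitted for brevity, and the generated name shown in the comment is just an example):

import * as k8s from "@pulumi/kubernetes";

// No metadata.name, so Pulumi auto-names the ConfigMap (e.g. "configmap-ab12cd3").
const configMap = new k8s.core.v1.ConfigMap("configmap", {
  data: { foo: `${Date.now()}` },
});

// Reference the generated name. When the data changes, Pulumi creates a new
// ConfigMap, updates the Deployment to point at it, and only then deletes the old one.
const deployment = new k8s.apps.v1.Deployment("deployment", {
  spec: {
    replicas: 1,
    selector: { matchLabels: { app: "bar" } },
    template: {
      metadata: { labels: { app: "bar" } },
      spec: {
        containers: [{
          name: "demo-service",
          image: "nginxdemos/hello",
          envFrom: [{ configMapRef: { name: configMap.metadata.name } }],
          ports: [{ containerPort: 8080 }],
        }],
      },
    },
  },
});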


Our k8s provider intentionally treats ConfigMap and Secret resources as immutable rather than using the PATCH API for a couple of reasons:

  1. Pods referencing the ConfigMap/Secret have to be restarted for changes to take effect. This is a common source of subtle errors where a subset of Pods reference the updated ConfigMap/Secret values while others are still using stale data.
  2. With auto-named ConfigMap/Secret resources, Pulumi will create a new instance, update dependents to refer to the new instance, and only remove the old instance once that update succeeds. This makes it easier to roll back on errors because the original data is still present.

I'd be interested to hear more about the use case if you are intentionally trying to reuse the same ConfigMap.

ghost (Author) commented May 6, 2021

I was not using auto-naming because the cluster was managed with kubectl before and I imported those resources into Pulumi as-is, but it's not a hard requirement, and I'll definitely change it to use auto-naming.


I've definitely been bitten by reason 1 in the past! I'd argue, though, that by trying to alleviate a Kubernetes issue, Pulumi is acting in a non-"idiomatic K8s" way.

All in all, the benefits of auto-naming are clear and will resolve my issue, thanks a lot for your quick response!

ghost closed this as completed May 6, 2021
ghost changed the title from "Chaging a ConfigMap's data causes a replacement" to "Changing a ConfigMap's data causes a replacement" May 6, 2021
@lblackstone (Member)

I'd argue that by trying to alleviate a Kubernetes issue, Pulumi is acting in a non "idiomatic K8S" way.

Right, sorry for the confusion here, and thanks for linking that upstream issue! I agree that this behavior could be a surprise to some k8s users.

I do think this is the behavior that most users actually want, so I'll open a work item to make this clearer in our docs.

@iridian-ks

I know this is a closed issue, so I'm happy to start a new ticket. Unrelated to Deployments: in EKS there is an aws-auth ConfigMap, and I noticed that when trying to add new users and roles, Pulumi will replace it. This causes all of the NodeGroups to permanently enter a failing state and never recover (it does seem random and it doesn't happen all of the time [but most of the time]).

I understand that a replace is beneficial for Deployments, but it would be nice if the user could control the behavior when they need to, instead of the provider making assumptions about what is using the ConfigMap.


aschleck commented Oct 6, 2023

This aws-auth behavior remains quite a large landmine, and it's surprising that Pulumi's default is to risk bricking the cluster.

@ericrudder (Member)

Let's take another look at this ...

ericrudder reopened this Oct 7, 2023
ericrudder added the needs-triage (Needs attention from the triage team) label Oct 7, 2023
EronWright self-assigned this Oct 9, 2023
mikhailshilkov added the kind/enhancement (Improvements or new features) label and removed the kind/bug (Some behavior is incorrect or out of spec) and needs-triage (Needs attention from the triage team) labels Oct 11, 2023
@mikhailshilkov (Member)

Thank you for your patience, everyone. It sounds like the current behaviour of replacing ConfigMaps is by design, but we need to provide a mechanism to override this behaviour and change the replacement to an in-place update. A solution would probably involve some sort of resource option or annotation.

I've started an internal discussion about what the design for it might be.

@lblackstone (Member)

we need to provide a mechanism to override this behaviour and change the replacement to an in-place update

This is possible already if you use the enableConfigMapMutable provider flag. I suspect that we may want to swap the default in a future (major?) release.
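For illustration, a minimal sketch of opting in at the provider level (resource names here are placeholders, and the flag assumes a provider version that supports enableConfigMapMutable):

import * as k8s from "@pulumi/kubernetes";

// Opt in to in-place ConfigMap updates instead of replacements.
const provider = new k8s.Provider("k8s", {
  enableConfigMapMutable: true,
});

// ConfigMaps created with this provider are updated in place when their data
// changes, so dependents keep referencing the same ConfigMap name.
const configMap = new k8s.core.v1.ConfigMap("mutable-example", {
  data: { foo: "bar" },
}, { provider });

Note that with in-place updates, Pods referencing the ConfigMap are not restarted automatically to pick up the new data (see reason 1 above).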

Given the launch of Pulumi ESC this week, I'll also note that you could set this as a default via the environment variable. That could be a nice way to pull in standardized configurations that are applied to each stack in an organization.

@mikhailshilkov (Member)

Awesome! Thank you @lblackstone

I'd love to hear back from folks on this ticket whether this workaround works for them.

@aschleck

I wasn't familiar with that option, but it certainly sounds like exactly what's needed here. Not sure when I'll have time to give it a try, but I will do so. Thank you all!
