Changing a ConfigMap's data causes a replacement #1567
@pierlucg-xs Is the Deployment being replaced, or just updated? Pulumi's k8s provider intentionally treats ConfigMap resources as immutable. I would expect this to trigger a rollout (update) in dependent Deployments rather than a replacement.
Yes, here's how I reproduced it with K3S:

```typescript
import * as k8s from "@pulumi/kubernetes";

const provider = new k8s.Provider("k3s", {
  kubeconfig,
  enableDryRun: false,
});

const configMap = new k8s.core.v1.ConfigMap(
  "configmap",
  {
    data: { foo: `${Date.now()}` },
    metadata: {
      name: "configmap",
    },
  },
  { provider: provider }
);

const deployment = new k8s.apps.v1.Deployment(
  "deployment",
  {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: {
      labels: {
        app: "bar",
      },
      name: "bar",
      namespace: "default",
    },
    spec: {
      replicas: 1,
      selector: {
        matchLabels: {
          app: "bar",
        },
      },
      template: {
        metadata: {
          labels: {
            app: "bar",
          },
        },
        spec: {
          containers: [
            {
              envFrom: [{ configMapRef: { name: configMap.metadata.name } }],
              image: "nginxdemos/hello",
              name: "demo-service",
              ports: [{ containerPort: 8080 }],
            },
          ],
        },
      },
    },
  },
  { provider: provider }
);
```

The second `pulumi up` replaces the ConfigMap (and the Deployment along with it). Any ideas why? AFAIK ConfigMaps have a PATCH API.
Ah, ok. The reason you're seeing the replacement is that you are manually specifying the ConfigMap name (`metadata: { name: "configmap" }`) rather than using auto-naming.
If you remove that and let Pulumi auto-name the ConfigMap, the Deployment will update rather than replace. Our k8s provider intentionally treats ConfigMap and Secret resources as immutable rather than using the PATCH API for a couple of reasons, number 1 being that Kubernetes does not roll out dependent workloads (e.g. Deployments) when a referenced ConfigMap's data changes in place, so replacing the auto-named ConfigMap is what forces the new values to be picked up.
I'd be interested to hear more about the use case if you are intentionally trying to reuse the same ConfigMap.
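For illustration, here is a minimal sketch of the auto-named variant of the ConfigMap from the repro above. The only change is dropping the explicit `metadata.name`; the `provider` reference is assumed to be the one defined earlier in the repro.

```typescript
import * as k8s from "@pulumi/kubernetes";

// Sketch: omitting metadata.name lets Pulumi auto-name the ConfigMap
// (e.g. "configmap-ab12cd3"). When the data changes, a ConfigMap with a
// new name is created and the Deployment's envFrom reference is updated,
// which triggers a rollout instead of a Deployment replacement.
const configMap = new k8s.core.v1.ConfigMap(
  "configmap",
  {
    data: { foo: `${Date.now()}` },
  },
  { provider: provider } // same provider as in the repro above (assumed in scope)
);
```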
I was not using auto-naming because of how the cluster was originally managed. I've definitely been bitten by number 1 in the past! I'd argue that by trying to alleviate a Kubernetes issue, Pulumi is acting in a non-"idiomatic K8s" way. All in all, the benefits of auto-naming are clear and will resolve my issue. Thanks a lot for your quick response!
Right, sorry for the confusion here, and thanks for linking that upstream issue! I agree that this behavior could be a surprise to some k8s users. I do think this is the behavior that most users actually want, so I'll open a work item to make this clearer in our docs.
I know this is a closed issue, so I'm happy to start a new ticket. Unrelated to Deployments: in EKS there is an aws-auth ConfigMap. I noticed that when trying to add new users and roles, Pulumi will replace the ConfigMap. This causes all of the NodeGroups to permanently enter a failing state and never recover (it does seem random and doesn't happen all of the time, but it does most of the time). I understand that for Deployments a replace is beneficial, but it would be nice if the user could control the behavior when they need to, instead of Pulumi making assumptions about what is using the ConfigMap.
This aws-auth behavior remains quite a large landmine, and it's surprising that Pulumi's default is to risk bricking the cluster.
Let's take another look at this ...
Thank you for your patience, everyone. It sounds like the current behaviour of replacing ConfigMaps doesn't work well for every use case. I've started an internal discussion of what the design for it may be.
This is possible already if you use the `enableConfigMapMutable` provider flag. I suspect that we may want to swap the default in a future (major?) release. Given the launch of Pulumi ESC this week, I'll also note that you could set this as a default via the environment variable. That could be a nice way to pull in standardized configurations that are applied to each stack in an organization.
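As a rough sketch (assuming the TypeScript SDK, where `enableConfigMapMutable` is an option on the `k8s.Provider`), opting into in-place ConfigMap updates could look like this; the kubeconfig source shown here is just an illustrative assumption:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Sketch: a provider that updates ConfigMaps in place instead of
// replacing them. With this flag set, changing ConfigMap data should no
// longer cascade into a replacement of resources that reference it.
const provider = new k8s.Provider("k8s-mutable-configmaps", {
  kubeconfig: process.env.KUBECONFIG, // illustrative; load your kubeconfig however your stack normally does
  enableConfigMapMutable: true,
});
```

If the flag behaves as described above, ConfigMaps managed through such a provider (including cases like EKS's aws-auth) should be updated rather than replaced on the next `pulumi up`.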
Awesome! Thank you @lblackstone. I'd love to hear back from folks on this ticket whether this workaround works for them.
Was not familiar with that option, but it certainly sounds like exactly what's needed here. Not sure when I'll have time to give it a try, but will do so. Thank you all!
Changing a ConfigMap's data causes a replacement of that resource, which in turn causes other resources that depend on it (e.g. a Deployment using `envFrom`) to be replaced as well, causing downtime.

Expected behavior

The ConfigMap should be updated in place.

Steps to reproduce

Run `pulumi up` twice with the code shown in the reproduction above; the second run will result in a replacement of that ConfigMap.