Changes to 1st class provider inputs forcing recreation of provider and all dependent resources #2012
I've tried to reproduce this, but cannot trigger the reported replacements. I used the above along with this:

```typescript
const provider = helper.getK8sProviderByClusterName("luke-cluster");

new k8s.core.v1.Pod("nginx", {
    spec: {
        containers: [
            {
                name: "nginx",
                image: "nginx",
            },
        ],
    },
}, { provider: provider });
```

This deploys fine. If I then make a change to my cluster, like decreasing the nodePool size, then I do see that diff as part of the next preview. But other than that, no changes are triggered in my provider or in the Pod I have deployed to the cluster. @geekflyer If you have a diff view of the replacement, could you post it here?
I just tried to reproduce the behaviour by only changing the node count, but that indeed doesn't trigger an update.

I suspect that overnight GKE might sometimes update some properties of the cluster (e.g. due to automatic maintenance), which causes a recreation of the resource. I will continue to observe this behaviour, and the next time I see it I'll post the diff.
I had a very similar experience today with an Azure AKS cluster. The main difference is that I also create the cluster itself in Pulumi code. Here are the parts that create the cluster and the provider:

```typescript
const adApplication = new azure.ad.Application("BZV2");

export const adServicePrincipal = new azure.ad.ServicePrincipal("BZV2-SP", {
    applicationId: adApplication.applicationId,
});

const adServicePrincipalPassword = new azure.ad.ServicePrincipalPassword("BZV2-SP-Password", {
    servicePrincipalId: adServicePrincipal.id,
    value: config.servicePrincipalPassword,
    endDate: "2099-01-01T00:00:00Z",
});

export const k8sResourceGroup = new azure.core.ResourceGroup("BZV2-RG", {
    location: config.location,
});

export const k8sCluster = new azure.containerservice.KubernetesCluster("BZV2-K8S", {
    resourceGroupName: k8sResourceGroup.name,
    location: config.location,
    kubernetesVersion: config.kubernetesVersion,
    agentPoolProfile: {
        name: "bzagentpool",
        count: config.nodeCount,
        vmSize: config.nodeSize,
    },
    dnsPrefix: `${pulumi.getStack()}-bzcluster`,
    linuxProfile: {
        adminUsername: "adminuser",
        sshKeys: [{
            keyData: config.sshPublicKey,
        }],
    },
    servicePrincipal: {
        clientId: adApplication.applicationId,
        clientSecret: adServicePrincipalPassword.value,
    },
});

// Expose a K8s provider instance using our custom cluster instance.
export const k8sProvider = new k8s.Provider("BZV2-K8S-Provider", {
    kubeconfig: k8sCluster.kubeConfigRaw,
});
```

Then I use this provider to deploy all kinds of Kubernetes resources (cert-manager, heptio-contour, argo, and our own apps). I updated many things in the cluster (I'm on update 39) without any trouble. But once I changed the node count, the preview indicated that the provider, and in turn everything deployed through it, would be replaced. And it continues on: the indicated change for the provider is the kubeconfig.

Versions: Pulumi CLI v0.15.4, @pulumi/pulumi 0.15.4, @pulumi/azure 0.15.2, @pulumi/kubernetes 0.17.0
It turns out that this was an unfortunate but intentional decision, and one whose impact should be limited to previews.

When designing support for first-class providers, we debated whether or not changes to the inputs for a provider should require replacing the provider (and thus its resources). We decided that this should be a decision left up to the provider itself, as only it knows whether a change to one of its inputs makes it impossible for the provider to manage the resources that depend upon it. The existing provider interface does not accommodate this sort of configuration diffing, so until we add that capability it is the responsibility of the engine.

We decided that the engine's configuration diff logic should require replacement if any property is unknown, in order to indicate that we did not have enough information to decide what might happen and were making a conservative decision. To make matters more confusing, this situation is only possible during a preview, so we will never in fact replace the provider during an update. #2088 has changes to align the engine's behavior in both situations such that we will never indicate that a provider instance requires replacement.
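The conservative rule described above can be sketched as follows. This is an illustrative model only, not the actual engine code; the type and function names are hypothetical:

```typescript
// Hypothetical sketch of the conservative provider-diff rule: during a
// preview, some input values may be unknown (e.g. a kubeconfig derived
// from a cluster with pending changes). If any new input is unknown,
// report that the provider requires replacement; otherwise replace only
// on an actual value change.
type PropertyValue = { known: boolean; value?: unknown };

function providerRequiresReplacement(
    oldInputs: Record<string, PropertyValue>,
    newInputs: Record<string, PropertyValue>,
): boolean {
    for (const key of Object.keys(newInputs)) {
        const next = newInputs[key];
        // Unknown values only occur during preview; be conservative.
        if (!next.known) {
            return true;
        }
        const prev = oldInputs[key];
        if (prev === undefined || !prev.known || prev.value !== next.value) {
            return true;
        }
    }
    return false;
}

// During an update all values are known, so an unchanged kubeconfig
// reports no replacement:
console.log(providerRequiresReplacement(
    { kubeconfig: { known: true, value: "cfg" } },
    { kubeconfig: { known: true, value: "cfg" } },
)); // false

// During a preview a kubeconfig derived from a changing cluster is
// unknown, which triggers the conservative "requires replacement":
console.log(providerRequiresReplacement(
    { kubeconfig: { known: true, value: "cfg" } },
    { kubeconfig: { known: false } },
)); // true
```

This illustrates why the replacement only ever appears in previews: by the time an update actually runs, the inputs have resolved to known values and the diff falls through to a plain value comparison.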
From @geekflyer on Slack:

He is creating a new `k8s.Provider` using code like below, which gets the kubeconfig off a GKE resource. This is causing even small changes to the GKE instance to trigger replacement of the K8s provider, and in turn of all the Kubernetes resources.
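The snippet itself was not preserved in this thread. As a rough sketch of the pattern being described, assuming typical @pulumi/gcp and @pulumi/kubernetes usage (the resource names and kubeconfig template here are illustrative, not geekflyer's actual code):

```typescript
import * as gcp from "@pulumi/gcp";
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

const cluster = new gcp.container.Cluster("luke-cluster", {
    initialNodeCount: 2,
});

// Build a kubeconfig from the cluster's outputs. Because these outputs
// are derived from the cluster resource, the kubeconfig is "unknown"
// during any preview in which the cluster has a pending change, which
// is what triggers the conservative provider-replacement diff.
const kubeconfig = pulumi
    .all([cluster.name, cluster.endpoint, cluster.masterAuth])
    .apply(([name, endpoint, auth]) => `apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${auth.clusterCaCertificate}
    server: https://${endpoint}
  name: ${name}
`);

const provider = new k8s.Provider("gke-k8s", { kubeconfig });
```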