increasing minMasterVersion of google container cluster previews replacement instead of update #88
I just tried this with the latest GCP provider - and I see an […]
The one change I had to make was to use […]. Just to make sure - was this the only property change you made? Is your real use case a more complex configuration of the […]?
@geekflyer I wasn't able to reproduce this after following the steps you describe. If there were any other properties set on the resource, or changes made, that would help reproduce, let me know and we'll reopen. It's very possible this was hitting something like the issue in pulumi/pulumi-azure#182, but I can't confirm that without a reproduction of the issue unfortunately.
I am seeing the same issue while attempting to update from 1.11.7-gke.12 to 1.12.5-gke.5:
Here are all the properties I am setting on the resource:
@lukehoban let me know if there is any additional information I can provide to help troubleshoot this.
I was able to reproduce this with the program below. The output here is misleading due to pulumi/pulumi#2453, but verbose logging shows that the properties forcing replacement are actually: […]
Indeed, if I comment out the single element of the […]. I am not yet clear on exactly why that is triggering a replacement even though there is no change. This is almost certainly related to pulumi/pulumi-terraform#329. But as noted in that issue, every other case of this has been a bug in the upstream Terraform provider. I'll need to spend some more time looking into this to nail down what is triggering this (and whether the same problem exists in Terraform).

```typescript
import * as gcp from "@pulumi/gcp";

const minMasterVersion = "1.12.5-gke.5";
const name = "hello";
const cidrBlock = "10.0.0.0/16";
const project = "pulumi-development";
const region = "us-central1";
const subnetName = "mysubnet";

const network = new gcp.compute.Network("network");
const k8sCluster = new gcp.container.Cluster(name, {
    name,
    masterAuthorizedNetworksConfig: {
        cidrBlocks: [{ cidrBlock, displayName: "mycidr" }],
    },
    minMasterVersion,
    project,
    region,
    privateClusterConfig: {
        enablePrivateNodes: true,
        masterIpv4CidrBlock: "172.16.2.0/28",
    },
    ipAllocationPolicy: {
        createSubnetwork: true,
        subnetworkName: subnetName,
    },
    network: network.selfLink,
    removeDefaultNodePool: true,
    nodePools: [
        {
            name: "default-pool",
            nodeCount: 0,
        },
    ],
});

export const endpoint = k8sCluster.endpoint;
```
hashicorp/terraform-provider-google#3319
Indeed - that's exactly what I was looking for but couldn't find last night. Looks like we'll pull down that fix with the next release of the upstream Terraform provider.
The […] or by removing the dummy […]
@lukehoban I'm facing this issue now again with one of our clusters (it's actually our most important production cluster, so this is kind of scary). Here's a subset of the Pulumi program for that cluster: […]
What I attempted to do was upgrade the master from […]. The preview shows it is planning to do a replacement of the cluster due to a change in minMasterVersion. Versions are: […]
This is kind of blocking me from doing some required maintenance on that cluster, and since it's a high-value prod cluster I can't easily "replace" it :) In one of the comments above you said […]. I don't really have a separate shareable program for a reproduction, but I'm more than happy to jump on a screenshare to debug the issue.
You can do: […]
And then in […]. I am so far aware of upstream Terraform provider issues related to the following: […]
However, I do not see any of those in your example above.
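The exact command was elided in this scrape, but a common way to capture the kind of verbose diff log referred to throughout this thread (flags from the Pulumi CLI; `verbose.log` is an arbitrary filename and `replace` a rough search term, both assumptions) looks roughly like this:

```shell
# Run a preview with maximum engine verbosity; --logtostderr routes the
# engine logs to stderr, which we redirect into a file for inspection.
pulumi preview --diff -v=9 --logtostderr 2> verbose.log

# Then search the log for the properties that are forcing replacement.
grep -n "replace" verbose.log
```

This is the sort of log that revealed the real replacement-forcing properties hidden by pulumi/pulumi#2453.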
Ok cool, looks like […]. Here's a subset of the logs which I thought captures the most interesting parts: […]
I can share the entire log with you in private if that's necessary.
Talked with @geekflyer offline, and established that in his case he had used […]. This feels like yet another issue in the upstream provider. I'll see if I can repro it independently.
The core issue with […]
Changing […]
@lukehoban This is a blocker for us - it is causing the cluster to be deleted. What is the workaround for this?
For anyone hitting the […]
Tried it 3 times now and still an issue. Brand new stack with […]
Bump the minMasterVersion to […]. Results in: […]
From the verbose logging: […]
I never put anything in the nodePools section. We always create our clusters with removeDefaultNodePool: true. The workaround that helped (found with Luke's help) when the replacement issue occurred was to change initialNodeCount from 1 to 0.
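As a hedged sketch of that workaround (plain TypeScript objects standing in for the real `gcp.container.ClusterArgs`; the property names come from this thread, the version values are illustrative):

```typescript
// Minimal stand-in for the subset of cluster args discussed in the thread.
interface ClusterArgsSketch {
    removeDefaultNodePool: boolean;
    initialNodeCount: number;
    minMasterVersion: string;
}

// Configuration that reportedly triggered a spurious replacement on upgrade.
const before: ClusterArgsSketch = {
    removeDefaultNodePool: true,
    initialNodeCount: 1,
    minMasterVersion: "1.11.7-gke.12",
};

// Workaround from the thread: change initialNodeCount from 1 to 0 while
// bumping minMasterVersion, so the preview shows an update, not a replace.
const after: ClusterArgsSketch = {
    ...before,
    initialNodeCount: 0,
    minMasterVersion: "1.12.5-gke.5",
};

console.log(after.initialNodeCount); // 0
```

Note this is not an official fix; it just avoided the diff on the dummy default node pool that the upstream provider was treating as replacement-forcing.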
Ours has always been zero. |
@casey-robertson You appear to be hitting exactly what is described above in the workaround at #88 (comment). |
Ok - but do you mean that the workaround actually works? Because at least in my testing it doesn't fix it and the cluster still wants to be replaced. Sorry if I'm being dense :-)
I've debugged variants of this with 5 users. Ultimately, all issues here have boiled down to issues in the upstream Terraform provider. So far, this has been caused by one or more of the following: […]
As these fixes get released in the upstream provider, they will get pulled into […]
We have a bunch of GKE clusters. I was just planning to upgrade one of their masters by bumping minMasterVersion. Unfortunately when doing so, pulumi preview says it will do a replacement of the entire cluster instead of an update. I'm not sure if this is just an issue with preview, but I'm pretty sure bumping the master version shouldn't replace the entire cluster, and a replacement of a cluster is a pretty scary operation. As an intermediate workaround I upgraded the cluster via GCP's UI and left the Pulumi minMasterVersion param untouched.
Example: existing cluster with: […]
When changing minMasterVersion to 1.10.12-gke.1, pulumi preview shows the following: […]