beta update-variant module isn't idempotent #326
Looking at the plan, it seems likely that specifying `node_locations` instead of `location` (or in addition to it) would correct the problem, or alternatively telling Terraform to ignore that difference. I don't have time to experiment right now, unfortunately.
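The "ignore that difference" option would be a `lifecycle` block on the node pool resource. A minimal sketch, assuming you are editing the resource directly rather than consuming it through the module (the other arguments shown are placeholders):

```hcl
resource "google_container_node_pool" "pools" {
  # ...existing arguments elided...

  lifecycle {
    # Assumption: suppress the spurious perpetual diff by telling
    # Terraform not to compare node_locations against state.
    ignore_changes = [node_locations]
  }
}
```

Note that `ignore_changes` hides the diff rather than fixing it, so the pool's zones would no longer be manageable from configuration.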
Once I dug into the code, it became clear that including the following in any node_pool declaration will fix the problem:
However, it would certainly be much better if this were handled correctly whether or not an explicit `node_locations` is provided, since it is coded as though it is optional. The default value seems to come up empty on creation, but has a value when read back during comparison with current state. That looks like a provider issue, though it could be worked around with a kludge that computes the default's correct value when it comes back empty. In the meantime, if you bump into this problem, an explicit `node_locations` value in the node_pool declaration will fix it for you.
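An explicit `node_locations` entry in the module's `node_pools` list might look like the following sketch. The pool name, machine type, and zone list are illustrative only; substitute the zones of your own region (this module takes `node_locations` as a comma-separated string):

```hcl
node_pools = [
  {
    name         = "example-pool"    # hypothetical pool name
    machine_type = "e2-standard-4"   # hypothetical machine type

    # Pinning node_locations explicitly avoids the empty-default vs.
    # state mismatch that causes the perpetual diff described above.
    node_locations = "us-central1-a,us-central1-b"
  },
]
```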
Thanks, I think #327 might fix this. There's probably no reason for us to touch the
I assumed that if I had a regional cluster, `node_locations` might contain several zones while `location` would contain just the region, and I wasn't sure how that would work in the plan comparison. It seemed like an opportunity for the existing resource to appear out of sync with the new plan, and I didn't want to take the time to try it to confirm.
When running the same `terraform apply` command twice in a row, the first succeeds and the second results in the following plan and eventual error. I could really use some help with this one, as it blocks making any changes to a cluster without first running `terragrunt destroy` and then re-applying, or at least targeting `module.gke.google_container_node_pool.pools[0]` with a destroy and re-apply.
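The targeted destroy-and-reapply workaround mentioned above might look like the following sketch. The resource address is taken from the report; `-target` is a standard Terraform CLI flag, and the quoting guards the brackets from the shell:

```shell
# Recreate only the affected node pool instead of the whole stack.
terraform destroy -target='module.gke.google_container_node_pool.pools[0]'
terraform apply   -target='module.gke.google_container_node_pool.pools[0]'
```

This is destructive for the targeted pool (its nodes are deleted and recreated), so it is only a stopgap until the idempotency issue itself is fixed.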