Invalid plan when re-applying terraform module #261

Closed
ananthrav opened this issue Sep 17, 2019 · 4 comments · Fixed by #295

@ananthrav

When creating a GKE cluster using the beta-private-cluster module, I'm running into issues when I apply the configuration again. It works fine for the first apply, but subsequent plans and applies fail for the same code.

Terraform version

Terraform v0.12.8
+ provider.google v2.14.0
+ provider.google-beta v2.14.0
+ provider.kubernetes v1.9.0
+ provider.null v2.1.2
+ provider.random v2.2.0

I'm getting the following error:

------------------------------------------------------------------------

Error: Provider produced invalid plan

Provider "google-beta" has indicated "requires replacement" on
module.gke.google_container_cluster.primary for a non-existent attribute path
cty.Path{cty.GetAttrStep{Name:"private_cluster_config"},
cty.IndexStep{Key:cty.NumberIntVal(0)},
cty.GetAttrStep{Name:"master_ipv4_cidr_block"}}.

The module configuration looks something like this:

module "gke" {

  source = "git::https://github.com/terraform-google-modules/terraform-google-kubernetes-engine.git//modules/beta-private-cluster?ref=master"

  name                       = local.kubernetes_cluster_name
  project_id                 = "${var.project_name}"
  region                     = "${var.region}"
  network                    = "${data.terraform_remote_state.network.outputs.network_name}"
  subnetwork                 = data.terraform_remote_state.network.outputs.subnets_names[0]
  ip_range_pods              = "${data.terraform_remote_state.network.outputs.subnets_names[0]}-1"
  ip_range_services          = "${data.terraform_remote_state.network.outputs.subnets_names[0]}-2"
  http_load_balancing        = "${var.http_load_balancing}"
  horizontal_pod_autoscaling = "${var.horizontal_pod_autoscaling}"
  kubernetes_dashboard       = "${var.kubernetes_dashboard}"
  network_policy             = "${var.network_policy}"
  monitoring_service         = "${var.kubernetes_monitoring_service}"
  logging_service            = "${var.kubernetes_logging_service}"
  create_service_account     = false
  service_account            = "${var.service_account}"
  enable_private_endpoint    = "${var.enable_private_endpoint}"
  enable_private_nodes       = "${var.enable_private_nodes}"
  remove_default_node_pool   = "${var.remove_default_node_pool}"

  master_authorized_networks_config = [
    {
      cidr_blocks = [
        {
          cidr_block   = "xx.xx.xx.xx/32"
          display_name = "Whitelist"
        },
      ]
    },
  ]

  node_pools = [
    {
      name               = "default-node-pool"
      machine_type       = "${var.kubernetes_machine_type}"
      disk_size_gb       = "${var.kubernetes_node_pool_disk_size}"
      disk_type          = "${var.kubernetes_node_pool_disk_type}"
      image_type         = "${var.kubernetes_node_pool_image_type}"
      auto_repair        = true
      auto_upgrade       = true
      service_account    = "${var.service_account}"
      preemptible        = false
      initial_node_count = "${var.initial_node_count}"
    },
  ]

  node_pools_oauth_scopes = {
    all = []

    default-node-pool = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }
}

I'm using master because a PR fixing an issue I was having was merged, but a new release hasn't been tagged yet. Please let me know if I can provide more info.
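
Once a release containing the fix is tagged, I plan to pin the module source to that tag instead of master, something like the sketch below (v5.0.0 is a placeholder tag for illustration, not an actual release):

module "gke" {
  # Pin to a tagged release instead of master; "v5.0.0" is a placeholder
  # for whatever tag ends up containing the fix.
  source = "git::https://github.com/terraform-google-modules/terraform-google-kubernetes-engine.git//modules/beta-private-cluster?ref=v5.0.0"

  # ... remaining arguments unchanged ...
}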

aaron-lane added the bug label Sep 19, 2019
@cilindrox

This happened to me twice, with both the beta and standard private cluster modules. I think the issue is related to the deletion of the default cluster node pool.

@nick4fake
Contributor
nick4fake commented Oct 23, 2019

Looks like it works in 2.17.0

@morgante
Contributor

@ananthrav Can you upgrade to the v2.17.0 provider and confirm it fixed this?
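
If it helps, a minimal way to test is pinning the beta provider in the root configuration; a sketch, assuming Terraform 0.12's provider-block version argument:

provider "google-beta" {
  # Pin the beta provider to the 2.17.x series to pick up the fix.
  version = "~> 2.17.0"
}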

bohdanyurov-gl added 8 commits to bohdanyurov-gl/terraform-google-kubernetes-engine referencing this issue between Oct 24 and Oct 30, 2019
morgante added a commit that referenced this issue Nov 1, 2019: "#261: Invalid plan when re-applying terraform module"
aaron-lane pushed a commit that referenced this issue Nov 26, 2019
Labels: bug · Projects: none yet · 7 participants