
A simple change to a GKE pool's initial_node_count makes Terraform recreate the entire pool #10105

Closed
i2dcarrasco opened this issue Sep 20, 2021 · 2 comments
i2dcarrasco commented Sep 20, 2021

Hello, today I noticed that Terraform doesn't manage a pool's initial_node_count the way the console does. If you change the number of nodes manually, or change the value of initial_node_count, Terraform tries to recreate the entire pool, which is very extreme. Using the console you can change this value easily without destroying anything.

Terraform Version

Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/archive v2.2.0
+ provider registry.terraform.io/hashicorp/google v3.83.0
+ provider registry.terraform.io/hashicorp/google-beta v3.83.0
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.4
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/random v2.3.1
+ provider registry.terraform.io/hashicorp/template v2.1.2
+ provider registry.terraform.io/terraform-providers/mysql v1.9.0

Affected Resource(s)

  • google_container_node_pool

Terraform Configuration Files

resource "google_container_node_pool" "gke" {
  name               = local.gke_cluster.pool_name
  location           = local.settings.gke.location
  initial_node_count = 1
  cluster            = google_container_cluster.gke.name
  version            = local.settings.k8s_master_version
  //  node_count = 1

  autoscaling {
    min_node_count = local.gke_cluster.nodes.min
    max_node_count = local.gke_cluster.nodes.max
  }

  management {
    auto_repair  = true
    auto_upgrade = false
  }

  node_config {
    preemptible  = false
    machine_type = local.gke_cluster.type

    // Disk Type in nodes. Options: pd-standard | pd-ssd
    disk_type    = local.gke_cluster.node_disk.type
    disk_size_gb = local.gke_cluster.node_disk.size

    metadata = {
      disable-legacy-endpoints = "true"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    labels = {
      env      = terraform.workspace
      location = local.gke_cluster.region
      machine  = local.gke_cluster.type
    }
    tags = [terraform.workspace, "gke-cluster", local.gke_cluster.type, local.gke_cluster.region]
  }
  lifecycle {
    //create_before_destroy = true
  }
  depends_on = [google_container_cluster.gke, ]
}
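
The forced replacement happens because the provider treats initial_node_count as a create-time-only attribute, so any change to it requires a new pool. A commonly used workaround (a sketch, not taken from this issue) is to let the autoscaler own the node count and tell Terraform to ignore drift on that attribute:

```hcl
resource "google_container_node_pool" "gke" {
  # Hypothetical minimal pool; names reuse the configuration above.
  name               = local.gke_cluster.pool_name
  cluster            = google_container_cluster.gke.name
  initial_node_count = 1

  autoscaling {
    min_node_count = local.gke_cluster.nodes.min
    max_node_count = local.gke_cluster.nodes.max
  }

  lifecycle {
    # Manual resizes (and edits to initial_node_count) no longer
    # force the pool to be destroyed and recreated.
    ignore_changes = [initial_node_count]
  }
}
```

With this in place, terraform plan treats console-side node-count changes as expected drift rather than grounds for replacement.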

Actual Behavior

When the pool's number of nodes is changed manually, or initial_node_count changes, Terraform tries to recreate the pool instead of changing the value in place the way the GCP console does.

Steps to Reproduce

  1. Create any GKE pool using Terraform, with an initial_node_count of 1, for example.
  2. Change the number of nodes to 0, for example, using the GCP console.
  3. Run terraform apply to return to the desired value; Terraform will produce a plan that recreates the pool.
  4. Change the number of nodes back to the desired value using the console again, and Terraform will no longer destroy the pool.
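
For pools that do not use autoscaling, the commented-out line in the configuration above hints at an alternative: manage node_count directly, which the provider can update in place. A sketch under that assumption (the pool name is hypothetical):

```hcl
resource "google_container_node_pool" "gke_static" {
  # node_count replaces initial_node_count here.
  name       = "static-pool"
  cluster    = google_container_cluster.gke.name
  node_count = 1 # changing this value resizes the pool without recreating it
}
```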
@edwardmedia edwardmedia self-assigned this Sep 20, 2021
@edwardmedia (Contributor) commented:
Duplicate of #9570. Closing this, then.

@github-actions commented:
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 21, 2021