
google_container_cluster tries to recreate cluster when node_config is set #11022

Open

foosinn opened this issue Feb 2, 2022 · 4 comments

foosinn commented Feb 2, 2022

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

Terraform v0.12.31

  • provider.google v4.6.0

Affected Resource(s)

  • google_container_cluster

Terraform Configuration Files

resource "google_container_cluster" "kube02" {
  name     = "kube02"
  location = var.region

  remove_default_node_pool = true
  initial_node_count       = 1

  network    = "projects/nm-infrastructure/global/networks/net-10-13-0-0-16"
  subnetwork = "projects/nm-infrastructure/regions/europe-west3/subnetworks/int-kube02"

  master_authorized_networks_config {
    cidr_blocks {
      display_name = "VPN"
      cidr_block   = "10.12.254.0/24"
    }
    cidr_blocks {
      display_name = "cluster nodes"
      cidr_block = "10.13.5.0/24"
    }
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"
    services_secondary_range_name = "services"
  }

  private_cluster_config {
    enable_private_nodes    = true
    enable_private_endpoint = true
    master_ipv4_cidr_block  = "192.168.13.0/28"
  }

  node_config {
    tags = ["kube02", "kube-no-external-ip"]
  }
}

# Node Pool
resource "google_container_node_pool" "kube02_nodes" {
  name       = "nodes"
  location   = var.region
  cluster    = google_container_cluster.kube02.name
  node_count = var.gke_kube02_num_nodes

  node_config {
    tags = ["kube02", "kube-no-external-ip"]
    labels = {
      env = var.project_id
    }
    metadata = {
      disable-legacy-endpoints = true
    }

    machine_type = "e2-highcpu-2"
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}

Output

      + node_config { # forces replacement
          + disk_size_gb      = (known after apply)
          + disk_type         = (known after apply)
          + guest_accelerator = (known after apply) # forces replacement
          + image_type        = (known after apply)
          + labels            = (known after apply) # forces replacement
          + local_ssd_count   = (known after apply)
          + machine_type      = (known after apply)
          + metadata          = (known after apply) # forces replacement
          + oauth_scopes      = (known after apply) # forces replacement
          + preemptible       = false # forces replacement
          + service_account   = (known after apply)
          + tags              = [
              + "kube02",
              + "kube-no-external-ip",
            ] # forces replacement
          + taint             = (known after apply) # forces replacement

          + shielded_instance_config { # forces replacement
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config { # forces replacement
              + mode = (known after apply)
            }
        }

Expected Behavior

Re-running terraform apply on a Kubernetes cluster with custom network tags should produce no diff and should not replace the cluster.

Actual Behavior

Setting network tags via node_config on the cluster forces recreation of the cluster on every apply. This was also discussed in #2115, but that issue was closed without addressing this use case.

Important Factoids

Not being able to set network tags severely limits the firewall and routing rules that can target the nodes. It seems like this should be supported.
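
For context, a firewall rule keyed on one of these tags might look like the following sketch (the rule name, network, and priority are made up for illustration):

# Hypothetical example: an egress-deny rule scoped to the cluster's nodes
# via a network tag. Without tags on the nodes, rules like this cannot be
# targeted at the cluster. Names and network are illustrative only.
resource "google_compute_firewall" "deny_external_egress" {
  name      = "kube02-deny-external-egress"
  network   = "net-10-13-0-0-16"
  direction = "EGRESS"
  priority  = 900

  deny {
    protocol = "all"
  }

  destination_ranges = ["0.0.0.0/0"]
  target_tags        = ["kube-no-external-ip"]
}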

References

Thank you!

foosinn added the bug label Feb 2, 2022
slevenick (Collaborator) commented:

So you're saying that the only way to set network tags on the cluster's node pool is through node_config, which prevents using non-default node pools? I'll have to do some digging, as I'm not sure whether there's another way to handle adding those tags.

slevenick (Collaborator) commented Feb 4, 2022

Looking at the docs, it seems like you should be able to set tags in the node_config of the pool (which you already have in your config), which should let you omit node_config from the cluster entirely, as in the sketch below. Does that not work for your use case?
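
A minimal sketch of that suggestion, assuming everything else stays as in the original configuration (the only change is dropping the cluster-level node_config block):

# Sketch: no node_config on the cluster; tags live only on the
# separately managed node pool.
resource "google_container_cluster" "kube02" {
  name     = "kube02"
  location = var.region

  remove_default_node_pool = true
  initial_node_count       = 1

  # ... network, master_authorized_networks_config, ip_allocation_policy,
  # and private_cluster_config as in the original configuration ...
}

resource "google_container_node_pool" "kube02_nodes" {
  name       = "nodes"
  location   = var.region
  cluster    = google_container_cluster.kube02.name
  node_count = var.gke_kube02_num_nodes

  node_config {
    tags = ["kube02", "kube-no-external-ip"]
    # ... remaining node_config as in the original configuration ...
  }
}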

foosinn (Author) commented Feb 8, 2022

Hey @slevenick thanks for your time!

I tried that, but from what I read the default node pool always has to be deployed first and then removed afterwards.

Since the default nodes can never bootstrap without reaching the internet for the Google APIs, the cluster won't come online.

As a workaround I'm currently using the default node pool (see the sketch below), even though that's not recommended.
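
A rough sketch of that workaround, reusing the tags and pool settings from the original configuration (the default pool is kept instead of a separate google_container_node_pool resource):

# Sketch of the workaround: keep the default node pool and configure it
# directly on the cluster, avoiding the bootstrap-then-delete dance that
# a private cluster with no internet access cannot complete.
resource "google_container_cluster" "kube02" {
  name     = "kube02"
  location = var.region

  # Keep the default pool instead of removing it.
  initial_node_count = var.gke_kube02_num_nodes

  node_config {
    tags         = ["kube02", "kube-no-external-ip"]
    machine_type = "e2-highcpu-2"
    # ... remaining node_config as in the original configuration ...
  }

  # ... network, ip_allocation_policy, and private_cluster_config
  # as in the original configuration ...
}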

abulfat-masimov commented:

Got the same issue: we have firewall rules that block the default egress rule and allow traffic only for specific service accounts (sketched below). Once you specify node_config in google_container_cluster, it tries to recreate the cluster on every apply.
Please advise.
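
For illustration, a hypothetical pair of rules matching that description (names, network, priorities, and the service account are made up):

# Hypothetical setup: deny all egress by default, then allow egress only
# for instances running as a given service account. Everything here is
# illustrative, not taken from the reporter's environment.
resource "google_compute_firewall" "deny_all_egress" {
  name      = "deny-all-egress"
  network   = "default"
  direction = "EGRESS"
  priority  = 65534

  deny {
    protocol = "all"
  }

  destination_ranges = ["0.0.0.0/0"]
}

resource "google_compute_firewall" "allow_sa_egress" {
  name      = "allow-sa-egress"
  network   = "default"
  direction = "EGRESS"
  priority  = 1000

  allow {
    protocol = "all"
  }

  destination_ranges      = ["0.0.0.0/0"]
  target_service_accounts = ["gke-nodes@example-project.iam.gserviceaccount.com"]
}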

modular-magician added a commit to modular-magician/terraform-provider-google that referenced this issue Jun 24, 2024
[upstream:62869102395e9659ae75cbfdd9ee3879d5e761b5]

Signed-off-by: Modular Magician <magic-modules@google.com>
modular-magician added a commit that referenced this issue Jun 24, 2024
[upstream:62869102395e9659ae75cbfdd9ee3879d5e761b5]

Signed-off-by: Modular Magician <magic-modules@google.com>