
AKS: Adding labels to node_pool requires delete and recreate of the cluster #172

Open
pst opened this issue Apr 6, 2021 · 1 comment
pst commented Apr 6, 2021

Adding `node_labels` to the default node pool forces a destroy-and-recreate plan for the entire AKS cluster:

  # module.aks_zero.module.cluster.azurerm_kubernetes_cluster.current must be replaced
-/+ resource "azurerm_kubernetes_cluster" "current" {
     [...]

      ~ default_node_pool {
          - availability_zones    = [] -> null
          - enable_node_public_ip = false -> null
          ~ max_pods              = 110 -> (known after apply)
            name                  = "default"
          ~ node_count            = 1 -> (known after apply)
          ~ node_labels           = { # forces replacement
              + "kubestack.com-cluster_domain"          = "azure.infra.serverwolken.de"
              + "kubestack.com-cluster_fqdn"            = "kbstacctest-ops-westeurope.azure.infra.serverwolken.de"
              + "kubestack.com-cluster_name"            = "kbstacctest-ops-westeurope"
              + "kubestack.com-cluster_provider_name"   = "azure"
              + "kubestack.com-cluster_provider_region" = "westeurope"
              + "kubestack.com-cluster_workspace"       = "ops"
            }
          - node_taints           = [] -> null
          ~ orchestrator_version  = "1.18.14" -> (known after apply)
          - tags                  = {} -> null
            # (7 unchanged attributes hidden)
        }
    }
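
For reference, a minimal configuration sketch that produces a plan like the one above. The resource names, VM size, and the reduced set of label values are assumptions for illustration only, not the actual Kubestack module internals:

# Minimal sketch (assumed names and values), not the actual module code.
resource "azurerm_kubernetes_cluster" "current" {
  name                = "kbstacctest-ops-westeurope"
  location            = "westeurope"
  resource_group_name = azurerm_resource_group.current.name
  dns_prefix          = "kbstacctest-ops"

  default_node_pool {
    name       = "default"
    vm_size    = "Standard_D2_v2"
    node_count = 1

    # Setting this map on the inline default_node_pool block is what the
    # plan above marks with "# forces replacement".
    node_labels = {
      "kubestack.com-cluster_name"          = "kbstacctest-ops-westeurope"
      "kubestack.com-cluster_provider_name" = "azure"
    }
  }

  identity {
    type = "SystemAssigned"
  }
}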
pst added a commit that referenced this issue Apr 6, 2021
…in eks"

This reverts commit 170bf26, because adding
the node_labels to the default node pool requires a destroy and re-create on
the AKS cluster.

This is likely due to the node pool using the legacy `default_node_pool`
block of the `azurerm_kubernetes_cluster` resource. We must migrate to an
`azurerm_kubernetes_cluster_node_pool` resource, which I will tackle as
part of revamping node pool support in general.

Opening an issue to track this:

#172

pst commented Apr 6, 2021

FYI @to266, unfortunately I had to revert this part of your contribution from this release. Part of revamping node pool support will be migrating from the `default_node_pool` block to the `azurerm_kubernetes_cluster_node_pool` resource. I prefer to tackle node_labels together with that change, so I can handle both in a single transitional release and avoid downtime for users.
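
As a rough illustration of that direction, the labels could move to a separately managed node pool resource, which should allow them to change without recreating the cluster; the pool name and VM size below are placeholders, and the inline default_node_pool (which AKS still requires) would stay minimal:

# Sketch only: a workload pool managed outside the cluster resource.
resource "azurerm_kubernetes_cluster_node_pool" "workload" {
  name                  = "workload"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.current.id
  vm_size               = "Standard_D2_v2"
  node_count            = 1

  # Changing node_labels here updates the pool rather than forcing a
  # replacement of the whole azurerm_kubernetes_cluster resource.
  node_labels = {
    "kubestack.com-cluster_name"          = "kbstacctest-ops-westeurope"
    "kubestack.com-cluster_provider_name" = "azure"
  }
}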
