
Autoscaling AKS Nodepools cause de-sync and deletion attempts #186

Open
noyoshi opened this issue May 9, 2022 · 0 comments
Labels
bug Something isn't working

Comments


noyoshi commented May 9, 2022

What happened?

It appears that the KubernetesClusterNodePool resource attempts to delete its external node pool once autoscaling is enabled. I assume this is because the actual node count drifts away from the nodeCount in the spec after the pool gets autoscaled, so the provider's plan calls for the resource to be destroyed and recreated.

Thankfully lifecycle.prevent_destroy blocks the destruction, but now I have a ton of node pools that are out of sync.

How can we reproduce it?

Create a node pool with autoscaling enabled (a rough manifest sketch is below) and let the cluster autoscaler scale it. Notice that SYNCED becomes false and the following event appears in the status:

" Warning CannotObserveExternalResource 93s managed/containerservice.azure.jet.crossplane.io/v1alpha2, kind=kubernetesclusternodepool cannot run plan: plan failed: Instance cannot be destroyed: Resource azurerm_kubernetes_cluster_node_pool.-2fj2m has lifecycle.prevent_destroy set, but the plan calls for this resource to be destroyed. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or reduce the scope of the plan using the -target flag.: File name: main.tf.json"
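
Roughly the kind of manifest involved. This is only a sketch: the field names assume the usual Terrajet camelCase mapping of the azurerm_kubernetes_cluster_node_pool arguments, and the name and cluster ID are placeholders.

```yaml
# Rough sketch of an autoscaled node pool; field names assume the Terrajet
# camelCase mapping of azurerm_kubernetes_cluster_node_pool, and the name and
# cluster ID below are placeholders.
apiVersion: containerservice.azure.jet.crossplane.io/v1alpha2
kind: KubernetesClusterNodePool
metadata:
  name: example-autoscaled-pool
spec:
  forProvider:
    kubernetesClusterId: <AKS cluster resource ID>
    vmSize: Standard_DS2_v2
    enableAutoScaling: true
    minCount: 1
    maxCount: 5
    nodeCount: 1  # drifts from the actual count once the cluster autoscaler scales the pool
```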

What environment did it happen in?

Crossplane version: 1.6.1
Cluster: AKS
provider-jet-azure version: 0.8.0

noyoshi added the bug label on May 9, 2022