Node pool update via managed cluster not allowed. Use per nodepool operations #3549
Comments
Hey @katbyte, is this a well-known issue? Are there any known workarounds yet? Thanks a lot for helping 👍
Just updated to 1.32 and got the same issue. There is no way of updating the agent pool min_count and max_count, as I'm blocked by the error message.
@TVH7 sometimes it's useful to use

```hcl
lifecycle {
  ignore_changes = ["agent_pool_profile"]
}
```

and with the new provider, I will probably enable auto-scaling anyway!
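For context, the lifecycle block in the workaround above lives inside the cluster resource itself. A minimal sketch of where it sits, assuming the azurerm 1.x schema with an inline agent_pool_profile block (all names, regions, and sizes below are placeholders, not from the original report):

```hcl
# Hypothetical sketch: suppressing agent_pool_profile diffs so Terraform
# stops trying to update node pools through the managed cluster API.
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "westeurope"   # placeholder region
  resource_group_name = "example-rg"
  dns_prefix          = "example"

  agent_pool_profile {
    name    = "default"
    count   = 3
    vm_size = "Standard_DS2_v2"
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }

  lifecycle {
    # Ignore all changes to the inline pool definition; scaling then has to
    # happen outside Terraform (e.g. the cluster autoscaler or az CLI).
    ignore_changes = ["agent_pool_profile"]
  }
}
```

The trade-off is that Terraform no longer reconciles any pool attribute, not just the counts, so this is a stopgap rather than a fix.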
Hey folks - AKS person here, catching up. The issue you are seeing happens when a cluster has multiple node pools enabled, so actions like update/scale need to happen through the agent pool operations instead of the managed cluster, for us to distinguish which node pool should be changed. Silly question - does Terraform already support multiple node pools?
@jluk yes it does. The issue is that the provider tries to update the node pool through the managed cluster API, while it should do so through the node pool API.
@titilambert this is the issue :) people are going crazy, let's try to fix it tomorrow? @alex-goncharov that's the issue indeed. @jluk yes, it does; we added the support to allow hybrid Windows/Linux AKS clusters. If no one is looking at this now, we will be working on it soon, as it is a blocker for us as well.
It's not just updates; this is affecting AKS cluster creation as well. Sometime in the past week the API was changed: I have Terraform code with 4
It looks like the work @titilambert is doing in #4001 will fix this in 1.33 of the azurerm provider by providing the new azurerm_kubernetes_cluster_agentpool resource, but until that is released, could whoever made this API change back it out? As it is, I am now unable to provision an AKS cluster with Terraform due to this problem.
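The per-node-pool approach discussed above ultimately shipped as the azurerm_kubernetes_cluster_node_pool resource (a different name than the one proposed in the thread). A hedged sketch of how an extra pool is declared separately from the cluster, so that scaling changes go through the node pool API; the names, size, and count bounds are placeholders:

```hcl
# Hedged sketch: additional pools managed as their own resources rather than
# inline agent_pool_profile blocks on the cluster.
resource "azurerm_kubernetes_cluster_node_pool" "workers" {
  name                  = "workers"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  enable_auto_scaling   = true
  min_count             = 1
  max_count             = 5
}
```

With this shape, updating min_count/max_count only touches the node pool resource, avoiding the "Node pool update via managed cluster not allowed" error.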
This has been released in version 1.37.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

```hcl
provider "azurerm" {
  version = "~> 1.37.0"
}

# ... other configuration ...
```
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Community Note
Terraform (and AzureRM Provider) Version
Terraform v0.12.0
Affected Resource(s)
azurerm_kubernetes_cluster
Diff
Expected Behavior
Terraform scales the agent pool
Actual Behavior
Error: Error creating/updating Managed Kubernetes Cluster "dustydog" (Resource Group "dustydog"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="OperationNotAllowed" Message="Node pool update via managed cluster not allowed. Use per nodepool operations."
Steps to Reproduce
terraform apply
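The issue body omits the actual configuration, but a minimal configuration of the shape that triggers this error would look roughly like the following. This is an assumed repro sketch, not the reporter's code: every name, region, and size is a placeholder, and the key part is the inline agent_pool_profile with auto-scaling bounds, which the provider then tries to update through the managed cluster API:

```hcl
# Assumed minimal repro (azurerm 1.3x-era schema); changing min_count or
# max_count on this inline block produced the OperationNotAllowed error.
resource "azurerm_kubernetes_cluster" "repro" {
  name                = "dustydog"
  location            = "westeurope"   # placeholder region
  resource_group_name = "dustydog"
  dns_prefix          = "dustydog"

  agent_pool_profile {
    name                = "default"
    vm_size             = "Standard_DS2_v2"
    type                = "VirtualMachineScaleSets"
    enable_auto_scaling = true
    count               = 1
    min_count           = 1   # editing these bounds forces a cluster update
    max_count           = 3
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }
}
```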