
New resource - azurerm_kubernetes_cluster_agentpool #4001

Closed
titilambert opened this issue Aug 2, 2019 · 35 comments · Fixed by #4899

Comments

@titilambert
Contributor

titilambert commented Aug 2, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Description

This is a request for a new resource to correctly manage AKS cluster agent pools.
With this, we will be able to upgrade, scale up, and scale down an agent pool.

New or Affected Resource(s)

  • azurerm_kubernetes_cluster_agentpool

Potential Terraform Configuration

resource "azurerm_kubernetes_cluster_agentpool" "aks" {
  resource_group_name = ""
  agent_pool_name = ""
  aks_cluster_name = ""
  profile {
   count = ""
   vm_size = ""
    os_disk_size_gb = ""
    vnet_subnet_id = ""
    max_pods = ""
    os_type= ""
    max_count = ""
    min_count = ""
    enable_auto_scaling = ""
    agent_pool_type =""
    orchestrator_version = ""
    provisioning_state = ""
    availability_zones = []
    enable_node_public_ip = false
    scale_set_priority = ""
    scale_set_eviction_policy = ""
    node_taints  = []
  }
}

@mbfrahry @tombuildsstuff
I'm going to start this PR on Monday. Do you have any warnings/thoughts/recommendations?

@djsly
Contributor

djsly commented Aug 2, 2019

Also, note that Azure just changed the way it handles calls to func (client ManagedClustersClient) CreateOrUpdate.

Until Wednesday, we could pass multiple agentPoolProfiles to it and the first one was treated as the primary. Now, when we pass multiple AgentPoolProfiles, it creates an AKS resource that is marked as a success, but no VMSS is created, hence no nodes.

It looks like MSFT is changing the way it handles the creation of the AKS resource, forcing users to manage agent pools separately using the AgentPoolClient API.
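
For context, here is a minimal sketch of the configuration shape affected by this change: an azurerm_kubernetes_cluster with more than one agent_pool_profile block (all values below are placeholders):

# Sketch only: a 1.x-era cluster declaring two agent pool profiles inline.
# Per the comment above, this shape now yields a cluster without any
# VMSS/nodes instead of treating the first profile as the primary.
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "canadacentral"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"

  agent_pool_profile {
    name    = "primary"
    count   = 2
    vm_size = "Standard_DS2_v2"
    os_type = "Linux"
  }

  agent_pool_profile {
    name    = "secondary"
    count   = 2
    vm_size = "Standard_DS2_v2"
    os_type = "Linux"
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000"
    client_secret = "changeme"
  }
}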

@grayzu
Collaborator

grayzu commented Aug 6, 2019

Hey @titilambert ,

I am not sure if you have started on this yet but I wanted to first thank you for this contribution and also for proactively letting us know that you will be working on this and your proposed schema. Super helpful!

MarkG

@WodansSon
Collaborator

WodansSon commented Aug 7, 2019

@titilambert

This is awesome, thanks for this... based off what I've seen here I have a couple of suggestions if you are open to hearing them.

  1. I think it might be helpful to change aks_cluster_name to aks_cluster_id. This will make it easier for Terraform to understand that there is a dependency between the two resources, help it decide how to provision the resource in Azure, and keep this resource consistent with other Terraform resources.

  2. Maybe break up the long flat list of unrelated attributes into an organized hierarchy of related sub-blocks with meaningful, understandable names, something like below:

resource "azurerm_kubernetes_cluster_agentpool" "test" {
 resource_group_name  = ""
 agent_pool_name      = ""
 aks_cluster_id       = azurerm_kubernetes_cluster.test.id
 agent_pool_type      = "{AvailabilitySet | VirtualMachineScaleSets}"
 orchestrator_version = ""
 node_count           = 2

 node_profile = {
   vm_size            = ""
   availability_zones = [""]
   os_disk_size_gb    = ""
   max_pods           = 40
   os_type            = "{Linux | Windows}" 
   vnet_subnet_id     = ""
   enable_public_ip   = false
   taints             = [""]
 }

 node_auto_scaling = {
   enabled        = {true | false}
   max_node_count = 10
   min_node_count = 2
 }

 vm_scale_set_profile = {
   priority        = ""
   eviction_policy = ""
 }
}

Other than those couple of suggestions, I am really looking forward to this contribution. 🚀

@coreywagehoft

@titilambert I don't know if you have this in your design, but we noticed that AKS won't allow node pool changes if the autoscaler is active on the node pool.

If you haven't already accounted for this, could you build in logic to disable the autoscaler, if enabled, before performing changes on an existing node pool?
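
A common user-side mitigation for the node count specifically (not a replacement for the provider-side logic requested above) is to let the autoscaler own the count via lifecycle ignore_changes. A rough sketch in Terraform 0.12 syntax, using attribute names from the proposed schema above (none of which are final):

resource "azurerm_kubernetes_cluster_agentpool" "example" {
  # ... other arguments per the proposed schema above ...

  profile {
    # ...
    enable_auto_scaling = true
    min_count           = 2
    max_count           = 10
  }

  # Once the autoscaler owns the node count, stop Terraform from trying
  # to reconcile it back to the configured value on every apply.
  lifecycle {
    ignore_changes = [profile[0].count]
  }
}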

@djsly
Contributor

djsly commented Aug 7, 2019

@jeffreyCline: are agent pools supported with AvailabilitySets? I thought they required VMSS.

@tombuildsstuff
Member

tombuildsstuff commented Aug 7, 2019

👋🏻

This is great 👍 - there are a few changes I'd suggest to @jeffreyCline's proposal, but otherwise this looks good:

  1. The name would better match other Terraform resources if it's agent_pool rather than agentpool
  2. Since the properties in the node_profile block are all for the node pool, I think they can be moved to the top level
  3. I'd suggest we use kubernetes_cluster_id rather than aks_cluster_id since services can change name slightly over time
  4. We should be able to infer whether autoscaling is enabled from the presence of the autoscale_configuration block - which also means we could add a ConflictsWith for the node_count field to ensure only one of them can be set.
resource "azurerm_kubernetes_cluster_agent_pool" "test" {
  # required
  name = ""
  resource_group_name   = ""
  agent_pool_name       = ""
  kubernetes_cluster_id = azurerm_kubernetes_cluster.test.id
  agent_pool_type       = "{AvailabilitySet | VirtualMachineScaleSets}"
  orchestrator_version  = "" # can this be inferred from the cluster?
  os_type            = "{Linux | Windows}" 
  vm_size            = "Standard_F2"

  # optional
  node_count            = 2 # maybe this wants to be named fixed_node_count?
  os_disk_size_gb    = 100
  max_pods           = 40
  availability_zones = [""]
  subnet_id     = ""
  enable_public_ip   = false
  taints             = [""]

  autoscale_configuration = { # we can infer `enabled` based on this block being present or not
    min_nodes = 2
    max_nodes = 10
  }

  vm_scale_set_profile = {
    priority        = ""
    eviction_policy = ""
  }
}

WDYT?

@titilambert
Contributor Author

titilambert commented Aug 7, 2019

Thanks for all your comments,
I just got my first test working for creation.
I'm going now to implement what @tombuildsstuff suggested.

@titilambert
Contributor Author

@tombuildsstuff I will need to make some changes in the kubernetes_cluster resource.
Terraform sees a diff because I added an agent pool.
What could be the solution? Ignore agent pools in the kubernetes cluster diff?
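
For anyone hitting this in the meantime, a rough user-side workaround (a sketch against the existing azurerm_kubernetes_cluster schema, Terraform 0.12 syntax) is to tell Terraform to ignore agent pools that are managed outside the cluster resource:

resource "azurerm_kubernetes_cluster" "example" {
  # ... existing cluster configuration ...

  # Ignore drift on agent pools so pools added by the new resource
  # (or by other tooling) do not show up as a diff on the cluster.
  lifecycle {
    ignore_changes = [agent_pool_profile]
  }
}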

@grayzu
Collaborator

grayzu commented Aug 7, 2019

Thank you @titilambert! @tombuildsstuff's suggestions sound good with a couple of comments:

  1. Part II of point 4: the node_count property is required and is still relevant when using the auto_scale properties. It acts as the initial node count, which can then auto-scale up or down from there.
  2. To answer your comment on the cluster version: the version of the cluster and the nodes do not need to match, so orchestrator_version is needed. We might be able to be smart and default it to the cluster's version if the property is not specified, however.

@grayzu
Collaborator

grayzu commented Aug 7, 2019

Also the following should not be required:

  • os_type: this defaults to Linux if not passed
  • agent_pool_type: this defaults to AvailabilitySet

@coreywagehoft

@titilambert I noticed some issues when updating an agent_pool_profile today. We are loading the values from a Terraform variable of type "map", and it seems like each attribute's value is being interpreted as the literal string after the = rather than the resolved value.

-/+ module.azure-aks.azurerm_kubernetes_cluster.aks (new resource required)
      id:                                                                     "/subscriptions/************/resourcegroups/********/providers/Microsoft.ContainerService/managedClusters/********" => <computed> (forces new resource)
      addon_profile.#:                                                        "1" => <computed>
      agent_pool_profile.#:                                                   "2" => "1"
      agent_pool_profile.0.count:                                             "2" => "0"
      agent_pool_profile.0.dns_prefix:                                        "" => <computed>
      agent_pool_profile.0.fqdn:                                              "********-*****.hcp.centralus.azmk8s.io" => <computed>
      agent_pool_profile.0.max_pods:                                          "60" => "0" (forces new resource)
      agent_pool_profile.0.name:                                              "pool1" => "${lookup(data.external.clusters.result, \"scaleset01_name\")}" (forces new resource)
      agent_pool_profile.0.os_disk_size_gb:                                   "30" => "0" (forces new resource)
      agent_pool_profile.0.os_type:                                           "Linux" => "${lookup(data.external.clusters.result, \"scaleset01_os_type\")}" (forces new resource)
      agent_pool_profile.0.type:                                              "VirtualMachineScaleSets" => "${lookup(data.external.clusters.result, \"scaleset01_type\")}" (forces new resource)
      agent_pool_profile.0.vm_size:                                           "Standard_DS13_v2" => "${lookup(data.external.clusters.result, \"scaleset01_vm_size\")}" (forces new resource)
      agent_pool_profile.0.vnet_subnet_id:                                    "/subscriptions/************/resourceGroups/*********/providers/Microsoft.Network/virtualNetworks/********-vnet/subnets/********-nodes-subnet" => "/subscriptions/************/resourceGroups/********/providers/Microsoft.Network/virtualNetworks/********-vnet/subnets/********-nodes-subnet"

I may need to file a separate issue about this, but reading your post it seemed relevant and possibly related.

@tombuildsstuff
Member

@grayzu

Part II of point 4: the node_count property is required and is still relevant when using the auto_scale properties. It acts as the initial node count, which can then auto-scale up or down from there.

Since there are two very different behaviours, perhaps it makes sense to expose this field twice: once as fixed_node_count and again within the auto_scale_configuration block as initial_node_count? Whilst ordinarily I'd suggest against this, given the behaviours are different and we should be able to use ConflictsWith, it should only be possible to set one of them.
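
To make the mutual exclusivity concrete, the two shapes could look roughly like this (a sketch; the field names follow the suggestion above and are not final):

# Shape 1: a fixed-size pool, where Terraform manages the node count.
resource "azurerm_kubernetes_cluster_agent_pool" "fixed" {
  # ... required arguments ...
  fixed_node_count = 3
}

# Shape 2: an autoscaled pool; the count here is only the starting point,
# and ConflictsWith would reject fixed_node_count on this resource.
resource "azurerm_kubernetes_cluster_agent_pool" "autoscaled" {
  # ... required arguments ...

  auto_scale_configuration {
    initial_node_count = 3
    min_nodes          = 2
    max_nodes          = 10
  }
}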

To answer your comment on the cluster version: the version of the cluster and the nodes do not need to match, so orchestrator_version is needed. We might be able to be smart and default it to the cluster's version if the property is not specified, however.

👍 to looking this up if it's not specified

Also the following should not be required:
os_type: this defaults to Linux if not passed
agent_pool_type: this defaults to AvailabilitySet

Whilst that's the behaviour of azurerm_kubernetes_cluster today, I'd suggest against that for a new resource, since presumably Availability Sets will be deprecated longer term in favour of the VM Scale Set approach.

OSType is more complicated: whilst I can see the argument for defaulting to Linux here, it may be prudent to be explicit, given that changing this requires reprovisioning the pool.

@tombuildsstuff
Member

@titilambert

@tombuildsstuff I will need to make some changes in the kubernetes_cluster resource.
Terraform sees a diff because I added an agent pool.
What could be the solution? Ignore agent pools in the kubernetes cluster diff?

Just to ensure I understand the behaviour here to point you in the right direction: is the intention that users would provision an AKS Cluster with a single node via the azurerm_kubernetes_cluster resource; and then use this new resource to provision the others? Or does the API now allow us to provision just the AKS Cluster without any agent pools - meaning we can exclusively use this resource to manage Agent Pools?

@djsly
Contributor

djsly commented Aug 7, 2019

@tombuildsstuff the API expects the primary agent pool to be passed via the managedcluster client.

Other agent pools need to be provisioned with the agentpool client.

There was a question we were wondering about: should we do it like the NSG resource, where pools can be defined on the kubernetes_cluster resource as a list, with a flag marking the primary agent pool (that one would be passed to the managedcluster client, while the remaining ones would be handled by the agentpool client)? Or should we limit the kubernetes_cluster resource to a single agent_pool block and force users to use the kubernetes_cluster_agent_pool resource?
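
Roughly, the two options being weighed look like this (a sketch with placeholder names; the primary flag is hypothetical):

# Option A: all pools declared on the cluster, with a flag marking the
# primary pool (sent via the managedcluster client; the remaining pools
# would be handled via the agentpool client behind the scenes).
resource "azurerm_kubernetes_cluster" "a" {
  # ...

  agent_pool_profile {
    name    = "default"
    primary = true # hypothetical flag
    # ...
  }

  agent_pool_profile {
    name = "extra"
    # ...
  }
}

# Option B: the cluster keeps a single (primary) pool; every additional
# pool is its own resource.
resource "azurerm_kubernetes_cluster" "b" {
  # ...

  agent_pool_profile {
    name = "default"
    # ...
  }
}

resource "azurerm_kubernetes_cluster_agent_pool" "extra" {
  kubernetes_cluster_id = azurerm_kubernetes_cluster.b.id
  # ...
}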

@grayzu
Collaborator

grayzu commented Aug 7, 2019

@tombuildsstuff

Since there are two very different behaviours, perhaps it makes sense to expose this field twice: once as fixed_node_count and again within the auto_scale_configuration block as initial_node_count? Whilst ordinarily I'd suggest against this, given the behaviours are different and we should be able to use ConflictsWith, it should only be possible to set one of them.

This one is actually implemented in a way that is not TF-friendly: the count is required on create and can be passed with autoscale enabled, but it cannot be passed afterwards if it differs from the actual node count. I will touch base with the AKS team about this.

Whilst that's the behaviour of azurerm_kubernetes_cluster today, I'd suggest against that for a new resource, since presumably Availability Sets will be deprecated longer term in favour of the VM Scale Set approach.

I am not saying that we should set the defaults for these in the provider. The properties are not required by the API, so we should not require them. And I stand corrected: it looks like the default type is actually VMSS for the Agent Pool API.

For OSType, I am always in favor of reducing unnecessary typing but can see your point.

titilambert added a commit to titilambert/terraform-provider-azurerm that referenced this issue Aug 9, 2019
@titilambert
Contributor Author

titilambert commented Aug 9, 2019

This is the first commit for this new feature #4046

Breaking change:

  • I limited kubernetes_cluster to a single agent pool profile; this will force people to use the new resource instead.

What is working:

  • You can add/remove/scale up/scale down/edit agent pools using the new resource
  • You can add/remove/scale up/scale down/edit the first (and only) agent pool set in the kubernetes_cluster resource

Missing:

  • Data source (should I add one?)
  • Test file

I will probably wait a few days for feedback before continuing with this.

@davidack

@titilambert did you change the limit of agent pool profiles for azurerm_kubernetes_cluster to 1 just in the azurerm provider, or did you change the API itself? Because I am seeing behavior that indicates the API itself was changed. Terraform code with 4 agent_pool_profile blocks in my azurerm_kubernetes_cluster, which worked fine last week, now results in this error:

Error: Error creating/updating Managed Kubernetes Cluster "foo" (Resource Group "foo-rg"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="AgentPoolUpdateViaManagedClusterNotSupported" Message="Node pool update via managed cluster not allowed when the cluster contains more than one node pool. Please use per node pool operations."

This is the same problem as in #3549; that issue is the bug, this issue is the fix. I will leave a related comment there as well.

In the meantime, if you could get whoever made this API change to back it out until the version of the azurerm provider with your kubernetes_cluster_agent_pool changes (1.33) is released, that would be great. As it is, I am currently unable to provision an AKS cluster with Terraform because of this problem.

@djsly
Contributor

djsly commented Aug 15, 2019 via email

@coreywagehoft

Any updates when this will be merged and released?

@ranokarno

@djsly
I am encountering the same issue in the Asia regions. It seems the MS update was applied in all regions.
Can you advise whether MS support reverted the server API for your subscription only, or for the whole region?

@davidack

@ranokarno
I opened a case on this with MSFT support over two weeks ago; I am having the problem in the japaneast region. In which Asia region are you encountering this problem?

MSFT has yet to revert the API change, so I am still blocked. They are rolling out an update across all regions that they say will fix the problem, but they have not provided any specifics about how. Until the new azurerm_kubernetes_cluster_agentpool resource (using the VMSS API instead of the AKS API) is released as part of a new version of the azurerm provider, I can't think of any way the change they are rolling out now would fix the problem I am having, unless they remove the restriction on creating or updating more than one node pool via the AKS API.

@xizha162

xizha162 commented Sep 4, 2019

@davidack, sorry for the delay. The fix has been deployed in the Japan East region. With the fix, you can still use the MC API to scale agent pools. Please try it.

@ranokarno

@davidack, I am encountering the same issue in the "eastasia" and "southeastasia" Azure regions.
@xizhamsft, is there any possibility of applying the same fix to the "eastasia" and "southeastasia" regions as well?

@lewalt000

@titilambert Thank you for working on this new resource. Do you have any updates when you expect the PR for this can be merged?

#4046

@djsly
Contributor

djsly commented Sep 4, 2019 via email

@davidack

davidack commented Sep 9, 2019

@xizhamsft, thanks for the heads-up... I tried it last week and was able to provision a new AKS cluster in japaneast with multiple node pools specified as blocks within the azurerm_kubernetes_cluster resource. I look forward to using the new azurerm_kubernetes_cluster_agentpool resource once it is available, but for now the original method using the MC API works again. Thanks to the Azure team for getting this fixed.

Also, congrats to @titilambert on the new baby!

NathanielRose pushed a commit to NathanielRose/terraform-provider-azurerm that referenced this issue Sep 10, 2019
@xizha162

@ranokarno, sorry for the late reply. Yes. We already pushed the release to all regions.

@davidack

davidack commented Oct 4, 2019

Recently I have been investigating how to add custom data to my node pools to make some changes to every node as it boots, and I discovered that what I need is the custom_data argument in the os_profile block of the azurerm_virtual_machine_scale_set resource. It occurred to me that duplicating everything that is in azurerm_virtual_machine_scale_set in this new azurerm_kubernetes_cluster_agentpool resource will be a lot of extra work, and has the potential to lag behind the azurerm_virtual_machine_scale_set changes.

Wouldn't it make sense to have a vm_scale_set argument in the azurerm_kubernetes_cluster_agentpool resource (which would then be much simpler) and leave all the VMSS settings to the azurerm_virtual_machine_scale_set resource, or am I missing something? Is there something about the underlying API that causes a problem for this approach? Since MSFT has stated they will only support VMSS for multiple agent pools, this approach makes sense to me.
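
As a sketch of that idea (entirely hypothetical: neither the argument nor the wiring shown here exists in the provider):

# Hypothetical: the agent pool resource only attaches an existing scale set
# to the cluster, and all VMSS-level settings (os_profile custom_data, etc.)
# stay on the azurerm_virtual_machine_scale_set resource.
resource "azurerm_virtual_machine_scale_set" "nodes" {
  # ... full VMSS configuration, including os_profile { custom_data = ... } ...
}

resource "azurerm_kubernetes_cluster_agentpool" "example" {
  kubernetes_cluster_id        = azurerm_kubernetes_cluster.example.id
  virtual_machine_scale_set_id = azurerm_virtual_machine_scale_set.nodes.id # hypothetical argument
}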

@coreywagehoft

@titilambert any updates on this? Will this be in the next provider release?

Also hope paternity leave went well!

@jluk

jluk commented Oct 15, 2019

For awareness, and to help with prioritization: the multiple node pools feature will be GA in a few weeks.

However, some of the properties in the agent pool body described above are limited-preview features, including:

    enable_node_public_ip = false
    scale_set_priority = ""
    scale_set_eviction_policy = ""

titilambert added a commit to titilambert/terraform-provider-azurerm that referenced this issue Oct 16, 2019
@titilambert
Contributor Author

@coreywagehoft I just rebased the PR on master

@lewalt000

@tombuildsstuff I noticed you recently assigned this issue to yourself. Would you be able to provide any guidance on how much work you have left for it? Thank you.

@tombuildsstuff
Member

@lewalt000 at the moment we're waiting on a PR to be merged on Microsoft's side so that we can push the changes needed to make this happen.

@ghost

ghost commented Nov 26, 2019

This has been released in version 1.37.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 1.37.0"
}
# ... other configuration ...

@ghost

ghost commented Mar 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 29, 2020