Adding update support for labels to google_container_node_pool #6599

Merged · 1 commit · Oct 12, 2022
@@ -1038,6 +1038,61 @@ func nodePoolUpdate(d *schema.ResourceData, meta interface{}, nodePoolInfo *Node
}

if d.HasChange(prefix + "node_config") {

    if d.HasChange(prefix + "node_config.0.tags") {
        req := &container.UpdateNodePoolRequest{
            Name: name,
        }
        if v, ok := d.GetOk(prefix + "node_config.0.tags"); ok {
            tagsList := v.([]interface{})
            tags := []string{}
            for _, v := range tagsList {
                if v != nil {
                    tags = append(tags, v.(string))
                }
            }
            ntags := &container.NetworkTags{
                Tags: tags,
            }
            req.Tags = ntags
        }

        // Set tags to the empty list when the user removes a previously
        // defined list of tags entirely, i.e. the node pool goes from having
        // tags to no longer having any.
        if req.Tags == nil {
            tags := []string{}
            ntags := &container.NetworkTags{
                Tags: tags,
            }
            req.Tags = ntags
        }

        updateF := func() error {
            clusterNodePoolsUpdateCall := config.NewContainerClient(userAgent).Projects.Locations.Clusters.NodePools.Update(nodePoolInfo.fullyQualifiedName(name), req)
            if config.UserProjectOverride {
                clusterNodePoolsUpdateCall.Header().Add("X-Goog-User-Project", nodePoolInfo.project)
            }
            op, err := clusterNodePoolsUpdateCall.Do()
            if err != nil {
                return err
            }

            // Wait until it's updated
            return containerOperationWait(config, op,
                nodePoolInfo.project,
                nodePoolInfo.location,
                "updating GKE node pool tags", userAgent,
                timeout)
        }

        // Call update serially.
        if err := lockedCall(lockKey, updateF); err != nil {
            return err
        }

        log.Printf("[INFO] Updated tags for node pool %s", name)
    }

    if d.HasChange(prefix + "node_config.0.image_type") {
        req := &container.UpdateClusterRequest{
            Update: &container.ClusterUpdate{
@@ -1662,6 +1662,8 @@ resource "google_container_node_pool" "np_with_node_config" {
  ]
  preemptible      = true
  min_cpu_platform = "Intel Broadwell"

  tags = ["ga"]

  taint {
    key = "taint_key"
@@ -1707,6 +1709,8 @@ resource "google_container_node_pool" "np_with_node_config" {
  preemptible      = true
  min_cpu_platform = "Intel Broadwell"

  tags = ["beta"]

  taint {
    key   = "taint_key"
    value = "taint_value"
1 change: 0 additions & 1 deletion mmv1/third_party/terraform/utils/node_config.go.erb
@@ -301,7 +301,6 @@ func schemaNodeConfig() *schema.Schema {
"tags": {
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Member
This file is also used in the google_container_cluster resource. Do we know whether tags under the google_container_cluster resource can also be updated? If not, we probably want to add it to forceNewClusterNodeConfigFields in the resource_container_cluster.go.erb file at https://github.com/GoogleCloudPlatform/magic-modules/blob/main/mmv1/third_party/terraform/resources/resource_container_cluster.go.erb#L78.
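
For reference, a minimal sketch of what that suggestion would look like, assuming forceNewClusterNodeConfigFields is a plain list of node_config field names in resource_container_cluster.go.erb (the placeholder entries below are illustrative, not the file's actual contents):

    // Hypothetical sketch only, not part of this PR: if tags could not be updated
    // in-place on google_container_cluster, the field would be appended to the
    // list of node_config fields that force recreation of the cluster resource.
    var forceNewClusterNodeConfigFields = []string{
        // ...existing entries elided...
        "tags",
    }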

Contributor Author (@NA2047) · Oct 4, 2022
Yes, I see. Looking at the documentation:

nodeConfig (deprecated)

Parameters used in creating the cluster's nodes. For requests, this field should only be used in lieu of a "nodePool" object, since this configuration (along with the "initialNodeCount") will be used to create a "NodePool" object with an auto-generated name. Do not use this and a nodePool at the same time. For responses, this field will be populated with the node configuration of the first node pool. (For configuration of each node pool, see nodePool.config)

It looks like the preferred method is not to define nodeConfig within google_container_cluster at all:
https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.locations.clusters

node_pool - (Optional) List of node pools associated with this cluster. Warning: node pools defined inside a cluster can't be changed (or added/removed) after cluster creation without deleting and recreating the entire cluster. Unless you absolutely need the ability to say "these are the only node pools associated with this cluster", use the google_container_node_pool resource instead of this property.

Let me know what you think, @shuyama1.

UPDATE:

I have tested defining the node_pool inside of google_container_cluster, and the tags do update as expected without changing forceNewClusterNodeConfigFields in resource_container_cluster.go.erb.

I believe this is due to this code block, but I could be wrong.


Member
I believe that if the tags can be updated by defining the node_pool inside of google_container_cluster, which means the in-place update functionality can also be supported in google_container_cluster, then we don't need to add this field to forceNewClusterNodeConfigFields (it should only contain fields that can be updated for google_container_node_pool but not for google_container_cluster).

Contributor Author
Understood, makes sense to me.

    Elem:        &schema.Schema{Type: schema.TypeString},
    Description: `The list of instance tags applied to all nodes.`,
},