diff --git a/tencentcloud/resource_tc_kubernetes_node_pool.go b/tencentcloud/resource_tc_kubernetes_node_pool.go
index 420ac8f102..978971b302 100644
--- a/tencentcloud/resource_tc_kubernetes_node_pool.go
+++ b/tencentcloud/resource_tc_kubernetes_node_pool.go
@@ -2,8 +2,11 @@
 Provide a resource to create an auto scaling group for kubernetes cluster.
 
 ~> **NOTE:** We recommend the usage of one cluster with essential worker config + node pool to manage cluster and nodes. Its a more flexible way than manage worker config with tencentcloud_kubernetes_cluster, tencentcloud_kubernetes_scale_worker or exist node management of `tencentcloud_kubernetes_attachment`. Cause some unchangeable parameters of `worker_config` may cause the whole cluster resource `force new`.
+
 ~> **NOTE:** In order to ensure the integrity of customer data, if you destroy nodepool instance, it will keep the cvm instance associate with nodepool by default. If you want destroy together, please set `delete_keep_instance` to `false`.
 
+~> **NOTE:** In order to ensure the integrity of customer data, if a cvm instance is destroyed by scale-in, the cbs disks associated with it are kept by default. If you want them destroyed together, please set `delete_with_instance` to `true`.
+
 Example Usage
 
 ```hcl
@@ -240,7 +243,7 @@ func composedKubernetesAsScalingConfigPara() map[string]*schema.Schema {
 			"delete_with_instance": {
 				Type:        schema.TypeBool,
 				Optional:    true,
-				Description: "Indicates whether the disk remove after instance terminated.",
+				Description: "Indicates whether the disk is removed after the instance is terminated. Default is `false`.",
 			},
 			"encrypt": {
 				Type:        schema.TypeBool,
diff --git a/website/docs/r/kubernetes_node_pool.html.markdown b/website/docs/r/kubernetes_node_pool.html.markdown
index 68793d214c..2e8ed5538a 100644
--- a/website/docs/r/kubernetes_node_pool.html.markdown
+++ b/website/docs/r/kubernetes_node_pool.html.markdown
@@ -12,8 +12,11 @@ description: |-
 Provide a resource to create an auto scaling group for kubernetes cluster.
 
 ~> **NOTE:** We recommend the usage of one cluster with essential worker config + node pool to manage cluster and nodes. Its a more flexible way than manage worker config with tencentcloud_kubernetes_cluster, tencentcloud_kubernetes_scale_worker or exist node management of `tencentcloud_kubernetes_attachment`. Cause some unchangeable parameters of `worker_config` may cause the whole cluster resource `force new`.
+
 ~> **NOTE:** In order to ensure the integrity of customer data, if you destroy nodepool instance, it will keep the cvm instance associate with nodepool by default. If you want destroy together, please set `delete_keep_instance` to `false`.
 
+~> **NOTE:** In order to ensure the integrity of customer data, if a cvm instance is destroyed by scale-in, the cbs disks associated with it are kept by default. If you want them destroyed together, please set `delete_with_instance` to `true`.
+
 ## Example Usage
 
 ```hcl
@@ -217,7 +220,7 @@ The `data_disk` object supports the following:
 
 The `data_disk` object supports the following:
 
-* `delete_with_instance` - (Optional, Bool) Indicates whether the disk remove after instance terminated.
+* `delete_with_instance` - (Optional, Bool) Indicates whether the disk is removed after the instance is terminated. Default is `false`.
 * `disk_size` - (Optional, Int) Volume of disk in GB. Default is `0`.
 * `disk_type` - (Optional, String) Types of disk. Valid value: `CLOUD_PREMIUM` and `CLOUD_SSD`.
 * `encrypt` - (Optional, Bool) Specify whether to encrypt data disk, default: false. NOTE: Make sure the instance type is offering and the cam role `QcloudKMSAccessForCVMRole` was provided.
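For reference, a minimal sketch of how the two retention flags interact, using hypothetical placeholder values for `cluster_id`, `vpc_id`, `subnet_ids`, and the instance type: `delete_keep_instance = false` removes the cvm instances when the node pool is destroyed, and `delete_with_instance = true` on the `data_disk` removes the cbs disks together with their instance.

```hcl
resource "tencentcloud_kubernetes_node_pool" "example" {
  name       = "example-pool"
  cluster_id = "cls-xxxxxxxx"      # hypothetical cluster id
  vpc_id     = "vpc-xxxxxxxx"      # hypothetical vpc id
  subnet_ids = ["subnet-xxxxxxxx"] # hypothetical subnet id
  max_size   = 6
  min_size   = 1

  # Destroy the cvm instances together with the node pool
  # (the default keeps them, per the notes above).
  delete_keep_instance = false

  auto_scaling_config {
    instance_type = "S5.MEDIUM2" # hypothetical instance type

    data_disk {
      disk_type = "CLOUD_PREMIUM"
      disk_size = 50
      # Destroy the cbs data disk together with the cvm instance on scale-in
      # (the default `false` keeps the disk).
      delete_with_instance = true
    }
  }
}
```

Left at their defaults (`delete_keep_instance = true`, `delete_with_instance = false`), both the instances and their data disks survive the corresponding destroy, which is the data-safety behavior the notes above describe.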