From 293df90601bd1cf0b7f6cbb5cbf3dc7cd12f4b53 Mon Sep 17 00:00:00 2001
From: hellertang
Date: Tue, 14 Mar 2023 23:08:29 +0800
Subject: [PATCH 1/2] update changelog

---
 CHANGELOG.md | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 3f1c512071..8da96dae6f 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,19 @@
+## 1.79.15 (March 14, 2023)
+
+FEATURES:
+
+* **New Data Source:** `tencentcloud_tcm_mesh` ([#1600](https://github.com/tencentcloudstack/terraform-provider-tencentcloud/pull/1600))
+* **New Resource:** `tencentcloud_mariadb_instance` ([#1525](https://github.com/tencentcloudstack/terraform-provider-tencentcloud/pull/1525))
+* **New Resource:** `tencentcloud_mps_person_sample` ([#1601](https://github.com/tencentcloudstack/terraform-provider-tencentcloud/pull/1601))
+
+ENHANCEMENTS:
+
+* resource/tencentcloud_mysql_account: support import ([#1598](https://github.com/tencentcloudstack/terraform-provider-tencentcloud/pull/1598))
+
+BUG FIXES:
+
+* resource/tencentcloud_vpn_connection: fix dpd_timeout read error ([#1597](https://github.com/tencentcloudstack/terraform-provider-tencentcloud/pull/1597))
+
 ## 1.79.14 (March 08, 2023)
 
 FEATURES:

From a6472dd891206143d70e474cd007a711dbb8d79c Mon Sep 17 00:00:00 2001
From: hellertang
Date: Wed, 15 Mar 2023 21:52:01 +0800
Subject: [PATCH 2/2] fix as doc

---
 tencentcloud/resource_tc_as_scaling_config.go     | 5 ++++-
 tencentcloud/resource_tc_kubernetes_node_pool.go  | 4 ++--
 website/docs/r/as_scaling_config.html.markdown    | 4 +++-
 website/docs/r/kubernetes_node_pool.html.markdown | 4 ++--
 4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/tencentcloud/resource_tc_as_scaling_config.go b/tencentcloud/resource_tc_as_scaling_config.go
index 722d275da6..513cd444fa 100644
--- a/tencentcloud/resource_tc_as_scaling_config.go
+++ b/tencentcloud/resource_tc_as_scaling_config.go
@@ -1,6 +1,9 @@
 /*
 Provides a resource to create a configuration for an AS (Auto scaling) instance.
 
+~> **NOTE:** In order to ensure the integrity of customer data, if the cvm instance is destroyed due to shrinking (scale-in), the cbs disks associated with it are kept by default. If you want to destroy them together, please set `delete_with_instance` to `true`.
+
+
 Example Usage
 
 ```hcl
@@ -142,7 +145,7 @@ func resourceTencentCloudAsScalingConfig() *schema.Resource {
 			"delete_with_instance": {
 				Type:        schema.TypeBool,
 				Optional:    true,
-				Description: "Indicates whether the disk remove after instance terminated.",
+				Description: "Indicates whether the disk is removed after the instance is terminated. Default is `false`.",
 			},
 		},
 	},
diff --git a/tencentcloud/resource_tc_kubernetes_node_pool.go b/tencentcloud/resource_tc_kubernetes_node_pool.go
index 978971b302..b52a802b4f 100644
--- a/tencentcloud/resource_tc_kubernetes_node_pool.go
+++ b/tencentcloud/resource_tc_kubernetes_node_pool.go
@@ -3,9 +3,9 @@
 Provide a resource to create an auto scaling group for kubernetes cluster.
 
 ~> **NOTE:** We recommend the usage of one cluster with essential worker config + node pool to manage cluster and nodes. Its a more flexible way than manage worker config with tencentcloud_kubernetes_cluster, tencentcloud_kubernetes_scale_worker or exist node management of `tencentcloud_kubernetes_attachment`. Cause some unchangeable parameters of `worker_config` may cause the whole cluster resource `force new`.
 
-~> **NOTE:** In order to ensure the integrity of customer data, if you destroy nodepool instance, it will keep the cvm instance associate with nodepool by default. If you want destroy together, please set `delete_keep_instance` to `false`.
+~> **NOTE:** In order to ensure the integrity of customer data, if you destroy a nodepool, the cvm instances associated with it are kept by default. If you want to destroy them together, please set `delete_keep_instance` to `false`.
 
-~> **NOTE:** In order to ensure the integrity of customer data, if the cvm instance was destroyed due to shrinking, it will keep the cbs associate with cvm by default. If you want destroy together, please set `delete_with_instance` to `true`.
+~> **NOTE:** In order to ensure the integrity of customer data, if the cvm instance is destroyed due to shrinking (scale-in), the cbs disks associated with it are kept by default. If you want to destroy them together, please set `delete_with_instance` to `true`.
 
 Example Usage
diff --git a/website/docs/r/as_scaling_config.html.markdown b/website/docs/r/as_scaling_config.html.markdown
index 3b02ff5ac9..272bbb7ca9 100644
--- a/website/docs/r/as_scaling_config.html.markdown
+++ b/website/docs/r/as_scaling_config.html.markdown
@@ -11,6 +11,8 @@ description: |-
 
 Provides a resource to create a configuration for an AS (Auto scaling) instance.
 
+~> **NOTE:** In order to ensure the integrity of customer data, if the cvm instance is destroyed due to shrinking (scale-in), the cbs disks associated with it are kept by default. If you want to destroy them together, please set `delete_with_instance` to `true`.
+
 ## Example Usage
 
 ```hcl
@@ -87,7 +89,7 @@ The following arguments are supported:
 
 The `data_disk` object supports the following:
 
-* `delete_with_instance` - (Optional, Bool) Indicates whether the disk remove after instance terminated.
+* `delete_with_instance` - (Optional, Bool) Indicates whether the disk is removed after the instance is terminated. Default is `false`.
 * `disk_size` - (Optional, Int) Volume of disk in GB. Default is `0`.
 * `disk_type` - (Optional, String) Types of disk. Valid values: `CLOUD_PREMIUM` and `CLOUD_SSD`. valid when disk_type_policy is ORIGINAL.
 * `snapshot_id` - (Optional, String) Data disk snapshot ID.
diff --git a/website/docs/r/kubernetes_node_pool.html.markdown b/website/docs/r/kubernetes_node_pool.html.markdown
index 2e8ed5538a..0dd6deff69 100644
--- a/website/docs/r/kubernetes_node_pool.html.markdown
+++ b/website/docs/r/kubernetes_node_pool.html.markdown
@@ -13,9 +13,9 @@
 Provide a resource to create an auto scaling group for kubernetes cluster.
 
 ~> **NOTE:** We recommend the usage of one cluster with essential worker config + node pool to manage cluster and nodes. Its a more flexible way than manage worker config with tencentcloud_kubernetes_cluster, tencentcloud_kubernetes_scale_worker or exist node management of `tencentcloud_kubernetes_attachment`. Cause some unchangeable parameters of `worker_config` may cause the whole cluster resource `force new`.
 
-~> **NOTE:** In order to ensure the integrity of customer data, if you destroy nodepool instance, it will keep the cvm instance associate with nodepool by default. If you want destroy together, please set `delete_keep_instance` to `false`.
+~> **NOTE:** In order to ensure the integrity of customer data, if you destroy a nodepool, the cvm instances associated with it are kept by default. If you want to destroy them together, please set `delete_keep_instance` to `false`.
 
-~> **NOTE:** In order to ensure the integrity of customer data, if the cvm instance was destroyed due to shrinking, it will keep the cbs associate with cvm by default. If you want destroy together, please set `delete_with_instance` to `true`.
+~> **NOTE:** In order to ensure the integrity of customer data, if the cvm instance is destroyed due to shrinking (scale-in), the cbs disks associated with it are kept by default. If you want to destroy them together, please set `delete_with_instance` to `true`.
 
 ## Example Usage
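
As a usage sketch of the behavior documented in the notes above: the `data_disk` block below uses the schema fields touched by this patch, while the top-level arguments (`configuration_name`, `image_id`, `instance_types`) and all values are illustrative placeholders rather than part of this change.

```hcl
resource "tencentcloud_as_scaling_config" "example" {
  configuration_name = "example-launch-config" # placeholder name
  image_id           = "img-xxxxxxxx"          # placeholder image ID
  instance_types     = ["S5.SMALL2"]           # placeholder instance type

  data_disk {
    disk_type = "CLOUD_PREMIUM"
    disk_size = 50

    # Defaults to false: on scale-in, the CBS data disk is kept after the
    # CVM is terminated. Set to true to destroy the disk with the instance.
    delete_with_instance = true
  }
}
```

The analogous node pool setting is `delete_keep_instance`, which defaults to keeping the CVM instances when the node pool is destroyed; set it to `false` to destroy them together.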