diff --git a/.changelog/12108.txt b/.changelog/12108.txt new file mode 100644 index 000000000000..f210a4a379ee --- /dev/null +++ b/.changelog/12108.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_iot_provisioning_template +``` \ No newline at end of file diff --git a/.changelog/13203.txt b/.changelog/13203.txt new file mode 100644 index 000000000000..a7fc1f742614 --- /dev/null +++ b/.changelog/13203.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_redshift_cluster: Correctly use `number_of_nodes` argument value when restoring from snapshot +``` \ No newline at end of file diff --git a/.changelog/13392.txt b/.changelog/13392.txt new file mode 100644 index 000000000000..8d411acf4bf2 --- /dev/null +++ b/.changelog/13392.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_iot_logging_options +``` \ No newline at end of file diff --git a/.changelog/14075.txt b/.changelog/14075.txt new file mode 100644 index 000000000000..114373f335b0 --- /dev/null +++ b/.changelog/14075.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_dynamodb_table_item: Allow `item` names containing non-letter characters to succeed +``` \ No newline at end of file diff --git a/.changelog/15355.txt b/.changelog/15355.txt new file mode 100644 index 000000000000..8d7738c04105 --- /dev/null +++ b/.changelog/15355.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_storagegateway_gateway: Add `maintenance_start_time` argument +``` \ No newline at end of file diff --git a/.changelog/18088.txt b/.changelog/18088.txt new file mode 100644 index 000000000000..4bfb8ed834c7 --- /dev/null +++ b/.changelog/18088.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_mq_broker: Add validation to `broker_name` and `security_groups` arguments +``` diff --git a/.changelog/19713.txt b/.changelog/19713.txt new file mode 100644 index 000000000000..614286177b1d --- /dev/null +++ b/.changelog/19713.txt @@ -0,0 +1,11 @@ +```release-note:enhancement +resource/aws_elasticsearch_domain: Add `cold_storage_options` argument to the `cluster_config` configuration block +``` + +```release-note:enhancement +data-source/aws_elasticsearch_domain: Add `cold_storage_options` attribute to the `cluster_config` configuration block +``` + +```release-note:enhancement +resource/aws_elasticsearch_domain: Add configurable Create and Delete timeouts +``` \ No newline at end of file diff --git a/.changelog/20068.txt b/.changelog/20068.txt new file mode 100644 index 000000000000..e401b93a9a3c --- /dev/null +++ b/.changelog/20068.txt @@ -0,0 +1,15 @@ +```release-note:enhancement +resource/aws_elasticache_cluster: Add `log_delivery_configuration` argument +``` + +```release-note:enhancement +data-source/aws_elasticache_cluster: Add `log_delivery_configuration` attribute +``` + +```release-note:enhancement +resource/aws_elasticache_replication_group: Add `log_delivery_configuration` argument +``` + +```release-note:enhancement +data-source/aws_elasticache_replication_group: Add `log_delivery_configuration` attribute +``` diff --git a/.changelog/20708.txt b/.changelog/20708.txt new file mode 100644 index 000000000000..fca6790b5e98 --- /dev/null +++ b/.changelog/20708.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_autoscaling_group: Fix issue where group was not recreated if `initial_lifecycle_hook` changed +``` \ No newline at end of file diff --git a/.changelog/20709.txt b/.changelog/20709.txt new file mode 100644 index 000000000000..640cd6692ad7 --- /dev/null +++ b/.changelog/20709.txt @@ -0,0 +1,3 @@ +```release-note:bug 
+resource/aws_cloudfront_distribution: Fix default value of `origin_path` in `origin` block +``` diff --git a/.changelog/20892.txt b/.changelog/20892.txt new file mode 100644 index 000000000000..def01eb55c63 --- /dev/null +++ b/.changelog/20892.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_elasticsearch_domain_saml_option: Fix difference caused by `subject_key` default not matching AWS default; old and new defaults are equivalent +``` diff --git a/.changelog/21941.txt b/.changelog/21941.txt new file mode 100644 index 000000000000..46e3629096b0 --- /dev/null +++ b/.changelog/21941.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_mwaa_environment: Add `schedulers` argument +``` diff --git a/.changelog/22097.txt b/.changelog/22097.txt new file mode 100644 index 000000000000..1c7ad4097894 --- /dev/null +++ b/.changelog/22097.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_rds_cluster_activity_stream +``` \ No newline at end of file diff --git a/.changelog/22160.txt b/.changelog/22160.txt new file mode 100644 index 000000000000..ceef8b670311 --- /dev/null +++ b/.changelog/22160.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_cloudformation_stack_set: Consider `QUEUED` a valid pending state for resource creation +``` \ No newline at end of file diff --git a/.changelog/22355.txt b/.changelog/22355.txt new file mode 100644 index 000000000000..8f9ffa1cc4de --- /dev/null +++ b/.changelog/22355.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_network_firewall_policy: Allow use of managed rule group ARNs for network firewall managed rule groups. +``` \ No newline at end of file diff --git a/.changelog/23157.txt b/.changelog/23157.txt new file mode 100644 index 000000000000..3a7e5e90b451 --- /dev/null +++ b/.changelog/23157.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_eks_addon_version +``` diff --git a/.changelog/23544.txt b/.changelog/23544.txt new file mode 100644 index 000000000000..0b0253c60c78 --- /dev/null +++ b/.changelog/23544.txt @@ -0,0 +1,11 @@ +```release-note:enhancement +resource/aws_mskconnect_custom_plugin: Implement resource Delete +``` + +```release-note:new-resource +aws_mskconnect_connector +``` + +```release-note:new-data-source +aws_mskconnect_connector +``` \ No newline at end of file diff --git a/.changelog/23759.txt b/.changelog/23759.txt new file mode 100644 index 000000000000..32a903abdd4f --- /dev/null +++ b/.changelog/23759.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_db_instance_automated_backups_replication +``` diff --git a/.changelog/23798.txt b/.changelog/23798.txt new file mode 100644 index 000000000000..1e9a0306c330 --- /dev/null +++ b/.changelog/23798.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `acl` and `grant` parameters to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring these parameters with the standalone `aws_s3_bucket_acl` resource. +``` \ No newline at end of file diff --git a/.changelog/23816.txt b/.changelog/23816.txt new file mode 100644 index 000000000000..ccec53a7cd9f --- /dev/null +++ b/.changelog/23816.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `acceleration_status` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_accelerate_configuration` resource. 
+``` \ No newline at end of file diff --git a/.changelog/23817.txt b/.changelog/23817.txt new file mode 100644 index 000000000000..9fe2bc085462 --- /dev/null +++ b/.changelog/23817.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `cors_rule` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_cors_configuration` resource. +``` \ No newline at end of file diff --git a/.changelog/23818.txt b/.changelog/23818.txt new file mode 100644 index 000000000000..63e822ed5753 --- /dev/null +++ b/.changelog/23818.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `lifecycle_rule` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_lifecycle_configuration` resource. +``` \ No newline at end of file diff --git a/.changelog/23819.txt b/.changelog/23819.txt new file mode 100644 index 000000000000..c60b34caefa0 --- /dev/null +++ b/.changelog/23819.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `logging` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_logging` resource. +``` \ No newline at end of file diff --git a/.changelog/23820.txt b/.changelog/23820.txt new file mode 100644 index 000000000000..e53b49cf787e --- /dev/null +++ b/.changelog/23820.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `versioning` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_versioning` resource. +``` \ No newline at end of file diff --git a/.changelog/23821.txt b/.changelog/23821.txt new file mode 100644 index 000000000000..561b71ed5349 --- /dev/null +++ b/.changelog/23821.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `website` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_website_configuration` resource. +``` \ No newline at end of file diff --git a/.changelog/23822.txt b/.changelog/23822.txt new file mode 100644 index 000000000000..aeadeacdf08f --- /dev/null +++ b/.changelog/23822.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `server_side_encryption_configuration` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_server_side_encryption_configuration` resource. +``` \ No newline at end of file diff --git a/.changelog/23842.txt b/.changelog/23842.txt new file mode 100644 index 000000000000..b757bbb820fe --- /dev/null +++ b/.changelog/23842.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `replication_configuration` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_replication_configuration` resource. 
+``` \ No newline at end of file diff --git a/.changelog/23843.txt b/.changelog/23843.txt new file mode 100644 index 000000000000..fbcf45069187 --- /dev/null +++ b/.changelog/23843.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `policy` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_policy` resource. +``` \ No newline at end of file diff --git a/.changelog/23844.txt b/.changelog/23844.txt new file mode 100644 index 000000000000..b4c96c31a8e6 --- /dev/null +++ b/.changelog/23844.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `request_payer` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_request_payment_configuration` resource. +``` \ No newline at end of file diff --git a/.changelog/23859.txt b/.changelog/23859.txt new file mode 100644 index 000000000000..71f09215769b --- /dev/null +++ b/.changelog/23859.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +data-source/aws_eips: Set `public_ips` for VPC as well as EC2 Classic +``` diff --git a/.changelog/23862.txt b/.changelog/23862.txt new file mode 100644 index 000000000000..bbe4070b3aec --- /dev/null +++ b/.changelog/23862.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_cloudwatch_event_target: Fix setting `path_parameter_values`. +``` diff --git a/.changelog/23873.txt b/.changelog/23873.txt new file mode 100644 index 000000000000..28066cf0f82b --- /dev/null +++ b/.changelog/23873.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_lambda_function: Add `ephemeral_storage` argument +``` + +```release-note:enhancement +data-source/aws_lambda_function: Add `ephemeral_storage` attribute +``` diff --git a/.changelog/23879.txt b/.changelog/23879.txt new file mode 100644 index 000000000000..cb5d194cd4a2 --- /dev/null +++ b/.changelog/23879.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_lambda_function: Add error handling for `ResourceConflictException` errors on create and update +``` \ No newline at end of file diff --git a/.changelog/23880.txt b/.changelog/23880.txt new file mode 100644 index 000000000000..4734cf70ea38 --- /dev/null +++ b/.changelog/23880.txt @@ -0,0 +1,11 @@ +```release-note:enhancement +resource/aws_dlm_lifecycle_policy: Add `policy_details.schedule.create_rule.cron_expression`, `policy_details.schedule.retain_rule.interval`, `policy_details.schedule.retain_rule.interval_unit`, `policy_details.policy_type`, `policy_details.schedule.deprecate_rule`, `policy_details.parameters`, `policy_details.schedule.variable_tags`, `policy_details.schedule.fast_restore_rule`, `policy_details.schedule.share_rule`, `policy_details.resource_locations`, `policy_details.schedule.create_rule.location`, `policy_details.action` and `policy_details.event_source` arguments +``` + +```release-note:enhancement +resource/aws_dlm_lifecycle_policy: Add plan time validations for `policy_details.resource_types` and `description` arguments +``` + +```release-note:enhancement +resource/aws_dlm_lifecycle_policy: Make `policy_details.resource_types`, `policy_details.schedule`, `policy_details.target_tags`, `policy_details.schedule.retain_rule` and `policy_details.schedule.create_rule.interval` arguments optional +``` diff --git a/.changelog/23890.txt b/.changelog/23890.txt new file mode 
100644 index 000000000000..fc83b8694544 --- /dev/null +++ b/.changelog/23890.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_memorydb_user +``` diff --git a/.changelog/23891.txt b/.changelog/23891.txt new file mode 100644 index 000000000000..39bc0f068d7a --- /dev/null +++ b/.changelog/23891.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_memorydb_acl +``` diff --git a/.changelog/23893.txt b/.changelog/23893.txt new file mode 100644 index 000000000000..3f8eab572f79 --- /dev/null +++ b/.changelog/23893.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_s3_bucket_lifecycle_configuration: Prevent `MalformedXML` errors when handling diffs in `rule.filter` +``` \ No newline at end of file diff --git a/.changelog/23902.txt b/.changelog/23902.txt new file mode 100644 index 000000000000..f0f2daa1273b --- /dev/null +++ b/.changelog/23902.txt @@ -0,0 +1,15 @@ +```release-note:new-data-source +aws_opensearch_domain +``` + +```release-note:new-resource +aws_opensearch_domain_policy +``` + +```release-note:new-resource +aws_opensearch_domain_saml_options +``` + +```release-note:new-resource +aws_opensearch_domain +``` \ No newline at end of file diff --git a/.changelog/23908.txt b/.changelog/23908.txt new file mode 100644 index 000000000000..5ef93e5348f8 --- /dev/null +++ b/.changelog/23908.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_cloudformation_stack_set: Add `operation_preferences` argument +``` \ No newline at end of file diff --git a/.changelog/23924.txt b/.changelog/23924.txt new file mode 100644 index 000000000000..fa468d2b616d --- /dev/null +++ b/.changelog/23924.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_imagebuilder_distribution_configuration: Add `account_id` argument to the `launch_template_configuration` attribute of the `distribution` configuration block +``` + +```release-note:enhancement +data-source/aws_imagebuilder_distribution_configuration: Add `account_id` attribute to the `launch_template_configuration` attribute of the `distribution` configuration block +``` diff --git a/.changelog/23930.txt b/.changelog/23930.txt new file mode 100644 index 000000000000..f48a1d0d5fe9 --- /dev/null +++ b/.changelog/23930.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_organizations_account: Add `close_on_deletion` argument to close account on deletion +``` \ No newline at end of file diff --git a/.changelog/23947.txt b/.changelog/23947.txt new file mode 100644 index 000000000000..2dc5c08dba1e --- /dev/null +++ b/.changelog/23947.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_dynamodb_contributor_insights +``` \ No newline at end of file diff --git a/.changelog/23952.txt b/.changelog/23952.txt new file mode 100644 index 000000000000..4615e7446a3f --- /dev/null +++ b/.changelog/23952.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_fms_policy: Retry when `InternalErrorException` errors are returned from the AWS API +``` \ No newline at end of file diff --git a/.changelog/23967.txt b/.changelog/23967.txt new file mode 100644 index 000000000000..178c85d18dd8 --- /dev/null +++ b/.changelog/23967.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_backup_report_plan: Wait for asynchronous lifecycle operations to complete +``` \ No newline at end of file diff --git a/.changelog/23972.txt b/.changelog/23972.txt new file mode 100644 index 000000000000..6f9e9032a493 --- /dev/null +++ b/.changelog/23972.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_lb: Fix attribute key not recognized issue 
preventing creation in ISO-B regions +``` \ No newline at end of file diff --git a/.changelog/23973.txt b/.changelog/23973.txt new file mode 100644 index 000000000000..bd0d3e296f86 --- /dev/null +++ b/.changelog/23973.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_vpc_ipam: Add `cascade` argument +``` \ No newline at end of file diff --git a/.changelog/23979.txt b/.changelog/23979.txt new file mode 100644 index 000000000000..82f3de11b9c3 --- /dev/null +++ b/.changelog/23979.txt @@ -0,0 +1,27 @@ +```release-note:bug +data-source/aws_elasticache_cluster: Allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) +``` + +```release-note:bug +resource/aws_elasticache_cluster: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) +``` + +```release-note:bug +resource/aws_elasticache_parameter_group: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) +``` + +```release-note:bug +resource/aws_elasticache_replication_group: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) +``` + +```release-note:bug +resource/aws_elasticache_subnet_group: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) +``` + +```release-note:bug +resource/aws_elasticache_user_group: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) +``` + +```release-note:bug +resource/aws_elasticache_user: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) +``` \ No newline at end of file diff --git a/.changelog/23984.txt b/.changelog/23984.txt new file mode 100644 index 000000000000..d20dae5ada00 --- /dev/null +++ b/.changelog/23984.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Update `object_lock_configuration.rule` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_object_lock_configuration` resource. +``` \ No newline at end of file diff --git a/.changelog/23985.txt b/.changelog/23985.txt new file mode 100644 index 000000000000..a2849a6646d3 --- /dev/null +++ b/.changelog/23985.txt @@ -0,0 +1,3 @@ +```release-note:note +resource/aws_s3_bucket: The `acceleration_status`, `acl`, `cors_rule`, `grant`, `lifecycle_rule`, `logging`, `object_lock_configuration.rule`, `policy`, `replication_configuration`, `request_payer`, `server_side_encryption_configuration`, `versioning`, and `website` parameters are now Optional. Please refer to the documentation for details on drift detection and potential conflicts when configuring these parameters with the standalone `aws_s3_bucket_*` resources. 
+``` \ No newline at end of file diff --git a/.changelog/23990.txt b/.changelog/23990.txt new file mode 100644 index 000000000000..aee2899f2d64 --- /dev/null +++ b/.changelog/23990.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_memorydb_snapshot +``` diff --git a/.changelog/23991.txt b/.changelog/23991.txt new file mode 100644 index 000000000000..44296c119bed --- /dev/null +++ b/.changelog/23991.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_memorydb_cluster +``` diff --git a/.changelog/23993.txt b/.changelog/23993.txt new file mode 100644 index 000000000000..6c090ac1fd22 --- /dev/null +++ b/.changelog/23993.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_iot_authorizer: Add `enable_caching_for_http` argument +``` \ No newline at end of file diff --git a/.changelog/23996.txt b/.changelog/23996.txt new file mode 100644 index 000000000000..72ca5ee78ce9 --- /dev/null +++ b/.changelog/23996.txt @@ -0,0 +1,7 @@ +```release-note:enhancement +resource/aws_elasticache_cluster: Add `auto_minor_version_upgrade` argument +``` + +```release-note:bug +resource/aws_elasticache_replication_group: Allow disabling `auto_minor_version_upgrade` +``` diff --git a/.changelog/24001.txt b/.changelog/24001.txt new file mode 100644 index 000000000000..e44f74a6c78d --- /dev/null +++ b/.changelog/24001.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_lambda_permission: Add `principal_org_id` argument. +``` \ No newline at end of file diff --git a/.changelog/24002.txt b/.changelog/24002.txt new file mode 100644 index 000000000000..c1fae7cb0743 --- /dev/null +++ b/.changelog/24002.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_fsx_ontap_file_system: Support updating `storage_capacity`, `throughput_capacity`, and `disk_iops_configuration`. 
+``` diff --git a/.changelog/24011.txt b/.changelog/24011.txt new file mode 100644 index 000000000000..d6035ac16504 --- /dev/null +++ b/.changelog/24011.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_ssm_maintenance_windows +``` diff --git a/.changelog/24020.txt b/.changelog/24020.txt new file mode 100644 index 000000000000..729b3532de8b --- /dev/null +++ b/.changelog/24020.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_s3_bucket: Speed up resource deletion, especially when the S3 bucket contains a large number of objects and `force_destroy` is `true` +``` diff --git a/.changelog/24021.txt b/.changelog/24021.txt new file mode 100644 index 000000000000..a1ea4a0a0d5b --- /dev/null +++ b/.changelog/24021.txt @@ -0,0 +1,3 @@ +```release-note:bug +resource/aws_elasticache_replication_group: Wait for available state before updating tags +``` diff --git a/.changelog/24024.txt b/.changelog/24024.txt new file mode 100644 index 000000000000..00a4c6582614 --- /dev/null +++ b/.changelog/24024.txt @@ -0,0 +1,27 @@ +```release-note:bug +resource/aws_route: Ensure that resource ID is set in case of wait-for-creation timeout +``` + +```release-note:enhancement +resource/aws_route: Add `core_network_arn` argument +``` + +```release-note:enhancement +data-source/aws_route: Add `core_network_arn` argument +``` + +```release-note:enhancement +resource/aws_route_table: Add `core_network_arn` argument to the `route` configuration block +``` + +```release-note:enhancement +data-source/aws_route_table: Add `routes.core_network_arn` attribute +``` + +```release-note:enhancement +resource/aws_default_route_table: Add `core_network_arn` argument to the `route` configuration block +``` + +```release-note:enhancement +resource/aws_vpn_connection: Add `core_network_arn` and `core_network_attachment_arn` attributes +``` diff --git a/.changelog/24028.txt b/.changelog/24028.txt new file mode 100644 index 000000000000..7ae664679d74 --- /dev/null +++ b/.changelog/24028.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_xray_group: Add `insights_configuration` argument +``` \ No newline at end of file diff --git a/.changelog/24038.txt b/.changelog/24038.txt new file mode 100644 index 000000000000..0a3e6218a7cc --- /dev/null +++ b/.changelog/24038.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_storagegateway_nfs_file_share: Add `bucket_region` and `vpc_endpoint_dns_name` arguments to support PrivateLink endpoints +``` diff --git a/.changelog/24053.txt b/.changelog/24053.txt new file mode 100644 index 000000000000..8c2dcea3acd6 --- /dev/null +++ b/.changelog/24053.txt @@ -0,0 +1,7 @@ +```release-note:new-resource +aws_lambda_function_url +``` + +```release-note:new-data-source +aws_lambda_function_url +``` diff --git a/.changelog/24064.txt b/.changelog/24064.txt new file mode 100644 index 000000000000..54e14acdd34e --- /dev/null +++ b/.changelog/24064.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +provider: Add support for reading custom CA bundle setting from shared config files +``` diff --git a/.changelog/9929.txt b/.changelog/9929.txt new file mode 100644 index 000000000000..f513c00150a9 --- /dev/null +++ b/.changelog/9929.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_iot_indexing_configuration +``` \ No newline at end of file diff --git a/.github/labeler-issue-triage.yml b/.github/labeler-issue-triage.yml index efbed7e61dba..a88c7d559d92 100644 --- a/.github/labeler-issue-triage.yml +++ b/.github/labeler-issue-triage.yml @@ -276,6 +276,8 @@ 
service/networkfirewall: - '((\*|-) ?`?|(data|resource) "?)aws_networkfirewall_' service/networkmanager: - '((\*|-) ?`?|(data|resource) "?)aws_networkmanager_' +service/opensearch: + - '((\*|-) ?`?|(data|resource) "?)aws_opensearch_' service/opsworks: - '((\*|-) ?`?|(data|resource) "?)aws_opsworks_' service/organizations: diff --git a/.github/labeler-pr-triage.yml b/.github/labeler-pr-triage.yml index 50d71bf9919a..10a4d2d23bed 100644 --- a/.github/labeler-pr-triage.yml +++ b/.github/labeler-pr-triage.yml @@ -475,6 +475,9 @@ service/networkfirewall: service/networkmanager: - 'internal/service/networkmanager/**/*' - 'website/**/networkmanager_*' +service/opensearch: + - 'internal/service/opensearch/**/*' + - 'website/**/opensearch_*' service/opsworks: - 'internal/service/opsworks/**/*' - 'website/**/opsworks_*' diff --git a/CHANGELOG.md b/CHANGELOG.md index 03e949e69339..59d17ca3dcc8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,120 @@ +## 4.9.0 + +NOTES: + +* resource/aws_s3_bucket: The `acceleration_status`, `acl`, `cors_rule`, `grant`, `lifecycle_rule`, `logging`, `object_lock_configuration.rule`, `policy`, `replication_configuration`, `request_payer`, `server_side_encryption_configuration`, `versioning`, and `website` parameters are now Optional. Please refer to the documentation for details on drift detection and potential conflicts when configuring these parameters with the standalone `aws_s3_bucket_*` resources. ([#23985](https://github.com/hashicorp/terraform-provider-aws/issues/23985)) + +FEATURES: + +* **New Data Source:** `aws_eks_addon_version` ([#23157](https://github.com/hashicorp/terraform-provider-aws/issues/23157)) +* **New Data Source:** `aws_lambda_function_url` ([#24053](https://github.com/hashicorp/terraform-provider-aws/issues/24053)) +* **New Data Source:** `aws_memorydb_acl` ([#23891](https://github.com/hashicorp/terraform-provider-aws/issues/23891)) +* **New Data Source:** `aws_memorydb_cluster` ([#23991](https://github.com/hashicorp/terraform-provider-aws/issues/23991)) +* **New Data Source:** `aws_memorydb_snapshot` ([#23990](https://github.com/hashicorp/terraform-provider-aws/issues/23990)) +* **New Data Source:** `aws_memorydb_user` ([#23890](https://github.com/hashicorp/terraform-provider-aws/issues/23890)) +* **New Data Source:** `aws_opensearch_domain` ([#23902](https://github.com/hashicorp/terraform-provider-aws/issues/23902)) +* **New Data Source:** `aws_ssm_maintenance_windows` ([#24011](https://github.com/hashicorp/terraform-provider-aws/issues/24011)) +* **New Resource:** `aws_db_instance_automated_backups_replication` ([#23759](https://github.com/hashicorp/terraform-provider-aws/issues/23759)) +* **New Resource:** `aws_dynamodb_contributor_insights` ([#23947](https://github.com/hashicorp/terraform-provider-aws/issues/23947)) +* **New Resource:** `aws_iot_indexing_configuration` ([#9929](https://github.com/hashicorp/terraform-provider-aws/issues/9929)) +* **New Resource:** `aws_iot_logging_options` ([#13392](https://github.com/hashicorp/terraform-provider-aws/issues/13392)) +* **New Resource:** `aws_iot_provisioning_template` ([#12108](https://github.com/hashicorp/terraform-provider-aws/issues/12108)) +* **New Resource:** `aws_lambda_function_url` ([#24053](https://github.com/hashicorp/terraform-provider-aws/issues/24053)) +* **New Resource:** `aws_opensearch_domain` ([#23902](https://github.com/hashicorp/terraform-provider-aws/issues/23902)) +* **New Resource:** `aws_opensearch_domain_policy` 
([#23902](https://github.com/hashicorp/terraform-provider-aws/issues/23902)) +* **New Resource:** `aws_opensearch_domain_saml_options` ([#23902](https://github.com/hashicorp/terraform-provider-aws/issues/23902)) +* **New Resource:** `aws_rds_cluster_activity_stream` ([#22097](https://github.com/hashicorp/terraform-provider-aws/issues/22097)) + +ENHANCEMENTS: + +* data-source/aws_imagebuilder_distribution_configuration: Add `account_id` attribute to the `launch_template_configuration` attribute of the `distribution` configuration block ([#23924](https://github.com/hashicorp/terraform-provider-aws/issues/23924)) +* data-source/aws_route: Add `core_network_arn` argument ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) +* data-source/aws_route_table: Add `routes.core_network_arn` attribute ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) +* provider: Add support for reading custom CA bundle setting from shared config files ([#24064](https://github.com/hashicorp/terraform-provider-aws/issues/24064)) +* resource/aws_cloudformation_stack_set: Add `operation_preferences` argument ([#23908](https://github.com/hashicorp/terraform-provider-aws/issues/23908)) +* resource/aws_default_route_table: Add `core_network_arn` argument to the `route` configuration block ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) +* resource/aws_dlm_lifecycle_policy: Add `policy_details.schedule.create_rule.cron_expression`, `policy_details.schedule.retain_rule.interval`, `policy_details.schedule.retain_rule.interval_unit`, `policy_details.policy_type`, `policy_details.schedule.deprecate_rule`, `policy_details.parameters`, `policy_details.schedule.variable_tags`, `policy_details.schedule.fast_restore_rule`, `policy_details.schedule.share_rule`, `policy_details.resource_locations`, `policy_details.schedule.create_rule.location`, `policy_details.action` and `policy_details.event_source` arguments ([#23880](https://github.com/hashicorp/terraform-provider-aws/issues/23880)) +* resource/aws_dlm_lifecycle_policy: Add plan time validations for `policy_details.resource_types` and `description` arguments ([#23880](https://github.com/hashicorp/terraform-provider-aws/issues/23880)) +* resource/aws_dlm_lifecycle_policy: Make `policy_details.resource_types`, `policy_details.schedule`, `policy_details.target_tags`, `policy_details.schedule.retain_rule` and `policy_details.schedule.create_rule.interval` arguments optional ([#23880](https://github.com/hashicorp/terraform-provider-aws/issues/23880)) +* resource/aws_elasticache_cluster: Add `auto_minor_version_upgrade` argument ([#23996](https://github.com/hashicorp/terraform-provider-aws/issues/23996)) +* resource/aws_fms_policy: Retry when `InternalErrorException` errors are returned from the AWS API ([#23952](https://github.com/hashicorp/terraform-provider-aws/issues/23952)) +* resource/aws_fsx_ontap_file_system: Support updating `storage_capacity`, `throughput_capacity`, and `disk_iops_configuration`. 
([#24002](https://github.com/hashicorp/terraform-provider-aws/issues/24002)) +* resource/aws_imagebuilder_distribution_configuration: Add `account_id` argument to the `launch_template_configuration` attribute of the `distribution` configuration block ([#23924](https://github.com/hashicorp/terraform-provider-aws/issues/23924)) +* resource/aws_iot_authorizer: Add `enable_caching_for_http` argument ([#23993](https://github.com/hashicorp/terraform-provider-aws/issues/23993)) +* resource/aws_lambda_permission: Add `principal_org_id` argument. ([#24001](https://github.com/hashicorp/terraform-provider-aws/issues/24001)) +* resource/aws_mq_broker: Add validation to `broker_name` and `security_groups` arguments ([#18088](https://github.com/hashicorp/terraform-provider-aws/issues/18088)) +* resource/aws_organizations_account: Add `close_on_deletion` argument to close account on deletion ([#23930](https://github.com/hashicorp/terraform-provider-aws/issues/23930)) +* resource/aws_route: Add `core_network_arn` argument ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) +* resource/aws_route_table: Add `core_network_arn` argument to the `route` configuration block ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) +* resource/aws_s3_bucket: Speed up resource deletion, especially when the S3 bucket contains a large number of objects and `force_destroy` is `true` ([#24020](https://github.com/hashicorp/terraform-provider-aws/issues/24020)) +* resource/aws_s3_bucket: Update `acceleration_status` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_accelerate_configuration` resource. ([#23816](https://github.com/hashicorp/terraform-provider-aws/issues/23816)) +* resource/aws_s3_bucket: Update `acl` and `grant` parameters to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring these parameters with the standalone `aws_s3_bucket_acl` resource. ([#23798](https://github.com/hashicorp/terraform-provider-aws/issues/23798)) +* resource/aws_s3_bucket: Update `cors_rule` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_cors_configuration` resource. ([#23817](https://github.com/hashicorp/terraform-provider-aws/issues/23817)) +* resource/aws_s3_bucket: Update `lifecycle_rule` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_lifecycle_configuration` resource. ([#23818](https://github.com/hashicorp/terraform-provider-aws/issues/23818)) +* resource/aws_s3_bucket: Update `logging` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_logging` resource. ([#23819](https://github.com/hashicorp/terraform-provider-aws/issues/23819)) +* resource/aws_s3_bucket: Update `object_lock_configuration.rule` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_object_lock_configuration` resource. 
([#23984](https://github.com/hashicorp/terraform-provider-aws/issues/23984)) +* resource/aws_s3_bucket: Update `policy` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_policy` resource. ([#23843](https://github.com/hashicorp/terraform-provider-aws/issues/23843)) +* resource/aws_s3_bucket: Update `replication_configuration` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_replication_configuration` resource. ([#23842](https://github.com/hashicorp/terraform-provider-aws/issues/23842)) +* resource/aws_s3_bucket: Update `request_payer` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_request_payment_configuration` resource. ([#23844](https://github.com/hashicorp/terraform-provider-aws/issues/23844)) +* resource/aws_s3_bucket: Update `server_side_encryption_configuration` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_server_side_encryption_configuration` resource. ([#23822](https://github.com/hashicorp/terraform-provider-aws/issues/23822)) +* resource/aws_s3_bucket: Update `versioning` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_versioning` resource. ([#23820](https://github.com/hashicorp/terraform-provider-aws/issues/23820)) +* resource/aws_s3_bucket: Update `website` parameter to be configurable. Please refer to the documentation for details on drift detection and potential conflicts when configuring this parameter with the standalone `aws_s3_bucket_website_configuration` resource. 
([#23821](https://github.com/hashicorp/terraform-provider-aws/issues/23821)) +* resource/aws_storagegateway_gateway: Add `maintenance_start_time` argument ([#15355](https://github.com/hashicorp/terraform-provider-aws/issues/15355)) +* resource/aws_storagegateway_nfs_file_share: Add `bucket_region` and `vpc_endpoint_dns_name` arguments to support PrivateLink endpoints ([#24038](https://github.com/hashicorp/terraform-provider-aws/issues/24038)) +* resource/aws_vpc_ipam: Add `cascade` argument ([#23973](https://github.com/hashicorp/terraform-provider-aws/issues/23973)) +* resource/aws_vpn_connection: Add `core_network_arn` and `core_network_attachment_arn` attributes ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) +* resource/aws_xray_group: Add `insights_configuration` argument ([#24028](https://github.com/hashicorp/terraform-provider-aws/issues/24028)) + +BUG FIXES: + +* data-source/aws_elasticache_cluster: Allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) +* resource/aws_backup_report_plan: Wait for asynchronous lifecycle operations to complete ([#23967](https://github.com/hashicorp/terraform-provider-aws/issues/23967)) +* resource/aws_cloudformation_stack_set: Consider `QUEUED` a valid pending state for resource creation ([#22160](https://github.com/hashicorp/terraform-provider-aws/issues/22160)) +* resource/aws_dynamodb_table_item: Allow `item` names containing non-letter characters to succeed ([#14075](https://github.com/hashicorp/terraform-provider-aws/issues/14075)) +* resource/aws_elasticache_cluster: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) +* resource/aws_elasticache_parameter_group: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) +* resource/aws_elasticache_replication_group: Allow disabling `auto_minor_version_upgrade` ([#23996](https://github.com/hashicorp/terraform-provider-aws/issues/23996)) +* resource/aws_elasticache_replication_group: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) +* resource/aws_elasticache_replication_group: Wait for available state before updating tags ([#24021](https://github.com/hashicorp/terraform-provider-aws/issues/24021)) +* resource/aws_elasticache_subnet_group: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) +* resource/aws_elasticache_user: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) ([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) +* resource/aws_elasticache_user_group: Attempt `tags`-on-create, fall back to tagging after create, and allow some `tags` errors to be non-fatal to support non-standard AWS partitions (i.e., ISO) 
([#23979](https://github.com/hashicorp/terraform-provider-aws/issues/23979)) +* resource/aws_elasticsearch_domain_saml_option: Fix difference caused by `subject_key` default not matching AWS default; old and new defaults are equivalent ([#20892](https://github.com/hashicorp/terraform-provider-aws/issues/20892)) +* resource/aws_lb: Fix attribute key not recognized issue preventing creation in ISO-B regions ([#23972](https://github.com/hashicorp/terraform-provider-aws/issues/23972)) +* resource/aws_redshift_cluster: Correctly use `number_of_nodes` argument value when restoring from snapshot ([#13203](https://github.com/hashicorp/terraform-provider-aws/issues/13203)) +* resource/aws_route: Ensure that resource ID is set in case of wait-for-creation timeout ([#24024](https://github.com/hashicorp/terraform-provider-aws/issues/24024)) +* resource/aws_s3_bucket_lifecycle_configuration: Prevent `MalformedXML` errors when handling diffs in `rule.filter` ([#23893](https://github.com/hashicorp/terraform-provider-aws/issues/23893)) + +## 4.8.0 (March 25, 2022) + +FEATURES: + +* **New Data Source:** `aws_mskconnect_connector` ([#23544](https://github.com/hashicorp/terraform-provider-aws/issues/23544)) +* **New Resource:** `aws_mskconnect_connector` ([#23544](https://github.com/hashicorp/terraform-provider-aws/issues/23544)) + +ENHANCEMENTS: + +* data-source/aws_eips: Set `public_ips` for VPC as well as EC2 Classic ([#23859](https://github.com/hashicorp/terraform-provider-aws/issues/23859)) +* data-source/aws_elasticache_cluster: Add `log_delivery_configuration` attribute ([#20068](https://github.com/hashicorp/terraform-provider-aws/issues/20068)) +* data-source/aws_elasticache_replication_group: Add `log_delivery_configuration` attribute ([#20068](https://github.com/hashicorp/terraform-provider-aws/issues/20068)) +* data-source/aws_elasticsearch_domain: Add `cold_storage_options` attribute to the `cluster_config` configuration block ([#19713](https://github.com/hashicorp/terraform-provider-aws/issues/19713)) +* data-source/aws_lambda_function: Add `ephemeral_storage` attribute ([#23873](https://github.com/hashicorp/terraform-provider-aws/issues/23873)) +* resource/aws_elasticache_cluster: Add `log_delivery_configuration` argument ([#20068](https://github.com/hashicorp/terraform-provider-aws/issues/20068)) +* resource/aws_elasticache_replication_group: Add `log_delivery_configuration` argument ([#20068](https://github.com/hashicorp/terraform-provider-aws/issues/20068)) +* resource/aws_elasticsearch_domain: Add `cold_storage_options` argument to the `cluster_config` configuration block ([#19713](https://github.com/hashicorp/terraform-provider-aws/issues/19713)) +* resource/aws_elasticsearch_domain: Add configurable Create and Delete timeouts ([#19713](https://github.com/hashicorp/terraform-provider-aws/issues/19713)) +* resource/aws_lambda_function: Add `ephemeral_storage` argument ([#23873](https://github.com/hashicorp/terraform-provider-aws/issues/23873)) +* resource/aws_lambda_function: Add error handling for `ResourceConflictException` errors on create and update ([#23879](https://github.com/hashicorp/terraform-provider-aws/issues/23879)) +* resource/aws_mskconnect_custom_plugin: Implement resource Delete ([#23544](https://github.com/hashicorp/terraform-provider-aws/issues/23544)) +* resource/aws_mwaa_environment: Add `schedulers` argument ([#21941](https://github.com/hashicorp/terraform-provider-aws/issues/21941)) +* resource/aws_network_firewall_policy: Allow use of managed rule group ARNs for 
network firewall managed rule groups. ([#22355](https://github.com/hashicorp/terraform-provider-aws/issues/22355)) + +BUG FIXES: + +* resource/aws_autoscaling_group: Fix issue where group was not recreated if `initial_lifecycle_hook` changed ([#20708](https://github.com/hashicorp/terraform-provider-aws/issues/20708)) +* resource/aws_cloudfront_distribution: Fix default value of `origin_path` in `origin` block ([#20709](https://github.com/hashicorp/terraform-provider-aws/issues/20709)) +* resource/aws_cloudwatch_event_target: Fix setting `path_parameter_values`. ([#23862](https://github.com/hashicorp/terraform-provider-aws/issues/23862)) + ## 4.7.0 (March 24, 2022) FEATURES: diff --git a/docs/contributing/contribution-checklists.md b/docs/contributing/contribution-checklists.md index b198cfc9f3a4..172a7192ecc0 100644 --- a/docs/contributing/contribution-checklists.md +++ b/docs/contributing/contribution-checklists.md @@ -908,7 +908,7 @@ into Terraform. - Run the following then submit the pull request: ```sh - go test ./aws + make test go mod tidy ``` diff --git a/go.mod b/go.mod index 9a9ff80480ce..cdacf5e69c85 100644 --- a/go.mod +++ b/go.mod @@ -4,20 +4,20 @@ go 1.17 require ( github.com/ProtonMail/go-crypto v0.0.0-20210428141323-04723f9f07d7 - github.com/aws/aws-sdk-go v1.43.21 - github.com/aws/aws-sdk-go-v2 v1.15.0 - github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.0 + github.com/aws/aws-sdk-go v1.43.34 + github.com/aws/aws-sdk-go-v2 v1.16.2 + github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.3 github.com/beevik/etree v1.1.0 github.com/google/go-cmp v0.5.7 github.com/hashicorp/aws-cloudformation-resource-schema-sdk-go v0.16.0 - github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.13 - github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.14 + github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.14 + github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.15 github.com/hashicorp/awspolicyequivalence v1.5.0 github.com/hashicorp/go-cleanhttp v0.5.2 github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 github.com/hashicorp/go-multierror v1.1.1 github.com/hashicorp/go-version v1.4.0 - github.com/hashicorp/terraform-plugin-sdk/v2 v2.12.0 + github.com/hashicorp/terraform-plugin-sdk/v2 v2.13.0 github.com/mattbaird/jsonpatch v0.0.0-20200820163806-098863c1fc24 github.com/mitchellh/copystructure v1.2.0 github.com/mitchellh/go-homedir v1.1.0 @@ -36,14 +36,14 @@ require ( github.com/aws/aws-sdk-go-v2/config v1.15.0 // indirect github.com/aws/aws-sdk-go-v2/credentials v1.10.0 // indirect github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.0 // indirect - github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.6 // indirect - github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.0 // indirect + github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.9 // indirect + github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.3 // indirect github.com/aws/aws-sdk-go-v2/internal/ini v1.3.7 // indirect github.com/aws/aws-sdk-go-v2/service/iam v1.18.0 // indirect github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.0 // indirect github.com/aws/aws-sdk-go-v2/service/sso v1.11.0 // indirect github.com/aws/aws-sdk-go-v2/service/sts v1.16.0 // indirect - github.com/aws/smithy-go v1.11.1 // indirect + github.com/aws/smithy-go v1.11.2 // indirect github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc // indirect github.com/davecgh/go-spew v1.1.1 // indirect github.com/evanphx/json-patch v0.5.2 // indirect diff --git a/go.sum b/go.sum index 
869b9851890f..964ad270d325 100644 --- a/go.sum +++ b/go.sum @@ -27,34 +27,38 @@ github.com/apparentlymart/go-textseg/v13 v13.0.0/go.mod h1:ZK2fH7c4NqDTLtiYLvIkE github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= github.com/aws/aws-sdk-go v1.42.18/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q= github.com/aws/aws-sdk-go v1.42.52/go.mod h1:OGr6lGMAKGlG9CVrYnWYDKIyb829c6EVBRjxqjmPepc= -github.com/aws/aws-sdk-go v1.43.21 h1:E4S2eX3d2gKJyI/ISrcIrSwXwqjIvCK85gtBMt4sAPE= -github.com/aws/aws-sdk-go v1.43.21/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= -github.com/aws/aws-sdk-go-v2 v1.15.0 h1:f9kWLNfyCzCB43eupDAk3/XgJ2EpgktiySD6leqs0js= +github.com/aws/aws-sdk-go v1.43.34 h1:8+P+773CDgQqN1eLH1QHT6XgXHUbME3sAbDGszzjajY= +github.com/aws/aws-sdk-go v1.43.34/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= github.com/aws/aws-sdk-go-v2 v1.15.0/go.mod h1:lJYcuZZEHWNIb6ugJjbQY1fykdoobWbOS7kJYb4APoI= +github.com/aws/aws-sdk-go-v2 v1.16.2 h1:fqlCk6Iy3bnCumtrLz9r3mJ/2gUT0pJ0wLFVIdWh+JA= +github.com/aws/aws-sdk-go-v2 v1.16.2/go.mod h1:ytwTPBG6fXTZLxxeeCCWj2/EMYp/xDUgX+OET6TLNNU= github.com/aws/aws-sdk-go-v2/config v1.15.0 h1:cibCYF2c2uq0lsbu0Ggbg8RuGeiHCmXwUlTMS77CiK4= github.com/aws/aws-sdk-go-v2/config v1.15.0/go.mod h1:NccaLq2Z9doMmeQXHQRrt2rm+2FbkrcPvfdbCaQn5hY= github.com/aws/aws-sdk-go-v2/credentials v1.10.0 h1:M/FFpf2w31F7xqJqJLgiM0mFpLOtBvwZggORr6QCpo8= github.com/aws/aws-sdk-go-v2/credentials v1.10.0/go.mod h1:HWJMr4ut5X+Lt/7epc7I6Llg5QIcoFHKAeIzw32t6EE= github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.0 h1:gUlb+I7NwDtqJUIRcFYDiheYa97PdVHG/5Iz+SwdoHE= github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.0/go.mod h1:prX26x9rmLwkEE1VVCelQOQgRN9sOVIssgowIJ270SE= -github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.6 h1:xiGjGVQsem2cxoIX61uRGy+Jux2s9C/kKbTrWLdrU54= github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.6/go.mod h1:SSPEdf9spsFgJyhjrXvawfpyzrXHBCUe+2eQ1CjC1Ak= -github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.0 h1:bt3zw79tm209glISdMRCIVRCwvSDXxgAxh5KWe2qHkY= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.9 h1:onz/VaaxZ7Z4V+WIN9Txly9XLTmoOh1oJ8XcAC3pako= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.9/go.mod h1:AnVH5pvai0pAF4lXRq0bmhbes1u9R8wTE+g+183bZNM= github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.0/go.mod h1:viTrxhAuejD+LszDahzAE2x40YjYWhMqzHxv2ZiWaME= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.3 h1:9stUQR/u2KXU6HkFJYlqnZEjBnbgrVbG6I5HN09xZh0= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.3/go.mod h1:ssOhaLpRlh88H3UmEcsBoVKq309quMvm3Ds8e9d4eJM= github.com/aws/aws-sdk-go-v2/internal/ini v1.3.7 h1:QOMEP8jnO8sm0SX/4G7dbaIq2eEP2wcWEsF0jzrXLJc= github.com/aws/aws-sdk-go-v2/internal/ini v1.3.7/go.mod h1:P5sjYYf2nc5dE6cZIzEMsVtq6XeLD7c4rM+kQJPrByA= github.com/aws/aws-sdk-go-v2/service/iam v1.18.0 h1:ZYpP40/QE7/R0zDxdrZyGGUijX26iB+Pint/NYzF/tQ= github.com/aws/aws-sdk-go-v2/service/iam v1.18.0/go.mod h1:9wRsXAkRJ7qBWIDTFYa66Cx+oQJsPEnBYCPrinanpS8= github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.0 h1:YQ3fTXACo7xeAqg0NiqcCmBOXJruUfh+4+O2qxF2EjQ= github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.0/go.mod h1:R31ot6BgESRCIoxwfKtIHzZMo/vsZn2un81g9BJ4nmo= -github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.0 h1:PN0LQirFrjh9esAO80iZXo+asiTtLpjNCXpzZ+1BKCw= -github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.0/go.mod h1:xzqCQW+Y6wn/4+9WVo3IPmnRTsN8Nwlw6cNUd6HVzqI= 
+github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.3 h1:X3aPLG+0t1h8BA6IKfWc5j9arslvae+ajXwDXHuOOf8= +github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.3/go.mod h1:eUV9E0VmNo8aHqGN9qlf3qdNa7z+kT1gxttP3HLGPUI= github.com/aws/aws-sdk-go-v2/service/sso v1.11.0 h1:gZLEXLH6NiU8Y52nRhK1jA+9oz7LZzBK242fi/ziXa4= github.com/aws/aws-sdk-go-v2/service/sso v1.11.0/go.mod h1:d1WcT0OjggjQCAdOkph8ijkr5sUwk1IH/VenOn7W1PU= github.com/aws/aws-sdk-go-v2/service/sts v1.16.0 h1:0+X/rJ2+DTBKWbUsn7WtF0JvNk/fRf928vkFsXkbbZs= github.com/aws/aws-sdk-go-v2/service/sts v1.16.0/go.mod h1:+8k4H2ASUZZXmjx/s3DFLo9tGBb44lkz3XcgfypJY7s= -github.com/aws/smithy-go v1.11.1 h1:IQ+lPZVkSM3FRtyaDox41R8YS6iwPMYIreejOgPW49g= github.com/aws/smithy-go v1.11.1/go.mod h1:3xHYmszWVx2c0kIwQeEVf9uSm4fYZt67FBJnwub1bgM= +github.com/aws/smithy-go v1.11.2 h1:eG/N+CcUMAvsdffgMvjMKwfyDzIkjM6pfxMJ8Mzc6mE= +github.com/aws/smithy-go v1.11.2/go.mod h1:3xHYmszWVx2c0kIwQeEVf9uSm4fYZt67FBJnwub1bgM= github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs= github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A= github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc h1:biVzkmvwrH8WK8raXaxBx6fRVTlJILwEwQGL1I/ByEI= @@ -130,10 +134,10 @@ github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+ github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= github.com/hashicorp/aws-cloudformation-resource-schema-sdk-go v0.16.0 h1:r2RUzeK2gAitl0HY9SLH1axAEu+6aPBY20g1jOoBepM= github.com/hashicorp/aws-cloudformation-resource-schema-sdk-go v0.16.0/go.mod h1:C6GVuO9RWOrt6QCGTmLCOYuSHpkfQSBDuRqTteOlo0g= -github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.13 h1:FIIkxL5+CHVt4TqwqY1pxG8k35ac8+vi3wGKpsRSTcI= -github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.13/go.mod h1:QP/Uy/4K9XLzpwDSKX7fLGFuQfQq2Nz+OacCTbuaKKQ= -github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.14 h1:Ar8qQRk0SomjSlmSr3oKzbt65/GtESrmwLGWS/9DI3M= -github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.14/go.mod h1:WcbJAJErVMrVS/H7q57C83iFXFeay+xg29dxwQc/GqI= +github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.14 h1:tZBIvSx7Ympn/tb8ti9spm6t5/aZGDYXgGqScTbi73E= +github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.14/go.mod h1:QP/Uy/4K9XLzpwDSKX7fLGFuQfQq2Nz+OacCTbuaKKQ= +github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.15 h1:5APIXK6BrpmZfN5zziM9QEfRPQlvipPAEL5kdP0/kmA= +github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2 v2.0.0-beta.15/go.mod h1:egFflsd1GSZDS/NvREHcsYAIxJVerRiFWDoWtP+YH0c= github.com/hashicorp/awspolicyequivalence v1.5.0 h1:tGw6h9qN1AWNBaUf4OUcdCyE/kqNBItTiyTPQeV/KUg= github.com/hashicorp/awspolicyequivalence v1.5.0/go.mod h1:9IOaIHx+a7C0NfUNk1A93M7kHd5rJ19aoUx37LZGC14= github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA= @@ -174,8 +178,8 @@ github.com/hashicorp/terraform-plugin-go v0.8.0 h1:MvY43PcDj9VlBjYifBWCO/6j1wf10 github.com/hashicorp/terraform-plugin-go v0.8.0/go.mod h1:E3GuvfX0Pz2Azcl6BegD6t51StXsVZMOYQoGO8mkHM0= github.com/hashicorp/terraform-plugin-log v0.3.0 h1:NPENNOjaJSVX0f7JJTl4f/2JKRPQ7S2ZN9B4NSqq5kA= github.com/hashicorp/terraform-plugin-log v0.3.0/go.mod h1:EjueSP/HjlyFAsDqt+okpCPjkT4NDynAe32AeDC4vps= -github.com/hashicorp/terraform-plugin-sdk/v2 v2.12.0 h1:rjJxyLUVA180BG0ZXTOree4x2RVvo2jigdYoT2rw5j0= -github.com/hashicorp/terraform-plugin-sdk/v2 v2.12.0/go.mod h1:TPjMXvpPNWagHzYOmVPzzRRIBTuaLVukR+esL08tgzg= 
+github.com/hashicorp/terraform-plugin-sdk/v2 v2.13.0 h1:MyzzWWHOQgYCsoJZEC9YgDqyZoG8pftt2pcYG30A+Do= +github.com/hashicorp/terraform-plugin-sdk/v2 v2.13.0/go.mod h1:TPjMXvpPNWagHzYOmVPzzRRIBTuaLVukR+esL08tgzg= github.com/hashicorp/terraform-registry-address v0.0.0-20210412075316-9b2996cce896 h1:1FGtlkJw87UsTMg5s8jrekrHmUPUJaMcu6ELiVhQrNw= github.com/hashicorp/terraform-registry-address v0.0.0-20210412075316-9b2996cce896/go.mod h1:bzBPnUIkI0RxauU8Dqo+2KrZZ28Cf48s8V6IHt3p4co= github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734 h1:HKLsbzeOsfXmKNpr3GiT18XAblV0BjCbzL8KQAMZGa0= diff --git a/infrastructure/repository/labels-service.tf b/infrastructure/repository/labels-service.tf index 0e4a400b824b..73435660597c 100644 --- a/infrastructure/repository/labels-service.tf +++ b/infrastructure/repository/labels-service.tf @@ -154,6 +154,7 @@ variable "service_labels" { "neptune", "networkfirewall", "networkmanager", + "opensearch", "opsworks", "opsworkscm", "organizations", diff --git a/infrastructure/repository/main.tf b/infrastructure/repository/main.tf index 6d0364d8885d..20b8b803ca74 100644 --- a/infrastructure/repository/main.tf +++ b/infrastructure/repository/main.tf @@ -10,7 +10,7 @@ terraform { required_providers { github = { source = "integrations/github" - version = "4.22.0" + version = "4.23.0" } } diff --git a/internal/acctest/acctest.go b/internal/acctest/acctest.go index e7a2124d2024..746b968b25c0 100644 --- a/internal/acctest/acctest.go +++ b/internal/acctest/acctest.go @@ -18,7 +18,6 @@ import ( "github.com/aws/aws-sdk-go/service/directoryservice" "github.com/aws/aws-sdk-go/service/ec2" "github.com/aws/aws-sdk-go/service/iam" - "github.com/aws/aws-sdk-go/service/organizations" "github.com/aws/aws-sdk-go/service/outposts" "github.com/aws/aws-sdk-go/service/ssoadmin" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" @@ -33,6 +32,7 @@ import ( tfec2 "github.com/hashicorp/terraform-provider-aws/internal/service/ec2" tforganizations "github.com/hashicorp/terraform-provider-aws/internal/service/organizations" tfsts "github.com/hashicorp/terraform-provider-aws/internal/service/sts" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) const ( @@ -59,7 +59,7 @@ const ( const RFC3339RegexPattern = `^[0-9]{4}-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01])[Tt]([01][0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9](\.[0-9]+)?([Zz]|([+-]([01][0-9]|2[0-3]):[0-5][0-9]))$` const awsRegionRegexp = `[a-z]{2}(-[a-z]+)+-\d` -const awsAccountIDRegexp = `(aws|\d{12})` +const awsAccountIDRegexp = `(aws|aws-managed|\d{12})` // Skip implements a wrapper for (*testing.T).Skip() to prevent unused linting reports // @@ -234,6 +234,11 @@ func providerAccountID(provo *schema.Provider) string { return client.AccountID } +// CheckDestroyNoop is a TestCheckFunc to be used as a TestCase's CheckDestroy when no such check can be made. 
+func CheckDestroyNoop(_ *terraform.State) error { + return nil +} + // CheckResourceAttrAccountID ensures the Terraform state exactly matches the account ID func CheckResourceAttrAccountID(resourceName, attributeName string) resource.TestCheckFunc { return func(s *terraform.State) error { @@ -670,25 +675,26 @@ func PreCheckPartition(partition string, t *testing.T) { } func PreCheckOrganizationsAccount(t *testing.T) { - conn := Provider.Meta().(*conns.AWSClient).OrganizationsConn - input := &organizations.DescribeOrganizationInput{} - _, err := conn.DescribeOrganization(input) - if tfawserr.ErrCodeEquals(err, organizations.ErrCodeAWSOrganizationsNotInUseException) { + _, err := tforganizations.FindOrganization(Provider.Meta().(*conns.AWSClient).OrganizationsConn) + + if tfresource.NotFound(err) { return } + if err != nil { t.Fatalf("error describing AWS Organization: %s", err) } + t.Skip("skipping tests; this AWS account must not be an existing member of an AWS Organization") } func PreCheckOrganizationsEnabled(t *testing.T) { - conn := Provider.Meta().(*conns.AWSClient).OrganizationsConn - input := &organizations.DescribeOrganizationInput{} - _, err := conn.DescribeOrganization(input) - if tfawserr.ErrCodeEquals(err, organizations.ErrCodeAWSOrganizationsNotInUseException) { + _, err := tforganizations.FindOrganization(Provider.Meta().(*conns.AWSClient).OrganizationsConn) + + if tfresource.NotFound(err) { t.Skip("this AWS account must be an existing member of an AWS Organization") } + if err != nil { t.Fatalf("error describing AWS Organization: %s", err) } diff --git a/internal/conns/conns.go b/internal/conns/conns.go index b8cd4176f281..1feb0be2f1ff 100644 --- a/internal/conns/conns.go +++ b/internal/conns/conns.go @@ -199,6 +199,7 @@ import ( "github.com/aws/aws-sdk-go/service/networkfirewall" "github.com/aws/aws-sdk-go/service/networkmanager" "github.com/aws/aws-sdk-go/service/nimblestudio" + "github.com/aws/aws-sdk-go/service/opensearchservice" "github.com/aws/aws-sdk-go/service/opsworks" "github.com/aws/aws-sdk-go/service/opsworkscm" "github.com/aws/aws-sdk-go/service/organizations" @@ -519,6 +520,7 @@ type AWSClient struct { NetworkFirewallConn *networkfirewall.NetworkFirewall NetworkManagerConn *networkmanager.NetworkManager NimbleStudioConn *nimblestudio.NimbleStudio + OpenSearchConn *opensearchservice.OpenSearchService OpsWorksCMConn *opsworkscm.OpsWorksCM OpsWorksConn *opsworks.OpsWorks OrganizationsConn *organizations.Organizations @@ -918,6 +920,7 @@ func (c *Config) Client(ctx context.Context) (interface{}, diag.Diagnostics) { NetworkFirewallConn: networkfirewall.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.NetworkFirewall])})), NetworkManagerConn: networkmanager.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.NetworkManager])})), NimbleStudioConn: nimblestudio.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.NimbleStudio])})), + OpenSearchConn: opensearchservice.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.OpenSearch])})), OpsWorksCMConn: opsworkscm.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.OpsWorksCM])})), OpsWorksConn: opsworks.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.OpsWorks])})), OrganizationsConn: organizations.New(sess.Copy(&aws.Config{Endpoint: aws.String(c.Endpoints[names.Organizations])})), @@ -1229,6 +1232,13 @@ func (c *Config) Client(ctx context.Context) (interface{}, diag.Diagnostics) { if tfawserr.ErrMessageContains(r.Error, 
fms.ErrCodeInvalidOperationException, "Your AWS Organization is currently onboarding with AWS Firewall Manager and cannot be offboarded.") { r.Retryable = aws.Bool(true) } + // System problems can arise during FMS policy updates (maybe also creation), + // so we set the following operation as retryable. + // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/23946 + case "PutPolicy": + if tfawserr.ErrCodeEquals(r.Error, fms.ErrCodeInternalErrorException) { + r.Retryable = aws.Bool(true) + } } }) diff --git a/internal/generate/listpages/main.go b/internal/generate/listpages/main.go index 4f9ce2ef8759..fb9568eb6bfe 100644 --- a/internal/generate/listpages/main.go +++ b/internal/generate/listpages/main.go @@ -548,6 +548,7 @@ func init() { awsServiceNames["networkfirewall"] = "NetworkFirewall" awsServiceNames["networkmanager"] = "NetworkManager" awsServiceNames["nimblestudio"] = "NimbleStudio" + awsServiceNames["opensearchservice"] = "OpenSearchService" awsServiceNames["opsworks"] = "OpsWorks" awsServiceNames["opsworkscm"] = "OpsWorksCM" awsServiceNames["organizations"] = "Organizations" diff --git a/internal/generate/namevaluesfilters/generators/servicefilters/main.go b/internal/generate/namevaluesfilters/generators/servicefilters/main.go index d6d18ce42c1f..e7c58aadc804 100644 --- a/internal/generate/namevaluesfilters/generators/servicefilters/main.go +++ b/internal/generate/namevaluesfilters/generators/servicefilters/main.go @@ -29,6 +29,7 @@ var sliceServiceNames = []string{ "imagebuilder", "licensemanager", "neptune", + "opensearchservice", "rds", "resourcegroupstaggingapi", "route53resolver", diff --git a/internal/generate/tags/main.go b/internal/generate/tags/main.go index 04e87be99a49..dd126f3fb1c3 100644 --- a/internal/generate/tags/main.go +++ b/internal/generate/tags/main.go @@ -1051,6 +1051,7 @@ func init() { awsServiceNames["networkfirewall"] = "NetworkFirewall" awsServiceNames["networkmanager"] = "NetworkManager" awsServiceNames["nimblestudio"] = "NimbleStudio" + awsServiceNames["opensearchservice"] = "OpenSearchService" awsServiceNames["opsworks"] = "OpsWorks" awsServiceNames["opsworkscm"] = "OpsWorksCM" awsServiceNames["organizations"] = "Organizations" diff --git a/internal/provider/provider.go b/internal/provider/provider.go index 87fd40bf0593..1ab4f556e1ec 100644 --- a/internal/provider/provider.go +++ b/internal/provider/provider.go @@ -120,6 +120,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/service/neptune" "github.com/hashicorp/terraform-provider-aws/internal/service/networkfirewall" "github.com/hashicorp/terraform-provider-aws/internal/service/networkmanager" + "github.com/hashicorp/terraform-provider-aws/internal/service/opensearch" "github.com/hashicorp/terraform-provider-aws/internal/service/opsworks" "github.com/hashicorp/terraform-provider-aws/internal/service/organizations" "github.com/hashicorp/terraform-provider-aws/internal/service/outposts" @@ -596,12 +597,13 @@ func Provider() *schema.Provider { "aws_efs_file_system": efs.DataSourceFileSystem(), "aws_efs_mount_target": efs.DataSourceMountTarget(), - "aws_eks_addon": eks.DataSourceAddon(), - "aws_eks_cluster": eks.DataSourceCluster(), - "aws_eks_clusters": eks.DataSourceClusters(), - "aws_eks_cluster_auth": eks.DataSourceClusterAuth(), - "aws_eks_node_group": eks.DataSourceNodeGroup(), - "aws_eks_node_groups": eks.DataSourceNodeGroups(), + "aws_eks_addon": eks.DataSourceAddon(), + "aws_eks_addon_version": eks.DataSourceAddonVersion(), + "aws_eks_cluster": 
eks.DataSourceCluster(), + "aws_eks_clusters": eks.DataSourceClusters(), + "aws_eks_cluster_auth": eks.DataSourceClusterAuth(), + "aws_eks_node_group": eks.DataSourceNodeGroup(), + "aws_eks_node_groups": eks.DataSourceNodeGroups(), "aws_elasticache_cluster": elasticache.DataSourceCluster(), "aws_elasticache_replication_group": elasticache.DataSourceReplicationGroup(), @@ -679,6 +681,7 @@ func Provider() *schema.Provider { "aws_msk_configuration": kafka.DataSourceConfiguration(), "aws_msk_kafka_version": kafka.DataSourceVersion(), + "aws_mskconnect_connector": kafkaconnect.DataSourceConnector(), "aws_mskconnect_custom_plugin": kafkaconnect.DataSourceCustomPlugin(), "aws_mskconnect_worker_configuration": kafkaconnect.DataSourceWorkerConfiguration(), @@ -698,6 +701,7 @@ func Provider() *schema.Provider { "aws_lambda_alias": lambda.DataSourceAlias(), "aws_lambda_code_signing_config": lambda.DataSourceCodeSigningConfig(), + "aws_lambda_function_url": lambda.DataSourceFunctionURL(), "aws_lambda_function": lambda.DataSourceFunction(), "aws_lambda_invocation": lambda.DataSourceInvocation(), "aws_lambda_layer_version": lambda.DataSourceLayerVersion(), @@ -716,8 +720,12 @@ func Provider() *schema.Provider { "aws_regions": meta.DataSourceRegions(), "aws_service": meta.DataSourceService(), + "aws_memorydb_acl": memorydb.DataSourceACL(), + "aws_memorydb_cluster": memorydb.DataSourceCluster(), "aws_memorydb_parameter_group": memorydb.DataSourceParameterGroup(), + "aws_memorydb_snapshot": memorydb.DataSourceSnapshot(), "aws_memorydb_subnet_group": memorydb.DataSourceSubnetGroup(), + "aws_memorydb_user": memorydb.DataSourceUser(), "aws_mq_broker": mq.DataSourceBroker(), @@ -735,6 +743,8 @@ func Provider() *schema.Provider { "aws_networkmanager_site": networkmanager.DataSourceSite(), "aws_networkmanager_sites": networkmanager.DataSourceSites(), + "aws_opensearch_domain": opensearch.DataSourceDomain(), + "aws_organizations_delegated_administrators": organizations.DataSourceDelegatedAdministrators(), "aws_organizations_delegated_services": organizations.DataSourceDelegatedServices(), "aws_organizations_organization": organizations.DataSourceOrganization(), @@ -819,11 +829,12 @@ func Provider() *schema.Provider { "aws_sqs_queue": sqs.DataSourceQueue(), - "aws_ssm_document": ssm.DataSourceDocument(), - "aws_ssm_instances": ssm.DataSourceInstances(), - "aws_ssm_parameter": ssm.DataSourceParameter(), - "aws_ssm_parameters_by_path": ssm.DataSourceParametersByPath(), - "aws_ssm_patch_baseline": ssm.DataSourcePatchBaseline(), + "aws_ssm_document": ssm.DataSourceDocument(), + "aws_ssm_instances": ssm.DataSourceInstances(), + "aws_ssm_maintenance_windows": ssm.DataSourceMaintenanceWindows(), + "aws_ssm_parameter": ssm.DataSourceParameter(), + "aws_ssm_parameters_by_path": ssm.DataSourceParametersByPath(), + "aws_ssm_patch_baseline": ssm.DataSourcePatchBaseline(), "aws_ssoadmin_instances": ssoadmin.DataSourceInstances(), "aws_ssoadmin_permission_set": ssoadmin.DataSourcePermissionSet(), @@ -1192,6 +1203,7 @@ func Provider() *schema.Provider { "aws_directory_service_directory": ds.ResourceDirectory(), "aws_directory_service_log_subscription": ds.ResourceLogSubscription(), + "aws_dynamodb_contributor_insights": dynamodb.ResourceContributorInsights(), "aws_dynamodb_global_table": dynamodb.ResourceGlobalTable(), "aws_dynamodb_kinesis_streaming_destination": dynamodb.ResourceKinesisStreamingDestination(), "aws_dynamodb_table": dynamodb.ResourceTable(), @@ -1500,8 +1512,11 @@ func Provider() *schema.Provider { 
"aws_iot_authorizer": iot.ResourceAuthorizer(), "aws_iot_certificate": iot.ResourceCertificate(), + "aws_iot_indexing_configuration": iot.ResourceIndexingConfiguration(), + "aws_iot_logging_options": iot.ResourceLoggingOptions(), "aws_iot_policy": iot.ResourcePolicy(), "aws_iot_policy_attachment": iot.ResourcePolicyAttachment(), + "aws_iot_provisioning_template": iot.ResourceProvisioningTemplate(), "aws_iot_role_alias": iot.ResourceRoleAlias(), "aws_iot_thing": iot.ResourceThing(), "aws_iot_thing_group": iot.ResourceThingGroup(), @@ -1514,6 +1529,7 @@ func Provider() *schema.Provider { "aws_msk_configuration": kafka.ResourceConfiguration(), "aws_msk_scram_secret_association": kafka.ResourceScramSecretAssociation(), + "aws_mskconnect_connector": kafkaconnect.ResourceConnector(), "aws_mskconnect_custom_plugin": kafkaconnect.ResourceCustomPlugin(), "aws_mskconnect_worker_configuration": kafkaconnect.ResourceWorkerConfiguration(), @@ -1545,6 +1561,7 @@ func Provider() *schema.Provider { "aws_lambda_event_source_mapping": lambda.ResourceEventSourceMapping(), "aws_lambda_function": lambda.ResourceFunction(), "aws_lambda_function_event_invoke_config": lambda.ResourceFunctionEventInvokeConfig(), + "aws_lambda_function_url": lambda.ResourceFunctionUrl(), "aws_lambda_invocation": lambda.ResourceInvocation(), "aws_lambda_layer_version": lambda.ResourceLayerVersion(), "aws_lambda_layer_version_permission": lambda.ResourceLayerVersionPermission(), @@ -1621,6 +1638,10 @@ func Provider() *schema.Provider { "aws_networkmanager_transit_gateway_connect_peer_association": networkmanager.ResourceTransitGatewayConnectPeerAssociation(), "aws_networkmanager_transit_gateway_registration": networkmanager.ResourceTransitGatewayRegistration(), + "aws_opensearch_domain": opensearch.ResourceDomain(), + "aws_opensearch_domain_policy": opensearch.ResourceDomainPolicy(), + "aws_opensearch_domain_saml_options": opensearch.ResourceDomainSAMLOptions(), + "aws_opsworks_application": opsworks.ResourceApplication(), "aws_opsworks_custom_layer": opsworks.ResourceCustomLayer(), "aws_opsworks_ecs_cluster_layer": opsworks.ResourceECSClusterLayer(), @@ -1670,25 +1691,27 @@ func Provider() *schema.Provider { "aws_ram_resource_share": ram.ResourceResourceShare(), "aws_ram_resource_share_accepter": ram.ResourceResourceShareAccepter(), - "aws_db_cluster_snapshot": rds.ResourceClusterSnapshot(), - "aws_db_event_subscription": rds.ResourceEventSubscription(), - "aws_db_instance": rds.ResourceInstance(), - "aws_db_instance_role_association": rds.ResourceInstanceRoleAssociation(), - "aws_db_option_group": rds.ResourceOptionGroup(), - "aws_db_parameter_group": rds.ResourceParameterGroup(), - "aws_db_proxy": rds.ResourceProxy(), - "aws_db_proxy_default_target_group": rds.ResourceProxyDefaultTargetGroup(), - "aws_db_proxy_endpoint": rds.ResourceProxyEndpoint(), - "aws_db_proxy_target": rds.ResourceProxyTarget(), - "aws_db_security_group": rds.ResourceSecurityGroup(), - "aws_db_snapshot": rds.ResourceSnapshot(), - "aws_db_subnet_group": rds.ResourceSubnetGroup(), - "aws_rds_cluster": rds.ResourceCluster(), - "aws_rds_cluster_endpoint": rds.ResourceClusterEndpoint(), - "aws_rds_cluster_instance": rds.ResourceClusterInstance(), - "aws_rds_cluster_parameter_group": rds.ResourceClusterParameterGroup(), - "aws_rds_cluster_role_association": rds.ResourceClusterRoleAssociation(), - "aws_rds_global_cluster": rds.ResourceGlobalCluster(), + "aws_db_cluster_snapshot": rds.ResourceClusterSnapshot(), + "aws_db_event_subscription": 
rds.ResourceEventSubscription(), + "aws_db_instance": rds.ResourceInstance(), + "aws_db_instance_automated_backups_replication": rds.ResourceInstanceAutomatedBackupsReplication(), + "aws_db_instance_role_association": rds.ResourceInstanceRoleAssociation(), + "aws_db_option_group": rds.ResourceOptionGroup(), + "aws_db_parameter_group": rds.ResourceParameterGroup(), + "aws_db_proxy": rds.ResourceProxy(), + "aws_db_proxy_default_target_group": rds.ResourceProxyDefaultTargetGroup(), + "aws_db_proxy_endpoint": rds.ResourceProxyEndpoint(), + "aws_db_proxy_target": rds.ResourceProxyTarget(), + "aws_db_security_group": rds.ResourceSecurityGroup(), + "aws_db_snapshot": rds.ResourceSnapshot(), + "aws_db_subnet_group": rds.ResourceSubnetGroup(), + "aws_rds_cluster": rds.ResourceCluster(), + "aws_rds_cluster_activity_stream": rds.ResourceClusterActivityStream(), + "aws_rds_cluster_endpoint": rds.ResourceClusterEndpoint(), + "aws_rds_cluster_instance": rds.ResourceClusterInstance(), + "aws_rds_cluster_parameter_group": rds.ResourceClusterParameterGroup(), + "aws_rds_cluster_role_association": rds.ResourceClusterRoleAssociation(), + "aws_rds_global_cluster": rds.ResourceGlobalCluster(), "aws_redshift_cluster": redshift.ResourceCluster(), "aws_redshift_event_subscription": redshift.ResourceEventSubscription(), diff --git a/internal/service/autoscaling/group.go b/internal/service/autoscaling/group.go index 7faa2d90cfd2..25bc8257591a 100644 --- a/internal/service/autoscaling/group.go +++ b/internal/service/autoscaling/group.go @@ -392,6 +392,7 @@ func ResourceGroup() *schema.Resource { "initial_lifecycle_hook": { Type: schema.TypeSet, Optional: true, + ForceNew: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": { diff --git a/internal/service/autoscaling/group_test.go b/internal/service/autoscaling/group_test.go index e3fc718fe387..a1137eb9318f 100644 --- a/internal/service/autoscaling/group_test.go +++ b/internal/service/autoscaling/group_test.go @@ -739,6 +739,7 @@ func TestAccAutoScalingGroup_ALB_targetGroups(t *testing.T) { var group autoscaling.Group var tg elbv2.TargetGroup var tg2 elbv2.TargetGroup + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) testCheck := func(targets []*elbv2.TargetGroup) resource.TestCheckFunc { return func(*terraform.State) error { @@ -769,7 +770,7 @@ func TestAccAutoScalingGroup_ALB_targetGroups(t *testing.T) { CheckDestroy: testAccCheckGroupDestroy, Steps: []resource.TestStep{ { - Config: testAccGroupConfig_ALB_TargetGroup_pre(), + Config: testAccGroupConfig_ALB_TargetGroup_pre(rName), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckGroupExists("aws_autoscaling_group.bar", &group), testAccCheckLBTargetGroupExists("aws_lb_target_group.test", &tg), @@ -779,7 +780,7 @@ func TestAccAutoScalingGroup_ALB_targetGroups(t *testing.T) { }, { - Config: testAccGroupConfig_ALB_TargetGroup_post_duo(), + Config: testAccGroupConfig_ALB_TargetGroup_post_duo(rName), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckGroupExists("aws_autoscaling_group.bar", &group), testAccCheckLBTargetGroupExists("aws_lb_target_group.test", &tg), @@ -803,7 +804,7 @@ func TestAccAutoScalingGroup_ALB_targetGroups(t *testing.T) { }, }, { - Config: testAccGroupConfig_ALB_TargetGroup_post(), + Config: testAccGroupConfig_ALB_TargetGroup_post(rName), Check: resource.ComposeAggregateTestCheckFunc( testAccCheckGroupExists("aws_autoscaling_group.bar", &group), testAccCheckLBTargetGroupExists("aws_lb_target_group.test", &tg), @@ -3085,9 +3086,10 @@ resource 
"aws_autoscaling_group" "bar" { `) } -func testAccGroupConfig_ALB_TargetGroup_pre() string { - return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptInDefaultExclude(), - ` +func testAccGroupConfig_ALB_TargetGroup_pre(rName string) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptInDefaultExclude(), + fmt.Sprintf(` resource "aws_vpc" "default" { cidr_block = "10.0.0.0/16" @@ -3097,7 +3099,7 @@ resource "aws_vpc" "default" { } resource "aws_lb_target_group" "test" { - name = "tf-example-alb-tg" + name = %[1]q port = 80 protocol = "HTTP" vpc_id = aws_vpc.default.id @@ -3109,7 +3111,7 @@ resource "aws_subnet" "main" { availability_zone = data.aws_availability_zones.available.names[0] tags = { - Name = "tf-acc-autoscaling-group-alb-target-group-main" + Name = %[1]q } } @@ -3119,7 +3121,7 @@ resource "aws_subnet" "alt" { availability_zone = data.aws_availability_zones.available.names[1] tags = { - Name = "tf-acc-autoscaling-group-alb-target-group-alt" + Name = %[1]q } } @@ -3156,8 +3158,8 @@ resource "aws_autoscaling_group" "bar" { } resource "aws_security_group" "tf_test_self" { - name = "tf_test_alb_asg" - description = "tf_test_alb_asg" + name = %[1]q + description = %[1]q vpc_id = aws_vpc.default.id ingress { @@ -3168,25 +3170,26 @@ resource "aws_security_group" "tf_test_self" { } tags = { - Name = "testAccAWSAutoScalingGroupConfig_ALB_TargetGroup" + Name = %[1]q } } -`) +`, rName)) } -func testAccGroupConfig_ALB_TargetGroup_post() string { - return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptInDefaultExclude(), - ` +func testAccGroupConfig_ALB_TargetGroup_post(rName string) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptInDefaultExclude(), + fmt.Sprintf(` resource "aws_vpc" "default" { cidr_block = "10.0.0.0/16" tags = { - Name = "terraform-testacc-autoscaling-group-alb-target-group" + Name = %[1]q } } resource "aws_lb_target_group" "test" { - name = "tf-example-alb-tg" + name = %[1]q port = 80 protocol = "HTTP" vpc_id = aws_vpc.default.id @@ -3198,7 +3201,7 @@ resource "aws_subnet" "main" { availability_zone = data.aws_availability_zones.available.names[0] tags = { - Name = "tf-acc-autoscaling-group-alb-target-group-main" + Name = %[1]q } } @@ -3208,7 +3211,7 @@ resource "aws_subnet" "alt" { availability_zone = data.aws_availability_zones.available.names[1] tags = { - Name = "tf-acc-autoscaling-group-alb-target-group-alt" + Name = "%[1]s-2" } } @@ -3247,8 +3250,8 @@ resource "aws_autoscaling_group" "bar" { } resource "aws_security_group" "tf_test_self" { - name = "tf_test_alb_asg" - description = "tf_test_alb_asg" + name = %[1]q + description = %[1]q vpc_id = aws_vpc.default.id ingress { @@ -3259,32 +3262,33 @@ resource "aws_security_group" "tf_test_self" { } tags = { - Name = "testAccAWSAutoScalingGroupConfig_ALB_TargetGroup" + Name = %[1]q } } -`) +`, rName)) } -func testAccGroupConfig_ALB_TargetGroup_post_duo() string { - return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptInDefaultExclude(), - ` +func testAccGroupConfig_ALB_TargetGroup_post_duo(rName string) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptInDefaultExclude(), + fmt.Sprintf(` resource "aws_vpc" "default" { cidr_block = "10.0.0.0/16" tags = { - Name = "terraform-testacc-autoscaling-group-alb-target-group" + Name = %[1]q } } resource "aws_lb_target_group" "test" { - name = "tf-example-alb-tg" + name = %[1]q port = 80 protocol = "HTTP" vpc_id = aws_vpc.default.id } resource "aws_lb_target_group" "test_more" { - name = 
"tf-example-alb-tg-more" + name = format("%%s-%%s", substr(%[1]q, 0, 28), "2") port = 80 protocol = "HTTP" vpc_id = aws_vpc.default.id @@ -3296,7 +3300,7 @@ resource "aws_subnet" "main" { availability_zone = data.aws_availability_zones.available.names[0] tags = { - Name = "tf-acc-autoscaling-group-alb-target-group-main" + Name = %[1]q } } @@ -3306,7 +3310,7 @@ resource "aws_subnet" "alt" { availability_zone = data.aws_availability_zones.available.names[1] tags = { - Name = "tf-acc-autoscaling-group-alb-target-group-alt" + Name = "%[1]s-2" } } @@ -3348,8 +3352,8 @@ resource "aws_autoscaling_group" "bar" { } resource "aws_security_group" "tf_test_self" { - name = "tf_test_alb_asg" - description = "tf_test_alb_asg" + name = %[1]q + description = %[1]q vpc_id = aws_vpc.default.id ingress { @@ -3360,10 +3364,10 @@ resource "aws_security_group" "tf_test_self" { } tags = { - Name = "testAccAWSAutoScalingGroupConfig_ALB_TargetGroup" + Name = %[1]q } } -`) +`, rName)) } func testAccGroupConfig_TargetGroupARNs(rName string, tgCount int) string { diff --git a/internal/service/backup/consts.go b/internal/service/backup/consts.go index aff4c3ff2726..2e17cff83c9c 100644 --- a/internal/service/backup/consts.go +++ b/internal/service/backup/consts.go @@ -7,3 +7,40 @@ const ( frameworkStatusFailed = "FAILED" frameworkStatusUpdateInProgress = "UPDATE_IN_PROGRESS" ) + +const ( + reportPlanDeploymentStatusCompleted = "COMPLETED" + reportPlanDeploymentStatusCreateInProgress = "CREATE_IN_PROGRESS" + reportPlanDeploymentStatusDeleteInProgress = "DELETE_IN_PROGRESS" + reportPlanDeploymentStatusUpdateInProgress = "UPDATE_IN_PROGRESS" +) + +const ( + reportDeliveryChannelFormatCSV = "CSV" + reportDeliveryChannelFormatJSON = "JSON" +) + +func reportDeliveryChannelFormat_Values() []string { + return []string{ + reportDeliveryChannelFormatCSV, + reportDeliveryChannelFormatJSON, + } +} + +const ( + reportSettingTemplateBackupJobReport = "BACKUP_JOB_REPORT" + reportSettingTemplateControlComplianceReport = "CONTROL_COMPLIANCE_REPORT" + reportSettingTemplateCopyJobReport = "COPY_JOB_REPORT" + reportSettingTemplateResourceComplianceReport = "RESOURCE_COMPLIANCE_REPORT" + reportSettingTemplateRestoreJobReport = "RESTORE_JOB_REPORT" +) + +func reportSettingTemplate_Values() []string { + return []string{ + reportSettingTemplateBackupJobReport, + reportSettingTemplateControlComplianceReport, + reportSettingTemplateCopyJobReport, + reportSettingTemplateResourceComplianceReport, + reportSettingTemplateRestoreJobReport, + } +} diff --git a/internal/service/backup/region_settings.go b/internal/service/backup/region_settings.go index 20cbfb7cc746..b84c39d53511 100644 --- a/internal/service/backup/region_settings.go +++ b/internal/service/backup/region_settings.go @@ -52,7 +52,7 @@ func resourceRegionSettingsUpdate(d *schema.ResourceData, meta interface{}) erro _, err := conn.UpdateRegionSettings(input) if err != nil { - return fmt.Errorf("error setting Backup Region Settings (%s): %w", d.Id(), err) + return fmt.Errorf("error updating Backup Region Settings (%s): %w", d.Id(), err) } d.SetId(meta.(*conns.AWSClient).Region) @@ -63,14 +63,14 @@ func resourceRegionSettingsUpdate(d *schema.ResourceData, meta interface{}) erro func resourceRegionSettingsRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).BackupConn - resp, err := conn.DescribeRegionSettings(&backup.DescribeRegionSettingsInput{}) + output, err := conn.DescribeRegionSettings(&backup.DescribeRegionSettingsInput{}) if err != nil { return 
fmt.Errorf("error reading Backup Region Settings (%s): %w", d.Id(), err) } - d.Set("resource_type_opt_in_preference", aws.BoolValueMap(resp.ResourceTypeOptInPreference)) - d.Set("resource_type_management_preference", aws.BoolValueMap(resp.ResourceTypeManagementPreference)) + d.Set("resource_type_opt_in_preference", aws.BoolValueMap(output.ResourceTypeOptInPreference)) + d.Set("resource_type_management_preference", aws.BoolValueMap(output.ResourceTypeManagementPreference)) return nil } diff --git a/internal/service/backup/region_settings_test.go b/internal/service/backup/region_settings_test.go index bc4edf5b29e0..f7622a7bc409 100644 --- a/internal/service/backup/region_settings_test.go +++ b/internal/service/backup/region_settings_test.go @@ -26,10 +26,10 @@ func TestAccBackupRegionSettings_basic(t *testing.T) { CheckDestroy: nil, Steps: []resource.TestStep{ { - Config: testAccBackupRegionSettingsConfig1(), - Check: resource.ComposeTestCheckFunc( + Config: testAccRegionSettings1Config(), + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckRegionSettingsExists(&settings), - resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.%", "11"), + resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.%", "12"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.Aurora", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.DocumentDB", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.DynamoDB", "true"), @@ -39,6 +39,7 @@ func TestAccBackupRegionSettings_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.FSx", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.Neptune", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.RDS", "true"), + resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.S3", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.Storage Gateway", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.VirtualMachine", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_management_preference.%", "2"), @@ -52,10 +53,10 @@ func TestAccBackupRegionSettings_basic(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBackupRegionSettingsConfig2(), - Check: resource.ComposeTestCheckFunc( + Config: testAccRegionSettings2Config(), + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckRegionSettingsExists(&settings), - resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.%", "11"), + resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.%", "12"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.Aurora", "false"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.DocumentDB", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.DynamoDB", "true"), @@ -65,6 +66,7 @@ func TestAccBackupRegionSettings_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.FSx", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.Neptune", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.RDS", "true"), + resource.TestCheckResourceAttr(resourceName, 
"resource_type_opt_in_preference.S3", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.Storage Gateway", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.VirtualMachine", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_management_preference.%", "2"), @@ -73,10 +75,10 @@ func TestAccBackupRegionSettings_basic(t *testing.T) { ), }, { - Config: testAccBackupRegionSettingsConfig3(), - Check: resource.ComposeTestCheckFunc( + Config: testAccRegionSettings3Config(), + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckRegionSettingsExists(&settings), - resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.%", "11"), + resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.%", "12"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.Aurora", "false"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.DocumentDB", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.DynamoDB", "true"), @@ -86,6 +88,7 @@ func TestAccBackupRegionSettings_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.FSx", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.Neptune", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.RDS", "true"), + resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.S3", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.Storage Gateway", "true"), resource.TestCheckResourceAttr(resourceName, "resource_type_opt_in_preference.VirtualMachine", "false"), resource.TestCheckResourceAttr(resourceName, "resource_type_management_preference.%", "2"), @@ -97,22 +100,23 @@ func TestAccBackupRegionSettings_basic(t *testing.T) { }) } -func testAccCheckRegionSettingsExists(settings *backup.DescribeRegionSettingsOutput) resource.TestCheckFunc { +func testAccCheckRegionSettingsExists(v *backup.DescribeRegionSettingsOutput) resource.TestCheckFunc { return func(s *terraform.State) error { - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn - resp, err := conn.DescribeRegionSettings(&backup.DescribeRegionSettingsInput{}) + + output, err := conn.DescribeRegionSettings(&backup.DescribeRegionSettingsInput{}) + if err != nil { return err } - *settings = *resp + *v = *output return nil } } -func testAccBackupRegionSettingsConfig1() string { +func testAccRegionSettings1Config() string { return ` resource "aws_backup_region_settings" "test" { resource_type_opt_in_preference = { @@ -125,6 +129,7 @@ resource "aws_backup_region_settings" "test" { "FSx" = true "Neptune" = true "RDS" = true + "S3" = true "Storage Gateway" = true "VirtualMachine" = true } @@ -132,7 +137,7 @@ resource "aws_backup_region_settings" "test" { ` } -func testAccBackupRegionSettingsConfig2() string { +func testAccRegionSettings2Config() string { return ` resource "aws_backup_region_settings" "test" { resource_type_opt_in_preference = { @@ -145,6 +150,7 @@ resource "aws_backup_region_settings" "test" { "FSx" = true "Neptune" = true "RDS" = true + "S3" = true "Storage Gateway" = true "VirtualMachine" = true } @@ -157,7 +163,7 @@ resource "aws_backup_region_settings" "test" { ` } -func testAccBackupRegionSettingsConfig3() string { +func testAccRegionSettings3Config() string { return ` resource "aws_backup_region_settings" 
"test" { resource_type_opt_in_preference = { @@ -170,6 +176,7 @@ resource "aws_backup_region_settings" "test" { "FSx" = true "Neptune" = true "RDS" = true + "S3" = true "Storage Gateway" = true "VirtualMachine" = false } diff --git a/internal/service/backup/report_plan.go b/internal/service/backup/report_plan.go index 216e0ff760c6..b4a6854f4a76 100644 --- a/internal/service/backup/report_plan.go +++ b/internal/service/backup/report_plan.go @@ -14,6 +14,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" "github.com/hashicorp/terraform-provider-aws/internal/verify" ) @@ -23,6 +24,7 @@ func ResourceReportPlan() *schema.Resource { Read: resourceReportPlanRead, Update: resourceReportPlanUpdate, Delete: resourceReportPlanDelete, + Importer: &schema.ResourceImporter{ State: schema.ImportStatePassthrough, }, @@ -61,11 +63,8 @@ func ResourceReportPlan() *schema.Resource { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{ - Type: schema.TypeString, - ValidateFunc: validation.StringInSlice([]string{ - "CSV", - "JSON", - }, false), + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(reportDeliveryChannelFormat_Values(), false), }, }, "s3_bucket_name": { @@ -98,16 +97,10 @@ func ResourceReportPlan() *schema.Resource { }, // A report plan template cannot be updated "report_template": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice([]string{ - "RESOURCE_COMPLIANCE_REPORT", - "CONTROL_COMPLIANCE_REPORT", - "BACKUP_JOB_REPORT", - "COPY_JOB_REPORT", - "RESTORE_JOB_REPORT", - }, false), + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(reportSettingTemplate_Values(), false), }, }, }, @@ -126,7 +119,6 @@ func resourceReportPlanCreate(d *schema.ResourceData, meta interface{}) error { tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) name := d.Get("name").(string) - input := &backup.CreateReportPlanInput{ IdempotencyToken: aws.String(resource.UniqueId()), ReportDeliveryChannel: expandReportDeliveryChannel(d.Get("report_delivery_channel").([]interface{})), @@ -142,14 +134,19 @@ func resourceReportPlanCreate(d *schema.ResourceData, meta interface{}) error { input.ReportPlanTags = Tags(tags.IgnoreAWS()) } - log.Printf("[DEBUG] Creating Backup Report Plan: %#v", input) - resp, err := conn.CreateReportPlan(input) + log.Printf("[DEBUG] Creating Backup Report Plan: %s", input) + output, err := conn.CreateReportPlan(input) + if err != nil { - return fmt.Errorf("error creating Backup Report Plan: %w", err) + return fmt.Errorf("error creating Backup Report Plan (%s): %w", name, err) } - // Set ID with the name since the name is unique for the report plan - d.SetId(aws.StringValue(resp.ReportPlanName)) + // Set ID with the name since the name is unique for the report plan. 
+ d.SetId(aws.StringValue(output.ReportPlanName)) + + if _, err := waitReportPlanCreated(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return fmt.Errorf("error waiting for Backup Report Plan (%s) create: %w", d.Id(), err) + } return resourceReportPlanRead(d, meta) } @@ -159,40 +156,38 @@ func resourceReportPlanRead(d *schema.ResourceData, meta interface{}) error { defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - resp, err := conn.DescribeReportPlan(&backup.DescribeReportPlanInput{ - ReportPlanName: aws.String(d.Id()), - }) + reportPlan, err := FindReportPlanByName(conn, d.Id()) - if tfawserr.ErrCodeEquals(err, backup.ErrCodeResourceNotFoundException) { - log.Printf("[WARN] Backup Report Plan (%s) not found, removing from state", d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Backup Report Plan %s not found, removing from state", d.Id()) d.SetId("") return nil } + if err != nil { return fmt.Errorf("error reading Backup Report Plan (%s): %w", d.Id(), err) } - d.Set("arn", resp.ReportPlan.ReportPlanArn) - d.Set("deployment_status", resp.ReportPlan.DeploymentStatus) - d.Set("description", resp.ReportPlan.ReportPlanDescription) - d.Set("name", resp.ReportPlan.ReportPlanName) - - if err := d.Set("creation_time", resp.ReportPlan.CreationTime.Format(time.RFC3339)); err != nil { - return fmt.Errorf("error setting creation_time: %s", err) - } + d.Set("arn", reportPlan.ReportPlanArn) + d.Set("creation_time", reportPlan.CreationTime.Format(time.RFC3339)) + d.Set("deployment_status", reportPlan.DeploymentStatus) + d.Set("description", reportPlan.ReportPlanDescription) + d.Set("name", reportPlan.ReportPlanName) - if err := d.Set("report_delivery_channel", flattenReportDeliveryChannel(resp.ReportPlan.ReportDeliveryChannel)); err != nil { + if err := d.Set("report_delivery_channel", flattenReportDeliveryChannel(reportPlan.ReportDeliveryChannel)); err != nil { return fmt.Errorf("error setting report_delivery_channel: %w", err) } - if err := d.Set("report_setting", flattenReportSetting(resp.ReportPlan.ReportSetting)); err != nil { - return fmt.Errorf("error setting report_delivery_channel: %w", err) + if err := d.Set("report_setting", flattenReportSetting(reportPlan.ReportSetting)); err != nil { + return fmt.Errorf("error setting report_setting: %w", err) } tags, err := ListTags(conn, d.Get("arn").(string)) + if err != nil { return fmt.Errorf("error listing tags for Backup Report Plan (%s): %w", d.Id(), err) } + tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) //lintignore:AWSR002 @@ -210,7 +205,7 @@ func resourceReportPlanRead(d *schema.ResourceData, meta interface{}) error { func resourceReportPlanUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).BackupConn - if d.HasChanges("description", "report_delivery_channel", "report_plan_description", "report_setting") { + if d.HasChangesExcept("tags_all", "tags") { input := &backup.UpdateReportPlanInput{ IdempotencyToken: aws.String(resource.UniqueId()), ReportDeliveryChannel: expandReportDeliveryChannel(d.Get("report_delivery_channel").([]interface{})), @@ -219,15 +214,21 @@ func resourceReportPlanUpdate(d *schema.ResourceData, meta interface{}) error { ReportSetting: expandReportSetting(d.Get("report_setting").([]interface{})), } - log.Printf("[DEBUG] Updating Backup Report Plan: %#v", input) + log.Printf("[DEBUG] Updating Backup Report Plan: %s", input) _, err := 
conn.UpdateReportPlan(input) + if err != nil { return fmt.Errorf("error updating Backup Report Plan (%s): %w", d.Id(), err) } + + if _, err := waitReportPlanUpdated(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for Backup Report Plan (%s) update: %w", d.Id(), err) + } } if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") + if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { return fmt.Errorf("error updating tags for Backup Report Plan (%s): %w", d.Id(), err) } @@ -239,13 +240,17 @@ func resourceReportPlanUpdate(d *schema.ResourceData, meta interface{}) error { func resourceReportPlanDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).BackupConn - input := &backup.DeleteReportPlanInput{ + log.Printf("[DEBUG] Deleting Backup Report Plan: %s", d.Id()) + _, err := conn.DeleteReportPlan(&backup.DeleteReportPlanInput{ ReportPlanName: aws.String(d.Id()), - } + }) - _, err := conn.DeleteReportPlan(input) if err != nil { - return fmt.Errorf("error deleting Backup Report Plan: %s", err) + return fmt.Errorf("error deleting Backup Report Plan (%s): %w", d.Id(), err) + } + + if _, err := waitReportPlanDeleted(conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { + return fmt.Errorf("error waiting for Backup Report Plan (%s) delete: %w", d.Id(), err) } return nil @@ -340,3 +345,95 @@ func flattenReportSetting(reportSetting *backup.ReportSetting) []interface{} { return []interface{}{values} } + +func FindReportPlanByName(conn *backup.Backup, name string) (*backup.ReportPlan, error) { + input := &backup.DescribeReportPlanInput{ + ReportPlanName: aws.String(name), + } + + output, err := conn.DescribeReportPlan(input) + + if tfawserr.ErrCodeEquals(err, backup.ErrCodeResourceNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.ReportPlan == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.ReportPlan, nil +} + +func statusReportPlanDeployment(conn *backup.Backup, name string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := FindReportPlanByName(conn, name) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, aws.StringValue(output.DeploymentStatus), nil + } +} + +func waitReportPlanCreated(conn *backup.Backup, name string, timeout time.Duration) (*backup.ReportPlan, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{reportPlanDeploymentStatusCreateInProgress}, + Target: []string{reportPlanDeploymentStatusCompleted}, + Timeout: timeout, + Refresh: statusReportPlanDeployment(conn, name), + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*backup.ReportPlan); ok { + return output, err + } + + return nil, err +} + +func waitReportPlanDeleted(conn *backup.Backup, name string, timeout time.Duration) (*backup.ReportPlan, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{reportPlanDeploymentStatusDeleteInProgress}, + Target: []string{}, + Timeout: timeout, + Refresh: statusReportPlanDeployment(conn, name), + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*backup.ReportPlan); ok { + return output, err + } + + return nil, err +} + +func waitReportPlanUpdated(conn *backup.Backup, name string, timeout time.Duration) 
(*backup.ReportPlan, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{reportPlanDeploymentStatusUpdateInProgress}, + Target: []string{reportPlanDeploymentStatusCompleted}, + Timeout: timeout, + Refresh: statusReportPlanDeployment(conn, name), + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*backup.ReportPlan); ok { + return output, err + } + + return nil, err +} diff --git a/internal/service/backup/report_plan_data_source.go b/internal/service/backup/report_plan_data_source.go index 07243ffd89db..1522d8e6bc9f 100644 --- a/internal/service/backup/report_plan_data_source.go +++ b/internal/service/backup/report_plan_data_source.go @@ -5,7 +5,6 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/backup" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" @@ -92,34 +91,29 @@ func dataSourceReportPlanRead(d *schema.ResourceData, meta interface{}) error { ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig name := d.Get("name").(string) + reportPlan, err := FindReportPlanByName(conn, name) - resp, err := conn.DescribeReportPlan(&backup.DescribeReportPlanInput{ - ReportPlanName: aws.String(name), - }) if err != nil { - return fmt.Errorf("Error getting Backup Report Plan: %w", err) + return fmt.Errorf("error reading Backup Report Plan (%s): %w", name, err) } - d.SetId(aws.StringValue(resp.ReportPlan.ReportPlanName)) + d.SetId(aws.StringValue(reportPlan.ReportPlanName)) - d.Set("arn", resp.ReportPlan.ReportPlanArn) - d.Set("deployment_status", resp.ReportPlan.DeploymentStatus) - d.Set("description", resp.ReportPlan.ReportPlanDescription) - d.Set("name", resp.ReportPlan.ReportPlanName) + d.Set("arn", reportPlan.ReportPlanArn) + d.Set("creation_time", reportPlan.CreationTime.Format(time.RFC3339)) + d.Set("deployment_status", reportPlan.DeploymentStatus) + d.Set("description", reportPlan.ReportPlanDescription) + d.Set("name", reportPlan.ReportPlanName) - if err := d.Set("creation_time", resp.ReportPlan.CreationTime.Format(time.RFC3339)); err != nil { - return fmt.Errorf("error setting creation_time: %s", err) - } - - if err := d.Set("report_delivery_channel", flattenReportDeliveryChannel(resp.ReportPlan.ReportDeliveryChannel)); err != nil { + if err := d.Set("report_delivery_channel", flattenReportDeliveryChannel(reportPlan.ReportDeliveryChannel)); err != nil { return fmt.Errorf("error setting report_delivery_channel: %w", err) } - if err := d.Set("report_setting", flattenReportSetting(resp.ReportPlan.ReportSetting)); err != nil { - return fmt.Errorf("error setting report_delivery_channel: %w", err) + if err := d.Set("report_setting", flattenReportSetting(reportPlan.ReportSetting)); err != nil { + return fmt.Errorf("error setting report_setting: %w", err) } - tags, err := ListTags(conn, aws.StringValue(resp.ReportPlan.ReportPlanArn)) + tags, err := ListTags(conn, aws.StringValue(reportPlan.ReportPlanArn)) if err != nil { return fmt.Errorf("error listing tags for Backup Report Plan (%s): %w", d.Id(), err) diff --git a/internal/service/backup/report_plan_data_source_test.go b/internal/service/backup/report_plan_data_source_test.go index 2f889474a2a2..3422ec165a6e 100644 --- a/internal/service/backup/report_plan_data_source_test.go +++ b/internal/service/backup/report_plan_data_source_test.go @@ -23,11 +23,11 @@ func TestAccBackupReportPlanDataSource_basic(t *testing.T) 
{ Providers: acctest.Providers, Steps: []resource.TestStep{ { - Config: testAccReportPlanDataSourceConfig_nonExistent, - ExpectError: regexp.MustCompile(`Error getting Backup Report Plan`), + Config: testAccReportPlanDataSourceNonExistentConfig, + ExpectError: regexp.MustCompile(`error reading Backup Report Plan`), }, { - Config: testAccReportPlanDataSourceConfig_basic(rName, rName2), + Config: testAccReportPlanDataSourceConfig(rName, rName2), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrPair(datasourceName, "arn", resourceName, "arn"), resource.TestCheckResourceAttrPair(datasourceName, "creation_time", resourceName, "creation_time"), @@ -48,13 +48,13 @@ func TestAccBackupReportPlanDataSource_basic(t *testing.T) { }) } -const testAccReportPlanDataSourceConfig_nonExistent = ` +const testAccReportPlanDataSourceNonExistentConfig = ` data "aws_backup_report_plan" "test" { name = "tf_acc_test_does_not_exist" } ` -func testAccReportPlanDataSourceConfig_basic(rName, rName2 string) string { +func testAccReportPlanDataSourceConfig(rName, rName2 string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q diff --git a/internal/service/backup/report_plan_test.go b/internal/service/backup/report_plan_test.go index a06dc1cc9a8b..2be15865d413 100644 --- a/internal/service/backup/report_plan_test.go +++ b/internal/service/backup/report_plan_test.go @@ -4,7 +4,6 @@ import ( "fmt" "testing" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/backup" sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" @@ -12,11 +11,11 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfbackup "github.com/hashicorp/terraform-provider-aws/internal/service/backup" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func TestAccBackupReportPlan_basic(t *testing.T) { - var reportPlan backup.DescribeReportPlanOutput - + var reportPlan backup.ReportPlan rName := sdkacctest.RandomWithPrefix("tf-test-bucket") rName2 := fmt.Sprintf("tf_acc_test_%s", sdkacctest.RandString(7)) originalDescription := "original description" @@ -30,7 +29,7 @@ func TestAccBackupReportPlan_basic(t *testing.T) { CheckDestroy: testAccCheckReportPlanDestroy, Steps: []resource.TestStep{ { - Config: testAccBackupReportPlanConfig_basic(rName, rName2, originalDescription), + Config: testAccReportPlanConfig(rName, rName2, originalDescription), Check: resource.ComposeTestCheckFunc( testAccCheckReportPlanExists(resourceName, &reportPlan), resource.TestCheckResourceAttrSet(resourceName, "arn"), @@ -53,7 +52,7 @@ func TestAccBackupReportPlan_basic(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBackupReportPlanConfig_basic(rName, rName2, updatedDescription), + Config: testAccReportPlanConfig(rName, rName2, updatedDescription), Check: resource.ComposeTestCheckFunc( testAccCheckReportPlanExists(resourceName, &reportPlan), resource.TestCheckResourceAttrSet(resourceName, "arn"), @@ -75,8 +74,7 @@ func TestAccBackupReportPlan_basic(t *testing.T) { } func TestAccBackupReportPlan_updateTags(t *testing.T) { - var reportPlan backup.DescribeReportPlanOutput - + var reportPlan backup.ReportPlan rName := sdkacctest.RandomWithPrefix("tf-test-bucket") rName2 := fmt.Sprintf("tf_acc_test_%s", sdkacctest.RandString(7)) description := "example description" @@ -89,7 +87,7 @@ func TestAccBackupReportPlan_updateTags(t 
*testing.T) { CheckDestroy: testAccCheckReportPlanDestroy, Steps: []resource.TestStep{ { - Config: testAccBackupReportPlanConfig_basic(rName, rName2, description), + Config: testAccReportPlanConfig(rName, rName2, description), Check: resource.ComposeTestCheckFunc( testAccCheckReportPlanExists(resourceName, &reportPlan), resource.TestCheckResourceAttrSet(resourceName, "arn"), @@ -112,7 +110,7 @@ func TestAccBackupReportPlan_updateTags(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBackupReportPlanConfig_tags(rName, rName2, description), + Config: testAccReportPlanConfigTags1(rName, rName2, description), Check: resource.ComposeTestCheckFunc( testAccCheckReportPlanExists(resourceName, &reportPlan), resource.TestCheckResourceAttrSet(resourceName, "arn"), @@ -136,7 +134,7 @@ func TestAccBackupReportPlan_updateTags(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBackupReportPlanConfig_tagsUpdated(rName, rName2, description), + Config: testAccReportPlanConfigTags2(rName, rName2, description), Check: resource.ComposeTestCheckFunc( testAccCheckReportPlanExists(resourceName, &reportPlan), resource.TestCheckResourceAttrSet(resourceName, "arn"), @@ -160,8 +158,7 @@ func TestAccBackupReportPlan_updateTags(t *testing.T) { } func TestAccBackupReportPlan_updateReportDeliveryChannel(t *testing.T) { - var reportPlan backup.DescribeReportPlanOutput - + var reportPlan backup.ReportPlan rName := sdkacctest.RandomWithPrefix("tf-test-bucket") rName2 := fmt.Sprintf("tf_acc_test_%s", sdkacctest.RandString(7)) description := "example description" @@ -174,7 +171,7 @@ func TestAccBackupReportPlan_updateReportDeliveryChannel(t *testing.T) { CheckDestroy: testAccCheckReportPlanDestroy, Steps: []resource.TestStep{ { - Config: testAccBackupReportPlanConfig_basic(rName, rName2, description), + Config: testAccReportPlanConfig(rName, rName2, description), Check: resource.ComposeTestCheckFunc( testAccCheckReportPlanExists(resourceName, &reportPlan), resource.TestCheckResourceAttrSet(resourceName, "arn"), @@ -197,7 +194,7 @@ func TestAccBackupReportPlan_updateReportDeliveryChannel(t *testing.T) { ImportStateVerify: true, }, { - Config: testAccBackupReportPlanConfig_reportDeliveryChannel(rName, rName2, description), + Config: testAccReportPlanReportDeliveryChannelConfig(rName, rName2, description), Check: resource.ComposeTestCheckFunc( testAccCheckReportPlanExists(resourceName, &reportPlan), resource.TestCheckResourceAttrSet(resourceName, "arn"), @@ -220,8 +217,7 @@ func TestAccBackupReportPlan_updateReportDeliveryChannel(t *testing.T) { } func TestAccBackupReportPlan_disappears(t *testing.T) { - var reportPlan backup.DescribeReportPlanOutput - + var reportPlan backup.ReportPlan rName := sdkacctest.RandomWithPrefix("tf-test-bucket") rName2 := fmt.Sprintf("tf_acc_test_%s", sdkacctest.RandString(7)) description := "disappears" @@ -234,7 +230,7 @@ func TestAccBackupReportPlan_disappears(t *testing.T) { CheckDestroy: testAccCheckReportPlanDestroy, Steps: []resource.TestStep{ { - Config: testAccBackupReportPlanConfig_basic(rName, rName2, description), + Config: testAccReportPlanConfig(rName, rName2, description), Check: resource.ComposeTestCheckFunc( testAccCheckReportPlanExists(resourceName, &reportPlan), acctest.CheckResourceDisappears(acctest.Provider, tfbackup.ResourceReportPlan(), resourceName), @@ -261,52 +257,54 @@ func testAccReportPlanPreCheck(t *testing.T) { func testAccCheckReportPlanDestroy(s *terraform.State) error { conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn + for _, 
rs := range s.RootModule().Resources { if rs.Type != "aws_backup_report_plan" { continue } - input := &backup.DescribeReportPlanInput{ - ReportPlanName: aws.String(rs.Primary.ID), - } + _, err := tfbackup.FindReportPlanByName(conn, rs.Primary.ID) - resp, err := conn.DescribeReportPlan(input) + if tfresource.NotFound(err) { + continue + } - if err == nil { - if aws.StringValue(resp.ReportPlan.ReportPlanName) == rs.Primary.ID { - return fmt.Errorf("Backup Report Plan '%s' was not deleted properly", rs.Primary.ID) - } + if err != nil { + return err } + + return fmt.Errorf("Backup Report Plan %s still exists", rs.Primary.ID) } return nil } -func testAccCheckReportPlanExists(name string, reportPlan *backup.DescribeReportPlanOutput) resource.TestCheckFunc { +func testAccCheckReportPlanExists(n string, v *backup.ReportPlan) resource.TestCheckFunc { return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[name] - + rs, ok := s.RootModule().Resources[n] if !ok { - return fmt.Errorf("Not found: %s", name) + return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn - input := &backup.DescribeReportPlanInput{ - ReportPlanName: aws.String(rs.Primary.ID), + if rs.Primary.ID == "" { + return fmt.Errorf("No Backup Report Plan ID is set") } - resp, err := conn.DescribeReportPlan(input) + + conn := acctest.Provider.Meta().(*conns.AWSClient).BackupConn + + output, err := tfbackup.FindReportPlanByName(conn, rs.Primary.ID) if err != nil { return err } - *reportPlan = *resp + *v = *output return nil } } -func testAccBackupReportPlanBaseConfig(bucketName string) string { +func testAccReportPlanBaseConfig(bucketName string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q @@ -322,10 +320,8 @@ resource "aws_s3_bucket_public_access_block" "test" { `, bucketName) } -func testAccBackupReportPlanConfig_basic(rName, rName2, label string) string { - return acctest.ConfigCompose( - testAccBackupReportPlanBaseConfig(rName), - fmt.Sprintf(` +func testAccReportPlanConfig(rName, rName2, label string) string { + return acctest.ConfigCompose(testAccReportPlanBaseConfig(rName), fmt.Sprintf(` resource "aws_backup_report_plan" "test" { name = %[1]q description = %[2]q @@ -348,10 +344,8 @@ resource "aws_backup_report_plan" "test" { `, rName2, label)) } -func testAccBackupReportPlanConfig_tags(rName, rName2, label string) string { - return acctest.ConfigCompose( - testAccBackupReportPlanBaseConfig(rName), - fmt.Sprintf(` +func testAccReportPlanConfigTags1(rName, rName2, label string) string { + return acctest.ConfigCompose(testAccReportPlanBaseConfig(rName), fmt.Sprintf(` resource "aws_backup_report_plan" "test" { name = %[1]q description = %[2]q @@ -375,10 +369,8 @@ resource "aws_backup_report_plan" "test" { `, rName2, label)) } -func testAccBackupReportPlanConfig_tagsUpdated(rName, rName2, label string) string { - return acctest.ConfigCompose( - testAccBackupReportPlanBaseConfig(rName), - fmt.Sprintf(` +func testAccReportPlanConfigTags2(rName, rName2, label string) string { + return acctest.ConfigCompose(testAccReportPlanBaseConfig(rName), fmt.Sprintf(` resource "aws_backup_report_plan" "test" { name = %[1]q description = %[2]q @@ -403,10 +395,8 @@ resource "aws_backup_report_plan" "test" { `, rName2, label)) } -func testAccBackupReportPlanConfig_reportDeliveryChannel(rName, rName2, label string) string { - return acctest.ConfigCompose( - testAccBackupReportPlanBaseConfig(rName), - fmt.Sprintf(` +func 
testAccReportPlanReportDeliveryChannelConfig(rName, rName2, label string) string { + return acctest.ConfigCompose(testAccReportPlanBaseConfig(rName), fmt.Sprintf(` resource "aws_backup_report_plan" "test" { name = %[1]q description = %[2]q diff --git a/internal/service/backup/sweep.go b/internal/service/backup/sweep.go index a50b01a31be3..537ce34531a2 100644 --- a/internal/service/backup/sweep.go +++ b/internal/service/backup/sweep.go @@ -16,6 +16,16 @@ import ( ) func init() { + resource.AddTestSweepers("aws_backup_framework", &resource.Sweeper{ + Name: "aws_backup_framework", + F: sweepFramework, + }) + + resource.AddTestSweepers("aws_backup_report_plan", &resource.Sweeper{ + Name: "aws_backup_report_plan", + F: sweepReportPlan, + }) + resource.AddTestSweepers("aws_backup_vault_lock_configuration", &resource.Sweeper{ Name: "aws_backup_vault_lock_configuration", F: sweepVaultLockConfiguration, @@ -42,6 +52,90 @@ func init() { }) } +func sweepFramework(region string) error { + client, err := sweep.SharedRegionalSweepClient(region) + if err != nil { + return fmt.Errorf("Error getting client: %w", err) + } + conn := client.(*conns.AWSClient).BackupConn + input := &backup.ListFrameworksInput{} + var sweeperErrs *multierror.Error + sweepResources := make([]*sweep.SweepResource, 0) + + err = conn.ListFrameworksPages(input, func(page *backup.ListFrameworksOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, framework := range page.Frameworks { + r := ResourceFramework() + d := r.Data(nil) + d.SetId(aws.StringValue(framework.FrameworkName)) + + sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) + } + + return !lastPage + }) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping Backup Framework sweep for %s: %s", region, err) + return sweeperErrs.ErrorOrNil() + } + + if err != nil { + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Backup Frameworks for %s: %w", region, err)) + } + + if err := sweep.SweepOrchestrator(sweepResources); err != nil { + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Backup Frameworks for %s: %w", region, err)) + } + + return sweeperErrs.ErrorOrNil() +} + +func sweepReportPlan(region string) error { + client, err := sweep.SharedRegionalSweepClient(region) + if err != nil { + return fmt.Errorf("Error getting client: %w", err) + } + conn := client.(*conns.AWSClient).BackupConn + input := &backup.ListReportPlansInput{} + var sweeperErrs *multierror.Error + sweepResources := make([]*sweep.SweepResource, 0) + + err = conn.ListReportPlansPages(input, func(page *backup.ListReportPlansOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, reportPlan := range page.ReportPlans { + r := ResourceReportPlan() + d := r.Data(nil) + d.SetId(aws.StringValue(reportPlan.ReportPlanName)) + + sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) + } + + return !lastPage + }) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping Backup Report Plans sweep for %s: %s", region, err) + return sweeperErrs.ErrorOrNil() + } + + if err != nil { + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error listing Backup Report Plans for %s: %w", region, err)) + } + + if err := sweep.SweepOrchestrator(sweepResources); err != nil { + sweeperErrs = multierror.Append(sweeperErrs, fmt.Errorf("error sweeping Backup Report Plans for %s: %w", region, err)) + } + + return sweeperErrs.ErrorOrNil() +} + func 
sweepVaultLockConfiguration(region string) error { client, err := sweep.SharedRegionalSweepClient(region) diff --git a/internal/service/cloudformation/stack_set.go b/internal/service/cloudformation/stack_set.go index b61e115ddb91..4f15402b4c0c 100644 --- a/internal/service/cloudformation/stack_set.go +++ b/internal/service/cloudformation/stack_set.go @@ -102,6 +102,53 @@ func ResourceStackSet() *schema.Resource { validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9-]+$`), "must contain only alphanumeric and hyphen characters"), ), }, + "operation_preferences": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "failure_tolerance_count": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(0), + ConflictsWith: []string{"operation_preferences.0.failure_tolerance_percentage"}, + }, + "failure_tolerance_percentage": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(0, 100), + ConflictsWith: []string{"operation_preferences.0.failure_tolerance_count"}, + }, + "max_concurrent_count": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(1), + ConflictsWith: []string{"operation_preferences.0.max_concurrent_percentage"}, + }, + "max_concurrent_percentage": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(1, 100), + ConflictsWith: []string{"operation_preferences.0.max_concurrent_count"}, + }, + "region_concurrency_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(cloudformation.RegionConcurrencyType_Values(), false), + }, + "region_order": { + Type: schema.TypeList, + Optional: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringMatch(regexp.MustCompile(`^[a-zA-Z0-9-]{1,128}$`), ""), + }, + }, + }, + }, + }, "parameters": { Type: schema.TypeMap, Optional: true, @@ -289,6 +336,10 @@ func resourceStackSetUpdate(d *schema.ResourceData, meta interface{}) error { input.ExecutionRoleName = aws.String(v.(string)) } + if v, ok := d.GetOk("operation_preferences"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.OperationPreferences = expandCloudFormationOperationPreferences(v.([]interface{})[0].(map[string]interface{})) + } + if v, ok := d.GetOk("parameters"); ok { input.Parameters = expandParameters(v.(map[string]interface{})) } diff --git a/internal/service/cloudformation/stack_set_instance.go b/internal/service/cloudformation/stack_set_instance.go index c67b584dec04..b8d2e428830a 100644 --- a/internal/service/cloudformation/stack_set_instance.go +++ b/internal/service/cloudformation/stack_set_instance.go @@ -190,7 +190,7 @@ func resourceStackSetInstanceCreate(d *schema.ResourceData, meta interface{}) er } if v, ok := d.GetOk("operation_preferences"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.OperationPreferences = expandCloudFormationOperationPreferences(d) + input.OperationPreferences = expandCloudFormationOperationPreferences(v.([]interface{})[0].(map[string]interface{})) } log.Printf("[DEBUG] Creating CloudFormation StackSet Instance: %s", input) @@ -339,7 +339,7 @@ func resourceStackSetInstanceUpdate(d *schema.ResourceData, meta interface{}) er } if v, ok := d.GetOk("operation_preferences"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { - input.OperationPreferences = expandCloudFormationOperationPreferences(d) + 
input.OperationPreferences = expandCloudFormationOperationPreferences(v.([]interface{})[0].(map[string]interface{})) } log.Printf("[DEBUG] Updating CloudFormation StackSet Instance: %s", input) @@ -425,29 +425,3 @@ func expandCloudFormationDeploymentTargets(l []interface{}) *cloudformation.Depl return dt } - -func expandCloudFormationOperationPreferences(d *schema.ResourceData) *cloudformation.StackSetOperationPreferences { - - operationPreferences := &cloudformation.StackSetOperationPreferences{} - - if v, ok := d.GetOk("operation_preferences.0.failure_tolerance_count"); ok { - operationPreferences.FailureToleranceCount = aws.Int64(int64(v.(int))) - } - if v, ok := d.GetOk("operation_preferences.0.failure_tolerance_percentage"); ok { - operationPreferences.FailureTolerancePercentage = aws.Int64(int64(v.(int))) - } - if v, ok := d.GetOk("operation_preferences.0.max_concurrent_count"); ok { - operationPreferences.MaxConcurrentCount = aws.Int64(int64(v.(int))) - } - if v, ok := d.GetOk("operation_preferences.0.max_concurrent_percentage"); ok { - operationPreferences.MaxConcurrentPercentage = aws.Int64(int64(v.(int))) - } - if v, ok := d.GetOk("operation_preferences.0.region_concurrency_type"); ok { - operationPreferences.RegionConcurrencyType = aws.String(v.(string)) - } - if v, ok := d.GetOk("operation_preferences.0.region_order"); ok { - operationPreferences.RegionOrder = flex.ExpandStringSet(v.(*schema.Set)) - } - - return operationPreferences -} diff --git a/internal/service/cloudformation/stack_set_test.go b/internal/service/cloudformation/stack_set_test.go index 425b06dcc18a..57c08c4abce5 100644 --- a/internal/service/cloudformation/stack_set_test.go +++ b/internal/service/cloudformation/stack_set_test.go @@ -40,6 +40,7 @@ func TestAccCloudFormationStackSet_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "description", ""), resource.TestCheckResourceAttr(resourceName, "execution_role_name", "AWSCloudFormationStackSetExecutionRole"), resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "operation_preferences.#", "0"), resource.TestCheckResourceAttr(resourceName, "parameters.%", "0"), resource.TestCheckResourceAttr(resourceName, "permission_model", "SELF_MANAGED"), resource.TestMatchResourceAttr(resourceName, "stack_set_id", regexp.MustCompile(fmt.Sprintf("%s:.+", rName))), @@ -259,6 +260,43 @@ func TestAccCloudFormationStackSet_name(t *testing.T) { }) } +func TestAccCloudFormationStackSet_operationPreferences(t *testing.T) { + var stackSet cloudformation.StackSet + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_cloudformation_stack_set.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckStackSet(t) }, + ErrorCheck: acctest.ErrorCheck(t, cloudformation.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckStackSetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccStackSetOperationPreferencesConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckCloudFormationStackSetExists(resourceName, &stackSet), + resource.TestCheckResourceAttr(resourceName, "operation_preferences.#", "1"), + resource.TestCheckResourceAttr(resourceName, "operation_preferences.0.failure_tolerance_count", "1"), + resource.TestCheckResourceAttr(resourceName, "operation_preferences.0.failure_tolerance_percentage", "0"), + resource.TestCheckResourceAttr(resourceName, "operation_preferences.0.max_concurrent_count", 
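Both call sites above now hand the first `operation_preferences` block to the expander as a plain map instead of the whole `*schema.ResourceData`, which is what lets `aws_cloudformation_stack_set` and `aws_cloudformation_stack_set_instance` share one helper. The new map-based function itself is garbled out of this capture; based on the removed `d.GetOk` version and the surviving fragment further below (which ends in `apiObject.RegionOrder = flex.ExpandStringSet(v)`), it plausibly looks like the following — treat the exact guards and the set-versus-list handling of `region_order` as assumptions, not patch content.

```go
// Assumed reconstruction of the shared map-based expander. Zero values are
// treated as "unset" so optional ints/strings stay out of the API request,
// mirroring the old d.GetOk behaviour (note this means an explicit 0 for the
// count/percentage fields cannot be distinguished from "not configured").
func expandCloudFormationOperationPreferences(tfMap map[string]interface{}) *cloudformation.StackSetOperationPreferences {
	if tfMap == nil {
		return nil
	}

	apiObject := &cloudformation.StackSetOperationPreferences{}

	if v, ok := tfMap["failure_tolerance_count"].(int); ok && v > 0 {
		apiObject.FailureToleranceCount = aws.Int64(int64(v))
	}
	if v, ok := tfMap["failure_tolerance_percentage"].(int); ok && v > 0 {
		apiObject.FailureTolerancePercentage = aws.Int64(int64(v))
	}
	if v, ok := tfMap["max_concurrent_count"].(int); ok && v > 0 {
		apiObject.MaxConcurrentCount = aws.Int64(int64(v))
	}
	if v, ok := tfMap["max_concurrent_percentage"].(int); ok && v > 0 {
		apiObject.MaxConcurrentPercentage = aws.Int64(int64(v))
	}
	if v, ok := tfMap["region_concurrency_type"].(string); ok && v != "" {
		apiObject.RegionConcurrencyType = aws.String(v)
	}

	// The surviving fragment shows flex.ExpandStringSet, while the stack set
	// schema above declares region_order as a TypeList; this sketch accepts
	// either representation.
	switch v := tfMap["region_order"].(type) {
	case *schema.Set:
		if v.Len() > 0 {
			apiObject.RegionOrder = flex.ExpandStringSet(v)
		}
	case []interface{}:
		if len(v) > 0 {
			apiObject.RegionOrder = flex.ExpandStringList(v)
		}
	}

	return apiObject
}
```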
"10"), + resource.TestCheckResourceAttr(resourceName, "operation_preferences.0.max_concurrent_percentage", "0"), + resource.TestCheckResourceAttr(resourceName, "operation_preferences.0.region_concurrency_type", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "call_as", + "template_url", + "operation_preferences", + }, + }, + }, + }) +} + func TestAccCloudFormationStackSet_parameters(t *testing.T) { var stackSet1, stackSet2 cloudformation.StackSet rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -1458,3 +1496,44 @@ TEMPLATE } `, rName, testAccStackSetTemplateBodyVPC(rName)) } + +func testAccStackSetOperationPreferencesConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "test" { + assume_role_policy = < 0 { + apiObject.RegionOrder = flex.ExpandStringSet(v) + } + + return apiObject +} + func flattenCloudformationLoggingConfig(apiObject *cloudformation.LoggingConfig) map[string]interface{} { if apiObject == nil { return nil diff --git a/internal/service/cloudformation/wait.go b/internal/service/cloudformation/wait.go index 7090b7343049..ce7c77ab30f8 100644 --- a/internal/service/cloudformation/wait.go +++ b/internal/service/cloudformation/wait.go @@ -58,7 +58,7 @@ const ( func WaitStackSetOperationSucceeded(conn *cloudformation.CloudFormation, stackSetName, operationID, callAs string, timeout time.Duration) (*cloudformation.StackSetOperation, error) { stateConf := &resource.StateChangeConf{ - Pending: []string{cloudformation.StackSetOperationStatusRunning}, + Pending: []string{cloudformation.StackSetOperationStatusRunning, cloudformation.StackSetOperationStatusQueued}, Target: []string{cloudformation.StackSetOperationStatusSucceeded}, Refresh: StatusStackSetOperation(conn, stackSetName, operationID, callAs), Timeout: timeout, diff --git a/internal/service/cloudfront/distribution.go b/internal/service/cloudfront/distribution.go index aeb4242008fd..bec68a0387d6 100644 --- a/internal/service/cloudfront/distribution.go +++ b/internal/service/cloudfront/distribution.go @@ -594,6 +594,7 @@ func ResourceDistribution() *schema.Resource { "origin_path": { Type: schema.TypeString, Optional: true, + Default: "", }, "origin_shield": { Type: schema.TypeList, diff --git a/internal/service/cloudwatchlogs/subscription_filter.go b/internal/service/cloudwatchlogs/subscription_filter.go index e14cc428436e..ebeac4767444 100644 --- a/internal/service/cloudwatchlogs/subscription_filter.go +++ b/internal/service/cloudwatchlogs/subscription_filter.go @@ -45,7 +45,6 @@ func ResourceSubscriptionFilter() *schema.Resource { "filter_pattern": { Type: schema.TypeString, Required: true, - ForceNew: false, ValidateFunc: validation.StringLenBetween(0, 1024), }, "log_group_name": { diff --git a/internal/service/dlm/lifecycle_policy.go b/internal/service/dlm/lifecycle_policy.go index 5808c53393df..8b50ea339407 100644 --- a/internal/service/dlm/lifecycle_policy.go +++ b/internal/service/dlm/lifecycle_policy.go @@ -32,13 +32,14 @@ func ResourceLifecyclePolicy() *schema.Resource { Computed: true, }, "description": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringMatch(regexp.MustCompile("^[0-9A-Za-z _-]+$"), "see https://docs.aws.amazon.com/cli/latest/reference/dlm/create-lifecycle-policy.html"), - // TODO: https://docs.aws.amazon.com/dlm/latest/APIReference/API_LifecyclePolicy.html#dlm-Type-LifecyclePolicy-Description says it has max length of 500 but doesn't mention the 
regex but SDK and CLI docs only mention the regex and not max length. Check this + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.StringMatch(regexp.MustCompile("^[0-9A-Za-z _-]+$"), "see https://docs.aws.amazon.com/cli/latest/reference/dlm/create-lifecycle-policy.html"), + validation.StringLenBetween(1, 500), + ), }, "execution_role_arn": { - // TODO: Make this not required and if it's not provided then use the default service role, creating it if necessary Type: schema.TypeString, Required: true, ValidateFunc: verify.ValidARN, @@ -49,14 +50,167 @@ func ResourceLifecyclePolicy() *schema.Resource { MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "action": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cross_region_copy": { + Type: schema.TypeSet, + Required: true, + MaxItems: 3, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "encryption_configuration": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cmk_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidARN, + }, + "encrypted": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + }, + }, + }, + "retain_rule": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "interval": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntAtLeast(1), + }, + "interval_unit": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice( + dlm.RetentionIntervalUnitValues_Values(), + false, + ), + }, + }, + }, + }, + "target": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringMatch(regexp.MustCompile(`^[\w:\-\/\*]+$`), ""), + }, + }, + }, + }, + "name": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.All( + validation.StringLenBetween(0, 120), + validation.StringMatch(regexp.MustCompile("^[0-9A-Za-z _-]+$"), "see https://docs.aws.amazon.com/dlm/latest/APIReference/API_Action.html"), + ), + }, + }, + }, + }, + "event_source": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "parameters": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "description_regex": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringLenBetween(0, 1000), + }, + "event_type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(dlm.EventTypeValues_Values(), false), + }, + "snapshot_owner": { + Type: schema.TypeSet, + Required: true, + MaxItems: 50, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidAccountID, + }, + }, + }, + }, + }, + "type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(dlm.EventSourceValues_Values(), false), + }, + }, + }, + }, "resource_types": { Type: schema.TypeList, - Required: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Optional: true, + MaxItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(dlm.ResourceTypeValues_Values(), false), + }, + }, + "resource_locations": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: 
&schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(dlm.ResourceLocationValues_Values(), false), + }, + }, + "parameters": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "exclude_boot_volume": { + Type: schema.TypeBool, + Optional: true, + }, + "no_reboot": { + Type: schema.TypeBool, + Optional: true, + }, + }, + }, + }, + "policy_type": { + Type: schema.TypeString, + Optional: true, + Default: dlm.PolicyTypeValuesEbsSnapshotManagement, + ValidateFunc: validation.StringInSlice(dlm.PolicyTypeValues_Values(), false), }, "schedule": { Type: schema.TypeList, - Required: true, + Optional: true, + MinItems: 1, + MaxItems: 4, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "copy_tags": { @@ -71,18 +225,27 @@ func ResourceLifecyclePolicy() *schema.Resource { MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "cron_expression": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringMatch(regexp.MustCompile("^cron\\([^\n]{11,100}\\)$"), "see https://docs.aws.amazon.com/dlm/latest/APIReference/API_CreateRule.html"), + }, "interval": { Type: schema.TypeInt, - Required: true, + Optional: true, ValidateFunc: validation.IntInSlice([]int{1, 2, 3, 4, 6, 8, 12, 24}), }, "interval_unit": { - Type: schema.TypeString, - Optional: true, - Default: dlm.IntervalUnitValuesHours, - ValidateFunc: validation.StringInSlice([]string{ - dlm.IntervalUnitValuesHours, - }, false), + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(dlm.IntervalUnitValues_Values(), false), + }, + "location": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(dlm.LocationValues_Values(), false), }, "times": { Type: schema.TypeList, @@ -168,10 +331,71 @@ func ResourceLifecyclePolicy() *schema.Resource { }, }, }, + "deprecate_rule": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "count": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(1, 1000), + }, + "interval": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(1), + }, + "interval_unit": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice( + dlm.RetentionIntervalUnitValues_Values(), + false, + ), + }, + }, + }, + }, + "fast_restore_rule": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zones": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + MaxItems: 10, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "count": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(1, 1000), + }, + "interval": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(1), + }, + "interval_unit": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice( + dlm.RetentionIntervalUnitValues_Values(), + false, + ), + }, + }, + }, + }, "name": { Type: schema.TypeString, Required: true, - ValidateFunc: validation.StringLenBetween(0, 500), + ValidateFunc: validation.StringLenBetween(0, 120), }, "retain_rule": { Type: schema.TypeList, @@ -181,9 +405,53 @@ func ResourceLifecyclePolicy() *schema.Resource { Schema: map[string]*schema.Schema{ "count": { Type: schema.TypeInt, - Required: 
true, + Optional: true, ValidateFunc: validation.IntBetween(1, 1000), }, + "interval": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(1), + }, + "interval_unit": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice( + dlm.RetentionIntervalUnitValues_Values(), + false, + ), + }, + }, + }, + }, + "share_rule": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "target_accounts": { + Type: schema.TypeSet, + Required: true, + MinItems: 1, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: verify.ValidAccountID, + }, + }, + "unshare_interval": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(1), + }, + "unshare_interval_unit": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice( + dlm.RetentionIntervalUnitValues_Values(), + false, + ), + }, }, }, }, @@ -192,25 +460,27 @@ func ResourceLifecyclePolicy() *schema.Resource { Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, }, + "variable_tags": { + Type: schema.TypeMap, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, }, }, }, "target_tags": { Type: schema.TypeMap, - Required: true, + Optional: true, Elem: &schema.Schema{Type: schema.TypeString}, }, }, }, }, "state": { - Type: schema.TypeString, - Optional: true, - Default: dlm.SettablePolicyStateValuesEnabled, - ValidateFunc: validation.StringInSlice([]string{ - dlm.SettablePolicyStateValuesDisabled, - dlm.SettablePolicyStateValuesEnabled, - }, false), + Type: schema.TypeString, + Optional: true, + Default: dlm.SettablePolicyStateValuesEnabled, + ValidateFunc: validation.StringInSlice(dlm.SettablePolicyStateValues_Values(), false), }, "tags": tftags.TagsSchema(), "tags_all": tftags.TagsSchemaComputed(), @@ -237,12 +507,15 @@ func resourceLifecyclePolicyCreate(d *schema.ResourceData, meta interface{}) err } log.Printf("[INFO] Creating DLM lifecycle policy: %s", input) - out, err := conn.CreateLifecyclePolicy(&input) + out, err := verify.RetryOnAWSCode(dlm.ErrCodeInvalidRequestException, func() (interface{}, error) { + return conn.CreateLifecyclePolicy(&input) + }) + if err != nil { return fmt.Errorf("error creating DLM Lifecycle Policy: %s", err) } - d.SetId(aws.StringValue(out.PolicyId)) + d.SetId(aws.StringValue(out.(*dlm.CreateLifecyclePolicyOutput).PolicyId)) return resourceLifecyclePolicyRead(d, meta) } @@ -292,29 +565,24 @@ func resourceLifecyclePolicyRead(d *schema.ResourceData, meta interface{}) error func resourceLifecyclePolicyUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).DLMConn - input := dlm.UpdateLifecyclePolicyInput{ - PolicyId: aws.String(d.Id()), - } - updateLifecyclePolicy := false + if d.HasChangesExcept("tags", "tags_all") { + input := dlm.UpdateLifecyclePolicyInput{ + PolicyId: aws.String(d.Id()), + } - if d.HasChange("description") { - input.Description = aws.String(d.Get("description").(string)) - updateLifecyclePolicy = true - } - if d.HasChange("execution_role_arn") { - input.ExecutionRoleArn = aws.String(d.Get("execution_role_arn").(string)) - updateLifecyclePolicy = true - } - if d.HasChange("state") { - input.State = aws.String(d.Get("state").(string)) - updateLifecyclePolicy = true - } - if d.HasChange("policy_details") { - input.PolicyDetails = expandDlmPolicyDetails(d.Get("policy_details").([]interface{})) - updateLifecyclePolicy = true - } + if 
d.HasChange("description") { + input.Description = aws.String(d.Get("description").(string)) + } + if d.HasChange("execution_role_arn") { + input.ExecutionRoleArn = aws.String(d.Get("execution_role_arn").(string)) + } + if d.HasChange("state") { + input.State = aws.String(d.Get("state").(string)) + } + if d.HasChange("policy_details") { + input.PolicyDetails = expandDlmPolicyDetails(d.Get("policy_details").([]interface{})) + } - if updateLifecyclePolicy { log.Printf("[INFO] Updating lifecycle policy %s", d.Id()) _, err := conn.UpdateLifecyclePolicy(&input) if err != nil { @@ -340,6 +608,9 @@ func resourceLifecyclePolicyDelete(d *schema.ResourceData, meta interface{}) err PolicyId: aws.String(d.Id()), }) if err != nil { + if tfawserr.ErrCodeEquals(err, dlm.ErrCodeResourceNotFoundException) { + return nil + } return fmt.Errorf("error deleting DLM Lifecycle Policy (%s): %s", d.Id(), err) } @@ -350,17 +621,32 @@ func expandDlmPolicyDetails(cfg []interface{}) *dlm.PolicyDetails { if len(cfg) == 0 || cfg[0] == nil { return nil } - - policyDetails := &dlm.PolicyDetails{} m := cfg[0].(map[string]interface{}) - if v, ok := m["resource_types"]; ok { - policyDetails.ResourceTypes = flex.ExpandStringList(v.([]interface{})) + policyType := m["policy_type"].(string) + + policyDetails := &dlm.PolicyDetails{ + PolicyType: aws.String(policyType), + } + if v, ok := m["resource_types"].([]interface{}); ok && len(v) > 0 { + policyDetails.ResourceTypes = flex.ExpandStringList(v) + } + if v, ok := m["resource_locations"].([]interface{}); ok && len(v) > 0 { + policyDetails.ResourceLocations = flex.ExpandStringList(v) } - if v, ok := m["schedule"]; ok { - policyDetails.Schedules = expandDlmSchedules(v.([]interface{})) + if v, ok := m["schedule"].([]interface{}); ok && len(v) > 0 { + policyDetails.Schedules = expandDlmSchedules(v) } - if v, ok := m["target_tags"]; ok { - policyDetails.TargetTags = expandDlmTags(v.(map[string]interface{})) + if v, ok := m["action"].([]interface{}); ok && len(v) > 0 { + policyDetails.Actions = expandDlmActions(v) + } + if v, ok := m["event_source"].([]interface{}); ok && len(v) > 0 { + policyDetails.EventSource = expandDlmEventSource(v) + } + if v, ok := m["target_tags"].(map[string]interface{}); ok && len(v) > 0 { + policyDetails.TargetTags = expandDlmTags(v) + } + if v, ok := m["parameters"].([]interface{}); ok && len(v) > 0 { + policyDetails.Parameters = expandDlmParameters(v, policyType) } return policyDetails @@ -369,8 +655,16 @@ func expandDlmPolicyDetails(cfg []interface{}) *dlm.PolicyDetails { func flattenDlmPolicyDetails(policyDetails *dlm.PolicyDetails) []map[string]interface{} { result := make(map[string]interface{}) result["resource_types"] = flex.FlattenStringList(policyDetails.ResourceTypes) + result["resource_locations"] = flex.FlattenStringList(policyDetails.ResourceLocations) + result["action"] = flattenDlmActions(policyDetails.Actions) + result["event_source"] = flattenDlmEventSource(policyDetails.EventSource) result["schedule"] = flattenDlmSchedules(policyDetails.Schedules) result["target_tags"] = flattenDlmTags(policyDetails.TargetTags) + result["policy_type"] = aws.StringValue(policyDetails.PolicyType) + + if policyDetails.Parameters != nil { + result["parameters"] = flattenDlmParameters(policyDetails.Parameters) + } return []map[string]interface{}{result} } @@ -392,12 +686,25 @@ func expandDlmSchedules(cfg []interface{}) []*dlm.Schedule { if v, ok := m["name"]; ok { schedule.Name = aws.String(v.(string)) } + if v, ok := m["deprecate_rule"]; ok { + 
schedule.DeprecateRule = expandDlmDeprecateRule(v.([]interface{})) + } + if v, ok := m["fast_restore_rule"]; ok { + schedule.FastRestoreRule = expandDlmFastRestoreRule(v.([]interface{})) + } + if v, ok := m["share_rule"]; ok { + schedule.ShareRules = expandDlmShareRule(v.([]interface{})) + } if v, ok := m["retain_rule"]; ok { schedule.RetainRule = expandDlmRetainRule(v.([]interface{})) } if v, ok := m["tags_to_add"]; ok { schedule.TagsToAdd = expandDlmTags(v.(map[string]interface{})) } + if v, ok := m["variable_tags"]; ok { + schedule.VariableTags = expandDlmTags(v.(map[string]interface{})) + } + schedules[i] = schedule } @@ -414,12 +721,204 @@ func flattenDlmSchedules(schedules []*dlm.Schedule) []map[string]interface{} { m["name"] = aws.StringValue(s.Name) m["retain_rule"] = flattenDlmRetainRule(s.RetainRule) m["tags_to_add"] = flattenDlmTags(s.TagsToAdd) + m["variable_tags"] = flattenDlmTags(s.VariableTags) + + if s.DeprecateRule != nil { + m["deprecate_rule"] = flattenDlmDeprecateRule(s.DeprecateRule) + } + + if s.FastRestoreRule != nil { + m["fast_restore_rule"] = flattenDlmFastRestoreRule(s.FastRestoreRule) + } + + if s.ShareRules != nil { + m["share_rule"] = flattenDlmShareRule(s.ShareRules) + } + + result[i] = m + } + + return result +} + +func expandDlmActions(cfg []interface{}) []*dlm.Action { + actions := make([]*dlm.Action, len(cfg)) + for i, c := range cfg { + action := &dlm.Action{} + m := c.(map[string]interface{}) + if v, ok := m["cross_region_copy"].(*schema.Set); ok { + action.CrossRegionCopy = expandDlmActionCrossRegionCopyRules(v.List()) + } + if v, ok := m["name"]; ok { + action.Name = aws.String(v.(string)) + } + + actions[i] = action + } + + return actions +} + +func flattenDlmActions(actions []*dlm.Action) []map[string]interface{} { + result := make([]map[string]interface{}, len(actions)) + for i, s := range actions { + m := make(map[string]interface{}) + + m["name"] = aws.StringValue(s.Name) + + if s.CrossRegionCopy != nil { + m["cross_region_copy"] = flattenDlmActionCrossRegionCopyRules(s.CrossRegionCopy) + } + result[i] = m } return result } +func expandDlmActionCrossRegionCopyRules(l []interface{}) []*dlm.CrossRegionCopyAction { + if len(l) == 0 || l[0] == nil { + return nil + } + + var rules []*dlm.CrossRegionCopyAction + + for _, tfMapRaw := range l { + m, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + rule := &dlm.CrossRegionCopyAction{} + if v, ok := m["encryption_configuration"].([]interface{}); ok { + rule.EncryptionConfiguration = expandDlmActionCrossRegionCopyRuleEncryptionConfiguration(v) + } + if v, ok := m["retain_rule"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + rule.RetainRule = expandDlmCrossRegionCopyRuleRetainRule(v) + } + if v, ok := m["target"].(string); ok && v != "" { + rule.Target = aws.String(v) + } + + rules = append(rules, rule) + } + + return rules +} + +func flattenDlmActionCrossRegionCopyRules(rules []*dlm.CrossRegionCopyAction) []interface{} { + if len(rules) == 0 { + return []interface{}{} + } + + var result []interface{} + + for _, rule := range rules { + if rule == nil { + continue + } + + m := map[string]interface{}{ + "encryption_configuration": flattenDlmActionCrossRegionCopyRuleEncryptionConfiguration(rule.EncryptionConfiguration), + "retain_rule": flattenDlmCrossRegionCopyRuleRetainRule(rule.RetainRule), + "target": aws.StringValue(rule.Target), + } + + result = append(result, m) + } + + return result +} + +func expandDlmActionCrossRegionCopyRuleEncryptionConfiguration(l []interface{}) 
*dlm.EncryptionConfiguration { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + config := &dlm.EncryptionConfiguration{ + Encrypted: aws.Bool(m["encrypted"].(bool)), + } + + if v, ok := m["cmk_arn"].(string); ok && v != "" { + config.CmkArn = aws.String(v) + } + return config +} + +func flattenDlmActionCrossRegionCopyRuleEncryptionConfiguration(rule *dlm.EncryptionConfiguration) []interface{} { + if rule == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "encrypted": aws.BoolValue(rule.Encrypted), + "cmk_arn": aws.StringValue(rule.CmkArn), + } + + return []interface{}{m} +} + +func expandDlmEventSource(l []interface{}) *dlm.EventSource { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + config := &dlm.EventSource{ + Type: aws.String(m["type"].(string)), + } + + if v, ok := m["parameters"].([]interface{}); ok && len(v) > 0 { + config.Parameters = expandDlmEventSourceParameters(v) + } + + return config +} + +func flattenDlmEventSource(rule *dlm.EventSource) []interface{} { + if rule == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "parameters": flattenDlmEventSourceParameters(rule.Parameters), + "type": aws.StringValue(rule.Type), + } + + return []interface{}{m} +} + +func expandDlmEventSourceParameters(l []interface{}) *dlm.EventParameters { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + config := &dlm.EventParameters{ + DescriptionRegex: aws.String(m["description_regex"].(string)), + EventType: aws.String(m["event_type"].(string)), + SnapshotOwner: flex.ExpandStringSet(m["snapshot_owner"].(*schema.Set)), + } + + return config +} + +func flattenDlmEventSourceParameters(rule *dlm.EventParameters) []interface{} { + if rule == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "description_regex": aws.StringValue(rule.DescriptionRegex), + "event_type": aws.StringValue(rule.EventType), + "snapshot_owner": flex.FlattenStringSet(rule.SnapshotOwner), + } + + return []interface{}{m} +} + func expandDlmCrossRegionCopyRules(l []interface{}) []*dlm.CrossRegionCopyRule { if len(l) == 0 || l[0] == nil { return nil @@ -545,12 +1044,29 @@ func expandDlmCreateRule(cfg []interface{}) *dlm.CreateRule { return nil } c := cfg[0].(map[string]interface{}) - createRule := &dlm.CreateRule{ - Interval: aws.Int64(int64(c["interval"].(int))), - IntervalUnit: aws.String(c["interval_unit"].(string)), + createRule := &dlm.CreateRule{} + + if v, ok := c["times"].([]interface{}); ok && len(v) > 0 { + createRule.Times = flex.ExpandStringList(v) } - if v, ok := c["times"]; ok { - createRule.Times = flex.ExpandStringList(v.([]interface{})) + + if v, ok := c["interval"].(int); ok && v > 0 { + createRule.Interval = aws.Int64(int64(v)) + } + + if v, ok := c["location"].(string); ok && v != "" { + createRule.Location = aws.String(v) + } + + if v, ok := c["interval_unit"].(string); ok && v != "" { + createRule.IntervalUnit = aws.String(v) + } else { + createRule.IntervalUnit = aws.String(dlm.IntervalUnitValuesHours) + } + + if v, ok := c["cron_expression"].(string); ok && v != "" { + createRule.CronExpression = aws.String(v) + createRule.IntervalUnit = nil } return createRule @@ -562,10 +1078,24 @@ func flattenDlmCreateRule(createRule *dlm.CreateRule) []map[string]interface{} { } result := make(map[string]interface{}) - result["interval"] = aws.Int64Value(createRule.Interval) - result["interval_unit"] = 
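A convention worth calling out across the DLM expanders in this file (retain, deprecate, fast-restore, and the create rule below): optional schema attributes arrive as Go zero values when unset, so every expander guards with `v > 0` or `v != ""` before populating the request, keeping empty fields the DLM API would reject out of the payload. The trade-off is that a literal `0` or `""` in configuration is indistinguishable from "not set". A minimal illustration, adapted from the patch's own retain-rule expander (the function name here is mine):

```go
// Minimal illustration of the zero-as-unset convention used by the DLM
// expanders (assumes the aws and dlm packages already imported in this file).
func expandRetainRuleSketch(m map[string]interface{}) *dlm.RetainRule {
	rule := &dlm.RetainRule{}

	if v, ok := m["count"].(int); ok && v > 0 { // 0 == "not configured"
		rule.Count = aws.Int64(int64(v))
	}
	if v, ok := m["interval"].(int); ok && v > 0 {
		rule.Interval = aws.Int64(int64(v))
	}
	if v, ok := m["interval_unit"].(string); ok && v != "" {
		rule.IntervalUnit = aws.String(v) // e.g. dlm.RetentionIntervalUnitValuesDays
	}

	return rule
}
```

This is also why `count` and the `interval`/`interval_unit` pair can coexist as Optional attributes: whichever retention mode the user leaves unset simply never reaches the API.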
aws.StringValue(createRule.IntervalUnit) result["times"] = flex.FlattenStringList(createRule.Times) + if createRule.Interval != nil { + result["interval"] = aws.Int64Value(createRule.Interval) + } + + if createRule.IntervalUnit != nil { + result["interval_unit"] = aws.StringValue(createRule.IntervalUnit) + } + + if createRule.Location != nil { + result["location"] = aws.StringValue(createRule.Location) + } + + if createRule.CronExpression != nil { + result["cron_expression"] = aws.StringValue(createRule.CronExpression) + } + return []map[string]interface{}{result} } @@ -574,18 +1104,153 @@ func expandDlmRetainRule(cfg []interface{}) *dlm.RetainRule { return nil } m := cfg[0].(map[string]interface{}) - return &dlm.RetainRule{ - Count: aws.Int64(int64(m["count"].(int))), + rule := &dlm.RetainRule{} + + if v, ok := m["count"].(int); ok && v > 0 { + rule.Count = aws.Int64(int64(v)) + } + + if v, ok := m["interval"].(int); ok && v > 0 { + rule.Interval = aws.Int64(int64(v)) } + + if v, ok := m["interval_unit"].(string); ok && v != "" { + rule.IntervalUnit = aws.String(v) + } + + return rule } func flattenDlmRetainRule(retainRule *dlm.RetainRule) []map[string]interface{} { result := make(map[string]interface{}) result["count"] = aws.Int64Value(retainRule.Count) + result["interval_unit"] = aws.StringValue(retainRule.IntervalUnit) + result["interval"] = aws.Int64Value(retainRule.Interval) return []map[string]interface{}{result} } +func expandDlmDeprecateRule(cfg []interface{}) *dlm.DeprecateRule { + if len(cfg) == 0 || cfg[0] == nil { + return nil + } + m := cfg[0].(map[string]interface{}) + rule := &dlm.DeprecateRule{} + + if v, ok := m["count"].(int); ok && v > 0 { + rule.Count = aws.Int64(int64(v)) + } + + if v, ok := m["interval"].(int); ok && v > 0 { + rule.Interval = aws.Int64(int64(v)) + } + + if v, ok := m["interval_unit"].(string); ok && v != "" { + rule.IntervalUnit = aws.String(v) + } + + return rule +} + +func flattenDlmDeprecateRule(rule *dlm.DeprecateRule) []map[string]interface{} { + result := make(map[string]interface{}) + result["count"] = aws.Int64Value(rule.Count) + result["interval_unit"] = aws.StringValue(rule.IntervalUnit) + result["interval"] = aws.Int64Value(rule.Interval) + + return []map[string]interface{}{result} +} + +func expandDlmFastRestoreRule(cfg []interface{}) *dlm.FastRestoreRule { + if len(cfg) == 0 || cfg[0] == nil { + return nil + } + m := cfg[0].(map[string]interface{}) + rule := &dlm.FastRestoreRule{ + AvailabilityZones: flex.ExpandStringSet(m["availability_zones"].(*schema.Set)), + } + + if v, ok := m["count"].(int); ok && v > 0 { + rule.Count = aws.Int64(int64(v)) + } + + if v, ok := m["interval"].(int); ok && v > 0 { + rule.Interval = aws.Int64(int64(v)) + } + + if v, ok := m["interval_unit"].(string); ok && v != "" { + rule.IntervalUnit = aws.String(v) + } + + return rule +} + +func flattenDlmFastRestoreRule(rule *dlm.FastRestoreRule) []map[string]interface{} { + result := make(map[string]interface{}) + result["count"] = aws.Int64Value(rule.Count) + result["interval_unit"] = aws.StringValue(rule.IntervalUnit) + result["interval"] = aws.Int64Value(rule.Interval) + result["availability_zones"] = flex.FlattenStringSet(rule.AvailabilityZones) + + return []map[string]interface{}{result} +} + +func expandDlmShareRule(cfg []interface{}) []*dlm.ShareRule { + if len(cfg) == 0 || cfg[0] == nil { + return nil + } + + rules := make([]*dlm.ShareRule, 0) + + for _, shareRule := range cfg { + m := shareRule.(map[string]interface{}) + + rule := &dlm.ShareRule{ + 
TargetAccounts: flex.ExpandStringSet(m["target_accounts"].(*schema.Set)), + } + + if v, ok := m["unshare_interval"].(int); ok && v > 0 { + rule.UnshareInterval = aws.Int64(int64(v)) + } + + if v, ok := m["unshare_interval_unit"].(string); ok && v != "" { + rule.UnshareIntervalUnit = aws.String(v) + } + + rules = append(rules, rule) + } + + return rules +} + +func flattenDlmShareRule(rules []*dlm.ShareRule) []map[string]interface{} { + values := make([]map[string]interface{}, 0) + + for _, v := range rules { + rule := make(map[string]interface{}) + + if v == nil { + return nil + } + + if v.TargetAccounts != nil { + rule["target_accounts"] = flex.FlattenStringSet(v.TargetAccounts) + } + + if v.UnshareIntervalUnit != nil { + rule["unshare_interval_unit"] = aws.StringValue(v.UnshareIntervalUnit) + } + + if v.UnshareInterval != nil { + rule["unshare_interval"] = aws.Int64Value(v.UnshareInterval) + } + + values = append(values, rule) + } + + return values +} + func expandDlmTags(m map[string]interface{}) []*dlm.Tag { var result []*dlm.Tag for k, v := range m { @@ -606,3 +1271,34 @@ func flattenDlmTags(tags []*dlm.Tag) map[string]string { return result } + +func expandDlmParameters(cfg []interface{}, policyType string) *dlm.Parameters { + if len(cfg) == 0 || cfg[0] == nil { + return nil + } + m := cfg[0].(map[string]interface{}) + parameters := &dlm.Parameters{} + + if v, ok := m["exclude_boot_volume"].(bool); ok && policyType == dlm.PolicyTypeValuesEbsSnapshotManagement { + parameters.ExcludeBootVolume = aws.Bool(v) + } + + if v, ok := m["no_reboot"].(bool); ok && policyType == dlm.PolicyTypeValuesImageManagement { + parameters.NoReboot = aws.Bool(v) + } + + return parameters +} + +func flattenDlmParameters(parameters *dlm.Parameters) []map[string]interface{} { + result := make(map[string]interface{}) + if parameters.ExcludeBootVolume != nil { + result["exclude_boot_volume"] = aws.BoolValue(parameters.ExcludeBootVolume) + } + + if parameters.NoReboot != nil { + result["no_reboot"] = aws.BoolValue(parameters.NoReboot) + } + + return []map[string]interface{}{result} +} diff --git a/internal/service/dlm/lifecycle_policy_test.go b/internal/service/dlm/lifecycle_policy_test.go index 51768b1eada1..0dfdf309b22d 100644 --- a/internal/service/dlm/lifecycle_policy_test.go +++ b/internal/service/dlm/lifecycle_policy_test.go @@ -14,10 +14,11 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfdlm "github.com/hashicorp/terraform-provider-aws/internal/service/dlm" ) func TestAccDLMLifecyclePolicy_basic(t *testing.T) { - resourceName := "aws_dlm_lifecycle_policy.basic" + resourceName := "aws_dlm_lifecycle_policy.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ @@ -35,11 +36,16 @@ func TestAccDLMLifecyclePolicy_basic(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "execution_role_arn"), resource.TestCheckResourceAttr(resourceName, "state", "ENABLED"), resource.TestCheckResourceAttr(resourceName, "policy_details.0.resource_types.0", "VOLUME"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.policy_type", "EBS_SNAPSHOT_MANAGEMENT"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.action.#", "0"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.event_source.#", "0"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.#", 
"1"), resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.name", "tf-acc-basic"), resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.interval", "12"), resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.interval_unit", "HOURS"), resource.TestCheckResourceAttrSet(resourceName, "policy_details.0.schedule.0.create_rule.0.times.0"), resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.retain_rule.0.count", "10"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.deprecate_rule.#", "0"), resource.TestCheckResourceAttr(resourceName, "policy_details.0.target_tags.tf-acc-test", "basic"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), @@ -53,8 +59,274 @@ func TestAccDLMLifecyclePolicy_basic(t *testing.T) { }) } +func TestAccDLMLifecyclePolicy_event(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, dlm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyEventConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "dlm", regexp.MustCompile(`policy/.+`)), + resource.TestCheckResourceAttr(resourceName, "description", "tf-acc-basic"), + resource.TestCheckResourceAttrSet(resourceName, "execution_role_arn"), + resource.TestCheckResourceAttr(resourceName, "state", "ENABLED"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.resource_types.#", "0"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.policy_type", "EVENT_BASED_POLICY"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.#", "0"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.action.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.action.0.name", "tf-acc-basic"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.action.0.cross_region_copy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.action.0.cross_region_copy.0.target", acctest.AlternateRegion()), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.action.0.cross_region_copy.0.encryption_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.action.0.cross_region_copy.0.encryption_configuration.0.encrypted", "false"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.action.0.cross_region_copy.0.retain_rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.action.0.cross_region_copy.0.retain_rule.0.interval", "15"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.action.0.cross_region_copy.0.retain_rule.0.interval_unit", "MONTHS"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.event_source.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.event_source.0.type", "MANAGED_CWE"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.event_source.0.parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.event_source.0.parameters.0.description_regex", "^.*Created for policy: 
policy-1234567890abcdef0.*$"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.event_source.0.parameters.0.event_type", "shareSnapshot"), + resource.TestCheckResourceAttrPair(resourceName, "policy_details.0.event_source.0.parameters.0.snapshot_owner.0", "data.aws_caller_identity.current", "account_id"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccDLMLifecyclePolicy_cron(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, dlm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyCronConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.name", "tf-acc-basic"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.create_rule.0.cron_expression", "cron(0 18 ? * WED *)"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccDLMLifecyclePolicy_retainInterval(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, dlm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyRetainIntervalConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.retain_rule.0.interval", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.retain_rule.0.interval_unit", "DAYS"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccDLMLifecyclePolicy_deprecate(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, dlm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyDeprecateConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.parameters.#", "0"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.deprecate_rule.0.count", "10"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccDLMLifecyclePolicy_fastRestore(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: 
acctest.ErrorCheck(t, dlm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyFastRestoreConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.fast_restore_rule.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "policy_details.0.schedule.0.fast_restore_rule.0.availability_zones.#", "data.aws_availability_zones.available", "names.#"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.fast_restore_rule.0.count", "10"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccDLMLifecyclePolicy_shareRule(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, dlm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyShareRuleConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.share_rule.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "policy_details.0.schedule.0.share_rule.0.target_accounts.0", "data.aws_caller_identity.current", "account_id"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccDLMLifecyclePolicy_parameters_instance(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, dlm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyParametersInstanceConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.parameters.0.exclude_boot_volume", "false"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.parameters.0.no_reboot", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccDLMLifecyclePolicy_parameters_volume(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, dlm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyParametersVolumeConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.parameters.#", "1"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.parameters.0.exclude_boot_volume", 
"true"), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.parameters.0.no_reboot", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccDLMLifecyclePolicy_variableTags(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, dlm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyVariableTagsConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "policy_details.0.schedule.0.variable_tags.instance_id", "$(instance-id)"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccDLMLifecyclePolicy_full(t *testing.T) { - resourceName := "aws_dlm_lifecycle_policy.full" + resourceName := "aws_dlm_lifecycle_policy.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ @@ -81,6 +353,11 @@ func TestAccDLMLifecyclePolicy_full(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "policy_details.0.target_tags.tf-acc-test", "full"), ), }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, { Config: dlmLifecyclePolicyFullUpdateConfig(rName), Check: resource.ComposeTestCheckFunc( @@ -208,6 +485,29 @@ func TestAccDLMLifecyclePolicy_tags(t *testing.T) { }) } +func TestAccDLMLifecyclePolicy_disappears(t *testing.T) { + resourceName := "aws_dlm_lifecycle_policy.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, dlm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: dlmLifecyclePolicyDestroy, + Steps: []resource.TestStep{ + { + Config: dlmLifecyclePolicyBasicConfig(rName), + Check: resource.ComposeTestCheckFunc( + checkDlmLifecyclePolicyExists(resourceName), + acctest.CheckResourceDisappears(acctest.Provider, tfdlm.ResourceLifecyclePolicy(), resourceName), + acctest.CheckResourceDisappears(acctest.Provider, tfdlm.ResourceLifecyclePolicy(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + func dlmLifecyclePolicyDestroy(s *terraform.State) error { conn := acctest.Provider.Meta().(*conns.AWSClient).DLMConn @@ -277,10 +577,12 @@ func testAccPreCheck(t *testing.T) { } } -func dlmLifecyclePolicyBasicConfig(rName string) string { +func dlmLifecyclePolicyBaseConfig(rName string) string { return fmt.Sprintf(` -resource "aws_iam_role" "dlm_lifecycle_role" { - name = %q +data "aws_partition" "current" {} + +resource "aws_iam_role" "test" { + name = %[1]q assume_role_policy = < 0 { + req.Tags = Tags(tags.IgnoreAWS()) + } } if v, ok := d.GetOk("cluster_id"); ok { @@ -309,8 +348,15 @@ func resourceClusterCreate(d *schema.ResourceData, meta interface{}) error { req.Engine = aws.String(v.(string)) } - if v, ok := d.GetOk("engine_version"); ok { - req.EngineVersion = aws.String(v.(string)) + version := d.Get("engine_version").(string) + if version != "" { + req.EngineVersion = aws.String(version) + } + + if v, ok := d.GetOk("auto_minor_version_upgrade"); ok { + if v, 
null, _ := nullable.Bool(v.(string)).Value(); !null { + req.AutoMinorVersionUpgrade = aws.Bool(v) + } } if v, ok := d.GetOk("port"); ok { @@ -334,6 +380,15 @@ func resourceClusterCreate(d *schema.ResourceData, meta interface{}) error { req.SnapshotWindow = aws.String(v.(string)) } + if v, ok := d.GetOk("log_delivery_configuration"); ok { + req.LogDeliveryConfigurations = []*elasticache.LogDeliveryConfigurationRequest{} + v := v.(*schema.Set).List() + for _, v := range v { + logDeliveryConfigurationRequest := expandLogDeliveryConfigurations(v.(map[string]interface{})) + req.LogDeliveryConfigurations = append(req.LogDeliveryConfigurations, &logDeliveryConfigurationRequest) + } + } + if v, ok := d.GetOk("maintenance_window"); ok { req.PreferredMaintenanceWindow = aws.String(v.(string)) } @@ -364,7 +419,7 @@ func resourceClusterCreate(d *schema.ResourceData, meta interface{}) error { req.PreferredAvailabilityZones = flex.ExpandStringList(v.([]interface{})) } - id, err := createElasticacheCacheCluster(conn, req) + id, arn, err := createCacheCluster(conn, req) if err != nil { return fmt.Errorf("error creating ElastiCache Cache Cluster: %w", err) } @@ -376,6 +431,20 @@ func resourceClusterCreate(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("error waiting for ElastiCache Cache Cluster (%s) to be created: %w", d.Id(), err) } + // Only post-create tagging supported in some partitions + if req.Tags == nil && len(tags) > 0 { + err := UpdateTags(conn, arn, nil, tags) + + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed adding tags after create for ElastiCache Cache Cluster (%s): %w", d.Id(), err) + } + + log.Printf("[WARN] failed adding tags after create for ElastiCache Cache Cluster (%s): %s", d.Id(), err) + } + } + return resourceClusterRead(d, meta) } @@ -396,10 +465,11 @@ func resourceClusterRead(d *schema.ResourceData, meta interface{}) error { d.Set("cluster_id", c.CacheClusterId) - if err := elasticacheSetResourceDataFromCacheCluster(d, c); err != nil { + if err := setFromCacheCluster(d, c); err != nil { return err } + d.Set("log_delivery_configuration", flattenLogDeliveryConfigurations(c.LogDeliveryConfigurations)) d.Set("snapshot_window", c.SnapshotWindow) d.Set("snapshot_retention_limit", c.SnapshotRetentionLimit) @@ -437,31 +507,38 @@ func resourceClusterRead(d *schema.ResourceData, meta interface{}) error { tags, err := ListTags(conn, aws.StringValue(c.ARN)) + if err != nil && !verify.CheckISOErrorTagsUnsupported(err) { + return fmt.Errorf("error listing tags for ElastiCache Cache Cluster (%s): %w", d.Id(), err) + } + if err != nil { - return fmt.Errorf("error listing tags for ElastiCache Cluster (%s): %w", d.Id(), err) + log.Printf("[WARN] error listing tags for Elasticache Cache Cluster (%s): %s", d.Id(), err) } - tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) + if tags != nil { + tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) - //lintignore:AWSR002 - if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { - return fmt.Errorf("error setting tags: %w", err) - } + //lintignore:AWSR002 + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } - if err := d.Set("tags_all", tags.Map()); err != nil { - return fmt.Errorf("error setting tags_all: %w", err) + if err 
:= d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } } return nil } -func elasticacheSetResourceDataFromCacheCluster(d *schema.ResourceData, c *elasticache.CacheCluster) error { +func setFromCacheCluster(d *schema.ResourceData, c *elasticache.CacheCluster) error { d.Set("node_type", c.CacheNodeType) d.Set("engine", c.Engine) - if err := elasticacheSetResourceDataEngineVersionFromCacheCluster(d, c); err != nil { + if err := setEngineVersionFromCacheCluster(d, c); err != nil { return err } + d.Set("auto_minor_version_upgrade", strconv.FormatBool(aws.BoolValue(c.AutoMinorVersionUpgrade))) d.Set("subnet_group_name", c.CacheSubnetGroupName) if err := d.Set("security_group_names", flattenSecurityGroupNames(c.CacheSecurityGroups)); err != nil { @@ -480,7 +557,7 @@ func elasticacheSetResourceDataFromCacheCluster(d *schema.ResourceData, c *elast return nil } -func elasticacheSetResourceDataEngineVersionFromCacheCluster(d *schema.ResourceData, c *elasticache.CacheCluster) error { +func setEngineVersionFromCacheCluster(d *schema.ResourceData, c *elasticache.CacheCluster) error { engineVersion, err := gversion.NewVersion(aws.StringValue(c.EngineVersion)) if err != nil { return fmt.Errorf("error reading ElastiCache Cache Cluster (%s) engine version: %w", d.Id(), err) @@ -498,14 +575,6 @@ func elasticacheSetResourceDataEngineVersionFromCacheCluster(d *schema.ResourceD func resourceClusterUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).ElastiCacheConn - if d.HasChange("tags_all") { - o, n := d.GetChange("tags_all") - - if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { - return fmt.Errorf("error updating ElastiCache Cluster (%s) tags: %w", d.Get("arn").(string), err) - } - } - req := &elasticache.ModifyCacheClusterInput{ CacheClusterId: aws.String(d.Id()), ApplyImmediately: aws.Bool(d.Get("apply_immediately").(bool)), @@ -524,6 +593,32 @@ func resourceClusterUpdate(d *schema.ResourceData, meta interface{}) error { requestUpdate = true } + if d.HasChange("log_delivery_configuration") { + + oldLogDeliveryConfig, newLogDeliveryConfig := d.GetChange("log_delivery_configuration") + + req.LogDeliveryConfigurations = []*elasticache.LogDeliveryConfigurationRequest{} + logTypesToSubmit := make(map[string]bool) + + currentLogDeliveryConfig := newLogDeliveryConfig.(*schema.Set).List() + for _, current := range currentLogDeliveryConfig { + logDeliveryConfigurationRequest := expandLogDeliveryConfigurations(current.(map[string]interface{})) + logTypesToSubmit[*logDeliveryConfigurationRequest.LogType] = true + req.LogDeliveryConfigurations = append(req.LogDeliveryConfigurations, &logDeliveryConfigurationRequest) + } + + previousLogDeliveryConfig := oldLogDeliveryConfig.(*schema.Set).List() + for _, previous := range previousLogDeliveryConfig { + logDeliveryConfigurationRequest := expandEmptyLogDeliveryConfigurations(previous.(map[string]interface{})) + // if something was removed, send an empty request + if !logTypesToSubmit[*logDeliveryConfigurationRequest.LogType] { + req.LogDeliveryConfigurations = append(req.LogDeliveryConfigurations, &logDeliveryConfigurationRequest) + } + } + + requestUpdate = true + } + if d.HasChange("maintenance_window") { req.PreferredMaintenanceWindow = aws.String(d.Get("maintenance_window").(string)) requestUpdate = true @@ -544,6 +639,14 @@ func resourceClusterUpdate(d *schema.ResourceData, meta interface{}) error { requestUpdate = true } + if 
d.HasChange("auto_minor_version_upgrade") { + v := d.Get("auto_minor_version_upgrade") + if v, null, _ := nullable.Bool(v.(string)).Value(); !null { + req.AutoMinorVersionUpgrade = aws.Bool(v) + } + requestUpdate = true + } + if d.HasChange("snapshot_window") { req.SnapshotWindow = aws.String(d.Get("snapshot_window").(string)) requestUpdate = true @@ -608,6 +711,23 @@ func resourceClusterUpdate(d *schema.ResourceData, meta interface{}) error { } } + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + + err := UpdateTags(conn, d.Get("arn").(string), o, n) + + // ISO partitions may not support tagging, giving error + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed updating ElastiCache Cache Cluster (%s) tags: %w", d.Get("arn").(string), err) + } + + // no non-default tags and iso-unsupported error + log.Printf("[WARN] failed updating tags for ElastiCache Cache Cluster (%s): %s", d.Get("arn").(string), err) + } + } + return resourceClusterRead(d, meta) } @@ -671,19 +791,29 @@ func resourceClusterDelete(d *schema.ResourceData, meta interface{}) error { return nil } -func createElasticacheCacheCluster(conn *elasticache.ElastiCache, input *elasticache.CreateCacheClusterInput) (string, error) { +func createCacheCluster(conn *elasticache.ElastiCache, input *elasticache.CreateCacheClusterInput) (string, string, error) { log.Printf("[DEBUG] Creating ElastiCache Cache Cluster: %s", input) output, err := conn.CreateCacheCluster(input) + + // Some partitions may not support tag-on-create + if input.Tags != nil && verify.CheckISOErrorTagsUnsupported(err) { + log.Printf("[WARN] failed creating ElastiCache Cache Cluster with tags: %s. Trying create without tags.", err) + + input.Tags = nil + output, err = conn.CreateCacheCluster(input) + } + if err != nil { - return "", err + return "", "", err } + if output == nil || output.CacheCluster == nil { - return "", errors.New("missing cluster ID after creation") + return "", "", errors.New("missing cluster ID after creation") } // Elasticache always retains the id in lower case, so we have to // mimic that or else we won't be able to refresh a resource whose // name contained uppercase characters. 
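The new `createCacheCluster` above bakes in a fallback for partitions (notably AWS ISO regions) that reject tag-on-create. Distilled into a standalone sketch — `create` and `tagsUnsupported` are hypothetical stand-ins for `conn.CreateCacheCluster` and `verify.CheckISOErrorTagsUnsupported`, not provider code:

```go
package sketch

import (
	"github.com/aws/aws-sdk-go/service/elasticache"
)

// createWithTagFallback illustrates the pattern used by createCacheCluster:
// attempt the create with tags attached; if the partition rejects
// tag-on-create, retry once without tags and signal the caller to apply
// them afterwards (e.g. via UpdateTags).
func createWithTagFallback(
	input *elasticache.CreateCacheClusterInput,
	create func(*elasticache.CreateCacheClusterInput) (*elasticache.CreateCacheClusterOutput, error),
	tagsUnsupported func(error) bool, // stand-in for verify.CheckISOErrorTagsUnsupported
) (*elasticache.CreateCacheClusterOutput, bool, error) {
	output, err := create(input)

	if input.Tags != nil && tagsUnsupported(err) {
		// The partition cannot tag on create: drop the tags and retry.
		input.Tags = nil
		output, err = create(input)

		// true => tags still need to be applied after creation succeeds.
		return output, err == nil, err
	}

	return output, false, err
}
```

The same try-tagged-then-bare shape, paired with a post-create `UpdateTags` call, reappears for parameter groups and replication groups later in this diff.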
- return strings.ToLower(aws.StringValue(output.CacheCluster.CacheClusterId)), nil + return strings.ToLower(aws.StringValue(output.CacheCluster.CacheClusterId)), aws.StringValue(output.CacheCluster.ARN), nil } func DeleteCacheCluster(conn *elasticache.ElastiCache, cacheClusterID string, finalSnapshotID string) error { diff --git a/internal/service/elasticache/cluster_data_source.go b/internal/service/elasticache/cluster_data_source.go index b229444fe772..3626f9513f8e 100644 --- a/internal/service/elasticache/cluster_data_source.go +++ b/internal/service/elasticache/cluster_data_source.go @@ -2,6 +2,7 @@ package elasticache import ( "fmt" + "log" "strings" "github.com/aws/aws-sdk-go/aws" @@ -9,6 +10,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" ) func DataSourceCluster() *schema.Resource { @@ -74,6 +76,30 @@ func DataSourceCluster() *schema.Resource { Set: schema.HashString, }, + "log_delivery_configuration": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "destination_type": { + Type: schema.TypeString, + Computed: true, + }, + "destination": { + Type: schema.TypeString, + Computed: true, + }, + "log_format": { + Type: schema.TypeString, + Computed: true, + }, + "log_type": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, "maintenance_window": { Type: schema.TypeString, Computed: true, @@ -181,6 +207,7 @@ func dataSourceClusterRead(d *schema.ResourceData, meta interface{}) error { d.Set("replication_group_id", cluster.ReplicationGroupId) } + d.Set("log_delivery_configuration", flattenLogDeliveryConfigurations(cluster.LogDeliveryConfigurations)) d.Set("maintenance_window", cluster.PreferredMaintenanceWindow) d.Set("snapshot_window", cluster.SnapshotWindow) d.Set("snapshot_retention_limit", cluster.SnapshotRetentionLimit) @@ -206,14 +233,19 @@ func dataSourceClusterRead(d *schema.ResourceData, meta interface{}) error { tags, err := ListTags(conn, aws.StringValue(cluster.ARN)) - if err != nil { + if err != nil && !verify.CheckISOErrorTagsUnsupported(err) { return fmt.Errorf("error listing tags for Elasticache Cluster (%s): %w", d.Id(), err) } - if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { - return fmt.Errorf("error setting tags: %w", err) + if err != nil { + log.Printf("[WARN] error listing tags for Elasticache Cluster (%s): %s", d.Id(), err) } - return nil + if tags != nil { + if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + } + return nil } diff --git a/internal/service/elasticache/cluster_data_source_test.go b/internal/service/elasticache/cluster_data_source_test.go index a5146c18543a..1b0e9952cd1a 100644 --- a/internal/service/elasticache/cluster_data_source_test.go +++ b/internal/service/elasticache/cluster_data_source_test.go @@ -36,6 +36,33 @@ func TestAccElastiCacheClusterDataSource_Data_basic(t *testing.T) { }) } +func TestAccElastiCacheClusterDataSource_Engine_Redis_LogDeliveryConfigurations(t *testing.T) { + rName := sdkacctest.RandomWithPrefix("tf-acc-test") + dataSourceName := "data.aws_elasticache_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: 
acctest.ErrorCheck(t, elasticache.EndpointsID), + Providers: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccClusterConfig_Engine_Redis_LogDeliveryConfigurations(rName, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "engine", "redis"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.0.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.0.log_format", "text"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.0.log_type", "engine-log"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.1.destination", rName), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.1.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.1.log_format", "json"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.1.log_type", "slow-log"), + ), + }, + }, + }) +} + func testAccClusterWithDataSourceConfig(rName string) string { return fmt.Sprintf(` resource "aws_elasticache_cluster" "test" { diff --git a/internal/service/elasticache/cluster_test.go b/internal/service/elasticache/cluster_test.go index 077070c1be57..a90bb620fecb 100644 --- a/internal/service/elasticache/cluster_test.go +++ b/internal/service/elasticache/cluster_test.go @@ -20,11 +20,11 @@ import ( ) func init() { - acctest.RegisterServiceErrorCheckFunc(elasticache.EndpointsID, testAccErrorCheckSkipElasticache) + acctest.RegisterServiceErrorCheckFunc(elasticache.EndpointsID, testAccErrorCheckSkip) } -func testAccErrorCheckSkipElasticache(t *testing.T) resource.ErrorCheckFunc { +func testAccErrorCheckSkip(t *testing.T) resource.ErrorCheckFunc { return acctest.ErrorCheckSkipMessagesContaining(t, "is not suppored in this region", ) @@ -43,7 +43,7 @@ func TestAccElastiCacheCluster_Engine_memcached(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccClusterConfig_Engine_Memcached(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckClusterExists(resourceName, &ec), resource.TestCheckResourceAttr(resourceName, "cache_nodes.0.id", "0001"), resource.TestCheckResourceAttrSet(resourceName, "configuration_endpoint"), @@ -77,11 +77,46 @@ func TestAccElastiCacheCluster_Engine_redis(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccClusterConfig_Engine_Redis(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckClusterExists(resourceName, &ec), resource.TestCheckResourceAttr(resourceName, "cache_nodes.0.id", "0001"), resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestMatchResourceAttr(resourceName, "engine_version_actual", regexp.MustCompile(`^6\.[[:digit:]]+\.[[:digit:]]+$`)), resource.TestCheckResourceAttr(resourceName, "port", "6379"), + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + }, + }, + }, + }) +} + +func 
TestAccElastiCacheCluster_Engine_redis_v5(t *testing.T) { + var ec elasticache.CacheCluster + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_elasticache_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccClusterConfig_Engine_Redis_v5(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "engine_version_actual", "5.0.6"), + // Even though it is ignored, the API returns `true` in this case + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "true"), ), }, { @@ -763,6 +798,256 @@ func TestAccElastiCacheCluster_Redis_finalSnapshot(t *testing.T) { }) } +func TestAccElastiCacheCluster_Redis_autoMinorVersionUpgrade(t *testing.T) { + var cluster elasticache.CacheCluster + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_elasticache_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccClusterConfig_Redis_AutoMinorVersionUpgrade(rName, false), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + }, + }, + { + Config: testAccClusterConfig_Redis_AutoMinorVersionUpgrade(rName, true), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "true"), + ), + }, + }, + }) +} + +func TestAccElastiCacheCluster_Engine_Redis_LogDeliveryConfigurations(t *testing.T) { + var ec elasticache.CacheCluster + rName := sdkacctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_elasticache_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccClusterConfig_Engine_Redis_LogDeliveryConfigurations(rName, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText), + Check: resource.ComposeTestCheckFunc( + testAccCheckClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_format", "text"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "engine-log"), + 
resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_format", "text"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_type", "slow-log"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, + }, + { + Config: testAccClusterConfig_Engine_Redis_LogDeliveryConfigurations(rName, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson), + Check: resource.ComposeTestCheckFunc( + testAccCheckClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_format", "json"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "engine-log"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_format", "json"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_type", "slow-log"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, + }, + { + Config: testAccClusterConfig_Engine_Redis_LogDeliveryConfigurations(rName, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson), + Check: resource.ComposeTestCheckFunc( + testAccCheckClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_format", "text"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "slow-log"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_format", "json"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_type", "engine-log"), + ), + }, + { + Config: testAccClusterConfig_Engine_Redis_LogDeliveryConfigurations(rName, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText), + Check: resource.ComposeTestCheckFunc( + testAccCheckClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, 
"engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_format", "text"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "engine-log"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_format", "json"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_type", "slow-log"), + ), + }, + { + Config: testAccClusterConfig_Engine_Redis_LogDeliveryConfigurations(rName, false, "", "", false, "", ""), + Check: resource.ComposeTestCheckFunc( + testAccCheckClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.destination"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.destination_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.log_format"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.log_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_format"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_type"), + ), + }, + { + Config: testAccClusterConfig_Engine_Redis_LogDeliveryConfigurations(rName, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson, false, "", ""), + Check: resource.ComposeTestCheckFunc( + testAccCheckClusterExists(resourceName, &ec), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_format", "json"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "slow-log"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_format"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_type"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, + }, + }, + }) +} + +func TestAccElastiCacheCluster_tags(t *testing.T) { + var cluster elasticache.CacheCluster + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_elasticache_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + 
Providers: acctest.Providers, + CheckDestroy: testAccCheckClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccClusterConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + resource.TestCheckResourceAttr(resourceName, "tags_all.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags_all.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, //not in the API + }, + { + Config: testAccClusterConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + resource.TestCheckResourceAttr(resourceName, "tags_all.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags_all.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags_all.key2", "value2"), + ), + }, + { + Config: testAccClusterConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + resource.TestCheckResourceAttr(resourceName, "tags_all.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags_all.key2", "value2"), + ), + }, + }, + }) +} + +func TestAccElastiCacheCluster_tagWithOtherModification(t *testing.T) { + var cluster elasticache.CacheCluster + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_elasticache_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccClusterVersionAndTagConfig(rName, "5.0.4", "key1", "value1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "engine_version", "5.0.4"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + resource.TestCheckResourceAttr(resourceName, "tags_all.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags_all.key1", "value1"), + ), + }, + { + Config: testAccClusterVersionAndTagConfig(rName, "5.0.6", "key1", "value1updated"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckClusterExists(resourceName, &cluster), + resource.TestCheckResourceAttr(resourceName, "engine_version", "5.0.6"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags_all.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags_all.key1", "value1updated"), + ), + }, + }, + }) +} + func testAccCheckClusterAttributes(v *elasticache.CacheCluster) resource.TestCheckFunc { return func(s 
*terraform.State) error { if v.NotificationConfiguration == nil { @@ -897,7 +1182,7 @@ func testAccCheckClusterEC2ClassicExists(n string, v *elasticache.CacheCluster) func testAccClusterConfig_Engine_Memcached(rName string) string { return fmt.Sprintf(` resource "aws_elasticache_cluster" "test" { - cluster_id = "%s" + cluster_id = "%[1]s" engine = "memcached" node_type = "cache.t3.small" num_cache_nodes = 1 @@ -908,7 +1193,19 @@ resource "aws_elasticache_cluster" "test" { func testAccClusterConfig_Engine_Redis(rName string) string { return fmt.Sprintf(` resource "aws_elasticache_cluster" "test" { - cluster_id = "%s" + cluster_id = "%[1]s" + engine = "redis" + node_type = "cache.t3.small" + num_cache_nodes = 1 +} +`, rName) +} + +func testAccClusterConfig_Engine_Redis_v5(rName string) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "test" { + cluster_id = "%[1]s" + engine_version = "5.0.6" engine = "redis" node_type = "cache.t3.small" num_cache_nodes = 1 @@ -919,7 +1216,7 @@ resource "aws_elasticache_cluster" "test" { func testAccClusterConfig_Engine_None(rName string) string { return fmt.Sprintf(` resource "aws_elasticache_cluster" "test" { - cluster_id = "%s" + cluster_id = "%[1]s" node_type = "cache.t3.small" num_cache_nodes = 1 } @@ -1405,3 +1702,189 @@ resource "aws_elasticache_cluster" "test" { } `, rName) } + +func testAccClusterConfig_Redis_AutoMinorVersionUpgrade(rName string, enable bool) string { + return fmt.Sprintf(` +resource "aws_elasticache_cluster" "test" { + cluster_id = %[1]q + engine = "redis" + engine_version = "6.x" + node_type = "cache.t3.small" + num_cache_nodes = 1 + + auto_minor_version_upgrade = %[2]t +} +`, rName, enable) +} + +func testAccClusterConfig_Engine_Redis_LogDeliveryConfigurations(rName string, slowLogDeliveryEnabled bool, slowDeliveryDestination string, slowDeliveryFormat string, engineLogDeliveryEnabled bool, engineDeliveryDestination string, engineLogDeliveryFormat string) string { + return fmt.Sprintf(` +data "aws_iam_policy_document" "p" { + statement { + actions = [ + "logs:CreateLogStream", + "logs:PutLogEvents" + ] + resources = ["${aws_cloudwatch_log_group.lg.arn}:log-stream:*"] + principals { + identifiers = ["delivery.logs.amazonaws.com"] + type = "Service" + } + } +} + +resource "aws_cloudwatch_log_resource_policy" "rp" { + policy_document = data.aws_iam_policy_document.p.json + policy_name = "%[1]s" + depends_on = [ + aws_cloudwatch_log_group.lg + ] +} + +resource "aws_cloudwatch_log_group" "lg" { + retention_in_days = 1 + name = "%[1]s" +} + +resource "aws_s3_bucket" "b" { + force_destroy = true +} + +resource "aws_iam_role" "r" { + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Sid = "" + Principal = { + Service = "firehose.amazonaws.com" + } + }, + ] + }) + inline_policy { + name = "my_inline_s3_policy" + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = [ + "s3:AbortMultipartUpload", + "s3:GetBucketLocation", + "s3:GetObject", + "s3:ListBucket", + "s3:ListBucketMultipartUploads", + "s3:PutObject", + "s3:PutObjectAcl", + ] + Effect = "Allow" + Resource = ["${aws_s3_bucket.b.arn}", "${aws_s3_bucket.b.arn}/*"] + }, + ] + }) + } +} + +resource "aws_kinesis_firehose_delivery_stream" "ds" { + name = "%[1]s" + destination = "s3" + s3_configuration { + role_arn = aws_iam_role.r.arn + bucket_arn = aws_s3_bucket.b.arn + } + lifecycle { + ignore_changes = [ + tags["LogDeliveryEnabled"], + ] + } +} + +resource 
"aws_elasticache_cluster" "test" { + cluster_id = "%[1]s" + engine = "redis" + node_type = "cache.t3.micro" + num_cache_nodes = 1 + port = 6379 + apply_immediately = true + dynamic "log_delivery_configuration" { + for_each = tobool("%[2]t") ? [""] : [] + content { + destination = ("%[3]s" == "cloudwatch-logs") ? aws_cloudwatch_log_group.lg.name : (("%[3]s" == "kinesis-firehose") ? aws_kinesis_firehose_delivery_stream.ds.name : null) + destination_type = "%[3]s" + log_format = "%[4]s" + log_type = "slow-log" + } + } + dynamic "log_delivery_configuration" { + for_each = tobool("%[5]t") ? [""] : [] + content { + destination = ("%[6]s" == "cloudwatch-logs") ? aws_cloudwatch_log_group.lg.name : (("%[6]s" == "kinesis-firehose") ? aws_kinesis_firehose_delivery_stream.ds.name : null) + destination_type = "%[6]s" + log_format = "%[7]s" + log_type = "engine-log" + } + } +} + +data "aws_elasticache_cluster" "test" { + cluster_id = aws_elasticache_cluster.test.cluster_id +} +`, rName, slowLogDeliveryEnabled, slowDeliveryDestination, slowDeliveryFormat, engineLogDeliveryEnabled, engineDeliveryDestination, engineLogDeliveryFormat) + +} + +func testAccClusterConfigTags1(rName, tag1Key, tag1Value string) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptIn(), + fmt.Sprintf(` +resource "aws_elasticache_cluster" "test" { + cluster_id = %[1]q + engine = "memcached" + node_type = "cache.t3.small" + num_cache_nodes = 1 + + tags = { + %[2]q = %[3]q + } +} +`, rName, tag1Key, tag1Value)) +} + +func testAccClusterConfigTags2(rName, tag1Key, tag1Value, tag2Key, tag2Value string) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptIn(), + fmt.Sprintf(` +resource "aws_elasticache_cluster" "test" { + cluster_id = %[1]q + engine = "memcached" + node_type = "cache.t3.small" + num_cache_nodes = 1 + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tag1Key, tag1Value, tag2Key, tag2Value)) +} + +func testAccClusterVersionAndTagConfig(rName, version, tagKey1, tagValue1 string) string { + return acctest.ConfigCompose( + fmt.Sprintf(` +resource "aws_elasticache_cluster" "test" { + cluster_id = %[1]q + node_type = "cache.t3.small" + num_cache_nodes = 1 + engine = "redis" + engine_version = %[2]q + apply_immediately = true + + tags = { + %[3]q = %[4]q + } +} +`, rName, version, tagKey1, tagValue1), + ) +} diff --git a/internal/service/elasticache/validation.go b/internal/service/elasticache/diff.go similarity index 74% rename from internal/service/elasticache/validation.go rename to internal/service/elasticache/diff.go index 88fb21f04df3..53bd010cb95d 100644 --- a/internal/service/elasticache/validation.go +++ b/internal/service/elasticache/diff.go @@ -4,7 +4,6 @@ import ( "context" "errors" "fmt" - "regexp" "github.com/aws/aws-sdk-go/service/elasticache" multierror "github.com/hashicorp/go-multierror" @@ -12,55 +11,28 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" ) -const ( - redisVersionPreV6RegexpRaw = `[1-5](\.[[:digit:]]+){2}` - redisVersionPostV6RegexpRaw = `([6-9]|[[:digit:]]{2})\.x` - - redisVersionRegexpRaw = redisVersionPreV6RegexpRaw + "|" + redisVersionPostV6RegexpRaw -) - -const ( - redisVersionRegexpPattern = "^" + redisVersionRegexpRaw + "$" - redisVersionPostV6RegexpPattern = "^" + redisVersionPostV6RegexpRaw + "$" -) - -var ( - redisVersionRegexp = regexp.MustCompile(redisVersionRegexpPattern) - redisVersionPostV6Regexp = regexp.MustCompile(redisVersionPostV6RegexpPattern) -) - -func ValidateElastiCacheRedisVersionString(v 
interface{}, k string) (ws []string, errors []error) { - value := v.(string) - - if !redisVersionRegexp.MatchString(value) { - errors = append(errors, fmt.Errorf("%s: Redis versions must match .x when using version 6 or higher, or ..", k)) - } - - return -} - -// NormalizeElastiCacheEngineVersion returns a github.com/hashicorp/go-version Version +// NormalizeEngineVersion returns a github.com/hashicorp/go-version Version // that can handle a regular 1.2.3 version number or a 6.x version number used for // ElastiCache Redis version 6 and higher -func NormalizeElastiCacheEngineVersion(version string) (*gversion.Version, error) { +func NormalizeEngineVersion(version string) (*gversion.Version, error) { if matches := redisVersionPostV6Regexp.FindStringSubmatch(version); matches != nil { version = matches[1] } return gversion.NewVersion(version) } -// CustomizeDiffElastiCacheEngineVersion causes re-creation of the resource if the version is being downgraded -func CustomizeDiffElastiCacheEngineVersion(_ context.Context, diff *schema.ResourceDiff, v interface{}) error { +// CustomizeDiffEngineVersion causes re-creation of the resource if the version is being downgraded +func CustomizeDiffEngineVersion(_ context.Context, diff *schema.ResourceDiff, v interface{}) error { if diff.Id() == "" || !diff.HasChange("engine_version") { return nil } o, n := diff.GetChange("engine_version") - oVersion, err := NormalizeElastiCacheEngineVersion(o.(string)) + oVersion, err := NormalizeEngineVersion(o.(string)) if err != nil { return fmt.Errorf("error parsing old engine_version: %w", err) } - nVersion, err := NormalizeElastiCacheEngineVersion(n.(string)) + nVersion, err := NormalizeEngineVersion(n.(string)) if err != nil { return fmt.Errorf("error parsing new engine_version: %w", err) } @@ -97,7 +69,7 @@ func CustomizeDiffValidateClusterEngineVersion(_ context.Context, diff *schema.R if v, ok := diff.GetOk("engine"); !ok || v.(string) == engineMemcached { validator = validVersionString } else { - validator = ValidateElastiCacheRedisVersionString + validator = ValidRedisVersionString } _, errs := validator(engineVersion, "engine_version") diff --git a/internal/service/elasticache/find.go b/internal/service/elasticache/find.go index 147ba44e7da6..fd1abd549862 100644 --- a/internal/service/elasticache/find.go +++ b/internal/service/elasticache/find.go @@ -180,7 +180,7 @@ func FindGlobalReplicationGroupMemberByID(conn *elasticache.ElastiCache, globalR } } -func FindElastiCacheUserByID(conn *elasticache.ElastiCache, userID string) (*elasticache.User, error) { +func FindUserByID(conn *elasticache.ElastiCache, userID string) (*elasticache.User, error) { input := &elasticache.DescribeUsersInput{ UserId: aws.String(userID), } @@ -204,7 +204,7 @@ func FindElastiCacheUserByID(conn *elasticache.ElastiCache, userID string) (*ela } } -func FindElastiCacheUserGroupByID(conn *elasticache.ElastiCache, groupID string) (*elasticache.UserGroup, error) { +func FindUserGroupByID(conn *elasticache.ElastiCache, groupID string) (*elasticache.UserGroup, error) { input := &elasticache.DescribeUserGroupsInput{ UserGroupId: aws.String(groupID), } diff --git a/internal/service/elasticache/flex.go b/internal/service/elasticache/flex.go index f488dd17376a..90f2fb54f7b1 100644 --- a/internal/service/elasticache/flex.go +++ b/internal/service/elasticache/flex.go @@ -1,6 +1,7 @@ package elasticache import ( + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/elasticache" ) @@ -23,3 +24,62 @@ func 
flattenSecurityGroupNames(securityGroups []*elasticache.CacheSecurityGroupM } return result } + +func flattenLogDeliveryConfigurations(logDeliveryConfiguration []*elasticache.LogDeliveryConfiguration) []map[string]interface{} { + if len(logDeliveryConfiguration) == 0 { + return nil + } + + var logDeliveryConfigurations []map[string]interface{} + for _, v := range logDeliveryConfiguration { + + logDeliveryConfig := make(map[string]interface{}) + + switch aws.StringValue(v.DestinationType) { + case elasticache.DestinationTypeKinesisFirehose: + logDeliveryConfig["destination"] = aws.StringValue(v.DestinationDetails.KinesisFirehoseDetails.DeliveryStream) + case elasticache.DestinationTypeCloudwatchLogs: + logDeliveryConfig["destination"] = aws.StringValue(v.DestinationDetails.CloudWatchLogsDetails.LogGroup) + } + + logDeliveryConfig["destination_type"] = aws.StringValue(v.DestinationType) + logDeliveryConfig["log_format"] = aws.StringValue(v.LogFormat) + logDeliveryConfig["log_type"] = aws.StringValue(v.LogType) + logDeliveryConfigurations = append(logDeliveryConfigurations, logDeliveryConfig) + } + + return logDeliveryConfigurations +} + +func expandEmptyLogDeliveryConfigurations(v map[string]interface{}) elasticache.LogDeliveryConfigurationRequest { + logDeliveryConfigurationRequest := elasticache.LogDeliveryConfigurationRequest{} + logDeliveryConfigurationRequest.SetEnabled(false) + logDeliveryConfigurationRequest.SetLogType(v["log_type"].(string)) + + return logDeliveryConfigurationRequest +} + +func expandLogDeliveryConfigurations(v map[string]interface{}) elasticache.LogDeliveryConfigurationRequest { + + logDeliveryConfigurationRequest := elasticache.LogDeliveryConfigurationRequest{} + + logDeliveryConfigurationRequest.LogType = aws.String(v["log_type"].(string)) + logDeliveryConfigurationRequest.DestinationType = aws.String(v["destination_type"].(string)) + logDeliveryConfigurationRequest.LogFormat = aws.String(v["log_format"].(string)) + destinationDetails := elasticache.DestinationDetails{} + + switch v["destination_type"].(string) { + case elasticache.DestinationTypeCloudwatchLogs: + destinationDetails.CloudWatchLogsDetails = &elasticache.CloudWatchLogsDestinationDetails{ + LogGroup: aws.String(v["destination"].(string)), + } + case elasticache.DestinationTypeKinesisFirehose: + destinationDetails.KinesisFirehoseDetails = &elasticache.KinesisFirehoseDestinationDetails{ + DeliveryStream: aws.String(v["destination"].(string)), + } + } + + logDeliveryConfigurationRequest.DestinationDetails = &destinationDetails + + return logDeliveryConfigurationRequest +} diff --git a/internal/service/elasticache/global_replication_group.go b/internal/service/elasticache/global_replication_group.go index 4347b6699b69..b245b9bf7479 100644 --- a/internal/service/elasticache/global_replication_group.go +++ b/internal/service/elasticache/global_replication_group.go @@ -89,8 +89,8 @@ func ResourceGlobalReplicationGroup() *schema.Resource { "global_replication_group_description": { Type: schema.TypeString, Optional: true, - DiffSuppressFunc: elasticacheDescriptionDiffSuppress, - StateFunc: elasticacheDescriptionStateFunc, + DiffSuppressFunc: descriptionDiffSuppress, + StateFunc: descriptionStateFunc, }, // global_replication_group_members cannot be correctly implemented because any secondary // replication groups will be added after this resource completes. 
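The two `flex.go` helpers above, `expandLogDeliveryConfigurations` and `expandEmptyLogDeliveryConfigurations`, drive the same reconciliation shape in both the cluster and replication group update paths: every block still present in the new set is expanded as-is, while any log type that only appears in the old set is sent as a disabled request so ElastiCache stops delivery for it. A condensed sketch of that shape, assuming those helpers are in scope — an illustration, not code from the patch:

```go
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elasticache"
)

// diffLogDeliveryConfigurations turns old/new log_delivery_configuration
// blocks into a single slice of API requests.
func diffLogDeliveryConfigurations(oldSet, newSet []map[string]interface{}) []*elasticache.LogDeliveryConfigurationRequest {
	reqs := []*elasticache.LogDeliveryConfigurationRequest{}
	keep := make(map[string]bool)

	// Everything in the new set is submitted unchanged.
	for _, cfg := range newSet {
		r := expandLogDeliveryConfigurations(cfg)
		keep[aws.StringValue(r.LogType)] = true
		reqs = append(reqs, &r)
	}

	// Log types that disappeared are submitted as Enabled=false requests.
	for _, cfg := range oldSet {
		r := expandEmptyLogDeliveryConfigurations(cfg)
		if !keep[aws.StringValue(r.LogType)] {
			reqs = append(reqs, &r)
		}
	}

	return reqs
}
```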
@@ -128,14 +128,14 @@ func ResourceGlobalReplicationGroup() *schema.Resource { } } -func elasticacheDescriptionDiffSuppress(_, old, new string, d *schema.ResourceData) bool { +func descriptionDiffSuppress(_, old, new string, d *schema.ResourceData) bool { if (old == EmptyDescription && new == "") || (old == "" && new == EmptyDescription) { return true } return false } -func elasticacheDescriptionStateFunc(v interface{}) string { +func descriptionStateFunc(v interface{}) string { s := v.(string) if s == "" { return EmptyDescription @@ -199,7 +199,7 @@ func resourceGlobalReplicationGroupRead(d *schema.ResourceData, meta interface{} d.Set("global_replication_group_id", globalReplicationGroup.GlobalReplicationGroupId) d.Set("transit_encryption_enabled", globalReplicationGroup.TransitEncryptionEnabled) - d.Set("primary_replication_group_id", flattenElasticacheGlobalReplicationGroupPrimaryGroupID(globalReplicationGroup.Members)) + d.Set("primary_replication_group_id", flattenGlobalReplicationGroupPrimaryGroupID(globalReplicationGroup.Members)) return nil } @@ -208,7 +208,7 @@ func resourceGlobalReplicationGroupUpdate(d *schema.ResourceData, meta interface conn := meta.(*conns.AWSClient).ElastiCacheConn // Only one field can be changed per request - updaters := map[string]elasticacheGlobalReplicationGroupUpdater{} + updaters := map[string]globalReplicationGroupUpdater{} if !d.IsNewResource() { updaters["global_replication_group_description"] = func(input *elasticache.ModifyGlobalReplicationGroupInput) { input.GlobalReplicationGroupDescription = aws.String(d.Get("global_replication_group_description").(string)) @@ -217,7 +217,7 @@ func resourceGlobalReplicationGroupUpdate(d *schema.ResourceData, meta interface for k, f := range updaters { if d.HasChange(k) { - if err := updateElasticacheGlobalReplicationGroup(conn, d.Id(), f); err != nil { + if err := updateGlobalReplicationGroup(conn, d.Id(), f); err != nil { return fmt.Errorf("error updating ElastiCache Global Replication Group (%s): %w", d.Id(), err) } } @@ -226,9 +226,9 @@ func resourceGlobalReplicationGroupUpdate(d *schema.ResourceData, meta interface return resourceGlobalReplicationGroupRead(d, meta) } -type elasticacheGlobalReplicationGroupUpdater func(input *elasticache.ModifyGlobalReplicationGroupInput) +type globalReplicationGroupUpdater func(input *elasticache.ModifyGlobalReplicationGroupInput) -func updateElasticacheGlobalReplicationGroup(conn *elasticache.ElastiCache, id string, f elasticacheGlobalReplicationGroupUpdater) error { +func updateGlobalReplicationGroup(conn *elasticache.ElastiCache, id string, f globalReplicationGroupUpdater) error { input := &elasticache.ModifyGlobalReplicationGroupInput{ ApplyImmediately: aws.Bool(true), GlobalReplicationGroupId: aws.String(id), @@ -298,7 +298,7 @@ func DeleteGlobalReplicationGroup(conn *elasticache.ElastiCache, id string, read return nil } -func flattenElasticacheGlobalReplicationGroupPrimaryGroupID(members []*elasticache.GlobalReplicationGroupMember) string { +func flattenGlobalReplicationGroupPrimaryGroupID(members []*elasticache.GlobalReplicationGroupMember) string { for _, member := range members { if aws.StringValue(member.Role) == GlobalReplicationGroupMemberRolePrimary { return aws.StringValue(member.ReplicationGroupId) diff --git a/internal/service/elasticache/global_replication_group_test.go b/internal/service/elasticache/global_replication_group_test.go index 17fee1f339d8..97b3f9f33f6e 100644 --- a/internal/service/elasticache/global_replication_group_test.go +++ 
b/internal/service/elasticache/global_replication_group_test.go @@ -318,9 +318,9 @@ resource "aws_elasticache_replication_group" "test" { func testAccGlobalReplicationGroupConfig_MultipleSecondaries(rName string) string { return acctest.ConfigCompose( acctest.ConfigMultipleRegionProvider(3), - testAccElasticacheVpcBaseWithProvider(rName, "primary", acctest.ProviderName, 1), - testAccElasticacheVpcBaseWithProvider(rName, "alternate", acctest.ProviderNameAlternate, 1), - testAccElasticacheVpcBaseWithProvider(rName, "third", acctest.ProviderNameThird, 1), + testAccVPCBaseWithProvider(rName, "primary", acctest.ProviderName, 1), + testAccVPCBaseWithProvider(rName, "alternate", acctest.ProviderNameAlternate, 1), + testAccVPCBaseWithProvider(rName, "third", acctest.ProviderNameThird, 1), fmt.Sprintf(` resource "aws_elasticache_global_replication_group" "test" { provider = aws @@ -373,9 +373,9 @@ resource "aws_elasticache_replication_group" "third" { func testAccReplicationGroupConfig_ReplaceSecondary_DifferentRegion_Setup(rName string) string { return acctest.ConfigCompose( acctest.ConfigMultipleRegionProvider(3), - testAccElasticacheVpcBaseWithProvider(rName, "primary", acctest.ProviderName, 1), - testAccElasticacheVpcBaseWithProvider(rName, "secondary", acctest.ProviderNameAlternate, 1), - testAccElasticacheVpcBaseWithProvider(rName, "third", acctest.ProviderNameThird, 1), + testAccVPCBaseWithProvider(rName, "primary", acctest.ProviderName, 1), + testAccVPCBaseWithProvider(rName, "secondary", acctest.ProviderNameAlternate, 1), + testAccVPCBaseWithProvider(rName, "third", acctest.ProviderNameThird, 1), fmt.Sprintf(` resource "aws_elasticache_global_replication_group" "test" { provider = aws @@ -416,9 +416,9 @@ resource "aws_elasticache_replication_group" "secondary" { func testAccReplicationGroupConfig_ReplaceSecondary_DifferentRegion_Move(rName string) string { return acctest.ConfigCompose( acctest.ConfigMultipleRegionProvider(3), - testAccElasticacheVpcBaseWithProvider(rName, "primary", acctest.ProviderName, 1), - testAccElasticacheVpcBaseWithProvider(rName, "secondary", acctest.ProviderNameAlternate, 1), - testAccElasticacheVpcBaseWithProvider(rName, "third", acctest.ProviderNameThird, 1), + testAccVPCBaseWithProvider(rName, "primary", acctest.ProviderName, 1), + testAccVPCBaseWithProvider(rName, "secondary", acctest.ProviderNameAlternate, 1), + testAccVPCBaseWithProvider(rName, "third", acctest.ProviderNameThird, 1), fmt.Sprintf(` resource "aws_elasticache_global_replication_group" "test" { provider = aws @@ -481,7 +481,7 @@ resource "aws_elasticache_replication_group" "test" { `, rName) } -func testAccElasticacheVpcBaseWithProvider(rName, name, provider string, subnetCount int) string { +func testAccVPCBaseWithProvider(rName, name, provider string, subnetCount int) string { return acctest.ConfigCompose( testAccAvailableAZsNoOptInConfigWithProvider(name, provider), fmt.Sprintf(` diff --git a/internal/service/elasticache/parameter_group.go b/internal/service/elasticache/parameter_group.go index d078d103a1af..ee658786b152 100644 --- a/internal/service/elasticache/parameter_group.go +++ b/internal/service/elasticache/parameter_group.go @@ -86,11 +86,22 @@ func resourceParameterGroupCreate(d *schema.ResourceData, meta interface{}) erro CacheParameterGroupName: aws.String(d.Get("name").(string)), CacheParameterGroupFamily: aws.String(d.Get("family").(string)), Description: aws.String(d.Get("description").(string)), - Tags: Tags(tags.IgnoreAWS()), + } + + if len(tags) > 0 { + createOpts.Tags = 
Tags(tags.IgnoreAWS()) } log.Printf("[DEBUG] Create ElastiCache Parameter Group: %#v", createOpts) resp, err := conn.CreateCacheParameterGroup(&createOpts) + + if createOpts.Tags != nil && verify.CheckISOErrorTagsUnsupported(err) { + log.Printf("[WARN] failed creating ElastiCache Parameter Group with tags: %s. Trying create without tags.", err) + + createOpts.Tags = nil + resp, err = conn.CreateCacheParameterGroup(&createOpts) + } + if err != nil { return fmt.Errorf("error creating ElastiCache Parameter Group: %w", err) } @@ -126,23 +137,6 @@ func resourceParameterGroupRead(d *schema.ResourceData, meta interface{}) error d.Set("description", describeResp.CacheParameterGroups[0].Description) d.Set("arn", describeResp.CacheParameterGroups[0].ARN) - tags, err := ListTags(conn, aws.StringValue(describeResp.CacheParameterGroups[0].ARN)) - - if err != nil { - return fmt.Errorf("error listing tags for ElastiCache Parameter Group (%s): %w", d.Id(), err) - } - - tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) - - //lintignore:AWSR002 - if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { - return fmt.Errorf("error setting tags: %w", err) - } - - if err := d.Set("tags_all", tags.Map()); err != nil { - return fmt.Errorf("error setting tags_all: %w", err) - } - // Only include user customized parameters as there's hundreds of system/default ones describeParametersOpts := elasticache.DescribeCacheParametersInput{ CacheParameterGroupName: aws.String(d.Id()), @@ -156,20 +150,35 @@ func resourceParameterGroupRead(d *schema.ResourceData, meta interface{}) error d.Set("parameter", FlattenParameters(describeParametersResp.Parameters)) - return nil -} + tags, err := ListTags(conn, aws.StringValue(describeResp.CacheParameterGroups[0].ARN)) -func resourceParameterGroupUpdate(d *schema.ResourceData, meta interface{}) error { - conn := meta.(*conns.AWSClient).ElastiCacheConn + if err != nil && !verify.CheckISOErrorTagsUnsupported(err) { + return fmt.Errorf("error listing tags for ElastiCache Parameter Group (%s): %w", d.Id(), err) + } - if d.HasChange("tags_all") { - o, n := d.GetChange("tags_all") + if err != nil { + log.Printf("[WARN] failed listing tags for Elasticache Parameter Group (%s): %s", d.Id(), err) + } + + if tags != nil { + tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) + + //lintignore:AWSR002 + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } - if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { - return fmt.Errorf("error updating ElastiCache Parameter Group (%s) tags: %w", d.Get("arn").(string), err) + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) } } + return nil +} + +func resourceParameterGroupUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).ElastiCacheConn + if d.HasChange("parameter") { o, n := d.GetChange("parameter") toRemove, toAdd := ParameterChanges(o, n) @@ -285,6 +294,21 @@ func resourceParameterGroupUpdate(d *schema.ResourceData, meta interface{}) erro } } + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + + err := UpdateTags(conn, d.Get("arn").(string), o, n) + + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed updating 
ElastiCache Parameter Group (%s) tags: %w", d.Get("arn").(string), err) + } + + log.Printf("[WARN] failed updating tags for ElastiCache Parameter Group (%s): %s", d.Get("arn").(string), err) + } + } + return resourceParameterGroupRead(d, meta) } @@ -345,12 +369,12 @@ func ParameterChanges(o, n interface{}) (remove, addOrUpdate []*elasticache.Para om := make(map[string]*elasticache.ParameterNameValue, os.Len()) for _, raw := range os.List() { param := raw.(map[string]interface{}) - om[param["name"].(string)] = expandElastiCacheParameter(param) + om[param["name"].(string)] = expandParameter(param) } nm := make(map[string]*elasticache.ParameterNameValue, len(addOrUpdate)) for _, raw := range ns.List() { param := raw.(map[string]interface{}) - nm[param["name"].(string)] = expandElastiCacheParameter(param) + nm[param["name"].(string)] = expandParameter(param) } // Remove: key is in old, but not in new @@ -421,13 +445,13 @@ func ExpandParameters(configured []interface{}) []*elasticache.ParameterNameValu // Loop over our configured parameters and create // an array of aws-sdk-go compatible objects for i, pRaw := range configured { - parameters[i] = expandElastiCacheParameter(pRaw.(map[string]interface{})) + parameters[i] = expandParameter(pRaw.(map[string]interface{})) } return parameters } -func expandElastiCacheParameter(param map[string]interface{}) *elasticache.ParameterNameValue { +func expandParameter(param map[string]interface{}) *elasticache.ParameterNameValue { return &elasticache.ParameterNameValue{ ParameterName: aws.String(param["name"].(string)), ParameterValue: aws.String(param["value"].(string)), diff --git a/internal/service/elasticache/parameter_group_test.go b/internal/service/elasticache/parameter_group_test.go index c1ee601dcc16..56c9922c1e78 100644 --- a/internal/service/elasticache/parameter_group_test.go +++ b/internal/service/elasticache/parameter_group_test.go @@ -566,7 +566,7 @@ resource "aws_elasticache_parameter_group" "test" { `, family, rName, tagName1, tagValue1, tagName2, tagValue2) } -func TestFlattenElasticacheParameters(t *testing.T) { +func TestFlattenParameters(t *testing.T) { cases := []struct { Input []*elasticache.Parameter Output []map[string]interface{} @@ -595,7 +595,7 @@ func TestFlattenElasticacheParameters(t *testing.T) { } } -func TestExpandElasticacheParameters(t *testing.T) { +func TestExpandParameters(t *testing.T) { expanded := []interface{}{ map[string]interface{}{ "name": "activerehashing", @@ -618,7 +618,7 @@ func TestExpandElasticacheParameters(t *testing.T) { } } -func TestElastiCacheParameterChanges(t *testing.T) { +func TestParameterChanges(t *testing.T) { cases := []struct { Name string Old *schema.Set diff --git a/internal/service/elasticache/replication_group.go b/internal/service/elasticache/replication_group.go index c12ceada9acd..295a04a73d71 100644 --- a/internal/service/elasticache/replication_group.go +++ b/internal/service/elasticache/replication_group.go @@ -18,6 +18,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" @@ -59,9 +60,10 @@ func ResourceReplicationGroup() *schema.Resource { ConflictsWith: 
[]string{"user_group_ids"}, }, "auto_minor_version_upgrade": { - Type: schema.TypeBool, - Optional: true, - Default: true, + Type: nullable.TypeNullableBool, + Optional: true, + Default: "true", + ValidateFunc: nullable.ValidateTypeStringNullableBool, }, "automatic_failover_enabled": { Type: schema.TypeBool, @@ -134,7 +136,7 @@ func ResourceReplicationGroup() *schema.Resource { Type: schema.TypeString, Optional: true, Computed: true, - ValidateFunc: ValidateElastiCacheRedisVersionString, + ValidateFunc: ValidRedisVersionString, }, "engine_version_actual": { Type: schema.TypeString, @@ -159,12 +161,40 @@ func ResourceReplicationGroup() *schema.Resource { "snapshot_name", }, }, + "log_delivery_configuration": { + Type: schema.TypeSet, + Optional: true, + MaxItems: 2, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "destination_type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(elasticache.DestinationType_Values(), false), + }, + "destination": { + Type: schema.TypeString, + Required: true, + }, + "log_format": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(elasticache.LogFormat_Values(), false), + }, + "log_type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(elasticache.LogType_Values(), false), + }, + }, + }, + }, "maintenance_window": { Type: schema.TypeString, Optional: true, Computed: true, StateFunc: func(val interface{}) string { - // Elasticache always changes the maintenance to lowercase + // ElastiCache always changes the maintenance to lowercase return strings.ToLower(val.(string)) }, ValidateFunc: verify.ValidOnceAWeekWindowFormat, @@ -359,7 +389,7 @@ func ResourceReplicationGroup() *schema.Resource { CustomizeDiff: customdiff.Sequence( CustomizeDiffValidateReplicationGroupAutomaticFailover, - CustomizeDiffElastiCacheEngineVersion, + CustomizeDiffEngineVersion, customdiff.ComputedIf("member_clusters", func(ctx context.Context, diff *schema.ResourceDiff, meta interface{}) bool { return diff.HasChange("number_cache_clusters") || diff.HasChange("num_cache_clusters") || @@ -379,9 +409,11 @@ func resourceReplicationGroupCreate(d *schema.ResourceData, meta interface{}) er tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) params := &elasticache.CreateReplicationGroupInput{ - ReplicationGroupId: aws.String(d.Get("replication_group_id").(string)), - AutoMinorVersionUpgrade: aws.Bool(d.Get("auto_minor_version_upgrade").(bool)), - Tags: Tags(tags.IgnoreAWS()), + ReplicationGroupId: aws.String(d.Get("replication_group_id").(string)), + } + + if len(tags) > 0 { + params.Tags = Tags(tags.IgnoreAWS()) } if v, ok := d.GetOk("description"); ok { @@ -412,6 +444,12 @@ func resourceReplicationGroupCreate(d *schema.ResourceData, meta interface{}) er params.EngineVersion = aws.String(v.(string)) } + if v, ok := d.GetOk("auto_minor_version_upgrade"); ok { + if v, null, _ := nullable.Bool(v.(string)).Value(); !null { + params.AutoMinorVersionUpgrade = aws.Bool(v) + } + } + if preferredAZs, ok := d.GetOk("preferred_cache_cluster_azs"); ok { params.PreferredCacheClusterAZs = flex.ExpandStringList(preferredAZs.([]interface{})) } @@ -443,6 +481,15 @@ func resourceReplicationGroupCreate(d *schema.ResourceData, meta interface{}) er params.SnapshotArns = flex.ExpandStringSet(snaps) } + if v, ok := d.GetOk("log_delivery_configuration"); ok { + params.LogDeliveryConfigurations = []*elasticache.LogDeliveryConfigurationRequest{} + v := 
v.(*schema.Set).List() + for _, v := range v { + logDeliveryConfigurationRequest := expandLogDeliveryConfigurations(v.(map[string]interface{})) + params.LogDeliveryConfigurations = append(params.LogDeliveryConfigurations, &logDeliveryConfigurationRequest) + } + } + if v, ok := d.GetOk("maintenance_window"); ok { params.PreferredMaintenanceWindow = aws.String(v.(string)) } @@ -517,6 +564,14 @@ func resourceReplicationGroupCreate(d *schema.ResourceData, meta interface{}) er } resp, err := conn.CreateReplicationGroup(params) + + if params.Tags != nil && verify.CheckISOErrorTagsUnsupported(err) { + log.Printf("[WARN] failed creating ElastiCache Replication Group with tags: %s. Trying create without tags.", err) + + params.Tags = nil + resp, err = conn.CreateReplicationGroup(params) + } + if err != nil { return fmt.Errorf("error creating ElastiCache Replication Group (%s): %w", d.Get("replication_group_id").(string), err) } @@ -538,6 +593,20 @@ func resourceReplicationGroupCreate(d *schema.ResourceData, meta interface{}) er } } + // In some partitions, only post-create tagging supported + if params.Tags == nil && len(tags) > 0 { + err := UpdateTags(conn, aws.StringValue(resp.ReplicationGroup.ARN), nil, tags) + + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed adding tags after create for ElastiCache Replication Group (%s): %w", d.Id(), err) + } + + log.Printf("[WARN] failed adding tags after create for ElastiCache Replication Group (%s): %s", d.Id(), err) + } + } + return resourceReplicationGroupRead(d, meta) } @@ -596,7 +665,7 @@ func resourceReplicationGroupRead(d *schema.ResourceData, meta interface{}) erro if err := d.Set("member_clusters", flex.FlattenStringSet(rgp.MemberClusters)); err != nil { return fmt.Errorf("error setting member_clusters: %w", err) } - if err := d.Set("cluster_mode", flattenElasticacheNodeGroupsToClusterMode(rgp.NodeGroups)); err != nil { + if err := d.Set("cluster_mode", flattenNodeGroupsToClusterMode(rgp.NodeGroups)); err != nil { return fmt.Errorf("error setting cluster_mode attribute: %w", err) } @@ -608,34 +677,54 @@ func resourceReplicationGroupRead(d *schema.ResourceData, meta interface{}) erro d.Set("arn", rgp.ARN) d.Set("data_tiering_enabled", aws.StringValue(rgp.DataTiering) == elasticache.DataTieringStatusEnabled) + d.Set("log_delivery_configuration", flattenLogDeliveryConfigurations(rgp.LogDeliveryConfigurations)) + d.Set("snapshot_window", rgp.SnapshotWindow) + d.Set("snapshot_retention_limit", rgp.SnapshotRetentionLimit) + + if rgp.ConfigurationEndpoint != nil { + d.Set("port", rgp.ConfigurationEndpoint.Port) + d.Set("configuration_endpoint_address", rgp.ConfigurationEndpoint.Address) + } else { + d.Set("port", rgp.NodeGroups[0].PrimaryEndpoint.Port) + d.Set("primary_endpoint_address", rgp.NodeGroups[0].PrimaryEndpoint.Address) + d.Set("reader_endpoint_address", rgp.NodeGroups[0].ReaderEndpoint.Address) + } + + d.Set("user_group_ids", rgp.UserGroupIds) + // Tags cannot be read when the replication group is not Available _, err = WaitReplicationGroupAvailable(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) if err != nil { - return fmt.Errorf("error listing tags for resource (%s): %w", aws.StringValue(rgp.ARN), err) + return fmt.Errorf("waiting for ElastiCache Replication Group to be available (%s): %w", aws.StringValue(rgp.ARN), err) } + tags, err := ListTags(conn, 
aws.StringValue(rgp.ARN)) - if err != nil { - return fmt.Errorf("error listing tags for resource (%s): %w", aws.StringValue(rgp.ARN), err) + if err != nil && !verify.CheckISOErrorTagsUnsupported(err) { + return fmt.Errorf("listing tags for ElastiCache Replication Group (%s): %w", aws.StringValue(rgp.ARN), err) } - tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) - - //lintignore:AWSR002 - if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { - return fmt.Errorf("error setting tags: %w", err) + // tags not supported in all partitions + if err != nil { + log.Printf("[WARN] failed listing tags for ElastiCache Replication Group (%s): %s", aws.StringValue(rgp.ARN), err) } - if err := d.Set("tags_all", tags.Map()); err != nil { - return fmt.Errorf("error setting tags_all: %w", err) - } + if tags != nil { + tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) - if rgp.NodeGroups != nil { - if len(rgp.NodeGroups[0].NodeGroupMembers) == 0 { - return nil + //lintignore:AWSR002 + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) } - cacheCluster := *rgp.NodeGroups[0].NodeGroupMembers[0] // nosemgrep: prefer-aws-go-sdk-pointer-conversion-assignment // false positive + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } + } + + // This section reads settings that require checking the underlying cache clusters + if rgp.NodeGroups != nil && len(rgp.NodeGroups[0].NodeGroupMembers) != 0 { + cacheCluster := rgp.NodeGroups[0].NodeGroupMembers[0] res, err := conn.DescribeCacheClusters(&elasticache.DescribeCacheClustersInput{ CacheClusterId: cacheCluster.CacheClusterId, @@ -651,26 +740,11 @@ func resourceReplicationGroupRead(d *schema.ResourceData, meta interface{}) erro c := res.CacheClusters[0] - if err := elasticacheSetResourceDataFromCacheCluster(d, c); err != nil { + if err := setFromCacheCluster(d, c); err != nil { return err } - d.Set("snapshot_window", rgp.SnapshotWindow) - d.Set("snapshot_retention_limit", rgp.SnapshotRetentionLimit) - - if rgp.ConfigurationEndpoint != nil { - d.Set("port", rgp.ConfigurationEndpoint.Port) - d.Set("configuration_endpoint_address", rgp.ConfigurationEndpoint.Address) - } else { - d.Set("port", rgp.NodeGroups[0].PrimaryEndpoint.Port) - d.Set("primary_endpoint_address", rgp.NodeGroups[0].PrimaryEndpoint.Address) - d.Set("reader_endpoint_address", rgp.NodeGroups[0].ReaderEndpoint.Address) - } - - d.Set("user_group_ids", rgp.UserGroupIds) - d.Set("at_rest_encryption_enabled", c.AtRestEncryptionEnabled) - d.Set("auto_minor_version_upgrade", c.AutoMinorVersionUpgrade) d.Set("transit_encryption_enabled", c.TransitEncryptionEnabled) if c.AuthTokenEnabled != nil && !aws.BoolValue(c.AuthTokenEnabled) { @@ -690,18 +764,18 @@ func resourceReplicationGroupUpdate(d *schema.ResourceData, meta interface{}) er "num_node_groups", "replicas_per_node_group", ) { - err := elasticacheReplicationGroupModifyShardConfiguration(conn, d) + err := modifyReplicationGroupShardConfiguration(conn, d) if err != nil { return fmt.Errorf("error modifying ElastiCache Replication Group (%s) shard configuration: %w", d.Id(), err) } } else if d.HasChange("number_cache_clusters") { // TODO: remove when number_cache_clusters is removed from resource schema - err := elasticacheReplicationGroupModifyNumCacheClusters(conn, d, "number_cache_clusters") + err := modifyReplicationGroupNumCacheClusters(conn, d, 
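The read function now waits for the group to reach `available` before calling `ListTags`, since tag reads fail while the group is still modifying. For orientation, waiters like `WaitReplicationGroupAvailable` in this package are conventionally thin wrappers over `resource.StateChangeConf` from the plugin SDK; a hedged sketch, not the package's actual implementation (the pending states and inline refresh function are assumptions):

```go
// waitAvailable polls DescribeReplicationGroups until the group's Status
// reaches "available" or the timeout expires. Imports assumed: aws-sdk-go's
// aws and elasticache packages, terraform-plugin-sdk helper/resource, time.
func waitAvailable(conn *elasticache.ElastiCache, id string, timeout time.Duration) error {
	stateConf := &resource.StateChangeConf{
		Pending: []string{"creating", "modifying", "snapshotting"},
		Target:  []string{"available"},
		Timeout: timeout,
		Refresh: func() (interface{}, string, error) {
			out, err := conn.DescribeReplicationGroups(&elasticache.DescribeReplicationGroupsInput{
				ReplicationGroupId: aws.String(id),
			})
			if err != nil || len(out.ReplicationGroups) == 0 {
				return nil, "", err
			}
			rg := out.ReplicationGroups[0]
			return rg, aws.StringValue(rg.Status), nil
		},
	}
	_, err := stateConf.WaitForState()
	return err
}
```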
"number_cache_clusters") if err != nil { return fmt.Errorf("error modifying ElastiCache Replication Group (%s) clusters: %w", d.Id(), err) } } else if d.HasChange("num_cache_clusters") { - err := elasticacheReplicationGroupModifyNumCacheClusters(conn, d, "num_cache_clusters") + err := modifyReplicationGroupNumCacheClusters(conn, d, "num_cache_clusters") if err != nil { return fmt.Errorf("error modifying ElastiCache Replication Group (%s) clusters: %w", d.Id(), err) } @@ -729,7 +803,10 @@ func resourceReplicationGroupUpdate(d *schema.ResourceData, meta interface{}) er } if d.HasChange("auto_minor_version_upgrade") { - params.AutoMinorVersionUpgrade = aws.Bool(d.Get("auto_minor_version_upgrade").(bool)) + v := d.Get("auto_minor_version_upgrade") + if v, null, _ := nullable.Bool(v.(string)).Value(); !null { + params.AutoMinorVersionUpgrade = aws.Bool(v) + } requestUpdate = true } @@ -747,6 +824,31 @@ func resourceReplicationGroupUpdate(d *schema.ResourceData, meta interface{}) er } } + if d.HasChange("log_delivery_configuration") { + + oldLogDeliveryConfig, newLogDeliveryConfig := d.GetChange("log_delivery_configuration") + + params.LogDeliveryConfigurations = []*elasticache.LogDeliveryConfigurationRequest{} + logTypesToSubmit := make(map[string]bool) + + currentLogDeliveryConfig := newLogDeliveryConfig.(*schema.Set).List() + for _, current := range currentLogDeliveryConfig { + logDeliveryConfigurationRequest := expandLogDeliveryConfigurations(current.(map[string]interface{})) + logTypesToSubmit[*logDeliveryConfigurationRequest.LogType] = true + params.LogDeliveryConfigurations = append(params.LogDeliveryConfigurations, &logDeliveryConfigurationRequest) + } + + previousLogDeliveryConfig := oldLogDeliveryConfig.(*schema.Set).List() + for _, previous := range previousLogDeliveryConfig { + logDeliveryConfigurationRequest := expandEmptyLogDeliveryConfigurations(previous.(map[string]interface{})) + //if something was removed, send an empty request + if !logTypesToSubmit[*logDeliveryConfigurationRequest.LogType] { + params.LogDeliveryConfigurations = append(params.LogDeliveryConfigurations, &logDeliveryConfigurationRequest) + } + } + requestUpdate = true + } + if d.HasChange("maintenance_window") { params.PreferredMaintenanceWindow = aws.String(d.Get("maintenance_window").(string)) requestUpdate = true @@ -817,6 +919,11 @@ func resourceReplicationGroupUpdate(d *schema.ResourceData, meta interface{}) er if err != nil { return fmt.Errorf("error updating ElastiCache Replication Group (%s): %w", d.Id(), err) } + + _, err = WaitReplicationGroupAvailable(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for ElastiCache Replication Group (%s) to update: %w", d.Id(), err) + } } if d.HasChange("auth_token") { @@ -829,20 +936,28 @@ func resourceReplicationGroupUpdate(d *schema.ResourceData, meta interface{}) er _, err := conn.ModifyReplicationGroup(params) if err != nil { - return fmt.Errorf("error changing auth_token for Elasticache Replication Group (%s): %w", d.Id(), err) + return fmt.Errorf("error changing auth_token for ElastiCache Replication Group (%s): %w", d.Id(), err) + } + + _, err = WaitReplicationGroupAvailable(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) + if err != nil { + return fmt.Errorf("error waiting for ElastiCache Replication Group (%s) auth_token change: %w", d.Id(), err) } } if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") - if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { - return fmt.Errorf("error 
updating tags: %w", err) - } - } - _, err := WaitReplicationGroupAvailable(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) - if err != nil { - return fmt.Errorf("error waiting for modification: %w", err) + err := UpdateTags(conn, d.Get("arn").(string), o, n) + + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed updating ElastiCache Replication Group (%s) tags: %w", d.Id(), err) + } + + log.Printf("[WARN] failed updating tags for ElastiCache Replication Group (%s): %s", d.Id(), err) + } } return resourceReplicationGroupRead(d, meta) @@ -859,7 +974,7 @@ func resourceReplicationGroupDelete(d *schema.ResourceData, meta interface{}) er } var finalSnapshotID = d.Get("final_snapshot_identifier").(string) - err := deleteElasticacheReplicationGroup(d.Id(), conn, finalSnapshotID, d.Timeout(schema.TimeoutDelete)) + err := deleteReplicationGroup(d.Id(), conn, finalSnapshotID, d.Timeout(schema.TimeoutDelete)) if err != nil { return fmt.Errorf("error deleting ElastiCache Replication Group (%s): %w", d.Id(), err) } @@ -910,7 +1025,7 @@ func DisassociateReplicationGroup(conn *elasticache.ElastiCache, globalReplicati } -func deleteElasticacheReplicationGroup(replicationGroupID string, conn *elasticache.ElastiCache, finalSnapshotID string, timeout time.Duration) error { +func deleteReplicationGroup(replicationGroupID string, conn *elasticache.ElastiCache, finalSnapshotID string, timeout time.Duration) error { input := &elasticache.DeleteReplicationGroupInput{ ReplicationGroupId: aws.String(replicationGroupID), } @@ -953,7 +1068,7 @@ func deleteElasticacheReplicationGroup(replicationGroupID string, conn *elastica return nil } -func flattenElasticacheNodeGroupsToClusterMode(nodeGroups []*elasticache.NodeGroup) []map[string]interface{} { +func flattenNodeGroupsToClusterMode(nodeGroups []*elasticache.NodeGroup) []map[string]interface{} { if len(nodeGroups) == 0 { return []map[string]interface{}{} } @@ -965,30 +1080,30 @@ func flattenElasticacheNodeGroupsToClusterMode(nodeGroups []*elasticache.NodeGro return []map[string]interface{}{m} } -func elasticacheReplicationGroupModifyShardConfiguration(conn *elasticache.ElastiCache, d *schema.ResourceData) error { +func modifyReplicationGroupShardConfiguration(conn *elasticache.ElastiCache, d *schema.ResourceData) error { if d.HasChange("cluster_mode.0.num_node_groups") { - err := elasticacheReplicationGroupModifyShardConfigurationNumNodeGroups(conn, d, "cluster_mode.0.num_node_groups") + err := modifyReplicationGroupShardConfigurationNumNodeGroups(conn, d, "cluster_mode.0.num_node_groups") if err != nil { return err } } if d.HasChange("cluster_mode.0.replicas_per_node_group") { - err := elasticacheReplicationGroupModifyShardConfigurationReplicasPerNodeGroup(conn, d, "cluster_mode.0.replicas_per_node_group") + err := modifyReplicationGroupShardConfigurationReplicasPerNodeGroup(conn, d, "cluster_mode.0.replicas_per_node_group") if err != nil { return err } } if d.HasChange("num_node_groups") { - err := elasticacheReplicationGroupModifyShardConfigurationNumNodeGroups(conn, d, "num_node_groups") + err := modifyReplicationGroupShardConfigurationNumNodeGroups(conn, d, "num_node_groups") if err != nil { return err } } if d.HasChange("replicas_per_node_group") { - err := elasticacheReplicationGroupModifyShardConfigurationReplicasPerNodeGroup(conn, d, "replicas_per_node_group") + err := 
modifyReplicationGroupShardConfigurationReplicasPerNodeGroup(conn, d, "replicas_per_node_group") if err != nil { return err } @@ -997,7 +1112,7 @@ func elasticacheReplicationGroupModifyShardConfiguration(conn *elasticache.Elast return nil } -func elasticacheReplicationGroupModifyShardConfigurationNumNodeGroups(conn *elasticache.ElastiCache, d *schema.ResourceData, argument string) error { +func modifyReplicationGroupShardConfigurationNumNodeGroups(conn *elasticache.ElastiCache, d *schema.ResourceData, argument string) error { o, n := d.GetChange(argument) oldNumNodeGroups := o.(int) newNumNodeGroups := n.(int) @@ -1033,7 +1148,7 @@ func elasticacheReplicationGroupModifyShardConfigurationNumNodeGroups(conn *elas return nil } -func elasticacheReplicationGroupModifyShardConfigurationReplicasPerNodeGroup(conn *elasticache.ElastiCache, d *schema.ResourceData, argument string) error { +func modifyReplicationGroupShardConfigurationReplicasPerNodeGroup(conn *elasticache.ElastiCache, d *schema.ResourceData, argument string) error { o, n := d.GetChange(argument) oldReplicas := o.(int) newReplicas := n.(int) @@ -1071,21 +1186,21 @@ func elasticacheReplicationGroupModifyShardConfigurationReplicasPerNodeGroup(con return nil } -func elasticacheReplicationGroupModifyNumCacheClusters(conn *elasticache.ElastiCache, d *schema.ResourceData, argument string) error { +func modifyReplicationGroupNumCacheClusters(conn *elasticache.ElastiCache, d *schema.ResourceData, argument string) error { o, n := d.GetChange(argument) oldNumberCacheClusters := o.(int) newNumberCacheClusters := n.(int) var err error if newNumberCacheClusters > oldNumberCacheClusters { - err = elasticacheReplicationGroupIncreaseNumCacheClusters(conn, d.Id(), newNumberCacheClusters, d.Timeout(schema.TimeoutUpdate)) + err = increaseReplicationGroupNumCacheClusters(conn, d.Id(), newNumberCacheClusters, d.Timeout(schema.TimeoutUpdate)) } else if newNumberCacheClusters < oldNumberCacheClusters { - err = elasticacheReplicationGroupDecreaseNumCacheClusters(conn, d.Id(), newNumberCacheClusters, d.Timeout(schema.TimeoutUpdate)) + err = decreaseReplicationGroupNumCacheClusters(conn, d.Id(), newNumberCacheClusters, d.Timeout(schema.TimeoutUpdate)) } return err } -func elasticacheReplicationGroupIncreaseNumCacheClusters(conn *elasticache.ElastiCache, replicationGroupID string, newNumberCacheClusters int, timeout time.Duration) error { +func increaseReplicationGroupNumCacheClusters(conn *elasticache.ElastiCache, replicationGroupID string, newNumberCacheClusters int, timeout time.Duration) error { input := &elasticache.IncreaseReplicaCountInput{ ApplyImmediately: aws.Bool(true), NewReplicaCount: aws.Int64(int64(newNumberCacheClusters - 1)), @@ -1104,7 +1219,7 @@ func elasticacheReplicationGroupIncreaseNumCacheClusters(conn *elasticache.Elast return nil } -func elasticacheReplicationGroupDecreaseNumCacheClusters(conn *elasticache.ElastiCache, replicationGroupID string, newNumberCacheClusters int, timeout time.Duration) error { +func decreaseReplicationGroupNumCacheClusters(conn *elasticache.ElastiCache, replicationGroupID string, newNumberCacheClusters int, timeout time.Duration) error { input := &elasticache.DecreaseReplicaCountInput{ ApplyImmediately: aws.Bool(true), NewReplicaCount: aws.Int64(int64(newNumberCacheClusters - 1)), diff --git a/internal/service/elasticache/replication_group_data_source.go b/internal/service/elasticache/replication_group_data_source.go index 86b49ab56006..c855b0d16d90 100644 --- 
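Note the `- 1` in both replica-count helpers above: `num_cache_clusters` counts the primary plus all replicas, while `IncreaseReplicaCount` and `DecreaseReplicaCount` take only the replica count. In isolation (hypothetical helper name):

```go
// replicaCount converts the resource's member-cluster count (primary plus
// replicas) into the replica count the ElastiCache API expects.
func replicaCount(numCacheClusters int) int64 {
	return int64(numCacheClusters - 1) // e.g. 3 member clusters -> 2 replicas
}
```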
a/internal/service/elasticache/replication_group_data_source.go +++ b/internal/service/elasticache/replication_group_data_source.go @@ -88,6 +88,30 @@ func DataSourceReplicationGroup() *schema.Resource { Type: schema.TypeInt, Computed: true, }, + "log_delivery_configuration": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "destination_type": { + Type: schema.TypeString, + Computed: true, + }, + "destination": { + Type: schema.TypeString, + Computed: true, + }, + "log_format": { + Type: schema.TypeString, + Computed: true, + }, + "log_type": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, "snapshot_window": { Type: schema.TypeString, Computed: true, @@ -157,6 +181,7 @@ func dataSourceReplicationGroupRead(d *schema.ResourceData, meta interface{}) er d.Set("node_type", rg.CacheNodeType) d.Set("num_node_groups", len(rg.NodeGroups)) d.Set("replicas_per_node_group", len(rg.NodeGroups[0].NodeGroupMembers)-1) + d.Set("log_delivery_configuration", flattenLogDeliveryConfigurations(rg.LogDeliveryConfigurations)) d.Set("snapshot_window", rg.SnapshotWindow) d.Set("snapshot_retention_limit", rg.SnapshotRetentionLimit) return nil diff --git a/internal/service/elasticache/replication_group_data_source_test.go b/internal/service/elasticache/replication_group_data_source_test.go index 9065870350b1..f3eb9b80815c 100644 --- a/internal/service/elasticache/replication_group_data_source_test.go +++ b/internal/service/elasticache/replication_group_data_source_test.go @@ -110,6 +110,32 @@ func TestAccElastiCacheReplicationGroupDataSource_nonExistent(t *testing.T) { }) } +func TestAccElastiCacheReplicationGroupDataSource_Engine_Redis_LogDeliveryConfigurations(t *testing.T) { + rName := sdkacctest.RandomWithPrefix("tf-acc-test") + dataSourceName := "data.aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + Providers: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccReplicationGroupConfig_Engine_Redis_LogDeliveryConfigurations(rName, false, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.0.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.0.log_format", "text"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.0.log_type", "slow-log"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.1.destination", rName), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.1.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.1.log_format", "json"), + resource.TestCheckResourceAttr(dataSourceName, "log_delivery_configuration.1.log_type", "engine-log"), + ), + }, + }, + }) +} + func testAccReplicationGroupDataSourceConfig_basic(rName string) string { return acctest.ConfigAvailableAZsNoOptIn() + fmt.Sprintf(` resource "aws_elasticache_replication_group" "test" { diff --git a/internal/service/elasticache/replication_group_test.go 
b/internal/service/elasticache/replication_group_test.go index ce4d89cf9643..24daf2bd8350 100644 --- a/internal/service/elasticache/replication_group_test.go +++ b/internal/service/elasticache/replication_group_test.go @@ -46,7 +46,6 @@ func TestAccElastiCacheReplicationGroup_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "multi_az_enabled", "false"), resource.TestCheckResourceAttr(resourceName, "automatic_failover_enabled", "false"), resource.TestCheckResourceAttr(resourceName, "member_clusters.#", "1"), - resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "false"), resource.TestCheckResourceAttr(resourceName, "parameter_group_name", "default.redis6.x"), resource.TestCheckResourceAttr(resourceName, "cluster_mode.#", "1"), resource.TestCheckResourceAttr(resourceName, "cluster_mode.0.num_node_groups", "1"), @@ -54,6 +53,7 @@ func TestAccElastiCacheReplicationGroup_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "cluster_enabled", "false"), resource.TestCheckResourceAttr(resourceName, "engine_version", "6.x"), resource.TestMatchResourceAttr(resourceName, "engine_version_actual", regexp.MustCompile(`^6\.[[:digit:]]+\.[[:digit:]]+$`)), + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "true"), resource.TestCheckResourceAttr(resourceName, "data_tiering_enabled", "false"), ), }, @@ -67,6 +67,42 @@ func TestAccElastiCacheReplicationGroup_basic(t *testing.T) { }) } +func TestAccElastiCacheReplicationGroup_basic_v5(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var rg elasticache.ReplicationGroup + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccReplicationGroupConfig_v5(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "engine_version", "5.0.6"), + resource.TestCheckResourceAttr(resourceName, "engine_version_actual", "5.0.6"), + // Even though it is ignored, the API returns `true` in this case + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, //not in the API + }, + }, + }) +} + func TestAccElastiCacheReplicationGroup_uppercase(t *testing.T) { if testing.Short() { t.Skip("skipping long-running test in short mode") @@ -210,7 +246,6 @@ func TestAccElastiCacheReplicationGroup_updateDescription(t *testing.T) { testAccCheckReplicationGroupExists(resourceName, &rg), resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "1"), resource.TestCheckResourceAttr(resourceName, "replication_group_description", "test description"), - resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "false"), ), }, { @@ -225,7 +260,6 @@ func TestAccElastiCacheReplicationGroup_updateDescription(t *testing.T) { testAccCheckReplicationGroupExists(resourceName, &rg), resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "1"), 
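These assertions lean on the `engine_version` / `engine_version_actual` split: the former may be the floating alias `6.x`, while the latter records the concrete version the service resolved it to. The pattern check in the basic test behaves like this small runnable example:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// engine_version_actual must be a concrete 6.x.y patch release, never the
	// floating "6.x" alias that is accepted for engine_version.
	re := regexp.MustCompile(`^6\.[[:digit:]]+\.[[:digit:]]+$`)
	fmt.Println(re.MatchString("6.2.5")) // true
	fmt.Println(re.MatchString("6.x"))   // false
}
```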
resource.TestCheckResourceAttr(resourceName, "replication_group_description", "updated description"), - resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "true"), ), }, }, @@ -455,7 +489,6 @@ func TestAccElastiCacheReplicationGroup_vpc(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckReplicationGroupExists(resourceName, &rg), resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "1"), - resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "false"), resource.TestCheckResourceAttr(resourceName, "preferred_cache_cluster_azs.#", "1"), ), }, @@ -489,7 +522,6 @@ func TestAccElastiCacheReplicationGroup_depecatedAvailabilityZones_vpc(t *testin Check: resource.ComposeTestCheckFunc( testAccCheckReplicationGroupExists(resourceName, &rg), resource.TestCheckResourceAttr(resourceName, "number_cache_clusters", "1"), - resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "false"), resource.TestCheckResourceAttr(resourceName, "availability_zones.#", "1"), ), }, @@ -1168,6 +1200,7 @@ func TestAccElastiCacheReplicationGroup_enableAuthTokenTransitEncryption(t *test } var rg elasticache.ReplicationGroup + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_elasticache_replication_group.test" resource.ParallelTest(t, resource.TestCase{ @@ -1177,7 +1210,7 @@ func TestAccElastiCacheReplicationGroup_enableAuthTokenTransitEncryption(t *test CheckDestroy: testAccCheckReplicationDestroy, Steps: []resource.TestStep{ { - Config: testAccReplicationGroup_EnableAuthTokenTransitEncryptionConfig(sdkacctest.RandString(10), sdkacctest.RandString(16)), + Config: testAccReplicationGroup_EnableAuthTokenTransitEncryptionConfig(rName, sdkacctest.RandString(16)), Check: resource.ComposeTestCheckFunc( testAccCheckReplicationGroupExists(resourceName, &rg), resource.TestCheckResourceAttr(resourceName, "transit_encryption_enabled", "true"), @@ -1714,7 +1747,7 @@ func TestAccElastiCacheReplicationGroup_tags(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccReplicationGroupTags1Config(rName, "key1", "value1"), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckReplicationGroupExists(resourceName, &rg), testAccReplicationGroupCheckMemberClusterTags(resourceName, clusterDataSourcePrefix, 2, []kvp{ {"key1", "value1"}, @@ -1729,7 +1762,7 @@ func TestAccElastiCacheReplicationGroup_tags(t *testing.T) { }, { Config: testAccReplicationGroupTags2Config(rName, "key1", "value1updated", "key2", "value2"), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckReplicationGroupExists(resourceName, &rg), testAccReplicationGroupCheckMemberClusterTags(resourceName, clusterDataSourcePrefix, 2, []kvp{ {"key1", "value1updated"}, @@ -1739,7 +1772,7 @@ func TestAccElastiCacheReplicationGroup_tags(t *testing.T) { }, { Config: testAccReplicationGroupTags1Config(rName, "key2", "value2"), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckReplicationGroupExists(resourceName, &rg), testAccReplicationGroupCheckMemberClusterTags(resourceName, clusterDataSourcePrefix, 2, []kvp{ {"key2", "value2"}, @@ -1750,6 +1783,46 @@ func TestAccElastiCacheReplicationGroup_tags(t *testing.T) { }) } +func TestAccElastiCacheReplicationGroup_tagWithOtherModification(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var rg elasticache.ReplicationGroup 
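The switch to `resource.ComposeAggregateTestCheckFunc` in the tag tests is a small diagnostics win: the aggregate variant evaluates every check and reports all failures together, where `ComposeTestCheckFunc` stops at the first. Typical shape (the attribute names here are illustrative):

```go
// Every check below runs even if an earlier one fails, and all mismatches
// are reported in one pass; ComposeTestCheckFunc would stop at the first.
checkAll := resource.ComposeAggregateTestCheckFunc(
	testAccCheckReplicationGroupExists(resourceName, &rg),
	resource.TestCheckResourceAttr(resourceName, "tags.%", "1"),
	resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"),
)
```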
+ rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_elasticache_replication_group.test" + clusterDataSourcePrefix := "data.aws_elasticache_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccReplicationGroupVersionAndTagConfig(rName, "5.0.4", "key1", "value1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine_version", "5.0.4"), + testAccReplicationGroupCheckMemberClusterTags(resourceName, clusterDataSourcePrefix, 2, []kvp{ + {"key1", "value1"}, + }), + ), + }, + { + Config: testAccReplicationGroupVersionAndTagConfig(rName, "5.0.6", "key1", "value1updated"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine_version", "5.0.6"), + testAccReplicationGroupCheckMemberClusterTags(resourceName, clusterDataSourcePrefix, 2, []kvp{ + {"key1", "value1updated"}, + }), + ), + }, + }, + }) +} + func TestAccElastiCacheReplicationGroup_finalSnapshot(t *testing.T) { if testing.Short() { t.Skip("skipping long-running test in short mode") @@ -1776,6 +1849,47 @@ func TestAccElastiCacheReplicationGroup_finalSnapshot(t *testing.T) { }) } +func TestAccElastiCacheReplicationGroup_autoMinorVersionUpgrade(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var rg elasticache.ReplicationGroup + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckClusterDestroy, + Steps: []resource.TestStep{ + { + Config: testAccReplicationGroup_AutoMinorVersionUpgrade(rName, false), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "apply_immediately", + }, + }, + { + Config: testAccReplicationGroup_AutoMinorVersionUpgrade(rName, true), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "auto_minor_version_upgrade", "true"), + ), + }, + }, + }) +} + func TestAccElastiCacheReplicationGroup_Validation_noNodeType(t *testing.T) { var providers []*schema.Provider rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -1982,6 +2096,194 @@ func TestAccElastiCacheReplicationGroup_GlobalReplicationGroupIDClusterMode_basi }) } +func TestAccElastiCacheReplicationGroup_Engine_Redis_LogDeliveryConfigurations_ClusterMode_Disabled(t *testing.T) { + var rg elasticache.ReplicationGroup + rName := sdkacctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + 
Providers: acctest.Providers, + CheckDestroy: testAccCheckReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccReplicationGroupConfig_Engine_Redis_LogDeliveryConfigurations(rName, false, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText), + Check: resource.ComposeTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "cluster_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_format", "text"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "engine-log"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_format", "text"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_type", "slow-log"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, + }, + { + Config: testAccReplicationGroupConfig_Engine_Redis_LogDeliveryConfigurations(rName, false, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson), + Check: resource.ComposeTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "cluster_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_format", "text"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "slow-log"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_format", "json"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_type", "engine-log"), + ), + }, + { + Config: testAccReplicationGroupConfig_Engine_Redis_LogDeliveryConfigurations(rName, false, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson, false, "", ""), + Check: resource.ComposeTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(resourceName, 
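Each of these steps configures at most two `log_delivery_configuration` blocks because Redis exposes exactly two log types, which is also why the resource schema caps the set at `MaxItems: 2`. The `StringInSlice` validators draw directly from the SDK enum helpers; a sketch of what they yield, assuming the SDK vintage this diff targets:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/elasticache"
)

func main() {
	// The enum helpers behind the schema validators; the log types are the
	// reason for MaxItems: 2 on the set.
	fmt.Println(elasticache.LogType_Values())         // e.g. [slow-log engine-log]
	fmt.Println(elasticache.LogFormat_Values())       // e.g. [text json]
	fmt.Println(elasticache.DestinationType_Values()) // e.g. [cloudwatch-logs kinesis-firehose]
}
```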
"log_delivery_configuration.0.log_format", "json"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "slow-log"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_format"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_type"), + ), + }, + { + Config: testAccReplicationGroupConfig_Engine_Redis_LogDeliveryConfigurations(rName, false, false, "", "", false, "", ""), + Check: resource.ComposeTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "cluster_enabled", "false"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.destination"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.destination_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.log_format"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.log_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_format"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_type"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, + }, + }, + }) +} + +func TestAccElastiCacheReplicationGroup_Engine_Redis_LogDeliveryConfigurations_ClusterMode_Enabled(t *testing.T) { + var rg elasticache.ReplicationGroup + rName := sdkacctest.RandomWithPrefix("tf-acc-test") + resourceName := "aws_elasticache_replication_group.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticache.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccReplicationGroupConfig_Engine_Redis_LogDeliveryConfigurations(rName, true, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText), + Check: resource.ComposeTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "cluster_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.#", "1"), + resource.TestCheckResourceAttr(resourceName, "parameter_group_name", "default.redis6.x.cluster.on"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_format", "text"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "engine-log"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination", rName), + 
resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_format", "text"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_type", "slow-log"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, + }, + { + Config: testAccReplicationGroupConfig_Engine_Redis_LogDeliveryConfigurations(rName, true, true, elasticache.DestinationTypeCloudwatchLogs, elasticache.LogFormatText, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson), + Check: resource.ComposeTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "cluster_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.#", "1"), + resource.TestCheckResourceAttr(resourceName, "parameter_group_name", "default.redis6.x.cluster.on"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "cloudwatch-logs"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_format", "text"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "slow-log"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_format", "json"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.1.log_type", "engine-log"), + ), + }, + { + Config: testAccReplicationGroupConfig_Engine_Redis_LogDeliveryConfigurations(rName, true, true, elasticache.DestinationTypeKinesisFirehose, elasticache.LogFormatJson, false, "", ""), + Check: resource.ComposeTestCheckFunc( + testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "cluster_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.#", "1"), + resource.TestCheckResourceAttr(resourceName, "parameter_group_name", "default.redis6.x.cluster.on"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination", rName), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.destination_type", "kinesis-firehose"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_format", "json"), + resource.TestCheckResourceAttr(resourceName, "log_delivery_configuration.0.log_type", "slow-log"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_format"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_type"), + ), + }, + { + Config: testAccReplicationGroupConfig_Engine_Redis_LogDeliveryConfigurations(rName, true, false, "", "", false, "", ""), + Check: resource.ComposeTestCheckFunc( + 
testAccCheckReplicationGroupExists(resourceName, &rg), + resource.TestCheckResourceAttr(resourceName, "engine", "redis"), + resource.TestCheckResourceAttr(resourceName, "cluster_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "cluster_mode.#", "1"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.destination"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.destination_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.log_format"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.0.log_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.destination_type"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_format"), + resource.TestCheckNoResourceAttr(resourceName, "log_delivery_configuration.1.log_type"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"apply_immediately"}, + }, + }, + }) +} + // Test for out-of-band deletion // Naming to allow grouping all TestAccAWSElasticacheReplicationGroup_GlobalReplicationGroupId_* tests func TestAccElastiCacheReplicationGroup_GlobalReplicationGroupID_disappears(t *testing.T) { @@ -2190,13 +2492,23 @@ resource "aws_elasticache_replication_group" "test" { node_type = "cache.t3.small" port = 6379 apply_immediately = true - auto_minor_version_upgrade = false maintenance_window = "tue:06:30-tue:07:30" snapshot_window = "01:00-02:00" } `, rName) } +func testAccReplicationGroupConfig_v5(rName string) string { + return fmt.Sprintf(` +resource "aws_elasticache_replication_group" "test" { + replication_group_id = %[1]q + replication_group_description = "test description" + node_type = "cache.t3.small" + engine_version = "5.0.6" +} +`, rName) +} + func testAccReplicationGroupConfig_Uppercase(rName string) string { return acctest.ConfigCompose( acctest.ConfigVpcWithSubnets(2), @@ -2243,7 +2555,6 @@ resource "aws_elasticache_replication_group" "test" { node_type = "cache.t3.small" port = 6379 apply_immediately = true - auto_minor_version_upgrade = false maintenance_window = "tue:06:30-tue:07:30" snapshot_window = "01:00-02:00" snapshot_retention_limit = 2 @@ -2287,7 +2598,6 @@ resource "aws_elasticache_replication_group" "test" { node_type = "cache.t3.small" port = 6379 apply_immediately = true - auto_minor_version_upgrade = true } `, rName) } @@ -2300,7 +2610,6 @@ resource "aws_elasticache_replication_group" "test" { node_type = "cache.t3.small" port = 6379 apply_immediately = true - auto_minor_version_upgrade = true maintenance_window = "wed:03:00-wed:06:00" snapshot_window = "01:00-02:00" } @@ -2345,7 +2654,6 @@ resource "aws_elasticache_replication_group" "test" { node_type = "cache.t3.small" port = 6379 apply_immediately = true - auto_minor_version_upgrade = false maintenance_window = "tue:06:30-tue:07:30" snapshot_window = "01:00-02:00" transit_encryption_enabled = true @@ -2368,7 +2676,6 @@ resource "aws_elasticache_replication_group" "test" { subnet_group_name = aws_elasticache_subnet_group.test.name security_group_ids = [aws_security_group.test.id] preferred_cache_cluster_azs = [data.aws_availability_zones.available.names[0]] - auto_minor_version_upgrade = false } resource "aws_elasticache_subnet_group" "test" { @@ -2405,7 +2712,6 @@ resource 
"aws_elasticache_replication_group" "test" { subnet_group_name = aws_elasticache_subnet_group.test.name security_group_ids = [aws_security_group.test.id] availability_zones = [data.aws_availability_zones.available.names[0]] - auto_minor_version_upgrade = false } resource "aws_elasticache_subnet_group" "test" { @@ -2886,7 +3192,6 @@ resource "aws_elasticache_replication_group" "test" { number_cache_clusters = %[2]d port = 6379 apply_immediately = true - auto_minor_version_upgrade = false maintenance_window = "tue:06:30-tue:07:30" snapshot_window = "01:00-02:00" @@ -2910,7 +3215,6 @@ resource "aws_elasticache_replication_group" "test" { number_cache_clusters = %[2]d port = 6379 apply_immediately = true - auto_minor_version_upgrade = false maintenance_window = "tue:06:30-tue:07:30" snapshot_window = "01:00-02:00" @@ -2923,6 +3227,27 @@ resource "aws_elasticache_replication_group" "test" { ) } +func testAccReplicationGroupVersionAndTagConfig(rName, version, tagKey1, tagValue1 string) string { + const clusterCount = 2 + return acctest.ConfigCompose( + testAccReplicationGroupClusterData(clusterCount), + fmt.Sprintf(` +resource "aws_elasticache_replication_group" "test" { + replication_group_id = %[1]q + replication_group_description = "test description" + node_type = "cache.t3.small" + number_cache_clusters = %[2]d + apply_immediately = true + engine_version = %[3]q + + tags = { + %[4]q = %[5]q + } +} +`, rName, clusterCount, version, tagKey1, tagValue1), + ) +} + func testAccReplicationGroupClusterData(count int) string { return fmt.Sprintf(` data "aws_elasticache_cluster" "test" { @@ -2946,6 +3271,18 @@ resource "aws_elasticache_replication_group" "test" { `, rName) } +func testAccReplicationGroup_AutoMinorVersionUpgrade(rName string, enable bool) string { + return fmt.Sprintf(` +resource "aws_elasticache_replication_group" "test" { + replication_group_id = %[1]q + replication_group_description = "test description" + node_type = "cache.t3.small" + + auto_minor_version_upgrade = %[2]t +} +`, rName, enable) +} + func testAccReplicationGroupConfig_Validation_NoNodeType(rName string) string { return fmt.Sprintf(` resource "aws_elasticache_replication_group" "test" { @@ -2959,8 +3296,8 @@ resource "aws_elasticache_replication_group" "test" { func testAccReplicationGroupConfig_Validation_GlobalReplicationGroupIdAndNodeType(rName string) string { return acctest.ConfigCompose( acctest.ConfigMultipleRegionProvider(2), - testAccElasticacheVpcBaseWithProvider(rName, "test", acctest.ProviderName, 1), - testAccElasticacheVpcBaseWithProvider(rName, "primary", acctest.ProviderNameAlternate, 1), + testAccVPCBaseWithProvider(rName, "test", acctest.ProviderName, 1), + testAccVPCBaseWithProvider(rName, "primary", acctest.ProviderNameAlternate, 1), fmt.Sprintf(` resource "aws_elasticache_replication_group" "test" { provider = aws @@ -3004,8 +3341,8 @@ resource "aws_elasticache_replication_group" "primary" { func testAccReplicationGroupConfig_GlobalReplicationGroupId_Basic(rName string) string { return acctest.ConfigCompose( acctest.ConfigMultipleRegionProvider(2), - testAccElasticacheVpcBaseWithProvider(rName, "test", acctest.ProviderName, 1), - testAccElasticacheVpcBaseWithProvider(rName, "primary", acctest.ProviderNameAlternate, 1), + testAccVPCBaseWithProvider(rName, "test", acctest.ProviderName, 1), + testAccVPCBaseWithProvider(rName, "primary", acctest.ProviderNameAlternate, 1), fmt.Sprintf(` resource "aws_elasticache_replication_group" "test" { replication_group_id = "%[1]s-s" @@ -3043,8 +3380,8 @@ resource 
"aws_elasticache_replication_group" "primary" { func testAccReplicationGroupConfig_GlobalReplicationGroupId_Full(rName string, numCacheClusters int) string { return acctest.ConfigCompose( acctest.ConfigMultipleRegionProvider(2), - testAccElasticacheVpcBaseWithProvider(rName, "test", acctest.ProviderName, 2), - testAccElasticacheVpcBaseWithProvider(rName, "primary", acctest.ProviderNameAlternate, 2), + testAccVPCBaseWithProvider(rName, "test", acctest.ProviderName, 2), + testAccVPCBaseWithProvider(rName, "primary", acctest.ProviderNameAlternate, 2), fmt.Sprintf(` resource "aws_elasticache_replication_group" "test" { replication_group_id = "%[1]s-s" @@ -3095,8 +3432,8 @@ resource "aws_elasticache_replication_group" "primary" { func testAccReplicationGroupConfig_GlobalReplicationGroupId_ClusterMode(rName string, primaryReplicaCount, secondaryReplicaCount int) string { return acctest.ConfigCompose( acctest.ConfigMultipleRegionProvider(2), - testAccElasticacheVpcBaseWithProvider(rName, "test", acctest.ProviderName, 2), - testAccElasticacheVpcBaseWithProvider(rName, "primary", acctest.ProviderNameAlternate, 2), + testAccVPCBaseWithProvider(rName, "test", acctest.ProviderName, 2), + testAccVPCBaseWithProvider(rName, "primary", acctest.ProviderNameAlternate, 2), fmt.Sprintf(` resource "aws_elasticache_replication_group" "test" { replication_group_id = "%[1]s-s" @@ -3145,8 +3482,8 @@ resource "aws_elasticache_replication_group" "primary" { func testAccReplicationGroupConfig_GlobalReplicationGroupId_ClusterMode_NumNodeGroupsOnSecondary(rName string) string { return acctest.ConfigCompose( acctest.ConfigMultipleRegionProvider(2), - testAccElasticacheVpcBaseWithProvider(rName, "test", acctest.ProviderName, 2), - testAccElasticacheVpcBaseWithProvider(rName, "primary", acctest.ProviderNameAlternate, 2), + testAccVPCBaseWithProvider(rName, "test", acctest.ProviderName, 2), + testAccVPCBaseWithProvider(rName, "primary", acctest.ProviderNameAlternate, 2), fmt.Sprintf(` resource "aws_elasticache_replication_group" "test" { replication_group_id = "%[1]s-s" @@ -3205,7 +3542,6 @@ resource "aws_elasticache_replication_group" "test" { port = 6379 subnet_group_name = aws_elasticache_subnet_group.test.name security_group_ids = [aws_security_group.test.id] - auto_minor_version_upgrade = false } resource "aws_elasticache_subnet_group" "test" { @@ -3229,6 +3565,133 @@ resource "aws_security_group" "test" { ) } +func testAccReplicationGroupConfig_Engine_Redis_LogDeliveryConfigurations(rName string, enableClusterMode bool, slowLogDeliveryEnabled bool, slowDeliveryDestination string, slowDeliveryFormat string, engineLogDeliveryEnabled bool, engineDeliveryDestination string, engineLogDeliveryFormat string) string { + return fmt.Sprintf(` +data "aws_iam_policy_document" "p" { + statement { + actions = [ + "logs:CreateLogStream", + "logs:PutLogEvents" + ] + resources = ["${aws_cloudwatch_log_group.lg.arn}:log-stream:*"] + principals { + identifiers = ["delivery.logs.amazonaws.com"] + type = "Service" + } + } +} + +resource "aws_cloudwatch_log_resource_policy" "rp" { + policy_document = data.aws_iam_policy_document.p.json + policy_name = "%[1]s" + depends_on = [ + aws_cloudwatch_log_group.lg + ] +} + +resource "aws_cloudwatch_log_group" "lg" { + retention_in_days = 1 + name = "%[1]s" +} + +resource "aws_s3_bucket" "b" { + force_destroy = true +} + +resource "aws_iam_role" "r" { + assume_role_policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Sid = "" + 
Principal = { + Service = "firehose.amazonaws.com" + } + }, + ] + }) + inline_policy { + name = "my_inline_s3_policy" + policy = jsonencode({ + Version = "2012-10-17" + Statement = [ + { + Action = [ + "s3:AbortMultipartUpload", + "s3:GetBucketLocation", + "s3:GetObject", + "s3:ListBucket", + "s3:ListBucketMultipartUploads", + "s3:PutObject", + "s3:PutObjectAcl", + ] + Effect = "Allow" + Resource = ["${aws_s3_bucket.b.arn}", "${aws_s3_bucket.b.arn}/*"] + }, + ] + }) + } +} + +resource "aws_kinesis_firehose_delivery_stream" "ds" { + name = "%[1]s" + destination = "s3" + s3_configuration { + role_arn = aws_iam_role.r.arn + bucket_arn = aws_s3_bucket.b.arn + } + lifecycle { + ignore_changes = [ + tags["LogDeliveryEnabled"], + ] + } +} + +resource "aws_elasticache_replication_group" "test" { + replication_group_id = "%[1]s" + replication_group_description = "test description" + node_type = "cache.t3.small" + port = 6379 + apply_immediately = true + maintenance_window = "tue:06:30-tue:07:30" + snapshot_window = "01:00-02:00" + parameter_group_name = tobool("%[2]t") ? "default.redis6.x.cluster.on" : "default.redis6.x" + automatic_failover_enabled = tobool("%[2]t") + dynamic "cluster_mode" { + for_each = tobool("%[2]t") ? [""] : [] + content { + num_node_groups = 1 + replicas_per_node_group = 0 + } + } + dynamic "log_delivery_configuration" { + for_each = tobool("%[3]t") ? [""] : [] + content { + destination = ("%[4]s" == "cloudwatch-logs") ? aws_cloudwatch_log_group.lg.name : (("%[4]s" == "kinesis-firehose") ? aws_kinesis_firehose_delivery_stream.ds.name : null) + destination_type = "%[4]s" + log_format = "%[5]s" + log_type = "slow-log" + } + } + dynamic "log_delivery_configuration" { + for_each = tobool("%[6]t") ? [""] : [] + content { + destination = ("%[7]s" == "cloudwatch-logs") ? aws_cloudwatch_log_group.lg.name : (("%[7]s" == "kinesis-firehose") ? 
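This config builder smuggles Go booleans into HCL by rendering them with `%t` and re-parsing them with `tobool()`, so a single template can toggle each `dynamic` block on or off. A sketch of the Go side of that round trip:

```go
package main

import "fmt"

func main() {
	slowLogEnabled := true
	// %t renders the bool as "true"/"false"; the generated HCL wraps it in
	// tobool(...) so the for_each ternary gets a real boolean back.
	fmt.Printf("for_each = tobool(%q) ? [\"\"] : []\n",
		fmt.Sprintf("%t", slowLogEnabled))
	// Output: for_each = tobool("true") ? [""] : []
}
```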
aws_kinesis_firehose_delivery_stream.ds.name : null) + destination_type = "%[7]s" + log_format = "%[8]s" + log_type = "engine-log" + } + } +} + +data "aws_elasticache_replication_group" "test" { + replication_group_id = aws_elasticache_replication_group.test.replication_group_id +} +`, rName, enableClusterMode, slowLogDeliveryEnabled, slowDeliveryDestination, slowDeliveryFormat, engineLogDeliveryEnabled, engineDeliveryDestination, engineLogDeliveryFormat) +} + func resourceReplicationGroupDisableAutomaticFailover(conn *elasticache.ElastiCache, replicationGroupID string, timeout time.Duration) error { return resourceReplicationGroupModify(conn, timeout, &elasticache.ModifyReplicationGroupInput{ ReplicationGroupId: aws.String(replicationGroupID), diff --git a/internal/service/elasticache/status.go b/internal/service/elasticache/status.go index a14f37610b35..2432bec78430 100644 --- a/internal/service/elasticache/status.go +++ b/internal/service/elasticache/status.go @@ -132,7 +132,7 @@ func StatusGlobalReplicationGroupMember(conn *elasticache.ElastiCache, globalRep // StatusUser fetches the ElastiCache user and its Status func StatusUser(conn *elasticache.ElastiCache, userId string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - user, err := FindElastiCacheUserByID(conn, userId) + user, err := FindUserByID(conn, userId) if tfresource.NotFound(err) { return nil, "", nil diff --git a/internal/service/elasticache/subnet_group.go b/internal/service/elasticache/subnet_group.go index b9c7655e1c80..d22b7d883944 100644 --- a/internal/service/elasticache/subnet_group.go +++ b/internal/service/elasticache/subnet_group.go @@ -95,12 +95,23 @@ func resourceSubnetGroupCreate(d *schema.ResourceData, meta interface{}) error { CacheSubnetGroupDescription: aws.String(desc), CacheSubnetGroupName: aws.String(name), SubnetIds: subnetIds, - Tags: Tags(tags.IgnoreAWS()), } - _, err := conn.CreateCacheSubnetGroup(req) + if len(tags) > 0 { + req.Tags = Tags(tags.IgnoreAWS()) + } + + output, err := conn.CreateCacheSubnetGroup(req) + + if req.Tags != nil && verify.CheckISOErrorTagsUnsupported(err) { + log.Printf("[WARN] failed creating ElastiCache Subnet Group with tags: %s. Trying create without tags.", err) + + req.Tags = nil + output, err = conn.CreateCacheSubnetGroup(req) + } + if err != nil { - return fmt.Errorf("error creating ElastiCache Subnet Group (%s): %w", name, err) + return fmt.Errorf("creating ElastiCache Subnet Group (%s): %w", name, err) } // Assign the group name as the resource ID @@ -109,6 +120,20 @@ func resourceSubnetGroupCreate(d *schema.ResourceData, meta interface{}) error { // name contained uppercase characters. 
d.SetId(strings.ToLower(name)) + // In some partitions, only post-create tagging supported + if req.Tags == nil && len(tags) > 0 { + err := UpdateTags(conn, aws.StringValue(output.CacheSubnetGroup.ARN), nil, tags) + + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed adding tags after create for ElastiCache Subnet Group (%s): %w", d.Id(), err) + } + + log.Printf("[WARN] failed adding tags after create for ElastiCache Subnet Group (%s): %s", d.Id(), err) + } + } + return resourceSubnetGroupRead(d, meta) } @@ -158,19 +183,26 @@ func resourceSubnetGroupRead(d *schema.ResourceData, meta interface{}) error { tags, err := ListTags(conn, d.Get("arn").(string)) - if err != nil && !tfawserr.ErrCodeEquals(err, "UnknownOperationException") { - return fmt.Errorf("error listing tags for ElastiCache SubnetGroup (%s): %w", d.Id(), err) + if err != nil && !verify.CheckISOErrorTagsUnsupported(err) { + return fmt.Errorf("listing tags for ElastiCache Subnet Group (%s): %w", d.Id(), err) } - tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) - - //lintignore:AWSR002 - if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { - return fmt.Errorf("error setting tags: %w", err) + // tags not supported in all partitions + if err != nil { + log.Printf("[WARN] failed listing tags for ElastiCache Subnet Group (%s): %s", d.Id(), err) } - if err := d.Set("tags_all", tags.Map()); err != nil { - return fmt.Errorf("error setting tags_all: %w", err) + if tags != nil { + tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) + + //lintignore:AWSR002 + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } } return nil @@ -201,8 +233,15 @@ func resourceSubnetGroupUpdate(d *schema.ResourceData, meta interface{}) error { if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") - if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { - return fmt.Errorf("error updating tags: %w", err) + + err := UpdateTags(conn, d.Get("arn").(string), o, n) + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed updating ElastiCache Subnet Group (%s) tags: %w", d.Id(), err) + } + + log.Printf("[WARN] failed updating tags for ElastiCache Subnet Group (%s): %s", d.Id(), err) + } } diff --git a/internal/service/elasticache/user.go b/internal/service/elasticache/user.go index b08afd4f3f9b..907d7cebf084 100644 --- a/internal/service/elasticache/user.go +++ b/internal/service/elasticache/user.go @@ -6,7 +6,6 @@ import ( "strings" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/endpoints" "github.com/aws/aws-sdk-go/service/elasticache" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" @@ -92,18 +91,39 @@ func resourceUserCreate(d *schema.ResourceData, meta interface{}) error { input.Passwords = flex.ExpandStringSet(v.(*schema.Set)) } - // Tags are currently only supported in AWS Commercial. 
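With the `endpoints.AwsPartitionID` import dropped below, partition support for tagging is no longer hard-coded; it is inferred from the error the API actually returns, via `verify.CheckISOErrorTagsUnsupported`. Illustrative shape of such a predicate (the old subnet-group read checked exactly this error code inline; the helper name here is hypothetical):

```go
// Illustrative only: the real helper is verify.CheckISOErrorTagsUnsupported.
// Support detection is error-driven rather than keyed off a partition list.
func tagsUnsupported(err error) bool {
	return tfawserr.ErrCodeEquals(err, "UnknownOperationException")
}
```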
- if len(tags) > 0 && meta.(*conns.AWSClient).Partition == endpoints.AwsPartitionID { + if len(tags) > 0 { input.Tags = Tags(tags.IgnoreAWS()) } out, err := conn.CreateUser(input) + + if input.Tags != nil && verify.CheckISOErrorTagsUnsupported(err) { + log.Printf("[WARN] failed creating ElastiCache User with tags: %s. Trying create without tags.", err) + + input.Tags = nil + out, err = conn.CreateUser(input) + } + if err != nil { return fmt.Errorf("error creating ElastiCache User: %w", err) } d.SetId(aws.StringValue(out.UserId)) + // In some partitions, only post-create tagging supported + if input.Tags == nil && len(tags) > 0 { + err := UpdateTags(conn, aws.StringValue(out.ARN), nil, tags) + + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed adding tags after create for ElastiCache User (%s): %w", d.Id(), err) + } + + log.Printf("[WARN] failed adding tags after create for ElastiCache User (%s): %s", d.Id(), err) + } + } + return resourceUserRead(d, meta) } @@ -113,7 +133,7 @@ func resourceUserRead(d *schema.ResourceData, meta interface{}) error { defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - resp, err := FindElastiCacheUserByID(conn, d.Id()) + resp, err := FindUserByID(conn, d.Id()) if !d.IsNewResource() && (tfresource.NotFound(err) || tfawserr.ErrCodeEquals(err, elasticache.ErrCodeUserNotFoundFault)) { log.Printf("[WARN] ElastiCache User (%s) not found, removing from state", d.Id()) d.SetId("") @@ -130,14 +150,18 @@ func resourceUserRead(d *schema.ResourceData, meta interface{}) error { d.Set("user_name", resp.UserName) d.Set("arn", resp.ARN) - // Tags are currently only supported in AWS Commercial. - if meta.(*conns.AWSClient).Partition == endpoints.AwsPartitionID { - tags, err := ListTags(conn, aws.StringValue(resp.ARN)) + tags, err := ListTags(conn, aws.StringValue(resp.ARN)) - if err != nil { - return fmt.Errorf("error listing tags for ElastiCache User (%s): %w", aws.StringValue(resp.ARN), err) - } + if err != nil && !verify.CheckISOErrorTagsUnsupported(err) { + return fmt.Errorf("listing tags for ElastiCache User (%s): %w", aws.StringValue(resp.ARN), err) + } + + // tags not supported in all partitions + if err != nil { + log.Printf("[WARN] failed listing tags for Elasticache User (%s): %s", aws.StringValue(resp.ARN), err) + } + if tags != nil { tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) //lintignore:AWSR002 @@ -148,9 +172,6 @@ func resourceUserRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("tags_all", tags.Map()); err != nil { return fmt.Errorf("error setting tags_all: %w", err) } - } else { - d.Set("tags", nil) - d.Set("tags_all", nil) } return nil @@ -192,12 +213,19 @@ func resourceUserUpdate(d *schema.ResourceData, meta interface{}) error { } } - // Tags are currently only supported in AWS Commercial. 
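The hunks above repeat one pattern across subnet_group.go and user.go (and again below in user_group.go): attempt the create call with tags inline, retry without tags when the partition reports tagging as unsupported, then tag after creation. A condensed sketch of that shape, assuming only the behavior of `verify.CheckISOErrorTagsUnsupported` and `UpdateTags` as used in these hunks; the wrapper itself is illustrative, not provider API:

```go
package tagfallback

import "log"

// createWithTagFallback sketches the create path used in these hunks.
// isISOTagError and updateTags stand in for verify.CheckISOErrorTagsUnsupported
// and UpdateTags.
func createWithTagFallback(
	create func(tags map[string]string) (arn string, err error),
	updateTags func(arn string, tags map[string]string) error,
	isISOTagError func(error) bool,
	tags map[string]string,
) (string, error) {
	arn, err := create(tags) // first attempt: tags sent with the create call
	if len(tags) == 0 || !isISOTagError(err) {
		return arn, err
	}

	log.Printf("[WARN] create with tags failed: %s; retrying without tags", err)

	if arn, err = create(nil); err != nil {
		return "", err
	}

	// In some partitions only post-create tagging is supported; the callers
	// in this diff treat a failure here as fatal only when tags were
	// explicitly configured (the d.GetOk("tags") check).
	if err := updateTags(arn, tags); err != nil && !isISOTagError(err) {
		return "", err
	}

	return arn, nil
}
```

The read paths get the matching treatment: a `ListTags` failure that `CheckISOErrorTagsUnsupported` recognizes downgrades to a `[WARN]` log instead of failing the refresh.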
- if d.HasChange("tags_all") && meta.(*conns.AWSClient).Partition == endpoints.AwsPartitionID { + + if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") - if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { - return fmt.Errorf("error updating ElastiCache User (%s) tags: %w", d.Get("arn").(string), err) + err := UpdateTags(conn, d.Get("arn").(string), o, n) + + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed updating ElastiCache User (%s) tags: %w", d.Get("arn").(string), err) + } + + log.Printf("[WARN] failed updating tags for ElastiCache User (%s): %s", d.Get("arn").(string), err) } } diff --git a/internal/service/elasticache/user_group.go b/internal/service/elasticache/user_group.go index 2baf78238eb9..a4137e4dbc54 100644 --- a/internal/service/elasticache/user_group.go +++ b/internal/service/elasticache/user_group.go @@ -7,7 +7,6 @@ import ( "time" "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/aws/endpoints" "github.com/aws/aws-sdk-go/service/elasticache" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" @@ -80,14 +79,21 @@ func resourceUserGroupCreate(d *schema.ResourceData, meta interface{}) error { input.UserIds = flex.ExpandStringSet(v.(*schema.Set)) } - // Tags are currently only supported in AWS Commercial. - if len(tags) > 0 && meta.(*conns.AWSClient).Partition == endpoints.AwsPartitionID { + if len(tags) > 0 { input.Tags = Tags(tags.IgnoreAWS()) } out, err := conn.CreateUserGroup(input) + + if input.Tags != nil && verify.CheckISOErrorTagsUnsupported(err) { + log.Printf("[WARN] failed creating ElastiCache User Group with tags: %s. 
Trying create without tags.", err) + + input.Tags = nil + out, err = conn.CreateUserGroup(input) + } + if err != nil { - return fmt.Errorf("error creating ElastiCache User Group: %w", err) + return fmt.Errorf("creating ElastiCache User Group (%s): %w", d.Get("user_group_id").(string), err) } d.SetId(aws.StringValue(out.UserGroupId)) @@ -107,8 +113,21 @@ func resourceUserGroupCreate(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("error creating ElastiCache User Group: %w", err) } - return resourceUserGroupRead(d, meta) + // In some partitions, only post-create tagging supported + if input.Tags == nil && len(tags) > 0 { + err := UpdateTags(conn, aws.StringValue(out.ARN), nil, tags) + + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed adding tags after create for ElastiCache User Group (%s): %w", d.Id(), err) + } + + log.Printf("[WARN] failed adding tags after create for ElastiCache User Group (%s): %s", d.Id(), err) + } + } + return resourceUserGroupRead(d, meta) } func resourceUserGroupRead(d *schema.ResourceData, meta interface{}) error { @@ -116,7 +135,7 @@ func resourceUserGroupRead(d *schema.ResourceData, meta interface{}) error { defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - resp, err := FindElastiCacheUserGroupByID(conn, d.Id()) + resp, err := FindUserGroupByID(conn, d.Id()) if !d.IsNewResource() && (tfresource.NotFound(err) || tfawserr.ErrCodeEquals(err, elasticache.ErrCodeUserGroupNotFoundFault)) { d.SetId("") log.Printf("[DEBUG] ElastiCache User Group (%s) not found", d.Id()) @@ -132,14 +151,18 @@ func resourceUserGroupRead(d *schema.ResourceData, meta interface{}) error { d.Set("user_ids", resp.UserIds) d.Set("user_group_id", resp.UserGroupId) - // Tags are currently only supported in AWS Commercial. - if meta.(*conns.AWSClient).Partition == endpoints.AwsPartitionID { - tags, err := ListTags(conn, aws.StringValue(resp.ARN)) + tags, err := ListTags(conn, aws.StringValue(resp.ARN)) - if err != nil { - return fmt.Errorf("error listing tags for ElastiCache User (%s): %w", aws.StringValue(resp.ARN), err) - } + if err != nil && !verify.CheckISOErrorTagsUnsupported(err) { + return fmt.Errorf("listing tags for ElastiCache User Group (%s): %w", aws.StringValue(resp.ARN), err) + } + // tags not supported in all partitions + if err != nil { + log.Printf("[WARN] failed listing tags for Elasticache User Group (%s): %s", aws.StringValue(resp.ARN), err) + } + + if tags != nil { tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) //lintignore:AWSR002 @@ -150,9 +173,6 @@ func resourceUserGroupRead(d *schema.ResourceData, meta interface{}) error { if err := d.Set("tags_all", tags.Map()); err != nil { return fmt.Errorf("error setting tags_all: %w", err) } - } else { - d.Set("tags", nil) - d.Set("tags_all", nil) } return nil @@ -204,12 +224,18 @@ func resourceUserGroupUpdate(d *schema.ResourceData, meta interface{}) error { } } - // Tags are currently only supported in AWS Commercial. 
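Each post-create and update tag path above repeats the same decision: fail when `tags` was configured explicitly or when the error is not the ISO "tags unsupported" case, and otherwise log and continue, so provider-level `default_tags` never break applies in partitions without tagging support. Factored out, with hypothetical names, the predicate reduces to:

```go
package tagfallback

// shouldFailOnTagError condenses the condition repeated in these hunks: a
// tagging failure is tolerated only when it is the partition's "tags
// unsupported" error and the tags came solely from provider-level
// default_tags rather than an explicit "tags" argument.
func shouldFailOnTagError(err error, explicitTags bool, isISOTagError func(error) bool) bool {
	return explicitTags || !isISOTagError(err)
}
```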
- if d.HasChange("tags_all") && meta.(*conns.AWSClient).Partition == endpoints.AwsPartitionID { + if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") - if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { - return fmt.Errorf("error updating ElastiCache User Group (%s) tags: %w", d.Get("arn").(string), err) + err := UpdateTags(conn, d.Get("arn").(string), o, n) + + if err != nil { + if v, ok := d.GetOk("tags"); (ok && len(v.(map[string]interface{})) > 0) || !verify.CheckISOErrorTagsUnsupported(err) { + // explicitly setting tags or not an iso-unsupported error + return fmt.Errorf("failed updating ElastiCache User Group (%s) tags: %w", d.Get("arn").(string), err) + } + + log.Printf("[WARN] failed updating tags for ElastiCache User Group (%s): %s", d.Get("arn").(string), err) } } @@ -250,7 +276,7 @@ func resourceUserGroupDelete(d *schema.ResourceData, meta interface{}) error { func resourceUserGroupStateRefreshFunc(id string, conn *elasticache.ElastiCache) resource.StateRefreshFunc { return func() (interface{}, string, error) { - v, err := FindElastiCacheUserGroupByID(conn, id) + v, err := FindUserGroupByID(conn, id) if err != nil { log.Printf("Error on retrieving ElastiCache User Group when waiting: %s", err) diff --git a/internal/service/elasticache/user_group_test.go b/internal/service/elasticache/user_group_test.go index 89e9e9ec335d..9e04c6ede8a0 100644 --- a/internal/service/elasticache/user_group_test.go +++ b/internal/service/elasticache/user_group_test.go @@ -161,7 +161,7 @@ func testAccCheckUserGroupDestroyWithProvider(s *terraform.State, provider *sche continue } - _, err := tfelasticache.FindElastiCacheUserGroupByID(conn, rs.Primary.ID) + _, err := tfelasticache.FindUserGroupByID(conn, rs.Primary.ID) if err != nil { if tfawserr.ErrCodeEquals(err, elasticache.ErrCodeUserGroupNotFoundFault) { return nil @@ -191,7 +191,7 @@ func testAccCheckUserGroupExistsWithProvider(n string, v *elasticache.UserGroup, provider := providerF() conn := provider.Meta().(*conns.AWSClient).ElastiCacheConn - resp, err := tfelasticache.FindElastiCacheUserGroupByID(conn, rs.Primary.ID) + resp, err := tfelasticache.FindUserGroupByID(conn, rs.Primary.ID) if err != nil { return fmt.Errorf("ElastiCache User Group (%s) not found: %w", rs.Primary.ID, err) } diff --git a/internal/service/elasticache/user_test.go b/internal/service/elasticache/user_test.go index 35347230f980..be22d1b0234b 100644 --- a/internal/service/elasticache/user_test.go +++ b/internal/service/elasticache/user_test.go @@ -172,7 +172,7 @@ func testAccCheckUserDestroyWithProvider(s *terraform.State, provider *schema.Pr continue } - user, err := tfelasticache.FindElastiCacheUserByID(conn, rs.Primary.ID) + user, err := tfelasticache.FindUserByID(conn, rs.Primary.ID) if tfawserr.ErrCodeEquals(err, elasticache.ErrCodeUserNotFoundFault) || tfresource.NotFound(err) { continue @@ -207,7 +207,7 @@ func testAccCheckUserExistsWithProvider(n string, v *elasticache.User, providerF provider := providerF() conn := provider.Meta().(*conns.AWSClient).ElastiCacheConn - resp, err := tfelasticache.FindElastiCacheUserByID(conn, rs.Primary.ID) + resp, err := tfelasticache.FindUserByID(conn, rs.Primary.ID) if err != nil { return fmt.Errorf("ElastiCache User (%s) not found: %w", rs.Primary.ID, err) } diff --git a/internal/service/elasticache/validate.go b/internal/service/elasticache/validate.go index 8cb44d008787..16e912d3f0d7 100644 --- a/internal/service/elasticache/validate.go +++ b/internal/service/elasticache/validate.go @@ -34,3 +34,30 
@@ func validVersionString(v interface{}, k string) (ws []string, errors []error) { return } + +const ( + redisVersionPreV6RegexpRaw = `[1-5](\.[[:digit:]]+){2}` + redisVersionPostV6RegexpRaw = `([6-9]|[[:digit:]]{2})\.x` + + redisVersionRegexpRaw = redisVersionPreV6RegexpRaw + "|" + redisVersionPostV6RegexpRaw +) + +const ( + redisVersionRegexpPattern = "^" + redisVersionRegexpRaw + "$" + redisVersionPostV6RegexpPattern = "^" + redisVersionPostV6RegexpRaw + "$" +) + +var ( + redisVersionRegexp = regexp.MustCompile(redisVersionRegexpPattern) + redisVersionPostV6Regexp = regexp.MustCompile(redisVersionPostV6RegexpPattern) +) + +func ValidRedisVersionString(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + + if !redisVersionRegexp.MatchString(value) { + errors = append(errors, fmt.Errorf("%s: Redis versions must match <major>.x when using version 6 or higher, or <major>.<minor>.<patch>", k)) + } + + return +} diff --git a/internal/service/elasticbeanstalk/application.go b/internal/service/elasticbeanstalk/application.go index 1aadabaea9e9..efdfbce6691e 100644 --- a/internal/service/elasticbeanstalk/application.go +++ b/internal/service/elasticbeanstalk/application.go @@ -41,7 +41,6 @@ func ResourceApplication() *schema.Resource { "description": { Type: schema.TypeString, Optional: true, - ForceNew: false, }, "appversion_lifecycle": { Type: schema.TypeList, diff --git a/internal/service/elasticsearch/domain.go b/internal/service/elasticsearch/domain.go index 79905309e2da..458bb05c5ec3 100644 --- a/internal/service/elasticsearch/domain.go +++ b/internal/service/elasticsearch/domain.go @@ -12,8 +12,8 @@ import ( elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" awspolicy "github.com/hashicorp/awspolicyequivalence" + gversion "github.com/hashicorp/go-version" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -31,12 +31,15 @@ func ResourceDomain() *schema.Resource { Read: resourceDomainRead, Update: resourceDomainUpdate, Delete: resourceDomainDelete, + Importer: &schema.ResourceImporter{ State: resourceDomainImport, }, Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(60 * time.Minute), Update: schema.DefaultTimeout(60 * time.Minute), + Delete: schema.DefaultTimeout(90 * time.Minute), }, CustomizeDiff: customdiff.Sequence( @@ -126,6 +129,10 @@ func ResourceDomain() *schema.Resource { }, }, }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, "auto_tune_options": { Type: schema.TypeList, Optional: true, @@ -144,10 +151,9 @@ func ResourceDomain() *schema.Resource { Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "start_at": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.IsRFC3339Time, + "cron_expression_for_recurrence": { + Type: schema.TypeString, + Required: true, }, "duration": { Type: schema.TypeList, @@ -155,21 +161,22 @@ func ResourceDomain() *schema.Resource { MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "value": { - Type: schema.TypeInt, - Required: true, - }, "unit": { Type: schema.TypeString, Required: true, ValidateFunc: validation.StringInSlice(elasticsearch.TimeUnit_Values(), false), }, + "value": { + Type: 
schema.TypeInt, + Required: true, + }, }, }, }, - "cron_expression_for_recurrence": { - Type: schema.TypeString, - Required: true, + "start_at": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.IsRFC3339Time, }, }, }, @@ -183,45 +190,141 @@ func ResourceDomain() *schema.Resource { }, }, }, - "domain_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, - ValidateFunc: validation.StringMatch(regexp.MustCompile(`^[a-z][0-9a-z\-]{2,27}$`), - "must start with a lowercase alphabet and be at least 3 and no more than 28 characters long."+ - " Valid characters are a-z (lowercase letters), 0-9, and - (hyphen)."), - }, - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "domain_id": { - Type: schema.TypeString, - Computed: true, - }, - "domain_endpoint_options": { + "cluster_config": { Type: schema.TypeList, Optional: true, Computed: true, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "enforce_https": { + "cold_storage_options": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + }, + }, + }, + "dedicated_master_count": { + Type: schema.TypeInt, + Optional: true, + DiffSuppressFunc: isDedicatedMasterDisabled, + }, + "dedicated_master_enabled": { Type: schema.TypeBool, Optional: true, - Default: true, + Default: false, }, - "tls_security_policy": { - Type: schema.TypeString, + "dedicated_master_type": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: isDedicatedMasterDisabled, + }, + "instance_count": { + Type: schema.TypeInt, + Optional: true, + Default: 1, + }, + "instance_type": { + Type: schema.TypeString, + Optional: true, + Default: elasticsearch.ESPartitionInstanceTypeM3MediumElasticsearch, + }, + "warm_count": { + Type: schema.TypeInt, Optional: true, - Computed: true, - ValidateFunc: validation.StringInSlice(elasticsearch.TLSSecurityPolicy_Values(), false), + ValidateFunc: validation.IntBetween(2, 150), }, - "custom_endpoint_enabled": { + "warm_enabled": { + Type: schema.TypeBool, + Optional: true, + }, + "warm_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{ + elasticsearch.ESWarmPartitionInstanceTypeUltrawarm1MediumElasticsearch, + elasticsearch.ESWarmPartitionInstanceTypeUltrawarm1LargeElasticsearch, + "ultrawarm1.xlarge.elasticsearch", + }, false), + }, + "zone_awareness_config": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + DiffSuppressFunc: verify.SuppressMissingOptionalConfigurationBlock, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zone_count": { + Type: schema.TypeInt, + Optional: true, + Default: 2, + ValidateFunc: validation.IntInSlice([]int{2, 3}), + }, + }, + }, + }, + "zone_awareness_enabled": { + Type: schema.TypeBool, + Optional: true, + }, + }, + }, + }, + "cognito_options": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + DiffSuppressFunc: verify.SuppressMissingOptionalConfigurationBlock, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { Type: schema.TypeBool, Optional: true, Default: false, }, + "identity_pool_id": { + Type: schema.TypeString, + Required: true, + }, + "role_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + "user_pool_id": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "domain_id": { 
+ Type: schema.TypeString, + Computed: true, + }, + "domain_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringMatch(regexp.MustCompile(`^[a-z][0-9a-z\-]{2,27}$`), + "must start with a lowercase alphabet and be at least 3 and no more than 28 characters long."+ + " Valid characters are a-z (lowercase letters), 0-9, and - (hyphen)."), + }, + "domain_endpoint_options": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ "custom_endpoint": { Type: schema.TypeString, Optional: true, @@ -233,17 +336,25 @@ func ResourceDomain() *schema.Resource { ValidateFunc: verify.ValidARN, DiffSuppressFunc: isCustomEndpointDisabled, }, + "custom_endpoint_enabled": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "enforce_https": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "tls_security_policy": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(elasticsearch.TLSSecurityPolicy_Values(), false), + }, }, }, }, - "endpoint": { - Type: schema.TypeString, - Computed: true, - }, - "kibana_endpoint": { - Type: schema.TypeString, - Computed: true, - }, "ebs_options": { Type: schema.TypeList, Optional: true, @@ -272,6 +383,11 @@ func ResourceDomain() *schema.Resource { }, }, }, + "elasticsearch_version": { + Type: schema.TypeString, + Optional: true, + Default: "1.5", + }, "encrypt_at_rest": { Type: schema.TypeList, Optional: true, @@ -294,90 +410,48 @@ func ResourceDomain() *schema.Resource { }, }, }, - "node_to_node_encryption": { - Type: schema.TypeList, - Optional: true, + "endpoint": { + Type: schema.TypeString, Computed: true, - MaxItems: 1, + }, + "kibana_endpoint": { + Type: schema.TypeString, + Computed: true, + }, + "log_publishing_options": { + Type: schema.TypeSet, + Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "cloudwatch_log_group_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, "enabled": { Type: schema.TypeBool, - Required: true, - ForceNew: true, + Optional: true, + Default: true, + }, + "log_type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(elasticsearch.LogType_Values(), false), }, }, }, }, - "cluster_config": { + "node_to_node_encryption": { Type: schema.TypeList, Optional: true, Computed: true, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "dedicated_master_count": { - Type: schema.TypeInt, - Optional: true, - DiffSuppressFunc: isDedicatedMasterDisabled, - }, - "dedicated_master_enabled": { - Type: schema.TypeBool, - Optional: true, - Default: false, - }, - "dedicated_master_type": { - Type: schema.TypeString, - Optional: true, - DiffSuppressFunc: isDedicatedMasterDisabled, - }, - "instance_count": { - Type: schema.TypeInt, - Optional: true, - Default: 1, - }, - "instance_type": { - Type: schema.TypeString, - Optional: true, - Default: elasticsearch.ESPartitionInstanceTypeM3MediumElasticsearch, - }, - "zone_awareness_config": { - Type: schema.TypeList, - Optional: true, - MaxItems: 1, - DiffSuppressFunc: verify.SuppressMissingOptionalConfigurationBlock, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "availability_zone_count": { - Type: schema.TypeInt, - Optional: true, - Default: 2, - ValidateFunc: validation.IntInSlice([]int{2, 3}), - }, - }, - }, - }, - "zone_awareness_enabled": { - Type: 
schema.TypeBool, - Optional: true, - }, - "warm_enabled": { + "enabled": { Type: schema.TypeBool, - Optional: true, - }, - "warm_count": { - Type: schema.TypeInt, - Optional: true, - ValidateFunc: validation.IntBetween(2, 150), - }, - "warm_type": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringInSlice([]string{ - elasticsearch.ESWarmPartitionInstanceTypeUltrawarm1MediumElasticsearch, - elasticsearch.ESWarmPartitionInstanceTypeUltrawarm1LargeElasticsearch, - "ultrawarm1.xlarge.elasticsearch", - }, false), + Required: true, + ForceNew: true, }, }, }, @@ -396,6 +470,8 @@ func ResourceDomain() *schema.Resource { }, }, }, + "tags": tftags.TagsSchema(), + "tags_all": tftags.TagsSchemaComputed(), "vpc_options": { Type: schema.TypeList, Optional: true, @@ -428,76 +504,10 @@ func ResourceDomain() *schema.Resource { }, }, }, - "log_publishing_options": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "log_type": { - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.StringInSlice(elasticsearch.LogType_Values(), false), - }, - "cloudwatch_log_group_arn": { - Type: schema.TypeString, - Required: true, - ValidateFunc: verify.ValidARN, - }, - "enabled": { - Type: schema.TypeBool, - Optional: true, - Default: true, - }, - }, - }, - }, - "elasticsearch_version": { - Type: schema.TypeString, - Optional: true, - Default: "1.5", - }, - "cognito_options": { - Type: schema.TypeList, - Optional: true, - ForceNew: false, - MaxItems: 1, - DiffSuppressFunc: verify.SuppressMissingOptionalConfigurationBlock, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "enabled": { - Type: schema.TypeBool, - Optional: true, - Default: false, - }, - "user_pool_id": { - Type: schema.TypeString, - Required: true, - }, - "identity_pool_id": { - Type: schema.TypeString, - Required: true, - }, - "role_arn": { - Type: schema.TypeString, - Required: true, - ValidateFunc: verify.ValidARN, - }, - }, - }, - }, - - "tags": tftags.TagsSchema(), - "tags_all": tftags.TagsSchemaComputed(), }, } } -func resourceDomainImport( - d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - d.Set("domain_name", d.Id()) - return []*schema.ResourceData{d}, nil -} - func resourceDomainCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).ElasticsearchConn defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig @@ -505,14 +515,16 @@ func resourceDomainCreate(d *schema.ResourceData, meta interface{}) error { // The API doesn't check for duplicate names // so w/out this check Create would act as upsert - // and might cause duplicate domain to appear in state - resp, err := FindDomainByName(conn, d.Get("domain_name").(string)) + // and might cause duplicate domain to appear in state. 
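Despite the renames, the guard that follows still implements the comment above: the Elasticsearch API treats create as an upsert, so Create first probes for an existing domain and refuses to adopt one. Stripped to its core, the pre-check amounts to this sketch (it assumes only this package's FindDomainByName finder):

```go
import (
	"fmt"

	elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice"
)

// ensureDomainAbsent sketches the create-as-upsert guard: error out rather
// than silently adopting an existing domain into state. A nil error from the
// finder means the domain already exists.
func ensureDomainAbsent(conn *elasticsearch.ElasticsearchService, name string) error {
	if _, err := FindDomainByName(conn, name); err == nil {
		return fmt.Errorf("Elasticsearch Domain (%s) already exists", name)
	}
	return nil
}
```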
+ name := d.Get("domain_name").(string) + _, err := FindDomainByName(conn, name) + if err == nil { - return fmt.Errorf("Elasticsearch domain %s already exists", aws.StringValue(resp.DomainName)) + return fmt.Errorf("Elasticsearch Domain (%s) already exists", name) } - inputCreateDomain := elasticsearch.CreateElasticsearchDomainInput{ - DomainName: aws.String(d.Get("domain_name").(string)), + input := &elasticsearch.CreateElasticsearchDomainInput{ + DomainName: aws.String(name), ElasticsearchVersion: aws.String(d.Get("elasticsearch_version").(string)), TagList: Tags(tags.IgnoreAWS()), } @@ -524,19 +536,19 @@ func resourceDomainCreate(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("policy (%s) is invalid JSON: %w", policy, err) } - inputCreateDomain.AccessPolicies = aws.String(policy) + input.AccessPolicies = aws.String(policy) } if v, ok := d.GetOk("advanced_options"); ok { - inputCreateDomain.AdvancedOptions = flex.ExpandStringMap(v.(map[string]interface{})) + input.AdvancedOptions = flex.ExpandStringMap(v.(map[string]interface{})) } if v, ok := d.GetOk("advanced_security_options"); ok { - inputCreateDomain.AdvancedSecurityOptions = expandAdvancedSecurityOptions(v.([]interface{})) + input.AdvancedSecurityOptions = expandAdvancedSecurityOptions(v.([]interface{})) } if v, ok := d.GetOk("auto_tune_options"); ok && len(v.([]interface{})) > 0 { - inputCreateDomain.AutoTuneOptions = expandAutoTuneOptionsInput(v.([]interface{})[0].(map[string]interface{})) + input.AutoTuneOptions = expandAutoTuneOptionsInput(v.([]interface{})[0].(map[string]interface{})) } if v, ok := d.GetOk("ebs_options"); ok { @@ -548,7 +560,7 @@ func resourceDomainCreate(d *schema.ResourceData, meta interface{}) error { } s := options[0].(map[string]interface{}) - inputCreateDomain.EBSOptions = expandEBSOptions(s) + input.EBSOptions = expandEBSOptions(s) } } @@ -559,7 +571,7 @@ func resourceDomainCreate(d *schema.ResourceData, meta interface{}) error { } s := options[0].(map[string]interface{}) - inputCreateDomain.EncryptionAtRestOptions = expandEncryptAtRestOptions(s) + input.EncryptionAtRestOptions = expandEncryptAtRestOptions(s) } if v, ok := d.GetOk("cluster_config"); ok { @@ -570,7 +582,7 @@ func resourceDomainCreate(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("At least one field is expected inside cluster_config") } m := config[0].(map[string]interface{}) - inputCreateDomain.ElasticsearchClusterConfig = expandClusterConfig(m) + input.ElasticsearchClusterConfig = expandClusterConfig(m) } } @@ -578,7 +590,7 @@ func resourceDomainCreate(d *schema.ResourceData, meta interface{}) error { options := v.([]interface{}) s := options[0].(map[string]interface{}) - inputCreateDomain.NodeToNodeEncryptionOptions = expandNodeToNodeEncryptionOptions(s) + input.NodeToNodeEncryptionOptions = expandNodeToNodeEncryptionOptions(s) } if v, ok := d.GetOk("snapshot_options"); ok { @@ -595,7 +607,7 @@ func resourceDomainCreate(d *schema.ResourceData, meta interface{}) error { AutomatedSnapshotStartHour: aws.Int64(int64(o["automated_snapshot_start_hour"].(int))), } - inputCreateDomain.SnapshotOptions = &snapshotOptions + input.SnapshotOptions = &snapshotOptions } } @@ -606,92 +618,70 @@ func resourceDomainCreate(d *schema.ResourceData, meta interface{}) error { } s := options[0].(map[string]interface{}) - inputCreateDomain.VPCOptions = expandVPCOptions(s) + input.VPCOptions = expandVPCOptions(s) } if v, ok := d.GetOk("log_publishing_options"); ok { - inputCreateDomain.LogPublishingOptions = 
expandLogPublishingOptions(v.(*schema.Set)) + input.LogPublishingOptions = expandLogPublishingOptions(v.(*schema.Set)) } if v, ok := d.GetOk("domain_endpoint_options"); ok { - inputCreateDomain.DomainEndpointOptions = expandDomainEndpointOptions(v.([]interface{})) + input.DomainEndpointOptions = expandDomainEndpointOptions(v.([]interface{})) } if v, ok := d.GetOk("cognito_options"); ok { - inputCreateDomain.CognitoOptions = expandCognitoOptions(v.([]interface{})) + input.CognitoOptions = expandCognitoOptions(v.([]interface{})) } - log.Printf("[DEBUG] Creating Elasticsearch domain: %s", inputCreateDomain) + log.Printf("[DEBUG] Creating Elasticsearch Domain: %s", input) - // IAM Roles can take some time to propagate if set in AccessPolicies and created in the same terraform - var out *elasticsearch.CreateElasticsearchDomainOutput - err = resource.Retry(tfiam.PropagationTimeout, func() *resource.RetryError { - var err error - out, err = conn.CreateElasticsearchDomain(&inputCreateDomain) - if err != nil { - if tfawserr.ErrMessageContains(err, "InvalidTypeException", "Error setting policy") { - log.Printf("[DEBUG] Retrying creation of Elasticsearch domain %s", aws.StringValue(inputCreateDomain.DomainName)) - return resource.RetryableError(err) - } - if tfawserr.ErrMessageContains(err, "ValidationException", "enable a service-linked role to give Amazon ES permissions") { - return resource.RetryableError(err) - } - if tfawserr.ErrMessageContains(err, "ValidationException", "Domain is still being deleted") { - return resource.RetryableError(err) - } - if tfawserr.ErrMessageContains(err, "ValidationException", "Amazon Elasticsearch must be allowed to use the passed role") { - return resource.RetryableError(err) - } - if tfawserr.ErrMessageContains(err, "ValidationException", "The passed role has not propagated yet") { - return resource.RetryableError(err) - } - if tfawserr.ErrMessageContains(err, "ValidationException", "Authentication error") { - return resource.RetryableError(err) - } - if tfawserr.ErrMessageContains(err, "ValidationException", "Unauthorized Operation: Elasticsearch must be authorised to describe") { - return resource.RetryableError(err) - } - if tfawserr.ErrMessageContains(err, "ValidationException", "The passed role must authorize Amazon Elasticsearch to describe") { - return resource.RetryableError(err) + outputRaw, err := tfresource.RetryWhen( + tfiam.PropagationTimeout, + func() (interface{}, error) { + return conn.CreateElasticsearchDomain(input) + }, + func(err error) (bool, error) { + if tfawserr.ErrMessageContains(err, elasticsearch.ErrCodeInvalidTypeException, "Error setting policy") || + tfawserr.ErrMessageContains(err, elasticsearch.ErrCodeValidationException, "enable a service-linked role to give Amazon ES permissions") || + tfawserr.ErrMessageContains(err, elasticsearch.ErrCodeValidationException, "Domain is still being deleted") || + tfawserr.ErrMessageContains(err, elasticsearch.ErrCodeValidationException, "Amazon Elasticsearch must be allowed to use the passed role") || + tfawserr.ErrMessageContains(err, elasticsearch.ErrCodeValidationException, "The passed role has not propagated yet") || + tfawserr.ErrMessageContains(err, elasticsearch.ErrCodeValidationException, "Authentication error") || + tfawserr.ErrMessageContains(err, elasticsearch.ErrCodeValidationException, "Unauthorized Operation: Elasticsearch must be authorised to describe") || + tfawserr.ErrMessageContains(err, elasticsearch.ErrCodeValidationException, "The passed role must authorize Amazon 
Elasticsearch to describe") { + return true, err } - return resource.NonRetryableError(err) - } - return nil - }) - if tfresource.TimedOut(err) { - out, err = conn.CreateElasticsearchDomain(&inputCreateDomain) - } + return false, err + }, + ) + if err != nil { - return fmt.Errorf("Error creating Elasticsearch domain: %w", err) + return fmt.Errorf("error creating Elasticsearch Domain (%s): %w", name, err) } - d.SetId(aws.StringValue(out.DomainStatus.ARN)) + d.SetId(aws.StringValue(outputRaw.(*elasticsearch.CreateElasticsearchDomainOutput).DomainStatus.ARN)) - log.Printf("[DEBUG] Waiting for Elasticsearch domain %q to be created", d.Id()) - if err := WaitForDomainCreation(conn, d.Get("domain_name").(string)); err != nil { - return fmt.Errorf("error waiting for Elasticsearch Domain (%s) to be created: %w", d.Id(), err) + if err := WaitForDomainCreation(conn, name, d.Timeout(schema.TimeoutCreate)); err != nil { + return fmt.Errorf("error waiting for Elasticsearch Domain (%s) create: %w", d.Id(), err) } - log.Printf("[DEBUG] Elasticsearch domain %q created", d.Id()) - if v, ok := d.GetOk("auto_tune_options"); ok && len(v.([]interface{})) > 0 { - - log.Printf("[DEBUG] Modifying config for Elasticsearch domain %q", d.Id()) - - inputUpdateDomainConfig := &elasticsearch.UpdateElasticsearchDomainConfigInput{ - DomainName: aws.String(d.Get("domain_name").(string)), + input := &elasticsearch.UpdateElasticsearchDomainConfigInput{ + AutoTuneOptions: expandAutoTuneOptions(v.([]interface{})[0].(map[string]interface{})), + DomainName: aws.String(name), } - inputUpdateDomainConfig.AutoTuneOptions = expandAutoTuneOptions(v.([]interface{})[0].(map[string]interface{})) - - _, err = conn.UpdateElasticsearchDomainConfig(inputUpdateDomainConfig) + log.Printf("[DEBUG] Updating Elasticsearch Domain config: %s", input) + _, err = conn.UpdateElasticsearchDomainConfig(input) if err != nil { - return fmt.Errorf("Error modifying config for Elasticsearch domain: %s", err) + return fmt.Errorf("error updating Elasticsearch Domain (%s) config: %w", d.Id(), err) } - log.Printf("[DEBUG] Config for Elasticsearch domain %q modified", d.Id()) + if err := waitForDomainUpdate(conn, name, d.Timeout(schema.TimeoutCreate)); err != nil { + return fmt.Errorf("error waiting for Elasticsearch Domain (%s) update: %w", d.Id(), err) + } } return resourceDomainRead(d, meta) @@ -702,34 +692,31 @@ func resourceDomainRead(d *schema.ResourceData, meta interface{}) error { defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - ds, err := FindDomainByName(conn, d.Get("domain_name").(string)) + name := d.Get("domain_name").(string) + ds, err := FindDomainByName(conn, name) if !d.IsNewResource() && tfresource.NotFound(err) { - log.Printf("[WARN] Elasticsearch domain (%s) not found, removing from state", d.Id()) + log.Printf("[WARN] Elasticsearch Domain (%s) not found, removing from state", d.Id()) d.SetId("") return nil } if err != nil { - return fmt.Errorf("error reading Elasticsearch domain (%s): %w", d.Id(), err) + return fmt.Errorf("error reading Elasticsearch Domain (%s): %w", d.Id(), err) } - log.Printf("[DEBUG] Received Elasticsearch domain: %s", ds) - - outDescribeDomainConfig, err := conn.DescribeElasticsearchDomainConfig(&elasticsearch.DescribeElasticsearchDomainConfigInput{ - DomainName: aws.String(d.Get("domain_name").(string)), + output, err := conn.DescribeElasticsearchDomainConfig(&elasticsearch.DescribeElasticsearchDomainConfigInput{ + DomainName: 
aws.String(name), }) if err != nil { - return err + return fmt.Errorf("error reading Elasticsearch Domain (%s) config: %w", d.Id(), err) } - log.Printf("[DEBUG] Received config for Elasticsearch domain: %s", outDescribeDomainConfig) + dc := output.DomainConfig - dc := outDescribeDomainConfig.DomainConfig - - if ds.AccessPolicies != nil && aws.StringValue(ds.AccessPolicies) != "" { - policies, err := verify.PolicyToSet(d.Get("access_policies").(string), aws.StringValue(ds.AccessPolicies)) + if v := aws.StringValue(ds.AccessPolicies); v != "" { + policies, err := verify.PolicyToSet(d.Get("access_policies").(string), v) if err != nil { return err @@ -740,10 +727,9 @@ func resourceDomainRead(d *schema.ResourceData, meta interface{}) error { options := advancedOptionsIgnoreDefault(d.Get("advanced_options").(map[string]interface{}), flex.PointersMapToStringList(ds.AdvancedOptions)) if err = d.Set("advanced_options", options); err != nil { - return fmt.Errorf("setting advanced_options %v: %w", options, err) + return fmt.Errorf("setting advanced_options: %w", err) } - d.SetId(aws.StringValue(ds.ARN)) d.Set("domain_id", ds.DomainId) d.Set("domain_name", ds.DomainName) d.Set("elasticsearch_version", ds.ElasticsearchVersion) @@ -785,9 +771,8 @@ func resourceDomainRead(d *schema.ResourceData, meta interface{}) error { } if v := dc.AutoTuneOptions; v != nil { - err = d.Set("auto_tune_options", []interface{}{flattenAutoTuneOptions(v.Options)}) - if err != nil { - return err + if err := d.Set("auto_tune_options", []interface{}{flattenAutoTuneOptions(v.Options)}); err != nil { + return fmt.Errorf("error setting auto_tune_options: %w", err) } } @@ -801,10 +786,8 @@ func resourceDomainRead(d *schema.ResourceData, meta interface{}) error { } endpoints := flex.PointersMapToStringList(ds.Endpoints) - err = d.Set("endpoint", endpoints["vpc"]) - if err != nil { - return err - } + d.Set("endpoint", endpoints["vpc"]) + d.Set("kibana_endpoint", getKibanaEndpoint(d)) if ds.Endpoint != nil { return fmt.Errorf("%q: Elasticsearch domain in VPC expected to have null Endpoint value", d.Id()) @@ -832,7 +815,7 @@ func resourceDomainRead(d *schema.ResourceData, meta interface{}) error { tags, err := ListTags(conn, d.Id()) if err != nil { - return fmt.Errorf("error listing tags for Elasticsearch Cluster (%s): %w", d.Id(), err) + return fmt.Errorf("error listing tags for Elasticsearch Domain (%s): %w", d.Id(), err) } tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) @@ -853,8 +836,9 @@ func resourceDomainUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).ElasticsearchConn if d.HasChangesExcept("tags", "tags_all") { - input := elasticsearch.UpdateElasticsearchDomainConfigInput{ - DomainName: aws.String(d.Get("domain_name").(string)), + name := d.Get("domain_name").(string) + input := &elasticsearch.UpdateElasticsearchDomainConfigInput{ + DomainName: aws.String(name), } if d.HasChange("access_policies") { @@ -895,9 +879,17 @@ func resourceDomainUpdate(d *schema.ResourceData, meta interface{}) error { if len(config) == 1 { m := config[0].(map[string]interface{}) input.ElasticsearchClusterConfig = expandClusterConfig(m) + + // Work around "ValidationException: Your domain's Elasticsearch version does not support cold storage options. Upgrade to Elasticsearch 7.9 or later.". 
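The gate that follows only strips `ColdStorageOptions` when the configured version parses and is strictly below 7.9; a version that `go-version` cannot parse is passed through for the API to validate. In isolation the check is roughly this sketch:

```go
package main

import (
	"fmt"

	gversion "github.com/hashicorp/go-version"
)

// supportsColdStorage mirrors the guard below: only a successfully parsed
// version lower than 7.9 disables cold storage options.
func supportsColdStorage(esVersion string) bool {
	want, err := gversion.NewVersion("7.9")
	if err != nil {
		return true
	}

	got, err := gversion.NewVersion(esVersion)
	if err != nil {
		return true // unparseable: leave the config untouched
	}

	return !got.LessThan(want)
}

func main() {
	fmt.Println(supportsColdStorage("7.10")) // true: 7.10 sorts after 7.9
	fmt.Println(supportsColdStorage("7.4"))  // false: below 7.9
}
```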
+ if want, err := gversion.NewVersion("7.9"); err == nil { + if got, err := gversion.NewVersion(d.Get("elasticsearch_version").(string)); err == nil { + if got.LessThan(want) { + input.ElasticsearchClusterConfig.ColdStorageOptions = nil + } + } + } } } - } if d.HasChange("snapshot_options") { @@ -929,28 +921,32 @@ func resourceDomainUpdate(d *schema.ResourceData, meta interface{}) error { input.LogPublishingOptions = expandLogPublishingOptions(d.Get("log_publishing_options").(*schema.Set)) } - _, err := conn.UpdateElasticsearchDomainConfig(&input) + log.Printf("[DEBUG] Updating Elasticsearch Domain config: %s", input) + _, err := conn.UpdateElasticsearchDomainConfig(input) + if err != nil { - return err + return fmt.Errorf("error updating Elasticsearch Domain (%s) config: %w", d.Id(), err) } - if err := waitForDomainUpdate(conn, d.Get("domain_name").(string)); err != nil { - return fmt.Errorf("error waiting for Elasticsearch Domain Update (%s) to succeed: %w", d.Id(), err) + if err := waitForDomainUpdate(conn, name, d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for Elasticsearch Domain (%s) update: %w", d.Id(), err) } if d.HasChange("elasticsearch_version") { - upgradeInput := elasticsearch.UpgradeElasticsearchDomainInput{ - DomainName: aws.String(d.Get("domain_name").(string)), + input := &elasticsearch.UpgradeElasticsearchDomainInput{ + DomainName: aws.String(name), TargetVersion: aws.String(d.Get("elasticsearch_version").(string)), } - _, err := conn.UpgradeElasticsearchDomain(&upgradeInput) + log.Printf("[DEBUG] Upgrading Elasticsearch Domain: %s", input) + _, err := conn.UpgradeElasticsearchDomain(input) + if err != nil { - return fmt.Errorf("Failed to upgrade elasticsearch domain: %w", err) + return fmt.Errorf("error upgrading Elasticsearch Domain (%s): %w", d.Id(), err) } - if _, err := waitUpgradeSucceeded(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutUpdate)); err != nil { - return fmt.Errorf("error waiting for Elasticsearch Domain Upgrade (%s) to succeed: %w", d.Id(), err) + if _, err := waitUpgradeSucceeded(conn, name, d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for Elasticsearch Domain (%s) upgrade: %w", d.Id(), err) } } } @@ -968,27 +964,45 @@ func resourceDomainUpdate(d *schema.ResourceData, meta interface{}) error { func resourceDomainDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).ElasticsearchConn - domainName := d.Get("domain_name").(string) - log.Printf("[DEBUG] Deleting Elasticsearch domain: %q", domainName) + name := d.Get("domain_name").(string) + + log.Printf("[DEBUG] Deleting Elasticsearch Domain: %s", d.Id()) _, err := conn.DeleteElasticsearchDomain(&elasticsearch.DeleteElasticsearchDomainInput{ - DomainName: aws.String(domainName), + DomainName: aws.String(name), }) + + if tfawserr.ErrCodeEquals(err, elasticsearch.ErrCodeResourceNotFoundException) { + return nil + } + if err != nil { - if tfawserr.ErrCodeEquals(err, elasticsearch.ErrCodeResourceNotFoundException) { - return nil - } - return err + return fmt.Errorf("error deleting Elasticsearch Domain (%s): %w", d.Id(), err) } - log.Printf("[DEBUG] Waiting for Elasticsearch domain %q to be deleted", domainName) - if err := waitForDomainDelete(conn, d.Get("domain_name").(string)); err != nil { - return fmt.Errorf("error waiting for Elasticsearch Domain (%s) to be deleted: %w", d.Id(), err) + if err := waitForDomainDelete(conn, name, d.Timeout(schema.TimeoutDelete)); err != nil { + return 
fmt.Errorf("error waiting for Elasticsearch Domain (%s) delete: %w", d.Id(), err) } return nil } +func resourceDomainImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + conn := meta.(*conns.AWSClient).ElasticsearchConn + + d.Set("domain_name", d.Id()) + + ds, err := FindDomainByName(conn, d.Get("domain_name").(string)) + + if err != nil { + return nil, err + } + + d.SetId(aws.StringValue(ds.ARN)) + + return []*schema.ResourceData{d}, nil +} + func suppressEquivalentKmsKeyIds(k, old, new string, d *schema.ResourceData) bool { // The Elasticsearch API accepts a short KMS key id but always returns the ARN of the key. // The ARN is of the format 'arn:aws:kms:REGION:ACCOUNT_ID:key/KMS_KEY_ID'. @@ -1043,6 +1057,10 @@ func flattenNodeToNodeEncryptionOptions(o *elasticsearch.NodeToNodeEncryptionOpt func expandClusterConfig(m map[string]interface{}) *elasticsearch.ElasticsearchClusterConfig { config := elasticsearch.ElasticsearchClusterConfig{} + if v, ok := m["cold_storage_options"].([]interface{}); ok && len(v) > 0 { + config.ColdStorageOptions = expandColdStorageOptions(v[0].(map[string]interface{})) + } + if v, ok := m["dedicated_master_enabled"]; ok { isEnabled := v.(bool) config.DedicatedMasterEnabled = aws.Bool(isEnabled) @@ -1093,6 +1111,20 @@ func expandClusterConfig(m map[string]interface{}) *elasticsearch.ElasticsearchC return &config } +func expandColdStorageOptions(tfMap map[string]interface{}) *elasticsearch.ColdStorageOptions { + if tfMap == nil { + return nil + } + + apiObject := &elasticsearch.ColdStorageOptions{} + + if v, ok := tfMap["enabled"].(bool); ok { + apiObject.Enabled = aws.Bool(v) + } + + return apiObject +} + func expandZoneAwarenessConfig(l []interface{}) *elasticsearch.ZoneAwarenessConfig { if len(l) == 0 || l[0] == nil { return nil @@ -1115,6 +1147,9 @@ func flattenClusterConfig(c *elasticsearch.ElasticsearchClusterConfig) []map[str "zone_awareness_enabled": aws.BoolValue(c.ZoneAwarenessEnabled), } + if c.ColdStorageOptions != nil { + m["cold_storage_options"] = flattenColdStorageOptions(c.ColdStorageOptions) + } if c.DedicatedMasterCount != nil { m["dedicated_master_count"] = aws.Int64Value(c.DedicatedMasterCount) } @@ -1143,6 +1178,18 @@ func flattenClusterConfig(c *elasticsearch.ElasticsearchClusterConfig) []map[str return []map[string]interface{}{m} } +func flattenColdStorageOptions(coldStorageOptions *elasticsearch.ColdStorageOptions) []interface{} { + if coldStorageOptions == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "enabled": aws.BoolValue(coldStorageOptions.Enabled), + } + + return []interface{}{m} +} + func flattenZoneAwarenessConfig(zoneAwarenessConfig *elasticsearch.ZoneAwarenessConfig) []interface{} { if zoneAwarenessConfig == nil { return []interface{}{} diff --git a/internal/service/elasticsearch/domain_data_source.go b/internal/service/elasticsearch/domain_data_source.go index 5cbd082bf334..271150eeca0a 100644 --- a/internal/service/elasticsearch/domain_data_source.go +++ b/internal/service/elasticsearch/domain_data_source.go @@ -42,6 +42,10 @@ func DataSourceDomain() *schema.Resource { }, }, }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, "auto_tune_options": { Type: schema.TypeList, Computed: true, @@ -56,7 +60,7 @@ func DataSourceDomain() *schema.Resource { Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "start_at": { + "cron_expression_for_recurrence": { Type: schema.TypeString, Computed: true, }, @@ -65,18 +69,18 @@ func DataSourceDomain() 
*schema.Resource { Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "value": { - Type: schema.TypeInt, - Computed: true, - }, "unit": { Type: schema.TypeString, Computed: true, }, + "value": { + Type: schema.TypeInt, + Computed: true, + }, }, }, }, - "cron_expression_for_recurrence": { + "start_at": { Type: schema.TypeString, Computed: true, }, @@ -90,100 +94,52 @@ func DataSourceDomain() *schema.Resource { }, }, }, - "domain_name": { - Type: schema.TypeString, - Required: true, - }, - "arn": { - Type: schema.TypeString, - Computed: true, - }, - "domain_id": { - Type: schema.TypeString, - Computed: true, - }, - "endpoint": { - Type: schema.TypeString, - Computed: true, - }, - "kibana_endpoint": { - Type: schema.TypeString, - Computed: true, - }, - "ebs_options": { + "cluster_config": { Type: schema.TypeList, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "ebs_enabled": { - Type: schema.TypeBool, + "cold_storage_options": { + Type: schema.TypeList, Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, }, - "iops": { + "dedicated_master_count": { Type: schema.TypeInt, Computed: true, }, - "volume_size": { - Type: schema.TypeInt, + "dedicated_master_enabled": { + Type: schema.TypeBool, Computed: true, }, - "volume_type": { + "dedicated_master_type": { Type: schema.TypeString, Computed: true, }, - }, - }, - }, - "encryption_at_rest": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "enabled": { - Type: schema.TypeBool, + "instance_count": { + Type: schema.TypeInt, Computed: true, }, - "kms_key_id": { + "instance_type": { Type: schema.TypeString, Computed: true, }, - }, - }, - }, - "node_to_node_encryption": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "enabled": { - Type: schema.TypeBool, - Computed: true, - }, - }, - }, - }, - "cluster_config": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "dedicated_master_count": { + "warm_count": { Type: schema.TypeInt, Computed: true, }, - "dedicated_master_enabled": { + "warm_enabled": { Type: schema.TypeBool, Computed: true, }, - "dedicated_master_type": { - Type: schema.TypeString, - Computed: true, - }, - "instance_count": { - Type: schema.TypeInt, - Computed: true, - }, - "instance_type": { + "warm_type": { Type: schema.TypeString, Computed: true, }, @@ -203,74 +159,126 @@ func DataSourceDomain() *schema.Resource { Type: schema.TypeBool, Computed: true, }, - "warm_enabled": { + }, + }, + }, + "cognito_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { Type: schema.TypeBool, - Optional: true, + Computed: true, }, - "warm_count": { - Type: schema.TypeInt, + "identity_pool_id": { + Type: schema.TypeString, Computed: true, }, - "warm_type": { + "role_arn": { + Type: schema.TypeString, + Computed: true, + }, + "user_pool_id": { Type: schema.TypeString, Computed: true, }, }, }, }, - "snapshot_options": { + "created": { + Type: schema.TypeBool, + Computed: true, + }, + "deleted": { + Type: schema.TypeBool, + Computed: true, + }, + "domain_id": { + Type: schema.TypeString, + Computed: true, + }, + "domain_name": { + Type: schema.TypeString, + Required: true, + }, + "ebs_options": { Type: schema.TypeList, Computed: true, Elem: 
&schema.Resource{ Schema: map[string]*schema.Schema{ - "automated_snapshot_start_hour": { + "ebs_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "iops": { + Type: schema.TypeInt, + Computed: true, + }, + "volume_size": { Type: schema.TypeInt, Computed: true, }, + "volume_type": { + Type: schema.TypeString, + Computed: true, + }, }, }, }, - "vpc_options": { + "elasticsearch_version": { + Type: schema.TypeString, + Computed: true, + }, + "encryption_at_rest": { Type: schema.TypeList, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "availability_zones": { - Type: schema.TypeSet, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - //Set: schema.HashString, - }, - "security_group_ids": { - Type: schema.TypeSet, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, - "subnet_ids": { - Type: schema.TypeSet, + "enabled": { + Type: schema.TypeBool, Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, }, - "vpc_id": { + "kms_key_id": { Type: schema.TypeString, Computed: true, }, }, }, }, + "endpoint": { + Type: schema.TypeString, + Computed: true, + }, + "kibana_endpoint": { + Type: schema.TypeString, + Computed: true, + }, "log_publishing_options": { Type: schema.TypeSet, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "log_type": { + "cloudwatch_log_group_arn": { Type: schema.TypeString, Computed: true, }, - "cloudwatch_log_group_arn": { + "enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "log_type": { Type: schema.TypeString, Computed: true, }, + }, + }, + }, + "node_to_node_encryption": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ "enabled": { Type: schema.TypeBool, Computed: true, @@ -278,49 +286,50 @@ func DataSourceDomain() *schema.Resource { }, }, }, - "elasticsearch_version": { - Type: schema.TypeString, + "processing": { + Type: schema.TypeBool, Computed: true, }, - "cognito_options": { + "snapshot_options": { Type: schema.TypeList, Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ - "enabled": { - Type: schema.TypeBool, + "automated_snapshot_start_hour": { + Type: schema.TypeInt, Computed: true, }, - "user_pool_id": { - Type: schema.TypeString, + }, + }, + }, + "tags": tftags.TagsSchemaComputed(), + "vpc_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zones": { + Type: schema.TypeSet, Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, }, - "identity_pool_id": { - Type: schema.TypeString, + "security_group_ids": { + Type: schema.TypeSet, Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, }, - "role_arn": { + "subnet_ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "vpc_id": { Type: schema.TypeString, Computed: true, }, }, }, }, - - "created": { - Type: schema.TypeBool, - Computed: true, - }, - "deleted": { - Type: schema.TypeBool, - Computed: true, - }, - "processing": { - Type: schema.TypeBool, - Computed: true, - }, - - "tags": tftags.TagsSchemaComputed(), }, } } diff --git a/internal/service/elasticsearch/domain_policy.go b/internal/service/elasticsearch/domain_policy.go index 95850a333ca4..c8e0c3106e9e 100644 --- a/internal/service/elasticsearch/domain_policy.go +++ b/internal/service/elasticsearch/domain_policy.go @@ -3,6 +3,7 @@ package elasticsearch import ( "fmt" "log" + "time" 
"github.com/aws/aws-sdk-go/aws" elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" @@ -21,11 +22,12 @@ func ResourceDomainPolicy() *schema.Resource { Update: resourceDomainPolicyUpsert, Delete: resourceDomainPolicyDelete, + Timeouts: &schema.ResourceTimeout{ + Update: schema.DefaultTimeout(60 * time.Minute), + Delete: schema.DefaultTimeout(60 * time.Minute), + }, + Schema: map[string]*schema.Schema{ - "domain_name": { - Type: schema.TypeString, - Required: true, - }, "access_policies": { Type: schema.TypeString, Required: true, @@ -36,6 +38,10 @@ func ResourceDomainPolicy() *schema.Resource { return json }, }, + "domain_name": { + Type: schema.TypeString, + Required: true, + }, }, } } @@ -88,7 +94,7 @@ func resourceDomainPolicyUpsert(d *schema.ResourceData, meta interface{}) error d.SetId("esd-policy-" + domainName) - if err := waitForDomainUpdate(conn, d.Get("domain_name").(string)); err != nil { + if err := waitForDomainUpdate(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutUpdate)); err != nil { return fmt.Errorf("error waiting for Elasticsearch Domain Policy (%s) to be updated: %w", d.Id(), err) } @@ -108,7 +114,7 @@ func resourceDomainPolicyDelete(d *schema.ResourceData, meta interface{}) error log.Printf("[DEBUG] Waiting for Elasticsearch domain policy %q to be deleted", d.Get("domain_name").(string)) - if err := waitForDomainUpdate(conn, d.Get("domain_name").(string)); err != nil { + if err := waitForDomainUpdate(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutDelete)); err != nil { return fmt.Errorf("error waiting for Elasticsearch Domain Policy (%s) to be deleted: %w", d.Id(), err) } diff --git a/internal/service/elasticsearch/domain_policy_test.go b/internal/service/elasticsearch/domain_policy_test.go index 035beddf5efc..f8f1487fe707 100644 --- a/internal/service/elasticsearch/domain_policy_test.go +++ b/internal/service/elasticsearch/domain_policy_test.go @@ -58,7 +58,7 @@ func TestAccElasticsearchDomainPolicy_basic(t *testing.T) { resource.TestCheckResourceAttr("aws_elasticsearch_domain.example", "elasticsearch_version", "2.3"), func(s *terraform.State) error { awsClient := acctest.Provider.Meta().(*conns.AWSClient) - expectedArn, err := buildESDomainArn(name, awsClient.Partition, awsClient.AccountID, awsClient.Region) + expectedArn, err := buildDomainARN(name, awsClient.Partition, awsClient.AccountID, awsClient.Region) if err != nil { return err } @@ -72,7 +72,7 @@ func TestAccElasticsearchDomainPolicy_basic(t *testing.T) { }) } -func buildESDomainArn(name, partition, accId, region string) (string, error) { +func buildDomainARN(name, partition, accId, region string) (string, error) { if partition == "" { return "", fmt.Errorf("Unable to construct ES Domain ARN because of missing AWS partition") } diff --git a/internal/service/elasticsearch/domain_saml_options.go b/internal/service/elasticsearch/domain_saml_options.go index 5bc720b00739..8d04cb7b338d 100644 --- a/internal/service/elasticsearch/domain_saml_options.go +++ b/internal/service/elasticsearch/domain_saml_options.go @@ -3,6 +3,7 @@ package elasticsearch import ( "fmt" "log" + "time" "github.com/aws/aws-sdk-go/aws" elasticsearch "github.com/aws/aws-sdk-go/service/elasticsearchservice" @@ -18,6 +19,7 @@ func ResourceDomainSAMLOptions() *schema.Resource { Read: resourceDomainSAMLOptionsRead, Update: resourceDomainSAMLOptionsPut, Delete: resourceDomainSAMLOptionsDelete, + Importer: &schema.ResourceImporter{ State: func(d *schema.ResourceData, meta interface{}) 
([]*schema.ResourceData, error) { d.Set("domain_name", d.Id()) @@ -25,6 +27,11 @@ func ResourceDomainSAMLOptions() *schema.Resource { }, }, + Timeouts: &schema.ResourceTimeout{ + Update: schema.DefaultTimeout(60 * time.Minute), + Delete: schema.DefaultTimeout(60 * time.Minute), + }, + Schema: map[string]*schema.Schema{ "domain_name": { Type: schema.TypeString, @@ -85,7 +92,7 @@ func ResourceDomainSAMLOptions() *schema.Resource { "subject_key": { Type: schema.TypeString, Optional: true, - Default: "NameID", + Default: "", DiffSuppressFunc: elasticsearchDomainSamlOptionsDiffSupress, }, }, @@ -149,7 +156,7 @@ func resourceDomainSAMLOptionsPut(d *schema.ResourceData, meta interface{}) erro d.SetId(domainName) - if err := waitForDomainUpdate(conn, d.Get("domain_name").(string)); err != nil { + if err := waitForDomainUpdate(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutUpdate)); err != nil { return fmt.Errorf("error waiting for Elasticsearch Domain SAML Options update (%s) to succeed: %w", d.Id(), err) } @@ -173,7 +180,7 @@ func resourceDomainSAMLOptionsDelete(d *schema.ResourceData, meta interface{}) e log.Printf("[DEBUG] Waiting for Elasticsearch domain SAML Options %q to be deleted", d.Get("domain_name").(string)) - if err := waitForDomainUpdate(conn, d.Get("domain_name").(string)); err != nil { + if err := waitForDomainUpdate(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutDelete)); err != nil { return fmt.Errorf("error waiting for Elasticsearch Domain SAML Options (%s) to be deleted: %w", d.Id(), err) } diff --git a/internal/service/elasticsearch/domain_saml_options_test.go b/internal/service/elasticsearch/domain_saml_options_test.go index 27c2aac5eed2..0c9ea854c650 100644 --- a/internal/service/elasticsearch/domain_saml_options_test.go +++ b/internal/service/elasticsearch/domain_saml_options_test.go @@ -14,7 +14,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) -func TestAccElasticsearchDomainSamlOptions_basic(t *testing.T) { +func TestAccElasticsearchDomainSAMLOptions_basic(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus rName := sdkacctest.RandomWithPrefix("acc-test") @@ -50,7 +50,7 @@ func TestAccElasticsearchDomainSamlOptions_basic(t *testing.T) { }) } -func TestAccElasticsearchDomainSamlOptions_disappears(t *testing.T) { +func TestAccElasticsearchDomainSAMLOptions_disappears(t *testing.T) { rName := sdkacctest.RandomWithPrefix("acc-test") rUserName := sdkacctest.RandomWithPrefix("es-master-user") idpEntityId := fmt.Sprintf("https://%s", acctest.RandomDomainName()) @@ -75,7 +75,7 @@ func TestAccElasticsearchDomainSamlOptions_disappears(t *testing.T) { }) } -func TestAccElasticsearchDomainSamlOptions_disappears_Domain(t *testing.T) { +func TestAccElasticsearchDomainSAMLOptions_disappears_Domain(t *testing.T) { rName := sdkacctest.RandomWithPrefix("acc-test") rUserName := sdkacctest.RandomWithPrefix("es-master-user") idpEntityId := fmt.Sprintf("https://%s", acctest.RandomDomainName()) @@ -101,7 +101,7 @@ func TestAccElasticsearchDomainSamlOptions_disappears_Domain(t *testing.T) { }) } -func TestAccElasticsearchDomainSamlOptions_Update(t *testing.T) { +func TestAccElasticsearchDomainSAMLOptions_Update(t *testing.T) { rName := sdkacctest.RandomWithPrefix("acc-test") rUserName := sdkacctest.RandomWithPrefix("es-master-user") idpEntityId := fmt.Sprintf("https://%s", acctest.RandomDomainName()) @@ -135,7 +135,7 @@ func TestAccElasticsearchDomainSamlOptions_Update(t *testing.T) { }) } -func 
TestAccElasticsearchDomainSamlOptions_Disabled(t *testing.T) { +func TestAccElasticsearchDomainSAMLOptions_Disabled(t *testing.T) { rName := sdkacctest.RandomWithPrefix("acc-test") rUserName := sdkacctest.RandomWithPrefix("es-master-user") idpEntityId := fmt.Sprintf("https://%s", acctest.RandomDomainName()) diff --git a/internal/service/elasticsearch/domain_test.go b/internal/service/elasticsearch/domain_test.go index 029cf04b940c..88441aea2c02 100644 --- a/internal/service/elasticsearch/domain_test.go +++ b/internal/service/elasticsearch/domain_test.go @@ -262,6 +262,43 @@ func TestAccElasticsearchDomain_warm(t *testing.T) { }) } +func TestAccElasticsearchDomain_withColdStorageOptions(t *testing.T) { + var domain elasticsearch.ElasticsearchDomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_elasticsearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIamServiceLinkedRoleEs(t) }, + ErrorCheck: acctest.ErrorCheck(t, elasticsearch.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_WithColdStorageOptions(rName, false, false, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cluster_config.0.cold_storage_options.*", map[string]string{ + "enabled": "false", + })), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_WithColdStorageOptions(rName, true, true, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cluster_config.0.cold_storage_options.*", map[string]string{ + "enabled": "true", + })), + }, + }, + }) +} + func TestAccElasticsearchDomain_withDedicatedMaster(t *testing.T) { var domain elasticsearch.ElasticsearchDomainStatus rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -337,7 +374,7 @@ func TestAccElasticsearchDomain_duplicate(t *testing.T) { t.Fatal(err) } - err = tfelasticsearch.WaitForDomainCreation(conn, rName[:28]) + err = tfelasticsearch.WaitForDomainCreation(conn, rName[:28], 60*time.Minute) if err != nil { t.Fatal(err) } @@ -348,7 +385,7 @@ func TestAccElasticsearchDomain_duplicate(t *testing.T) { resource.TestCheckResourceAttr( resourceName, "elasticsearch_version", "1.5"), ), - ExpectError: regexp.MustCompile(`domain .+ already exists`), + ExpectError: regexp.MustCompile(`Elasticsearch Domain .+ already exists`), }, }, }) @@ -1766,6 +1803,51 @@ resource "aws_elasticsearch_domain" "test" { `, rName, enabled) } +func testAccDomainConfig_WithColdStorageOptions(rName string, dMasterEnabled bool, warmEnabled bool, csEnabled bool) string { + warmConfig := "" + if warmEnabled { + warmConfig = ` + warm_count = "2" + warm_type = "ultrawarm1.medium.elasticsearch" +` + } + + coldConfig := "" + if csEnabled { + coldConfig = ` + cold_storage_options { + enabled = true + } +` + } + + return fmt.Sprintf(` +resource "aws_elasticsearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + elasticsearch_version = "7.9" + + cluster_config { + instance_type = "m3.medium.elasticsearch" + instance_count = "1" + dedicated_master_enabled = %t + dedicated_master_count = "3" + dedicated_master_type = "m3.medium.elasticsearch" + warm_enabled = %[3]t + %[4]s + 
%[5]s + } + ebs_options { + ebs_enabled = true + volume_size = 10 + } + timeouts { + update = "180m" + } +} +`, rName, dMasterEnabled, warmEnabled, warmConfig, coldConfig) +} + func testAccDomainConfig_ClusterUpdate(rName string, instanceInt, snapshotInt int) string { return fmt.Sprintf(` resource "aws_elasticsearch_domain" "test" { diff --git a/internal/service/elasticsearch/find.go b/internal/service/elasticsearch/find.go index c49bddad1d69..8e4dc48055f4 100644 --- a/internal/service/elasticsearch/find.go +++ b/internal/service/elasticsearch/find.go @@ -14,6 +14,7 @@ func FindDomainByName(conn *elasticsearch.ElasticsearchService, name string) (*e } output, err := conn.DescribeElasticsearchDomain(input) + if tfawserr.ErrCodeEquals(err, elasticsearch.ErrCodeResourceNotFoundException) { return nil, &resource.NotFoundError{ LastError: err, diff --git a/internal/service/elasticsearch/wait.go b/internal/service/elasticsearch/wait.go index a04b4c222abe..30d01697ca7a 100644 --- a/internal/service/elasticsearch/wait.go +++ b/internal/service/elasticsearch/wait.go @@ -13,8 +13,6 @@ import ( const ( domainUpgradeSuccessMinTimeout = 10 * time.Second domainUpgradeSuccessDelay = 30 * time.Second - domainRetryTimeout = 60 * time.Minute - domainDeleteRetryTimeout = 90 * time.Minute ) // UpgradeSucceeded waits for an Upgrade to return Success @@ -37,9 +35,9 @@ func waitUpgradeSucceeded(conn *elasticsearch.ElasticsearchService, name string, return nil, err } -func WaitForDomainCreation(conn *elasticsearch.ElasticsearchService, domainName string) error { +func WaitForDomainCreation(conn *elasticsearch.ElasticsearchService, domainName string, timeout time.Duration) error { var out *elasticsearch.ElasticsearchDomainStatus - err := resource.Retry(domainRetryTimeout, func() *resource.RetryError { + err := resource.Retry(timeout, func() *resource.RetryError { var err error out, err = FindDomainByName(conn, domainName) if err != nil { @@ -62,15 +60,17 @@ func WaitForDomainCreation(conn *elasticsearch.ElasticsearchService, domainName return nil } } + if err != nil { - return fmt.Errorf("Error waiting for Elasticsearch domain to be created: %w", err) + return err } + return nil } -func waitForDomainUpdate(conn *elasticsearch.ElasticsearchService, domainName string) error { +func waitForDomainUpdate(conn *elasticsearch.ElasticsearchService, domainName string, timeout time.Duration) error { var out *elasticsearch.ElasticsearchDomainStatus - err := resource.Retry(domainRetryTimeout, func() *resource.RetryError { + err := resource.Retry(timeout, func() *resource.RetryError { var err error out, err = FindDomainByName(conn, domainName) if err != nil { @@ -93,15 +93,17 @@ func waitForDomainUpdate(conn *elasticsearch.ElasticsearchService, domainName st return nil } } + if err != nil { - return fmt.Errorf("Error waiting for Elasticsearch domain changes to be processed: %w", err) + return err } + return nil } -func waitForDomainDelete(conn *elasticsearch.ElasticsearchService, domainName string) error { +func waitForDomainDelete(conn *elasticsearch.ElasticsearchService, domainName string, timeout time.Duration) error { var out *elasticsearch.ElasticsearchDomainStatus - err := resource.Retry(domainDeleteRetryTimeout, func() *resource.RetryError { + err := resource.Retry(timeout, func() *resource.RetryError { var err error out, err = FindDomainByName(conn, domainName) @@ -130,8 +132,10 @@ func waitForDomainDelete(conn *elasticsearch.ElasticsearchService, domainName st return nil } } + if err != nil { - return fmt.Errorf("Error 
waiting for Elasticsearch domain to be deleted: %s", err) + return err } + return nil } diff --git a/internal/service/elb/load_balancer_test.go b/internal/service/elb/load_balancer_test.go index f85edaa5862d..6715609ff6da 100644 --- a/internal/service/elb/load_balancer_test.go +++ b/internal/service/elb/load_balancer_test.go @@ -1272,7 +1272,7 @@ resource "aws_s3_bucket_policy" "test" { "Principal": { "AWS": "${data.aws_elb_service_account.current.arn}" }, - "Resource": "arn:${data.aws_partition.current.partition}:s3:::%[1]s/*", + "Resource": "${aws_s3_bucket.accesslogs_bucket.arn}/*", "Sid": "Stmt1446575236270" } ], diff --git a/internal/service/elbv2/load_balancer.go b/internal/service/elbv2/load_balancer.go index 8f76a27adebf..606977f09217 100644 --- a/internal/service/elbv2/load_balancer.go +++ b/internal/service/elbv2/load_balancer.go @@ -532,7 +532,25 @@ func resourceLoadBalancerUpdate(d *schema.ResourceData, meta interface{}) error } log.Printf("[DEBUG] ALB Modify Load Balancer Attributes Request: %#v", input) - _, err := conn.ModifyLoadBalancerAttributes(input) + + // Not all attributes are supported in all partitions (e.g., ISO) + var err error + for { + _, err = conn.ModifyLoadBalancerAttributes(input) + if err == nil { + break + } + + re := regexp.MustCompile(`attribute key ('|")?([^'" ]+)('|")? is not recognized`) + if sm := re.FindStringSubmatch(err.Error()); len(sm) > 1 { + log.Printf("[WARN] failed to modify Load Balancer (%s), unsupported attribute (%s): %s", d.Id(), sm[2], err) + input.Attributes = removeAttribute(input.Attributes, sm[2]) + continue + } + + break + } + if err != nil { return fmt.Errorf("failure configuring LB attributes: %w", err) } @@ -658,6 +676,17 @@ func resourceLoadBalancerDelete(d *schema.ResourceData, meta interface{}) error return nil } +func removeAttribute(attributes []*elbv2.LoadBalancerAttribute, key string) []*elbv2.LoadBalancerAttribute { + for i, a := range attributes { + if aws.StringValue(a.Key) == key { + return append(attributes[:i], attributes[i+1:]...) 
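The ModifyLoadBalancerAttributes loop above keeps retrying after stripping whichever attribute key the partition rejected, parsed out of the error text. A self-contained sketch of that strip-and-retry shape; the attribute struct and modifyAttributes stub stand in for the elbv2 client, and the regex is the one quoted in the diff:

```go
package main

import (
	"fmt"
	"regexp"
)

type attribute struct{ Key, Value string }

// unsupportedKeyRE matches the service error text quoted in the change above.
var unsupportedKeyRE = regexp.MustCompile(`attribute key ('|")?([^'" ]+)('|")? is not recognized`)

// removeAttribute drops the first attribute with the given key. As in the
// provider helper, append over a sub-slice reuses the backing array, which
// is fine because the caller immediately replaces its slice.
func removeAttribute(attrs []attribute, key string) []attribute {
	for i, a := range attrs {
		if a.Key == key {
			return append(attrs[:i], attrs[i+1:]...)
		}
	}
	return attrs
}

// modifyAttributes is a fake ModifyLoadBalancerAttributes that rejects one
// key, the way an ISO partition might.
func modifyAttributes(attrs []attribute) error {
	for _, a := range attrs {
		if a.Key == "waf.fail_open.enabled" {
			return fmt.Errorf(`attribute key "waf.fail_open.enabled" is not recognized`)
		}
	}
	return nil
}

func main() {
	attrs := []attribute{
		{Key: "idle_timeout.timeout_seconds", Value: "60"},
		{Key: "waf.fail_open.enabled", Value: "false"},
	}

	var err error
	for {
		err = modifyAttributes(attrs)
		if err == nil {
			break
		}
		if sm := unsupportedKeyRE.FindStringSubmatch(err.Error()); len(sm) > 1 {
			fmt.Printf("dropping unsupported attribute %q and retrying\n", sm[2])
			attrs = removeAttribute(attrs, sm[2])
			continue
		}
		break // some other error: give up
	}
	fmt.Println("final attributes:", attrs, "err:", err)
}
```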
+ } + } + + log.Printf("[WARN] Unable to remove attribute %s from Load Balancer attributes: not found", key) + return attributes +} + // ALB automatically creates ENI(s) on creation // but the cleanup is asynchronous and may take time // which then blocks IGW, SG or VPC on deletion diff --git a/internal/service/emr/instance_group.go b/internal/service/emr/instance_group.go index 465ae064dfaf..3c0063723a1b 100644 --- a/internal/service/emr/instance_group.go +++ b/internal/service/emr/instance_group.go @@ -62,7 +62,6 @@ func ResourceInstanceGroup() *schema.Resource { "configurations_json": { Type: schema.TypeString, Optional: true, - ForceNew: false, ValidateFunc: validation.StringIsJSON, DiffSuppressFunc: verify.SuppressEquivalentJSONDiffs, StateFunc: func(v interface{}) string { diff --git a/internal/service/events/target.go b/internal/service/events/target.go index 4afb44b0c665..c4052d694b36 100644 --- a/internal/service/events/target.go +++ b/internal/service/events/target.go @@ -844,8 +844,8 @@ func expandTargetHTTPParameters(tfMap map[string]interface{}) *eventbridge.HttpP apiObject.HeaderParameters = flex.ExpandStringMap(v) } - if v, ok := tfMap["path_parameter_values"].(*schema.Set); ok && v.Len() > 0 { - apiObject.PathParameterValues = flex.ExpandStringSet(v) + if v, ok := tfMap["path_parameter_values"].([]interface{}); ok && len(v) > 0 { + apiObject.PathParameterValues = flex.ExpandStringList(v) } if v, ok := tfMap["query_string_parameters"].(map[string]interface{}); ok && len(v) > 0 { diff --git a/internal/service/events/target_test.go b/internal/service/events/target_test.go index fd5126a98c05..622644a4ce17 100644 --- a/internal/service/events/target_test.go +++ b/internal/service/events/target_test.go @@ -368,6 +368,58 @@ func TestAccEventsTarget_http(t *testing.T) { }) } +//https://github.com/hashicorp/terraform-provider-aws/issues/23805 +func TestAccEventsTarget_http_params(t *testing.T) { + resourceName := "aws_cloudwatch_event_target.test" + + var v eventbridge.Target + rName := sdkacctest.RandomWithPrefix("tf_http_target") + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, eventbridge.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckTargetDestroy, + Steps: []resource.TestStep{ + { + Config: testAccTargetHTTPParameterConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckTargetExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "http_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.path_parameter_values.#", "1"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.path_parameter_values.0", "test"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.header_parameters.%", "1"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.header_parameters.X-Test", "test"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.query_string_parameters.%", "2"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.query_string_parameters.Env", "test"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.query_string_parameters.Path", "$.detail.path"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateIdFunc: testAccTargetImportStateIdFunc(resourceName), + ImportStateVerify: true, + }, + { + Config: testAccTargetHTTPParameterConfigUpdated(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckTargetExists(resourceName, &v), + 
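Switching path_parameter_values from schema.TypeSet to a plain list in expandTargetHTTPParameters preserves element order, which matters because API Gateway path parameters are positional — the new test below asserts .0 and .1 explicitly. A sketch of the order-preserving expansion, assuming only that values arrive as []interface{} from the config; expandStringList here is a hand-rolled stand-in for the provider's flex helper:

```go
package main

import "fmt"

// expandStringList converts raw config values to string pointers while
// preserving order — the reason path_parameter_values moved from a set
// (unordered) to a list in the change above.
func expandStringList(raw []interface{}) []*string {
	out := make([]*string, 0, len(raw))
	for _, v := range raw {
		s, ok := v.(string)
		if !ok {
			continue // ignore non-strings defensively
		}
		out = append(out, &s)
	}
	return out
}

func main() {
	values := []interface{}{"test", "test2"} // e.g. for .../{first}/{second}
	for i, p := range expandStringList(values) {
		fmt.Printf("path parameter %d = %s\n", i, *p)
	}
}
```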
resource.TestCheckResourceAttr(resourceName, "http_target.#", "1"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.path_parameter_values.#", "2"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.path_parameter_values.0", "test"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.path_parameter_values.1", "test2"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.header_parameters.%", "1"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.header_parameters.X-Test", "test"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.query_string_parameters.%", "2"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.query_string_parameters.Env", "test"), + resource.TestCheckResourceAttr(resourceName, "http_target.0.query_string_parameters.Path", "$.detail.path"), + ), + }, + }, + }) +} + func TestAccEventsTarget_ecs(t *testing.T) { resourceName := "aws_cloudwatch_event_target.test" iamRoleResourceName := "aws_iam_role.test" @@ -1277,7 +1329,7 @@ data "aws_partition" "current" {} `, rName) } -func testAccTargetHTTPConfig(rName string) string { +func testAccTargetHTTPConfigBase(rName string) string { return fmt.Sprintf(` resource "aws_cloudwatch_event_rule" "test" { name = %[1]q @@ -1286,22 +1338,6 @@ resource "aws_cloudwatch_event_rule" "test" { schedule_expression = "rate(5 minutes)" } -resource "aws_cloudwatch_event_target" "test" { - arn = "${aws_api_gateway_stage.test.execution_arn}/GET" - rule = aws_cloudwatch_event_rule.test.id - - http_target { - path_parameter_values = [] - query_string_parameters = { - Env = "test" - Path = "$.detail.path" - } - header_parameters = { - X-Test = "test" - } - } -} - resource "aws_api_gateway_rest_api" "test" { name = %[1]q body = jsonencode({ @@ -1347,6 +1383,66 @@ data "aws_partition" "current" {} `, rName) } +func testAccTargetHTTPConfig(rName string) string { + return testAccTargetHTTPConfigBase(rName) + ` +resource "aws_cloudwatch_event_target" "test" { + arn = "${aws_api_gateway_stage.test.execution_arn}/GET" + rule = aws_cloudwatch_event_rule.test.id + + http_target { + path_parameter_values = [] + query_string_parameters = { + Env = "test" + Path = "$.detail.path" + } + header_parameters = { + X-Test = "test" + } + } +} +` +} + +func testAccTargetHTTPParameterConfig(rName string) string { + return testAccTargetHTTPConfigBase(rName) + ` +resource "aws_cloudwatch_event_target" "test" { + arn = "${aws_api_gateway_stage.test.execution_arn}/*/*/GET" + rule = aws_cloudwatch_event_rule.test.id + + http_target { + path_parameter_values = ["test"] + query_string_parameters = { + Env = "test" + Path = "$.detail.path" + } + header_parameters = { + X-Test = "test" + } + } +} +` +} + +func testAccTargetHTTPParameterConfigUpdated(rName string) string { + return testAccTargetHTTPConfigBase(rName) + ` +resource "aws_cloudwatch_event_target" "test" { + arn = "${aws_api_gateway_stage.test.execution_arn}/*/*/*/GET" + rule = aws_cloudwatch_event_rule.test.id + + http_target { + path_parameter_values = ["test", "test2"] + query_string_parameters = { + Env = "test" + Path = "$.detail.path" + } + header_parameters = { + X-Test = "test" + } + } +} +` +} + func testAccTargetECSBaseConfig(rName string) string { return fmt.Sprintf(` resource "aws_vpc" "vpc" { diff --git a/internal/service/fsx/ontap_file_system.go b/internal/service/fsx/ontap_file_system.go index 858d1b4698b9..381697ce50e4 100644 --- a/internal/service/fsx/ontap_file_system.go +++ b/internal/service/fsx/ontap_file_system.go @@ 
-183,7 +183,6 @@ func ResourceOntapFileSystem() *schema.Resource { "storage_capacity": { Type: schema.TypeInt, Optional: true, - ForceNew: true, ValidateFunc: validation.IntBetween(1024, 192*1024), }, "storage_type": { @@ -206,8 +205,7 @@ func ResourceOntapFileSystem() *schema.Resource { "throughput_capacity": { Type: schema.TypeInt, Required: true, - ForceNew: true, - ValidateFunc: validation.IntInSlice([]int{128, 512, 1024, 2048}), + ValidateFunc: validation.IntInSlice([]int{128, 256, 512, 1024, 2048}), }, "vpc_id": { Type: schema.TypeString, @@ -392,6 +390,10 @@ func resourceOntapFileSystemUpdate(d *schema.ResourceData, meta interface{}) err OntapConfiguration: &fsx.UpdateFileSystemOntapConfiguration{}, } + if d.HasChange("storage_capacity") { + input.StorageCapacity = aws.Int64(int64(d.Get("storage_capacity").(int))) + } + if d.HasChange("automatic_backup_retention_days") { input.OntapConfiguration.AutomaticBackupRetentionDays = aws.Int64(int64(d.Get("automatic_backup_retention_days").(int))) } @@ -408,6 +410,14 @@ func resourceOntapFileSystemUpdate(d *schema.ResourceData, meta interface{}) err input.OntapConfiguration.WeeklyMaintenanceStartTime = aws.String(d.Get("weekly_maintenance_start_time").(string)) } + if d.HasChange("throughput_capacity") { + input.OntapConfiguration.ThroughputCapacity = aws.Int64(int64(d.Get("throughput_capacity").(int))) + } + + if d.HasChange("disk_iops_configuration") { + input.OntapConfiguration.DiskIopsConfiguration = expandFsxOntapFileDiskIopsConfiguration(d.Get("disk_iops_configuration").([]interface{})) + } + _, err := conn.UpdateFileSystem(input) if err != nil { @@ -417,6 +427,10 @@ func resourceOntapFileSystemUpdate(d *schema.ResourceData, meta interface{}) err if _, err := waitFileSystemUpdated(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { return fmt.Errorf("error waiting for FSx ONTAP File System (%s) update: %w", d.Id(), err) } + + if _, err := waitAdministrativeActionCompleted(conn, d.Id(), fsx.AdministrativeActionTypeFileSystemUpdate, d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for FSx ONTAP File System (%s) update: %w", d.Id(), err) + } } return resourceOntapFileSystemRead(d, meta) diff --git a/internal/service/fsx/ontap_file_system_test.go b/internal/service/fsx/ontap_file_system_test.go index 5b407575993b..bcacea8fe69c 100644 --- a/internal/service/fsx/ontap_file_system_test.go +++ b/internal/service/fsx/ontap_file_system_test.go @@ -148,7 +148,7 @@ func TestAccFSxOntapFileSystem_diskIops(t *testing.T) { CheckDestroy: testAccCheckFsxOntapFileSystemDestroy, Steps: []resource.TestStep{ { - Config: testAccOntapFileSystemDiskIopsConfigurationConfig(rName), + Config: testAccOntapFileSystemDiskIopsConfigurationConfig(rName, 3072), Check: resource.ComposeTestCheckFunc( testAccCheckFsxOntapFileSystemExists(resourceName, &filesystem), resource.TestCheckResourceAttr(resourceName, "disk_iops_configuration.#", "1"), @@ -162,6 +162,15 @@ func TestAccFSxOntapFileSystem_diskIops(t *testing.T) { ImportStateVerify: true, ImportStateVerifyIgnore: []string{"security_group_ids"}, }, + { + Config: testAccOntapFileSystemDiskIopsConfigurationConfig(rName, 4000), + Check: resource.ComposeTestCheckFunc( + testAccCheckFsxOntapFileSystemExists(resourceName, &filesystem), + resource.TestCheckResourceAttr(resourceName, "disk_iops_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "disk_iops_configuration.0.mode", "USER_PROVISIONED"), + resource.TestCheckResourceAttr(resourceName, 
"disk_iops_configuration.0.iops", "4000"), + ), + }, }, }) } @@ -445,6 +454,78 @@ func TestAccFSxOntapFileSystem_dailyAutomaticBackupStartTime(t *testing.T) { }) } +func TestAccFSxOntapFileSystem_throughputCapacity(t *testing.T) { + var filesystem1, filesystem2 fsx.FileSystem + resourceName := "aws_fsx_ontap_file_system.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckPartitionHasService(fsx.EndpointsID, t) }, + ErrorCheck: acctest.ErrorCheck(t, fsx.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckFsxOntapFileSystemDestroy, + Steps: []resource.TestStep{ + { + Config: testAccOntapFileSystemBasicConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckFsxOntapFileSystemExists(resourceName, &filesystem1), + resource.TestCheckResourceAttr(resourceName, "throughput_capacity", "128"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"security_group_ids"}, + }, + { + Config: testAccOntapFileSystemThroughputCapacityConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckFsxOntapFileSystemExists(resourceName, &filesystem2), + testAccCheckFsxOntapFileSystemNotRecreated(&filesystem1, &filesystem2), + resource.TestCheckResourceAttr(resourceName, "throughput_capacity", "256"), + ), + }, + }, + }) +} + +func TestAccFSxOntapFileSystem_storageCapacity(t *testing.T) { + var filesystem1, filesystem2 fsx.FileSystem + resourceName := "aws_fsx_ontap_file_system.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckPartitionHasService(fsx.EndpointsID, t) }, + ErrorCheck: acctest.ErrorCheck(t, fsx.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckFsxOntapFileSystemDestroy, + Steps: []resource.TestStep{ + { + Config: testAccOntapFileSystemBasicConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckFsxOntapFileSystemExists(resourceName, &filesystem1), + resource.TestCheckResourceAttr(resourceName, "storage_capacity", "1024"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"security_group_ids"}, + }, + { + Config: testAccOntapFileSystemStorageCapacityConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckFsxOntapFileSystemExists(resourceName, &filesystem2), + testAccCheckFsxOntapFileSystemNotRecreated(&filesystem1, &filesystem2), + resource.TestCheckResourceAttr(resourceName, "storage_capacity", "2048"), + ), + }, + }, + }) +} + func testAccCheckFsxOntapFileSystemExists(resourceName string, fs *fsx.FileSystem) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[resourceName] @@ -589,7 +670,7 @@ resource "aws_fsx_ontap_file_system" "test" { `, rName)) } -func testAccOntapFileSystemDiskIopsConfigurationConfig(rName string) string { +func testAccOntapFileSystemDiskIopsConfigurationConfig(rName string, iops int) string { return acctest.ConfigCompose(testAccOntapFileSystemBaseConfig(rName), fmt.Sprintf(` resource "aws_fsx_ontap_file_system" "test" { storage_capacity = 1024 @@ -600,14 +681,14 @@ resource "aws_fsx_ontap_file_system" "test" { disk_iops_configuration { mode = "USER_PROVISIONED" - iops = 3072 + iops = %[2]d } tags = { Name = %[1]q } } -`, rName)) +`, rName, 
iops)) } func testAccOntapFileSystemRouteTableConfig(rName string) string { @@ -857,3 +938,27 @@ resource "aws_fsx_ontap_file_system" "test" { } `, rName)) } + +func testAccOntapFileSystemThroughputCapacityConfig(rName string) string { + return acctest.ConfigCompose(testAccOntapFileSystemBaseConfig(rName), ` +resource "aws_fsx_ontap_file_system" "test" { + storage_capacity = 1024 + subnet_ids = [aws_subnet.test1.id, aws_subnet.test2.id] + deployment_type = "MULTI_AZ_1" + throughput_capacity = 256 + preferred_subnet_id = aws_subnet.test1.id +} +`) +} + +func testAccOntapFileSystemStorageCapacityConfig(rName string) string { + return acctest.ConfigCompose(testAccOntapFileSystemBaseConfig(rName), ` +resource "aws_fsx_ontap_file_system" "test" { + storage_capacity = 2048 + subnet_ids = [aws_subnet.test1.id, aws_subnet.test2.id] + deployment_type = "MULTI_AZ_1" + throughput_capacity = 128 + preferred_subnet_id = aws_subnet.test1.id +} +`) +} diff --git a/internal/service/imagebuilder/distribution_configuration.go b/internal/service/imagebuilder/distribution_configuration.go index c281c4b0b973..5879ac978b87 100644 --- a/internal/service/imagebuilder/distribution_configuration.go +++ b/internal/service/imagebuilder/distribution_configuration.go @@ -174,6 +174,11 @@ func ResourceDistributionConfiguration() *schema.Resource { MaxItems: 100, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "account_id": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidAccountID, + }, "default": { Type: schema.TypeBool, Optional: true, @@ -563,6 +568,10 @@ func expandLaunchTemplateConfiguration(tfMap map[string]interface{}) *imagebuild apiObject.SetDefaultVersion = aws.Bool(v) } + if v, ok := tfMap["account_id"].(string); ok && v != "" { + apiObject.AccountId = aws.String(v) + } + return apiObject } @@ -747,5 +756,9 @@ func flattenLaunchTemplateConfiguration(apiObject *imagebuilder.LaunchTemplateCo tfMap["default"] = aws.BoolValue(v) } + if v := apiObject.AccountId; v != nil { + tfMap["account_id"] = aws.StringValue(v) + } + return tfMap } diff --git a/internal/service/imagebuilder/distribution_configuration_data_source.go b/internal/service/imagebuilder/distribution_configuration_data_source.go index 85f45f4ede20..8107016f7638 100644 --- a/internal/service/imagebuilder/distribution_configuration_data_source.go +++ b/internal/service/imagebuilder/distribution_configuration_data_source.go @@ -142,6 +142,10 @@ func DataSourceDistributionConfiguration() *schema.Resource { Computed: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "account_id": { + Type: schema.TypeString, + Computed: true, + }, "default": { Type: schema.TypeBool, Computed: true, diff --git a/internal/service/imagebuilder/distribution_configuration_data_source_test.go b/internal/service/imagebuilder/distribution_configuration_data_source_test.go index d09e4aac246b..388d81891672 100644 --- a/internal/service/imagebuilder/distribution_configuration_data_source_test.go +++ b/internal/service/imagebuilder/distribution_configuration_data_source_test.go @@ -38,6 +38,7 @@ func TestAccImageBuilderDistributionConfigurationDataSource_arn(t *testing.T) { resource.TestCheckResourceAttrPair(dataSourceName, "distribution.0.launch_template_configuration.#", resourceName, "distribution.0.launch_template_configuration.#"), resource.TestCheckResourceAttrPair(dataSourceName, "distribution.0.launch_template_configuration.0.default", resourceName, "distribution.0.launch_template_configuration.0.default"), 
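The new account_id argument follows the provider's usual optional-attribute round trip: expand only when the config value is a non-empty string, flatten only when the API field is non-nil. A stdlib sketch of that symmetric guard — the struct here is a stand-in, not the imagebuilder SDK type:

```go
package main

import "fmt"

type launchTemplateConfiguration struct {
	AccountId *string // nil when the service did not return one
}

// expand copies a config value into the API struct only when it is set.
func expand(tfMap map[string]interface{}) *launchTemplateConfiguration {
	apiObject := &launchTemplateConfiguration{}
	if v, ok := tfMap["account_id"].(string); ok && v != "" {
		apiObject.AccountId = &v
	}
	return apiObject
}

// flatten copies the API value back into state only when it is present,
// so an unset attribute never materializes as an empty string.
func flatten(apiObject *launchTemplateConfiguration) map[string]interface{} {
	tfMap := map[string]interface{}{}
	if v := apiObject.AccountId; v != nil {
		tfMap["account_id"] = *v
	}
	return tfMap
}

func main() {
	state := flatten(expand(map[string]interface{}{"account_id": "111111111111"}))
	fmt.Println(state) // map[account_id:111111111111]
}
```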
resource.TestCheckResourceAttrPair(dataSourceName, "distribution.0.launch_template_configuration.0.launch_template_id", resourceName, "distribution.0.launch_template_configuration.0.launch_template_id"), + resource.TestCheckResourceAttrPair(dataSourceName, "distribution.0.launch_template_configuration.0.account_id", resourceName, "distribution.0.launch_template_configuration.0.account_id"), resource.TestCheckResourceAttrPair(dataSourceName, "name", resourceName, "name"), resource.TestCheckResourceAttrPair(dataSourceName, "tags.%", resourceName, "tags.%"), ), @@ -50,6 +51,8 @@ func testAccDistributionConfigurationARNDataSourceConfig(rName string) string { return fmt.Sprintf(` data "aws_region" "current" {} +data "aws_caller_identity" "current" {} + resource "aws_launch_template" "test" { instance_type = "t2.micro" name = %[1]q @@ -71,6 +74,7 @@ resource "aws_imagebuilder_distribution_configuration" "test" { } launch_template_configuration { + account_id = data.aws_caller_identity.current.account_id default = false launch_template_id = aws_launch_template.test.id } diff --git a/internal/service/imagebuilder/distribution_configuration_test.go b/internal/service/imagebuilder/distribution_configuration_test.go index e0681d3973e4..588aabefbbf9 100644 --- a/internal/service/imagebuilder/distribution_configuration_test.go +++ b/internal/service/imagebuilder/distribution_configuration_test.go @@ -624,6 +624,18 @@ func TestAccImageBuilderDistributionConfiguration_Distribution_launchTemplateCon resource.TestCheckResourceAttrPair(resourceName, "distribution.0.launch_template_configuration.0.launch_template_id", launchTemplateResourceName, "id"), ), }, + { + Config: testAccDistributionConfigurationDistributionLaunchTemplateConfigurationLaunchTemplateIDAccountIDConfig(rName, "111111111111"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDistributionConfigurationExists(resourceName), + acctest.CheckResourceAttrRFC3339(resourceName, "date_updated"), + resource.TestCheckResourceAttr(resourceName, "distribution.#", "1"), + resource.TestCheckResourceAttr(resourceName, "distribution.0.launch_template_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "distribution.0.launch_template_configuration.0.default", "false"), + resource.TestCheckResourceAttrPair(resourceName, "distribution.0.launch_template_configuration.0.launch_template_id", launchTemplateResourceName, "id"), + resource.TestCheckResourceAttr(resourceName, "distribution.0.launch_template_configuration.0.account_id", "111111111111"), + ), + }, }, }) } @@ -1093,6 +1105,8 @@ func testAccDistributionConfigurationDistributionLaunchTemplateConfigurationLaun return fmt.Sprintf(` data "aws_region" "current" {} +data "aws_caller_identity" "current" {} + resource "aws_launch_template" "test" { instance_type = "t2.micro" name = %[1]q @@ -1105,6 +1119,7 @@ resource "aws_imagebuilder_distribution_configuration" "test" { launch_template_configuration { default = true launch_template_id = aws_launch_template.test.id + account_id = data.aws_caller_identity.current.account_id } region = data.aws_region.current.name @@ -1117,6 +1132,8 @@ func testAccDistributionConfigurationDistributionLaunchTemplateConfigurationLaun return fmt.Sprintf(` data "aws_region" "current" {} +data "aws_caller_identity" "current" {} + resource "aws_launch_template" "test" { instance_type = "t2.micro" name = %[1]q @@ -1129,6 +1146,7 @@ resource "aws_imagebuilder_distribution_configuration" "test" { launch_template_configuration { default = false launch_template_id = 
aws_launch_template.test.id + account_id = data.aws_caller_identity.current.account_id } region = data.aws_region.current.name @@ -1137,6 +1155,37 @@ resource "aws_imagebuilder_distribution_configuration" "test" { `, rName) } +func testAccDistributionConfigurationDistributionLaunchTemplateConfigurationLaunchTemplateIDAccountIDConfig(rName string, accountId string) string { + return fmt.Sprintf(` +data "aws_region" "current" {} + +resource "aws_launch_template" "test" { + instance_type = "t2.micro" + name = %[1]q +} + +resource "aws_imagebuilder_distribution_configuration" "test" { + name = %[1]q + + distribution { + launch_template_configuration { + default = false + launch_template_id = aws_launch_template.test.id + account_id = %[2]q + } + + ami_distribution_configuration { + launch_permission { + user_ids = [%[2]q] + } + } + + region = data.aws_region.current.name + } +} + `, rName, accountId) +} + func testAccDistributionConfigurationDistributionLicenseConfigurationARNs1Config(rName string) string { return fmt.Sprintf(` data "aws_region" "current" {} diff --git a/internal/service/iot/authorizer.go b/internal/service/iot/authorizer.go index 1d8f24550fa4..91343ac2b0eb 100644 --- a/internal/service/iot/authorizer.go +++ b/internal/service/iot/authorizer.go @@ -41,6 +41,11 @@ func ResourceAuthorizer() *schema.Resource { Required: true, ValidateFunc: verify.ValidARN, }, + "enable_caching_for_http": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, "name": { Type: schema.TypeString, Required: true, @@ -85,6 +90,7 @@ func resourceAuthorizerCreate(d *schema.ResourceData, meta interface{}) error { input := &iot.CreateAuthorizerInput{ AuthorizerFunctionArn: aws.String(d.Get("authorizer_function_arn").(string)), AuthorizerName: aws.String(name), + EnableCachingForHttp: aws.Bool(d.Get("enable_caching_for_http").(bool)), SigningDisabled: aws.Bool(d.Get("signing_disabled").(bool)), Status: aws.String(d.Get("status").(string)), } @@ -126,6 +132,7 @@ func resourceAuthorizerRead(d *schema.ResourceData, meta interface{}) error { d.Set("arn", authorizer.AuthorizerArn) d.Set("authorizer_function_arn", authorizer.AuthorizerFunctionArn) + d.Set("enable_caching_for_http", authorizer.EnableCachingForHttp) d.Set("name", authorizer.AuthorizerName) d.Set("signing_disabled", authorizer.SigningDisabled) d.Set("status", authorizer.Status) @@ -146,6 +153,10 @@ func resourceAuthorizerUpdate(d *schema.ResourceData, meta interface{}) error { input.AuthorizerFunctionArn = aws.String(d.Get("authorizer_function_arn").(string)) } + if d.HasChange("enable_caching_for_http") { + input.EnableCachingForHttp = aws.Bool(d.Get("enable_caching_for_http").(bool)) + } + if d.HasChange("status") { input.Status = aws.String(d.Get("status").(string)) } diff --git a/internal/service/iot/authorizer_test.go b/internal/service/iot/authorizer_test.go index f93306f9474a..9dbb72d1b071 100644 --- a/internal/service/iot/authorizer_test.go +++ b/internal/service/iot/authorizer_test.go @@ -30,6 +30,7 @@ func TestAccIoTAuthorizer_basic(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAuthorizerExists(resourceName, &conf), acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "iot", fmt.Sprintf("authorizer/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "enable_caching_for_http", "false"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "signing_disabled", "false"), resource.TestCheckResourceAttr(resourceName, "status", "ACTIVE"), @@ -118,6 
+119,7 @@ func TestAccIoTAuthorizer_update(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAuthorizerExists(resourceName, &conf), acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "iot", fmt.Sprintf("authorizer/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "enable_caching_for_http", "false"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "signing_disabled", "false"), resource.TestCheckResourceAttr(resourceName, "status", "ACTIVE"), @@ -131,6 +133,7 @@ func TestAccIoTAuthorizer_update(t *testing.T) { Check: resource.ComposeTestCheckFunc( testAccCheckAuthorizerExists(resourceName, &conf), acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "iot", fmt.Sprintf("authorizer/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "enable_caching_for_http", "true"), resource.TestCheckResourceAttr(resourceName, "name", rName), resource.TestCheckResourceAttr(resourceName, "signing_disabled", "false"), resource.TestCheckResourceAttr(resourceName, "status", "INACTIVE"), @@ -248,6 +251,7 @@ resource "aws_iot_authorizer" "test" { signing_disabled = false token_key_name = "Token-Header-2" status = "INACTIVE" + enable_caching_for_http = true token_signing_public_keys = { Key1 = "${file("test-fixtures/iot-authorizer-signing-key.pem")}" diff --git a/internal/service/iot/indexing_configuration.go b/internal/service/iot/indexing_configuration.go new file mode 100644 index 000000000000..f0552185d6cf --- /dev/null +++ b/internal/service/iot/indexing_configuration.go @@ -0,0 +1,393 @@ +package iot + +import ( + "context" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iot" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" +) + +func ResourceIndexingConfiguration() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceIndexingConfigurationPut, + ReadWithoutTimeout: resourceIndexingConfigurationRead, + UpdateWithoutTimeout: resourceIndexingConfigurationPut, + DeleteWithoutTimeout: schema.NoopContext, + + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "thing_group_indexing_configuration": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "custom_field": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Optional: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(iot.FieldType_Values(), false), + }, + }, + }, + }, + "managed_field": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Optional: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(iot.FieldType_Values(), false), + }, + }, + }, + }, + "thing_group_indexing_mode": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(iot.ThingGroupIndexingMode_Values(), false), + }, + }, + }, + AtLeastOneOf: []string{"thing_group_indexing_configuration", "thing_indexing_configuration"}, + }, + 
"thing_indexing_configuration": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "custom_field": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Optional: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(iot.FieldType_Values(), false), + }, + }, + }, + }, + "device_defender_indexing_mode": { + Type: schema.TypeString, + Optional: true, + Default: iot.DeviceDefenderIndexingModeOff, + ValidateFunc: validation.StringInSlice(iot.DeviceDefenderIndexingMode_Values(), false), + }, + "managed_field": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Optional: true, + }, + "type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(iot.FieldType_Values(), false), + }, + }, + }, + }, + "named_shadow_indexing_mode": { + Type: schema.TypeString, + Optional: true, + Default: iot.NamedShadowIndexingModeOff, + ValidateFunc: validation.StringInSlice(iot.NamedShadowIndexingMode_Values(), false), + }, + "thing_connectivity_indexing_mode": { + Type: schema.TypeString, + Optional: true, + Default: iot.ThingConnectivityIndexingModeOff, + ValidateFunc: validation.StringInSlice(iot.ThingConnectivityIndexingMode_Values(), false), + }, + "thing_indexing_mode": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(iot.ThingIndexingMode_Values(), false), + }, + }, + }, + AtLeastOneOf: []string{"thing_indexing_configuration", "thing_group_indexing_configuration"}, + }, + }, + } +} + +func resourceIndexingConfigurationPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).IoTConn + + input := &iot.UpdateIndexingConfigurationInput{} + + if v, ok := d.GetOk("thing_group_indexing_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.ThingGroupIndexingConfiguration = expandThingGroupIndexingConfiguration(v.([]interface{})[0].(map[string]interface{})) + } + + if v, ok := d.GetOk("thing_indexing_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.ThingIndexingConfiguration = expandThingIndexingConfiguration(v.([]interface{})[0].(map[string]interface{})) + } + + log.Printf("[DEBUG] Updating IoT Indexing Configuration: %s", input) + _, err := conn.UpdateIndexingConfigurationWithContext(ctx, input) + + if err != nil { + return diag.Errorf("error updating IoT Indexing Configuration: %s", err) + } + + d.SetId(meta.(*conns.AWSClient).Region) + + return resourceIndexingConfigurationRead(ctx, d, meta) +} + +func resourceIndexingConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).IoTConn + + output, err := conn.GetIndexingConfigurationWithContext(ctx, &iot.GetIndexingConfigurationInput{}) + + if err != nil { + return diag.Errorf("error reading IoT Indexing Configuration: %s", err) + } + + if output.ThingGroupIndexingConfiguration != nil { + if err := d.Set("thing_group_indexing_configuration", []interface{}{flattenThingGroupIndexingConfiguration(output.ThingGroupIndexingConfiguration)}); err != nil { + return diag.Errorf("error setting thing_group_indexing_configuration: %s", err) + 
} + } else { + d.Set("thing_group_indexing_configuration", nil) + } + if output.ThingIndexingConfiguration != nil { + if err := d.Set("thing_indexing_configuration", []interface{}{flattenThingIndexingConfiguration(output.ThingIndexingConfiguration)}); err != nil { + return diag.Errorf("error setting thing_indexing_configuration: %s", err) + } + } else { + d.Set("thing_indexing_configuration", nil) + } + + return nil +} + +func flattenThingGroupIndexingConfiguration(apiObject *iot.ThingGroupIndexingConfiguration) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.CustomFields; v != nil { + tfMap["custom_field"] = flattenFields(v) + } + + if v := apiObject.ManagedFields; v != nil { + tfMap["managed_field"] = flattenFields(v) + } + + if v := apiObject.ThingGroupIndexingMode; v != nil { + tfMap["thing_group_indexing_mode"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenThingIndexingConfiguration(apiObject *iot.ThingIndexingConfiguration) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.CustomFields; v != nil { + tfMap["custom_field"] = flattenFields(v) + } + + if v := apiObject.DeviceDefenderIndexingMode; v != nil { + tfMap["device_defender_indexing_mode"] = aws.StringValue(v) + } + + if v := apiObject.ManagedFields; v != nil { + tfMap["managed_field"] = flattenFields(v) + } + + if v := apiObject.NamedShadowIndexingMode; v != nil { + tfMap["named_shadow_indexing_mode"] = aws.StringValue(v) + } + + if v := apiObject.ThingConnectivityIndexingMode; v != nil { + tfMap["thing_connectivity_indexing_mode"] = aws.StringValue(v) + } + + if v := apiObject.ThingIndexingMode; v != nil { + tfMap["thing_indexing_mode"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenField(apiObject *iot.Field) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Name; v != nil { + tfMap["name"] = aws.StringValue(v) + } + + if v := apiObject.Type; v != nil { + tfMap["type"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenFields(apiObjects []*iot.Field) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + if apiObject == nil { + continue + } + + tfList = append(tfList, flattenField(apiObject)) + } + + return tfList +} + +func expandThingGroupIndexingConfiguration(tfMap map[string]interface{}) *iot.ThingGroupIndexingConfiguration { + if tfMap == nil { + return nil + } + + apiObject := &iot.ThingGroupIndexingConfiguration{} + + if v, ok := tfMap["custom_field"].(*schema.Set); ok && v.Len() > 0 { + apiObject.CustomFields = expandFields(v.List()) + } + + if v, ok := tfMap["managed_field"].(*schema.Set); ok && v.Len() > 0 { + apiObject.ManagedFields = expandFields(v.List()) + } + + if v, ok := tfMap["thing_group_indexing_mode"].(string); ok && v != "" { + apiObject.ThingGroupIndexingMode = aws.String(v) + } + + return apiObject +} + +func expandThingIndexingConfiguration(tfMap map[string]interface{}) *iot.ThingIndexingConfiguration { + if tfMap == nil { + return nil + } + + apiObject := &iot.ThingIndexingConfiguration{} + + if v, ok := tfMap["custom_field"].(*schema.Set); ok && v.Len() > 0 { + apiObject.CustomFields = expandFields(v.List()) + } + + if v, ok := tfMap["device_defender_indexing_mode"].(string); ok && v != "" { + apiObject.DeviceDefenderIndexingMode = aws.String(v) + } + + if v, 
ok := tfMap["managed_field"].(*schema.Set); ok && v.Len() > 0 { + apiObject.ManagedFields = expandFields(v.List()) + } + + if v, ok := tfMap["named_shadow_indexing_mode"].(string); ok && v != "" { + apiObject.NamedShadowIndexingMode = aws.String(v) + } + + if v, ok := tfMap["thing_connectivity_indexing_mode"].(string); ok && v != "" { + apiObject.ThingConnectivityIndexingMode = aws.String(v) + } + + if v, ok := tfMap["thing_indexing_mode"].(string); ok && v != "" { + apiObject.ThingIndexingMode = aws.String(v) + } + + return apiObject +} + +func expandField(tfMap map[string]interface{}) *iot.Field { + if tfMap == nil { + return nil + } + + apiObject := &iot.Field{} + + if v, ok := tfMap["name"].(string); ok && v != "" { + apiObject.Name = aws.String(v) + } + + if v, ok := tfMap["type"].(string); ok && v != "" { + apiObject.Type = aws.String(v) + } + + return apiObject +} + +func expandFields(tfList []interface{}) []*iot.Field { + if len(tfList) == 0 { + return nil + } + + var apiObjects []*iot.Field + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandField(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, apiObject) + } + + return apiObjects +} diff --git a/internal/service/iot/indexing_configuration_test.go b/internal/service/iot/indexing_configuration_test.go new file mode 100644 index 000000000000..779b98292021 --- /dev/null +++ b/internal/service/iot/indexing_configuration_test.go @@ -0,0 +1,143 @@ +package iot_test + +import ( + "testing" + + "github.com/aws/aws-sdk-go/service/iot" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccIoTIndexingConfiguration_serial(t *testing.T) { + testCases := map[string]func(t *testing.T){ + "basic": testAccIndexingConfiguration_basic, + "allAttributes": testAccIndexingConfiguration_allAttributes, + } + + for name, tc := range testCases { + tc := tc + t.Run(name, func(t *testing.T) { + tc(t) + }) + } +} + +func testAccIndexingConfiguration_basic(t *testing.T) { + resourceName := "aws_iot_indexing_configuration.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, iot.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: acctest.CheckDestroyNoop, + Steps: []resource.TestStep{ + { + Config: testAccIndexingConfigurationConfig, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "thing_group_indexing_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "thing_group_indexing_configuration.0.custom_field.#", "0"), + resource.TestCheckResourceAttr(resourceName, "thing_group_indexing_configuration.0.managed_field.#", "0"), + resource.TestCheckResourceAttr(resourceName, "thing_group_indexing_configuration.0.thing_group_indexing_mode", "OFF"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.custom_field.#", "0"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.device_defender_indexing_mode", "OFF"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.managed_field.#", "0"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.named_shadow_indexing_mode", "OFF"), + resource.TestCheckResourceAttr(resourceName, 
"thing_indexing_configuration.0.thing_connectivity_indexing_mode", "OFF"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.thing_indexing_mode", "OFF"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccIndexingConfiguration_allAttributes(t *testing.T) { + resourceName := "aws_iot_indexing_configuration.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, iot.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: acctest.CheckDestroyNoop, + Steps: []resource.TestStep{ + { + Config: testAccIndexingConfigurationAllAttributesConfig, + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "thing_group_indexing_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "thing_group_indexing_configuration.0.custom_field.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "thing_group_indexing_configuration.0.managed_field.#", "0"), + resource.TestCheckResourceAttr(resourceName, "thing_group_indexing_configuration.0.thing_group_indexing_mode", "ON"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.custom_field.#", "3"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "thing_indexing_configuration.0.custom_field.*", map[string]string{ + "name": "attributes.version", + "type": "Number", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "thing_indexing_configuration.0.custom_field.*", map[string]string{ + "name": "shadow.name.thing1shadow.desired.DefaultDesired", + "type": "String", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "thing_indexing_configuration.0.custom_field.*", map[string]string{ + "name": "deviceDefender.securityProfile1.NUMBER_VALUE_BEHAVIOR.lastViolationValue.number", + "type": "Number", + }), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.device_defender_indexing_mode", "VIOLATIONS"), + acctest.CheckResourceAttrGreaterThanValue(resourceName, "thing_group_indexing_configuration.0.managed_field.#", "0"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.named_shadow_indexing_mode", "ON"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.thing_connectivity_indexing_mode", "STATUS"), + resource.TestCheckResourceAttr(resourceName, "thing_indexing_configuration.0.thing_indexing_mode", "REGISTRY_AND_SHADOW"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +const testAccIndexingConfigurationConfig = ` +resource "aws_iot_indexing_configuration" "test" { + thing_group_indexing_configuration { + thing_group_indexing_mode = "OFF" + } + + thing_indexing_configuration { + thing_indexing_mode = "OFF" + } +} +` + +const testAccIndexingConfigurationAllAttributesConfig = ` +resource "aws_iot_indexing_configuration" "test" { + thing_group_indexing_configuration { + thing_group_indexing_mode = "ON" + } + + thing_indexing_configuration { + thing_indexing_mode = "REGISTRY_AND_SHADOW" + thing_connectivity_indexing_mode = "STATUS" + device_defender_indexing_mode = "VIOLATIONS" + named_shadow_indexing_mode = "ON" + + custom_field { + name = "attributes.version" + type = "Number" + } + custom_field { + name = 
"shadow.name.thing1shadow.desired.DefaultDesired" + type = "String" + } + custom_field { + name = "deviceDefender.securityProfile1.NUMBER_VALUE_BEHAVIOR.lastViolationValue.number" + type = "Number" + } + } +} +` diff --git a/internal/service/iot/logging_options.go b/internal/service/iot/logging_options.go new file mode 100644 index 000000000000..b66558809b67 --- /dev/null +++ b/internal/service/iot/logging_options.go @@ -0,0 +1,90 @@ +package iot + +import ( + "context" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iot" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfiam "github.com/hashicorp/terraform-provider-aws/internal/service/iam" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func ResourceLoggingOptions() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceLoggingOptionsPut, + ReadWithoutTimeout: resourceLoggingOptionsRead, + UpdateWithoutTimeout: resourceLoggingOptionsPut, + DeleteWithoutTimeout: schema.NoopContext, + + Schema: map[string]*schema.Schema{ + "default_log_level": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(iot.LogLevel_Values(), false), + }, + "disable_all_logs": { + Type: schema.TypeBool, + Optional: true, + }, + "role_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + }, + } +} + +func resourceLoggingOptionsPut(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).IoTConn + + input := &iot.SetV2LoggingOptionsInput{} + + if v, ok := d.GetOk("default_log_level"); ok { + input.DefaultLogLevel = aws.String(v.(string)) + } + + if v, ok := d.GetOk("disable_all_logs"); ok { + input.DisableAllLogs = aws.Bool(v.(bool)) + } + + if v, ok := d.GetOk("role_arn"); ok { + input.RoleArn = aws.String(v.(string)) + } + + _, err := tfresource.RetryWhenAWSErrMessageContainsContext(ctx, tfiam.PropagationTimeout, + func() (interface{}, error) { + return conn.SetV2LoggingOptionsWithContext(ctx, input) + }, + iot.ErrCodeInvalidRequestException, "If the role was just created or updated, please try again in a few seconds.", + ) + + if err != nil { + return diag.Errorf("setting IoT logging options: %s", err) + } + + d.SetId(meta.(*conns.AWSClient).Region) + + return resourceLoggingOptionsRead(ctx, d, meta) +} + +func resourceLoggingOptionsRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).IoTConn + + output, err := conn.GetV2LoggingOptionsWithContext(ctx, &iot.GetV2LoggingOptionsInput{}) + + if err != nil { + return diag.Errorf("reading IoT logging options: %s", err) + } + + d.Set("default_log_level", output.DefaultLogLevel) + d.Set("disable_all_logs", output.DisableAllLogs) + d.Set("role_arn", output.RoleArn) + + return nil +} diff --git a/internal/service/iot/logging_options_test.go b/internal/service/iot/logging_options_test.go new file mode 100644 index 000000000000..08a579ee4a0d --- /dev/null +++ b/internal/service/iot/logging_options_test.go @@ -0,0 +1,141 @@ +package iot_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/iot" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccIoTLoggingOptions_serial(t *testing.T) { + testCases := map[string]func(t *testing.T){ + "basic": testAccLoggingOptions_basic, + "update": testAccLoggingOptions_update, + } + + for name, tc := range testCases { + tc := tc + t.Run(name, func(t *testing.T) { + tc(t) + }) + } +} + +func testAccLoggingOptions_basic(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_iot_logging_options.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, iot.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: acctest.CheckDestroyNoop, + Steps: []resource.TestStep{ + { + Config: testAccLoggingOptionsConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "default_log_level", "WARN"), + resource.TestCheckResourceAttr(resourceName, "disable_all_logs", "false"), + resource.TestCheckResourceAttrSet(resourceName, "role_arn"), + ), + }, + }, + }) +} + +func testAccLoggingOptions_update(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_iot_logging_options.test" + + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, iot.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: acctest.CheckDestroyNoop, + Steps: []resource.TestStep{ + { + Config: testAccLoggingOptionsConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "default_log_level", "WARN"), + resource.TestCheckResourceAttr(resourceName, "disable_all_logs", "false"), + resource.TestCheckResourceAttrSet(resourceName, "role_arn"), + ), + }, + { + Config: testAccLoggingOptionsUpdatedConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "default_log_level", "DISABLED"), + resource.TestCheckResourceAttr(resourceName, "disable_all_logs", "true"), + resource.TestCheckResourceAttrSet(resourceName, "role_arn"), + ), + }, + }, + }) +} + +func testAccLoggingOptionsBaseConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = < 0 && v.([]interface{})[0] != nil { + input.PreProvisioningHook = expandProvisioningHook(v.([]interface{})[0].(map[string]interface{})) + } + + if v, ok := d.GetOk("provisioning_role_arn"); ok { + input.ProvisioningRoleArn = aws.String(v.(string)) + } + + if v, ok := d.GetOk("template_body"); ok { + input.TemplateBody = aws.String(v.(string)) + } + + if len(tags) > 0 { + input.Tags = Tags(tags.IgnoreAWS()) + } + + log.Printf("[DEBUG] Creating IoT Provisioning Template: %s", input) + outputRaw, err := tfresource.RetryWhenAWSErrMessageContainsContext(ctx, tfiam.PropagationTimeout, + func() (interface{}, error) { + return conn.CreateProvisioningTemplateWithContext(ctx, input) + }, + iot.ErrCodeInvalidRequestException, "The provisioning role cannot be assumed by AWS IoT") + + if err != nil { + return diag.Errorf("error creating IoT Provisioning Template (%s): %s", name, err) + } + + d.SetId(aws.StringValue(outputRaw.(*iot.CreateProvisioningTemplateOutput).TemplateName)) + + return resourceProvisioningTemplateRead(ctx, d, meta) +} + +func resourceProvisioningTemplateRead(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).IoTConn + defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig + + output, err := FindProvisioningTemplateByName(ctx, conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] IoT Provisioning Template %s not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return diag.Errorf("error reading IoT Provisioning Template (%s): %s", d.Id(), err) + } + + d.Set("arn", output.TemplateArn) + d.Set("default_version_id", output.DefaultVersionId) + d.Set("description", output.Description) + d.Set("enabled", output.Enabled) + d.Set("name", output.TemplateName) + if output.PreProvisioningHook != nil { + if err := d.Set("pre_provisioning_hook", []interface{}{flattenProvisioningHook(output.PreProvisioningHook)}); err != nil { + return diag.Errorf("error setting pre_provisioning_hook: %s", err) + } + } else { + d.Set("pre_provisioning_hook", nil) + } + d.Set("provisioning_role_arn", output.ProvisioningRoleArn) + d.Set("template_body", output.TemplateBody) + + tags, err := ListTags(conn, d.Get("arn").(string)) + + if err != nil { + return diag.Errorf("error listing tags for IoT Provisioning Template (%s): %s", d.Id(), err) + } + + tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) + + //lintignore:AWSR002 + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return diag.Errorf("error setting tags: %s", err) + } + + if err := d.Set("tags_all", tags.Map()); err != nil { + return diag.Errorf("error setting tags_all: %s", err) + } + + return nil +} + +func resourceProvisioningTemplateUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).IoTConn + + if d.HasChange("template_body") { + input := &iot.CreateProvisioningTemplateVersionInput{ + SetAsDefault: aws.Bool(true), + TemplateBody: aws.String(d.Get("template_body").(string)), + TemplateName: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Creating IoT Provisioning Template version: %s", input) + _, err := conn.CreateProvisioningTemplateVersionWithContext(ctx, input) + + if err != nil { + return diag.Errorf("error creating IoT Provisioning Template (%s) version: %s", d.Id(), err) + } + } + + if d.HasChanges("description", "enabled", "provisioning_role_arn") { + input := &iot.UpdateProvisioningTemplateInput{ + Description: aws.String(d.Get("description").(string)), + Enabled: aws.Bool(d.Get("enabled").(bool)), + ProvisioningRoleArn: aws.String(d.Get("provisioning_role_arn").(string)), + TemplateName: aws.String(d.Id()), + } + + log.Printf("[DEBUG] Updating IoT Provisioning Template: %s", input) + _, err := tfresource.RetryWhenAWSErrMessageContainsContext(ctx, tfiam.PropagationTimeout, + func() (interface{}, error) { + return conn.UpdateProvisioningTemplateWithContext(ctx, input) + }, + iot.ErrCodeInvalidRequestException, "The provisioning role cannot be assumed by AWS IoT") + + if err != nil { + return diag.Errorf("error updating IoT Provisioning Template (%s): %s", d.Id(), err) + } + } + + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + + if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return diag.Errorf("error updating tags: %s", err) + } + } + + return resourceProvisioningTemplateRead(ctx, d, meta) +} + +func resourceProvisioningTemplateDelete(ctx context.Context, d *schema.ResourceData, meta 
interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).IoTConn + + log.Printf("[INFO] Deleting IoT Provisioning Template: %s", d.Id()) + _, err := conn.DeleteProvisioningTemplateWithContext(ctx, &iot.DeleteProvisioningTemplateInput{ + TemplateName: aws.String(d.Id()), + }) + + if tfawserr.ErrCodeEquals(err, iot.ErrCodeResourceNotFoundException) { + return nil + } + + if err != nil { + return diag.Errorf("error deleting IoT Provisioning Template (%s): %s", d.Id(), err) + } + + return nil +} + +func flattenProvisioningHook(apiObject *iot.ProvisioningHook) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.PayloadVersion; v != nil { + tfMap["payload_version"] = aws.StringValue(v) + } + + if v := apiObject.TargetArn; v != nil { + tfMap["target_arn"] = aws.StringValue(v) + } + + return tfMap +} + +func expandProvisioningHook(tfMap map[string]interface{}) *iot.ProvisioningHook { + if tfMap == nil { + return nil + } + + apiObject := &iot.ProvisioningHook{} + + if v, ok := tfMap["payload_version"].(string); ok && v != "" { + apiObject.PayloadVersion = aws.String(v) + } + + if v, ok := tfMap["target_arn"].(string); ok && v != "" { + apiObject.TargetArn = aws.String(v) + } + + return apiObject +} + +func FindProvisioningTemplateByName(ctx context.Context, conn *iot.IoT, name string) (*iot.DescribeProvisioningTemplateOutput, error) { + input := &iot.DescribeProvisioningTemplateInput{ + TemplateName: aws.String(name), + } + + output, err := conn.DescribeProvisioningTemplateWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, iot.ErrCodeResourceNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} diff --git a/internal/service/iot/provisioning_template_test.go b/internal/service/iot/provisioning_template_test.go new file mode 100644 index 000000000000..b441ac3e85b0 --- /dev/null +++ b/internal/service/iot/provisioning_template_test.go @@ -0,0 +1,424 @@ +package iot_test + +import ( + "context" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/iot" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfiot "github.com/hashicorp/terraform-provider-aws/internal/service/iot" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccIoTProvisioningTemplate_basic(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_iot_provisioning_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, iot.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckProvisioningTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccProvisioningTemplateConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProvisioningTemplateExists(resourceName), + testAccCheckProvisioningTemplateNumVersions(rName, 1), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, 
"description", ""), + resource.TestCheckResourceAttr(resourceName, "enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "pre_provisioning_hook.#", "0"), + resource.TestCheckResourceAttrSet(resourceName, "provisioning_role_arn"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrSet(resourceName, "template_body"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccIoTProvisioningTemplate_disappears(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_iot_provisioning_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, iot.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckProvisioningTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccProvisioningTemplateConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProvisioningTemplateExists(resourceName), + acctest.CheckResourceDisappears(acctest.Provider, tfiot.ResourceProvisioningTemplate(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccIoTProvisioningTemplate_tags(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_iot_provisioning_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, iot.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckProvisioningTemplateDestroy, + Steps: []resource.TestStep{ + { + Config: testAccProvisioningTemplateConfigTags1(rName, "key1", "value1"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProvisioningTemplateExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + testAccCheckProvisioningTemplateNumVersions(rName, 1), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccProvisioningTemplateConfigTags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProvisioningTemplateExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + testAccCheckProvisioningTemplateNumVersions(rName, 1), + ), + }, + { + Config: testAccProvisioningTemplateConfigTags1(rName, "key2", "value2"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProvisioningTemplateExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + testAccCheckProvisioningTemplateNumVersions(rName, 1), + ), + }, + }, + }) +} + +func TestAccIoTProvisioningTemplate_update(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_iot_provisioning_template.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, iot.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckProvisioningTemplateDestroy, + Steps: 
[]resource.TestStep{ + { + Config: testAccProvisioningTemplateConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProvisioningTemplateExists(resourceName), + testAccCheckProvisioningTemplateNumVersions(rName, 1), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "pre_provisioning_hook.#", "0"), + resource.TestCheckResourceAttrSet(resourceName, "provisioning_role_arn"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrSet(resourceName, "template_body"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccProvisioningTemplateUpdatedConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckProvisioningTemplateExists(resourceName), + testAccCheckProvisioningTemplateNumVersions(rName, 2), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "description", "For testing"), + resource.TestCheckResourceAttr(resourceName, "enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "pre_provisioning_hook.#", "0"), + resource.TestCheckResourceAttrSet(resourceName, "provisioning_role_arn"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrSet(resourceName, "template_body"), + ), + }, + }, + }) +} + +func testAccCheckProvisioningTemplateExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No IoT Provisioning Template ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn + + _, err := tfiot.FindProvisioningTemplateByName(context.TODO(), conn, rs.Primary.ID) + + if err != nil { + return err + } + + return nil + } +} + +func testAccCheckProvisioningTemplateDestroy(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_iot_provisioning_template" { + continue + } + + _, err := tfiot.FindProvisioningTemplateByName(context.TODO(), conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("IoT Provisioning Template %s still exists", rs.Primary.ID) + } + + return nil +} + +func testAccCheckProvisioningTemplateNumVersions(name string, want int) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).IoTConn + + var got int + err := conn.ListProvisioningTemplateVersionsPages( + &iot.ListProvisioningTemplateVersionsInput{TemplateName: aws.String(name)}, + func(page *iot.ListProvisioningTemplateVersionsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + got += len(page.Versions) + + return !lastPage + }) + + if err != nil { + return err + } + + if got != want { + return fmt.Errorf("Incorrect version count for IoT Provisioning Template %s; got: %d, want: %d", name, got, want) + } + + return nil + } +} + +func testAccProvisioningTemplateBaseConfig(rName string) string { + return fmt.Sprintf(` 
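+# Shared base configuration for the provisioning template acceptance tests: an IAM role that AWS IoT can assume for fleet provisioning, the AWS-managed AWSIoTThingsRegistration policy attached to that role, and an IoT policy that the template bodies below reference.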
+data "aws_iam_policy_document" "assume_role" { + statement { + actions = ["sts:AssumeRole"] + + principals { + type = "Service" + identifiers = ["iot.amazonaws.com"] + } + } +} + +resource "aws_iam_role" "test" { + name = %[1]q + path = "/service-role/" + assume_role_policy = data.aws_iam_policy_document.assume_role.json +} + +data "aws_partition" "current" {} + +resource "aws_iam_role_policy_attachment" "test" { + role = aws_iam_role.test.name + policy_arn = "arn:${data.aws_partition.current.partition}:iam::aws:policy/service-role/AWSIoTThingsRegistration" +} + +data "aws_iam_policy_document" "device" { + statement { + actions = ["iot:Subscribe"] + resources = ["*"] + } +} + +resource "aws_iot_policy" "test" { + name = %[1]q + policy = data.aws_iam_policy_document.device.json +} +`, rName) +} + +func testAccProvisioningTemplateConfig(rName string) string { + return acctest.ConfigCompose(testAccProvisioningTemplateBaseConfig(rName), fmt.Sprintf(` +resource "aws_iot_provisioning_template" "test" { + name = %[1]q + provisioning_role_arn = aws_iam_role.test.arn + + template_body = jsonencode({ + Parameters = { + SerialNumber = { Type = "String" } + } + + Resources = { + certificate = { + Properties = { + CertificateId = { Ref = "AWS::IoT::Certificate::Id" } + Status = "Active" + } + Type = "AWS::IoT::Certificate" + } + + policy = { + Properties = { + PolicyName = aws_iot_policy.test.name + } + Type = "AWS::IoT::Policy" + } + } + }) +} +`, rName)) +} + +func testAccProvisioningTemplateConfigTags1(rName, tagKey1, tagValue1 string) string { + return acctest.ConfigCompose(testAccProvisioningTemplateBaseConfig(rName), fmt.Sprintf(` +resource "aws_iot_provisioning_template" "test" { + name = %[1]q + provisioning_role_arn = aws_iam_role.test.arn + + template_body = jsonencode({ + Parameters = { + SerialNumber = { Type = "String" } + } + + Resources = { + certificate = { + Properties = { + CertificateId = { Ref = "AWS::IoT::Certificate::Id" } + Status = "Active" + } + Type = "AWS::IoT::Certificate" + } + + policy = { + Properties = { + PolicyName = aws_iot_policy.test.name + } + Type = "AWS::IoT::Policy" + } + } + }) + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1)) +} + +func testAccProvisioningTemplateConfigTags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return acctest.ConfigCompose(testAccProvisioningTemplateBaseConfig(rName), fmt.Sprintf(` +resource "aws_iot_provisioning_template" "test" { + name = %[1]q + provisioning_role_arn = aws_iam_role.test.arn + + template_body = jsonencode({ + Parameters = { + SerialNumber = { Type = "String" } + } + + Resources = { + certificate = { + Properties = { + CertificateId = { Ref = "AWS::IoT::Certificate::Id" } + Status = "Active" + } + Type = "AWS::IoT::Certificate" + } + + policy = { + Properties = { + PolicyName = aws_iot_policy.test.name + } + Type = "AWS::IoT::Policy" + } + } + }) + + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, tagKey2, tagValue2)) +} + +func testAccProvisioningTemplateUpdatedConfig(rName string) string { + return acctest.ConfigCompose(testAccProvisioningTemplateBaseConfig(rName), fmt.Sprintf(` +resource "aws_iot_provisioning_template" "test" { + name = %[1]q + provisioning_role_arn = aws_iam_role.test.arn + description = "For testing" + enabled = true + + template_body = jsonencode({ + Parameters = { + SerialNumber = { Type = "String" } + } + + Resources = { + certificate = { + Properties = { + CertificateId = { Ref = "AWS::IoT::Certificate::Id" } + Status = 
"Inactive" + } + Type = "AWS::IoT::Certificate" + } + + policy = { + Properties = { + PolicyName = aws_iot_policy.test.name + } + Type = "AWS::IoT::Policy" + } + } + }) +} +`, rName)) +} diff --git a/internal/service/kafka/sweep.go b/internal/service/kafka/sweep.go index 427da7ddb9db..87f1cd224f4b 100644 --- a/internal/service/kafka/sweep.go +++ b/internal/service/kafka/sweep.go @@ -19,6 +19,9 @@ func init() { resource.AddTestSweepers("aws_msk_cluster", &resource.Sweeper{ Name: "aws_msk_cluster", F: sweepClusters, + Dependencies: []string{ + "aws_mskconnect_connector", + }, }) resource.AddTestSweepers("aws_msk_configuration", &resource.Sweeper{ diff --git a/internal/service/kafkaconnect/connector.go b/internal/service/kafkaconnect/connector.go new file mode 100644 index 000000000000..564cec911a74 --- /dev/null +++ b/internal/service/kafkaconnect/connector.go @@ -0,0 +1,1319 @@ +package kafkaconnect + +import ( + "context" + "log" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/kafkaconnect" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func ResourceConnector() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceConnectorCreate, + ReadWithoutTimeout: resourceConnectorRead, + UpdateWithoutTimeout: resourceConnectorUpdate, + DeleteWithoutTimeout: resourceConnectorDelete, + + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(20 * time.Minute), + Update: schema.DefaultTimeout(20 * time.Minute), + Delete: schema.DefaultTimeout(10 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "capacity": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "autoscaling": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "max_worker_count": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntBetween(1, 10), + }, + "mcu_count": { + Type: schema.TypeInt, + Optional: true, + Default: 1, + ValidateFunc: validation.IntInSlice([]int{1, 2, 4, 8}), + }, + "min_worker_count": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntBetween(1, 10), + }, + "scale_in_policy": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cpu_utilization_percentage": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ValidateFunc: validation.IntBetween(1, 100), + }, + }, + }, + }, + "scale_out_policy": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cpu_utilization_percentage": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + ValidateFunc: validation.IntBetween(1, 100), + }, + }, + }, + }, + }, + }, + ExactlyOneOf: 
[]string{"capacity.0.autoscaling", "capacity.0.provisioned_capacity"}, + }, + "provisioned_capacity": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "mcu_count": { + Type: schema.TypeInt, + Optional: true, + Default: 1, + ValidateFunc: validation.IntInSlice([]int{1, 2, 4, 8}), + }, + "worker_count": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntBetween(1, 10), + }, + }, + }, + ExactlyOneOf: []string{"capacity.0.autoscaling", "capacity.0.provisioned_capacity"}, + }, + }, + }, + }, + "connector_configuration": { + Type: schema.TypeMap, + Elem: &schema.Schema{Type: schema.TypeString}, + Required: true, + ForceNew: true, + }, + "description": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(0, 1024), + }, + "kafka_cluster": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "apache_kafka_cluster": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "bootstrap_servers": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "vpc": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "security_groups": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "subnets": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + "kafka_cluster_client_authentication": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "authentication_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: kafkaconnect.KafkaClusterClientAuthenticationTypeNone, + ValidateFunc: validation.StringInSlice(kafkaconnect.KafkaClusterClientAuthenticationType_Values(), false), + }, + }, + }, + }, + "kafka_cluster_encryption_in_transit": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "encryption_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: kafkaconnect.KafkaClusterEncryptionInTransitTypePlaintext, + ValidateFunc: validation.StringInSlice(kafkaconnect.KafkaClusterEncryptionInTransitType_Values(), false), + }, + }, + }, + }, + "kafkaconnect_version": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "log_delivery": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "worker_log_delivery": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cloudwatch_logs": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + "log_group": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + "firehose": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Elem: 
&schema.Resource{ + Schema: map[string]*schema.Schema{ + "delivery_stream": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "enabled": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + }, + }, + }, + "s3": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "bucket": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "enabled": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + "prefix": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + }, + }, + }, + }, + }, + }, + }, + }, + }, + "name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 128), + }, + "plugin": { + Type: schema.TypeSet, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "custom_plugin": { + Type: schema.TypeList, + MaxItems: 1, + Required: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + }, + "revision": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + }, + }, + }, + }, + }, + }, + "service_execution_role_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + }, + "version": { + Type: schema.TypeString, + Computed: true, + }, + "worker_configuration": { + Type: schema.TypeList, + MaxItems: 1, + Optional: true, + ForceNew: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + }, + "revision": { + Type: schema.TypeInt, + Required: true, + ForceNew: true, + }, + }, + }, + }, + }, + } +} + +func resourceConnectorCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).KafkaConnectConn + + name := d.Get("name").(string) + input := &kafkaconnect.CreateConnectorInput{ + Capacity: expandCapacity(d.Get("capacity").([]interface{})[0].(map[string]interface{})), + ConnectorConfiguration: flex.ExpandStringMap(d.Get("connector_configuration").(map[string]interface{})), + ConnectorName: aws.String(name), + KafkaCluster: expandKafkaCluster(d.Get("kafka_cluster").([]interface{})[0].(map[string]interface{})), + KafkaClusterClientAuthentication: expandKafkaClusterClientAuthentication(d.Get("kafka_cluster_client_authentication").([]interface{})[0].(map[string]interface{})), + KafkaClusterEncryptionInTransit: expandKafkaClusterEncryptionInTransit(d.Get("kafka_cluster_encryption_in_transit").([]interface{})[0].(map[string]interface{})), + KafkaConnectVersion: aws.String(d.Get("kafkaconnect_version").(string)), + Plugins: expandPlugins(d.Get("plugin").(*schema.Set).List()), + ServiceExecutionRoleArn: aws.String(d.Get("service_execution_role_arn").(string)), + } + + if v, ok := d.GetOk("description"); ok { + input.ConnectorDescription = aws.String(v.(string)) + } + + if v, ok := d.GetOk("log_delivery"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.LogDelivery = expandLogDelivery(v.([]interface{})[0].(map[string]interface{})) + } + + if v, ok := d.GetOk("worker_configuration"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.WorkerConfiguration = 
expandWorkerConfiguration(v.([]interface{})[0].(map[string]interface{})) + } + + log.Printf("[DEBUG] Creating MSK Connect Connector: %s", input) + output, err := conn.CreateConnectorWithContext(ctx, input) + + if err != nil { + return diag.Errorf("error creating MSK Connect Connector (%s): %s", name, err) + } + + d.SetId(aws.StringValue(output.ConnectorArn)) + + _, err = waitConnectorCreated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutCreate)) + + if err != nil { + return diag.Errorf("error waiting for MSK Connect Connector (%s) create: %s", d.Id(), err) + } + + return resourceConnectorRead(ctx, d, meta) +} + +func resourceConnectorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).KafkaConnectConn + + connector, err := FindConnectorByARN(ctx, conn, d.Id()) + + if tfresource.NotFound(err) && !d.IsNewResource() { + log.Printf("[WARN] MSK Connect Connector (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return diag.Errorf("error reading MSK Connect Connector (%s): %s", d.Id(), err) + } + + d.Set("arn", connector.ConnectorArn) + if connector.Capacity != nil { + if err := d.Set("capacity", []interface{}{flattenCapacityDescription(connector.Capacity)}); err != nil { + return diag.Errorf("error setting capacity: %s", err) + } + } else { + d.Set("capacity", nil) + } + d.Set("connector_configuration", aws.StringValueMap(connector.ConnectorConfiguration)) + d.Set("description", connector.ConnectorDescription) + if connector.KafkaCluster != nil { + if err := d.Set("kafka_cluster", []interface{}{flattenKafkaClusterDescription(connector.KafkaCluster)}); err != nil { + return diag.Errorf("error setting kafka_cluster: %s", err) + } + } else { + d.Set("kafka_cluster", nil) + } + if connector.KafkaClusterClientAuthentication != nil { + if err := d.Set("kafka_cluster_client_authentication", []interface{}{flattenKafkaClusterClientAuthenticationDescription(connector.KafkaClusterClientAuthentication)}); err != nil { + return diag.Errorf("error setting kafka_cluster_client_authentication: %s", err) + } + } else { + d.Set("kafka_cluster_client_authentication", nil) + } + if connector.KafkaClusterEncryptionInTransit != nil { + if err := d.Set("kafka_cluster_encryption_in_transit", []interface{}{flattenKafkaClusterEncryptionInTransitDescription(connector.KafkaClusterEncryptionInTransit)}); err != nil { + return diag.Errorf("error setting kafka_cluster_encryption_in_transit: %s", err) + } + } else { + d.Set("kafka_cluster_encryption_in_transit", nil) + } + d.Set("kafkaconnect_version", connector.KafkaConnectVersion) + if connector.LogDelivery != nil { + if err := d.Set("log_delivery", []interface{}{flattenLogDeliveryDescription(connector.LogDelivery)}); err != nil { + return diag.Errorf("error setting log_delivery: %s", err) + } + } else { + d.Set("log_delivery", nil) + } + d.Set("name", connector.ConnectorName) + if err := d.Set("plugin", flattenPluginDescriptions(connector.Plugins)); err != nil { + return diag.Errorf("error setting plugin: %s", err) + } + d.Set("service_execution_role_arn", connector.ServiceExecutionRoleArn) + d.Set("version", connector.CurrentVersion) + if connector.WorkerConfiguration != nil { + if err := d.Set("worker_configuration", []interface{}{flattenWorkerConfigurationDescription(connector.WorkerConfiguration)}); err != nil { + return diag.Errorf("error setting worker_configuration: %s", err) + } + } else { + d.Set("worker_configuration", nil) + } + + return nil +} + 
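+// Only the connector's capacity can be changed in place; every other argument is marked ForceNew in the schema above, so UpdateConnector is called with just a CapacityUpdate plus the connector's current version for optimistic locking.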
+func resourceConnectorUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).KafkaConnectConn + + input := &kafkaconnect.UpdateConnectorInput{ + Capacity: expandCapacityUpdate(d.Get("capacity").([]interface{})[0].(map[string]interface{})), + ConnectorArn: aws.String(d.Id()), + CurrentVersion: aws.String(d.Get("version").(string)), + } + + log.Printf("[DEBUG] Updating MSK Connect Connector: %s", input) + _, err := conn.UpdateConnectorWithContext(ctx, input) + + if err != nil { + return diag.Errorf("error updating MSK Connect Connector (%s): %s", d.Id(), err) + } + + _, err = waitConnectorUpdated(ctx, conn, d.Id(), d.Timeout(schema.TimeoutUpdate)) + + if err != nil { + return diag.Errorf("error waiting for MSK Connect Connector (%s) update: %s", d.Id(), err) + } + + return resourceConnectorRead(ctx, d, meta) +} + +func resourceConnectorDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).KafkaConnectConn + + log.Printf("[DEBUG] Deleting MSK Connect Connector: %s", d.Id()) + _, err := conn.DeleteConnectorWithContext(ctx, &kafkaconnect.DeleteConnectorInput{ + ConnectorArn: aws.String(d.Id()), + }) + + if tfawserr.ErrCodeEquals(err, kafkaconnect.ErrCodeNotFoundException) { + return nil + } + + if err != nil { + return diag.Errorf("error deleting MSK Connect Connector (%s): %s", d.Id(), err) + } + + _, err = waitConnectorDeleted(ctx, conn, d.Id(), d.Timeout(schema.TimeoutDelete)) + + if err != nil { + return diag.Errorf("error waiting for MSK Connect Connector (%s) delete: %s", d.Id(), err) + } + + return nil +} + +func expandCapacity(tfMap map[string]interface{}) *kafkaconnect.Capacity { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.Capacity{} + + if v, ok := tfMap["autoscaling"].([]interface{}); ok && len(v) > 0 { + apiObject.AutoScaling = expandAutoScaling(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["provisioned_capacity"].([]interface{}); ok && len(v) > 0 { + apiObject.ProvisionedCapacity = expandProvisionedCapacity(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandAutoScaling(tfMap map[string]interface{}) *kafkaconnect.AutoScaling { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.AutoScaling{} + + if v, ok := tfMap["max_worker_count"].(int); ok && v != 0 { + apiObject.MaxWorkerCount = aws.Int64(int64(v)) + } + + if v, ok := tfMap["mcu_count"].(int); ok && v != 0 { + apiObject.McuCount = aws.Int64(int64(v)) + } + + if v, ok := tfMap["min_worker_count"].(int); ok && v != 0 { + apiObject.MinWorkerCount = aws.Int64(int64(v)) + } + + if v, ok := tfMap["scale_in_policy"].([]interface{}); ok && len(v) > 0 { + apiObject.ScaleInPolicy = expandScaleInPolicy(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["scale_out_policy"].([]interface{}); ok && len(v) > 0 { + apiObject.ScaleOutPolicy = expandScaleOutPolicy(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandScaleInPolicy(tfMap map[string]interface{}) *kafkaconnect.ScaleInPolicy { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.ScaleInPolicy{} + + if v, ok := tfMap["cpu_utilization_percentage"].(int); ok && v != 0 { + apiObject.CpuUtilizationPercentage = aws.Int64(int64(v)) + } + + return apiObject +} + +func expandScaleOutPolicy(tfMap map[string]interface{}) *kafkaconnect.ScaleOutPolicy { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.ScaleOutPolicy{} + + if v, ok 
:= tfMap["cpu_utilization_percentage"].(int); ok && v != 0 { + apiObject.CpuUtilizationPercentage = aws.Int64(int64(v)) + } + + return apiObject +} + +func expandProvisionedCapacity(tfMap map[string]interface{}) *kafkaconnect.ProvisionedCapacity { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.ProvisionedCapacity{} + + if v, ok := tfMap["mcu_count"].(int); ok && v != 0 { + apiObject.McuCount = aws.Int64(int64(v)) + } + + if v, ok := tfMap["worker_count"].(int); ok && v != 0 { + apiObject.WorkerCount = aws.Int64(int64(v)) + } + + return apiObject +} + +func expandCapacityUpdate(tfMap map[string]interface{}) *kafkaconnect.CapacityUpdate { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.CapacityUpdate{} + + if v, ok := tfMap["autoscaling"].([]interface{}); ok && len(v) > 0 { + apiObject.AutoScaling = expandAutoScalingUpdate(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["provisioned_capacity"].([]interface{}); ok && len(v) > 0 { + apiObject.ProvisionedCapacity = expandProvisionedCapacityUpdate(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandAutoScalingUpdate(tfMap map[string]interface{}) *kafkaconnect.AutoScalingUpdate { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.AutoScalingUpdate{} + + if v, ok := tfMap["max_worker_count"].(int); ok { + apiObject.MaxWorkerCount = aws.Int64(int64(v)) + } + + if v, ok := tfMap["mcu_count"].(int); ok { + apiObject.McuCount = aws.Int64(int64(v)) + } + + if v, ok := tfMap["min_worker_count"].(int); ok { + apiObject.MinWorkerCount = aws.Int64(int64(v)) + } + + if v, ok := tfMap["scale_in_policy"].([]interface{}); ok && len(v) > 0 { + apiObject.ScaleInPolicy = expandScaleInPolicyUpdate(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["scale_out_policy"].([]interface{}); ok && len(v) > 0 { + apiObject.ScaleOutPolicy = expandScaleOutPolicyUpdate(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandScaleInPolicyUpdate(tfMap map[string]interface{}) *kafkaconnect.ScaleInPolicyUpdate { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.ScaleInPolicyUpdate{} + + if v, ok := tfMap["cpu_utilization_percentage"].(int); ok { + apiObject.CpuUtilizationPercentage = aws.Int64(int64(v)) + } + + return apiObject +} + +func expandScaleOutPolicyUpdate(tfMap map[string]interface{}) *kafkaconnect.ScaleOutPolicyUpdate { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.ScaleOutPolicyUpdate{} + + if v, ok := tfMap["cpu_utilization_percentage"].(int); ok { + apiObject.CpuUtilizationPercentage = aws.Int64(int64(v)) + } + + return apiObject +} + +func expandProvisionedCapacityUpdate(tfMap map[string]interface{}) *kafkaconnect.ProvisionedCapacityUpdate { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.ProvisionedCapacityUpdate{} + + if v, ok := tfMap["mcu_count"].(int); ok { + apiObject.McuCount = aws.Int64(int64(v)) + } + + if v, ok := tfMap["worker_count"].(int); ok { + apiObject.WorkerCount = aws.Int64(int64(v)) + } + + return apiObject +} + +func expandKafkaCluster(tfMap map[string]interface{}) *kafkaconnect.KafkaCluster { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.KafkaCluster{} + + if v, ok := tfMap["apache_kafka_cluster"].([]interface{}); ok && len(v) > 0 { + apiObject.ApacheKafkaCluster = expandApacheKafkaCluster(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandApacheKafkaCluster(tfMap map[string]interface{}) *kafkaconnect.ApacheKafkaCluster { + if tfMap 
== nil { + return nil + } + + apiObject := &kafkaconnect.ApacheKafkaCluster{} + + if v, ok := tfMap["bootstrap_servers"].(string); ok && v != "" { + apiObject.BootstrapServers = aws.String(v) + } + + if v, ok := tfMap["vpc"].([]interface{}); ok && len(v) > 0 { + apiObject.Vpc = expandVpc(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandVpc(tfMap map[string]interface{}) *kafkaconnect.Vpc { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.Vpc{} + + if v, ok := tfMap["security_groups"].(*schema.Set); ok && v.Len() > 0 { + apiObject.SecurityGroups = flex.ExpandStringSet(v) + } + + if v, ok := tfMap["subnets"].(*schema.Set); ok && v.Len() > 0 { + apiObject.Subnets = flex.ExpandStringSet(v) + } + + return apiObject +} + +func expandKafkaClusterClientAuthentication(tfMap map[string]interface{}) *kafkaconnect.KafkaClusterClientAuthentication { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.KafkaClusterClientAuthentication{} + + if v, ok := tfMap["authentication_type"].(string); ok && v != "" { + apiObject.AuthenticationType = aws.String(v) + } + + return apiObject +} + +func expandKafkaClusterEncryptionInTransit(tfMap map[string]interface{}) *kafkaconnect.KafkaClusterEncryptionInTransit { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.KafkaClusterEncryptionInTransit{} + + if v, ok := tfMap["encryption_type"].(string); ok && v != "" { + apiObject.EncryptionType = aws.String(v) + } + + return apiObject +} + +func expandPlugin(tfMap map[string]interface{}) *kafkaconnect.Plugin { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.Plugin{} + + if v, ok := tfMap["custom_plugin"].([]interface{}); ok && len(v) > 0 { + apiObject.CustomPlugin = expandCustomPlugin(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandPlugins(tfList []interface{}) []*kafkaconnect.Plugin { + if len(tfList) == 0 { + return nil + } + + var apiObjects []*kafkaconnect.Plugin + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandPlugin(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, apiObject) + } + + return apiObjects +} + +func expandCustomPlugin(tfMap map[string]interface{}) *kafkaconnect.CustomPlugin { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.CustomPlugin{} + + if v, ok := tfMap["arn"].(string); ok && v != "" { + apiObject.CustomPluginArn = aws.String(v) + } + + if v, ok := tfMap["revision"].(int); ok && v != 0 { + apiObject.Revision = aws.Int64(int64(v)) + } + + return apiObject +} + +func expandLogDelivery(tfMap map[string]interface{}) *kafkaconnect.LogDelivery { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.LogDelivery{} + + if v, ok := tfMap["worker_log_delivery"].([]interface{}); ok && len(v) > 0 { + apiObject.WorkerLogDelivery = expandWorkerLogDelivery(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandWorkerLogDelivery(tfMap map[string]interface{}) *kafkaconnect.WorkerLogDelivery { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.WorkerLogDelivery{} + + if v, ok := tfMap["cloudwatch_logs"].([]interface{}); ok && len(v) > 0 { + apiObject.CloudWatchLogs = expandCloudWatchLogsLogDelivery(v[0].(map[string]interface{})) + } + + if v, ok := tfMap["firehose"].([]interface{}); ok && len(v) > 0 { + apiObject.Firehose = expandFirehoseLogDelivery(v[0].(map[string]interface{})) + } + + if v, ok := 
tfMap["s3"].([]interface{}); ok && len(v) > 0 { + apiObject.S3 = expandS3LogDelivery(v[0].(map[string]interface{})) + } + + return apiObject +} + +func expandCloudWatchLogsLogDelivery(tfMap map[string]interface{}) *kafkaconnect.CloudWatchLogsLogDelivery { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.CloudWatchLogsLogDelivery{} + + if v, ok := tfMap["enabled"].(bool); ok { + apiObject.Enabled = aws.Bool(v) + } + + if v, ok := tfMap["log_group"].(string); ok && v != "" { + apiObject.LogGroup = aws.String(v) + } + + return apiObject +} + +func expandFirehoseLogDelivery(tfMap map[string]interface{}) *kafkaconnect.FirehoseLogDelivery { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.FirehoseLogDelivery{} + + if v, ok := tfMap["delivery_stream"].(string); ok && v != "" { + apiObject.DeliveryStream = aws.String(v) + } + + if v, ok := tfMap["enabled"].(bool); ok { + apiObject.Enabled = aws.Bool(v) + } + + return apiObject +} + +func expandS3LogDelivery(tfMap map[string]interface{}) *kafkaconnect.S3LogDelivery { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.S3LogDelivery{} + + if v, ok := tfMap["bucket"].(string); ok && v != "" { + apiObject.Bucket = aws.String(v) + } + + if v, ok := tfMap["enabled"].(bool); ok { + apiObject.Enabled = aws.Bool(v) + } + + if v, ok := tfMap["prefix"].(string); ok && v != "" { + apiObject.Prefix = aws.String(v) + } + + return apiObject +} + +func expandWorkerConfiguration(tfMap map[string]interface{}) *kafkaconnect.WorkerConfiguration { + if tfMap == nil { + return nil + } + + apiObject := &kafkaconnect.WorkerConfiguration{} + + if v, ok := tfMap["revision"].(int); ok && v != 0 { + apiObject.Revision = aws.Int64(int64(v)) + } + + if v, ok := tfMap["arn"].(string); ok && v != "" { + apiObject.WorkerConfigurationArn = aws.String(v) + } + + return apiObject +} + +func flattenCapacityDescription(apiObject *kafkaconnect.CapacityDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.AutoScaling; v != nil { + tfMap["autoscaling"] = []interface{}{flattenAutoScalingDescription(v)} + } + + if v := apiObject.ProvisionedCapacity; v != nil { + tfMap["provisioned_capacity"] = []interface{}{flattenProvisionedCapacityDescription(v)} + } + + return tfMap +} + +func flattenAutoScalingDescription(apiObject *kafkaconnect.AutoScalingDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.MaxWorkerCount; v != nil { + tfMap["max_worker_count"] = aws.Int64Value(v) + } + + if v := apiObject.McuCount; v != nil { + tfMap["mcu_count"] = aws.Int64Value(v) + } + + if v := apiObject.MinWorkerCount; v != nil { + tfMap["min_worker_count"] = aws.Int64Value(v) + } + + if v := apiObject.ScaleInPolicy; v != nil { + tfMap["scale_in_policy"] = []interface{}{flattenScaleInPolicyDescription(v)} + } + + if v := apiObject.ScaleOutPolicy; v != nil { + tfMap["scale_out_policy"] = []interface{}{flattenScaleOutPolicyDescription(v)} + } + + return tfMap +} + +func flattenScaleInPolicyDescription(apiObject *kafkaconnect.ScaleInPolicyDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.CpuUtilizationPercentage; v != nil { + tfMap["cpu_utilization_percentage"] = aws.Int64Value(v) + } + + return tfMap +} + +func flattenScaleOutPolicyDescription(apiObject *kafkaconnect.ScaleOutPolicyDescription) 
map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.CpuUtilizationPercentage; v != nil { + tfMap["cpu_utilization_percentage"] = aws.Int64Value(v) + } + + return tfMap +} + +func flattenProvisionedCapacityDescription(apiObject *kafkaconnect.ProvisionedCapacityDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.McuCount; v != nil { + tfMap["mcu_count"] = aws.Int64Value(v) + } + + if v := apiObject.WorkerCount; v != nil { + tfMap["worker_count"] = aws.Int64Value(v) + } + + return tfMap +} + +func flattenKafkaClusterDescription(apiObject *kafkaconnect.KafkaClusterDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.ApacheKafkaCluster; v != nil { + tfMap["apache_kafka_cluster"] = []interface{}{flattenApacheKafkaClusterDescription(v)} + } + + return tfMap +} + +func flattenApacheKafkaClusterDescription(apiObject *kafkaconnect.ApacheKafkaClusterDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.BootstrapServers; v != nil { + tfMap["bootstrap_servers"] = aws.StringValue(v) + } + + if v := apiObject.Vpc; v != nil { + tfMap["vpc"] = []interface{}{flattenVpcDescription(v)} + } + + return tfMap +} + +func flattenVpcDescription(apiObject *kafkaconnect.VpcDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.SecurityGroups; v != nil { + tfMap["security_groups"] = aws.StringValueSlice(v) + } + + if v := apiObject.Subnets; v != nil { + tfMap["subnets"] = aws.StringValueSlice(v) + } + + return tfMap +} + +func flattenKafkaClusterClientAuthenticationDescription(apiObject *kafkaconnect.KafkaClusterClientAuthenticationDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.AuthenticationType; v != nil { + tfMap["authentication_type"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenKafkaClusterEncryptionInTransitDescription(apiObject *kafkaconnect.KafkaClusterEncryptionInTransitDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.EncryptionType; v != nil { + tfMap["encryption_type"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenPluginDescription(apiObject *kafkaconnect.PluginDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.CustomPlugin; v != nil { + tfMap["custom_plugin"] = []interface{}{flattenCustomPluginDescription(v)} + } + + return tfMap +} + +func flattenPluginDescriptions(apiObjects []*kafkaconnect.PluginDescription) []interface{} { + if len(apiObjects) == 0 { + return nil + } + + var tfList []interface{} + + for _, apiObject := range apiObjects { + if apiObject == nil { + continue + } + + tfList = append(tfList, flattenPluginDescription(apiObject)) + } + + return tfList +} + +func flattenCustomPluginDescription(apiObject *kafkaconnect.CustomPluginDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.CustomPluginArn; v != nil { + tfMap["arn"] = aws.StringValue(v) + } + + if v := apiObject.Revision; v != nil { + 
tfMap["revision"] = aws.Int64Value(v) + } + + return tfMap +} + +func flattenLogDeliveryDescription(apiObject *kafkaconnect.LogDeliveryDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.WorkerLogDelivery; v != nil { + tfMap["worker_log_delivery"] = []interface{}{flattenWorkerLogDeliveryDescription(v)} + } + + return tfMap +} + +func flattenWorkerLogDeliveryDescription(apiObject *kafkaconnect.WorkerLogDeliveryDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.CloudWatchLogs; v != nil { + tfMap["cloudwatch_logs"] = []interface{}{flattenCloudWatchLogsLogDeliveryDescription(v)} + } + + if v := apiObject.Firehose; v != nil { + tfMap["firehose"] = []interface{}{flattenFirehoseLogDeliveryDescription(v)} + } + + if v := apiObject.S3; v != nil { + tfMap["s3"] = []interface{}{flattenS3LogDeliveryDescription(v)} + } + + return tfMap +} + +func flattenCloudWatchLogsLogDeliveryDescription(apiObject *kafkaconnect.CloudWatchLogsLogDeliveryDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Enabled; v != nil { + tfMap["enabled"] = aws.BoolValue(v) + } + + if v := apiObject.LogGroup; v != nil { + tfMap["log_group"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenFirehoseLogDeliveryDescription(apiObject *kafkaconnect.FirehoseLogDeliveryDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.DeliveryStream; v != nil { + tfMap["delivery_stream"] = aws.StringValue(v) + } + + if v := apiObject.Enabled; v != nil { + tfMap["enabled"] = aws.BoolValue(v) + } + + return tfMap +} + +func flattenS3LogDeliveryDescription(apiObject *kafkaconnect.S3LogDeliveryDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Bucket; v != nil { + tfMap["bucket"] = aws.StringValue(v) + } + + if v := apiObject.Enabled; v != nil { + tfMap["enabled"] = aws.BoolValue(v) + } + + if v := apiObject.Prefix; v != nil { + tfMap["prefix"] = aws.StringValue(v) + } + + return tfMap +} + +func flattenWorkerConfigurationDescription(apiObject *kafkaconnect.WorkerConfigurationDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.Revision; v != nil { + tfMap["revision"] = aws.Int64Value(v) + } + + if v := apiObject.WorkerConfigurationArn; v != nil { + tfMap["arn"] = aws.StringValue(v) + } + + return tfMap +} diff --git a/internal/service/kafkaconnect/connector_data_source.go b/internal/service/kafkaconnect/connector_data_source.go new file mode 100644 index 000000000000..a0a273314937 --- /dev/null +++ b/internal/service/kafkaconnect/connector_data_source.go @@ -0,0 +1,83 @@ +package kafkaconnect + +import ( + "context" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/kafkaconnect" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func DataSourceConnector() *schema.Resource { + return &schema.Resource{ + ReadContext: dataSourceConnectorRead, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: 
schema.TypeString, + Computed: true, + }, + "description": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "version": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceConnectorRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).KafkaConnectConn + + name := d.Get("name") + var output []*kafkaconnect.ConnectorSummary + + err := conn.ListConnectorsPagesWithContext(ctx, &kafkaconnect.ListConnectorsInput{}, func(page *kafkaconnect.ListConnectorsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.Connectors { + if aws.StringValue(v.ConnectorName) == name { + output = append(output, v) + } + } + + return !lastPage + }) + + if err != nil { + return diag.Errorf("error listing MSK Connect Connectors: %s", err) + } + + if len(output) == 0 || output[0] == nil { + err = tfresource.NewEmptyResultError(name) + } else if count := len(output); count > 1 { + err = tfresource.NewTooManyResultsError(count, name) + } + + if err != nil { + return diag.FromErr(tfresource.SingularDataSourceFindError("MSK Connect Connector", err)) + } + + connector := output[0] + + d.SetId(aws.StringValue(connector.ConnectorArn)) + + d.Set("arn", connector.ConnectorArn) + d.Set("description", connector.ConnectorDescription) + d.Set("name", connector.ConnectorName) + d.Set("version", connector.CurrentVersion) + + return nil +} diff --git a/internal/service/kafkaconnect/connector_data_source_test.go b/internal/service/kafkaconnect/connector_data_source_test.go new file mode 100644 index 000000000000..297434ae61b3 --- /dev/null +++ b/internal/service/kafkaconnect/connector_data_source_test.go @@ -0,0 +1,42 @@ +package kafkaconnect_test + +import ( + "testing" + + "github.com/aws/aws-sdk-go/service/kafkaconnect" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccKafkaConnectConnectorDataSource_basic(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_mskconnect_connector.test" + dataSourceName := "data.aws_mskconnect_connector.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckPartitionHasService(kafkaconnect.EndpointsID, t) }, + ErrorCheck: acctest.ErrorCheck(t, kafkaconnect.EndpointsID), + CheckDestroy: nil, + ProviderFactories: acctest.ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccConnectorDataSourceConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(resourceName, "arn", dataSourceName, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "description", dataSourceName, "description"), + resource.TestCheckResourceAttrPair(resourceName, "name", dataSourceName, "name"), + resource.TestCheckResourceAttrPair(resourceName, "version", dataSourceName, "version"), + ), + }, + }, + }) +} + +func testAccConnectorDataSourceConfig(rName string) string { + return acctest.ConfigCompose(testAccConnectorConfig(rName), ` +data "aws_mskconnect_connector" "test" { + name = aws_mskconnect_connector.test.name +} +`) +} diff --git a/internal/service/kafkaconnect/connector_test.go b/internal/service/kafkaconnect/connector_test.go new file mode 100644 index 000000000000..6adf5e767280 --- 
/dev/null +++ b/internal/service/kafkaconnect/connector_test.go @@ -0,0 +1,635 @@ +package kafkaconnect_test + +import ( + "context" + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/kafkaconnect" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfkafkaconnect "github.com/hashicorp/terraform-provider-aws/internal/service/kafkaconnect" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccKafkaConnectConnector_basic(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_mskconnect_connector.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckPartitionHasService(kafkaconnect.EndpointsID, t) }, + ErrorCheck: acctest.ErrorCheck(t, kafkaconnect.EndpointsID), + CheckDestroy: testAccCheckConnectorDestroy, + ProviderFactories: acctest.ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccConnectorConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectorExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "capacity.#", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.#", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.max_worker_count", "2"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.mcu_count", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.min_worker_count", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.scale_in_policy.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "capacity.0.autoscaling.0.scale_in_policy.0.cpu_utilization_percentage"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.scale_out_policy.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "capacity.0.autoscaling.0.scale_out_policy.0.cpu_utilization_percentage"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.provisioned_capacity.#", "0"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.%", "3"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.connector.class", "com.github.jcustenborder.kafka.connect.simulator.SimulatorSinkConnector"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.tasks.max", "1"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.topics", "t1"), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.bootstrap_servers"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.vpc.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.vpc.0.security_groups.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.vpc.0.subnets.#", "3"), + resource.TestCheckResourceAttr(resourceName, 
"kafka_cluster_client_authentication.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_client_authentication.0.authentication_type", "NONE"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_encryption_in_transit.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_encryption_in_transit.0.encryption_type", "TLS"), + resource.TestCheckResourceAttr(resourceName, "kafkaconnect_version", "2.7.1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.#", "0"), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "plugin.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "plugin.*", map[string]string{ + "custom_plugin.#": "1", + }), + resource.TestCheckResourceAttrSet(resourceName, "service_execution_role_arn"), + resource.TestCheckResourceAttrSet(resourceName, "version"), + resource.TestCheckResourceAttr(resourceName, "worker_configuration.#", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccKafkaConnectConnector_disappears(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_mskconnect_connector.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckPartitionHasService(kafkaconnect.EndpointsID, t) }, + ErrorCheck: acctest.ErrorCheck(t, kafkaconnect.EndpointsID), + CheckDestroy: testAccCheckConnectorDestroy, + ProviderFactories: acctest.ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccConnectorConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectorExists(resourceName), + acctest.CheckResourceDisappears(acctest.Provider, tfkafkaconnect.ResourceConnector(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccKafkaConnectConnector_update(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_mskconnect_connector.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckPartitionHasService(kafkaconnect.EndpointsID, t) }, + ErrorCheck: acctest.ErrorCheck(t, kafkaconnect.EndpointsID), + CheckDestroy: testAccCheckConnectorDestroy, + ProviderFactories: acctest.ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccConnectorAllAttributesConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectorExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "capacity.#", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.#", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.max_worker_count", "6"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.mcu_count", "2"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.min_worker_count", "4"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.scale_in_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.scale_in_policy.0.cpu_utilization_percentage", "25"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.scale_out_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.0.scale_out_policy.0.cpu_utilization_percentage", "75"), + 
resource.TestCheckResourceAttr(resourceName, "capacity.0.provisioned_capacity.#", "0"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.%", "3"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.connector.class", "com.github.jcustenborder.kafka.connect.simulator.SimulatorSinkConnector"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.tasks.max", "1"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.topics", "t1"), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.bootstrap_servers"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.vpc.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.vpc.0.security_groups.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.vpc.0.subnets.#", "3"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_client_authentication.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_client_authentication.0.authentication_type", "NONE"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_encryption_in_transit.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_encryption_in_transit.0.encryption_type", "TLS"), + resource.TestCheckResourceAttr(resourceName, "kafkaconnect_version", "2.7.1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.#", "1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.#", "1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.cloudwatch_logs.#", "1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.cloudwatch_logs.0.enabled", "true"), + resource.TestCheckResourceAttrSet(resourceName, "log_delivery.0.worker_log_delivery.0.cloudwatch_logs.0.log_group"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.firehose.#", "1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.firehose.0.delivery_stream", ""), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.firehose.0.enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.s3.#", "1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.s3.0.bucket", ""), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.s3.0.enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.s3.0.prefix", ""), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "plugin.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "plugin.*", map[string]string{ + "custom_plugin.#": "1", + }), + resource.TestCheckResourceAttrSet(resourceName, "service_execution_role_arn"), + resource.TestCheckResourceAttrSet(resourceName, "version"), + resource.TestCheckResourceAttr(resourceName, "worker_configuration.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "worker_configuration.0.arn"), + 
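// Illustrative sketch, not part of the diff: the shape of the two `capacity`
// configurations the update test above toggles between. The field values mirror
// the attribute checks in the two test steps; all other connector arguments are
// omitted, and these constants exist only for illustration.
const exampleAutoscalingCapacity = `
capacity {
  autoscaling {
    mcu_count        = 2
    min_worker_count = 4
    max_worker_count = 6

    scale_in_policy {
      cpu_utilization_percentage = 25
    }

    scale_out_policy {
      cpu_utilization_percentage = 75
    }
  }
}
`

const exampleProvisionedCapacity = `
capacity {
  provisioned_capacity {
    mcu_count    = 1
    worker_count = 4
  }
}
`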
resource.TestCheckResourceAttrSet(resourceName, "worker_configuration.0.revision"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccConnectorAllAttributesCapacityUpdatedConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckConnectorExists(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "capacity.#", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.autoscaling.#", "0"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.provisioned_capacity.#", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.provisioned_capacity.0.mcu_count", "1"), + resource.TestCheckResourceAttr(resourceName, "capacity.0.provisioned_capacity.0.worker_count", "4"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.%", "3"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.connector.class", "com.github.jcustenborder.kafka.connect.simulator.SimulatorSinkConnector"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.tasks.max", "1"), + resource.TestCheckResourceAttr(resourceName, "connector_configuration.topics", "t1"), + resource.TestCheckResourceAttr(resourceName, "description", ""), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.bootstrap_servers"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.vpc.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.vpc.0.security_groups.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster.0.apache_kafka_cluster.0.vpc.0.subnets.#", "3"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_client_authentication.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_client_authentication.0.authentication_type", "NONE"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_encryption_in_transit.#", "1"), + resource.TestCheckResourceAttr(resourceName, "kafka_cluster_encryption_in_transit.0.encryption_type", "TLS"), + resource.TestCheckResourceAttr(resourceName, "kafkaconnect_version", "2.7.1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.#", "1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.#", "1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.cloudwatch_logs.#", "1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.cloudwatch_logs.0.enabled", "true"), + resource.TestCheckResourceAttrSet(resourceName, "log_delivery.0.worker_log_delivery.0.cloudwatch_logs.0.log_group"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.firehose.#", "1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.firehose.0.delivery_stream", ""), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.firehose.0.enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.s3.#", "1"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.s3.0.bucket", ""), + resource.TestCheckResourceAttr(resourceName, 
"log_delivery.0.worker_log_delivery.0.s3.0.enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "log_delivery.0.worker_log_delivery.0.s3.0.prefix", ""), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "plugin.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "plugin.*", map[string]string{ + "custom_plugin.#": "1", + }), + resource.TestCheckResourceAttrSet(resourceName, "service_execution_role_arn"), + resource.TestCheckResourceAttrSet(resourceName, "version"), + resource.TestCheckResourceAttr(resourceName, "worker_configuration.#", "1"), + resource.TestCheckResourceAttrSet(resourceName, "worker_configuration.0.arn"), + resource.TestCheckResourceAttrSet(resourceName, "worker_configuration.0.revision"), + ), + }, + }, + }) +} + +func testAccCheckConnectorExists(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No MSK Connect Connector ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn + + _, err := tfkafkaconnect.FindConnectorByARN(context.TODO(), conn, rs.Primary.ID) + + if err != nil { + return err + } + + return nil + } +} + +func testAccCheckConnectorDestroy(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_mskconnect_connector" { + continue + } + + _, err := tfkafkaconnect.FindConnectorByARN(context.TODO(), conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("MSK Connect Connector %s still exists", rs.Primary.ID) + } + + return nil +} + +func testAccConnectorBaseConfig(rName string) string { + return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "10.10.0.0/16" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "test1" { + vpc_id = aws_vpc.test.id + cidr_block = "10.10.1.0/24" + availability_zone = data.aws_availability_zones.available.names[0] + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "test2" { + vpc_id = aws_vpc.test.id + cidr_block = "10.10.2.0/24" + availability_zone = data.aws_availability_zones.available.names[1] + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "test3" { + vpc_id = aws_vpc.test.id + cidr_block = "10.10.3.0/24" + availability_zone = data.aws_availability_zones.available.names[2] + + tags = { + Name = %[1]q + } +} + +resource "aws_security_group" "test" { + vpc_id = aws_vpc.test.id + name = %[1]q + + ingress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + egress { + from_port = 0 + to_port = 0 + protocol = "-1" + cidr_blocks = ["0.0.0.0/0"] + } + + tags = { + Name = %[1]q + } +} + +data "aws_region" "current" {} + +resource "aws_vpc_endpoint" "test" { + vpc_id = aws_vpc.test.id + service_name = "com.amazonaws.${data.aws_region.current.name}.s3" + vpc_endpoint_type = "Interface" + + security_group_ids = [ + aws_security_group.test.id, + ] + + tags = { + Name = %[1]q + } +} + +resource "aws_iam_role" "test" { + name = %[1]q + path = "/" + assume_role_policy = data.aws_iam_policy_document.assume_role.json +} + +data "aws_iam_policy_document" "assume_role" { + statement { + actions = ["sts:AssumeRole"] + effect = "Allow" + + principals 
{ + type = "Service" + identifiers = ["kafkaconnect.amazonaws.com"] + } + } +} + +resource "aws_iam_role_policy" "test" { + name = %[1]q + role = aws_iam_role.test.id + policy = < 0 { + apiObject.S3Location = expandS3Location(v[0].(map[string]interface{})) } - if objVer, ok := tfMap["object_version"].(string); ok && objVer != "" { - s3Location.ObjectVersion = aws.String(objVer) + return apiObject +} + +func expandS3Location(tfMap map[string]interface{}) *kafkaconnect.S3Location { + if tfMap == nil { + return nil } - return &s3Location + apiObject := &kafkaconnect.S3Location{} + + if v, ok := tfMap["bucket_arn"].(string); ok && v != "" { + apiObject.BucketArn = aws.String(v) + } + + if v, ok := tfMap["file_key"].(string); ok && v != "" { + apiObject.FileKey = aws.String(v) + } + + if v, ok := tfMap["object_version"].(string); ok && v != "" { + apiObject.ObjectVersion = aws.String(v) + } + + return apiObject } -func flattenLocation(apiLocation *kafkaconnect.CustomPluginLocationDescription) []interface{} { - location := make(map[string]interface{}) +func flattenCustomPluginLocationDescription(apiObject *kafkaconnect.CustomPluginLocationDescription) map[string]interface{} { + if apiObject == nil { + return nil + } - location["s3"] = flattenS3Location(apiLocation.S3Location) + tfMap := map[string]interface{}{} - return []interface{}{location} + if v := apiObject.S3Location; v != nil { + tfMap["s3"] = []interface{}{flattenS3LocationDescription(v)} + } + + return tfMap } -func flattenS3Location(apiS3Location *kafkaconnect.S3LocationDescription) []interface{} { - location := make(map[string]interface{}) +func flattenS3LocationDescription(apiObject *kafkaconnect.S3LocationDescription) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} - location["bucket_arn"] = aws.StringValue(apiS3Location.BucketArn) - location["file_key"] = aws.StringValue(apiS3Location.FileKey) + if v := apiObject.BucketArn; v != nil { + tfMap["bucket_arn"] = aws.StringValue(v) + } + + if v := apiObject.FileKey; v != nil { + tfMap["file_key"] = aws.StringValue(v) + } - if objVer := apiS3Location.ObjectVersion; objVer != nil { - location["object_version"] = aws.StringValue(objVer) + if v := apiObject.ObjectVersion; v != nil { + tfMap["object_version"] = aws.StringValue(v) } - return []interface{}{location} + return tfMap } diff --git a/internal/service/kafkaconnect/custom_plugin_data_source.go b/internal/service/kafkaconnect/custom_plugin_data_source.go index 062664248015..7de27951d44b 100644 --- a/internal/service/kafkaconnect/custom_plugin_data_source.go +++ b/internal/service/kafkaconnect/custom_plugin_data_source.go @@ -1,17 +1,19 @@ package kafkaconnect import ( - "fmt" + "context" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kafkaconnect" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func DataSourceCustomPlugin() *schema.Resource { return &schema.Resource{ - Read: dataSourceCustomPluginRead, + ReadWithoutTimeout: dataSourceCustomPluginRead, Schema: map[string]*schema.Schema{ "arn": { @@ -38,25 +40,20 @@ func DataSourceCustomPlugin() *schema.Resource { } } -func dataSourceCustomPluginRead(d *schema.ResourceData, meta interface{}) error { +func dataSourceCustomPluginRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { conn 
:= meta.(*conns.AWSClient).KafkaConnectConn - pluginName := d.Get("name") + name := d.Get("name") + var output []*kafkaconnect.CustomPluginSummary - input := &kafkaconnect.ListCustomPluginsInput{} - - var plugin *kafkaconnect.CustomPluginSummary - - err := conn.ListCustomPluginsPages(input, func(page *kafkaconnect.ListCustomPluginsOutput, lastPage bool) bool { + err := conn.ListCustomPluginsPagesWithContext(ctx, &kafkaconnect.ListCustomPluginsInput{}, func(page *kafkaconnect.ListCustomPluginsOutput, lastPage bool) bool { if page == nil { return !lastPage } - for _, pluginSummary := range page.CustomPlugins { - if aws.StringValue(pluginSummary.Name) == pluginName { - plugin = pluginSummary - - return false + for _, v := range page.CustomPlugins { + if aws.StringValue(v.Name) == name { + output = append(output, v) } } @@ -64,18 +61,28 @@ func dataSourceCustomPluginRead(d *schema.ResourceData, meta interface{}) error }) if err != nil { - return fmt.Errorf("error listing MSK Connect Custom Plugins: %w", err) + return diag.Errorf("error listing MSK Connect Custom Plugins: %s", err) } - if plugin == nil { - return fmt.Errorf("error reading MSK Connect Custom Plugin (%s): no results found", pluginName) + if len(output) == 0 || output[0] == nil { + err = tfresource.NewEmptyResultError(name) + } else if count := len(output); count > 1 { + err = tfresource.NewTooManyResultsError(count, name) } + if err != nil { + return diag.FromErr(tfresource.SingularDataSourceFindError("MSK Connect Custom Plugin", err)) + } + + plugin := output[0] + d.SetId(aws.StringValue(plugin.CustomPluginArn)) + d.Set("arn", plugin.CustomPluginArn) d.Set("description", plugin.Description) d.Set("name", plugin.Name) d.Set("state", plugin.CustomPluginState) + if plugin.LatestRevision != nil { d.Set("latest_revision", plugin.LatestRevision.Revision) } else { diff --git a/internal/service/kafkaconnect/custom_plugin_data_source_test.go b/internal/service/kafkaconnect/custom_plugin_data_source_test.go index 8600bf3435cc..c8ae0e45e279 100644 --- a/internal/service/kafkaconnect/custom_plugin_data_source_test.go +++ b/internal/service/kafkaconnect/custom_plugin_data_source_test.go @@ -36,10 +36,10 @@ func TestAccKafkaConnectCustomPluginDataSource_basic(t *testing.T) { } func testAccCustomPluginDataSourceConfig(rName string) string { - return acctest.ConfigCompose(testAccCustomPluginConfigBasicS3ObjectJar(rName), fmt.Sprintf(` + return acctest.ConfigCompose(testAccCustomPluginBaseConfig(rName, false), fmt.Sprintf(` resource "aws_mskconnect_custom_plugin" "test" { name = %[1]q - content_type = "JAR" + content_type = "ZIP" location { s3 { diff --git a/internal/service/kafkaconnect/custom_plugin_test.go b/internal/service/kafkaconnect/custom_plugin_test.go index b78f748004ba..4eedc132ed30 100644 --- a/internal/service/kafkaconnect/custom_plugin_test.go +++ b/internal/service/kafkaconnect/custom_plugin_test.go @@ -1,6 +1,7 @@ package kafkaconnect_test import ( + "context" "fmt" "testing" @@ -11,6 +12,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfkafkaconnect "github.com/hashicorp/terraform-provider-aws/internal/service/kafkaconnect" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func TestAccKafkaConnectCustomPlugin_basic(t *testing.T) { @@ -20,20 +22,24 @@ func TestAccKafkaConnectCustomPlugin_basic(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t); 
acctest.PreCheckPartitionHasService(kafkaconnect.EndpointsID, t) }, ErrorCheck: acctest.ErrorCheck(t, kafkaconnect.EndpointsID), - CheckDestroy: nil, + CheckDestroy: testAccCheckCustomPluginDestroy, Providers: acctest.Providers, Steps: []resource.TestStep{ { - Config: testAccCustomPluginConfigBasic(rName), - Check: resource.ComposeTestCheckFunc( + Config: testAccCustomPluginConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckCustomPluginExists(resourceName), resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "content_type", "ZIP"), + resource.TestCheckResourceAttr(resourceName, "description", ""), resource.TestCheckResourceAttrSet(resourceName, "latest_revision"), + resource.TestCheckResourceAttr(resourceName, "location.#", "1"), + resource.TestCheckResourceAttr(resourceName, "location.0.s3.#", "1"), resource.TestCheckResourceAttrSet(resourceName, "location.0.s3.0.bucket_arn"), - resource.TestCheckResourceAttr(resourceName, "location.0.s3.0.file_key", rName), + resource.TestCheckResourceAttrSet(resourceName, "location.0.s3.0.file_key"), resource.TestCheckResourceAttr(resourceName, "location.0.s3.0.object_version", ""), - resource.TestCheckResourceAttr(resourceName, "state", kafkaconnect.CustomPluginStateActive), - resource.TestCheckResourceAttr(resourceName, "content_type", kafkaconnect.CustomPluginContentTypeJar), + resource.TestCheckResourceAttr(resourceName, "name", rName), + resource.TestCheckResourceAttr(resourceName, "state", "ACTIVE"), ), }, { @@ -45,49 +51,43 @@ func TestAccKafkaConnectCustomPlugin_basic(t *testing.T) { }) } -func TestAccKafkaConnectCustomPlugin_description(t *testing.T) { +func TestAccKafkaConnectCustomPlugin_disappears(t *testing.T) { rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - rDescription := sdkacctest.RandString(20) resourceName := "aws_mskconnect_custom_plugin.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckPartitionHasService(kafkaconnect.EndpointsID, t) }, ErrorCheck: acctest.ErrorCheck(t, kafkaconnect.EndpointsID), - CheckDestroy: nil, + CheckDestroy: testAccCheckCustomPluginDestroy, Providers: acctest.Providers, Steps: []resource.TestStep{ { - Config: testAccCustomPluginConfigDescription(rName, rDescription), + Config: testAccCustomPluginConfig(rName), Check: resource.ComposeTestCheckFunc( testAccCheckCustomPluginExists(resourceName), - resource.TestCheckResourceAttr(resourceName, "description", rDescription), + acctest.CheckResourceDisappears(acctest.Provider, tfkafkaconnect.ResourceCustomPlugin(), resourceName), ), - }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ExpectNonEmptyPlan: true, }, }, }) } -func TestAccKafkaConnectCustomPlugin_contentType(t *testing.T) { +func TestAccKafkaConnectCustomPlugin_description(t *testing.T) { rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - rNameUpdated := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_mskconnect_custom_plugin.test" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckPartitionHasService(kafkaconnect.EndpointsID, t) }, ErrorCheck: acctest.ErrorCheck(t, kafkaconnect.EndpointsID), - CheckDestroy: nil, + CheckDestroy: testAccCheckCustomPluginDestroy, Providers: acctest.Providers, Steps: []resource.TestStep{ { - Config: testAccCustomPluginConfigBasic(rName), + Config: testAccCustomPluginConfigDescription(rName), Check: 
resource.ComposeTestCheckFunc( testAccCheckCustomPluginExists(resourceName), - resource.TestCheckResourceAttr(resourceName, "content_type", kafkaconnect.CustomPluginContentTypeJar), + resource.TestCheckResourceAttr(resourceName, "description", "testing"), ), }, { @@ -95,13 +95,6 @@ func TestAccKafkaConnectCustomPlugin_contentType(t *testing.T) { ImportState: true, ImportStateVerify: true, }, - { - Config: testAccCustomPluginConfigContentTypeZip(rNameUpdated), - Check: resource.ComposeTestCheckFunc( - testAccCheckCustomPluginExists(resourceName), - resource.TestCheckResourceAttr(resourceName, "content_type", kafkaconnect.CustomPluginContentTypeZip), - ), - }, }, }) } @@ -113,14 +106,14 @@ func TestAccKafkaConnectCustomPlugin_objectVersion(t *testing.T) { resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckPartitionHasService(kafkaconnect.EndpointsID, t) }, ErrorCheck: acctest.ErrorCheck(t, kafkaconnect.EndpointsID), - CheckDestroy: nil, + CheckDestroy: testAccCheckCustomPluginDestroy, Providers: acctest.Providers, Steps: []resource.TestStep{ { Config: testAccCustomPluginConfigObjectVersion(rName), Check: resource.ComposeTestCheckFunc( testAccCheckCustomPluginExists(resourceName), - testAccCheckCustomPluginObjectVersion(resourceName), + resource.TestCheckResourceAttrSet(resourceName, "location.0.s3.0.object_version"), ), }, { @@ -140,12 +133,12 @@ func testAccCheckCustomPluginExists(name string) resource.TestCheckFunc { } if rs.Primary.ID == "" { - return fmt.Errorf("No MSK Custom Plugin ID is set") + return fmt.Errorf("No MSK Connect Custom Plugin ID is set") } conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn - _, err := tfkafkaconnect.FindCustomPluginByARN(conn, rs.Primary.ID) + _, err := tfkafkaconnect.FindCustomPluginByARN(context.TODO(), conn, rs.Primary.ID) if err != nil { return err @@ -155,80 +148,63 @@ func testAccCheckCustomPluginExists(name string) resource.TestCheckFunc { } } -func testAccCheckCustomPluginObjectVersion(name string) resource.TestCheckFunc { - return func(s *terraform.State) error { - plugin, ok := s.RootModule().Resources[name] - if !ok { - return fmt.Errorf("Not found: %s", name) +func testAccCheckCustomPluginDestroy(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).KafkaConnectConn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_mskconnect_custom_plugin" { + continue } - for _, rs := range s.RootModule().Resources { - if rs.Type == "aws_s3_object" { - pluginObjectVersion := plugin.Primary.Attributes["location.0.s3.0.object_version"] - objectVersionId := rs.Primary.Attributes["version_id"] + _, err := tfkafkaconnect.FindCustomPluginByARN(context.TODO(), conn, rs.Primary.ID) - if !(pluginObjectVersion == objectVersionId) { - return fmt.Errorf("Plugin object version doesn't match object's version id: %s != %s", pluginObjectVersion, objectVersionId) - } + if tfresource.NotFound(err) { + continue + } - return nil - } + if err != nil { + return err } - return fmt.Errorf("Couldn't find aws_s3_object resource to compare versions.") + return fmt.Errorf("MSK Connect Custom Plugin %s still exists", rs.Primary.ID) } -} -func testAccCustomPluginConfigBasicS3ObjectZip(name string) string { - return fmt.Sprintf(` -resource "aws_s3_bucket" "test" { - bucket = %[1]q + return nil } -resource "aws_s3_object" "test" { - bucket = aws_s3_bucket.test.id - key = %[1]q - source = "test-fixtures/activemq-connector.zip" -} -`, name) -} - -func 
testAccCustomPluginConfigBasicS3ObjectJar(name string) string { +func testAccCustomPluginBaseConfig(rName string, s3BucketVersioning bool) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { - bucket = %[1]q + bucket = %[1]q + force_destroy = true } -resource "aws_s3_object" "test" { +resource "aws_s3_bucket_acl" "test" { bucket = aws_s3_bucket.test.id - key = %[1]q - source = "test-fixtures/mongodb-connector.jar" -} -`, name) + acl = "private" } -func testAccCustomPluginConfigBasic(name string) string { - return acctest.ConfigCompose(testAccCustomPluginConfigBasicS3ObjectJar(name), fmt.Sprintf(` -resource "aws_mskconnect_custom_plugin" "test" { - name = %[1]q - content_type = "JAR" +resource "aws_s3_bucket_versioning" "test" { + bucket = aws_s3_bucket_acl.test.bucket - location { - s3 { - bucket_arn = aws_s3_bucket.test.arn - file_key = aws_s3_object.test.key - } + versioning_configuration { + status = %[2]t ? "Enabled" : "Suspended" } } -`, name)) + +resource "aws_s3_object" "test" { + bucket = aws_s3_bucket_versioning.test.bucket + key = "jcustenborder-kafka-connect-simulator-0.1.120.zip" + source = "test-fixtures/jcustenborder-kafka-connect-simulator-0.1.120.zip" +} +`, rName, s3BucketVersioning) } -func testAccCustomPluginConfigDescription(name, description string) string { - return acctest.ConfigCompose(testAccCustomPluginConfigBasicS3ObjectJar(name), fmt.Sprintf(` +func testAccCustomPluginConfig(rName string) string { + return acctest.ConfigCompose(testAccCustomPluginBaseConfig(rName, false), fmt.Sprintf(` resource "aws_mskconnect_custom_plugin" "test" { name = %[1]q - description = %[2]q - content_type = "JAR" + content_type = "ZIP" location { s3 { @@ -237,14 +213,15 @@ resource "aws_mskconnect_custom_plugin" "test" { } } } -`, name, description)) +`, rName)) } -func testAccCustomPluginConfigContentTypeZip(name string) string { - return acctest.ConfigCompose(testAccCustomPluginConfigBasicS3ObjectZip(name), fmt.Sprintf(` +func testAccCustomPluginConfigDescription(rName string) string { + return acctest.ConfigCompose(testAccCustomPluginBaseConfig(rName, false), fmt.Sprintf(` resource "aws_mskconnect_custom_plugin" "test" { name = %[1]q content_type = "ZIP" + description = "testing" location { s3 { @@ -253,34 +230,15 @@ resource "aws_mskconnect_custom_plugin" "test" { } } } -`, name)) -} - -func testAccCustomPluginConfigObjectVersion(name string) string { - return fmt.Sprintf(` -resource "aws_s3_bucket" "test" { - bucket = %[1]q -} - -resource "aws_s3_bucket_versioning" "test" { - bucket = aws_s3_bucket.test.id - versioning_configuration { - status = "Enabled" - } -} - -resource "aws_s3_object" "test" { - # Must have versioning enabled first - depends_on = [aws_s3_bucket_versioning.test] - - bucket = aws_s3_bucket.test.id - key = %[1]q - source = "test-fixtures/mongodb-connector.jar" +`, rName)) } +func testAccCustomPluginConfigObjectVersion(rName string) string { + return acctest.ConfigCompose(testAccCustomPluginBaseConfig(rName, true), fmt.Sprintf(` resource "aws_mskconnect_custom_plugin" "test" { name = %[1]q - content_type = "JAR" + content_type = "ZIP" + description = "testing" location { s3 { @@ -290,5 +248,5 @@ resource "aws_mskconnect_custom_plugin" "test" { } } } -`, name) +`, rName)) } diff --git a/internal/service/kafkaconnect/find.go b/internal/service/kafkaconnect/find.go index 98c4c00ee706..4e31c31f98d1 100644 --- a/internal/service/kafkaconnect/find.go +++ b/internal/service/kafkaconnect/find.go @@ -1,6 +1,8 @@ package kafkaconnect import ( + "context" + 
"github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kafkaconnect" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" @@ -8,12 +10,37 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) -func FindCustomPluginByARN(conn *kafkaconnect.KafkaConnect, arn string) (*kafkaconnect.DescribeCustomPluginOutput, error) { +func FindConnectorByARN(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string) (*kafkaconnect.DescribeConnectorOutput, error) { + input := &kafkaconnect.DescribeConnectorInput{ + ConnectorArn: aws.String(arn), + } + + output, err := conn.DescribeConnectorWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, kafkaconnect.ErrCodeNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} + +func FindCustomPluginByARN(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string) (*kafkaconnect.DescribeCustomPluginOutput, error) { input := &kafkaconnect.DescribeCustomPluginInput{ CustomPluginArn: aws.String(arn), } - output, err := conn.DescribeCustomPlugin(input) + output, err := conn.DescribeCustomPluginWithContext(ctx, input) if tfawserr.ErrCodeEquals(err, kafkaconnect.ErrCodeNotFoundException) { return nil, &resource.NotFoundError{ @@ -33,12 +60,13 @@ func FindCustomPluginByARN(conn *kafkaconnect.KafkaConnect, arn string) (*kafkac return output, nil } -func FindWorkerConfigurationByARN(conn *kafkaconnect.KafkaConnect, arn string) (*kafkaconnect.DescribeWorkerConfigurationOutput, error) { +func FindWorkerConfigurationByARN(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string) (*kafkaconnect.DescribeWorkerConfigurationOutput, error) { input := &kafkaconnect.DescribeWorkerConfigurationInput{ WorkerConfigurationArn: aws.String(arn), } - output, err := conn.DescribeWorkerConfiguration(input) + output, err := conn.DescribeWorkerConfigurationWithContext(ctx, input) + if tfawserr.ErrCodeEquals(err, kafkaconnect.ErrCodeNotFoundException) { return nil, &resource.NotFoundError{ LastError: err, diff --git a/internal/service/kafkaconnect/status.go b/internal/service/kafkaconnect/status.go index 854229423e21..b87d28809d7a 100644 --- a/internal/service/kafkaconnect/status.go +++ b/internal/service/kafkaconnect/status.go @@ -1,15 +1,33 @@ package kafkaconnect import ( + "context" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kafkaconnect" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) -func statusCustomPluginState(conn *kafkaconnect.KafkaConnect, arn string) resource.StateRefreshFunc { +func statusConnectorState(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := FindConnectorByARN(ctx, conn, arn) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, aws.StringValue(output.ConnectorState), nil + } +} + +func statusCustomPluginState(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - output, err := FindCustomPluginByARN(conn, arn) + output, err := FindCustomPluginByARN(ctx, conn, arn) if tfresource.NotFound(err) { return nil, "", nil diff --git 
a/internal/service/kafkaconnect/sweep.go b/internal/service/kafkaconnect/sweep.go new file mode 100644 index 000000000000..32ef5b5d9852 --- /dev/null +++ b/internal/service/kafkaconnect/sweep.go @@ -0,0 +1,116 @@ +//go:build sweep +// +build sweep + +package kafkaconnect + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/kafkaconnect" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/sweep" +) + +func init() { + resource.AddTestSweepers("aws_mskconnect_connector", &resource.Sweeper{ + Name: "aws_mskconnect_connector", + F: sweepConnectors, + }) + + resource.AddTestSweepers("aws_mskconnect_custom_plugin", &resource.Sweeper{ + Name: "aws_mskconnect_custom_plugin", + F: sweepCustomPlugins, + Dependencies: []string{ + "aws_mskconnect_connector", + }, + }) +} + +func sweepConnectors(region string) error { + client, err := sweep.SharedRegionalSweepClient(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*conns.AWSClient).KafkaConnectConn + input := &kafkaconnect.ListConnectorsInput{} + sweepResources := make([]*sweep.SweepResource, 0) + + err = conn.ListConnectorsPages(input, func(page *kafkaconnect.ListConnectorsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.Connectors { + r := ResourceConnector() + d := r.Data(nil) + d.SetId(aws.StringValue(v.ConnectorArn)) + + sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) + } + + return !lastPage + }) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping MSK Connect Connector sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return fmt.Errorf("error listing MSK Connect Connectors (%s): %w", region, err) + } + + err = sweep.SweepOrchestrator(sweepResources) + + if err != nil { + return fmt.Errorf("error sweeping MSK Connect Connectors (%s): %w", region, err) + } + + return nil +} + +func sweepCustomPlugins(region string) error { + client, err := sweep.SharedRegionalSweepClient(region) + if err != nil { + return fmt.Errorf("error getting client: %s", err) + } + conn := client.(*conns.AWSClient).KafkaConnectConn + input := &kafkaconnect.ListCustomPluginsInput{} + sweepResources := make([]*sweep.SweepResource, 0) + + err = conn.ListCustomPluginsPages(input, func(page *kafkaconnect.ListCustomPluginsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.CustomPlugins { + r := ResourceCustomPlugin() + d := r.Data(nil) + d.SetId(aws.StringValue(v.CustomPluginArn)) + + sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) + } + + return !lastPage + }) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping MSK Connect Custom Plugin sweep for %s: %s", region, err) + return nil + } + + if err != nil { + return fmt.Errorf("error listing MSK Connect Custom Plugins (%s): %w", region, err) + } + + err = sweep.SweepOrchestrator(sweepResources) + + if err != nil { + return fmt.Errorf("error sweeping MSK Connect Custom Plugins (%s): %w", region, err) + } + + return nil +} diff --git a/internal/service/kafkaconnect/test-fixtures/README.md b/internal/service/kafkaconnect/test-fixtures/README.md new file mode 100644 index 000000000000..549c888b8c9a --- /dev/null +++ b/internal/service/kafkaconnect/test-fixtures/README.md @@ -0,0 +1,6 @@ +# Amazon MSK Connect 
Test Data + +This directory contains test data for the Amazon MSK Connect resource & data source acceptance tests. + +The checked-in ZIP file contains the [Simulator Connector](https://www.confluent.io/hub/jcustenborder/kafka-connect-simulator) JAR file. +See the [GitHub repository](https://github.com/jcustenborder/kafka-connect-simulator) for configuration properties. diff --git a/internal/service/kafkaconnect/test-fixtures/activemq-connector.zip b/internal/service/kafkaconnect/test-fixtures/activemq-connector.zip deleted file mode 100644 index 97a58ac7a443..000000000000 Binary files a/internal/service/kafkaconnect/test-fixtures/activemq-connector.zip and /dev/null differ diff --git a/internal/service/kafkaconnect/test-fixtures/jcustenborder-kafka-connect-simulator-0.1.120.zip b/internal/service/kafkaconnect/test-fixtures/jcustenborder-kafka-connect-simulator-0.1.120.zip new file mode 100644 index 000000000000..693e92844a61 Binary files /dev/null and b/internal/service/kafkaconnect/test-fixtures/jcustenborder-kafka-connect-simulator-0.1.120.zip differ diff --git a/internal/service/kafkaconnect/test-fixtures/mongodb-connector.jar b/internal/service/kafkaconnect/test-fixtures/mongodb-connector.jar deleted file mode 100644 index c95d1f68ec4c..000000000000 Binary files a/internal/service/kafkaconnect/test-fixtures/mongodb-connector.jar and /dev/null differ diff --git a/internal/service/kafkaconnect/wait.go b/internal/service/kafkaconnect/wait.go index f9dba0baefce..140dfe1956fe 100644 --- a/internal/service/kafkaconnect/wait.go +++ b/internal/service/kafkaconnect/wait.go @@ -1,21 +1,109 @@ package kafkaconnect import ( + "context" + "fmt" "time" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kafkaconnect" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) -func waitCustomPluginCreated(conn *kafkaconnect.KafkaConnect, arn string, timeout time.Duration) (*kafkaconnect.DescribeCustomPluginOutput, error) { - stateconf := &resource.StateChangeConf{ +func waitConnectorCreated(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string, timeout time.Duration) (*kafkaconnect.DescribeConnectorOutput, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{kafkaconnect.ConnectorStateCreating}, + Target: []string{kafkaconnect.ConnectorStateRunning}, + Refresh: statusConnectorState(ctx, conn, arn), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*kafkaconnect.DescribeConnectorOutput); ok { + if state, stateDescription := aws.StringValue(output.ConnectorState), output.StateDescription; state == kafkaconnect.ConnectorStateFailed && stateDescription != nil { + tfresource.SetLastError(err, fmt.Errorf("%s: %s", aws.StringValue(stateDescription.Code), aws.StringValue(stateDescription.Message))) + } + + return output, err + } + + return nil, err +} + +func waitConnectorDeleted(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string, timeout time.Duration) (*kafkaconnect.DescribeConnectorOutput, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{kafkaconnect.ConnectorStateDeleting}, + Target: []string{}, + Refresh: statusConnectorState(ctx, conn, arn), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*kafkaconnect.DescribeConnectorOutput); ok { + if state, stateDescription := aws.StringValue(output.ConnectorState), 
output.StateDescription; state == kafkaconnect.ConnectorStateFailed && stateDescription != nil { + tfresource.SetLastError(err, fmt.Errorf("%s: %s", aws.StringValue(stateDescription.Code), aws.StringValue(stateDescription.Message))) + } + + return output, err + } + + return nil, err +} + +func waitConnectorUpdated(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string, timeout time.Duration) (*kafkaconnect.DescribeConnectorOutput, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{kafkaconnect.ConnectorStateUpdating}, + Target: []string{kafkaconnect.ConnectorStateRunning}, + Refresh: statusConnectorState(ctx, conn, arn), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*kafkaconnect.DescribeConnectorOutput); ok { + if state, stateDescription := aws.StringValue(output.ConnectorState), output.StateDescription; state == kafkaconnect.ConnectorStateFailed && stateDescription != nil { + tfresource.SetLastError(err, fmt.Errorf("%s: %s", aws.StringValue(stateDescription.Code), aws.StringValue(stateDescription.Message))) + } + + return output, err + } + + return nil, err +} + +func waitCustomPluginCreated(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string, timeout time.Duration) (*kafkaconnect.DescribeCustomPluginOutput, error) { + stateConf := &resource.StateChangeConf{ Pending: []string{kafkaconnect.CustomPluginStateCreating}, Target: []string{kafkaconnect.CustomPluginStateActive}, - Refresh: statusCustomPluginState(conn, arn), + Refresh: statusCustomPluginState(ctx, conn, arn), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForStateContext(ctx) + + if output, ok := outputRaw.(*kafkaconnect.DescribeCustomPluginOutput); ok { + if state, stateDescription := aws.StringValue(output.CustomPluginState), output.StateDescription; state == kafkaconnect.CustomPluginStateCreateFailed && stateDescription != nil { + tfresource.SetLastError(err, fmt.Errorf("%s: %s", aws.StringValue(stateDescription.Code), aws.StringValue(stateDescription.Message))) + } + + return output, err + } + + return nil, err +} + +func waitCustomPluginDeleted(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string, timeout time.Duration) (*kafkaconnect.DescribeCustomPluginOutput, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{kafkaconnect.CustomPluginStateDeleting}, + Target: []string{}, + Refresh: statusCustomPluginState(ctx, conn, arn), Timeout: timeout, } - outputRaw, err := stateconf.WaitForState() + outputRaw, err := stateConf.WaitForStateContext(ctx) if output, ok := outputRaw.(*kafkaconnect.DescribeCustomPluginOutput); ok { return output, err diff --git a/internal/service/kafkaconnect/worker_configuration.go b/internal/service/kafkaconnect/worker_configuration.go index 513793745546..c93ab001e690 100644 --- a/internal/service/kafkaconnect/worker_configuration.go +++ b/internal/service/kafkaconnect/worker_configuration.go @@ -1,12 +1,13 @@ package kafkaconnect import ( + "context" "encoding/base64" - "fmt" "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kafkaconnect" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" @@ -15,9 +16,9 @@ import ( func ResourceWorkerConfiguration() *schema.Resource { return &schema.Resource{ - Create: resourceWorkerConfigurationCreate, - 
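// Illustrative sketch, not part of the diff: driving the waitConnectorCreated
// helper above from a Create function. The 20-minute timeout is an assumption for
// the example; the real resource would read it from its configured Create timeout.
// connectorCreateWaitExample is a hypothetical name.
func connectorCreateWaitExample(ctx context.Context, conn *kafkaconnect.KafkaConnect, arn string) error {
	if _, err := waitConnectorCreated(ctx, conn, arn, 20*time.Minute); err != nil {
		// If the connector entered the FAILED state, the waiter attached the
		// state description (code and message) to err via tfresource.SetLastError.
		return fmt.Errorf("error waiting for MSK Connect Connector (%s) create: %w", arn, err)
	}

	return nil
}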
Read: resourceWorkerConfigurationRead, - Delete: schema.Noop, + CreateWithoutTimeout: resourceWorkerConfigurationCreate, + ReadWithoutTimeout: resourceWorkerConfigurationRead, + DeleteWithoutTimeout: schema.NoopContext, Importer: &schema.ResourceImporter{ State: schema.ImportStatePassthrough, @@ -44,8 +45,8 @@ func ResourceWorkerConfiguration() *schema.Resource { }, "properties_file_content": { Type: schema.TypeString, - ForceNew: true, Required: true, + ForceNew: true, StateFunc: func(v interface{}) string { switch v := v.(type) { case string: @@ -59,36 +60,35 @@ func ResourceWorkerConfiguration() *schema.Resource { } } -func resourceWorkerConfigurationCreate(d *schema.ResourceData, meta interface{}) error { +func resourceWorkerConfigurationCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { conn := meta.(*conns.AWSClient).KafkaConnectConn name := d.Get("name").(string) - properties := d.Get("properties_file_content").(string) - input := &kafkaconnect.CreateWorkerConfigurationInput{ Name: aws.String(name), - PropertiesFileContent: aws.String(verify.Base64Encode([]byte(properties))), + PropertiesFileContent: aws.String(verify.Base64Encode([]byte(d.Get("properties_file_content").(string)))), } if v, ok := d.GetOk("description"); ok { input.Description = aws.String(v.(string)) } - log.Print("[DEBUG] Creating MSK Connect Worker Configuration") - output, err := conn.CreateWorkerConfiguration(input) + log.Printf("[DEBUG] Creating MSK Connect Worker Configuration: %s", input) + output, err := conn.CreateWorkerConfigurationWithContext(ctx, input) + if err != nil { - return fmt.Errorf("error creating MSK Connect Worker Configuration (%s): %w", name, err) + return diag.Errorf("error creating MSK Connect Worker Configuration (%s): %s", name, err) } d.SetId(aws.StringValue(output.WorkerConfigurationArn)) - return resourceWorkerConfigurationRead(d, meta) + return resourceWorkerConfigurationRead(ctx, d, meta) } -func resourceWorkerConfigurationRead(d *schema.ResourceData, meta interface{}) error { +func resourceWorkerConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { conn := meta.(*conns.AWSClient).KafkaConnectConn - config, err := FindWorkerConfigurationByARN(conn, d.Id()) + config, err := FindWorkerConfigurationByARN(ctx, conn, d.Id()) if tfresource.NotFound(err) && !d.IsNewResource() { log.Printf("[WARN] MSK Connect Worker Configuration (%s) not found, removing from state", d.Id()) @@ -97,12 +97,12 @@ func resourceWorkerConfigurationRead(d *schema.ResourceData, meta interface{}) e } if err != nil { - return fmt.Errorf("error reading MSK Connect Worker Configuration (%s): %w", d.Id(), err) + return diag.Errorf("error reading MSK Connect Worker Configuration (%s): %s", d.Id(), err) } d.Set("arn", config.WorkerConfigurationArn) - d.Set("name", config.Name) d.Set("description", config.Description) + d.Set("name", config.Name) if config.LatestRevision != nil { d.Set("latest_revision", config.LatestRevision.Revision) @@ -117,6 +117,7 @@ func resourceWorkerConfigurationRead(d *schema.ResourceData, meta interface{}) e func decodePropertiesFileContent(content string) string { result, err := base64.StdEncoding.DecodeString(content) + if err != nil { return content } diff --git a/internal/service/kafkaconnect/worker_configuration_data_source.go b/internal/service/kafkaconnect/worker_configuration_data_source.go index ea98239a2c7f..5fd0b8c346c7 100644 --- a/internal/service/kafkaconnect/worker_configuration_data_source.go +++ 
b/internal/service/kafkaconnect/worker_configuration_data_source.go @@ -1,17 +1,19 @@ package kafkaconnect import ( - "fmt" + "context" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/kafkaconnect" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func DataSourceWorkerConfiguration() *schema.Resource { return &schema.Resource{ - Read: dataSourceWorkerConfigurationRead, + ReadWithoutTimeout: dataSourceWorkerConfigurationRead, Schema: map[string]*schema.Schema{ "arn": { @@ -38,25 +40,20 @@ func DataSourceWorkerConfiguration() *schema.Resource { } } -func dataSourceWorkerConfigurationRead(d *schema.ResourceData, meta interface{}) error { +func dataSourceWorkerConfigurationRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { conn := meta.(*conns.AWSClient).KafkaConnectConn - configName := d.Get("name") + name := d.Get("name") + var output []*kafkaconnect.WorkerConfigurationSummary - input := &kafkaconnect.ListWorkerConfigurationsInput{} - - var config *kafkaconnect.WorkerConfigurationSummary - - err := conn.ListWorkerConfigurationsPages(input, func(page *kafkaconnect.ListWorkerConfigurationsOutput, lastPage bool) bool { + err := conn.ListWorkerConfigurationsPagesWithContext(ctx, &kafkaconnect.ListWorkerConfigurationsInput{}, func(page *kafkaconnect.ListWorkerConfigurationsOutput, lastPage bool) bool { if page == nil { return !lastPage } - for _, configSummary := range page.WorkerConfigurations { - if aws.StringValue(configSummary.Name) == configName { - config = configSummary - - return false + for _, v := range page.WorkerConfigurations { + if aws.StringValue(v.Name) == name { + output = append(output, v) } } @@ -64,37 +61,37 @@ func dataSourceWorkerConfigurationRead(d *schema.ResourceData, meta interface{}) }) if err != nil { - return fmt.Errorf("error listing MSK Connect Worker Configurations: %w", err) + return diag.Errorf("error listing MSK Connect Worker Configurations: %s", err) } - if config == nil { - return fmt.Errorf("error reading MSK Connect Worker Configuration (%s): no results found", configName) + if len(output) == 0 || output[0] == nil { + err = tfresource.NewEmptyResultError(name) + } else if count := len(output); count > 1 { + err = tfresource.NewTooManyResultsError(count, name) } - describeInput := &kafkaconnect.DescribeWorkerConfigurationInput{ - WorkerConfigurationArn: config.WorkerConfigurationArn, + if err != nil { + return diag.FromErr(tfresource.SingularDataSourceFindError("MSK Connect Worker Configuration", err)) } - describeOutput, err := conn.DescribeWorkerConfiguration(describeInput) + arn := aws.StringValue(output[0].WorkerConfigurationArn) + config, err := FindWorkerConfigurationByARN(ctx, conn, arn) if err != nil { - return fmt.Errorf("error reading MSK Connect Worker Configuration (%s): %w", configName, err) + return diag.Errorf("error reading MSK Connect Worker Configuration (%s): %s", arn, err) } d.SetId(aws.StringValue(config.Name)) + d.Set("arn", config.WorkerConfigurationArn) d.Set("description", config.Description) d.Set("name", config.Name) if config.LatestRevision != nil { d.Set("latest_revision", config.LatestRevision.Revision) + d.Set("properties_file_content", decodePropertiesFileContent(aws.StringValue(config.LatestRevision.PropertiesFileContent))) } else { d.Set("latest_revision", nil) - } - - if 
describeOutput.LatestRevision != nil { - d.Set("properties_file_content", decodePropertiesFileContent(aws.StringValue(describeOutput.LatestRevision.PropertiesFileContent))) - } else { d.Set("properties_file_content", nil) } diff --git a/internal/service/kafkaconnect/worker_configuration_data_source_test.go b/internal/service/kafkaconnect/worker_configuration_data_source_test.go index 5ca3121ff31e..14da35892dc4 100644 --- a/internal/service/kafkaconnect/worker_configuration_data_source_test.go +++ b/internal/service/kafkaconnect/worker_configuration_data_source_test.go @@ -12,9 +12,6 @@ import ( func TestAccKafkaConnectWorkerConfigurationDataSource_basic(t *testing.T) { rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - - propertiesFileContent := "key.converter=hello\nvalue.converter=world" - resourceName := "aws_mskconnect_worker_configuration.test" dataSourceName := "data.aws_mskconnect_worker_configuration.test" @@ -25,7 +22,7 @@ func TestAccKafkaConnectWorkerConfigurationDataSource_basic(t *testing.T) { Providers: acctest.Providers, Steps: []resource.TestStep{ { - Config: testAccWorkerConfigurationDataSourceConfigBasic(rName, propertiesFileContent), + Config: testAccWorkerConfigurationDataSourceConfig(rName), Check: resource.ComposeTestCheckFunc( resource.TestCheckResourceAttrPair(resourceName, "arn", dataSourceName, "arn"), resource.TestCheckResourceAttrPair(resourceName, "description", dataSourceName, "description"), @@ -38,15 +35,19 @@ func TestAccKafkaConnectWorkerConfigurationDataSource_basic(t *testing.T) { }) } -func testAccWorkerConfigurationDataSourceConfigBasic(name, content string) string { +func testAccWorkerConfigurationDataSourceConfig(rName string) string { return fmt.Sprintf(` resource "aws_mskconnect_worker_configuration" "test" { - name = %[1]q - properties_file_content = %[2]q + name = %[1]q + + properties_file_content = < 0 { + ephemeralStorage := v.([]interface{}) + configMap := ephemeralStorage[0].(map[string]interface{}) + params.EphemeralStorage = &lambda.EphemeralStorage{ + Size: aws.Int64(int64(configMap["size"].(int))), + } + } + if v, ok := d.GetOk("file_system_config"); ok && len(v.([]interface{})) > 0 { params.FileSystemConfigs = expandFileSystemConfigs(v.([]interface{})) } @@ -553,6 +578,11 @@ func resourceFunctionCreate(d *schema.ResourceData, meta interface{}) error { return resource.RetryableError(err) } + if tfawserr.ErrCodeEquals(err, lambda.ErrCodeResourceConflictException) { + log.Printf("[DEBUG] Received %s, retrying CreateFunction", err) + return resource.RetryableError(err) + } + if err != nil { return resource.NonRetryableError(err) } @@ -792,6 +822,12 @@ func resourceFunctionRead(d *schema.ResourceData, meta interface{}) error { log.Printf("[ERR] Error setting environment for Lambda Function (%s): %s", d.Id(), err) } + ephemeralStorage := flattenEphemeralStorage(function.EphemeralStorage) + log.Printf("[INFO] Setting Lambda %s ephemeralStorage %#v from API", d.Id(), ephemeralStorage) + if err := d.Set("ephemeral_storage", ephemeralStorage); err != nil { + return fmt.Errorf("error setting ephemeral_storage for Lambda Function (%s): %w", d.Id(), err) + } + if function.DeadLetterConfig != nil && function.DeadLetterConfig.TargetArn != nil { d.Set("dead_letter_config", []interface{}{ map[string]interface{}{ @@ -983,7 +1019,15 @@ func resourceFunctionUpdate(d *schema.ResourceData, meta interface{}) error { if d.HasChange("description") { configReq.Description = aws.String(d.Get("description").(string)) } - + if 
d.HasChange("ephemeral_storage") { + ephemeralStorage := d.Get("ephemeral_storage").([]interface{}) + if len(ephemeralStorage) == 1 { + configMap := ephemeralStorage[0].(map[string]interface{}) + configReq.EphemeralStorage = &lambda.EphemeralStorage{ + Size: aws.Int64(int64(configMap["size"].(int))), + } + } + } if d.HasChange("handler") { configReq.Handler = aws.String(d.Get("handler").(string)) } @@ -1097,6 +1141,11 @@ func resourceFunctionUpdate(d *schema.ResourceData, meta interface{}) error { return resource.RetryableError(err) } + if tfawserr.ErrCodeEquals(err, lambda.ErrCodeResourceConflictException) { + log.Printf("[DEBUG] Received %s, retrying UpdateFunctionConfiguration", err) + return resource.RetryableError(err) + } + if err != nil { return resource.NonRetryableError(err) } @@ -1444,3 +1493,14 @@ func expandImageConfigs(imageConfigMaps []interface{}) *lambda.ImageConfig { } return imageConfig } + +func flattenEphemeralStorage(response *lambda.EphemeralStorage) []map[string]interface{} { + if response == nil { + return nil + } + + m := make(map[string]interface{}) + m["size"] = aws.Int64Value(response.Size) + + return []map[string]interface{}{m} +} diff --git a/internal/service/lambda/function_data_source.go b/internal/service/lambda/function_data_source.go index 366fb01a0bee..3a4154a8e0ef 100644 --- a/internal/service/lambda/function_data_source.go +++ b/internal/service/lambda/function_data_source.go @@ -43,6 +43,18 @@ func DataSourceFunction() *schema.Resource { }, }, }, + "ephemeral_storage": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "size": { + Type: schema.TypeInt, + Computed: true, + }, + }, + }, + }, "file_system_config": { Type: schema.TypeList, Computed: true, @@ -356,5 +368,9 @@ func dataSourceFunctionRead(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("Error setting architectures for Lambda Function (%s): %w", d.Id(), err) } + if err := d.Set("ephemeral_storage", flattenEphemeralStorage(function.EphemeralStorage)); err != nil { + return fmt.Errorf("error setting ephemeral_storage: (%s): %w", d.Id(), err) + } + return nil } diff --git a/internal/service/lambda/function_data_source_test.go b/internal/service/lambda/function_data_source_test.go index 82cfc7c37e37..5f4ef7248b11 100644 --- a/internal/service/lambda/function_data_source_test.go +++ b/internal/service/lambda/function_data_source_test.go @@ -29,8 +29,11 @@ func TestAccLambdaFunctionDataSource_basic(t *testing.T) { Config: testAccFunctionBasicDataSourceConfig(rName), Check: resource.ComposeAggregateTestCheckFunc( resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "code_signing_config_arn", resourceName, "code_signing_config_arn"), resource.TestCheckResourceAttrPair(dataSourceName, "dead_letter_config.#", resourceName, "dead_letter_config.#"), resource.TestCheckResourceAttrPair(dataSourceName, "description", resourceName, "description"), + resource.TestCheckResourceAttrPair(dataSourceName, "ephemeral_storage.#", resourceName, "ephemeral_storage.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "ephemeral_storage.0.size", resourceName, "ephemeral_storage.0.size"), resource.TestCheckResourceAttrPair(dataSourceName, "function_name", resourceName, "function_name"), resource.TestCheckResourceAttrPair(dataSourceName, "handler", resourceName, "handler"), resource.TestCheckResourceAttrPair(dataSourceName, "invoke_arn", resourceName, 
"invoke_arn"), @@ -40,6 +43,8 @@ func TestAccLambdaFunctionDataSource_basic(t *testing.T) { resource.TestCheckResourceAttrPair(dataSourceName, "reserved_concurrent_executions", resourceName, "reserved_concurrent_executions"), resource.TestCheckResourceAttrPair(dataSourceName, "role", resourceName, "role"), resource.TestCheckResourceAttrPair(dataSourceName, "runtime", resourceName, "runtime"), + resource.TestCheckResourceAttrPair(dataSourceName, "signing_job_arn", resourceName, "signing_job_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "signing_profile_version_arn", resourceName, "signing_profile_version_arn"), resource.TestCheckResourceAttrPair(dataSourceName, "source_code_hash", resourceName, "source_code_hash"), resource.TestCheckResourceAttrPair(dataSourceName, "source_code_size", resourceName, "source_code_size"), resource.TestCheckResourceAttrPair(dataSourceName, "tags.%", resourceName, "tags.%"), @@ -47,9 +52,6 @@ func TestAccLambdaFunctionDataSource_basic(t *testing.T) { resource.TestCheckResourceAttrPair(dataSourceName, "tracing_config.#", resourceName, "tracing_config.#"), resource.TestCheckResourceAttrPair(dataSourceName, "tracing_config.0.mode", resourceName, "tracing_config.0.mode"), resource.TestCheckResourceAttrPair(dataSourceName, "version", resourceName, "version"), - resource.TestCheckResourceAttrPair(dataSourceName, "signing_profile_version_arn", resourceName, "signing_profile_version_arn"), - resource.TestCheckResourceAttrPair(dataSourceName, "signing_job_arn", resourceName, "signing_job_arn"), - resource.TestCheckResourceAttrPair(dataSourceName, "code_signing_config_arn", resourceName, "code_signing_config_arn"), ), }, }, @@ -245,6 +247,34 @@ func TestAccLambdaFunctionDataSource_architectures(t *testing.T) { }) } +func TestAccLambdaFunctionDataSource_ephemeralStorage(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_lambda_function.test" + resourceName := "aws_lambda_function.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), + Providers: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccFunctionEphemeralStorageDataSourceConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "ephemeral_storage.#", resourceName, "ephemeral_storage.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "ephemeral_storage.0.size", resourceName, "ephemeral_storage.0.size"), + ), + }, + }, + }) +} + +func testAccImagePreCheck(t *testing.T) { + if os.Getenv("AWS_LAMBDA_IMAGE_LATEST_ID") == "" { + t.Skip("AWS_LAMBDA_IMAGE_LATEST_ID env var must be set for Lambda Function Data Source Image Support acceptance tests.") + } +} + func testAccFunctionBaseDataSourceConfig(rName string) string { return fmt.Sprintf(` resource "aws_iam_role" "lambda" { @@ -305,7 +335,7 @@ EOF } func testAccFunctionBasicDataSourceConfig(rName string) string { - return testAccFunctionBaseDataSourceConfig(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccFunctionBaseDataSourceConfig(rName), fmt.Sprintf(` resource "aws_lambda_function" "test" { filename = "test-fixtures/lambdatest.zip" function_name = %[1]q @@ -317,11 +347,11 @@ resource "aws_lambda_function" "test" { data "aws_lambda_function" "test" { function_name = aws_lambda_function.test.function_name } -`, rName) +`, 
rName)) } func testAccFunctionVersionDataSourceConfig(rName string) string { - return testAccFunctionBaseDataSourceConfig(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccFunctionBaseDataSourceConfig(rName), fmt.Sprintf(` resource "aws_lambda_function" "test" { filename = "test-fixtures/lambdatest.zip" function_name = %[1]q @@ -335,11 +365,11 @@ data "aws_lambda_function" "test" { function_name = aws_lambda_function.test.function_name qualifier = 1 } -`, rName) +`, rName)) } func testAccFunctionAliasDataSourceConfig(rName string) string { - return testAccFunctionBaseDataSourceConfig(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccFunctionBaseDataSourceConfig(rName), fmt.Sprintf(` resource "aws_lambda_function" "test" { filename = "test-fixtures/lambdatest.zip" function_name = %[1]q @@ -359,11 +389,11 @@ data "aws_lambda_function" "test" { function_name = aws_lambda_function.test.function_name qualifier = aws_lambda_alias.test.name } -`, rName) +`, rName)) } func testAccFunctionLayersDataSourceConfig(rName string) string { - return testAccFunctionBaseDataSourceConfig(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccFunctionBaseDataSourceConfig(rName), fmt.Sprintf(` resource "aws_lambda_layer_version" "test" { filename = "test-fixtures/lambdatest.zip" layer_name = %[1]q @@ -382,11 +412,11 @@ resource "aws_lambda_function" "test" { data "aws_lambda_function" "test" { function_name = aws_lambda_function.test.function_name } -`, rName) +`, rName)) } func testAccFunctionVPCDataSourceConfig(rName string) string { - return testAccFunctionBaseDataSourceConfig(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccFunctionBaseDataSourceConfig(rName), fmt.Sprintf(` resource "aws_vpc" "test" { cidr_block = "10.0.0.0/16" @@ -440,11 +470,11 @@ resource "aws_lambda_function" "test" { data "aws_lambda_function" "test" { function_name = aws_lambda_function.test.function_name } -`, rName) +`, rName)) } func testAccFunctionEnvironmentDataSourceConfig(rName string) string { - return testAccFunctionBaseDataSourceConfig(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccFunctionBaseDataSourceConfig(rName), fmt.Sprintf(` resource "aws_lambda_function" "test" { filename = "test-fixtures/lambdatest.zip" function_name = %[1]q @@ -463,11 +493,11 @@ resource "aws_lambda_function" "test" { data "aws_lambda_function" "test" { function_name = aws_lambda_function.test.function_name } -`, rName) +`, rName)) } func testAccFunctionFileSystemsDataSourceConfig(rName string) string { - return testAccFunctionBaseDataSourceConfig(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccFunctionBaseDataSourceConfig(rName), fmt.Sprintf(` resource "aws_vpc" "test" { cidr_block = "10.0.0.0/16" @@ -558,13 +588,11 @@ resource "aws_lambda_function" "test" { data "aws_lambda_function" "test" { function_name = aws_lambda_function.test.function_name } -`, rName) +`, rName)) } func testAccFunctionImageDataSourceConfig(rName, imageID string) string { - return acctest.ConfigCompose( - testAccFunctionBaseDataSourceConfig(rName), - fmt.Sprintf(` + return acctest.ConfigCompose(testAccFunctionBaseDataSourceConfig(rName), fmt.Sprintf(` resource "aws_lambda_function" "test" { image_uri = %q function_name = %q @@ -584,7 +612,7 @@ data "aws_lambda_function" "test" { } func testAccFunctionArchitecturesDataSourceConfig(rName string) string { - return testAccFunctionBaseDataSourceConfig(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAccFunctionBaseDataSourceConfig(rName), fmt.Sprintf(` 
resource "aws_lambda_function" "test" { filename = "test-fixtures/lambdatest.zip" function_name = %[1]q @@ -597,11 +625,25 @@ resource "aws_lambda_function" "test" { data "aws_lambda_function" "test" { function_name = aws_lambda_function.test.function_name } -`, rName) +`, rName)) } -func testAccImagePreCheck(t *testing.T) { - if os.Getenv("AWS_LAMBDA_IMAGE_LATEST_ID") == "" { - t.Skip("AWS_LAMBDA_IMAGE_LATEST_ID env var must be set for Lambda Function Data Source Image Support acceptance tests.") - } +func testAccFunctionEphemeralStorageDataSourceConfig(rName string) string { + return acctest.ConfigCompose(testAccFunctionBaseDataSourceConfig(rName), fmt.Sprintf(` +resource "aws_lambda_function" "test" { + filename = "test-fixtures/lambdatest.zip" + function_name = %[1]q + handler = "exports.example" + role = aws_iam_role.lambda.arn + runtime = "nodejs12.x" + + ephemeral_storage { + size = 1024 + } +} + +data "aws_lambda_function" "test" { + function_name = aws_lambda_function.test.function_name +} +`, rName)) } diff --git a/internal/service/lambda/function_test.go b/internal/service/lambda/function_test.go index 730fdadd53b5..b8eb91412c54 100644 --- a/internal/service/lambda/function_test.go +++ b/internal/service/lambda/function_test.go @@ -51,16 +51,19 @@ func TestAccLambdaFunction_basic(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccBasicConfig(funcName, policyName, roleName, sgName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckFunctionExists(resourceName, funcName, &conf), + testAccCheckFunctionInvokeARN(resourceName, &conf), testAccCheckFunctionName(&conf, funcName), + resource.TestCheckResourceAttr(resourceName, "architectures.#", "1"), + resource.TestCheckResourceAttr(resourceName, "architectures.0", lambda.ArchitectureX8664), acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "lambda", fmt.Sprintf("function:%s", funcName)), - testAccCheckFunctionInvokeARN(resourceName, &conf), - resource.TestCheckResourceAttr(resourceName, "reserved_concurrent_executions", "-1"), - resource.TestCheckResourceAttr(resourceName, "version", tflambda.FunctionVersionLatest), + resource.TestCheckResourceAttr(resourceName, "ephemeral_storage.#", "1"), + resource.TestCheckResourceAttr(resourceName, "ephemeral_storage.0.size", "512"), resource.TestCheckResourceAttr(resourceName, "package_type", lambda.PackageTypeZip), - resource.TestCheckResourceAttr(resourceName, "architectures.0", lambda.ArchitectureX8664), acctest.CheckResourceAttrRegionalARN(resourceName, "qualified_arn", "lambda", fmt.Sprintf("function:%s:%s", funcName, tflambda.FunctionVersionLatest)), + resource.TestCheckResourceAttr(resourceName, "reserved_concurrent_executions", "-1"), + resource.TestCheckResourceAttr(resourceName, "version", tflambda.FunctionVersionLatest), ), }, { @@ -1194,6 +1197,51 @@ func TestAccLambdaFunction_architecturesWithLayer(t *testing.T) { }) } +func TestAccLambdaFunction_ephemeralStorage(t *testing.T) { + var conf lambda.GetFunctionOutput + rString := sdkacctest.RandString(8) + funcName := fmt.Sprintf("tf_acc_lambda_func_ephemeral_storage_%s", rString) + policyName := fmt.Sprintf("tf_acc_policy_lambda_func_ephemeral_storage_%s", rString) + roleName := fmt.Sprintf("tf_acc_role_lambda_func_ephemeral_storage_%s", rString) + sgName := fmt.Sprintf("tf_acc_sg_lambda_func_ephemeral_storage_%s", rString) + resourceName := "aws_lambda_function.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + }, + 
ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckFunctionDestroy, + + Steps: []resource.TestStep{ + { + Config: testAccWithEphemeralStorage(funcName, policyName, roleName, sgName), + Check: resource.ComposeTestCheckFunc( + testAccCheckFunctionExists(resourceName, funcName, &conf), + resource.TestCheckResourceAttr(resourceName, "ephemeral_storage.#", "1"), + resource.TestCheckResourceAttr(resourceName, "ephemeral_storage.0.size", "1024"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"filename", "publish"}, + }, + { + Config: testAccWithUpdateEphemeralStorage(funcName, policyName, roleName, sgName), + Check: resource.ComposeTestCheckFunc( + testAccCheckFunctionExists(resourceName, funcName, &conf), + testAccCheckFunctionName(&conf, funcName), + acctest.CheckResourceAttrRegionalARN(resourceName, "arn", "lambda", fmt.Sprintf("function:%s", funcName)), + resource.TestCheckResourceAttr(resourceName, "ephemeral_storage.0.size", "2048"), + ), + }, + }, + }) +} + func TestAccLambdaFunction_tracing(t *testing.T) { if testing.Short() { t.Skip("skipping long-running test in short mode") @@ -3597,3 +3645,35 @@ func testAccPreCheckSignerSigningProfile(t *testing.T, platformID string) { t.Skipf("skipping acceptance testing: Signing Platform (%s) not found", platformID) } } + +func testAccWithEphemeralStorage(funcName, policyName, roleName, sgName string) string { + return fmt.Sprintf(acctest.ConfigLambdaBase(policyName, roleName, sgName)+` +resource "aws_lambda_function" "test" { + filename = "test-fixtures/lambdatest.zip" + function_name = "%s" + role = aws_iam_role.iam_for_lambda.arn + handler = "exports.example" + runtime = "nodejs12.x" + + ephemeral_storage { + size = 1024 + } +} +`, funcName) +} + +func testAccWithUpdateEphemeralStorage(funcName, policyName, roleName, sgName string) string { + return fmt.Sprintf(acctest.ConfigLambdaBase(policyName, roleName, sgName)+` +resource "aws_lambda_function" "test" { + filename = "test-fixtures/lambdatest.zip" + function_name = "%s" + role = aws_iam_role.iam_for_lambda.arn + handler = "exports.example" + runtime = "nodejs12.x" + + ephemeral_storage { + size = 2048 + } +} +`, funcName) +} diff --git a/internal/service/lambda/function_url.go b/internal/service/lambda/function_url.go new file mode 100644 index 000000000000..050261816507 --- /dev/null +++ b/internal/service/lambda/function_url.go @@ -0,0 +1,401 @@ +package lambda + +import ( + "context" + "fmt" + "log" + "net/url" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/lambda" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func ResourceFunctionUrl() *schema.Resource { + return &schema.Resource{ + CreateWithoutTimeout: resourceFunctionURLCreate, + ReadWithoutTimeout: resourceFunctionURLRead, + UpdateWithoutTimeout: resourceFunctionURLUpdate, + DeleteWithoutTimeout: resourceFunctionURLDelete, + + Importer: &schema.ResourceImporter{ + StateContext: 
schema.ImportStatePassthroughContext, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(10 * time.Minute), + }, + + Schema: map[string]*schema.Schema{ + "authorization_type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(lambda.FunctionUrlAuthType_Values(), false), + }, + "cors": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allow_credentials": { + Type: schema.TypeBool, + Optional: true, + }, + "allow_headers": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "allow_methods": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "allow_origins": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "expose_headers": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "max_age": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtMost(86400), + }, + }, + }, + }, + "function_arn": { + Type: schema.TypeString, + Computed: true, + }, + "function_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + // Using function name or ARN should not be shown as a diff. + // Try to convert the old and new values from ARN to function name + oldFunctionName, oldFunctionNameErr := GetFunctionNameFromARN(old) + newFunctionName, newFunctionNameErr := GetFunctionNameFromARN(new) + return (oldFunctionName == new && oldFunctionNameErr == nil) || (newFunctionName == old && newFunctionNameErr == nil) + }, + }, + "function_url": { + Type: schema.TypeString, + Computed: true, + }, + "qualifier": { + Type: schema.TypeString, + ForceNew: true, + Optional: true, + }, + "url_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func resourceFunctionURLCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).LambdaConn + + name := d.Get("function_name").(string) + qualifier := d.Get("qualifier").(string) + id := FunctionURLCreateResourceID(name, qualifier) + input := &lambda.CreateFunctionUrlConfigInput{ + AuthType: aws.String(d.Get("authorization_type").(string)), + FunctionName: aws.String(name), + } + + if qualifier != "" { + input.Qualifier = aws.String(qualifier) + } + + if v, ok := d.GetOk("cors"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.Cors = expandCors(v.([]interface{})[0].(map[string]interface{})) + } + + log.Printf("[DEBUG] Creating Lambda Function URL: %s", input) + _, err := conn.CreateFunctionUrlConfigWithContext(ctx, input) + + if err != nil { + return diag.Errorf("error creating Lambda Function URL (%s): %s", id, err) + } + + d.SetId(id) + + if v := d.Get("authorization_type").(string); v == lambda.FunctionUrlAuthTypeNone { + input := &lambda.AddPermissionInput{ + Action: aws.String("lambda:InvokeFunctionUrl"), + FunctionName: aws.String(name), + FunctionUrlAuthType: aws.String(v), + Principal: aws.String("*"), + StatementId: aws.String("FunctionURLAllowPublicAccess"), + } + + log.Printf("[DEBUG] Adding Lambda Permission: %s", input) + _, err := conn.AddPermissionWithContext(ctx, input) + + if err != nil { + return diag.Errorf("error adding Lambda Function URL (%s) permission %s", d.Id(), err) + } + } + + return 
resourceFunctionURLRead(ctx, d, meta) +} + +func resourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).LambdaConn + + name, qualifier, err := FunctionURLParseResourceID(d.Id()) + + if err != nil { + return diag.FromErr(err) + } + + output, err := FindFunctionURLByNameAndQualifier(ctx, conn, name, qualifier) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Lambda Function URL %s not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return diag.Errorf("error reading Lambda Function URL (%s): %s", d.Id(), err) + } + + functionURL := aws.StringValue(output.FunctionUrl) + + d.Set("authorization_type", output.AuthType) + if output.Cors != nil { + if err := d.Set("cors", []interface{}{flattenCors(output.Cors)}); err != nil { + return diag.Errorf("error setting cors: %s", err) + } + } else { + d.Set("cors", nil) + } + d.Set("function_arn", output.FunctionArn) + d.Set("function_name", name) + d.Set("function_url", functionURL) + d.Set("qualifier", qualifier) + + // Function URL endpoints have the following format: + // https://<url-id>.lambda-url.<region>.on.aws + if v, err := url.Parse(functionURL); err != nil { + return diag.Errorf("error parsing URL (%s): %s", functionURL, err) + } else if v := strings.Split(v.Host, "."); len(v) > 0 { + d.Set("url_id", v[0]) + } else { + d.Set("url_id", nil) + } + + return nil +} + +func resourceFunctionURLUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).LambdaConn + + name, qualifier, err := FunctionURLParseResourceID(d.Id()) + + if err != nil { + return diag.FromErr(err) + } + + input := &lambda.UpdateFunctionUrlConfigInput{ + FunctionName: aws.String(name), + } + + if qualifier != "" { + input.Qualifier = aws.String(qualifier) + } + + if d.HasChange("authorization_type") { + input.AuthType = aws.String(d.Get("authorization_type").(string)) + } + + if d.HasChange("cors") { + if v, ok := d.GetOk("cors"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input.Cors = expandCors(v.([]interface{})[0].(map[string]interface{})) + } + } + + log.Printf("[DEBUG] Updating Lambda Function URL: %s", input) + _, err = conn.UpdateFunctionUrlConfigWithContext(ctx, input) + + if err != nil { + return diag.Errorf("error updating Lambda Function URL (%s): %s", d.Id(), err) + } + + return resourceFunctionURLRead(ctx, d, meta) +} + +func resourceFunctionURLDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).LambdaConn + + name, qualifier, err := FunctionURLParseResourceID(d.Id()) + + if err != nil { + return diag.FromErr(err) + } + + input := &lambda.DeleteFunctionUrlConfigInput{ + FunctionName: aws.String(name), + } + + if qualifier != "" { + input.Qualifier = aws.String(qualifier) + } + + log.Printf("[INFO] Deleting Lambda Function URL: %s", d.Id()) + _, err = conn.DeleteFunctionUrlConfigWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, lambda.ErrCodeResourceNotFoundException) { + return nil + } + + if err != nil { + return diag.Errorf("error deleting Lambda Function URL (%s): %s", d.Id(), err) + } + + return nil +} + +func FindFunctionURLByNameAndQualifier(ctx context.Context, conn *lambda.Lambda, name, qualifier string) (*lambda.GetFunctionUrlConfigOutput, error) { + input := &lambda.GetFunctionUrlConfigInput{ + FunctionName: aws.String(name), + } + + if qualifier != "" { 
+ input.Qualifier = aws.String(qualifier) + } + + output, err := conn.GetFunctionUrlConfigWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, lambda.ErrCodeResourceNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} + +const functionURLResourceIDSeparator = "/" + +func FunctionURLCreateResourceID(functionName, qualifier string) string { + if qualifier == "" { + return functionName + } + + parts := []string{functionName, qualifier} + id := strings.Join(parts, functionURLResourceIDSeparator) + + return id +} + +func FunctionURLParseResourceID(id string) (string, string, error) { + parts := strings.Split(id, functionURLResourceIDSeparator) + + if len(parts) == 1 && parts[0] != "" { + return parts[0], "", nil + } + if len(parts) == 2 && parts[0] != "" && parts[1] != "" { + return parts[0], parts[1], nil + } + + return "", "", fmt.Errorf("unexpected format for ID (%[1]s), expected FUNCTION-NAME%[2]qQUALIFIER or FUNCTION-NAME", id, functionURLResourceIDSeparator) +} + +func expandCors(tfMap map[string]interface{}) *lambda.Cors { + if tfMap == nil { + return nil + } + + apiObject := &lambda.Cors{} + + if v, ok := tfMap["allow_credentials"].(bool); ok { + apiObject.AllowCredentials = aws.Bool(v) + } + + if v, ok := tfMap["allow_headers"].(*schema.Set); ok && v.Len() > 0 { + apiObject.AllowHeaders = flex.ExpandStringSet(v) + } + + if v, ok := tfMap["allow_methods"].(*schema.Set); ok && v.Len() > 0 { + apiObject.AllowMethods = flex.ExpandStringSet(v) + } + + if v, ok := tfMap["allow_origins"].(*schema.Set); ok && v.Len() > 0 { + apiObject.AllowOrigins = flex.ExpandStringSet(v) + } + + if v, ok := tfMap["expose_headers"].(*schema.Set); ok && v.Len() > 0 { + apiObject.ExposeHeaders = flex.ExpandStringSet(v) + } + + if v, ok := tfMap["max_age"].(int); ok && v != 0 { + apiObject.MaxAge = aws.Int64(int64(v)) + } + + return apiObject +} + +func flattenCors(apiObject *lambda.Cors) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.AllowCredentials; v != nil { + tfMap["allow_credentials"] = aws.BoolValue(v) + } + + if v := apiObject.AllowHeaders; v != nil { + tfMap["allow_headers"] = aws.StringValueSlice(v) + } + + if v := apiObject.AllowMethods; v != nil { + tfMap["allow_methods"] = aws.StringValueSlice(v) + } + + if v := apiObject.AllowOrigins; v != nil { + tfMap["allow_origins"] = aws.StringValueSlice(v) + } + + if v := apiObject.ExposeHeaders; v != nil { + tfMap["expose_headers"] = aws.StringValueSlice(v) + } + + if v := apiObject.MaxAge; v != nil { + tfMap["max_age"] = aws.Int64Value(v) + } + + return tfMap +} diff --git a/internal/service/lambda/function_url_data_source.go b/internal/service/lambda/function_url_data_source.go new file mode 100644 index 000000000000..29e72e97fb92 --- /dev/null +++ b/internal/service/lambda/function_url_data_source.go @@ -0,0 +1,132 @@ +package lambda + +import ( + "context" + "net/url" + "strings" + + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" +) + +func DataSourceFunctionURL() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceFunctionURLRead, + + Schema: map[string]*schema.Schema{ + 
"authorization_type": { + Type: schema.TypeString, + Computed: true, + }, + "cors": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "allow_credentials": { + Type: schema.TypeBool, + Computed: true, + }, + "allow_headers": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "allow_methods": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "allow_origins": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "expose_headers": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "max_age": { + Type: schema.TypeInt, + Computed: true, + }, + }, + }, + }, + "creation_time": { + Type: schema.TypeString, + Computed: true, + }, + "function_arn": { + Type: schema.TypeString, + Computed: true, + }, + "function_name": { + Type: schema.TypeString, + Required: true, + }, + "function_url": { + Type: schema.TypeString, + Computed: true, + }, + "last_modified_time": { + Type: schema.TypeString, + Computed: true, + }, + "qualifier": { + Type: schema.TypeString, + Optional: true, + }, + "url_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + } +} + +func dataSourceFunctionURLRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).LambdaConn + + name := d.Get("function_name").(string) + qualifier := d.Get("qualifier").(string) + id := FunctionURLCreateResourceID(name, qualifier) + output, err := FindFunctionURLByNameAndQualifier(ctx, conn, name, qualifier) + + if err != nil { + return diag.Errorf("error reading Lambda Function URL (%s): %s", id, err) + } + + functionURL := aws.StringValue(output.FunctionUrl) + + d.SetId(id) + d.Set("authorization_type", output.AuthType) + if output.Cors != nil { + if err := d.Set("cors", []interface{}{flattenCors(output.Cors)}); err != nil { + return diag.Errorf("error setting cors: %s", err) + } + } else { + d.Set("cors", nil) + } + d.Set("creation_time", output.CreationTime) + d.Set("function_arn", output.FunctionArn) + d.Set("function_name", name) + d.Set("function_url", functionURL) + d.Set("last_modified_time", output.LastModifiedTime) + d.Set("qualifier", qualifier) + + // Function URL endpoints have the following format: + // https://.lambda-url..on.aws + if v, err := url.Parse(functionURL); err != nil { + return diag.Errorf("error parsing URL (%s): %s", functionURL, err) + } else if v := strings.Split(v.Host, "."); len(v) > 0 { + d.Set("url_id", v[0]) + } else { + d.Set("url_id", nil) + } + + return nil +} diff --git a/internal/service/lambda/function_url_data_source_test.go b/internal/service/lambda/function_url_data_source_test.go new file mode 100644 index 000000000000..5993d0cd2fcc --- /dev/null +++ b/internal/service/lambda/function_url_data_source_test.go @@ -0,0 +1,135 @@ +package lambda_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/lambda" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccLambdaFunctionURLDataSource_basic(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_lambda_function_url.test" + resourceName := "aws_lambda_function_url.test" + + resource.ParallelTest(t, 
resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccFunctionURLPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, lambda.EndpointsID), + Providers: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccFunctionURLBasicDataSourceConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(dataSourceName, "authorization_type", resourceName, "authorization_type"), + resource.TestCheckResourceAttrPair(dataSourceName, "cors.#", resourceName, "cors.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "cors.0.allow_credentials", resourceName, "cors.0.allow_credentials"), + resource.TestCheckResourceAttrPair(dataSourceName, "cors.0.allow_headers.#", resourceName, "cors.0.allow_headers.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "cors.0.allow_methods.#", resourceName, "cors.0.allow_methods.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "cors.0.allow_origins.#", resourceName, "cors.0.allow_origins.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "cors.0.expose_headers.#", resourceName, "cors.0.expose_headers.#"), + resource.TestCheckResourceAttrPair(dataSourceName, "cors.0.max_age", resourceName, "cors.0.max_age"), + resource.TestCheckResourceAttrSet(dataSourceName, "creation_time"), + resource.TestCheckResourceAttrPair(dataSourceName, "function_arn", resourceName, "function_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "function_name", resourceName, "function_name"), + resource.TestCheckResourceAttrPair(dataSourceName, "function_url", resourceName, "function_url"), + resource.TestCheckResourceAttrSet(dataSourceName, "last_modified_time"), + resource.TestCheckResourceAttrPair(dataSourceName, "qualifier", resourceName, "qualifier"), + resource.TestCheckResourceAttrPair(dataSourceName, "url_id", resourceName, "url_id"), + ), + }, + }, + }) +} + +func testAccFunctionURLDataSourceBaseConfig(rName string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_iam_role" "lambda" { + name = %[1]q + + assume_role_policy = <<POLICY [...] - // no policy found => all statements deleted - return nil - } + if tfawserr.ErrCodeEquals(err, lambda.ErrCodeResourceNotFoundException) { + // no policy found => all statements deleted + return nil } if err != nil { return fmt.Errorf("Unexpected error when checking existence of Lambda permission: %s\n%s", @@ -740,26 +753,18 @@ func testAccCPermissionImportStateIdFunc(resourceName string) resource.ImportSta } } -func testAccPermissionConfig(funcName, roleName string) string { +func testAccPermissionBaseConfig(rName string) string { return fmt.Sprintf(` -resource "aws_lambda_permission" "allow_cloudwatch" { - statement_id = "AllowExecutionFromCloudWatch" - action = "lambda:InvokeFunction" - function_name = aws_lambda_function.test.arn - principal = "events.amazonaws.com" - event_source_token = "test-event-source-token" -} - resource "aws_lambda_function" "test" { filename = "test-fixtures/lambdatest.zip" - function_name = "%s" - role = aws_iam_role.iam_for_lambda.arn + function_name = %[1]q + role = aws_iam_role.test.arn handler = "exports.handler" runtime = "nodejs12.x" } -resource "aws_iam_role" "iam_for_lambda" { - name = "%s" +resource "aws_iam_role" "test" { + name = %[1]q assume_role_policy = <<POLICY [...] - if n > maxNumberOfNodesPerShard { - maxNumberOfNodesPerShard = n - } - } - if maxNumberOfNodesPerShard == 0 { - return diag.Errorf("error reading num_replicas_per_shard for MemoryDB Cluster (%s): no available shards found", d.Id()) + numReplicasPerShard, err := 
deriveClusterNumReplicasPerShard(cluster) + if err != nil { + return diag.Errorf("error reading num_replicas_per_shard for MemoryDB Cluster (%s): %s", d.Id(), err) } - d.Set("num_replicas_per_shard", maxNumberOfNodesPerShard-1) + d.Set("num_replicas_per_shard", numReplicasPerShard) d.Set("num_shards", cluster.NumberOfShards) d.Set("parameter_group_name", cluster.ParameterGroupName) @@ -651,3 +638,29 @@ func flattenShards(shards []*memorydb.Shard) *schema.Set { return shardSet } + +// deriveClusterNumReplicasPerShard determines the replicas per shard +// configuration of a cluster. As this cannot directly be read back, we +// assume that it's the same as that of the largest shard. +// +// For the sake of caution, this search is limited to stable shards. +func deriveClusterNumReplicasPerShard(cluster *memorydb.Cluster) (int, error) { + var maxNumberOfNodesPerShard int64 + + for _, shard := range cluster.Shards { + if aws.StringValue(shard.Status) != ClusterShardStatusAvailable { + continue + } + + n := aws.Int64Value(shard.NumberOfNodes) + if n > maxNumberOfNodesPerShard { + maxNumberOfNodesPerShard = n + } + } + + if maxNumberOfNodesPerShard == 0 { + return 0, fmt.Errorf("no available shards found") + } + + return int(maxNumberOfNodesPerShard - 1), nil +} diff --git a/internal/service/memorydb/cluster_data_source.go b/internal/service/memorydb/cluster_data_source.go new file mode 100644 index 000000000000..39d4c872fca2 --- /dev/null +++ b/internal/service/memorydb/cluster_data_source.go @@ -0,0 +1,229 @@ +package memorydb + +import ( + "context" + + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func DataSourceCluster() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceClusterRead, + + Schema: map[string]*schema.Schema{ + "acl_name": { + Type: schema.TypeString, + Computed: true, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "auto_minor_version_upgrade": { + Type: schema.TypeBool, + Computed: true, + }, + "cluster_endpoint": endpointSchema(), + "description": { + Type: schema.TypeString, + Computed: true, + }, + "engine_patch_version": { + Type: schema.TypeString, + Computed: true, + }, + "engine_version": { + Type: schema.TypeString, + Computed: true, + }, + "final_snapshot_name": { + Type: schema.TypeString, + Computed: true, + }, + "kms_key_arn": { + Type: schema.TypeString, + Computed: true, + }, + "maintenance_window": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "node_type": { + Type: schema.TypeString, + Computed: true, + }, + "num_replicas_per_shard": { + Type: schema.TypeInt, + Computed: true, + }, + "num_shards": { + Type: schema.TypeInt, + Computed: true, + }, + "parameter_group_name": { + Type: schema.TypeString, + Computed: true, + }, + "port": { + Type: schema.TypeInt, + Computed: true, + }, + "security_group_ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + }, + }, + "shards": { + Type: schema.TypeSet, + Computed: true, + Set: shardHash, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + 
Computed: true, + }, + "nodes": { + Type: schema.TypeSet, + Computed: true, + Set: nodeHash, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zone": { + Type: schema.TypeString, + Computed: true, + }, + "create_time": { + Type: schema.TypeString, + Computed: true, + }, + "endpoint": endpointSchema(), + "name": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "num_nodes": { + Type: schema.TypeInt, + Computed: true, + }, + "slots": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "snapshot_retention_limit": { + Type: schema.TypeInt, + Computed: true, + }, + "snapshot_window": { + Type: schema.TypeString, + Computed: true, + }, + "sns_topic_arn": { + Type: schema.TypeString, + Computed: true, + }, + "subnet_group_name": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tftags.TagsSchemaComputed(), + "tls_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + }, + } +} + +func dataSourceClusterRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).MemoryDBConn + ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig + + name := d.Get("name").(string) + + cluster, err := FindClusterByName(ctx, conn, name) + + if err != nil { + return diag.FromErr(tfresource.SingularDataSourceFindError("MemoryDB Cluster", err)) + } + + d.SetId(aws.StringValue(cluster.Name)) + + d.Set("acl_name", cluster.ACLName) + d.Set("arn", cluster.ARN) + d.Set("auto_minor_version_upgrade", cluster.AutoMinorVersionUpgrade) + + if v := cluster.ClusterEndpoint; v != nil { + d.Set("cluster_endpoint", flattenEndpoint(v)) + d.Set("port", v.Port) + } + + d.Set("description", cluster.Description) + d.Set("engine_patch_version", cluster.EnginePatchVersion) + d.Set("engine_version", cluster.EngineVersion) + d.Set("kms_key_arn", cluster.KmsKeyId) // KmsKeyId is actually an ARN here. 
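
The `num_replicas_per_shard` value that this read sets a few lines below cannot be fetched from the MemoryDB API directly; `deriveClusterNumReplicasPerShard` (added above in `cluster.go`) infers it from the largest stable shard, on the assumption that each shard holds one primary node plus the configured replicas. A minimal standalone sketch of that inference, using invented shard fixtures and a literal `"available"` status in place of the provider's `ClusterShardStatusAvailable` constant:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/memorydb"
)

func main() {
	// Hypothetical cluster: the largest available shard has 3 nodes
	// (1 primary + 2 replicas); the modifying shard is skipped as unstable.
	cluster := &memorydb.Cluster{
		Shards: []*memorydb.Shard{
			{Status: aws.String("available"), NumberOfNodes: aws.Int64(3)},
			{Status: aws.String("modifying"), NumberOfNodes: aws.Int64(5)},
		},
	}

	var maxNodes int64
	for _, shard := range cluster.Shards {
		if aws.StringValue(shard.Status) != "available" {
			continue // only stable shards are trusted
		}
		if n := aws.Int64Value(shard.NumberOfNodes); n > maxNodes {
			maxNodes = n
		}
	}

	// One node per shard is the primary; the rest are replicas.
	fmt.Println(maxNodes - 1) // prints 2
}
```

If no shard is in a stable state, the helper returns an error rather than guessing, which both the resource and the data source surface as a read failure.
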
+ d.Set("maintenance_window", cluster.MaintenanceWindow) + d.Set("name", cluster.Name) + d.Set("node_type", cluster.NodeType) + + numReplicasPerShard, err := deriveClusterNumReplicasPerShard(cluster) + if err != nil { + return diag.Errorf("error reading num_replicas_per_shard for MemoryDB Cluster (%s): %s", d.Id(), err) + } + d.Set("num_replicas_per_shard", numReplicasPerShard) + + d.Set("num_shards", cluster.NumberOfShards) + d.Set("parameter_group_name", cluster.ParameterGroupName) + + var securityGroupIds []*string + for _, v := range cluster.SecurityGroups { + securityGroupIds = append(securityGroupIds, v.SecurityGroupId) + } + d.Set("security_group_ids", flex.FlattenStringSet(securityGroupIds)) + + if err := d.Set("shards", flattenShards(cluster.Shards)); err != nil { + return diag.Errorf("failed to set shards for MemoryDB Cluster (%s): %s", d.Id(), err) + } + + d.Set("snapshot_retention_limit", cluster.SnapshotRetentionLimit) + d.Set("snapshot_window", cluster.SnapshotWindow) + + if aws.StringValue(cluster.SnsTopicStatus) == ClusterSNSTopicStatusActive { + d.Set("sns_topic_arn", cluster.SnsTopicArn) + } else { + d.Set("sns_topic_arn", "") + } + + d.Set("subnet_group_name", cluster.SubnetGroupName) + d.Set("tls_enabled", cluster.TLSEnabled) + + tags, err := ListTags(conn, d.Get("arn").(string)) + + if err != nil { + return diag.Errorf("error listing tags for MemoryDB Cluster (%s): %s", d.Id(), err) + } + + if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { + return diag.Errorf("error setting tags: %s", err) + } + + return nil +} diff --git a/internal/service/memorydb/cluster_data_source_test.go b/internal/service/memorydb/cluster_data_source_test.go new file mode 100644 index 000000000000..f04864eb858c --- /dev/null +++ b/internal/service/memorydb/cluster_data_source_test.go @@ -0,0 +1,102 @@ +package memorydb_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/memorydb" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccMemoryDBClusterDataSource_basic(t *testing.T) { + rName := "tf-test-" + sdkacctest.RandString(8) + resourceName := "aws_memorydb_cluster.test" + dataSourceName := "data.aws_memorydb_cluster.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, memorydb.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccClusterDataSourceConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(dataSourceName, "acl_name", resourceName, "acl_name"), + resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "auto_minor_version_upgrade", resourceName, "auto_minor_version_upgrade"), + resource.TestCheckResourceAttrPair(dataSourceName, "cluster_endpoint.0.address", resourceName, "cluster_endpoint.0.address"), + resource.TestCheckResourceAttrPair(dataSourceName, "cluster_endpoint.0.port", resourceName, "cluster_endpoint.0.port"), + resource.TestCheckResourceAttrPair(dataSourceName, "description", resourceName, "description"), + resource.TestCheckResourceAttrPair(dataSourceName, "engine_patch_version", resourceName, "engine_patch_version"), + 
resource.TestCheckResourceAttrPair(dataSourceName, "engine_version", resourceName, "engine_version"), + resource.TestCheckResourceAttrPair(dataSourceName, "kms_key_arn", resourceName, "kms_key_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "maintenance_window", resourceName, "maintenance_window"), + resource.TestCheckResourceAttrPair(dataSourceName, "name", resourceName, "name"), + resource.TestCheckResourceAttrPair(dataSourceName, "node_type", resourceName, "node_type"), + resource.TestCheckResourceAttrPair(dataSourceName, "num_replicas_per_shard", resourceName, "num_replicas_per_shard"), + resource.TestCheckResourceAttrPair(dataSourceName, "num_shards", resourceName, "num_shards"), + resource.TestCheckResourceAttrPair(dataSourceName, "parameter_group_name", resourceName, "parameter_group_name"), + resource.TestCheckResourceAttrPair(dataSourceName, "port", resourceName, "port"), + resource.TestCheckResourceAttr(dataSourceName, "security_group_ids.#", "1"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "security_group_ids.*", resourceName, "security_group_ids.0"), + resource.TestCheckResourceAttr(dataSourceName, "shards.#", "2"), + resource.TestCheckResourceAttrPair(dataSourceName, "shards.0.name", resourceName, "shards.0.name"), + resource.TestCheckResourceAttrPair(dataSourceName, "shards.0.num_nodes", resourceName, "shards.0.num_nodes"), + resource.TestCheckResourceAttrPair(dataSourceName, "shards.0.slots", resourceName, "shards.0.slots"), + resource.TestCheckResourceAttr(dataSourceName, "shards.0.nodes.#", "2"), + resource.TestCheckResourceAttrPair(dataSourceName, "shards.0.nodes.0.availability_zone", resourceName, "shards.0.nodes.0.availability_zone"), + resource.TestCheckResourceAttrPair(dataSourceName, "shards.0.nodes.0.create_time", resourceName, "shards.0.nodes.0.create_time"), + resource.TestCheckResourceAttrPair(dataSourceName, "shards.0.nodes.0.name", resourceName, "shards.0.nodes.0.name"), + resource.TestCheckResourceAttrPair(dataSourceName, "shards.0.nodes.0.endpoint.0.address", resourceName, "shards.0.nodes.0.endpoint.0.address"), + resource.TestCheckResourceAttrPair(dataSourceName, "shards.0.nodes.0.endpoint.0.port", resourceName, "shards.0.nodes.0.endpoint.0.port"), + resource.TestCheckResourceAttrPair(dataSourceName, "snapshot_retention_limit", resourceName, "snapshot_retention_limit"), + resource.TestCheckResourceAttrPair(dataSourceName, "snapshot_window", resourceName, "snapshot_window"), + resource.TestCheckResourceAttrPair(dataSourceName, "sns_topic_arn", resourceName, "sns_topic_arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "subnet_group_name", resourceName, "subnet_group_name"), + resource.TestCheckResourceAttr(dataSourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(dataSourceName, "tags.Test", "test"), + resource.TestCheckResourceAttrPair(dataSourceName, "tls_enabled", resourceName, "tls_enabled"), + ), + }, + }, + }) +} + +func testAccClusterDataSourceConfig(rName string) string { + return acctest.ConfigCompose( + testAccClusterConfigBaseNetwork(), + testAccClusterConfigBaseUserAndACL(rName), + fmt.Sprintf(` +resource "aws_security_group" "test" { + name = %[1]q + description = %[1]q + vpc_id = aws_vpc.test.id +} + +resource "aws_kms_key" "test" {} + +resource "aws_memorydb_cluster" "test" { + acl_name = aws_memorydb_acl.test.id + auto_minor_version_upgrade = false + kms_key_arn = aws_kms_key.test.arn + name = %[1]q + node_type = "db.t4g.small" + num_shards = 2 + security_group_ids = [aws_security_group.test.id] + 
snapshot_retention_limit = 7 + subnet_group_name = aws_memorydb_subnet_group.test.id + tls_enabled = true + + tags = { + Test = "test" + } +} + +data "aws_memorydb_cluster" "test" { + name = aws_memorydb_cluster.test.name +} +`, rName), + ) +} diff --git a/internal/service/memorydb/snapshot_data_source.go b/internal/service/memorydb/snapshot_data_source.go new file mode 100644 index 000000000000..e3ff829a9972 --- /dev/null +++ b/internal/service/memorydb/snapshot_data_source.go @@ -0,0 +1,138 @@ +package memorydb + +import ( + "context" + + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func DataSourceSnapshot() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceSnapshotRead, + + Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "cluster_configuration": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "description": { + Type: schema.TypeString, + Computed: true, + }, + "engine_version": { + Type: schema.TypeString, + Computed: true, + }, + "maintenance_window": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Computed: true, + }, + "node_type": { + Type: schema.TypeString, + Computed: true, + }, + "num_shards": { + Type: schema.TypeInt, + Computed: true, + }, + "parameter_group_name": { + Type: schema.TypeString, + Computed: true, + }, + "port": { + Type: schema.TypeInt, + Computed: true, + }, + "snapshot_retention_limit": { + Type: schema.TypeInt, + Computed: true, + }, + "snapshot_window": { + Type: schema.TypeString, + Computed: true, + }, + "subnet_group_name": { + Type: schema.TypeString, + Computed: true, + }, + "topic_arn": { + Type: schema.TypeString, + Computed: true, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "cluster_name": { + Type: schema.TypeString, + Computed: true, + }, + "kms_key_arn": { + Type: schema.TypeString, + Computed: true, + }, + "name": { + Type: schema.TypeString, + Required: true, + }, + "source": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tftags.TagsSchemaComputed(), + }, + } +} + +func dataSourceSnapshotRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).MemoryDBConn + ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig + + name := d.Get("name").(string) + + snapshot, err := FindSnapshotByName(ctx, conn, name) + + if err != nil { + return diag.FromErr(tfresource.SingularDataSourceFindError("MemoryDB Snapshot", err)) + } + + d.SetId(aws.StringValue(snapshot.Name)) + + d.Set("arn", snapshot.ARN) + if err := d.Set("cluster_configuration", flattenClusterConfiguration(snapshot.ClusterConfiguration)); err != nil { + return diag.Errorf("failed to set cluster_configuration for MemoryDB Snapshot (%s): %s", d.Id(), err) + } + d.Set("cluster_name", snapshot.ClusterConfiguration.Name) + d.Set("kms_key_arn", snapshot.KmsKeyId) + d.Set("name", snapshot.Name) + d.Set("source", snapshot.Source) + + tags, err := ListTags(conn, d.Get("arn").(string)) + + if err != nil { + return diag.Errorf("error listing tags for MemoryDB Snapshot (%s): %s", d.Id(), err) + } + 
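
Right after listing tags, each of these MemoryDB data sources filters the result before writing it to state: `IgnoreAWS()` drops `aws:`-prefixed system tags and `IgnoreConfig()` honors the provider-level `ignore_tags` configuration. A rough standalone approximation of that filtering step, in spirit only (the real helpers live in the provider's `internal/tags` package; the tag keys below are invented):

```go
package main

import (
	"fmt"
	"strings"
)

// filterTags mimics, in spirit, tags.IgnoreAWS().IgnoreConfig(cfg).Map():
// it drops AWS system tags ("aws:" prefix) and any keys the provider was
// configured to ignore, producing the map that is written to state.
func filterTags(raw map[string]string, ignoredKeys []string) map[string]string {
	out := make(map[string]string, len(raw))
	for k, v := range raw {
		if strings.HasPrefix(k, "aws:") {
			continue // system tag, never stored in state
		}
		skip := false
		for _, ig := range ignoredKeys {
			if k == ig {
				skip = true
				break
			}
		}
		if !skip {
			out[k] = v
		}
	}
	return out
}

func main() {
	raw := map[string]string{
		"Test":                           "test", // kept
		"aws:cloudformation:stack-name":  "demo", // dropped: system tag
		"Ephemeral":                      "x",    // dropped: ignored via provider config
	}
	fmt.Println(filterTags(raw, []string{"Ephemeral"}))
}
```
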
+ if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { + return diag.Errorf("error setting tags: %s", err) + } + + return nil +} diff --git a/internal/service/memorydb/snapshot_data_source_test.go b/internal/service/memorydb/snapshot_data_source_test.go new file mode 100644 index 000000000000..f5f9d532c932 --- /dev/null +++ b/internal/service/memorydb/snapshot_data_source_test.go @@ -0,0 +1,73 @@ +package memorydb_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/memorydb" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccMemoryDBSnapshotDataSource_basic(t *testing.T) { + rName := "tf-test-" + sdkacctest.RandString(8) + resourceName := "aws_memorydb_snapshot.test" + dataSourceName := "data.aws_memorydb_snapshot.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, memorydb.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccSnapshotDataSourceConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.description", resourceName, "cluster_configuration.0.description"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.engine_version", resourceName, "cluster_configuration.0.engine_version"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.maintenance_window", resourceName, "cluster_configuration.0.maintenance_window"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.name", resourceName, "cluster_configuration.0.name"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.node_type", resourceName, "cluster_configuration.0.node_type"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.num_shards", resourceName, "cluster_configuration.0.num_shards"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.parameter_group_name", resourceName, "cluster_configuration.0.parameter_group_name"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.port", resourceName, "cluster_configuration.0.port"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.snapshot_retention_limit", resourceName, "cluster_configuration.0.snapshot_retention_limit"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.snapshot_window", resourceName, "cluster_configuration.0.snapshot_window"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.subnet_group_name", resourceName, "cluster_configuration.0.subnet_group_name"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_configuration.0.vpc_id", resourceName, "cluster_configuration.0.vpc_id"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "cluster_name", resourceName, "cluster_name"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "kms_key_arn", resourceName, "kms_key_arn"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "name", resourceName, "name"), + 
resource.TestCheckTypeSetElemAttrPair(dataSourceName, "id", resourceName, "id"), + resource.TestCheckTypeSetElemAttrPair(dataSourceName, "source", resourceName, "source"), + resource.TestCheckResourceAttr(dataSourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(dataSourceName, "tags.Test", "test"), + ), + }, + }, + }) +} + +func testAccSnapshotDataSourceConfig(rName string) string { + return acctest.ConfigCompose( + testAccSnapshotConfigBase(rName), + fmt.Sprintf(` +resource "aws_kms_key" "test" {} + +resource "aws_memorydb_snapshot" "test" { + cluster_name = aws_memorydb_cluster.test.name + kms_key_arn = aws_kms_key.test.arn + name = %[1]q + + tags = { + Test = "test" + } +} + +data "aws_memorydb_snapshot" "test" { + name = aws_memorydb_snapshot.test.name +} +`, rName), + ) +} diff --git a/internal/service/memorydb/user_data_source.go b/internal/service/memorydb/user_data_source.go new file mode 100644 index 000000000000..1ac77fc81843 --- /dev/null +++ b/internal/service/memorydb/user_data_source.go @@ -0,0 +1,98 @@ +package memorydb + +import ( + "context" + + "github.com/aws/aws-sdk-go/aws" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func DataSourceUser() *schema.Resource { + return &schema.Resource{ + ReadWithoutTimeout: dataSourceUserRead, + + Schema: map[string]*schema.Schema{ + "access_string": { + Type: schema.TypeString, + Computed: true, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "authentication_mode": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "password_count": { + Type: schema.TypeInt, + Computed: true, + }, + "type": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "minimum_engine_version": { + Type: schema.TypeString, + Computed: true, + }, + "tags": tftags.TagsSchemaComputed(), + "user_name": { + Type: schema.TypeString, + Required: true, + }, + }, + } +} + +func dataSourceUserRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).MemoryDBConn + ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig + + userName := d.Get("user_name").(string) + + user, err := FindUserByName(ctx, conn, userName) + + if err != nil { + return diag.FromErr(tfresource.SingularDataSourceFindError("MemoryDB User", err)) + } + + d.SetId(aws.StringValue(user.Name)) + + d.Set("access_string", user.AccessString) + d.Set("arn", user.ARN) + + if v := user.Authentication; v != nil { + authenticationMode := map[string]interface{}{ + "password_count": aws.Int64Value(v.PasswordCount), + "type": aws.StringValue(v.Type), + } + + if err := d.Set("authentication_mode", []interface{}{authenticationMode}); err != nil { + return diag.Errorf("failed to set authentication_mode of MemoryDB User (%s): %s", d.Id(), err) + } + } + + d.Set("minimum_engine_version", user.MinimumEngineVersion) + d.Set("user_name", user.Name) + + tags, err := ListTags(conn, d.Get("arn").(string)) + + if err != nil { + return diag.Errorf("error listing tags for MemoryDB User (%s): %s", d.Id(), err) + } + + if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { + return diag.Errorf("error setting tags: %s", err) + } + + return nil +} diff --git 
a/internal/service/memorydb/user_data_source_test.go b/internal/service/memorydb/user_data_source_test.go new file mode 100644 index 000000000000..cb64894b481e --- /dev/null +++ b/internal/service/memorydb/user_data_source_test.go @@ -0,0 +1,60 @@ +package memorydb_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/memorydb" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccMemoryDBUserDataSource_basic(t *testing.T) { + rName := "tf-test-" + sdkacctest.RandString(8) + resourceName := "aws_memorydb_user.test" + dataSourceName := "data.aws_memorydb_user.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, memorydb.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccUserDataSourceConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(dataSourceName, "access_string", resourceName, "access_string"), + resource.TestCheckResourceAttrPair(dataSourceName, "arn", resourceName, "arn"), + resource.TestCheckResourceAttrPair(dataSourceName, "authentication_mode.0.password_count", resourceName, "authentication_mode.0.password_count"), + resource.TestCheckResourceAttrPair(dataSourceName, "authentication_mode.0.type", resourceName, "authentication_mode.0.type"), + resource.TestCheckResourceAttrPair(dataSourceName, "minimum_engine_version", resourceName, "minimum_engine_version"), + resource.TestCheckResourceAttr(dataSourceName, "tags.%", "1"), + resource.TestCheckResourceAttrPair(dataSourceName, "tags.Test", resourceName, "tags.Test"), + resource.TestCheckResourceAttrPair(dataSourceName, "user_name", resourceName, "user_name"), + ), + }, + }, + }) +} + +func testAccUserDataSourceConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_memorydb_user" "test" { + access_string = "on ~* &* +@all" + user_name = %[1]q + + authentication_mode { + type = "password" + passwords = ["aaaaaaaaaaaaaaaa"] + } + + tags = { + Test = "test" + } +} + +data "aws_memorydb_user" "test" { + user_name = aws_memorydb_user.test.user_name +} +`, rName) +} diff --git a/internal/service/mq/broker.go b/internal/service/mq/broker.go index 98caf5946437..a44318491219 100644 --- a/internal/service/mq/broker.go +++ b/internal/service/mq/broker.go @@ -7,6 +7,7 @@ import ( "fmt" "log" "reflect" + "regexp" "strconv" "strings" @@ -58,9 +59,10 @@ func ResourceBroker() *schema.Resource { Default: false, }, "broker_name": { - Type: schema.TypeString, - Required: true, - ForceNew: true, + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: ValidateBrokerName, }, "configuration": { Type: schema.TypeList, @@ -264,6 +266,7 @@ func ResourceBroker() *schema.Resource { Type: schema.TypeSet, Elem: &schema.Schema{Type: schema.TypeString}, Optional: true, + MaxItems: 5, }, "storage_type": { Type: schema.TypeString, @@ -1093,3 +1096,8 @@ func expandMQLDAPServerMetadata(tfList []interface{}) *mq.LdapServerMetadataInpu return apiObject } + +var ValidateBrokerName = validation.All( + validation.StringLenBetween(1, 50), + validation.StringMatch(regexp.MustCompile(`^[0-9A-Za-z_-]+$`), ""), +) diff --git a/internal/service/mq/broker_test.go b/internal/service/mq/broker_test.go index ca6425179021..931ff89f17e1 100644 --- 
a/internal/service/mq/broker_test.go +++ b/internal/service/mq/broker_test.go @@ -18,6 +18,37 @@ import ( tfmq "github.com/hashicorp/terraform-provider-aws/internal/service/mq" ) +func TestValidateBrokerName(t *testing.T) { + validNames := []string{ + "ValidName", + "V_-dN01e", + "0", + "-", + "_", + strings.Repeat("x", 50), + } + for _, v := range validNames { + _, errors := tfmq.ValidateBrokerName(v, "name") + if len(errors) != 0 { + t.Fatalf("%q should be a valid broker name: %q", v, errors) + } + } + + invalidNames := []string{ + "Inval:d.~Name", + "Invalid Name", + "*", + "", + strings.Repeat("x", 51), + } + for _, v := range invalidNames { + _, errors := tfmq.ValidateBrokerName(v, "name") + if len(errors) == 0 { + t.Fatalf("%q should be an invalid broker name", v) + } + } +} + func TestBrokerPasswordValidation(t *testing.T) { cases := []struct { Value string diff --git a/internal/service/mwaa/environment.go b/internal/service/mwaa/environment.go index f6cce7dc1fd9..5ffe6709547b 100644 --- a/internal/service/mwaa/environment.go +++ b/internal/service/mwaa/environment.go @@ -199,6 +199,11 @@ func ResourceEnvironment() *schema.Resource { Type: schema.TypeString, Optional: true, }, + "schedulers": { + Type: schema.TypeInt, + Optional: true, + Computed: true, + }, "service_role_arn": { Type: schema.TypeString, Computed: true, @@ -292,6 +297,10 @@ func resourceEnvironmentCreate(d *schema.ResourceData, meta interface{}) error { input.RequirementsS3Path = aws.String(v.(string)) } + if v, ok := d.GetOk("schedulers"); ok { + input.Schedulers = aws.Int64(int64(v.(int))) + } + if v, ok := d.GetOk("webserver_access_mode"); ok { input.WebserverAccessMode = aws.String(v.(string)) } @@ -366,6 +375,7 @@ func resourceEnvironmentRead(d *schema.ResourceData, meta interface{}) error { d.Set("plugins_s3_path", environment.PluginsS3Path) d.Set("requirements_s3_object_version", environment.RequirementsS3ObjectVersion) d.Set("requirements_s3_path", environment.RequirementsS3Path) + d.Set("schedulers", environment.Schedulers) d.Set("service_role_arn", environment.ServiceRoleArn) d.Set("source_bucket_arn", environment.SourceBucketArn) d.Set("status", environment.Status) @@ -452,6 +462,10 @@ func resourceEnvironmentUpdate(d *schema.ResourceData, meta interface{}) error { input.RequirementsS3Path = aws.String(d.Get("requirements_s3_path").(string)) } + if d.HasChange("schedulers") { + input.Schedulers = aws.Int64(int64(d.Get("schedulers").(int))) + } + if d.HasChange("source_bucket_arn") { input.SourceBucketArn = aws.String(d.Get("source_bucket_arn").(string)) } diff --git a/internal/service/mwaa/environment_test.go b/internal/service/mwaa/environment_test.go index 196e18e93a87..6b44347b4e99 100644 --- a/internal/service/mwaa/environment_test.go +++ b/internal/service/mwaa/environment_test.go @@ -60,6 +60,7 @@ func TestAccMWAAEnvironment_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "network_configuration.#", "1"), resource.TestCheckResourceAttr(resourceName, "network_configuration.0.security_group_ids.#", "1"), resource.TestCheckResourceAttr(resourceName, "network_configuration.0.subnet_ids.#", "2"), + resource.TestCheckResourceAttr(resourceName, "schedulers", "2"), resource.TestCheckResourceAttrSet(resourceName, "service_role_arn"), acctest.CheckResourceAttrGlobalARNNoAccount(resourceName, "source_bucket_arn", "s3", rName), resource.TestCheckResourceAttrSet(resourceName, "status"), @@ -287,6 +288,7 @@ func TestAccMWAAEnvironment_full(t *testing.T) { 
resource.TestCheckResourceAttr(resourceName, "network_configuration.0.subnet_ids.#", "2"), resource.TestCheckResourceAttr(resourceName, "plugins_s3_path", "plugins.zip"), resource.TestCheckResourceAttr(resourceName, "requirements_s3_path", "requirements.txt"), + resource.TestCheckResourceAttr(resourceName, "schedulers", "1"), resource.TestCheckResourceAttrSet(resourceName, "service_role_arn"), acctest.CheckResourceAttrGlobalARNNoAccount(resourceName, "source_bucket_arn", "s3", rName), resource.TestCheckResourceAttrSet(resourceName, "status"), @@ -729,6 +731,7 @@ resource "aws_mwaa_environment" "test" { plugins_s3_path = aws_s3_object.plugins.key requirements_s3_path = aws_s3_object.requirements.key + schedulers = 1 source_bucket_arn = aws_s3_bucket.test.arn webserver_access_mode = "PUBLIC_ONLY" weekly_maintenance_window_start = "SAT:03:00" diff --git a/internal/service/networkfirewall/firewall_policy_test.go b/internal/service/networkfirewall/firewall_policy_test.go index 837abaf84333..44f996510b58 100644 --- a/internal/service/networkfirewall/firewall_policy_test.go +++ b/internal/service/networkfirewall/firewall_policy_test.go @@ -188,6 +188,32 @@ func TestAccNetworkFirewallFirewallPolicy_statefulRuleGroupReference(t *testing. }) } +func TestAccNetworkFirewallFirewallPolicy_statefulRuleGroupReferenceManaged(t *testing.T) { + var firewallPolicy networkfirewall.DescribeFirewallPolicyOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_networkfirewall_firewall_policy.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, networkfirewall.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckFirewallPolicyDestroy, + Steps: []resource.TestStep{ + { + Config: testAccFirewallPolicy_statefulRuleGroupReferenceManaged(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckFirewallPolicyExists(resourceName, &firewallPolicy), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + func TestAccNetworkFirewallFirewallPolicy_updateStatefulRuleGroupReference(t *testing.T) { var firewallPolicy networkfirewall.DescribeFirewallPolicyOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) @@ -999,6 +1025,28 @@ resource "aws_networkfirewall_firewall_policy" "test" { `, rName)) } +func testAccFirewallPolicy_statefulRuleGroupReferenceManaged(rName string) string { + return acctest.ConfigCompose( + testAccFirewallPolicyStatefulRuleGroupDependencies(rName, 1), + fmt.Sprintf(` +data "aws_region" "current" {} + +data "aws_partition" "current" {} + +resource "aws_networkfirewall_firewall_policy" "test" { + name = %[1]q + + firewall_policy { + stateless_fragment_default_actions = ["aws:drop"] + stateless_default_actions = ["aws:pass"] + stateful_rule_group_reference { + resource_arn = "arn:${data.aws_partition.current.partition}:network-firewall:${data.aws_region.current.name}:aws-managed:stateful-rulegroup/MalwareDomainsActionOrder" + } + } +} +`, rName)) +} + func testAccFirewallPolicy_multipleStatefulRuleGroupReferences(rName string) string { return acctest.ConfigCompose( testAccFirewallPolicyStatefulRuleGroupDependencies(rName, 2), diff --git a/internal/service/opensearch/README.md b/internal/service/opensearch/README.md new file mode 100644 index 000000000000..d74a4c476e4c --- /dev/null +++ b/internal/service/opensearch/README.md @@ -0,0 +1,13 @@ +# Terraform AWS Provider OpenSearch 
Package + +This area is primarily for AWS provider contributors and maintainers. For information on _using_ Terraform and the AWS provider, see the links below. + +OpenSearch is a continuation of the Elasticsearch service. + + +## Handy Links + +* [Find out about contributing](../../../docs/contributing) to the AWS provider! +* AWS Provider Docs: [Home](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) +* AWS Provider Docs: [One of the OpenSearch resources](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/opensearch_domain) +* AWS Docs: [AWS SDK for Go OpenSearch](https://docs.aws.amazon.com/sdk-for-go/api/service/opensearchservice/) diff --git a/internal/service/opensearch/domain.go b/internal/service/opensearch/domain.go new file mode 100644 index 000000000000..674259c33cfa --- /dev/null +++ b/internal/service/opensearch/domain.go @@ -0,0 +1,1175 @@ +package opensearch + +import ( + "context" + "fmt" + "log" + "regexp" + "strings" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/opensearchservice" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + awspolicy "github.com/hashicorp/awspolicyequivalence" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + tfiam "github.com/hashicorp/terraform-provider-aws/internal/service/iam" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func ResourceDomain() *schema.Resource { + return &schema.Resource{ + Create: resourceDomainCreate, + Read: resourceDomainRead, + Update: resourceDomainUpdate, + Delete: resourceDomainDelete, + Importer: &schema.ResourceImporter{ + State: resourceDomainImport, + }, + + Timeouts: &schema.ResourceTimeout{ + Create: schema.DefaultTimeout(60 * time.Minute), + Update: schema.DefaultTimeout(180 * time.Minute), + Delete: schema.DefaultTimeout(90 * time.Minute), + }, + + CustomizeDiff: customdiff.Sequence( + customdiff.ForceNewIf("engine_version", func(_ context.Context, d *schema.ResourceDiff, meta interface{}) bool { + newVersion := d.Get("engine_version").(string) + domainName := d.Get("domain_name").(string) + + conn := meta.(*conns.AWSClient).OpenSearchConn + resp, err := conn.GetCompatibleVersions(&opensearchservice.GetCompatibleVersionsInput{ + DomainName: aws.String(domainName), + }) + if err != nil { + log.Printf("[ERROR] Failed to get compatible OpenSearch versions %s", domainName) + return false + } + if len(resp.CompatibleVersions) != 1 { + return true + } + for _, targetVersion := range resp.CompatibleVersions[0].TargetVersions { + if aws.StringValue(targetVersion) == newVersion { + return false + } + } + return true + }), + verify.SetTagsDiff, + ), + + Schema: map[string]*schema.Schema{ + "access_policies": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: verify.SuppressEquivalentPolicyDiffs, + StateFunc: func(v interface{}) string { + json, _ := structure.NormalizeJsonString(v) + return json + }, 
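+				// Hedged note: the StateFunc above stores the normalized form of the
+				// policy JSON, so purely cosmetic differences (whitespace, key order)
+				// never surface as diffs; SuppressEquivalentPolicyDiffs covers the
+				// remaining semantically equivalent policy documents.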
+ }, + "advanced_options": { + Type: schema.TypeMap, + Optional: true, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "advanced_security_options": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + "internal_user_database_enabled": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "master_user_options": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "master_user_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidARN, + }, + "master_user_name": { + Type: schema.TypeString, + Optional: true, + }, + "master_user_password": { + Type: schema.TypeString, + Optional: true, + Sensitive: true, + }, + }, + }, + }, + }, + }, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "auto_tune_options": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "desired_state": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(opensearchservice.AutoTuneDesiredState_Values(), false), + }, + "maintenance_schedule": { + Type: schema.TypeSet, + Optional: true, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cron_expression_for_recurrence": { + Type: schema.TypeString, + Required: true, + }, + "duration": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "unit": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(opensearchservice.TimeUnit_Values(), false), + }, + "value": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + "start_at": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.IsRFC3339Time, + }, + }, + }, + }, + "rollback_on_disable": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(opensearchservice.RollbackOnDisable_Values(), false), + }, + }, + }, + }, + "cluster_config": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "dedicated_master_count": { + Type: schema.TypeInt, + Optional: true, + DiffSuppressFunc: isDedicatedMasterDisabled, + }, + "dedicated_master_enabled": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "dedicated_master_type": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: isDedicatedMasterDisabled, + }, + "instance_count": { + Type: schema.TypeInt, + Optional: true, + Default: 1, + }, + "instance_type": { + Type: schema.TypeString, + Optional: true, + Default: opensearchservice.OpenSearchPartitionInstanceTypeM3MediumSearch, + }, + "warm_count": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(2, 150), + }, + "warm_enabled": { + Type: schema.TypeBool, + Optional: true, + }, + "warm_type": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice([]string{ + opensearchservice.OpenSearchWarmPartitionInstanceTypeUltrawarm1MediumSearch, + opensearchservice.OpenSearchWarmPartitionInstanceTypeUltrawarm1LargeSearch, + "ultrawarm1.xlarge.search", + }, false), + }, + "zone_awareness_config": { + Type: 
schema.TypeList, + Optional: true, + MaxItems: 1, + DiffSuppressFunc: verify.SuppressMissingOptionalConfigurationBlock, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zone_count": { + Type: schema.TypeInt, + Optional: true, + Default: 2, + ValidateFunc: validation.IntInSlice([]int{2, 3}), + }, + }, + }, + }, + "zone_awareness_enabled": { + Type: schema.TypeBool, + Optional: true, + }, + }, + }, + }, + "cognito_options": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + DiffSuppressFunc: verify.SuppressMissingOptionalConfigurationBlock, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "identity_pool_id": { + Type: schema.TypeString, + Required: true, + }, + "role_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + "user_pool_id": { + Type: schema.TypeString, + Required: true, + }, + }, + }, + }, + "domain_endpoint_options": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "custom_endpoint": { + Type: schema.TypeString, + Optional: true, + DiffSuppressFunc: isCustomEndpointDisabled, + }, + "custom_endpoint_certificate_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidARN, + DiffSuppressFunc: isCustomEndpointDisabled, + }, + "custom_endpoint_enabled": { + Type: schema.TypeBool, + Optional: true, + Default: false, + }, + "enforce_https": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "tls_security_policy": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(opensearchservice.TLSSecurityPolicy_Values(), false), + }, + }, + }, + }, + "domain_id": { + Type: schema.TypeString, + Computed: true, + }, + "domain_name": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringMatch(regexp.MustCompile(`^[a-z][0-9a-z\-]{2,27}$`), + "must start with a lowercase letter and be at least 3 and no more than 28 characters long."+ + " Valid characters are a-z (lowercase letters), 0-9, and - (hyphen)."), + }, + "ebs_options": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ebs_enabled": { + Type: schema.TypeBool, + Required: true, + }, + "iops": { + Type: schema.TypeInt, + Optional: true, + }, + "volume_size": { + Type: schema.TypeInt, + Optional: true, + }, + "volume_type": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringInSlice(opensearchservice.VolumeType_Values(), false), + }, + }, + }, + }, + "encrypt_at_rest": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + DiffSuppressFunc: suppressEquivalentKmsKeyIds, + }, + }, + }, + }, + "endpoint": { + Type: schema.TypeString, + Computed: true, + }, + "engine_version": { + Type: schema.TypeString, + Optional: true, + Default: "OpenSearch_1.1", + }, + "kibana_endpoint": { + Type: schema.TypeString, + Computed: true, + }, + "log_publishing_options": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema:
map[string]*schema.Schema{ + "cloudwatch_log_group_arn": { + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, + }, + "enabled": { + Type: schema.TypeBool, + Optional: true, + Default: true, + }, + "log_type": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(opensearchservice.LogType_Values(), false), + }, + }, + }, + }, + "node_to_node_encryption": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Required: true, + ForceNew: true, + }, + }, + }, + }, + "snapshot_options": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + DiffSuppressFunc: verify.SuppressMissingOptionalConfigurationBlock, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "automated_snapshot_start_hour": { + Type: schema.TypeInt, + Required: true, + }, + }, + }, + }, + "tags": tftags.TagsSchema(), + "tags_all": tftags.TagsSchemaComputed(), + "vpc_options": { + Type: schema.TypeList, + Optional: true, + ForceNew: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zones": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "security_group_ids": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "subnet_ids": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, + Set: schema.HashString, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + }, + } +} + +func resourceDomainImport( + d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + d.Set("domain_name", d.Id()) + return []*schema.ResourceData{d}, nil +} + +func resourceDomainCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).OpenSearchConn + defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig + tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) + + // The API doesn't check for duplicate names + // so w/out this check Create would act as upsert + // and might cause duplicate domain to appear in state + resp, err := FindDomainByName(conn, d.Get("domain_name").(string)) + if err == nil { + return fmt.Errorf("OpenSearch domain %s already exists", aws.StringValue(resp.DomainName)) + } + + inputCreateDomain := opensearchservice.CreateDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + EngineVersion: aws.String(d.Get("engine_version").(string)), + TagList: Tags(tags.IgnoreAWS()), + } + + if v, ok := d.GetOk("access_policies"); ok { + policy, err := structure.NormalizeJsonString(v.(string)) + + if err != nil { + return fmt.Errorf("policy (%s) is invalid JSON: %w", policy, err) + } + + inputCreateDomain.AccessPolicies = aws.String(policy) + } + + if v, ok := d.GetOk("advanced_options"); ok { + inputCreateDomain.AdvancedOptions = flex.ExpandStringMap(v.(map[string]interface{})) + } + + if v, ok := d.GetOk("advanced_security_options"); ok { + inputCreateDomain.AdvancedSecurityOptions = expandAdvancedSecurityOptions(v.([]interface{})) + } + + if v, ok := d.GetOk("auto_tune_options"); ok && len(v.([]interface{})) > 0 { + inputCreateDomain.AutoTuneOptions = expandAutoTuneOptionsInput(v.([]interface{})[0].(map[string]interface{})) + } + + if v, ok := d.GetOk("ebs_options"); 
ok { + options := v.([]interface{}) + + if len(options) == 1 { + if options[0] == nil { + return fmt.Errorf("At least one field is expected inside ebs_options") + } + + s := options[0].(map[string]interface{}) + inputCreateDomain.EBSOptions = expandEBSOptions(s) + } + } + + if v, ok := d.GetOk("encrypt_at_rest"); ok { + options := v.([]interface{}) + if options[0] == nil { + return fmt.Errorf("At least one field is expected inside encrypt_at_rest") + } + + s := options[0].(map[string]interface{}) + inputCreateDomain.EncryptionAtRestOptions = expandEncryptAtRestOptions(s) + } + + if v, ok := d.GetOk("cluster_config"); ok { + config := v.([]interface{}) + + if len(config) == 1 { + if config[0] == nil { + return fmt.Errorf("At least one field is expected inside cluster_config") + } + m := config[0].(map[string]interface{}) + inputCreateDomain.ClusterConfig = expandClusterConfig(m) + } + } + + if v, ok := d.GetOk("node_to_node_encryption"); ok { + options := v.([]interface{}) + + s := options[0].(map[string]interface{}) + inputCreateDomain.NodeToNodeEncryptionOptions = expandNodeToNodeEncryptionOptions(s) + } + + if v, ok := d.GetOk("snapshot_options"); ok { + options := v.([]interface{}) + + if len(options) == 1 { + if options[0] == nil { + return fmt.Errorf("At least one field is expected inside snapshot_options") + } + + o := options[0].(map[string]interface{}) + + snapshotOptions := opensearchservice.SnapshotOptions{ + AutomatedSnapshotStartHour: aws.Int64(int64(o["automated_snapshot_start_hour"].(int))), + } + + inputCreateDomain.SnapshotOptions = &snapshotOptions + } + } + + if v, ok := d.GetOk("vpc_options"); ok { + options := v.([]interface{}) + if options[0] == nil { + return fmt.Errorf("At least one field is expected inside vpc_options") + } + + s := options[0].(map[string]interface{}) + inputCreateDomain.VPCOptions = expandVPCOptions(s) + } + + if v, ok := d.GetOk("log_publishing_options"); ok { + inputCreateDomain.LogPublishingOptions = expandLogPublishingOptions(v.(*schema.Set)) + } + + if v, ok := d.GetOk("domain_endpoint_options"); ok { + inputCreateDomain.DomainEndpointOptions = expandDomainEndpointOptions(v.([]interface{})) + } + + if v, ok := d.GetOk("cognito_options"); ok { + inputCreateDomain.CognitoOptions = expandCognitoOptions(v.([]interface{})) + } + + log.Printf("[DEBUG] Creating OpenSearch domain: %s", inputCreateDomain) + + // IAM Roles can take some time to propagate if set in AccessPolicies and created in the same terraform + var out *opensearchservice.CreateDomainOutput + err = resource.Retry(tfiam.PropagationTimeout, func() *resource.RetryError { + var err error + out, err = conn.CreateDomain(&inputCreateDomain) + if err != nil { + if tfawserr.ErrMessageContains(err, "InvalidTypeException", "Error setting policy") { + log.Printf("[DEBUG] Retrying creation of OpenSearch domain %s", aws.StringValue(inputCreateDomain.DomainName)) + return resource.RetryableError(err) + } + if tfawserr.ErrMessageContains(err, "ValidationException", "enable a service-linked role to give Amazon ES permissions") { + return resource.RetryableError(err) + } + if tfawserr.ErrMessageContains(err, "ValidationException", "Domain is still being deleted") { + return resource.RetryableError(err) + } + if tfawserr.ErrMessageContains(err, "ValidationException", "Amazon OpenSearch Service must be allowed to use the passed role") { + return resource.RetryableError(err) + } + if tfawserr.ErrMessageContains(err, "ValidationException", "The passed role has not propagated yet") { + return 
resource.RetryableError(err) + } + if tfawserr.ErrMessageContains(err, "ValidationException", "Authentication error") { + return resource.RetryableError(err) + } + if tfawserr.ErrMessageContains(err, "ValidationException", "Unauthorized Operation: OpenSearch Service must be authorised to describe") { + return resource.RetryableError(err) + } + if tfawserr.ErrMessageContains(err, "ValidationException", "The passed role must authorize Amazon OpenSearch Service to describe") { + return resource.RetryableError(err) + } + return resource.NonRetryableError(err) + } + return nil + }) + if tfresource.TimedOut(err) { + out, err = conn.CreateDomain(&inputCreateDomain) + } + if err != nil { + return fmt.Errorf("Error creating OpenSearch domain: %w", err) + } + + d.SetId(aws.StringValue(out.DomainStatus.ARN)) + + log.Printf("[DEBUG] Waiting for OpenSearch domain %q to be created", d.Id()) + if err := WaitForDomainCreation(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutCreate)); err != nil { + return fmt.Errorf("error waiting for OpenSearch Domain (%s) to be created: %w", d.Id(), err) + } + + log.Printf("[DEBUG] OpenSearch domain %q created", d.Id()) + + if v, ok := d.GetOk("auto_tune_options"); ok && len(v.([]interface{})) > 0 { + + log.Printf("[DEBUG] Modifying config for OpenSearch domain %q", d.Id()) + + inputUpdateDomainConfig := &opensearchservice.UpdateDomainConfigInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + } + + inputUpdateDomainConfig.AutoTuneOptions = expandAutoTuneOptions(v.([]interface{})[0].(map[string]interface{})) + + _, err = conn.UpdateDomainConfig(inputUpdateDomainConfig) + + if err != nil { + return fmt.Errorf("Error modifying config for OpenSearch domain: %s", err) + } + + log.Printf("[DEBUG] Config for OpenSearch domain %q modified", d.Id()) + } + + return resourceDomainRead(d, meta) +} + +func resourceDomainRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).OpenSearchConn + defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig + ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig + + ds, err := FindDomainByName(conn, d.Get("domain_name").(string)) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] OpenSearch domain (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading OpenSearch domain (%s): %w", d.Id(), err) + } + + log.Printf("[DEBUG] Received OpenSearch domain: %s", ds) + + outDescribeDomainConfig, err := conn.DescribeDomainConfig(&opensearchservice.DescribeDomainConfigInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + }) + + if err != nil { + return err + } + + log.Printf("[DEBUG] Received config for OpenSearch domain: %s", outDescribeDomainConfig) + + dc := outDescribeDomainConfig.DomainConfig + + if ds.AccessPolicies != nil && aws.StringValue(ds.AccessPolicies) != "" { + policies, err := verify.PolicyToSet(d.Get("access_policies").(string), aws.StringValue(ds.AccessPolicies)) + + if err != nil { + return err + } + + d.Set("access_policies", policies) + } + + options := advancedOptionsIgnoreDefault(d.Get("advanced_options").(map[string]interface{}), flex.PointersMapToStringList(ds.AdvancedOptions)) + if err = d.Set("advanced_options", options); err != nil { + return fmt.Errorf("setting advanced_options %v: %w", options, err) + } + + d.SetId(aws.StringValue(ds.ARN)) + d.Set("domain_id", ds.DomainId) + d.Set("domain_name", ds.DomainName) + d.Set("engine_version", 
ds.EngineVersion) + + if err := d.Set("ebs_options", flattenEBSOptions(ds.EBSOptions)); err != nil { + return fmt.Errorf("error setting ebs_options: %w", err) + } + + if err := d.Set("encrypt_at_rest", flattenEncryptAtRestOptions(ds.EncryptionAtRestOptions)); err != nil { + return fmt.Errorf("error setting encrypt_at_rest: %w", err) + } + + if err := d.Set("cluster_config", flattenClusterConfig(ds.ClusterConfig)); err != nil { + return fmt.Errorf("error setting cluster_config: %w", err) + } + + if err := d.Set("cognito_options", flattenCognitoOptions(ds.CognitoOptions)); err != nil { + return fmt.Errorf("error setting cognito_options: %w", err) + } + + if err := d.Set("node_to_node_encryption", flattenNodeToNodeEncryptionOptions(ds.NodeToNodeEncryptionOptions)); err != nil { + return fmt.Errorf("error setting node_to_node_encryption: %w", err) + } + + // Populate AdvancedSecurityOptions with values returned from + // DescribeDomainConfig, if enabled, else use + // values from resource; additionally, append MasterUserOptions + // from resource as they are not returned from the API + if ds.AdvancedSecurityOptions != nil { + advSecOpts := flattenAdvancedSecurityOptions(ds.AdvancedSecurityOptions) + if !aws.BoolValue(ds.AdvancedSecurityOptions.Enabled) { + advSecOpts[0]["internal_user_database_enabled"] = getUserDBEnabled(d) + } + advSecOpts[0]["master_user_options"] = getMasterUserOptions(d) + + if err := d.Set("advanced_security_options", advSecOpts); err != nil { + return fmt.Errorf("error setting advanced_security_options: %w", err) + } + } + + if v := dc.AutoTuneOptions; v != nil { + err = d.Set("auto_tune_options", []interface{}{flattenAutoTuneOptions(v.Options)}) + if err != nil { + return err + } + } + + if err := d.Set("snapshot_options", flattenSnapshotOptions(ds.SnapshotOptions)); err != nil { + return fmt.Errorf("error setting snapshot_options: %w", err) + } + + if ds.VPCOptions != nil { + if err := d.Set("vpc_options", flattenVPCDerivedInfo(ds.VPCOptions)); err != nil { + return fmt.Errorf("error setting vpc_options: %w", err) + } + + endpoints := flex.PointersMapToStringList(ds.Endpoints) + err = d.Set("endpoint", endpoints["vpc"]) + if err != nil { + return err + } + d.Set("kibana_endpoint", getKibanaEndpoint(d)) + if ds.Endpoint != nil { + return fmt.Errorf("%q: OpenSearch domain in VPC expected to have null Endpoint value", d.Id()) + } + } else { + if ds.Endpoint != nil { + d.Set("endpoint", ds.Endpoint) + d.Set("kibana_endpoint", getKibanaEndpoint(d)) + } + if ds.Endpoints != nil { + return fmt.Errorf("%q: OpenSearch domain not in VPC expected to have null Endpoints value", d.Id()) + } + } + + if err := d.Set("log_publishing_options", flattenLogPublishingOptions(ds.LogPublishingOptions)); err != nil { + return fmt.Errorf("error setting log_publishing_options: %w", err) + } + + if err := d.Set("domain_endpoint_options", flattenDomainEndpointOptions(ds.DomainEndpointOptions)); err != nil { + return fmt.Errorf("error setting domain_endpoint_options: %w", err) + } + + d.Set("arn", ds.ARN) + + tags, err := ListTags(conn, d.Id()) + + if err != nil { + return fmt.Errorf("error listing tags for OpenSearch Cluster (%s): %w", d.Id(), err) + } + + tags = tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig) + + //lintignore:AWSR002 + if err := d.Set("tags", tags.RemoveDefaultConfig(defaultTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + if err := d.Set("tags_all", tags.Map()); err != nil { + return fmt.Errorf("error setting tags_all: %w", err) + } + 
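+	// NOTE: kibana_endpoint is not returned by the API; it is derived
+	// client-side by appending the Kibana plugin path to the domain
+	// endpoint (see getKibanaEndpoint below).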
+ return nil +} + +func resourceDomainUpdate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).OpenSearchConn + + if d.HasChangesExcept("tags", "tags_all") { + input := opensearchservice.UpdateDomainConfigInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + } + + if d.HasChange("access_policies") { + o, n := d.GetChange("access_policies") + + if equivalent, err := awspolicy.PoliciesAreEquivalent(o.(string), n.(string)); err != nil || !equivalent { + input.AccessPolicies = aws.String(d.Get("access_policies").(string)) + } + } + + if d.HasChange("advanced_options") { + input.AdvancedOptions = flex.ExpandStringMap(d.Get("advanced_options").(map[string]interface{})) + } + + if d.HasChange("advanced_security_options") { + input.AdvancedSecurityOptions = expandAdvancedSecurityOptions(d.Get("advanced_security_options").([]interface{})) + } + + if d.HasChange("auto_tune_options") { + input.AutoTuneOptions = expandAutoTuneOptions(d.Get("auto_tune_options").([]interface{})[0].(map[string]interface{})) + } + + if d.HasChange("domain_endpoint_options") { + input.DomainEndpointOptions = expandDomainEndpointOptions(d.Get("domain_endpoint_options").([]interface{})) + } + + if d.HasChanges("ebs_options", "cluster_config") { + options := d.Get("ebs_options").([]interface{}) + + if len(options) == 1 { + s := options[0].(map[string]interface{}) + input.EBSOptions = expandEBSOptions(s) + } + + if d.HasChange("cluster_config") { + config := d.Get("cluster_config").([]interface{}) + + if len(config) == 1 { + m := config[0].(map[string]interface{}) + input.ClusterConfig = expandClusterConfig(m) + } + } + + } + + if d.HasChange("snapshot_options") { + options := d.Get("snapshot_options").([]interface{}) + + if len(options) == 1 { + o := options[0].(map[string]interface{}) + + snapshotOptions := opensearchservice.SnapshotOptions{ + AutomatedSnapshotStartHour: aws.Int64(int64(o["automated_snapshot_start_hour"].(int))), + } + + input.SnapshotOptions = &snapshotOptions + } + } + + if d.HasChange("vpc_options") { + options := d.Get("vpc_options").([]interface{}) + s := options[0].(map[string]interface{}) + input.VPCOptions = expandVPCOptions(s) + } + + if d.HasChange("cognito_options") { + options := d.Get("cognito_options").([]interface{}) + input.CognitoOptions = expandCognitoOptions(options) + } + + if d.HasChange("log_publishing_options") { + input.LogPublishingOptions = expandLogPublishingOptions(d.Get("log_publishing_options").(*schema.Set)) + } + + _, err := conn.UpdateDomainConfig(&input) + if err != nil { + return err + } + + if err := waitForDomainUpdate(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for OpenSearch Domain Update (%s) to succeed: %w", d.Id(), err) + } + + if d.HasChange("engine_version") { + upgradeInput := opensearchservice.UpgradeDomainInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + TargetVersion: aws.String(d.Get("engine_version").(string)), + } + + _, err := conn.UpgradeDomain(&upgradeInput) + if err != nil { + return fmt.Errorf("Failed to upgrade OpenSearch domain: %w", err) + } + + if _, err := waitUpgradeSucceeded(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for OpenSearch Domain Upgrade (%s) to succeed: %w", d.Id(), err) + } + } + } + + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + + if err := UpdateTags(conn, d.Id(), o, n); err != nil { + return 
fmt.Errorf("error updating OpenSearch Domain (%s) tags: %w", d.Id(), err) + } + } + + return resourceDomainRead(d, meta) +} + +func resourceDomainDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).OpenSearchConn + domainName := d.Get("domain_name").(string) + + log.Printf("[DEBUG] Deleting OpenSearch domain: %q", domainName) + _, err := conn.DeleteDomain(&opensearchservice.DeleteDomainInput{ + DomainName: aws.String(domainName), + }) + if err != nil { + if tfawserr.ErrCodeEquals(err, opensearchservice.ErrCodeResourceNotFoundException) { + return nil + } + return err + } + + log.Printf("[DEBUG] Waiting for OpenSearch domain %q to be deleted", domainName) + if err := waitForDomainDelete(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutDelete)); err != nil { + return fmt.Errorf("error waiting for OpenSearch Domain (%s) to be deleted: %w", d.Id(), err) + } + + return nil +} + +func suppressEquivalentKmsKeyIds(k, old, new string, d *schema.ResourceData) bool { + // The OpenSearch API accepts a short KMS key id but always returns the ARN of the key. + // The ARN is of the format 'arn:aws:kms:REGION:ACCOUNT_ID:key/KMS_KEY_ID'. + // These should be treated as equivalent. + return strings.Contains(old, new) +} + +func getKibanaEndpoint(d *schema.ResourceData) string { + return d.Get("endpoint").(string) + "/_plugin/kibana/" +} + +func isDedicatedMasterDisabled(k, old, new string, d *schema.ResourceData) bool { + v, ok := d.GetOk("cluster_config") + if ok { + clusterConfig := v.([]interface{})[0].(map[string]interface{}) + return !clusterConfig["dedicated_master_enabled"].(bool) + } + return false +} + +func isCustomEndpointDisabled(k, old, new string, d *schema.ResourceData) bool { + v, ok := d.GetOk("domain_endpoint_options") + if ok { + domainEndpointOptions := v.([]interface{})[0].(map[string]interface{}) + return !domainEndpointOptions["custom_endpoint_enabled"].(bool) + } + return false +} + +func expandNodeToNodeEncryptionOptions(s map[string]interface{}) *opensearchservice.NodeToNodeEncryptionOptions { + options := opensearchservice.NodeToNodeEncryptionOptions{} + + if v, ok := s["enabled"]; ok { + options.Enabled = aws.Bool(v.(bool)) + } + return &options +} + +func flattenNodeToNodeEncryptionOptions(o *opensearchservice.NodeToNodeEncryptionOptions) []map[string]interface{} { + if o == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{} + if o.Enabled != nil { + m["enabled"] = aws.BoolValue(o.Enabled) + } + + return []map[string]interface{}{m} +} + +func expandClusterConfig(m map[string]interface{}) *opensearchservice.ClusterConfig { + config := opensearchservice.ClusterConfig{} + + if v, ok := m["dedicated_master_enabled"]; ok { + isEnabled := v.(bool) + config.DedicatedMasterEnabled = aws.Bool(isEnabled) + + if isEnabled { + if v, ok := m["dedicated_master_count"]; ok && v.(int) > 0 { + config.DedicatedMasterCount = aws.Int64(int64(v.(int))) + } + if v, ok := m["dedicated_master_type"]; ok && v.(string) != "" { + config.DedicatedMasterType = aws.String(v.(string)) + } + } + } + + if v, ok := m["instance_count"]; ok { + config.InstanceCount = aws.Int64(int64(v.(int))) + } + if v, ok := m["instance_type"]; ok { + config.InstanceType = aws.String(v.(string)) + } + + if v, ok := m["zone_awareness_enabled"]; ok { + isEnabled := v.(bool) + config.ZoneAwarenessEnabled = aws.Bool(isEnabled) + + if isEnabled { + if v, ok := m["zone_awareness_config"]; ok { + config.ZoneAwarenessConfig = 
expandZoneAwarenessConfig(v.([]interface{})) + } + } + } + + if v, ok := m["warm_enabled"]; ok { + isEnabled := v.(bool) + config.WarmEnabled = aws.Bool(isEnabled) + + if isEnabled { + if v, ok := m["warm_count"]; ok { + config.WarmCount = aws.Int64(int64(v.(int))) + } + + if v, ok := m["warm_type"]; ok { + config.WarmType = aws.String(v.(string)) + } + } + } + + return &config +} + +func expandZoneAwarenessConfig(l []interface{}) *opensearchservice.ZoneAwarenessConfig { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + + zoneAwarenessConfig := &opensearchservice.ZoneAwarenessConfig{} + + if v, ok := m["availability_zone_count"]; ok && v.(int) > 0 { + zoneAwarenessConfig.AvailabilityZoneCount = aws.Int64(int64(v.(int))) + } + + return zoneAwarenessConfig +} + +func flattenClusterConfig(c *opensearchservice.ClusterConfig) []map[string]interface{} { + m := map[string]interface{}{ + "zone_awareness_config": flattenZoneAwarenessConfig(c.ZoneAwarenessConfig), + "zone_awareness_enabled": aws.BoolValue(c.ZoneAwarenessEnabled), + } + + if c.DedicatedMasterCount != nil { + m["dedicated_master_count"] = aws.Int64Value(c.DedicatedMasterCount) + } + if c.DedicatedMasterEnabled != nil { + m["dedicated_master_enabled"] = aws.BoolValue(c.DedicatedMasterEnabled) + } + if c.DedicatedMasterType != nil { + m["dedicated_master_type"] = aws.StringValue(c.DedicatedMasterType) + } + if c.InstanceCount != nil { + m["instance_count"] = aws.Int64Value(c.InstanceCount) + } + if c.InstanceType != nil { + m["instance_type"] = aws.StringValue(c.InstanceType) + } + if c.WarmEnabled != nil { + m["warm_enabled"] = aws.BoolValue(c.WarmEnabled) + } + if c.WarmCount != nil { + m["warm_count"] = aws.Int64Value(c.WarmCount) + } + if c.WarmType != nil { + m["warm_type"] = aws.StringValue(c.WarmType) + } + + return []map[string]interface{}{m} +} + +func flattenZoneAwarenessConfig(zoneAwarenessConfig *opensearchservice.ZoneAwarenessConfig) []interface{} { + if zoneAwarenessConfig == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "availability_zone_count": aws.Int64Value(zoneAwarenessConfig.AvailabilityZoneCount), + } + + return []interface{}{m} +} + +// advancedOptionsIgnoreDefault checks for defaults in the n map and, if +// they don't exist in the o map, it deletes them. AWS returns default advanced +// options that cause perpetual diffs. 
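+//
+// For example, the API reports "override_main_response_version" = "false" even
+// when the user never configured that option; leaving it in n would show a
+// permanent diff against the configuration, so it is removed from the result.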
+func advancedOptionsIgnoreDefault(o map[string]interface{}, n map[string]interface{}) map[string]interface{} { + for k, v := range n { + switch fmt.Sprintf("%s=%s", k, v) { + case "override_main_response_version=false": + if _, ok := o[k]; !ok { + delete(n, "override_main_response_version") + } + case "rest.action.multi.allow_explicit_index=true": + if _, ok := o[k]; !ok { + delete(n, "rest.action.multi.allow_explicit_index") + } + } + } + + return n +} diff --git a/internal/service/opensearch/domain_data_source.go b/internal/service/opensearch/domain_data_source.go new file mode 100644 index 000000000000..2e3f4f2504e0 --- /dev/null +++ b/internal/service/opensearch/domain_data_source.go @@ -0,0 +1,450 @@ +package opensearch + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/opensearchservice" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/flex" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" +) + +func DataSourceDomain() *schema.Resource { + return &schema.Resource{ + Read: dataSourceDomainRead, + + Schema: map[string]*schema.Schema{ + "access_policies": { + Type: schema.TypeString, + Computed: true, + }, + "advanced_options": { + Type: schema.TypeMap, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "advanced_security_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "internal_user_database_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + "auto_tune_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "desired_state": { + Type: schema.TypeString, + Computed: true, + }, + "maintenance_schedule": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "start_at": { + Type: schema.TypeString, + Computed: true, + }, + "duration": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "value": { + Type: schema.TypeInt, + Computed: true, + }, + "unit": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "cron_expression_for_recurrence": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "rollback_on_disable": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "domain_name": { + Type: schema.TypeString, + Required: true, + }, + "arn": { + Type: schema.TypeString, + Computed: true, + }, + "domain_id": { + Type: schema.TypeString, + Computed: true, + }, + "endpoint": { + Type: schema.TypeString, + Computed: true, + }, + "kibana_endpoint": { + Type: schema.TypeString, + Computed: true, + }, + "ebs_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ebs_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "iops": { + Type: schema.TypeInt, + Computed: true, + }, + "volume_size": { + Type: schema.TypeInt, + Computed: true, + }, + "volume_type": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "encryption_at_rest": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + 
Type: schema.TypeBool, + Computed: true, + }, + "kms_key_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "node_to_node_encryption": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + "cluster_config": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "dedicated_master_count": { + Type: schema.TypeInt, + Computed: true, + }, + "dedicated_master_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "dedicated_master_type": { + Type: schema.TypeString, + Computed: true, + }, + "instance_count": { + Type: schema.TypeInt, + Computed: true, + }, + "instance_type": { + Type: schema.TypeString, + Computed: true, + }, + "zone_awareness_config": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zone_count": { + Type: schema.TypeInt, + Computed: true, + }, + }, + }, + }, + "zone_awareness_enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "warm_enabled": { + Type: schema.TypeBool, + Optional: true, + }, + "warm_count": { + Type: schema.TypeInt, + Computed: true, + }, + "warm_type": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "snapshot_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "automated_snapshot_start_hour": { + Type: schema.TypeInt, + Computed: true, + }, + }, + }, + }, + "vpc_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "availability_zones": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + //Set: schema.HashString, + }, + "security_group_ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "subnet_ids": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + "vpc_id": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + "log_publishing_options": { + Type: schema.TypeSet, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "log_type": { + Type: schema.TypeString, + Computed: true, + }, + "cloudwatch_log_group_arn": { + Type: schema.TypeString, + Computed: true, + }, + "enabled": { + Type: schema.TypeBool, + Computed: true, + }, + }, + }, + }, + "engine_version": { + Type: schema.TypeString, + Computed: true, + }, + "cognito_options": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "enabled": { + Type: schema.TypeBool, + Computed: true, + }, + "user_pool_id": { + Type: schema.TypeString, + Computed: true, + }, + "identity_pool_id": { + Type: schema.TypeString, + Computed: true, + }, + "role_arn": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, + + "created": { + Type: schema.TypeBool, + Computed: true, + }, + "deleted": { + Type: schema.TypeBool, + Computed: true, + }, + "processing": { + Type: schema.TypeBool, + Computed: true, + }, + + "tags": tftags.TagsSchemaComputed(), + }, + } +} + +func dataSourceDomainRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).OpenSearchConn + ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig + + ds, err := FindDomainByName(conn, d.Get("domain_name").(string)) + if err 
!= nil { + return fmt.Errorf("error reading OpenSearch domain (%s): %w", d.Get("domain_name").(string), err) + } + + reqDescribeDomainConfig := &opensearchservice.DescribeDomainConfigInput{ + DomainName: aws.String(d.Get("domain_name").(string)), + } + + respDescribeDomainConfig, err := conn.DescribeDomainConfig(reqDescribeDomainConfig) + if err != nil { + return fmt.Errorf("error querying config for OpenSearch domain: %w", err) + } + + if respDescribeDomainConfig.DomainConfig == nil { + return fmt.Errorf("config query for OpenSearch domain (%s) returned no results", d.Get("domain_name").(string)) + } + + dc := respDescribeDomainConfig.DomainConfig + + d.SetId(aws.StringValue(ds.ARN)) + + if ds.AccessPolicies != nil && aws.StringValue(ds.AccessPolicies) != "" { + policies, err := structure.NormalizeJsonString(aws.StringValue(ds.AccessPolicies)) + if err != nil { + return fmt.Errorf("access policies contain invalid JSON: %w", err) + } + d.Set("access_policies", policies) + } + + if err := d.Set("advanced_options", flex.PointersMapToStringList(ds.AdvancedOptions)); err != nil { + return fmt.Errorf("error setting advanced_options: %w", err) + } + + d.Set("arn", ds.ARN) + d.Set("domain_id", ds.DomainId) + d.Set("endpoint", ds.Endpoint) + d.Set("kibana_endpoint", getKibanaEndpoint(d)) + + if err := d.Set("advanced_security_options", flattenAdvancedSecurityOptions(ds.AdvancedSecurityOptions)); err != nil { + return fmt.Errorf("error setting advanced_security_options: %w", err) + } + + if dc.AutoTuneOptions != nil { + if err := d.Set("auto_tune_options", []interface{}{flattenAutoTuneOptions(dc.AutoTuneOptions.Options)}); err != nil { + return fmt.Errorf("error setting auto_tune_options: %w", err) + } + } + + if err := d.Set("ebs_options", flattenEBSOptions(ds.EBSOptions)); err != nil { + return fmt.Errorf("error setting ebs_options: %w", err) + } + + if err := d.Set("encryption_at_rest", flattenEncryptAtRestOptions(ds.EncryptionAtRestOptions)); err != nil { + return fmt.Errorf("error setting encryption_at_rest: %w", err) + } + + if err := d.Set("node_to_node_encryption", flattenNodeToNodeEncryptionOptions(ds.NodeToNodeEncryptionOptions)); err != nil { + return fmt.Errorf("error setting node_to_node_encryption: %w", err) + } + + if err := d.Set("cluster_config", flattenClusterConfig(ds.ClusterConfig)); err != nil { + return fmt.Errorf("error setting cluster_config: %w", err) + } + + if err := d.Set("snapshot_options", flattenSnapshotOptions(ds.SnapshotOptions)); err != nil { + return fmt.Errorf("error setting snapshot_options: %w", err) + } + + if ds.VPCOptions != nil { + if err := d.Set("vpc_options", flattenVPCDerivedInfo(ds.VPCOptions)); err != nil { + return fmt.Errorf("error setting vpc_options: %w", err) + } + + endpoints := flex.PointersMapToStringList(ds.Endpoints) + if err := d.Set("endpoint", endpoints["vpc"]); err != nil { + return fmt.Errorf("error setting endpoint: %w", err) + } + d.Set("kibana_endpoint", getKibanaEndpoint(d)) + if ds.Endpoint != nil { + return fmt.Errorf("%q: OpenSearch domain in VPC expected to have null Endpoint value", d.Id()) + } + } else { + if ds.Endpoint != nil { + d.Set("endpoint", ds.Endpoint) + d.Set("kibana_endpoint", getKibanaEndpoint(d)) + } + if ds.Endpoints != nil { + return fmt.Errorf("%q: OpenSearch domain not in VPC expected to have null Endpoints value", d.Id()) + } + } + + if err := d.Set("log_publishing_options", flattenLogPublishingOptions(ds.LogPublishingOptions)); err != nil { + return fmt.Errorf("error setting log_publishing_options: %w", err) + } + + d.Set("engine_version", ds.EngineVersion) + + if err := d.Set("cognito_options",
flattenCognitoOptions(ds.CognitoOptions)); err != nil { + return fmt.Errorf("error setting cognito_options: %w", err) + } + + d.Set("created", ds.Created) + d.Set("deleted", ds.Deleted) + + d.Set("processing", ds.Processing) + + tags, err := ListTags(conn, d.Id()) + + if err != nil { + return fmt.Errorf("error listing tags for OpenSearch Cluster (%s): %w", d.Id(), err) + } + + if err := d.Set("tags", tags.IgnoreAWS().IgnoreConfig(ignoreTagsConfig).Map()); err != nil { + return fmt.Errorf("error setting tags: %w", err) + } + + return nil +} diff --git a/internal/service/opensearch/domain_data_source_test.go b/internal/service/opensearch/domain_data_source_test.go new file mode 100644 index 000000000000..8abe04fce0f2 --- /dev/null +++ b/internal/service/opensearch/domain_data_source_test.go @@ -0,0 +1,339 @@ +package opensearch_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/opensearchservice" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccOpenSearchDomainDataSource_Data_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + autoTuneStartAtTime := testAccGetValidStartAtTime(t, "24h") + datasourceName := "data.aws_opensearch_domain.test" + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDomainWithDataSourceConfig(rName, autoTuneStartAtTime), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(datasourceName, "processing", "false"), + resource.TestCheckResourceAttrPair(datasourceName, "engine_version", resourceName, "engine_version"), + resource.TestCheckResourceAttrPair(datasourceName, "auto_tune_options.#", resourceName, "auto_tune_options.#"), + resource.TestCheckResourceAttrPair(datasourceName, "auto_tune_options.0.desired_state", resourceName, "auto_tune_options.0.desired_state"), + resource.TestCheckResourceAttrPair(datasourceName, "auto_tune_options.0.maintenance_schedule", resourceName, "auto_tune_options.0.maintenance_schedule"), + resource.TestCheckResourceAttrPair(datasourceName, "auto_tune_options.0.rollback_on_disable", resourceName, "auto_tune_options.0.rollback_on_disable"), + resource.TestCheckResourceAttrPair(datasourceName, "cluster_config.#", resourceName, "cluster_config.#"), + resource.TestCheckResourceAttrPair(datasourceName, "cluster_config.0.instance_type", resourceName, "cluster_config.0.instance_type"), + resource.TestCheckResourceAttrPair(datasourceName, "cluster_config.0.instance_count", resourceName, "cluster_config.0.instance_count"), + resource.TestCheckResourceAttrPair(datasourceName, "cluster_config.0.dedicated_master_enabled", resourceName, "cluster_config.0.dedicated_master_enabled"), + resource.TestCheckResourceAttrPair(datasourceName, "cluster_config.0.zone_awareness_enabled", resourceName, "cluster_config.0.zone_awareness_enabled"), + resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.#", resourceName, "ebs_options.#"), + resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.0.ebs_enabled", resourceName, 
"ebs_options.0.ebs_enabled"), + resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.0.volume_type", resourceName, "ebs_options.0.volume_type"), + resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.0.volume_size", resourceName, "ebs_options.0.volume_size"), + resource.TestCheckResourceAttrPair(datasourceName, "snapshot_options.#", resourceName, "snapshot_options.#"), + resource.TestCheckResourceAttrPair(datasourceName, "snapshot_options.0.automated_snapshot_start_hour", resourceName, "snapshot_options.0.automated_snapshot_start_hour"), + resource.TestCheckResourceAttrPair(datasourceName, "advanced_security_options.#", resourceName, "advanced_security_options.#"), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomainDataSource_Data_advanced(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + autoTuneStartAtTime := testAccGetValidStartAtTime(t, "24h") + datasourceName := "data.aws_opensearch_domain.test" + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDomainAdvancedWithDataSourceConfig(rName, autoTuneStartAtTime), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttrPair(datasourceName, "engine_version", resourceName, "engine_version"), + resource.TestCheckResourceAttrPair(datasourceName, "auto_tune_options.#", resourceName, "auto_tune_options.#"), + resource.TestCheckResourceAttrPair(datasourceName, "auto_tune_options.0.desired_state", resourceName, "auto_tune_options.0.desired_state"), + resource.TestCheckResourceAttrPair(datasourceName, "auto_tune_options.0.maintenance_schedule", resourceName, "auto_tune_options.0.maintenance_schedule"), + resource.TestCheckResourceAttrPair(datasourceName, "auto_tune_options.0.rollback_on_disable", resourceName, "auto_tune_options.0.rollback_on_disable"), + resource.TestCheckResourceAttrPair(datasourceName, "cluster_config.#", resourceName, "cluster_config.#"), + resource.TestCheckResourceAttrPair(datasourceName, "cluster_config.0.instance_type", resourceName, "cluster_config.0.instance_type"), + resource.TestCheckResourceAttrPair(datasourceName, "cluster_config.0.instance_count", resourceName, "cluster_config.0.instance_count"), + resource.TestCheckResourceAttrPair(datasourceName, "cluster_config.0.dedicated_master_enabled", resourceName, "cluster_config.0.dedicated_master_enabled"), + resource.TestCheckResourceAttrPair(datasourceName, "cluster_config.0.zone_awareness_enabled", resourceName, "cluster_config.0.zone_awareness_enabled"), + resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.#", resourceName, "ebs_options.#"), + resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.0.ebs_enabled", resourceName, "ebs_options.0.ebs_enabled"), + resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.0.volume_type", resourceName, "ebs_options.0.volume_type"), + resource.TestCheckResourceAttrPair(datasourceName, "ebs_options.0.volume_size", resourceName, "ebs_options.0.volume_size"), + resource.TestCheckResourceAttrPair(datasourceName, "snapshot_options.#", resourceName, "snapshot_options.#"), + resource.TestCheckResourceAttrPair(datasourceName, 
"snapshot_options.0.automated_snapshot_start_hour", resourceName, "snapshot_options.0.automated_snapshot_start_hour"), + resource.TestCheckResourceAttrPair(datasourceName, "log_publishing_options.#", resourceName, "log_publishing_options.#"), + resource.TestCheckResourceAttrPair(datasourceName, "vpc_options.#", resourceName, "vpc_options.#"), + resource.TestCheckResourceAttrPair(datasourceName, "advanced_security_options.0.enabled", resourceName, "advanced_security_options.0.enabled"), + resource.TestCheckResourceAttrPair(datasourceName, "advanced_security_options.0.internal_user_database_enabled", resourceName, "advanced_security_options.0.internal_user_database_enabled"), + ), + }, + }, + }) +} + +func testAccDomainWithDataSourceConfig(rName, autoTuneStartAtTime string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +data "aws_region" "current" {} + +data "aws_caller_identity" "current" {} + +locals { + domain_substr = substr(%[1]q, 0, 28) +} + +resource "aws_opensearch_domain" "test" { + domain_name = local.domain_substr + engine_version = "Elasticsearch_7.10" + + access_policies = < 0 { + if enabled, ok := v[0].(map[string]interface{})["enabled"].(bool); ok && !enabled { + return true + } + } + return false +} + +func resourceDomainSAMLOptionsRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).OpenSearchConn + + ds, err := FindDomainByName(conn, d.Get("domain_name").(string)) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] OpenSearch Domain SAML Options (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading OpenSearch Domain SAML Options (%s): %w", d.Id(), err) + } + + log.Printf("[DEBUG] Received OpenSearch domain: %s", ds) + + options := ds.AdvancedSecurityOptions.SAMLOptions + + if err := d.Set("saml_options", flattenESSAMLOptions(d, options)); err != nil { + return fmt.Errorf("error setting saml_options for OpenSearch Configuration: %w", err) + } + + return nil +} + +func resourceDomainSAMLOptionsPut(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).OpenSearchConn + + domainName := d.Get("domain_name").(string) + config := opensearchservice.AdvancedSecurityOptionsInput_{} + config.SetSAMLOptions(expandESSAMLOptions(d.Get("saml_options").([]interface{}))) + + log.Printf("[DEBUG] Updating OpenSearch domain SAML Options %s", config) + + _, err := conn.UpdateDomainConfig(&opensearchservice.UpdateDomainConfigInput{ + DomainName: aws.String(domainName), + AdvancedSecurityOptions: &config, + }) + + if err != nil { + return err + } + + d.SetId(domainName) + + if err := waitForDomainUpdate(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for OpenSearch Domain SAML Options update (%s) to succeed: %w", d.Id(), err) + } + + return resourceDomainSAMLOptionsRead(d, meta) +} + +func resourceDomainSAMLOptionsDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).OpenSearchConn + + domainName := d.Get("domain_name").(string) + config := opensearchservice.AdvancedSecurityOptionsInput_{} + config.SetSAMLOptions(nil) + + _, err := conn.UpdateDomainConfig(&opensearchservice.UpdateDomainConfigInput{ + DomainName: aws.String(domainName), + AdvancedSecurityOptions: &config, + }) + if err != nil { + return err + } + + log.Printf("[DEBUG] Waiting for OpenSearch domain SAML Options %q to be deleted", 
d.Get("domain_name").(string)) + + if err := waitForDomainUpdate(conn, d.Get("domain_name").(string), d.Timeout(schema.TimeoutDelete)); err != nil { + return fmt.Errorf("error waiting for OpenSearch Domain SAML Options (%s) to be deleted: %w", d.Id(), err) + } + + return nil +} diff --git a/internal/service/opensearch/domain_saml_options_test.go b/internal/service/opensearch/domain_saml_options_test.go new file mode 100644 index 000000000000..6d7234dbedc1 --- /dev/null +++ b/internal/service/opensearch/domain_saml_options_test.go @@ -0,0 +1,384 @@ +package opensearch_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/opensearchservice" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfopensearch "github.com/hashicorp/terraform-provider-aws/internal/service/opensearch" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccOpenSearchDomainSAMLOptions_basic(t *testing.T) { + var domain opensearchservice.DomainStatus + + rName := sdkacctest.RandomWithPrefix("acc-test") + rUserName := sdkacctest.RandomWithPrefix("opensearch-master-user") + idpEntityId := fmt.Sprintf("https://%s", acctest.RandomDomainName()) + + resourceName := "aws_opensearch_domain_saml_options.test" + esDomainResourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckESDomainSAMLOptionsDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainSAMLOptionsConfig(rUserName, rName, idpEntityId), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(esDomainResourceName, &domain), + testAccCheckESDomainSAMLOptions(esDomainResourceName, resourceName), + resource.TestCheckResourceAttr(resourceName, "saml_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "saml_options.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "saml_options.0.idp.#", "1"), + resource.TestCheckResourceAttr(resourceName, "saml_options.0.idp.0.entity_id", idpEntityId), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomainSAMLOptions_disappears(t *testing.T) { + rName := sdkacctest.RandomWithPrefix("acc-test") + rUserName := sdkacctest.RandomWithPrefix("opensearch-master-user") + idpEntityId := fmt.Sprintf("https://%s", acctest.RandomDomainName()) + + resourceName := "aws_opensearch_domain_saml_options.test" + esDomainResourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckESDomainSAMLOptionsDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainSAMLOptionsConfig(rUserName, rName, idpEntityId), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainSAMLOptions(esDomainResourceName, resourceName), + acctest.CheckResourceDisappears(acctest.Provider, tfopensearch.ResourceDomainSAMLOptions(), resourceName), + ), + }, + }, + }) +} + +func 
TestAccOpenSearchDomainSAMLOptions_disappears_Domain(t *testing.T) { + rName := sdkacctest.RandomWithPrefix("acc-test") + rUserName := sdkacctest.RandomWithPrefix("opensearch-master-user") + idpEntityId := fmt.Sprintf("https://%s", acctest.RandomDomainName()) + + resourceName := "aws_opensearch_domain_saml_options.test" + esDomainResourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckESDomainSAMLOptionsDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainSAMLOptionsConfig(rUserName, rName, idpEntityId), + Check: resource.ComposeTestCheckFunc( + testAccCheckESDomainSAMLOptions(esDomainResourceName, resourceName), + acctest.CheckResourceDisappears(acctest.Provider, tfopensearch.ResourceDomain(), esDomainResourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccOpenSearchDomainSAMLOptions_Update(t *testing.T) { + rName := sdkacctest.RandomWithPrefix("acc-test") + rUserName := sdkacctest.RandomWithPrefix("opensearch-master-user") + idpEntityId := fmt.Sprintf("https://%s", acctest.RandomDomainName()) + + resourceName := "aws_opensearch_domain_saml_options.test" + esDomainResourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckESDomainSAMLOptionsDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainSAMLOptionsConfig(rUserName, rName, idpEntityId), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "saml_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "saml_options.0.session_timeout_minutes", "60"), + testAccCheckESDomainSAMLOptions(esDomainResourceName, resourceName), + ), + }, + { + Config: testAccDomainSAMLOptionsConfigUpdate(rUserName, rName, idpEntityId), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "saml_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "saml_options.0.session_timeout_minutes", "180"), + testAccCheckESDomainSAMLOptions(esDomainResourceName, resourceName), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomainSAMLOptions_Disabled(t *testing.T) { + rName := sdkacctest.RandomWithPrefix("acc-test") + rUserName := sdkacctest.RandomWithPrefix("opensearch-master-user") + idpEntityId := fmt.Sprintf("https://%s", acctest.RandomDomainName()) + + resourceName := "aws_opensearch_domain_saml_options.test" + esDomainResourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckESDomainSAMLOptionsDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainSAMLOptionsConfig(rUserName, rName, idpEntityId), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(resourceName, "saml_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "saml_options.0.session_timeout_minutes", "60"), + testAccCheckESDomainSAMLOptions(esDomainResourceName, resourceName), + ), + }, + { + Config: testAccDomainSAMLOptionsConfigDisabled(rUserName, rName), + Check: resource.ComposeTestCheckFunc( + 
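+ // With SAML disabled, the API reports session_timeout_minutes as its zero value.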
resource.TestCheckResourceAttr(resourceName, "saml_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "saml_options.0.session_timeout_minutes", "0"), + testAccCheckESDomainSAMLOptions(esDomainResourceName, resourceName), + ), + }, + }, + }) +} + +func testAccCheckESDomainSAMLOptionsDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_opensearch_domain_saml_options" { + continue + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn + _, err := tfopensearch.FindDomainByName(conn, rs.Primary.Attributes["domain_name"]) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("OpenSearch domain saml options %s still exists", rs.Primary.ID) + } + + return nil +} + +func testAccCheckESDomainSAMLOptions(esResource string, samlOptionsResource string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[esResource] + if !ok { + return fmt.Errorf("Not found: %s", esResource) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + options, ok := s.RootModule().Resources[samlOptionsResource] + if !ok { + return fmt.Errorf("Not found: %s", samlOptionsResource) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn + _, err := tfopensearch.FindDomainByName(conn, options.Primary.Attributes["domain_name"]) + + return err + } +} + +func testAccDomainSAMLOptionsConfig(userName, domainName, idpEntityId string) string { + return fmt.Sprintf(` +resource "aws_iam_user" "test" { + name = %[1]q +} + +resource "aws_opensearch_domain" "test" { + domain_name = %[2]q + engine_version = "Elasticsearch_7.10" + + cluster_config { + instance_type = "r5.large.search" + } + + # Advanced security option must be enabled to configure SAML. + advanced_security_options { + enabled = true + internal_user_database_enabled = false + master_user_options { + master_user_arn = aws_iam_user.test.arn + } + } + + # You must enable node-to-node encryption to use advanced security options. + encrypt_at_rest { + enabled = true + } + + domain_endpoint_options { + enforce_https = true + tls_security_policy = "Policy-Min-TLS-1-2-2019-07" + } + + node_to_node_encryption { + enabled = true + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} + +resource "aws_opensearch_domain_saml_options" "test" { + domain_name = aws_opensearch_domain.test.domain_name + + saml_options { + enabled = true + idp { + entity_id = %[3]q + metadata_content = templatefile("./test-fixtures/saml-metadata.xml.tpl", { entity_id = %[3]q }) + } + } +} +`, userName, domainName, idpEntityId) +} + +func testAccDomainSAMLOptionsConfigUpdate(userName, domainName, idpEntityId string) string { + return fmt.Sprintf(` +resource "aws_iam_user" "test" { + name = %[1]q +} + +resource "aws_opensearch_domain" "test" { + domain_name = %[2]q + engine_version = "Elasticsearch_7.10" + + cluster_config { + instance_type = "r5.large.search" + } + + # Advanced security option must be enabled to configure SAML. + advanced_security_options { + enabled = true + internal_user_database_enabled = false + master_user_options { + master_user_arn = aws_iam_user.test.arn + } + } + + # You must enable node-to-node encryption to use advanced security options. 
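+ # Encryption at rest and enforced HTTPS must also be enabled for advanced security.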
+ encrypt_at_rest { + enabled = true + } + + domain_endpoint_options { + enforce_https = true + tls_security_policy = "Policy-Min-TLS-1-2-2019-07" + } + + node_to_node_encryption { + enabled = true + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} + +resource "aws_opensearch_domain_saml_options" "test" { + domain_name = aws_opensearch_domain.test.domain_name + + saml_options { + enabled = true + idp { + entity_id = %[3]q + metadata_content = templatefile("./test-fixtures/saml-metadata.xml.tpl", { entity_id = %[3]q }) + } + session_timeout_minutes = 180 + } +} +`, userName, domainName, idpEntityId) +} + +func testAccDomainSAMLOptionsConfigDisabled(userName string, domainName string) string { + return fmt.Sprintf(` +resource "aws_iam_user" "test" { + name = %[1]q +} + +resource "aws_opensearch_domain" "test" { + domain_name = %[2]q + engine_version = "Elasticsearch_7.10" + + cluster_config { + instance_type = "r5.large.search" + } + + # Advanced security option must be enabled to configure SAML. + advanced_security_options { + enabled = true + internal_user_database_enabled = false + master_user_options { + master_user_arn = aws_iam_user.test.arn + } + } + + # You must enable node-to-node encryption to use advanced security options. + encrypt_at_rest { + enabled = true + } + + domain_endpoint_options { + enforce_https = true + tls_security_policy = "Policy-Min-TLS-1-2-2019-07" + } + + node_to_node_encryption { + enabled = true + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} + +resource "aws_opensearch_domain_saml_options" "test" { + domain_name = aws_opensearch_domain.test.domain_name + + saml_options { + enabled = false + } +} +`, userName, domainName) +} diff --git a/internal/service/opensearch/domain_structure.go b/internal/service/opensearch/domain_structure.go new file mode 100644 index 000000000000..c23bb8455b39 --- /dev/null +++ b/internal/service/opensearch/domain_structure.go @@ -0,0 +1,324 @@ +package opensearch + +import ( + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/opensearchservice" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" +) + +func expandAdvancedSecurityOptions(m []interface{}) *opensearchservice.AdvancedSecurityOptionsInput_ { + config := opensearchservice.AdvancedSecurityOptionsInput_{} + group := m[0].(map[string]interface{}) + + if advancedSecurityEnabled, ok := group["enabled"]; ok { + config.Enabled = aws.Bool(advancedSecurityEnabled.(bool)) + + if advancedSecurityEnabled.(bool) { + if v, ok := group["internal_user_database_enabled"].(bool); ok { + config.InternalUserDatabaseEnabled = aws.Bool(v) + } + + if v, ok := group["master_user_options"].([]interface{}); ok { + if len(v) > 0 && v[0] != nil { + muo := opensearchservice.MasterUserOptions{} + masterUserOptions := v[0].(map[string]interface{}) + + if v, ok := masterUserOptions["master_user_arn"].(string); ok && v != "" { + muo.MasterUserARN = aws.String(v) + } + + if v, ok := masterUserOptions["master_user_name"].(string); ok && v != "" { + muo.MasterUserName = aws.String(v) + } + + if v, ok := masterUserOptions["master_user_password"].(string); ok && v != "" { + muo.MasterUserPassword = aws.String(v) + } + + config.SetMasterUserOptions(&muo) + } + } + } + } + + return &config +} + +func expandAutoTuneOptions(tfMap map[string]interface{}) *opensearchservice.AutoTuneOptions { + if tfMap == nil { + return nil + } + + options := &opensearchservice.AutoTuneOptions{} + + autoTuneOptionsInput := 
expandAutoTuneOptionsInput(tfMap) + + options.DesiredState = autoTuneOptionsInput.DesiredState + options.MaintenanceSchedules = autoTuneOptionsInput.MaintenanceSchedules + + options.RollbackOnDisable = aws.String(tfMap["rollback_on_disable"].(string)) + + return options +} + +func expandAutoTuneOptionsInput(tfMap map[string]interface{}) *opensearchservice.AutoTuneOptionsInput_ { + if tfMap == nil { + return nil + } + + options := &opensearchservice.AutoTuneOptionsInput_{} + + options.DesiredState = aws.String(tfMap["desired_state"].(string)) + + if v, ok := tfMap["maintenance_schedule"].(*schema.Set); ok && v.Len() > 0 { + options.MaintenanceSchedules = expandAutoTuneMaintenanceSchedules(v.List()) + } + + return options +} + +func expandAutoTuneMaintenanceSchedules(tfList []interface{}) []*opensearchservice.AutoTuneMaintenanceSchedule { + var autoTuneMaintenanceSchedules []*opensearchservice.AutoTuneMaintenanceSchedule + + for _, tfMapRaw := range tfList { + tfMap, _ := tfMapRaw.(map[string]interface{}) + + autoTuneMaintenanceSchedule := &opensearchservice.AutoTuneMaintenanceSchedule{} + + startAt, _ := time.Parse(time.RFC3339, tfMap["start_at"].(string)) + autoTuneMaintenanceSchedule.StartAt = aws.Time(startAt) + + if v, ok := tfMap["duration"].([]interface{}); ok { + autoTuneMaintenanceSchedule.Duration = expandAutoTuneMaintenanceScheduleDuration(v[0].(map[string]interface{})) + } + + autoTuneMaintenanceSchedule.CronExpressionForRecurrence = aws.String(tfMap["cron_expression_for_recurrence"].(string)) + + autoTuneMaintenanceSchedules = append(autoTuneMaintenanceSchedules, autoTuneMaintenanceSchedule) + } + + return autoTuneMaintenanceSchedules +} + +func expandAutoTuneMaintenanceScheduleDuration(tfMap map[string]interface{}) *opensearchservice.Duration { + autoTuneMaintenanceScheduleDuration := &opensearchservice.Duration{ + Value: aws.Int64(int64(tfMap["value"].(int))), + Unit: aws.String(tfMap["unit"].(string)), + } + + return autoTuneMaintenanceScheduleDuration +} + +func expandESSAMLOptions(data []interface{}) *opensearchservice.SAMLOptionsInput_ { + if len(data) == 0 { + return nil + } + + if data[0] == nil { + return &opensearchservice.SAMLOptionsInput_{} + } + + options := opensearchservice.SAMLOptionsInput_{} + group := data[0].(map[string]interface{}) + + if SAMLEnabled, ok := group["enabled"]; ok { + options.Enabled = aws.Bool(SAMLEnabled.(bool)) + + if SAMLEnabled.(bool) { + options.Idp = expandSAMLOptionsIdp(group["idp"].([]interface{})) + if v, ok := group["master_backend_role"].(string); ok && v != "" { + options.MasterBackendRole = aws.String(v) + } + if v, ok := group["master_user_name"].(string); ok && v != "" { + options.MasterUserName = aws.String(v) + } + if v, ok := group["roles_key"].(string); ok { + options.RolesKey = aws.String(v) + } + if v, ok := group["session_timeout_minutes"].(int); ok { + options.SessionTimeoutMinutes = aws.Int64(int64(v)) + } + if v, ok := group["subject_key"].(string); ok { + options.SubjectKey = aws.String(v) + } + } + } + + return &options +} + +func expandSAMLOptionsIdp(l []interface{}) *opensearchservice.SAMLIdp { + if len(l) == 0 { + return nil + } + + if l[0] == nil { + return &opensearchservice.SAMLIdp{} + } + + m := l[0].(map[string]interface{}) + + return &opensearchservice.SAMLIdp{ + EntityId: aws.String(m["entity_id"].(string)), + MetadataContent: aws.String(m["metadata_content"].(string)), + } +} + +func flattenAdvancedSecurityOptions(advancedSecurityOptions *opensearchservice.AdvancedSecurityOptions) []map[string]interface{} { 
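+ // internal_user_database_enabled is only meaningful while advanced security is enabled, so it is flattened conditionally; master_user_options is never returned by the API.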
+ if advancedSecurityOptions == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{} + m["enabled"] = aws.BoolValue(advancedSecurityOptions.Enabled) + if aws.BoolValue(advancedSecurityOptions.Enabled) { + m["internal_user_database_enabled"] = aws.BoolValue(advancedSecurityOptions.InternalUserDatabaseEnabled) + } + + return []map[string]interface{}{m} +} + +func flattenAutoTuneOptions(autoTuneOptions *opensearchservice.AutoTuneOptions) map[string]interface{} { + if autoTuneOptions == nil { + return nil + } + + m := map[string]interface{}{} + + m["desired_state"] = aws.StringValue(autoTuneOptions.DesiredState) + + if v := autoTuneOptions.MaintenanceSchedules; v != nil { + m["maintenance_schedule"] = flattenAutoTuneMaintenanceSchedules(v) + } + + m["rollback_on_disable"] = aws.StringValue(autoTuneOptions.RollbackOnDisable) + + return m +} + +func flattenAutoTuneMaintenanceSchedules(autoTuneMaintenanceSchedules []*opensearchservice.AutoTuneMaintenanceSchedule) []interface{} { + if len(autoTuneMaintenanceSchedules) == 0 { + return nil + } + + var tfList []interface{} + + for _, autoTuneMaintenanceSchedule := range autoTuneMaintenanceSchedules { + m := map[string]interface{}{} + + m["start_at"] = aws.TimeValue(autoTuneMaintenanceSchedule.StartAt).Format(time.RFC3339) + + m["duration"] = []interface{}{flattenAutoTuneMaintenanceScheduleDuration(autoTuneMaintenanceSchedule.Duration)} + + m["cron_expression_for_recurrence"] = aws.StringValue(autoTuneMaintenanceSchedule.CronExpressionForRecurrence) + + tfList = append(tfList, m) + } + + return tfList +} + +func flattenAutoTuneMaintenanceScheduleDuration(autoTuneMaintenanceScheduleDuration *opensearchservice.Duration) map[string]interface{} { + m := map[string]interface{}{} + + m["value"] = aws.Int64Value(autoTuneMaintenanceScheduleDuration.Value) + m["unit"] = aws.StringValue(autoTuneMaintenanceScheduleDuration.Unit) + + return m +} + +func flattenESSAMLOptions(d *schema.ResourceData, samlOptions *opensearchservice.SAMLOptionsOutput_) []interface{} { + if samlOptions == nil { + return nil + } + + m := map[string]interface{}{ + "enabled": aws.BoolValue(samlOptions.Enabled), + "idp": flattenESSAMLIdpOptions(samlOptions.Idp), + } + + m["roles_key"] = aws.StringValue(samlOptions.RolesKey) + m["session_timeout_minutes"] = aws.Int64Value(samlOptions.SessionTimeoutMinutes) + m["subject_key"] = aws.StringValue(samlOptions.SubjectKey) + + // samlOptions.master_backend_role and samlOptions.master_user_name are added to the + // all_access role in Kibana's security manager. These values cannot be read or + // modified via the opensearchservice API, so we ignore them on read and persist + // the values already in state.
+ m["master_backend_role"] = d.Get("saml_options.0.master_backend_role").(string) + m["master_user_name"] = d.Get("saml_options.0.master_user_name").(string) + + return []interface{}{m} +} + +func flattenESSAMLIdpOptions(SAMLIdp *opensearchservice.SAMLIdp) []interface{} { + if SAMLIdp == nil { + return []interface{}{} + } + + m := map[string]interface{}{ + "entity_id": aws.StringValue(SAMLIdp.EntityId), + "metadata_content": aws.StringValue(SAMLIdp.MetadataContent), + } + + return []interface{}{m} +} + +func getMasterUserOptions(d *schema.ResourceData) []interface{} { + if v, ok := d.GetOk("advanced_security_options"); ok { + options := v.([]interface{}) + if len(options) > 0 && options[0] != nil { + m := options[0].(map[string]interface{}) + if opts, ok := m["master_user_options"]; ok { + return opts.([]interface{}) + } + } + } + return []interface{}{} +} + +func getUserDBEnabled(d *schema.ResourceData) bool { + if v, ok := d.GetOk("advanced_security_options"); ok { + options := v.([]interface{}) + if len(options) > 0 && options[0] != nil { + m := options[0].(map[string]interface{}) + if enabled, ok := m["internal_user_database_enabled"]; ok { + return enabled.(bool) + } + } + } + return false +} + +func expandLogPublishingOptions(m *schema.Set) map[string]*opensearchservice.LogPublishingOption { + options := make(map[string]*opensearchservice.LogPublishingOption) + + for _, vv := range m.List() { + lo := vv.(map[string]interface{}) + options[lo["log_type"].(string)] = &opensearchservice.LogPublishingOption{ + CloudWatchLogsLogGroupArn: aws.String(lo["cloudwatch_log_group_arn"].(string)), + Enabled: aws.Bool(lo["enabled"].(bool)), + } + } + + return options +} + +func flattenLogPublishingOptions(o map[string]*opensearchservice.LogPublishingOption) []map[string]interface{} { + m := make([]map[string]interface{}, 0) + for logType, val := range o { + mm := map[string]interface{}{ + "log_type": logType, + "enabled": aws.BoolValue(val.Enabled), + } + + if val.CloudWatchLogsLogGroupArn != nil { + mm["cloudwatch_log_group_arn"] = aws.StringValue(val.CloudWatchLogsLogGroupArn) + } + + m = append(m, mm) + } + return m +} diff --git a/internal/service/opensearch/domain_test.go b/internal/service/opensearch/domain_test.go new file mode 100644 index 000000000000..545b9ab26620 --- /dev/null +++ b/internal/service/opensearch/domain_test.go @@ -0,0 +1,2788 @@ +package opensearch_test + +import ( + "fmt" + "regexp" + "strings" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" + "github.com/aws/aws-sdk-go/service/elb" + "github.com/aws/aws-sdk-go/service/iam" + "github.com/aws/aws-sdk-go/service/opensearchservice" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfopensearch "github.com/hashicorp/terraform-provider-aws/internal/service/opensearch" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccOpenSearchDomain_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + 
resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "engine_version", "OpenSearch_1.1"), + resource.TestMatchResourceAttr(resourceName, "kibana_endpoint", regexp.MustCompile(`.*(opensearch|es)\..*/_plugin/kibana/`)), + resource.TestCheckResourceAttr(resourceName, "vpc_options.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_requireHTTPS(t *testing.T) { + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_domainEndpointOptions(rName, true, "Policy-Min-TLS-1-0-2019-07"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists("aws_opensearch_domain.test", &domain), + testAccCheckDomainEndpointOptions(true, "Policy-Min-TLS-1-0-2019-07", &domain), + ), + }, + { + ResourceName: "aws_opensearch_domain.test", + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_domainEndpointOptions(rName, true, "Policy-Min-TLS-1-2-2019-07"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists("aws_opensearch_domain.test", &domain), + testAccCheckDomainEndpointOptions(true, "Policy-Min-TLS-1-2-2019-07", &domain), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomain_customEndpoint(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + customEndpoint := fmt.Sprintf("%s.example.com", rName[:28]) + certResourceName := "aws_acm_certificate.test" + certKey := acctest.TLSRSAPrivateKeyPEM(2048) + certificate := acctest.TLSRSAX509SelfSignedCertificatePEM(certKey, customEndpoint) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_customEndpoint(rName, true, "Policy-Min-TLS-1-0-2019-07", true, customEndpoint, certKey, certificate), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "domain_endpoint_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "domain_endpoint_options.0.custom_endpoint_enabled", "true"), + resource.TestCheckResourceAttrSet(resourceName, "domain_endpoint_options.0.custom_endpoint"), + resource.TestCheckResourceAttrPair(resourceName, "domain_endpoint_options.0.custom_endpoint_certificate_arn", 
certResourceName, "arn"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_customEndpoint(rName, true, "Policy-Min-TLS-1-0-2019-07", true, customEndpoint, certKey, certificate), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckDomainEndpointOptions(true, "Policy-Min-TLS-1-0-2019-07", &domain), + testAccCheckCustomEndpoint(resourceName, true, customEndpoint, &domain), + ), + }, + { + Config: testAccDomainConfig_customEndpoint(rName, true, "Policy-Min-TLS-1-0-2019-07", false, customEndpoint, certKey, certificate), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckDomainEndpointOptions(true, "Policy-Min-TLS-1-0-2019-07", &domain), + testAccCheckCustomEndpoint(resourceName, false, customEndpoint, &domain), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomain_Cluster_zoneAwareness(t *testing.T) { + var domain1, domain2, domain3, domain4 opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_clusterZoneAwarenessAZCount(rName, 3), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain1), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_config.0.availability_zone_count", "3"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_enabled", "true"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_clusterZoneAwarenessAZCount(rName, 2), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain2), + testAccCheckDomainNotRecreated(&domain1, &domain2), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_config.0.availability_zone_count", "2"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_enabled", "true"), + ), + }, + { + Config: testAccDomainConfig_clusterZoneAwarenessEnabled(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain3), + testAccCheckDomainNotRecreated(&domain2, &domain3), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_config.#", "0"), + ), + }, + { + Config: testAccDomainConfig_clusterZoneAwarenessAZCount(rName, 3), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain4), + testAccCheckDomainNotRecreated(&domain3, &domain4), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_config.0.availability_zone_count", "3"), + 
resource.TestCheckResourceAttr(resourceName, "cluster_config.0.zone_awareness_enabled", "true"), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomain_Cluster_warm(t *testing.T) { + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_clusterWarm(rName, "ultrawarm1.medium.search", false, 6), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_count", "0"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_type", ""), + ), + }, + { + Config: testAccDomainConfig_clusterWarm(rName, "ultrawarm1.medium.search", true, 6), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_count", "6"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_type", "ultrawarm1.medium.search"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_clusterWarm(rName, "ultrawarm1.medium.search", true, 7), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_count", "7"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_type", "ultrawarm1.medium.search"), + ), + }, + { + Config: testAccDomainConfig_clusterWarm(rName, "ultrawarm1.large.search", true, 7), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_count", "7"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.warm_type", "ultrawarm1.large.search"), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomain_Cluster_dedicatedMaster(t *testing.T) { + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_dedicatedClusterMaster(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_dedicatedClusterMaster(rName, true), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + ), + }, + { + Config: testAccDomainConfig_dedicatedClusterMaster(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomain_Cluster_update(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var input opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_clusterUpdate(rName, 2, 22), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &input), + testAccCheckNumberOfInstances(2, &input), + testAccCheckSnapshotHour(22, &input), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_clusterUpdate(rName, 4, 23), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &input), + testAccCheckNumberOfInstances(4, &input), + testAccCheckSnapshotHour(23, &input), + ), + }, + }}) +} + +func TestAccOpenSearchDomain_duplicate(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn + _, err := conn.DeleteDomain(&opensearchservice.DeleteDomainInput{ + DomainName: aws.String(rName[:28]), + }) + return err + }, + Steps: []resource.TestStep{ + { + PreConfig: func() { + // Create duplicate + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn + _, err := conn.CreateDomain(&opensearchservice.CreateDomainInput{ + DomainName: aws.String(rName[:28]), + EBSOptions: &opensearchservice.EBSOptions{ + EBSEnabled: aws.Bool(true), + VolumeSize: aws.Int64(10), + }, + }) + if err != nil { + t.Fatal(err) + } + + err = tfopensearch.WaitForDomainCreation(conn, rName[:28], 60*time.Minute) + if err != nil { + t.Fatal(err) + } + }, + Config: testAccDomainConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "engine_version", "OpenSearch_1.1")), + ExpectError: regexp.MustCompile(`domain .+ already exists`), + }, + }, + }) +} + +func TestAccOpenSearchDomain_v23(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: 
acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_v23(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr( + resourceName, "engine_version", "Elasticsearch_2.3"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_complex(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_complex(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_VPC_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_vpc(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_VPC_update(t *testing.T) { + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_vpcUpdate1(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckNumberOfSecurityGroups(1, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_vpcUpdate2(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckNumberOfSecurityGroups(2, &domain), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomain_VPC_internetToVPCEndpoint(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain 
opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_internetToVpcEndpoint(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomain_autoTuneOptions(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + autoTuneStartAtTime := testAccGetValidStartAtTime(t, "24h") + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_autoTuneOptions(rName, autoTuneStartAtTime), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "engine_version", "Elasticsearch_6.7"), + resource.TestMatchResourceAttr(resourceName, "kibana_endpoint", regexp.MustCompile(`.*(opensearch|es)\..*/_plugin/kibana/`)), + resource.TestCheckResourceAttr(resourceName, "auto_tune_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "auto_tune_options.0.desired_state", "ENABLED"), + resource.TestCheckResourceAttr(resourceName, "auto_tune_options.0.maintenance_schedule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "auto_tune_options.0.maintenance_schedule.0.start_at", autoTuneStartAtTime), + resource.TestCheckResourceAttr(resourceName, "auto_tune_options.0.maintenance_schedule.0.duration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "auto_tune_options.0.maintenance_schedule.0.duration.0.value", "2"), + resource.TestCheckResourceAttr(resourceName, "auto_tune_options.0.maintenance_schedule.0.duration.0.unit", "HOURS"), + resource.TestCheckResourceAttr(resourceName, "auto_tune_options.0.maintenance_schedule.0.cron_expression_for_recurrence", "cron(0 0 ? 
* 1 *)"), + resource.TestCheckResourceAttr(resourceName, "auto_tune_options.0.rollback_on_disable", "NO_ROLLBACK"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_AdvancedSecurityOptions_userDB(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_advancedSecurityOptionsUserDB(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckAdvancedSecurityOptions(true, true, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + // MasterUserOptions are not returned from DescribeDomainConfig + ImportStateVerifyIgnore: []string{ + "advanced_security_options.0.internal_user_database_enabled", + "advanced_security_options.0.master_user_options", + }, + }, + }, + }) +} + +func TestAccOpenSearchDomain_AdvancedSecurityOptions_iam(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_advancedSecurityOptionsIAM(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckAdvancedSecurityOptions(true, false, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + // MasterUserOptions are not returned from DescribeDomainConfig + ImportStateVerifyIgnore: []string{ + "advanced_security_options.0.internal_user_database_enabled", + "advanced_security_options.0.master_user_options", + }, + }, + }, + }) +} + +func TestAccOpenSearchDomain_AdvancedSecurityOptions_disabled(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_advancedSecurityOptionsDisabled(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckAdvancedSecurityOptions(false, false, &domain), + ), + }, + 
{ + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + // MasterUserOptions are not returned from DescribeDomainConfig + ImportStateVerifyIgnore: []string{ + "advanced_security_options.0.internal_user_database_enabled", + "advanced_security_options.0.master_user_options", + }, + }, + }, + }) +} + +func TestAccOpenSearchDomain_LogPublishingOptions_indexSlowLogs(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_logPublishingOptions(rName, opensearchservice.LogTypeIndexSlowLogs), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "log_publishing_options.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "log_publishing_options.*", map[string]string{ + "log_type": opensearchservice.LogTypeIndexSlowLogs, + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_LogPublishingOptions_searchSlowLogs(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_logPublishingOptions(rName, opensearchservice.LogTypeSearchSlowLogs), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "log_publishing_options.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "log_publishing_options.*", map[string]string{ + "log_type": opensearchservice.LogTypeSearchSlowLogs, + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_LogPublishingOptions_applicationLogs(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_logPublishingOptions(rName, opensearchservice.LogTypeEsApplicationLogs), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "log_publishing_options.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "log_publishing_options.*", map[string]string{ + "log_type": opensearchservice.LogTypeEsApplicationLogs, + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_LogPublishingOptions_auditLogs(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_logPublishingOptions(rName, opensearchservice.LogTypeAuditLogs), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "log_publishing_options.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "log_publishing_options.*", map[string]string{ + "log_type": opensearchservice.LogTypeAuditLogs, + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + // MasterUserOptions are not returned from DescribeDomainConfig + ImportStateVerifyIgnore: []string{"advanced_security_options.0.master_user_options"}, + }, + }, + }) +} + +func TestAccOpenSearchDomain_CognitoOptions_createAndRemove(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + testAccPreCheckCognitoIdentityProvider(t) + testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) + }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_cognitoOptions(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckCognitoOptions(true, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_cognitoOptions(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckCognitoOptions(false, &domain), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomain_CognitoOptions_update(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + testAccPreCheckCognitoIdentityProvider(t) + testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) + }, + 
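+ // Cognito authentication requires an existing identity provider, hence the extra precheck above.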
ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_cognitoOptions(rName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckCognitoOptions(false, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_cognitoOptions(rName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckCognitoOptions(true, &domain), + ), + }, + }, + }) +} + +func TestAccOpenSearchDomain_Policy_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + resourceName := "aws_opensearch_domain.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_policy(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_Policy_ignoreEquivalent(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + resourceName := "aws_opensearch_domain.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_policyOrder(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + ), + }, + { + Config: testAccDomainConfig_policyNewOrder(rName), + PlanOnly: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_Encryption_atRestDefaultKey(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + resourceName := "aws_opensearch_domain.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_encryptAtRestDefaultKey(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckDomainEncrypted(true, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_Encryption_atRestSpecifyKey(t *testing.T) { + if testing.Short() { + 
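+ // Domain create/destroy cycles routinely take tens of minutes, hence the -short guard.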
t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + resourceName := "aws_opensearch_domain.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_encryptAtRestWithKey(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckDomainEncrypted(true, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_Encryption_nodeToNode(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + resourceName := "aws_opensearch_domain.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_nodeToNodeEncryption(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + testAccCheckNodeToNodeEncrypted(true, &domain), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_tags(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckELBDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_tags1(rName, "key1", "value1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_tags2(rName, "key1", "value1updated", "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), + resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + { + Config: testAccDomainConfig_tags1(rName, "key2", "value2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), + ), + }, + 
}, + }) +} + +func TestAccOpenSearchDomain_VolumeType_update(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var input opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_clusterUpdateEBSVolume(rName, 24), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &input), + testAccCheckEBSVolumeEnabled(true, &input), + testAccCheckEBSVolumeSize(24, &input), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_clusterUpdateInstanceStore(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &input), + testAccCheckEBSVolumeEnabled(false, &input), + ), + }, + { + Config: testAccDomainConfig_clusterUpdateEBSVolume(rName, 12), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &input), + testAccCheckEBSVolumeEnabled(true, &input), + testAccCheckEBSVolumeSize(12, &input), + ), + }, + }}) +} + +// Reference: https://github.com/hashicorp/terraform-provider-aws/issues/13867 +func TestAccOpenSearchDomain_VolumeType_missing(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + var domain opensearchservice.DomainStatus + resourceName := "aws_opensearch_domain.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_disabledEBSNullVolume(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain), + resource.TestCheckResourceAttr(resourceName, "cluster_config.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.instance_type", "i3.xlarge.search"), + resource.TestCheckResourceAttr(resourceName, "cluster_config.0.instance_count", "1"), + resource.TestCheckResourceAttr(resourceName, "ebs_options.#", "1"), + resource.TestCheckResourceAttr(resourceName, "ebs_options.0.ebs_enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "ebs_options.0.volume_size", "0"), + resource.TestCheckResourceAttr(resourceName, "ebs_options.0.volume_type", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccOpenSearchDomain_versionUpdate(t *testing.T) { + var domain1, domain2, domain3 opensearchservice.DomainStatus + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: 
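+ // Engine upgrades 5.5 -> 5.6 -> 6.3 below must happen in place; testAccCheckDomainNotRecreated compares cluster creation dates to prove it.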
testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig_clusterUpdateVersion(rName, "Elasticsearch_5.5"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain1), + resource.TestCheckResourceAttr(resourceName, "engine_version", "Elasticsearch_5.5"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateId: rName[:28], + ImportStateVerify: true, + }, + { + Config: testAccDomainConfig_clusterUpdateVersion(rName, "Elasticsearch_5.6"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain2), + testAccCheckDomainNotRecreated(&domain1, &domain2), + resource.TestCheckResourceAttr(resourceName, "engine_version", "Elasticsearch_5.6"), + ), + }, + { + Config: testAccDomainConfig_clusterUpdateVersion(rName, "Elasticsearch_6.3"), + Check: resource.ComposeTestCheckFunc( + testAccCheckDomainExists(resourceName, &domain3), + testAccCheckDomainNotRecreated(&domain2, &domain3), + resource.TestCheckResourceAttr(resourceName, "engine_version", "Elasticsearch_6.3"), + ), + }, + }}) +} + +func TestAccOpenSearchDomain_disappears(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_opensearch_domain.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIAMServiceLinkedRoleOpenSearch(t) }, + ErrorCheck: acctest.ErrorCheck(t, opensearchservice.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDomainDestroy, + Steps: []resource.TestStep{ + { + Config: testAccDomainConfig(rName), + Check: resource.ComposeTestCheckFunc( + acctest.CheckResourceDisappears(acctest.Provider, tfopensearch.ResourceDomain(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func testAccCheckDomainEndpointOptions(enforceHTTPS bool, tls string, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + options := status.DomainEndpointOptions + if *options.EnforceHTTPS != enforceHTTPS { + return fmt.Errorf("EnforceHTTPS differ. Given: %t, Expected: %t", *options.EnforceHTTPS, enforceHTTPS) + } + if *options.TLSSecurityPolicy != tls { + return fmt.Errorf("TLSSecurityPolicy differ. Given: %s, Expected: %s", *options.TLSSecurityPolicy, tls) + } + return nil + } +} + +func testAccCheckCustomEndpoint(n string, customEndpointEnabled bool, customEndpoint string, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + options := status.DomainEndpointOptions + if *options.CustomEndpointEnabled != customEndpointEnabled { + return fmt.Errorf("CustomEndpointEnabled differ. Given: %t, Expected: %t", *options.CustomEndpointEnabled, customEndpointEnabled) + } + if *options.CustomEndpointEnabled { + if *options.CustomEndpoint != customEndpoint { + return fmt.Errorf("CustomEndpoint differ. Given: %s, Expected: %s", *options.CustomEndpoint, customEndpoint) + } + customEndpointCertificateArn := rs.Primary.Attributes["domain_endpoint_options.0.custom_endpoint_certificate_arn"] + if *options.CustomEndpointCertificateArn != customEndpointCertificateArn { + return fmt.Errorf("CustomEndpointCertificateArn differ. 
Given: %s, Expected: %s", *options.CustomEndpointCertificateArn, customEndpointCertificateArn) + } + } + return nil + } +} + +func testAccCheckNumberOfSecurityGroups(numberOfSecurityGroups int, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + count := len(status.VPCOptions.SecurityGroupIds) + if count != numberOfSecurityGroups { + return fmt.Errorf("Number of security groups differ. Given: %d, Expected: %d", count, numberOfSecurityGroups) + } + return nil + } +} + +func testAccCheckEBSVolumeSize(ebsVolumeSize int, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conf := status.EBSOptions + if *conf.VolumeSize != int64(ebsVolumeSize) { + return fmt.Errorf("EBS volume size differ. Given: %d, Expected: %d", *conf.VolumeSize, ebsVolumeSize) + } + return nil + } +} + +func testAccCheckEBSVolumeEnabled(ebsEnabled bool, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conf := status.EBSOptions + if *conf.EBSEnabled != ebsEnabled { + return fmt.Errorf("EBS volume enabled. Given: %t, Expected: %t", *conf.EBSEnabled, ebsEnabled) + } + return nil + } +} + +func testAccCheckSnapshotHour(snapshotHour int, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conf := status.SnapshotOptions + if *conf.AutomatedSnapshotStartHour != int64(snapshotHour) { + return fmt.Errorf("Snapshots start hour differ. Given: %d, Expected: %d", *conf.AutomatedSnapshotStartHour, snapshotHour) + } + return nil + } +} + +func testAccCheckNumberOfInstances(numberOfInstances int, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conf := status.ClusterConfig + if *conf.InstanceCount != int64(numberOfInstances) { + return fmt.Errorf("Number of instances differ. Given: %d, Expected: %d", *conf.InstanceCount, numberOfInstances) + } + return nil + } +} + +func testAccCheckDomainEncrypted(encrypted bool, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conf := status.EncryptionAtRestOptions + if *conf.Enabled != encrypted { + return fmt.Errorf("Encrypt at rest not set properly. Given: %t, Expected: %t", *conf.Enabled, encrypted) + } + return nil + } +} + +func testAccCheckNodeToNodeEncrypted(encrypted bool, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + options := status.NodeToNodeEncryptionOptions + if aws.BoolValue(options.Enabled) != encrypted { + return fmt.Errorf("Node-to-Node Encryption not set properly. Given: %t, Expected: %t", aws.BoolValue(options.Enabled), encrypted) + } + return nil + } +} + +func testAccCheckAdvancedSecurityOptions(enabled bool, userDbEnabled bool, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conf := status.AdvancedSecurityOptions + + if aws.BoolValue(conf.Enabled) != enabled { + return fmt.Errorf( + "AdvancedSecurityOptions.Enabled not set properly. Given: %t, Expected: %t", + aws.BoolValue(conf.Enabled), + enabled, + ) + } + + if aws.BoolValue(conf.Enabled) { + if aws.BoolValue(conf.InternalUserDatabaseEnabled) != userDbEnabled { + return fmt.Errorf( + "AdvancedSecurityOptions.InternalUserDatabaseEnabled not set properly. 
Given: %t, Expected: %t", + aws.BoolValue(conf.InternalUserDatabaseEnabled), + userDbEnabled, + ) + } + } + + return nil + } +} + +func testAccCheckCognitoOptions(enabled bool, status *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conf := status.CognitoOptions + if *conf.Enabled != enabled { + return fmt.Errorf("CognitoOptions not set properly. Given: %t, Expected: %t", *conf.Enabled, enabled) + } + return nil + } +} + +func testAccCheckDomainExists(n string, domain *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No OpenSearch Domain ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn + resp, err := tfopensearch.FindDomainByName(conn, rs.Primary.Attributes["domain_name"]) + if err != nil { + return fmt.Errorf("Error describing domain: %s", err.Error()) + } + + *domain = *resp + + return nil + } +} + +func testAccCheckDomainNotRecreated(i, j *opensearchservice.DomainStatus) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn + + iConfig, err := conn.DescribeDomainConfig(&opensearchservice.DescribeDomainConfigInput{ + DomainName: i.DomainName, + }) + if err != nil { + return err + } + jConfig, err := conn.DescribeDomainConfig(&opensearchservice.DescribeDomainConfigInput{ + DomainName: j.DomainName, + }) + if err != nil { + return err + } + + if !aws.TimeValue(iConfig.DomainConfig.ClusterConfig.Status.CreationDate).Equal(aws.TimeValue(jConfig.DomainConfig.ClusterConfig.Status.CreationDate)) { + return fmt.Errorf("OpenSearch Domain was recreated") + } + + return nil + } +} + +func testAccCheckDomainDestroy(s *terraform.State) error { + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_opensearch_domain" { + continue + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).OpenSearchConn + _, err := tfopensearch.FindDomainByName(conn, rs.Primary.Attributes["domain_name"]) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("OpenSearch domain %s still exists", rs.Primary.ID) + + } + return nil +} + +func testAccGetValidStartAtTime(t *testing.T, timeUntilStart string) string { + n := time.Now().UTC() + d, err := time.ParseDuration(timeUntilStart) + if err != nil { + t.Fatalf("err parsing timeUntilStart: %s", err) + } + return n.Add(d).Format(time.RFC3339) +} + +func testAccPreCheckIAMServiceLinkedRoleOpenSearch(t *testing.T) { + conn := acctest.Provider.Meta().(*conns.AWSClient).IAMConn + dnsSuffix := acctest.Provider.Meta().(*conns.AWSClient).DNSSuffix + + input := &iam.ListRolesInput{ + PathPrefix: aws.String("/aws-service-role/opensearchservice."), + } + + var role *iam.Role + err := conn.ListRolesPages(input, func(page *iam.ListRolesOutput, lastPage bool) bool { + for _, r := range page.Roles { + if strings.HasPrefix(aws.StringValue(r.Path), "/aws-service-role/opensearchservice.") { + role = r + } + } + + return !lastPage + }) + + if acctest.PreCheckSkipError(err) { + t.Skipf("skipping acceptance testing: %s", err) + } + + if err != nil { + t.Fatalf("unexpected PreCheck error: %s", err) + } + + if role == nil { + t.Fatalf("missing IAM Service Linked Role (opensearchservice.%s), please create it in the AWS account and retry", dnsSuffix) + } +} + +func 
testAccPreCheckCognitoIdentityProvider(t *testing.T) { + conn := acctest.Provider.Meta().(*conns.AWSClient).CognitoIDPConn + + input := &cognitoidentityprovider.ListUserPoolsInput{ + MaxResults: aws.Int64(1), + } + + _, err := conn.ListUserPools(input) + + if acctest.PreCheckSkipError(err) { + t.Skipf("skipping acceptance testing: %s", err) + } + + if err != nil { + t.Fatalf("unexpected PreCheck error: %s", err) + } +} + +func testAccCheckELBDestroy(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).ELBConn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_elb" { + continue + } + + describe, err := conn.DescribeLoadBalancers(&elb.DescribeLoadBalancersInput{ + LoadBalancerNames: []*string{aws.String(rs.Primary.ID)}, + }) + + if err == nil { + if len(describe.LoadBalancerDescriptions) != 0 && + *describe.LoadBalancerDescriptions[0].LoadBalancerName == rs.Primary.ID { + return fmt.Errorf("ELB still exists") + } + } + + // Verify the error + providerErr, ok := err.(awserr.Error) + if !ok { + return err + } + + if providerErr.Code() != elb.ErrCodeAccessPointNotFoundException { + return fmt.Errorf("Unexpected error: %s", err) + } + } + + return nil +} + +func testAccDomainConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, rName) +} + +func testAccDomainConfig_autoTuneOptions(rName, autoTuneStartAtTime string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + engine_version = "Elasticsearch_6.7" + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + auto_tune_options { + desired_state = "ENABLED" + + maintenance_schedule { + start_at = %[2]q + duration { + value = "2" + unit = "HOURS" + } + cron_expression_for_recurrence = "cron(0 0 ? 
* 1 *)" + } + + rollback_on_disable = "NO_ROLLBACK" + + } +} +`, rName, autoTuneStartAtTime) +} + +func testAccDomainConfig_disabledEBSNullVolume(rName string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + engine_version = "Elasticsearch_6.0" + + cluster_config { + instance_type = "i3.xlarge.search" + instance_count = 1 + } + + ebs_options { + ebs_enabled = false + volume_size = 0 + volume_type = null + } +} +`, rName) +} + +func testAccDomainConfig_domainEndpointOptions(rName string, enforceHttps bool, tlsSecurityPolicy string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + domain_endpoint_options { + enforce_https = %[2]t + tls_security_policy = %[3]q + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, rName, enforceHttps, tlsSecurityPolicy) +} + +func testAccDomainConfig_customEndpoint(rName string, enforceHttps bool, tlsSecurityPolicy string, customEndpointEnabled bool, customEndpoint string, certKey string, certBody string) string { + return fmt.Sprintf(` +resource "aws_acm_certificate" "test" { + private_key = "%[6]s" + certificate_body = "%[7]s" +} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + domain_endpoint_options { + enforce_https = %[2]t + tls_security_policy = %[3]q + custom_endpoint_enabled = %[4]t + custom_endpoint = "%[5]s" + custom_endpoint_certificate_arn = aws_acm_certificate.test.arn + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, rName, enforceHttps, tlsSecurityPolicy, customEndpointEnabled, customEndpoint, acctest.TLSPEMEscapeNewlines(certKey), acctest.TLSPEMEscapeNewlines(certBody)) +} + +func testAccDomainConfig_clusterZoneAwarenessAZCount(rName string, availabilityZoneCount int) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + engine_version = "Elasticsearch_1.5" + + cluster_config { + instance_type = "t2.small.search" + instance_count = 6 + zone_awareness_enabled = true + + zone_awareness_config { + availability_zone_count = %[2]d + } + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, rName, availabilityZoneCount) +} + +func testAccDomainConfig_clusterZoneAwarenessEnabled(rName string, zoneAwarenessEnabled bool) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + engine_version = "Elasticsearch_1.5" + + cluster_config { + instance_type = "t2.small.search" + instance_count = 6 + zone_awareness_enabled = %[2]t + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, rName, zoneAwarenessEnabled) +} + +func testAccDomainConfig_clusterWarm(rName, warmType string, enabled bool, warmCnt int) string { + warmConfig := "" + if enabled { + warmConfig = fmt.Sprintf(` + warm_count = %[1]d + warm_type = %[2]q +`, warmCnt, warmType) + } + + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + engine_version = "Elasticsearch_6.8" + + cluster_config { + zone_awareness_enabled = true + instance_type = "c5.large.search" + instance_count = "3" + dedicated_master_enabled = true + dedicated_master_count = "3" + dedicated_master_type = "c5.large.search" + warm_enabled = %[2]t + + %[3]s + + zone_awareness_config { + availability_zone_count = 3 + } + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, rName, enabled, warmConfig) +} + +func 
testAccDomainConfig_dedicatedClusterMaster(rName string, enabled bool) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + cluster_config { + instance_type = "t2.small.search" + instance_count = "1" + dedicated_master_enabled = %t + dedicated_master_count = "3" + dedicated_master_type = "t2.small.search" + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, rName, enabled) +} + +func testAccDomainConfig_clusterUpdate(rName string, instanceInt, snapshotInt int) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + advanced_options = { + "indices.fielddata.cache.size" = 80 + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + cluster_config { + instance_count = %d + zone_awareness_enabled = true + instance_type = "t2.small.search" + } + + snapshot_options { + automated_snapshot_start_hour = %d + } + + timeouts { + update = "180m" + } +} +`, rName, instanceInt, snapshotInt) +} + +func testAccDomainConfig_clusterUpdateEBSVolume(rName string, volumeSize int) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + engine_version = "Elasticsearch_6.0" + + advanced_options = { + "indices.fielddata.cache.size" = 80 + } + + ebs_options { + ebs_enabled = true + volume_size = %d + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + instance_type = "t2.small.search" + } +} +`, rName, volumeSize) +} + +func testAccDomainConfig_clusterUpdateVersion(rName, version string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + engine_version = %[2]q + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + cluster_config { + instance_count = 1 + zone_awareness_enabled = false + instance_type = "t2.small.search" + } +} +`, rName, version) +} + +func testAccDomainConfig_clusterUpdateInstanceStore(rName string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + engine_version = "Elasticsearch_6.0" + + advanced_options = { + "indices.fielddata.cache.size" = 80 + } + + ebs_options { + ebs_enabled = false + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + instance_type = "i3.large.search" + } +} +`, rName) +} + +func testAccDomainConfig_tags1(rName, tagKey1, tagValue1 string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + tags = { + %[2]q = %[3]q + } +} +`, rName, tagKey1, tagValue1) +} + +func testAccDomainConfig_tags2(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + ebs_options { + ebs_enabled = true + volume_size = 10 + } + tags = { + %[2]q = %[3]q + %[4]q = %[5]q + } +} +`, rName, tagKey1, tagValue1, tagKey2, tagValue2) +} + +func testAccDomainConfig_policy(rName string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + access_policies = jsonencode({ + Version = "2012-10-17" + Statement = [{ + Effect = "Allow" + Principal = { + AWS = aws_iam_role.test.arn + } + Action = "es:*" + Resource = 
"arn:${data.aws_partition.current.partition}:es:*" + }] + }) +} + +resource "aws_iam_role" "test" { + name = %[1]q + assume_role_policy = data.aws_iam_policy_document.test.json +} + +data "aws_iam_policy_document" "test" { + statement { + actions = ["sts:AssumeRole"] + + principals { + type = "Service" + identifiers = ["ec2.${data.aws_partition.current.dns_suffix}"] + } + } +} +`, rName) +} + +func testAccDomainConfig_policyOrder(rName string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + access_policies = jsonencode({ + Version = "2012-10-17" + Statement = [{ + Effect = "Allow" + Principal = { + AWS = [ + aws_iam_role.test.arn, + aws_iam_role.test2.arn, + ] + } + Action = "es:*" + Resource = "arn:${data.aws_partition.current.partition}:es:*" + }] + }) +} + +resource "aws_iam_role" "test" { + name = %[1]q + assume_role_policy = data.aws_iam_policy_document.test.json +} + +resource "aws_iam_role" "test2" { + name = "%[1]s-2" + assume_role_policy = data.aws_iam_policy_document.test.json +} + +data "aws_iam_policy_document" "test" { + statement { + actions = ["sts:AssumeRole"] + + principals { + type = "Service" + identifiers = ["ec2.${data.aws_partition.current.dns_suffix}"] + } + } +} +`, rName) +} + +func testAccDomainConfig_policyNewOrder(rName string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + access_policies = jsonencode({ + Version = "2012-10-17" + Statement = [{ + Effect = "Allow" + Principal = { + AWS = [ + aws_iam_role.test2.arn, + aws_iam_role.test.arn, + ] + } + Action = "es:*" + Resource = "arn:${data.aws_partition.current.partition}:es:*" + }] + }) +} + +resource "aws_iam_role" "test" { + name = %[1]q + assume_role_policy = data.aws_iam_policy_document.test.json +} + +resource "aws_iam_role" "test2" { + name = "%[1]s-2" + assume_role_policy = data.aws_iam_policy_document.test.json +} + +data "aws_iam_policy_document" "test" { + statement { + actions = ["sts:AssumeRole"] + + principals { + type = "Service" + identifiers = ["ec2.${data.aws_partition.current.dns_suffix}"] + } + } +} +`, rName) +} + +func testAccDomainConfig_encryptAtRestDefaultKey(rName string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + engine_version = "Elasticsearch_6.0" + + # Encrypt at rest requires m4/c4/r4/i2 instances. See http://docs.aws.amazon.com/opensearch-service/latest/developerguide/aes-supported-instance-types.html + cluster_config { + instance_type = "m4.large.search" + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + encrypt_at_rest { + enabled = true + } +} +`, rName) +} + +func testAccDomainConfig_encryptAtRestWithKey(rName string) string { + return fmt.Sprintf(` +resource "aws_kms_key" "test" { + description = %[1]q + deletion_window_in_days = 7 +} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + engine_version = "Elasticsearch_6.0" + + # Encrypt at rest requires m4/c4/r4/i2 instances. 
See http://docs.aws.amazon.com/opensearch-service/latest/developerguide/aes-supported-instance-types.html + cluster_config { + instance_type = "m4.large.search" + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + encrypt_at_rest { + enabled = true + kms_key_id = aws_kms_key.test.key_id + } +} +`, rName) +} + +func testAccDomainConfig_nodeToNodeEncryption(rName string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + engine_version = "Elasticsearch_6.0" + + cluster_config { + instance_type = "m4.large.search" + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + node_to_node_encryption { + enabled = true + } +} +`, rName) +} + +func testAccDomainConfig_complex(rName string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + advanced_options = { + "indices.fielddata.cache.size" = 80 + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + instance_type = "t2.small.search" + } + + snapshot_options { + automated_snapshot_start_hour = 23 + } + + tags = { + bar = "complex" + } +} +`, rName) +} + +func testAccDomainConfig_v23(rName string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + engine_version = "Elasticsearch_2.3" +} +`, rName) +} + +func testAccDomainConfig_vpc(rName string) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptIn(), + fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "192.168.0.0/22" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "test" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[0] + cidr_block = "192.168.0.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "test2" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[1] + cidr_block = "192.168.1.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_security_group" "test" { + vpc_id = aws_vpc.test.id +} + +resource "aws_security_group" "test2" { + vpc_id = aws_vpc.test.id +} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + instance_type = "t2.small.search" + } + + vpc_options { + security_group_ids = [aws_security_group.test.id, aws_security_group.test2.id] + subnet_ids = [aws_subnet.test.id, aws_subnet.test2.id] + } +} +`, rName)) +} + +func testAccDomainConfig_vpcUpdate1(rName string) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptIn(), + fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "192.168.0.0/22" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "az1_first" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[0] + cidr_block = "192.168.0.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "az2_first" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[1] + cidr_block = "192.168.1.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "az1_second" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[0] + cidr_block = 
"192.168.2.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "az2_second" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[1] + cidr_block = "192.168.3.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_security_group" "test" { + vpc_id = aws_vpc.test.id +} + +resource "aws_security_group" "test2" { + vpc_id = aws_vpc.test.id +} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + instance_type = "t2.small.search" + } + + vpc_options { + security_group_ids = [aws_security_group.test.id] + subnet_ids = [aws_subnet.az1_first.id, aws_subnet.az2_first.id] + } +} +`, rName)) +} + +func testAccDomainConfig_vpcUpdate2(rName string) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptIn(), + fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "192.168.0.0/22" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "az1_first" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[0] + cidr_block = "192.168.0.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "az2_first" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[1] + cidr_block = "192.168.1.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "az1_second" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[0] + cidr_block = "192.168.2.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "az2_second" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[1] + cidr_block = "192.168.3.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_security_group" "test" { + vpc_id = aws_vpc.test.id +} + +resource "aws_security_group" "test2" { + vpc_id = aws_vpc.test.id +} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + instance_type = "t2.small.search" + } + + vpc_options { + security_group_ids = [aws_security_group.test.id, aws_security_group.test2.id] + subnet_ids = [aws_subnet.az1_second.id, aws_subnet.az2_second.id] + } +} +`, rName)) +} + +func testAccDomainConfig_internetToVpcEndpoint(rName string) string { + return acctest.ConfigCompose( + acctest.ConfigAvailableAZsNoOptIn(), + fmt.Sprintf(` +resource "aws_vpc" "test" { + cidr_block = "192.168.0.0/22" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "test" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[0] + cidr_block = "192.168.0.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_subnet" "test2" { + vpc_id = aws_vpc.test.id + availability_zone = data.aws_availability_zones.available.names[1] + cidr_block = "192.168.1.0/24" + + tags = { + Name = %[1]q + } +} + +resource "aws_security_group" "test" { + vpc_id = aws_vpc.test.id +} + +resource "aws_security_group" "test2" { + vpc_id = aws_vpc.test.id +} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + cluster_config { + instance_count = 2 + zone_awareness_enabled = true + instance_type = "t2.small.search" + } + + vpc_options { + 
security_group_ids = [aws_security_group.test.id, aws_security_group.test2.id] + subnet_ids = [aws_subnet.test.id, aws_subnet.test2.id] + } +} +`, rName)) +} + +func testAccDomainConfig_advancedSecurityOptionsUserDB(rName string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + engine_version = "Elasticsearch_7.1" + + cluster_config { + instance_type = "r5.large.search" + } + + advanced_security_options { + enabled = true + internal_user_database_enabled = true + master_user_options { + master_user_name = "testmasteruser" + master_user_password = "Barbarbarbar1!" + } + } + + encrypt_at_rest { + enabled = true + } + + domain_endpoint_options { + enforce_https = true + tls_security_policy = "Policy-Min-TLS-1-2-2019-07" + } + + node_to_node_encryption { + enabled = true + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, rName) +} + +func testAccDomainConfig_advancedSecurityOptionsIAM(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_user" "test" { + name = %[1]q +} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + engine_version = "Elasticsearch_7.1" + + cluster_config { + instance_type = "r5.large.search" + } + + advanced_security_options { + enabled = true + internal_user_database_enabled = false + master_user_options { + master_user_arn = aws_iam_user.test.arn + } + } + + encrypt_at_rest { + enabled = true + } + + domain_endpoint_options { + enforce_https = true + tls_security_policy = "Policy-Min-TLS-1-2-2019-07" + } + + node_to_node_encryption { + enabled = true + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, rName) +} + +func testAccDomainConfig_advancedSecurityOptionsDisabled(rName string) string { + return fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + engine_version = "Elasticsearch_7.1" + + cluster_config { + instance_type = "r5.large.search" + } + + advanced_security_options { + enabled = false + internal_user_database_enabled = true + master_user_options { + master_user_name = "testmasteruser" + master_user_password = "Barbarbarbar1!" + } + } + + encrypt_at_rest { + enabled = true + } + + domain_endpoint_options { + enforce_https = true + tls_security_policy = "Policy-Min-TLS-1-2-2019-07" + } + + node_to_node_encryption { + enabled = true + } + + ebs_options { + ebs_enabled = true + volume_size = 10 + } +} +`, rName) +} + +func testAccDomain_logPublishingOptionsBase(rName string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_cloudwatch_log_group" "test" { + name = %[1]q +} + +resource "aws_cloudwatch_log_resource_policy" "test" { + policy_name = %[1]q + + policy_document = jsonencode({ + Version = "2012-10-17" + Statement = [{ + Effect = "Allow" + Principal = { + Service = [ + "es.${data.aws_partition.current.dns_suffix}", + ] + } + Action = [ + "logs:PutLogEvents", + "logs:PutLogEventsBatch", + "logs:CreateLogStream", + ] + Resource = "arn:${data.aws_partition.current.partition}:logs:*" + }] + }) +} +`, rName) +} + +func testAccDomainConfig_logPublishingOptions(rName, logType string) string { + var auditLogsConfig string + if logType == opensearchservice.LogTypeAuditLogs { + auditLogsConfig = ` + advanced_security_options { + enabled = true + internal_user_database_enabled = true + master_user_options { + master_user_name = "testmasteruser" + master_user_password = "Barbarbarbar1!" 
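+ # Audit logs require fine-grained access control, which in turn requires HTTPS enforcement, encryption at rest, and node-to-node encryption; hence the extra blocks below.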
+ } + } + + domain_endpoint_options { + enforce_https = true + tls_security_policy = "Policy-Min-TLS-1-2-2019-07" + } + + encrypt_at_rest { + enabled = true + } + + node_to_node_encryption { + enabled = true + }` + } + return acctest.ConfigCompose(testAccDomain_logPublishingOptionsBase(rName), fmt.Sprintf(` +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + engine_version = "Elasticsearch_7.1" # needed for ESApplication/Audit Log Types + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + %[2]s + + log_publishing_options { + log_type = %[3]q + cloudwatch_log_group_arn = aws_cloudwatch_log_group.test.arn + } +} +`, rName, auditLogsConfig, logType)) +} + +func testAccDomainConfig_cognitoOptions(rName string, includeCognitoOptions bool) string { + var cognitoOptions string + if includeCognitoOptions { + cognitoOptions = ` + cognito_options { + enabled = true + user_pool_id = aws_cognito_user_pool.test.id + identity_pool_id = aws_cognito_identity_pool.test.id + role_arn = aws_iam_role.test.arn + }` + } else { + cognitoOptions = "" + } + + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_cognito_user_pool" "test" { + name = %[1]q +} + +resource "aws_cognito_user_pool_domain" "test" { + domain = %[1]q + user_pool_id = aws_cognito_user_pool.test.id +} + +resource "aws_cognito_identity_pool" "test" { + identity_pool_name = %[1]q + allow_unauthenticated_identities = false + + lifecycle { + ignore_changes = [cognito_identity_providers] + } +} + +resource "aws_iam_role" "test" { + name = %[1]q + path = "/service-role/" + assume_role_policy = data.aws_iam_policy_document.test.json +} + +data "aws_iam_policy_document" "test" { + statement { + actions = ["sts:AssumeRole"] + effect = "Allow" + + principals { + type = "Service" + identifiers = [ + "es.${data.aws_partition.current.dns_suffix}", + ] + } + } +} + +resource "aws_iam_role_policy_attachment" "test" { + role = aws_iam_role.test.name + policy_arn = "arn:${data.aws_partition.current.partition}:iam::aws:policy/AmazonOpenSearchServiceCognitoAccess" +} + +resource "aws_opensearch_domain" "test" { + domain_name = substr(%[1]q, 0, 28) + + engine_version = "OpenSearch_1.1" + + %[2]s + + ebs_options { + ebs_enabled = true + volume_size = 10 + } + + depends_on = [ + aws_cognito_user_pool_domain.test, + aws_iam_role_policy_attachment.test, + ] +} +`, rName, cognitoOptions) +} diff --git a/internal/service/opensearch/find.go b/internal/service/opensearch/find.go new file mode 100644 index 000000000000..4cdfc6a63796 --- /dev/null +++ b/internal/service/opensearch/find.go @@ -0,0 +1,33 @@ +package opensearch + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/opensearchservice" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func FindDomainByName(conn *opensearchservice.OpenSearchService, name string) (*opensearchservice.DomainStatus, error) { + input := &opensearchservice.DescribeDomainInput{ + DomainName: aws.String(name), + } + + output, err := conn.DescribeDomain(input) + if tfawserr.ErrCodeEquals(err, opensearchservice.ErrCodeResourceNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.DomainStatus == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return 
output.DomainStatus, nil +} diff --git a/internal/service/opensearch/flex.go b/internal/service/opensearch/flex.go new file mode 100644 index 000000000000..c873877d22c1 --- /dev/null +++ b/internal/service/opensearch/flex.go @@ -0,0 +1,225 @@ +package opensearch + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/opensearchservice" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/flex" +) + +func expandCognitoOptions(c []interface{}) *opensearchservice.CognitoOptions { + options := &opensearchservice.CognitoOptions{ + Enabled: aws.Bool(false), + } + if len(c) < 1 { + return options + } + + m := c[0].(map[string]interface{}) + + if cognitoEnabled, ok := m["enabled"]; ok { + options.Enabled = aws.Bool(cognitoEnabled.(bool)) + + if cognitoEnabled.(bool) { + + if v, ok := m["user_pool_id"]; ok && v.(string) != "" { + options.UserPoolId = aws.String(v.(string)) + } + if v, ok := m["identity_pool_id"]; ok && v.(string) != "" { + options.IdentityPoolId = aws.String(v.(string)) + } + if v, ok := m["role_arn"]; ok && v.(string) != "" { + options.RoleArn = aws.String(v.(string)) + } + } + } + + return options +} + +func expandDomainEndpointOptions(l []interface{}) *opensearchservice.DomainEndpointOptions { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + domainEndpointOptions := &opensearchservice.DomainEndpointOptions{} + + if v, ok := m["enforce_https"].(bool); ok { + domainEndpointOptions.EnforceHTTPS = aws.Bool(v) + } + + if v, ok := m["tls_security_policy"].(string); ok { + domainEndpointOptions.TLSSecurityPolicy = aws.String(v) + } + + if customEndpointEnabled, ok := m["custom_endpoint_enabled"]; ok { + domainEndpointOptions.CustomEndpointEnabled = aws.Bool(customEndpointEnabled.(bool)) + + if customEndpointEnabled.(bool) { + if v, ok := m["custom_endpoint"].(string); ok && v != "" { + domainEndpointOptions.CustomEndpoint = aws.String(v) + } + + if v, ok := m["custom_endpoint_certificate_arn"].(string); ok && v != "" { + domainEndpointOptions.CustomEndpointCertificateArn = aws.String(v) + } + } + } + + return domainEndpointOptions +} + +func expandEBSOptions(m map[string]interface{}) *opensearchservice.EBSOptions { + options := opensearchservice.EBSOptions{} + + if ebsEnabled, ok := m["ebs_enabled"]; ok { + options.EBSEnabled = aws.Bool(ebsEnabled.(bool)) + + if ebsEnabled.(bool) { + if v, ok := m["iops"]; ok && v.(int) > 0 { + options.Iops = aws.Int64(int64(v.(int))) + } + if v, ok := m["volume_size"]; ok && v.(int) > 0 { + options.VolumeSize = aws.Int64(int64(v.(int))) + } + if v, ok := m["volume_type"]; ok && v.(string) != "" { + options.VolumeType = aws.String(v.(string)) + } + } + } + + return &options +} + +func expandEncryptAtRestOptions(m map[string]interface{}) *opensearchservice.EncryptionAtRestOptions { + options := opensearchservice.EncryptionAtRestOptions{} + + if v, ok := m["enabled"]; ok { + options.Enabled = aws.Bool(v.(bool)) + } + if v, ok := m["kms_key_id"]; ok && v.(string) != "" { + options.KmsKeyId = aws.String(v.(string)) + } + + return &options +} + +func expandVPCOptions(m map[string]interface{}) *opensearchservice.VPCOptions { + options := opensearchservice.VPCOptions{} + + if v, ok := m["security_group_ids"]; ok { + options.SecurityGroupIds = flex.ExpandStringSet(v.(*schema.Set)) + } + if v, ok := m["subnet_ids"]; ok { + options.SubnetIds = flex.ExpandStringSet(v.(*schema.Set)) + } + + return &options +} + +func 
flattenCognitoOptions(c *opensearchservice.CognitoOptions) []map[string]interface{} { + m := map[string]interface{}{} + + m["enabled"] = aws.BoolValue(c.Enabled) + + if aws.BoolValue(c.Enabled) { + m["identity_pool_id"] = aws.StringValue(c.IdentityPoolId) + m["user_pool_id"] = aws.StringValue(c.UserPoolId) + m["role_arn"] = aws.StringValue(c.RoleArn) + } + + return []map[string]interface{}{m} +} + +func flattenDomainEndpointOptions(domainEndpointOptions *opensearchservice.DomainEndpointOptions) []interface{} { + if domainEndpointOptions == nil { + return nil + } + + m := map[string]interface{}{ + "enforce_https": aws.BoolValue(domainEndpointOptions.EnforceHTTPS), + "tls_security_policy": aws.StringValue(domainEndpointOptions.TLSSecurityPolicy), + "custom_endpoint_enabled": aws.BoolValue(domainEndpointOptions.CustomEndpointEnabled), + } + if aws.BoolValue(domainEndpointOptions.CustomEndpointEnabled) { + if domainEndpointOptions.CustomEndpoint != nil { + m["custom_endpoint"] = aws.StringValue(domainEndpointOptions.CustomEndpoint) + } + if domainEndpointOptions.CustomEndpointCertificateArn != nil { + m["custom_endpoint_certificate_arn"] = aws.StringValue(domainEndpointOptions.CustomEndpointCertificateArn) + } + } + + return []interface{}{m} +} + +func flattenEBSOptions(o *opensearchservice.EBSOptions) []map[string]interface{} { + m := map[string]interface{}{} + + if o.EBSEnabled != nil { + m["ebs_enabled"] = aws.BoolValue(o.EBSEnabled) + } + + if aws.BoolValue(o.EBSEnabled) { + if o.Iops != nil { + m["iops"] = aws.Int64Value(o.Iops) + } + if o.VolumeSize != nil { + m["volume_size"] = aws.Int64Value(o.VolumeSize) + } + if o.VolumeType != nil { + m["volume_type"] = aws.StringValue(o.VolumeType) + } + } + + return []map[string]interface{}{m} +} + +func flattenEncryptAtRestOptions(o *opensearchservice.EncryptionAtRestOptions) []map[string]interface{} { + if o == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{} + + if o.Enabled != nil { + m["enabled"] = aws.BoolValue(o.Enabled) + } + if o.KmsKeyId != nil { + m["kms_key_id"] = aws.StringValue(o.KmsKeyId) + } + + return []map[string]interface{}{m} +} + +func flattenSnapshotOptions(snapshotOptions *opensearchservice.SnapshotOptions) []map[string]interface{} { + if snapshotOptions == nil { + return []map[string]interface{}{} + } + + m := map[string]interface{}{ + "automated_snapshot_start_hour": int(aws.Int64Value(snapshotOptions.AutomatedSnapshotStartHour)), + } + + return []map[string]interface{}{m} +} + +func flattenVPCDerivedInfo(o *opensearchservice.VPCDerivedInfo) []map[string]interface{} { + m := map[string]interface{}{} + + if o.AvailabilityZones != nil { + m["availability_zones"] = flex.FlattenStringSet(o.AvailabilityZones) + } + if o.SecurityGroupIds != nil { + m["security_group_ids"] = flex.FlattenStringSet(o.SecurityGroupIds) + } + if o.SubnetIds != nil { + m["subnet_ids"] = flex.FlattenStringSet(o.SubnetIds) + } + if o.VPCId != nil { + m["vpc_id"] = aws.StringValue(o.VPCId) + } + + return []map[string]interface{}{m} +} diff --git a/internal/service/opensearch/generate.go b/internal/service/opensearch/generate.go new file mode 100644 index 000000000000..3d0eae866101 --- /dev/null +++ b/internal/service/opensearch/generate.go @@ -0,0 +1,4 @@ +//go:generate go run ../../generate/tags/main.go -ListTags -ListTagsOp=ListTags -ListTagsInIDElem=ARN -ListTagsOutTagsElem=TagList -ServiceTagsSlice -TagOp=AddTags -TagInIDElem=ARN -TagInTagsElem=TagList -UntagOp=RemoveTags -UpdateTags +// ONLY generate directives and 
package declaration! Do not add anything else to this file. + +package opensearch diff --git a/internal/service/opensearch/status.go b/internal/service/opensearch/status.go new file mode 100644 index 000000000000..c167daf39dce --- /dev/null +++ b/internal/service/opensearch/status.go @@ -0,0 +1,31 @@ +package opensearch + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/opensearchservice" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" +) + +const ( + UpgradeStatusUnknown = "Unknown" +) + +func statusUpgradeStatus(conn *opensearchservice.OpenSearchService, name string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + out, err := conn.GetUpgradeStatus(&opensearchservice.GetUpgradeStatusInput{ + DomainName: aws.String(name), + }) + if err != nil { + return nil, UpgradeStatusUnknown, err + } + + // opensearch upgrades consist of multiple steps: + // https://docs.aws.amazon.com/opensearch-service/latest/developerguide/opensearch-version-migration.html + // Prevent false positive completion where the UpgradeStep is not the final UPGRADE step. + if aws.StringValue(out.StepStatus) == opensearchservice.UpgradeStatusSucceeded && aws.StringValue(out.UpgradeStep) != opensearchservice.UpgradeStepUpgrade { + return out, opensearchservice.UpgradeStatusInProgress, nil + } + + return out, aws.StringValue(out.StepStatus), nil + } +} diff --git a/internal/service/opensearch/sweep.go b/internal/service/opensearch/sweep.go new file mode 100644 index 000000000000..99be98a3274e --- /dev/null +++ b/internal/service/opensearch/sweep.go @@ -0,0 +1,100 @@ +//go:build sweep +// +build sweep + +package opensearch + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/opensearchservice" + "github.com/hashicorp/go-multierror" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/sweep" +) + +func init() { + resource.AddTestSweepers("aws_opensearch_domain", &resource.Sweeper{ + Name: "aws_opensearch_domain", + F: sweepDomains, + }) +} + +func sweepDomains(region string) error { + client, err := sweep.SharedRegionalSweepClient(region) + + if err != nil { + return fmt.Errorf("error getting client: %w", err) + } + + conn := client.(*conns.AWSClient).OpenSearchConn + sweepResources := make([]*sweep.SweepResource, 0) + var errs *multierror.Error + + input := &opensearchservice.ListDomainNamesInput{} + + // ListDomainNames has no pagination support whatsoever + output, err := conn.ListDomainNames(input) + + if sweep.SkipSweepError(err) { + log.Printf("[WARN] Skipping OpenSearch Domain sweep for %s: %s", region, err) + return errs.ErrorOrNil() + } + + if err != nil { + sweeperErr := fmt.Errorf("error listing OpenSearch Domains: %w", err) + log.Printf("[ERROR] %s", sweeperErr) + errs = multierror.Append(errs, sweeperErr) + return errs.ErrorOrNil() + } + + if output == nil { + log.Printf("[WARN] Skipping OpenSearch Domain sweep for %s: empty response", region) + return errs.ErrorOrNil() + } + + for _, domainInfo := range output.DomainNames { + if domainInfo == nil { + continue + } + + name := aws.StringValue(domainInfo.DomainName) + + // OpenSearch Domains have regularly gotten stuck in a "being deleted" state + // e.g. Deleted and Processing are both true for days in the API + // Filter out domains that are Deleted already. 
+ + output, err := FindDomainByName(conn, name) + if err != nil { + sweeperErr := fmt.Errorf("error describing OpenSearch Domain (%s): %w", name, err) + log.Printf("[ERROR] %s", sweeperErr) + errs = multierror.Append(errs, sweeperErr) + continue + } + + if output != nil && aws.BoolValue(output.Deleted) { + log.Printf("[INFO] Skipping OpenSearch Domain (%s) with deleted status", name) + continue + } + + r := ResourceDomain() + d := r.Data(nil) + d.SetId(name) + d.Set("domain_name", name) + + sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client)) + } + + if err = sweep.SweepOrchestrator(sweepResources); err != nil { + errs = multierror.Append(errs, fmt.Errorf("error sweeping OpenSearch Domains for %s: %w", region, err)) + } + + if sweep.SkipSweepError(errs.ErrorOrNil()) { + log.Printf("[WARN] Skipping OpenSearch Domain sweep for %s: %s", region, errs) + return nil + } + + return errs.ErrorOrNil() +} diff --git a/internal/service/opensearch/tags_gen.go b/internal/service/opensearch/tags_gen.go new file mode 100644 index 000000000000..0207fae1e691 --- /dev/null +++ b/internal/service/opensearch/tags_gen.go @@ -0,0 +1,92 @@ +// Code generated by internal/generate/tags/main.go; DO NOT EDIT. +package opensearch + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/opensearchservice" + tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" +) + +// ListTags lists opensearch service tags. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. +func ListTags(conn *opensearchservice.OpenSearchService, identifier string) (tftags.KeyValueTags, error) { + input := &opensearchservice.ListTagsInput{ + ARN: aws.String(identifier), + } + + output, err := conn.ListTags(input) + + if err != nil { + return tftags.New(nil), err + } + + return KeyValueTags(output.TagList), nil +} + +// []*SERVICE.Tag handling + +// Tags returns opensearch service tags. +func Tags(tags tftags.KeyValueTags) []*opensearchservice.Tag { + result := make([]*opensearchservice.Tag, 0, len(tags)) + + for k, v := range tags.Map() { + tag := &opensearchservice.Tag{ + Key: aws.String(k), + Value: aws.String(v), + } + + result = append(result, tag) + } + + return result +} + +// KeyValueTags creates tftags.KeyValueTags from opensearchservice service tags. +func KeyValueTags(tags []*opensearchservice.Tag) tftags.KeyValueTags { + m := make(map[string]*string, len(tags)) + + for _, tag := range tags { + m[aws.StringValue(tag.Key)] = tag.Value + } + + return tftags.New(m) +} + +// UpdateTags updates opensearch service tags. +// The identifier is typically the Amazon Resource Name (ARN), although +// it may also be a different identifier depending on the service. 
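+// Tags removed between the two maps are deleted with RemoveTags; new or updated values are then applied with AddTags.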
+func UpdateTags(conn *opensearchservice.OpenSearchService, identifier string, oldTagsMap interface{}, newTagsMap interface{}) error {
+	oldTags := tftags.New(oldTagsMap)
+	newTags := tftags.New(newTagsMap)
+
+	if removedTags := oldTags.Removed(newTags); len(removedTags) > 0 {
+		input := &opensearchservice.RemoveTagsInput{
+			ARN:     aws.String(identifier),
+			TagKeys: aws.StringSlice(removedTags.IgnoreAWS().Keys()),
+		}
+
+		_, err := conn.RemoveTags(input)
+
+		if err != nil {
+			return fmt.Errorf("error untagging resource (%s): %w", identifier, err)
+		}
+	}
+
+	if updatedTags := oldTags.Updated(newTags); len(updatedTags) > 0 {
+		input := &opensearchservice.AddTagsInput{
+			ARN:     aws.String(identifier),
+			TagList: Tags(updatedTags.IgnoreAWS()),
+		}
+
+		_, err := conn.AddTags(input)
+
+		if err != nil {
+			return fmt.Errorf("error tagging resource (%s): %w", identifier, err)
+		}
+	}
+
+	return nil
+}
diff --git a/internal/service/opensearch/test-fixtures/saml-metadata.xml.tpl b/internal/service/opensearch/test-fixtures/saml-metadata.xml.tpl
new file mode 100644
index 000000000000..0e7a94912485
--- /dev/null
+++ b/internal/service/opensearch/test-fixtures/saml-metadata.xml.tpl
@@ -0,0 +1,15 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<md:EntityDescriptor entityID="${entity_id}" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
+  <md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
+    <md:KeyDescriptor use="signing">
+      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
+        <ds:X509Data>
+          <ds:X509Certificate>MIICfjCCAeegAwIBAgIBADANBgkqhkiG9w0BAQ0FADBbMQswCQYDVQQGEwJ1czELMAkGA1UECAwCQ0ExEjAQBgNVBAoMCVRlcnJhZm9ybTErMCkGA1UEAwwidGVycmFmb3JtLWRldi1lZC5teS5zYWxlc2ZvcmNlLmNvbTAgFw0yMDA4MjkxNDQ4MzlaGA8yMDcwMDgxNzE0NDgzOVowWzELMAkGA1UEBhMCdXMxCzAJBgNVBAgMAkNBMRIwEAYDVQQKDAlUZXJyYWZvcm0xKzApBgNVBAMMInRlcnJhZm9ybS1kZXYtZWQubXkuc2FsZXNmb3JjZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAOxUTzEKdivVjfZ/BERGpX/ZWQsBKHut17dQTKW/3jox1N9EJ3ULj9qEDen6zQ74Ce8hSEkrG7MP9mcP1oEhQZSca5tTAop1GejJG+bfF4v6cXM9pqHlllrYrmXMfESiahqhBhE8VvoGJkvp393TcB1lX+WxO8Q74demTrQn5tgvAgMBAAGjUDBOMB0GA1UdDgQWBBREKZt4Av70WKQE4aLD2tvbSLnBlzAfBgNVHSMEGDAWgBREKZt4Av70WKQE4aLD2tvbSLnBlzAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBDQUAA4GBACxeC29WMGqeOlQF4JWwsYwIC82SUaZvMDqjAm9ieIrAZRH6J6Cu40c/rvsUGUjQ9logKX15RAyI7Rn0jBUgopRkNL71HyyM7ug4qN5An05VmKQWIbVfxkNVB2Ipb/ICMc5UE38G4y4VbANZFvbFbkVq6OAP2GGNl22o/XSnhFY8</ds:X509Certificate>
+        </ds:X509Data>
+      </ds:KeyInfo>
+    </md:KeyDescriptor>
+    <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified</md:NameIDFormat>
+    <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://terraform-dev-ed.my.salesforce.com/idp/endpoint/HttpPost"/>
+    <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://terraform-dev-ed.my.salesforce.com/idp/endpoint/HttpRedirect"/>
+  </md:IDPSSODescriptor>
+</md:EntityDescriptor>
diff --git a/internal/service/opensearch/wait.go b/internal/service/opensearch/wait.go
new file mode 100644
index 000000000000..fe4b037f72d1
--- /dev/null
+++ b/internal/service/opensearch/wait.go
@@ -0,0 +1,135 @@
+package opensearch
+
+import (
+	"fmt"
+	"time"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/service/opensearchservice"
+	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
+	"github.com/hashicorp/terraform-provider-aws/internal/tfresource"
+)
+
+const (
+	domainUpgradeSuccessMinTimeout = 10 * time.Second
+	domainUpgradeSuccessDelay      = 30 * time.Second
+)
+
+// waitUpgradeSucceeded waits for an upgrade to return Succeeded.
+func waitUpgradeSucceeded(conn *opensearchservice.OpenSearchService, name string, timeout time.Duration) (*opensearchservice.GetUpgradeStatusOutput, error) {
+	stateConf := &resource.StateChangeConf{
+		Pending:    []string{opensearchservice.UpgradeStatusInProgress},
+		Target:     []string{opensearchservice.UpgradeStatusSucceeded},
+		Refresh:    statusUpgradeStatus(conn, name),
+		Timeout:    timeout,
+		MinTimeout: domainUpgradeSuccessMinTimeout,
+		Delay:      domainUpgradeSuccessDelay,
+	}
+
+	outputRaw, err := stateConf.WaitForState()
+
+	if output, ok := outputRaw.(*opensearchservice.GetUpgradeStatusOutput); ok {
+		return output, err
+	}
+
+	return nil, err
+}
+
+func WaitForDomainCreation(conn *opensearchservice.OpenSearchService, domainName string,
timeout time.Duration) error { + var out *opensearchservice.DomainStatus + err := resource.Retry(timeout, func() *resource.RetryError { + var err error + out, err = FindDomainByName(conn, domainName) + if err != nil { + return resource.NonRetryableError(err) + } + + if !aws.BoolValue(out.Processing) && (out.Endpoint != nil || out.Endpoints != nil) { + return nil + } + + return resource.RetryableError( + fmt.Errorf("%q: Timeout while waiting for the domain to be created", domainName)) + }) + if tfresource.TimedOut(err) { + out, err = FindDomainByName(conn, domainName) + if err != nil { + return fmt.Errorf("Error describing OpenSearch domain: %w", err) + } + if !aws.BoolValue(out.Processing) && (out.Endpoint != nil || out.Endpoints != nil) { + return nil + } + } + if err != nil { + return fmt.Errorf("Error waiting for OpenSearch domain to be created: %w", err) + } + return nil +} + +func waitForDomainUpdate(conn *opensearchservice.OpenSearchService, domainName string, timeout time.Duration) error { + var out *opensearchservice.DomainStatus + err := resource.Retry(timeout, func() *resource.RetryError { + var err error + out, err = FindDomainByName(conn, domainName) + if err != nil { + return resource.NonRetryableError(err) + } + + if !aws.BoolValue(out.Processing) { + return nil + } + + return resource.RetryableError( + fmt.Errorf("%q: Timeout while waiting for changes to be processed", domainName)) + }) + if tfresource.TimedOut(err) { + out, err = FindDomainByName(conn, domainName) + if err != nil { + return fmt.Errorf("Error describing OpenSearch domain: %w", err) + } + if !aws.BoolValue(out.Processing) { + return nil + } + } + if err != nil { + return fmt.Errorf("Error waiting for OpenSearch domain changes to be processed: %w", err) + } + return nil +} + +func waitForDomainDelete(conn *opensearchservice.OpenSearchService, domainName string, timeout time.Duration) error { + var out *opensearchservice.DomainStatus + err := resource.Retry(timeout, func() *resource.RetryError { + var err error + out, err = FindDomainByName(conn, domainName) + + if err != nil { + if tfresource.NotFound(err) { + return nil + } + return resource.NonRetryableError(err) + } + + if out != nil && !aws.BoolValue(out.Processing) { + return nil + } + + return resource.RetryableError(fmt.Errorf("timeout while waiting for the OpenSearch domain %q to be deleted", domainName)) + }) + if tfresource.TimedOut(err) { + out, err = FindDomainByName(conn, domainName) + if err != nil { + if tfresource.NotFound(err) { + return nil + } + return fmt.Errorf("Error describing OpenSearch domain: %s", err) + } + if out != nil && !aws.BoolValue(out.Processing) { + return nil + } + } + if err != nil { + return fmt.Errorf("Error waiting for OpenSearch domain to be deleted: %s", err) + } + return nil +} diff --git a/internal/service/organizations/account.go b/internal/service/organizations/account.go index fc5cbad810a0..ce9e12c00259 100644 --- a/internal/service/organizations/account.go +++ b/internal/service/organizations/account.go @@ -34,21 +34,31 @@ func ResourceAccount() *schema.Resource { Type: schema.TypeString, Computed: true, }, - "joined_method": { - Type: schema.TypeString, - Computed: true, + "close_on_deletion": { + Type: schema.TypeBool, + Optional: true, + Default: false, }, - "joined_timestamp": { + "email": { + ForceNew: true, Type: schema.TypeString, - Computed: true, + Required: true, + ValidateFunc: validation.All( + validation.StringLenBetween(6, 64), + 
validation.StringMatch(regexp.MustCompile(`^[^\s@]+@[^\s@]+\.[^\s@]+$`), "must be a valid email address"), + ), }, - "parent_id": { + "iam_user_access_to_billing": { + ForceNew: true, Type: schema.TypeString, - Computed: true, Optional: true, - ValidateFunc: validation.StringMatch(regexp.MustCompile("^(r-[0-9a-z]{4,32})|(ou-[0-9a-z]{4,32}-[a-z0-9]{8,32})$"), "see https://docs.aws.amazon.com/organizations/latest/APIReference/API_MoveAccount.html#organizations-MoveAccount-request-DestinationParentId"), + ValidateFunc: validation.StringInSlice([]string{organizations.IAMUserAccessToBillingAllow, organizations.IAMUserAccessToBillingDeny}, true), }, - "status": { + "joined_method": { + Type: schema.TypeString, + Computed: true, + }, + "joined_timestamp": { Type: schema.TypeString, Computed: true, }, @@ -58,20 +68,11 @@ func ResourceAccount() *schema.Resource { Required: true, ValidateFunc: validation.StringLenBetween(1, 50), }, - "email": { - ForceNew: true, - Type: schema.TypeString, - Required: true, - ValidateFunc: validation.All( - validation.StringLenBetween(6, 64), - validation.StringMatch(regexp.MustCompile(`^[^\s@]+@[^\s@]+\.[^\s@]+$`), "must be a valid email address"), - ), - }, - "iam_user_access_to_billing": { - ForceNew: true, + "parent_id": { Type: schema.TypeString, + Computed: true, Optional: true, - ValidateFunc: validation.StringInSlice([]string{organizations.IAMUserAccessToBillingAllow, organizations.IAMUserAccessToBillingDeny}, true), + ValidateFunc: validation.StringMatch(regexp.MustCompile("^(r-[0-9a-z]{4,32})|(ou-[0-9a-z]{4,32}-[a-z0-9]{8,32})$"), "see https://docs.aws.amazon.com/organizations/latest/APIReference/API_MoveAccount.html#organizations-MoveAccount-request-DestinationParentId"), }, "role_name": { ForceNew: true, @@ -79,6 +80,10 @@ func ResourceAccount() *schema.Resource { Optional: true, ValidateFunc: validation.StringMatch(regexp.MustCompile(`^[\w+=,.@-]{1,64}$`), "must consist of uppercase letters, lowercase letters, digits with no spaces, and any of the following characters"), }, + "status": { + Type: schema.TypeString, + Computed: true, + }, "tags": tftags.TagsSchema(), "tags_all": tftags.TagsSchemaComputed(), }, @@ -93,89 +98,59 @@ func resourceAccountCreate(d *schema.ResourceData, meta interface{}) error { tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) // Create the account - createOpts := &organizations.CreateAccountInput{ - AccountName: aws.String(d.Get("name").(string)), + name := d.Get("name").(string) + input := &organizations.CreateAccountInput{ + AccountName: aws.String(name), Email: aws.String(d.Get("email").(string)), } - if role, ok := d.GetOk("role_name"); ok { - createOpts.RoleName = aws.String(role.(string)) + if v, ok := d.GetOk("iam_user_access_to_billing"); ok { + input.IamUserAccessToBilling = aws.String(v.(string)) } - if iam_user, ok := d.GetOk("iam_user_access_to_billing"); ok { - createOpts.IamUserAccessToBilling = aws.String(iam_user.(string)) + if v, ok := d.GetOk("role_name"); ok { + input.RoleName = aws.String(v.(string)) } if len(tags) > 0 { - createOpts.Tags = Tags(tags.IgnoreAWS()) + input.Tags = Tags(tags.IgnoreAWS()) } - log.Printf("[DEBUG] Creating AWS Organizations Account: %s", createOpts) - - var resp *organizations.CreateAccountOutput - err := resource.Retry(4*time.Minute, func() *resource.RetryError { - var err error - - resp, err = conn.CreateAccount(createOpts) - - if tfawserr.ErrCodeEquals(err, organizations.ErrCodeFinalizingOrganizationException) { - return 
resource.RetryableError(err) - } - - if err != nil { - return resource.NonRetryableError(err) - } - - return nil - }) - - if tfresource.TimedOut(err) { - resp, err = conn.CreateAccount(createOpts) - } + log.Printf("[DEBUG] Creating AWS Organizations Account: %s", input) + outputRaw, err := tfresource.RetryWhenAWSErrCodeEquals(4*time.Minute, + func() (interface{}, error) { + return conn.CreateAccount(input) + }, + organizations.ErrCodeFinalizingOrganizationException, + ) if err != nil { - return fmt.Errorf("Error creating account: %w", err) + return fmt.Errorf("error creating AWS Organizations Account (%s): %w", name, err) } - requestId := aws.StringValue(resp.CreateAccountStatus.Id) - - // Wait for the account to become available - log.Printf("[DEBUG] Waiting for account request (%s) to succeed", requestId) + output, err := waitAccountCreated(conn, aws.StringValue(outputRaw.(*organizations.CreateAccountOutput).CreateAccountStatus.Id)) - stateConf := &resource.StateChangeConf{ - Pending: []string{organizations.CreateAccountStateInProgress}, - Target: []string{organizations.CreateAccountStateSucceeded}, - Refresh: resourceAccountStateRefreshFunc(conn, requestId), - PollInterval: 10 * time.Second, - Timeout: 5 * time.Minute, - } - stateResp, stateErr := stateConf.WaitForState() - if stateErr != nil { - return fmt.Errorf( - "Error waiting for account request (%s) to become available: %w", - requestId, stateErr) + if err != nil { + return fmt.Errorf("error waiting for AWS Organizations Account (%s) create: %w", name, err) } - // Store the ID - accountId := stateResp.(*organizations.CreateAccountStatus).AccountId - d.SetId(aws.StringValue(accountId)) + d.SetId(aws.StringValue(output.AccountId)) if v, ok := d.GetOk("parent_id"); ok { - newParentID := v.(string) - - existingParentID, err := resourceAccountGetParentID(conn, d.Id()) + oldParentAccountID, err := findParentAccountID(conn, d.Id()) if err != nil { - return fmt.Errorf("error getting AWS Organizations Account (%s) parent: %w", d.Id(), err) + return fmt.Errorf("error reading AWS Organizations Account (%s) parent: %w", d.Id(), err) } - if newParentID != existingParentID { + if newParentAccountID := v.(string); newParentAccountID != oldParentAccountID { input := &organizations.MoveAccountInput{ - AccountId: accountId, - SourceParentId: aws.String(existingParentID), - DestinationParentId: aws.String(newParentID), + AccountId: aws.String(d.Id()), + DestinationParentId: aws.String(newParentAccountID), + SourceParentId: aws.String(oldParentAccountID), } + log.Printf("[DEBUG] Moving AWS Organizations Account: %s", input) if _, err := conn.MoveAccount(input); err != nil { return fmt.Errorf("error moving AWS Organizations Account (%s): %w", d.Id(), err) } @@ -190,31 +165,22 @@ func resourceAccountRead(d *schema.ResourceData, meta interface{}) error { defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - describeOpts := &organizations.DescribeAccountInput{ - AccountId: aws.String(d.Id()), - } - resp, err := conn.DescribeAccount(describeOpts) + account, err := FindAccountByID(conn, d.Id()) - if tfawserr.ErrCodeEquals(err, organizations.ErrCodeAccountNotFoundException) { - log.Printf("[WARN] Account does not exist, removing from state: %s", d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] AWS Organizations Account does not exist, removing from state: %s", d.Id()) d.SetId("") return nil } if err != nil { - return fmt.Errorf("error describing AWS 
Organizations Account (%s): %w", d.Id(), err) + return fmt.Errorf("error reading AWS Organizations Account (%s): %w", d.Id(), err) } - account := resp.Account - if account == nil { - log.Printf("[WARN] Account does not exist, removing from state: %s", d.Id()) - d.SetId("") - return nil - } + parentAccountID, err := findParentAccountID(conn, d.Id()) - parentId, err := resourceAccountGetParentID(conn, d.Id()) if err != nil { - return fmt.Errorf("error getting AWS Organizations Account (%s) parent: %w", d.Id(), err) + return fmt.Errorf("error reading AWS Organizations Account (%s) parent: %w", d.Id(), err) } d.Set("arn", account.Arn) @@ -222,7 +188,7 @@ func resourceAccountRead(d *schema.ResourceData, meta interface{}) error { d.Set("joined_method", account.JoinedMethod) d.Set("joined_timestamp", aws.TimeValue(account.JoinedTimestamp).Format(time.RFC3339)) d.Set("name", account.Name) - d.Set("parent_id", parentId) + d.Set("parent_id", parentAccountID) d.Set("status", account.Status) tags, err := ListTags(conn, d.Id()) @@ -276,59 +242,46 @@ func resourceAccountUpdate(d *schema.ResourceData, meta interface{}) error { func resourceAccountDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).OrganizationsConn - input := &organizations.RemoveAccountFromOrganizationInput{ - AccountId: aws.String(d.Id()), + close := d.Get("close_on_deletion").(bool) + var err error + + if close { + log.Printf("[DEBUG] Closing AWS Organizations Account: %s", d.Id()) + _, err = conn.CloseAccount(&organizations.CloseAccountInput{ + AccountId: aws.String(d.Id()), + }) + } else { + log.Printf("[DEBUG] Removing AWS Organizations Account from organization: %s", d.Id()) + _, err = conn.RemoveAccountFromOrganization(&organizations.RemoveAccountFromOrganizationInput{ + AccountId: aws.String(d.Id()), + }) } - log.Printf("[DEBUG] Removing AWS account from organization: %s", input) - _, err := conn.RemoveAccountFromOrganization(input) - if err != nil { - if tfawserr.ErrCodeEquals(err, organizations.ErrCodeAccountNotFoundException) { - return nil - } - return err - } - return nil -} -// resourceAccountStateRefreshFunc returns a resource.StateRefreshFunc -// that is used to watch a CreateAccount request -func resourceAccountStateRefreshFunc(conn *organizations.Organizations, id string) resource.StateRefreshFunc { - return func() (interface{}, string, error) { - opts := &organizations.DescribeCreateAccountStatusInput{ - CreateAccountRequestId: aws.String(id), - } - resp, err := conn.DescribeCreateAccountStatus(opts) - if err != nil { - if tfawserr.ErrCodeEquals(err, organizations.ErrCodeCreateAccountStatusNotFoundException) { - resp = nil - } else { - log.Printf("Error on OrganizationAccountStateRefresh: %s", err) - return nil, "", err - } - } + if tfawserr.ErrCodeEquals(err, organizations.ErrCodeAccountNotFoundException) { + return nil + } - if resp == nil { - // Sometimes AWS just has consistency issues and doesn't see - // our account yet. Return an empty state. 
- return nil, "", nil - } + if err != nil { + return fmt.Errorf("error deleting AWS Organizations Account (%s): %w", d.Id(), err) + } - accountStatus := resp.CreateAccountStatus - if aws.StringValue(accountStatus.State) == organizations.CreateAccountStateFailed { - return nil, aws.StringValue(accountStatus.State), errors.New(aws.StringValue(accountStatus.FailureReason)) + if close { + if _, err := waitAccountDeleted(conn, d.Id()); err != nil { + return fmt.Errorf("error waiting for AWS Organizations Account (%s) delete: %w", d.Id(), err) } - return accountStatus, aws.StringValue(accountStatus.State), nil } + + return nil } -func resourceAccountGetParentID(conn *organizations.Organizations, childId string) (string, error) { +func findParentAccountID(conn *organizations.Organizations, id string) (string, error) { input := &organizations.ListParentsInput{ - ChildId: aws.String(childId), + ChildId: aws.String(id), } - var parents []*organizations.Parent + var output []*organizations.Parent err := conn.ListParentsPages(input, func(page *organizations.ListParentsOutput, lastPage bool) bool { - parents = append(parents, page.Parents...) + output = append(output, page.Parents...) return !lastPage }) @@ -337,12 +290,112 @@ func resourceAccountGetParentID(conn *organizations.Organizations, childId strin return "", err } - if len(parents) == 0 { - return "", nil + if len(output) == 0 || output[0] == nil { + return "", tfresource.NewEmptyResultError(input) } // assume there is only a single parent // https://docs.aws.amazon.com/organizations/latest/APIReference/API_ListParents.html - parent := parents[0] - return aws.StringValue(parent.Id), nil + if count := len(output); count > 1 { + return "", tfresource.NewTooManyResultsError(count, input) + } + + return aws.StringValue(output[0].Id), nil +} + +func findCreateAccountStatusByID(conn *organizations.Organizations, id string) (*organizations.CreateAccountStatus, error) { + input := &organizations.DescribeCreateAccountStatusInput{ + CreateAccountRequestId: aws.String(id), + } + + output, err := conn.DescribeCreateAccountStatus(input) + + if tfawserr.ErrCodeEquals(err, organizations.ErrCodeCreateAccountStatusNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.CreateAccountStatus == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output.CreateAccountStatus, nil +} + +func statusCreateAccountState(conn *organizations.Organizations, id string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := findCreateAccountStatusByID(conn, id) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, aws.StringValue(output.State), nil + } +} + +func waitAccountCreated(conn *organizations.Organizations, id string) (*organizations.CreateAccountStatus, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{organizations.CreateAccountStateInProgress}, + Target: []string{organizations.CreateAccountStateSucceeded}, + Refresh: statusCreateAccountState(conn, id), + PollInterval: 10 * time.Second, + Timeout: 5 * time.Minute, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*organizations.CreateAccountStatus); ok { + if state := aws.StringValue(output.State); state == organizations.CreateAccountStateFailed { + tfresource.SetLastError(err, 
errors.New(aws.StringValue(output.FailureReason))) + } + + return output, err + } + + return nil, err +} + +func statusAccountStatus(conn *organizations.Organizations, id string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := FindAccountByID(conn, id) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, aws.StringValue(output.Status), nil + } +} + +func waitAccountDeleted(conn *organizations.Organizations, id string) (*organizations.Account, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{organizations.AccountStatusPendingClosure}, + Target: []string{}, + Refresh: statusAccountStatus(conn, id), + PollInterval: 10 * time.Second, + Timeout: 5 * time.Minute, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*organizations.Account); ok { + return output, err + } + + return nil, err } diff --git a/internal/service/organizations/account_test.go b/internal/service/organizations/account_test.go index 407b6b4eef71..c6e4fdb68267 100644 --- a/internal/service/organizations/account_test.go +++ b/internal/service/organizations/account_test.go @@ -6,69 +6,109 @@ import ( "testing" "github.com/aws/aws-sdk-go/service/organizations" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" + tforganizations "github.com/hashicorp/terraform-provider-aws/internal/service/organizations" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func testAccAccount_basic(t *testing.T) { - acctest.Skip(t, "AWS Organizations Account testing is not currently automated due to manual account deletion steps.") - - var account organizations.Account - - orgsEmailDomain, ok := os.LookupEnv("TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN") - - if !ok { - acctest.Skip(t, "'TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN' not set, skipping test.") + key := "TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN" + orgsEmailDomain := os.Getenv(key) + if orgsEmailDomain == "" { + t.Skipf("Environment variable %s is not set", key) } + var v organizations.Account + resourceName := "aws_organizations_account.test" rInt := sdkacctest.RandInt() name := fmt.Sprintf("tf_acctest_%d", rInt) email := fmt.Sprintf("tf-acctest+%d@%s", rInt, orgsEmailDomain) resource.Test(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckOrganizationsAccount(t) }, + PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckOrganizationsEnabled(t) }, ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), Providers: acctest.Providers, CheckDestroy: testAccCheckAccountDestroy, Steps: []resource.TestStep{ { Config: testAccAccountConfig(name, email), - Check: resource.ComposeTestCheckFunc( - testAccCheckAccountExists("aws_organizations_account.test", &account), - resource.TestCheckResourceAttrSet("aws_organizations_account.test", "arn"), - resource.TestCheckResourceAttrSet("aws_organizations_account.test", "joined_method"), - acctest.CheckResourceAttrRFC3339("aws_organizations_account.test", "joined_timestamp"), - resource.TestCheckResourceAttrSet("aws_organizations_account.test", "parent_id"), - 
resource.TestCheckResourceAttr("aws_organizations_account.test", "name", name), - resource.TestCheckResourceAttr("aws_organizations_account.test", "email", email), - resource.TestCheckResourceAttrSet("aws_organizations_account.test", "status"), - resource.TestCheckResourceAttr("aws_organizations_account.test", "tags.%", "0"), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAccountExists(resourceName, &v), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "email", email), + resource.TestCheckResourceAttrSet(resourceName, "joined_method"), + acctest.CheckResourceAttrRFC3339(resourceName, "joined_timestamp"), + resource.TestCheckResourceAttr(resourceName, "name", name), + resource.TestCheckResourceAttrSet(resourceName, "parent_id"), + resource.TestCheckResourceAttr(resourceName, "status", "ACTIVE"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, { - ResourceName: "aws_organizations_account.test", - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"close_on_deletion"}, }, }, }) } -func testAccAccount_ParentID(t *testing.T) { - acctest.Skip(t, "AWS Organizations Account testing is not currently automated due to manual account deletion steps.") +func testAccAccount_CloseOnDeletion(t *testing.T) { + key := "TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN" + orgsEmailDomain := os.Getenv(key) + if orgsEmailDomain == "" { + t.Skipf("Environment variable %s is not set", key) + } - var account organizations.Account + var v organizations.Account + resourceName := "aws_organizations_account.test" + rInt := sdkacctest.RandInt() + name := fmt.Sprintf("tf_acctest_%d", rInt) + email := fmt.Sprintf("tf-acctest+%d@%s", rInt, orgsEmailDomain) - orgsEmailDomain, ok := os.LookupEnv("TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN") + resource.Test(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckOrganizationsEnabled(t) }, + ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckAccountDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAccountCloseOnDeletionConfig(name, email), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckAccountExists(resourceName, &v), + resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "email", email), + resource.TestCheckResourceAttrSet(resourceName, "joined_method"), + acctest.CheckResourceAttrRFC3339(resourceName, "joined_timestamp"), + resource.TestCheckResourceAttr(resourceName, "name", name), + resource.TestCheckResourceAttrSet(resourceName, "parent_id"), + resource.TestCheckResourceAttr(resourceName, "status", "ACTIVE"), + resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"close_on_deletion"}, + }, + }, + }) +} - if !ok { - acctest.Skip(t, "'TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN' not set, skipping test.") +func testAccAccount_ParentID(t *testing.T) { + key := "TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN" + orgsEmailDomain := os.Getenv(key) + if orgsEmailDomain == "" { + t.Skipf("Environment variable %s is not set", key) } + var v organizations.Account rInt := sdkacctest.RandInt() name := fmt.Sprintf("tf_acctest_%d", rInt) email := fmt.Sprintf("tf-acctest+%d@%s", rInt, 
orgsEmailDomain) @@ -77,7 +117,7 @@ func testAccAccount_ParentID(t *testing.T) { parentIdResourceName2 := "aws_organizations_organizational_unit.test2" resource.Test(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(t) }, + PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckOrganizationsAccount(t) }, ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), Providers: acctest.Providers, CheckDestroy: testAccCheckAccountDestroy, @@ -85,19 +125,20 @@ func testAccAccount_ParentID(t *testing.T) { { Config: testAccAccountParentId1Config(name, email), Check: resource.ComposeTestCheckFunc( - testAccCheckAccountExists(resourceName, &account), + testAccCheckAccountExists(resourceName, &v), resource.TestCheckResourceAttrPair(resourceName, "parent_id", parentIdResourceName1, "id"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"close_on_deletion"}, }, { Config: testAccAccountParentId2Config(name, email), Check: resource.ComposeTestCheckFunc( - testAccCheckAccountExists(resourceName, &account), + testAccCheckAccountExists(resourceName, &v), resource.TestCheckResourceAttrPair(resourceName, "parent_id", parentIdResourceName2, "id"), ), }, @@ -106,23 +147,20 @@ func testAccAccount_ParentID(t *testing.T) { } func testAccAccount_Tags(t *testing.T) { - acctest.Skip(t, "AWS Organizations Account testing is not currently automated due to manual account deletion steps.") - - var account organizations.Account - - orgsEmailDomain, ok := os.LookupEnv("TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN") - - if !ok { - acctest.Skip(t, "'TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN' not set, skipping test.") + key := "TEST_AWS_ORGANIZATION_ACCOUNT_EMAIL_DOMAIN" + orgsEmailDomain := os.Getenv(key) + if orgsEmailDomain == "" { + t.Skipf("Environment variable %s is not set", key) } + var v organizations.Account rInt := sdkacctest.RandInt() name := fmt.Sprintf("tf_acctest_%d", rInt) email := fmt.Sprintf("tf-acctest+%d@%s", rInt, orgsEmailDomain) resourceName := "aws_organizations_account.test" resource.Test(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(t) }, + PreCheck: func() { acctest.PreCheck(t); acctest.PreCheckOrganizationsAccount(t) }, ErrorCheck: acctest.ErrorCheck(t, organizations.EndpointsID), Providers: acctest.Providers, CheckDestroy: testAccCheckAccountDestroy, @@ -130,30 +168,32 @@ func testAccAccount_Tags(t *testing.T) { { Config: testAccAccountTags1Config(name, email, "key1", "value1"), Check: resource.ComposeTestCheckFunc( - testAccCheckAccountExists(resourceName, &account), + testAccCheckAccountExists(resourceName, &v), resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1"), ), }, { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"close_on_deletion"}, }, { Config: testAccAccountTags2Config(name, email, "key1", "value1updated", "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckAccountExists(resourceName, &account), + testAccCheckAccountExists(resourceName, &v), resource.TestCheckResourceAttr(resourceName, "tags.%", "2"), resource.TestCheckResourceAttr(resourceName, "tags.key1", "value1updated"), resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, { - Config: 
testAccAccountConfig(name, email), + Config: testAccAccountTags1Config(name, email, "key2", "value2"), Check: resource.ComposeTestCheckFunc( - testAccCheckAccountExists("aws_organizations_account.test", &account), - resource.TestCheckResourceAttr("aws_organizations_account.test", "tags.%", "0"), + testAccCheckAccountExists(resourceName, &v), + resource.TestCheckResourceAttr(resourceName, "tags.%", "1"), + resource.TestCheckResourceAttr(resourceName, "tags.key2", "value2"), ), }, }, @@ -168,52 +208,43 @@ func testAccCheckAccountDestroy(s *terraform.State) error { continue } - params := &organizations.DescribeAccountInput{ - AccountId: &rs.Primary.ID, - } - - resp, err := conn.DescribeAccount(params) + _, err := tforganizations.FindAccountByID(conn, rs.Primary.ID) - if tfawserr.ErrCodeEquals(err, organizations.ErrCodeAccountNotFoundException) { - return nil + if tfresource.NotFound(err) { + continue } if err != nil { return err } - if resp != nil && resp.Account != nil { - return fmt.Errorf("Bad: Account still exists: %q", rs.Primary.ID) - } + return fmt.Errorf("AWS Organizations Account %s still exists", rs.Primary.ID) } return nil } -func testAccCheckAccountExists(n string, a *organizations.Account) resource.TestCheckFunc { +func testAccCheckAccountExists(n string, v *organizations.Account) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { return fmt.Errorf("Not found: %s", n) } - conn := acctest.Provider.Meta().(*conns.AWSClient).OrganizationsConn - params := &organizations.DescribeAccountInput{ - AccountId: &rs.Primary.ID, + if rs.Primary.ID == "" { + return fmt.Errorf("No AWS Organizations Account ID is set") } - resp, err := conn.DescribeAccount(params) + conn := acctest.Provider.Meta().(*conns.AWSClient).OrganizationsConn + + output, err := tforganizations.FindAccountByID(conn, rs.Primary.ID) if err != nil { return err } - if resp == nil || resp.Account == nil { - return fmt.Errorf("Account %q does not exist", rs.Primary.ID) - } - - a = resp.Account + *v = *output return nil } @@ -222,8 +253,18 @@ func testAccCheckAccountExists(n string, a *organizations.Account) resource.Test func testAccAccountConfig(name, email string) string { return fmt.Sprintf(` resource "aws_organizations_account" "test" { - name = "%s" - email = "%s" + name = %[1]q + email = %[2]q +} +`, name, email) +} + +func testAccAccountCloseOnDeletionConfig(name, email string) string { + return fmt.Sprintf(` +resource "aws_organizations_account" "test" { + name = %[1]q + email = %[2]q + close_on_deletion = true } `, name, email) } @@ -243,9 +284,10 @@ resource "aws_organizations_organizational_unit" "test2" { } resource "aws_organizations_account" "test" { - name = %[1]q - email = %[2]q - parent_id = aws_organizations_organizational_unit.test1.id + name = %[1]q + email = %[2]q + parent_id = aws_organizations_organizational_unit.test1.id + close_on_deletion = true } `, name, email) } @@ -265,9 +307,10 @@ resource "aws_organizations_organizational_unit" "test2" { } resource "aws_organizations_account" "test" { - name = %[1]q - email = %[2]q - parent_id = aws_organizations_organizational_unit.test2.id + name = %[1]q + email = %[2]q + parent_id = aws_organizations_organizational_unit.test2.id + close_on_deletion = true } `, name, email) } @@ -277,8 +320,9 @@ func testAccAccountTags1Config(name, email, tagKey1, tagValue1 string) string { resource "aws_organizations_organization" "test" {} resource "aws_organizations_account" "test" { - name = %[1]q - email = %[2]q 
+ name = %[1]q + email = %[2]q + close_on_deletion = true tags = { %[3]q = %[4]q @@ -292,8 +336,9 @@ func testAccAccountTags2Config(name, email, tagKey1, tagValue1, tagKey2, tagValu resource "aws_organizations_organization" "test" {} resource "aws_organizations_account" "test" { - name = %[1]q - email = %[2]q + name = %[1]q + email = %[2]q + close_on_deletion = true tags = { %[3]q = %[4]q diff --git a/internal/service/organizations/find.go b/internal/service/organizations/find.go index 380219630bac..936caeb2a5ff 100644 --- a/internal/service/organizations/find.go +++ b/internal/service/organizations/find.go @@ -1,24 +1,63 @@ package organizations import ( + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/organizations" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) +func FindAccountByID(conn *organizations.Organizations, id string) (*organizations.Account, error) { + input := &organizations.DescribeAccountInput{ + AccountId: aws.String(id), + } + + output, err := conn.DescribeAccount(input) + + if tfawserr.ErrCodeEquals(err, organizations.ErrCodeAccountNotFoundException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || output.Account == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + if status := aws.StringValue(output.Account.Status); status == organizations.AccountStatusSuspended { + return nil, &resource.NotFoundError{ + Message: status, + LastRequest: input, + } + } + + return output.Account, nil +} + func FindOrganization(conn *organizations.Organizations) (*organizations.Organization, error) { input := &organizations.DescribeOrganizationInput{} output, err := conn.DescribeOrganization(input) + if tfawserr.ErrCodeEquals(err, organizations.ErrCodeAWSOrganizationsNotInUseException) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + if err != nil { return nil, err } if output == nil || output.Organization == nil { - return nil, &resource.NotFoundError{ - Message: "Empty result", - LastRequest: input, - } + return nil, tfresource.NewEmptyResultError(input) } return output.Organization, nil diff --git a/internal/service/organizations/organizations_test.go b/internal/service/organizations/organizations_test.go index bb8e96721b8f..77b38707c1b2 100644 --- a/internal/service/organizations/organizations_test.go +++ b/internal/service/organizations/organizations_test.go @@ -16,9 +16,10 @@ func TestAccOrganizations_serial(t *testing.T) { "DataSource": testAccOrganizationDataSource_basic, }, "Account": { - "basic": testAccAccount_basic, - "ParentId": testAccAccount_ParentID, - "Tags": testAccAccount_Tags, + "basic": testAccAccount_basic, + "CloseOnDeletion": testAccAccount_CloseOnDeletion, + "ParentId": testAccAccount_ParentID, + "Tags": testAccAccount_Tags, }, "OrganizationalUnit": { "basic": testAccOrganizationalUnit_basic, diff --git a/internal/service/rds/cluster_activity_stream.go b/internal/service/rds/cluster_activity_stream.go new file mode 100644 index 000000000000..cf97030a1199 --- /dev/null +++ b/internal/service/rds/cluster_activity_stream.go @@ -0,0 +1,138 @@ +package rds + +import ( + "context" + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + 
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func ResourceClusterActivityStream() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceAwsRDSClusterActivityStreamCreate, + ReadContext: resourceAwsRDSClusterActivityStreamRead, + DeleteContext: resourceAwsRDSClusterActivityStreamDelete, + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Schema: map[string]*schema.Schema{ + "resource_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + }, + "kms_key_id": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + }, + "mode": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(rds.ActivityStreamMode_Values(), false), + }, + "kinesis_stream_name": { + Type: schema.TypeString, + Computed: true, + }, + "engine_native_audit_fields_included": { + Type: schema.TypeBool, + Optional: true, + Default: false, + ForceNew: true, + }, + }, + } +} + +func resourceAwsRDSClusterActivityStreamCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).RDSConn + + resourceArn := d.Get("resource_arn").(string) + + startActivityStreamInput := &rds.StartActivityStreamInput{ + ResourceArn: aws.String(resourceArn), + ApplyImmediately: aws.Bool(true), + KmsKeyId: aws.String(d.Get("kms_key_id").(string)), + Mode: aws.String(d.Get("mode").(string)), + EngineNativeAuditFieldsIncluded: aws.Bool(d.Get("engine_native_audit_fields_included").(bool)), + } + + log.Printf("[DEBUG] RDS Cluster start activity stream input: %s", startActivityStreamInput) + + resp, err := conn.StartActivityStream(startActivityStreamInput) + if err != nil { + return diag.FromErr(fmt.Errorf("error creating RDS Cluster Activity Stream: %s", err)) + } + + log.Printf("[DEBUG]: RDS Cluster start activity stream response: %s", resp) + + d.SetId(resourceArn) + + err = waitActivityStreamStarted(ctx, conn, d.Id()) + if err != nil { + return diag.FromErr(err) + } + + return resourceAwsRDSClusterActivityStreamRead(ctx, d, meta) +} + +func resourceAwsRDSClusterActivityStreamRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).RDSConn + + log.Printf("[DEBUG] Finding DB Cluster (%s)", d.Id()) + resp, err := FindDBClusterWithActivityStream(conn, d.Id()) + + if tfresource.NotFound(err) { + log.Printf("[WARN] RDS Cluster (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return diag.FromErr(fmt.Errorf("error describing RDS Cluster (%s): %s", d.Id(), err)) + } + + d.Set("resource_arn", resp.DBClusterArn) + d.Set("kms_key_id", resp.ActivityStreamKmsKeyId) + d.Set("kinesis_stream_name", resp.ActivityStreamKinesisStreamName) + d.Set("mode", resp.ActivityStreamMode) + + return nil +} + +func resourceAwsRDSClusterActivityStreamDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).RDSConn + + stopActivityStreamInput := &rds.StopActivityStreamInput{ + ApplyImmediately: aws.Bool(true), + ResourceArn: aws.String(d.Id()), + } + + log.Printf("[DEBUG] RDS Cluster stop activity 
stream input: %s", stopActivityStreamInput) + + resp, err := conn.StopActivityStream(stopActivityStreamInput) + if err != nil { + return diag.FromErr(fmt.Errorf("error stopping RDS Cluster Activity Stream: %w", err)) + } + + log.Printf("[DEBUG] RDS Cluster stop activity stream response: %s", resp) + + err = waitActivityStreamStopped(ctx, conn, d.Id()) + if err != nil { + return diag.FromErr(err) + } + + return nil +} diff --git a/internal/service/rds/cluster_activity_stream_test.go b/internal/service/rds/cluster_activity_stream_test.go new file mode 100644 index 000000000000..fa493e556ef7 --- /dev/null +++ b/internal/service/rds/cluster_activity_stream_test.go @@ -0,0 +1,207 @@ +package rds_test + +import ( + "fmt" + "regexp" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/rds" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfrds "github.com/hashicorp/terraform-provider-aws/internal/service/rds" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccAWSRDSClusterActivityStream_basic(t *testing.T) { + var dbCluster rds.DBCluster + clusterName := sdkacctest.RandomWithPrefix("tf-testacc-aurora-cluster") + instanceName := sdkacctest.RandomWithPrefix("tf-testacc-aurora-instance") + resourceName := "aws_rds_cluster_activity_stream.test" + rdsClusterResourceName := "aws_rds_cluster.test" + kmsKeyResourceName := "aws_kms_key.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, rds.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckAWSClusterActivityStreamDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAWSClusterActivityStreamConfig(clusterName, instanceName), + Check: resource.ComposeTestCheckFunc( + testAccCheckAWSRDSClusterActivityStreamExists(resourceName, &dbCluster), + testAccCheckAWSRDSClusterActivityStreamAttributes(&dbCluster), + acctest.MatchResourceAttrRegionalARN(resourceName, "resource_arn", "rds", regexp.MustCompile("cluster:"+clusterName)), + resource.TestCheckResourceAttrPair(resourceName, "resource_arn", rdsClusterResourceName, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "kms_key_id", kmsKeyResourceName, "key_id"), + resource.TestCheckResourceAttrSet(resourceName, "kinesis_stream_name"), + resource.TestCheckResourceAttr(resourceName, "mode", rds.ActivityStreamModeAsync), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"engine_native_audit_fields_included"}, + }, + }, + }) +} + +func TestAccAWSRDSClusterActivityStream_disappears(t *testing.T) { + var dbCluster rds.DBCluster + clusterName := sdkacctest.RandomWithPrefix("tf-testacc-aurora-cluster") + instanceName := sdkacctest.RandomWithPrefix("tf-testacc-aurora-instance") + resourceName := "aws_rds_cluster_activity_stream.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, rds.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckAWSClusterActivityStreamDestroy, + Steps: []resource.TestStep{ + { + Config: 
testAccAWSClusterActivityStreamConfig(clusterName, instanceName),
+				Check: resource.ComposeTestCheckFunc(
+					testAccCheckAWSRDSClusterActivityStreamExists(resourceName, &dbCluster),
+					acctest.CheckResourceDisappears(acctest.Provider, tfrds.ResourceClusterActivityStream(), resourceName),
+				),
+				ExpectNonEmptyPlan: true,
+			},
+		},
+	})
+}
+
+func testAccCheckAWSRDSClusterActivityStreamExists(resourceName string, dbCluster *rds.DBCluster) resource.TestCheckFunc {
+	return testAccCheckAWSRDSClusterActivityStreamExistsWithProvider(resourceName, dbCluster, acctest.Provider)
+}
+
+func testAccCheckAWSRDSClusterActivityStreamExistsWithProvider(resourceName string, dbCluster *rds.DBCluster, provider *schema.Provider) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		rs, ok := s.RootModule().Resources[resourceName]
+		if !ok {
+			return fmt.Errorf("not found: %s", resourceName)
+		}
+
+		if rs.Primary.ID == "" {
+			return fmt.Errorf("DBCluster ID is not set")
+		}
+
+		conn := provider.Meta().(*conns.AWSClient).RDSConn
+
+		response, err := tfrds.FindDBClusterWithActivityStream(conn, rs.Primary.ID)
+
+		if err != nil {
+			return err
+		}
+
+		*dbCluster = *response
+		return nil
+	}
+}
+
+func testAccCheckAWSRDSClusterActivityStreamAttributes(v *rds.DBCluster) resource.TestCheckFunc {
+	return func(s *terraform.State) error {
+		if aws.StringValue(v.DBClusterArn) == "" {
+			return fmt.Errorf("empty RDS Cluster arn")
+		}
+
+		if aws.StringValue(v.ActivityStreamKmsKeyId) == "" {
+			return fmt.Errorf("empty RDS Cluster activity stream kms key id")
+		}
+
+		if aws.StringValue(v.ActivityStreamKinesisStreamName) == "" {
+			return fmt.Errorf("empty RDS Cluster activity stream kinesis stream name")
+		}
+
+		if aws.StringValue(v.ActivityStreamStatus) != rds.ActivityStreamStatusStarted {
+			return fmt.Errorf("incorrect activity stream status: expected: %s, got: %s", rds.ActivityStreamStatusStarted, aws.StringValue(v.ActivityStreamStatus))
+		}
+
+		if aws.StringValue(v.ActivityStreamMode) != rds.ActivityStreamModeSync && aws.StringValue(v.ActivityStreamMode) != rds.ActivityStreamModeAsync {
+			return fmt.Errorf("incorrect activity stream mode: expected: sync or async, got: %s", aws.StringValue(v.ActivityStreamMode))
+		}
+
+		return nil
+	}
+}
+
+func testAccCheckAWSClusterActivityStreamDestroy(s *terraform.State) error {
+	return testAccCheckAWSClusterActivityStreamDestroyWithProvider(s, acctest.Provider)
+}
+
+func testAccCheckAWSClusterActivityStreamDestroyWithProvider(s *terraform.State, provider *schema.Provider) error {
+	conn := provider.Meta().(*conns.AWSClient).RDSConn
+
+	for _, rs := range s.RootModule().Resources {
+		if rs.Type != "aws_rds_cluster_activity_stream" {
+			continue
+		}
+
+		_, err := tfrds.FindDBClusterWithActivityStream(conn, rs.Primary.ID)
+
+		// The check passes for this resource if the cluster or its activity stream is already gone.
+		if tfresource.NotFound(err) {
+			continue
+		}
+
+		if err != nil {
+			return err
+		}
+
+		return fmt.Errorf("RDS Cluster Activity Stream %s still exists", rs.Primary.ID)
+	}
+
+	return nil
+}
+
+func testAccAWSClusterActivityStreamConfigBase(clusterName, instanceName string) string {
+	return fmt.Sprintf(`
+data "aws_availability_zones" "available" {
+  state = "available"
+}
+
+resource "aws_kms_key" "test" {
+  description             = "Testing for AWS RDS Cluster Activity Stream"
+  deletion_window_in_days = 7
+}
+
+resource "aws_rds_cluster" "test" {
+  cluster_identifier  = "%[1]s"
+  availability_zones  = ["${data.aws_availability_zones.available.names[0]}", "${data.aws_availability_zones.available.names[1]}", "${data.aws_availability_zones.available.names[2]}"]
+  master_username     = "foo"
+  master_password     = "mustbeeightcharacters"
+  skip_final_snapshot = true
+  deletion_protection = false
+  engine              = "aurora-postgresql"
+  engine_version      = "11.9"
+}
+
+resource "aws_rds_cluster_instance" "test" {
+  identifier         = "%[2]s"
+  cluster_identifier = aws_rds_cluster.test.id
+  engine             = aws_rds_cluster.test.engine
+  instance_class     = "db.r6g.large"
+}
+`, clusterName, instanceName)
+}
+
+func testAccAWSClusterActivityStreamConfig(clusterName, instanceName string) string {
+	return acctest.ConfigCompose(
+		testAccAWSClusterActivityStreamConfigBase(clusterName, instanceName),
+		`
+resource "aws_rds_cluster_activity_stream" "test" {
+  resource_arn = aws_rds_cluster.test.arn
+  kms_key_id   = aws_kms_key.test.key_id
+  mode         = "async"
+
+  depends_on = [aws_rds_cluster_instance.test]
+}
+`)
+}
diff --git a/internal/service/rds/cluster_parameter_group.go b/internal/service/rds/cluster_parameter_group.go
index 1e8059378beb..2e7bb381d393 100644
--- a/internal/service/rds/cluster_parameter_group.go
+++ b/internal/service/rds/cluster_parameter_group.go
@@ -64,7 +64,6 @@ func ResourceClusterParameterGroup() *schema.Resource {
 			"parameter": {
 				Type:     schema.TypeSet,
 				Optional: true,
-				ForceNew: false,
 				Elem: &schema.Resource{
 					Schema: map[string]*schema.Schema{
 						"name": {
diff --git a/internal/service/rds/consts.go b/internal/service/rds/consts.go
index c31b4aa776e2..add45745ed46 100644
--- a/internal/service/rds/consts.go
+++ b/internal/service/rds/consts.go
@@ -37,6 +37,12 @@ const (
 	InstanceStatusStorageOptimization = "storage-optimization"
 )
 
+const (
+	InstanceAutomatedBackupStatusPending     = "pending"
+	InstanceAutomatedBackupStatusReplicating = "replicating"
+	InstanceAutomatedBackupStatusRetained    = "retained"
+)
+
 const (
 	EventSubscriptionStatusActive   = "active"
 	EventSubscriptionStatusCreating = "creating"
diff --git a/internal/service/rds/find.go b/internal/service/rds/find.go
index b53c82c1048c..5f2d9b08a08a 100644
--- a/internal/service/rds/find.go
+++ b/internal/service/rds/find.go
@@ -1,6 +1,8 @@
 package rds
 
 import (
+	"log"
+
 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/rds"
 	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
@@ -118,6 +120,43 @@ func FindDBClusterByID(conn *rds.RDS, id string) (*rds.DBCluster, error) {
 	return dbCluster, nil
 }
 
+func FindDBClusterWithActivityStream(conn *rds.RDS, dbClusterArn string) (*rds.DBCluster, error) {
+	log.Printf("[DEBUG] Calling conn.DescribeDBClusters with DBClusterIdentifier set to %s", dbClusterArn)
+	input := &rds.DescribeDBClustersInput{
+		DBClusterIdentifier: aws.String(dbClusterArn),
+	}
+
+	output, err := conn.DescribeDBClusters(input)
+
+	if tfawserr.ErrCodeEquals(err, rds.ErrCodeDBClusterNotFoundFault) {
+		return nil, &resource.NotFoundError{
+			LastError:   err,
+			LastRequest: input,
+		}
+	}
+
+	if err != nil {
+		return nil, err
+	}
+
+	if output == nil || len(output.DBClusters) == 0 || output.DBClusters[0] == nil {
+		return nil, tfresource.NewEmptyResultError(input)
+	}
+
+	dbCluster := output.DBClusters[0]
+
+	// Eventual consistency check.
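+	// (A stale DescribeDBClusters read can briefly return a different cluster, e.g. after a rename or recreate, so only a result whose ARN matches the requested ARN exactly is accepted below.)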
+ if aws.StringValue(dbCluster.DBClusterArn) != dbClusterArn { + return nil, &resource.NotFoundError{ + LastRequest: input, + } + } + + if status := aws.StringValue(dbCluster.ActivityStreamStatus); status == rds.ActivityStreamStatusStopped { + return nil, &resource.NotFoundError{ + Message: status, + } + } + + return dbCluster, nil +} + func FindDBInstanceByID(conn *rds.RDS, id string) (*rds.DBInstance, error) { input := &rds.DescribeDBInstancesInput{ DBInstanceIdentifier: aws.String(id), @@ -189,3 +228,81 @@ func FindEventSubscriptionByID(conn *rds.RDS, id string) (*rds.EventSubscription return output.EventSubscriptionsList[0], nil } + +func FindDBInstanceAutomatedBackupByARN(conn *rds.RDS, arn string) (*rds.DBInstanceAutomatedBackup, error) { + input := &rds.DescribeDBInstanceAutomatedBackupsInput{ + DBInstanceAutomatedBackupsArn: aws.String(arn), + } + + output, err := findDBInstanceAutomatedBackup(conn, input) + + if err != nil { + return nil, err + } + + if status := aws.StringValue(output.Status); status == InstanceAutomatedBackupStatusRetained { + // If the automated backup is retained, the replication is stopped. + return nil, &resource.NotFoundError{ + Message: status, + LastRequest: input, + } + } + + // Eventual consistency check. + if aws.StringValue(output.DBInstanceAutomatedBackupsArn) != arn { + return nil, &resource.NotFoundError{ + LastRequest: input, + } + } + + return output, nil +} + +func findDBInstanceAutomatedBackup(conn *rds.RDS, input *rds.DescribeDBInstanceAutomatedBackupsInput) (*rds.DBInstanceAutomatedBackup, error) { + output, err := findDBInstanceAutomatedBackups(conn, input) + + if err != nil { + return nil, err + } + + if len(output) == 0 || output[0] == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + if count := len(output); count > 1 { + return nil, tfresource.NewTooManyResultsError(count, input) + } + + return output[0], nil +} + +func findDBInstanceAutomatedBackups(conn *rds.RDS, input *rds.DescribeDBInstanceAutomatedBackupsInput) ([]*rds.DBInstanceAutomatedBackup, error) { + var output []*rds.DBInstanceAutomatedBackup + + err := conn.DescribeDBInstanceAutomatedBackupsPages(input, func(page *rds.DescribeDBInstanceAutomatedBackupsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.DBInstanceAutomatedBackups { + if v != nil { + output = append(output, v) + } + } + + return !lastPage + }) + + if tfawserr.ErrCodeEquals(err, rds.ErrCodeDBInstanceAutomatedBackupNotFoundFault) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + return output, nil +} diff --git a/internal/service/rds/instance_automated_backups_replication.go b/internal/service/rds/instance_automated_backups_replication.go new file mode 100644 index 000000000000..2e1840eeae4f --- /dev/null +++ b/internal/service/rds/instance_automated_backups_replication.go @@ -0,0 +1,149 @@ +package rds + +import ( + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/arn" + "github.com/aws/aws-sdk-go/service/rds" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func ResourceInstanceAutomatedBackupsReplication() *schema.Resource { + return &schema.Resource{ + Create: resourceInstanceAutomatedBackupsReplicationCreate, + Read: 
resourceInstanceAutomatedBackupsReplicationRead, + Delete: resourceInstanceAutomatedBackupsReplicationDelete, + + Importer: &schema.ResourceImporter{ + State: schema.ImportStatePassthrough, + }, + + Schema: map[string]*schema.Schema{ + "kms_key_id": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + }, + "pre_signed_url": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + }, + "retention_period": { + Type: schema.TypeInt, + ForceNew: true, + Optional: true, + Default: 7, + }, + "source_db_instance_arn": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: verify.ValidARN, + }, + }, + } +} + +func resourceInstanceAutomatedBackupsReplicationCreate(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).RDSConn + + input := &rds.StartDBInstanceAutomatedBackupsReplicationInput{ + BackupRetentionPeriod: aws.Int64(int64(d.Get("retention_period").(int))), + SourceDBInstanceArn: aws.String(d.Get("source_db_instance_arn").(string)), + } + + if v, ok := d.GetOk("kms_key_id"); ok { + input.KmsKeyId = aws.String(v.(string)) + } + + if v, ok := d.GetOk("pre_signed_url"); ok { + input.PreSignedUrl = aws.String(v.(string)) + } + + log.Printf("[DEBUG] Starting RDS instance automated backups replication: %s", input) + output, err := conn.StartDBInstanceAutomatedBackupsReplication(input) + + if err != nil { + return fmt.Errorf("error starting RDS instance automated backups replication: %w", err) + } + + d.SetId(aws.StringValue(output.DBInstanceAutomatedBackup.DBInstanceAutomatedBackupsArn)) + + if _, err := waitDBInstanceAutomatedBackupCreated(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return fmt.Errorf("error waiting for DB instance automated backup (%s) create: %w", d.Id(), err) + } + + return resourceInstanceAutomatedBackupsReplicationRead(d, meta) +} + +func resourceInstanceAutomatedBackupsReplicationRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).RDSConn + + backup, err := FindDBInstanceAutomatedBackupByARN(conn, d.Id()) + + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] RDS instance automated backup %s not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return fmt.Errorf("error reading RDS instance automated backup (%s): %w", d.Id(), err) + } + + d.Set("kms_key_id", backup.KmsKeyId) + d.Set("retention_period", backup.BackupRetentionPeriod) + d.Set("source_db_instance_arn", backup.DBInstanceArn) + + return nil +} + +func resourceInstanceAutomatedBackupsReplicationDelete(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).RDSConn + + backup, err := FindDBInstanceAutomatedBackupByARN(conn, d.Id()) + + if tfresource.NotFound(err) { + return nil + } + + if err != nil { + return fmt.Errorf("error reading RDS instance automated backup (%s): %w", d.Id(), err) + } + + dbInstanceID := aws.StringValue(backup.DBInstanceIdentifier) + sourceDatabaseARN, err := arn.Parse(aws.StringValue(backup.DBInstanceArn)) + + if err != nil { + return err + } + + log.Printf("[DEBUG] Stopping RDS instance automated backups replication: %s", d.Id()) + _, err = conn.StopDBInstanceAutomatedBackupsReplication(&rds.StopDBInstanceAutomatedBackupsReplicationInput{ + SourceDBInstanceArn: aws.String(d.Get("source_db_instance_arn").(string)), + }) + + if err != nil { + return fmt.Errorf("error stopping RDS instance automated backups replication 
(%s): %w", d.Id(), err) + } + + // Create a new client to the source region. + sourceDatabaseConn := conn + if sourceDatabaseARN.Region != meta.(*conns.AWSClient).Region { + sourceDatabaseConn = rds.New(meta.(*conns.AWSClient).Session, aws.NewConfig().WithRegion(sourceDatabaseARN.Region)) + } + + if _, err := waitDBInstanceAutomatedBackupDeleted(sourceDatabaseConn, dbInstanceID, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return fmt.Errorf("error waiting for DB instance automated backup (%s) delete: %w", d.Id(), err) + } + + return nil +} diff --git a/internal/service/rds/instance_automated_backups_replication_test.go b/internal/service/rds/instance_automated_backups_replication_test.go new file mode 100644 index 000000000000..8672e12e2e3c --- /dev/null +++ b/internal/service/rds/instance_automated_backups_replication_test.go @@ -0,0 +1,244 @@ +package rds_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/rds" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfrds "github.com/hashicorp/terraform-provider-aws/internal/service/rds" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" +) + +func TestAccRDSInstanceAutomatedBackupsReplication_basic(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_db_instance_automated_backups_replication.test" + + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, rds.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: testAccCheckInstanceAutomatedBackupsReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceAutomatedBackupsReplicationConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceAutomatedBackupsReplicationExist(resourceName), + resource.TestCheckResourceAttr(resourceName, "retention_period", "7"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccRDSInstanceAutomatedBackupsReplication_retentionPeriod(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_db_instance_automated_backups_replication.test" + + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, rds.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: testAccCheckInstanceAutomatedBackupsReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceAutomatedBackupReplicationsRetentionPeriodConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceAutomatedBackupsReplicationExist(resourceName), + resource.TestCheckResourceAttr(resourceName, "retention_period", "14"), + ), + }, + { + ResourceName: resourceName, + 
ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccRDSInstanceAutomatedBackupsReplication_kmsEncrypted(t *testing.T) { + if testing.Short() { + t.Skip("skipping long-running test in short mode") + } + + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_db_instance_automated_backups_replication.test" + + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, rds.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: testAccCheckInstanceAutomatedBackupsReplicationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccInstanceAutomatedBackupsReplicationKMSEncryptedConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckInstanceAutomatedBackupsReplicationExist(resourceName), + resource.TestCheckResourceAttr(resourceName, "retention_period", "7"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccInstanceAutomatedBackupsReplicationConfig(rName string) string { + return acctest.ConfigCompose(acctest.ConfigMultipleRegionProvider(2), fmt.Sprintf(` +resource "aws_db_instance" "test" { + allocated_storage = 10 + identifier = %[1]q + engine = "postgres" + engine_version = "13.4" + instance_class = "db.t3.micro" + name = "mydb" + username = "masterusername" + password = "mustbeeightcharacters" + backup_retention_period = 7 + skip_final_snapshot = true + + provider = "awsalternate" +} + +resource "aws_db_instance_automated_backups_replication" "test" { + source_db_instance_arn = aws_db_instance.test.arn +} +`, rName)) +} + +func testAccInstanceAutomatedBackupReplicationsRetentionPeriodConfig(rName string) string { + return acctest.ConfigCompose(acctest.ConfigMultipleRegionProvider(2), fmt.Sprintf(` +resource "aws_db_instance" "test" { + allocated_storage = 10 + identifier = %[1]q + engine = "postgres" + engine_version = "13.4" + instance_class = "db.t3.micro" + name = "mydb" + username = "masterusername" + password = "mustbeeightcharacters" + backup_retention_period = 7 + skip_final_snapshot = true + + provider = "awsalternate" +} + +resource "aws_db_instance_automated_backups_replication" "test" { + source_db_instance_arn = aws_db_instance.test.arn + retention_period = 14 +} +`, rName)) +} + +func testAccInstanceAutomatedBackupsReplicationKMSEncryptedConfig(rName string) string { + return acctest.ConfigCompose(acctest.ConfigMultipleRegionProvider(2), fmt.Sprintf(` +resource "aws_kms_key" "test" { + description = %[1]q +} + +resource "aws_db_instance" "test" { + allocated_storage = 10 + identifier = %[1]q + engine = "postgres" + engine_version = "13.4" + instance_class = "db.t3.micro" + name = "mydb" + username = "masterusername" + password = "mustbeeightcharacters" + backup_retention_period = 7 + storage_encrypted = true + skip_final_snapshot = true + + provider = "awsalternate" +} + +resource "aws_db_instance_automated_backups_replication" "test" { + source_db_instance_arn = aws_db_instance.test.arn + kms_key_id = aws_kms_key.test.arn +} +`, rName)) +} + +func testAccCheckInstanceAutomatedBackupsReplicationExist(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No RDS instance automated backups 
replication ID is set") + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn + + _, err := tfrds.FindDBInstanceAutomatedBackupByARN(conn, rs.Primary.ID) + + if err != nil { + return err + } + + return nil + } +} + +func testAccCheckInstanceAutomatedBackupsReplicationDestroy(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).RDSConn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_db_instance_automated_backups_replication" { + continue + } + + _, err := tfrds.FindDBInstanceAutomatedBackupByARN(conn, rs.Primary.ID) + + if tfresource.NotFound(err) { + continue + } + + if err != nil { + return err + } + + return fmt.Errorf("RDS instance automated backups replication %s still exists", rs.Primary.ID) + } + + return nil +} diff --git a/internal/service/rds/status.go b/internal/service/rds/status.go index 0f393d980030..a7df1b60f7db 100644 --- a/internal/service/rds/status.go +++ b/internal/service/rds/status.go @@ -1,6 +1,8 @@ package rds import ( + "strconv" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/rds" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" @@ -79,3 +81,63 @@ func statusDBInstance(conn *rds.RDS, id string) resource.StateRefreshFunc { return output, aws.StringValue(output.DBInstanceStatus), nil } } + +func statusDBClusterActivityStream(conn *rds.RDS, dbClusterArn string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := FindDBClusterWithActivityStream(conn, dbClusterArn) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + if output == nil { + return nil, "", nil + } + + return output, aws.StringValue(output.ActivityStreamStatus), nil + } +} + +func statusDBInstanceAutomatedBackup(conn *rds.RDS, arn string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := FindDBInstanceAutomatedBackupByARN(conn, arn) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, aws.StringValue(output.Status), nil + } +} + +// statusDBInstanceHasAutomatedBackup returns whether or not a database instance has a specified automated backup. +// The connection must be valid for the database instance's Region. 
+func statusDBInstanceHasAutomatedBackup(conn *rds.RDS, dbInstanceID, dbInstanceAutomatedBackupsARN string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := FindDBInstanceByID(conn, dbInstanceID) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + for _, v := range output.DBInstanceAutomatedBackupsReplications { + if aws.StringValue(v.DBInstanceAutomatedBackupsArn) == dbInstanceAutomatedBackupsARN { + return output, strconv.FormatBool(true), nil + } + } + + return output, strconv.FormatBool(false), nil + } +} diff --git a/internal/service/rds/sweep.go index 7af82b991bfd..e0089e8271a8 100644 --- a/internal/service/rds/sweep.go +++ b/internal/service/rds/sweep.go @@ -94,6 +94,11 @@ func init() { "aws_db_instance", }, }) + + resource.AddTestSweepers("aws_rds_cluster_activity_stream", &resource.Sweeper{ + Name: "aws_rds_cluster_activity_stream", + F: func(region string) error { return nil }, + }) } func sweepClusterParameterGroups(region string) error { diff --git a/internal/service/rds/wait.go index 6751a4d048d5..2d70756cf0db 100644 --- a/internal/service/rds/wait.go +++ b/internal/service/rds/wait.go @@ -1,6 +1,10 @@ package rds import ( + "context" + "fmt" + "log" + "strconv" "time" "github.com/aws/aws-sdk-go/service/rds" @@ -10,6 +14,9 @@ import ( const ( dbClusterRoleAssociationCreatedTimeout = 5 * time.Minute dbClusterRoleAssociationDeletedTimeout = 5 * time.Minute + + dbClusterActivityStreamStartedTimeout = 30 * time.Minute + dbClusterActivityStreamStoppedTimeout = 30 * time.Minute ) func waitEventSubscriptionCreated(conn *rds.RDS, id string, timeout time.Duration) (*rds.EventSubscription, error) { @@ -199,3 +206,79 @@ func waitDBClusterInstanceDeleted(conn *rds.RDS, id string, timeout time.Duratio return nil, err } + +// waitActivityStreamStarted waits for Aurora Cluster Activity Stream to be started +func waitActivityStreamStarted(ctx context.Context, conn *rds.RDS, dbClusterArn string) error { + log.Printf("[DEBUG] Waiting for RDS Cluster Activity Stream %s to become started...", dbClusterArn) + + stateConf := &resource.StateChangeConf{ + Pending: []string{rds.ActivityStreamStatusStarting}, + Target: []string{rds.ActivityStreamStatusStarted}, + Refresh: statusDBClusterActivityStream(conn, dbClusterArn), + Timeout: dbClusterActivityStreamStartedTimeout, + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + _, err := stateConf.WaitForStateContext(ctx) + if err != nil { + return fmt.Errorf("error waiting for RDS Cluster Activity Stream (%s) to be started: %w", dbClusterArn, err) + } + return nil +} + +// waitActivityStreamStopped waits for Aurora Cluster Activity Stream to be stopped +func waitActivityStreamStopped(ctx context.Context, conn *rds.RDS, dbClusterArn string) error { + log.Printf("[DEBUG] Waiting for RDS Cluster Activity Stream %s to become stopped...", dbClusterArn) + + stateConf := &resource.StateChangeConf{ + Pending: []string{rds.ActivityStreamStatusStopping}, + Target: []string{}, + Refresh: statusDBClusterActivityStream(conn, dbClusterArn), + Timeout: dbClusterActivityStreamStoppedTimeout, + MinTimeout: 10 * time.Second, + Delay: 30 * time.Second, + } + + _, err := stateConf.WaitForStateContext(ctx) + if err != nil { + return fmt.Errorf("error waiting for RDS Cluster Activity Stream (%s) to be stopped: %w", dbClusterArn, err) + } + return nil +} + +func waitDBInstanceAutomatedBackupCreated(conn 
*rds.RDS, arn string, timeout time.Duration) (*rds.DBInstanceAutomatedBackup, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{InstanceAutomatedBackupStatusPending}, + Target: []string{InstanceAutomatedBackupStatusReplicating}, + Refresh: statusDBInstanceAutomatedBackup(conn, arn), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*rds.DBInstanceAutomatedBackup); ok { + return output, err + } + + return nil, err +} + +// waitDBInstanceAutomatedBackupDeleted waits for a specified automated backup to be deleted from a database instance. +// The connection must be valid for the database instance's Region. +func waitDBInstanceAutomatedBackupDeleted(conn *rds.RDS, dbInstanceID, dbInstanceAutomatedBackupsARN string, timeout time.Duration) (*rds.DBInstance, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{strconv.FormatBool(true)}, + Target: []string{strconv.FormatBool(false)}, + Refresh: statusDBInstanceHasAutomatedBackup(conn, dbInstanceID, dbInstanceAutomatedBackupsARN), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*rds.DBInstance); ok { + return output, err + } + + return nil, err +} diff --git a/internal/service/redshift/cluster.go b/internal/service/redshift/cluster.go index e850f15ade45..5ce188b60ab3 100644 --- a/internal/service/redshift/cluster.go +++ b/internal/service/redshift/cluster.go @@ -14,7 +14,6 @@ import ( "github.com/aws/aws-sdk-go/service/redshift" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff" - "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -308,6 +307,8 @@ func ResourceCluster() *schema.Resource { Optional: true, ForceNew: true, }, + "tags": tftags.TagsSchema(), + "tags_all": tftags.TagsSchemaComputed(), "vpc_security_group_ids": { Type: schema.TypeSet, Optional: true, @@ -315,15 +316,13 @@ func ResourceCluster() *schema.Resource { Elem: &schema.Schema{Type: schema.TypeString}, Set: schema.HashString, }, - "tags": tftags.TagsSchema(), - "tags_all": tftags.TagsSchemaComputed(), }, CustomizeDiff: customdiff.All( verify.SetTagsDiff, func(_ context.Context, diff *schema.ResourceDiff, v interface{}) error { if diff.Get("availability_zone_relocation_enabled").(bool) && diff.Get("publicly_accessible").(bool) { - return errors.New("availability_zone_relocation_enabled can not be true when publicly_accessible is true") + return errors.New("`availability_zone_relocation_enabled` cannot be true when `publicly_accessible` is true") } return nil }, @@ -336,7 +335,7 @@ func ResourceCluster() *schema.Resource { } o, n := diff.GetChange("availability_zone") if o.(string) != n.(string) { - return fmt.Errorf("cannot change availability_zone if availability_zone_relocation_enabled is not true") + return fmt.Errorf("cannot change `availability_zone` if `availability_zone_relocation_enabled` is not true") } return nil }, @@ -344,92 +343,87 @@ func ResourceCluster() *schema.Resource { } } -func resourceClusterImport( - d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { - // Neither skip_final_snapshot nor final_snapshot_identifier can be fetched - // from any API call, so we need to default skip_final_snapshot to true so - // that 
final_snapshot_identifier is not required - d.Set("skip_final_snapshot", true) - return []*schema.ResourceData{d}, nil -} - func resourceClusterCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).RedshiftConn defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) if v, ok := d.GetOk("snapshot_identifier"); ok { - restoreOpts := &redshift.RestoreFromClusterSnapshotInput{ - ClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), - SnapshotIdentifier: aws.String(v.(string)), - Port: aws.Int64(int64(d.Get("port").(int))), + clusterID := d.Get("cluster_identifier").(string) + input := &redshift.RestoreFromClusterSnapshotInput{ AllowVersionUpgrade: aws.Bool(d.Get("allow_version_upgrade").(bool)), + AutomatedSnapshotRetentionPeriod: aws.Int64(int64(d.Get("automated_snapshot_retention_period").(int))), + ClusterIdentifier: aws.String(clusterID), + Port: aws.Int64(int64(d.Get("port").(int))), NodeType: aws.String(d.Get("node_type").(string)), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), - AutomatedSnapshotRetentionPeriod: aws.Int64(int64(d.Get("automated_snapshot_retention_period").(int))), - } - - if v, ok := d.GetOk("owner_account"); ok { - restoreOpts.OwnerAccount = aws.String(v.(string)) - } - - if v, ok := d.GetOk("snapshot_cluster_identifier"); ok { - restoreOpts.SnapshotClusterIdentifier = aws.String(v.(string)) + SnapshotIdentifier: aws.String(v.(string)), } if v, ok := d.GetOk("availability_zone"); ok { - restoreOpts.AvailabilityZone = aws.String(v.(string)) + input.AvailabilityZone = aws.String(v.(string)) } if v, ok := d.GetOk("availability_zone_relocation_enabled"); ok { - restoreOpts.AvailabilityZoneRelocation = aws.Bool(v.(bool)) + input.AvailabilityZoneRelocation = aws.Bool(v.(bool)) } if v, ok := d.GetOk("cluster_subnet_group_name"); ok { - restoreOpts.ClusterSubnetGroupName = aws.String(v.(string)) + input.ClusterSubnetGroupName = aws.String(v.(string)) } if v, ok := d.GetOk("cluster_parameter_group_name"); ok { - restoreOpts.ClusterParameterGroupName = aws.String(v.(string)) + input.ClusterParameterGroupName = aws.String(v.(string)) } if v := d.Get("cluster_security_groups").(*schema.Set); v.Len() > 0 { - restoreOpts.ClusterSecurityGroups = flex.ExpandStringSet(v) + input.ClusterSecurityGroups = flex.ExpandStringSet(v) } - if v := d.Get("vpc_security_group_ids").(*schema.Set); v.Len() > 0 { - restoreOpts.VpcSecurityGroupIds = flex.ExpandStringSet(v) + if v, ok := d.GetOk("elastic_ip"); ok { + input.ElasticIp = aws.String(v.(string)) } - if v, ok := d.GetOk("preferred_maintenance_window"); ok { - restoreOpts.PreferredMaintenanceWindow = aws.String(v.(string)) + if v, ok := d.GetOk("enhanced_vpc_routing"); ok { + input.EnhancedVpcRouting = aws.Bool(v.(bool)) + } + + if v, ok := d.GetOk("iam_roles"); ok { + input.IamRoles = flex.ExpandStringSet(v.(*schema.Set)) } if v, ok := d.GetOk("kms_key_id"); ok { - restoreOpts.KmsKeyId = aws.String(v.(string)) + input.KmsKeyId = aws.String(v.(string)) } - if v, ok := d.GetOk("elastic_ip"); ok { - restoreOpts.ElasticIp = aws.String(v.(string)) + if v, ok := d.GetOk("number_of_nodes"); ok { + input.NumberOfNodes = aws.Int64(int64(v.(int))) } - if v, ok := d.GetOk("enhanced_vpc_routing"); ok { - restoreOpts.EnhancedVpcRouting = aws.Bool(v.(bool)) + if v, ok := d.GetOk("owner_account"); ok { + input.OwnerAccount = aws.String(v.(string)) } - if v, ok := d.GetOk("iam_roles"); ok { - 
restoreOpts.IamRoles = flex.ExpandStringSet(v.(*schema.Set)) + if v, ok := d.GetOk("preferred_maintenance_window"); ok { + input.PreferredMaintenanceWindow = aws.String(v.(string)) } - log.Printf("[DEBUG] Redshift Cluster restore cluster options: %s", restoreOpts) + if v, ok := d.GetOk("snapshot_cluster_identifier"); ok { + input.SnapshotClusterIdentifier = aws.String(v.(string)) + } - resp, err := conn.RestoreFromClusterSnapshot(restoreOpts) - if err != nil { - return fmt.Errorf("error restoring Redshift Cluster from snapshot: %w", err) + if v := d.Get("vpc_security_group_ids").(*schema.Set); v.Len() > 0 { + input.VpcSecurityGroupIds = flex.ExpandStringSet(v) } - d.SetId(aws.StringValue(resp.Cluster.ClusterIdentifier)) + log.Printf("[DEBUG] Restoring Redshift Cluster: %s", input) + output, err := conn.RestoreFromClusterSnapshot(input) + if err != nil { + return fmt.Errorf("error restoring Redshift Cluster (%s) from snapshot: %w", clusterID, err) + } + + d.SetId(aws.StringValue(output.Cluster.ClusterIdentifier)) } else { if _, ok := d.GetOk("master_password"); !ok { return fmt.Errorf(`provider.aws: aws_redshift_cluster: %s: "master_password": required field is not set`, d.Get("cluster_identifier").(string)) @@ -439,112 +433,109 @@ func resourceClusterCreate(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf(`provider.aws: aws_redshift_cluster: %s: "master_username": required field is not set`, d.Get("cluster_identifier").(string)) } - createOpts := &redshift.CreateClusterInput{ - ClusterIdentifier: aws.String(d.Get("cluster_identifier").(string)), - Port: aws.Int64(int64(d.Get("port").(int))), - MasterUserPassword: aws.String(d.Get("master_password").(string)), - MasterUsername: aws.String(d.Get("master_username").(string)), + clusterID := d.Get("cluster_identifier").(string) + input := &redshift.CreateClusterInput{ + AllowVersionUpgrade: aws.Bool(d.Get("allow_version_upgrade").(bool)), + AutomatedSnapshotRetentionPeriod: aws.Int64(int64(d.Get("automated_snapshot_retention_period").(int))), + ClusterIdentifier: aws.String(clusterID), ClusterVersion: aws.String(d.Get("cluster_version").(string)), - NodeType: aws.String(d.Get("node_type").(string)), DBName: aws.String(d.Get("database_name").(string)), - AllowVersionUpgrade: aws.Bool(d.Get("allow_version_upgrade").(bool)), + MasterUsername: aws.String(d.Get("master_username").(string)), + MasterUserPassword: aws.String(d.Get("master_password").(string)), + NodeType: aws.String(d.Get("node_type").(string)), + Port: aws.Int64(int64(d.Get("port").(int))), PubliclyAccessible: aws.Bool(d.Get("publicly_accessible").(bool)), - AutomatedSnapshotRetentionPeriod: aws.Int64(int64(d.Get("automated_snapshot_retention_period").(int))), Tags: Tags(tags.IgnoreAWS()), } - if v := d.Get("number_of_nodes").(int); v > 1 { - createOpts.ClusterType = aws.String("multi-node") - createOpts.NumberOfNodes = aws.Int64(int64(d.Get("number_of_nodes").(int))) - } else { - createOpts.ClusterType = aws.String("single-node") + if v, ok := d.GetOk("availability_zone"); ok { + input.AvailabilityZone = aws.String(v.(string)) } - if v := d.Get("cluster_security_groups").(*schema.Set); v.Len() > 0 { - createOpts.ClusterSecurityGroups = flex.ExpandStringSet(v) + if v, ok := d.GetOk("availability_zone_relocation_enabled"); ok { + input.AvailabilityZoneRelocation = aws.Bool(v.(bool)) } - if v := d.Get("vpc_security_group_ids").(*schema.Set); v.Len() > 0 { - createOpts.VpcSecurityGroupIds = flex.ExpandStringSet(v) + if v, ok := 
d.GetOk("cluster_parameter_group_name"); ok { + input.ClusterParameterGroupName = aws.String(v.(string)) } - if v, ok := d.GetOk("cluster_subnet_group_name"); ok { - createOpts.ClusterSubnetGroupName = aws.String(v.(string)) + if v := d.Get("cluster_security_groups").(*schema.Set); v.Len() > 0 { + input.ClusterSecurityGroups = flex.ExpandStringSet(v) } - if v, ok := d.GetOk("availability_zone"); ok { - createOpts.AvailabilityZone = aws.String(v.(string)) + if v, ok := d.GetOk("cluster_subnet_group_name"); ok { + input.ClusterSubnetGroupName = aws.String(v.(string)) } - if v, ok := d.GetOk("availability_zone_relocation_enabled"); ok { - createOpts.AvailabilityZoneRelocation = aws.Bool(v.(bool)) + if v, ok := d.GetOk("elastic_ip"); ok { + input.ElasticIp = aws.String(v.(string)) } - if v, ok := d.GetOk("preferred_maintenance_window"); ok { - createOpts.PreferredMaintenanceWindow = aws.String(v.(string)) + if v, ok := d.GetOk("encrypted"); ok { + input.Encrypted = aws.Bool(v.(bool)) } - if v, ok := d.GetOk("cluster_parameter_group_name"); ok { - createOpts.ClusterParameterGroupName = aws.String(v.(string)) + if v, ok := d.GetOk("enhanced_vpc_routing"); ok { + input.EnhancedVpcRouting = aws.Bool(v.(bool)) } - if v, ok := d.GetOk("encrypted"); ok { - createOpts.Encrypted = aws.Bool(v.(bool)) + if v, ok := d.GetOk("iam_roles"); ok { + input.IamRoles = flex.ExpandStringSet(v.(*schema.Set)) } - if v, ok := d.GetOk("enhanced_vpc_routing"); ok { - createOpts.EnhancedVpcRouting = aws.Bool(v.(bool)) + if v, ok := d.GetOk("kms_key_id"); ok { + input.KmsKeyId = aws.String(v.(string)) } - if v, ok := d.GetOk("kms_key_id"); ok { - createOpts.KmsKeyId = aws.String(v.(string)) + if v := d.Get("number_of_nodes").(int); v > 1 { + input.ClusterType = aws.String(clusterTypeMultiNode) + input.NumberOfNodes = aws.Int64(int64(d.Get("number_of_nodes").(int))) + } else { + input.ClusterType = aws.String(clusterTypeSingleNode) } - if v, ok := d.GetOk("elastic_ip"); ok { - createOpts.ElasticIp = aws.String(v.(string)) + if v, ok := d.GetOk("preferred_maintenance_window"); ok { + input.PreferredMaintenanceWindow = aws.String(v.(string)) } - if v, ok := d.GetOk("iam_roles"); ok { - createOpts.IamRoles = flex.ExpandStringSet(v.(*schema.Set)) + if v := d.Get("vpc_security_group_ids").(*schema.Set); v.Len() > 0 { + input.VpcSecurityGroupIds = flex.ExpandStringSet(v) } - log.Printf("[DEBUG] Redshift Cluster create options: %s", createOpts) - resp, err := conn.CreateCluster(createOpts) + log.Printf("[DEBUG] Creating Redshift Cluster: %s", input) + output, err := conn.CreateCluster(input) + if err != nil { - return fmt.Errorf("error creating Redshift Cluster: %w", err) + return fmt.Errorf("error creating Redshift Cluster (%s): %w", clusterID, err) } - log.Printf("[DEBUG]: Cluster create response: %s", resp) - d.SetId(aws.StringValue(resp.Cluster.ClusterIdentifier)) + d.SetId(aws.StringValue(output.Cluster.ClusterIdentifier)) } - stateConf := &resource.StateChangeConf{ - Pending: []string{"creating", "backing-up", "modifying", "restoring", "available, prep-for-resize"}, - Target: []string{"available"}, - Refresh: resourceClusterStateRefreshFunc(d.Id(), conn), - Timeout: d.Timeout(schema.TimeoutCreate), - MinTimeout: 10 * time.Second, - } - _, err := stateConf.WaitForState() - if err != nil { - return fmt.Errorf("Error waiting for Redshift Cluster state to be \"available\": %w", err) + if _, err := waitClusterCreated(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return fmt.Errorf("error waiting for Redshift 
Cluster (%s) create: %w", d.Id(), err) } - _, err = waitClusterRelocationStatusResolved(conn, d.Id()) - if err != nil { - return fmt.Errorf("error waiting for Redshift Cluster Availability Zone Relocation Status to resolve: %w", err) + if _, err := waitClusterRelocationStatusResolved(conn, d.Id()); err != nil { + return fmt.Errorf("error waiting for Redshift Cluster (%s) Availability Zone Relocation Status resolution: %w", d.Id(), err) } - if v, ok := d.GetOk("snapshot_copy"); ok { - err := enableRedshiftSnapshotCopy(d.Id(), v.([]interface{}), conn) - if err != nil { + if v, ok := d.GetOk("snapshot_copy"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + if err := enableSnapshotCopy(conn, d.Id(), v.([]interface{})[0].(map[string]interface{})); err != nil { return err } } - if _, ok := d.GetOk("logging.0.enable"); ok { - if err := enableRedshiftClusterLogging(d, conn); err != nil { - return fmt.Errorf("error enabling Redshift Cluster (%s) logging: %w", d.Id(), err) + if v, ok := d.GetOk("logging"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + tfMap := v.([]interface{})[0].(map[string]interface{}) + + if v, ok := tfMap["enable"].(bool); ok && v { + err := enableLogging(conn, d.Id(), tfMap) + + if err != nil { + return err + } } } @@ -675,104 +666,89 @@ func resourceClusterRead(d *schema.ResourceData, meta interface{}) error { func resourceClusterUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).RedshiftConn - if d.HasChange("tags_all") { - o, n := d.GetChange("tags_all") + if d.HasChangesExcept("availability_zone", "iam_roles", "logging", "snapshot_copy", "tags", "tags_all") { + input := &redshift.ModifyClusterInput{ + ClusterIdentifier: aws.String(d.Id()), + } - if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { - return fmt.Errorf("error updating Redshift Cluster (%s) tags: %s", d.Get("arn").(string), err) + if d.HasChange("allow_version_upgrade") { + input.AllowVersionUpgrade = aws.Bool(d.Get("allow_version_upgrade").(bool)) } - } - requestUpdate := false - log.Printf("[INFO] Building Redshift Modify Cluster Options") - req := &redshift.ModifyClusterInput{ - ClusterIdentifier: aws.String(d.Id()), - } + if d.HasChange("automated_snapshot_retention_period") { + input.AutomatedSnapshotRetentionPeriod = aws.Int64(int64(d.Get("automated_snapshot_retention_period").(int))) + } - // If the cluster type, node type, or number of nodes changed, then the AWS API expects all three - // items to be sent over - if d.HasChanges("cluster_type", "node_type", "number_of_nodes") { - req.ClusterType = aws.String(d.Get("cluster_type").(string)) - req.NodeType = aws.String(d.Get("node_type").(string)) - if v := d.Get("number_of_nodes").(int); v > 1 { - req.ClusterType = aws.String("multi-node") - req.NumberOfNodes = aws.Int64(int64(d.Get("number_of_nodes").(int))) - } else { - req.ClusterType = aws.String("single-node") + if d.HasChange("availability_zone_relocation_enabled") { + input.AvailabilityZoneRelocation = aws.Bool(d.Get("availability_zone_relocation_enabled").(bool)) } - requestUpdate = true - } - if d.HasChange("availability_zone_relocation_enabled") { - req.AvailabilityZoneRelocation = aws.Bool(d.Get("availability_zone_relocation_enabled").(bool)) - requestUpdate = true - } + if d.HasChange("cluster_parameter_group_name") { + input.ClusterParameterGroupName = aws.String(d.Get("cluster_parameter_group_name").(string)) + } - if d.HasChange("cluster_security_groups") { - req.ClusterSecurityGroups = 
flex.ExpandStringSet(d.Get("cluster_security_groups").(*schema.Set)) - requestUpdate = true - } + if d.HasChange("cluster_security_groups") { + input.ClusterSecurityGroups = flex.ExpandStringSet(d.Get("cluster_security_groups").(*schema.Set)) + } - if d.HasChange("vpc_security_group_ids") { - req.VpcSecurityGroupIds = flex.ExpandStringSet(d.Get("vpc_security_group_ids").(*schema.Set)) - requestUpdate = true - } + // If the cluster type, node type, or number of nodes changed, then the AWS API expects all three + // items to be sent over. + if d.HasChanges("cluster_type", "node_type", "number_of_nodes") { + input.NodeType = aws.String(d.Get("node_type").(string)) - if d.HasChange("master_password") { - req.MasterUserPassword = aws.String(d.Get("master_password").(string)) - requestUpdate = true - } + if v := d.Get("number_of_nodes").(int); v > 1 { + input.ClusterType = aws.String(clusterTypeMultiNode) + input.NumberOfNodes = aws.Int64(int64(d.Get("number_of_nodes").(int))) + } else { + input.ClusterType = aws.String(clusterTypeSingleNode) + } + } - if d.HasChange("cluster_parameter_group_name") { - req.ClusterParameterGroupName = aws.String(d.Get("cluster_parameter_group_name").(string)) - requestUpdate = true - } + if d.HasChange("cluster_version") { + input.ClusterVersion = aws.String(d.Get("cluster_version").(string)) + } - if d.HasChange("automated_snapshot_retention_period") { - req.AutomatedSnapshotRetentionPeriod = aws.Int64(int64(d.Get("automated_snapshot_retention_period").(int))) - requestUpdate = true - } + if d.HasChange("encrypted") { + input.Encrypted = aws.Bool(d.Get("encrypted").(bool)) + } - if d.HasChange("preferred_maintenance_window") { - req.PreferredMaintenanceWindow = aws.String(d.Get("preferred_maintenance_window").(string)) - requestUpdate = true - } + if d.HasChange("enhanced_vpc_routing") { + input.EnhancedVpcRouting = aws.Bool(d.Get("enhanced_vpc_routing").(bool)) + } - if d.HasChange("cluster_version") { - req.ClusterVersion = aws.String(d.Get("cluster_version").(string)) - requestUpdate = true - } + if d.Get("encrypted").(bool) && d.HasChange("kms_key_id") { + input.KmsKeyId = aws.String(d.Get("kms_key_id").(string)) + } - if d.HasChange("allow_version_upgrade") { - req.AllowVersionUpgrade = aws.Bool(d.Get("allow_version_upgrade").(bool)) - requestUpdate = true - } + if d.HasChange("master_password") { + input.MasterUserPassword = aws.String(d.Get("master_password").(string)) + } - if d.HasChange("publicly_accessible") { - req.PubliclyAccessible = aws.Bool(d.Get("publicly_accessible").(bool)) - requestUpdate = true - } + if d.HasChange("preferred_maintenance_window") { + input.PreferredMaintenanceWindow = aws.String(d.Get("preferred_maintenance_window").(string)) + } - if d.HasChange("enhanced_vpc_routing") { - req.EnhancedVpcRouting = aws.Bool(d.Get("enhanced_vpc_routing").(bool)) - requestUpdate = true - } + if d.HasChange("publicly_accessible") { + input.PubliclyAccessible = aws.Bool(d.Get("publicly_accessible").(bool)) + } - if d.HasChange("encrypted") { - req.Encrypted = aws.Bool(d.Get("encrypted").(bool)) - requestUpdate = true - } + if d.HasChange("vpc_security_group_ids") { + input.VpcSecurityGroupIds = flex.ExpandStringSet(d.Get("vpc_security_group_ids").(*schema.Set)) + } - if d.Get("encrypted").(bool) && d.HasChange("kms_key_id") { - req.KmsKeyId = aws.String(d.Get("kms_key_id").(string)) - requestUpdate = true - } + log.Printf("[DEBUG] Modifying Redshift Cluster: %s", input) + _, err := conn.ModifyCluster(input) - if requestUpdate { - 
log.Printf("[DEBUG] Modifying Redshift Cluster: %s", d.Id()) - _, err := conn.ModifyCluster(req) if err != nil { - return fmt.Errorf("Error modifying Redshift Cluster (%s): %w", d.Id(), err) + return fmt.Errorf("error modifying Redshift Cluster (%s): %w", d.Id(), err) + } + + if _, err := waitClusterUpdated(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for Redshift Cluster (%s) update: %w", d.Id(), err) + } + + if _, err := waitClusterRelocationStatusResolved(conn, d.Id()); err != nil { + return fmt.Errorf("error waiting for Redshift Cluster (%s) Availability Zone Relocation Status resolution: %w", d.Id(), err) } } @@ -787,70 +763,50 @@ func resourceClusterUpdate(d *schema.ResourceData, meta interface{}) error { os := o.(*schema.Set) ns := n.(*schema.Set) + add := ns.Difference(os) + del := os.Difference(ns) - removeIams := os.Difference(ns) - addIams := ns.Difference(os) - - req := &redshift.ModifyClusterIamRolesInput{ + input := &redshift.ModifyClusterIamRolesInput{ + AddIamRoles: flex.ExpandStringSet(add), ClusterIdentifier: aws.String(d.Id()), - AddIamRoles: flex.ExpandStringSet(addIams), - RemoveIamRoles: flex.ExpandStringSet(removeIams), + RemoveIamRoles: flex.ExpandStringSet(del), } - log.Printf("[DEBUG] Modifying Redshift Cluster IAM Roles: %s", d.Id()) - _, err := conn.ModifyClusterIamRoles(req) - if err != nil { - return fmt.Errorf("Error modifying Redshift Cluster IAM Roles (%s): %w", d.Id(), err) - } - } + log.Printf("[DEBUG] Modifying Redshift Cluster IAM Roles: %s", input) + _, err := conn.ModifyClusterIamRoles(input) - if requestUpdate || d.HasChange("iam_roles") { - stateConf := &resource.StateChangeConf{ - Pending: []string{"creating", "deleting", "rebooting", "resizing", "renaming", "modifying", "available, prep-for-resize"}, - Target: []string{"available"}, - Refresh: resourceClusterStateRefreshFunc(d.Id(), conn), - Timeout: d.Timeout(schema.TimeoutUpdate), - MinTimeout: 10 * time.Second, - } - _, err := stateConf.WaitForState() if err != nil { - return fmt.Errorf("Error waiting for Redshift Cluster modification (%s): %w", d.Id(), err) + return fmt.Errorf("error modifying Redshift Cluster (%s) IAM roles: %w", d.Id(), err) } - _, err = waitClusterRelocationStatusResolved(conn, d.Id()) - if err != nil { - return fmt.Errorf("error waiting for Redshift Cluster Availability Zone Relocation Status to resolve: %w", err) + if _, err := waitClusterUpdated(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for Redshift Cluster (%s) update: %w", d.Id(), err) } } // Availability Zone cannot be changed at the same time as other settings if d.HasChange("availability_zone") { - req := &redshift.ModifyClusterInput{ - ClusterIdentifier: aws.String(d.Id()), + input := &redshift.ModifyClusterInput{ AvailabilityZone: aws.String(d.Get("availability_zone").(string)), + ClusterIdentifier: aws.String(d.Id()), } - log.Printf("[DEBUG] Relocating Redshift Cluster: %s", d.Id()) - _, err := conn.ModifyCluster(req) + + log.Printf("[DEBUG] Relocating Redshift Cluster: %s", input) + _, err := conn.ModifyCluster(input) + if err != nil { - return fmt.Errorf("Error relocating Redshift Cluster (%s): %w", d.Id(), err) + return fmt.Errorf("error relocating Redshift Cluster (%s): %w", d.Id(), err) } - stateConf := &resource.StateChangeConf{ - Pending: []string{"creating", "deleting", "rebooting", "resizing", "renaming", "modifying", "available, prep-for-resize", "recovering"}, - Target: []string{"available"}, - 
Refresh: resourceClusterStateRefreshFunc(d.Id(), conn), - Timeout: d.Timeout(schema.TimeoutUpdate), - MinTimeout: 10 * time.Second, - } - _, err = stateConf.WaitForState() - if err != nil { - return fmt.Errorf("Error waiting for Redshift Cluster relocation (%s): %w", d.Id(), err) + if _, err := waitClusterUpdated(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for Redshift Cluster (%s) update: %w", d.Id(), err) } } if d.HasChange("snapshot_copy") { - if v, ok := d.GetOk("snapshot_copy"); ok { - err := enableRedshiftSnapshotCopy(d.Id(), v.([]interface{}), conn) + if v, ok := d.GetOk("snapshot_copy"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + err := enableSnapshotCopy(conn, d.Id(), v.([]interface{})[0].(map[string]interface{})) + if err != nil { return err } @@ -858,90 +814,50 @@ func resourceClusterUpdate(d *schema.ResourceData, meta interface{}) error { _, err := conn.DisableSnapshotCopy(&redshift.DisableSnapshotCopyInput{ ClusterIdentifier: aws.String(d.Id()), }) - if err != nil { - return fmt.Errorf("Failed to disable snapshot copy: %w", err) - } - } - } - if d.HasChange("logging") { - if loggingEnabled, ok := d.GetOk("logging.0.enable"); ok && loggingEnabled.(bool) { - log.Printf("[INFO] Enabling Logging for Redshift Cluster %q", d.Id()) - err := enableRedshiftClusterLogging(d, conn) if err != nil { - return err - } - } else { - log.Printf("[INFO] Disabling Logging for Redshift Cluster %q", d.Id()) - _, err := tfresource.RetryWhenAWSErrCodeEquals( - clusterInvalidClusterStateFaultTimeout, - func() (interface{}, error) { - return conn.DisableLogging(&redshift.DisableLoggingInput{ - ClusterIdentifier: aws.String(d.Id()), - }) - }, - redshift.ErrCodeInvalidClusterStateFault, - ) - - if err != nil { - return fmt.Errorf("error disabling Redshift Cluster (%s) logging: %w", d.Id(), err) + return fmt.Errorf("error disabling Redshift Cluster (%s) snapshot copy: %w", d.Id(), err) } } } - return resourceClusterRead(d, meta) -} - -func enableRedshiftClusterLogging(d *schema.ResourceData, conn *redshift.Redshift) error { - bucketNameRaw, ok := d.GetOk("logging.0.bucket_name") - - if !ok { - return fmt.Errorf("bucket_name must be set when enabling logging for Redshift Clusters") - } + if d.HasChange("logging") { + if v, ok := d.GetOk("logging"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + tfMap := v.([]interface{})[0].(map[string]interface{}) - params := &redshift.EnableLoggingInput{ - ClusterIdentifier: aws.String(d.Id()), - BucketName: aws.String(bucketNameRaw.(string)), - } + if v, ok := tfMap["enable"].(bool); ok && v { + err := enableLogging(conn, d.Id(), tfMap) - if v, ok := d.GetOk("logging.0.s3_key_prefix"); ok { - params.S3KeyPrefix = aws.String(v.(string)) - } - - _, err := tfresource.RetryWhenAWSErrCodeEquals( - clusterInvalidClusterStateFaultTimeout, - func() (interface{}, error) { - return conn.EnableLogging(params) - }, - redshift.ErrCodeInvalidClusterStateFault, - ) + if err != nil { + return err + } + } else { + _, err := tfresource.RetryWhenAWSErrCodeEquals( + clusterInvalidClusterStateFaultTimeout, + func() (interface{}, error) { + return conn.DisableLogging(&redshift.DisableLoggingInput{ + ClusterIdentifier: aws.String(d.Id()), + }) + }, + redshift.ErrCodeInvalidClusterStateFault, + ) - if err != nil { - return fmt.Errorf("error enabling Redshift Cluster (%s) logging: %w", d.Id(), err) + if err != nil { + return fmt.Errorf("error disabling Redshift Cluster (%s) logging: %w", d.Id(), 
err) + } + } + } } - return nil -} - -func enableRedshiftSnapshotCopy(id string, scList []interface{}, conn *redshift.Redshift) error { - sc := scList[0].(map[string]interface{}) + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") - input := redshift.EnableSnapshotCopyInput{ - ClusterIdentifier: aws.String(id), - DestinationRegion: aws.String(sc["destination_region"].(string)), - } - if rp, ok := sc["retention_period"]; ok { - input.RetentionPeriod = aws.Int64(int64(rp.(int))) - } - if gn, ok := sc["grant_name"]; ok { - input.SnapshotCopyGrantName = aws.String(gn.(string)) + if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return fmt.Errorf("error updating Redshift Cluster (%s) tags: %s", d.Get("arn").(string), err) + } } - _, err := conn.EnableSnapshotCopy(&input) - if err != nil { - return fmt.Errorf("Failed to enable snapshot copy: %w", err) - } - return nil + return resourceClusterRead(d, meta) } func resourceClusterDelete(d *schema.ResourceData, meta interface{}) error { @@ -978,48 +894,74 @@ func resourceClusterDelete(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("error deleting Redshift Cluster (%s): %w", d.Id(), err) } - _, err = waitClusterDeleted(conn, d.Id(), d.Timeout(schema.TimeoutDelete)) - - if err != nil { + if _, err := waitClusterDeleted(conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { return fmt.Errorf("error waiting for Redshift Cluster (%s) delete: %w", d.Id(), err) } return nil } -func resourceClusterStateRefreshFunc(id string, conn *redshift.Redshift) resource.StateRefreshFunc { - return func() (interface{}, string, error) { - log.Printf("[INFO] Reading Redshift Cluster Information: %s", id) - resp, err := conn.DescribeClusters(&redshift.DescribeClustersInput{ - ClusterIdentifier: aws.String(id), - }) +func resourceClusterImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { + // Neither skip_final_snapshot nor final_snapshot_identifier can be fetched + // from any API call, so we need to default skip_final_snapshot to true so + // that final_snapshot_identifier is not required. 
+ d.Set("skip_final_snapshot", true) - if err != nil { - if tfawserr.ErrCodeEquals(err, redshift.ErrCodeClusterNotFoundFault) { - return 42, "destroyed", nil - } - log.Printf("[WARN] Error on retrieving Redshift Cluster (%s) when waiting: %s", id, err) - return nil, "", err - } + return []*schema.ResourceData{d}, nil +} - var rsc *redshift.Cluster +func enableLogging(conn *redshift.Redshift, clusterID string, tfMap map[string]interface{}) error { + bucketName, ok := tfMap["bucket_name"].(string) - for _, c := range resp.Clusters { - if *c.ClusterIdentifier == id { - rsc = c - } - } + if !ok || bucketName == "" { + return fmt.Errorf("`bucket_name` must be set when enabling logging for Redshift Clusters") + } - if rsc == nil { - return 42, "destroyed", nil - } + input := &redshift.EnableLoggingInput{ + BucketName: aws.String(bucketName), + ClusterIdentifier: aws.String(clusterID), + } - if rsc.ClusterStatus != nil { - log.Printf("[DEBUG] Redshift Cluster status (%s): %s", id, *rsc.ClusterStatus) - } + if v, ok := tfMap["s3_key_prefix"].(string); ok && v != "" { + input.S3KeyPrefix = aws.String(v) + } - return rsc, *rsc.ClusterStatus, nil + _, err := tfresource.RetryWhenAWSErrCodeEquals( + clusterInvalidClusterStateFaultTimeout, + func() (interface{}, error) { + return conn.EnableLogging(input) + }, + redshift.ErrCodeInvalidClusterStateFault, + ) + + if err != nil { + return fmt.Errorf("error enabling Redshift Cluster (%s) logging: %w", clusterID, err) } + + return nil +} + +func enableSnapshotCopy(conn *redshift.Redshift, clusterID string, tfMap map[string]interface{}) error { + input := &redshift.EnableSnapshotCopyInput{ + ClusterIdentifier: aws.String(clusterID), + DestinationRegion: aws.String(tfMap["destination_region"].(string)), + } + + if v, ok := tfMap["retention_period"]; ok { + input.RetentionPeriod = aws.Int64(int64(v.(int))) + } + + if v, ok := tfMap["grant_name"]; ok { + input.SnapshotCopyGrantName = aws.String(v.(string)) + } + + _, err := conn.EnableSnapshotCopy(input) + + if err != nil { + return fmt.Errorf("error enabling Redshift Cluster (%s) snapshot copy: %w", clusterID, err) + } + + return nil } func flattenRedshiftClusterNode(apiObject *redshift.ClusterNode) map[string]interface{} { diff --git a/internal/service/redshift/cluster_snapshot_test.go b/internal/service/redshift/cluster_snapshot_test.go deleted file mode 100644 index 86c1bb2273c7..000000000000 --- a/internal/service/redshift/cluster_snapshot_test.go +++ /dev/null @@ -1 +0,0 @@ -package redshift_test diff --git a/internal/service/redshift/cluster_test.go b/internal/service/redshift/cluster_test.go index 1586e9752461..2b13c5868604 100644 --- a/internal/service/redshift/cluster_test.go +++ b/internal/service/redshift/cluster_test.go @@ -547,7 +547,7 @@ func TestAccRedshiftCluster_changeAvailabilityZone_availabilityZoneRelocationNot }, { Config: testAccClusterConfig_updateAvailabilityZone_availabilityZoneRelocationNotSet(rName, 1), - ExpectError: regexp.MustCompile(`cannot change availability_zone if availability_zone_relocation_enabled is not true`), + ExpectError: regexp.MustCompile("cannot change `availability_zone` if `availability_zone_relocation_enabled` is not true"), }, }, }) @@ -663,7 +663,55 @@ func TestAccRedshiftCluster_availabilityZoneRelocation_publiclyAccessible(t *tes Steps: []resource.TestStep{ { Config: testAccClusterConfig_availabilityZoneRelocation_publiclyAccessible(rName), - ExpectError: regexp.MustCompile(`availability_zone_relocation_enabled can not be true when publicly_accessible is 
true`), + ExpectError: regexp.MustCompile("`availability_zone_relocation_enabled` cannot be true when `publicly_accessible` is true"), + }, + }, + }) +} + +func TestAccRedshiftCluster_restoreFromSnapshot(t *testing.T) { + var v redshift.Cluster + resourceName := "aws_redshift_cluster.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, redshift.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckDestroyClusterSnapshot(rName), + Steps: []resource.TestStep{ + { + Config: testAccClusterCreateSnapshotConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckClusterExists(resourceName, &v), + resource.TestCheckResourceAttrPair(resourceName, "availability_zone", "data.aws_availability_zones.available", "names.0"), + resource.TestCheckResourceAttr(resourceName, "node_type", "dc2.8xlarge"), + resource.TestCheckResourceAttr(resourceName, "number_of_nodes", "2"), + ), + }, + // Apply a configuration without the source cluster to ensure final snapshot creation. + { + Config: acctest.ConfigAvailableAZsNoOptInExclude("usw2-az2"), + }, + { + Config: testAccClusterRestoreFromSnapshotConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckClusterExists(resourceName, &v), + resource.TestCheckResourceAttrPair(resourceName, "availability_zone", "data.aws_availability_zones.available", "names.1"), + resource.TestCheckResourceAttr(resourceName, "node_type", "dc2.large"), + resource.TestCheckResourceAttr(resourceName, "number_of_nodes", "8"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "final_snapshot_identifier", + "master_password", + "skip_final_snapshot", + "snapshot_identifier", + }, }, }, }) @@ -1488,3 +1536,34 @@ resource "aws_redshift_cluster" "test" { } `, rName)) } + +func testAccClusterCreateSnapshotConfig(rName string) string { + return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptInExclude("usw2-az2"), fmt.Sprintf(` +resource "aws_redshift_cluster" "test" { + cluster_identifier = %[1]q + availability_zone = data.aws_availability_zones.available.names[0] + database_name = "mydb" + master_username = "foo_test" + master_password = "Mustbe8characters" + node_type = "dc2.8xlarge" + number_of_nodes = 2 + final_snapshot_identifier = %[1]q +} +`, rName)) +} + +func testAccClusterRestoreFromSnapshotConfig(rName string) string { + return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptInExclude("usw2-az2"), fmt.Sprintf(` +resource "aws_redshift_cluster" "test" { + cluster_identifier = %[1]q + snapshot_identifier = %[1]q + availability_zone = data.aws_availability_zones.available.names[1] + database_name = "mydb" + master_username = "foo_test" + master_password = "Mustbe8characters" + node_type = "dc2.large" + number_of_nodes = 8 + skip_final_snapshot = true +} +`, rName)) +} diff --git a/internal/service/redshift/enum.go b/internal/service/redshift/enum.go index 4b49fa8c3d18..6003b755f215 100644 --- a/internal/service/redshift/enum.go +++ b/internal/service/redshift/enum.go @@ -1,11 +1,21 @@ package redshift +//nolint:deadcode,varcheck // These constants are missing from the AWS SDK +const ( + clusterAvailabilityStatusAvailable = "Available" + clusterAvailabilityStatusFailed = "Failed" + clusterAvailabilityStatusMaintenance = "Maintenance" + clusterAvailabilityStatusModifying = "Modifying" + 
clusterAvailabilityStatusUnavailable = "Unavailable" +) + // https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#rs-mgmt-cluster-status. //nolint:deadcode,varcheck // These constants are missing from the AWS SDK const ( clusterStatusAvailable = "available" clusterStatusAvailablePrepForResize = "available, prep-for-resize" clusterStatusAvailableResizeCleanup = "available, resize-cleanup" + clusterStatusBackingUp = "backing-up" clusterStatusCancellingResize = "cancelling-resize" clusterStatusCreating = "creating" clusterStatusDeleting = "deleting" @@ -18,8 +28,10 @@ const ( clusterStatusModifying = "modifying" clusterStatusPaused = "paused" clusterStatusRebooting = "rebooting" + clusterStatusRecovering = "recovering" clusterStatusRenaming = "renaming" clusterStatusResizing = "resizing" + clusterStatusRestoring = "restoring" clusterStatusRotatingKeys = "rotating-keys" clusterStatusStorageFull = "storage-full" clusterStatusUpdatingHSM = "updating-hsm" diff --git a/internal/service/redshift/parameter_group.go b/internal/service/redshift/parameter_group.go index 87333adcaf61..794245453f82 100644 --- a/internal/service/redshift/parameter_group.go +++ b/internal/service/redshift/parameter_group.go @@ -64,7 +64,6 @@ func ResourceParameterGroup() *schema.Resource { "parameter": { Type: schema.TypeSet, Optional: true, - ForceNew: false, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "name": { diff --git a/internal/service/redshift/status.go b/internal/service/redshift/status.go index 63898f2afa69..0d58add3948c 100644 --- a/internal/service/redshift/status.go +++ b/internal/service/redshift/status.go @@ -7,7 +7,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) -func statusCluster(conn *redshift.Redshift, id string) resource.StateRefreshFunc { +func statusClusterAvailability(conn *redshift.Redshift, id string) resource.StateRefreshFunc { return func() (interface{}, string, error) { output, err := FindClusterByID(conn, id) @@ -19,7 +19,7 @@ func statusCluster(conn *redshift.Redshift, id string) resource.StateRefreshFunc return nil, "", err } - return output, aws.StringValue(output.ClusterStatus), nil + return output, aws.StringValue(output.ClusterAvailabilityStatus), nil } } diff --git a/internal/service/redshift/wait.go b/internal/service/redshift/wait.go index 440a46d4c380..698b7ee34d0f 100644 --- a/internal/service/redshift/wait.go +++ b/internal/service/redshift/wait.go @@ -1,10 +1,13 @@ package redshift import ( + "errors" "time" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/redshift" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) const ( @@ -13,25 +16,58 @@ const ( clusterRelocationStatusResolvedTimeout = 1 * time.Minute ) +func waitClusterCreated(conn *redshift.Redshift, id string, timeout time.Duration) (*redshift.Cluster, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{clusterAvailabilityStatusModifying, clusterAvailabilityStatusUnavailable}, + Target: []string{clusterAvailabilityStatusAvailable}, + Refresh: statusClusterAvailability(conn, id), + Timeout: timeout, + MinTimeout: 10 * time.Second, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*redshift.Cluster); ok { + tfresource.SetLastError(err, errors.New(aws.StringValue(output.ClusterStatus))) + + return output, err + } + + return nil, err +} + func waitClusterDeleted(conn *redshift.Redshift, id string, 
timeout time.Duration) (*redshift.Cluster, error) { stateConf := &resource.StateChangeConf{ - Pending: []string{ - clusterStatusAvailable, - clusterStatusCreating, - clusterStatusDeleting, - clusterStatusFinalSnapshot, - clusterStatusRebooting, - clusterStatusRenaming, - clusterStatusResizing, - }, + Pending: []string{clusterAvailabilityStatusModifying}, Target: []string{}, - Refresh: statusCluster(conn, id), + Refresh: statusClusterAvailability(conn, id), + Timeout: timeout, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*redshift.Cluster); ok { + tfresource.SetLastError(err, errors.New(aws.StringValue(output.ClusterStatus))) + + return output, err + } + + return nil, err +} + +func waitClusterUpdated(conn *redshift.Redshift, id string, timeout time.Duration) (*redshift.Cluster, error) { //nolint:unparam + stateConf := &resource.StateChangeConf{ + Pending: []string{clusterAvailabilityStatusMaintenance, clusterAvailabilityStatusModifying, clusterAvailabilityStatusUnavailable}, + Target: []string{clusterAvailabilityStatusAvailable}, + Refresh: statusClusterAvailability(conn, id), Timeout: timeout, } outputRaw, err := stateConf.WaitForState() if output, ok := outputRaw.(*redshift.Cluster); ok { + tfresource.SetLastError(err, errors.New(aws.StringValue(output.ClusterStatus))) + return output, err } diff --git a/internal/service/s3/bucket.go b/internal/service/s3/bucket.go index 6730d0b37a9f..a6038389e360 100644 --- a/internal/service/s3/bucket.go +++ b/internal/service/s3/bucket.go @@ -8,7 +8,6 @@ import ( "log" "net/http" "net/url" - "regexp" "strings" "time" @@ -20,11 +19,12 @@ import ( "github.com/aws/aws-sdk-go/service/s3" "github.com/aws/aws-sdk-go/service/s3/s3manager" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/structure" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" - "github.com/hashicorp/terraform-provider-aws/internal/create" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" @@ -33,10 +33,11 @@ import ( func ResourceBucket() *schema.Resource { return &schema.Resource{ - Create: resourceBucketCreate, - Read: resourceBucketRead, - Update: resourceBucketUpdate, - Delete: resourceBucketDelete, + Create: resourceBucketCreate, + Read: resourceBucketRead, + Update: resourceBucketUpdate, + DeleteWithoutTimeout: resourceBucketDelete, + Importer: &schema.ResourceImporter{ State: schema.ImportStatePassthrough, }, @@ -75,83 +76,92 @@ func ResourceBucket() *schema.Resource { }, "acl": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_acl resource instead", + Type: schema.TypeString, + Optional: true, + Computed: true, + ConflictsWith: []string{"grant"}, + ValidateFunc: validation.StringInSlice(BucketCannedACL_Values(), false), + Deprecated: "Use the aws_s3_bucket_acl resource instead", }, "grant": { - Type: schema.TypeSet, - Computed: true, - Deprecated: "Use the aws_s3_bucket_acl resource instead", + Type: schema.TypeSet, + Optional: true, + Computed: true, + ConflictsWith: []string{"acl"}, + Deprecated: "Use the aws_s3_bucket_acl resource instead", Elem: 
&schema.Resource{ Schema: map[string]*schema.Schema{ "id": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_acl resource instead", + Type: schema.TypeString, + Optional: true, }, "type": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_acl resource instead", + Type: schema.TypeString, + Required: true, + // TypeAmazonCustomerByEmail is not currently supported + ValidateFunc: validation.StringInSlice([]string{ + s3.TypeCanonicalUser, + s3.TypeGroup, + }, false), }, "uri": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_acl resource instead", + Type: schema.TypeString, + Optional: true, }, "permissions": { - Type: schema.TypeSet, - Computed: true, - Deprecated: "Use the aws_s3_bucket_acl resource instead", - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeSet, + Required: true, + Set: schema.HashString, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(s3.Permission_Values(), false), + }, }, }, }, }, "policy": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_policy resource instead", + Type: schema.TypeString, + Optional: true, + Computed: true, + Deprecated: "Use the aws_s3_bucket_policy resource instead", + ValidateFunc: validation.StringIsJSON, + DiffSuppressFunc: verify.SuppressEquivalentPolicyDiffs, }, "cors_rule": { Type: schema.TypeList, + Optional: true, Computed: true, Deprecated: "Use the aws_s3_bucket_cors_configuration resource instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "allowed_headers": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_cors_configuration resource instead", - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, }, "allowed_methods": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_cors_configuration resource instead", - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, }, "allowed_origins": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_cors_configuration resource instead", - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, }, "expose_headers": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_cors_configuration resource instead", - Elem: &schema.Schema{Type: schema.TypeString}, + Type: schema.TypeList, + Optional: true, + Elem: &schema.Schema{Type: schema.TypeString}, }, "max_age_seconds": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_cors_configuration resource instead", + Type: schema.TypeInt, + Optional: true, }, }, }, @@ -159,32 +169,47 @@ func ResourceBucket() *schema.Resource { "website": { Type: schema.TypeList, + Optional: true, Computed: true, - Deprecated: "Use the aws_s3_bucket_website_configuration resource", + MaxItems: 1, + Deprecated: "Use the aws_s3_bucket_website_configuration resource instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "index_document": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_website_configuration resource", + Type: schema.TypeString, + Optional: true, + ExactlyOneOf: []string{ + "website.0.index_document", + 
"website.0.redirect_all_requests_to", + }, }, "error_document": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_website_configuration resource", + Type: schema.TypeString, + Optional: true, }, "redirect_all_requests_to": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_website_configuration resource", + Type: schema.TypeString, + ExactlyOneOf: []string{ + "website.0.index_document", + "website.0.redirect_all_requests_to", + }, + ConflictsWith: []string{ + "website.0.error_document", + "website.0.routing_rules", + }, + Optional: true, }, "routing_rules": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_website_configuration resource", + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringIsJSON, + StateFunc: func(v interface{}) string { + json, _ := structure.NormalizeJsonString(v) + return json + }, }, }, }, @@ -213,39 +238,41 @@ func ResourceBucket() *schema.Resource { "versioning": { Type: schema.TypeList, + Optional: true, Computed: true, + MaxItems: 1, Deprecated: "Use the aws_s3_bucket_versioning resource instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enabled": { - Type: schema.TypeBool, - Computed: true, - Deprecated: "Use the aws_s3_bucket_versioning resource instead", + Type: schema.TypeBool, + Optional: true, + Default: false, }, "mfa_delete": { - Type: schema.TypeBool, - Computed: true, - Deprecated: "Use the aws_s3_bucket_versioning resource instead", + Type: schema.TypeBool, + Optional: true, + Default: false, }, }, }, }, "logging": { - Type: schema.TypeSet, + Type: schema.TypeList, + Optional: true, Computed: true, + MaxItems: 1, Deprecated: "Use the aws_s3_bucket_logging resource instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "target_bucket": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_logging resource instead", + Type: schema.TypeString, + Required: true, }, "target_prefix": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_logging resource instead", + Type: schema.TypeString, + Optional: true, }, }, }, @@ -253,108 +280,104 @@ func ResourceBucket() *schema.Resource { "lifecycle_rule": { Type: schema.TypeList, + Optional: true, Computed: true, Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "id": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringLenBetween(0, 255), }, "prefix": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeString, + Optional: true, }, - "tags": tftags.TagsSchemaComputedDeprecated("Use the aws_s3_bucket_lifecycle_configuration resource instead"), + "tags": tftags.TagsSchema(), "enabled": { - Type: schema.TypeBool, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeBool, + Required: true, }, "abort_incomplete_multipart_upload_days": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeInt, + Optional: true, }, "expiration": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the 
aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeList, + Optional: true, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "date": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeString, + Optional: true, + ValidateFunc: validBucketLifecycleTimestamp, }, "days": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(0), }, "expired_object_delete_marker": { - Type: schema.TypeBool, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeBool, + Optional: true, }, }, }, }, "noncurrent_version_expiration": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeList, + MaxItems: 1, + Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "days": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(1), }, }, }, }, "transition": { - Type: schema.TypeSet, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeSet, + Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "date": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeString, + Optional: true, + ValidateFunc: validBucketLifecycleTimestamp, }, "days": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(0), }, "storage_class": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(s3.TransitionStorageClass_Values(), false), }, }, }, }, "noncurrent_version_transition": { - Type: schema.TypeSet, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeSet, + Optional: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "days": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(0), }, "storage_class": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_lifecycle_configuration resource instead", + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(s3.TransitionStorageClass_Values(), false), }, }, }, @@ -370,113 +393,122 @@ func ResourceBucket() *schema.Resource { }, "acceleration_status": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_accelerate_configuration resource instead", + Type: schema.TypeString, + Optional: true, + Computed: true, + Deprecated: "Use the aws_s3_bucket_accelerate_configuration resource instead", + ValidateFunc: validation.StringInSlice(s3.BucketAccelerateStatus_Values(), false), }, "request_payer": { - Type: 
schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_request_payment_configuration resource instead", + Type: schema.TypeString, + Optional: true, + Computed: true, + Deprecated: "Use the aws_s3_bucket_request_payment_configuration resource instead", + ValidateFunc: validation.StringInSlice(s3.Payer_Values(), false), }, "replication_configuration": { Type: schema.TypeList, + Optional: true, Computed: true, + MaxItems: 1, Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "role": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Required: true, }, "rules": { - Type: schema.TypeSet, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeSet, + Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "id": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 255), }, "destination": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeList, + MaxItems: 1, + MinItems: 1, + Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "account_id": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidAccountID, }, "bucket": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Required: true, + ValidateFunc: verify.ValidARN, }, "storage_class": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(s3.StorageClass_Values(), false), }, "replica_kms_key_id": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Optional: true, }, "access_control_translation": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "owner": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(s3.OwnerOverride_Values(), false), }, }, }, }, "replication_time": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeList, + Optional: true, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "minutes": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeInt, + Optional: true, + Default: 15, + ValidateFunc: validation.IntBetween(15, 15), }, "status": { - Type: schema.TypeString, - Computed: 
true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Optional: true, + Default: s3.ReplicationTimeStatusEnabled, + ValidateFunc: validation.StringInSlice(s3.ReplicationTimeStatus_Values(), false), }, }, }, }, "metrics": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeList, + Optional: true, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "minutes": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeInt, + Optional: true, + Default: 15, + ValidateFunc: validation.IntBetween(10, 15), }, "status": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Optional: true, + Default: s3.MetricsStatusEnabled, + ValidateFunc: validation.StringInSlice(s3.MetricsStatus_Values(), false), }, }, }, @@ -485,21 +517,22 @@ func ResourceBucket() *schema.Resource { }, }, "source_selection_criteria": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "sse_kms_encrypted_objects": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "enabled": { - Type: schema.TypeBool, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeBool, + Required: true, }, }, }, @@ -508,39 +541,39 @@ func ResourceBucket() *schema.Resource { }, }, "prefix": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 1024), }, "status": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(s3.ReplicationRuleStatus_Values(), false), }, "priority": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeInt, + Optional: true, }, "filter": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeList, + Optional: true, + MinItems: 1, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "prefix": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringLenBetween(0, 1024), }, - "tags": tftags.TagsSchemaComputedDeprecated("Use the aws_s3_bucket_replication_configuration resource instead"), + "tags": tftags.TagsSchema(), }, }, }, "delete_marker_replication_status": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_replication_configuration resource instead", + Type: schema.TypeString, + Optional: true, + 
ValidateFunc: validation.StringInSlice([]string{s3.DeleteMarkerReplicationStatusEnabled}, false), }, }, }, @@ -551,39 +584,39 @@ func ResourceBucket() *schema.Resource { "server_side_encryption_configuration": { Type: schema.TypeList, + MaxItems: 1, + Optional: true, Computed: true, Deprecated: "Use the aws_s3_bucket_server_side_encryption_configuration resource instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "rule": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_server_side_encryption_configuration resource instead", + Type: schema.TypeList, + MaxItems: 1, + Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "apply_server_side_encryption_by_default": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_server_side_encryption_configuration resource instead", + Type: schema.TypeList, + MaxItems: 1, + Required: true, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "kms_master_key_id": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_server_side_encryption_configuration resource instead", + Type: schema.TypeString, + Optional: true, }, "sse_algorithm": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_server_side_encryption_configuration resource instead", + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(s3.ServerSideEncryption_Values(), false), }, }, }, }, "bucket_key_enabled": { - Type: schema.TypeBool, - Computed: true, - Deprecated: "Use the aws_s3_bucket_server_side_encryption_configuration resource instead", + Type: schema.TypeBool, + Optional: true, }, }, }, @@ -597,52 +630,56 @@ func ResourceBucket() *schema.Resource { Optional: true, Computed: true, // Can be removed when object_lock_configuration.0.object_lock_enabled is removed ForceNew: true, - ConflictsWith: []string{"object_lock_configuration.0.object_lock_enabled"}, + ConflictsWith: []string{"object_lock_configuration"}, }, "object_lock_configuration": { - Type: schema.TypeList, - Optional: true, - Computed: true, - MaxItems: 1, + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Deprecated: "Use the top-level parameter object_lock_enabled and the aws_s3_bucket_object_lock_configuration resource instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "object_lock_enabled": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(s3.ObjectLockEnabled_Values(), false), - Deprecated: "Use the top-level parameter object_lock_enabled instead", + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ConflictsWith: []string{"object_lock_enabled"}, + ValidateFunc: validation.StringInSlice(s3.ObjectLockEnabled_Values(), false), + Deprecated: "Use the top-level parameter object_lock_enabled instead", }, "rule": { Type: schema.TypeList, - Computed: true, + Optional: true, Deprecated: "Use the aws_s3_bucket_object_lock_configuration resource instead", + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "default_retention": { - Type: schema.TypeList, - Computed: true, - Deprecated: "Use the aws_s3_bucket_object_lock_configuration resource instead", + Type: schema.TypeList, + Required: true, + MinItems: 1, + MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "mode": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use the aws_s3_bucket_object_lock_configuration resource 
instead", + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(s3.ObjectLockRetentionMode_Values(), false), }, "days": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_object_lock_configuration resource instead", + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(1), }, "years": { - Type: schema.TypeInt, - Computed: true, - Deprecated: "Use the aws_s3_bucket_object_lock_configuration resource instead", + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(1), }, }, }, @@ -674,7 +711,6 @@ func resourceBucketCreate(d *schema.ResourceData, meta interface{}) error { } else { bucket = resource.UniqueId() } - d.Set("bucket", bucket) log.Printf("[DEBUG] S3 bucket create: %s", bucket) @@ -683,6 +719,16 @@ func resourceBucketCreate(d *schema.ResourceData, meta interface{}) error { ObjectLockEnabledForBucket: aws.Bool(d.Get("object_lock_enabled").(bool)), } + if acl, ok := d.GetOk("acl"); ok { + acl := acl.(string) + req.ACL = aws.String(acl) + log.Printf("[DEBUG] S3 bucket %s has canned ACL %s", bucket, acl) + } else { + // Use default value previously available in v3.x of the provider + req.ACL = aws.String(s3.BucketCannedACLPrivate) + log.Printf("[DEBUG] S3 bucket %s has default canned ACL %s", bucket, s3.BucketCannedACLPrivate) + } + awsRegion := meta.(*conns.AWSClient).Region log.Printf("[DEBUG] S3 bucket create: %s, using region: %s", bucket, awsRegion) @@ -695,7 +741,7 @@ func resourceBucketCreate(d *schema.ResourceData, meta interface{}) error { } if err := ValidBucketName(bucket, awsRegion); err != nil { - return fmt.Errorf("Error validating S3 bucket name: %s", err) + return fmt.Errorf("error validating S3 Bucket (%s) name: %w", bucket, err) } // S3 Object Lock can only be enabled on bucket creation. 
@@ -708,7 +754,7 @@ func resourceBucketCreate(d *schema.ResourceData, meta interface{}) error { _, err := conn.CreateBucket(req) if awsErr, ok := err.(awserr.Error); ok { if awsErr.Code() == ErrCodeOperationAborted { - return resource.RetryableError(fmt.Errorf("Error creating S3 bucket %s, retrying: %w", bucket, err)) + return resource.RetryableError(fmt.Errorf("error creating S3 Bucket (%s), retrying: %w", bucket, err)) } } if err != nil { @@ -721,7 +767,7 @@ func resourceBucketCreate(d *schema.ResourceData, meta interface{}) error { _, err = conn.CreateBucket(req) } if err != nil { - return fmt.Errorf("Error creating S3 bucket: %s", err) + return fmt.Errorf("error creating S3 Bucket (%s): %w", bucket, err) } // Assign the bucket name as the resource ID @@ -745,9 +791,94 @@ func resourceBucketUpdate(d *schema.ResourceData, meta interface{}) error { } } + // Note: Order of argument updates below is important + + if d.HasChange("policy") { + if err := resourceBucketInternalPolicyUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) Policy: %w", d.Id(), err) + } + } + + if d.HasChange("cors_rule") { + if err := resourceBucketInternalCorsUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) CORS Rules: %w", d.Id(), err) + } + } + + if d.HasChange("website") { + if err := resourceBucketInternalWebsiteUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) Website: %w", d.Id(), err) + } + } + + if d.HasChange("versioning") { + v := d.Get("versioning").([]interface{}) + + if d.IsNewResource() { + if versioning := expandVersioningWhenIsNewResource(v); versioning != nil { + err := resourceBucketInternalVersioningUpdate(conn, d.Id(), versioning) + if err != nil { + return fmt.Errorf("error updating (new) S3 Bucket (%s) Versioning: %w", d.Id(), err) + } + } + } else { + if err := resourceBucketInternalVersioningUpdate(conn, d.Id(), expandVersioning(v)); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) Versioning: %w", d.Id(), err) + } + } + } + + if d.HasChange("acl") && !d.IsNewResource() { + if err := resourceBucketInternalACLUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) ACL: %w", d.Id(), err) + } + } + + if d.HasChange("grant") { + if err := resourceBucketInternalGrantsUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) Grants: %w", d.Id(), err) + } + } + + if d.HasChange("logging") { + if err := resourceBucketInternalLoggingUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) Logging: %w", d.Id(), err) + } + } + + if d.HasChange("lifecycle_rule") { + if err := resourceBucketInternalLifecycleUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) Lifecycle Rules: %w", d.Id(), err) + } + } + + if d.HasChange("acceleration_status") { + if err := resourceBucketInternalAccelerationUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) Acceleration Status: %w", d.Id(), err) + } + } + + if d.HasChange("request_payer") { + if err := resourceBucketInternalRequestPayerUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) Request Payer: %w", d.Id(), err) + } + } + + if d.HasChange("replication_configuration") { + if err := resourceBucketInternalReplicationConfigurationUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) Replication configuration: %w", d.Id(), err) + } + } + + if 
d.HasChange("server_side_encryption_configuration") { + if err := resourceBucketInternalServerSideEncryptionConfigurationUpdate(conn, d); err != nil { + return fmt.Errorf("error updating S3 Bucket (%s) Server-side Encryption configuration: %w", d.Id(), err) + } + } + if d.HasChange("object_lock_configuration") { if err := resourceBucketInternalObjectLockConfigurationUpdate(conn, d); err != nil { - return err + return fmt.Errorf("error updating S3 Bucket (%s) Object Lock configuration: %w", d.Id(), err) } } @@ -801,10 +932,7 @@ func resourceBucketRead(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("error reading S3 Bucket (%s): %w", d.Id(), err) } - // In the import case, we won't have this - if _, ok := d.GetOk("bucket"); !ok { - d.Set("bucket", d.Id()) - } + d.Set("bucket", d.Id()) d.Set("bucket_domain_name", meta.(*conns.AWSClient).PartitionHostname(fmt.Sprintf("%s.s3", d.Get("bucket").(string)))) @@ -834,44 +962,33 @@ func resourceBucketRead(d *schema.ResourceData, meta interface{}) error { d.Set("policy", nil) } - // Read the Grant ACL if configured outside this resource; + // Read the Grant ACL. // In the event grants are not configured on the bucket, the API returns an empty array - - // Reset `grant` if `acl` (canned ACL) is set. - if acl, ok := d.GetOk("acl"); ok && acl.(string) != s3.BucketCannedACLPrivate { - if err := d.Set("grant", nil); err != nil { - return fmt.Errorf("error resetting grant %w", err) - } - } else { - // Set the ACL to its default i.e. "private" (to mimic pre-v4.0 schema) - d.Set("acl", s3.BucketCannedACLPrivate) - - apResponse, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { - return conn.GetBucketAcl(&s3.GetBucketAclInput{ - Bucket: aws.String(d.Id()), - }) + apResponse, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.GetBucketAcl(&s3.GetBucketAclInput{ + Bucket: aws.String(d.Id()), }) + }) - // The S3 API method calls above can occasionally return no error (i.e. NoSuchBucket) - // after a bucket has been deleted (eventual consistency woes :/), thus, when making extra S3 API calls - // such as GetBucketAcl, the error should be caught for non-new buckets as follows. - if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - log.Printf("[WARN] S3 Bucket (%s) not found, removing from state", d.Id()) - d.SetId("") - return nil - } + // The S3 API method calls above can occasionally return no error (i.e. NoSuchBucket) + // after a bucket has been deleted (eventual consistency woes :/), thus, when making extra S3 API calls + // such as GetBucketAcl, the error should be caught for non-new buckets as follows. 
+ if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + log.Printf("[WARN] S3 Bucket (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } - if err != nil { - return fmt.Errorf("error getting S3 Bucket (%s) ACL: %w", d.Id(), err) - } + if err != nil { + return fmt.Errorf("error getting S3 Bucket (%s) ACL: %w", d.Id(), err) + } - if aclOutput, ok := apResponse.(*s3.GetBucketAclOutput); ok { - if err := d.Set("grant", flattenGrants(aclOutput)); err != nil { - return fmt.Errorf("error setting grant %s", err) - } - } else { - d.Set("grant", nil) + if aclOutput, ok := apResponse.(*s3.GetBucketAclOutput); ok { + if err := d.Set("grant", flattenGrants(aclOutput)); err != nil { + return fmt.Errorf("error setting grant %s", err) } + } else { + d.Set("grant", nil) } // Read the CORS @@ -985,7 +1102,7 @@ func resourceBucketRead(d *schema.ResourceData, meta interface{}) error { // Amazon S3 Transfer Acceleration might not be supported in the region if err != nil && !tfawserr.ErrCodeEquals(err, ErrCodeMethodNotAllowed, ErrCodeUnsupportedArgument, ErrCodeNotImplemented) { - return fmt.Errorf("error getting S3 Bucket acceleration configuration: %w", err) + return fmt.Errorf("error getting S3 Bucket (%s) accelerate configuration: %w", d.Id(), err) } if accelerate, ok := accelerateResponse.(*s3.GetBucketAccelerateConfigurationOutput); ok { @@ -1289,11 +1406,11 @@ func resourceBucketRead(d *schema.ResourceData, meta interface{}) error { return nil } -func resourceBucketDelete(d *schema.ResourceData, meta interface{}) error { +func resourceBucketDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { conn := meta.(*conns.AWSClient).S3Conn - log.Printf("[DEBUG] S3 Delete Bucket: %s", d.Id()) - _, err := conn.DeleteBucket(&s3.DeleteBucketInput{ + log.Printf("[DEBUG] Deleting S3 Bucket: %s", d.Id()) + _, err := conn.DeleteBucketWithContext(ctx, &s3.DeleteBucketInput{ Bucket: aws.String(d.Id()), }) @@ -1301,7 +1418,7 @@ func resourceBucketDelete(d *schema.ResourceData, meta interface{}) error { return nil } - if tfawserr.ErrCodeEquals(err, "BucketNotEmpty") { + if tfawserr.ErrCodeEquals(err, ErrCodeBucketNotEmpty) { if d.Get("force_destroy").(bool) { // Use a S3 service client that can handle multiple slashes in URIs. // While aws_s3_object resources cannot create these object @@ -1309,7 +1426,7 @@ func resourceBucketDelete(d *schema.ResourceData, meta interface{}) error { conn = meta.(*conns.AWSClient).S3ConnURICleaningDisabled // bucket may have things delete them - log.Printf("[DEBUG] S3 Bucket attempting to forceDestroy %+v", err) + log.Printf("[DEBUG] S3 Bucket attempting to forceDestroy %s", err) // Delete everything including locked objects. // Don't ignore any object errors or we could recurse infinitely. 
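The delete path is now context-aware (`DeleteBucketWithContext` wired up via `DeleteWithoutTimeout` and returning `diag.Diagnostics`), and when `force_destroy` is set it reacts to `BucketNotEmpty` by emptying the bucket and retrying through the recursive call shown in the hunk that follows. A minimal sketch of that control flow alone, with hypothetical `del` and `empty` callbacks standing in for the real S3 calls:

```go
package main

import (
	"errors"
	"fmt"
)

// errBucketNotEmpty stands in for the S3 "BucketNotEmpty" error code.
var errBucketNotEmpty = errors.New("BucketNotEmpty")

// deleteWithForceDestroy sketches resourceBucketDelete's retry loop: attempt
// the delete, and on BucketNotEmpty with force_destroy enabled, empty the
// bucket and recurse until the delete succeeds or another error surfaces.
func deleteWithForceDestroy(del func() error, empty func() (int, error), forceDestroy bool) error {
	err := del()
	if errors.Is(err, errBucketNotEmpty) && forceDestroy {
		n, emptyErr := empty()
		if emptyErr != nil {
			return fmt.Errorf("emptying bucket: %w", emptyErr)
		}
		fmt.Printf("deleted %d objects\n", n)
		return deleteWithForceDestroy(del, empty, forceDestroy)
	}
	return err
}

func main() {
	objects := 3
	del := func() error {
		if objects > 0 {
			return errBucketNotEmpty
		}
		return nil
	}
	empty := func() (int, error) { n := objects; objects = 0; return n, nil }
	fmt.Println(deleteWithForceDestroy(del, empty, true)) // <nil>
}
```

As in the provider code, the recursion only terminates once the bucket is actually empty or a different error is returned, which is why object deletion errors must not be ignored.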
@@ -1318,54 +1435,25 @@ func resourceBucketDelete(d *schema.ResourceData, meta interface{}) error { if objectLockConfiguration != nil { objectLockEnabled = aws.StringValue(objectLockConfiguration.ObjectLockEnabled) == s3.ObjectLockEnabledEnabled } - err = DeleteAllObjectVersions(conn, d.Id(), "", objectLockEnabled, false) - if err != nil { - return fmt.Errorf("error S3 Bucket force_destroy: %s", err) + if n, err := EmptyBucket(ctx, conn, d.Id(), objectLockEnabled); err != nil { + return diag.Errorf("emptying S3 Bucket (%s): %s", d.Id(), err) + } else { + log.Printf("[DEBUG] Deleted %d S3 objects", n) } // this line recurses until all objects are deleted or an error is returned - return resourceBucketDelete(d, meta) + return resourceBucketDelete(ctx, d, meta) } } if err != nil { - return fmt.Errorf("error deleting S3 Bucket (%s): %s", d.Id(), err) + return diag.Errorf("deleting S3 Bucket (%s): %s", d.Id(), err) } return nil } -func websiteEndpoint(client *conns.AWSClient, d *schema.ResourceData) (*S3Website, error) { - // If the bucket doesn't have a website configuration, return an empty - // endpoint - if _, ok := d.GetOk("website"); !ok { - return nil, nil - } - - bucket := d.Get("bucket").(string) - - // Lookup the region for this bucket - - locationResponse, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { - return client.S3Conn.GetBucketLocation( - &s3.GetBucketLocationInput{ - Bucket: aws.String(bucket), - }, - ) - }) - if err != nil { - return nil, err - } - location := locationResponse.(*s3.GetBucketLocationOutput) - var region string - if location.LocationConstraint != nil { - region = aws.StringValue(location.LocationConstraint) - } - - return WebsiteEndpoint(client, bucket, region), nil -} - // https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region func BucketRegionalDomainName(bucket string, region string) (string, error) { // Return a default AWS Commercial domain name if no region is provided @@ -1380,6 +1468,10 @@ func BucketRegionalDomainName(bucket string, region string) (string, error) { return fmt.Sprintf("%s.%s", bucket, strings.TrimPrefix(endpoint.URL, "https://")), nil } +type S3Website struct { + Endpoint, Domain string +} + func WebsiteEndpoint(client *conns.AWSClient, bucket string, region string) *S3Website { domain := WebsiteDomainUrl(client, region) return &S3Website{Endpoint: fmt.Sprintf("%s.%s", bucket, domain), Domain: domain} @@ -1397,6 +1489,36 @@ func WebsiteDomainUrl(client *conns.AWSClient, region string) string { return client.RegionalHostname("s3-website") } +func websiteEndpoint(client *conns.AWSClient, d *schema.ResourceData) (*S3Website, error) { + // If the bucket doesn't have a website configuration, return an empty + // endpoint + if _, ok := d.GetOk("website"); !ok { + return nil, nil + } + + bucket := d.Get("bucket").(string) + + // Lookup the region for this bucket + + locationResponse, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return client.S3Conn.GetBucketLocation( + &s3.GetBucketLocationInput{ + Bucket: aws.String(bucket), + }, + ) + }) + if err != nil { + return nil, err + } + location := locationResponse.(*s3.GetBucketLocationOutput) + var region string + if location.LocationConstraint != nil { + region = aws.StringValue(location.LocationConstraint) + } + + return WebsiteEndpoint(client, bucket, region), nil +} + func isOldRegion(region string) bool { oldRegions := []string{ endpoints.ApNortheast1RegionID, @@ -1417,71 +1539,765 @@ func 
isOldRegion(region string) bool { return false } -func resourceBucketInternalObjectLockConfigurationUpdate(conn *s3.S3, d *schema.ResourceData) error { - // S3 Object Lock configuration cannot be deleted, only updated. - req := &s3.PutObjectLockConfigurationInput{ - Bucket: aws.String(d.Get("bucket").(string)), - ObjectLockConfiguration: expandS3ObjectLockConfiguration(d.Get("object_lock_configuration").([]interface{})), +func normalizeRegion(region string) string { + // Default to us-east-1 if the bucket doesn't have a region: + // http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html + if region == "" { + region = endpoints.UsEast1RegionID + } + + return region +} + +////////////////////////////////////////// Argument-Specific Update Functions ////////////////////////////////////////// + +func resourceBucketInternalAccelerationUpdate(conn *s3.S3, d *schema.ResourceData) error { + input := &s3.PutBucketAccelerateConfigurationInput{ + Bucket: aws.String(d.Id()), + AccelerateConfiguration: &s3.AccelerateConfiguration{ + Status: aws.String(d.Get("acceleration_status").(string)), + }, } _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { - return conn.PutObjectLockConfiguration(req) + return conn.PutBucketAccelerateConfiguration(input) }) - if err != nil { - return fmt.Errorf("error putting S3 object lock configuration: %s", err) - } - return nil + return err } -func flattenBucketLifecycleRuleExpiration(expiration *s3.LifecycleExpiration) []interface{} { - if expiration == nil { - return []interface{}{} +func resourceBucketInternalACLUpdate(conn *s3.S3, d *schema.ResourceData) error { + acl := d.Get("acl").(string) + if acl == "" { + // Use default value previously available in v3.x of the provider + acl = s3.BucketCannedACLPrivate } - m := make(map[string]interface{}) - - if expiration.Date != nil { - m["date"] = (aws.TimeValue(expiration.Date)).Format("2006-01-02") - } - if expiration.Days != nil { - m["days"] = int(aws.Int64Value(expiration.Days)) - } - if expiration.ExpiredObjectDeleteMarker != nil { - m["expired_object_delete_marker"] = aws.BoolValue(expiration.ExpiredObjectDeleteMarker) + input := &s3.PutBucketAclInput{ + Bucket: aws.String(d.Id()), + ACL: aws.String(acl), } - return []interface{}{m} + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.PutBucketAcl(input) + }) + + return err } -func flattenBucketLifecycleRules(lifecycleRules []*s3.LifecycleRule) []interface{} { - if len(lifecycleRules) == 0 { - return []interface{}{} - } +func resourceBucketInternalCorsUpdate(conn *s3.S3, d *schema.ResourceData) error { + rawCors := d.Get("cors_rule").([]interface{}) - var results []interface{} + if len(rawCors) == 0 { + // Delete CORS + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.DeleteBucketCors(&s3.DeleteBucketCorsInput{ + Bucket: aws.String(d.Id()), + }) + }) - for _, lifecycleRule := range lifecycleRules { - if lifecycleRule == nil { - continue + if err != nil { + return fmt.Errorf("error deleting S3 Bucket (%s) CORS: %w", d.Id(), err) } - rule := make(map[string]interface{}) - - // AbortIncompleteMultipartUploadDays - if lifecycleRule.AbortIncompleteMultipartUpload != nil { - if lifecycleRule.AbortIncompleteMultipartUpload.DaysAfterInitiation != nil { - rule["abort_incomplete_multipart_upload_days"] = int(aws.Int64Value(lifecycleRule.AbortIncompleteMultipartUpload.DaysAfterInitiation)) + return nil + } + // Put CORS + rules 
:= make([]*s3.CORSRule, 0, len(rawCors)) + for _, cors := range rawCors { + // Prevent panic + // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/7546 + corsMap, ok := cors.(map[string]interface{}) + if !ok { + continue + } + r := &s3.CORSRule{} + for k, v := range corsMap { + if k == "max_age_seconds" { + r.MaxAgeSeconds = aws.Int64(int64(v.(int))) + } else { + vMap := make([]*string, len(v.([]interface{}))) + for i, vv := range v.([]interface{}) { + if str, ok := vv.(string); ok { + vMap[i] = aws.String(str) + } + } + switch k { + case "allowed_headers": + r.AllowedHeaders = vMap + case "allowed_methods": + r.AllowedMethods = vMap + case "allowed_origins": + r.AllowedOrigins = vMap + case "expose_headers": + r.ExposeHeaders = vMap + } + } + } + rules = append(rules, r) + } + + input := &s3.PutBucketCorsInput{ + Bucket: aws.String(d.Id()), + CORSConfiguration: &s3.CORSConfiguration{ + CORSRules: rules, + }, + } + + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.PutBucketCors(input) + }) + + return err +} + +func resourceBucketInternalGrantsUpdate(conn *s3.S3, d *schema.ResourceData) error { + grants := d.Get("grant").(*schema.Set) + + if grants.Len() == 0 { + log.Printf("[DEBUG] S3 Bucket (%s): no grants configured, falling back to canned ACL", d.Id()) + + if err := resourceBucketInternalACLUpdate(conn, d); err != nil { + return fmt.Errorf("error falling back to canned ACL: %w", err) + } + return nil + } + + resp, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.GetBucketAcl(&s3.GetBucketAclInput{ + Bucket: aws.String(d.Id()), + }) + }) + + if err != nil { + return fmt.Errorf("error getting S3 Bucket (%s) ACL: %w", d.Id(), err) + } + + output := resp.(*s3.GetBucketAclOutput) + + if output == nil { + return fmt.Errorf("error getting S3 Bucket (%s) ACL: empty output", d.Id()) + } + + input := &s3.PutBucketAclInput{ + Bucket: aws.String(d.Id()), + AccessControlPolicy: &s3.AccessControlPolicy{ + Grants: expandGrants(grants.List()), + Owner: output.Owner, + }, + } + + _, err = verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.PutBucketAcl(input) + }) + + return err +} + +func resourceBucketInternalLifecycleUpdate(conn *s3.S3, d *schema.ResourceData) error { + lifecycleRules := d.Get("lifecycle_rule").([]interface{}) + + if len(lifecycleRules) == 0 || lifecycleRules[0] == nil { + input := &s3.DeleteBucketLifecycleInput{ + Bucket: aws.String(d.Id()), + } + + _, err := conn.DeleteBucketLifecycle(input) + + if err != nil { + return fmt.Errorf("error removing S3 Bucket (%s) lifecycle: %w", d.Id(), err) + } + + return nil + } + + rules := make([]*s3.LifecycleRule, 0, len(lifecycleRules)) + + for i, lifecycleRule := range lifecycleRules { + r := lifecycleRule.(map[string]interface{}) + + rule := &s3.LifecycleRule{} + + // Filter + tags := Tags(tftags.New(r["tags"]).IgnoreAWS()) + filter := &s3.LifecycleRuleFilter{} + if len(tags) > 0 { + lifecycleRuleAndOp := &s3.LifecycleRuleAndOperator{} + lifecycleRuleAndOp.SetPrefix(r["prefix"].(string)) + lifecycleRuleAndOp.SetTags(tags) + filter.SetAnd(lifecycleRuleAndOp) + } else { + filter.SetPrefix(r["prefix"].(string)) + } + rule.SetFilter(filter) + + // ID + if val, ok := r["id"].(string); ok && val != "" { + rule.ID = aws.String(val) + } else { + rule.ID = 
aws.String(resource.PrefixedUniqueId("tf-s3-lifecycle-")) + } + + // Enabled + if val, ok := r["enabled"].(bool); ok && val { + rule.Status = aws.String(s3.ExpirationStatusEnabled) + } else { + rule.Status = aws.String(s3.ExpirationStatusDisabled) + } + + // AbortIncompleteMultipartUpload + if val, ok := r["abort_incomplete_multipart_upload_days"].(int); ok && val > 0 { + rule.AbortIncompleteMultipartUpload = &s3.AbortIncompleteMultipartUpload{ + DaysAfterInitiation: aws.Int64(int64(val)), + } + } + + // Expiration + expiration := d.Get(fmt.Sprintf("lifecycle_rule.%d.expiration", i)).([]interface{}) + if len(expiration) > 0 && expiration[0] != nil { + e := expiration[0].(map[string]interface{}) + i := &s3.LifecycleExpiration{} + if val, ok := e["date"].(string); ok && val != "" { + t, err := time.Parse(time.RFC3339, fmt.Sprintf("%sT00:00:00Z", val)) + if err != nil { + return fmt.Errorf("error parsing S3 Bucket lifecycle rule expiration date: %w", err) + } + i.Date = aws.Time(t) + } else if val, ok := e["days"].(int); ok && val > 0 { + i.Days = aws.Int64(int64(val)) + } else if val, ok := e["expired_object_delete_marker"].(bool); ok { + i.ExpiredObjectDeleteMarker = aws.Bool(val) + } + rule.Expiration = i + } + + // NoncurrentVersionExpiration + ncExpiration := d.Get(fmt.Sprintf("lifecycle_rule.%d.noncurrent_version_expiration", i)).([]interface{}) + if len(ncExpiration) > 0 && ncExpiration[0] != nil { + e := ncExpiration[0].(map[string]interface{}) + + if val, ok := e["days"].(int); ok && val > 0 { + rule.NoncurrentVersionExpiration = &s3.NoncurrentVersionExpiration{ + NoncurrentDays: aws.Int64(int64(val)), + } + } + } + + // Transitions + transitions := d.Get(fmt.Sprintf("lifecycle_rule.%d.transition", i)).(*schema.Set).List() + if len(transitions) > 0 { + rule.Transitions = make([]*s3.Transition, 0, len(transitions)) + for _, transition := range transitions { + transition := transition.(map[string]interface{}) + i := &s3.Transition{} + if val, ok := transition["date"].(string); ok && val != "" { + t, err := time.Parse(time.RFC3339, fmt.Sprintf("%sT00:00:00Z", val)) + if err != nil { + return fmt.Errorf("error parsing S3 Bucket lifecycle rule transition date: %w", err) + } + i.Date = aws.Time(t) + } else if val, ok := transition["days"].(int); ok && val >= 0 { + i.Days = aws.Int64(int64(val)) + } + if val, ok := transition["storage_class"].(string); ok && val != "" { + i.StorageClass = aws.String(val) + } + + rule.Transitions = append(rule.Transitions, i) + } + } + // NoncurrentVersionTransitions + ncTransitions := d.Get(fmt.Sprintf("lifecycle_rule.%d.noncurrent_version_transition", i)).(*schema.Set).List() + if len(ncTransitions) > 0 { + rule.NoncurrentVersionTransitions = make([]*s3.NoncurrentVersionTransition, 0, len(ncTransitions)) + for _, transition := range ncTransitions { + transition := transition.(map[string]interface{}) + i := &s3.NoncurrentVersionTransition{} + if val, ok := transition["days"].(int); ok && val >= 0 { + i.NoncurrentDays = aws.Int64(int64(val)) + } + if val, ok := transition["storage_class"].(string); ok && val != "" { + i.StorageClass = aws.String(val) + } + + rule.NoncurrentVersionTransitions = append(rule.NoncurrentVersionTransitions, i) + } + } + + // As a lifecycle rule requires 1 or more transition/expiration actions, + // we explicitly pass a default ExpiredObjectDeleteMarker value to be able to create + // the rule while keeping the policy unaffected if the conditions are not met. 
+ if rule.Expiration == nil && rule.NoncurrentVersionExpiration == nil && + rule.Transitions == nil && rule.NoncurrentVersionTransitions == nil && + rule.AbortIncompleteMultipartUpload == nil { + rule.Expiration = &s3.LifecycleExpiration{ExpiredObjectDeleteMarker: aws.Bool(false)} + } + + rules = append(rules, rule) + } + + input := &s3.PutBucketLifecycleConfigurationInput{ + Bucket: aws.String(d.Id()), + LifecycleConfiguration: &s3.BucketLifecycleConfiguration{ + Rules: rules, + }, + } + + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.PutBucketLifecycleConfiguration(input) + }) + + return err +} + +func resourceBucketInternalLoggingUpdate(conn *s3.S3, d *schema.ResourceData) error { + logging := d.Get("logging").([]interface{}) + loggingStatus := &s3.BucketLoggingStatus{} + + if len(logging) > 0 { + c := logging[0].(map[string]interface{}) + + loggingEnabled := &s3.LoggingEnabled{} + if val, ok := c["target_bucket"].(string); ok { + loggingEnabled.TargetBucket = aws.String(val) + } + if val, ok := c["target_prefix"].(string); ok { + loggingEnabled.TargetPrefix = aws.String(val) + } + + loggingStatus.LoggingEnabled = loggingEnabled + } + + input := &s3.PutBucketLoggingInput{ + Bucket: aws.String(d.Id()), + BucketLoggingStatus: loggingStatus, + } + + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.PutBucketLogging(input) + }) + + return err +} + +func resourceBucketInternalObjectLockConfigurationUpdate(conn *s3.S3, d *schema.ResourceData) error { + // S3 Object Lock configuration cannot be deleted, only updated. + req := &s3.PutObjectLockConfigurationInput{ + Bucket: aws.String(d.Id()), + ObjectLockConfiguration: expandS3ObjectLockConfiguration(d.Get("object_lock_configuration").([]interface{})), + } + + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.PutObjectLockConfiguration(req) + }) + + return err +} + +func resourceBucketInternalPolicyUpdate(conn *s3.S3, d *schema.ResourceData) error { + policy, err := structure.NormalizeJsonString(d.Get("policy").(string)) + + if err != nil { + return fmt.Errorf("policy (%s) is an invalid JSON: %w", policy, err) + } + + if policy == "" { + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.DeleteBucketPolicy(&s3.DeleteBucketPolicyInput{ + Bucket: aws.String(d.Id()), + }) + }) + + if err != nil { + return fmt.Errorf("error deleting S3 Bucket (%s) policy: %w", d.Id(), err) + } + + return nil + } + + params := &s3.PutBucketPolicyInput{ + Bucket: aws.String(d.Id()), + Policy: aws.String(policy), + } + + err = resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.PutBucketPolicy(params) + if tfawserr.ErrCodeEquals(err, ErrCodeMalformedPolicy, s3.ErrCodeNoSuchBucket) { + return resource.RetryableError(err) + } + if err != nil { + return resource.NonRetryableError(err) + } + return nil + }) + + if tfresource.TimedOut(err) { + _, err = conn.PutBucketPolicy(params) + } + + return err +} + +func resourceBucketInternalReplicationConfigurationUpdate(conn *s3.S3, d *schema.ResourceData) error { + replicationConfiguration := d.Get("replication_configuration").([]interface{}) + + if len(replicationConfiguration) == 0 { + input := &s3.DeleteBucketReplicationInput{ + Bucket: aws.String(d.Id()), + } + + _, err := conn.DeleteBucketReplication(input) + + if err != nil { + return fmt.Errorf("error removing S3 Bucket (%s) Replication: %w", 
d.Id(), err) + } + + return nil + } + + hasVersioning := false + // Validate that bucket versioning is enabled + if versioning, ok := d.GetOk("versioning"); ok { + v := versioning.([]interface{}) + + if v[0].(map[string]interface{})["enabled"].(bool) { + hasVersioning = true + } + } + + if !hasVersioning { + return fmt.Errorf("versioning must be enabled to allow S3 bucket replication") + } + + input := &s3.PutBucketReplicationInput{ + Bucket: aws.String(d.Id()), + ReplicationConfiguration: expandBucketReplicationConfiguration(replicationConfiguration), + } + + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.PutBucketReplication(input) + if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) || tfawserr.ErrMessageContains(err, ErrCodeInvalidRequest, "Versioning must be 'Enabled' on the bucket") { + return resource.RetryableError(err) + } + if err != nil { + return resource.NonRetryableError(err) + } + return nil + }) + + if tfresource.TimedOut(err) { + _, err = conn.PutBucketReplication(input) + } + + return err +} + +func resourceBucketInternalRequestPayerUpdate(conn *s3.S3, d *schema.ResourceData) error { + payer := d.Get("request_payer").(string) + + input := &s3.PutBucketRequestPaymentInput{ + Bucket: aws.String(d.Id()), + RequestPaymentConfiguration: &s3.RequestPaymentConfiguration{ + Payer: aws.String(payer), + }, + } + + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.PutBucketRequestPayment(input) + }) + + return err +} + +func resourceBucketInternalServerSideEncryptionConfigurationUpdate(conn *s3.S3, d *schema.ResourceData) error { + serverSideEncryptionConfiguration := d.Get("server_side_encryption_configuration").([]interface{}) + + if len(serverSideEncryptionConfiguration) == 0 { + input := &s3.DeleteBucketEncryptionInput{ + Bucket: aws.String(d.Id()), + } + + _, err := conn.DeleteBucketEncryption(input) + + if err != nil { + return fmt.Errorf("error removing S3 Bucket (%s) Server-side Encryption: %w", d.Id(), err) + } + + return nil + } + + c := serverSideEncryptionConfiguration[0].(map[string]interface{}) + + rc := &s3.ServerSideEncryptionConfiguration{} + + rcRules := c["rule"].([]interface{}) + var rules []*s3.ServerSideEncryptionRule + for _, v := range rcRules { + rr := v.(map[string]interface{}) + rrDefault := rr["apply_server_side_encryption_by_default"].([]interface{}) + sseAlgorithm := rrDefault[0].(map[string]interface{})["sse_algorithm"].(string) + kmsMasterKeyId := rrDefault[0].(map[string]interface{})["kms_master_key_id"].(string) + rcDefaultRule := &s3.ServerSideEncryptionByDefault{ + SSEAlgorithm: aws.String(sseAlgorithm), + } + if kmsMasterKeyId != "" { + rcDefaultRule.KMSMasterKeyID = aws.String(kmsMasterKeyId) + } + rcRule := &s3.ServerSideEncryptionRule{ + ApplyServerSideEncryptionByDefault: rcDefaultRule, + } + + if val, ok := rr["bucket_key_enabled"].(bool); ok { + rcRule.BucketKeyEnabled = aws.Bool(val) + } + + rules = append(rules, rcRule) + } + + rc.Rules = rules + + input := &s3.PutBucketEncryptionInput{ + Bucket: aws.String(d.Id()), + ServerSideEncryptionConfiguration: rc, + } + + _, err := tfresource.RetryWhenAWSErrCodeEquals( + propagationTimeout, + func() (interface{}, error) { + return conn.PutBucketEncryption(input) + }, + s3.ErrCodeNoSuchBucket, + ErrCodeOperationAborted, + ) + + return err +} + +func resourceBucketInternalVersioningUpdate(conn *s3.S3, bucket string, versioningConfig *s3.VersioningConfiguration) error { + input := &s3.PutBucketVersioningInput{ 
+ Bucket: aws.String(bucket), + VersioningConfiguration: versioningConfig, + } + + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.PutBucketVersioning(input) + }) + + return err +} + +func resourceBucketInternalWebsiteUpdate(conn *s3.S3, d *schema.ResourceData) error { + ws := d.Get("website").([]interface{}) + + if len(ws) == 0 { + input := &s3.DeleteBucketWebsiteInput{ + Bucket: aws.String(d.Id()), + } + + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.DeleteBucketWebsite(input) + }) + + if err != nil { + return fmt.Errorf("error deleting S3 Bucket (%s) Website: %w", d.Id(), err) + } + + d.Set("website_endpoint", "") + d.Set("website_domain", "") + + return nil + } + + websiteConfig, err := expandWebsiteConfiguration(ws) + if err != nil { + return fmt.Errorf("error expanding S3 Bucket (%s) website configuration: %w", d.Id(), err) + } + + input := &s3.PutBucketWebsiteInput{ + Bucket: aws.String(d.Id()), + WebsiteConfiguration: websiteConfig, + } + + _, err = verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.PutBucketWebsite(input) + }) + + return err +} + +///////////////////////////////////////////// Expand and Flatten functions ///////////////////////////////////////////// + +// Cors Rule functions + +func flattenBucketCorsRules(rules []*s3.CORSRule) []interface{} { + var results []interface{} + + for _, rule := range rules { + if rule == nil { + continue + } + + m := make(map[string]interface{}) + + if len(rule.AllowedHeaders) > 0 { + m["allowed_headers"] = flex.FlattenStringList(rule.AllowedHeaders) + } + + if len(rule.AllowedMethods) > 0 { + m["allowed_methods"] = flex.FlattenStringList(rule.AllowedMethods) + } + + if len(rule.AllowedOrigins) > 0 { + m["allowed_origins"] = flex.FlattenStringList(rule.AllowedOrigins) + } + + if len(rule.ExposeHeaders) > 0 { + m["expose_headers"] = flex.FlattenStringList(rule.ExposeHeaders) + } + + if rule.MaxAgeSeconds != nil { + m["max_age_seconds"] = int(aws.Int64Value(rule.MaxAgeSeconds)) + } + + results = append(results, m) + } + + return results +} + +// Grants functions + +func expandGrants(l []interface{}) []*s3.Grant { + var grants []*s3.Grant + + for _, tfMapRaw := range l { + tfMap, ok := tfMapRaw.(map[string]interface{}) + if !ok { + continue + } + + if v, ok := tfMap["permissions"].(*schema.Set); ok { + for _, rawPermission := range v.List() { + permission, ok := rawPermission.(string) + if !ok { + continue + } + + grantee := &s3.Grantee{} + + if v, ok := tfMap["id"].(string); ok && v != "" { + grantee.SetID(v) + } + + if v, ok := tfMap["type"].(string); ok && v != "" { + grantee.SetType(v) + } + + if v, ok := tfMap["uri"].(string); ok && v != "" { + grantee.SetURI(v) + } + + g := &s3.Grant{ + Grantee: grantee, + Permission: aws.String(permission), + } + + grants = append(grants, g) + } + } + } + return grants +} + +func flattenGrants(ap *s3.GetBucketAclOutput) []interface{} { + if len(ap.Grants) == 0 { + return []interface{}{} + } + + getGrant := func(grants []interface{}, grantee map[string]interface{}) (interface{}, bool) { + for _, pg := range grants { + pgt := pg.(map[string]interface{}) + if pgt["type"] == grantee["type"] && pgt["id"] == grantee["id"] && pgt["uri"] == grantee["uri"] && + pgt["permissions"].(*schema.Set).Len() > 0 { + return pg, true + } + } + return nil, false + } + + grants := make([]interface{}, 0, len(ap.Grants)) + for _, granteeObject := range ap.Grants { + grantee := 
make(map[string]interface{}) + grantee["type"] = aws.StringValue(granteeObject.Grantee.Type) + + if granteeObject.Grantee.ID != nil { + grantee["id"] = aws.StringValue(granteeObject.Grantee.ID) + } + if granteeObject.Grantee.URI != nil { + grantee["uri"] = aws.StringValue(granteeObject.Grantee.URI) + } + if pg, ok := getGrant(grants, grantee); ok { + pg.(map[string]interface{})["permissions"].(*schema.Set).Add(aws.StringValue(granteeObject.Permission)) + } else { + grantee["permissions"] = schema.NewSet(schema.HashString, []interface{}{aws.StringValue(granteeObject.Permission)}) + grants = append(grants, grantee) + } + } + + return grants +} + +// Lifecycle Rule functions + +func flattenBucketLifecycleRuleExpiration(expiration *s3.LifecycleExpiration) []interface{} { + if expiration == nil { + return []interface{}{} + } + + m := make(map[string]interface{}) + + if expiration.Date != nil { + m["date"] = (aws.TimeValue(expiration.Date)).Format("2006-01-02") + } + if expiration.Days != nil { + m["days"] = int(aws.Int64Value(expiration.Days)) + } + if expiration.ExpiredObjectDeleteMarker != nil { + m["expired_object_delete_marker"] = aws.BoolValue(expiration.ExpiredObjectDeleteMarker) + } + + return []interface{}{m} +} + +func flattenBucketLifecycleRules(lifecycleRules []*s3.LifecycleRule) []interface{} { + if len(lifecycleRules) == 0 { + return []interface{}{} + } + + var results []interface{} + + for _, lifecycleRule := range lifecycleRules { + if lifecycleRule == nil { + continue + } + + rule := make(map[string]interface{}) + + // AbortIncompleteMultipartUploadDays + if lifecycleRule.AbortIncompleteMultipartUpload != nil { + if lifecycleRule.AbortIncompleteMultipartUpload.DaysAfterInitiation != nil { + rule["abort_incomplete_multipart_upload_days"] = int(aws.Int64Value(lifecycleRule.AbortIncompleteMultipartUpload.DaysAfterInitiation)) + } + } + + // ID + if lifecycleRule.ID != nil { + rule["id"] = aws.StringValue(lifecycleRule.ID) + } + + // Filter + if filter := lifecycleRule.Filter; filter != nil { if filter.And != nil { // Prefix if filter.And.Prefix != nil { @@ -1597,6 +2413,8 @@ func flattenBucketLifecycleRuleTransitions(transitions []*s3.Transition) []inter return results } +// Logging functions + func flattenBucketLoggingEnabled(loggingEnabled *s3.LoggingEnabled) []interface{} { if loggingEnabled == nil { return []interface{}{} @@ -1614,135 +2432,251 @@ func flattenBucketLoggingEnabled(loggingEnabled *s3.LoggingEnabled) []interface{ return []interface{}{m} } -func flattenServerSideEncryptionConfiguration(c *s3.ServerSideEncryptionConfiguration) []interface{} { - if c == nil { - return []interface{}{} +// Object Lock Configuration functions + +func expandS3ObjectLockConfiguration(vConf []interface{}) *s3.ObjectLockConfiguration { + if len(vConf) == 0 || vConf[0] == nil { + return nil } - m := map[string]interface{}{ - "rule": flattenServerSideEncryptionConfigurationRules(c.Rules), + mConf := vConf[0].(map[string]interface{}) + + conf := &s3.ObjectLockConfiguration{} + + if vObjectLockEnabled, ok := mConf["object_lock_enabled"].(string); ok && vObjectLockEnabled != "" { + conf.ObjectLockEnabled = aws.String(vObjectLockEnabled) } - return []interface{}{m} -} + if vRule, ok := mConf["rule"].([]interface{}); ok && len(vRule) > 0 { + mRule := vRule[0].(map[string]interface{}) -func flattenServerSideEncryptionConfigurationRules(rules []*s3.ServerSideEncryptionRule) []interface{} { - var results []interface{} + if vDefaultRetention, ok := mRule["default_retention"].([]interface{}); ok && 
len(vDefaultRetention) > 0 && vDefaultRetention[0] != nil { + mDefaultRetention := vDefaultRetention[0].(map[string]interface{}) - for _, rule := range rules { - m := make(map[string]interface{}) + conf.Rule = &s3.ObjectLockRule{ + DefaultRetention: &s3.DefaultRetention{}, + } - if rule.BucketKeyEnabled != nil { - m["bucket_key_enabled"] = aws.BoolValue(rule.BucketKeyEnabled) + if vMode, ok := mDefaultRetention["mode"].(string); ok && vMode != "" { + conf.Rule.DefaultRetention.Mode = aws.String(vMode) + } + if vDays, ok := mDefaultRetention["days"].(int); ok && vDays > 0 { + conf.Rule.DefaultRetention.Days = aws.Int64(int64(vDays)) + } + if vYears, ok := mDefaultRetention["years"].(int); ok && vYears > 0 { + conf.Rule.DefaultRetention.Years = aws.Int64(int64(vYears)) + } } + } - if rule.ApplyServerSideEncryptionByDefault != nil { - m["apply_server_side_encryption_by_default"] = []interface{}{ + return conf +} + +func flattenS3ObjectLockConfiguration(conf *s3.ObjectLockConfiguration) []interface{} { + if conf == nil { + return []interface{}{} + } + + mConf := map[string]interface{}{ + "object_lock_enabled": aws.StringValue(conf.ObjectLockEnabled), + } + + if conf.Rule != nil && conf.Rule.DefaultRetention != nil { + mRule := map[string]interface{}{ + "default_retention": []interface{}{ map[string]interface{}{ - "kms_master_key_id": aws.StringValue(rule.ApplyServerSideEncryptionByDefault.KMSMasterKeyID), - "sse_algorithm": aws.StringValue(rule.ApplyServerSideEncryptionByDefault.SSEAlgorithm), + "mode": aws.StringValue(conf.Rule.DefaultRetention.Mode), + "days": int(aws.Int64Value(conf.Rule.DefaultRetention.Days)), + "years": int(aws.Int64Value(conf.Rule.DefaultRetention.Years)), }, - } + }, } - results = append(results, m) + mConf["rule"] = []interface{}{mRule} } - return results + return []interface{}{mConf} } -func flattenBucketCorsRules(rules []*s3.CORSRule) []interface{} { - var results []interface{} +// Replication Configuration functions - for _, rule := range rules { - if rule == nil { +func expandBucketReplicationConfiguration(l []interface{}) *s3.ReplicationConfiguration { + if len(l) == 0 || l[0] == nil { + return nil + } + + tfMap, ok := l[0].(map[string]interface{}) + if !ok { + return nil + } + + rc := &s3.ReplicationConfiguration{} + + if val, ok := tfMap["role"].(string); ok { + rc.Role = aws.String(val) + } + + if v, ok := tfMap["rules"].(*schema.Set); ok && v.Len() > 0 { + rc.Rules = expandBucketReplicationConfigurationRules(v.List()) + } + + return rc +} + +func expandBucketReplicationConfigurationRules(l []interface{}) []*s3.ReplicationRule { + var rules []*s3.ReplicationRule + + for _, tfMapRaw := range l { + tfMap, ok := tfMapRaw.(map[string]interface{}) + if !ok { continue } - m := make(map[string]interface{}) + rcRule := &s3.ReplicationRule{} - if len(rule.AllowedHeaders) > 0 { - m["allowed_headers"] = flex.FlattenStringList(rule.AllowedHeaders) + if status, ok := tfMap["status"].(string); ok && status != "" { + rcRule.Status = aws.String(status) + } else { + continue } - if len(rule.AllowedMethods) > 0 { - m["allowed_methods"] = flex.FlattenStringList(rule.AllowedMethods) + if v, ok := tfMap["id"].(string); ok && v != "" { + rcRule.ID = aws.String(v) } - if len(rule.AllowedOrigins) > 0 { - m["allowed_origins"] = flex.FlattenStringList(rule.AllowedOrigins) + if v, ok := tfMap["destination"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + rcRule.Destination = expandBucketReplicationConfigurationRulesDestination(v) + } else { + rcRule.Destination = &s3.Destination{} 
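+ // Destination is a required element of a replication rule, so an empty + // Destination is sent rather than omitting the field when none is configured.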
} - if len(rule.ExposeHeaders) > 0 { - m["expose_headers"] = flex.FlattenStringList(rule.ExposeHeaders) + if v, ok := tfMap["source_selection_criteria"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + rcRule.SourceSelectionCriteria = expandBucketReplicationConfigurationRulesSourceSelectionCriteria(v) } - if rule.MaxAgeSeconds != nil { - m["max_age_seconds"] = int(aws.Int64Value(rule.MaxAgeSeconds)) + if v, ok := tfMap["filter"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + // XML schema V2. + rcRule.Priority = aws.Int64(int64(tfMap["priority"].(int))) + + rcRule.Filter = &s3.ReplicationRuleFilter{} + + filter := v[0].(map[string]interface{}) + tags := Tags(tftags.New(filter["tags"]).IgnoreAWS()) + + if len(tags) > 0 { + rcRule.Filter.And = &s3.ReplicationRuleAndOperator{ + Prefix: aws.String(filter["prefix"].(string)), + Tags: tags, + } + } else { + rcRule.Filter.Prefix = aws.String(filter["prefix"].(string)) + } + + if dmr, ok := tfMap["delete_marker_replication_status"].(string); ok && dmr != "" { + rcRule.DeleteMarkerReplication = &s3.DeleteMarkerReplication{ + Status: aws.String(dmr), + } + } else { + rcRule.DeleteMarkerReplication = &s3.DeleteMarkerReplication{ + Status: aws.String(s3.DeleteMarkerReplicationStatusDisabled), + } + } + } else { + // XML schema V1. + rcRule.Prefix = aws.String(tfMap["prefix"].(string)) } - results = append(results, m) + rules = append(rules, rcRule) } - return results + return rules } -func flattenBucketWebsite(ws *s3.GetBucketWebsiteOutput) ([]interface{}, error) { - if ws == nil { - return []interface{}{}, nil +func expandBucketReplicationConfigurationRulesDestination(l []interface{}) *s3.Destination { + if len(l) == 0 || l[0] == nil { + return nil } - m := make(map[string]interface{}) - - if v := ws.IndexDocument; v != nil { - m["index_document"] = aws.StringValue(v.Suffix) + tfMap, ok := l[0].(map[string]interface{}) + if !ok { + return nil } - if v := ws.ErrorDocument; v != nil { - m["error_document"] = aws.StringValue(v.Key) + ruleDestination := &s3.Destination{} + + if v, ok := tfMap["bucket"].(string); ok { + ruleDestination.Bucket = aws.String(v) } - if v := ws.RedirectAllRequestsTo; v != nil { - if v.Protocol == nil { - m["redirect_all_requests_to"] = aws.StringValue(v.HostName) - } else { - var host string - var path string - var query string - parsedHostName, err := url.Parse(aws.StringValue(v.HostName)) - if err == nil { - host = parsedHostName.Host - path = parsedHostName.Path - query = parsedHostName.RawQuery - } else { - host = aws.StringValue(v.HostName) - path = "" - } + if v, ok := tfMap["storage_class"].(string); ok && v != "" { + ruleDestination.StorageClass = aws.String(v) + } - m["redirect_all_requests_to"] = (&url.URL{ - Host: host, - Path: path, - Scheme: aws.StringValue(v.Protocol), - RawQuery: query, - }).String() + if v, ok := tfMap["replica_kms_key_id"].(string); ok && v != "" { + ruleDestination.EncryptionConfiguration = &s3.EncryptionConfiguration{ + ReplicaKmsKeyID: aws.String(v), } } - if v := ws.RoutingRules; v != nil { - rr, err := normalizeRoutingRules(v) - if err != nil { - return nil, fmt.Errorf("error while marshaling routing rules: %w", err) - } - m["routing_rules"] = rr + if v, ok := tfMap["account_id"].(string); ok && v != "" { + ruleDestination.Account = aws.String(v) } - // We have special handling for the website configuration, - // so only return the configuration if there is any - if len(m) == 0 { - return []interface{}{}, nil + if v, ok := tfMap["access_control_translation"].([]interface{}); ok 
&& len(v) > 0 && v[0] != nil { + aclTranslationValues := v[0].(map[string]interface{}) + ruleAclTranslation := &s3.AccessControlTranslation{} + ruleAclTranslation.Owner = aws.String(aclTranslationValues["owner"].(string)) + ruleDestination.AccessControlTranslation = ruleAclTranslation + } + + // replication metrics (required for RTC) + if v, ok := tfMap["metrics"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + metricsConfig := &s3.Metrics{} + metricsValues := v[0].(map[string]interface{}) + metricsConfig.EventThreshold = &s3.ReplicationTimeValue{} + metricsConfig.Status = aws.String(metricsValues["status"].(string)) + metricsConfig.EventThreshold.Minutes = aws.Int64(int64(metricsValues["minutes"].(int))) + ruleDestination.Metrics = metricsConfig + } + + // replication time control (RTC) + if v, ok := tfMap["replication_time"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + rtcValues := v[0].(map[string]interface{}) + rtcConfig := &s3.ReplicationTime{} + rtcConfig.Status = aws.String(rtcValues["status"].(string)) + rtcConfig.Time = &s3.ReplicationTimeValue{} + rtcConfig.Time.Minutes = aws.Int64(int64(rtcValues["minutes"].(int))) + ruleDestination.ReplicationTime = rtcConfig + } + + return ruleDestination +} + +func expandBucketReplicationConfigurationRulesSourceSelectionCriteria(l []interface{}) *s3.SourceSelectionCriteria { + if len(l) == 0 || l[0] == nil { + return nil + } + + tfMap, ok := l[0].(map[string]interface{}) + if !ok { + return nil + } + + ruleSsc := &s3.SourceSelectionCriteria{} + + if v, ok := tfMap["sse_kms_encrypted_objects"].([]interface{}); ok && len(v) > 0 && v[0] != nil { + sseKmsValues := v[0].(map[string]interface{}) + sseKmsEncryptedObjects := &s3.SseKmsEncryptedObjects{} + + if sseKmsValues["enabled"].(bool) { + sseKmsEncryptedObjects.Status = aws.String(s3.SseKmsEncryptedObjectsStatusEnabled) + } else { + sseKmsEncryptedObjects.Status = aws.String(s3.SseKmsEncryptedObjectsStatusDisabled) + } + ruleSsc.SseKmsEncryptedObjects = sseKmsEncryptedObjects } - return []interface{}{m}, nil + return ruleSsc } func flattenBucketReplicationConfiguration(r *s3.ReplicationConfiguration) []interface{} { @@ -1919,141 +2853,111 @@ func flattenBucketReplicationConfigurationReplicationRules(rules []*s3.Replicati return results } -func normalizeRoutingRules(w []*s3.RoutingRule) (string, error) { - withNulls, err := json.Marshal(w) - if err != nil { - return "", err - } - - var rules []map[string]interface{} - if err := json.Unmarshal(withNulls, &rules); err != nil { - return "", err - } +// Server Side Encryption Configuration functions - var cleanRules []map[string]interface{} - for _, rule := range rules { - cleanRules = append(cleanRules, removeNil(rule)) +func flattenServerSideEncryptionConfiguration(c *s3.ServerSideEncryptionConfiguration) []interface{} { + if c == nil { + return []interface{}{} } - withoutNulls, err := json.Marshal(cleanRules) - if err != nil { - return "", err + m := map[string]interface{}{ + "rule": flattenServerSideEncryptionConfigurationRules(c.Rules), } - return string(withoutNulls), nil + return []interface{}{m} } -func removeNil(data map[string]interface{}) map[string]interface{} { - withoutNil := make(map[string]interface{}) +func flattenServerSideEncryptionConfigurationRules(rules []*s3.ServerSideEncryptionRule) []interface{} { + var results []interface{} - for k, v := range data { - if v == nil { - continue - } + for _, rule := range rules { + m := make(map[string]interface{}) - switch v := v.(type) { - case map[string]interface{}: - 
withoutNil[k] = removeNil(v) - default: - withoutNil[k] = v + if rule.BucketKeyEnabled != nil { + m["bucket_key_enabled"] = aws.BoolValue(rule.BucketKeyEnabled) } - } - return withoutNil -} + if rule.ApplyServerSideEncryptionByDefault != nil { + m["apply_server_side_encryption_by_default"] = []interface{}{ + map[string]interface{}{ + "kms_master_key_id": aws.StringValue(rule.ApplyServerSideEncryptionByDefault.KMSMasterKeyID), + "sse_algorithm": aws.StringValue(rule.ApplyServerSideEncryptionByDefault.SSEAlgorithm), + }, + } + } -func normalizeRegion(region string) string { - // Default to us-east-1 if the bucket doesn't have a region: - // http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html - if region == "" { - region = endpoints.UsEast1RegionID + results = append(results, m) } - return region + return results } -// ValidBucketName validates any S3 bucket name that is not inside the us-east-1 region. -// Buckets outside of this region have to be DNS-compliant. After the same restrictions are -// applied to buckets in the us-east-1 region, this function can be refactored as a SchemaValidateFunc -func ValidBucketName(value string, region string) error { - if region != endpoints.UsEast1RegionID { - if (len(value) < 3) || (len(value) > 63) { - return fmt.Errorf("%q must contain from 3 to 63 characters", value) - } - if !regexp.MustCompile(`^[0-9a-z-.]+$`).MatchString(value) { - return fmt.Errorf("only lowercase alphanumeric characters and hyphens allowed in %q", value) - } - if regexp.MustCompile(`^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$`).MatchString(value) { - return fmt.Errorf("%q must not be formatted as an IP address", value) - } - if strings.HasPrefix(value, `.`) { - return fmt.Errorf("%q cannot start with a period", value) - } - if strings.HasSuffix(value, `.`) { - return fmt.Errorf("%q cannot end with a period", value) - } - if strings.Contains(value, `..`) { - return fmt.Errorf("%q can be only one period between labels", value) - } - } else { - if len(value) > 255 { - return fmt.Errorf("%q must contain less than 256 characters", value) - } - if !regexp.MustCompile(`^[0-9a-zA-Z-._]+$`).MatchString(value) { - return fmt.Errorf("only alphanumeric characters, hyphens, periods, and underscores allowed in %q", value) - } +// Versioning functions + +func expandVersioning(l []interface{}) *s3.VersioningConfiguration { + if len(l) == 0 || l[0] == nil { + return nil } - return nil -} -func grantHash(v interface{}) int { - var buf bytes.Buffer - m, ok := v.(map[string]interface{}) + tfMap, ok := l[0].(map[string]interface{}) if !ok { - return 0 + return nil } - if v, ok := m["id"]; ok { - buf.WriteString(fmt.Sprintf("%s-", v.(string))) - } - if v, ok := m["type"]; ok { - buf.WriteString(fmt.Sprintf("%s-", v.(string))) - } - if v, ok := m["uri"]; ok { - buf.WriteString(fmt.Sprintf("%s-", v.(string))) + output := &s3.VersioningConfiguration{} + + if v, ok := tfMap["enabled"].(bool); ok { + if v { + output.Status = aws.String(s3.BucketVersioningStatusEnabled) + } else { + output.Status = aws.String(s3.BucketVersioningStatusSuspended) + } } - if p, ok := m["permissions"]; ok { - buf.WriteString(fmt.Sprintf("%v-", p.(*schema.Set).List())) + + if v, ok := tfMap["mfa_delete"].(bool); ok { + if v { + output.MFADelete = aws.String(s3.MFADeleteEnabled) + } else { + output.MFADelete = aws.String(s3.MFADeleteDisabled) + } } - return create.StringHashcode(buf.String()) -} -type S3Website struct { - Endpoint, Domain string + return output } -// -// S3 Object Lock functions. 
-// +func expandVersioningWhenIsNewResource(l []interface{}) *s3.VersioningConfiguration { + if len(l) == 0 || l[0] == nil { + return nil + } + + tfMap, ok := l[0].(map[string]interface{}) -func expandS3ObjectLockConfiguration(vConf []interface{}) *s3.ObjectLockConfiguration { - if len(vConf) == 0 || vConf[0] == nil { + if !ok { return nil } - mConf := vConf[0].(map[string]interface{}) + output := &s3.VersioningConfiguration{} - conf := &s3.ObjectLockConfiguration{} + // Only set and return a non-nil VersioningConfiguration with at least one of + // MFADelete or Status enabled as the PutBucketVersioning API request + // does not need to be made for new buckets that don't require versioning. + // Reference: https://github.com/hashicorp/terraform-provider-aws/issues/4494 - if vObjectLockEnabled, ok := mConf["object_lock_enabled"].(string); ok && vObjectLockEnabled != "" { - conf.ObjectLockEnabled = aws.String(vObjectLockEnabled) + if v, ok := tfMap["enabled"].(bool); ok && v { + output.Status = aws.String(s3.BucketVersioningStatusEnabled) } - return conf -} + if v, ok := tfMap["mfa_delete"].(bool); ok && v { + output.MFADelete = aws.String(s3.MFADeleteEnabled) + } -// Versioning functions + if output.MFADelete == nil && output.Status == nil { + return nil + } + + return output +} func flattenVersioning(versioning *s3.GetBucketVersioningOutput) []interface{} { if versioning == nil { @@ -2077,71 +2981,163 @@ func flattenVersioning(versioning *s3.GetBucketVersioningOutput) []interface{} { return []interface{}{vc} } -func flattenS3ObjectLockConfiguration(conf *s3.ObjectLockConfiguration) []interface{} { - if conf == nil { - return []interface{}{} +// Website functions + +func expandWebsiteConfiguration(l []interface{}) (*s3.WebsiteConfiguration, error) { + if len(l) == 0 || l[0] == nil { + return nil, nil } - mConf := map[string]interface{}{ - "object_lock_enabled": aws.StringValue(conf.ObjectLockEnabled), + website, ok := l[0].(map[string]interface{}) + if !ok { + return nil, nil } - if conf.Rule != nil && conf.Rule.DefaultRetention != nil { - mRule := map[string]interface{}{ - "default_retention": []interface{}{ - map[string]interface{}{ - "mode": aws.StringValue(conf.Rule.DefaultRetention.Mode), - "days": int(aws.Int64Value(conf.Rule.DefaultRetention.Days)), - "years": int(aws.Int64Value(conf.Rule.DefaultRetention.Years)), - }, - }, + websiteConfiguration := &s3.WebsiteConfiguration{} + + if v, ok := website["index_document"].(string); ok && v != "" { + websiteConfiguration.IndexDocument = &s3.IndexDocument{ + Suffix: aws.String(v), } + } - mConf["rule"] = []interface{}{mRule} + if v, ok := website["error_document"].(string); ok && v != "" { + websiteConfiguration.ErrorDocument = &s3.ErrorDocument{ + Key: aws.String(v), + } } - return []interface{}{mConf} + if v, ok := website["redirect_all_requests_to"].(string); ok && v != "" { + redirect, err := url.Parse(v) + if err == nil && redirect.Scheme != "" { + var redirectHostBuf bytes.Buffer + redirectHostBuf.WriteString(redirect.Host) + if redirect.Path != "" { + redirectHostBuf.WriteString(redirect.Path) + } + if redirect.RawQuery != "" { + redirectHostBuf.WriteString("?") + redirectHostBuf.WriteString(redirect.RawQuery) + } + websiteConfiguration.RedirectAllRequestsTo = &s3.RedirectAllRequestsTo{ + HostName: aws.String(redirectHostBuf.String()), + Protocol: aws.String(redirect.Scheme), + } + } else { + websiteConfiguration.RedirectAllRequestsTo = &s3.RedirectAllRequestsTo{ + HostName: aws.String(v), + } + } + } + + if v, ok := 
website["routing_rules"].(string); ok && v != "" { + var unmarshaledRules []*s3.RoutingRule + if err := json.Unmarshal([]byte(v), &unmarshaledRules); err != nil { + return nil, err + } + websiteConfiguration.RoutingRules = unmarshaledRules + } + + return websiteConfiguration, nil } -func flattenGrants(ap *s3.GetBucketAclOutput) []interface{} { - if len(ap.Grants) == 0 { - return []interface{}{} +func flattenBucketWebsite(ws *s3.GetBucketWebsiteOutput) ([]interface{}, error) { + if ws == nil { + return []interface{}{}, nil } - //if ACL grants contains bucket owner FULL_CONTROL only - it is default "private" acl - if len(ap.Grants) == 1 && aws.StringValue(ap.Grants[0].Grantee.ID) == aws.StringValue(ap.Owner.ID) && - aws.StringValue(ap.Grants[0].Permission) == s3.PermissionFullControl { - return nil + + m := make(map[string]interface{}) + + if v := ws.IndexDocument; v != nil { + m["index_document"] = aws.StringValue(v.Suffix) } - getGrant := func(grants []interface{}, grantee map[string]interface{}) (interface{}, bool) { - for _, pg := range grants { - pgt := pg.(map[string]interface{}) - if pgt["type"] == grantee["type"] && pgt["id"] == grantee["id"] && pgt["uri"] == grantee["uri"] && - pgt["permissions"].(*schema.Set).Len() > 0 { - return pg, true + if v := ws.ErrorDocument; v != nil { + m["error_document"] = aws.StringValue(v.Key) + } + + if v := ws.RedirectAllRequestsTo; v != nil { + if v.Protocol == nil { + m["redirect_all_requests_to"] = aws.StringValue(v.HostName) + } else { + var host string + var path string + var query string + parsedHostName, err := url.Parse(aws.StringValue(v.HostName)) + if err == nil { + host = parsedHostName.Host + path = parsedHostName.Path + query = parsedHostName.RawQuery + } else { + host = aws.StringValue(v.HostName) + path = "" } + + m["redirect_all_requests_to"] = (&url.URL{ + Host: host, + Path: path, + Scheme: aws.StringValue(v.Protocol), + RawQuery: query, + }).String() } - return nil, false } - grants := make([]interface{}, 0, len(ap.Grants)) - for _, granteeObject := range ap.Grants { - grantee := make(map[string]interface{}) - grantee["type"] = aws.StringValue(granteeObject.Grantee.Type) - - if granteeObject.Grantee.ID != nil { - grantee["id"] = aws.StringValue(granteeObject.Grantee.ID) + if v := ws.RoutingRules; v != nil { + rr, err := normalizeRoutingRules(v) + if err != nil { + return nil, fmt.Errorf("error while marshaling routing rules: %w", err) } - if granteeObject.Grantee.URI != nil { - grantee["uri"] = aws.StringValue(granteeObject.Grantee.URI) + m["routing_rules"] = rr + } + + // We have special handling for the website configuration, + // so only return the configuration if there is any + if len(m) == 0 { + return []interface{}{}, nil + } + + return []interface{}{m}, nil +} + +func normalizeRoutingRules(w []*s3.RoutingRule) (string, error) { + withNulls, err := json.Marshal(w) + if err != nil { + return "", err + } + + var rules []map[string]interface{} + if err := json.Unmarshal(withNulls, &rules); err != nil { + return "", err + } + + var cleanRules []map[string]interface{} + for _, rule := range rules { + cleanRules = append(cleanRules, removeNil(rule)) + } + + withoutNulls, err := json.Marshal(cleanRules) + if err != nil { + return "", err + } + + return string(withoutNulls), nil +} + +func removeNil(data map[string]interface{}) map[string]interface{} { + withoutNil := make(map[string]interface{}) + + for k, v := range data { + if v == nil { + continue } - if pg, ok := getGrant(grants, grantee); ok { - 
pg.(map[string]interface{})["permissions"].(*schema.Set).Add(aws.StringValue(granteeObject.Permission)) - } else { - grantee["permissions"] = schema.NewSet(schema.HashString, []interface{}{aws.StringValue(granteeObject.Permission)}) - grants = append(grants, grantee) + + switch v := v.(type) { + case map[string]interface{}: + withoutNil[k] = removeNil(v) + default: + withoutNil[k] = v } } - return grants + return withoutNil } diff --git a/internal/service/s3/bucket_accelerate_configuration_test.go b/internal/service/s3/bucket_accelerate_configuration_test.go index aacc2f8c345f..223c3c5047d2 100644 --- a/internal/service/s3/bucket_accelerate_configuration_test.go +++ b/internal/service/s3/bucket_accelerate_configuration_test.go @@ -109,6 +109,66 @@ func TestAccS3BucketAccelerateConfiguration_disappears(t *testing.T) { }) } +func TestAccS3BucketAccelerateConfiguration_migrate_noChange(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_accelerate_configuration.test" + bucketResourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketAccelerateConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withAcceleration(rName, s3.BucketAccelerateStatusEnabled), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "acceleration_status", s3.BucketAccelerateStatusEnabled), + ), + }, + { + Config: testAccBucketAccelerateConfigurationBasicConfig(rName, s3.BucketAccelerateStatusEnabled), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketAccelerateConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", bucketResourceName, "id"), + resource.TestCheckResourceAttr(resourceName, "status", s3.BucketAccelerateStatusEnabled), + ), + }, + }, + }) +} + +func TestAccS3BucketAccelerateConfiguration_migrate_withChange(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_accelerate_configuration.test" + bucketResourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketAccelerateConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withAcceleration(rName, s3.BucketAccelerateStatusEnabled), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "acceleration_status", s3.BucketAccelerateStatusEnabled), + ), + }, + { + Config: testAccBucketAccelerateConfigurationBasicConfig(rName, s3.BucketAccelerateStatusSuspended), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketAccelerateConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", bucketResourceName, "id"), + resource.TestCheckResourceAttr(resourceName, "status", s3.BucketAccelerateStatusSuspended), + ), + }, + }, + }) +} + func testAccCheckBucketAccelerateConfigurationDestroy(s *terraform.State) error { conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn diff --git 
a/internal/service/s3/bucket_acl_test.go b/internal/service/s3/bucket_acl_test.go index 146dc5e84473..e3825bceebd0 100644 --- a/internal/service/s3/bucket_acl_test.go +++ b/internal/service/s3/bucket_acl_test.go @@ -303,6 +303,159 @@ func TestAccS3BucketAcl_disappears(t *testing.T) { }) } +func TestAccS3BucketAcl_migrate_aclNoChange(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + bucketResourceName := "aws_s3_bucket.test" + resourceName := "aws_s3_bucket_acl.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withACL(bucketName, s3.BucketCannedACLPublicRead), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "acl", s3.BucketCannedACLPublicRead), + ), + }, + { + Config: testAccBucketAcl_Migrate_AclConfig(bucketName, s3.BucketCannedACLPublicRead), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketAclExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "acl", s3.BucketCannedACLPublicRead), + ), + }, + }, + }) +} + +func TestAccS3BucketAcl_migrate_aclWithChange(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + bucketResourceName := "aws_s3_bucket.test" + resourceName := "aws_s3_bucket_acl.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withACL(bucketName, s3.BucketCannedACLPublicRead), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "acl", s3.BucketCannedACLPublicRead), + ), + }, + { + Config: testAccBucketAcl_Migrate_AclConfig(bucketName, s3.BucketCannedACLPrivate), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketAclExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "acl", s3.BucketCannedACLPrivate), + ), + }, + }, + }) +} + +func TestAccS3BucketAcl_migrate_grantsNoChange(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + bucketResourceName := "aws_s3_bucket.test" + resourceName := "aws_s3_bucket_acl.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withGrants(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "grant.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(bucketResourceName, "grant.*", map[string]string{ + "permissions.#": "2", + "type": "CanonicalUser", + }), + resource.TestCheckTypeSetElemAttr(bucketResourceName, "grant.*.permissions.*", "FULL_CONTROL"), + resource.TestCheckTypeSetElemAttr(bucketResourceName, "grant.*.permissions.*", "WRITE"), + ), + }, + { + Config: testAccBucketAcl_Migrate_GrantsNoChangeConfig(bucketName), + Check: resource.ComposeTestCheckFunc( + 
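+ // Grants configured through the legacy grant blocks on aws_s3_bucket are + // surfaced as access_control_policy grants on the standalone ACL resource.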
testAccCheckBucketAclExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "access_control_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "access_control_policy.0.grant.#", "2"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "access_control_policy.0.grant.*", map[string]string{ + "grantee.#": "1", + "grantee.0.type": s3.TypeCanonicalUser, + "permission": s3.PermissionFullControl, + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "access_control_policy.0.grant.*", map[string]string{ + "grantee.#": "1", + "grantee.0.type": s3.TypeCanonicalUser, + "permission": s3.PermissionWrite, + }), + resource.TestCheckTypeSetElemAttrPair(resourceName, "access_control_policy.0.grant.*.grantee.0.id", "data.aws_canonical_user_id.current", "id"), + resource.TestCheckResourceAttr(resourceName, "access_control_policy.0.owner.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "access_control_policy.0.owner.0.id", "data.aws_canonical_user_id.current", "id"), + ), + }, + }, + }) +} + +func TestAccS3BucketAcl_migrate_grantsWithChange(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + bucketResourceName := "aws_s3_bucket.test" + resourceName := "aws_s3_bucket_acl.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withACL(bucketName, s3.BucketCannedACLPublicRead), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "acl", s3.BucketCannedACLPublicRead), + ), + }, + { + Config: testAccBucketAcl_Migrate_GrantsWithChangeConfig(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketAclExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "access_control_policy.#", "1"), + resource.TestCheckResourceAttr(resourceName, "access_control_policy.0.grant.#", "2"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "access_control_policy.0.grant.*", map[string]string{ + "grantee.#": "1", + "grantee.0.type": s3.TypeCanonicalUser, + "permission": s3.PermissionRead, + }), + resource.TestCheckTypeSetElemAttrPair(resourceName, "access_control_policy.0.grant.*.grantee.0.id", "data.aws_canonical_user_id.current", "id"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "access_control_policy.0.grant.*", map[string]string{ + "grantee.#": "1", + "grantee.0.type": s3.TypeGroup, + "permission": s3.PermissionReadAcp, + }), + resource.TestMatchTypeSetElemNestedAttrs(resourceName, "access_control_policy.0.grant.*", map[string]*regexp.Regexp{ + "grantee.0.uri": regexp.MustCompile(`http://acs.*/groups/s3/LogDelivery`), + }), + resource.TestCheckResourceAttr(resourceName, "access_control_policy.0.owner.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "access_control_policy.0.owner.0.id", "data.aws_canonical_user_id.current", "id"), + ), + }, + }, + }) +} + func TestAccS3BucketAcl_updateACL(t *testing.T) { bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") resourceName := "aws_s3_bucket_acl.test" @@ -599,3 +752,88 @@ resource "aws_s3_bucket_acl" "test" { } `, bucketName) } + +func testAccBucketAcl_Migrate_AclConfig(rName, acl string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource 
"aws_s3_bucket_acl" "test" { + bucket = aws_s3_bucket.test.id + acl = %[2]q +} +`, rName, acl) +} + +func testAccBucketAcl_Migrate_GrantsNoChangeConfig(rName string) string { + return fmt.Sprintf(` +data "aws_canonical_user_id" "current" {} + +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_acl" "test" { + bucket = aws_s3_bucket.test.id + access_control_policy { + grant { + grantee { + id = data.aws_canonical_user_id.current.id + type = "CanonicalUser" + } + permission = "FULL_CONTROL" + } + + grant { + grantee { + id = data.aws_canonical_user_id.current.id + type = "CanonicalUser" + } + permission = "WRITE" + } + + owner { + id = data.aws_canonical_user_id.current.id + } + } +} +`, rName) +} + +func testAccBucketAcl_Migrate_GrantsWithChangeConfig(rName string) string { + return fmt.Sprintf(` +data "aws_canonical_user_id" "current" {} + +data "aws_partition" "current" {} + +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_acl" "test" { + bucket = aws_s3_bucket.test.id + access_control_policy { + grant { + grantee { + id = data.aws_canonical_user_id.current.id + type = "CanonicalUser" + } + permission = "READ" + } + + grant { + grantee { + type = "Group" + uri = "http://acs.${data.aws_partition.current.dns_suffix}/groups/s3/LogDelivery" + } + permission = "READ_ACP" + } + + owner { + id = data.aws_canonical_user_id.current.id + } + } +} +`, rName) +} diff --git a/internal/service/s3/bucket_cors_configuration_test.go b/internal/service/s3/bucket_cors_configuration_test.go index 5e94468b56ce..0143c8771e4c 100644 --- a/internal/service/s3/bucket_cors_configuration_test.go +++ b/internal/service/s3/bucket_cors_configuration_test.go @@ -219,6 +219,89 @@ func TestAccS3BucketCorsConfiguration_MultipleRules(t *testing.T) { }) } +func TestAccS3BucketCorsConfiguration_migrate_corsRuleNoChange(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + bucketResourceName := "aws_s3_bucket.test" + resourceName := "aws_s3_bucket_cors_configuration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withCORS(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.0.allowed_headers.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.0.allowed_methods.#", "2"), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.0.allowed_origins.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.0.expose_headers.#", "2"), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.0.max_age_seconds", "3000"), + ), + }, + { + Config: testAccBucketCorsConfigurationConfig_Migrate_CorsRuleNoChange(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketCorsConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", bucketResourceName, "id"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cors_rule.*", map[string]string{ + "allowed_headers.#": "1", + "allowed_methods.#": "2", + "allowed_origins.#": "1", + 
"expose_headers.#": "2", + "max_age_seconds": "3000", + }), + ), + }, + }, + }) +} + +func TestAccS3BucketCorsConfiguration_migrate_corsRuleWithChange(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + bucketResourceName := "aws_s3_bucket.test" + resourceName := "aws_s3_bucket_cors_configuration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withCORS(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.0.allowed_headers.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.0.allowed_methods.#", "2"), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.0.allowed_origins.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.0.expose_headers.#", "2"), + resource.TestCheckResourceAttr(bucketResourceName, "cors_rule.0.max_age_seconds", "3000"), + ), + }, + { + Config: testAccBucketCorsConfigurationConfig_Migrate_CorsRuleWithChange(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketCorsConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", bucketResourceName, "id"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cors_rule.*", map[string]string{ + "allowed_methods.#": "1", + "allowed_origins.#": "1", + }), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_methods.*", "PUT"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_origins.*", "https://www.example.com"), + ), + }, + }, + }) +} + func testAccCheckBucketCorsConfigurationDestroy(s *terraform.State) error { conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn @@ -360,3 +443,40 @@ resource "aws_s3_bucket_cors_configuration" "test" { } `, rName) } + +func testAccBucketCorsConfigurationConfig_Migrate_CorsRuleNoChange(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_cors_configuration" "test" { + bucket = aws_s3_bucket.test.id + + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST"] + allowed_origins = ["https://www.example.com"] + expose_headers = ["x-amz-server-side-encryption", "ETag"] + max_age_seconds = 3000 + } +} +`, rName) +} + +func testAccBucketCorsConfigurationConfig_Migrate_CorsRuleWithChange(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_cors_configuration" "test" { + bucket = aws_s3_bucket.test.id + + cors_rule { + allowed_methods = ["PUT"] + allowed_origins = ["https://www.example.com"] + } +} +`, rName) +} diff --git a/internal/service/s3/bucket_lifecycle_configuration.go b/internal/service/s3/bucket_lifecycle_configuration.go index c84ac871fe4a..5f10cb4580e7 100644 --- a/internal/service/s3/bucket_lifecycle_configuration.go +++ b/internal/service/s3/bucket_lifecycle_configuration.go @@ -5,6 +5,7 @@ import ( "fmt" "log" "reflect" + "strings" "time" "github.com/aws/aws-sdk-go/aws" @@ -95,7 +96,7 @@ func ResourceBucketLifecycleConfiguration() 
*schema.Resource { // we apply the Default behavior from v3.x of the provider (Filter with empty string Prefix), // which will thus return a Filter in the GetBucketLifecycleConfiguration request and // require diff suppression. - DiffSuppressFunc: verify.SuppressMissingOptionalConfigurationBlock, + DiffSuppressFunc: suppressMissingFilterConfigurationBlock, MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ @@ -420,3 +421,26 @@ func resourceBucketLifecycleConfigurationDelete(ctx context.Context, d *schema.R return nil } + +// suppressMissingFilterConfigurationBlock suppresses the diff that results from an omitted +// filter configuration block and one returned from the S3 API. +// To work around the issue, https://github.com/hashicorp/terraform-plugin-sdk/issues/743, +// this method only looks for changes in the "filter.#" value and not its nested fields +// which are incorrectly suppressed when using the verify.SuppressMissingOptionalConfigurationBlock method. +func suppressMissingFilterConfigurationBlock(k, old, new string, d *schema.ResourceData) bool { + if strings.HasSuffix(k, "filter.#") { + o, n := d.GetChange(k) + oVal, nVal := o.(int), n.(int) + + if oVal == 1 && nVal == 0 { + return true + } + + if oVal == 1 && nVal == 1 { + return old == "1" && new == "0" + } + + return false + } + return false +} diff --git a/internal/service/s3/bucket_lifecycle_configuration_test.go b/internal/service/s3/bucket_lifecycle_configuration_test.go index 41b9de2f2699..1d75237e10a1 100644 --- a/internal/service/s3/bucket_lifecycle_configuration_test.go +++ b/internal/service/s3/bucket_lifecycle_configuration_test.go @@ -842,6 +842,125 @@ func TestAccS3BucketLifecycleConfiguration_EmptyFilter_NonCurrentVersions(t *tes }, }) } +func TestAccS3BucketLifecycleConfiguration_migrate_noChange(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_lifecycle_configuration.test" + bucketResourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketLifecycleConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withLifecycleExpireMarker(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.id", "id1"), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.enabled", "true"), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.prefix", "path1/"), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.expiration.0.days", "0"), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.expiration.0.date", ""), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.expiration.0.expired_object_delete_marker", "true"), + ), + }, + { + Config: testAccBucketLifecycleConfiguration_Migrate_NoChangeConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketLifecycleConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", bucketResourceName, "bucket"), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.id", "id1"), + 
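+ // The legacy lifecycle_rule argument enabled = true maps to status = "Enabled" + // on the standalone lifecycle configuration resource.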
resource.TestCheckResourceAttr(resourceName, "rule.0.status", "Enabled"), + resource.TestCheckResourceAttr(resourceName, "rule.0.prefix", "path1/"), + resource.TestCheckResourceAttr(resourceName, "rule.0.expiration.0.days", "0"), + resource.TestCheckResourceAttr(resourceName, "rule.0.expiration.0.date", ""), + resource.TestCheckResourceAttr(resourceName, "rule.0.expiration.0.expired_object_delete_marker", "true"), + ), + }, + }, + }) +} + +func TestAccS3BucketLifecycleConfiguration_migrate_withChange(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_lifecycle_configuration.test" + bucketResourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketLifecycleConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withLifecycleExpireMarker(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.id", "id1"), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.enabled", "true"), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.prefix", "path1/"), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.expiration.0.days", "0"), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.expiration.0.date", ""), + resource.TestCheckResourceAttr(bucketResourceName, "lifecycle_rule.0.expiration.0.expired_object_delete_marker", "true"), + ), + }, + { + Config: testAccBucketLifecycleConfiguration_Migrate_WithChangeConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketLifecycleConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", bucketResourceName, "bucket"), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.id", "id1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.status", "Disabled"), + resource.TestCheckResourceAttr(resourceName, "rule.0.prefix", "path1/"), + resource.TestCheckResourceAttr(resourceName, "rule.0.expiration.0.days", "0"), + resource.TestCheckResourceAttr(resourceName, "rule.0.expiration.0.date", ""), + resource.TestCheckResourceAttr(resourceName, "rule.0.expiration.0.expired_object_delete_marker", "false"), + ), + }, + }, + }) +} + +// Reference: https://github.com/hashicorp/terraform-provider-aws/issues/23884 +func TestAccS3BucketLifecycleConfiguration_Update_filterWithAndToFilterWithPrefix(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_lifecycle_configuration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketLifecycleConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketLifecycleConfiguration_Filter_ObjectSizeGreaterThanAndPrefixConfig(rName, "prefix1"), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketLifecycleConfigurationExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + 
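+ // A filter that combines prefix with object_size_greater_than is returned + // by the S3 API wrapped in an "and" block.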
resource.TestCheckResourceAttr(resourceName, "rule.0.filter.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.0.and.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.0.and.0.object_size_greater_than", "300"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.0.and.0.prefix", "prefix1"), + ), + }, + { + Config: testAccBucketLifecycleConfiguration_Filter_PrefixConfig(rName, "prefix2"), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketLifecycleConfigurationExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.0.and.#", "0"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.0.prefix", "prefix2"), + ), + }, + }, + }) +} func testAccCheckBucketLifecycleConfigurationDestroy(s *terraform.State) error { conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn @@ -1584,3 +1703,120 @@ resource "aws_s3_bucket_lifecycle_configuration" "test" { } `, rName, date, sizeGreaterThan, sizeLessThan) } + +func testAccBucketLifecycleConfiguration_Filter_ObjectSizeGreaterThanAndPrefixConfig(rName, prefix string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_acl" "test" { + bucket = aws_s3_bucket.test.id + acl = "private" +} + +resource "aws_s3_bucket_lifecycle_configuration" "test" { + bucket = aws_s3_bucket.test.id + + rule { + id = %[1]q + + expiration { + days = 90 + } + + filter { + and { + object_size_greater_than = 300 + prefix = %[2]q + } + } + + status = "Enabled" + } +}`, rName, prefix) +} + +func testAccBucketLifecycleConfiguration_Filter_PrefixConfig(rName, prefix string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_acl" "test" { + bucket = aws_s3_bucket.test.id + acl = "private" +} + +resource "aws_s3_bucket_lifecycle_configuration" "test" { + bucket = aws_s3_bucket.test.id + + rule { + id = %[1]q + + expiration { + days = 90 + } + + filter { + prefix = %[2]q + } + + status = "Enabled" + } +}`, rName, prefix) +} + +func testAccBucketLifecycleConfiguration_Migrate_NoChangeConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_acl" "test" { + bucket = aws_s3_bucket.test.id + acl = "private" +} + +resource "aws_s3_bucket_lifecycle_configuration" "test" { + bucket = aws_s3_bucket.test.bucket + + rule { + id = "id1" + prefix = "path1/" + status = "Enabled" + + expiration { + expired_object_delete_marker = true + } + } +} +`, rName) +} + +func testAccBucketLifecycleConfiguration_Migrate_WithChangeConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_acl" "test" { + bucket = aws_s3_bucket.test.id + acl = "private" +} + +resource "aws_s3_bucket_lifecycle_configuration" "test" { + bucket = aws_s3_bucket.test.bucket + + rule { + id = "id1" + prefix = "path1/" + status = "Disabled" + + expiration { + expired_object_delete_marker = false + } + } +} +`, rName) +} diff --git a/internal/service/s3/bucket_logging_test.go b/internal/service/s3/bucket_logging_test.go index f59581a68020..25ca64878920 100644 --- a/internal/service/s3/bucket_logging_test.go +++ b/internal/service/s3/bucket_logging_test.go @@ -298,6 +298,70 @@ func TestAccS3BucketLogging_TargetGrantByGroup(t *testing.T) { 
}) } +func TestAccS3BucketLogging_migrate_loggingNoChange(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + bucketResourceName := "aws_s3_bucket.test" + resourceName := "aws_s3_bucket_logging.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withLogging(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "logging.#", "1"), + resource.TestCheckResourceAttrPair(bucketResourceName, "logging.0.target_bucket", "aws_s3_bucket.log_bucket", "id"), + resource.TestCheckResourceAttr(bucketResourceName, "logging.0.target_prefix", "log/"), + ), + }, + { + Config: testAccBucketLogging_Migrate_LoggingConfig(bucketName, "log/"), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketLoggingExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "target_bucket", "aws_s3_bucket.log_bucket", "id"), + resource.TestCheckResourceAttr(resourceName, "target_prefix", "log/"), + ), + }, + }, + }) +} + +func TestAccS3BucketLogging_migrate_loggingWithChange(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + bucketResourceName := "aws_s3_bucket.test" + resourceName := "aws_s3_bucket_logging.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withLogging(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "logging.#", "1"), + resource.TestCheckResourceAttrPair(bucketResourceName, "logging.0.target_bucket", "aws_s3_bucket.log_bucket", "id"), + resource.TestCheckResourceAttr(bucketResourceName, "logging.0.target_prefix", "log/"), + ), + }, + { + Config: testAccBucketLogging_Migrate_LoggingConfig(bucketName, "tmp/"), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketLoggingExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "target_bucket", "aws_s3_bucket.log_bucket", "id"), + resource.TestCheckResourceAttr(resourceName, "target_prefix", "tmp/"), + ), + }, + }, + }) +} + func testAccCheckBucketLoggingDestroy(s *terraform.State) error { conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn @@ -561,3 +625,32 @@ resource "aws_s3_bucket_logging" "test" { } `, rName, permission) } + +func testAccBucketLogging_Migrate_LoggingConfig(rName, targetPrefix string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "log_bucket" { + bucket = "%[1]s-log" +} + +resource "aws_s3_bucket_acl" "log_bucket_acl" { + bucket = aws_s3_bucket.log_bucket.id + acl = "log-delivery-write" +} + +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_acl" "test" { + bucket = aws_s3_bucket.test.id + acl = "private" +} + +resource "aws_s3_bucket_logging" "test" { + bucket = aws_s3_bucket.test.id + + target_bucket = aws_s3_bucket.log_bucket.id + target_prefix = %[2]q +} +`, rName, targetPrefix) +} diff --git a/internal/service/s3/bucket_object.go b/internal/service/s3/bucket_object.go index 
c307c89c6a81..94317b7fc418 100644 --- a/internal/service/s3/bucket_object.go +++ b/internal/service/s3/bucket_object.go @@ -390,7 +390,7 @@ func resourceBucketObjectDelete(d *schema.ResourceData, meta interface{}) error var err error if _, ok := d.GetOk("version_id"); ok { - err = DeleteAllObjectVersions(conn, bucket, key, d.Get("force_destroy").(bool), false) + _, err = DeleteAllObjectVersions(conn, bucket, key, d.Get("force_destroy").(bool), false) } else { err = deleteS3ObjectVersion(conn, bucket, key, "", false) } diff --git a/internal/service/s3/bucket_object_lock_configuration_test.go b/internal/service/s3/bucket_object_lock_configuration_test.go index 4b200ca129eb..71390ea61312 100644 --- a/internal/service/s3/bucket_object_lock_configuration_test.go +++ b/internal/service/s3/bucket_object_lock_configuration_test.go @@ -102,6 +102,78 @@ func TestAccS3BucketObjectLockConfiguration_update(t *testing.T) { }) } +func TestAccS3BucketObjectLockConfiguration_migrate_noChange(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_object_lock_configuration.test" + bucketResourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketObjectLockConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_ObjectLockEnabledWithDefaultRetention(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "object_lock_configuration.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "object_lock_configuration.0.object_lock_enabled", s3.ObjectLockEnabledEnabled), + resource.TestCheckResourceAttr(bucketResourceName, "object_lock_configuration.0.rule.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "object_lock_configuration.0.rule.0.default_retention.0.mode", s3.ObjectLockRetentionModeCompliance), + resource.TestCheckResourceAttr(bucketResourceName, "object_lock_configuration.0.rule.0.default_retention.0.days", "3"), + ), + }, + { + Config: testAccBucketObjectLockConfigurationBasicConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketObjectLockConfigurationExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "object_lock_enabled", s3.ObjectLockEnabledEnabled), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.default_retention.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.default_retention.0.days", "3"), + resource.TestCheckResourceAttr(resourceName, "rule.0.default_retention.0.mode", s3.ObjectLockRetentionModeCompliance), + ), + }, + }, + }) +} + +func TestAccS3BucketObjectLockConfiguration_migrate_withChange(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_object_lock_configuration.test" + bucketResourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketObjectLockConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_ObjectLockEnabledNoDefaultRetention(rName), + Check: 
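The one-line change in `bucket_object.go` above shows `DeleteAllObjectVersions` now returning an extra leading value that the delete path discards. Only the two-value return and the parameter list are confirmed by the call site; a sketch of the implied signature, with the count semantics assumed:

```go
// Implied by the call-site change only; the type and meaning of the first
// return value (a count of deleted object versions) are assumptions.
func DeleteAllObjectVersions(conn *s3.S3, bucket, key string, force, ignoreObjectErrors bool) (int64, error) {
	var deleted int64
	// Page through all object versions and delete markers for the key,
	// deleting each and incrementing `deleted`; when force is set, bypass
	// governance retention where the API allows it.
	return deleted, nil
}
```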
resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + resource.TestCheckResourceAttr(bucketResourceName, "object_lock_configuration.#", "1"), + resource.TestCheckResourceAttr(bucketResourceName, "object_lock_configuration.0.object_lock_enabled", s3.ObjectLockEnabledEnabled), + resource.TestCheckResourceAttr(bucketResourceName, "object_lock_configuration.0.rule.#", "0"), + ), + }, + { + Config: testAccBucketObjectLockConfigurationBasicConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketObjectLockConfigurationExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "object_lock_enabled", s3.ObjectLockEnabledEnabled), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.default_retention.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.default_retention.0.days", "3"), + resource.TestCheckResourceAttr(resourceName, "rule.0.default_retention.0.mode", s3.ObjectLockRetentionModeCompliance), + ), + }, + }, + }) +} + func testAccCheckBucketObjectLockConfigurationDestroy(s *terraform.State) error { conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn diff --git a/internal/service/s3/bucket_policy_test.go b/internal/service/s3/bucket_policy_test.go index f31c0f9e474f..60f05ace8d78 100644 --- a/internal/service/s3/bucket_policy_test.go +++ b/internal/service/s3/bucket_policy_test.go @@ -2,6 +2,7 @@ package s3_test import ( "fmt" + "strconv" "testing" "github.com/aws/aws-sdk-go/aws" @@ -249,6 +250,66 @@ func TestAccS3BucketPolicy_IAMRoleOrder_jsonEncode(t *testing.T) { }) } +func TestAccS3BucketPolicy_migrate_noChange(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_policy.test" + bucketResourceName := "aws_s3_bucket.test" + partition := acctest.Partition() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withPolicy(rName, partition), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + testAccCheckBucketPolicy(bucketResourceName, testAccBucketPolicy(rName, partition)), + ), + }, + { + Config: testAccBucketPolicy_Migrate_NoChangeConfig(rName, partition), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + testAccCheckBucketPolicy(resourceName, testAccBucketPolicy(rName, partition)), + ), + }, + }, + }) +} + +func TestAccS3BucketPolicy_migrate_withChange(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_policy.test" + bucketResourceName := "aws_s3_bucket.test" + partition := acctest.Partition() + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withPolicy(rName, partition), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(bucketResourceName), + testAccCheckBucketPolicy(bucketResourceName, testAccBucketPolicy(rName, partition)), + ), + }, + { + Config: testAccBucketPolicy_Migrate_WithChangeConfig(rName, 
partition), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(resourceName), + testAccCheckBucketPolicy(resourceName, testAccBucketPolicyUpdated(rName, partition)), + ), + }, + }, + }) +} + func testAccCheckBucketHasPolicy(n string, expectedPolicyText string) resource.TestCheckFunc { return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] @@ -640,3 +701,56 @@ resource "aws_s3_bucket_policy" "bucket" { } `) } + +func testAccBucketPolicy_Migrate_NoChangeConfig(bucketName, partition string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_acl" "test" { + bucket = aws_s3_bucket.test.id + acl = "private" +} + +resource "aws_s3_bucket_policy" "test" { + bucket = aws_s3_bucket.test.id + policy = %[2]s +} +`, bucketName, strconv.Quote(testAccBucketPolicy(bucketName, partition))) +} + +func testAccBucketPolicy_Migrate_WithChangeConfig(bucketName, partition string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} + +resource "aws_s3_bucket_acl" "test" { + bucket = aws_s3_bucket.test.id + acl = "private" +} + +resource "aws_s3_bucket_policy" "test" { + bucket = aws_s3_bucket.test.id + policy = %[2]s +} +`, bucketName, strconv.Quote(testAccBucketPolicyUpdated(bucketName, partition))) +} + +func testAccBucketPolicyUpdated(bucketName, partition string) string { + return fmt.Sprintf(`{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "", + "Effect": "Allow", + "Principal": { + "AWS": "*" + }, + "Action": "s3:PutObject", + "Resource": "arn:%[1]s:s3:::%[2]s/*" + } + ] +}`, partition, bucketName) +} diff --git a/internal/service/s3/bucket_public_access_block_test.go b/internal/service/s3/bucket_public_access_block_test.go index 96fbb533817d..f4ed91b150e2 100644 --- a/internal/service/s3/bucket_public_access_block_test.go +++ b/internal/service/s3/bucket_public_access_block_test.go @@ -87,7 +87,7 @@ func TestAccS3BucketPublicAccessBlock_Disappears_bucket(t *testing.T) { Config: testAccBucketPublicAccessBlockConfig(name, "false", "false", "false", "false"), Check: resource.ComposeTestCheckFunc( testAccCheckBucketPublicAccessBlockExists(resourceName, &config), - testAccCheckDestroyBucket(bucketResourceName), + acctest.CheckResourceDisappears(acctest.Provider, tfs3.ResourceBucket(), bucketResourceName), ), ExpectNonEmptyPlan: true, }, diff --git a/internal/service/s3/bucket_replication_configuration_test.go b/internal/service/s3/bucket_replication_configuration_test.go index 076487008647..6c45bb01cf53 100644 --- a/internal/service/s3/bucket_replication_configuration_test.go +++ b/internal/service/s3/bucket_replication_configuration_test.go @@ -1073,6 +1073,88 @@ func TestAccS3BucketReplicationConfiguration_withoutPrefix(t *testing.T) { }) } +func TestAccS3BucketReplicationConfiguration_migrate_noChange(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_replication_configuration.test" + bucketResourceName := "aws_s3_bucket.source" + region := acctest.Region() + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketReplicationConfigurationDestroy, 
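The `Migrate_*` policy configs above embed the JSON policy document with `strconv.Quote` (hence the new `strconv` import): quoting escapes the inner double quotes so the document becomes a single-line HCL string literal rather than a heredoc. A self-contained illustration:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	policy := `{"Version": "2012-10-17"}`
	// strconv.Quote adds surrounding double quotes and escapes the inner
	// ones, yielding a value that can be spliced directly into HCL.
	fmt.Printf("policy = %s\n", strconv.Quote(policy))
	// Output: policy = "{\"Version\": \"2012-10-17\"}"
}
```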
&providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplicationV2_PrefixAndTags(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExistsWithProvider(bucketResourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(bucketResourceName, "replication_configuration.0.rules.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(bucketResourceName, "replication_configuration.0.rules.*", map[string]string{ + "filter.#": "1", + "filter.0.prefix": "foo", + "filter.0.tags.%": "2", + }), + ), + }, + { + Config: testAccBucketReplicationConfiguration_Migrate_NoChangeConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketReplicationConfigurationExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.0.and.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.0.and.0.prefix", "foo"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.0.and.0.tags.%", "2"), + ), + }, + }, + }) +} + +func TestAccS3BucketReplicationConfiguration_migrate_withChange(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_replication_configuration.test" + bucketResourceName := "aws_s3_bucket.source" + region := acctest.Region() + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketReplicationConfigurationDestroy, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplicationV2_PrefixAndTags(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExistsWithProvider(bucketResourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(bucketResourceName, "replication_configuration.0.rules.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(bucketResourceName, "replication_configuration.0.rules.*", map[string]string{ + "filter.#": "1", + "filter.0.prefix": "foo", + "filter.0.tags.%": "2", + }), + ), + }, + { + Config: testAccBucketReplicationConfiguration_Migrate_WithChangeConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketReplicationConfigurationExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.#", "1"), + resource.TestCheckResourceAttr(resourceName, "rule.0.filter.0.prefix", "bar"), + ), + }, + }, + }) +} + func testAccCheckBucketReplicationConfigurationDestroy(s *terraform.State, provider *schema.Provider) error { conn := provider.Meta().(*conns.AWSClient).S3Conn @@ -2185,3 +2267,137 @@ resource "aws_s3_bucket_replication_configuration" "test" { } }`) } + +func testAccBucketReplicationConfigurationMigrationBase(rName string) string { + return fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_iam_role" "role" { + name = %[1]q + + assume_role_policy = < 0 && err == nil { - t.Fatalf("expected %q to trigger an error", tc.Region) - } - if output != tc.ExpectedOutput { - t.Fatalf("expected %q, 
received %q", tc.ExpectedOutput, output) - } - } + }) } -func TestWebsiteEndpoint(t *testing.T) { - // https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html - testCases := []struct { - TestingClient *conns.AWSClient - LocationConstraint string - Expected string - }{ - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.UsEast1RegionID, +func TestAccS3Bucket_Manage_objectLock(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_ObjectLockEnabledNoDefaultRetention(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "object_lock_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.0.object_lock_enabled", s3.ObjectLockEnabledEnabled), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.0.rule.#", "0"), + ), }, - LocationConstraint: "", - Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.UsEast1RegionID, acctest.PartitionDNSSuffix()), - }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.UsWest2RegionID, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, }, - LocationConstraint: endpoints.UsWest2RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.UsWest2RegionID, acctest.PartitionDNSSuffix()), - }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.UsWest1RegionID, + { + Config: testAccBucketConfig_ObjectLockEnabledWithDefaultRetention(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.0.object_lock_enabled", "Enabled"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.0.rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.0.rule.0.default_retention.0.mode", "COMPLIANCE"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.0.rule.0.default_retention.0.days", "3"), + ), }, - LocationConstraint: endpoints.UsWest1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.UsWest1RegionID, acctest.PartitionDNSSuffix()), }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.EuWest1RegionID, + }) +} + +func TestAccS3Bucket_Manage_objectLock_deprecatedEnabled(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_ObjectLockEnabledNoDefaultRetention_deprecatedEnabled(bucketName), + Check: 
resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "object_lock_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.0.object_lock_enabled", s3.ObjectLockEnabledEnabled), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.0.rule.#", "0"), + ), }, - LocationConstraint: endpoints.EuWest1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.EuWest1RegionID, acctest.PartitionDNSSuffix()), - }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.EuWest3RegionID, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, }, - LocationConstraint: endpoints.EuWest3RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website.%s.%s", endpoints.EuWest3RegionID, acctest.PartitionDNSSuffix()), }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.EuCentral1RegionID, + }) +} + +func TestAccS3Bucket_Manage_objectLock_migrate(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_ObjectLockEnabledNoDefaultRetention_deprecatedEnabled(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "object_lock_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.0.object_lock_enabled", s3.ObjectLockEnabledEnabled), + ), }, - LocationConstraint: endpoints.EuCentral1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website.%s.%s", endpoints.EuCentral1RegionID, acctest.PartitionDNSSuffix()), - }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.ApSouth1RegionID, + { + Config: testAccBucketConfig_ObjectLockEnabledNoDefaultRetention(bucketName), + PlanOnly: true, }, - LocationConstraint: endpoints.ApSouth1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website.%s.%s", endpoints.ApSouth1RegionID, acctest.PartitionDNSSuffix()), }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.ApSoutheast1RegionID, + }) +} + +func TestAccS3Bucket_Manage_objectLockWithVersioning(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_objectLockEnabledWithVersioning(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "object_lock_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, 
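`TestAccS3Bucket_Manage_objectLock_migrate` above leans on `PlanOnly`: after creating the bucket with the deprecated `object_lock_configuration.0.object_lock_enabled` syntax, the follow-up step only plans the top-level `object_lock_enabled` form and fails on any diff. A generic sketch of that equivalence idiom (this helper is not in the source):

```go
// Sketch: proving two configurations describe the same remote state. The
// second step runs plan only and fails the test if the plan is non-empty.
func testEquivalentConfigs(t *testing.T, oldConfig, newConfig string) {
	resource.ParallelTest(t, resource.TestCase{
		PreCheck:     func() { acctest.PreCheck(t) },
		ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
		Providers:    acctest.Providers,
		CheckDestroy: testAccCheckBucketDestroy,
		Steps: []resource.TestStep{
			{Config: oldConfig},                 // create with the deprecated syntax
			{Config: newConfig, PlanOnly: true}, // must produce an empty plan
		},
	})
}
```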
"object_lock_configuration.0.object_lock_enabled", s3.ObjectLockEnabledEnabled), + ), }, - LocationConstraint: endpoints.ApSoutheast1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.ApSoutheast1RegionID, acctest.PartitionDNSSuffix()), - }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.ApNortheast1RegionID, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, }, - LocationConstraint: endpoints.ApNortheast1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.ApNortheast1RegionID, acctest.PartitionDNSSuffix()), }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.ApSoutheast2RegionID, + }) +} + +func TestAccS3Bucket_Manage_objectLockWithVersioning_deprecatedEnabled(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_objectLockEnabledWithVersioning_deprecatedEnabled(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "object_lock_enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.0.object_lock_enabled", s3.ObjectLockEnabledEnabled), + ), }, - LocationConstraint: endpoints.ApSoutheast2RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.ApSoutheast2RegionID, acctest.PartitionDNSSuffix()), - }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.ApNortheast2RegionID, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, }, - LocationConstraint: endpoints.ApNortheast2RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website.%s.%s", endpoints.ApNortheast2RegionID, acctest.PartitionDNSSuffix()), }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.SaEast1RegionID, + }) +} + +func TestAccS3Bucket_Manage_versioning(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withVersioning(bucketName, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "versioning.#", "1"), + resource.TestCheckResourceAttr(resourceName, "versioning.0.enabled", "true"), + resource.TestCheckResourceAttr(resourceName, "versioning.0.mfa_delete", "false"), + ), }, - LocationConstraint: endpoints.SaEast1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.SaEast1RegionID, acctest.PartitionDNSSuffix()), - }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.UsGovEast1RegionID, + { + ResourceName: 
resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + { + Config: testAccBucketConfig_withVersioning(bucketName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "versioning.#", "1"), + resource.TestCheckResourceAttr(resourceName, "versioning.0.enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "versioning.0.mfa_delete", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, }, - LocationConstraint: endpoints.UsGovEast1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website.%s.%s", endpoints.UsGovEast1RegionID, acctest.PartitionDNSSuffix()), }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com", - Region: endpoints.UsGovWest1RegionID, + }) +} + +func TestAccS3Bucket_Manage_versioningDisabled(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withVersioning(bucketName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "versioning.#", "1"), + resource.TestCheckResourceAttr(resourceName, "versioning.0.enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "versioning.0.mfa_delete", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, }, - LocationConstraint: endpoints.UsGovWest1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.UsGovWest1RegionID, acctest.PartitionDNSSuffix()), }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "c2s.ic.gov", - Region: endpoints.UsIsoEast1RegionID, + }) +} + +func TestAccS3Bucket_Manage_MfaDeleteDisabled(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withVersioningMfaDelete(bucketName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "versioning.#", "1"), + resource.TestCheckResourceAttr(resourceName, "versioning.0.enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "versioning.0.mfa_delete", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, }, - LocationConstraint: endpoints.UsIsoEast1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website.%s.c2s.ic.gov", endpoints.UsIsoEast1RegionID), }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "sc2s.sgov.gov", - Region: endpoints.UsIsobEast1RegionID, + }) +} + +func TestAccS3Bucket_Manage_versioningAndMfaDeleteDisabled(t *testing.T) { + bucketName := 
sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withVersioningDisabledAndMfaDelete(bucketName, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "versioning.#", "1"), + resource.TestCheckResourceAttr(resourceName, "versioning.0.enabled", "false"), + resource.TestCheckResourceAttr(resourceName, "versioning.0.mfa_delete", "false"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, }, - LocationConstraint: endpoints.UsIsobEast1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website.%s.sc2s.sgov.gov", endpoints.UsIsobEast1RegionID), }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com.cn", - Region: endpoints.CnNorthwest1RegionID, + }) +} + +func TestAccS3Bucket_Replication_basic(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + alternateRegion := acctest.AlternateRegion() + region := acctest.Region() + iamRoleResourceName := "aws_iam_role.role" + resourceName := "aws_s3_bucket.source" + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplication(bucketName, s3.StorageClassStandard), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplication(bucketName, s3.StorageClassGlacier), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplication_SseKMSEncryptedObjects(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + 
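The replication tests use `acctest.FactoriesAlternate(&providers)` so that later checks can resolve the provider instance configured for a particular region via `acctest.RegionProviderFunc`. `testAccCheckBucketExistsWithProvider` is defined outside this hunk; assuming it simply issues `HeadBucket` through the region-bound connection, it would look roughly like:

```go
// Sketch under the assumption that the helper resolves the provider for
// the target region and issues HeadBucket against it.
func checkBucketExistsWithProvider(n string, providerF func() *schema.Provider) resource.TestCheckFunc {
	return func(s *terraform.State) error {
		rs, ok := s.RootModule().Resources[n]
		if !ok {
			return fmt.Errorf("not found: %s", n)
		}

		conn := providerF().Meta().(*conns.AWSClient).S3Conn

		_, err := conn.HeadBucket(&s3.HeadBucketInput{
			Bucket: aws.String(rs.Primary.ID),
		})
		return err
	}
}
```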
resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + ), }, - LocationConstraint: endpoints.CnNorthwest1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website.%s.amazonaws.com.cn", endpoints.CnNorthwest1RegionID), }, - { - TestingClient: &conns.AWSClient{ - DNSSuffix: "amazonaws.com.cn", - Region: endpoints.CnNorth1RegionID, + }) +} + +func TestAccS3Bucket_Replication_multipleDestinationsEmptyFilter(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + alternateRegion := acctest.AlternateRegion() + region := acctest.Region() + resourceName := "aws_s3_bucket.source" + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplication_MultipleDestinations_EmptyFilter(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination2", acctest.RegionProviderFunc(alternateRegion, &providers)), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination3", acctest.RegionProviderFunc(alternateRegion, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "3"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "replication_configuration.0.rules.*", map[string]string{ + "id": "rule1", + "priority": "1", + "status": "Enabled", + "filter.#": "1", + "filter.0.prefix": "", + "destination.#": "1", + "destination.0.storage_class": "STANDARD", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "replication_configuration.0.rules.*", map[string]string{ + "id": "rule2", + "priority": "2", + "status": "Enabled", + "filter.#": "1", + "filter.0.prefix": "", + "destination.#": "1", + "destination.0.storage_class": "STANDARD_IA", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "replication_configuration.0.rules.*", map[string]string{ + "id": "rule3", + "priority": "3", + "status": "Disabled", + "filter.#": "1", + "filter.0.prefix": "", + "destination.#": "1", + "destination.0.storage_class": "ONEZONE_IA", + }), + ), + }, + { + Config: testAccBucketConfig_withReplication_MultipleDestinations_EmptyFilter(bucketName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, }, - LocationConstraint: endpoints.CnNorth1RegionID, - Expected: fmt.Sprintf("bucket-name.s3-website.%s.amazonaws.com.cn", endpoints.CnNorth1RegionID), }, - } + }) +} + +func TestAccS3Bucket_Replication_multipleDestinationsNonEmptyFilter(t *testing.T) { + bucketName := 
sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + alternateRegion := acctest.AlternateRegion() + region := acctest.Region() + resourceName := "aws_s3_bucket.source" + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplication_MultipleDestinations_NonEmptyFilter(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination2", acctest.RegionProviderFunc(alternateRegion, &providers)), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination3", acctest.RegionProviderFunc(alternateRegion, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "3"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "replication_configuration.0.rules.*", map[string]string{ + "id": "rule1", + "priority": "1", + "status": "Enabled", + "filter.#": "1", + "filter.0.prefix": "prefix1", + "destination.#": "1", + "destination.0.storage_class": "STANDARD", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "replication_configuration.0.rules.*", map[string]string{ + "id": "rule2", + "priority": "2", + "status": "Enabled", + "filter.#": "1", + "filter.0.tags.%": "1", + "filter.0.tags.Key2": "Value2", + "destination.#": "1", + "destination.0.storage_class": "STANDARD_IA", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "replication_configuration.0.rules.*", map[string]string{ + "id": "rule3", + "priority": "3", + "status": "Disabled", + "filter.#": "1", + "filter.0.prefix": "prefix3", + "filter.0.tags.%": "1", + "filter.0.tags.Key3": "Value3", + "destination.#": "1", + "destination.0.storage_class": "ONEZONE_IA", + }), + ), + }, + { + Config: testAccBucketConfig_withReplication_MultipleDestinations_NonEmptyFilter(bucketName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + }, + }) +} + +func TestAccS3Bucket_Replication_twoDestination(t *testing.T) { + // This tests 2 destinations since GovCloud and possibly other non-standard partitions allow a max of 2 + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + alternateRegion := acctest.AlternateRegion() + region := acctest.Region() + resourceName := "aws_s3_bucket.source" + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: 
acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplication_MultipleDestinations_TwoDestination(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination2", acctest.RegionProviderFunc(alternateRegion, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "2"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "replication_configuration.0.rules.*", map[string]string{ + "id": "rule1", + "priority": "1", + "status": "Enabled", + "filter.#": "1", + "filter.0.prefix": "prefix1", + "destination.#": "1", + "destination.0.storage_class": "STANDARD", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "replication_configuration.0.rules.*", map[string]string{ + "id": "rule2", + "priority": "2", + "status": "Enabled", + "filter.#": "1", + "filter.0.tags.%": "1", + "filter.0.tags.Key2": "Value2", + "destination.#": "1", + "destination.0.storage_class": "STANDARD_IA", + }), + ), + }, + { + Config: testAccBucketConfig_withReplication_MultipleDestinations_TwoDestination(bucketName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + }, + }) +} + +func TestAccS3Bucket_Replication_ruleDestinationAccessControlTranslation(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + region := acctest.Region() + iamRoleResourceName := "aws_iam_role.role" + resourceName := "aws_s3_bucket.source" + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplication_AccessControlTranslation(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + ), + }, + { + Config: testAccBucketConfig_withReplication_AccessControlTranslation(bucketName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl", "versioning"}, + }, + { + Config: testAccBucketConfig_withReplication_SseKMSEncryptedObjectsAndAccessControlTranslation(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, 
"replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + ), + }, + }, + }) +} + +// Reference: https://github.com/hashicorp/terraform-provider-aws/issues/12480 +func TestAccS3Bucket_Replication_ruleDestinationAddAccessControlTranslation(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + region := acctest.Region() + iamRoleResourceName := "aws_iam_role.role" + resourceName := "aws_s3_bucket.source" + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplication_RulesDestination(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + ), + }, + { + Config: testAccBucketConfig_withReplication_AccessControlTranslation(bucketName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl", "versioning"}, + }, + { + Config: testAccBucketConfig_withReplication_AccessControlTranslation(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + ), + }, + }, + }) +} + +// StorageClass issue: https://github.com/hashicorp/terraform/issues/10909 +func TestAccS3Bucket_Replication_withoutStorageClass(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + alternateRegion := acctest.AlternateRegion() + region := acctest.Region() + resourceName := "aws_s3_bucket.source" + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplication_WithoutStorageClass(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + 
testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplication_WithoutStorageClass(bucketName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + }, + }) +} + +func TestAccS3Bucket_Replication_expectVersioningValidationError(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplication_NoVersioning(bucketName), + ExpectError: regexp.MustCompile(`versioning must be enabled to allow S3 bucket replication`), + }, + }, + }) +} + +// Prefix issue: https://github.com/hashicorp/terraform-provider-aws/issues/6340 +func TestAccS3Bucket_Replication_withoutPrefix(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + alternateRegion := acctest.AlternateRegion() + region := acctest.Region() + resourceName := "aws_s3_bucket.source" + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplication_WithoutPrefix(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplication_WithoutPrefix(bucketName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + }, + }) +} + +func TestAccS3Bucket_Replication_schemaV2(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + alternateRegion := acctest.AlternateRegion() + region := acctest.Region() + iamRoleResourceName := "aws_iam_role.role" + resourceName := "aws_s3_bucket.source" + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: 
testAccBucketConfig_withReplicationV2_DeleteMarkerReplicationDisabled(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplicationV2_NoTags(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplicationV2_NoTags(bucketName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + { + Config: testAccBucketConfig_withReplicationV2_OnlyOneTag(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplicationV2_PrefixAndTags(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplicationV2_MultipleTags(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Replication_schemaV2SameRegion(t *testing.T) { + resourceName 
:= "aws_s3_bucket.source" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + destinationResourceName := "aws_s3_bucket.destination" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplicationV2_SameRegionNoTags(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + acctest.CheckResourceAttrGlobalARN(resourceName, "replication_configuration.0.role", "iam", fmt.Sprintf("role/%s", rName)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExists(destinationResourceName), + ), + }, + { + Config: testAccBucketConfig_withReplicationV2_SameRegionNoTags(rName), + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "force_destroy", + "acl", + }, + }, + }, + }) +} + +func TestAccS3Bucket_Replication_RTC_valid(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + alternateRegion := acctest.AlternateRegion() + region := acctest.Region() + iamRoleResourceName := "aws_iam_role.role" + resourceName := "aws_s3_bucket.source" + + // record the initialized providers so that we can use them to check for the instances in each region + var providers []*schema.Provider + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { + acctest.PreCheck(t) + acctest.PreCheckMultipleRegion(t, 2) + }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.FactoriesAlternate(&providers), + CheckDestroy: acctest.CheckWithProviders(testAccCheckBucketDestroyWithProvider, &providers), + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withReplicationV2_RTC(bucketName, 15), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplicationV2_RTCNoMinutes(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplicationV2_RTCNoStatus(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, 
"replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + { + Config: testAccBucketConfig_withReplicationV2_RTCNoConfig(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExistsWithProvider(resourceName, acctest.RegionProviderFunc(region, &providers)), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "replication_configuration.0.role", iamRoleResourceName, "arn"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.#", "1"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.0.destination.#", "1"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.0.destination.0.replication_time.#", "1"), + resource.TestCheckResourceAttr(resourceName, "replication_configuration.0.rules.0.destination.0.metrics.#", "1"), + testAccCheckBucketExistsWithProvider("aws_s3_bucket.destination", acctest.RegionProviderFunc(alternateRegion, &providers)), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Security_updateACL(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withACL(bucketName, s3.BucketCannedACLPublicRead), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "acl", s3.BucketCannedACLPublicRead), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + { + Config: testAccBucketConfig_withACL(bucketName, s3.BucketCannedACLPrivate), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "acl", s3.BucketCannedACLPrivate), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Security_updateGrant(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withGrants(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "grant.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "grant.*", map[string]string{ + "permissions.#": "2", + "type": "CanonicalUser", + }), + resource.TestCheckTypeSetElemAttr(resourceName, "grant.*.permissions.*", "FULL_CONTROL"), + resource.TestCheckTypeSetElemAttr(resourceName, "grant.*.permissions.*", "WRITE"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: 
[]string{"force_destroy"}, + }, + { + Config: testAccBucketConfig_withUpdatedGrants(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "grant.#", "2"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "grant.*", map[string]string{ + "permissions.#": "1", + "type": "CanonicalUser", + }), + resource.TestCheckTypeSetElemAttr(resourceName, "grant.*.permissions.*", "READ"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "grant.*", map[string]string{ + "permissions.#": "1", + "type": "Group", + "uri": "http://acs.amazonaws.com/groups/s3/LogDelivery", + }), + resource.TestCheckTypeSetElemAttr(resourceName, "grant.*.permissions.*", "READ_ACP"), + ), + }, + { + // As Grant is a Computed field, removing them from terraform will not + // trigger an update to remove them from the S3 bucket. + Config: testAccBucketConfig_Basic(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "grant.#", "2"), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Security_aclToGrant(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withACL(bucketName, s3.BucketCannedACLPublicRead), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "acl", s3.BucketCannedACLPublicRead), + // By default, the S3 Bucket will have 2 grants configured + resource.TestCheckResourceAttr(resourceName, "grant.#", "2"), + ), + }, + { + Config: testAccBucketConfig_withGrants(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "grant.#", "1"), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Security_grantToACL(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withGrants(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "grant.#", "1"), + ), + }, + { + Config: testAccBucketConfig_withACL(bucketName, s3.BucketCannedACLPublicRead), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "acl", s3.BucketCannedACLPublicRead), + resource.TestCheckResourceAttr(resourceName, "grant.#", "1"), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Security_corsUpdate(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + updateBucketCors := func(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + _, err := 
conn.PutBucketCors(&s3.PutBucketCorsInput{ + Bucket: aws.String(rs.Primary.ID), + CORSConfiguration: &s3.CORSConfiguration{ + CORSRules: []*s3.CORSRule{ + { + AllowedHeaders: []*string{aws.String("*")}, + AllowedMethods: []*string{aws.String("GET")}, + AllowedOrigins: []*string{aws.String("https://www.example.com")}, + }, + }, + }, + }) + if err != nil && !tfawserr.ErrCodeEquals(err, tfs3.ErrCodeNoSuchCORSConfiguration) { + return err + } + return nil + } + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withCORS(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_headers.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_headers.0", "*"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_methods.#", "2"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_methods.0", "PUT"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_methods.1", "POST"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_origins.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_origins.0", "https://www.example.com"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.expose_headers.#", "2"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.expose_headers.0", "x-amz-server-side-encryption"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.expose_headers.1", "ETag"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.max_age_seconds", "3000"), + updateBucketCors(resourceName), + ), + ExpectNonEmptyPlan: true, + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + { + Config: testAccBucketConfig_withCORS(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_headers.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_headers.0", "*"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_methods.#", "2"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_methods.0", "PUT"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_methods.1", "POST"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_origins.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_origins.0", "https://www.example.com"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.expose_headers.#", "2"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.expose_headers.0", "x-amz-server-side-encryption"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.expose_headers.1", "ETag"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.max_age_seconds", "3000"), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Security_corsDelete(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + deleteBucketCors 
:= func(n string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + _, err := conn.DeleteBucketCors(&s3.DeleteBucketCorsInput{ + Bucket: aws.String(rs.Primary.ID), + }) + if err != nil && !tfawserr.ErrCodeEquals(err, tfs3.ErrCodeNoSuchCORSConfiguration) { + return err + } + return nil + } + } + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withCORS(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + deleteBucketCors(resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccS3Bucket_Security_corsEmptyOrigin(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withCORSEmptyOrigin(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_headers.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_headers.0", "*"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_methods.#", "2"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_methods.0", "PUT"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_methods.1", "POST"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_origins.#", "1"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.allowed_origins.0", ""), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.expose_headers.#", "2"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.expose_headers.0", "x-amz-server-side-encryption"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.expose_headers.1", "ETag"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.0.max_age_seconds", "3000"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, + }, + }) +} + +func TestAccS3Bucket_Security_corsSingleMethodAndEmptyOrigin(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withCORSSingleMethodAndEmptyOrigin(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy"}, + }, + }, + }) +} + +func TestAccS3Bucket_Security_logging(t *testing.T) { + bucketName 
:= sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withLogging(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "logging.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "logging.0.target_bucket", "aws_s3_bucket.log_bucket", "id"), + resource.TestCheckResourceAttr(resourceName, "logging.0.target_prefix", "log/"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + }, + }) +} + +func TestAccS3Bucket_Security_enableDefaultEncryptionWhenTypical(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withDefaultEncryption_KmsMasterKey(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "server_side_encryption_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "server_side_encryption_configuration.0.rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.#", "1"), + resource.TestCheckResourceAttr(resourceName, "server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.0.sse_algorithm", s3.ServerSideEncryptionAwsKms), + resource.TestMatchResourceAttr(resourceName, "server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.0.kms_master_key_id", regexp.MustCompile("^arn")), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + }, + }) +} + +func TestAccS3Bucket_Security_enableDefaultEncryptionWhenAES256IsUsed(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withDefaultEncryption_defaultKey(bucketName, s3.ServerSideEncryptionAes256), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "server_side_encryption_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "server_side_encryption_configuration.0.rule.#", "1"), + resource.TestCheckResourceAttr(resourceName, "server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.#", "1"), + resource.TestCheckResourceAttr(resourceName, "server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.0.sse_algorithm", s3.ServerSideEncryptionAes256),
+ resource.TestCheckResourceAttr(resourceName, "server_side_encryption_configuration.0.rule.0.apply_server_side_encryption_by_default.0.kms_master_key_id", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + }, + }) +} + +func TestAccS3Bucket_Security_disableDefaultEncryptionWhenDefaultEncryptionIsEnabled(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix("tf-test-bucket") + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withDefaultEncryption_defaultKey(bucketName, s3.ServerSideEncryptionAwsKms), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(resourceName), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, + }, + { + // As ServerSide Encryption Configuration is a Computed field, removing them from terraform will not + // trigger an update to remove it from the S3 bucket. + Config: testAccBucketConfig_Basic(bucketName), + Check: resource.ComposeAggregateTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "server_side_encryption_configuration.#", "1"), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Security_policy(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + partition := acctest.Partition() + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withPolicy(bucketName, partition), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + testAccCheckBucketPolicy(resourceName, testAccBucketPolicy(bucketName, partition)), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{ + "acl", + "force_destroy", + "grant", + // NOTE: Prior to Terraform AWS Provider 3.0, this attribute did not import correctly either. + // The Read function does not require GetBucketPolicy, if the argument is not configured. + // Rather than introduce that breaking change as well with 3.0, instead we leave the + // current Read behavior and note this will be deprecated in a later 3.x release along + // with other inline policy attributes across the provider. + "policy", + }, + }, + { + // As Policy is a Computed field, removing it from terraform will not + // trigger an update to remove it from the S3 bucket. + Config: testAccBucketConfig_Basic(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + testAccCheckBucketPolicy(resourceName, testAccBucketPolicy(bucketName, partition)), + ), + }, + { + // As Policy is a Computed field, setting it to the empty String will not + // trigger an update to remove it from the S3 bucket. 
+ Config: testAccBucketConfig_withEmptyPolicy(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + testAccCheckBucketPolicy(resourceName, testAccBucketPolicy(bucketName, partition)), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Web_simple(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + region := acctest.Region() + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withWebsite(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "website.#", "1"), + resource.TestCheckResourceAttr(resourceName, "website.0.index_document", "index.html"), + testAccCheckS3BucketWebsiteEndpoint(resourceName, "website_endpoint", bucketName, region), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl", "grant"}, + }, + { + Config: testAccBucketConfig_withWebsiteAndError(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "website.#", "1"), + resource.TestCheckResourceAttr(resourceName, "website.0.index_document", "index.html"), + resource.TestCheckResourceAttr(resourceName, "website.0.error_document", "error.html"), + testAccCheckS3BucketWebsiteEndpoint(resourceName, "website_endpoint", bucketName, region), + ), + }, + { + // As the website block is Computed, removing it from Terraform will not + // trigger an update to remove it from the S3 bucket.
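+ // The website attributes checked below are therefore expected to remain in state.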
+ Config: testAccBucketConfig_Basic(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "website.#", "1"), + resource.TestCheckResourceAttr(resourceName, "website.0.index_document", "index.html"), + resource.TestCheckResourceAttr(resourceName, "website.0.error_document", "error.html"), + testAccCheckS3BucketWebsiteEndpoint(resourceName, "website_endpoint", bucketName, region), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Web_redirect(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + region := acctest.Region() + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withWebsiteAndRedirect(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "website.#", "1"), + resource.TestCheckResourceAttr(resourceName, "website.0.redirect_all_requests_to", "hashicorp.com?my=query"), + testAccCheckS3BucketWebsiteEndpoint(resourceName, "website_endpoint", bucketName, region), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl", "grant"}, + }, + { + Config: testAccBucketConfig_withWebsiteAndHTTPSRedirect(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "website.#", "1"), + resource.TestCheckResourceAttr(resourceName, "website.0.redirect_all_requests_to", "https://hashicorp.com?my=query"), + testAccCheckS3BucketWebsiteEndpoint(resourceName, "website_endpoint", bucketName, region), + ), + }, + { + // As the website block is Computed, removing it from Terraform will not + // trigger an update to remove it from the S3 bucket.
+ Config: testAccBucketConfig_Basic(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "website.#", "1"), + resource.TestCheckResourceAttr(resourceName, "website.0.redirect_all_requests_to", "https://hashicorp.com?my=query"), + testAccCheckS3BucketWebsiteEndpoint(resourceName, "website_endpoint", bucketName, region), + ), + }, + }, + }) +} + +func TestAccS3Bucket_Web_routingRules(t *testing.T) { + bucketName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + region := acctest.Region() + resourceName := "aws_s3_bucket.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketConfig_withWebsiteAndRoutingRules(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "website.#", "1"), + resource.TestCheckResourceAttr(resourceName, "website.0.error_document", "error.html"), + resource.TestCheckResourceAttr(resourceName, "website.0.index_document", "index.html"), + resource.TestCheckResourceAttrSet(resourceName, "website.0.routing_rules"), + testAccCheckS3BucketWebsiteEndpoint(resourceName, "website_endpoint", bucketName, region), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"force_destroy", "acl", "grant"}, + }, + { + // As the website block is Computed, removing it from Terraform will not + // trigger an update to remove it from the S3 bucket. + Config: testAccBucketConfig_Basic(bucketName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "website.#", "1"), + resource.TestCheckResourceAttr(resourceName, "website.0.error_document", "error.html"), + resource.TestCheckResourceAttr(resourceName, "website.0.index_document", "index.html"), + resource.TestCheckResourceAttrSet(resourceName, "website.0.routing_rules"), + testAccCheckS3BucketWebsiteEndpoint(resourceName, "website_endpoint", bucketName, region), + ), + }, + }, + }) +} + +func TestBucketName(t *testing.T) { + validDnsNames := []string{ + "foobar", + "foo.bar", + "foo.bar.baz", + "1234", + "foo-bar", + strings.Repeat("x", 63), + } + + for _, v := range validDnsNames { + if err := tfs3.ValidBucketName(v, endpoints.UsWest2RegionID); err != nil { + t.Fatalf("%q should be a valid S3 bucket name", v) + } + } + + invalidDnsNames := []string{ + "foo..bar", + "Foo.Bar", + "192.168.0.1", + "127.0.0.1", + ".foo", + "bar.", + "foo_bar", + strings.Repeat("x", 64), + } + + for _, v := range invalidDnsNames { + if err := tfs3.ValidBucketName(v, endpoints.UsWest2RegionID); err == nil { + t.Fatalf("%q should not be a valid S3 bucket name", v) + } + } + + validEastNames := []string{ + "foobar", + "foo_bar", + "127.0.0.1", + "foo..bar", + "foo_bar_baz", + "foo.bar.baz", + "Foo.Bar", + strings.Repeat("x", 255), + } + + for _, v := range validEastNames { + if err := tfs3.ValidBucketName(v, endpoints.UsEast1RegionID); err != nil { + t.Fatalf("%q should be a valid S3 bucket name", v) + } + } + + invalidEastNames := []string{ + "foo;bar", + strings.Repeat("x", 256), + } + + for _, v := range invalidEastNames { + if err := tfs3.ValidBucketName(v, endpoints.UsEast1RegionID); err == nil { +
t.Fatalf("%q should not be a valid S3 bucket name", v) + } + } +} + +func TestBucketRegionalDomainName(t *testing.T) { + const bucket = "bucket-name" + + var testCases = []struct { + ExpectedErrCount int + ExpectedOutput string + Region string + }{ + { + Region: "", + ExpectedErrCount: 0, + ExpectedOutput: bucket + ".s3.amazonaws.com", + }, + { + Region: "custom", + ExpectedErrCount: 0, + ExpectedOutput: bucket + ".s3.custom.amazonaws.com", + }, + { + Region: endpoints.UsEast1RegionID, + ExpectedErrCount: 0, + ExpectedOutput: bucket + ".s3.amazonaws.com", + }, + { + Region: endpoints.UsWest2RegionID, + ExpectedErrCount: 0, + ExpectedOutput: bucket + fmt.Sprintf(".s3.%s.%s", endpoints.UsWest2RegionID, acctest.PartitionDNSSuffix()), + }, + { + Region: endpoints.UsGovWest1RegionID, + ExpectedErrCount: 0, + ExpectedOutput: bucket + fmt.Sprintf(".s3.%s.%s", endpoints.UsGovWest1RegionID, acctest.PartitionDNSSuffix()), + }, + { + Region: endpoints.CnNorth1RegionID, + ExpectedErrCount: 0, + ExpectedOutput: bucket + fmt.Sprintf(".s3.%s.amazonaws.com.cn", endpoints.CnNorth1RegionID), + }, + } + + for _, tc := range testCases { + output, err := tfs3.BucketRegionalDomainName(bucket, tc.Region) + if tc.ExpectedErrCount == 0 && err != nil { + t.Fatalf("expected %q not to trigger an error, received: %s", tc.Region, err) + } + if tc.ExpectedErrCount > 0 && err == nil { + t.Fatalf("expected %q to trigger an error", tc.Region) + } + if output != tc.ExpectedOutput { + t.Fatalf("expected %q, received %q", tc.ExpectedOutput, output) + } + } +} + +func TestWebsiteEndpoint(t *testing.T) { + // https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html + testCases := []struct { + TestingClient *conns.AWSClient + LocationConstraint string + Expected string + }{ + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.UsEast1RegionID, + }, + LocationConstraint: "", + Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.UsEast1RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.UsWest2RegionID, + }, + LocationConstraint: endpoints.UsWest2RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.UsWest2RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.UsWest1RegionID, + }, + LocationConstraint: endpoints.UsWest1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.UsWest1RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.EuWest1RegionID, + }, + LocationConstraint: endpoints.EuWest1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.EuWest1RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.EuWest3RegionID, + }, + LocationConstraint: endpoints.EuWest3RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website.%s.%s", endpoints.EuWest3RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.EuCentral1RegionID, + }, + LocationConstraint: endpoints.EuCentral1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website.%s.%s", endpoints.EuCentral1RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.ApSouth1RegionID, + }, + 
LocationConstraint: endpoints.ApSouth1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website.%s.%s", endpoints.ApSouth1RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.ApSoutheast1RegionID, + }, + LocationConstraint: endpoints.ApSoutheast1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.ApSoutheast1RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.ApNortheast1RegionID, + }, + LocationConstraint: endpoints.ApNortheast1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.ApNortheast1RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.ApSoutheast2RegionID, + }, + LocationConstraint: endpoints.ApSoutheast2RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.ApSoutheast2RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.ApNortheast2RegionID, + }, + LocationConstraint: endpoints.ApNortheast2RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website.%s.%s", endpoints.ApNortheast2RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.SaEast1RegionID, + }, + LocationConstraint: endpoints.SaEast1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.SaEast1RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.UsGovEast1RegionID, + }, + LocationConstraint: endpoints.UsGovEast1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website.%s.%s", endpoints.UsGovEast1RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com", + Region: endpoints.UsGovWest1RegionID, + }, + LocationConstraint: endpoints.UsGovWest1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website-%s.%s", endpoints.UsGovWest1RegionID, acctest.PartitionDNSSuffix()), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "c2s.ic.gov", + Region: endpoints.UsIsoEast1RegionID, + }, + LocationConstraint: endpoints.UsIsoEast1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website.%s.c2s.ic.gov", endpoints.UsIsoEast1RegionID), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "sc2s.sgov.gov", + Region: endpoints.UsIsobEast1RegionID, + }, + LocationConstraint: endpoints.UsIsobEast1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website.%s.sc2s.sgov.gov", endpoints.UsIsobEast1RegionID), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com.cn", + Region: endpoints.CnNorthwest1RegionID, + }, + LocationConstraint: endpoints.CnNorthwest1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website.%s.amazonaws.com.cn", endpoints.CnNorthwest1RegionID), + }, + { + TestingClient: &conns.AWSClient{ + DNSSuffix: "amazonaws.com.cn", + Region: endpoints.CnNorth1RegionID, + }, + LocationConstraint: endpoints.CnNorth1RegionID, + Expected: fmt.Sprintf("bucket-name.s3-website.%s.amazonaws.com.cn", endpoints.CnNorth1RegionID), + }, + } + + for _, testCase := range testCases { + got := tfs3.WebsiteEndpoint(testCase.TestingClient, "bucket-name", testCase.LocationConstraint) + if got.Endpoint != testCase.Expected { + t.Errorf("WebsiteEndpointUrl(\"bucket-name\", %q) => %q, want %q", 
testCase.LocationConstraint, got.Endpoint, testCase.Expected) + } + } +} + +func testAccCheckBucketDestroy(s *terraform.State) error { + return testAccCheckBucketDestroyWithProvider(s, acctest.Provider) +} + +func testAccCheckBucketDestroyWithProvider(s *terraform.State, provider *schema.Provider) error { + conn := provider.Meta().(*conns.AWSClient).S3Conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_s3_bucket" { + continue + } + + input := &s3.HeadBucketInput{ + Bucket: aws.String(rs.Primary.ID), + } + + // Retry for S3 eventual consistency + err := resource.Retry(1*time.Minute, func() *resource.RetryError { + _, err := conn.HeadBucket(input) + + if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) || tfawserr.ErrCodeEquals(err, "NotFound") { + return nil + } + + if err != nil { + return resource.NonRetryableError(err) + } + + return resource.RetryableError(fmt.Errorf("AWS S3 Bucket still exists: %s", rs.Primary.ID)) + }) + + if tfresource.TimedOut(err) { + _, err = conn.HeadBucket(input) + } + + if err != nil { + return err + } + } + return nil +} + +func testAccCheckBucketExists(n string) resource.TestCheckFunc { + return testAccCheckBucketExistsWithProvider(n, func() *schema.Provider { return acctest.Provider }) +} + +func testAccCheckBucketExistsWithProvider(n string, providerF func() *schema.Provider) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("No ID is set") + } + + provider := providerF() + + conn := provider.Meta().(*conns.AWSClient).S3Conn + _, err := conn.HeadBucket(&s3.HeadBucketInput{ + Bucket: aws.String(rs.Primary.ID), + }) + + if err != nil { + if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + return fmt.Errorf("S3 bucket not found") + } + return err + } + return nil + + } +} + +func testAccCheckBucketAddObjects(n string, keys ...string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs := s.RootModule().Resources[n] + conn := acctest.Provider.Meta().(*conns.AWSClient).S3ConnURICleaningDisabled + + for _, key := range keys { + _, err := conn.PutObject(&s3.PutObjectInput{ + Bucket: aws.String(rs.Primary.ID), + Key: aws.String(key), + }) + + if err != nil { + return fmt.Errorf("PutObject error: %s", err) + } + } + + return nil + } +} + +func testAccCheckBucketAddObjectsWithLegalHold(n string, keys ...string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs := s.RootModule().Resources[n] + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + + for _, key := range keys { + _, err := conn.PutObject(&s3.PutObjectInput{ + Bucket: aws.String(rs.Primary.ID), + Key: aws.String(key), + ObjectLockLegalHoldStatus: aws.String(s3.ObjectLockLegalHoldStatusOn), + }) + + if err != nil { + return fmt.Errorf("PutObject error: %s", err) + } + } + + return nil + } +} + +// Create an S3 bucket via a CF stack so that it has system tags. 
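+// CloudFormation automatically applies system tags (for example "aws:cloudformation:stack-name" and "aws:cloudformation:stack-id") to the resources it creates.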
+func testAccCheckBucketCreateViaCloudFormation(n string, stackID *string) resource.TestCheckFunc { + return func(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).CloudFormationConn + stackName := sdkacctest.RandomWithPrefix("tf-acc-test-s3tags") + templateBody := fmt.Sprintf(`{ + "Resources": { + "TfTestBucket": { + "Type": "AWS::S3::Bucket", + "Properties": { + "BucketName": "%s" + } + } + } +}`, n) + + requestToken := resource.UniqueId() + req := &cloudformation.CreateStackInput{ + StackName: aws.String(stackName), + TemplateBody: aws.String(templateBody), + ClientRequestToken: aws.String(requestToken), + } + + log.Printf("[DEBUG] Creating CloudFormation stack: %s", req) + resp, err := conn.CreateStack(req) + if err != nil { + return fmt.Errorf("error creating CloudFormation stack: %w", err) + } + + stack, err := tfcloudformation.WaitStackCreated(conn, aws.StringValue(resp.StackId), requestToken, 10*time.Minute) + if err != nil { + return fmt.Errorf("error waiting for CloudFormation stack creation: %w", err) + } + status := aws.StringValue(stack.StackStatus) + if status != cloudformation.StackStatusCreateComplete { + return fmt.Errorf("invalid CloudFormation stack creation status: %s", status) + } + + *stackID = aws.StringValue(resp.StackId) + return nil + } +} + +func testAccCheckBucketTagKeys(n string, keys ...string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs := s.RootModule().Resources[n] + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + + got, err := tfs3.BucketListTags(conn, rs.Primary.Attributes["bucket"]) + if err != nil { + return err + } + + for _, want := range keys { + ok := false + for _, key := range got.Keys() { + if want == key { + ok = true + break + } + } + if !ok { + return fmt.Errorf("key %s not found in bucket's tag set", want) + } + } + + return nil + } +} + +func testAccCheckS3BucketDomainName(resourceName string, attributeName string, bucketName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + expectedValue := acctest.Provider.Meta().(*conns.AWSClient).PartitionHostname(fmt.Sprintf("%s.s3", bucketName)) + + return resource.TestCheckResourceAttr(resourceName, attributeName, expectedValue)(s) + } +} + +func testAccCheckBucketPolicy(n string, policy string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs := s.RootModule().Resources[n] + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + + out, err := conn.GetBucketPolicy(&s3.GetBucketPolicyInput{ + Bucket: aws.String(rs.Primary.ID), + }) + + if policy == "" { + if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NoSuchBucketPolicy" { + // expected + return nil + } + if err == nil { + return fmt.Errorf("expected no policy, got: %#v", *out.Policy) + } else { + return fmt.Errorf("GetBucketPolicy error: %v, expected %s", err, policy) + } + } + if err != nil { + return fmt.Errorf("GetBucketPolicy error: %v, expected %s", err, policy) + } + + if v := out.Policy; v == nil { + if policy != "" { + return fmt.Errorf("bad policy, found nil, expected: %s", policy) + } + } else { + expected := make(map[string]interface{}) + if err := json.Unmarshal([]byte(policy), &expected); err != nil { + return err + } + actual := make(map[string]interface{}) + if err := json.Unmarshal([]byte(*v), &actual); err != nil { + return err + } + + if !reflect.DeepEqual(expected, actual) { + return fmt.Errorf("bad policy, expected: %#v, got %#v", expected, actual) + } + } + + return nil + } +} + +func
testAccBucketRegionalDomainName(bucket, region string) string { + regionalEndpoint, err := tfs3.BucketRegionalDomainName(bucket, region) + if err != nil { + return fmt.Sprintf("Regional endpoint not found for bucket %s", bucket) + } + return regionalEndpoint +} + +func testAccCheckS3BucketWebsiteEndpoint(resourceName string, attributeName string, bucketName string, region string) resource.TestCheckFunc { + return func(s *terraform.State) error { + website := tfs3.WebsiteEndpoint(acctest.Provider.Meta().(*conns.AWSClient), bucketName, region) + expectedValue := website.Endpoint + + return resource.TestCheckResourceAttr(resourceName, attributeName, expectedValue)(s) + } +} + +func testAccCheckBucketUpdateTags(n string, oldTags, newTags map[string]string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs := s.RootModule().Resources[n] + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + + return tfs3.BucketUpdateTags(conn, rs.Primary.Attributes["bucket"], oldTags, newTags) + } +} + +func testAccCheckBucketCheckTags(n string, expectedTags map[string]string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs := s.RootModule().Resources[n] + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + + got, err := tfs3.BucketListTags(conn, rs.Primary.Attributes["bucket"]) + if err != nil { + return err + } + + want := tftags.New(expectedTags) + if !reflect.DeepEqual(want, got) { + return fmt.Errorf("Incorrect tags, want: %v got: %v", want, got) + } + + return nil + } +} + +func testAccBucketPolicy(bucketName, partition string) string { + return fmt.Sprintf(`{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "", + "Effect": "Allow", + "Principal": { + "AWS": "*" + }, + "Action": "s3:GetObject", + "Resource": "arn:%[1]s:s3:::%[2]s/*" + } + ] +}`, partition, bucketName) +} + +func testAccBucketConfig_Basic(bucketName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q +} +`, bucketName) +} + +func testAccBucketConfig_withAcceleration(bucketName, acceleration string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + acceleration_status = %[2]q +} +`, bucketName, acceleration) +} + +func testAccBucketConfig_withACL(bucketName, acl string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + acl = %[2]q +} +`, bucketName, acl) +} + +func testAccBucketConfig_withCORS(bucketName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST"] + allowed_origins = ["https://www.example.com"] + expose_headers = ["x-amz-server-side-encryption", "ETag"] + max_age_seconds = 3000 + } +} +`, bucketName) +} + +func testAccBucketConfig_withCORSSingleMethodAndEmptyOrigin(bucketName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + cors_rule { + allowed_methods = ["GET"] + allowed_origins = [""] + } +} +`, bucketName) +} + +func testAccBucketConfig_withCORSEmptyOrigin(bucketName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST"] + allowed_origins = [""] + expose_headers = ["x-amz-server-side-encryption", "ETag"] + max_age_seconds = 3000 + } +} +`, bucketName) +} + +func testAccBucketConfig_withDefaultEncryption_defaultKey(bucketName, sseAlgorithm string) string { + return 
fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + server_side_encryption_configuration { + rule { + apply_server_side_encryption_by_default { + sse_algorithm = %[2]q + } + } + } +} +`, bucketName, sseAlgorithm) +} + +func testAccBucketConfig_withDefaultEncryption_KmsMasterKey(bucketName string) string { + return fmt.Sprintf(` +resource "aws_kms_key" "test" { + description = "KMS Key for Bucket %[1]s" + deletion_window_in_days = 10 +} + +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + server_side_encryption_configuration { + rule { + apply_server_side_encryption_by_default { + kms_master_key_id = aws_kms_key.test.arn + sse_algorithm = "aws:kms" + } + } + } +} +`, bucketName) +} + +func testAccBucketConfig_withDefaultEncryptionAndBucketKeyEnabled_KmsMasterKey(bucketName string) string { + return fmt.Sprintf(` +resource "aws_kms_key" "test" { + description = "KMS Key for Bucket %[1]s" + deletion_window_in_days = 7 +} + +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + server_side_encryption_configuration { + rule { + apply_server_side_encryption_by_default { + kms_master_key_id = aws_kms_key.test.arn + sse_algorithm = "aws:kms" + } + bucket_key_enabled = true + } + } +} +`, bucketName) +} + +func testAccBucketConfig_withGrants(bucketName string) string { + return fmt.Sprintf(` +data "aws_canonical_user_id" "current" {} + +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + grant { + id = data.aws_canonical_user_id.current.id + type = "CanonicalUser" + permissions = ["FULL_CONTROL", "WRITE"] + } +} +`, bucketName) +} + +func testAccBucketConfig_withUpdatedGrants(bucketName string) string { + return fmt.Sprintf(` +data "aws_canonical_user_id" "current" {} + +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + grant { + id = data.aws_canonical_user_id.current.id + type = "CanonicalUser" + permissions = ["READ"] + } + + grant { + type = "Group" + permissions = ["READ_ACP"] + uri = "http://acs.amazonaws.com/groups/s3/LogDelivery" + } +} +`, bucketName) +} + +func testAccBucketConfig_withLifecycle(bucketName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + acl = "private" + + lifecycle_rule { + id = "id1" + prefix = "path1/" + enabled = true + + expiration { + days = 365 + } + + transition { + days = 30 + storage_class = "STANDARD_IA" + } + + transition { + days = 60 + storage_class = "INTELLIGENT_TIERING" + } + + transition { + days = 90 + storage_class = "ONEZONE_IA" + } + + transition { + days = 120 + storage_class = "GLACIER" + } + + transition { + days = 210 + storage_class = "DEEP_ARCHIVE" + } + } + + lifecycle_rule { + id = "id2" + prefix = "path2/" + enabled = true + + expiration { + date = "2016-01-12" + } + } + + lifecycle_rule { + id = "id3" + prefix = "path3/" + enabled = true + + transition { + days = 0 + storage_class = "GLACIER" + } + } + + lifecycle_rule { + id = "id4" + prefix = "path4/" + enabled = true + + tags = { + "tagKey" = "tagValue" + "terraform" = "hashicorp" + } + + expiration { + date = "2016-01-12" + } + } + + lifecycle_rule { + id = "id5" + enabled = true + + tags = { + "tagKey" = "tagValue" + "terraform" = "hashicorp" + } + + transition { + days = 0 + storage_class = "GLACIER" + } + } + + lifecycle_rule { + id = "id6" + enabled = true + + tags = { + "tagKey" = "tagValue" + } + + transition { + days = 0 + storage_class = "GLACIER" + } + } +} +`, bucketName) +} + +func testAccBucketConfig_withLifecycleExpireMarker(bucketName string) string { + return fmt.Sprintf(` +resource 
"aws_s3_bucket" "test" { + bucket = %[1]q + acl = "private" + + lifecycle_rule { + id = "id1" + prefix = "path1/" + enabled = true + + expiration { + expired_object_delete_marker = "true" + } + } +} +`, bucketName) +} + +func testAccBucketConfig_withLifecycleRuleExpirationEmptyConfigurationBlock(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + lifecycle_rule { + enabled = true + id = "id1" + + expiration {} + } +} +`, rName) +} + +func testAccBucketConfig_withLifecycleRuleAbortIncompleteMultipartUploadDays(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + lifecycle_rule { + abort_incomplete_multipart_upload_days = 7 + enabled = true + id = "id1" + } +} +`, rName) +} + +func testAccBucketConfig_withLogging(bucketName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "log_bucket" { + bucket = "%[1]s-log" + acl = "log-delivery-write" +} + +resource "aws_s3_bucket" "test" { + bucket = %[1]q + acl = "private" + + logging { + target_bucket = aws_s3_bucket.log_bucket.id + target_prefix = "log/" + } +} +`, bucketName) +} + +func testAccBucketConfig_withEmptyPolicy(bucketName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + acl = "private" + policy = "" +} +`, bucketName) +} + +func testAccBucketConfig_withPolicy(bucketName, partition string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + acl = "private" + policy = %[2]s +} +`, bucketName, strconv.Quote(testAccBucketPolicy(bucketName, partition))) +} + +func testAccBucketConfig_ReplicationBase(bucketName string) string { + return acctest.ConfigCompose( + acctest.ConfigMultipleRegionProvider(2), + fmt.Sprintf(` +data "aws_partition" "current" {} + +resource "aws_iam_role" "role" { + name = %[1]q + + assume_role_policy = < %q, want %q", testCase.LocationConstraint, got.Endpoint, testCase.Expected) - } - } + destination { + bucket = aws_s3_bucket.destination3.arn + storage_class = "ONEZONE_IA" + } + } + } +} +`, bucketName)) } -func testAccCheckBucketDestroy(s *terraform.State) error { - return testAccCheckBucketDestroyWithProvider(s, acctest.Provider) +func testAccBucketConfig_withReplication_MultipleDestinations_NonEmptyFilter(bucketName string) string { + return acctest.ConfigCompose( + testAccBucketConfig_ReplicationBase(bucketName), + fmt.Sprintf(` +resource "aws_s3_bucket" "destination2" { + provider = "awsalternate" + bucket = "%[1]s-destination2" + + versioning { + enabled = true + } } -func testAccCheckBucketDestroyWithProvider(s *terraform.State, provider *schema.Provider) error { - conn := provider.Meta().(*conns.AWSClient).S3Conn +resource "aws_s3_bucket" "destination3" { + provider = "awsalternate" + bucket = "%[1]s-destination3" - for _, rs := range s.RootModule().Resources { - if rs.Type != "aws_s3_bucket" { - continue - } + versioning { + enabled = true + } +} - input := &s3.HeadBucketInput{ - Bucket: aws.String(rs.Primary.ID), - } +resource "aws_s3_bucket" "source" { + bucket = "%[1]s-source" + acl = "private" - // Retry for S3 eventual consistency - err := resource.Retry(1*time.Minute, func() *resource.RetryError { - _, err := conn.HeadBucket(input) + versioning { + enabled = true + } - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) || tfawserr.ErrCodeEquals(err, "NotFound") { - return nil - } + replication_configuration { + role = aws_iam_role.role.arn - if err != nil { - return resource.NonRetryableError(err) - } + rules { 
+ id = "rule1" + priority = 1 + status = "Enabled" - return resource.RetryableError(fmt.Errorf("AWS S3 Bucket still exists: %s", rs.Primary.ID)) - }) + filter { + prefix = "prefix1" + } - if tfresource.TimedOut(err) { - _, err = conn.HeadBucket(input) - } + destination { + bucket = aws_s3_bucket.destination.arn + storage_class = "STANDARD" + } + } - if err != nil { - return err - } - } - return nil + rules { + id = "rule2" + priority = 2 + status = "Enabled" + + filter { + tags = { + Key2 = "Value2" + } + } + + destination { + bucket = aws_s3_bucket.destination2.arn + storage_class = "STANDARD_IA" + } + } + + rules { + id = "rule3" + priority = 3 + status = "Disabled" + + filter { + prefix = "prefix3" + + tags = { + Key3 = "Value3" + } + } + + destination { + bucket = aws_s3_bucket.destination3.arn + storage_class = "ONEZONE_IA" + } + } + } +} +`, bucketName)) } -func testAccCheckBucketExists(n string) resource.TestCheckFunc { - return testAccCheckBucketExistsWithProvider(n, func() *schema.Provider { return acctest.Provider }) +func testAccBucketConfig_withReplication_MultipleDestinations_TwoDestination(bucketName string) string { + return acctest.ConfigCompose( + testAccBucketConfig_ReplicationBase(bucketName), + fmt.Sprintf(` +resource "aws_s3_bucket" "destination2" { + provider = "awsalternate" + bucket = "%[1]s-destination2" + + versioning { + enabled = true + } } -func testAccCheckBucketExistsWithProvider(n string, providerF func() *schema.Provider) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Not found: %s", n) - } +resource "aws_s3_bucket" "source" { + bucket = "%[1]s-source" + acl = "private" - if rs.Primary.ID == "" { - return fmt.Errorf("No ID is set") - } + versioning { + enabled = true + } - provider := providerF() + replication_configuration { + role = aws_iam_role.role.arn - conn := provider.Meta().(*conns.AWSClient).S3Conn - _, err := conn.HeadBucket(&s3.HeadBucketInput{ - Bucket: aws.String(rs.Primary.ID), - }) + rules { + id = "rule1" + priority = 1 + status = "Enabled" - if err != nil { - if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { - return fmt.Errorf("S3 bucket not found") - } - return err - } - return nil + filter { + prefix = "prefix1" + } + + destination { + bucket = aws_s3_bucket.destination.arn + storage_class = "STANDARD" + } + } + + rules { + id = "rule2" + priority = 2 + status = "Enabled" + + filter { + tags = { + Key2 = "Value2" + } + } + + destination { + bucket = aws_s3_bucket.destination2.arn + storage_class = "STANDARD_IA" + } + } + } +} +`, bucketName)) +} + +func testAccBucketConfig_withReplication_NoVersioning(bucketName string) string { + return acctest.ConfigCompose( + testAccBucketConfig_ReplicationBase(bucketName), + fmt.Sprintf(` +resource "aws_s3_bucket" "source" { + bucket = "%[1]s" + acl = "private" + + replication_configuration { + role = aws_iam_role.role.arn + + rules { + id = "foobar" + prefix = "foo" + status = "Enabled" + + destination { + bucket = aws_s3_bucket.destination.arn + storage_class = "STANDARD" + } + } + } +} +`, bucketName)) +} + +func testAccBucketConfig_withReplication_RulesDestination(bucketName string) string { + return acctest.ConfigCompose( + testAccBucketConfig_ReplicationBase(bucketName), + fmt.Sprintf(` +data "aws_caller_identity" "current" {} + +resource "aws_s3_bucket" "source" { + acl = "private" + bucket = "%[1]s-source" + + replication_configuration { + role = aws_iam_role.role.arn + + rules { + id = "foobar" 
+ prefix = "foo" + status = "Enabled" + + destination { + account_id = data.aws_caller_identity.current.account_id + bucket = aws_s3_bucket.destination.arn + storage_class = "STANDARD" + } + } + } + + versioning { + enabled = true + } +} +`, bucketName)) +} + +func testAccBucketConfig_withReplication_SseKMSEncryptedObjects(bucketName string) string { + return acctest.ConfigCompose( + testAccBucketConfig_ReplicationBase(bucketName), + fmt.Sprintf(` +resource "aws_kms_key" "replica" { + provider = "awsalternate" + description = "TF Acceptance Test S3 repl KMS key" + deletion_window_in_days = 7 +} + +resource "aws_s3_bucket" "source" { + bucket = "%[1]s-source" + acl = "private" + + versioning { + enabled = true + } + + replication_configuration { + role = aws_iam_role.role.arn + + rules { + id = "foobar" + prefix = "foo" + status = "Enabled" + + destination { + bucket = aws_s3_bucket.destination.arn + storage_class = "STANDARD" + replica_kms_key_id = aws_kms_key.replica.arn + } + + source_selection_criteria { + sse_kms_encrypted_objects { + enabled = true + } + } + } + } +} +`, bucketName)) +} + +func testAccBucketConfig_withReplication_SseKMSEncryptedObjectsAndAccessControlTranslation(bucketName string) string { + return acctest.ConfigCompose( + testAccBucketConfig_ReplicationBase(bucketName), + fmt.Sprintf(` +data "aws_caller_identity" "current" {} + +resource "aws_kms_key" "replica" { + provider = "awsalternate" + description = "TF Acceptance Test S3 repl KMS key" + deletion_window_in_days = 7 +} + +resource "aws_s3_bucket" "source" { + bucket = "%[1]s-source" + acl = "private" + + versioning { + enabled = true + } + + replication_configuration { + role = aws_iam_role.role.arn + + rules { + id = "foobar" + prefix = "foo" + status = "Enabled" + + destination { + account_id = data.aws_caller_identity.current.account_id + bucket = aws_s3_bucket.destination.arn + storage_class = "STANDARD" + replica_kms_key_id = aws_kms_key.replica.arn + + access_control_translation { + owner = "Destination" + } + } + + source_selection_criteria { + sse_kms_encrypted_objects { + enabled = true + } + } + } + } +} +`, bucketName)) +} + +func testAccBucketConfig_withReplication_WithoutPrefix(bucketName string) string { + return acctest.ConfigCompose( + testAccBucketConfig_ReplicationBase(bucketName), + fmt.Sprintf(` +resource "aws_s3_bucket" "source" { + bucket = "%[1]s-source" + acl = "private" + + versioning { + enabled = true + } + + replication_configuration { + role = aws_iam_role.role.arn + + rules { + id = "foobar" + status = "Enabled" + + destination { + bucket = aws_s3_bucket.destination.arn + storage_class = "STANDARD" + } + } + } +} +`, bucketName)) +} + +func testAccBucketConfig_withReplication_WithoutStorageClass(bucketName string) string { + return acctest.ConfigCompose( + testAccBucketConfig_ReplicationBase(bucketName), + fmt.Sprintf(` +resource "aws_s3_bucket" "source" { + bucket = "%[1]s-source" + acl = "private" + + versioning { + enabled = true + } + + replication_configuration { + role = aws_iam_role.role.arn + + rules { + id = "foobar" + prefix = "foo" + status = "Enabled" + + destination { + bucket = aws_s3_bucket.destination.arn + } + } + } +} +`, bucketName)) +} + +func testAccBucketConfig_withReplicationV2_SameRegionNoTags(rName string) string { + return fmt.Sprintf(` +resource "aws_iam_role" "test" { + name = %[1]q + + assume_role_policy = < 63) { + return fmt.Errorf("%q must contain from 3 to 63 characters", value) + } + if !regexp.MustCompile(`^[0-9a-z-.]+$`).MatchString(value) { + 
return fmt.Errorf("only lowercase alphanumeric characters and hyphens allowed in %q", value) + } + if regexp.MustCompile(`^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$`).MatchString(value) { + return fmt.Errorf("%q must not be formatted as an IP address", value) + } + if strings.HasPrefix(value, `.`) { + return fmt.Errorf("%q cannot start with a period", value) + } + if strings.HasSuffix(value, `.`) { + return fmt.Errorf("%q cannot end with a period", value) + } + if strings.Contains(value, `..`) { + return fmt.Errorf("%q can be only one period between labels", value) + } + } else { + if len(value) > 255 { + return fmt.Errorf("%q must contain less than 256 characters", value) + } + if !regexp.MustCompile(`^[0-9a-zA-Z-._]+$`).MatchString(value) { + return fmt.Errorf("only alphanumeric characters, hyphens, periods, and underscores allowed in %q", value) + } + } + return nil +} + +func validBucketLifecycleTimestamp(v interface{}, k string) (ws []string, errors []error) { + value := v.(string) + _, err := time.Parse(time.RFC3339, fmt.Sprintf("%sT00:00:00Z", value)) + if err != nil { + errors = append(errors, fmt.Errorf( + "%q cannot be parsed as RFC3339 Timestamp Format", value)) + } + + return +} diff --git a/internal/service/sns/topic.go b/internal/service/sns/topic.go index 0558e95a8131..b537199b325a 100644 --- a/internal/service/sns/topic.go +++ b/internal/service/sns/topic.go @@ -51,7 +51,6 @@ var ( "delivery_policy": { Type: schema.TypeString, Optional: true, - ForceNew: false, ValidateFunc: validation.StringIsJSON, DiffSuppressFunc: verify.SuppressEquivalentJSONDiffs, StateFunc: func(v interface{}) string { diff --git a/internal/service/ssm/maintenance_windows_data_source.go b/internal/service/ssm/maintenance_windows_data_source.go new file mode 100644 index 000000000000..dd089343a70f --- /dev/null +++ b/internal/service/ssm/maintenance_windows_data_source.go @@ -0,0 +1,129 @@ +package ssm + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ssm" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/flex" +) + +func DataSourceMaintenanceWindows() *schema.Resource { + return &schema.Resource{ + Read: dataMaintenanceWindowsRead, + Schema: map[string]*schema.Schema{ + "filter": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "name": { + Type: schema.TypeString, + Required: true, + }, + + "values": { + Type: schema.TypeList, + Required: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + }, + }, + "ids": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, + }, + } +} + +func dataMaintenanceWindowsRead(d *schema.ResourceData, meta interface{}) error { + conn := meta.(*conns.AWSClient).SSMConn + + input := &ssm.DescribeMaintenanceWindowsInput{} + + if v, ok := d.GetOk("filter"); ok { + input.Filters = expandMaintenanceWindowFilters(v.(*schema.Set).List()) + } + + var results []*ssm.MaintenanceWindowIdentity + + err := conn.DescribeMaintenanceWindowsPages(input, func(page *ssm.DescribeMaintenanceWindowsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, windowIdentities := range page.WindowIdentities { + if windowIdentities == nil { + continue + } + + results = append(results, windowIdentities) + } + + return !lastPage + }) + + if err != nil { + return fmt.Errorf("error reading SSM 
Maintenance Windows: %w", err) + } + + var windowIDs []string + + for _, r := range results { + windowIDs = append(windowIDs, aws.StringValue(r.WindowId)) + } + + d.SetId(meta.(*conns.AWSClient).Region) + d.Set("ids", windowIDs) + + return nil +} + +func expandMaintenanceWindowFilters(tfList []interface{}) []*ssm.MaintenanceWindowFilter { + if len(tfList) == 0 { + return nil + } + + var apiObjects []*ssm.MaintenanceWindowFilter + + for _, tfMapRaw := range tfList { + tfMap, ok := tfMapRaw.(map[string]interface{}) + + if !ok { + continue + } + + apiObject := expandMaintenanceWindowFilter(tfMap) + + if apiObject == nil { + continue + } + + apiObjects = append(apiObjects, apiObject) + } + + return apiObjects +} + +func expandMaintenanceWindowFilter(tfMap map[string]interface{}) *ssm.MaintenanceWindowFilter { + if tfMap == nil { + return nil + } + + apiObject := &ssm.MaintenanceWindowFilter{} + + if v, ok := tfMap["name"].(string); ok && v != "" { + apiObject.Key = aws.String(v) + } + + if v, ok := tfMap["values"].([]interface{}); ok && len(v) > 0 { + apiObject.Values = flex.ExpandStringList(v) + } + + return apiObject +} diff --git a/internal/service/ssm/maintenance_windows_data_source_test.go b/internal/service/ssm/maintenance_windows_data_source_test.go new file mode 100644 index 000000000000..b7ce960d4f17 --- /dev/null +++ b/internal/service/ssm/maintenance_windows_data_source_test.go @@ -0,0 +1,105 @@ +package ssm_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/service/ssm" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" +) + +func TestAccSSMMaintenanceWindowsDataSource_filter(t *testing.T) { + dataSourceName := "data.aws_ssm_maintenance_windows.test" + rName1 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName2 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + rName3 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, ssm.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckMaintenanceWindowDestroy, + Steps: []resource.TestStep{ + { + Config: testAccCheckMaintenanceWindowsDataSourceConfig_filter_name(rName1, rName2, rName3), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "ids.#", "1"), + resource.TestCheckResourceAttrPair(dataSourceName, "ids.0", "aws_ssm_maintenance_window.test2", "id"), + ), + }, + { + Config: testAccCheckMaintenanceWindowsDataSourceConfig_filter_enabled(rName1, rName2, rName3), + Check: resource.ComposeAggregateTestCheckFunc( + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "1"), + ), + }, + }, + }) +} + +func testAccCheckMaintenanceWindowsDataSourceConfig(rName1, rName2, rName3 string) string { + return fmt.Sprintf(` +resource "aws_ssm_maintenance_window" "test1" { + name = "%[1]s" + duration = 1 + cutoff = 0 + schedule = "cron(0 16 ? * TUE *)" +} + +resource "aws_ssm_maintenance_window" "test2" { + name = "%[2]s" + duration = 1 + cutoff = 0 + schedule = "cron(0 16 ? * WED *)" +} + +resource "aws_ssm_maintenance_window" "test3" { + name = "%[3]s" + duration = 1 + cutoff = 0 + schedule = "cron(0 16 ? 
* THU *)" + + enabled = false +} +`, rName1, rName2, rName3) +} + +func testAccCheckMaintenanceWindowsDataSourceConfig_filter_name(rName1, rName2, rName3 string) string { + return acctest.ConfigCompose( + testAccCheckMaintenanceWindowsDataSourceConfig(rName1, rName2, rName3), + fmt.Sprintf(` +data "aws_ssm_maintenance_windows" "test" { + filter { + name = "Name" + values = ["%[1]s"] + } + + depends_on = [ + aws_ssm_maintenance_window.test1, + aws_ssm_maintenance_window.test2, + aws_ssm_maintenance_window.test3, + ] +} +`, rName2)) +} + +func testAccCheckMaintenanceWindowsDataSourceConfig_filter_enabled(rName1, rName2, rName3 string) string { + return acctest.ConfigCompose( + testAccCheckMaintenanceWindowsDataSourceConfig(rName1, rName2, rName3), + ` +data "aws_ssm_maintenance_windows" "test" { + filter { + name = "Enabled" + values = ["true"] + } + + depends_on = [ + aws_ssm_maintenance_window.test1, + aws_ssm_maintenance_window.test2, + aws_ssm_maintenance_window.test3, + ] +} +`) +} diff --git a/internal/service/storagegateway/enum.go b/internal/service/storagegateway/enum.go index fe332c00bc56..f3c937e17427 100644 --- a/internal/service/storagegateway/enum.go +++ b/internal/service/storagegateway/enum.go @@ -14,6 +14,12 @@ func authentication_Values() []string { } } +const ( + bandwidthTypeAll = "ALL" + bandwidthTypeDownload = "DOWNLOAD" + bandwidthTypeUpload = "UPLOAD" +) + const ( defaultStorageClassS3IntelligentTiering = "S3_INTELLIGENT_TIERING" defaultStorageClassS3OneZoneIA = "S3_ONEZONE_IA" @@ -30,6 +36,64 @@ func defaultStorageClass_Values() []string { } } +const ( + gatewayTypeCached = "CACHED" + gatewayTypeFileFSXSMB = "FILE_FSX_SMB" + gatewayTypeFileS3 = "FILE_S3" + gatewayTypeStored = "STORED" + gatewayTypeVTL = "VTL" + gatewayTypeVTLSnow = "VTL_SNOW" +) + +func gatewayType_Values() []string { + return []string{ + gatewayTypeCached, + gatewayTypeFileFSXSMB, + gatewayTypeFileS3, + gatewayTypeStored, + gatewayTypeVTL, + gatewayTypeVTLSnow, + } +} + +const ( + mediumChangerTypeAWS_Gateway_VTL = "AWS-Gateway-VTL" + mediumChangerTypeIBM_03584L32_0402 = "IBM-03584L32-0402" + mediumChangerTypeSTK_L700 = "STK-L700" +) + +func mediumChangerType_Values() []string { + return []string{ + mediumChangerTypeAWS_Gateway_VTL, + mediumChangerTypeIBM_03584L32_0402, + mediumChangerTypeSTK_L700, + } +} + +const ( + squashAllSquash = "AllSquash" + squashNoSquash = "NoSquash" + squashRootSquash = "RootSquash" +) + +func squash_Values() []string { + return []string{ + squashAllSquash, + squashNoSquash, + squashRootSquash, + } +} + +const ( + tapeDriveTypeIBM_ULT3580_TD5 = "IBM-ULT3580-TD5" +) + +func tapeDriveType_Values() []string { + return []string{ + tapeDriveTypeIBM_ULT3580_TD5, + } +} + const ( fileShareStatusAvailable = "AVAILABLE" fileShareStatusCreating = "CREATING" diff --git a/internal/service/storagegateway/errors.go b/internal/service/storagegateway/errors.go index 681015d158ad..24e2078db69f 100644 --- a/internal/service/storagegateway/errors.go +++ b/internal/service/storagegateway/errors.go @@ -7,11 +7,12 @@ import ( "github.com/aws/aws-sdk-go/service/storagegateway" ) -// Error code constants missing from AWS Go SDK: -// https://docs.aws.amazon.com/sdk-for-go/api/service/storagegateway/#pkg-constants +// Operation error code constants missing from AWS Go SDK: https://docs.aws.amazon.com/sdk-for-go/api/service/storagegateway/#pkg-constants. +// See https://docs.aws.amazon.com/storagegateway/latest/userguide/AWSStorageGatewayAPI.html#APIOperationErrorCodes for details. 
const ( operationErrCodeFileShareNotFound = "FileShareNotFound" operationErrCodeFileSystemAssociationNotFound = "FileSystemAssociationNotFound" + operationErrCodeGatewayNotFound = "GatewayNotFound" ) // operationErrorCode returns the operation error code from the specified error: diff --git a/internal/service/storagegateway/find.go b/internal/service/storagegateway/find.go index 37eb9bdf19bc..89d6a8071752 100644 --- a/internal/service/storagegateway/find.go +++ b/internal/service/storagegateway/find.go @@ -3,6 +3,7 @@ package storagegateway import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/storagegateway" + "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) @@ -82,6 +83,60 @@ func FindUploadBufferDisk(conn *storagegateway.StorageGateway, gatewayARN string return result, err } +func FindGatewayByARN(conn *storagegateway.StorageGateway, arn string) (*storagegateway.DescribeGatewayInformationOutput, error) { + input := &storagegateway.DescribeGatewayInformationInput{ + GatewayARN: aws.String(arn), + } + + output, err := conn.DescribeGatewayInformation(input) + + if operationErrorCode(err) == operationErrCodeGatewayNotFound || tfawserr.ErrCodeEquals(err, storagegateway.ErrorCodeGatewayNotFound) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + return output, nil +} + +func FindNFSFileShareByARN(conn *storagegateway.StorageGateway, arn string) (*storagegateway.NFSFileShareInfo, error) { + input := &storagegateway.DescribeNFSFileSharesInput{ + FileShareARNList: aws.StringSlice([]string{arn}), + } + + output, err := conn.DescribeNFSFileShares(input) + + if operationErrorCode(err) == operationErrCodeFileShareNotFound { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + if output == nil || len(output.NFSFileShareInfoList) == 0 || output.NFSFileShareInfoList[0] == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + if count := len(output.NFSFileShareInfoList); count > 1 { + return nil, tfresource.NewTooManyResultsError(count, input) + } + + return output.NFSFileShareInfoList[0], nil +} + func FindSMBFileShareByARN(conn *storagegateway.StorageGateway, arn string) (*storagegateway.SMBFileShareInfo, error) { input := &storagegateway.DescribeSMBFileSharesInput{ FileShareARNList: aws.StringSlice([]string{arn}), diff --git a/internal/service/storagegateway/gateway.go b/internal/service/storagegateway/gateway.go index e4f4c1439b18..3f1fd3642d4d 100644 --- a/internal/service/storagegateway/gateway.go +++ b/internal/service/storagegateway/gateway.go @@ -7,6 +7,7 @@ import ( "net" "net/http" "regexp" + "strconv" "time" "github.com/aws/aws-sdk-go/aws" @@ -17,6 +18,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/experimental/nullable" "github.com/hashicorp/terraform-provider-aws/internal/flex" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" "github.com/hashicorp/terraform-provider-aws/internal/tfresource" @@ -29,47 +31,61 @@ func ResourceGateway() *schema.Resource { 
Read: resourceGatewayRead, Update: resourceGatewayUpdate, Delete: resourceGatewayDelete, - CustomizeDiff: customdiff.Sequence( - customdiff.ForceNewIfChange("smb_active_directory_settings", func(_ context.Context, old, new, meta interface{}) bool { - return len(old.([]interface{})) == 1 && len(new.([]interface{})) == 0 - }), - verify.SetTagsDiff, - ), + Importer: &schema.ResourceImporter{ State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ Create: schema.DefaultTimeout(15 * time.Minute), }, Schema: map[string]*schema.Schema{ + "activation_key": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ExactlyOneOf: []string{"activation_key", "gateway_ip_address"}, + }, "arn": { Type: schema.TypeString, Computed: true, }, - "activation_key": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ConflictsWith: []string{"gateway_ip_address"}, + "average_download_rate_limit_in_bits_per_sec": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(102400), }, - "gateway_vpc_endpoint": { + "average_upload_rate_limit_in_bits_per_sec": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntAtLeast(51200), + }, + "cloudwatch_log_group_arn": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: verify.ValidARN, + }, + "ec2_instance_id": { Type: schema.TypeString, - Optional: true, - ForceNew: true, + Computed: true, + }, + "endpoint_type": { + Type: schema.TypeString, + Computed: true, }, "gateway_id": { Type: schema.TypeString, Computed: true, }, "gateway_ip_address": { - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: validation.IsIPv4Address, - ConflictsWith: []string{"activation_key"}, + Type: schema.TypeString, + Optional: true, + Computed: true, + ForceNew: true, + ValidateFunc: validation.IsIPv4Address, + ExactlyOneOf: []string{"activation_key", "gateway_ip_address"}, }, "gateway_name": { Type: schema.TypeString, @@ -79,6 +95,18 @@ func ResourceGateway() *schema.Resource { validation.StringLenBetween(2, 255), ), }, + "gateway_network_interface": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "ipv4_address": { + Type: schema.TypeString, + Computed: true, + }, + }, + }, + }, "gateway_timezone": { Type: schema.TypeString, Required: true, @@ -88,27 +116,56 @@ func ResourceGateway() *schema.Resource { ), }, "gateway_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + Default: gatewayTypeStored, + ValidateFunc: validation.StringInSlice(gatewayType_Values(), false), + }, + "gateway_vpc_endpoint": { Type: schema.TypeString, Optional: true, ForceNew: true, - Default: "STORED", - ValidateFunc: validation.StringInSlice([]string{ - "CACHED", - "FILE_FSX_SMB", - "FILE_S3", - "STORED", - "VTL", - }, false), }, - "medium_changer_type": { + "host_environment": { Type: schema.TypeString, + Computed: true, + }, + "maintenance_start_time": { + Type: schema.TypeList, Optional: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice([]string{ - "AWS-Gateway-VTL", - "STK-L700", - "IBM-03584L32-0402", - }, false), + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "day_of_week": { + Type: nullable.TypeNullableInt, + Optional: true, + ValidateFunc: nullable.ValidateTypeStringNullableIntBetween(0, 6), + }, + "day_of_month": { + Type: nullable.TypeNullableInt, + Optional: true, + ValidateFunc: 
nullable.ValidateTypeStringNullableIntBetween(1, 28), + }, + "hour_of_day": { + Type: schema.TypeInt, + Required: true, + ValidateFunc: validation.IntBetween(0, 23), + }, + "minute_of_hour": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(0, 59), + }, + }, + }, + }, + "medium_changer_type": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: validation.StringInSlice(mediumChangerType_Values(), false), }, "smb_active_directory_settings": { Type: schema.TypeList, @@ -116,6 +173,21 @@ func ResourceGateway() *schema.Resource { MaxItems: 1, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ + "active_directory_status": { + Type: schema.TypeString, + Computed: true, + }, + "domain_controllers": { + Type: schema.TypeSet, + Optional: true, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.All( + validation.StringMatch(regexp.MustCompile(`^(([a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9\-]*[A-Za-z0-9])(:(\d+))?$`), ""), + validation.StringLenBetween(6, 1024), + ), + }, + }, "domain_name": { Type: schema.TypeString, Required: true, @@ -124,11 +196,10 @@ func ResourceGateway() *schema.Resource { validation.StringLenBetween(1, 1024), ), }, - "timeout_in_seconds": { - Type: schema.TypeInt, + "organizational_unit": { + Type: schema.TypeString, Optional: true, - Default: 20, - ValidateFunc: validation.IntBetween(0, 3600), + ValidateFunc: validation.StringLenBetween(1, 1024), }, "password": { Type: schema.TypeString, @@ -139,6 +210,12 @@ func ResourceGateway() *schema.Resource { validation.StringLenBetween(1, 1024), ), }, + "timeout_in_seconds": { + Type: schema.TypeInt, + Optional: true, + Default: 20, + ValidateFunc: validation.IntBetween(0, 3600), + }, "username": { Type: schema.TypeString, Required: true, @@ -147,29 +224,13 @@ func ResourceGateway() *schema.Resource { validation.StringLenBetween(1, 1024), ), }, - "organizational_unit": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: validation.StringLenBetween(1, 1024), - }, - "domain_controllers": { - Type: schema.TypeSet, - Optional: true, - Elem: &schema.Schema{ - Type: schema.TypeString, - ValidateFunc: validation.All( - validation.StringMatch(regexp.MustCompile(`^(([a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9\-]*[A-Za-z0-9])(:(\d+))?$`), ""), - validation.StringLenBetween(6, 1024), - ), - }, - }, - "active_directory_status": { - Type: schema.TypeString, - Computed: true, - }, }, }, }, + "smb_file_share_visibility": { + Type: schema.TypeBool, + Optional: true, + }, "smb_guest_password": { Type: schema.TypeString, Optional: true, @@ -179,66 +240,28 @@ func ResourceGateway() *schema.Resource { validation.StringLenBetween(6, 512), ), }, - "tape_drive_type": { - Type: schema.TypeString, - Optional: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice([]string{ - "IBM-ULT3580-TD5", - }, false), - }, - "tags": tftags.TagsSchema(), - "tags_all": tftags.TagsSchemaComputed(), - "cloudwatch_log_group_arn": { - Type: schema.TypeString, - Optional: true, - ValidateFunc: verify.ValidARN, - }, "smb_security_strategy": { Type: schema.TypeString, Optional: true, Computed: true, ValidateFunc: validation.StringInSlice(storagegateway.SMBSecurityStrategy_Values(), false), }, - "smb_file_share_visibility": { - Type: schema.TypeBool, - Optional: true, - }, - "average_download_rate_limit_in_bits_per_sec": { - Type: schema.TypeInt, - Optional: true, - ValidateFunc: validation.IntAtLeast(102400), - }, - "average_upload_rate_limit_in_bits_per_sec": { - 
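A sketch of how the new `maintenance_start_time` block is expected to be used, based on the schema above and the acceptance-test fixtures; the instance reference is hypothetical. Per the validators, `hour_of_day` is required (0-23), `minute_of_hour` is optional (0-59), and `day_of_week` (0-6) and `day_of_month` (1-28) are optional nullable-int strings.

```terraform
resource "aws_storagegateway_gateway" "example" {
  gateway_ip_address = aws_instance.example.public_ip # hypothetical activation host
  gateway_name       = "example"
  gateway_timezone   = "GMT"
  gateway_type       = "CACHED"

  # Weekly maintenance: Wednesdays at 22:00 in the gateway's time zone.
  maintenance_start_time {
    day_of_week    = "3" # 0 (Sunday) through 6 (Saturday), as a string
    hour_of_day    = 22
    minute_of_hour = 0
  }
}
```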
Type: schema.TypeInt, + "tags": tftags.TagsSchema(), + "tags_all": tftags.TagsSchemaComputed(), + "tape_drive_type": { + Type: schema.TypeString, Optional: true, - ValidateFunc: validation.IntAtLeast(51200), - }, - "ec2_instance_id": { - Type: schema.TypeString, - Computed: true, - }, - "endpoint_type": { - Type: schema.TypeString, - Computed: true, - }, - "host_environment": { - Type: schema.TypeString, - Computed: true, - }, - "gateway_network_interface": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "ipv4_address": { - Type: schema.TypeString, - Computed: true, - }, - }, - }, + ForceNew: true, + ValidateFunc: validation.StringInSlice(tapeDriveType_Values(), false), }, }, + + CustomizeDiff: customdiff.Sequence( + customdiff.ForceNewIfChange("smb_active_directory_settings", func(_ context.Context, old, new, meta interface{}) bool { + return len(old.([]interface{})) == 1 && len(new.([]interface{})) == 0 + }), + verify.SetTagsDiff, + ), } } @@ -249,13 +272,10 @@ func resourceGatewayCreate(d *schema.ResourceData, meta interface{}) error { region := meta.(*conns.AWSClient).Region activationKey := d.Get("activation_key").(string) - gatewayIpAddress := d.Get("gateway_ip_address").(string) - // Perform one time fetch of activation key from gateway IP address - if activationKey == "" { - if gatewayIpAddress == "" { - return fmt.Errorf("either activation_key or gateway_ip_address must be provided") - } + // Perform one time fetch of activation key from gateway IP address. + if v, ok := d.GetOk("gateway_ip_address"); ok { + gatewayIPAddress := v.(string) client := &http.Client{ CheckRedirect: func(req *http.Request, via []*http.Request) error { @@ -264,7 +284,7 @@ func resourceGatewayCreate(d *schema.ResourceData, meta interface{}) error { Timeout: time.Second * 10, } - requestURL := fmt.Sprintf("http://%s/?activationRegion=%s", gatewayIpAddress, region) + requestURL := fmt.Sprintf("http://%s/?activationRegion=%s", gatewayIPAddress, region) if v, ok := d.GetOk("gateway_vpc_endpoint"); ok { requestURL = fmt.Sprintf("%s&vpcEndpoint=%s", requestURL, v.(string)) } @@ -304,7 +324,7 @@ func resourceGatewayCreate(d *schema.ResourceData, meta interface{}) error { response, err = client.Do(request) } if err != nil { - return fmt.Errorf("error retrieving activation key from IP Address (%s): %w", gatewayIpAddress, err) + return fmt.Errorf("error retrieving activation key from IP Address (%s): %w", gatewayIPAddress, err) } log.Printf("[DEBUG] Received HTTP response: %#v", response) @@ -319,7 +339,7 @@ func resourceGatewayCreate(d *schema.ResourceData, meta interface{}) error { activationKey = redirectURL.Query().Get("activationKey") if activationKey == "" { - return fmt.Errorf("empty activationKey received from IP Address: %s", gatewayIpAddress) + return fmt.Errorf("empty activationKey received from IP Address: %s", gatewayIPAddress) } } @@ -368,6 +388,18 @@ func resourceGatewayCreate(d *schema.ResourceData, meta interface{}) error { } } + if v, ok := d.GetOk("maintenance_start_time"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input := expandUpdateMaintenanceStartTimeInput(v.([]interface{})[0].(map[string]interface{})) + input.GatewayARN = aws.String(d.Id()) + + log.Printf("[DEBUG] Storage Gateway Gateway %q updating maintenance start time", d.Id()) + _, err := conn.UpdateMaintenanceStartTime(input) + + if err != nil { + return fmt.Errorf("error updating maintenance start time: %w", err) + } + } + if v, ok := 
d.GetOk("smb_active_directory_settings"); ok && len(v.([]interface{})) > 0 { input := expandStorageGatewayGatewayDomain(v.([]interface{}), d.Id()) log.Printf("[DEBUG] Storage Gateway Gateway %q joining Active Directory domain: %s", d.Id(), aws.StringValue(input.DomainName)) @@ -448,21 +480,16 @@ func resourceGatewayRead(d *schema.ResourceData, meta interface{}) error { defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - input := &storagegateway.DescribeGatewayInformationInput{ - GatewayARN: aws.String(d.Id()), - } - - log.Printf("[DEBUG] Reading Storage Gateway Gateway: %s", input) + output, err := FindGatewayByARN(conn, d.Id()) - output, err := conn.DescribeGatewayInformation(input) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Storage Gateway Gateway (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } if err != nil { - if IsErrGatewayNotFound(err) { - log.Printf("[WARN] Storage Gateway Gateway %q not found - removing from state", d.Id()) - d.SetId("") - return nil - } - return fmt.Errorf("error reading Storage Gateway Gateway: %w", err) + return fmt.Errorf("error reading Storage Gateway Gateway (%s): %w", d.Id(), err) } tags := KeyValueTags(output.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig) @@ -572,21 +599,45 @@ func resourceGatewayRead(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("error setting gateway_network_interface: %w", err) } - bandwidthInput := &storagegateway.DescribeBandwidthRateLimitInput{ + bandwidthOutput, err := conn.DescribeBandwidthRateLimit(&storagegateway.DescribeBandwidthRateLimitInput{ GatewayARN: aws.String(d.Id()), - } + }) - log.Printf("[DEBUG] Reading Storage Gateway Bandwidth rate limit: %s", bandwidthInput) - bandwidthOutput, err := conn.DescribeBandwidthRateLimit(bandwidthInput) if tfawserr.ErrMessageContains(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified operation is not supported") || tfawserr.ErrMessageContains(err, storagegateway.ErrCodeInvalidGatewayRequestException, "This operation is not valid for the specified gateway") { - return nil + err = nil } + if err != nil { return fmt.Errorf("error reading Storage Gateway Bandwidth rate limit: %w", err) } - d.Set("average_download_rate_limit_in_bits_per_sec", bandwidthOutput.AverageDownloadRateLimitInBitsPerSec) - d.Set("average_upload_rate_limit_in_bits_per_sec", bandwidthOutput.AverageUploadRateLimitInBitsPerSec) + + if bandwidthOutput != nil { + d.Set("average_download_rate_limit_in_bits_per_sec", bandwidthOutput.AverageDownloadRateLimitInBitsPerSec) + d.Set("average_upload_rate_limit_in_bits_per_sec", bandwidthOutput.AverageUploadRateLimitInBitsPerSec) + } + + maintenanceStartTimeOutput, err := conn.DescribeMaintenanceStartTime(&storagegateway.DescribeMaintenanceStartTimeInput{ + GatewayARN: aws.String(d.Id()), + }) + + if tfawserr.ErrMessageContains(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified operation is not supported") || + tfawserr.ErrMessageContains(err, storagegateway.ErrCodeInvalidGatewayRequestException, "This operation is not valid for the specified gateway") { + err = nil + } + + if err != nil { + return fmt.Errorf("error reading Storage Gateway maintenance start time: %w", err) + } + + if maintenanceStartTimeOutput != nil { + if err := d.Set("maintenance_start_time", []map[string]interface{}{flattenDescribeMaintenanceStartTimeOutput(maintenanceStartTimeOutput)}); err != nil { + return 
fmt.Errorf("error setting maintenance_start_time: %w", err) + } + } else { + d.Set("maintenance_start_time", nil) + } + return nil } @@ -595,36 +646,47 @@ func resourceGatewayUpdate(d *schema.ResourceData, meta interface{}) error { if d.HasChanges("gateway_name", "gateway_timezone", "cloudwatch_log_group_arn") { input := &storagegateway.UpdateGatewayInformationInput{ + CloudWatchLogGroupARN: aws.String(d.Get("cloudwatch_log_group_arn").(string)), GatewayARN: aws.String(d.Id()), GatewayName: aws.String(d.Get("gateway_name").(string)), GatewayTimezone: aws.String(d.Get("gateway_timezone").(string)), - CloudWatchLogGroupARN: aws.String(d.Get("cloudwatch_log_group_arn").(string)), } log.Printf("[DEBUG] Updating Storage Gateway Gateway: %s", input) _, err := conn.UpdateGatewayInformation(input) + if err != nil { - return fmt.Errorf("error updating Storage Gateway Gateway: %w", err) + return fmt.Errorf("error updating Storage Gateway Gateway (%s): %w", d.Id(), err) } } - if d.HasChange("tags_all") { - o, n := d.GetChange("tags_all") - if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { - return fmt.Errorf("error updating tags: %w", err) + if d.HasChange("maintenance_start_time") { + if v, ok := d.GetOk("maintenance_start_time"); ok && len(v.([]interface{})) > 0 && v.([]interface{})[0] != nil { + input := expandUpdateMaintenanceStartTimeInput(v.([]interface{})[0].(map[string]interface{})) + input.GatewayARN = aws.String(d.Id()) + + log.Printf("[DEBUG] Updating Storage Gateway maintenance start time: %s", input) + _, err := conn.UpdateMaintenanceStartTime(input) + + if err != nil { + return fmt.Errorf("error updating Storage Gateway Gateway (%s) maintenance start time: %w", d.Id(), err) + } } } if d.HasChange("smb_active_directory_settings") { input := expandStorageGatewayGatewayDomain(d.Get("smb_active_directory_settings").([]interface{}), d.Id()) - log.Printf("[DEBUG] Storage Gateway Gateway %q joining Active Directory domain: %s", d.Id(), aws.StringValue(input.DomainName)) + domainName := aws.StringValue(input.DomainName) + + log.Printf("[DEBUG] Joining Storage Gateway to Active Directory domain: %s", input) _, err := conn.JoinDomain(input) + if err != nil { - return fmt.Errorf("error joining Active Directory domain: %w", err) + return fmt.Errorf("error joining Storage Gateway Gateway (%s) to Active Directory domain (%s): %w", d.Id(), domainName, err) } if _, err = waitStorageGatewayGatewayJoinDomainJoined(conn, d.Id()); err != nil { - return fmt.Errorf("error waiting for Storage Gateway Gateway (%q) to be Join domain (%s): %w", d.Id(), aws.StringValue(input.DomainName), err) + return fmt.Errorf("error waiting for Storage Gateway Gateway (%s) to join Active Directory domain (%s): %w", d.Id(), domainName, err) } } @@ -634,10 +696,11 @@ func resourceGatewayUpdate(d *schema.ResourceData, meta interface{}) error { Password: aws.String(d.Get("smb_guest_password").(string)), } - log.Printf("[DEBUG] Storage Gateway Gateway %q setting SMB guest password", d.Id()) + log.Printf("[DEBUG] Setting Storage Gateway SMB guest password: %s", input) _, err := conn.SetSMBGuestPassword(input) + if err != nil { - return fmt.Errorf("error setting SMB guest password: %w", err) + return fmt.Errorf("error updating Storage Gateway Gateway (%s) SMB guest password: %w", d.Id(), err) } } @@ -647,29 +710,29 @@ func resourceGatewayUpdate(d *schema.ResourceData, meta interface{}) error { SMBSecurityStrategy: aws.String(d.Get("smb_security_strategy").(string)), } - log.Printf("[DEBUG] Storage Gateway Gateway 
%q updating SMB Security Strategy", input) + log.Printf("[DEBUG] Updating Storage Gateway SMB security strategy: %s", input) _, err := conn.UpdateSMBSecurityStrategy(input) + if err != nil { - return fmt.Errorf("error updating SMB Security Strategy: %w", err) + return fmt.Errorf("error updating Storage Gateway Gateway (%s) SMB security strategy: %w", d.Id(), err) } } if d.HasChange("smb_file_share_visibility") { input := &storagegateway.UpdateSMBFileShareVisibilityInput{ - GatewayARN: aws.String(d.Id()), FileSharesVisible: aws.Bool(d.Get("smb_file_share_visibility").(bool)), + GatewayARN: aws.String(d.Id()), } - log.Printf("[DEBUG] Storage Gateway Gateway %q updating SMB File Share Visibility", input) + log.Printf("[DEBUG] Updating Storage Gateway SMB file share visibility: %s", input) _, err := conn.UpdateSMBFileShareVisibility(input) + if err != nil { return fmt.Errorf("error updating Storage Gateway Gateway (%s) SMB file share visibility: %w", d.Id(), err) } } - if d.HasChanges("average_download_rate_limit_in_bits_per_sec", - "average_upload_rate_limit_in_bits_per_sec") { - + if d.HasChanges("average_download_rate_limit_in_bits_per_sec", "average_upload_rate_limit_in_bits_per_sec") { deleteInput := &storagegateway.DeleteBandwidthRateLimitInput{ GatewayARN: aws.String(d.Id()), } @@ -683,7 +746,7 @@ func resourceGatewayUpdate(d *schema.ResourceData, meta interface{}) error { updateInput.AverageDownloadRateLimitInBitsPerSec = aws.Int64(int64(v.(int))) needsUpdate = true } else if d.HasChange("average_download_rate_limit_in_bits_per_sec") { - deleteInput.BandwidthType = aws.String("DOWNLOAD") + deleteInput.BandwidthType = aws.String(bandwidthTypeDownload) needsDelete = true } @@ -692,29 +755,38 @@ func resourceGatewayUpdate(d *schema.ResourceData, meta interface{}) error { needsUpdate = true } else if d.HasChange("average_upload_rate_limit_in_bits_per_sec") { if needsDelete { - deleteInput.BandwidthType = aws.String("ALL") + deleteInput.BandwidthType = aws.String(bandwidthTypeAll) } else { - deleteInput.BandwidthType = aws.String("UPLOAD") + deleteInput.BandwidthType = aws.String(bandwidthTypeUpload) needsDelete = true } } if needsUpdate { - log.Printf("[DEBUG] Storage Gateway Gateway (%q) updating Bandwidth Rate Limit: %#v", d.Id(), updateInput) + log.Printf("[DEBUG] Updating Storage Gateway bandwidth rate limit: %s", updateInput) _, err := conn.UpdateBandwidthRateLimit(updateInput) + if err != nil { - return fmt.Errorf("error updating Bandwidth Rate Limit: %w", err) + return fmt.Errorf("error updating Storage Gateway Gateway (%s) bandwidth rate limit: %w", d.Id(), err) } } if needsDelete { - log.Printf("[DEBUG] Storage Gateway Gateway (%q) unsetting Bandwidth Rate Limit: %#v", d.Id(), deleteInput) + log.Printf("[DEBUG] Deleting Storage Gateway bandwidth rate limit: %s", deleteInput) _, err := conn.DeleteBandwidthRateLimit(deleteInput) + if err != nil { - return fmt.Errorf("error unsetting Bandwidth Rate Limit: %w", err) + return fmt.Errorf("error deleting Storage Gateway Gateway (%s) bandwidth rate limit: %w", d.Id(), err) } } + } + + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return fmt.Errorf("error updating tags: %w", err) + } } return resourceGatewayRead(d, meta) @@ -723,17 +795,17 @@ func resourceGatewayUpdate(d *schema.ResourceData, meta interface{}) error { func resourceGatewayDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).StorageGatewayConn - 
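The bandwidth-update path above writes configured values via UpdateBandwidthRateLimit and deletes a limit whose argument was removed, choosing `DOWNLOAD`, `UPLOAD`, or `ALL` as the bandwidth type. A sketch with both limits set to the minimums the validators accept; the instance reference is hypothetical.

```terraform
resource "aws_storagegateway_gateway" "example" {
  gateway_ip_address = aws_instance.example.public_ip # hypothetical activation host
  gateway_name       = "example"
  gateway_timezone   = "GMT"
  gateway_type       = "CACHED"

  average_download_rate_limit_in_bits_per_sec = 102400 # validator minimum
  average_upload_rate_limit_in_bits_per_sec  = 51200   # validator minimum
}
```

Dropping either argument later routes through DeleteBandwidthRateLimit with the matching bandwidth type rather than writing a zero value.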
input := &storagegateway.DeleteGatewayInput{ + log.Printf("[DEBUG] Deleting Storage Gateway Gateway: %s", d.Id()) + _, err := conn.DeleteGateway(&storagegateway.DeleteGatewayInput{ GatewayARN: aws.String(d.Id()), + }) + + if operationErrorCode(err) == operationErrCodeGatewayNotFound || tfawserr.ErrCodeEquals(err, storagegateway.ErrorCodeGatewayNotFound) { + return nil } - log.Printf("[DEBUG] Deleting Storage Gateway Gateway: %s", input) - _, err := conn.DeleteGateway(input) if err != nil { - if IsErrGatewayNotFound(err) { - return nil - } - return fmt.Errorf("error deleting Storage Gateway Gateway: %w", err) + return fmt.Errorf("error deleting Storage Gateway Gateway (%s): %w", d.Id(), err) } return nil @@ -790,6 +862,58 @@ func flattenStorageGatewayGatewayNetworkInterfaces(nis []*storagegateway.Network return tfList } +func expandUpdateMaintenanceStartTimeInput(tfMap map[string]interface{}) *storagegateway.UpdateMaintenanceStartTimeInput { + if tfMap == nil { + return nil + } + + apiObject := &storagegateway.UpdateMaintenanceStartTimeInput{} + + if v, null, _ := nullable.Int(tfMap["day_of_month"].(string)).Value(); !null && v > 0 { + apiObject.DayOfMonth = aws.Int64(v) + } + + if v, null, _ := nullable.Int(tfMap["day_of_week"].(string)).Value(); !null && v > 0 { + apiObject.DayOfWeek = aws.Int64(v) + } + + if v, ok := tfMap["hour_of_day"].(int); ok { + apiObject.HourOfDay = aws.Int64(int64(v)) + } + + if v, ok := tfMap["minute_of_hour"].(int); ok { + apiObject.MinuteOfHour = aws.Int64(int64(v)) + } + + return apiObject +} + +func flattenDescribeMaintenanceStartTimeOutput(apiObject *storagegateway.DescribeMaintenanceStartTimeOutput) map[string]interface{} { + if apiObject == nil { + return nil + } + + tfMap := map[string]interface{}{} + + if v := apiObject.DayOfMonth; v != nil { + tfMap["day_of_month"] = strconv.FormatInt(aws.Int64Value(v), 10) + } + + if v := apiObject.DayOfWeek; v != nil { + tfMap["day_of_week"] = strconv.FormatInt(aws.Int64Value(v), 10) + } + + if v := apiObject.HourOfDay; v != nil { + tfMap["hour_of_day"] = aws.Int64Value(v) + } + + if v := apiObject.MinuteOfHour; v != nil { + tfMap["minute_of_hour"] = aws.Int64Value(v) + } + + return tfMap +} + // The API returns multiple responses for a missing gateway func IsErrGatewayNotFound(err error) bool { if tfawserr.ErrMessageContains(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified gateway was not found.") { diff --git a/internal/service/storagegateway/gateway_test.go b/internal/service/storagegateway/gateway_test.go index e4bfa629a408..337137f0e7b8 100644 --- a/internal/service/storagegateway/gateway_test.go +++ b/internal/service/storagegateway/gateway_test.go @@ -3,9 +3,9 @@ package storagegateway_test import ( "fmt" "regexp" + "strconv" "testing" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/storagegateway" sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" @@ -13,6 +13,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfstoragegateway "github.com/hashicorp/terraform-provider-aws/internal/service/storagegateway" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func TestAccStorageGatewayGateway_GatewayType_cached(t *testing.T) { @@ -28,23 +29,24 @@ func TestAccStorageGatewayGateway_GatewayType_cached(t *testing.T) { Steps: []resource.TestStep{ { Config: 
testAccGatewayConfig_GatewayType_Cached(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckGatewayExists(resourceName, &gateway), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "storagegateway", regexp.MustCompile(`gateway/sgw-.+`)), + resource.TestCheckResourceAttrPair(resourceName, "ec2_instance_id", "aws_instance.test", "id"), + resource.TestCheckResourceAttr(resourceName, "endpoint_type", "STANDARD"), resource.TestCheckResourceAttrSet(resourceName, "gateway_id"), resource.TestCheckResourceAttr(resourceName, "gateway_name", rName), + resource.TestCheckResourceAttr(resourceName, "gateway_network_interface.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "gateway_network_interface.0.ipv4_address", "aws_instance.test", "private_ip"), resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT"), resource.TestCheckResourceAttr(resourceName, "gateway_type", "CACHED"), + resource.TestCheckResourceAttr(resourceName, "host_environment", "EC2"), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.#", "1"), resource.TestCheckResourceAttr(resourceName, "medium_changer_type", ""), resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.#", "0"), resource.TestCheckResourceAttr(resourceName, "smb_guest_password", ""), resource.TestCheckResourceAttr(resourceName, "smb_security_strategy", ""), resource.TestCheckResourceAttr(resourceName, "tape_drive_type", ""), - resource.TestCheckResourceAttrPair(resourceName, "ec2_instance_id", "aws_instance.test", "id"), - resource.TestCheckResourceAttr(resourceName, "endpoint_type", "STANDARD"), - resource.TestCheckResourceAttr(resourceName, "host_environment", "EC2"), - resource.TestCheckResourceAttr(resourceName, "gateway_network_interface.#", "1"), - resource.TestCheckResourceAttrPair(resourceName, "gateway_network_interface.0.ipv4_address", "aws_instance.test", "private_ip"), ), }, { @@ -70,22 +72,23 @@ func TestAccStorageGatewayGateway_GatewayType_fileFSxSMB(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccGatewayConfig_GatewayType_FileFSxSMB(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckGatewayExists(resourceName, &gateway), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "storagegateway", regexp.MustCompile(`gateway/sgw-.+`)), + resource.TestCheckResourceAttrPair(resourceName, "ec2_instance_id", "aws_instance.test", "id"), + resource.TestCheckResourceAttr(resourceName, "endpoint_type", "STANDARD"), resource.TestCheckResourceAttrSet(resourceName, "gateway_id"), resource.TestCheckResourceAttr(resourceName, "gateway_name", rName), + resource.TestCheckResourceAttr(resourceName, "gateway_network_interface.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "gateway_network_interface.0.ipv4_address", "aws_instance.test", "private_ip"), resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT"), resource.TestCheckResourceAttr(resourceName, "gateway_type", "FILE_FSX_SMB"), + resource.TestCheckResourceAttr(resourceName, "host_environment", "EC2"), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.#", "1"), resource.TestCheckResourceAttr(resourceName, "medium_changer_type", ""), resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.#", "0"), resource.TestCheckResourceAttr(resourceName, "smb_guest_password", ""), resource.TestCheckResourceAttr(resourceName, "tape_drive_type", ""), - 
resource.TestCheckResourceAttrPair(resourceName, "ec2_instance_id", "aws_instance.test", "id"), - resource.TestCheckResourceAttr(resourceName, "endpoint_type", "STANDARD"), - resource.TestCheckResourceAttr(resourceName, "host_environment", "EC2"), - resource.TestCheckResourceAttr(resourceName, "gateway_network_interface.#", "1"), - resource.TestCheckResourceAttrPair(resourceName, "gateway_network_interface.0.ipv4_address", "aws_instance.test", "private_ip"), ), }, { @@ -111,22 +114,23 @@ func TestAccStorageGatewayGateway_GatewayType_fileS3(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccGatewayConfig_GatewayType_FileS3(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckGatewayExists(resourceName, &gateway), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "storagegateway", regexp.MustCompile(`gateway/sgw-.+`)), + resource.TestCheckResourceAttrPair(resourceName, "ec2_instance_id", "aws_instance.test", "id"), + resource.TestCheckResourceAttr(resourceName, "endpoint_type", "STANDARD"), resource.TestCheckResourceAttrSet(resourceName, "gateway_id"), resource.TestCheckResourceAttr(resourceName, "gateway_name", rName), + resource.TestCheckResourceAttr(resourceName, "gateway_network_interface.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "gateway_network_interface.0.ipv4_address", "aws_instance.test", "private_ip"), resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT"), resource.TestCheckResourceAttr(resourceName, "gateway_type", "FILE_S3"), + resource.TestCheckResourceAttr(resourceName, "host_environment", "EC2"), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.#", "1"), resource.TestCheckResourceAttr(resourceName, "medium_changer_type", ""), resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.#", "0"), resource.TestCheckResourceAttr(resourceName, "smb_guest_password", ""), resource.TestCheckResourceAttr(resourceName, "tape_drive_type", ""), - resource.TestCheckResourceAttrPair(resourceName, "ec2_instance_id", "aws_instance.test", "id"), - resource.TestCheckResourceAttr(resourceName, "endpoint_type", "STANDARD"), - resource.TestCheckResourceAttr(resourceName, "host_environment", "EC2"), - resource.TestCheckResourceAttr(resourceName, "gateway_network_interface.#", "1"), - resource.TestCheckResourceAttrPair(resourceName, "gateway_network_interface.0.ipv4_address", "aws_instance.test", "private_ip"), ), }, { @@ -152,22 +156,23 @@ func TestAccStorageGatewayGateway_GatewayType_stored(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccGatewayConfig_GatewayType_Stored(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckGatewayExists(resourceName, &gateway), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "storagegateway", regexp.MustCompile(`gateway/sgw-.+`)), + resource.TestCheckResourceAttrPair(resourceName, "ec2_instance_id", "aws_instance.test", "id"), + resource.TestCheckResourceAttr(resourceName, "endpoint_type", "STANDARD"), resource.TestCheckResourceAttrSet(resourceName, "gateway_id"), resource.TestCheckResourceAttr(resourceName, "gateway_name", rName), + resource.TestCheckResourceAttr(resourceName, "gateway_network_interface.#", "1"), + resource.TestCheckResourceAttrPair(resourceName, "gateway_network_interface.0.ipv4_address", "aws_instance.test", "private_ip"), resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT"), 
resource.TestCheckResourceAttr(resourceName, "gateway_type", "STORED"), + resource.TestCheckResourceAttr(resourceName, "host_environment", "EC2"), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.#", "1"), resource.TestCheckResourceAttr(resourceName, "medium_changer_type", ""), resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.#", "0"), resource.TestCheckResourceAttr(resourceName, "smb_guest_password", ""), resource.TestCheckResourceAttr(resourceName, "tape_drive_type", ""), - resource.TestCheckResourceAttrPair(resourceName, "ec2_instance_id", "aws_instance.test", "id"), - resource.TestCheckResourceAttr(resourceName, "endpoint_type", "STANDARD"), - resource.TestCheckResourceAttr(resourceName, "host_environment", "EC2"), - resource.TestCheckResourceAttr(resourceName, "gateway_network_interface.#", "1"), - resource.TestCheckResourceAttrPair(resourceName, "gateway_network_interface.0.ipv4_address", "aws_instance.test", "private_ip"), ), }, { @@ -193,20 +198,21 @@ func TestAccStorageGatewayGateway_GatewayType_vtl(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccGatewayConfig_GatewayType_Vtl(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckGatewayExists(resourceName, &gateway), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "storagegateway", regexp.MustCompile(`gateway/sgw-.+`)), + resource.TestCheckResourceAttrPair(resourceName, "ec2_instance_id", "aws_instance.test", "id"), + resource.TestCheckResourceAttr(resourceName, "endpoint_type", "STANDARD"), resource.TestCheckResourceAttrSet(resourceName, "gateway_id"), resource.TestCheckResourceAttr(resourceName, "gateway_name", rName), resource.TestCheckResourceAttr(resourceName, "gateway_timezone", "GMT"), resource.TestCheckResourceAttr(resourceName, "gateway_type", "VTL"), + resource.TestCheckResourceAttr(resourceName, "host_environment", "EC2"), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.#", "1"), resource.TestCheckResourceAttr(resourceName, "medium_changer_type", ""), resource.TestCheckResourceAttr(resourceName, "smb_active_directory_settings.#", "0"), resource.TestCheckResourceAttr(resourceName, "smb_guest_password", ""), resource.TestCheckResourceAttr(resourceName, "tape_drive_type", ""), - resource.TestCheckResourceAttrPair(resourceName, "ec2_instance_id", "aws_instance.test", "id"), - resource.TestCheckResourceAttr(resourceName, "endpoint_type", "STANDARD"), - resource.TestCheckResourceAttr(resourceName, "host_environment", "EC2"), ), }, { @@ -787,6 +793,48 @@ func TestAccStorageGatewayGateway_bandwidthAll(t *testing.T) { }) } +func TestAccStorageGatewayGateway_maintenanceStartTime(t *testing.T) { + var gateway storagegateway.DescribeGatewayInformationOutput + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_storagegateway_gateway.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, storagegateway.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckGatewayDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGatewayMaintenanceStartTimeConfig(rName, 22, 0, "3", ""), + Check: resource.ComposeTestCheckFunc( + testAccCheckGatewayExists(resourceName, &gateway), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.#", "1"), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.0.hour_of_day", "22"), + 
resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.0.minute_of_hour", "0"), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.0.day_of_week", "3"), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.0.day_of_month", ""), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + ImportStateVerifyIgnore: []string{"activation_key", "gateway_ip_address"}, + }, + { + Config: testAccGatewayMaintenanceStartTimeConfig(rName, 21, 10, "", "12"), + Check: resource.ComposeTestCheckFunc( + testAccCheckGatewayExists(resourceName, &gateway), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.0.hour_of_day", "21"), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.0.minute_of_hour", "10"), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.0.day_of_week", ""), + resource.TestCheckResourceAttr(resourceName, "maintenance_start_time.0.day_of_month", "12"), + ), + }, + }, + }) +} + func testAccCheckGatewayDestroy(s *terraform.State) error { conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn @@ -795,18 +843,17 @@ func testAccCheckGatewayDestroy(s *terraform.State) error { continue } - input := &storagegateway.DescribeGatewayInformationInput{ - GatewayARN: aws.String(rs.Primary.ID), - } + _, err := tfstoragegateway.FindGatewayByARN(conn, rs.Primary.ID) - _, err := conn.DescribeGatewayInformation(input) + if tfresource.NotFound(err) { + continue + } if err != nil { - if tfstoragegateway.IsErrGatewayNotFound(err) { - return nil - } return err } + + return fmt.Errorf("Storage Gateway Gateway %s still exists", rs.Primary.ID) } return nil @@ -821,20 +868,13 @@ func testAccCheckGatewayExists(resourceName string, gateway *storagegateway.Desc } conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn - input := &storagegateway.DescribeGatewayInformationInput{ - GatewayARN: aws.String(rs.Primary.ID), - } - output, err := conn.DescribeGatewayInformation(input) + output, err := tfstoragegateway.FindGatewayByARN(conn, rs.Primary.ID) if err != nil { return err } - if output == nil { - return fmt.Errorf("Gateway %q does not exist", rs.Primary.ID) - } - *gateway = *output return nil @@ -844,8 +884,7 @@ func testAccCheckGatewayExists(resourceName string, gateway *storagegateway.Desc // testAcc_VPCBase provides a publicly accessible subnet // and security group, suitable for Storage Gateway EC2 instances of any type func testAcc_VPCBase(rName string) string { - return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), - fmt.Sprintf(` + return acctest.ConfigCompose(acctest.ConfigAvailableAZsNoOptIn(), fmt.Sprintf(` resource "aws_vpc" "test" { cidr_block = "10.0.0.0/16" @@ -879,6 +918,7 @@ resource "aws_route" "test" { } resource "aws_security_group" "test" { + name = %[1]q vpc_id = aws_vpc.test.id egress { @@ -958,40 +998,40 @@ resource "aws_instance" "test" { } func testAccGatewayConfig_GatewayType_Cached(rName string) string { - return testAcc_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_TapeAndVolumeGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip - gateway_name = %q + gateway_name = %[1]q gateway_timezone = "GMT" gateway_type = "CACHED" } -`, rName) +`, rName)) } func testAccGatewayConfig_GatewayType_FileFSxSMB(rName string) string { - return testAcc_FileGatewayBase(rName) + fmt.Sprintf(` + return 
acctest.ConfigCompose(testAcc_FileGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip - gateway_name = %q + gateway_name = %[1]q gateway_timezone = "GMT" gateway_type = "FILE_FSX_SMB" } -`, rName) +`, rName)) } func testAccGatewayConfig_GatewayType_FileS3(rName string) string { - return testAcc_FileGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_FileGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip - gateway_name = %q + gateway_name = %[1]q gateway_timezone = "GMT" gateway_type = "FILE_S3" } -`, rName) +`, rName)) } func testAccGatewayConfig_Log_Group(rName string) string { - return testAcc_FileGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_FileGatewayBase(rName), fmt.Sprintf(` resource "aws_cloudwatch_log_group" "test" { name = %[1]q } @@ -1003,44 +1043,44 @@ resource "aws_storagegateway_gateway" "test" { gateway_type = "FILE_S3" cloudwatch_log_group_arn = aws_cloudwatch_log_group.test.arn } -`, rName) +`, rName)) } func testAccGatewayConfig_GatewayType_Stored(rName string) string { - return testAcc_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_TapeAndVolumeGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip - gateway_name = %q + gateway_name = %[1]q gateway_timezone = "GMT" gateway_type = "STORED" } -`, rName) +`, rName)) } func testAccGatewayConfig_GatewayType_Vtl(rName string) string { - return testAcc_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_TapeAndVolumeGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip - gateway_name = %q + gateway_name = %[1]q gateway_timezone = "GMT" gateway_type = "VTL" } -`, rName) +`, rName)) } func testAccGatewayConfig_GatewayTimezone(rName, gatewayTimezone string) string { - return testAcc_FileGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_FileGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip - gateway_name = %q - gateway_timezone = %q + gateway_name = %[1]q + gateway_timezone = %[2]q gateway_type = "FILE_S3" } -`, rName, gatewayTimezone) +`, rName, gatewayTimezone)) } func testAccGatewayConfig_GatewayVPCEndpoint(rName string) string { - return testAcc_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_TapeAndVolumeGatewayBase(rName), fmt.Sprintf(` data "aws_vpc_endpoint_service" "storagegateway" { service = "storagegateway" } @@ -1051,6 +1091,10 @@ resource "aws_vpc_endpoint" "test" { subnet_ids = [aws_subnet.test.id] vpc_endpoint_type = data.aws_vpc_endpoint_service.storagegateway.service_type vpc_id = aws_vpc.test.id + + tags = { + Name = %[1]q + } } resource "aws_storagegateway_gateway" "test" { @@ -1060,7 +1104,7 @@ resource "aws_storagegateway_gateway" "test" { gateway_type = "CACHED" gateway_vpc_endpoint = aws_vpc_endpoint.test.dns_entry[0].dns_name } -`, rName) +`, rName)) } func testAccGatewayConfig_DirectoryServiceSimpleDirectory(rName, domainName string) string { @@ -1146,6 +1190,7 @@ resource "aws_route" "test" { } resource "aws_security_group" "test" { + name = %[1]q vpc_id = aws_vpc.test.id egress { @@ -1285,31 +1330,31 @@ resource 
"aws_storagegateway_gateway" "test" { } func testAccGatewayConfig_SMBGuestPassword(rName, smbGuestPassword string) string { - return testAcc_FileGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_FileGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip - gateway_name = %q + gateway_name = %[1]q gateway_timezone = "GMT" gateway_type = "FILE_S3" - smb_guest_password = %q + smb_guest_password = %[2]q } -`, rName, smbGuestPassword) +`, rName, smbGuestPassword)) } func testAccGatewaySMBSecurityStrategyConfig(rName, strategy string) string { - return testAcc_FileGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_FileGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip - gateway_name = %q + gateway_name = %[1]q gateway_timezone = "GMT" gateway_type = "FILE_S3" - smb_security_strategy = %q + smb_security_strategy = %[2]q } -`, rName, strategy) +`, rName, strategy)) } func testAccGatewaySMBVisibilityConfig(rName string, visible bool) string { - return testAcc_FileGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_FileGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip gateway_name = %[1]q @@ -1317,42 +1362,42 @@ resource "aws_storagegateway_gateway" "test" { gateway_type = "FILE_S3" smb_file_share_visibility = %[2]t } -`, rName, visible) +`, rName, visible)) } func testAccGatewayTags1Config(rName, tagKey1, tagValue1 string) string { - return testAcc_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_TapeAndVolumeGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip - gateway_name = %q + gateway_name = %[1]q gateway_timezone = "GMT" gateway_type = "CACHED" tags = { - %q = %q + %[2]q = %[3]q } } -`, rName, tagKey1, tagValue1) +`, rName, tagKey1, tagValue1)) } func testAccGatewayTags2Config(rName, tagKey1, tagValue1, tagKey2, tagValue2 string) string { - return testAcc_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_TapeAndVolumeGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip - gateway_name = %q + gateway_name = %[1]q gateway_timezone = "GMT" gateway_type = "CACHED" tags = { - %q = %q - %q = %q + %[2]q = %[3]q + %[4]q = %[5]q } } -`, rName, tagKey1, tagValue1, tagKey2, tagValue2) +`, rName, tagKey1, tagValue1, tagKey2, tagValue2)) } func testAccGatewayBandwidthUploadConfig(rName string, rate int) string { - return testAcc_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_TapeAndVolumeGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip gateway_name = %[1]q @@ -1360,11 +1405,11 @@ resource "aws_storagegateway_gateway" "test" { gateway_type = "CACHED" average_upload_rate_limit_in_bits_per_sec = %[2]d } -`, rName, rate) +`, rName, rate)) } func testAccGatewayBandwidthDownloadConfig(rName string, rate int) string { - return testAcc_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_TapeAndVolumeGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip gateway_name 
= %[1]q @@ -1372,11 +1417,11 @@ resource "aws_storagegateway_gateway" "test" { gateway_type = "CACHED" average_download_rate_limit_in_bits_per_sec = %[2]d } -`, rName, rate) +`, rName, rate)) } func testAccGatewayBandwidthAllConfig(rName string, rate int) string { - return testAcc_TapeAndVolumeGatewayBase(rName) + fmt.Sprintf(` + return acctest.ConfigCompose(testAcc_TapeAndVolumeGatewayBase(rName), fmt.Sprintf(` resource "aws_storagegateway_gateway" "test" { gateway_ip_address = aws_instance.test.public_ip gateway_name = %[1]q @@ -1385,5 +1430,30 @@ resource "aws_storagegateway_gateway" "test" { average_upload_rate_limit_in_bits_per_sec = %[2]d average_download_rate_limit_in_bits_per_sec = %[2]d } -`, rName, rate) +`, rName, rate)) +} + +func testAccGatewayMaintenanceStartTimeConfig(rName string, hourOfDay, minuteOfHour int, dayOfWeek, dayOfMonth string) string { + if dayOfWeek == "" { + dayOfWeek = strconv.Quote(dayOfWeek) + } + if dayOfMonth == "" { + dayOfMonth = strconv.Quote(dayOfMonth) + } + + return acctest.ConfigCompose(testAcc_TapeAndVolumeGatewayBase(rName), fmt.Sprintf(` +resource "aws_storagegateway_gateway" "test" { + gateway_ip_address = aws_instance.test.public_ip + gateway_name = %[1]q + gateway_timezone = "GMT" + gateway_type = "CACHED" + + maintenance_start_time { + hour_of_day = %[2]d + minute_of_hour = %[3]d + day_of_week = %[4]s + day_of_month = %[5]s + } +} +`, rName, hourOfDay, minuteOfHour, dayOfWeek, dayOfMonth)) } diff --git a/internal/service/storagegateway/nfs_file_share.go b/internal/service/storagegateway/nfs_file_share.go index 0fb7711a4ab2..2477e915b930 100644 --- a/internal/service/storagegateway/nfs_file_share.go +++ b/internal/service/storagegateway/nfs_file_share.go @@ -9,7 +9,6 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/storagegateway" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" @@ -26,9 +25,11 @@ func ResourceNFSFileShare() *schema.Resource { Read: resourceNFSFileShareRead, Update: resourceNFSFileShareUpdate, Delete: resourceNFSFileShareDelete, + Importer: &schema.ResourceImporter{ State: schema.ImportStatePassthrough, }, + Timeouts: &schema.ResourceTimeout{ Create: schema.DefaultTimeout(10 * time.Minute), Update: schema.DefaultTimeout(10 * time.Minute), @@ -36,14 +37,34 @@ func ResourceNFSFileShare() *schema.Resource { }, Schema: map[string]*schema.Schema{ + "arn": { + Type: schema.TypeString, + Computed: true, + }, "audit_destination_arn": { Type: schema.TypeString, Optional: true, ValidateFunc: verify.ValidARN, }, - "arn": { - Type: schema.TypeString, - Computed: true, + "bucket_region": { + Type: schema.TypeString, + Optional: true, + ForceNew: true, + RequiredWith: []string{"vpc_endpoint_dns_name"}, + }, + "cache_attributes": { + Type: schema.TypeList, + Optional: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "cache_stale_timeout_in_seconds": { + Type: schema.TypeInt, + Optional: true, + ValidateFunc: validation.IntBetween(300, 2592000), + }, + }, + }, }, "client_list": { Type: schema.TypeSet, @@ -59,15 +80,16 @@ func ResourceNFSFileShare() *schema.Resource { }, }, "default_storage_class": { - Type: schema.TypeString, - Optional: true, - Default: "S3_STANDARD", - ValidateFunc: validation.StringInSlice([]string{ - "S3_ONEZONE_IA", - "S3_STANDARD_IA", - "S3_STANDARD", 
- "S3_INTELLIGENT_TIERING", - }, false), + Type: schema.TypeString, + Optional: true, + Default: defaultStorageClassS3Standard, + ValidateFunc: validation.StringInSlice(defaultStorageClass_Values(), false), + }, + "file_share_name": { + Type: schema.TypeString, + Optional: true, + Computed: true, + ValidateFunc: validation.StringLenBetween(1, 255), }, "fileshare_id": { Type: schema.TypeString, @@ -133,19 +155,14 @@ func ResourceNFSFileShare() *schema.Resource { }, }, }, - "cache_attributes": { - Type: schema.TypeList, + "notification_policy": { + Type: schema.TypeString, Optional: true, - MaxItems: 1, - Elem: &schema.Resource{ - Schema: map[string]*schema.Schema{ - "cache_stale_timeout_in_seconds": { - Type: schema.TypeInt, - Optional: true, - ValidateFunc: validation.IntBetween(300, 2592000), - }, - }, - }, + Default: "{}", + ValidateFunc: validation.All( + validation.StringMatch(regexp.MustCompile(`^\{[\w\s:\{\}\[\]"]*}$`), ""), + validation.StringLenBetween(2, 100), + ), }, "object_acl": { Type: schema.TypeString, @@ -174,32 +191,18 @@ func ResourceNFSFileShare() *schema.Resource { ValidateFunc: verify.ValidARN, }, "squash": { - Type: schema.TypeString, - Optional: true, - Default: "RootSquash", - ValidateFunc: validation.StringInSlice([]string{ - "AllSquash", - "NoSquash", - "RootSquash", - }, false), - }, - "file_share_name": { Type: schema.TypeString, Optional: true, - Computed: true, - ValidateFunc: validation.StringLenBetween(1, 255), + Default: squashRootSquash, + ValidateFunc: validation.StringInSlice(squash_Values(), false), }, - "notification_policy": { + "tags": tftags.TagsSchema(), + "tags_all": tftags.TagsSchemaComputed(), + "vpc_endpoint_dns_name": { Type: schema.TypeString, Optional: true, - Default: "{}", - ValidateFunc: validation.All( - validation.StringMatch(regexp.MustCompile(`^\{[\w\s:\{\}\[\]"]*}$`), ""), - validation.StringLenBetween(2, 100), - ), + ForceNew: true, }, - "tags": tftags.TagsSchema(), - "tags_all": tftags.TagsSchemaComputed(), }, CustomizeDiff: verify.SetTagsDiff, @@ -212,6 +215,7 @@ func resourceNFSFileShareCreate(d *schema.ResourceData, meta interface{}) error tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) fileShareDefaults, err := expandStorageGatewayNfsFileShareDefaults(d.Get("nfs_file_share_defaults").([]interface{})) + if err != nil { return err } @@ -237,32 +241,41 @@ func resourceNFSFileShareCreate(d *schema.ResourceData, meta interface{}) error input.AuditDestinationARN = aws.String(v.(string)) } - if v, ok := d.GetOk("kms_key_arn"); ok { - input.KMSKey = aws.String(v.(string)) + if v, ok := d.GetOk("bucket_region"); ok { + input.BucketRegion = aws.String(v.(string)) } - if v, ok := d.GetOk("notification_policy"); ok { - input.NotificationPolicy = aws.String(v.(string)) + if v, ok := d.GetOk("cache_attributes"); ok { + input.CacheAttributes = expandStorageGatewayNfsFileShareCacheAttributes(v.([]interface{})) } if v, ok := d.GetOk("file_share_name"); ok { input.FileShareName = aws.String(v.(string)) } - if v, ok := d.GetOk("cache_attributes"); ok { - input.CacheAttributes = expandStorageGatewayNfsFileShareCacheAttributes(v.([]interface{})) + if v, ok := d.GetOk("kms_key_arn"); ok { + input.KMSKey = aws.String(v.(string)) + } + + if v, ok := d.GetOk("notification_policy"); ok { + input.NotificationPolicy = aws.String(v.(string)) + } + + if v, ok := d.GetOk("vpc_endpoint_dns_name"); ok { + input.VPCEndpointDNSName = aws.String(v.(string)) } log.Printf("[DEBUG] Creating Storage Gateway NFS File Share: %s", 
input) output, err := conn.CreateNFSFileShare(input) + if err != nil { return fmt.Errorf("error creating Storage Gateway NFS File Share: %w", err) } d.SetId(aws.StringValue(output.FileShareARN)) - if _, err = waitNFSFileShareAvailable(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { - return fmt.Errorf("error waiting for Storage Gateway NFS File Share (%q) to be Available: %w", d.Id(), err) + if _, err = waitNFSFileShareCreated(conn, d.Id(), d.Timeout(schema.TimeoutCreate)); err != nil { + return fmt.Errorf("error waiting for Storage Gateway NFS File Share (%s) create: %w", d.Id(), err) } return resourceNFSFileShareRead(d, meta) @@ -273,61 +286,46 @@ func resourceNFSFileShareRead(d *schema.ResourceData, meta interface{}) error { defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig - input := &storagegateway.DescribeNFSFileSharesInput{ - FileShareARNList: []*string{aws.String(d.Id())}, - } - - log.Printf("[DEBUG] Reading Storage Gateway NFS File Share: %s", input) - output, err := conn.DescribeNFSFileShares(input) - if err != nil { - if tfawserr.ErrMessageContains(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified file share was not found.") { - log.Printf("[WARN] Storage Gateway NFS File Share %q not found, removing from state", d.Id()) - d.SetId("") - return nil - } - return fmt.Errorf("error reading Storage Gateway NFS File Share: %w", err) - } + fileshare, err := FindNFSFileShareByARN(conn, d.Id()) - if output == nil || len(output.NFSFileShareInfoList) == 0 || output.NFSFileShareInfoList[0] == nil { - log.Printf("[WARN] Storage Gateway NFS File Share %q not found, removing from state", d.Id()) + if !d.IsNewResource() && tfresource.NotFound(err) { + log.Printf("[WARN] Storage Gateway NFS File Share (%s) not found, removing from state", d.Id()) d.SetId("") return nil } - fileshare := output.NFSFileShareInfoList[0] - - arn := fileshare.FileShareARN - d.Set("arn", arn) + if err != nil { + return fmt.Errorf("error reading Storage Gateway NFS File Share (%s): %w", d.Id(), err) + } + d.Set("arn", fileshare.FileShareARN) + d.Set("audit_destination_arn", fileshare.AuditDestinationARN) + d.Set("bucket_region", fileshare.BucketRegion) + if err := d.Set("cache_attributes", flattenStorageGatewayNfsFileShareCacheAttributes(fileshare.CacheAttributes)); err != nil { + return fmt.Errorf("error setting cache_attributes: %w", err) + } if err := d.Set("client_list", flex.FlattenStringSet(fileshare.ClientList)); err != nil { return fmt.Errorf("error setting client_list: %w", err) } - - d.Set("audit_destination_arn", fileshare.AuditDestinationARN) d.Set("default_storage_class", fileshare.DefaultStorageClass) + d.Set("file_share_name", fileshare.FileShareName) d.Set("fileshare_id", fileshare.FileShareId) d.Set("gateway_arn", fileshare.GatewayARN) d.Set("guess_mime_type_enabled", fileshare.GuessMIMETypeEnabled) d.Set("kms_encrypted", fileshare.KMSEncrypted) d.Set("kms_key_arn", fileshare.KMSKey) d.Set("location_arn", fileshare.LocationARN) - d.Set("file_share_name", fileshare.FileShareName) - if err := d.Set("nfs_file_share_defaults", flattenStorageGatewayNfsFileShareDefaults(fileshare.NFSFileShareDefaults)); err != nil { return fmt.Errorf("error setting nfs_file_share_defaults: %w", err) } - - if err := d.Set("cache_attributes", flattenStorageGatewayNfsFileShareCacheAttributes(fileshare.CacheAttributes)); err != nil { - return fmt.Errorf("error setting cache_attributes: %w", err) - } - + 
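The read path above now goes through FindNFSFileShareByARN instead of calling DescribeNFSFileShares inline. The finder is defined outside this diff; a sketch consistent with how it is called here, assuming it maps the service's "file share was not found" message to a typed *resource.NotFoundError:

```go
package storagegateway

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/storagegateway"
	"github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

// FindNFSFileShareByARN is assumed to look roughly like this, based on how
// the resource and its tests call it.
func FindNFSFileShareByARN(conn *storagegateway.StorageGateway, arn string) (*storagegateway.NFSFileShareInfo, error) {
	input := &storagegateway.DescribeNFSFileSharesInput{
		FileShareARNList: aws.StringSlice([]string{arn}),
	}

	output, err := conn.DescribeNFSFileShares(input)

	// Map the service's "not found" message to a typed error so callers can
	// branch on tfresource.NotFound(err).
	if tfawserr.ErrMessageContains(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified file share was not found.") {
		return nil, &resource.NotFoundError{LastError: err, LastRequest: input}
	}

	if err != nil {
		return nil, err
	}

	if output == nil || len(output.NFSFileShareInfoList) == 0 || output.NFSFileShareInfoList[0] == nil {
		return nil, &resource.NotFoundError{LastRequest: input}
	}

	return output.NFSFileShareInfoList[0], nil
}
```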
d.Set("notification_policy", fileshare.NotificationPolicy) d.Set("object_acl", fileshare.ObjectACL) d.Set("path", fileshare.Path) d.Set("read_only", fileshare.ReadOnly) d.Set("requester_pays", fileshare.RequesterPays) d.Set("role_arn", fileshare.Role) d.Set("squash", fileshare.Squash) - d.Set("notification_policy", fileshare.NotificationPolicy) + d.Set("vpc_endpoint_dns_name", fileshare.VPCEndpointDNSName) tags := KeyValueTags(fileshare.Tags).IgnoreAWS().IgnoreConfig(ignoreTagsConfig) @@ -346,15 +344,9 @@ func resourceNFSFileShareRead(d *schema.ResourceData, meta interface{}) error { func resourceNFSFileShareUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).StorageGatewayConn - if d.HasChange("tags_all") { - o, n := d.GetChange("tags_all") - if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { - return fmt.Errorf("error updating tags: %w", err) - } - } - if d.HasChangesExcept("tags_all", "tags") { fileShareDefaults, err := expandStorageGatewayNfsFileShareDefaults(d.Get("nfs_file_share_defaults").([]interface{})) + if err != nil { return err } @@ -376,30 +368,39 @@ func resourceNFSFileShareUpdate(d *schema.ResourceData, meta interface{}) error input.AuditDestinationARN = aws.String(v.(string)) } - if v, ok := d.GetOk("kms_key_arn"); ok { - input.KMSKey = aws.String(v.(string)) - } - - if v, ok := d.GetOk("notification_policy"); ok { - input.NotificationPolicy = aws.String(v.(string)) + if v, ok := d.GetOk("cache_attributes"); ok { + input.CacheAttributes = expandStorageGatewayNfsFileShareCacheAttributes(v.([]interface{})) } if v, ok := d.GetOk("file_share_name"); ok { input.FileShareName = aws.String(v.(string)) } - if v, ok := d.GetOk("cache_attributes"); ok { - input.CacheAttributes = expandStorageGatewayNfsFileShareCacheAttributes(v.([]interface{})) + if v, ok := d.GetOk("kms_key_arn"); ok { + input.KMSKey = aws.String(v.(string)) + } + + if v, ok := d.GetOk("notification_policy"); ok { + input.NotificationPolicy = aws.String(v.(string)) } log.Printf("[DEBUG] Updating Storage Gateway NFS File Share: %s", input) _, err = conn.UpdateNFSFileShare(input) + if err != nil { - return fmt.Errorf("error updating Storage Gateway NFS File Share: %w", err) + return fmt.Errorf("error updating Storage Gateway NFS File Share (%s): %w", d.Id(), err) + } + + if _, err = waitNFSFileShareUpdated(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { + return fmt.Errorf("error waiting for Storage Gateway NFS File Share (%s) update: %w", d.Id(), err) } + } - if _, err = waitNFSFileShareAvailable(conn, d.Id(), d.Timeout(schema.TimeoutUpdate)); err != nil { - return fmt.Errorf("error waiting for Storage Gateway NFS File Share (%q) to be Available: %w", d.Id(), err) + if d.HasChange("tags_all") { + o, n := d.GetChange("tags_all") + + if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { + return fmt.Errorf("error updating tags: %w", err) } } @@ -409,24 +410,21 @@ func resourceNFSFileShareUpdate(d *schema.ResourceData, meta interface{}) error func resourceNFSFileShareDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).StorageGatewayConn - input := &storagegateway.DeleteFileShareInput{ + log.Printf("[DEBUG] Deleting Storage Gateway NFS File Share: %s", d.Id()) + _, err := conn.DeleteFileShare(&storagegateway.DeleteFileShareInput{ FileShareARN: aws.String(d.Id()), + }) + + if operationErrorCode(err) == operationErrCodeFileShareNotFound { + return nil } - log.Printf("[DEBUG] Deleting Storage Gateway 
NFS File Share: %s", input) - _, err := conn.DeleteFileShare(input) if err != nil { - if tfawserr.ErrMessageContains(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified file share was not found.") { - return nil - } - return fmt.Errorf("error deleting Storage Gateway NFS File Share: %w", err) + return fmt.Errorf("error deleting Storage Gateway NFS File Share (%s): %w", d.Id(), err) } if _, err = waitNFSFileShareDeleted(conn, d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil { - if tfresource.NotFound(err) { - return nil - } - return fmt.Errorf("error waiting for Storage Gateway NFS File Share (%q) to be deleted: %w", d.Id(), err) + return fmt.Errorf("error waiting for Storage Gateway NFS File Share (%s) delete: %w", d.Id(), err) } return nil diff --git a/internal/service/storagegateway/nfs_file_share_test.go b/internal/service/storagegateway/nfs_file_share_test.go index 558425fdabfc..30608519b410 100644 --- a/internal/service/storagegateway/nfs_file_share_test.go +++ b/internal/service/storagegateway/nfs_file_share_test.go @@ -5,15 +5,14 @@ import ( "regexp" "testing" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/storagegateway" - "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" "github.com/hashicorp/terraform-provider-aws/internal/conns" tfstoragegateway "github.com/hashicorp/terraform-provider-aws/internal/service/storagegateway" + "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) func TestAccStorageGatewayNFSFileShare_basic(t *testing.T) { @@ -32,12 +31,15 @@ func TestAccStorageGatewayNFSFileShare_basic(t *testing.T) { Steps: []resource.TestStep{ { Config: testAccNFSFileShareConfig_Required(rName), - Check: resource.ComposeTestCheckFunc( + Check: resource.ComposeAggregateTestCheckFunc( testAccCheckNFSFileShareExists(resourceName, &nfsFileShare), acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "storagegateway", regexp.MustCompile(`share/share-.+`)), + resource.TestCheckResourceAttr(resourceName, "bucket_region", ""), + resource.TestCheckResourceAttr(resourceName, "cache_attributes.#", "0"), resource.TestCheckResourceAttr(resourceName, "client_list.#", "1"), resource.TestCheckTypeSetElemAttr(resourceName, "client_list.*", "0.0.0.0/0"), resource.TestCheckResourceAttr(resourceName, "default_storage_class", "S3_STANDARD"), + resource.TestCheckResourceAttr(resourceName, "file_share_name", rName), resource.TestMatchResourceAttr(resourceName, "fileshare_id", regexp.MustCompile(`^share-`)), resource.TestCheckResourceAttrPair(resourceName, "gateway_arn", gatewayResourceName, "arn"), resource.TestCheckResourceAttr(resourceName, "guess_mime_type_enabled", "true"), @@ -45,16 +47,15 @@ func TestAccStorageGatewayNFSFileShare_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "kms_key_arn", ""), resource.TestCheckResourceAttrPair(resourceName, "location_arn", bucketResourceName, "arn"), resource.TestCheckResourceAttr(resourceName, "nfs_file_share_defaults.#", "0"), + resource.TestCheckResourceAttr(resourceName, "notification_policy", "{}"), resource.TestCheckResourceAttr(resourceName, "object_acl", storagegateway.ObjectACLPrivate), resource.TestMatchResourceAttr(resourceName, "path", regexp.MustCompile(`^/.+`)), 
resource.TestCheckResourceAttr(resourceName, "read_only", "false"), resource.TestCheckResourceAttr(resourceName, "requester_pays", "false"), resource.TestCheckResourceAttrPair(resourceName, "role_arn", iamResourceName, "arn"), resource.TestCheckResourceAttr(resourceName, "squash", "RootSquash"), - resource.TestCheckResourceAttr(resourceName, "cache_attributes.#", "0"), - resource.TestCheckResourceAttr(resourceName, "file_share_name", rName), - resource.TestCheckResourceAttr(resourceName, "notification_policy", "{}"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttr(resourceName, "vpc_endpoint_dns_name", ""), ), }, { @@ -668,22 +669,17 @@ func testAccCheckNFSFileShareDestroy(s *terraform.State) error { continue } - input := &storagegateway.DescribeNFSFileSharesInput{ - FileShareARNList: []*string{aws.String(rs.Primary.ID)}, - } + _, err := tfstoragegateway.FindNFSFileShareByARN(conn, rs.Primary.ID) - output, err := conn.DescribeNFSFileShares(input) + if tfresource.NotFound(err) { + continue + } if err != nil { - if tfawserr.ErrMessageContains(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified file share was not found.") { - continue - } return err } - if output != nil && len(output.NFSFileShareInfoList) > 0 && output.NFSFileShareInfoList[0] != nil { - return fmt.Errorf("Storage Gateway NFS File Share %q still exists", rs.Primary.ID) - } + return fmt.Errorf("Storage Gateway NFS File Share %s still exists", rs.Primary.ID) } return nil @@ -698,21 +694,14 @@ func testAccCheckNFSFileShareExists(resourceName string, nfsFileShare *storagega } conn := acctest.Provider.Meta().(*conns.AWSClient).StorageGatewayConn - input := &storagegateway.DescribeNFSFileSharesInput{ - FileShareARNList: []*string{aws.String(rs.Primary.ID)}, - } - output, err := conn.DescribeNFSFileShares(input) + output, err := tfstoragegateway.FindNFSFileShareByARN(conn, rs.Primary.ID) if err != nil { return err } - if output == nil || len(output.NFSFileShareInfoList) == 0 || output.NFSFileShareInfoList[0] == nil { - return fmt.Errorf("Storage Gateway NFS File Share %q does not exist", rs.Primary.ID) - } - - *nfsFileShare = *output.NFSFileShareInfoList[0] + *nfsFileShare = *output return nil } diff --git a/internal/service/storagegateway/status.go b/internal/service/storagegateway/status.go index 7dda02c0563f..0a2f24e06a1f 100644 --- a/internal/service/storagegateway/status.go +++ b/internal/service/storagegateway/status.go @@ -1,9 +1,6 @@ package storagegateway import ( - "fmt" - "log" - "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/storagegateway" "github.com/hashicorp/aws-sdk-go-base/v2/awsv1shim/v2/tfawserr" @@ -14,7 +11,6 @@ import ( const ( storageGatewayGatewayStatusConnected = "GatewayConnected" storediSCSIVolumeStatusNotFound = "NotFound" - nfsFileShareStatusNotFound = "NotFound" ) func statusStorageGatewayGateway(conn *storagegateway.StorageGateway, gatewayARN string) resource.StateRefreshFunc { @@ -83,32 +79,23 @@ func statusStorediSCSIVolume(conn *storagegateway.StorageGateway, volumeARN stri } } -func statusNFSFileShare(conn *storagegateway.StorageGateway, fileShareArn string) resource.StateRefreshFunc { +func statusNFSFileShare(conn *storagegateway.StorageGateway, arn string) resource.StateRefreshFunc { return func() (interface{}, string, error) { - input := &storagegateway.DescribeNFSFileSharesInput{ - FileShareARNList: []*string{aws.String(fileShareArn)}, - } + output, err := FindNFSFileShareByARN(conn, arn) - 
log.Printf("[DEBUG] Reading Storage Gateway NFS File Share: %s", input) - output, err := conn.DescribeNFSFileShares(input) - if err != nil { - if tfawserr.ErrMessageContains(err, storagegateway.ErrCodeInvalidGatewayRequestException, "The specified file share was not found.") { - return nil, nfsFileShareStatusNotFound, nil - } - return nil, "", fmt.Errorf("error reading Storage Gateway NFS File Share: %w", err) + if tfresource.NotFound(err) { + return nil, "", nil } - if output == nil || len(output.NFSFileShareInfoList) == 0 || output.NFSFileShareInfoList[0] == nil { - return nil, nfsFileShareStatusNotFound, nil + if err != nil { + return nil, "", err } - fileshare := output.NFSFileShareInfoList[0] - - return fileshare, aws.StringValue(fileshare.FileShareStatus), nil + return output, aws.StringValue(output.FileShareStatus), nil } } -func statussmBFileShare(conn *storagegateway.StorageGateway, arn string) resource.StateRefreshFunc { +func statusSMBFileShare(conn *storagegateway.StorageGateway, arn string) resource.StateRefreshFunc { return func() (interface{}, string, error) { output, err := FindSMBFileShareByARN(conn, arn) diff --git a/internal/service/storagegateway/wait.go b/internal/service/storagegateway/wait.go index a3304b2acc89..515376c5059d 100644 --- a/internal/service/storagegateway/wait.go +++ b/internal/service/storagegateway/wait.go @@ -75,12 +75,11 @@ func waitStorediSCSIVolumeAvailable(conn *storagegateway.StorageGateway, volumeA return nil, err } -// waitNFSFileShareAvailable waits for a NFS File Share to return Available -func waitNFSFileShareAvailable(conn *storagegateway.StorageGateway, fileShareArn string, timeout time.Duration) (*storagegateway.NFSFileShareInfo, error) { //nolint:unparam +func waitNFSFileShareCreated(conn *storagegateway.StorageGateway, arn string, timeout time.Duration) (*storagegateway.NFSFileShareInfo, error) { stateConf := &resource.StateChangeConf{ - Pending: []string{"BOOTSTRAPPING", "CREATING", "RESTORING", "UPDATING"}, - Target: []string{"AVAILABLE"}, - Refresh: statusNFSFileShare(conn, fileShareArn), + Pending: []string{fileShareStatusCreating}, + Target: []string{fileShareStatusAvailable}, + Refresh: statusNFSFileShare(conn, arn), Timeout: timeout, Delay: nfsFileShareAvailableDelay, } @@ -94,11 +93,11 @@ func waitNFSFileShareAvailable(conn *storagegateway.StorageGateway, fileShareArn return nil, err } -func waitNFSFileShareDeleted(conn *storagegateway.StorageGateway, fileShareArn string, timeout time.Duration) (*storagegateway.NFSFileShareInfo, error) { +func waitNFSFileShareDeleted(conn *storagegateway.StorageGateway, arn string, timeout time.Duration) (*storagegateway.NFSFileShareInfo, error) { stateConf := &resource.StateChangeConf{ - Pending: []string{"AVAILABLE", "DELETING", "FORCE_DELETING"}, + Pending: []string{fileShareStatusAvailable, fileShareStatusDeleting, fileShareStatusForceDeleting}, Target: []string{}, - Refresh: statusNFSFileShare(conn, fileShareArn), + Refresh: statusNFSFileShare(conn, arn), Timeout: timeout, Delay: nfsFileShareDeletedDelay, NotFoundChecks: 1, @@ -113,11 +112,29 @@ func waitNFSFileShareDeleted(conn *storagegateway.StorageGateway, fileShareArn s return nil, err } +func waitNFSFileShareUpdated(conn *storagegateway.StorageGateway, arn string, timeout time.Duration) (*storagegateway.NFSFileShareInfo, error) { + stateConf := &resource.StateChangeConf{ + Pending: []string{fileShareStatusUpdating}, + Target: []string{fileShareStatusAvailable}, + Refresh: statusNFSFileShare(conn, arn), + Timeout: timeout, + Delay: 
nfsFileShareAvailableDelay, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*storagegateway.NFSFileShareInfo); ok { + return output, err + } + + return nil, err +} + func waitSMBFileShareCreated(conn *storagegateway.StorageGateway, arn string, timeout time.Duration) (*storagegateway.SMBFileShareInfo, error) { stateConf := &resource.StateChangeConf{ Pending: []string{fileShareStatusCreating}, Target: []string{fileShareStatusAvailable}, - Refresh: statussmBFileShare(conn, arn), + Refresh: statusSMBFileShare(conn, arn), Timeout: timeout, Delay: smbFileShareAvailableDelay, } @@ -135,7 +152,7 @@ func waitSMBFileShareDeleted(conn *storagegateway.StorageGateway, arn string, ti stateConf := &resource.StateChangeConf{ Pending: []string{fileShareStatusAvailable, fileShareStatusDeleting, fileShareStatusForceDeleting}, Target: []string{}, - Refresh: statussmBFileShare(conn, arn), + Refresh: statusSMBFileShare(conn, arn), Timeout: timeout, Delay: smbFileShareDeletedDelay, NotFoundChecks: 1, @@ -154,7 +171,7 @@ func waitSMBFileShareUpdated(conn *storagegateway.StorageGateway, arn string, ti stateConf := &resource.StateChangeConf{ Pending: []string{fileShareStatusUpdating}, Target: []string{fileShareStatusAvailable}, - Refresh: statussmBFileShare(conn, arn), + Refresh: statusSMBFileShare(conn, arn), Timeout: timeout, Delay: smbFileShareAvailableDelay, } diff --git a/internal/service/xray/group.go b/internal/service/xray/group.go index 0a2b2e834fde..d9a9081254ea 100644 --- a/internal/service/xray/group.go +++ b/internal/service/xray/group.go @@ -40,6 +40,25 @@ func ResourceGroup() *schema.Resource { Type: schema.TypeString, Required: true, }, + "insights_configuration": { + Type: schema.TypeList, + Optional: true, + Computed: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "insights_enabled": { + Type: schema.TypeBool, + Required: true, + }, + "notifications_enabled": { + Type: schema.TypeBool, + Optional: true, + Computed: true, + }, + }, + }, + }, "tags": tftags.TagsSchema(), "tags_all": tftags.TagsSchemaComputed(), }, @@ -50,18 +69,25 @@ func resourceGroupCreate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).XRayConn defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig tags := defaultTagsConfig.MergeTags(tftags.New(d.Get("tags").(map[string]interface{}))) + + name := d.Get("group_name").(string) input := &xray.CreateGroupInput{ - GroupName: aws.String(d.Get("group_name").(string)), + GroupName: aws.String(name), FilterExpression: aws.String(d.Get("filter_expression").(string)), Tags: Tags(tags.IgnoreAWS()), } - out, err := conn.CreateGroup(input) + if v, ok := d.GetOk("insights_configuration"); ok { + input.InsightsConfiguration = expandInsightsConfig(v.([]interface{})) + } + + output, err := conn.CreateGroup(input) + if err != nil { - return fmt.Errorf("error creating XRay Group: %w", err) + return fmt.Errorf("error creating XRay Group (%s): %w", name, err) } - d.SetId(aws.StringValue(out.Group.GroupARN)) + d.SetId(aws.StringValue(output.Group.GroupARN)) return resourceGroupRead(d, meta) } @@ -77,12 +103,13 @@ func resourceGroupRead(d *schema.ResourceData, meta interface{}) error { group, err := conn.GetGroup(input) + if tfawserr.ErrMessageContains(err, xray.ErrCodeInvalidRequestException, "Group not found") { + log.Printf("[WARN] XRay Group (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + if err != nil { - if tfawserr.ErrMessageContains(err, 
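The waiters above replace string literals with shared fileShareStatus* constants defined elsewhere in the package. Matched against the literals they replace, the declarations are presumably:

```go
// Presumed declarations; each value matches a literal the old waiters used.
const (
	fileShareStatusAvailable     = "AVAILABLE"
	fileShareStatusCreating      = "CREATING"
	fileShareStatusDeleting      = "DELETING"
	fileShareStatusForceDeleting = "FORCE_DELETING"
	fileShareStatusUpdating      = "UPDATING"
)
```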
xray.ErrCodeInvalidRequestException, "Group not found") { - log.Printf("[WARN] XRay Group (%s) not found, removing from state", d.Id()) - d.SetId("") - return nil - } return fmt.Errorf("error reading XRay Group (%s): %w", d.Id(), err) } @@ -90,8 +117,12 @@ func resourceGroupRead(d *schema.ResourceData, meta interface{}) error { d.Set("arn", arn) d.Set("group_name", group.Group.GroupName) d.Set("filter_expression", group.Group.FilterExpression) + if err := d.Set("insights_configuration", flattenInsightsConfig(group.Group.InsightsConfiguration)); err != nil { + return fmt.Errorf("error setting insights_configuration: %w", err) + } tags, err := ListTags(conn, arn) + if err != nil { return fmt.Errorf("error listing tags for Xray Group (%q): %s", d.Id(), err) } @@ -113,13 +144,19 @@ func resourceGroupRead(d *schema.ResourceData, meta interface{}) error { func resourceGroupUpdate(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).XRayConn - if d.HasChange("filter_expression") { - input := &xray.UpdateGroupInput{ - GroupARN: aws.String(d.Id()), - FilterExpression: aws.String(d.Get("filter_expression").(string)), + if d.HasChangesExcept("tags", "tags_all") { + input := &xray.UpdateGroupInput{GroupARN: aws.String(d.Id())} + + if v, ok := d.GetOk("filter_expression"); ok { + input.FilterExpression = aws.String(v.(string)) + } + + if v, ok := d.GetOk("insights_configuration"); ok { + input.InsightsConfiguration = expandInsightsConfig(v.([]interface{})) } _, err := conn.UpdateGroup(input) + if err != nil { return fmt.Errorf("error updating XRay Group (%s): %w", d.Id(), err) } @@ -127,6 +164,7 @@ func resourceGroupUpdate(d *schema.ResourceData, meta interface{}) error { if d.HasChange("tags_all") { o, n := d.GetChange("tags_all") + if err := UpdateTags(conn, d.Get("arn").(string), o, n); err != nil { return fmt.Errorf("error updating tags: %w", err) } @@ -139,14 +177,48 @@ func resourceGroupDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).XRayConn log.Printf("[INFO] Deleting XRay Group: %s", d.Id()) - - params := &xray.DeleteGroupInput{ + _, err := conn.DeleteGroup(&xray.DeleteGroupInput{ GroupARN: aws.String(d.Id()), - } - _, err := conn.DeleteGroup(params) + }) + if err != nil { return fmt.Errorf("error deleting XRay Group (%s): %w", d.Id(), err) } return nil } + +func expandInsightsConfig(l []interface{}) *xray.InsightsConfiguration { + if len(l) == 0 || l[0] == nil { + return nil + } + + m := l[0].(map[string]interface{}) + config := xray.InsightsConfiguration{} + + if v, ok := m["insights_enabled"]; ok { + config.InsightsEnabled = aws.Bool(v.(bool)) + } + if v, ok := m["notifications_enabled"]; ok { + config.NotificationsEnabled = aws.Bool(v.(bool)) + } + + return &config +} + +func flattenInsightsConfig(config *xray.InsightsConfiguration) []interface{} { + if config == nil { + return nil + } + + m := map[string]interface{}{} + + if config.InsightsEnabled != nil { + m["insights_enabled"] = config.InsightsEnabled + } + if config.NotificationsEnabled != nil { + m["notifications_enabled"] = config.NotificationsEnabled + } + + return []interface{}{m} +} diff --git a/internal/service/xray/group_test.go b/internal/service/xray/group_test.go index b73dede8c362..d579004b9733 100644 --- a/internal/service/xray/group_test.go +++ b/internal/service/xray/group_test.go @@ -34,6 +34,7 @@ func TestAccXRayGroup_basic(t *testing.T) { acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "xray", regexp.MustCompile(`group/.+`)), 
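expandInsightsConfig above follows the usual pattern for a MaxItems = 1 block: Terraform delivers the block as a one-element []interface{} wrapping a map. A self-contained illustration of that unpacking, with names chosen for the example rather than taken from the provider:

```go
package main

import "fmt"

// expand mirrors the shape of expandInsightsConfig: guard against an empty
// or nil list, then index into the single map element.
func expand(l []interface{}) map[string]interface{} {
	if len(l) == 0 || l[0] == nil {
		return nil
	}

	return l[0].(map[string]interface{})
}

func main() {
	raw := []interface{}{map[string]interface{}{
		"insights_enabled":      true,
		"notifications_enabled": false,
	}}

	m := expand(raw)
	fmt.Println(m["insights_enabled"], m["notifications_enabled"]) // true false
}
```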
resource.TestCheckResourceAttr(resourceName, "group_name", rName), resource.TestCheckResourceAttr(resourceName, "filter_expression", "responsetime > 5"), + resource.TestCheckResourceAttr(resourceName, "insights_configuration.#", "1"), // Computed. ), }, { @@ -48,6 +49,49 @@ func TestAccXRayGroup_basic(t *testing.T) { acctest.MatchResourceAttrRegionalARN(resourceName, "arn", "xray", regexp.MustCompile(`group/.+`)), resource.TestCheckResourceAttr(resourceName, "group_name", rName), resource.TestCheckResourceAttr(resourceName, "filter_expression", "responsetime > 10"), + resource.TestCheckResourceAttr(resourceName, "insights_configuration.#", "1"), + ), + }, + }, + }) +} + +func TestAccXRayGroup_insights(t *testing.T) { + var Group xray.Group + resourceName := "aws_xray_group.test" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, xray.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckGroupDestroy, + Steps: []resource.TestStep{ + { + Config: testAccGroupBasicInsightsConfig(rName, "responsetime > 5", true, true), + Check: resource.ComposeTestCheckFunc( + testAccCheckXrayGroupExists(resourceName, &Group), + resource.TestCheckResourceAttr(resourceName, "insights_configuration.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "insights_configuration.*", map[string]string{ + "insights_enabled": "true", + "notifications_enabled": "true", + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccGroupBasicInsightsConfig(rName, "responsetime > 10", false, false), + Check: resource.ComposeTestCheckFunc( + testAccCheckXrayGroupExists(resourceName, &Group), + resource.TestCheckResourceAttr(resourceName, "insights_configuration.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "insights_configuration.*", map[string]string{ + "insights_enabled": "false", + "notifications_enabled": "false", + }), ), }, }, @@ -214,3 +258,17 @@ resource "aws_xray_group" "test" { } `, rName, tagKey1, tagValue1, tagKey2, tagValue2) } + +func testAccGroupBasicInsightsConfig(rName, expression string, insightsEnabled bool, notificationsEnabled bool) string { + return fmt.Sprintf(` +resource "aws_xray_group" "test" { + group_name = %[1]q + filter_expression = %[2]q + + insights_configuration { + insights_enabled = %[3]t + notifications_enabled = %[4]t + } +} +`, rName, expression, insightsEnabled, notificationsEnabled) +} diff --git a/internal/sweep/sweep_test.go b/internal/sweep/sweep_test.go index b675b43d6707..aeb8bea78879 100644 --- a/internal/sweep/sweep_test.go +++ b/internal/sweep/sweep_test.go @@ -42,6 +42,7 @@ import ( _ "github.com/hashicorp/terraform-provider-aws/internal/service/datasync" _ "github.com/hashicorp/terraform-provider-aws/internal/service/dax" _ "github.com/hashicorp/terraform-provider-aws/internal/service/directconnect" + _ "github.com/hashicorp/terraform-provider-aws/internal/service/dlm" _ "github.com/hashicorp/terraform-provider-aws/internal/service/dms" _ "github.com/hashicorp/terraform-provider-aws/internal/service/ds" _ "github.com/hashicorp/terraform-provider-aws/internal/service/dynamodb" @@ -69,6 +70,7 @@ import ( _ "github.com/hashicorp/terraform-provider-aws/internal/service/imagebuilder" _ "github.com/hashicorp/terraform-provider-aws/internal/service/iot" _ "github.com/hashicorp/terraform-provider-aws/internal/service/kafka" + _ 
"github.com/hashicorp/terraform-provider-aws/internal/service/kafkaconnect" _ "github.com/hashicorp/terraform-provider-aws/internal/service/keyspaces" _ "github.com/hashicorp/terraform-provider-aws/internal/service/kinesis" _ "github.com/hashicorp/terraform-provider-aws/internal/service/kinesisanalytics" diff --git a/internal/tfresource/retry.go b/internal/tfresource/retry.go index 9042a3ce1282..eb1c51194543 100644 --- a/internal/tfresource/retry.go +++ b/internal/tfresource/retry.go @@ -73,6 +73,22 @@ func RetryWhenAWSErrCodeEquals(timeout time.Duration, f func() (interface{}, err return RetryWhenAWSErrCodeEqualsContext(context.Background(), timeout, f, codes...) } +// RetryWhenAWSErrMessageContainsContext retries the specified function when it returns an AWS error containing the specified message. +func RetryWhenAWSErrMessageContainsContext(ctx context.Context, timeout time.Duration, f func() (interface{}, error), code, message string) (interface{}, error) { + return RetryWhenContext(ctx, timeout, f, func(err error) (bool, error) { + if tfawserr.ErrMessageContains(err, code, message) { + return true, err + } + + return false, err + }) +} + +// RetryWhenAWSErrMessageContains retries the specified function when it returns an AWS error containing the specified message. +func RetryWhenAWSErrMessageContains(timeout time.Duration, f func() (interface{}, error), code, message string) (interface{}, error) { + return RetryWhenAWSErrMessageContainsContext(context.Background(), timeout, f, code, message) +} + var resourceFoundError = errors.New(`found resource`) // RetryUntilNotFoundContext retries the specified function until it returns a resource.NotFoundError. diff --git a/internal/tfresource/retry_test.go b/internal/tfresource/retry_test.go index eb8d7049e362..8eb11c1c8b31 100644 --- a/internal/tfresource/retry_test.go +++ b/internal/tfresource/retry_test.go @@ -75,6 +75,68 @@ func TestRetryWhenAWSErrCodeEquals(t *testing.T) { } } +func TestRetryWhenAWSErrMessageContains(t *testing.T) { + var retryCount int32 + + testCases := []struct { + Name string + F func() (interface{}, error) + ExpectError bool + }{ + { + Name: "no error", + F: func() (interface{}, error) { + return nil, nil + }, + }, + { + Name: "non-retryable other error", + F: func() (interface{}, error) { + return nil, errors.New("TestCode") + }, + ExpectError: true, + }, + { + Name: "non-retryable AWS error", + F: func() (interface{}, error) { + return nil, awserr.New("TestCode1", "Testing", nil) + }, + ExpectError: true, + }, + { + Name: "retryable AWS error timeout", + F: func() (interface{}, error) { + return nil, awserr.New("TestCode1", "TestMessage1", nil) + }, + ExpectError: true, + }, + { + Name: "retryable AWS error success", + F: func() (interface{}, error) { + if atomic.CompareAndSwapInt32(&retryCount, 0, 1) { + return nil, awserr.New("TestCode1", "TestMessage1", nil) + } + + return nil, nil + }, + }, + } + + for _, testCase := range testCases { + t.Run(testCase.Name, func(t *testing.T) { + retryCount = 0 + + _, err := tfresource.RetryWhenAWSErrMessageContains(5*time.Second, testCase.F, "TestCode1", "TestMessage1") + + if testCase.ExpectError && err == nil { + t.Fatal("expected error") + } else if !testCase.ExpectError && err != nil { + t.Fatalf("unexpected error: %s", err) + } + }) + } +} + func TestRetryWhenNewResourceNotFound(t *testing.T) { var retryCount int32 diff --git a/internal/verify/validate.go b/internal/verify/validate.go index 47624f85edaf..4c7ebd098a92 100644 --- a/internal/verify/validate.go +++ 
b/internal/verify/validate.go @@ -14,7 +14,7 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" ) -var accountIDRegexp = regexp.MustCompile(`^(aws|\d{12})$`) +var accountIDRegexp = regexp.MustCompile(`^(aws|aws-managed|\d{12})$`) var partitionRegexp = regexp.MustCompile(`^aws(-[a-z]+)*$`) var regionRegexp = regexp.MustCompile(`^[a-z]{2}(-[a-z]+)+-\d$`) diff --git a/names/names.go b/names/names.go index f638a25157f3..2b79448b617a 100644 --- a/names/names.go +++ b/names/names.go @@ -192,6 +192,7 @@ import ( "github.com/aws/aws-sdk-go/service/networkfirewall" "github.com/aws/aws-sdk-go/service/networkmanager" "github.com/aws/aws-sdk-go/service/nimblestudio" + "github.com/aws/aws-sdk-go/service/opensearchservice" "github.com/aws/aws-sdk-go/service/opsworks" "github.com/aws/aws-sdk-go/service/opsworkscm" "github.com/aws/aws-sdk-go/service/organizations" @@ -471,6 +472,7 @@ const ( NetworkFirewall = "networkfirewall" NetworkManager = "networkmanager" NimbleStudio = "nimblestudio" + OpenSearch = "opensearch" OpsWorks = "opsworks" OpsWorksCM = "opsworkscm" Organizations = "organizations" @@ -768,6 +770,7 @@ func init() { serviceData[NetworkFirewall] = &ServiceDatum{AWSClientName: "NetworkFirewall", AWSServiceName: networkfirewall.ServiceName, AWSEndpointsID: networkfirewall.EndpointsID, AWSServiceID: networkfirewall.ServiceID, ProviderNameUpper: "NetworkFirewall", HCLKeys: []string{"networkfirewall"}} serviceData[NetworkManager] = &ServiceDatum{AWSClientName: "NetworkManager", AWSServiceName: networkmanager.ServiceName, AWSEndpointsID: networkmanager.EndpointsID, AWSServiceID: networkmanager.ServiceID, ProviderNameUpper: "NetworkManager", HCLKeys: []string{"networkmanager"}} serviceData[NimbleStudio] = &ServiceDatum{AWSClientName: "NimbleStudio", AWSServiceName: nimblestudio.ServiceName, AWSEndpointsID: nimblestudio.EndpointsID, AWSServiceID: nimblestudio.ServiceID, ProviderNameUpper: "NimbleStudio", HCLKeys: []string{"nimblestudio"}} + serviceData[OpenSearch] = &ServiceDatum{AWSClientName: "OpenSearchService", AWSServiceName: opensearchservice.ServiceName, AWSEndpointsID: opensearchservice.EndpointsID, AWSServiceID: opensearchservice.ServiceID, ProviderNameUpper: "OpenSearch", HCLKeys: []string{"opensearch", "opensearchservice"}} serviceData[OpsWorks] = &ServiceDatum{AWSClientName: "OpsWorks", AWSServiceName: opsworks.ServiceName, AWSEndpointsID: opsworks.EndpointsID, AWSServiceID: opsworks.ServiceID, ProviderNameUpper: "OpsWorks", HCLKeys: []string{"opsworks"}} serviceData[OpsWorksCM] = &ServiceDatum{AWSClientName: "OpsWorksCM", AWSServiceName: opsworkscm.ServiceName, AWSEndpointsID: opsworkscm.EndpointsID, AWSServiceID: opsworkscm.ServiceID, ProviderNameUpper: "OpsWorksCM", HCLKeys: []string{"opsworkscm"}} serviceData[Organizations] = &ServiceDatum{AWSClientName: "Organizations", AWSServiceName: organizations.ServiceName, AWSEndpointsID: organizations.EndpointsID, AWSServiceID: organizations.ServiceID, ProviderNameUpper: "Organizations", HCLKeys: []string{"organizations"}} diff --git a/providerlint/go.mod b/providerlint/go.mod index 075edb9e9386..0fd02a42304b 100644 --- a/providerlint/go.mod +++ b/providerlint/go.mod @@ -3,8 +3,8 @@ module github.com/hashicorp/terraform-provider-aws/providerlint go 1.16 require ( - github.com/aws/aws-sdk-go v1.43.21 + github.com/aws/aws-sdk-go v1.43.34 github.com/bflad/tfproviderlint v0.28.1 - github.com/hashicorp/terraform-plugin-sdk/v2 v2.12.0 + github.com/hashicorp/terraform-plugin-sdk/v2 v2.13.0 golang.org/x/tools 
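The widened accountIDRegexp above now admits the "aws-managed" pseudo-account used in some Amazon-managed ARNs, alongside "aws" and ordinary 12-digit account IDs. A quick demonstration of what the pattern accepts and rejects:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Pattern copied from the internal/verify/validate.go change above.
	accountIDRegexp := regexp.MustCompile(`^(aws|aws-managed|\d{12})$`)

	for _, v := range []string{"123456789012", "aws", "aws-managed", "12345", "aws-foo"} {
		fmt.Printf("%-14s valid: %t\n", v, accountIDRegexp.MatchString(v))
	}
}
```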
v0.0.0-20201028111035-eafbe7b904eb ) diff --git a/providerlint/go.sum b/providerlint/go.sum index 81999c98e5fd..75457eb7435f 100644 --- a/providerlint/go.sum +++ b/providerlint/go.sum @@ -65,8 +65,8 @@ github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkY github.com/aws/aws-sdk-go v1.15.78/go.mod h1:E3/ieXAlvM0XWO57iftYVDLLvQ824smPP3ATZkfNZeM= github.com/aws/aws-sdk-go v1.25.3/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo= github.com/aws/aws-sdk-go v1.37.0/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro= -github.com/aws/aws-sdk-go v1.43.21 h1:E4S2eX3d2gKJyI/ISrcIrSwXwqjIvCK85gtBMt4sAPE= -github.com/aws/aws-sdk-go v1.43.21/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= +github.com/aws/aws-sdk-go v1.43.34 h1:8+P+773CDgQqN1eLH1QHT6XgXHUbME3sAbDGszzjajY= +github.com/aws/aws-sdk-go v1.43.34/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= github.com/bflad/gopaniccheck v0.1.0 h1:tJftp+bv42ouERmUMWLoUn/5bi/iQZjHPznM00cP/bU= github.com/bflad/gopaniccheck v0.1.0/go.mod h1:ZCj2vSr7EqVeDaqVsWN4n2MwdROx1YL+LFo47TSWtsA= github.com/bflad/tfproviderlint v0.28.1 h1:7f54/ynV6/lK5/1EyG7tHtc4sMdjJSEFGjZNRJKwBs8= @@ -245,8 +245,8 @@ github.com/hashicorp/terraform-plugin-log v0.3.0/go.mod h1:EjueSP/HjlyFAsDqt+okp github.com/hashicorp/terraform-plugin-sdk v1.16.1 h1:G2iK7MBT4LuNcVASPXWS1ciBUuIm8oIY0zRfCmi3xy4= github.com/hashicorp/terraform-plugin-sdk v1.16.1/go.mod h1:KSsGcuZ1JRqnmYzz+sWIiUwNvJkzXbGRIdefwFfOdyY= github.com/hashicorp/terraform-plugin-sdk/v2 v2.5.0/go.mod h1:z+cMZ0iswzZOahBJ3XmNWgWkVnAd2bl8g+FhyyuPDH4= -github.com/hashicorp/terraform-plugin-sdk/v2 v2.12.0 h1:rjJxyLUVA180BG0ZXTOree4x2RVvo2jigdYoT2rw5j0= -github.com/hashicorp/terraform-plugin-sdk/v2 v2.12.0/go.mod h1:TPjMXvpPNWagHzYOmVPzzRRIBTuaLVukR+esL08tgzg= +github.com/hashicorp/terraform-plugin-sdk/v2 v2.13.0 h1:MyzzWWHOQgYCsoJZEC9YgDqyZoG8pftt2pcYG30A+Do= +github.com/hashicorp/terraform-plugin-sdk/v2 v2.13.0/go.mod h1:TPjMXvpPNWagHzYOmVPzzRRIBTuaLVukR+esL08tgzg= github.com/hashicorp/terraform-plugin-test/v2 v2.1.3/go.mod h1:pmaUHiUtDL/8Mz3FuyZ/vRDb0LpaOWQjVRW9ORF7FHs= github.com/hashicorp/terraform-registry-address v0.0.0-20210412075316-9b2996cce896 h1:1FGtlkJw87UsTMg5s8jrekrHmUPUJaMcu6ELiVhQrNw= github.com/hashicorp/terraform-registry-address v0.0.0-20210412075316-9b2996cce896/go.mod h1:bzBPnUIkI0RxauU8Dqo+2KrZZ28Cf48s8V6IHt3p4co= diff --git a/providerlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/providerlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go index f70c832b193c..bbdf14987836 100644 --- a/providerlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go +++ b/providerlint/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go @@ -2374,6 +2374,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-2", }: endpoint{}, + endpointKey{ + Region: "ca-central-1", + }: endpoint{}, endpointKey{ Region: "eu-central-1", }: endpoint{}, @@ -5222,6 +5225,147 @@ var awsPartition = partition{ }: endpoint{}, }, }, + "data-ats.iot": service{ + Defaults: endpointDefaults{ + defaultKey{}: endpoint{ + Protocols: []string{"https"}, + CredentialScope: credentialScope{ + Service: "iotdata", + }, + }, + }, + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "ap-east-1", + }: endpoint{}, + endpointKey{ + Region: "ap-northeast-1", + }: endpoint{}, + endpointKey{ + Region: "ap-northeast-2", + }: endpoint{}, + endpointKey{ + Region: "ap-south-1", + }: endpoint{}, + endpointKey{ + Region: "ap-southeast-1", + }: endpoint{}, + endpointKey{ + 
Region: "ap-southeast-2", + }: endpoint{}, + endpointKey{ + Region: "ca-central-1", + }: endpoint{}, + endpointKey{ + Region: "ca-central-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "data.iot-fips.ca-central-1.amazonaws.com", + }, + endpointKey{ + Region: "eu-central-1", + }: endpoint{}, + endpointKey{ + Region: "eu-north-1", + }: endpoint{}, + endpointKey{ + Region: "eu-west-1", + }: endpoint{}, + endpointKey{ + Region: "eu-west-2", + }: endpoint{}, + endpointKey{ + Region: "eu-west-3", + }: endpoint{}, + endpointKey{ + Region: "fips-ca-central-1", + }: endpoint{ + Hostname: "data.iot-fips.ca-central-1.amazonaws.com", + CredentialScope: credentialScope{ + Service: "iotdata", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-east-1", + }: endpoint{ + Hostname: "data.iot-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Service: "iotdata", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-east-2", + }: endpoint{ + Hostname: "data.iot-fips.us-east-2.amazonaws.com", + CredentialScope: credentialScope{ + Service: "iotdata", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-west-1", + }: endpoint{ + Hostname: "data.iot-fips.us-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Service: "iotdata", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-west-2", + }: endpoint{ + Hostname: "data.iot-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Service: "iotdata", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "me-south-1", + }: endpoint{}, + endpointKey{ + Region: "sa-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "data.iot-fips.us-east-1.amazonaws.com", + }, + endpointKey{ + Region: "us-east-2", + }: endpoint{}, + endpointKey{ + Region: "us-east-2", + Variant: fipsVariant, + }: endpoint{ + Hostname: "data.iot-fips.us-east-2.amazonaws.com", + }, + endpointKey{ + Region: "us-west-1", + }: endpoint{}, + endpointKey{ + Region: "us-west-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "data.iot-fips.us-west-1.amazonaws.com", + }, + endpointKey{ + Region: "us-west-2", + }: endpoint{}, + endpointKey{ + Region: "us-west-2", + Variant: fipsVariant, + }: endpoint{ + Hostname: "data.iot-fips.us-west-2.amazonaws.com", + }, + }, + }, "data.jobs.iot": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -6481,6 +6625,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-2", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-3", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -9563,6 +9710,13 @@ var awsPartition = partition{ }: endpoint{}, }, }, + "gamesparks": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "us-east-1", + }: endpoint{}, + }, + }, "glacier": service{ Defaults: endpointDefaults{ defaultKey{}: endpoint{ @@ -11006,12 +11160,42 @@ var awsPartition = partition{ endpointKey{ Region: "eu-west-1", }: endpoint{}, + endpointKey{ + Region: "fips-us-east-1", + }: endpoint{ + Hostname: "iotsitewise-fips.us-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-east-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-west-2", + }: endpoint{ + Hostname: "iotsitewise-fips.us-west-2.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-west-2", + }, + Deprecated: boxedTrue, + }, endpointKey{ Region: "us-east-1", }: endpoint{}, + 
endpointKey{ + Region: "us-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "iotsitewise-fips.us-east-1.amazonaws.com", + }, endpointKey{ Region: "us-west-2", }: endpoint{}, + endpointKey{ + Region: "us-west-2", + Variant: fipsVariant, + }: endpoint{ + Hostname: "iotsitewise-fips.us-west-2.amazonaws.com", + }, }, }, "iotthingsgraph": service{ @@ -15112,9 +15296,15 @@ var awsPartition = partition{ }, "profile": service{ Endpoints: serviceEndpoints{ + endpointKey{ + Region: "af-south-1", + }: endpoint{}, endpointKey{ Region: "ap-northeast-1", }: endpoint{}, + endpointKey{ + Region: "ap-northeast-2", + }: endpoint{}, endpointKey{ Region: "ap-southeast-1", }: endpoint{}, @@ -15317,6 +15507,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-2", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-3", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -18680,6 +18873,9 @@ var awsPartition = partition{ endpointKey{ Region: "ap-southeast-2", }: endpoint{}, + endpointKey{ + Region: "ap-southeast-3", + }: endpoint{}, endpointKey{ Region: "ca-central-1", }: endpoint{}, @@ -22411,6 +22607,16 @@ var awscnPartition = partition{ }, }, }, + "cloudcontrolapi": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "cn-north-1", + }: endpoint{}, + endpointKey{ + Region: "cn-northwest-1", + }: endpoint{}, + }, + }, "cloudformation": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -22530,6 +22736,24 @@ var awscnPartition = partition{ }: endpoint{}, }, }, + "data-ats.iot": service{ + Defaults: endpointDefaults{ + defaultKey{}: endpoint{ + Protocols: []string{"https"}, + CredentialScope: credentialScope{ + Service: "iotdata", + }, + }, + }, + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "cn-north-1", + }: endpoint{}, + endpointKey{ + Region: "cn-northwest-1", + }: endpoint{}, + }, + }, "data.jobs.iot": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -24783,6 +25007,54 @@ var awsusgovPartition = partition{ }: endpoint{}, }, }, + "data-ats.iot": service{ + Defaults: endpointDefaults{ + defaultKey{}: endpoint{ + Protocols: []string{"https"}, + CredentialScope: credentialScope{ + Service: "iotdata", + }, + }, + }, + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "fips-us-gov-east-1", + }: endpoint{ + Hostname: "data.iot-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Service: "iotdata", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "fips-us-gov-west-1", + }: endpoint{ + Hostname: "data.iot-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Service: "iotdata", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "us-gov-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-gov-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "data.iot-fips.us-gov-east-1.amazonaws.com", + }, + endpointKey{ + Region: "us-gov-west-1", + }: endpoint{}, + endpointKey{ + Region: "us-gov-west-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "data.iot-fips.us-gov-west-1.amazonaws.com", + }, + }, + }, "data.jobs.iot": service{ Endpoints: serviceEndpoints{ endpointKey{ @@ -26155,9 +26427,24 @@ var awsusgovPartition = partition{ }, "iotsitewise": service{ Endpoints: serviceEndpoints{ + endpointKey{ + Region: "fips-us-gov-west-1", + }: endpoint{ + Hostname: "iotsitewise-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + Deprecated: boxedTrue, + }, endpointKey{ Region: "us-gov-west-1", }: endpoint{}, + endpointKey{ + 
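The endpoint metadata added above is consumed through the SDK's resolver. A hedged sketch using the aws-sdk-go v1 endpoints API to resolve the standard and FIPS variants of the new "data-ats.iot" service (region choice is illustrative):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/endpoints"
)

func main() {
	r := endpoints.DefaultResolver()

	// Standard variant.
	std, _ := r.EndpointFor("data-ats.iot", "us-east-1")

	// FIPS variant, selected via the resolver options.
	fips, _ := r.EndpointFor("data-ats.iot", "us-east-1", func(o *endpoints.Options) {
		o.UseFIPSEndpoint = endpoints.FIPSEndpointStateEnabled
	})

	fmt.Println(std.URL)  // standard endpoint
	fmt.Println(fips.URL) // should resolve to the data.iot-fips.us-east-1.amazonaws.com variant above
}
```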
Region: "us-gov-west-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "iotsitewise-fips.us-gov-west-1.amazonaws.com", + }, }, }, "kafka": service{ @@ -26425,6 +26712,46 @@ var awsusgovPartition = partition{ }, }, }, + "meetings-chime": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "us-gov-east-1", + }: endpoint{}, + endpointKey{ + Region: "us-gov-east-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "meetings-chime-fips.us-gov-east-1.amazonaws.com", + }, + endpointKey{ + Region: "us-gov-east-1-fips", + }: endpoint{ + Hostname: "meetings-chime-fips.us-gov-east-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-east-1", + }, + Deprecated: boxedTrue, + }, + endpointKey{ + Region: "us-gov-west-1", + }: endpoint{}, + endpointKey{ + Region: "us-gov-west-1", + Variant: fipsVariant, + }: endpoint{ + Hostname: "meetings-chime-fips.us-gov-west-1.amazonaws.com", + }, + endpointKey{ + Region: "us-gov-west-1-fips", + }: endpoint{ + Hostname: "meetings-chime-fips.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + Deprecated: boxedTrue, + }, + }, + }, "metering.marketplace": service{ Defaults: endpointDefaults{ defaultKey{}: endpoint{ @@ -29116,6 +29443,13 @@ var awsisoPartition = partition{ }: endpoint{}, }, }, + "tagging": service{ + Endpoints: serviceEndpoints{ + endpointKey{ + Region: "us-iso-east-1", + }: endpoint{}, + }, + }, "transcribe": service{ Defaults: endpointDefaults{ defaultKey{}: endpoint{ diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/plugin.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/plugin.go index df901bb525a6..013adbe9e359 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/plugin.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/plugin.go @@ -43,9 +43,6 @@ func runProviderCommand(ctx context.Context, t testing.T, f func() error, wd *pl // plugins. os.Setenv("PLUGIN_PROTOCOL_VERSIONS", "5") - // Terraform doesn't need to reach out to Checkpoint during testing. - wd.Setenv("CHECKPOINT_DISABLE", "1") - // Terraform 0.12.X and 0.13.X+ treat namespaceless providers // differently in terms of what namespace they default to. So we're // going to set both variations, as we don't know which version of diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing.go index 9bb40938ce5c..79eb30eeac8a 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing.go @@ -99,9 +99,9 @@ func AddTestSweepers(name string, s *Sweeper) { // // Sweeper flags added to the "go test" command: // -// -sweep: Comma-separated list of locations/regions to run available sweepers. -// -sweep-allow-failues: Enable to allow other sweepers to run after failures. -// -sweep-run: Comma-separated list of resource type sweepers to run. Defaults +// -sweep: Comma-separated list of locations/regions to run available sweepers. +// -sweep-allow-failues: Enable to allow other sweepers to run after failures. +// -sweep-run: Comma-separated list of resource type sweepers to run. Defaults // to all sweepers. 
// // Refer to the Env prefixed constants for environment variables that further @@ -573,16 +573,16 @@ func ParallelTest(t testing.T, c TestCase) { // This function will automatically find or install Terraform CLI into a // temporary directory, based on the following behavior: // -// - If the TF_ACC_TERRAFORM_PATH environment variable is set, that Terraform -// CLI binary is used if found and executable. If not found or executable, -// an error will be returned unless the TF_ACC_TERRAFORM_VERSION environment -// variable is also set. -// - If the TF_ACC_TERRAFORM_VERSION environment variable is set, install and -// use that Terraform CLI version. -// - If both the TF_ACC_TERRAFORM_PATH and TF_ACC_TERRAFORM_VERSION environment -// variables are unset, perform a lookup for the Terraform CLI binary based -// on the operating system PATH. If not found, the latest available Terraform -// CLI binary is installed. +// - If the TF_ACC_TERRAFORM_PATH environment variable is set, that +// Terraform CLI binary is used if found and executable. If not found or +// executable, an error will be returned unless the +// TF_ACC_TERRAFORM_VERSION environment variable is also set. +// - If the TF_ACC_TERRAFORM_VERSION environment variable is set, install +// and use that Terraform CLI version. +// - If both the TF_ACC_TERRAFORM_PATH and TF_ACC_TERRAFORM_VERSION +// environment variables are unset, perform a lookup for the Terraform +// CLI binary based on the operating system PATH. If not found, the +// latest available Terraform CLI binary is installed. // // Refer to the Env prefixed constants for additional details about these // environment variables, and others, that control testing functionality. @@ -718,6 +718,9 @@ func testResource(c TestStep, state *terraform.State) (*terraform.ResourceState, // // As a user testing their provider, this lets you decompose your checks // into smaller pieces more easily. +// +// ComposeTestCheckFunc returns immediately on the first TestCheckFunc error. +// To aggregrate all errors, use ComposeAggregateTestCheckFunc instead. func ComposeTestCheckFunc(fs ...TestCheckFunc) TestCheckFunc { return func(s *terraform.State) error { for i, f := range fs { @@ -752,10 +755,48 @@ func ComposeAggregateTestCheckFunc(fs ...TestCheckFunc) TestCheckFunc { } } -// TestCheckResourceAttrSet is a TestCheckFunc which ensures a value -// exists in state for the given name/key combination. It is useful when -// testing that computed values were set, when it is not possible to -// know ahead of time what the values will be. +// TestCheckResourceAttrSet ensures any value exists in the state for the +// given name and key combination. The opposite of this TestCheckFunc is +// TestCheckNoResourceAttr. State value checking is only recommended for +// testing Computed attributes and attribute defaults. +// +// Use this as a last resort when a more specific TestCheckFunc cannot be +// implemented, such as: +// +// - TestCheckResourceAttr: Equality checking of non-TypeSet state value. +// - TestCheckResourceAttrPair: Equality checking of non-TypeSet state +// value, based on another state value. +// - TestCheckTypeSet*: Equality checking of TypeSet state values. +// - TestMatchResourceAttr: Regular expression checking of non-TypeSet +// state value. +// - TestMatchTypeSet*: Regular expression checking on TypeSet state values. +// +// For managed resources, the name parameter is combination of the resource +// type, a period (.), and the name label. 
The name for the below example +// configuration would be "myprovider_thing.example". +// +// resource "myprovider_thing" "example" { ... } +// +// For data sources, the name parameter is a combination of the keyword "data", +// a period (.), the data source type, a period (.), and the name label. The +// name for the below example configuration would be +// "data.myprovider_thing.example". +// +// data "myprovider_thing" "example" { ... } +// +// The key parameter is an attribute path in Terraform CLI 0.11 and earlier +// "flatmap" syntax. Keys start with the attribute name of a top-level +// attribute. Use the following special key syntax to inspect underlying +// values of a list or map attribute: +// +// - .{NUMBER}: List value at index, e.g. .0 to inspect the first element +// - .{KEY}: Map value at key, e.g. .example to inspect the example key +// value +// +// While it is possible to check nested attributes under list and map +// attributes using the special key syntax, checking a list, map, or set +// attribute directly is not supported. Use TestCheckResourceAttr with +// the special .# or .% key syntax for those situations instead. func TestCheckResourceAttrSet(name, key string) TestCheckFunc { return checkIfIndexesIntoTypeSet(key, func(s *terraform.State) error { is, err := primaryInstanceState(s, name) @@ -782,15 +823,71 @@ func TestCheckModuleResourceAttrSet(mp []string, name string, key string) TestCh } func testCheckResourceAttrSet(is *terraform.InstanceState, name string, key string) error { - if val, ok := is.Attributes[key]; !ok || val == "" { - return fmt.Errorf("%s: Attribute '%s' expected to be set", name, key) + val, ok := is.Attributes[key] + + if ok && val != "" { + return nil } - return nil + if _, ok := is.Attributes[key+".#"]; ok { + return fmt.Errorf( + "%s: list or set attribute '%s' must be checked by element count key (%s) or element value keys (e.g. %s). Set element value checks should use TestCheckTypeSet functions instead.", + name, + key, + key+".#", + key+".0", + ) + } + + if _, ok := is.Attributes[key+".%"]; ok { + return fmt.Errorf( + "%s: map attribute '%s' must be checked by element count key (%s) or element value keys (e.g. %s).", + name, + key, + key+".%", + key+".examplekey", + ) + } + + return fmt.Errorf("%s: Attribute '%s' expected to be set", name, key) } -// TestCheckResourceAttr is a TestCheckFunc which validates -// the value in state for the given name/key combination. +// TestCheckResourceAttr ensures a specific value is stored in state for the +// given name and key combination. State value checking is only recommended for +// testing Computed attributes and attribute defaults. +// +// For managed resources, the name parameter is a combination of the resource +// type, a period (.), and the name label. The name for the below example +// configuration would be "myprovider_thing.example". +// +// resource "myprovider_thing" "example" { ... } +// +// For data sources, the name parameter is a combination of the keyword "data", +// a period (.), the data source type, a period (.), and the name label. The +// name for the below example configuration would be +// "data.myprovider_thing.example". +// +// data "myprovider_thing" "example" { ... } +// +// The key parameter is an attribute path in Terraform CLI 0.11 and earlier +// "flatmap" syntax. Keys start with the attribute name of a top-level +// attribute. Use the following special key syntax to inspect list, map, and +// set attributes: +// +// - .{NUMBER}: List value at index, e.g. 
+// Use the TestCheckTypeSet* and TestMatchTypeSet* functions instead
+// for sets.
+// - .{KEY}: Map value at key, e.g. .example to inspect the example key
+// value.
+// - .#: Number of elements in list or set.
+// - .%: Number of elements in map.
+//
+// The value parameter is the stringified data to check at the given key. Use
+// the following attribute type rules to set the value:
+//
+// - Boolean: "false" or "true".
+// - Float/Integer: Stringified number, such as "1.2" or "123".
+// - String: No conversion necessary.
func TestCheckResourceAttr(name, key, value string) TestCheckFunc {
return checkIfIndexesIntoTypeSet(key, func(s *terraform.State) error {
is, err := primaryInstanceState(s, name)
@@ -817,23 +914,40 @@ func TestCheckModuleResourceAttr(mp []string, name string, key string, value str
}
func testCheckResourceAttr(is *terraform.InstanceState, name string, key string, value string) error {
- // Empty containers may be elided from the state.
- // If the intent here is to check for an empty container, allow the key to
- // also be non-existent.
- emptyCheck := false
- if value == "0" && (strings.HasSuffix(key, ".#") || strings.HasSuffix(key, ".%")) {
- emptyCheck = true
- }
+ v, ok := is.Attributes[key]
- if v, ok := is.Attributes[key]; !ok || v != value {
- if emptyCheck && !ok {
+ if !ok {
+ // Empty containers may be elided from the state.
+ // If the intent here is to check for an empty container, allow the key to
+ // also be non-existent.
+ if value == "0" && (strings.HasSuffix(key, ".#") || strings.HasSuffix(key, ".%")) {
return nil
}
- if !ok {
- return fmt.Errorf("%s: Attribute '%s' not found", name, key)
+ if _, ok := is.Attributes[key+".#"]; ok {
+ return fmt.Errorf(
+ "%s: list or set attribute '%s' must be checked by element count key (%s) or element value keys (e.g. %s). Set element value checks should use TestCheckTypeSet functions instead.",
+ name,
+ key,
+ key+".#",
+ key+".0",
+ )
+ }
+
+ if _, ok := is.Attributes[key+".%"]; ok {
+ return fmt.Errorf(
+ "%s: map attribute '%s' must be checked by element count key (%s) or element value keys (e.g. %s).",
+ name,
+ key,
+ key+".%",
+ key+".examplekey",
+ )
}
+ return fmt.Errorf("%s: Attribute '%s' not found", name, key)
+ }
+
+ if v != value {
return fmt.Errorf(
"%s: Attribute '%s' expected %#v, got %#v",
name,
@@ -841,11 +955,41 @@ func testCheckResourceAttr(is *terraform.InstanceState, name string, key string,
value,
v)
}
+
return nil
}
-// TestCheckNoResourceAttr is a TestCheckFunc which ensures that
-// NO value exists in state for the given name/key combination.
+// TestCheckNoResourceAttr ensures no value exists in the state for the
+// given name and key combination. The opposite of this TestCheckFunc is
+// TestCheckResourceAttrSet. State value checking is only recommended for
+// testing Computed attributes and attribute defaults.
+//
+// For managed resources, the name parameter is a combination of the resource
+// type, a period (.), and the name label. The name for the below example
+// configuration would be "myprovider_thing.example".
+//
+// resource "myprovider_thing" "example" { ... }
+//
+// For data sources, the name parameter is a combination of the keyword "data",
+// a period (.), the data source type, a period (.), and the name label. The
+// name for the below example configuration would be
+// "data.myprovider_thing.example".
+//
+// data "myprovider_thing" "example" { ... }
+//
+// The key parameter is an attribute path in Terraform CLI 0.11 and earlier
+// "flatmap" syntax. Keys start with the attribute name of a top-level
+// attribute. Use the following special key syntax to inspect underlying
+// values of a list or map attribute:
+//
+// - .{NUMBER}: List value at index, e.g. .0 to inspect the first element.
+// - .{KEY}: Map value at key, e.g. .example to inspect the example key
+// value.
+//
+// While it is possible to check nested attributes under list and map
+// attributes using the special key syntax, checking a list, map, or set
+// attribute directly is not supported. Use TestCheckResourceAttr with
+// the special .# or .% key syntax for those situations instead.
func TestCheckNoResourceAttr(name, key string) TestCheckFunc {
return checkIfIndexesIntoTypeSet(key, func(s *terraform.State) error {
is, err := primaryInstanceState(s, name)
@@ -872,28 +1016,76 @@ func TestCheckModuleNoResourceAttr(mp []string, name string, key string) TestChe
}
func testCheckNoResourceAttr(is *terraform.InstanceState, name string, key string) error {
+ v, ok := is.Attributes[key]
+
// Empty containers may sometimes be included in the state.
// If the intent here is to check for an empty container, allow the value to
// also be "0".
- emptyCheck := false
- if strings.HasSuffix(key, ".#") || strings.HasSuffix(key, ".%") {
- emptyCheck = true
- }
-
- val, exists := is.Attributes[key]
- if emptyCheck && val == "0" {
+ if v == "0" && (strings.HasSuffix(key, ".#") || strings.HasSuffix(key, ".%")) {
return nil
}
- if exists {
+ if ok {
return fmt.Errorf("%s: Attribute '%s' found when not expected", name, key)
}
+ if _, ok := is.Attributes[key+".#"]; ok {
+ return fmt.Errorf(
+ "%s: list or set attribute '%s' must be checked by element count key (%s) or element value keys (e.g. %s). Set element value checks should use TestCheckTypeSet functions instead.",
+ name,
+ key,
+ key+".#",
+ key+".0",
+ )
+ }
+
+ if _, ok := is.Attributes[key+".%"]; ok {
+ return fmt.Errorf(
+ "%s: map attribute '%s' must be checked by element count key (%s) or element value keys (e.g. %s).",
+ name,
+ key,
+ key+".%",
+ key+".examplekey",
+ )
+ }
+
return nil
}
-// TestMatchResourceAttr is a TestCheckFunc which checks that the value
-// in state for the given name/key combination matches the given regex.
+// TestMatchResourceAttr ensures a value matching a regular expression is
+// stored in state for the given name and key combination. State value checking
+// is only recommended for testing Computed attributes and attribute defaults.
+//
+// For managed resources, the name parameter is a combination of the resource
+// type, a period (.), and the name label. The name for the below example
+// configuration would be "myprovider_thing.example".
+//
+// resource "myprovider_thing" "example" { ... }
+//
+// For data sources, the name parameter is a combination of the keyword "data",
+// a period (.), the data source type, a period (.), and the name label. The
+// name for the below example configuration would be
+// "data.myprovider_thing.example".
+//
+// data "myprovider_thing" "example" { ... }
+//
+// The key parameter is an attribute path in Terraform CLI 0.11 and earlier
+// "flatmap" syntax. Keys start with the attribute name of a top-level
+// attribute. Use the following special key syntax to inspect list, map, and
+// set attributes:
+//
+// - .{NUMBER}: List value at index, e.g. .0 to inspect the first element.
+// Use the TestCheckTypeSet* and TestMatchTypeSet* functions instead
+// for sets.
+// - .{KEY}: Map value at key, e.g. .example to inspect the example key
+// value.
+// - .#: Number of elements in list or set.
+// - .%: Number of elements in map.
+//
+// The value parameter is a compiled regular expression. A typical pattern is
+// using the regexp.MustCompile() function, which will automatically ensure the
+// regular expression is supported by the Go regular expression handlers during
+// compilation.
func TestMatchResourceAttr(name, key string, r *regexp.Regexp) TestCheckFunc {
return checkIfIndexesIntoTypeSet(key, func(s *terraform.State) error {
is, err := primaryInstanceState(s, name)
@@ -935,6 +1127,9 @@ func testMatchResourceAttr(is *terraform.InstanceState, name string, key string,
// TestCheckResourceAttrPtr is like TestCheckResourceAttr except the
// value is a pointer so that it can be updated while the test is running.
// It will only be dereferenced at the point this step is run.
+//
+// Refer to the TestCheckResourceAttr documentation for more information about
+// setting the name, key, and value parameters.
func TestCheckResourceAttrPtr(name string, key string, value *string) TestCheckFunc {
return func(s *terraform.State) error {
return TestCheckResourceAttr(name, key, *value)(s)
@@ -949,8 +1144,39 @@ func TestCheckModuleResourceAttrPtr(mp []string, name string, key string, value
}
}
-// TestCheckResourceAttrPair is a TestCheckFunc which validates that the values
-// in state for a pair of name/key combinations are equal.
+// TestCheckResourceAttrPair ensures value equality in state between the first
+// given name and key combination and the second name and key combination.
+// State value checking is only recommended for testing Computed attributes
+// and attribute defaults.
+//
+// For managed resources, the name parameter is a combination of the resource
+// type, a period (.), and the name label. The name for the below example
+// configuration would be "myprovider_thing.example".
+//
+// resource "myprovider_thing" "example" { ... }
+//
+// For data sources, the name parameter is a combination of the keyword "data",
+// a period (.), the data source type, a period (.), and the name label. The
+// name for the below example configuration would be
+// "data.myprovider_thing.example".
+//
+// data "myprovider_thing" "example" { ... }
+//
+// The first and second names may use any combination of managed resources
+// and/or data sources.
+//
+// The key parameter is an attribute path in Terraform CLI 0.11 and earlier
+// "flatmap" syntax. Keys start with the attribute name of a top-level
+// attribute. Use the following special key syntax to inspect list, map, and
+// set attributes:
+//
+// - .{NUMBER}: List value at index, e.g. .0 to inspect the first element.
+// Use the TestCheckTypeSet* and TestMatchTypeSet* functions instead
+// for sets.
+// - .{KEY}: Map value at key, e.g. .example to inspect the example key
+// value.
+// - .#: Number of elements in list or set.
+// - .%: Number of elements in map.
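[Editor's note: in practice, the checks documented above are composed inside an acceptance test step. The following is a minimal, illustrative sketch only; the provider name, resource type, attributes, and values are hypothetical and not part of this diff, and provider wiring is omitted.]

```go
package myprovider_test

import (
	"regexp"
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

func TestAccThing_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		// Provider wiring (ProviderFactories) is omitted from this sketch.
		Steps: []resource.TestStep{
			{
				Config: `resource "myprovider_thing" "example" { name = "abc" }`,
				Check: resource.ComposeAggregateTestCheckFunc(
					// Exact value check of a non-TypeSet attribute.
					resource.TestCheckResourceAttr("myprovider_thing.example", "name", "abc"),
					// Computed attribute: assert only that some value was stored.
					resource.TestCheckResourceAttrSet("myprovider_thing.example", "id"),
					// Regular expression check of a computed attribute.
					resource.TestMatchResourceAttr("myprovider_thing.example", "arn", regexp.MustCompile(`^arn:`)),
					// Count key syntax: a map attribute cannot be checked directly.
					resource.TestCheckResourceAttr("myprovider_thing.example", "tags.%", "1"),
				),
			},
		},
	})
}
```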
func TestCheckResourceAttrPair(nameFirst, keyFirst, nameSecond, keySecond string) TestCheckFunc {
return checkIfIndexesIntoTypeSetPair(keyFirst, keySecond, func(s *terraform.State) error {
isFirst, err := primaryInstanceState(s, nameFirst)
diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_sets.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_sets.go
index ada1d6e7eca3..9b1d57c66651 100644
--- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_sets.go
+++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource/testing_sets.go
@@ -14,25 +14,46 @@ const (
sentinelIndex = "*"
)
-// TestCheckTypeSetElemNestedAttrs is a TestCheckFunc that accepts a resource
-// name, an attribute path, which should use the sentinel value '*' for indexing
-// into a TypeSet. The function verifies that an element matches the whole value
-// map.
+// TestCheckTypeSetElemNestedAttrs ensures a subset map of values is stored in
+// state for the given name and key combination of attributes nested under a
+// list or set block. Use this TestCheckFunc in preference over non-set
+// variants to simplify testing code and ensure compatibility with indices,
+// which can easily change with schema changes. State value checking is only
+// recommended for testing Computed attributes and attribute defaults.
//
-// You may check for unset keys, however this will also match keys set to empty
-// string. Please provide a map with at least 1 non-empty value.
+// For managed resources, the name parameter is a combination of the resource
+// type, a period (.), and the name label. The name for the below example
+// configuration would be "myprovider_thing.example".
//
-// map[string]string{
-// "key1": "value",
-// "key2": "",
-// }
+// resource "myprovider_thing" "example" { ... }
//
-// Use this function over SDK provided TestCheckFunctions when validating a
-// TypeSet where its elements are a nested object with their own attrs/values.
+// For data sources, the name parameter is a combination of the keyword "data",
+// a period (.), the data source type, a period (.), and the name label. The
+// name for the below example configuration would be
+// "data.myprovider_thing.example".
+//
+// data "myprovider_thing" "example" { ... }
+//
+// The key parameter is an attribute path in Terraform CLI 0.11 and earlier
+// "flatmap" syntax. Keys start with the attribute name of a top-level
+// attribute. Use the sentinel value '*' to replace the element indexing into
+// a list or set. The sentinel value can be used for each list or set index, if
+// there are multiple lists or sets in the attribute path.
+//
+// The values parameter is the map of attribute names to attribute values
+// expected to be nested under the list or set.
+//
+// You may check for unset nested attributes, however this will also match keys
+// set to an empty string. Use a map with at least 1 non-empty value.
-// Please note, if the provided value map is not granular enough, there exists
-// the possibility you match an element you were not intending to, in the TypeSet.
-// Provide a full mapping of attributes to be sure the unique element exists.
+// map[string]string{
+// "key1": "value",
+// "key2": "",
+// }
+//
+// If the values map is not granular enough, it is possible to match an element
+// you were not intending to in the set. Provide the most complete mapping of
+// attributes possible to be sure the unique element exists.
func TestCheckTypeSetElemNestedAttrs(name, attr string, values map[string]string) TestCheckFunc {
return func(s *terraform.State) error {
is, err := primaryInstanceState(s, name)
@@ -64,17 +85,47 @@
}
}
-// TestMatchTypeSetElemNestedAttrs is a TestCheckFunc similar to TestCheckTypeSetElemNestedAttrs
-// with the exception that it verifies that an element matches a *regexp.Regexp.
+// TestMatchTypeSetElemNestedAttrs ensures a subset map of values, compared by
+// regular expressions, is stored in state for the given name and key
+// combination of attributes nested under a list or set block. Use this
+// TestCheckFunc in preference over non-set variants to simplify testing code
+// and ensure compatibility with indices, which can easily change with schema
+// changes. State value checking is only recommended for testing Computed
+// attributes and attribute defaults.
+//
+// For managed resources, the name parameter is a combination of the resource
+// type, a period (.), and the name label. The name for the below example
+// configuration would be "myprovider_thing.example".
+//
+// resource "myprovider_thing" "example" { ... }
+//
+// For data sources, the name parameter is a combination of the keyword "data",
+// a period (.), the data source type, a period (.), and the name label. The
+// name for the below example configuration would be
+// "data.myprovider_thing.example".
+//
+// data "myprovider_thing" "example" { ... }
+//
+// The key parameter is an attribute path in Terraform CLI 0.11 and earlier
+// "flatmap" syntax. Keys start with the attribute name of a top-level
+// attribute. Use the sentinel value '*' to replace the element indexing into
+// a list or set. The sentinel value can be used for each list or set index, if
+// there are multiple lists or sets in the attribute path.
//
-// You may check for unset keys, however this will also match keys set to empty
-// string. Please provide a map with at least 1 non-empty value e.g.
+// The values parameter is the map of attribute names to regular expressions
+// for matching attribute values expected to be nested under the list or set.
//
-// map[string]*regexp.Regexp{
-// "key1": regexp.MustCompile("value"),
-// "key2": regexp.MustCompile(""),
-// }
+// You may check for unset nested attributes, however this will also match keys
+// set to an empty string. Use a map with at least 1 non-empty value.
//
+// map[string]*regexp.Regexp{
+// "key1": regexp.MustCompile(`^value`),
+// "key2": regexp.MustCompile(`^$`),
+// }
+//
+// If the values map is not granular enough, it is possible to match an element
+// you were not intending to in the set. Provide the most complete mapping of
+// attributes possible to be sure the unique element exists.
func TestMatchTypeSetElemNestedAttrs(name, attr string, values map[string]*regexp.Regexp) TestCheckFunc {
return func(s *terraform.State) error {
is, err := primaryInstanceState(s, name)
@@ -113,6 +164,39 @@ func TestMatchTypeSetElemNestedAttrs(name, attr string, values map[string]*regex
//
// Use this function over SDK provided TestCheckFunctions when validating a
// TypeSet where its elements are a simple value
+
+// TestCheckTypeSetElemAttr ensures a specific value is stored in state for the
+// given name and key combination under a list or set. Use this TestCheckFunc
+// in preference over non-set variants to simplify testing code and ensure
+// compatibility with indices, which can easily change with schema changes.
+// State value checking is only recommended for testing Computed attributes and
+// attribute defaults.
+//
+// For managed resources, the name parameter is a combination of the resource
+// type, a period (.), and the name label. The name for the below example
+// configuration would be "myprovider_thing.example".
+//
+// resource "myprovider_thing" "example" { ... }
+//
+// For data sources, the name parameter is a combination of the keyword "data",
+// a period (.), the data source type, a period (.), and the name label. The
+// name for the below example configuration would be
+// "data.myprovider_thing.example".
+//
+// data "myprovider_thing" "example" { ... }
+//
+// The key parameter is an attribute path in Terraform CLI 0.11 and earlier
+// "flatmap" syntax. Keys start with the attribute name of a top-level
+// attribute. Use the sentinel value '*' to replace the element indexing into
+// a list or set. The sentinel value can be used for each list or set index, if
+// there are multiple lists or sets in the attribute path.
+//
+// The value parameter is the stringified data to check at the given key. Use
+// the following attribute type rules to set the value:
+//
+// - Boolean: "false" or "true".
+// - Float/Integer: Stringified number, such as "1.2" or "123".
+// - String: No conversion necessary.
func TestCheckTypeSetElemAttr(name, attr, value string) TestCheckFunc {
return func(s *terraform.State) error {
is, err := primaryInstanceState(s, name)
@@ -129,12 +213,32 @@ func TestCheckTypeSetElemAttr(name, attr, value string) TestCheckFunc {
}
}
-// TestCheckTypeSetElemAttrPair is a TestCheckFunc that verifies a pair of name/key
-// combinations are equal where the first uses the sentinel value to index into a
-// TypeSet.
+// TestCheckTypeSetElemAttrPair ensures value equality in state between the
+// first given name and key combination and the second name and key
+// combination. State value checking is only recommended for testing Computed
+// attributes and attribute defaults.
+//
+// For managed resources, the name parameter is a combination of the resource
+// type, a period (.), and the name label. The name for the below example
+// configuration would be "myprovider_thing.example".
+//
+// resource "myprovider_thing" "example" { ... }
+//
+// For data sources, the name parameter is a combination of the keyword "data",
+// a period (.), the data source type, a period (.), and the name label. The
+// name for the below example configuration would be
+// "data.myprovider_thing.example".
+//
+// data "myprovider_thing" "example" { ... }
+//
+// The first and second names may use any combination of managed resources
+// and/or data sources.
// -// E.g., TestCheckTypeSetElemAttrPair("aws_autoscaling_group.bar", "availability_zones.*", "data.aws_availability_zones.available", "names.0")
-// E.g., TestCheckTypeSetElemAttrPair("aws_spot_fleet_request.bar", "launch_specification.*.instance_type", "data.data.aws_ec2_instance_type_offering.available", "instance_type")
+// The key parameter is an attribute path in Terraform CLI 0.11 and earlier
+// "flatmap" syntax. Keys start with the attribute name of a top-level
+// attribute. Use the sentinel value '*' to replace the element indexing into
+// a list or set.
The sentinel value can be used for each list or set index, if +// there are multiple lists or sets in the attribute path. func TestCheckTypeSetElemAttrPair(nameFirst, keyFirst, nameSecond, keySecond string) TestCheckFunc { return func(s *terraform.State) error { isFirst, err := primaryInstanceState(s, nameFirst) diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go index 86291a2e858d..8ab3fb98ef27 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/grpc_provider.go @@ -131,7 +131,7 @@ func (s *GRPCProviderServer) PrepareProviderConfig(ctx context.Context, req *tfp configVal, err := msgpack.Unmarshal(req.Config.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -183,7 +183,7 @@ func (s *GRPCProviderServer) PrepareProviderConfig(ctx context.Context, req *tfp // helper/schema used to allow setting "" to a bool if val.Type() == cty.Bool && tmpVal.RawEquals(cty.StringVal("")) { // return a warning about the conversion - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, "provider set empty string as default value for bool "+getAttr.Name) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, "provider set empty string as default value for bool "+getAttr.Name) tmpVal = cty.False } @@ -195,31 +195,31 @@ func (s *GRPCProviderServer) PrepareProviderConfig(ctx context.Context, req *tfp return val, nil }) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } configVal, err = schemaBlock.CoerceValue(configVal) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } // Ensure there are no nulls that will cause helper/schema to panic. 
- if err := validateConfigNulls(configVal, nil); err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + if err := validateConfigNulls(ctx, configVal, nil); err != nil { + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } config := terraform.NewResourceConfigShimmed(configVal, schemaBlock) logging.HelperSchemaTrace(ctx, "Calling downstream") - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, s.provider.Validate(config)) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, s.provider.Validate(config)) logging.HelperSchemaTrace(ctx, "Called downstream") preparedConfigMP, err := msgpack.Marshal(configVal, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -236,14 +236,14 @@ func (s *GRPCProviderServer) ValidateResourceTypeConfig(ctx context.Context, req configVal, err := msgpack.Unmarshal(req.Config.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } config := terraform.NewResourceConfigShimmed(configVal, schemaBlock) logging.HelperSchemaTrace(ctx, "Calling downstream") - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, s.provider.ValidateResource(req.TypeName, config)) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, s.provider.ValidateResource(req.TypeName, config)) logging.HelperSchemaTrace(ctx, "Called downstream") return resp, nil @@ -257,20 +257,20 @@ func (s *GRPCProviderServer) ValidateDataSourceConfig(ctx context.Context, req * configVal, err := msgpack.Unmarshal(req.Config.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } // Ensure there are no nulls that will cause helper/schema to panic. 
- if err := validateConfigNulls(configVal, nil); err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + if err := validateConfigNulls(ctx, configVal, nil); err != nil { + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } config := terraform.NewResourceConfigShimmed(configVal, schemaBlock) logging.HelperSchemaTrace(ctx, "Calling downstream") - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, s.provider.ValidateDataSource(req.TypeName, config)) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, s.provider.ValidateDataSource(req.TypeName, config)) logging.HelperSchemaTrace(ctx, "Called downstream") return resp, nil @@ -282,7 +282,7 @@ func (s *GRPCProviderServer) UpgradeResourceState(ctx context.Context, req *tfpr res, ok := s.provider.ResourcesMap[req.TypeName] if !ok { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, fmt.Errorf("unknown resource type: %s", req.TypeName)) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, fmt.Errorf("unknown resource type: %s", req.TypeName)) return resp, nil } schemaBlock := s.getResourceSchemaBlock(req.TypeName) @@ -300,7 +300,7 @@ func (s *GRPCProviderServer) UpgradeResourceState(ctx context.Context, req *tfpr jsonMap, version, err = s.upgradeFlatmapState(ctx, version, req.RawState.Flatmap, res) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } // if there's a JSON state, we need to decode it. @@ -311,7 +311,7 @@ func (s *GRPCProviderServer) UpgradeResourceState(ctx context.Context, req *tfpr err = json.Unmarshal(req.RawState.JSON, &jsonMap) } if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } default: @@ -324,7 +324,7 @@ func (s *GRPCProviderServer) UpgradeResourceState(ctx context.Context, req *tfpr jsonMap, err = s.upgradeJSONState(ctx, version, jsonMap, res) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -335,7 +335,7 @@ func (s *GRPCProviderServer) UpgradeResourceState(ctx context.Context, req *tfpr // that it can be re-decoded using the actual schema. val, err := JSONMapToStateValue(jsonMap, schemaBlock) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -344,7 +344,7 @@ func (s *GRPCProviderServer) UpgradeResourceState(ctx context.Context, req *tfpr // First we need to CoerceValue to ensure that all object types match. val, err = schemaBlock.CoerceValue(val) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } // Normalize the value and fill in any missing blocks. 
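[Editor's note: the hunks above and below apply one mechanical change: every convert.AppendProtoDiag call now receives the request Context, so diagnostic conversion can participate in context-aware logging. A rough, hypothetical sketch of the call shape, with appendDiag standing in for the real convert.AppendProtoDiag:]

```go
package sketch

import (
	"context"

	"github.com/hashicorp/terraform-plugin-go/tfprotov5"
)

// appendDiag mirrors the pattern used throughout this file: convert an error
// into a protocol diagnostic and append it. The ctx parameter is accepted so
// a real implementation can emit context-aware logs; this sketch does not use
// it, and unlike convert.AppendProtoDiag it only handles plain errors.
func appendDiag(ctx context.Context, diags []*tfprotov5.Diagnostic, err error) []*tfprotov5.Diagnostic {
	if err == nil {
		return diags
	}
	return append(diags, &tfprotov5.Diagnostic{
		Severity: tfprotov5.DiagnosticSeverityError,
		Summary:  err.Error(),
	})
}
```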
@@ -353,7 +353,7 @@ func (s *GRPCProviderServer) UpgradeResourceState(ctx context.Context, req *tfpr // encode the final state to the expected msgpack format newStateMP, err := msgpack.Marshal(val, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -533,15 +533,15 @@ func (s *GRPCProviderServer) ConfigureProvider(ctx context.Context, req *tfproto configVal, err := msgpack.Unmarshal(req.Config.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } s.provider.TerraformVersion = req.TerraformVersion // Ensure there are no nulls that will cause helper/schema to panic. - if err := validateConfigNulls(configVal, nil); err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + if err := validateConfigNulls(ctx, configVal, nil); err != nil { + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -557,7 +557,7 @@ func (s *GRPCProviderServer) ConfigureProvider(ctx context.Context, req *tfproto diags := s.provider.Configure(ctxHack, config) logging.HelperSchemaTrace(ctx, "Called downstream") - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, diags) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, diags) return resp, nil } @@ -573,20 +573,20 @@ func (s *GRPCProviderServer) ReadResource(ctx context.Context, req *tfprotov5.Re res, ok := s.provider.ResourcesMap[req.TypeName] if !ok { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, fmt.Errorf("unknown resource type: %s", req.TypeName)) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, fmt.Errorf("unknown resource type: %s", req.TypeName)) return resp, nil } schemaBlock := s.getResourceSchemaBlock(req.TypeName) stateVal, err := msgpack.Unmarshal(req.CurrentState.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } instanceState, err := res.ShimInstanceStateFromValue(stateVal) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } instanceState.RawState = stateVal @@ -594,7 +594,7 @@ func (s *GRPCProviderServer) ReadResource(ctx context.Context, req *tfprotov5.Re private := make(map[string]interface{}) if len(req.Private) > 0 { if err := json.Unmarshal(req.Private, &private); err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } } @@ -604,14 +604,14 @@ func (s *GRPCProviderServer) ReadResource(ctx context.Context, req *tfprotov5.Re if pmSchemaBlock != nil && req.ProviderMeta != nil { providerSchemaVal, err := msgpack.Unmarshal(req.ProviderMeta.MsgPack, pmSchemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } instanceState.ProviderMeta = providerSchemaVal } newInstanceState, diags := res.RefreshWithoutUpgrade(ctx, instanceState, s.provider.Meta()) - resp.Diagnostics = 
convert.AppendProtoDiag(resp.Diagnostics, diags) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, diags) if diags.HasError() { return resp, nil } @@ -622,7 +622,7 @@ func (s *GRPCProviderServer) ReadResource(ctx context.Context, req *tfprotov5.Re // to see a null value (in the cty sense) in that case. newStateMP, err := msgpack.Marshal(cty.NullVal(schemaBlock.ImpliedType()), schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) } resp.NewState = &tfprotov5.DynamicValue{ MsgPack: newStateMP, @@ -635,7 +635,7 @@ func (s *GRPCProviderServer) ReadResource(ctx context.Context, req *tfprotov5.Re newStateVal, err := hcl2shim.HCL2ValueFromFlatmap(newInstanceState.Attributes, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -644,7 +644,7 @@ func (s *GRPCProviderServer) ReadResource(ctx context.Context, req *tfprotov5.Re newStateMP, err := msgpack.Marshal(newStateVal, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -669,14 +669,14 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot res, ok := s.provider.ResourcesMap[req.TypeName] if !ok { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, fmt.Errorf("unknown resource type: %s", req.TypeName)) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, fmt.Errorf("unknown resource type: %s", req.TypeName)) return resp, nil } schemaBlock := s.getResourceSchemaBlock(req.TypeName) priorStateVal, err := msgpack.Unmarshal(req.PriorState.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -684,7 +684,7 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot proposedNewStateVal, err := msgpack.Unmarshal(req.ProposedNewState.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -697,13 +697,13 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot configVal, err := msgpack.Unmarshal(req.Config.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } priorState, err := res.ShimInstanceStateFromValue(priorStateVal) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } priorState.RawState = priorStateVal @@ -712,7 +712,7 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot priorPrivate := make(map[string]interface{}) if len(req.PriorPrivate) > 0 { if err := json.Unmarshal(req.PriorPrivate, &priorPrivate); err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } } @@ 
-723,15 +723,15 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot if pmSchemaBlock != nil && req.ProviderMeta != nil { providerSchemaVal, err := msgpack.Unmarshal(req.ProviderMeta.MsgPack, pmSchemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } priorState.ProviderMeta = providerSchemaVal } // Ensure there are no nulls that will cause helper/schema to panic. - if err := validateConfigNulls(proposedNewStateVal, nil); err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + if err := validateConfigNulls(ctx, proposedNewStateVal, nil); err != nil { + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -740,7 +740,7 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot diff, err := res.SimpleDiff(ctx, priorState, cfg, s.provider.Meta()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -773,26 +773,26 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot plannedAttrs, err := diff.Apply(priorState.Attributes, schemaBlock) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } plannedStateVal, err := hcl2shim.HCL2ValueFromFlatmap(plannedAttrs, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } plannedStateVal, err = schemaBlock.CoerceValue(plannedStateVal) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } plannedStateVal = normalizeNullValues(plannedStateVal, proposedNewStateVal, false) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -820,7 +820,7 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot plannedMP, err := msgpack.Marshal(plannedStateVal, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } resp.PlannedState = &tfprotov5.DynamicValue{ @@ -830,12 +830,12 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot // encode any timeouts into the diff Meta t := &ResourceTimeout{} if err := t.ConfigDecode(res, cfg); err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } if err := t.DiffEncode(diff); err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -858,7 +858,7 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot // the Meta field gets encoded into PlannedPrivate plannedPrivate, err := json.Marshal(privateMap) if err != nil { - resp.Diagnostics = 
convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } resp.PlannedPrivate = plannedPrivate @@ -885,7 +885,7 @@ func (s *GRPCProviderServer) PlanResourceChange(ctx context.Context, req *tfprot requiresReplace, err := hcl2shim.RequiresReplace(requiresNew, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -906,39 +906,39 @@ func (s *GRPCProviderServer) ApplyResourceChange(ctx context.Context, req *tfpro res, ok := s.provider.ResourcesMap[req.TypeName] if !ok { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, fmt.Errorf("unknown resource type: %s", req.TypeName)) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, fmt.Errorf("unknown resource type: %s", req.TypeName)) return resp, nil } schemaBlock := s.getResourceSchemaBlock(req.TypeName) priorStateVal, err := msgpack.Unmarshal(req.PriorState.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } plannedStateVal, err := msgpack.Unmarshal(req.PlannedState.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } configVal, err := msgpack.Unmarshal(req.Config.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } priorState, err := res.ShimInstanceStateFromValue(priorStateVal) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } private := make(map[string]interface{}) if len(req.PlannedPrivate) > 0 { if err := json.Unmarshal(req.PlannedPrivate, &private); err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } } @@ -960,7 +960,7 @@ func (s *GRPCProviderServer) ApplyResourceChange(ctx context.Context, req *tfpro } else { diff, err = DiffFromValues(ctx, priorStateVal, plannedStateVal, configVal, stripResourceModifiers(res)) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } } @@ -1012,14 +1012,14 @@ func (s *GRPCProviderServer) ApplyResourceChange(ctx context.Context, req *tfpro if pmSchemaBlock != nil && req.ProviderMeta != nil { providerSchemaVal, err := msgpack.Unmarshal(req.ProviderMeta.MsgPack, pmSchemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } priorState.ProviderMeta = providerSchemaVal } newInstanceState, diags := res.Apply(ctx, priorState, diff, s.provider.Meta()) - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, diags) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, diags) newStateVal := cty.NullVal(schemaBlock.ImpliedType()) @@ -1029,7 +1029,7 @@ func (s 
*GRPCProviderServer) ApplyResourceChange(ctx context.Context, req *tfpro if destroy || newInstanceState == nil || newInstanceState.Attributes == nil || newInstanceState.ID == "" { newStateMP, err := msgpack.Marshal(newStateVal, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } resp.NewState = &tfprotov5.DynamicValue{ @@ -1042,7 +1042,7 @@ func (s *GRPCProviderServer) ApplyResourceChange(ctx context.Context, req *tfpro // entire object, even if the new state was nil. newStateVal, err = StateValueFromInstanceState(newInstanceState, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -1052,7 +1052,7 @@ func (s *GRPCProviderServer) ApplyResourceChange(ctx context.Context, req *tfpro newStateMP, err := msgpack.Marshal(newStateVal, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } resp.NewState = &tfprotov5.DynamicValue{ @@ -1061,7 +1061,7 @@ func (s *GRPCProviderServer) ApplyResourceChange(ctx context.Context, req *tfpro meta, err := json.Marshal(newInstanceState.Meta) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } resp.Private = meta @@ -1087,7 +1087,7 @@ func (s *GRPCProviderServer) ImportResourceState(ctx context.Context, req *tfpro newInstanceStates, err := s.provider.ImportState(ctx, info, req.ID) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -1103,7 +1103,7 @@ func (s *GRPCProviderServer) ImportResourceState(ctx context.Context, req *tfpro schemaBlock := s.getResourceSchemaBlock(resourceType) newStateVal, err := hcl2shim.HCL2ValueFromFlatmap(is.Attributes, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -1112,13 +1112,13 @@ func (s *GRPCProviderServer) ImportResourceState(ctx context.Context, req *tfpro newStateMP, err := msgpack.Marshal(newStateVal, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } meta, err := json.Marshal(is.Meta) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -1144,13 +1144,13 @@ func (s *GRPCProviderServer) ReadDataSource(ctx context.Context, req *tfprotov5. configVal, err := msgpack.Unmarshal(req.Config.MsgPack, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } // Ensure there are no nulls that will cause helper/schema to panic. 
- if err := validateConfigNulls(configVal, nil); err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + if err := validateConfigNulls(ctx, configVal, nil); err != nil { + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -1160,12 +1160,12 @@ func (s *GRPCProviderServer) ReadDataSource(ctx context.Context, req *tfprotov5. // the old behavior res, ok := s.provider.DataSourcesMap[req.TypeName] if !ok { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, fmt.Errorf("unknown data source: %s", req.TypeName)) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, fmt.Errorf("unknown data source: %s", req.TypeName)) return resp, nil } diff, err := res.Diff(ctx, nil, config, s.provider.Meta()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -1177,14 +1177,14 @@ func (s *GRPCProviderServer) ReadDataSource(ctx context.Context, req *tfprotov5. // now we can get the new complete data source newInstanceState, diags := res.ReadDataApply(ctx, diff, s.provider.Meta()) - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, diags) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, diags) if diags.HasError() { return resp, nil } newStateVal, err := StateValueFromInstanceState(newInstanceState, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } @@ -1192,7 +1192,7 @@ func (s *GRPCProviderServer) ReadDataSource(ctx context.Context, req *tfprotov5. newStateMP, err := msgpack.Marshal(newStateVal, schemaBlock.ImpliedType()) if err != nil { - resp.Diagnostics = convert.AppendProtoDiag(resp.Diagnostics, err) + resp.Diagnostics = convert.AppendProtoDiag(ctx, resp.Diagnostics, err) return resp, nil } resp.State = &tfprotov5.DynamicValue{ @@ -1485,7 +1485,7 @@ func normalizeNullValues(dst, src cty.Value, apply bool) cty.Value { // appears in a list-like attribute (list, set, tuple) will present a nil value // to helper/schema which can panic. Return an error to the user in this case, // indicating the attribute with the null value. 
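[Editor's note: to make the guarded failure mode concrete, here is a simplified, hypothetical sketch of the null detection described in the comment above; it returns plain errors instead of protocol diagnostics and only shows the list/set/tuple case, whereas the real validateConfigNulls below also recurses into maps and objects:]

```go
package main

import (
	"fmt"

	"github.com/hashicorp/go-cty/cty"
)

// checkNulls walks a cty.Value and reports any null element found directly
// inside a list, set, or tuple, since helper/schema would otherwise receive a
// nil value and panic.
func checkNulls(v cty.Value, path cty.Path) []error {
	var errs []error
	if v.IsNull() || !v.IsKnown() {
		return errs
	}
	ty := v.Type()
	if ty.IsListType() || ty.IsSetType() || ty.IsTupleType() {
		for it := v.ElementIterator(); it.Next(); {
			kv, ev := it.Element()
			step := cty.IndexStep{Key: kv}
			if ev.IsNull() {
				errs = append(errs, fmt.Errorf("null value found in list at %#v", append(path, step)))
				continue
			}
			errs = append(errs, checkNulls(ev, append(path, step))...)
		}
	}
	return errs
}

func main() {
	// A list containing a null element triggers the error path.
	v := cty.ListVal([]cty.Value{cty.StringVal("ok"), cty.NullVal(cty.String)})
	for _, err := range checkNulls(v, nil) {
		fmt.Println(err)
	}
}
```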
-func validateConfigNulls(v cty.Value, path cty.Path) []*tfprotov5.Diagnostic { +func validateConfigNulls(ctx context.Context, v cty.Value, path cty.Path) []*tfprotov5.Diagnostic { var diags []*tfprotov5.Diagnostic if v.IsNull() || !v.IsKnown() { return diags @@ -1514,8 +1514,8 @@ func validateConfigNulls(v cty.Value, path cty.Path) []*tfprotov5.Diagnostic { continue } - d := validateConfigNulls(ev, append(path, cty.IndexStep{Key: kv})) - diags = convert.AppendProtoDiag(diags, d) + d := validateConfigNulls(ctx, ev, append(path, cty.IndexStep{Key: kv})) + diags = convert.AppendProtoDiag(ctx, diags, d) } case v.Type().IsMapType() || v.Type().IsObjectType(): @@ -1529,8 +1529,8 @@ func validateConfigNulls(v cty.Value, path cty.Path) []*tfprotov5.Diagnostic { case v.Type().IsObjectType(): step = cty.GetAttrStep{Name: kv.AsString()} } - d := validateConfigNulls(ev, append(path, step)) - diags = convert.AppendProtoDiag(diags, d) + d := validateConfigNulls(ctx, ev, append(path, step)) + diags = convert.AppendProtoDiag(ctx, diags, d) } } diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go index 9b94f8ece7f5..136ed036d7dc 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/resource.go @@ -32,35 +32,44 @@ var ReservedResourceFields = []string{ "provisioner", } -// Resource represents a thing in Terraform that has a set of configurable -// attributes and a lifecycle (create, read, update, delete). +// Resource is an abstraction for multiple Terraform concepts: // -// The Resource schema is an abstraction that allows provider writers to -// worry only about CRUD operations while off-loading validation, diff -// generation, etc. to this higher level library. +// - Managed Resource: An infrastructure component with a schema, lifecycle +// operations such as create, read, update, and delete +// (CRUD), and optional implementation details such as +// import support, upgrade state support, and difference +// customization. +// - Data Resource: Also known as a data source. An infrastructure component +// with a schema and only the read lifecycle operation. +// - Block: When implemented within a Schema type Elem field, a configuration +// block that contains nested schema information such as attributes +// and blocks. // -// In spite of the name, this struct is not used only for terraform resources, -// but also for data sources. In the case of data sources, the Create, -// Update and Delete functions must not be provided. +// To fully implement managed resources, the Provider type ResourcesMap field +// should include a reference to an implementation of this type. To fully +// implement data resources, the Provider type DataSourcesMap field should +// include a reference to an implementation of this type. +// +// Each field further documents any constraints based on the Terraform concept +// being implemented. type Resource struct { - // Schema is the schema for the configuration of this resource. - // - // The keys of this map are the configuration keys, and the values - // describe the schema of the configuration value. + // Schema is the structure and type information for this component. This + // field is required for all Resource concepts. 
//
- // The schema is used to represent both configurable data as well
- // as data that might be computed in the process of creating this
- // resource.
+ // The keys of this map are the names used in a practitioner configuration,
+ // such as the attribute or block name. The values describe the structure
+ // and type information of that attribute or block.
Schema map[string]*Schema
// SchemaVersion is the version number for this resource's Schema
- // definition. The current SchemaVersion stored in the state for each
- // resource. Provider authors can increment this version number
- // when Schema semantics change. If the State's SchemaVersion is less than
- // the current SchemaVersion, the InstanceState is yielded to the
- // MigrateState callback, where the provider can make whatever changes it
- // needs to update the state to be compatible to the latest version of the
- // Schema.
+ // definition. This field is only valid when the Resource is a managed
+ // resource.
+ //
+ // The current SchemaVersion is stored in the state for each resource.
+ // Provider authors can increment this version number when Schema semantics
+ // change in an incompatible manner. If the state's SchemaVersion is less
+ // than the current SchemaVersion, the MigrateState and StateUpgraders
+ // functionality is executed to upgrade the state information.
//
// When unset, SchemaVersion defaults to 0, so provider authors can start
// their Versioning at any integer >= 1
@@ -68,6 +77,7 @@ type Resource struct {
// MigrateState is responsible for updating an InstanceState with an old
// version to the format expected by the current version of the Schema.
+ // This field is only valid when the Resource is a managed resource.
//
// It is called during Refresh if the State's stored SchemaVersion is less
// than the current SchemaVersion of the Resource.
@@ -87,7 +97,8 @@ type Resource struct {
// StateUpgraders contains the functions responsible for upgrading an
// existing state with an old schema version to a newer schema. It is
// called specifically by Terraform when the stored schema version is less
- // than the current SchemaVersion of the Resource.
+ // than the current SchemaVersion of the Resource. This field is only valid
+ // when the Resource is a managed resource.
//
// StateUpgraders map specific schema versions to a StateUpgrader
// function. The registered versions are expected to be ordered,
@@ -96,57 +107,261 @@ type Resource struct {
// MigrateState.
StateUpgraders []StateUpgrader
- // The functions below are the CRUD operations for this resource.
+ // Create is called when the provider must create a new instance of a
+ // managed resource. This field is only valid when the Resource is a
+ // managed resource. Only one of Create, CreateContext, or
+ // CreateWithoutTimeout should be implemented.
//
- // Deprecated: Please use the context aware equivalents instead. Only one of
- // the operations or context aware equivalent can be set, not both.
+ // The *ResourceData parameter contains the plan and state data for this
+ // managed resource instance. The available data in the Get* methods is
+ // the proposed state, which is the merged data of the practitioner
+ // configuration and any CustomizeDiff field logic.
+ //
+ // The SetId method must be called with a non-empty value for the managed
+ // resource instance to be properly saved into the Terraform state and
+ // avoid an "inconsistent result after apply" error.
+ //
+ // The interface{} parameter is the result of the Provider type
+ // ConfigureFunc field execution. If the Provider does not define
+ // a ConfigureFunc, this will be nil. This parameter is conventionally
+ // used to store API clients and other provider instance specific data.
+ //
+ // The error return parameter, if not nil, will be converted into an error
+ // diagnostic when passed back to Terraform.
+ //
+ // Deprecated: Use CreateContext or CreateWithoutTimeout instead. This
+ // implementation does not support request cancellation initiated by
+ // Terraform, such as a system or practitioner sending SIGINT (Ctrl-c).
+ // This implementation also does not support warning diagnostics.
Create CreateFunc
- // Deprecated: Please use the context aware equivalents instead.
+
+ // Read is called when the provider must refresh the state of a managed
+ // resource instance or data resource instance. This field is only valid
+ // when the Resource is a managed resource or data resource. Only one of
+ // Read, ReadContext, or ReadWithoutTimeout should be implemented.
+ //
+ // The *ResourceData parameter contains the state data for this managed
+ // resource instance or data resource instance.
+ //
+ // Managed resources can signal to Terraform that the managed resource
+ // instance no longer exists and potentially should be recreated by calling
+ // the SetId method with an empty string ("") parameter and without
+ // returning an error.
+ //
+ // Data resources that are designed to return state for a singular
+ // infrastructure component should conventionally return an error if that
+ // infrastructure does not exist and omit any calls to the
+ // SetId method.
+ //
+ // The interface{} parameter is the result of the Provider type
+ // ConfigureFunc field execution. If the Provider does not define
+ // a ConfigureFunc, this will be nil. This parameter is conventionally
+ // used to store API clients and other provider instance specific data.
+ //
+ // The error return parameter, if not nil, will be converted into an error
+ // diagnostic when passed back to Terraform.
+ //
+ // Deprecated: Use ReadContext or ReadWithoutTimeout instead. This
+ // implementation does not support request cancellation initiated by
+ // Terraform, such as a system or practitioner sending SIGINT (Ctrl-c).
+ // This implementation also does not support warning diagnostics.
Read ReadFunc
- // Deprecated: Please use the context aware equivalents instead.
+
+ // Update is called when the provider must update an instance of a
+ // managed resource. This field is only valid when the Resource is a
+ // managed resource. Only one of Update, UpdateContext, or
+ // UpdateWithoutTimeout should be implemented.
+ //
+ // This implementation is optional. If omitted, all Schema must enable
+ // the ForceNew field and any practitioner changes that would have
+ // caused an update will instead destroy and recreate the infrastructure
+ // component.
+ //
+ // The *ResourceData parameter contains the plan and state data for this
+ // managed resource instance. The available data in the Get* methods is
+ // the proposed state, which is the merged data of the prior state,
+ // practitioner configuration, and any CustomizeDiff field logic. The
+ // available data for the GetChange* and HasChange* methods is the prior
+ // state and proposed state.
+ //
+ // The interface{} parameter is the result of the Provider type
+ // ConfigureFunc field execution.
If the Provider does not define + // a ConfigureFunc, this will be nil. This parameter is conventionally + // used to store API clients and other provider instance specific data. + // + // The error return parameter, if not nil, will be converted into an error + // diagnostic when passed back to Terraform. + // + // Deprecated: Use UpdateContext or UpdateWithoutTimeout instead. This + // implementation does not support request cancellation initiated by + // Terraform, such as a system or practitioner sending SIGINT (Ctrl-c). + // This implementation also does not support warning diagnostics. Update UpdateFunc - // Deprecated: Please use the context aware equivalents instead. + + // Delete is called when the provider must destroy the instance of a + // managed resource. This field is only valid when the Resource is a + // managed resource. Only one of Delete, DeleteContext, or + // DeleteWithoutTimeout should be implemented. + // + // The *ResourceData parameter contains the state data for this managed + // resource instance. + // + // The interface{} parameter is the result of the Provider type + // ConfigureFunc field execution. If the Provider does not define + // a ConfigureFunc, this will be nil. This parameter is conventionally + // used to store API clients and other provider instance specific data. + // + // The error return parameter, if not nil, will be converted into an error + // diagnostic when passed back to Terraform. + // + // Deprecated: Use DeleteContext or DeleteWithoutTimeout instead. This + // implementation does not support request cancellation initiated by + // Terraform, such as a system or practitioner sending SIGINT (Ctrl-c). + // This implementation also does not support warning diagnostics. Delete DeleteFunc // Exists is a function that is called to check if a resource still - // exists. If this returns false, then this will affect the diff + // exists. This field is only valid when the Resource is a managed + // resource. + // + // If this returns false, then this will affect the diff // accordingly. If this function isn't set, it will not be called. You // can also signal existence in the Read method by calling d.SetId("") // if the Resource is no longer present and should be removed from state. // The *ResourceData passed to Exists should _not_ be modified. // - // Deprecated: ReadContext should be able to encapsulate the logic of Exists + // Deprecated: Remove in preference of ReadContext or ReadWithoutTimeout. Exists ExistsFunc - // The functions below are the CRUD operations for this resource. + // CreateContext is called when the provider must create a new instance of + // a managed resource. This field is only valid when the Resource is a + // managed resource. Only one of Create, CreateContext, or + // CreateWithoutTimeout should be implemented. // - // The only optional operation is Update. If Update is not - // implemented, then updates will not be supported for this resource. + // The Context parameter stores SDK information, such as loggers and + // timeout deadlines. It also is wired to receive any cancellation from + // Terraform such as a system or practitioner sending SIGINT (Ctrl-c). // - // The ResourceData parameter in the functions below are used to - // query configuration and changes for the resource as well as to set - // the ID, computed data, etc. + // By default, CreateContext has a 20 minute timeout. 
Use the Timeouts + // field to control the default duration or implement CreateWithoutTimeout + // instead of CreateContext to remove the default timeout. // - // The ResourceData parameter in the functions below are used to - // query configuration and changes for the resource as well as to set - // the ID, computed data, etc. + // The *ResourceData parameter contains the plan and state data for this + // managed resource instance. The available data in the Get* methods is + // the proposed state, which is the merged data of the practitioner + // configuration and any CustomizeDiff field logic. // - // The interface{} parameter is the result of the ConfigureFunc in - // the provider for this resource. If the provider does not define - // a ConfigureFunc, this will be nil. This parameter should be used - // to store API clients, configuration structures, etc. + // The SetId method must be called with a non-empty value for the managed + // resource instance to be properly saved into the Terraform state and + // avoid an "inconsistent result after apply" error. // - // These functions are passed a context configured to timeout with whatever - // was set as the timeout for this operation. Useful for forwarding on to - // backend SDK's that accept context. The context will also cancel if - // Terraform sends a cancellation signal. + // The interface{} parameter is the result of the Provider type + // ConfigureFunc field execution. If the Provider does not define + // a ConfigureFunc, this will be nil. This parameter is conventionally + // used to store API clients and other provider instance specific data. + // - // These functions return diagnostics, allowing developers to build - // a list of warnings and errors to be presented to the Terraform user. - // The AttributePath of those diagnostics should be built within these - // functions, please consult go-cty documentation for building a cty.Path + // The diagnostics return parameter, if not nil, can contain any + // combination and multiple of warning and/or error diagnostics. CreateContext CreateContextFunc - ReadContext ReadContextFunc + + // ReadContext is called when the provider must refresh the state of a managed + // resource instance or data resource instance. This field is only valid + // when the Resource is a managed resource or data resource. Only one of + // Read, ReadContext, or ReadWithoutTimeout should be implemented. + // + // The Context parameter stores SDK information, such as loggers and + // timeout deadlines. It also is wired to receive any cancellation from + // Terraform such as a system or practitioner sending SIGINT (Ctrl-c). + // + // By default, ReadContext has a 20 minute timeout. Use the Timeouts + // field to control the default duration or implement ReadWithoutTimeout + // instead of ReadContext to remove the default timeout. + // + // The *ResourceData parameter contains the state data for this managed + // resource instance or data resource instance. + // + // Managed resources can signal to Terraform that the managed resource + // instance no longer exists and potentially should be recreated by calling + // the SetId method with an empty string ("") parameter and without + // returning an error. + // + // Data resources that are designed to return state for a singular + // infrastructure component should conventionally return an error if that + // infrastructure does not exist and omit any calls to the + // SetId method. + // + // The interface{} parameter is the result of the Provider type + // ConfigureFunc field execution. If the Provider does not define + // a ConfigureFunc, this will be nil.
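A minimal CreateContext sketch under the contract just described; the exampleClient type and its Create method are hypothetical stand-ins for whatever the provider's ConfigureFunc actually returns:

```go
package example

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// exampleClient is a hypothetical API client stored by the provider's
// ConfigureFunc and received here through the meta parameter.
type exampleClient struct{}

func (c *exampleClient) Create(ctx context.Context, name string) (string, error) {
	return "id-123", nil // stand-in for a remote API call
}

func resourceExampleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	client := meta.(*exampleClient)

	// d.Get reads the proposed state: configuration merged with any
	// CustomizeDiff results, per the documentation above.
	id, err := client.Create(ctx, d.Get("name").(string))
	if err != nil {
		return diag.FromErr(err)
	}

	// SetId must receive a non-empty value, otherwise Terraform reports
	// an "inconsistent result after apply" error.
	d.SetId(id)

	return nil
}
```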
This parameter is conventionally + // used to store API clients and other provider instance specific data. + // + // The diagnostics return parameter, if not nil, can contain any + // combination and multiple of warning and/or error diagnostics. + ReadContext ReadContextFunc + + // UpdateContext is called when the provider must update an instance of a + // managed resource. This field is only valid when the Resource is a + // managed resource. Only one of Update, UpdateContext, or + // UpdateWithoutTimeout should be implemented. + // + // This implementation is optional. If omitted, all Schema must enable + // the ForceNew field and any practitioner changes that would have + // caused an update will instead destroy and recreate the infrastructure + // component. + // + // The Context parameter stores SDK information, such as loggers and + // timeout deadlines. It also is wired to receive any cancellation from + // Terraform such as a system or practitioner sending SIGINT (Ctrl-c). + // + // By default, UpdateContext has a 20 minute timeout. Use the Timeouts + // field to control the default duration or implement UpdateWithoutTimeout + // instead of UpdateContext to remove the default timeout. + // + // The *ResourceData parameter contains the plan and state data for this + // managed resource instance. The available data in the Get* methods is + // the proposed state, which is the merged data of the prior state, + // practitioner configuration, and any CustomizeDiff field logic. The + // available data for the GetChange* and HasChange* methods is the prior + // state and proposed state. + // + // The interface{} parameter is the result of the Provider type + // ConfigureFunc field execution. If the Provider does not define + // a ConfigureFunc, this will be nil. This parameter is conventionally + // used to store API clients and other provider instance specific data. + // + // The diagnostics return parameter, if not nil, can contain any + // combination and multiple of warning and/or error diagnostics. UpdateContext UpdateContextFunc + + // DeleteContext is called when the provider must destroy the instance of a + // managed resource. This field is only valid when the Resource is a + // managed resource. Only one of Delete, DeleteContext, or + // DeleteWithoutTimeout should be implemented. + // + // The Context parameter stores SDK information, such as loggers and + // timeout deadlines. It also is wired to receive any cancellation from + // Terraform such as a system or practitioner sending SIGINT (Ctrl-c). + // + // By default, DeleteContext has a 20 minute timeout. Use the Timeouts + // field to control the default duration or implement DeleteWithoutTimeout + // instead of DeleteContext to remove the default timeout. + // + // The *ResourceData parameter contains the state data for this managed + // resource instance. + // + // The interface{} parameter is the result of the Provider type + // ConfigureFunc field execution. If the Provider does not define + // a ConfigureFunc, this will be nil. This parameter is conventionally + // used to store API clients and other provider instance specific data. + // + // The diagnostics return parameter, if not nil, can contain any + // combination and multiple of warning and/or error diagnostics. DeleteContext DeleteContextFunc - // CreateWithoutTimeout is equivalent to CreateContext with no context timeout. + // CreateWithoutTimeout is called when the provider must create a new + // instance of a managed resource.
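And a matching ReadContext sketch showing the removal signal described above; lookupName is a hypothetical helper standing in for a real API lookup:

```go
package example

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceExampleRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	name, found, err := lookupName(ctx, d.Id())
	if err != nil {
		return diag.FromErr(err)
	}

	// Empty ID plus a nil error tells Terraform the remote object is gone
	// and the plan should propose recreation.
	if !found {
		d.SetId("")
		return nil
	}

	if err := d.Set("name", name); err != nil {
		return diag.FromErr(err)
	}

	return nil
}

// lookupName is a hypothetical stand-in for a remote API call.
func lookupName(ctx context.Context, id string) (string, bool, error) {
	return "example", true, nil
}
```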
This field is only valid when the + // Resource is a managed resource. Only one of Create, CreateContext, or + // CreateWithoutTimeout should be implemented. // // Most resources should prefer CreateContext with properly implemented // operation timeout values, however there are cases where operation @@ -154,9 +369,34 @@ type Resource struct { // logic, such as a mutex, to prevent remote system errors. Since these // operations would have an indeterminate timeout that scales with the // number of resources, this allows resources to control timeout behavior. + // + // The Context parameter stores SDK information, such as loggers. It also + // is wired to receive any cancellation from Terraform such as a system or + // practitioner sending SIGINT (Ctrl-c). + // + // The *ResourceData parameter contains the plan and state data for this + // managed resource instance. The available data in the Get* methods is + // the proposed state, which is the merged data of the practitioner + // configuration and any CustomizeDiff field logic. + // + // The SetId method must be called with a non-empty value for the managed + // resource instance to be properly saved into the Terraform state and + // avoid an "inconsistent result after apply" error. + // + // The interface{} parameter is the result of the Provider type + // ConfigureFunc field execution. If the Provider does not define + // a ConfigureFunc, this will be nil. This parameter is conventionally + // used to store API clients and other provider instance specific data. + // + // The diagnostics return parameter, if not nil, can contain any + // combination and multiple of warning and/or error diagnostics. CreateWithoutTimeout CreateContextFunc - // ReadWithoutTimeout is equivalent to ReadContext with no context timeout. + // ReadWithoutTimeout is called when the provider must refresh the state of + // a managed resource instance or data resource instance. This field is + // only valid when the Resource is a managed resource or data resource. + // Only one of Read, ReadContext, or ReadWithoutTimeout should be + // implemented. // // Most resources should prefer ReadContext with properly implemented // operation timeout values, however there are cases where operation @@ -164,9 +404,37 @@ type Resource struct { // logic, such as a mutex, to prevent remote system errors. Since these // operations would have an indeterminate timeout that scales with the // number of resources, this allows resources to control timeout behavior. + // + // The Context parameter stores SDK information, such as loggers. It also + // is wired to receive any cancellation from Terraform such as a system or + // practitioner sending SIGINT (Ctrl-c). + // + // The *ResourceData parameter contains the state data for this managed + // resource instance or data resource instance. + // + // Managed resources can signal to Terraform that the managed resource + // instance no longer exists and potentially should be recreated by calling + // the SetId method with an empty string ("") parameter and without + // returning an error. + // + // Data resources that are designed to return state for a singular + // infrastructure component should conventionally return an error if that + // infrastructure does not exist and omit any calls to the + // SetId method. + // + // The interface{} parameter is the result of the Provider type + // ConfigureFunc field execution. If the Provider does not define + // a ConfigureFunc, this will be nil.
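One plausible shape for a *WithoutTimeout implementation, per the rationale above about provider-level locking with indeterminate duration; the mutex and the create logic are assumptions for illustration:

```go
package example

import (
	"context"
	"sync"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// createMu is a hypothetical provider-level lock. Waiting on it scales with
// the number of resources, which is why a fixed default timeout is removed.
var createMu sync.Mutex

func resourceExampleCreateWithoutTimeout(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	createMu.Lock()
	defer createMu.Unlock()

	// ctx still carries cancellation from Terraform (e.g. SIGINT); only the
	// SDK-imposed default deadline is absent.
	if err := ctx.Err(); err != nil {
		return diag.FromErr(err)
	}

	d.SetId("id-123") // stand-in for the real create logic
	return nil
}
```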
This parameter is conventionally + // used to store API clients and other provider instance specific data. + // + // The diagnostics return parameter, if not nil, can contain any + // combination and multiple of warning and/or error diagnostics. ReadWithoutTimeout ReadContextFunc - // UpdateWithoutTimeout is equivalent to UpdateContext with no context timeout. + // UpdateWithoutTimeout is called when the provider must update an instance + // of a managed resource. This field is only valid when the Resource is a + // managed resource. Only one of Update, UpdateContext, or + // UpdateWithoutTimeout should be implemented. // // Most resources should prefer UpdateContext with properly implemented // operation timeout values, however there are cases where operation @@ -174,9 +442,36 @@ type Resource struct { // logic, such as a mutex, to prevent remote system errors. Since these // operations would have an indeterminate timeout that scales with the // number of resources, this allows resources to control timeout behavior. + // + // This implementation is optional. If omitted, all Schema must enable + // the ForceNew field and any practitioner changes that would have + // caused an update will instead destroy and recreate the infrastructure + // component. + // + // The Context parameter stores SDK information, such as loggers. It also + // is wired to receive any cancellation from Terraform such as a system or + // practitioner sending SIGINT (Ctrl-c). + // + // The *ResourceData parameter contains the plan and state data for this + // managed resource instance. The available data in the Get* methods is + // the proposed state, which is the merged data of the prior state, + // practitioner configuration, and any CustomizeDiff field logic. The + // available data for the GetChange* and HasChange* methods is the prior + // state and proposed state. + // + // The interface{} parameter is the result of the Provider type + // ConfigureFunc field execution. If the Provider does not define + // a ConfigureFunc, this will be nil. This parameter is conventionally + // used to store API clients and other provider instance specific data. + // + // The diagnostics return parameter, if not nil, can contain any + // combination and multiple of warning and/or error diagnostics. UpdateWithoutTimeout UpdateContextFunc - // DeleteWithoutTimeout is equivalent to DeleteContext with no context timeout. + // DeleteWithoutTimeout is called when the provider must destroy the + // instance of a managed resource. This field is only valid when the + // Resource is a managed resource. Only one of Delete, DeleteContext, or + // DeleteWithoutTimeout should be implemented. // // Most resources should prefer DeleteContext with properly implemented // operation timeout values, however there are cases where operation @@ -184,15 +479,37 @@ type Resource struct { // logic, such as a mutex, to prevent remote system errors. Since these // operations would have an indeterminate timeout that scales with the // number of resources, this allows resources to control timeout behavior. + // + // The Context parameter stores SDK information, such as loggers. It also + // is wired to receive any cancellation from Terraform such as a system or + // practitioner sending SIGINT (Ctrl-c). + // + // The *ResourceData parameter contains the state data for this managed + // resource instance. + // + // The interface{} parameter is the result of the Provider type + // ConfigureFunc field execution.
If the Provider does not define + // a ConfigureFunc, this will be nil. This parameter is conventionally + // used to store API clients and other provider instance specific data. + // + // The diagnostics return parameter, if not nil, can contain any + // combination and multiple of warning and/or error diagnostics. DeleteWithoutTimeout DeleteContextFunc - // CustomizeDiff is a custom function for working with the diff that - // Terraform has created for this resource - it can be used to customize the - // diff that has been created, diff values not controlled by configuration, - // or even veto the diff altogether and abort the plan. It is passed a - // *ResourceDiff, a structure similar to ResourceData but lacking most write - // functions like Set, while introducing new functions that work with the - // diff such as SetNew, SetNewComputed, and ForceNew. + // CustomizeDiff is called after a difference (plan) has been generated + // for the Resource and allows for customizations, such as setting values + // not controlled by configuration, conditionally triggering resource + // recreation, or implementing additional validation logic to abort a plan. + // This field is only valid when the Resource is a managed resource. + // + // The Context parameter stores SDK information, such as loggers. It also + // is wired to receive any cancellation from Terraform such as a system or + // practitioner sending SIGINT (Ctrl-c). + // + // The *ResourceDiff parameter is similar to ResourceData but replaces the + // Set method with other difference handling methods, such as SetNew, + // SetNewComputed, and ForceNew. In general, only Schema with Computed + // enabled can have those methods executed against them. // // The phases Terraform runs this in, and the state available via functions // like Get and GetChange, are as follows: @@ -206,41 +523,60 @@ type Resource struct { // // This function needs to be resilient to support all scenarios. // - // For the most part, only computed fields can be customized by this - // function. + // The interface{} parameter is the result of the Provider type + // ConfigureFunc field execution. If the Provider does not define + // a ConfigureFunc, this will be nil. This parameter is conventionally + // used to store API clients and other provider instance specific data. // - // This function is only allowed on regular resources (not data sources). + // The error return parameter, if not nil, will be converted into an error + // diagnostic when passed back to Terraform. CustomizeDiff CustomizeDiffFunc - // Importer is the ResourceImporter implementation for this resource. + // Importer is called when the provider must import an instance of a + // managed resource. This field is only valid when the Resource is a + // managed resource. + // // If this is nil, then this resource does not support importing. If // this is non-nil, then it supports importing and ResourceImporter // must be validated. The validity of ResourceImporter is verified // by InternalValidate on Resource. Importer *ResourceImporter - // If non-empty, this string is emitted as a warning during Validate. + // If non-empty, this string is emitted as the details of a warning + // diagnostic during validation (validate, plan, and apply operations). + // This field is only valid when the Resource is a managed resource or + // data resource. 
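A hypothetical CustomizeDiff sketch using the ResourceDiff methods mentioned above: it vetoes plans that shrink a "size" attribute and forces recreation when "engine" changes; both attribute names are assumptions:

```go
package example

import (
	"context"
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceExampleCustomizeDiff(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
	// Veto the plan: returning an error aborts it.
	oldSize, newSize := d.GetChange("size")
	if oldSize.(int) > newSize.(int) {
		return fmt.Errorf("size cannot be reduced (%d -> %d)", oldSize.(int), newSize.(int))
	}

	// Conditionally trigger resource recreation.
	if d.HasChange("engine") {
		if err := d.ForceNew("engine"); err != nil {
			return err
		}
	}

	return nil
}
```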
DeprecationMessage string - // Timeouts allow users to specify specific time durations in which an - // operation should time out, to allow them to extend an action to suit their - // usage. For example, a user may specify a large Creation timeout for their - // AWS RDS Instance due to it's size, or restoring from a snapshot. - // Resource implementors must enable Timeout support by adding the allowed - // actions (Create, Read, Update, Delete, Default) to the Resource struct, and - // accessing them in the matching methods. + // Timeouts configures the default time duration allowed before a create, + // read, update, or delete operation is considered timed out, which returns + // an error to practitioners. This field is only valid when the Resource is + // a managed resource or data resource. + // + // When implemented, practitioners can add a timeouts configuration block + // within their managed resource or data resource configuration to further + // customize the create, read, update, or delete operation timeouts. For + // example, a configuration may specify a longer create timeout for a + // database resource due to its data size. + // + // The ResourceData that is passed to create, read, update, and delete + // functionality can access the merged time duration of the Resource + // default timeouts configured in this field and the practitioner timeouts + // configuration via the Timeout method. Practitioner configuration + // always overrides any default values set here, whether shorter or longer. Timeouts *ResourceTimeout // Description is used as the description for docs, the language server and // other user facing usage. It can be plain-text or markdown depending on the - // global DescriptionKind setting. + // global DescriptionKind setting. This field is valid for any Resource. Description string // UseJSONNumber should be set when state upgraders will expect // json.Numbers instead of float64s for numbers. This is added as a // toggle for backwards compatibility for type assertions, but should // be used in all new resources to avoid bugs with sufficiently large - // user input. + // user input. This field is only valid when the Resource is a managed + // resource. // // See github.com/hashicorp/terraform-plugin-sdk/issues/655 for more // details. @@ -301,6 +637,7 @@ type DeleteContextFunc func(context.Context, *ResourceData, interface{}) diag.Di type StateMigrateFunc func( int, *terraform.InstanceState, interface{}) (*terraform.InstanceState, error) +// StateUpgrader is the implementation of a single schema version state upgrade. type StateUpgrader struct { // Version is the version schema that this Upgrader will handle, converting // it to Version+1. @@ -319,7 +656,36 @@ type StateUpgrader struct { Upgrade StateUpgradeFunc } -// See StateUpgrader +// StateUpgradeFunc is the function signature for a schema version state upgrade handler. +// +// The Context parameter stores SDK information, such as loggers. It also +// is wired to receive any cancellation from Terraform such as a system or +// practitioner sending SIGINT (Ctrl-c). +// +// The map[string]interface{} parameter contains the previous schema version +// state data for a managed resource instance.
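A sketch of the Timeouts field described above (the durations are arbitrary, and a complete resource would also define its CRUD functions); inside an operation, the merged value is read back via d.Timeout(schema.TimeoutCreate):

```go
package example

import (
	"time"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceExampleWithTimeouts() *schema.Resource {
	return &schema.Resource{
		// Defaults that apply unless the practitioner overrides them in a
		// timeouts configuration block.
		Timeouts: &schema.ResourceTimeout{
			Create: schema.DefaultTimeout(45 * time.Minute),
			Delete: schema.DefaultTimeout(15 * time.Minute),
		},
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true},
		},
	}
}
```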
The keys are top level attribute +// or block names mapped to values that can be type asserted, similarly to +// fetching values using the ResourceData Get* methods: +// +// - TypeBool: bool +// - TypeFloat: float64 +// - TypeInt: int +// - TypeList: []interface{} +// - TypeMap: map[string]interface{} +// - TypeSet: *Set +// - TypeString: string +// +// In certain scenarios, the map may be nil, so checking for that condition +// upfront is recommended to prevent potential panics. +// +// The interface{} parameter is the result of the Provider type +// ConfigureFunc field execution. If the Provider does not define +// a ConfigureFunc, this will be nil. This parameter is conventionally +// used to store API clients and other provider instance specific data. +// +// The map[string]interface{} return parameter should contain the upgraded +// schema version state data for a managed resource instance. Values must +// align to the typing mentioned above. type StateUpgradeFunc func(ctx context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) // See Resource documentation. diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/schema.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/schema.go index 7b2374051b2d..fdd080a97374 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/schema.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema/schema.go @@ -33,9 +33,14 @@ import ( "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" ) -// Schema is used to describe the structure of a value. +// Schema describes the structure and type information of a value, whether +// sourced from configuration, plan, or state data. Schema is used in Provider +// and Resource types (for managed resources and data resources) and is +// fundamental to the implementations of ResourceData and ResourceDiff. // -// Read the documentation of the struct elements for important details. +// The Type field must always be set. At least one of Required, Optional, +// Optional and Computed, or Computed must be enabled unless the Schema is +// directly an implementation of an Elem field of another Schema. type Schema struct { // Type is the type of the value and must be one of the ValueType values. // @@ -73,14 +78,37 @@ type Schema struct { // behavior, and SchemaConfigModeBlock is not permitted. ConfigMode SchemaConfigMode - // If one of these is set, then this item can come from the configuration. - // Both cannot be set. If Optional is set, the value is optional. If - // Required is set, the value is required. + // Required indicates whether the practitioner must enter a value in the + // configuration for this attribute. Required cannot be used with Computed, + // Default, DefaultFunc, DiffSuppressFunc, DiffSuppressOnRefresh, + // InputDefault, Optional, or StateFunc. At least one of Required, + // Optional, Optional and Computed, or Computed must be enabled. + Required bool + + // Optional indicates whether the practitioner can choose to not enter + // a value in the configuration for this attribute. Optional cannot be used + // with Required. // - // One of these must be set if the value is not computed. That is: - // value either comes from the config, is computed, or is both.
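A hypothetical StateUpgradeFunc matching the signature above, upgrading version 0 state by renaming a top level attribute; note the nil-map guard the documentation recommends:

```go
package example

import (
	"context"
)

func resourceExampleStateUpgradeV0(ctx context.Context, rawState map[string]interface{}, meta interface{}) (map[string]interface{}, error) {
	// The raw state may be nil in certain scenarios.
	if rawState == nil {
		rawState = map[string]interface{}{}
	}

	// Rename the hypothetical "instance_name" (v0) to "name" (v1).
	if v, ok := rawState["instance_name"]; ok {
		rawState["name"] = v
		delete(rawState, "instance_name")
	}

	return rawState, nil
}
```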
+ // If also using Default or DefaultFunc, Computed should also be enabled, + // otherwise Terraform can output warning logs or "inconsistent result + // after apply" errors. Optional bool - Required bool + + // Computed indicates whether the provider may return its own value for + // this attribute or not. Computed cannot be used with Required. If + // Required and Optional are both false, the attribute will be considered + // "read only" for the practitioner, with only the provider able to set + // its value. + Computed bool + + // ForceNew indicates whether a change in this value requires the + // replacement (destroy and create) of the managed resource instance, + // rather than an in-place update. This field is only valid when the + // encapsulating Resource is a managed resource. + // + // If conditional replacement logic is needed, use the Resource type + // CustomizeDiff field to call the ResourceDiff type ForceNew method. + ForceNew bool // If this is non-nil, the provided function will be used during diff // of this field. If this is nil, a default diff for the type of the @@ -129,29 +157,34 @@ type Schema struct { // for existing providers if activated everywhere all at once. DiffSuppressOnRefresh bool - // If this is non-nil, then this will be a default value that is used - // when this item is not set in the configuration. + // Default indicates a value to set if this attribute is not set in the + // configuration. Default cannot be used with DefaultFunc or Required. + // Default is only supported if the Type is TypeBool, TypeFloat, TypeInt, + // or TypeString. Default cannot be used if the Schema is directly an + // implementation of an Elem field of another Schema, such as trying to + // set a default value for a TypeList or TypeSet. // - // DefaultFunc can be specified to compute a dynamic default. - // Only one of Default or DefaultFunc can be set. If DefaultFunc is - // used then its return value should be stable to avoid generating - // confusing/perpetual diffs. + // Changing Default can be a breaking change, especially if the + // attribute has ForceNew enabled. If a default needs to change to align + // with changing assumptions in an upstream API, then it may be necessary + // to also implement resource state upgrade functionality to change the + // state to match or update read operation logic to align with the new + // default. + Default interface{} + + // DefaultFunc can be specified to compute a dynamic default when this + // attribute is not set in the configuration. DefaultFunc cannot be used + // with Default. For legacy reasons, DefaultFunc can be used with Required + // attributes in a Provider schema, which will prompt practitioners for + // input if the result of this function is nil.
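The field combinations described above, collected into one hypothetical attribute map (names and values are illustrative only):

```go
package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

var exampleSchema = map[string]*schema.Schema{
	"name": {
		Type:     schema.TypeString,
		Required: true, // practitioner must configure; Default not allowed
	},
	"retention_days": {
		Type:     schema.TypeInt,
		Optional: true,
		Default:  7, // used when omitted from configuration
	},
	"arn": {
		Type:     schema.TypeString,
		Computed: true, // read only: only the provider sets this value
	},
	"engine": {
		Type:     schema.TypeString,
		Required: true,
		ForceNew: true, // any change destroys and recreates the instance
	},
}
```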
- // - // If either of these is set, then the user won't be asked for input - // for this key if the default is not nil. - Default interface{} + // The return value should be stable to avoid generating confusing + // plan differences. Changing the return value can be a breaking change, + // especially if ForceNew is enabled. If a default needs to change to align + // with changing assumptions in an upstream API, then it may be necessary + // to also implement resource state upgrade functionality to change the + // state to match or update read operation logic to align with the new + // default. DefaultFunc SchemaDefaultFunc // Description is used as the description for docs, the language server and @@ -164,85 +197,125 @@ type Schema struct { // asked for. If Input is asked, this will be the default value offered. InputDefault string - // The fields below relate to diffs. - // - // If Computed is true, then the result of this value is computed - // (unless specified by config) on creation. - // - // If ForceNew is true, then a change in this resource necessitates - // the creation of a new resource. - // // StateFunc is a function called to change the value of this before // storing it in the state (and likewise before comparing for diffs). // The use for this is for example with large strings, you may want // to simply store the hash of it. - Computed bool - ForceNew bool StateFunc SchemaStateFunc - // The following fields are only set for a TypeList, TypeSet, or TypeMap. + // Elem represents the element type for a TypeList, TypeSet, or TypeMap + // attribute or block. The only valid types are *Schema and *Resource. + // Only TypeList and TypeSet support *Resource. + // + // If the Elem is a *Schema, the surrounding Schema represents a single + // attribute with a single element type for underlying elements. In + // practitioner configurations, an equals sign (=) is required to set + // the value. Refer to the following documentation: + // + // https://www.terraform.io/docs/language/syntax/configuration.html + // + // The underlying *Schema is only required to implement Type. ValidateFunc + // or ValidateDiagFunc can be used to validate each element value. + // + // If the Elem is a *Resource, the surrounding Schema represents a + // configuration block. Blocks can contain underlying attributes or blocks. + // In practitioner configurations, an equals sign (=) cannot be used to + // set the value. Blocks are instead repeated as necessary, or require + // the use of dynamic block expressions. Refer to the following + // documentation: // - // Elem represents the element type. For a TypeMap, it must be a *Schema - // with a Type that is one of the primitives: TypeString, TypeBool, - // TypeInt, or TypeFloat. Otherwise it may be either a *Schema or a - // *Resource. If it is *Schema, the element type is just a simple value. - // If it is *Resource, the element type is a complex structure, - // potentially managed via its own CRUD actions on the API. + // https://www.terraform.io/docs/language/syntax/configuration.html + // https://www.terraform.io/docs/language/expressions/dynamic-blocks.html + // + // The underlying *Resource must only implement the Schema field. Elem interface{} - // The following fields are only set for a TypeList or TypeSet. - // // MaxItems defines a maximum amount of items that can exist within a - // TypeSet or TypeList. Specific use cases would be if a TypeSet is being - // used to wrap a complex structure, however more than one instance would - // cause instability. 
- // + // TypeSet or TypeList. + MaxItems int + // MinItems defines a minimum amount of items that can exist within a - // TypeSet or TypeList. Specific use cases would be if a TypeSet is being - // used to wrap a complex structure, however less than one instance would - // cause instability. + // TypeSet or TypeList. // // If the field Optional is set to true then MinItems is ignored and thus // effectively zero. - MaxItems int MinItems int - // The following fields are only valid for a TypeSet type. - // - // Set defines a function to determine the unique ID of an item so that - // a proper set can be built. + // Set defines a custom hash algorithm for each TypeSet element. If not + // defined, the SDK implements a default hash algorithm based on the + // underlying structure and type information of the Elem field. Set SchemaSetFunc // ComputedWhen is a set of queries on the configuration. Whenever any // of these things is changed, it will require a recompute (this requires // that Computed is set to true). // - // NOTE: This currently does not work. + // Deprecated: This functionality is not implemented and this field + // declaration should be removed. ComputedWhen []string - // ConflictsWith is a set of schema keys that conflict with this schema. - // This will only check that they're set in the _config_. This will not - // raise an error for a malfunctioning resource that sets a conflicting - // key. - // - // ExactlyOneOf is a set of schema keys that, when set, only one of the - // keys in that list can be specified. It will error if none are - // specified as well. - // - // AtLeastOneOf is a set of schema keys that, when set, at least one of - // the keys in that list must be specified. + // ConflictsWith is a set of attribute paths, including this attribute, + // whose configurations cannot be set simultaneously. This implements the + // validation logic declaratively within the schema and can trigger earlier + // in Terraform operations, rather than using create or update logic which + // only triggers during apply. // - // RequiredWith is a set of schema keys that must be set simultaneously. + // Only absolute attribute paths, ones starting with top level attribute + // names, are supported. Attribute paths cannot be accurately declared + // for TypeList (if MaxItems is greater than 1), TypeMap, or TypeSet + // attributes. To reference an attribute under a single configuration block + // (TypeList with Elem of *Resource and MaxItems of 1), the syntax is + // "parent_block_name.0.child_attribute_name". ConflictsWith []string - ExactlyOneOf []string - AtLeastOneOf []string - RequiredWith []string - // When Deprecated is set, this attribute is deprecated. + // ExactlyOneOf is a set of attribute paths, including this attribute, + // where only one attribute out of all specified can be configured. It will + // return a validation error if none are specified as well. This implements + // the validation logic declaratively within the schema and can trigger + // earlier in Terraform operations, rather than using create or update + // logic which only triggers during apply. // - // A deprecated field still works, but will probably stop working in near - // future. This string is the message shown to the user with instructions on - // how to address the deprecation. + // Only absolute attribute paths, ones starting with top level attribute + // names, are supported.
Attribute paths cannot be accurately declared + // for TypeList (if MaxItems is greater than 1), TypeMap, or TypeSet + // attributes. To reference an attribute under a single configuration block + // (TypeList with Elem of *Resource and MaxItems of 1), the syntax is + // "parent_block_name.0.child_attribute_name". + ExactlyOneOf []string + + // AtLeastOneOf is a set of attribute paths, including this attribute, + // in which at least one of the attributes must be configured. This + // implements the validation logic declaratively within the schema and can + // trigger earlier in Terraform operations, rather than using create or + // update logic which only triggers during apply. + // + // Only absolute attribute paths, ones starting with top level attribute + // names, are supported. Attribute paths cannot be accurately declared + // for TypeList (if MaxItems is greater than 1), TypeMap, or TypeSet + // attributes. To reference an attribute under a single configuration block + // (TypeList with Elem of *Resource and MaxItems of 1), the syntax is + // "parent_block_name.0.child_attribute_name". + AtLeastOneOf []string + + // RequiredWith is a set of attribute paths, including this attribute, + // that must be set simultaneously. This implements the validation logic + // declaratively within the schema and can trigger earlier in Terraform + // operations, rather than using create or update logic which only triggers + // during apply. + // + // Only absolute attribute paths, ones starting with top level attribute + // names, are supported. Attribute paths cannot be accurately declared + // for TypeList (if MaxItems is greater than 1), TypeMap, or TypeSet + // attributes. To reference an attribute under a single configuration block + // (TypeList with Elem of *Resource and MaxItems of 1), the syntax is + // "parent_block_name.0.child_attribute_name". + RequiredWith []string + + // Deprecated indicates the message to include in a warning diagnostic to + // practitioners when this attribute is configured. Typically this is used + // to signal that this attribute will be removed in the future and provide + // next steps to the practitioner, such as using a different attribute, a + // different resource, or removing it entirely. Deprecated string // ValidateFunc allows individual fields to define arbitrary validation @@ -278,9 +351,28 @@ type Schema struct { ValidateDiagFunc SchemaValidateDiagFunc // Sensitive ensures that the attribute's value does not get displayed in - // logs or regular output. It should be used for passwords or other - // secret fields. Future versions of Terraform may encrypt these - // values. + // the Terraform user interface output. It should be used for passwords or + // other values that should be hidden. + // + // Terraform does not support conditional sensitivity, so if the value may + // only be sensitive in certain scenarios, a pragmatic upfront choice will + // be necessary about whether or not to always hide the value. Some + // providers may opt to split up resources based on sensitivity, to ensure + // that practitioners without sensitive values do not have values + // unnecessarily hidden. + // + // Terraform does not support passing sensitivity from configurations to + // providers. For example, if a sensitive value is configured via another + // attribute, this attribute is not marked Sensitive, and the value is used + // in this attribute value, the sensitivity is not transitive. The value + // will be displayed as normal.
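The absolute attribute path syntax above in a hypothetical two-attribute example, where exactly one of the attributes must be configured:

```go
package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

var exampleExactlyOneOfSchema = map[string]*schema.Schema{
	"bucket": {
		Type:         schema.TypeString,
		Optional:     true,
		ExactlyOneOf: []string{"bucket", "bucket_prefix"},
	},
	"bucket_prefix": {
		Type:         schema.TypeString,
		Optional:     true,
		ExactlyOneOf: []string{"bucket", "bucket_prefix"},
	},
}
```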
+ // + // Sensitive values propagate when referenced in other parts of a + // configuration unless the nonsensitive() configuration function is used. + // Certain configuration usage may also expand the sensitivity. For + // example, including the sensitive value in a set may mark the whole set + // as sensitive. Any outputs containing a sensitive value must enable the + // output sensitive argument. Sensitive bool } @@ -2153,8 +2245,19 @@ func (m schemaMap) validatePrimitive( // decode a float as an integer. // the config shims only use int for integral number values + // also accept a string, just as the TypeBool and TypeFloat cases do if v, ok := raw.(int); ok { decoded = v + } else if _, ok := raw.(string); ok { + var n int + if err := mapstructure.WeakDecode(raw, &n); err != nil { + return append(diags, diag.Diagnostic{ + Severity: diag.Error, + Summary: err.Error(), + AttributePath: path, + }) + } + decoded = n } else { return append(diags, diag.Diagnostic{ Severity: diag.Error, @@ -2163,7 +2266,7 @@ func (m schemaMap) validatePrimitive( }) } case TypeFloat: - // Verify that we can parse this as an int + // Verify that we can parse this as a float var n float64 if err := mapstructure.WeakDecode(raw, &n); err != nil { return append(diags, diag.Diagnostic{ diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/meta.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/meta.go index f1376c2d394a..b0515c8d0854 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/meta.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/meta.go @@ -66,8 +66,21 @@ func ToDiagFunc(validator schema.SchemaValidateFunc) schema.SchemaValidateDiagFu return func(i interface{}, p cty.Path) diag.Diagnostics { var diags diag.Diagnostics - attr := p[len(p)-1].(cty.GetAttrStep) - ws, es := validator(i, attr.Name) + // A practitioner-friendly key for any SchemaValidateFunc output. + // Generally this should be the last attribute name on the path. + // If not found for some unexpected reason, an empty string is fine + // as the diagnostic will have the full attribute path anyway.
+ var key string + + // Reverse search for last cty.GetAttrStep + for i := len(p) - 1; i >= 0; i-- { + if pathStep, ok := p[i].(cty.GetAttrStep); ok { + key = pathStep.Name + break + } + } + + ws, es := validator(i, key) for _, w := range ws { diags = append(diags, diag.Diagnostic{ diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/testing.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/testing.go index 596c5754a48e..d861f5a2af9c 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/testing.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation/testing.go @@ -5,7 +5,6 @@ import ( testing "github.com/mitchellh/go-testing-interface" - "github.com/hashicorp/go-cty/cty" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" ) @@ -16,12 +15,6 @@ type testCase struct { expectedErr *regexp.Regexp } -type diagTestCase struct { - val interface{} - f schema.SchemaValidateDiagFunc - expectedErr *regexp.Regexp -} - func runTestCases(t testing.T, cases []testCase) { t.Helper() @@ -52,29 +45,6 @@ func matchAnyError(errs []error, r *regexp.Regexp) bool { return false } -func runDiagTestCases(t testing.T, cases []diagTestCase) { - t.Helper() - - for i, tc := range cases { - p := cty.Path{ - cty.GetAttrStep{Name: "test_property"}, - } - diags := tc.f(tc.val, p) - - if !diags.HasError() && tc.expectedErr == nil { - continue - } - - if diags.HasError() && tc.expectedErr == nil { - t.Fatalf("expected test case %d to produce no errors, got %v", i, diags) - } - - if !matchAnyDiagSummary(diags, tc.expectedErr) { - t.Fatalf("expected test case %d to produce error matching \"%s\", got %v", i, tc.expectedErr, diags) - } - } -} - func matchAnyDiagSummary(ds diag.Diagnostics, r *regexp.Regexp) bool { for _, d := range ds { if r.MatchString(d.Summary) { diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/diagnostics.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/diagnostics.go index d0277227564f..e02c1e443944 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/diagnostics.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugin/convert/diagnostics.go @@ -1,34 +1,61 @@ package convert import ( - "fmt" + "context" "github.com/hashicorp/go-cty/cty" "github.com/hashicorp/terraform-plugin-go/tfprotov5" "github.com/hashicorp/terraform-plugin-go/tftypes" "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/internal/logging" ) // AppendProtoDiag appends a new diagnostic from a warning string or an error. // This panics if d is not a string or error. 
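For context on the ToDiagFunc change above, a typical (assumed) call site wraps a legacy SchemaValidateFunc for use as a ValidateDiagFunc; the adjusted logic derives the diagnostic key from the last attribute name step on the path instead of assuming the final step is one:

```go
package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
)

var exampleValidatedSchema = map[string]*schema.Schema{
	"name": {
		Type:     schema.TypeString,
		Required: true,
		// ToDiagFunc adapts the string-keyed legacy validator to the
		// cty.Path-based diagnostics interface.
		ValidateDiagFunc: validation.ToDiagFunc(validation.StringLenBetween(1, 64)),
	},
}
```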
-func AppendProtoDiag(diags []*tfprotov5.Diagnostic, d interface{}) []*tfprotov5.Diagnostic { +func AppendProtoDiag(ctx context.Context, diags []*tfprotov5.Diagnostic, d interface{}) []*tfprotov5.Diagnostic { switch d := d.(type) { case cty.PathError: ap := PathToAttributePath(d.Path) - diags = append(diags, &tfprotov5.Diagnostic{ + diagnostic := &tfprotov5.Diagnostic{ Severity: tfprotov5.DiagnosticSeverityError, Summary: d.Error(), Attribute: ap, - }) + } + + if diagnostic.Summary == "" { + logging.HelperSchemaWarn(ctx, "detected empty error string for diagnostic in AppendProtoDiag for cty.PathError type") + diagnostic.Summary = "Empty Error String" + diagnostic.Detail = "This is always a bug in the provider and should be reported to the provider developers." + } + + diags = append(diags, diagnostic) case diag.Diagnostics: diags = append(diags, DiagsToProto(d)...) case error: - diags = append(diags, &tfprotov5.Diagnostic{ + if d == nil { + logging.HelperSchemaDebug(ctx, "skipping diagnostic for nil error in AppendProtoDiag") + return diags + } + + diagnostic := &tfprotov5.Diagnostic{ Severity: tfprotov5.DiagnosticSeverityError, Summary: d.Error(), - }) + } + + if diagnostic.Summary == "" { + logging.HelperSchemaWarn(ctx, "detected empty error string for diagnostic in AppendProtoDiag for error type") + diagnostic.Summary = "Error Missing Message" + diagnostic.Detail = "This is always a bug in the provider and should be reported to the provider developers." + } + + diags = append(diags, diagnostic) case string: + if d == "" { + logging.HelperSchemaDebug(ctx, "skipping diagnostic for empty string in AppendProtoDiag") + return diags + } + diags = append(diags, &tfprotov5.Diagnostic{ Severity: tfprotov5.DiagnosticSeverityWarning, Summary: d, @@ -68,19 +95,18 @@ func ProtoToDiags(ds []*tfprotov5.Diagnostic) diag.Diagnostics { func DiagsToProto(diags diag.Diagnostics) []*tfprotov5.Diagnostic { var ds []*tfprotov5.Diagnostic for _, d := range diags { - if err := d.Validate(); err != nil { - panic(fmt.Errorf("Invalid diagnostic: %s. This is always a bug in the provider implementation", err)) - } protoDiag := &tfprotov5.Diagnostic{ + Severity: tfprotov5.DiagnosticSeverityError, Summary: d.Summary, Detail: d.Detail, Attribute: PathToAttributePath(d.AttributePath), } - if d.Severity == diag.Error { - protoDiag.Severity = tfprotov5.DiagnosticSeverityError - } else if d.Severity == diag.Warning { + if d.Severity == diag.Warning { protoDiag.Severity = tfprotov5.DiagnosticSeverityWarning } + if d.Summary == "" { + protoDiag.Summary = "Empty Summary: This is always a bug in the provider and should be reported to the provider developers." + } ds = append(ds, protoDiag) } return ds diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/environment_variables.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/environment_variables.go index 4aa7c57c69c0..e3eb626202fc 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/environment_variables.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/environment_variables.go @@ -2,6 +2,9 @@ package plugintest // Environment variables const ( + // Disables checkpoint.hashicorp.com calls in Terraform CLI. + EnvCheckpointDisable = "CHECKPOINT_DISABLE" + // Environment variable with acceptance testing temporary directory for // testing files and Terraform CLI installation, if installation is // required. 
By default, the operating system temporary directory is used. diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/helper.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/helper.go index 8ea6756fd4fa..d9e620f33fda 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/helper.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/helper.go @@ -119,8 +119,17 @@ func (h *Helper) NewWorkingDir(ctx context.Context) (*WorkingDir, error) { } tf, err := tfexec.NewTerraform(dir, h.terraformExec) + if err != nil { - return nil, err + return nil, fmt.Errorf("unable to create terraform-exec instance: %w", err) + } + + err = tf.SetEnv(map[string]string{ + EnvCheckpointDisable: "1", + }) + + if err != nil { + return nil, fmt.Errorf("unable to set terraform-exec environment variables: %w", err) } return &WorkingDir{ diff --git a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/working_dir.go b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/working_dir.go index 37cd58f09a02..6a398a9ea128 100644 --- a/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/working_dir.go +++ b/providerlint/vendor/github.com/hashicorp/terraform-plugin-sdk/v2/internal/plugintest/working_dir.go @@ -47,8 +47,6 @@ type WorkingDir struct { // reattachInfo stores the gRPC socket info required for Terraform's // plugin reattach functionality reattachInfo tfexec.ReattachInfo - - env map[string]string } // Close deletes the directories and files created to represent the receiving @@ -58,19 +56,6 @@ func (wd *WorkingDir) Close() error { return os.RemoveAll(wd.baseDir) } -// Setenv sets an environment variable on the WorkingDir. -func (wd *WorkingDir) Setenv(envVar, val string) { - if wd.env == nil { - wd.env = map[string]string{} - } - wd.env[envVar] = val -} - -// Unsetenv removes an environment variable from the WorkingDir. 
-func (wd *WorkingDir) Unsetenv(envVar string) { - delete(wd.env, envVar) -} - func (wd *WorkingDir) SetReattachInfo(ctx context.Context, reattachInfo tfexec.ReattachInfo) { logging.HelperResourceTrace(ctx, "Setting Terraform CLI reattach configuration", map[string]interface{}{"tf_reattach_config": reattachInfo}) wd.reattachInfo = reattachInfo diff --git a/providerlint/vendor/modules.txt b/providerlint/vendor/modules.txt index 334a0165e100..3b39e25cb7d9 100644 --- a/providerlint/vendor/modules.txt +++ b/providerlint/vendor/modules.txt @@ -2,7 +2,7 @@ github.com/agext/levenshtein # github.com/apparentlymart/go-textseg/v13 v13.0.0 github.com/apparentlymart/go-textseg/v13/textseg -# github.com/aws/aws-sdk-go v1.43.21 +# github.com/aws/aws-sdk-go v1.43.34 ## explicit github.com/aws/aws-sdk-go/aws/awserr github.com/aws/aws-sdk-go/aws/endpoints @@ -245,7 +245,7 @@ github.com/hashicorp/terraform-plugin-log/internal/hclogutils github.com/hashicorp/terraform-plugin-log/internal/logging github.com/hashicorp/terraform-plugin-log/tflog github.com/hashicorp/terraform-plugin-log/tfsdklog -# github.com/hashicorp/terraform-plugin-sdk/v2 v2.12.0 +# github.com/hashicorp/terraform-plugin-sdk/v2 v2.13.0 ## explicit github.com/hashicorp/terraform-plugin-sdk/v2/diag github.com/hashicorp/terraform-plugin-sdk/v2/helper/logging diff --git a/skaff/go.mod b/skaff/go.mod index 90cbdeabdf69..4b1913ee8136 100644 --- a/skaff/go.mod +++ b/skaff/go.mod @@ -8,12 +8,12 @@ require ( ) require ( - github.com/aws/aws-sdk-go v1.43.21 // indirect - github.com/aws/aws-sdk-go-v2 v1.15.0 // indirect - github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.6 // indirect - github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.0 // indirect - github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.0 // indirect - github.com/aws/smithy-go v1.11.1 // indirect + github.com/aws/aws-sdk-go v1.43.26 // indirect + github.com/aws/aws-sdk-go-v2 v1.16.1 // indirect + github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.8 // indirect + github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.2 // indirect + github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.2 // indirect + github.com/aws/smithy-go v1.11.2 // indirect github.com/inconshreveable/mousetrap v1.0.0 // indirect github.com/jmespath/go-jmespath v0.4.0 // indirect github.com/spf13/pflag v1.0.5 // indirect diff --git a/skaff/go.sum b/skaff/go.sum index 54cc7f6608ed..fdab753fabd8 100644 --- a/skaff/go.sum +++ b/skaff/go.sum @@ -19,26 +19,30 @@ github.com/apparentlymart/go-textseg/v13 v13.0.0/go.mod h1:ZK2fH7c4NqDTLtiYLvIkE github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs= github.com/aws/aws-sdk-go v1.42.18/go.mod h1:585smgzpB/KqRA+K3y/NL/oYRqQvpNJYvLm+LY1U59Q= github.com/aws/aws-sdk-go v1.42.52/go.mod h1:OGr6lGMAKGlG9CVrYnWYDKIyb829c6EVBRjxqjmPepc= -github.com/aws/aws-sdk-go v1.43.21 h1:E4S2eX3d2gKJyI/ISrcIrSwXwqjIvCK85gtBMt4sAPE= -github.com/aws/aws-sdk-go v1.43.21/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= -github.com/aws/aws-sdk-go-v2 v1.15.0 h1:f9kWLNfyCzCB43eupDAk3/XgJ2EpgktiySD6leqs0js= +github.com/aws/aws-sdk-go v1.43.26 h1:/ABcm/2xp+Vu+iUx8+TmlwXMGjO7fmZqJMoZjml4y/4= +github.com/aws/aws-sdk-go v1.43.26/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo= github.com/aws/aws-sdk-go-v2 v1.15.0/go.mod h1:lJYcuZZEHWNIb6ugJjbQY1fykdoobWbOS7kJYb4APoI= +github.com/aws/aws-sdk-go-v2 v1.16.1 h1:udzee98w8H6ikRgtFdVN9JzzYEbi/quFfSvduZETJIU= +github.com/aws/aws-sdk-go-v2 v1.16.1/go.mod 
h1:ytwTPBG6fXTZLxxeeCCWj2/EMYp/xDUgX+OET6TLNNU= github.com/aws/aws-sdk-go-v2/config v1.15.0/go.mod h1:NccaLq2Z9doMmeQXHQRrt2rm+2FbkrcPvfdbCaQn5hY= github.com/aws/aws-sdk-go-v2/credentials v1.10.0/go.mod h1:HWJMr4ut5X+Lt/7epc7I6Llg5QIcoFHKAeIzw32t6EE= github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.12.0/go.mod h1:prX26x9rmLwkEE1VVCelQOQgRN9sOVIssgowIJ270SE= -github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.6 h1:xiGjGVQsem2cxoIX61uRGy+Jux2s9C/kKbTrWLdrU54= github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.6/go.mod h1:SSPEdf9spsFgJyhjrXvawfpyzrXHBCUe+2eQ1CjC1Ak= -github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.0 h1:bt3zw79tm209glISdMRCIVRCwvSDXxgAxh5KWe2qHkY= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.8 h1:CDaO90VZVBAL1sK87S5oSPIrp7yZqORv1hPIi2UsTMk= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.1.8/go.mod h1:LnTQMTqbKsbtt+UI5+wPsB7jedW+2ZgozoPG8k6cMxg= github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.0/go.mod h1:viTrxhAuejD+LszDahzAE2x40YjYWhMqzHxv2ZiWaME= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.2 h1:XXR3cdOcKRCTZf6ctcqpMf+go1BdzTm6+T9Ul5zxcMI= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.4.2/go.mod h1:1x4ZP3Z8odssdhuLI+/1Tqw6Pt/VAaP4Tr8EUxHvPXE= github.com/aws/aws-sdk-go-v2/internal/ini v1.3.7/go.mod h1:P5sjYYf2nc5dE6cZIzEMsVtq6XeLD7c4rM+kQJPrByA= github.com/aws/aws-sdk-go-v2/service/iam v1.18.0/go.mod h1:9wRsXAkRJ7qBWIDTFYa66Cx+oQJsPEnBYCPrinanpS8= github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.9.0/go.mod h1:R31ot6BgESRCIoxwfKtIHzZMo/vsZn2un81g9BJ4nmo= -github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.0 h1:PN0LQirFrjh9esAO80iZXo+asiTtLpjNCXpzZ+1BKCw= -github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.0/go.mod h1:xzqCQW+Y6wn/4+9WVo3IPmnRTsN8Nwlw6cNUd6HVzqI= +github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.2 h1:27HvYVELTcPcwv+jOn/mcwFGJWfAzuLsoVh/XQYm0wo= +github.com/aws/aws-sdk-go-v2/service/route53domains v1.12.2/go.mod h1:CUyAVaFNv9yWPgexd2pHi8nwxwygUWf+MKAsKbUt6Ts= github.com/aws/aws-sdk-go-v2/service/sso v1.11.0/go.mod h1:d1WcT0OjggjQCAdOkph8ijkr5sUwk1IH/VenOn7W1PU= github.com/aws/aws-sdk-go-v2/service/sts v1.16.0/go.mod h1:+8k4H2ASUZZXmjx/s3DFLo9tGBb44lkz3XcgfypJY7s= -github.com/aws/smithy-go v1.11.1 h1:IQ+lPZVkSM3FRtyaDox41R8YS6iwPMYIreejOgPW49g= github.com/aws/smithy-go v1.11.1/go.mod h1:3xHYmszWVx2c0kIwQeEVf9uSm4fYZt67FBJnwub1bgM= +github.com/aws/smithy-go v1.11.2 h1:eG/N+CcUMAvsdffgMvjMKwfyDzIkjM6pfxMJ8Mzc6mE= +github.com/aws/smithy-go v1.11.2/go.mod h1:3xHYmszWVx2c0kIwQeEVf9uSm4fYZt67FBJnwub1bgM= github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A= github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc/go.mod h1:paBWMcWSl3LHKBqUq+rly7CNSldXjb2rDl3JlRe0mD8= github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= diff --git a/tools/go.mod b/tools/go.mod index dda3eeb9f063..d1e9c3b7cb3a 100644 --- a/tools/go.mod +++ b/tools/go.mod @@ -5,7 +5,7 @@ go 1.17 require ( github.com/bflad/tfproviderdocs v0.9.1 github.com/client9/misspell v0.3.4 - github.com/golangci/golangci-lint v1.45.0 + github.com/golangci/golangci-lint v1.45.2 github.com/hashicorp/go-changelog v0.0.0-20201005170154-56335215ce3a github.com/katbyte/terrafmt v0.3.0 github.com/pavius/impi v0.0.3 @@ -43,7 +43,7 @@ require ( github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect github.com/bgentry/speakeasy v0.1.0 // indirect github.com/bkielbasa/cyclop v1.2.0 // indirect - 
github.com/blizzy78/varnamelen v0.6.0 // indirect + github.com/blizzy78/varnamelen v0.6.1 // indirect github.com/bmatcuk/doublestar v1.3.4 // indirect github.com/bombsimon/wsl/v3 v3.3.0 // indirect github.com/breml/bidichk v0.2.2 // indirect @@ -109,7 +109,7 @@ require ( github.com/hashicorp/go-plugin v1.4.3 // indirect github.com/hashicorp/go-safetemp v1.0.0 // indirect github.com/hashicorp/go-uuid v1.0.2 // indirect - github.com/hashicorp/go-version v1.3.0 // indirect + github.com/hashicorp/go-version v1.4.0 // indirect github.com/hashicorp/hcl v1.0.0 // indirect github.com/hashicorp/hcl/v2 v2.11.1 // indirect github.com/hashicorp/logutils v1.0.0 // indirect @@ -197,7 +197,7 @@ require ( github.com/spf13/viper v1.10.1 // indirect github.com/ssgreg/nlreturn/v2 v2.2.1 // indirect github.com/stretchr/objx v0.1.1 // indirect - github.com/stretchr/testify v1.7.0 // indirect + github.com/stretchr/testify v1.7.1 // indirect github.com/subosito/gotenv v1.2.0 // indirect github.com/sylvia7788/contextcheck v1.0.4 // indirect github.com/tdakkota/asciicheck v0.1.1 // indirect diff --git a/tools/go.sum b/tools/go.sum index 4b8b6783630f..281ffc7f733e 100644 --- a/tools/go.sum +++ b/tools/go.sum @@ -154,8 +154,8 @@ github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kB github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84= github.com/bkielbasa/cyclop v1.2.0 h1:7Jmnh0yL2DjKfw28p86YTd/B4lRGcNuu12sKE35sM7A= github.com/bkielbasa/cyclop v1.2.0/go.mod h1:qOI0yy6A7dYC4Zgsa72Ppm9kONl0RoIlPbzot9mhmeI= -github.com/blizzy78/varnamelen v0.6.0 h1:TOIDk9qRIMspALZKX8x+5hQfAjuvAFogppnxtvuNmBo= -github.com/blizzy78/varnamelen v0.6.0/go.mod h1:zy2Eic4qWqjrxa60jG34cfL0VXcSwzUrIx68eJPb4Q8= +github.com/blizzy78/varnamelen v0.6.1 h1:kttPCLzXFa+0nt++Cw9fb7GrSSM4KkyIAoX/vXsbuqA= +github.com/blizzy78/varnamelen v0.6.1/go.mod h1:zy2Eic4qWqjrxa60jG34cfL0VXcSwzUrIx68eJPb4Q8= github.com/bmatcuk/doublestar v1.2.1/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE= github.com/bmatcuk/doublestar v1.3.4 h1:gPypJ5xD31uhX6Tf54sDPUOBXTqKH4c9aPY66CyQrS0= github.com/bmatcuk/doublestar v1.3.4/go.mod h1:wiQtGV+rzVYxB7WIlirSN++5HPtPlXEo9MEoZQC/PmE= @@ -383,8 +383,8 @@ github.com/golangci/go-misc v0.0.0-20180628070357-927a3d87b613 h1:9kfjN3AdxcbsZB github.com/golangci/go-misc v0.0.0-20180628070357-927a3d87b613/go.mod h1:SyvUF2NxV+sN8upjjeVYr5W7tyxaT1JVtvhKhOn2ii8= github.com/golangci/gofmt v0.0.0-20190930125516-244bba706f1a h1:iR3fYXUjHCR97qWS8ch1y9zPNsgXThGwjKPrYfqMPks= github.com/golangci/gofmt v0.0.0-20190930125516-244bba706f1a/go.mod h1:9qCChq59u/eW8im404Q2WWTrnBUQKjpNYKMbU4M7EFU= -github.com/golangci/golangci-lint v1.45.0 h1:T2oCVkYoeckBxcNS6DTYiSXN2QcTNuAWaHyLGfqzMlU= -github.com/golangci/golangci-lint v1.45.0/go.mod h1:Y6grRO3drH/7kGP88i9jSl9fGWwCrbA5u7i++jOXll4= +github.com/golangci/golangci-lint v1.45.2 h1:9I3PzkvscJkFAQpTQi5Ga0V4qWdJERajX1UZ7QqkW+I= +github.com/golangci/golangci-lint v1.45.2/go.mod h1:f20dpzMmUTRp+oYnX0OGjV1Au3Jm2JeI9yLqHq1/xsI= github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 h1:MfyDlzVjl1hoaPzPD4Gpb/QgoRfSBR0jdhwGyAWwMSA= github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0/go.mod h1:66R6K6P6VWk9I95jvqGxkqJxVWGFy9XlDwLwVz1RCFg= github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca h1:kNY3/svz5T29MYHubXix4aDDuE3RWHkPvopM/EDv/MA= @@ -547,8 +547,9 @@ github.com/hashicorp/go-uuid v1.0.2/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/b github.com/hashicorp/go-version v1.1.0/go.mod 
h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= -github.com/hashicorp/go-version v1.3.0 h1:McDWVJIU/y+u1BRV06dPaLfLCaT7fUTJLp5r04x7iNw= github.com/hashicorp/go-version v1.3.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= +github.com/hashicorp/go-version v1.4.0 h1:aAQzgqIrRKRa7w75CKpbBxYsmUoPjzVm1W59ca1L0J4= +github.com/hashicorp/go-version v1.4.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90= github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8= @@ -991,8 +992,9 @@ github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UV github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA= github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= -github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY= github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= +github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY= +github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s= github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw= github.com/sylvia7788/contextcheck v1.0.4 h1:MsiVqROAdr0efZc/fOCt0c235qm9XJqHtWwM+2h2B04= diff --git a/website/allowed-subcategories.txt b/website/allowed-subcategories.txt index cf5bc7821f08..09791ff1014e 100644 --- a/website/allowed-subcategories.txt +++ b/website/allowed-subcategories.txt @@ -109,6 +109,7 @@ Managed Workflows for Apache Airflow (MWAA) Neptune Network Firewall Network Manager +OpenSearch OpsWorks Organizations Outposts diff --git a/website/docs/d/eips.html.markdown b/website/docs/d/eips.html.markdown index e974cbf7e847..97b26f6fed56 100644 --- a/website/docs/d/eips.html.markdown +++ b/website/docs/d/eips.html.markdown @@ -47,4 +47,4 @@ More complex filters can be expressed using one or more `filter` sub-blocks, whi * `id` - AWS Region. * `allocation_ids` - A list of all the allocation IDs for address for use with EC2-VPC. -* `public_ips` - A list of all the Elastic IP addresses for use with EC2-Classic. +* `public_ips` - A list of all the Elastic IP addresses. diff --git a/website/docs/d/eks_addon_version.html.markdown b/website/docs/d/eks_addon_version.html.markdown new file mode 100644 index 000000000000..a81e217ef345 --- /dev/null +++ b/website/docs/d/eks_addon_version.html.markdown @@ -0,0 +1,54 @@ +--- +subcategory: "EKS" +layout: "aws" +page_title: "AWS: aws_eks_addon_version" +description: |- + Retrieve information about versions of an EKS add-on +--- + +# Data Source: aws_eks_addon_version + +Retrieve information about a specific EKS add-on version compatible with an EKS cluster version. 
+ +## Example Usage + +```terraform +data "aws_eks_addon_version" "default" { + addon_name = "vpc-cni" + kubernetes_version = aws_eks_cluster.example.version +} + +data "aws_eks_addon_version" "latest" { + addon_name = "vpc-cni" + kubernetes_version = aws_eks_cluster.example.version + most_recent = true +} + +resource "aws_eks_addon" "vpc_cni" { + cluster_name = aws_eks_cluster.example.name + addon_name = "vpc-cni" + addon_version = data.aws_eks_addon_version.latest.version +} + +output "default" { + value = data.aws_eks_addon_version.default.version +} + +output "latest" { + value = data.aws_eks_addon_version.latest.version +} +``` + +## Argument Reference + +* `addon_name` – (Required) Name of the EKS add-on. The name must match one of + the names returned by [list-addon](https://docs.aws.amazon.com/cli/latest/reference/eks/list-addons.html). +* `kubernetes_version` – (Required) Version of the EKS Cluster. Must be between 1-100 characters in length. Must begin with an alphanumeric character, and must only contain alphanumeric characters, dashes and underscores (`^[0-9A-Za-z][A-Za-z0-9\-_]+$`). +* `most_recent` - (Optional) Determines if the most recent or default version of the addon should be returned. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The name of the add-on +* `version` - The version of the EKS add-on. diff --git a/website/docs/d/elasticache_cluster.html.markdown b/website/docs/d/elasticache_cluster.html.markdown index f1b7a42b0117..e96bf7100b71 100644 --- a/website/docs/d/elasticache_cluster.html.markdown +++ b/website/docs/d/elasticache_cluster.html.markdown @@ -38,6 +38,7 @@ In addition to all arguments above, the following attributes are exported: * `security_group_ids` – List VPC security groups associated with the cache cluster. * `parameter_group_name` – Name of the parameter group associated with this cache cluster. * `replication_group_id` - The replication group to which this cache cluster belongs. +* `log_delivery_configuration` - Redis [SLOWLOG](https://redis.io/commands/slowlog) or Redis [Engine Log](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Log_Delivery.html#Log_contents-engine-log) delivery settings. * `maintenance_window` – Specifies the weekly time range for when maintenance on the cache cluster is performed. * `snapshot_window` - The daily time range (in UTC) during which ElastiCache will diff --git a/website/docs/d/elasticache_replication_group.html.markdown b/website/docs/d/elasticache_replication_group.html.markdown index b163ddee8e34..f1fa59c88b80 100644 --- a/website/docs/d/elasticache_replication_group.html.markdown +++ b/website/docs/d/elasticache_replication_group.html.markdown @@ -40,6 +40,7 @@ In addition to all arguments above, the following attributes are exported: * `multi_az_enabled` - Specifies whether Multi-AZ Support is enabled for the replication group. * `replicas_per_node_group` - Number of replica nodes in each node group. * `replication_group_description` - (**Deprecated** use `description` instead) The description of the replication group. +* `log_delivery_configuration` - Redis [SLOWLOG](https://redis.io/commands/slowlog) or Redis [Engine Log](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Log_Delivery.html#Log_contents-engine-log) delivery settings. * `snapshot_window` - The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard). 
* `snapshot_retention_limit` - The number of days for which ElastiCache retains automatic cache cluster snapshots before deleting them.
* `port` – The port number on which the configuration endpoint will accept connections.
diff --git a/website/docs/d/elasticsearch_domain.html.markdown b/website/docs/d/elasticsearch_domain.html.markdown
index afa8b68fd66b..a0da4a6f88a3 100644
--- a/website/docs/d/elasticsearch_domain.html.markdown
+++ b/website/docs/d/elasticsearch_domain.html.markdown
@@ -45,6 +45,8 @@ The following attributes are exported:
* `cron_expression_for_recurrence` - A cron expression specifying the recurrence pattern for an Auto-Tune maintenance schedule.
* `rollback_on_disable` - Whether the domain is set to roll back to default Auto-Tune settings when disabling Auto-Tune.
* `cluster_config` - Cluster configuration of the domain.
+ * `cold_storage_options` - Configuration block containing cold storage configuration.
+ * `enabled` - Indicates cold storage is enabled.
* `instance_type` - Instance type of data nodes in the cluster.
* `instance_count` - Number of instances in the cluster.
* `dedicated_master_enabled` - Indicates whether dedicated master nodes are enabled for the cluster.
diff --git a/website/docs/d/imagebuilder_distribution_configuration.html.markdown b/website/docs/d/imagebuilder_distribution_configuration.html.markdown
index ccbd7cfc25bc..7279e56abd7a 100644
--- a/website/docs/d/imagebuilder_distribution_configuration.html.markdown
+++ b/website/docs/d/imagebuilder_distribution_configuration.html.markdown
@@ -49,6 +49,7 @@ In addition to all arguments above, the following attributes are exported:
* `launch_template_configuration` - Nested list of launch template configurations.
* `default` - Indicates whether the specified Amazon EC2 launch template is set as the default launch template.
* `launch_template_id` - ID of the Amazon EC2 launch template.
+ * `account_id` - The account ID that this configuration applies to.
* `license_configuration_arns` - Set of Amazon Resource Names (ARNs) of License Manager License Configurations.
* `region` - AWS Region of distribution.
* `name` - Name of the distribution configuration.
diff --git a/website/docs/d/lambda_function.html.markdown b/website/docs/d/lambda_function.html.markdown
index 6c9a8d3f5554..1cab22b9b5c8 100644
--- a/website/docs/d/lambda_function.html.markdown
+++ b/website/docs/d/lambda_function.html.markdown
@@ -39,6 +39,7 @@ In addition to all arguments above, the following attributes are exported:
* `dead_letter_config` - Configure the function's *dead letter queue*.
* `description` - Description of what your Lambda Function does.
* `environment` - The Lambda environment's configuration settings.
+* `ephemeral_storage` - The amount of ephemeral storage (`/tmp`) allocated for the Lambda Function.
* `file_system_config` - The connection settings for an Amazon EFS file system.
* `handler` - The function entrypoint in your code.
* `image_uri` - The URI of the container image.
diff --git a/website/docs/d/lambda_function_url.html.markdown b/website/docs/d/lambda_function_url.html.markdown
new file mode 100644
index 000000000000..dda86793933a
--- /dev/null
+++ b/website/docs/d/lambda_function_url.html.markdown
@@ -0,0 +1,42 @@
+---
+subcategory: "Lambda"
+layout: "aws"
+page_title: "AWS: aws_lambda_function_url"
+description: |-
+  Provides a Lambda function URL data source.
+---
+
+# Data Source: aws_lambda_function_url
+
+Provides information about a Lambda function URL.
+
+## Example Usage
+
+```terraform
+variable "function_name" {
+  type = string
+}
+
+data "aws_lambda_function_url" "existing" {
+  function_name = var.function_name
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `function_name` - (Required) The name (or ARN) of the Lambda function.
+* `qualifier` - (Optional) The alias name or `"$LATEST"`.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `authorization_type` - The type of authentication that the function URL uses.
+* `cors` - The [cross-origin resource sharing (CORS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) settings for the function URL. See the [`aws_lambda_function_url` resource](/docs/providers/aws/r/lambda_function_url.html) documentation for more details.
+* `creation_time` - When the function URL was created, in [ISO-8601 format](https://www.w3.org/TR/NOTE-datetime).
+* `function_arn` - The Amazon Resource Name (ARN) of the function.
+* `function_url` - The HTTP URL endpoint for the function in the format `https://<url_id>.lambda-url.<region>.on.aws`.
+* `last_modified_time` - When the function URL configuration was last updated, in [ISO-8601 format](https://www.w3.org/TR/NOTE-datetime).
+* `url_id` - A generated ID for the endpoint.
diff --git a/website/docs/d/memorydb_acl.html.markdown b/website/docs/d/memorydb_acl.html.markdown
new file mode 100644
index 000000000000..22e271ff52b9
--- /dev/null
+++ b/website/docs/d/memorydb_acl.html.markdown
@@ -0,0 +1,35 @@
+---
+subcategory: "MemoryDB"
+layout: "aws"
+page_title: "AWS: aws_memorydb_acl"
+description: |-
+  Provides information about a MemoryDB ACL.
+---
+
+# Data Source: aws_memorydb_acl
+
+Provides information about a MemoryDB ACL.
+
+## Example Usage
+
+```terraform
+data "aws_memorydb_acl" "example" {
+  name = "my-acl"
+}
+```
+
+## Argument Reference
+
+The following arguments are required:
+
+* `name` - (Required) Name of the ACL.
+
+## Attributes Reference
+
+In addition, the following attributes are exported:
+
+* `id` - Name of the ACL.
+* `arn` - ARN of the ACL.
+* `minimum_engine_version` - The minimum engine version supported by the ACL.
+* `tags` - A map of tags assigned to the ACL.
+* `user_names` - Set of MemoryDB user names included in this ACL.
diff --git a/website/docs/d/memorydb_cluster.html.markdown b/website/docs/d/memorydb_cluster.html.markdown
new file mode 100644
index 000000000000..484aa3cdbc5f
--- /dev/null
+++ b/website/docs/d/memorydb_cluster.html.markdown
@@ -0,0 +1,66 @@
+---
+subcategory: "MemoryDB"
+layout: "aws"
+page_title: "AWS: aws_memorydb_cluster"
+description: |-
+  Provides information about a MemoryDB Cluster.
+---
+
+# Data Source: aws_memorydb_cluster
+
+Provides information about a MemoryDB Cluster.
+
+## Example Usage
+
+```terraform
+data "aws_memorydb_cluster" "example" {
+  name = "my-cluster"
+}
+```
+
+## Argument Reference
+
+The following arguments are required:
+
+* `name` - (Required) Name of the cluster.
+
+## Attributes Reference
+
+In addition, the following attributes are exported:
+
+* `id` - Same as `name`.
+* `arn` - The ARN of the cluster.
+* `acl_name` - The name of the Access Control List associated with the cluster.
+* `auto_minor_version_upgrade` - True when the cluster allows automatic minor version upgrades.
+* `cluster_endpoint`
+    * `address` - DNS hostname of the cluster configuration endpoint.
+    * `port` - Port number that the cluster configuration endpoint is listening on.
+* `description` - Description for the cluster.
+* `engine_patch_version` - Patch version number of the Redis engine used by the cluster.
+* `engine_version` - Version number of the Redis engine used by the cluster.
+* `final_snapshot_name` - Name of the final cluster snapshot to be created when this resource is deleted. If omitted, no final snapshot will be made.
+* `kms_key_arn` - ARN of the KMS key used to encrypt the cluster at rest.
+* `maintenance_window` - The weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format `ddd:hh24:mi-ddd:hh24:mi` (24H Clock UTC). Example: `sun:23:00-mon:01:30`.
+* `node_type` - The compute and memory capacity of the nodes in the cluster.
+* `num_replicas_per_shard` - The number of replicas to apply to each shard.
+* `num_shards` - The number of shards in the cluster.
+* `parameter_group_name` - The name of the parameter group associated with the cluster.
+* `port` - The port number on which each of the nodes accepts connections.
+* `security_group_ids` - Set of VPC Security Group IDs associated with this cluster.
+* `shards` - Set of shards in this cluster.
+    * `name` - Name of this shard.
+    * `num_nodes` - Number of individual nodes in this shard.
+    * `slots` - Keyspace for this shard. Example: `0-16383`.
+    * `nodes` - Set of nodes in this shard.
+        * `availability_zone` - The Availability Zone in which the node resides.
+        * `create_time` - The date and time when the node was created. Example: `2022-01-01T21:00:00Z`.
+        * `name` - Name of this node.
+        * `endpoint`
+            * `address` - DNS hostname of the node.
+            * `port` - Port number that this node is listening on.
+* `snapshot_retention_limit` - The number of days for which MemoryDB retains automatic snapshots before deleting them. When set to `0`, automatic backups are disabled.
+* `snapshot_window` - The daily time range (in UTC) during which MemoryDB begins taking a daily snapshot of your shard. Example: `05:00-09:00`.
+* `sns_topic_arn` - ARN of the SNS topic to which cluster notifications are sent.
+* `subnet_group_name` - The name of the subnet group used for the cluster.
+* `tls_enabled` - When true, in-transit encryption is enabled for the cluster.
+* `tags` - A map of tags assigned to the cluster.
diff --git a/website/docs/d/memorydb_snapshot.html.markdown b/website/docs/d/memorydb_snapshot.html.markdown
new file mode 100644
index 000000000000..c9a775dbafb2
--- /dev/null
+++ b/website/docs/d/memorydb_snapshot.html.markdown
@@ -0,0 +1,50 @@
+---
+subcategory: "MemoryDB"
+layout: "aws"
+page_title: "AWS: aws_memorydb_snapshot"
+description: |-
+  Provides information about a MemoryDB Snapshot.
+---
+
+# Data Source: aws_memorydb_snapshot
+
+Provides information about a MemoryDB Snapshot.
+
+## Example Usage
+
+```terraform
+data "aws_memorydb_snapshot" "example" {
+  name = "my-snapshot"
+}
+```
+
+## Argument Reference
+
+The following arguments are required:
+
+* `name` - (Required) Name of the snapshot.
+
+## Attributes Reference
+
+In addition, the following attributes are exported:
+
+* `id` - The name of the snapshot.
+* `arn` - The ARN of the snapshot.
+* `cluster_configuration` - The configuration of the cluster from which the snapshot was taken.
+    * `description` - Description for the cluster.
+    * `engine_version` - Version number of the Redis engine used by the cluster.
+    * `maintenance_window` - The weekly time range during which maintenance on the cluster is performed.
+    * `name` - Name of the cluster.
+    * `node_type` - Compute and memory capacity of the nodes in the cluster.
+    * `num_shards` - Number of shards in the cluster.
+    * `parameter_group_name` - Name of the parameter group associated with the cluster.
+    * `port` - Port number on which the cluster accepts connections.
+    * `snapshot_retention_limit` - Number of days for which MemoryDB retains automatic snapshots before deleting them.
+    * `snapshot_window` - The daily time range (in UTC) during which MemoryDB begins taking a daily snapshot of the shard.
+    * `subnet_group_name` - Name of the subnet group used by the cluster.
+    * `topic_arn` - ARN of the SNS topic to which cluster notifications are sent.
+    * `vpc_id` - The VPC in which the cluster exists.
+* `cluster_name` - Name of the MemoryDB cluster that this snapshot was taken from.
+* `kms_key_arn` - ARN of the KMS key used to encrypt the snapshot at rest.
+* `source` - Indicates whether the snapshot is from an automatic backup (`automated`) or was created manually (`manual`).
+* `tags` - A map of tags assigned to the snapshot.
diff --git a/website/docs/d/memorydb_user.html.markdown b/website/docs/d/memorydb_user.html.markdown
new file mode 100644
index 000000000000..74afcdd1797b
--- /dev/null
+++ b/website/docs/d/memorydb_user.html.markdown
@@ -0,0 +1,38 @@
+---
+subcategory: "MemoryDB"
+layout: "aws"
+page_title: "AWS: aws_memorydb_user"
+description: |-
+  Provides information about a MemoryDB User.
+---
+
+# Data Source: aws_memorydb_user
+
+Provides information about a MemoryDB User.
+
+## Example Usage
+
+```terraform
+data "aws_memorydb_user" "example" {
+  user_name = "my-user"
+}
+```
+
+## Argument Reference
+
+The following arguments are required:
+
+* `user_name` - (Required) Name of the user.
+
+## Attributes Reference
+
+In addition, the following attributes are exported:
+
+* `id` - Name of the user.
+* `access_string` - The access permissions string used for this user.
+* `arn` - ARN of the user.
+* `authentication_mode` - Denotes the user's authentication properties.
+    * `password_count` - The number of passwords belonging to the user.
+    * `type` - Indicates whether the user requires a password to authenticate.
+* `minimum_engine_version` - The minimum engine version supported for the user.
+* `tags` - A map of tags assigned to the user.
diff --git a/website/docs/d/mskconnect_connector.html.markdown b/website/docs/d/mskconnect_connector.html.markdown
new file mode 100644
index 000000000000..5b82acbf97d2
--- /dev/null
+++ b/website/docs/d/mskconnect_connector.html.markdown
@@ -0,0 +1,33 @@
+---
+subcategory: "Kafka Connect (MSK Connect)"
+layout: "aws"
+page_title: "AWS: aws_mskconnect_connector"
+description: |-
+  Get information on an Amazon MSK Connect Connector.
+---
+
+# Data Source: aws_mskconnect_connector
+
+Get information on an Amazon MSK Connect Connector.
+
+## Example Usage
+
+```terraform
+data "aws_mskconnect_connector" "example" {
+  name = "example-mskconnector"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) Name of the connector.
+
+## Attribute Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `arn` - The Amazon Resource Name (ARN) of the connector.
+* `description` - A summary description of the connector.
+* `version` - The current version of the connector.
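+
+As a minimal sketch of consuming these attributes (the output name is illustrative), the connector's current version can be surfaced from the example above:
+
+```terraform
+# Expose the version of the connector looked up by the data source above.
+output "connector_version" {
+  value = data.aws_mskconnect_connector.example.version
+}
+```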
diff --git a/website/docs/d/opensearch_domain.html.markdown b/website/docs/d/opensearch_domain.html.markdown
new file mode 100644
index 000000000000..0dd5e78282b1
--- /dev/null
+++ b/website/docs/d/opensearch_domain.html.markdown
@@ -0,0 +1,94 @@
+---
+subcategory: "OpenSearch"
+layout: "aws"
+page_title: "AWS: aws_opensearch_domain"
+description: |-
+  Get information on an OpenSearch Domain resource.
+---
+
+# Data Source: aws_opensearch_domain
+
+Use this data source to get information about an OpenSearch Domain.
+
+## Example Usage
+
+```terraform
+data "aws_opensearch_domain" "my_domain" {
+  domain_name = "my-domain-name"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `domain_name` – (Required) Name of the domain.
+
+
+## Attributes Reference
+
+The following attributes are exported:
+
+* `access_policies` – Policy document attached to the domain.
+* `advanced_options` - Key-value string pairs to specify advanced configuration options.
+* `advanced_security_options` - Status of the OpenSearch domain's advanced security options. The block consists of the following attributes:
+    * `enabled` - Whether advanced security is enabled.
+    * `internal_user_database_enabled` - Whether the internal user database is enabled.
+* `arn` – ARN of the domain.
+* `auto_tune_options` - Configuration of the Auto-Tune options of the domain.
+    * `desired_state` - Auto-Tune desired state for the domain.
+    * `maintenance_schedule` - A list of the nested configurations for the Auto-Tune maintenance windows of the domain.
+        * `start_at` - Date and time at which the Auto-Tune maintenance schedule starts in [RFC3339 format](https://tools.ietf.org/html/rfc3339#section-5.8).
+        * `duration` - Configuration block for the duration of the Auto-Tune maintenance window.
+            * `value` - An integer specifying the value of the duration of an Auto-Tune maintenance window.
+            * `unit` - Unit of time specifying the duration of an Auto-Tune maintenance window.
+        * `cron_expression_for_recurrence` - A cron expression specifying the recurrence pattern for an Auto-Tune maintenance schedule.
+    * `rollback_on_disable` - Whether the domain is set to roll back to default Auto-Tune settings when disabling Auto-Tune.
+* `cluster_config` - Cluster configuration of the domain.
+    * `cold_storage_options` - Configuration block containing cold storage configuration.
+        * `enabled` - Indicates cold storage is enabled.
+    * `instance_type` - Instance type of data nodes in the cluster.
+    * `instance_count` - Number of instances in the cluster.
+    * `dedicated_master_enabled` - Indicates whether dedicated master nodes are enabled for the cluster.
+    * `dedicated_master_type` - Instance type of the dedicated master nodes in the cluster.
+    * `dedicated_master_count` - Number of dedicated master nodes in the cluster.
+    * `zone_awareness_enabled` - Indicates whether zone awareness is enabled.
+    * `zone_awareness_config` - Configuration block containing zone awareness settings.
+        * `availability_zone_count` - Number of availability zones used.
+    * `warm_enabled` - Indicates warm storage is enabled.
+    * `warm_count` - Number of warm nodes in the cluster.
+    * `warm_type` - Instance type for the OpenSearch cluster's warm nodes.
+* `cognito_options` - Domain Amazon Cognito Authentication options for Kibana.
+    * `enabled` - Whether Amazon Cognito Authentication is enabled.
+    * `user_pool_id` - Cognito User pool used by the domain.
+    * `identity_pool_id` - Cognito Identity pool used by the domain.
+ * `role_arn` - IAM Role with the AmazonOpenSearchServiceCognitoAccess policy attached. +* `created` – Status of the creation of the domain. +* `deleted` – Status of the deletion of the domain. +* `domain_id` – Unique identifier for the domain. +* `ebs_options` - EBS Options for the instances in the domain. + * `ebs_enabled` - Whether EBS volumes are attached to data nodes in the domain. + * `volume_type` - Type of EBS volumes attached to data nodes. + * `volume_size` - Size of EBS volumes attached to data nodes (in GB). + * `iops` - Baseline input/output (I/O) performance of EBS volumes attached to data nodes. +* `engine_version` – OpenSearch version for the domain. +* `encryption_at_rest` - Domain encryption at rest related options. + * `enabled` - Whether encryption at rest is enabled in the domain. + * `kms_key_id` - KMS key id used to encrypt data at rest. +* `endpoint` – Domain-specific endpoint used to submit index, search, and data upload requests. +* `kibana_endpoint` - Domain-specific endpoint used to access the Kibana application. +* `log_publishing_options` - Domain log publishing related options. + * `log_type` - Type of OpenSearch log being published. + * `cloudwatch_log_group_arn` - CloudWatch Log Group where the logs are published. + * `enabled` - Whether log publishing is enabled. +* `node_to_node_encryption` - Domain in transit encryption related options. + * `enabled` - Whether node to node encryption is enabled. +* `processing` – Status of a configuration change in the domain. +* `snapshot_options` – Domain snapshot related options. + * `automated_snapshot_start_hour` - Hour during which the service takes an automated daily snapshot of the indices in the domain. +* `tags` - Tags assigned to the domain. +* `vpc_options` - VPC Options for private OpenSearch domains. + * `availability_zones` - Availability zones used by the domain. + * `security_group_ids` - Security groups used by the domain. + * `subnet_ids` - Subnets used by the domain. + * `vpc_id` - VPC used by the domain. diff --git a/website/docs/d/route.html.markdown b/website/docs/d/route.html.markdown index 6f71b5821a52..ed91dc54d78b 100644 --- a/website/docs/d/route.html.markdown +++ b/website/docs/d/route.html.markdown @@ -44,6 +44,7 @@ The following arguments are required: The following arguments are optional: * `carrier_gateway_id` - (Optional) EC2 Carrier Gateway ID of the Route belonging to the Route Table. +* `core_network_arn` - (Optional) Core network ARN of the Route belonging to the Route Table. * `destination_cidr_block` - (Optional) CIDR block of the Route belonging to the Route Table. * `destination_ipv6_cidr_block` - (Optional) IPv6 CIDR block of the Route belonging to the Route Table. * `destination_prefix_list_id` - (Optional) The ID of a [managed prefix list](ec2_managed_prefix_list.html) destination of the Route belonging to the Route Table. diff --git a/website/docs/d/route_table.html.markdown b/website/docs/d/route_table.html.markdown index 3630c4b161fe..bff6990bfee5 100644 --- a/website/docs/d/route_table.html.markdown +++ b/website/docs/d/route_table.html.markdown @@ -74,6 +74,7 @@ For destinations: For targets: * `carrier_gateway_id` - ID of the Carrier Gateway. +* `core_network_arn` - ARN of the core network. * `egress_only_gateway_id` - ID of the Egress Only Internet Gateway. * `gateway_id` - Internet Gateway ID. * `local_gateway_id` - Local Gateway ID. 
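To illustrate the new `core_network_arn` argument on the `aws_route` data source, a route targeting a core network can be selected as follows; in this minimal sketch, both input variables are assumptions supplied by the caller:

```terraform
# Both values are assumed to be provided by the caller of this configuration.
variable "route_table_id" {
  type = string
}

variable "core_network_arn" {
  type = string
}

# Look up the route in the given route table whose target is the core network.
data "aws_route" "core_network" {
  route_table_id   = var.route_table_id
  core_network_arn = var.core_network_arn
}
```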
diff --git a/website/docs/d/ssm_maintenance_windows.html.markdown b/website/docs/d/ssm_maintenance_windows.html.markdown
new file mode 100644
index 000000000000..c873fe4aafe9
--- /dev/null
+++ b/website/docs/d/ssm_maintenance_windows.html.markdown
@@ -0,0 +1,37 @@
+---
+subcategory: "SSM"
+layout: "aws"
+page_title: "AWS: aws_ssm_maintenance_windows"
+description: |-
+  Get information on SSM maintenance windows.
+---
+
+# Data Source: aws_ssm_maintenance_windows
+
+Use this data source to get the window IDs of SSM maintenance windows.
+
+## Example Usage
+
+```terraform
+data "aws_ssm_maintenance_windows" "example" {
+  filter {
+    name   = "Enabled"
+    values = ["true"]
+  }
+}
+```
+
+## Argument Reference
+
+* `filter` - (Optional) Configuration block(s) for filtering. Detailed below.
+
+### filter Configuration Block
+
+The following arguments are supported by the `filter` configuration block:
+
+* `name` - (Required) The name of the filter field. Valid values can be found in the [SSM DescribeMaintenanceWindows API Reference](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribeMaintenanceWindows.html#API_DescribeMaintenanceWindows_RequestSyntax).
+* `values` - (Required) Set of values that are accepted for the given filter field. Results will be selected if any given value matches.
+
+## Attributes Reference
+
+* `ids` - List of window IDs of the matched SSM maintenance windows.
diff --git a/website/docs/guides/version-4-upgrade.html.md b/website/docs/guides/version-4-upgrade.html.md
index ec878e7f0b43..a202b843bce1 100644
--- a/website/docs/guides/version-4-upgrade.html.md
+++ b/website/docs/guides/version-4-upgrade.html.md
@@ -12,7 +12,8 @@ Version 4.0.0 of the AWS provider for Terraform is a major release and includes
We previously marked most of the changes we outline in this guide as deprecated in the Terraform plan/apply output throughout previous provider releases. You can find these changes, including deprecation notices, in the [Terraform AWS Provider CHANGELOG](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md).
-~> **NOTE:** Version 4.0.0 of the AWS Provider introduces significant changes to the `aws_s3_bucket` resource. See [S3 Bucket Refactor](#s3-bucket-refactor) for more details.
+~> **NOTE:** Versions v4.0.0 through v4.8.0 of the AWS Provider introduce significant breaking changes to the `aws_s3_bucket` resource. See [S3 Bucket Refactor](#s3-bucket-refactor) for more details.
+We recommend upgrading to v4.9.0 or later of the AWS Provider instead, where only non-breaking changes and deprecation notices are introduced to the `aws_s3_bucket` resource. See [Changes to S3 Bucket Drift Detection](#changes-to-s3-bucket-drift-detection) for additional considerations when upgrading to v4.9.0 or later.
~> **NOTE:** Version 4.0.0 of the AWS Provider introduces changes to the precedence of some authentication and configuration parameters. These changes bring the provider in line with the AWS CLI and SDKs.
@@ -29,7 +30,8 @@ Upgrade topics: - [Provider Version Configuration](#provider-version-configuration) - [Changes to Authentication](#changes-to-authentication) - [New Provider Arguments](#new-provider-arguments) -- [S3 Bucket Refactor](#s3-bucket-refactor) +- [Changes to S3 Bucket Drift Detection](#changes-to-s3-bucket-drift-detection) (**Applicable to v4.9.0 and later of the AWS Provider**) +- [S3 Bucket Refactor](#s3-bucket-refactor) (**Only applicable to v4.0.0 through v4.8.0 of the AWS Provider**) - [`acceleration_status` Argument](#acceleration_status-argument) - [`acl` Argument](#acl-argument) - [`cors_rule` Argument](#cors_rule-argument) @@ -195,8 +197,1137 @@ provider "aws" { Note that the provider can only resolve FIPS endpoints where AWS provides FIPS support. Support depends on the service and may include `us-east-1`, `us-east-2`, `us-west-1`, `us-west-2`, `us-gov-east-1`, `us-gov-west-1`, and `ca-central-1`. For more information, see [Federal Information Processing Standard (FIPS) 140-2](https://aws.amazon.com/compliance/fips/). +## Changes to S3 Bucket Drift Detection + +~> **NOTE:** This only applies to v4.9.0 and later of the AWS Provider. + +~> **NOTE:** If you are migrating from v3.75.x of the AWS Provider and you have already adopted the standalone S3 bucket resources (e.g. `aws_s3_bucket_lifecycle_configuration`), +a [`lifecycle` configuration block to ignore changes](https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes) to the internal parameters of the source `aws_s3_bucket` resources will no longer be necessary and can be removed upon upgrade. + +~> **NOTE:** In the next major version, v5.0, the parameters listed below will be removed entirely from the `aws_s3_bucket` resource. +For this reason, a deprecation notice is printed in the Terraform CLI for each of the parameters when used in a configuration. + +To remediate the breaking changes introduced to the `aws_s3_bucket` resource in v4.0.0 of the AWS Provider, +v4.9.0 and later retain the same configuration parameters of the `aws_s3_bucket` resource as in v3.x and functionality of the `aws_s3_bucket` resource only differs from v3.x +in that Terraform will only perform drift detection for each of the following parameters if a configuration value is provided: + +* `acceleration_status` +* `acl` +* `cors_rule` +* `grant` +* `lifecycle_rule` +* `logging` +* `object_lock_configuration` +* `policy` +* `replication_configuration` +* `request_payer` +* `server_side_encryption_configuration` +* `versioning` +* `website` + +Thus, if one of these parameters was once configured and then is entirely removed from an `aws_s3_bucket` resource configuration, +Terraform will not pick up on these changes on a subsequent `terraform plan` or `terraform apply`. + +For example, given the following configuration with a single `cors_rule`: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST"] + allowed_origins = ["https://s3-website-test.hashicorp.com"] + expose_headers = ["ETag"] + max_age_seconds = 3000 + } +} +``` + +When updated to the following configuration without a `cors_rule`: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... +} +``` + +Terraform CLI with v4.9.0 of the AWS Provider will report back: + +```shell +aws_s3_bucket.example: Refreshing state... [id=yournamehere] +... +No changes. 
Your infrastructure matches the configuration. +``` + +With that said, to manage changes to these parameters in the `aws_s3_bucket` resource, practitioners should configure each parameter's respective standalone resource +and perform updates directly on those new configurations. The parameters are mapped to the standalone resources as follows: + +| `aws_s3_bucket` Parameter | Standalone Resource | +|----------------------------------------|------------------------------------------------------| +| `acceleration_status` | `aws_s3_bucket_accelerate_configuration` | +| `acl` | `aws_s3_bucket_acl` | +| `cors_rule` | `aws_s3_bucket_cors_configuration` | +| `grant` | `aws_s3_bucket_acl` | +| `lifecycle_rule` | `aws_s3_bucket_lifecycle_configuration` | +| `logging` | `aws_s3_bucket_logging` | +| `object_lock_configuration` | `aws_s3_bucket_object_lock_configuration` | +| `policy` | `aws_s3_bucket_policy` | +| `replication_configuration` | `aws_s3_bucket_replication_configuration` | +| `request_payer` | `aws_s3_bucket_request_payment_configuration` | +| `server_side_encryption_configuration` | `aws_s3_bucket_server_side_encryption_configuration` | +| `versioning` | `aws_s3_bucket_versioning` | +| `website` | `aws_s3_bucket_website_configuration` | + +Going back to the earlier example, given the following configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST"] + allowed_origins = ["https://s3-website-test.hashicorp.com"] + expose_headers = ["ETag"] + max_age_seconds = 3000 + } +} +``` + +Practitioners can upgrade to v4.9.0 and then introduce the standalone `aws_s3_bucket_cors_configuration` resource, e.g. + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + # ... other configuration ... +} + +resource "aws_s3_bucket_cors_configuration" "example" { + bucket = aws_s3_bucket.example.id + + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST"] + allowed_origins = ["https://s3-website-test.hashicorp.com"] + expose_headers = ["ETag"] + max_age_seconds = 3000 + } +} +``` + +Depending on the tools available to you, the above configuration can either be directly applied with Terraform or the standalone resource +can be imported into Terraform state. Please refer to each standalone resource's _Import_ documentation for the proper syntax. + +Once the standalone resources are managed by Terraform, updates and removal can be performed as needed. + +The following sections depict standalone resource adoption per individual parameter. Standalone resource adoption is not required to upgrade but is recommended to ensure drift is detected by Terraform. +The examples below are by no means exhaustive. The aim is to provide important concepts when migrating to a standalone resource whose parameters may not entirely align with the corresponding parameter in the `aws_s3_bucket` resource. + +### Migrating to `aws_s3_bucket_accelerate_configuration` + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + acceleration_status = "Enabled" +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... 
+} + +resource "aws_s3_bucket_accelerate_configuration" "example" { + bucket = aws_s3_bucket.example.id + status = "Enabled" +} +``` + +### Migrating to `aws_s3_bucket_acl` + +#### With `acl` + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + acl = "private" + + # ... other configuration ... +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" +} + +resource "aws_s3_bucket_acl" "example" { + bucket = aws_s3_bucket.example.id + acl = "private" +} +``` + +#### With `grant` + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + grant { + id = data.aws_canonical_user_id.current_user.id + type = "CanonicalUser" + permissions = ["FULL_CONTROL"] + } + + grant { + type = "Group" + permissions = ["READ_ACP", "WRITE"] + uri = "http://acs.amazonaws.com/groups/s3/LogDelivery" + } +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... +} + +resource "aws_s3_bucket_acl" "example" { + bucket = aws_s3_bucket.example.id + + access_control_policy { + grant { + grantee { + id = data.aws_canonical_user_id.current_user.id + type = "CanonicalUser" + } + permission = "FULL_CONTROL" + } + + grant { + grantee { + type = "Group" + uri = "http://acs.amazonaws.com/groups/s3/LogDelivery" + } + permission = "READ_ACP" + } + + grant { + grantee { + type = "Group" + uri = "http://acs.amazonaws.com/groups/s3/LogDelivery" + } + permission = "WRITE" + } + + owner { + id = data.aws_canonical_user_id.current_user.id + } + } +} +``` + +### Migrating to `aws_s3_bucket_cors_configuration` + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST"] + allowed_origins = ["https://s3-website-test.hashicorp.com"] + expose_headers = ["ETag"] + max_age_seconds = 3000 + } +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... +} + +resource "aws_s3_bucket_cors_configuration" "example" { + bucket = aws_s3_bucket.example.id + + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST"] + allowed_origins = ["https://s3-website-test.hashicorp.com"] + expose_headers = ["ETag"] + max_age_seconds = 3000 + } +} +``` + +### Migrating to `aws_s3_bucket_lifecycle_configuration` + +#### For Lifecycle Rules with no `prefix` previously configured + +~> **Note:** When configuring the `rule.filter` configuration block in the new `aws_s3_bucket_lifecycle_configuration` resource, use the AWS CLI s3api [get-bucket-lifecycle-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-lifecycle-configuration.html) +to get the source bucket's lifecycle configuration and determine if the `Filter` is configured as `"Filter" : {}` or `"Filter" : { "Prefix": "" }`. +If AWS returns the former, configure `rule.filter` as `filter {}`. 
Otherwise, neither a `rule.filter` nor `rule.prefix` parameter should be configured as shown here: + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + lifecycle_rule { + id = "Keep previous version 30 days, then in Glacier another 60" + enabled = true + + noncurrent_version_transition { + days = 30 + storage_class = "GLACIER" + } + + noncurrent_version_expiration { + days = 90 + } + } + + lifecycle_rule { + id = "Delete old incomplete multi-part uploads" + enabled = true + abort_incomplete_multipart_upload_days = 7 + } +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... +} + +resource "aws_s3_bucket_lifecycle_configuration" "example" { + bucket = aws_s3_bucket.example.id + + rule { + id = "Keep previous version 30 days, then in Glacier another 60" + status = "Enabled" + + noncurrent_version_transition { + noncurrent_days = 30 + storage_class = "GLACIER" + } + + noncurrent_version_expiration { + noncurrent_days = 90 + } + } + + rule { + id = "Delete old incomplete multi-part uploads" + status = "Enabled" + + abort_incomplete_multipart_upload { + days_after_initiation = 7 + } + } +} +``` + +#### For Lifecycle Rules with `prefix` previously configured as an empty string + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + lifecycle_rule { + id = "log-expiration" + enabled = true + prefix = "" + + transition { + days = 30 + storage_class = "STANDARD_IA" + } + + transition { + days = 180 + storage_class = "GLACIER" + } + } +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... +} + +resource "aws_s3_bucket_lifecycle_configuration" "example" { + bucket = aws_s3_bucket.example.id + + rule { + id = "log-expiration" + status = "Enabled" + + transition { + days = 30 + storage_class = "STANDARD_IA" + } + + transition { + days = 180 + storage_class = "GLACIER" + } + } +} +``` + +#### For Lifecycle Rules with `prefix` + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + lifecycle_rule { + id = "log-expiration" + enabled = true + prefix = "foobar" + + transition { + days = 30 + storage_class = "STANDARD_IA" + } + + transition { + days = 180 + storage_class = "GLACIER" + } + } +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... +} + +resource "aws_s3_bucket_lifecycle_configuration" "example" { + bucket = aws_s3_bucket.example.id + + rule { + id = "log-expiration" + status = "Enabled" + + filter { + prefix = "foobar" + } + + transition { + days = 30 + storage_class = "STANDARD_IA" + } + + transition { + days = 180 + storage_class = "GLACIER" + } + } +} +``` + +#### For Lifecycle Rules with `prefix` and `tags` + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... 
+ lifecycle_rule { + id = "log" + enabled = true + prefix = "log/" + + tags = { + rule = "log" + autoclean = "true" + } + + transition { + days = 30 + storage_class = "STANDARD_IA" + } + + transition { + days = 60 + storage_class = "GLACIER" + } + + expiration { + days = 90 + } + } + + lifecycle_rule { + id = "tmp" + prefix = "tmp/" + enabled = true + + expiration { + date = "2022-12-31" + } + } +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... +} + +resource "aws_s3_bucket_lifecycle_configuration" "example" { + bucket = aws_s3_bucket.example.id + + rule { + id = "log" + status = "Enabled" + + filter { + and { + prefix = "log/" + + tags = { + rule = "log" + autoclean = "true" + } + } + } + + transition { + days = 30 + storage_class = "STANDARD_IA" + } + + transition { + days = 60 + storage_class = "GLACIER" + } + + expiration { + days = 90 + } + } + + rule { + id = "tmp" + + filter { + prefix = "tmp/" + } + + expiration { + date = "2022-12-31T00:00:00Z" + } + + status = "Enabled" + } +} +``` + +### Migrating to `aws_s3_bucket_logging` + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "log_bucket" { + # ... other configuration ... + bucket = "example-log-bucket" +} + +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + logging { + target_bucket = aws_s3_bucket.log_bucket.id + target_prefix = "log/" + } +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "log_bucket" { + bucket = "example-log-bucket" + + # ... other configuration ... +} + +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... +} + +resource "aws_s3_bucket_logging" "example" { + bucket = aws_s3_bucket.example.id + target_bucket = aws_s3_bucket.log_bucket.id + target_prefix = "log/" +} +``` + +### Migrating to `aws_s3_bucket_object_lock_configuration` + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + object_lock_configuration { + object_lock_enabled = "Enabled" + + rule { + default_retention { + mode = "COMPLIANCE" + days = 3 + } + } + } +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + object_lock_enabled = true +} + +resource "aws_s3_bucket_object_lock_configuration" "example" { + bucket = aws_s3_bucket.example.id + + rule { + default_retention { + mode = "COMPLIANCE" + days = 3 + } + } +} +``` + +### Migrating to `aws_s3_bucket_policy` + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + policy = < **NOTE:** As `aws_s3_bucket_versioning` is a separate resource, any S3 objects for which versioning is important (_e.g._, a truststore for mutual TLS authentication) must implicitly or explicitly depend on the `aws_s3_bucket_versioning` resource. Otherwise, the S3 objects may be created before versioning has been set. [See below](#ensure-objects-depend-on-versioning) for an example. Also note that AWS recommends waiting 15 minutes after enabling versioning on a bucket before putting or deleting objects in/from the bucket. 
+ +#### Buckets With Versioning Enabled + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + versioning { + enabled = true + } +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... +} + +resource "aws_s3_bucket_versioning" "example" { + bucket = aws_s3_bucket.example.id + versioning_configuration { + status = "Enabled" + } +} +``` + +#### Buckets With Versioning Disabled or Suspended + +Depending on the version of the Terraform AWS Provider you are migrating from, the interpretation of `versioning.enabled = false` +in your `aws_s3_bucket` resource will differ and thus the migration to the `aws_s3_bucket_versioning` resource will also differ as follows. + +If you are migrating from the Terraform AWS Provider `v3.70.0` or later: + +* For new S3 buckets, `enabled = false` is synonymous to `Disabled`. +* For existing S3 buckets, `enabled = false` is synonymous to `Suspended`. + +If you are migrating from an earlier version of the Terraform AWS Provider: + +* For both new and existing S3 buckets, `enabled = false` is synonymous to `Suspended`. + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + versioning { + enabled = false + } +} +``` + +Update the configuration to one of the following: + +* If migrating from Terraform AWS Provider `v3.70.0` or later and bucket versioning was never enabled: + + ```terraform + resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + } + + resource "aws_s3_bucket_versioning" "example" { + bucket = aws_s3_bucket.example.id + versioning_configuration { + status = "Disabled" + } + } + ``` + +* If migrating from Terraform AWS Provider `v3.70.0` or later and bucket versioning was enabled at one point: + + ```terraform + resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + } + + resource "aws_s3_bucket_versioning" "example" { + bucket = aws_s3_bucket.example.id + versioning_configuration { + status = "Suspended" + } + } + ``` + +* If migrating from an earlier version of Terraform AWS Provider: + + ```terraform + resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + } + + resource "aws_s3_bucket_versioning" "example" { + bucket = aws_s3_bucket.example.id + versioning_configuration { + status = "Suspended" + } + } + ``` + +#### Ensure Objects Depend on Versioning + +When you create an object whose `version_id` you need and an `aws_s3_bucket_versioning` resource in the same configuration, you are more likely to have success by ensuring the `s3_object` depends either implicitly (see below) or explicitly (i.e., using `depends_on = [aws_s3_bucket_versioning.example]`) on the `aws_s3_bucket_versioning` resource. + +~> **NOTE:** For critical and/or production S3 objects, do not create a bucket, enable versioning, and create an object in the bucket within the same configuration. Doing so will not allow the AWS-recommended 15 minutes between enabling versioning and writing to the bucket. 
+ +This example shows the `aws_s3_object.example` depending implicitly on the versioning resource through the reference to `aws_s3_bucket_versioning.example.bucket` to define `bucket`: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yotto" +} + +resource "aws_s3_bucket_versioning" "example" { + bucket = aws_s3_bucket.example.id + + versioning_configuration { + status = "Enabled" + } +} + +resource "aws_s3_object" "example" { + bucket = aws_s3_bucket_versioning.example.bucket + key = "droeloe" + source = "example.txt" +} +``` + +### Migrating to `aws_s3_bucket_website_configuration` + +Given this previous configuration: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... + website { + index_document = "index.html" + error_document = "error.html" + } +} +``` + +Update the configuration to: + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "yournamehere" + + # ... other configuration ... +} + +resource "aws_s3_bucket_website_configuration" "example" { + bucket = aws_s3_bucket.example.id + + index_document { + suffix = "index.html" + } + + error_document { + key = "error.html" + } +} +``` + +Given this previous configuration that uses the `aws_s3_bucket` parameter `website_domain` with `aws_route53_record`: + +```terraform +resource "aws_route53_zone" "main" { + name = "domain.test" +} + +resource "aws_s3_bucket" "website" { + # ... other configuration ... + website { + index_document = "index.html" + error_document = "error.html" + } +} + +resource "aws_route53_record" "alias" { + zone_id = aws_route53_zone.main.zone_id + name = "www" + type = "A" + + alias { + zone_id = aws_s3_bucket.website.hosted_zone_id + name = aws_s3_bucket.website.website_domain + evaluate_target_health = true + } +} +``` + +Update the configuration to use the `aws_s3_bucket_website_configuration` resource and its `website_domain` parameter: + +```terraform +resource "aws_route53_zone" "main" { + name = "domain.test" +} + +resource "aws_s3_bucket" "website" { + # ... other configuration ... +} + +resource "aws_s3_bucket_website_configuration" "example" { + bucket = aws_s3_bucket.website.id + + index_document { + suffix = "index.html" + } +} + +resource "aws_route53_record" "alias" { + zone_id = aws_route53_zone.main.zone_id + name = "www" + type = "A" + + alias { + zone_id = aws_s3_bucket.website.hosted_zone_id + name = aws_s3_bucket_website_configuration.example.website_domain + evaluate_target_health = true + } +} +``` + ## S3 Bucket Refactor +~> **NOTE:** This only applies to v4.0.0 through v4.8.0 of the AWS Provider, which introduce significant breaking +changes to the `aws_s3_bucket` resource. We recommend upgrading to v4.9.0 of the AWS Provider instead. See the section above, [Changes to S3 Bucket Drift Detection](#changes-to-s3-bucket-drift-detection), for additional upgrade considerations. + To help distribute the management of S3 bucket settings via independent resources, various arguments and attributes in the `aws_s3_bucket` resource have become **read-only**. Configurations dependent on these arguments should be updated to use the corresponding `aws_s3_bucket_*` resource in order to prevent Terraform from reporting “unconfigurable attribute” errors for read-only arguments. Once updated, it is recommended to import new `aws_s3_bucket_*` resources into Terraform state. 
@@ -1062,7 +2193,7 @@ resource "aws_s3_bucket" "example" {
       "Principal": {
         "AWS": "${data.aws_elb_service_account.current.arn}"
       },
-      "Resource": "arn:${data.aws_partition.current.partition}:s3:::example/*",
+      "Resource": "arn:${data.aws_partition.current.partition}:s3:::yournamehere/*",
       "Sid": "Stmt1446575236270"
     }
   ],
@@ -1106,7 +2237,7 @@ resource "aws_s3_bucket_policy" "example" {
       "Principal": {
         "AWS": "${data.aws_elb_service_account.current.arn}"
       },
-      "Resource": "arn:${data.aws_partition.current.partition}:s3:::example/*",
+      "Resource": "${aws_s3_bucket.example.arn}/*",
       "Sid": "Stmt1446575236270"
     }
   ],
@@ -1513,7 +2644,7 @@ resource and remove `versioning` and its nested arguments in the `aws_s3_bucket`
     }
   }
   ```
-
+
 * If migrating from an earlier version of Terraform AWS Provider:
 
   ```terraform
diff --git a/website/docs/index.html.markdown b/website/docs/index.html.markdown
index fb2c87f21c3e..3b857e437797 100644
--- a/website/docs/index.html.markdown
+++ b/website/docs/index.html.markdown
@@ -78,6 +78,9 @@ The AWS Provider supports assuming an IAM role, either in the provider
 configuration block parameter `assume_role`
 or in [a named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html).
 
+The AWS Provider supports assuming an IAM role using [web identity federation and OpenID Connect (OIDC)](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html#cli-configure-role-oidc).
+This can be configured either using environment variables or in a named profile.
+
 When using a named profile, the AWS Provider also supports [sourcing credentials from an external process](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html).
 
 ### Provider Configuration
@@ -212,7 +215,7 @@ credential_process = custom-process --username jdoe
 |Secret Access Key|`secret_key`|`AWS_SECRET_ACCESS_KEY`|`aws_secret_access_key`|
 |Session Token|`token`|`AWS_SESSION_TOKEN`|`aws_session_token`|
 |Region|`region`|`AWS_REGION` or `AWS_DEFAULT_REGION`|`region`|
-|Custom CA Bundle |`custom_ca_bundle`|`AWS_CA_BUNDLE`|Not supported|
+|Custom CA Bundle |`custom_ca_bundle`|`AWS_CA_BUNDLE`|`ca_bundle`|
 |EC2 IMDS Endpoint |`ec2_metadata_service_endpoint`|`AWS_EC2_METADATA_SERVICE_ENDPOINT`|N/A|
 |EC2 IMDS Endpoint Mode|`ec2_metadata_service_endpoint_mode`|`AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE`|N/A|
 |Disable EC2 IMDS|`skip_metadata_api_check`|`AWS_EC2_METADATA_DISABLED`|N/A|
@@ -246,6 +249,23 @@ See the [assume role documentation](https://docs.aws.amazon.com/cli/latest/userg
 [envvars]: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
 [config]: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-settings
 
+### Assume Role with Web Identity Configuration Reference
+
+Configuration for assuming an IAM role using web identity federation can be done using environment variables or a named profile in shared configuration files.
+
+Provider configuration cannot be used to assume an IAM role using web identity federation.
+
+See the assume role documentation [section on web identities](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html#cli-configure-role-oidc) for more information.
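+As an illustrative sketch (the role ARN, token file path, and session name below are placeholders), the web identity settings from the table that follows can be supplied as environment variables before running Terraform:
+
+```
+$ export AWS_ROLE_ARN="arn:aws:iam::123456789012:role/terraform-web-identity"
+$ export AWS_WEB_IDENTITY_TOKEN_FILE="/path/to/web-identity-token"
+$ export AWS_ROLE_SESSION_NAME="terraform"
+```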
+
+|Setting|Provider|[Environment Variable][envvars]|[Shared Config][config]|
+|-------|--------|-------------------------------|------------------------|
+|Role ARN|Not supported|`AWS_ROLE_ARN`|`role_arn`|
+|Web Identity Token|Not supported|`AWS_WEB_IDENTITY_TOKEN_FILE`|`web_identity_token_file`|
+|Duration|Not supported|N/A|`duration_seconds`|
+|Policy|Not supported|N/A|`policy`|
+|Policy ARNs|Not supported|N/A|`policy_arns`|
+|Session Name|Not supported|`AWS_ROLE_SESSION_NAME`|`role_session_name`|
+
 ## Custom User-Agent Information
 
 By default, the underlying AWS client used by the Terraform AWS Provider creates requests with User-Agent headers including information about Terraform and AWS SDK for Go versions. To provide additional information in the User-Agent headers, the `TF_APPEND_USER_AGENT` environment variable can be set and its value will be directly added to HTTP requests. E.g.,
diff --git a/website/docs/r/autoscaling_attachment.html.markdown b/website/docs/r/autoscaling_attachment.html.markdown
index 88a5e035146a..333e6196dd99 100644
--- a/website/docs/r/autoscaling_attachment.html.markdown
+++ b/website/docs/r/autoscaling_attachment.html.markdown
@@ -33,7 +33,7 @@ resource "aws_autoscaling_attachment" "asg_attachment_bar" {
 # Create a new ALB Target Group attachment
 resource "aws_autoscaling_attachment" "asg_attachment_bar" {
   autoscaling_group_name = aws_autoscaling_group.asg.id
-  alb_target_group_arn   = aws_lb_target_group.test.arn
+  lb_target_group_arn    = aws_lb_target_group.test.arn
 }
 ```
diff --git a/website/docs/r/backup_plan.html.markdown b/website/docs/r/backup_plan.html.markdown
index 8b64592501a3..86dabbe93b75 100644
--- a/website/docs/r/backup_plan.html.markdown
+++ b/website/docs/r/backup_plan.html.markdown
@@ -20,6 +20,10 @@ resource "aws_backup_plan" "example" {
     rule_name         = "tf_example_backup_rule"
     target_vault_name = aws_backup_vault.test.name
     schedule          = "cron(0 12 * * ? *)"
+
+    lifecycle {
+      delete_after = 14
+    }
   }
 
   advanced_backup_setting {
diff --git a/website/docs/r/cloudformation_stack_set.html.markdown b/website/docs/r/cloudformation_stack_set.html.markdown
index 15a92117d8b7..02ac56aad108 100644
--- a/website/docs/r/cloudformation_stack_set.html.markdown
+++ b/website/docs/r/cloudformation_stack_set.html.markdown
@@ -91,6 +91,7 @@ The following arguments are supported:
 * `retain_stacks_on_account_removal` - (Optional) Whether or not to retain stacks when the account is removed.
 * `name` - (Required) Name of the StackSet. The name must be unique in the region where you create your StackSet. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphabetic character and cannot be longer than 128 characters.
 * `capabilities` - (Optional) A list of capabilities. Valid values: `CAPABILITY_IAM`, `CAPABILITY_NAMED_IAM`, `CAPABILITY_AUTO_EXPAND`.
+* `operation_preferences` - (Optional) Preferences for how AWS CloudFormation performs a stack set update.
 * `description` - (Optional) Description of the StackSet.
 * `execution_role_name` - (Optional) Name of the IAM Role in all target accounts for StackSet operations. Defaults to `AWSCloudFormationStackSetExecutionRole` when using the `SELF_MANAGED` permission model. This should not be defined when using the `SERVICE_MANAGED` permission model.
 * `parameters` - (Optional) Key-value map of input parameters for the StackSet template. All template parameters, including those with a `Default`, must be configured or ignored with `lifecycle` configuration block `ignore_changes` argument.
All `NoEcho` template parameters must be ignored with the `lifecycle` configuration block `ignore_changes` argument.
@@ -100,6 +101,17 @@ The following arguments are supported:
 * `template_body` - (Optional) String containing the CloudFormation template body. Maximum size: 51,200 bytes. Conflicts with `template_url`.
 * `template_url` - (Optional) String containing the location of a file containing the CloudFormation template body. The URL must point to a template that is located in an Amazon S3 bucket. Maximum location file size: 460,800 bytes. Conflicts with `template_body`.
 
+### `operation_preferences` Argument Reference
+
+The `operation_preferences` configuration block supports the following arguments:
+
+* `failure_tolerance_count` - (Optional) The number of accounts, per Region, for which this operation can fail before AWS CloudFormation stops the operation in that Region.
+* `failure_tolerance_percentage` - (Optional) The percentage of accounts, per Region, for which this stack operation can fail before AWS CloudFormation stops the operation in that Region.
+* `max_concurrent_count` - (Optional) The maximum number of accounts in which to perform this operation at one time.
+* `max_concurrent_percentage` - (Optional) The maximum percentage of accounts in which to perform this operation at one time.
+* `region_concurrency_type` - (Optional) The concurrency type for deploying StackSets operations in Regions: in parallel or one Region at a time.
+* `region_order` - (Optional) The order of the Regions where you want to perform the stack operation.
+
 ## Attributes Reference
 
 In addition to all arguments above, the following attributes are exported:
diff --git a/website/docs/r/db_instance_automated_backups_replication.markdown b/website/docs/r/db_instance_automated_backups_replication.markdown
new file mode 100644
index 000000000000..f70e68c9c405
--- /dev/null
+++ b/website/docs/r/db_instance_automated_backups_replication.markdown
@@ -0,0 +1,96 @@
+---
+subcategory: "RDS"
+layout: "aws"
+page_title: "AWS: aws_db_instance_automated_backups_replication"
+description: |-
+  Enables replication of automated backups to a different AWS Region.
+---
+
+# Resource: aws_db_instance_automated_backups_replication
+
+Manages cross-region replication of automated backups to a different AWS Region. Documentation for cross-region automated backup replication can be found at:
+
+* [Replicating automated backups to another AWS Region](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReplicateBackups.html)
+
+-> **Note:** This resource has to be created in the destination region.
+
+## Example Usage
+
+```terraform
+resource "aws_db_instance_automated_backups_replication" "default" {
+  source_db_instance_arn = "arn:aws:rds:us-west-2:123456789012:db:mydatabase"
+  retention_period       = 14
+}
+```
+
+## Encrypting the automated backup with KMS
+
+```terraform
+resource "aws_db_instance_automated_backups_replication" "default" {
+  source_db_instance_arn = "arn:aws:rds:us-west-2:123456789012:db:mydatabase"
+  kms_key_id             = "arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012"
+}
+```
+
+## Example including an RDS DB instance
+
+```terraform
+provider "aws" {
+  region = "us-east-1"
+}
+
+provider "aws" {
+  region = "us-west-2"
+  alias  = "replica"
+}
+
+resource "aws_db_instance" "default" {
+  allocated_storage       = 10
+  identifier              = "mydb"
+  engine                  = "postgres"
+  engine_version          = "13.4"
+  instance_class          = "db.t3.micro"
+  name                    = "mydb"
+  username                = "masterusername"
+  password                = "mustbeeightcharacters"
+  backup_retention_period = 7
+  storage_encrypted       = true
+  skip_final_snapshot     = true
+}
+
+resource "aws_kms_key" "default" {
+  description = "Encryption key for automated backups"
+
+  provider = aws.replica
+}
+
+resource "aws_db_instance_automated_backups_replication" "default" {
+  source_db_instance_arn = aws_db_instance.default.arn
+  kms_key_id             = aws_kms_key.default.arn
+
+  provider = aws.replica
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `kms_key_id` - (Optional, Forces new resource) The AWS KMS key identifier for encryption of the replicated automated backups. The KMS key ID is the Amazon Resource Name (ARN) for the KMS encryption key in the destination AWS Region, for example, `arn:aws:kms:us-east-1:123456789012:key/AKIAIOSFODNN7EXAMPLE`.
+* `pre_signed_url` - (Optional, Forces new resource) A URL that contains a [Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) signed request for the [`StartDBInstanceAutomatedBackupsReplication`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_StartDBInstanceAutomatedBackupsReplication.html) action to be called in the AWS Region of the source DB instance.
+* `retention_period` - (Optional, Forces new resource) The retention period for the replicated automated backups, defaults to `7`.
+* `source_db_instance_arn` - (Required, Forces new resource) The Amazon Resource Name (ARN) of the source DB instance for the replicated automated backups, for example, `arn:aws:rds:us-west-2:123456789012:db:mydatabase`.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `id` - The Amazon Resource Name (ARN) of the replicated automated backups.
+
+## Import
+
+RDS instance automated backups replication can be imported using the `arn`, e.g.,
+
+```
+$ terraform import aws_db_instance_automated_backups_replication.default arn:aws:rds:us-east-1:123456789012:auto-backup:ab-faaa2mgdj1vmp4xflr7yhsrmtbtob7ltrzzz2my
+```
diff --git a/website/docs/r/default_network_acl.html.markdown b/website/docs/r/default_network_acl.html.markdown
index 1432771fd8ec..78ae4ec7b491 100644
--- a/website/docs/r/default_network_acl.html.markdown
+++ b/website/docs/r/default_network_acl.html.markdown
@@ -36,7 +36,7 @@ resource "aws_default_network_acl" "default" {
     protocol   = -1
     rule_no    = 100
     action     = "allow"
-    cidr_block = aws_vpc.mainvpc.cidr_block
+    cidr_block = "0.0.0.0/0"
     from_port  = 0
     to_port    = 0
   }
diff --git a/website/docs/r/default_route_table.html.markdown b/website/docs/r/default_route_table.html.markdown
index 7482fadd0d4e..77e951fa6b6d 100644
--- a/website/docs/r/default_route_table.html.markdown
+++ b/website/docs/r/default_route_table.html.markdown
@@ -76,6 +76,7 @@ One of the following destination arguments must be supplied:
 
 One of the following target arguments must be supplied:
 
+* `core_network_arn` - (Optional) The Amazon Resource Name (ARN) of a core network.
 * `egress_only_gateway_id` - (Optional) Identifier of a VPC Egress Only Internet Gateway.
 * `gateway_id` - (Optional) Identifier of a VPC internet gateway or a virtual private gateway.
 * `instance_id` - (Optional) Identifier of an EC2 instance.
diff --git a/website/docs/r/dlm_lifecycle_policy.markdown b/website/docs/r/dlm_lifecycle_policy.markdown
index 85c26eb185a1..993d79ce2306 100644
--- a/website/docs/r/dlm_lifecycle_policy.markdown
+++ b/website/docs/r/dlm_lifecycle_policy.markdown
@@ -176,6 +176,52 @@ resource "aws_dlm_lifecycle_policy" "example" {
 }
 ```
 
+### Example Event Based Policy Usage
+
+```terraform
+data "aws_caller_identity" "current" {}
+
+resource "aws_dlm_lifecycle_policy" "example" {
+  description        = "tf-acc-basic"
+  execution_role_arn = aws_iam_role.example.arn
+
+  policy_details {
+    policy_type = "EVENT_BASED_POLICY"
+
+    action {
+      name = "tf-acc-basic"
+      cross_region_copy {
+        encryption_configuration {}
+        retain_rule {
+          interval      = 15
+          interval_unit = "MONTHS"
+        }
+
+        target = "us-west-2"
+      }
+    }
+
+    event_source {
+      type = "MANAGED_CWE"
+      parameters {
+        description_regex = "^.*Created for policy: policy-1234567890abcdef0.*$"
+        event_type        = "shareSnapshot"
+        snapshot_owner    = [data.aws_caller_identity.current.account_id]
+      }
+    }
+  }
+}
+
+data "aws_iam_policy" "example" {
+  name = "AWSDataLifecycleManagerServiceRole"
+}
+
+resource "aws_iam_role_policy_attachment" "example" {
+  role       = aws_iam_role.example.id
+  policy_arn = data.aws_iam_policy.example.arn
+}
+```
+
 ## Argument Reference
 
 The following arguments are supported:
@@ -188,30 +234,94 @@ The following arguments are supported:
 
 #### Policy Details arguments
 
-* `resource_types` - (Required) A list of resource types that should be targeted by the lifecycle policy. `VOLUME` is currently the only allowed value.
-* `schedule` - (Required) See the [`schedule` configuration](#schedule-arguments) block.
-* `target_tags` (Required) A map of tag keys and their values. Any resources that match the `resource_types` and are tagged with _any_ of these tags will be targeted.
+* `action` - (Optional) The actions to be performed when the event-based policy is triggered. You can specify only one action per policy. This parameter is required for event-based policies only. If you are creating a snapshot or AMI policy, omit this parameter.
See the [`action` configuration](#action-arguments) block.
+* `event_source` - (Optional) The event that triggers the event-based policy. This parameter is required for event-based policies only. If you are creating a snapshot or AMI policy, omit this parameter. See the [`event_source` configuration](#event-source-arguments) block.
+* `resource_types` - (Optional) A list of resource types that should be targeted by the lifecycle policy. Valid values are `VOLUME` and `INSTANCE`.
+* `resource_locations` - (Optional) The location of the resources to backup. If the source resources are located in an AWS Region, specify `CLOUD`. If the source resources are located on an Outpost in your account, specify `OUTPOST`. If you specify `OUTPOST`, Amazon Data Lifecycle Manager backs up all resources of the specified type with matching target tags across all of the Outposts in your account. Valid values are `CLOUD` and `OUTPOST`.
+* `policy_type` - (Optional) The valid target resource types and actions a policy can manage. Specify `EBS_SNAPSHOT_MANAGEMENT` to create a lifecycle policy that manages the lifecycle of Amazon EBS snapshots. Specify `IMAGE_MANAGEMENT` to create a lifecycle policy that manages the lifecycle of EBS-backed AMIs. Specify `EVENT_BASED_POLICY` to create an event-based policy that performs specific actions when a defined event occurs in your AWS account. Default value is `EBS_SNAPSHOT_MANAGEMENT`.
+* `parameters` - (Optional) A set of optional parameters for snapshot and AMI lifecycle policies. See the [`parameters` configuration](#parameters-arguments) block.
+* `schedule` - (Optional) See the [`schedule` configuration](#schedule-arguments) block.
+* `target_tags` (Optional) A map of tag keys and their values. Any resources that match the `resource_types` and are tagged with _any_ of these tags will be targeted.
 
 ~> Note: You cannot have overlapping lifecycle policies that share the same `target_tags`. Terraform is unable to detect this at plan time but it will fail during apply.
 
+#### Action arguments
+
+* `cross_region_copy` - (Optional) The rule for copying shared snapshots across Regions. See the [`cross_region_copy` configuration](#action-cross-region-copy-rule-arguments) block.
+* `name` - (Optional) A descriptive name for the action.
+
+##### Action Cross Region Copy Rule arguments
+
+* `encryption_configuration` - (Required) The encryption settings for the copied snapshot. See the [`encryption_configuration`](#encryption-configuration-arguments) block. Max of 1 per action.
+* `retain_rule` - (Required) Specifies the retention rule for cross-Region snapshot copies. See the [`retain_rule`](#cross-region-copy-rule-retain-rule-arguments) block. Max of 1 per action.
+* `target` - (Required) The target Region or the Amazon Resource Name (ARN) of the target Outpost for the snapshot copies.
+
+###### Encryption Configuration arguments
+
+* `cmk_arn` - (Optional) The Amazon Resource Name (ARN) of the AWS KMS key to use for EBS encryption. If this parameter is not specified, the default KMS key for the account is used.
+* `encrypted` - (Required) To encrypt a copy of an unencrypted snapshot when encryption by default is not enabled, enable encryption using this parameter. Copies of encrypted snapshots are encrypted, even if this parameter is false or when encryption by default is not enabled.
+
+#### Event Source arguments
+
+* `parameters` - (Required) Information about the event. See the [`parameters` configuration](#event-source-parameters-arguments) block.
+* `type` - (Required) The source of the event. Currently only managed CloudWatch Events rules are supported. The only valid value is `MANAGED_CWE`.
+
+##### Event Source Parameters arguments
+
+* `description_regex` - (Required) The snapshot description that can trigger the policy. The description pattern is specified using a regular expression. The policy runs only if a snapshot with a description that matches the specified pattern is shared with your account.
+* `event_type` - (Required) The type of event. Currently, only `shareSnapshot` events are supported.
+* `snapshot_owner` - (Required) The IDs of the AWS accounts that can trigger the policy by sharing snapshots with your account. The policy only runs if one of the specified AWS accounts shares a snapshot with your account.
+
+#### Parameters arguments
+
+* `exclude_boot_volume` - (Optional) Indicates whether to exclude the root volume from snapshots created using CreateSnapshots. The default is `false`.
+* `no_reboot` - (Optional) Applies to AMI lifecycle policies only. Indicates whether targeted instances are rebooted when the lifecycle policy runs. `true` indicates that targeted instances are not rebooted when the policy runs. `false` indicates that target instances are rebooted when the policy runs. The default is `true` (instances are not rebooted).
+
 #### Schedule arguments
 
 * `copy_tags` - (Optional) Copy all user-defined tags on a source volume to snapshots of the volume created by this policy.
 * `create_rule` - (Required) See the [`create_rule`](#create-rule-arguments) block. Max of 1 per schedule.
 * `cross_region_copy_rule` (Optional) - See the [`cross_region_copy_rule`](#cross-region-copy-rule-arguments) block. Max of 3 per schedule.
 * `name` - (Required) A name for the schedule.
+* `deprecate_rule` - (Optional) See the [`deprecate_rule`](#deprecate-rule-arguments) block. Max of 1 per schedule.
+* `fast_restore_rule` - (Optional) See the [`fast_restore_rule`](#fast-restore-rule-arguments) block. Max of 1 per schedule.
 * `retain_rule` - (Required) See the [`retain_rule`](#retain-rule-arguments) block. Max of 1 per schedule.
+* `share_rule` - (Optional) See the [`share_rule`](#share-rule-arguments) block. Max of 1 per schedule.
 * `tags_to_add` - (Optional) A map of tag keys and their values. DLM lifecycle policies will already tag the snapshot with the tags on the volume. This configuration adds extra tags on top of these.
+* `variable_tags` - (Optional) A map of tag keys and variable values, where the values are determined when the policy is executed. Only `$(instance-id)` or `$(timestamp)` are valid values. Can only be used when `resource_types` is `INSTANCE`.
 
 #### Create Rule arguments
 
-* `interval` - (Required) How often this lifecycle policy should be evaluated. `1`, `2`,`3`,`4`,`6`,`8`,`12` or `24` are valid values.
+* `cron_expression` - (Optional) The schedule, as a Cron expression. The schedule interval must be between 1 hour and 1 year.
+* `interval` - (Optional) How often this lifecycle policy should be evaluated. `1`, `2`,`3`,`4`,`6`,`8`,`12` or `24` are valid values.
 * `interval_unit` - (Optional) The unit for how often the lifecycle policy should be evaluated. `HOURS` is currently the only allowed value and also the default value.
+* `location` - (Optional) Specifies the destination for snapshots created by the policy. To create snapshots in the same Region as the source resource, specify `CLOUD`. To create snapshots on the same Outpost as the source resource, specify `OUTPOST_LOCAL`.
If you omit this parameter, `CLOUD` is used by default. If the policy targets resources in an AWS Region, then you must create snapshots in the same Region as the source resource. If the policy targets resources on an Outpost, then you can create snapshots on the same Outpost as the source resource, or in the Region of that Outpost. Valid values are `CLOUD` and `OUTPOST_LOCAL`. * `times` - (Optional) A list of times in 24 hour clock format that sets when the lifecycle policy should be evaluated. Max of 1. +#### Deprecate Rule arguments + +* `count` - (Optional) Specifies the number of oldest AMIs to deprecate. Must be an integer between `1` and `1000`. +* `interval` - (Optional) Specifies the period after which to deprecate AMIs created by the schedule. The maximum is 100 years. This is equivalent to 1200 months, 5200 weeks, or 36500 days. +* `interval_unit` - (Optional) The unit of time for time-based retention. Valid values are `DAYS`, `WEEKS`, `MONTHS`, `YEARS`. + +#### Fast Restore Rule arguments + +* `availability_zones` - (Required) The Availability Zones in which to enable fast snapshot restore. +* `count` - (Optional) The number of snapshots to be enabled with fast snapshot restore. Must be an integer between `1` and `1000`. +* `interval` - (Optional) The amount of time to enable fast snapshot restore. The maximum is 100 years. This is equivalent to 1200 months, 5200 weeks, or 36500 days. +* `interval_unit` - (Optional) The unit of time for enabling fast snapshot restore. Valid values are `DAYS`, `WEEKS`, `MONTHS`, `YEARS`. + #### Retain Rule arguments -* `count` - (Required) How many snapshots to keep. Must be an integer between 1 and 1000. +* `count` - (Optional) How many snapshots to keep. Must be an integer between `1` and `1000`. +* `interval` - (Optional) The amount of time to retain each snapshot. The maximum is 100 years. This is equivalent to 1200 months, 5200 weeks, or 36500 days. +* `interval_unit` - (Optional) The unit of time for time-based retention. Valid values are `DAYS`, `WEEKS`, `MONTHS`, `YEARS`. + +#### Share Rule arguments + +* `target_accounts` - (Required) The IDs of the AWS accounts with which to share the snapshots. +* `interval` - (Optional) The period after which snapshots that are shared with other AWS accounts are automatically unshared. +* `interval_unit` - (Optional) The unit of time for the automatic unsharing interval. Valid values are `DAYS`, `WEEKS`, `MONTHS`, `YEARS`. 
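+As an illustration of how the schedule-level blocks described above fit together, here is a minimal sketch of a snapshot policy combining a time-based `retain_rule` with a `share_rule` (the account ID, tag values, and the `aws_iam_role.example` reference are placeholders assumed to be defined elsewhere):
+
+```terraform
+resource "aws_dlm_lifecycle_policy" "shared_snapshots" {
+  description        = "Daily snapshots, shared with another account"
+  execution_role_arn = aws_iam_role.example.arn
+
+  policy_details {
+    resource_types = ["VOLUME"]
+
+    schedule {
+      name = "two weeks of daily snapshots"
+
+      # Take one snapshot per day at 23:45 UTC.
+      create_rule {
+        interval      = 24
+        interval_unit = "HOURS"
+        times         = ["23:45"]
+      }
+
+      # Time-based retention instead of a snapshot count.
+      retain_rule {
+        interval      = 14
+        interval_unit = "DAYS"
+      }
+
+      # Share snapshots with another account and unshare after 30 days.
+      share_rule {
+        target_accounts = ["111111111111"]
+        interval        = 30
+        interval_unit   = "DAYS"
+      }
+    }
+
+    target_tags = {
+      Snapshot = "true"
+    }
+  }
+}
+```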
#### Cross Region Copy Rule arguments
diff --git a/website/docs/r/dynamodb_contributor_insights.html.markdown b/website/docs/r/dynamodb_contributor_insights.html.markdown
new file mode 100644
index 000000000000..5a07ee1a666e
--- /dev/null
+++ b/website/docs/r/dynamodb_contributor_insights.html.markdown
@@ -0,0 +1,38 @@
+---
+subcategory: "DynamoDB"
+layout: "aws"
+page_title: "AWS: aws_dynamodb_contributor_insights"
+description: |-
+  Provides a DynamoDB contributor insights resource
+---
+
+# Resource: aws_dynamodb_contributor_insights
+
+Provides a DynamoDB contributor insights resource.
+
+## Example Usage
+
+```terraform
+resource "aws_dynamodb_contributor_insights" "test" {
+  table_name = "ExampleTableName"
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `table_name` - (Required) The name of the table to enable contributor insights.
+* `index_name` - (Optional) The global secondary index name.
+
+## Attributes Reference
+
+No additional attributes are exported.
+
+## Import
+
+`aws_dynamodb_contributor_insights` can be imported using the format `name:table_name/index:index_name`, followed by the account number, e.g.,
+
+```
+$ terraform import aws_dynamodb_contributor_insights.test name:ExampleTableName/index:ExampleIndexName/123456789012
+```
diff --git a/website/docs/r/dynamodb_table_item.html.markdown b/website/docs/r/dynamodb_table_item.html.markdown
index 803e2d8c5fbb..8478a5ce213a 100644
--- a/website/docs/r/dynamodb_table_item.html.markdown
+++ b/website/docs/r/dynamodb_table_item.html.markdown
@@ -46,13 +46,14 @@ resource "aws_dynamodb_table" "example" {
 
 ## Argument Reference
 
+~> **Note:** Names included in `item` are represented internally with everything but letters removed. There is the possibility of collisions if two names, once filtered, are the same. For example, the names `your-name-here` and `yournamehere` will overlap and cause an error.
+
 The following arguments are supported:
 
-* `table_name` - (Required) The name of the table to contain the item.
 * `hash_key` - (Required) Hash key to use for lookups and identification of the item
+* `item` - (Required) JSON representation of a map of attribute name/value pairs, one for each attribute. Only the primary key attributes are required; you can optionally provide other attribute name-value pairs for the item.
 * `range_key` - (Optional) Range key to use for lookups and identification of the item. Required if there is a range key defined in the table.
-* `item` - (Required) JSON representation of a map of attribute name/value pairs, one for each attribute.
-  Only the primary key attributes are required; you can optionally provide other attribute name-value pairs for the item.
+* `table_name` - (Required) Name of the table to contain the item.
 
 ## Attributes Reference
diff --git a/website/docs/r/eks_addon.html.markdown b/website/docs/r/eks_addon.html.markdown
index 997da63205e3..df62e1bd1ecd 100644
--- a/website/docs/r/eks_addon.html.markdown
+++ b/website/docs/r/eks_addon.html.markdown
@@ -75,7 +75,7 @@ resource "aws_iam_role_policy_attachment" "example" {
 The following arguments are required:
 
 * `addon_name` – (Required) Name of the EKS add-on. The name must match one of
-  the names returned by [list-addon](https://docs.aws.amazon.com/cli/latest/reference/eks/list-addons.html).
+  the names returned by [describe-addon-versions](https://docs.aws.amazon.com/cli/latest/reference/eks/describe-addon-versions.html).
 * `cluster_name` – (Required) Name of the EKS Cluster.
Must be between 1-100 characters in length. Must begin with an alphanumeric character, and must only contain alphanumeric characters, dashes and underscores (`^[0-9A-Za-z][A-Za-z0-9\-_]+$`). The following arguments are optional: diff --git a/website/docs/r/elasticache_cluster.html.markdown b/website/docs/r/elasticache_cluster.html.markdown index fe56e9c4f0d1..8ba4b9f758f4 100644 --- a/website/docs/r/elasticache_cluster.html.markdown +++ b/website/docs/r/elasticache_cluster.html.markdown @@ -68,6 +68,31 @@ resource "aws_elasticache_cluster" "replica" { } ``` +### Redis Log Delivery configuration + +```terraform +resource "aws_elasticache_cluster" "test" { + cluster_id = "mycluster" + engine = "redis" + node_type = "cache.t3.micro" + num_cache_nodes = 1 + port = 6379 + apply_immediately = true + log_delivery_configuration { + destination = aws_cloudwatch_log_group.example.name + destination_type = "cloudwatch-logs" + log_format = "text" + log_type = "slow-log" + } + log_delivery_configuration { + destination = aws_kinesis_firehose_delivery_stream.example.name + destination_type = "kinesis-firehose" + log_format = "json" + log_type = "engine-log" + } +} +``` + ## Argument Reference The following arguments are required: @@ -81,12 +106,17 @@ The following arguments are required: The following arguments are optional: * `apply_immediately` - (Optional) Whether any database modifications are applied immediately, or during the next maintenance window. Default is `false`. See [Amazon ElastiCache Documentation for more information.](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheCluster.html). +* `auto_minor_version_upgrade` - (Optional) Specifies whether minor version engine upgrades will be applied automatically to the underlying Cache Cluster instances during the maintenance window. + Only supported for engine type `"redis"` and if the engine version is 6 or higher. + Defaults to `true`. * `availability_zone` - (Optional) Availability Zone for the cache cluster. If you want to create cache nodes in multi-az, use `preferred_availability_zones` instead. Default: System chosen Availability Zone. Changing this value will re-create the resource. * `az_mode` - (Optional, Memcached only) Whether the nodes in this Memcached node group are created in a single Availability Zone or created across multiple Availability Zones in the cluster's region. Valid values for this parameter are `single-az` or `cross-az`, default is `single-az`. If you want to choose `cross-az`, `num_cache_nodes` must be greater than `1`. * `engine_version` – (Optional) Version number of the cache engine to be used. -See [Describe Cache Engine Versions](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-engine-versions.html) -in the AWS Documentation for supported versions. When `engine` is `redis` and the version is 6 or higher, only the major version can be set, e.g., `6.x`, otherwise, specify the full version desired, e.g., `5.0.6`. The actual engine version used is returned in the attribute `engine_version_actual`, [defined below](#engine_version_actual). + If not set, defaults to the latest version. + See [Describe Cache Engine Versions](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-engine-versions.html) + in the AWS Documentation for supported versions. When `engine` is `redis` and the version is 6 or higher, only the major version can be set, e.g., `6.x`, otherwise, specify the full version desired, e.g., `5.0.6`. 
The actual engine version used is returned in the attribute `engine_version_actual`, [defined below](#engine_version_actual). * `final_snapshot_identifier` - (Optional, Redis only) Name of your final cluster snapshot. If omitted, no final snapshot will be made. +* `log_delivery_configuration` - (Optional, Redis only) Specifies the destination and format of Redis [SLOWLOG](https://redis.io/commands/slowlog) or Redis [Engine Log](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Log_Delivery.html#Log_contents-engine-log). See the documentation on [Amazon ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Log_Delivery.html). See [Log Delivery Configuration](#log-delivery-configuration) below for more details. * `maintenance_window` – (Optional) Specifies the weekly time range for when maintenance on the cache cluster is performed. The format is `ddd:hh24:mi-ddd:hh24:mi` (24H Clock UTC). The minimum maintenance window is a 60 minute period. Example: `sun:05:00-sun:09:00`. @@ -114,6 +144,15 @@ In addition to all arguments above, the following attributes are exported: * `configuration_endpoint` - (Memcached only) Configuration endpoint to allow host discovery. * `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block). +### Log Delivery Configuration + +The `log_delivery_configuration` block allows the streaming of Redis [SLOWLOG](https://redis.io/commands/slowlog) or Redis [Engine Log](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Log_Delivery.html#Log_contents-engine-log) to CloudWatch Logs or Kinesis Data Firehose. Max of 2 blocks. + +* `destination` - Name of either the CloudWatch Logs LogGroup or Kinesis Data Firehose resource. +* `destination_type` - For CloudWatch Logs use `cloudwatch-logs` or for Kinesis Data Firehose use `kinesis-firehose`. +* `log_format` - Valid values are `json` or `text` +* `log_type` - Valid values are `slow-log` or `engine-log`. Max 1 of each. + ## Import ElastiCache Clusters can be imported using the `cluster_id`, e.g., diff --git a/website/docs/r/elasticache_replication_group.html.markdown b/website/docs/r/elasticache_replication_group.html.markdown index c8ab61b6d1c5..7e336b1f91ac 100644 --- a/website/docs/r/elasticache_replication_group.html.markdown +++ b/website/docs/r/elasticache_replication_group.html.markdown @@ -95,6 +95,33 @@ resource "aws_elasticache_replication_group" "baz" { } ``` +### Redis Log Delivery configuration + +```terraform +resource "aws_elasticache_replication_group" "test" { + replication_group_id = "myreplicaciongroup" + replication_group_description = "test description" + node_type = "cache.t3.small" + port = 6379 + apply_immediately = true + auto_minor_version_upgrade = false + maintenance_window = "tue:06:30-tue:07:30" + snapshot_window = "01:00-02:00" + log_delivery_configuration { + destination = aws_cloudwatch_log_group.example.name + destination_type = "cloudwatch-logs" + log_format = "text" + log_type = "slow-log" + } + log_delivery_configuration { + destination = aws_kinesis_firehose_delivery_stream.example.name + destination_type = "kinesis-firehose" + log_format = "json" + log_type = "engine-log" + } +} +``` + ~> **Note:** We currently do not support passing a `primary_cluster_id` in order to create the Replication Group. 
~> **Note:** Automatic Failover is unavailable for Redis versions earlier than 2.8.6, @@ -148,7 +175,9 @@ The following arguments are optional: * `apply_immediately` - (Optional) Specifies whether any modifications are applied immediately, or during the next maintenance window. Default is `false`. * `at_rest_encryption_enabled` - (Optional) Whether to enable encryption at rest. * `auth_token` - (Optional) Password used to access a password protected server. Can be specified only if `transit_encryption_enabled = true`. -* `auto_minor_version_upgrade` - (Optional) Specifies whether a minor engine upgrades will be applied automatically to the underlying Cache Cluster instances during the maintenance window. This parameter is currently not supported by the AWS API. Defaults to `true`. +* `auto_minor_version_upgrade` - (Optional) Specifies whether minor version engine upgrades will be applied automatically to the underlying Cache Cluster instances during the maintenance window. + Only supported for engine type `"redis"` and if the engine version is 6 or higher. + Defaults to `true`. * `automatic_failover_enabled` - (Optional) Specifies whether a read-only replica will be automatically promoted to read/write primary if the existing primary fails. If enabled, `number_cache_clusters` must be greater than 1. Must be enabled for Redis (cluster mode enabled) replication groups. Defaults to `false`. * `availability_zones` - (Optional, **Deprecated** use `preferred_cache_cluster_azs` instead) List of EC2 availability zones in which the replication group's cache clusters will be created. The order of the availability zones in the list is not considered. * `cluster_mode` - (Optional, **Deprecated** use root-level `num_node_groups` and `replicas_per_node_group` instead) Create a native Redis cluster. `automatic_failover_enabled` must be set to true. Cluster Mode documented below. Only 1 `cluster_mode` block is allowed. Note that configuring this block does not enable cluster mode, i.e., data sharding, this requires using a parameter group that has the parameter `cluster-enabled` set to true. @@ -158,6 +187,7 @@ The following arguments are optional: * `final_snapshot_identifier` - (Optional) The name of your final node group (shard) snapshot. ElastiCache creates the snapshot from the primary node in the cluster. If omitted, no final snapshot will be made. * `global_replication_group_id` - (Optional) The ID of the global replication group to which this replication group should belong. If this parameter is specified, the replication group is added to the specified global replication group as a secondary replication group; otherwise, the replication group is not part of any global replication group. If `global_replication_group_id` is set, the `num_node_groups` parameter (or the `num_node_groups` parameter of the deprecated `cluster_mode` block) cannot be set. * `kms_key_id` - (Optional) The ARN of the key that you wish to use if encrypting at rest. If not supplied, uses service managed encryption. Can be specified only if `at_rest_encryption_enabled = true`. +* `log_delivery_configuration` - (Optional, Redis only) Specifies the destination and format of Redis [SLOWLOG](https://redis.io/commands/slowlog) or Redis [Engine Log](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Log_Delivery.html#Log_contents-engine-log). See the documentation on [Amazon ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Log_Delivery.html#Log_contents-engine-log). 
See [Log Delivery Configuration](#log-delivery-configuration) below for more details. * `maintenance_window` – (Optional) Specifies the weekly time range for when maintenance on the cache cluster is performed. The format is `ddd:hh24:mi-ddd:hh24:mi` (24H Clock UTC). The minimum maintenance window is a 60 minute period. Example: `sun:05:00-sun:09:00` * `multi_az_enabled` - (Optional) Specifies whether to enable Multi-AZ Support for the replication group. If `true`, `automatic_failover_enabled` must also be enabled. Defaults to `false`. * `node_type` - (Optional) Instance class to be used. See AWS documentation for information on [supported node types](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/CacheNodes.SupportedTypes.html) and [guidance on selecting node types](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/nodes-select-size.html). Required unless `global_replication_group_id` is set. Cannot be set if `global_replication_group_id` is set. @@ -183,6 +213,15 @@ The following arguments are optional: * `num_node_groups` - (Optional, **Deprecated** use root-level `num_node_groups` instead) Number of node groups (shards) for this Redis replication group. Changing this number will trigger an online resizing operation before other settings modifications. Required unless `global_replication_group_id` is set. * `replicas_per_node_group` - (Optional, Required with `cluster_mode` `num_node_groups`, **Deprecated** use root-level `replicas_per_node_group` instead) Number of replica nodes in each node group. Valid values are 0 to 5. Changing this number will trigger an online resizing operation before other settings modifications. +### Log Delivery Configuration + +The `log_delivery_configuration` block allows the streaming of Redis [SLOWLOG](https://redis.io/commands/slowlog) or Redis [Engine Log](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Log_Delivery.html#Log_contents-engine-log) to CloudWatch Logs or Kinesis Data Firehose. Max of 2 blocks. + +* `destination` - Name of either the CloudWatch Logs LogGroup or Kinesis Data Firehose resource. +* `destination_type` - For CloudWatch Logs use `cloudwatch-logs` or for Kinesis Data Firehose use `kinesis-firehose`. +* `log_format` - Valid values are `json` or `text` +* `log_type` - Valid values are `slow-log` or `engine-log`. Max 1 of each. + ## Attributes Reference In addition to all arguments above, the following attributes are exported: diff --git a/website/docs/r/elasticsearch_domain.html.markdown b/website/docs/r/elasticsearch_domain.html.markdown index 286552baa045..872ceb968759 100644 --- a/website/docs/r/elasticsearch_domain.html.markdown +++ b/website/docs/r/elasticsearch_domain.html.markdown @@ -253,6 +253,7 @@ The following arguments are optional: ### cluster_config +* `cold_storage_options` - (Optional) Configuration block containing cold storage configuration. Detailed below. * `dedicated_master_count` - (Optional) Number of dedicated main nodes in the cluster. * `dedicated_master_enabled` - (Optional) Whether dedicated main nodes are enabled for the cluster. * `dedicated_master_type` - (Optional) Instance type of the dedicated main nodes in the cluster. @@ -264,6 +265,10 @@ The following arguments are optional: * `zone_awareness_config` - (Optional) Configuration block containing zone awareness settings. Detailed below. * `zone_awareness_enabled` - (Optional) Whether zone awareness is enabled, set to `true` for multi-az deployment. 
To enable awareness with three Availability Zones, the `availability_zone_count` within the `zone_awareness_config` must be set to `3`. +#### cold_storage_options + +* `enabled` - (Optional) Boolean to enable cold storage for an Elasticsearch domain. Defaults to `false`. Master and ultrawarm nodes must be enabled for cold storage. + #### zone_awareness_config * `availability_zone_count` - (Optional) Number of Availability Zones for the domain to use with `zone_awareness_enabled`. Defaults to `2`. Valid values: `2` or `3`. @@ -339,7 +344,9 @@ In addition to all arguments above, the following attributes are exported: `aws_elasticsearch_domain` provides the following [Timeouts](https://www.terraform.io/docs/configuration/blocks/resources/syntax.html#operation-timeouts) configuration options: +* `create` - (Optional, Default: `60m`) How long to wait for creation. * `update` - (Optional, Default: `60m`) How long to wait for updates. +* `delete` - (Optional, Default: `90m`) How long to wait for deletion. ## Import diff --git a/website/docs/r/elasticsearch_domain_saml_options.html.markdown b/website/docs/r/elasticsearch_domain_saml_options.html.markdown index 8abf58c3e854..d85af204f77e 100644 --- a/website/docs/r/elasticsearch_domain_saml_options.html.markdown +++ b/website/docs/r/elasticsearch_domain_saml_options.html.markdown @@ -62,7 +62,7 @@ The following arguments are optional: * `master_user_name` - (Optional) This username from the SAML IdP receives full permissions to the cluster, equivalent to a new master user. * `roles_key` - (Optional) Element of the SAML assertion to use for backend roles. Default is roles. * `session_timeout_minutes` - (Optional) Duration of a session in minutes after a user logs in. Default is 60. Maximum value is 1,440. -* `subject_key` - (Optional) Element of the SAML assertion to use for username. Default is NameID. +* `subject_key` - (Optional) Custom SAML attribute to use for user names. Default is an empty string - `""`. This will cause Elasticsearch to use the `NameID` element of the `Subject`, which is the default location for name identifiers in the SAML specification. #### idp diff --git a/website/docs/r/glue_catalog_table.html.markdown b/website/docs/r/glue_catalog_table.html.markdown index 746ad547445d..f1c8f78a6431 100644 --- a/website/docs/r/glue_catalog_table.html.markdown +++ b/website/docs/r/glue_catalog_table.html.markdown @@ -104,6 +104,10 @@ The follow arguments are optional: ### partition_index +~> **NOTE:** A `partition_index` cannot be added to an existing `glue_catalog_table`. +This will destroy and recreate the table, possibly resulting in data loss. +To add an index to an existing table, see the [`glue_partition_index` resource](/docs/providers/aws/r/glue_partition_index.html) for configuration details. + * `index_name` - (Required) Name of the partition index. * `keys` - (Required) Keys for the partition index. diff --git a/website/docs/r/iam_group.html.markdown b/website/docs/r/iam_group.html.markdown index 51f420d97e58..c5455a151cc8 100644 --- a/website/docs/r/iam_group.html.markdown +++ b/website/docs/r/iam_group.html.markdown @@ -10,6 +10,8 @@ description: |- Provides an IAM group. +~> **NOTE on user management:** Using `aws_iam_group_membership` or `aws_iam_user_group_membership` resources in addition to manually managing user/group membership using the console may lead to configuration drift or conflicts. For this reason, it's recommended to either manage membership entirely with Terraform or entirely within the AWS console. 
+ ## Example Usage ```terraform diff --git a/website/docs/r/imagebuilder_distribution_configuration.html.markdown b/website/docs/r/imagebuilder_distribution_configuration.html.markdown index ee099916d39c..47e0ac047159 100644 --- a/website/docs/r/imagebuilder_distribution_configuration.html.markdown +++ b/website/docs/r/imagebuilder_distribution_configuration.html.markdown @@ -98,6 +98,7 @@ The following arguments are optional: ### launch_template_configuration * `default` - (Optional) Indicates whether to set the specified Amazon EC2 launch template as the default launch template. Defaults to `true`. +* `account_id` - The account ID that this configuration applies to. * `launch_template_id` - (Required) The ID of the Amazon EC2 launch template to use. ## Attributes Reference diff --git a/website/docs/r/iot_authorizer.html.markdown b/website/docs/r/iot_authorizer.html.markdown index e251cc0b640a..c2d774cf03f6 100644 --- a/website/docs/r/iot_authorizer.html.markdown +++ b/website/docs/r/iot_authorizer.html.markdown @@ -29,6 +29,7 @@ resource "aws_iot_authorizer" "example" { ## Argument Reference * `authorizer_function_arn` - (Required) The ARN of the authorizer's Lambda function. +* `enable_caching_for_http` - (Optional) Specifies whether the HTTP caching is enabled or not. Default: `false`. * `name` - (Required) The name of the authorizer. * `signing_disabled` - (Optional) Specifies whether AWS IoT validates the token signature in an authorization request. Default: `false`. * `status` - (Optional) The status of Authorizer request at creation. Valid values: `ACTIVE`, `INACTIVE`. Default: `ACTIVE`. diff --git a/website/docs/r/iot_indexing_configuration.html.markdown b/website/docs/r/iot_indexing_configuration.html.markdown new file mode 100644 index 000000000000..9d8855fefd52 --- /dev/null +++ b/website/docs/r/iot_indexing_configuration.html.markdown @@ -0,0 +1,76 @@ +--- +subcategory: "IoT" +layout: "aws" +page_title: "AWS: aws_iot_indexing_configuration" +description: |- + Managing IoT Thing indexing. +--- + +# Resource: aws_iot_indexing_configuration + +Managing [IoT Thing indexing](https://docs.aws.amazon.com/iot/latest/developerguide/managing-index.html). + +## Example Usage + +```terraform +resource "aws_iot_indexing_configuration" "example" { + thing_indexing_configuration { + thing_indexing_mode = "REGISTRY_AND_SHADOW" + thing_connectivity_indexing_mode = "STATUS" + device_defender_indexing_mode = "VIOLATIONS" + named_shadow_indexing_mode = "ON" + + custom_field { + name = "shadow.desired.power" + type = "Boolean" + } + custom_field { + name = "attributes.version" + type = "Number" + } + custom_field { + name = "shadow.name.thing1shadow.desired.DefaultDesired" + type = "String" + } + custom_field { + name = "deviceDefender.securityProfile1.NUMBER_VALUE_BEHAVIOR.lastViolationValue.number" + type = "Number" + } + } +} +``` + +## Argument Reference + +* `thing_group_indexing_configuration` - (Optional) Thing group indexing configuration. See below. +* `thing_indexing_configuration` - (Optional) Thing indexing configuration. See below. + +### thing_group_indexing_configuration + +The `thing_group_indexing_configuration` configuration block supports the following: + +* `custom_field` - (Optional) A list of thing group fields to index. This list cannot contain any managed fields. See below. +* `managed_field` - (Optional) Contains fields that are indexed and whose types are already known by the Fleet Indexing service. See below. 
+* `thing_group_indexing_mode` - (Required) Thing group indexing mode. Valid values: `OFF`, `ON`.
+
+### thing_indexing_configuration
+
+The `thing_indexing_configuration` configuration block supports the following:
+
+* `custom_field` - (Optional) Contains custom field names and their data type. See below.
+* `device_defender_indexing_mode` - (Optional) Device Defender indexing mode. Valid values: `VIOLATIONS`, `OFF`. Default: `OFF`.
+* `managed_field` - (Optional) Contains fields that are indexed and whose types are already known by the Fleet Indexing service. See below.
+* `named_shadow_indexing_mode` - (Optional) [Named shadow](https://docs.aws.amazon.com/iot/latest/developerguide/iot-device-shadows.html) indexing mode. Valid values: `ON`, `OFF`. Default: `OFF`.
+* `thing_connectivity_indexing_mode` - (Optional) Thing connectivity indexing mode. Valid values: `STATUS`, `OFF`. Default: `OFF`.
+* `thing_indexing_mode` - (Required) Thing indexing mode. Valid values: `REGISTRY`, `REGISTRY_AND_SHADOW`, `OFF`.
+
+### field
+
+The `custom_field` and `managed_field` configuration blocks support the following:
+
+* `name` - (Optional) The name of the field.
+* `type` - (Optional) The data type of the field. Valid values: `Number`, `String`, `Boolean`.
+
+## Attributes Reference
+
+No additional attributes are exported.
diff --git a/website/docs/r/iot_logging_options.html.markdown b/website/docs/r/iot_logging_options.html.markdown
new file mode 100644
index 000000000000..645840faf0c3
--- /dev/null
+++ b/website/docs/r/iot_logging_options.html.markdown
@@ -0,0 +1,30 @@
+---
+subcategory: "IoT"
+layout: "aws"
+page_title: "AWS: aws_iot_logging_options"
+description: |-
+  Provides a resource to manage default logging options.
+---
+
+# Resource: aws_iot_logging_options
+
+Provides a resource to manage [default logging options](https://docs.aws.amazon.com/iot/latest/developerguide/configure-logging.html#configure-logging-console).
+
+## Example Usage
+
+```terraform
+resource "aws_iot_logging_options" "example" {
+  default_log_level = "WARN"
+  role_arn          = aws_iam_role.example.arn
+}
+```
+
+## Argument Reference
+
+* `default_log_level` - (Optional) The default logging level. Valid values: `"DEBUG"`, `"INFO"`, `"ERROR"`, `"WARN"`, `"DISABLED"`.
+* `disable_all_logs` - (Optional) If `true`, all logs are disabled. The default is `false`.
+* `role_arn` - (Required) The ARN of the role that allows IoT to write to CloudWatch Logs.
+
+## Attributes Reference
+
+No additional attributes are exported.
diff --git a/website/docs/r/iot_provisioning_template.html.markdown b/website/docs/r/iot_provisioning_template.html.markdown
new file mode 100644
index 000000000000..8ed4db80facd
--- /dev/null
+++ b/website/docs/r/iot_provisioning_template.html.markdown
@@ -0,0 +1,113 @@
+---
+subcategory: "IoT"
+layout: "aws"
+page_title: "AWS: aws_iot_provisioning_template"
+description: |-
+  Manages an IoT fleet provisioning template.
+---
+
+# Resource: aws_iot_provisioning_template
+
+Manages an IoT fleet provisioning template. For more info, see the AWS documentation on [fleet provisioning](https://docs.aws.amazon.com/iot/latest/developerguide/provision-wo-cert.html).
+
+## Example Usage
+
+```terraform
+data "aws_iam_policy_document" "iot_assume_role_policy" {
+  statement {
+    actions = ["sts:AssumeRole"]
+
+    principals {
+      type        = "Service"
+      identifiers = ["iot.amazonaws.com"]
+    }
+  }
+}
+
+resource "aws_iam_role" "iot_fleet_provisioning" {
+  name               = "IoTProvisioningServiceRole"
+  path               = "/service-role/"
+  assume_role_policy = data.aws_iam_policy_document.iot_assume_role_policy.json
+}
+
+resource "aws_iam_role_policy_attachment" "iot_fleet_provisioning_registration" {
+  role       = aws_iam_role.iot_fleet_provisioning.name
+  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSIoTThingsRegistration"
+}
+
+data "aws_iam_policy_document" "device_policy" {
+  statement {
+    actions   = ["iot:Subscribe"]
+    resources = ["*"]
+  }
+}
+
+resource "aws_iot_policy" "device_policy" {
+  name   = "DevicePolicy"
+  policy = data.aws_iam_policy_document.device_policy.json
+}
+
+resource "aws_iot_provisioning_template" "fleet" {
+  name                  = "FleetTemplate"
+  description           = "My provisioning template"
+  provisioning_role_arn = aws_iam_role.iot_fleet_provisioning.arn
+
+  template_body = jsonencode({
+    Parameters = {
+      SerialNumber = { Type = "String" }
+    }
+
+    Resources = {
+      certificate = {
+        Properties = {
+          CertificateId = { Ref = "AWS::IoT::Certificate::Id" }
+          Status        = "Active"
+        }
+        Type = "AWS::IoT::Certificate"
+      }
+
+      policy = {
+        Properties = {
+          PolicyName = aws_iot_policy.device_policy.name
+        }
+        Type = "AWS::IoT::Policy"
+      }
+    }
+  })
+}
+```
+
+## Argument Reference
+
+The following arguments are supported:
+
+* `name` - (Required) The name of the fleet provisioning template.
+* `description` - (Optional) The description of the fleet provisioning template.
+* `enabled` - (Optional) True to enable the fleet provisioning template, otherwise false.
+* `pre_provisioning_hook` - (Optional) Creates a pre-provisioning hook template. Details below.
+* `provisioning_role_arn` - (Required) The role ARN for the role associated with the fleet provisioning template. This IoT role grants permission to provision a device.
+* `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
+* `template_body` - (Required) The JSON formatted contents of the fleet provisioning template.
+
+### pre_provisioning_hook
+
+The `pre_provisioning_hook` configuration block supports the following:
+
+* `payload_version` - (Optional) The version of the payload that was sent to the target function. The only valid (and the default) payload version is `"2020-04-01"`.
+* `target_arn` - (Optional) The ARN of the target function.
+
+## Attributes Reference
+
+In addition to all arguments above, the following attributes are exported:
+
+* `arn` - The ARN that identifies the provisioning template.
+* `default_version_id` - The default version of the fleet provisioning template.
+* `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block).
+
+## Import
+
+IoT fleet provisioning templates can be imported using the `name`, e.g.
+
+```
+$ terraform import aws_iot_provisioning_template.fleet FleetProvisioningTemplate
+```
\ No newline at end of file
diff --git a/website/docs/r/lambda_function.html.markdown b/website/docs/r/lambda_function.html.markdown
index 7cbb5ab21472..3ab3fbda6c5c 100644
--- a/website/docs/r/lambda_function.html.markdown
+++ b/website/docs/r/lambda_function.html.markdown
@@ -44,6 +44,8 @@ EOF
 }
 
 resource "aws_lambda_function" "test_lambda" {
+  # If the file is not in the current working directory you will need to include a
+  # path.module in the filename.
   filename      = "lambda_function_payload.zip"
   function_name = "lambda_function_name"
   role          = aws_iam_role.iam_for_lambda.arn
@@ -79,6 +81,44 @@ resource "aws_lambda_function" "example" {
 }
 ```
 
+### Lambda Ephemeral Storage
+
+Lambda Function Ephemeral Storage (`/tmp`) allows you to configure storage of up to `10` GB. The default value is `512` MB.
+
+```terraform
+resource "aws_iam_role" "iam_for_lambda" {
+  name = "iam_for_lambda"
+
+  assume_role_policy = <
+* `function_url` - The HTTP URL endpoint for the function in the format `https://<url_id>.lambda-url.<region>.on.aws`.
+* `url_id` - A generated ID for the endpoint.
+
+## Import
+
+Lambda function URLs can be imported using the `function_name` or `function_name/qualifier`, e.g.,
+
+```
+$ terraform import aws_lambda_function_url.test_lambda_url my_test_lambda_function
+```
diff --git a/website/docs/r/lambda_permission.html.markdown b/website/docs/r/lambda_permission.html.markdown
index b60dcb2f4c86..e7b6f63426ae 100644
--- a/website/docs/r/lambda_permission.html.markdown
+++ b/website/docs/r/lambda_permission.html.markdown
@@ -194,6 +194,7 @@ EOF
   For API Gateway, this should be the ARN of the API, as described [here][2].
 * `statement_id` - (Optional) A unique statement identifier. By default generated by Terraform.
 * `statement_id_prefix` - (Optional) A statement identifier prefix. Terraform will generate a unique suffix. Conflicts with `statement_id`.
+* `principal_org_id` - (Optional) The identifier for your organization in AWS Organizations. Use this to grant permissions to all the AWS accounts under this organization.
 
 [1]: https://developer.amazon.com/docs/custom-skills/host-a-custom-skill-as-an-aws-lambda-function.html#use-aws-cli
 [2]: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-control-access-using-iam-policies-to-invoke-api.html
diff --git a/website/docs/r/mskconnect_connector.html.markdown b/website/docs/r/mskconnect_connector.html.markdown
new file mode 100644
index 000000000000..d48e5557b3a5
--- /dev/null
+++ b/website/docs/r/mskconnect_connector.html.markdown
@@ -0,0 +1,202 @@
+---
+subcategory: "Kafka Connect (MSK Connect)"
+layout: "aws"
+page_title: "AWS: aws_mskconnect_connector"
+description: |-
+  Provides an Amazon MSK Connect Connector resource.
+---
+
+# Resource: aws_mskconnect_connector
+
+Provides an Amazon MSK Connect Connector resource.
+ +## Example Usage + +### Basic configuration + +```terraform +resource "aws_mskconnect_connector" "example" { + name = "example" + + kafkaconnect_version = "2.7.1" + + capacity { + autoscaling { + mcu_count = 1 + min_worker_count = 1 + max_worker_count = 2 + + scale_in_policy { + cpu_utilization_percentage = 20 + } + + scale_out_policy { + cpu_utilization_percentage = 80 + } + } + } + + connector_configuration = { + "connector.class" = "com.github.jcustenborder.kafka.connect.simulator.SimulatorSinkConnector" + "tasks.max" = "1" + "topics" = "example" + } + + kafka_cluster { + apache_kafka_cluster { + bootstrap_servers = aws_msk_cluster.example.bootstrap_brokers_tls + + vpc { + security_groups = [aws_security_group.example.id] + subnets = [aws_subnet.example1.id, aws_subnet.example2.id, aws_subnet.example3.id] + } + } + } + + kafka_cluster_client_authentication { + authentication_type = "NONE" + } + + kafka_cluster_encryption_in_transit { + encryption_type = "TLS" + } + + plugin { + custom_plugin { + arn = aws_mskconnect_custom_plugin.example.arn + revision = aws_mskconnect_custom_plugin.example.latest_revision + } + } + + service_execution_role_arn = aws_iam_role.example.arn +} +``` + +## Argument Reference + +The following arguments are supported: + +* `capacity` - (Required) Information about the capacity allocated to the connector. See below. + +* `connector_configuration` - (Required) A map of keys to values that represent the configuration for the connector. +* `description` - (Optional) A summary description of the connector. +* `kafka_cluster` - (Required) Specifies which Apache Kafka cluster to connect to. See below. +* `kafka_cluster_client_authentication` - (Required) Details of the client authentication used by the Apache Kafka cluster. See below. +* `kafka_cluster_encryption_in_transit` - (Required) Details of encryption in transit to the Apache Kafka cluster. See below. +* `kafkaconnect_version` - (Required) The version of Kafka Connect. It must be compatible with both the Apache Kafka cluster's version and the plugins. +* `log_delivery` - (Optional) Details about log delivery. See below. +* `name` - (Required) The name of the connector. +* `plugin` - (Required) Specifies which plugins to use for the connector. See below. +* `service_execution_role_arn` - (Required) The Amazon Resource Name (ARN) of the IAM role used by the connector to access the Amazon Web Services resources that it needs. The types of resources depend on the logic of the connector. For example, a connector that has Amazon S3 as a destination must have permissions that allow it to write to the S3 destination bucket. +* `worker_configuration` - (Optional) Specifies which worker configuration to use with the connector. See below. + +### capacity Configuration Block + +* `autoscaling` - (Optional) Information about the auto scaling parameters for the connector. See below. +* `provisioned_capacity` - (Optional) Details about a fixed capacity allocated to a connector. See below. + +### autoscaling Configuration Block + +* `max_worker_count` - (Required) The maximum number of workers allocated to the connector. +* `mcu_count` - (Optional) The number of MSK Connect Units (MCUs) allocated to each connector worker. Valid values: `1`, `2`, `4`, `8`. The default value is `1`. +* `min_worker_count` - (Required) The minimum number of workers allocated to the connector. +* `scale_in_policy` - (Optional) The scale-in policy for the connector. See below.
+* `scale_out_policy` - (Optional) The scale-out policy for the connector. See below. + +### scale_in_policy Configuration Block + +* `cpu_utilization_percentage` - (Required) The CPU utilization percentage threshold at which you want connector scale in to be triggered. + +### scale_out_policy Configuration Block + +* `cpu_utilization_percentage` - (Required) The CPU utilization percentage threshold at which you want connector scale out to be triggered. + +### provisioned_capacity Configuration Block + +* `mcu_count` - (Optional) The number of MSK Connect Units (MCUs) allocated to each connector worker. Valid values: `1`, `2`, `4`, `8`. The default value is `1`. +* `worker_count` - (Required) The number of workers that are allocated to the connector. + +### kafka_cluster Configuration Block + +* `apache_kafka_cluster` - (Required) The Apache Kafka cluster to which the connector is connected. + +### apache_kafka_cluster Configuration Block + +* `bootstrap_servers` - (Required) The bootstrap servers of the cluster. +* `vpc` - (Required) Details of an Amazon VPC which has network connectivity to the Apache Kafka cluster. + +### vpc Configuration Block + +* `security_groups` - (Required) The security groups for the connector. +* `subnets` - (Required) The subnets for the connector. + +### kafka_cluster_client_authentication Configuration Block + +* `authentication_type` - (Optional) The type of client authentication used to connect to the Apache Kafka cluster. Valid values: `IAM`, `NONE`. A value of `NONE` means that no client authentication is used. The default value is `NONE`. + +### kafka_cluster_encryption_in_transit Configuration Block + +* `encryption_type` - (Optional) The type of encryption in transit to the Apache Kafka cluster. Valid values: `PLAINTEXT`, `TLS`. The default value is `PLAINTEXT`. + +### log_delivery Configuration Block + +* `worker_log_delivery` - (Required) The workers can send worker logs to different destination types. This configuration specifies the details of these destinations. See below; a combined sketch follows the plugin blocks below. + +### worker_log_delivery Configuration Block + +* `cloudwatch_logs` - (Optional) Details about delivering logs to Amazon CloudWatch Logs. See below. +* `firehose` - (Optional) Details about delivering logs to Amazon Kinesis Data Firehose. See below. +* `s3` - (Optional) Details about delivering logs to Amazon S3. See below. + +### cloudwatch_logs Configuration Block + +* `enabled` - (Optional) Whether log delivery to Amazon CloudWatch Logs is enabled. +* `log_group` - (Required) The name of the CloudWatch log group that is the destination for log delivery. + +### firehose Configuration Block + +* `delivery_stream` - (Optional) The name of the Kinesis Data Firehose delivery stream that is the destination for log delivery. +* `enabled` - (Required) Specifies whether connector logs get delivered to Amazon Kinesis Data Firehose. + +### s3 Configuration Block + +* `bucket` - (Optional) The name of the S3 bucket that is the destination for log delivery. +* `enabled` - (Required) Specifies whether connector logs get sent to the specified Amazon S3 destination. +* `prefix` - (Optional) The S3 prefix that is the destination for log delivery. + +### plugin Configuration Block + +* `custom_plugin` - (Required) Details about a custom plugin. See below. + +### custom_plugin Configuration Block + +* `arn` - (Required) The Amazon Resource Name (ARN) of the custom plugin. +* `revision` - (Required) The revision of the custom plugin.
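+
+The nested log delivery blocks described above can be combined as in the following minimal sketch. This is an illustration only; the `aws_cloudwatch_log_group.example` and `aws_s3_bucket.logs` resources are hypothetical and assumed to exist elsewhere in the configuration, and the block is placed inside an `aws_mskconnect_connector` resource.
+
+```terraform
+# Sketch of a log_delivery block; assumes hypothetical
+# aws_cloudwatch_log_group.example and aws_s3_bucket.logs resources.
+log_delivery {
+  worker_log_delivery {
+    cloudwatch_logs {
+      enabled   = true
+      log_group = aws_cloudwatch_log_group.example.name
+    }
+
+    s3 {
+      enabled = true
+      bucket  = aws_s3_bucket.logs.id
+      prefix  = "connector-logs/"
+    }
+  }
+}
+```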
+ +### worker_configuration Configuration Block + +* `arn` - (Required) The Amazon Resource Name (ARN) of the worker configuration. +* `revision` - (Required) The revision of the worker configuration. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - The Amazon Resource Name (ARN) of the connector. +* `version` - The current version of the connector. + +## Timeouts + +`aws_mskconnect_connector` provides the following +[Timeouts](https://www.terraform.io/docs/configuration/blocks/resources/syntax.html#operation-timeouts) configuration options: + +* `create` - (Default `20 minutes`) How long to wait for the MSK Connect Connector to be created. +* `update` - (Default `20 minutes`) How long to wait for the MSK Connect Connector to be updated. +* `delete` - (Default `10 minutes`) How long to wait for the MSK Connect Connector to be deleted. + +## Import + +MSK Connect Connector can be imported using the connector's `arn`, e.g., + +``` +$ terraform import aws_mskconnect_connector.example 'arn:aws:kafkaconnect:eu-central-1:123456789012:connector/example/264edee4-17a3-412e-bd76-6681cfc93805-3' +``` diff --git a/website/docs/r/mskconnect_custom_plugin.html.markdown b/website/docs/r/mskconnect_custom_plugin.html.markdown index 70e1c130b34e..91b92c632dbf 100644 --- a/website/docs/r/mskconnect_custom_plugin.html.markdown +++ b/website/docs/r/mskconnect_custom_plugin.html.markdown @@ -73,6 +73,7 @@ In addition to all arguments above, the following attributes are exported: [Timeouts](https://www.terraform.io/docs/configuration/blocks/resources/syntax.html#operation-timeouts) configuration options: * `create` - (Default `10 minutes`) How long to wait for the MSK Connect Custom Plugin to be created. +* `delete` - (Default `10 minutes`) How long to wait for the MSK Connect Custom Plugin to be deleted. ## Import diff --git a/website/docs/r/mwaa_environment.html.markdown b/website/docs/r/mwaa_environment.html.markdown index 5f34d1ab9a5d..295ee58de3c8 100644 --- a/website/docs/r/mwaa_environment.html.markdown +++ b/website/docs/r/mwaa_environment.html.markdown @@ -141,6 +141,7 @@ The following arguments are supported: * `plugins_s3_path` - (Optional) The relative path to the plugins.zip file on your Amazon S3 storage bucket. For example, plugins.zip. If a relative path is provided in the request, then plugins_s3_object_version is required. For more information, see [Importing DAGs on Amazon MWAA](https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-dag-import.html). * `requirements_s3_object_version` - (Optional) The requirements.txt file version you want to use. * `requirements_s3_path` - (Optional) The relative path to the requirements.txt file on your Amazon S3 storage bucket. For example, requirements.txt. If a relative path is provided in the request, then requirements_s3_object_version is required. For more information, see [Importing DAGs on Amazon MWAA](https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-dag-import.html). +* `schedulers` - (Optional) The number of schedulers that you want to run in your environment. v2.0.2 and above accepts `2` - `5`, default `2`. v1.10.12 accepts `1`. * `source_bucket_arn` - (Required) The Amazon Resource Name (ARN) of your Amazon S3 storage bucket. For example, arn:aws:s3:::airflow-mybucketname. * `webserver_access_mode` - (Optional) Specifies whether the webserver should be accessible over the internet or via your specified VPC.
Possible options: `PRIVATE_ONLY` (default) and `PUBLIC_ONLY`. * `weekly_maintenance_window_start` - (Optional) Specifies the start date for the weekly maintenance window. diff --git a/website/docs/r/neptune_cluster_endpoint.html.markdown b/website/docs/r/neptune_cluster_endpoint.html.markdown index 0c517120c3c1..63648431ba17 100644 --- a/website/docs/r/neptune_cluster_endpoint.html.markdown +++ b/website/docs/r/neptune_cluster_endpoint.html.markdown @@ -25,7 +25,7 @@ resource "aws_neptune_cluster_endpoint" "example" { The following arguments are supported: * `cluster_identifier` - (Required, Forces new resources) The DB cluster identifier of the DB cluster associated with the endpoint. -* `cluster_identifier_endpoint` - (Required, Forces new resources) The identifier of the endpoint. +* `cluster_endpoint_identifier` - (Required, Forces new resources) The identifier of the endpoint. * `endpoint_type` - (Required) The type of the endpoint. One of: `READER`, `WRITER`, `ANY`. * `excluded_members` - (Optional) List of DB instance identifiers that aren't part of the custom endpoint group. All other eligible instances are reachable through the custom endpoint. Only relevant if the list of static members is empty. * `static_members` - (Optional) List of DB instance identifiers that are part of the custom endpoint group. diff --git a/website/docs/r/opensearch_domain.html.markdown b/website/docs/r/opensearch_domain.html.markdown new file mode 100644 index 000000000000..aaab763b2ec9 --- /dev/null +++ b/website/docs/r/opensearch_domain.html.markdown @@ -0,0 +1,374 @@ +--- +subcategory: "OpenSearch" +layout: "aws" +page_title: "AWS: aws_opensearch_domain" +description: |- + Terraform resource for managing an AWS OpenSearch Domain. +--- + +# Resource: aws_opensearch_domain + +Manages an Amazon OpenSearch Domain. + +## Elasticsearch vs. OpenSearch + +Amazon OpenSearch Service is the successor to Amazon Elasticsearch Service and supports OpenSearch and legacy Elasticsearch OSS (up to 7.10, the final open source version of the software). + +OpenSearch Domain configurations are similar in many ways to Elasticsearch Domain configurations. However, there are important differences including these: + +* OpenSearch has `engine_version` while Elasticsearch has `elasticsearch_version`. +* Versions are specified differently - _e.g._, `Elasticsearch_7.10` with OpenSearch vs. `7.10` for Elasticsearch. +* `instance_type` argument values end in `search` for OpenSearch vs. `elasticsearch` for Elasticsearch (_e.g._, `t2.micro.search` vs. `t2.micro.elasticsearch`). +* The AWS-managed service-linked role for OpenSearch is called `AWSServiceRoleForAmazonOpenSearchService` instead of `AWSServiceRoleForAmazonElasticsearchService` for Elasticsearch. + +There are also some potentially unexpected similarities in configurations: + +* ARNs for both are prefaced with `arn:aws:es:`. +* Both OpenSearch and Elasticsearch use assume role policies that refer to the `Principal` `Service` as `es.amazonaws.com`. +* IAM policy actions, such as those you will find in `access_policies`, are prefaced with `es:` for both.
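+
+As a brief, hypothetical illustration of the naming overlap above (the account ID and domain name are placeholders), an IAM policy statement for an OpenSearch domain still uses an `es:`-prefixed action against an `arn:aws:es:` ARN:
+
+```terraform
+# Hypothetical sketch: OpenSearch domains keep the legacy "es" service
+# prefix in both IAM actions and ARNs.
+data "aws_iam_policy_document" "example" {
+  statement {
+    actions   = ["es:ESHttpGet"]
+    resources = ["arn:aws:es:us-west-2:123456789012:domain/example/*"]
+
+    principals {
+      type        = "AWS"
+      identifiers = ["*"]
+    }
+  }
+}
+```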
+ +## Example Usage + +### Basic Usage + +```terraform +resource "aws_opensearch_domain" "example" { + domain_name = "example" + engine_version = "Elasticsearch_7.10" + + cluster_config { + instance_type = "r4.large.search" + } + + tags = { + Domain = "TestDomain" + } +} +``` + +### Access Policy + +-> See also: [`aws_opensearch_domain_policy` resource](/docs/providers/aws/r/opensearch_domain_policy.html) + +```terraform +variable "domain" { + default = "tf-test" +} + +data "aws_region" "current" {} + +data "aws_caller_identity" "current" {} + +resource "aws_opensearch_domain" "example" { + domain_name = var.domain + + # ... other configuration ... + + access_policies = < **Note:** You must have created the service-linked role for the OpenSearch service to use `vpc_options`. If you need to create the service-linked role at the same time as the OpenSearch domain then you must use `depends_on` to make sure that the role is created before the OpenSearch domain. See the [VPC based ES domain example](#vpc-based-es) above. + +-> Security Groups and Subnets referenced in these attributes must all be within the same VPC. This determines what VPC the endpoints are created in. + +* `security_group_ids` - (Optional) List of VPC Security Group IDs to be applied to the OpenSearch domain endpoints. If omitted, the default Security Group for the VPC will be used. +* `subnet_ids` - (Required) List of VPC Subnet IDs for the OpenSearch domain endpoints to be created in. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `arn` - ARN of the domain. +* `domain_id` - Unique identifier for the domain. +* `domain_name` - Name of the OpenSearch domain. +* `endpoint` - Domain-specific endpoint used to submit index, search, and data upload requests. +* `kibana_endpoint` - Domain-specific endpoint for Kibana without https scheme. +* `tags_all` - Map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block). +* `vpc_options.0.availability_zones` - If the domain was created inside a VPC, the names of the availability zones the configured `subnet_ids` were created inside. +* `vpc_options.0.vpc_id` - If the domain was created inside a VPC, the ID of the VPC. + +## Timeouts + +`aws_opensearch_domain` provides the following [Timeouts](https://www.terraform.io/docs/configuration/blocks/resources/syntax.html#operation-timeouts) configuration options: + +* `create` - (Optional, Default: `60m`) How long to wait for creation. +* `update` - (Optional, Default: `180m`) How long to wait for updates. +* `delete` - (Optional, Default: `90m`) How long to wait for deletion. + +## Import + +OpenSearch domains can be imported using the `domain_name`, e.g., + +``` +$ terraform import aws_opensearch_domain.example domain_name +``` diff --git a/website/docs/r/opensearch_domain_policy.html.markdown b/website/docs/r/opensearch_domain_policy.html.markdown new file mode 100644 index 000000000000..957cb84d7102 --- /dev/null +++ b/website/docs/r/opensearch_domain_policy.html.markdown @@ -0,0 +1,59 @@ +--- +subcategory: "OpenSearch" +layout: "aws" +page_title: "AWS: aws_opensearch_domain_policy" +description: |- + Provides an OpenSearch Domain Policy. +--- + +# Resource: aws_opensearch_domain_policy + +Allows setting a policy on an OpenSearch domain while referencing domain attributes (e.g., ARN).
+ +## Example Usage + +```terraform +resource "aws_opensearch_domain" "example" { + domain_name = "tf-test" + engine_version = "OpenSearch_1.1" +} + +resource "aws_opensearch_domain_policy" "main" { + domain_name = aws_opensearch_domain.example.domain_name + + access_policies = < **Note:** Account management must be done from the organization's master account. -!> **WARNING:** Deleting this Terraform resource will only remove an AWS account from an organization. Terraform will not close the account. The member account must be prepared to be a standalone account beforehand. See the [AWS Organizations documentation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_remove.html) for more information. +~> **Note:** By default, deleting this Terraform resource will only remove an AWS account from an organization. You must set the `close_on_deletion` flag to true to close the account. It is worth noting that quotas are enforced when using the `close_on_deletion` argument, which you can produce a [CLOSE_ACCOUNT_QUOTA_EXCEEDED](https://docs.aws.amazon.com/organizations/latest/APIReference/API_CloseAccount.html) error, and require you to close the account manually. ## Example Usage @@ -32,6 +32,7 @@ The following arguments are supported: * `iam_user_access_to_billing` - (Optional) If set to `ALLOW`, the new account enables IAM users to access account billing information if they have the required permissions. If set to `DENY`, then only the root user of the new account can access account billing information. * `parent_id` - (Optional) Parent Organizational Unit ID or Root ID for the account. Defaults to the Organization default Root ID. A configuration must be present for this argument to perform drift detection. * `role_name` - (Optional) The name of an IAM role that Organizations automatically preconfigures in the new member account. This role trusts the master account, allowing users in the master account to assume the role, as permitted by the master account administrator. The role has administrator permissions in the new member account. The Organizations API provides no method for reading this information after account creation, so Terraform cannot perform drift detection on its value and will always show a difference for a configured value after import unless [`ignore_changes`](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html#ignore_changes) is used. +* `close_on_deletion` - (Optional) If true, a deletion event will close the account. Otherwise, it will only remove from the organization. * `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. ## Attributes Reference @@ -47,7 +48,7 @@ In addition to all arguments above, the following attributes are exported: The AWS member account can be imported by using the `account_id`, e.g., ``` -$ terraform import aws_organizations_account.my_org 111111111111 +$ terraform import aws_organizations_account.my_account 111111111111 ``` Certain resource arguments, like `role_name`, do not have an Organizations API method for reading the information after account creation. If the argument is set in the Terraform configuration on an imported resource, Terraform will always show a difference. 
To workaround this behavior, either omit the argument from the Terraform configuration or use [`ignore_changes`](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html#ignore_changes) to hide the difference, e.g., diff --git a/website/docs/r/rds_cluster_activity_stream.html.markdown b/website/docs/r/rds_cluster_activity_stream.html.markdown new file mode 100644 index 000000000000..120635c87a09 --- /dev/null +++ b/website/docs/r/rds_cluster_activity_stream.html.markdown @@ -0,0 +1,87 @@ +--- +subcategory: "RDS" +layout: "aws" +page_title: "AWS: aws_rds_cluster_activity_stream" +description: |- + Manages RDS Aurora Cluster Database Activity Streams +--- + +# Resource: aws_rds_cluster_activity_stream + +Manages RDS Aurora Cluster Database Activity Streams. + +Database Activity Streams have some limits and requirements; refer to the [Monitoring Amazon Aurora using Database Activity Streams][1] documentation for detailed limitations and requirements. + +~> **Note:** This resource always calls the RDS [`StartActivityStream`][2] API with the `ApplyImmediately` parameter set to `true`. This is because Terraform needs the activity stream to be started in order for it to get the associated attributes. + +~> **Note:** This resource depends on having at least one `aws_rds_cluster_instance` created. To avoid race conditions when all resources are being created together, add an explicit resource reference using the [resource `depends_on` meta-argument](/docs/configuration/resources.html#depends_on-explicit-resource-dependencies). + +~> **Note:** This resource is available in all regions except the following: `cn-north-1`, `cn-northwest-1`, `us-gov-east-1`, `us-gov-west-1`. + +## Example Usage + +```terraform +resource "aws_rds_cluster" "default" { + cluster_identifier = "aurora-cluster-demo" + availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"] + database_name = "mydb" + master_username = "foo" + master_password = "mustbeeightcharacters" + engine = "aurora-postgresql" + engine_version = "13.4" +} + +resource "aws_rds_cluster_instance" "default" { + identifier = "aurora-instance-demo" + cluster_identifier = aws_rds_cluster.default.cluster_identifier + engine = aws_rds_cluster.default.engine + instance_class = "db.r6g.large" +} + +resource "aws_kms_key" "default" { + description = "AWS KMS Key to encrypt Database Activity Stream" +} + +resource "aws_rds_cluster_activity_stream" "default" { + resource_arn = aws_rds_cluster.default.arn + mode = "async" + kms_key_id = aws_kms_key.default.key_id + + depends_on = [aws_rds_cluster_instance.default] +} +``` + + +## Argument Reference + +For more detailed documentation about each argument, refer to +the [AWS official documentation][3]. + +The following arguments are supported: + +* `resource_arn` - (Required, Forces new resources) The Amazon Resource Name (ARN) of the DB cluster. +* `mode` - (Required, Forces new resources) Specifies the mode of the database activity stream. Database events such as a change or access generate an activity stream event. The database session can handle these events either synchronously or asynchronously. One of: `sync`, `async`. +* `kms_key_id` - (Required, Forces new resources) The AWS KMS key identifier for encrypting messages in the database activity stream. The AWS KMS key identifier is the key ARN, key ID, alias ARN, or alias name for the KMS key.
+* `engine_native_audit_fields_included` - (Optional, Forces new resources) Specifies whether the database activity stream includes engine-native audit fields. This option only applies to an Oracle DB instance. By default, no engine-native audit fields are included. Defaults to `false`. + + + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The Amazon Resource Name (ARN) of the DB cluster. +* `kinesis_stream_name` - The name of the Amazon Kinesis data stream to be used for the database activity stream. + + +## Import + +RDS Aurora Cluster Database Activity Streams can be imported using the `resource_arn`, e.g. + +``` +$ terraform import aws_rds_cluster_activity_stream.default arn:aws:rds:us-west-2:123456789012:cluster:aurora-cluster-demo +``` + +[1]: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/DBActivityStreams.html +[2]: https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_StartActivityStream.html +[3]: https://docs.aws.amazon.com/cli/latest/reference/rds/start-activity-stream.html \ No newline at end of file diff --git a/website/docs/r/route.html.markdown b/website/docs/r/route.html.markdown index 2654d59120c8..b4873838a873 100644 --- a/website/docs/r/route.html.markdown +++ b/website/docs/r/route.html.markdown @@ -59,6 +59,7 @@ One of the following destination arguments must be supplied: One of the following target arguments must be supplied: * `carrier_gateway_id` - (Optional) Identifier of a carrier gateway. This attribute can only be used when the VPC contains a subnet which is associated with a Wavelength Zone. +* `core_network_arn` - (Optional) The Amazon Resource Name (ARN) of a core network. * `egress_only_gateway_id` - (Optional) Identifier of a VPC Egress Only Internet Gateway. * `gateway_id` - (Optional) Identifier of a VPC internet gateway or a virtual private gateway. * `nat_gateway_id` - (Optional) Identifier of a VPC NAT gateway. diff --git a/website/docs/r/route_table.html.markdown b/website/docs/r/route_table.html.markdown index 9e90ed7560c5..7091eb28ef65 100644 --- a/website/docs/r/route_table.html.markdown +++ b/website/docs/r/route_table.html.markdown @@ -88,6 +88,7 @@ One of the following destination arguments must be supplied: One of the following target arguments must be supplied: * `carrier_gateway_id` - (Optional) Identifier of a carrier gateway. This attribute can only be used when the VPC contains a subnet which is associated with a Wavelength Zone. +* `core_network_arn` - (Optional) The Amazon Resource Name (ARN) of a core network. * `egress_only_gateway_id` - (Optional) Identifier of a VPC Egress Only Internet Gateway. * `gateway_id` - (Optional) Identifier of a VPC internet gateway or a virtual private gateway. * `local_gateway_id` - (Optional) Identifier of a Outpost local gateway. diff --git a/website/docs/r/s3_bucket.html.markdown b/website/docs/r/s3_bucket.html.markdown index 71148bbc9fa9..08aa82e88912 100644 --- a/website/docs/r/s3_bucket.html.markdown +++ b/website/docs/r/s3_bucket.html.markdown @@ -12,6 +12,58 @@ Provides a S3 bucket resource. -> This functionality is for managing S3 in an AWS Partition. To manage [S3 on Outposts](https://docs.aws.amazon.com/AmazonS3/latest/dev/S3onOutposts.html), see the [`aws_s3control_bucket`](/docs/providers/aws/r/s3control_bucket.html) resource.
+~> **NOTE on S3 Bucket Accelerate Configuration:** S3 Bucket Accelerate can be configured in either the standalone resource [`aws_s3_bucket_accelerate_configuration`](s3_bucket_accelerate_configuration.html) +or with the deprecated parameter `acceleration_status` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket canned ACL Configuration:** S3 Bucket canned ACL can be configured in either the standalone resource [`aws_s3_bucket_acl`](s3_bucket_acl.html.markdown) +or with the deprecated parameter `acl` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket ACL Grants Configuration:** S3 Bucket grants can be configured in either the standalone resource [`aws_s3_bucket_acl`](s3_bucket_acl.html.markdown) +or with the deprecated parameter `grant` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket CORS Configuration:** S3 Bucket CORS can be configured in either the standalone resource [`aws_s3_bucket_cors_configuration`](s3_bucket_cors_configuration.html.markdown) +or with the deprecated parameter `cors_rule` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket Lifecycle Configuration:** S3 Bucket Lifecycle can be configured in either the standalone resource [`aws_s3_bucket_lifecycle_configuration`](s3_bucket_lifecycle_configuration.html) +or with the deprecated parameter `lifecycle_rule` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket Logging Configuration:** S3 Bucket logging can be configured in either the standalone resource [`aws_s3_bucket_logging`](s3_bucket_logging.html.markdown) +or with the deprecated parameter `logging` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket Object Lock Configuration:** S3 Bucket Object Lock can be configured in either the standalone resource [`aws_s3_bucket_object_lock_configuration`](s3_bucket_object_lock_configuration.html) +or with the deprecated parameter `object_lock_configuration` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket Policy Configuration:** S3 Bucket Policy can be configured in either the standalone resource [`aws_s3_bucket_policy`](s3_bucket_policy.html) +or with the deprecated parameter `policy` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket Replication Configuration:** S3 Bucket Replication can be configured in either the standalone resource [`aws_s3_bucket_replication_configuration`](s3_bucket_replication_configuration.html) +or with the deprecated parameter `replication_configuration` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket Request Payment Configuration:** S3 Bucket Request Payment can be configured in either the standalone resource [`aws_s3_bucket_request_payment_configuration`](s3_bucket_request_payment_configuration.html) +or with the deprecated parameter `request_payer` in the resource `aws_s3_bucket`.
+Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket Server Side Encryption Configuration:** S3 Bucket Server Side Encryption can be configured in either the standalone resource [`aws_s3_bucket_server_side_encryption_configuration`](s3_bucket_server_side_encryption_configuration.html) +or with the deprecated parameter `server_side_encryption_configuration` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket Versioning Configuration:** S3 Bucket versioning can be configured in either the standalone resource [`aws_s3_bucket_versioning`](s3_bucket_versioning.html.markdown) +or with the deprecated parameter `versioning` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + +~> **NOTE on S3 Bucket Website Configuration:** S3 Bucket Website can be configured in either the standalone resource [`aws_s3_bucket_website_configuration`](s3_bucket_website_configuration.html.markdown) +or with the deprecated parameter `website` in the resource `aws_s3_bucket`. +Configuring with both will cause inconsistencies and may overwrite configuration. + ## Example Usage ### Private Bucket w/ Tags @@ -34,50 +86,368 @@ resource "aws_s3_bucket_acl" "example" { ### Static Website Hosting -The `website` argument is read-only as of version 4.0 of the Terraform AWS Provider. -See the [`aws_s3_bucket_website_configuration` resource](s3_bucket_website_configuration.html.markdown) for configuration details. +-> **NOTE:** The parameter `website` is deprecated. +Use the resource [`aws_s3_bucket_website_configuration`](s3_bucket_website_configuration.html.markdown) instead. + +```terraform +resource "aws_s3_bucket" "b" { + bucket = "s3-website-test.hashicorp.com" + acl = "public-read" + policy = file("policy.json") + + website { + index_document = "index.html" + error_document = "error.html" + + routing_rules = < **NOTE:** The parameter `cors_rule` is deprecated. +Use the resource [`aws_s3_bucket_cors_configuration`](s3_bucket_cors_configuration.html.markdown) instead. + +```terraform +resource "aws_s3_bucket" "b" { + bucket = "s3-website-test.hashicorp.com" + acl = "public-read" + + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST"] + allowed_origins = ["https://s3-website-test.hashicorp.com"] + expose_headers = ["ETag"] + max_age_seconds = 3000 + } +} +``` ### Using versioning -The `versioning` argument is read-only as of version 4.0 of the Terraform AWS Provider. -See the [`aws_s3_bucket_versioning` resource](s3_bucket_versioning.html.markdown) for configuration details. +-> **NOTE:** The parameter `versioning` is deprecated. +Use the resource [`aws_s3_bucket_versioning`](s3_bucket_versioning.html.markdown) instead. + +```terraform +resource "aws_s3_bucket" "b" { + bucket = "my-tf-test-bucket" + acl = "private" + + versioning { + enabled = true + } +} +``` ### Enable Logging -The `logging` argument is read-only as of version 4.0 of the Terraform AWS Provider. -See the [`aws_s3_bucket_logging` resource](s3_bucket_logging.html.markdown) for configuration details. +-> **NOTE:** The parameter `logging` is deprecated. +Use the resource [`aws_s3_bucket_logging`](s3_bucket_logging.html.markdown) instead. 
+ +```terraform +resource "aws_s3_bucket" "log_bucket" { + bucket = "my-tf-log-bucket" + acl = "log-delivery-write" +} + +resource "aws_s3_bucket" "b" { + bucket = "my-tf-test-bucket" + acl = "private" + + logging { + target_bucket = aws_s3_bucket.log_bucket.id + target_prefix = "log/" + } +} +``` ### Using object lifecycle -The `lifecycle_rule` argument is read-only as of version 4.0 of the Terraform AWS Provider. -See the [`aws_s3_bucket_lifecycle_configuration` resource](s3_bucket_lifecycle_configuration.html.markdown) for configuration details. +-> **NOTE:** The parameter `lifecycle_rule` is deprecated. +Use the resource [`aws_s3_bucket_lifecycle_configuration`](s3_bucket_lifecycle_configuration.html) instead. + +```terraform +resource "aws_s3_bucket" "bucket" { + bucket = "my-bucket" + acl = "private" + + lifecycle_rule { + id = "log" + enabled = true + + prefix = "log/" + + tags = { + rule = "log" + autoclean = "true" + } + + transition { + days = 30 + storage_class = "STANDARD_IA" # or "ONEZONE_IA" + } + + transition { + days = 60 + storage_class = "GLACIER" + } + + expiration { + days = 90 + } + } + + lifecycle_rule { + id = "tmp" + prefix = "tmp/" + enabled = true + + expiration { + date = "2016-01-12" + } + } +} + +resource "aws_s3_bucket" "versioning_bucket" { + bucket = "my-versioning-bucket" + acl = "private" + + versioning { + enabled = true + } + + lifecycle_rule { + prefix = "config/" + enabled = true + + noncurrent_version_transition { + days = 30 + storage_class = "STANDARD_IA" + } + + noncurrent_version_transition { + days = 60 + storage_class = "GLACIER" + } + + noncurrent_version_expiration { + days = 90 + } + } +} +``` ### Using object lock configuration -The `object_lock_configuration.rule` argument is read-only as of version 4.0 of the Terraform AWS Provider. -To **enable** Object Lock on a **new** bucket, use the `object_lock_enabled` argument in **this** resource. See [Object Lock Configuration](#object-lock-configuration) below for details. -To configure the default retention rule of the Object Lock configuration, see the [`aws_s3_bucket_object_lock_configuration` resource](s3_bucket_object_lock_configuration.html.markdown) for configuration details. +-> **NOTE:** The parameter `object_lock_configuration` is deprecated. +To **enable** Object Lock on a **new** bucket, use the `object_lock_enabled` argument in **this** resource. +To configure the default retention rule of the Object Lock configuration use the resource [`aws_s3_bucket_object_lock_configuration` resource](s3_bucket_object_lock_configuration.html.markdown) instead. To **enable** Object Lock on an **existing** bucket, please contact AWS Support and refer to the [Object lock configuration for an existing bucket](s3_bucket_object_lock_configuration.html.markdown#object-lock-configuration-for-an-existing-bucket) example for more details. +```terraform +resource "aws_s3_bucket" "example" { + bucket = "my-tf-example-bucket" + + object_lock_configuration { + object_lock_enabled = "Enabled" + + rule { + default_retention { + mode = "COMPLIANCE" + days = 5 + } + } + } +} +``` + ### Using replication configuration -The `replication_configuration` argument is read-only as of version 4.0 of the Terraform AWS Provider. -See the [`aws_s3_bucket_replication_configuration` resource](s3_bucket_replication_configuration.html.markdown) for configuration details. +-> **NOTE:** The parameter `replication_configuration` is deprecated. 
+Use the resource [`aws_s3_bucket_replication_configuration`](s3_bucket_replication_configuration.html) instead. + +```terraform +provider "aws" { + region = "eu-west-1" +} + +provider "aws" { + alias = "central" + region = "eu-central-1" +} + +resource "aws_iam_role" "replication" { + name = "tf-iam-role-replication-12345" + + assume_role_policy = < **NOTE:** The parameter `server_side_encryption_configuration` is deprecated. +Use the resource [`aws_s3_bucket_server_side_encryption_configuration`](s3_bucket_server_side_encryption_configuration.html) instead. + +```terraform +resource "aws_kms_key" "mykey" { + description = "This key is used to encrypt bucket objects" + deletion_window_in_days = 10 +} + +resource "aws_s3_bucket" "mybucket" { + bucket = "mybucket" + + server_side_encryption_configuration { + rule { + apply_server_side_encryption_by_default { + kms_master_key_id = aws_kms_key.mykey.arn + sse_algorithm = "aws:kms" + } + } + } +} +``` ### Using ACL policy grants -The `acl` and `grant` arguments are read-only as of version 4.0 of the Terraform AWS Provider. -See the [`aws_s3_bucket_acl` resource](s3_bucket_acl.html.markdown) for configuration details. +-> **NOTE:** The parameters `acl` and `grant` are deprecated. +Use the resource [`aws_s3_bucket_acl`](s3_bucket_acl.html.markdown) instead. + +```terraform +data "aws_canonical_user_id" "current_user" {} + +resource "aws_s3_bucket" "bucket" { + bucket = "mybucket" + + grant { + id = data.aws_canonical_user_id.current_user.id + type = "CanonicalUser" + permissions = ["FULL_CONTROL"] + } + + grant { + type = "Group" + permissions = ["READ_ACP", "WRITE"] + uri = "http://acs.amazonaws.com/groups/s3/LogDelivery" + } +} +``` ## Argument Reference @@ -85,117 +455,272 @@ The following arguments are supported: * `bucket` - (Optional, Forces new resource) The name of the bucket. If omitted, Terraform will assign a random, unique name. Must be lowercase and less than or equal to 63 characters in length. A full list of bucket naming rules [may be found here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html). * `bucket_prefix` - (Optional, Forces new resource) Creates a unique bucket name beginning with the specified prefix. Conflicts with `bucket`. Must be lowercase and less than or equal to 37 characters in length. A full list of bucket naming rules [may be found here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html). +* `acceleration_status` - (Optional, **Deprecated**) Sets the accelerate configuration of an existing bucket. Can be `Enabled` or `Suspended`. Cannot be used in `cn-north-1` or `us-gov-west-1`. Terraform will only perform drift detection if a configuration value is provided. + Use the resource [`aws_s3_bucket_accelerate_configuration`](s3_bucket_accelerate_configuration.html) instead. +* `acl` - (Optional, **Deprecated**) The [canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) to apply. Valid values are `private`, `public-read`, `public-read-write`, `aws-exec-read`, `authenticated-read`, and `log-delivery-write`. Defaults to `private`. Conflicts with `grant`. Terraform will only perform drift detection if a configuration value is provided. Use the resource [`aws_s3_bucket_acl`](s3_bucket_acl.html.markdown) instead. +* `grant` - (Optional, **Deprecated**) An [ACL policy grant](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#sample-acl). See [Grant](#grant) below for details. Conflicts with `acl`. 
Terraform will only perform drift detection if a configuration value is provided. Use the resource [`aws_s3_bucket_acl`](s3_bucket_acl.html.markdown) instead. +* `cors_rule` - (Optional, **Deprecated**) A rule of [Cross-Origin Resource Sharing](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html). See [CORS rule](#cors-rule) below for details. Terraform will only perform drift detection if a configuration value is provided. Use the resource [`aws_s3_bucket_cors_configuration`](s3_bucket_cors_configuration.html.markdown) instead. * `force_destroy` - (Optional, Default:`false`) A boolean that indicates all objects (including any [locked objects](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html)) should be deleted from the bucket so that the bucket can be destroyed without error. These objects are *not* recoverable. +* `lifecycle_rule` - (Optional, **Deprecated**) A configuration of [object lifecycle management](http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html). See [Lifecycle Rule](#lifecycle-rule) below for details. Terraform will only perform drift detection if a configuration value is provided. + Use the resource [`aws_s3_bucket_lifecycle_configuration`](s3_bucket_lifecycle_configuration.html) instead. +* `logging` - (Optional, **Deprecated**) A configuration of [S3 bucket logging](https://docs.aws.amazon.com/AmazonS3/latest/UG/ManagingBucketLogging.html) parameters. See [Logging](#logging) below for details. Terraform will only perform drift detection if a configuration value is provided. + Use the resource [`aws_s3_bucket_logging`](s3_bucket_logging.html.markdown) instead. * `object_lock_enabled` - (Optional, Default:`false`, Forces new resource) Indicates whether this bucket has an Object Lock configuration enabled. -* `object_lock_configuration` - (Optional) A configuration of [S3 object locking](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html). See [Object Lock Configuration](#object-lock-configuration) below. +* `object_lock_configuration` - (Optional, **Deprecated**) A configuration of [S3 object locking](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html). See [Object Lock Configuration](#object-lock-configuration) below for details. + Terraform will only perform drift detection if a configuration value is provided. + Use the `object_lock_enabled` parameter and the resource [`aws_s3_bucket_object_lock_configuration`](s3_bucket_object_lock_configuration.html.markdown) instead. +* `policy` - (Optional, **Deprecated**) A valid [bucket policy](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html) JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a `terraform plan`. In this case, please make sure you use the verbose/specific version of the policy. For more information about building AWS IAM policy documents with Terraform, see the [AWS IAM Policy Document Guide](https://learn.hashicorp.com/terraform/aws/iam-policy). + Terraform will only perform drift detection if a configuration value is provided. + Use the resource [`aws_s3_bucket_policy`](s3_bucket_policy.html) instead. +* `replication_configuration` - (Optional, **Deprecated**) A configuration of [replication configuration](http://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html). See [Replication Configuration](#replication-configuration) below for details.
Terraform will only perform drift detection if a configuration value is provided. + Use the resource [`aws_s3_bucket_replication_configuration`](s3_bucket_replication_configuration.html) instead. +* `request_payer` - (Optional, **Deprecated**) Specifies who should bear the cost of Amazon S3 data transfer. + Can be either `BucketOwner` or `Requester`. By default, the owner of the S3 bucket would incur the costs of any data transfer. + See [Requester Pays Buckets](http://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html) developer guide for more information. + Terraform will only perform drift detection if a configuration value is provided. + Use the resource [`aws_s3_bucket_request_payment_configuration`](s3_bucket_request_payment_configuration.html) instead. +* `server_side_encryption_configuration` - (Optional, **Deprecated**) A configuration of [server-side encryption configuration](http://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html). See [Server Side Encryption Configuration](#server-side-encryption-configuration) below for details. + Terraform will only perform drift detection if a configuration value is provided. + Use the resource [`aws_s3_bucket_server_side_encryption_configuration`](s3_bucket_server_side_encryption_configuration.html) instead. +* `versioning` - (Optional, **Deprecated**) A configuration of the [S3 bucket versioning state](https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html). See [Versioning](#versioning) below for details. Terraform will only perform drift detection if a configuration value is provided. Use the resource [`aws_s3_bucket_versioning`](s3_bucket_versioning.html.markdown) instead. +* `website` - (Optional, **Deprecated**) A configuration of the [S3 bucket website](https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html). See [Website](#website) below for details. Terraform will only perform drift detection if a configuration value is provided. + Use the resource [`aws_s3_bucket_website_configuration`](s3_bucket_website_configuration.html.markdown) instead. * `tags` - (Optional) A map of tags to assign to the bucket. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +### CORS Rule + +~> **NOTE:** Currently, changes to the `cors_rule` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage changes of CORS rules to an S3 bucket, use the `aws_s3_bucket_cors_configuration` resource instead. If you use `cors_rule` on an `aws_s3_bucket`, Terraform will assume management over the full set of CORS rules for the S3 bucket, treating additional CORS rules as drift. For this reason, `cors_rule` cannot be mixed with the external `aws_s3_bucket_cors_configuration` resource for a given S3 bucket. + +The `cors_rule` configuration block supports the following arguments: + +* `allowed_headers` - (Optional) List of headers allowed. +* `allowed_methods` - (Required) One or more HTTP methods that you allow the origin to execute. Can be `GET`, `PUT`, `POST`, `DELETE` or `HEAD`. +* `allowed_origins` - (Required) One or more origins you want customers to be able to access the bucket from. +* `expose_headers` - (Optional) One or more headers in the response that you want customers to be able to access from their applications (for example, from a JavaScript `XMLHttpRequest` object). 
+* `max_age_seconds` - (Optional) Specifies the time in seconds that the browser can cache the response for a preflight request. + +### Grant + +~> **NOTE:** Currently, changes to the `grant` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage changes of ACL grants to an S3 bucket, use the `aws_s3_bucket_acl` resource instead. If you use `grant` on an `aws_s3_bucket`, Terraform will assume management over the full set of ACL grants for the S3 bucket, treating additional ACL grants as drift. For this reason, `grant` cannot be mixed with the external `aws_s3_bucket_acl` resource for a given S3 bucket. + +The `grant` configuration block supports the following arguments: + +* `id` - (Optional) Canonical user ID to grant for. Used only when `type` is `CanonicalUser`. +* `type` - (Required) Type of grantee to apply for. Valid values are `CanonicalUser` and `Group`. `AmazonCustomerByEmail` is not supported. +* `permissions` - (Required) List of permissions to apply for grantee. Valid values are `READ`, `WRITE`, `READ_ACP`, `WRITE_ACP`, `FULL_CONTROL`. +* `uri` - (Optional) URI address to grant for. Used only when `type` is `Group`. + +### Lifecycle Rule + +~> **NOTE:** Currently, changes to the `lifecycle_rule` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage changes of Lifecycle rules to an S3 bucket, use the `aws_s3_bucket_lifecycle_configuration` resource instead. If you use `lifecycle_rule` on an `aws_s3_bucket`, Terraform will assume management over the full set of Lifecycle rules for the S3 bucket, treating additional Lifecycle rules as drift. For this reason, `lifecycle_rule` cannot be mixed with the external `aws_s3_bucket_lifecycle_configuration` resource for a given S3 bucket. + +~> **NOTE:** At least one of `abort_incomplete_multipart_upload_days`, `expiration`, `transition`, `noncurrent_version_expiration`, `noncurrent_version_transition` must be specified. + +The `lifecycle_rule` configuration block supports the following arguments: + +* `id` - (Optional) Unique identifier for the rule. Must be less than or equal to 255 characters in length. +* `prefix` - (Optional) Object key prefix identifying one or more objects to which the rule applies. +* `tags` - (Optional) Specifies object tag keys and values. +* `enabled` - (Required) Specifies lifecycle rule status. +* `abort_incomplete_multipart_upload_days` - (Optional) Specifies the number of days after initiating a multipart upload when the multipart upload must be completed. +* `expiration` - (Optional) Specifies a period in the object's expiration. See [Expiration](#expiration) below for details. +* `transition` - (Optional) Specifies a period in the object's transitions. See [Transition](#transition) below for details. +* `noncurrent_version_expiration` - (Optional) Specifies when noncurrent object versions expire. See [Noncurrent Version Expiration](#noncurrent-version-expiration) below for details. +* `noncurrent_version_transition` - (Optional) Specifies when noncurrent object versions transition. See [Noncurrent Version Transition](#noncurrent-version-transition) below for details. + +### Expiration + +The `expiration` configuration block supports the following arguments: + +* `date` - (Optional) Specifies the date after which you want the corresponding action to take effect. +* `days` - (Optional) Specifies the number of days after object creation when the specific rule action takes effect.
+* `expired_object_delete_marker` - (Optional) On a versioned bucket (versioning-enabled or versioning-suspended bucket), you can add this element in the lifecycle configuration to direct Amazon S3 to delete expired object delete markers. This cannot be specified with Days or Date in a Lifecycle Expiration Policy. + +### Transition + +The `transition` configuration block supports the following arguments: + +* `date` - (Optional) Specifies the date after which you want the corresponding action to take effect. +* `days` - (Optional) Specifies the number of days after object creation when the specific rule action takes effect. +* `storage_class` - (Required) Specifies the Amazon S3 [storage class](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Transition.html#AmazonS3-Type-Transition-StorageClass) to which you want the object to transition. + +### Noncurrent Version Expiration + +The `noncurrent_version_expiration` configuration block supports the following arguments: + +* `days` - (Required) Specifies the number of days after which noncurrent object versions expire. + +### Noncurrent Version Transition + +The `noncurrent_version_transition` configuration block supports the following arguments: + +* `days` - (Required) Specifies the number of days after which noncurrent object versions transition. +* `storage_class` - (Required) Specifies the Amazon S3 [storage class](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Transition.html#AmazonS3-Type-Transition-StorageClass) to which you want the object to transition. + +### Logging + +~> **NOTE:** Currently, changes to the `logging` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage changes of logging parameters to an S3 bucket, use the `aws_s3_bucket_logging` resource instead. If you use `logging` on an `aws_s3_bucket`, Terraform will assume management over the full set of logging parameters for the S3 bucket, treating additional logging parameters as drift. For this reason, `logging` cannot be mixed with the external `aws_s3_bucket_logging` resource for a given S3 bucket. + +The `logging` configuration block supports the following arguments: + +* `target_bucket` - (Required) The name of the bucket that will receive the log objects. +* `target_prefix` - (Optional) A key prefix for log objects. + +### Object Lock Configuration + +~> **NOTE:** You can only **enable** S3 Object Lock for **new** buckets. If you need to **enable** S3 Object Lock for an **existing** bucket, please contact AWS Support. When you create a bucket with S3 Object Lock enabled, Amazon S3 automatically enables versioning for the bucket. Once you create a bucket with S3 Object Lock enabled, you can't disable Object Lock or suspend versioning for the bucket. -To configure the default retention rule of the Object Lock configuration, see the [`aws_s3_bucket_object_lock_configuration` resource](s3_bucket_object_lock_configuration.html.markdown) for configuration details. -The `object_lock_configuration` configuration block supports the following argument: +~> **NOTE:** Currently, changes to the `object_lock_configuration` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage changes of Object Lock settings to an S3 bucket, use the `aws_s3_bucket_object_lock_configuration` resource instead.
If you use `object_lock_configuration` on an `aws_s3_bucket`, Terraform will assume management over the full set of Object Lock configuration parameters for the S3 bucket, treating additional Object Lock configuration parameters as drift. For this reason, `object_lock_configuration` cannot be mixed with the external `aws_s3_bucket_object_lock_configuration` resource for a given S3 bucket. + +The `object_lock_configuration` configuration block supports the following arguments: * `object_lock_enabled` - (Optional, **Deprecated**) Indicates whether this bucket has an Object Lock configuration enabled. Valid value is `Enabled`. Use the top-level argument `object_lock_enabled` instead. +* `rule` - (Optional) The Object Lock rule in place for this bucket ([documented below](#rule)). + +#### Rule + +The `rule` configuration block supports the following argument: + +* `default_retention` - (Required) The default retention period that you want to apply to new objects placed in this bucket ([documented below](#default-retention)). + +#### Default Retention + +The `default_retention` configuration block supports the following arguments: + +~> **NOTE:** Either `days` or `years` must be specified, but not both. + +* `mode` - (Required) The default Object Lock retention mode you want to apply to new objects placed in this bucket. Valid values are `GOVERNANCE` and `COMPLIANCE`. +* `days` - (Optional) The number of days that you want to specify for the default retention period. +* `years` - (Optional) The number of years that you want to specify for the default retention period. + +### Replication Configuration + +~> **NOTE:** Currently, changes to the `replication_configuration` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage replication configuration changes to an S3 bucket, use the `aws_s3_bucket_replication_configuration` resource instead. If you use `replication_configuration` on an `aws_s3_bucket`, Terraform will assume management over the full replication configuration for the S3 bucket, treating additional replication configuration rules as drift. For this reason, `replication_configuration` cannot be mixed with the external `aws_s3_bucket_replication_configuration` resource for a given S3 bucket. + +The `replication_configuration` configuration block supports the following arguments: + +* `role` - (Required) The ARN of the IAM role for Amazon S3 to assume when replicating the objects. +* `rules` - (Required) Specifies the rules managing the replication ([documented below](#rules)). + +#### Rules + +The `rules` configuration block supports the following arguments: + +~> **NOTE:** Amazon S3's latest version of the replication configuration is V2, which includes the `filter` attribute for replication rules. +With the `filter` attribute, you can specify object filters based on the object key prefix, tags, or both to scope the objects that the rule applies to. +Replication configuration V1 supports filtering based on only the `prefix` attribute. For backwards compatibility, Amazon S3 continues to support the V1 configuration. + +* `delete_marker_replication_status` - (Optional) Whether delete markers are replicated. The only valid value is `Enabled`. To disable, omit this argument. This argument is only valid with V2 replication configurations (i.e., when `filter` is used). +* `destination` - (Required) Specifies the destination for the rule ([documented below](#destination)). 
+
 ### Object Lock Configuration
 
 ~> **NOTE:** You can only **enable** S3 Object Lock for **new** buckets. If you need to **enable** S3 Object Lock for an **existing** bucket, please contact AWS Support. When you create a bucket with S3 Object Lock enabled, Amazon S3 automatically enables versioning for the bucket. Once you create a bucket with S3 Object Lock enabled, you can't disable Object Lock or suspend versioning for the bucket.
 
-To configure the default retention rule of the Object Lock configuration, see the [`aws_s3_bucket_object_lock_configuration` resource](s3_bucket_object_lock_configuration.html.markdown) for configuration details.
-The `object_lock_configuration` configuration block supports the following argument:
+~> **NOTE:** Currently, changes to the `object_lock_configuration` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage changes of Object Lock settings to an S3 bucket, use the `aws_s3_bucket_object_lock_configuration` resource instead. If you use `object_lock_configuration` on an `aws_s3_bucket`, Terraform will assume management over the full set of Object Lock configuration parameters for the S3 bucket, treating additional Object Lock configuration parameters as drift. For this reason, `object_lock_configuration` cannot be mixed with the external `aws_s3_bucket_object_lock_configuration` resource for a given S3 bucket.
+
+The `object_lock_configuration` configuration block supports the following arguments:
+
+* `object_lock_enabled` - (Optional, **Deprecated**) Indicates whether this bucket has an Object Lock configuration enabled. The only valid value is `Enabled`. Use the top-level argument `object_lock_enabled` instead.
+* `rule` - (Optional) The Object Lock rule in place for this bucket ([documented below](#rule)).
+
+#### Rule
+
+The `rule` configuration block supports the following argument:
+
+* `default_retention` - (Required) The default retention period that you want to apply to new objects placed in this bucket ([documented below](#default-retention)).
+
+#### Default Retention
+
+The `default_retention` configuration block supports the following arguments:
+
+~> **NOTE:** Either `days` or `years` must be specified, but not both.
+
+* `mode` - (Required) The default Object Lock retention mode you want to apply to new objects placed in this bucket. Valid values are `GOVERNANCE` and `COMPLIANCE`.
+* `days` - (Optional) The number of days that you want to specify for the default retention period.
+* `years` - (Optional) The number of years that you want to specify for the default retention period.
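+
+For example, an illustrative Object Lock configuration for a new bucket (the bucket name and retention period are placeholders):
+
+```terraform
+resource "aws_s3_bucket" "example" {
+  bucket              = "my-example-bucket" # placeholder name
+  object_lock_enabled = true
+
+  object_lock_configuration {
+    rule {
+      default_retention {
+        mode = "COMPLIANCE"
+        days = 3
+      }
+    }
+  }
+}
+```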
+
+### Replication Configuration
+
+~> **NOTE:** Currently, changes to the `replication_configuration` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage replication configuration changes to an S3 bucket, use the `aws_s3_bucket_replication_configuration` resource instead. If you use `replication_configuration` on an `aws_s3_bucket`, Terraform will assume management over the full replication configuration for the S3 bucket, treating additional replication configuration rules as drift. For this reason, `replication_configuration` cannot be mixed with the external `aws_s3_bucket_replication_configuration` resource for a given S3 bucket.
+
+The `replication_configuration` configuration block supports the following arguments:
+
+* `role` - (Required) The ARN of the IAM role for Amazon S3 to assume when replicating the objects.
+* `rules` - (Required) Specifies the rules managing the replication ([documented below](#rules)).
+
+#### Rules
+
+The `rules` configuration block supports the following arguments:
+
+~> **NOTE:** Amazon S3's latest version of the replication configuration is V2, which includes the `filter` attribute for replication rules.
+With the `filter` attribute, you can specify object filters based on the object key prefix, tags, or both to scope the objects that the rule applies to.
+Replication configuration V1 supports filtering based only on the `prefix` attribute. For backwards compatibility, Amazon S3 continues to support the V1 configuration.
+
+* `delete_marker_replication_status` - (Optional) Whether delete markers are replicated. The only valid value is `Enabled`. To disable, omit this argument. This argument is only valid with V2 replication configurations (i.e., when `filter` is used).
+* `destination` - (Required) Specifies the destination for the rule ([documented below](#destination)).
+* `filter` - (Optional, Conflicts with `prefix`) Filter that identifies the subset of objects to which the replication rule applies ([documented below](#filter)).
+* `id` - (Optional) Unique identifier for the rule. Must be less than or equal to 255 characters in length.
+* `prefix` - (Optional, Conflicts with `filter`) Object keyname prefix identifying one or more objects to which the rule applies. Must be less than or equal to 1024 characters in length.
+* `priority` - (Optional) The priority associated with the rule. Priority should only be set if `filter` is configured. If not provided, defaults to `0`. Priority must be unique across multiple rules.
+* `source_selection_criteria` - (Optional) Specifies special object selection criteria ([documented below](#source-selection-criteria)).
+* `status` - (Required) The status of the rule. Either `Enabled` or `Disabled`. The rule is ignored if its status is not `Enabled`.
+
+#### Filter
+
+The `filter` configuration block supports the following arguments:
+
+* `prefix` - (Optional) Object keyname prefix that identifies the subset of objects to which the rule applies. Must be less than or equal to 1024 characters in length.
+* `tags` - (Optional) A map of tags that identifies the subset of objects to which the rule applies.
+  The rule applies only to objects having all of the tags in its tagset.
+
+#### Destination
+
+~> **NOTE:** Replication to multiple destination buckets requires that `priority` is specified in the `rules` object. If the corresponding rule requires no filter, an empty configuration block `filter {}` must be specified.
+
+The `destination` configuration block supports the following arguments:
+
+* `bucket` - (Required) The ARN of the S3 bucket where you want Amazon S3 to store replicas of the objects identified by the rule.
+* `storage_class` - (Optional) The [storage class](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Destination.html#AmazonS3-Type-Destination-StorageClass) used to store the object. By default, Amazon S3 uses the storage class of the source object to create the object replica.
+* `replica_kms_key_id` - (Optional) Destination KMS encryption key ARN for SSE-KMS replication. Must be used in conjunction with the
+  `sse_kms_encrypted_objects` source selection criteria.
+* `access_control_translation` - (Optional) Specifies the overrides to use for object owners on replication. Must be used in conjunction with the `account_id` owner override configuration.
+* `account_id` - (Optional) The Account ID to use for overriding the object owner on replication. Must be used in conjunction with the `access_control_translation` override configuration.
+* `replication_time` - (Optional) Enables S3 Replication Time Control (S3 RTC) ([documented below](#replication-time)).
+* `metrics` - (Optional) Enables replication metrics (required for S3 RTC) ([documented below](#metrics)).
+
+#### Replication Time
+
+The `replication_time` configuration block supports the following arguments:
+
+* `status` - (Optional) The status of RTC. Either `Enabled` or `Disabled`.
+* `minutes` - (Optional) Threshold within which objects are to be replicated. The only valid value is `15`.
+
+#### Metrics
+
+The `metrics` configuration block supports the following arguments:
+
+* `status` - (Optional) The status of replication metrics. Either `Enabled` or `Disabled`.
+* `minutes` - (Optional) Threshold within which objects are to be replicated. The only valid value is `15`.
+
+#### Source Selection Criteria
+
+The `source_selection_criteria` configuration block supports the following argument:
+
+* `sse_kms_encrypted_objects` - (Optional) Match SSE-KMS encrypted objects ([documented below](#sse-kms-encrypted-objects)). If specified, `replica_kms_key_id`
+  in `destination` must be specified as well.
+
+#### SSE KMS Encrypted Objects
+
+The `sse_kms_encrypted_objects` configuration block supports the following argument:
+
+* `enabled` - (Required) Boolean that indicates whether this criterion is enabled.
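+
+For example, an illustrative replication configuration (bucket names are placeholders, and `aws_iam_role.replication` is assumed to be an IAM role, defined elsewhere, that grants S3 the necessary replication permissions; cross-region replication additionally requires a second provider configuration for the destination bucket):
+
+```terraform
+resource "aws_s3_bucket" "destination" {
+  bucket = "my-example-destination-bucket" # placeholder name
+
+  versioning {
+    enabled = true # replication requires versioning on both buckets
+  }
+}
+
+resource "aws_s3_bucket" "source" {
+  bucket = "my-example-source-bucket" # placeholder name
+
+  versioning {
+    enabled = true
+  }
+
+  replication_configuration {
+    role = aws_iam_role.replication.arn # assumed IAM role, defined elsewhere
+
+    rules {
+      id       = "replicate-logs"
+      status   = "Enabled"
+      priority = 0
+
+      filter {
+        prefix = "log/"
+      }
+
+      destination {
+        bucket        = aws_s3_bucket.destination.arn
+        storage_class = "STANDARD_IA"
+      }
+    }
+  }
+}
+```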
+
+### Server Side Encryption Configuration
+
+~> **NOTE:** Currently, changes to the `server_side_encryption_configuration` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage changes in encryption of an S3 bucket, use the `aws_s3_bucket_server_side_encryption_configuration` resource instead. If you use `server_side_encryption_configuration` on an `aws_s3_bucket`, Terraform will assume management over the encryption configuration for the S3 bucket, treating additional encryption changes as drift. For this reason, `server_side_encryption_configuration` cannot be mixed with the external `aws_s3_bucket_server_side_encryption_configuration` resource for a given S3 bucket.
+
+The `server_side_encryption_configuration` configuration block supports the following argument:
+
+* `rule` - (Required) A single object describing the server-side encryption by default configuration. (documented below)
+
+The `rule` configuration block supports the following arguments:
+
+* `apply_server_side_encryption_by_default` - (Required) A single object for setting server-side encryption by default. (documented below)
+* `bucket_key_enabled` - (Optional) Whether or not to use [Amazon S3 Bucket Keys](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-key.html) for SSE-KMS.
+
+The `apply_server_side_encryption_by_default` configuration block supports the following arguments:
+
+* `sse_algorithm` - (Required) The server-side encryption algorithm to use. Valid values are `AES256` and `aws:kms`.
+* `kms_master_key_id` - (Optional) The AWS KMS master key ID used for the SSE-KMS encryption. This can only be used when you set the value of `sse_algorithm` as `aws:kms`. The default `aws/s3` AWS KMS master key is used if this element is absent while the `sse_algorithm` is `aws:kms`.
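+
+For example, an illustrative configuration that encrypts new objects with a customer-managed KMS key (the bucket name is a placeholder):
+
+```terraform
+resource "aws_kms_key" "example" {
+  description             = "Key used to encrypt bucket objects"
+  deletion_window_in_days = 10
+}
+
+resource "aws_s3_bucket" "example" {
+  bucket = "my-example-bucket" # placeholder name
+
+  server_side_encryption_configuration {
+    rule {
+      apply_server_side_encryption_by_default {
+        sse_algorithm     = "aws:kms"
+        kms_master_key_id = aws_kms_key.example.arn
+      }
+      bucket_key_enabled = true
+    }
+  }
+}
+```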
+
+### Versioning
+
+~> **NOTE:** Currently, changes to the `versioning` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage changes of versioning state to an S3 bucket, use the `aws_s3_bucket_versioning` resource instead. If you use `versioning` on an `aws_s3_bucket`, Terraform will assume management over the versioning state of the S3 bucket, treating additional versioning state changes as drift. For this reason, `versioning` cannot be mixed with the external `aws_s3_bucket_versioning` resource for a given S3 bucket.
+
+The `versioning` configuration block supports the following arguments:
+
+* `enabled` - (Optional) Enable versioning. Once you version-enable a bucket, it can never return to an unversioned state. You can, however, suspend versioning on that bucket.
+* `mfa_delete` - (Optional) Enable MFA delete for either `Change the versioning state of your bucket` or `Permanently delete an object version`. Default is `false`. This argument cannot be used to toggle the setting; it is available so that managed buckets can reflect the state in AWS.
+
+### Website
+
+~> **NOTE:** Currently, changes to the `website` configuration of _existing_ resources cannot be automatically detected by Terraform. To manage changes to the website configuration of an S3 bucket, use the `aws_s3_bucket_website_configuration` resource instead. If you use `website` on an `aws_s3_bucket`, Terraform will assume management over the configuration of the website of the S3 bucket, treating additional website configuration changes as drift. For this reason, `website` cannot be mixed with the external `aws_s3_bucket_website_configuration` resource for a given S3 bucket.
+
+The `website` configuration block supports the following arguments:
+
+* `index_document` - (Required, unless using `redirect_all_requests_to`) Amazon S3 returns this index document when requests are made to the root domain or any of the subfolders.
+* `error_document` - (Optional) An absolute path to the document to return in case of a 4XX error.
+* `redirect_all_requests_to` - (Optional) A hostname to redirect all website requests for this bucket to. The hostname can optionally be prefixed with a protocol (`http://` or `https://`) to use when redirecting requests. The default is the protocol that is used in the original request.
+* `routing_rules` - (Optional) A JSON array containing [routing rules](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-websiteconfiguration-routingrules.html)
+  describing redirect behavior and when redirects are applied.
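+
+For example, an illustrative static website configuration (the bucket name and document keys are placeholders):
+
+```terraform
+resource "aws_s3_bucket" "example" {
+  bucket = "my-example-website-bucket" # placeholder name
+  acl    = "public-read"               # website buckets are typically public; review before use
+
+  website {
+    index_document = "index.html"
+    error_document = "error.html"
+  }
+}
+```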
 
 ## Attributes Reference
 
 In addition to all arguments above, the following attributes are exported:
 
 * `id` - The name of the bucket.
-* `acceleration_status` - (Optional) The accelerate configuration status of the bucket. Not available in `cn-north-1` or `us-gov-west-1`.
-* `acl` - The [canned ACL](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl) applied to the bucket.
 * `arn` - The ARN of the bucket. Will be of format `arn:aws:s3:::bucketname`.
 * `bucket_domain_name` - The bucket domain name. Will be of format `bucketname.s3.amazonaws.com`.
 * `bucket_regional_domain_name` - The bucket region-specific domain name. The bucket domain name including the region name, please refer [here](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) for format. Note: The AWS CloudFront allows specifying S3 region-specific endpoint when creating S3 origin, it will prevent [redirect issues](https://forums.aws.amazon.com/thread.jspa?threadID=216814) from CloudFront to S3 Origin URL.
-* `cors_rule` - Set of origins and methods ([cross-origin](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html) access allowed).
-  * `allowed_headers` - Set of headers that are specified in the Access-Control-Request-Headers header.
-  * `allowed_methods` - Set of HTTP methods that the origin is allowed to execute.
-  * `allowed_origins` - Set of origins customers are able to access the bucket from.
-  * `expose_headers` - Set of headers in the response that customers are able to access from their applications.
-  * `max_age_seconds` The time in seconds that browser can cache the response for a preflight request.
-* `grant` - The set of [ACL policy grants](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#sample-acl).
-  * `id` - Canonical user id of the grantee.
-  * `type` - Type of grantee.
-  * `permissions` - List of permissions given to the grantee.
-  * `uri` - URI of the grantee group.
 * `hosted_zone_id` - The [Route 53 Hosted Zone ID](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_website_region_endpoints) for this bucket's region.
-* `lifecycle_rule` - A configuration of [object lifecycle management](http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html).
-  * `id` - Unique identifier for the rule.
-  * `prefix` - Object key prefix identifying one or more objects to which the rule applies.
-  * `tags` - Object tags key and value.
-  * `enabled` - Lifecycle rule status.
-  * `abort_incomplete_multipart_upload_days` - Number of days after initiating a multipart upload when the multipart upload must be completed.
-  * `expiration` - The expiration for the lifecycle of the object in the form of date, days and, whether the object has a delete marker.
-    * `date` - Indicates at what date the object is to be moved or deleted.
-    * `days` - Indicates the lifetime, in days, of the objects that are subject to the rule. The value must be a non-zero positive integer.
-    * `expired_object_delete_marker` - Indicates whether Amazon S3 will remove a delete marker with no noncurrent versions.
-  * `transition` - Specifies when an Amazon S3 object transitions to a specified storage class.
-    * `date` - The date after which you want the corresponding action to take effect.
-    * `days` - The number of days after object creation when the specific rule action takes effect.
-    * `storage_class` - The Amazon S3 [storage class](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Transition.html#AmazonS3-Type-Transition-StorageClass) an object will transition to.
-  * `noncurrent_version_expiration` - When noncurrent object versions expire.
-    * `days` - The number of days noncurrent object versions expire.
-  * `noncurrent_version_transition` - When noncurrent object versions transition.
-    * `days` - The number of days noncurrent object versions transition.
-    * `storage_class` - The Amazon S3 [storage class](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Transition.html#AmazonS3-Type-Transition-StorageClass) an object will transition to.
-* `logging` - The [logging parameters](https://docs.aws.amazon.com/AmazonS3/latest/UG/ManagingBucketLogging.html) for the bucket.
-  * `target_bucket` - The name of the bucket that receives the log objects.
-  * `target_prefix` - The prefix for all log object keys/
-* `object_lock_configuration` - The [S3 object locking](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html) configuration.
-  * `rule` - The Object Lock rule in place for this bucket.
-    * `default_retention` - The default retention period applied to new objects placed in this bucket.
-      * `mode` - The default Object Lock retention mode applied to new objects placed in this bucket.
-      * `days` - The number of days specified for the default retention period.
-      * `years` - The number of years specified for the default retention period.
-* `policy` - The [bucket policy](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html) JSON document.
 * `region` - The AWS region this bucket resides in.
-* `replication_configuration` - The [replication configuration](http://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html).
-  * `role` - The ARN of the IAM role for Amazon S3 assumed when replicating the objects.
-  * `rules` - The rules managing the replication.
-    * `delete_marker_replication_status` - Whether delete markers are replicated.
-    * `destination` - The destination for the rule.
-      * `access_control_translation` - The overrides to use for object owners on replication.
-        * `owner` - The override value for the owner on replicated objects.
-      * `account_id` - The Account ID to use for overriding the object owner on replication.
-      * `bucket` - The ARN of the S3 bucket where Amazon S3 stores replicas of the object identified by the rule.
-      * `metrics` - Replication metrics.
-        * `status` - The status of replication metrics.
-        * `minutes` - Threshold within which objects are replicated.
-      * `storage_class` - The [storage class](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Destination.html#AmazonS3-Type-Destination-StorageClass) used to store the object.
-      * `replica_kms_key_id` - Destination KMS encryption key ARN for SSE-KMS replication.
-      * `replication_time` - S3 Replication Time Control (S3 RTC).
-        * `status` - The status of RTC.
-        * `minutes` - Threshold within which objects are to be replicated.
-    * `filter` - Filter that identifies subset of objects to which the replication rule applies.
-      * `prefix` - Object keyname prefix that identifies subset of objects to which the rule applies.
-      * `tags` - Map of tags that identifies subset of objects to which the rule applies.
-    * `id` - Unique identifier for the rule.
-    * `prefix` - Object keyname prefix identifying one or more objects to which the rule applies
-    * `priority` - The priority associated with the rule.
-    * `source_selection_criteria` - The special object selection criteria.
-      * `sse_kms_encrypted_objects` - Matched SSE-KMS encrypted objects.
-        * `enabled` - Whether this criteria is enabled.
-    * `status` - The status of the rule.
-* `request_payer` - Either `BucketOwner` or `Requester` that pays for the download and request fees.
-* `server_side_encryption_configuration` - The [server-side encryption configuration](http://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html).
-  * `rule` - (required) Information about a particular server-side encryption configuration rule.
-    * `apply_server_side_encryption_by_default` - The default server-side encryption applied to new objects in the bucket.
-      * `kms_master_key_id` - (optional) The AWS KMS master key ID used for the SSE-KMS encryption.
-      * `sse_algorithm` - (required) The server-side encryption algorithm used.
-    * `bucket_key_enabled` - (Optional) Whether an [Amazon S3 Bucket Key](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-key.html) is used for SSE-KMS.
 * `tags_all` - A map of tags assigned to the resource, including those inherited from the provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block).
-* `versioning` - The [versioning](https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) state of the bucket.
-  * `enabled` - Whether versioning is enabled.
-  * `mfa_delete` - Whether MFA delete is enabled.
-* `website` - The website configuration, if configured.
-  * `error_document` - The name of the error document for the website.
-  * `index_document` - The name of the index document for the website.
-  * `redirect_all_requests_to` - The redirect behavior for every request to this bucket's website endpoint.
-  * `routing_rules` - (Optional) The rules that define when a redirect is applied and the redirect behavior.
 * `website_endpoint` - The website endpoint, if the bucket is configured with a website. If not, this will be an empty string.
 * `website_domain` - The domain of the website endpoint, if the bucket is configured with a website. If not, this will be an empty string.
This is used to create Route 53 alias records. diff --git a/website/docs/r/ses_identity_notification_topic.markdown b/website/docs/r/ses_identity_notification_topic.markdown index 93dcfddbbd8e..a28e6150f82a 100644 --- a/website/docs/r/ses_identity_notification_topic.markdown +++ b/website/docs/r/ses_identity_notification_topic.markdown @@ -25,10 +25,10 @@ resource "aws_ses_identity_notification_topic" "test" { The following arguments are supported: -* `topic_arn` - (Optional) The Amazon Resource Name (ARN) of the Amazon SNS topic. Can be set to "" (an empty string) to disable publishing. -* `notification_type` - (Required) The type of notifications that will be published to the specified Amazon SNS topic. Valid Values: *Bounce*, *Complaint* or *Delivery*. +* `topic_arn` - (Optional) The Amazon Resource Name (ARN) of the Amazon SNS topic. Can be set to `""` (an empty string) to disable publishing. +* `notification_type` - (Required) The type of notifications that will be published to the specified Amazon SNS topic. Valid Values: `Bounce`, `Complaint` or `Delivery`. * `identity` - (Required) The identity for which the Amazon SNS topic will be set. You can specify an identity by using its name or by using its Amazon Resource Name (ARN). -* `include_original_headers` - (Optional) Whether SES should include original email headers in SNS notifications of this type. *false* by default. +* `include_original_headers` - (Optional) Whether SES should include original email headers in SNS notifications of this type. `false` by default. ## Attributes Reference @@ -36,7 +36,7 @@ No additional attributes are exported. ## Import -Identity Notification Topics can be imported using ID of the record. The ID is made up as IDENTITY|TYPE where IDENTITY is the SES Identity and TYPE is the Notification Type. +Identity Notification Topics can be imported using the ID of the record. The ID is made up as `IDENTITY|TYPE` where `IDENTITY` is the SES Identity and `TYPE` is the Notification Type. ``` $ terraform import aws_ses_identity_notification_topic.test 'example.com|Bounce' diff --git a/website/docs/r/ses_receipt_rule.html.markdown b/website/docs/r/ses_receipt_rule.html.markdown index 95f31d0e10b9..f1f1a994f70e 100644 --- a/website/docs/r/ses_receipt_rule.html.markdown +++ b/website/docs/r/ses_receipt_rule.html.markdown @@ -91,7 +91,7 @@ SNS actions support the following: Stop actions support the following: -* `scope` - (Required) The scope to apply +* `scope` - (Required) The scope to apply. The only acceptable value is `RuleSet`. * `topic_arn` - (Optional) The ARN of an SNS topic to notify * `position` - (Required) The position of the action in the receipt rule diff --git a/website/docs/r/storagegateway_gateway.html.markdown b/website/docs/r/storagegateway_gateway.html.markdown index 86b87ee80c49..40c2fdbc9696 100644 --- a/website/docs/r/storagegateway_gateway.html.markdown +++ b/website/docs/r/storagegateway_gateway.html.markdown @@ -113,6 +113,7 @@ The following arguments are supported: * `gateway_type` - (Optional) Type of the gateway. The default value is `STORED`. Valid values: `CACHED`, `FILE_FSX_SMB`, `FILE_S3`, `STORED`, `VTL`. * `gateway_vpc_endpoint` - (Optional) VPC endpoint address to be used when activating your gateway. This should be used when your instance is in a private subnet. Requires HTTP access from client computer running terraform. 
More info on what ports are required by your VPC Endpoint Security group in [Activating a Gateway in a Virtual Private Cloud](https://docs.aws.amazon.com/storagegateway/latest/userguide/gateway-private-link.html).
 * `cloudwatch_log_group_arn` - (Optional) The Amazon Resource Name (ARN) of the Amazon CloudWatch log group to use to monitor and log events in the gateway.
+* `maintenance_start_time` - (Optional) The gateway's weekly maintenance start time information, including day and time of the week. The maintenance time is the time in your gateway's time zone. More details below.
 * `medium_changer_type` - (Optional) Type of medium changer to use for tape gateway. Terraform cannot detect drift of this argument. Valid values: `STK-L700`, `AWS-Gateway-VTL`, `IBM-03584L32-0402`.
 * `smb_active_directory_settings` - (Optional) Nested argument with Active Directory domain join information for Server Message Block (SMB) file shares. Only valid for `FILE_S3` and `FILE_FSX_SMB` gateway types. Must be set before creating `ActiveDirectory` authentication SMB file shares. More details below.
 * `smb_guest_password` - (Optional) Guest password for Server Message Block (SMB) file shares. Only valid for `FILE_S3` and `FILE_FSX_SMB` gateway types. Must be set before creating `GuestAccess` authentication SMB file shares. Terraform can only detect drift of the existence of a guest password, not its actual value from the gateway. Terraform can, however, update the password by changing the argument.
@@ -121,6 +122,13 @@ The following arguments are supported:
 * `tape_drive_type` - (Optional) Type of tape drive to use for tape gateway. Terraform cannot detect drift of this argument. Valid values: `IBM-ULT3580-TD5`.
 * `tags` - (Optional) Key-value map of resource tags. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
 
+### maintenance_start_time
+
+* `day_of_month` - (Optional) The day of the month component of the maintenance start time represented as an ordinal number from 1 to 28, where 1 represents the first day of the month and 28 represents the last day of the month.
+* `day_of_week` - (Optional) The day of the week component of the maintenance start time represented as an ordinal number from 0 to 6, where 0 represents Sunday and 6 represents Saturday.
+* `hour_of_day` - (Required) The hour component of the maintenance start time represented as _hh_, where _hh_ is the hour (00 to 23). The hour of the day is in the time zone of the gateway.
+* `minute_of_hour` - (Required) The minute component of the maintenance start time represented as _mm_, where _mm_ is the minute (00 to 59). The minute of the hour is in the time zone of the gateway.
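+
+For example, an illustrative weekly maintenance window (the activation and network arguments are placeholders; day and time values follow the formats described above):
+
+```terraform
+resource "aws_storagegateway_gateway" "example" {
+  gateway_ip_address = "1.2.3.4" # placeholder address
+  gateway_name       = "example"
+  gateway_timezone   = "GMT"
+  gateway_type       = "FILE_S3"
+
+  maintenance_start_time {
+    day_of_week    = "2" # Tuesday (0 represents Sunday)
+    hour_of_day    = "03"
+    minute_of_hour = "30"
+  }
+}
+```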
+
 ### smb_active_directory_settings
 
 Information to join the gateway to an Active Directory domain for Server Message Block (SMB) file shares.
 
diff --git a/website/docs/r/storagegateway_nfs_file_share.html.markdown b/website/docs/r/storagegateway_nfs_file_share.html.markdown
index 5e9cb2d474f0..5e075f08da33 100644
--- a/website/docs/r/storagegateway_nfs_file_share.html.markdown
+++ b/website/docs/r/storagegateway_nfs_file_share.html.markdown
@@ -28,6 +28,8 @@ The following arguments are supported:
 * `client_list` - (Required) The list of clients that are allowed to access the file gateway. The list must contain either valid IP addresses or valid CIDR blocks. Set to `["0.0.0.0/0"]` to not limit access. Minimum 1 item. Maximum 100 items.
 * `gateway_arn` - (Required) Amazon Resource Name (ARN) of the file gateway.
 * `location_arn` - (Required) The ARN of the backed storage used for storing file data.
+* `vpc_endpoint_dns_name` - (Optional) The DNS name of the VPC endpoint for S3 PrivateLink.
+* `bucket_region` - (Optional) The region of the S3 bucket used by the file share. Required when specifying `vpc_endpoint_dns_name`. See the example below.
 * `role_arn` - (Required) The ARN of the AWS Identity and Access Management (IAM) role that a file gateway assumes when it accesses the underlying storage.
 * `audit_destination_arn` - (Optional) The Amazon Resource Name (ARN) of the storage used for audit logs.
 * `default_storage_class` - (Optional) The default [storage class](https://docs.aws.amazon.com/storagegateway/latest/APIReference/API_CreateNFSFileShare.html#StorageGateway-CreateNFSFileShare-request-DefaultStorageClass) for objects put into an Amazon S3 bucket by the file gateway. Defaults to `S3_STANDARD`.
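+
+For example, an illustrative NFS file share using an S3 VPC endpoint (the ARNs, DNS name, and region are placeholders; the referenced gateway, bucket, and IAM role are assumed to be defined elsewhere):
+
+```terraform
+resource "aws_storagegateway_nfs_file_share" "example" {
+  client_list           = ["10.0.0.0/16"]
+  gateway_arn           = aws_storagegateway_gateway.example.arn # assumed gateway
+  location_arn          = aws_s3_bucket.example.arn              # assumed bucket
+  role_arn              = aws_iam_role.example.arn               # assumed IAM role
+  vpc_endpoint_dns_name = "vpce-0123456789abcdef0-abc123.s3.us-east-1.vpce.amazonaws.com" # placeholder
+  bucket_region         = "us-east-1" # must match the bucket's region
+}
+```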
diff --git a/website/docs/r/vpc.html.markdown b/website/docs/r/vpc.html.markdown
index a77d407607e0..6f6bca2e9066 100644
--- a/website/docs/r/vpc.html.markdown
+++ b/website/docs/r/vpc.html.markdown
@@ -69,7 +69,7 @@ resource "aws_vpc" "test" {
 The following arguments are supported:
 
 * `cidr_block` - (Optional) The IPv4 CIDR block for the VPC. CIDR can be explicitly set or it can be derived from IPAM using `ipv4_netmask_length`.
-* `instance_tenancy` - (Optional) A tenancy option for instances launched into the VPC. Default is `default`, which makes your instances shared on the host. Using either of the other options (`dedicated` or `host`) costs at least $2/hr.
+* `instance_tenancy` - (Optional) A tenancy option for instances launched into the VPC. Default is `default`, which ensures that EC2 instances launched in this VPC use the EC2 instance tenancy attribute specified when the EC2 instance is launched. The only other option is `dedicated`, which ensures that EC2 instances launched in this VPC are run on dedicated tenancy instances regardless of the tenancy attribute specified at launch. This has a dedicated per region fee of $2 per hour, plus an hourly per instance usage fee.
 * `ipv4_ipam_pool_id` - (Optional) The ID of an IPv4 IPAM pool you want to use for allocating this VPC's CIDR. IPAM is a VPC feature that you can use to automate your IP address management workflows including assigning, tracking, troubleshooting, and auditing IP addresses across AWS Regions and accounts. Using IPAM you can monitor IP address usage throughout your AWS Organization.
 * `ipv4_netmask_length` - (Optional) The netmask length of the IPv4 CIDR you want to allocate to this VPC. Requires specifying an `ipv4_ipam_pool_id`.
 * `ipv6_cidr_block` - (Optional) IPv6 CIDR block to request from an IPAM Pool. Can be set explicitly or derived from IPAM using `ipv6_netmask_length`.
diff --git a/website/docs/r/vpc_endpoint_service.html.markdown b/website/docs/r/vpc_endpoint_service.html.markdown
index c0466c3003c9..0fa6303fe5cf 100644
--- a/website/docs/r/vpc_endpoint_service.html.markdown
+++ b/website/docs/r/vpc_endpoint_service.html.markdown
@@ -53,9 +53,9 @@ The following arguments are supported:
 
 In addition to all arguments above, the following attributes are exported:
 
 * `id` - The ID of the VPC endpoint service.
-* `availability_zones` - The Availability Zones in which the service is available.
+* `availability_zones` - A set of Availability Zones in which the service is available.
 * `arn` - The Amazon Resource Name (ARN) of the VPC endpoint service.
-* `base_endpoint_dns_names` - The DNS names for the service.
+* `base_endpoint_dns_names` - A set of DNS names for the service.
 * `manages_vpc_endpoints` - Whether or not the service manages its VPC endpoints - `true` or `false`.
 * `service_name` - The service name.
 * `service_type` - The service type, `Gateway` or `Interface`.
diff --git a/website/docs/r/vpc_ipam.html.markdown b/website/docs/r/vpc_ipam.html.markdown
index e73be7d8f379..b755644434d3 100644
--- a/website/docs/r/vpc_ipam.html.markdown
+++ b/website/docs/r/vpc_ipam.html.markdown
@@ -55,6 +55,7 @@ The following arguments are supported:
 * `description` - (Optional) A description for the IPAM.
 * `operating_regions` - (Required) Determines which locales can be chosen when you create pools. Locale is the Region where you want to make an IPAM pool available for allocations. You can only create pools with locales that match the operating Regions of the IPAM. You can only create VPCs from a pool whose locale matches the VPC's Region. You specify a region using the [region_name](#operating_regions) parameter. You **must** set your provider block region as an operating_region.
 * `tags` - (Optional) A map of tags to assign to the resource. If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level.
+* `cascade` - (Optional) Enables you to quickly delete an IPAM, private scopes, pools in private scopes, and any allocations in the pools in private scopes.
 
 ### operating_regions
 
diff --git a/website/docs/r/vpn_connection.html.markdown b/website/docs/r/vpn_connection.html.markdown
index 7c5d10b42e76..670995eadcbb 100644
--- a/website/docs/r/vpn_connection.html.markdown
+++ b/website/docs/r/vpn_connection.html.markdown
@@ -127,6 +127,8 @@ In addition to all arguments above, the following attributes are exported:
 
 * `arn` - Amazon Resource Name (ARN) of the VPN Connection.
 * `id` - The amazon-assigned ID of the VPN connection.
+* `core_network_arn` - The ARN of the core network.
+* `core_network_attachment_arn` - The ARN of the core network attachment.
 * `customer_gateway_configuration` - The configuration information for the VPN connection's customer gateway (in the native XML format).
 * `customer_gateway_id` - The ID of the customer gateway to which the connection is attached.
 * `routes` - The static routes associated with the VPN connection. Detailed below.
diff --git a/website/docs/r/xray_group.html.markdown b/website/docs/r/xray_group.html.markdown
index f5c3e7730bbb..15c09dceb6a7 100644
--- a/website/docs/r/xray_group.html.markdown
+++ b/website/docs/r/xray_group.html.markdown
@@ -16,6 +16,11 @@ Creates and manages an AWS XRay Group.
 resource "aws_xray_group" "example" {
   group_name        = "example"
   filter_expression = "responsetime > 5"
+
+  insights_configuration {
+    insights_enabled      = true
+    notifications_enabled = true
+  }
 }
 ```
 
@@ -23,8 +28,16 @@ resource "aws_xray_group" "example" {
 * `group_name` - (Required) The name of the group.
 * `filter_expression` - (Required) The filter expression defining criteria by which to group traces. More info can be found in the official [docs](https://docs.aws.amazon.com/xray/latest/devguide/xray-console-filters.html).
+* `insights_configuration` - (Optional) Configuration options for enabling insights.
 * `tags` - (Optional) Key-value mapping of resource tags.
If configured with a provider [`default_tags` configuration block](https://www.terraform.io/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. +### Nested fields + +#### `insights_configuration` + +* `insights_enabled` - (Required) Specifies whether insights are enabled. +* `notifications_enabled` - (Optional) Specifies whether insight notifications are enabled. + ## Attributes Reference In addition to all arguments above, the following attributes are exported: