From 5ba7a2bb8ebdfb81c74c74a8038dbbac7b6b9a48 Mon Sep 17 00:00:00 2001 From: "docs-sourcer[bot]" <99042413+docs-sourcer[bot]@users.noreply.github.com> Date: Thu, 9 Feb 2023 15:13:58 +0000 Subject: [PATCH 1/4] Updated with the latest changes from the knowledge base discussions. --- docs/discussions/knowledge-base/655.mdx | 27 +++++++++++++++++++++++++ 1 file changed, 27 insertions(+) create mode 100644 docs/discussions/knowledge-base/655.mdx diff --git a/docs/discussions/knowledge-base/655.mdx b/docs/discussions/knowledge-base/655.mdx new file mode 100644 index 0000000000..60ecf14d10 --- /dev/null +++ b/docs/discussions/knowledge-base/655.mdx @@ -0,0 +1,27 @@ +--- +hide_table_of_contents: true +hide_title: true +custom_edit_url: null +--- + +import CenterLayout from "/src/components/CenterLayout" +import GitHub from "/src/components/GitHub" + +
+ + +We are having an issue deploying an RDS instance in our RA. When using the for-production example in the service catalog, we are able to terragrunt apply locally, but when committing the changes to the repo, GitHub Actions returns an error: access denied because no identity-based policy allows the rds:DescribeDBSubnetGroups action.
We have added RDS permissions to deploy_permissions.yml and read_only_permissions.yml:

RDSDeployAccess:
  effect: "Allow"
  actions:
But still, we get this error:
Error: AccessDenied: User: arn:aws:sts::xxxxxxxx:assumed-role/ecs-deploy-runner-terraform-planner/xxxxxx is not authorized to perform: rds:DescribeDBSubnetGroups on resource: arn:aws:rds:us-east-1:xxxxxxxxx:subgrp:rds-xxxxx because no identity-based policy allows the rds:DescribeDBSubnetGroups action
	status code: 403, request id: f179d814-1dd7-4f5e-97db-c136883ae1db

  with module.database.aws_db_subnet_group.db[0],
  on .terraform/modules/database/modules/rds/main.tf line 397, in resource "aws_db_subnet_group" "db":
 397: resource "aws_db_subnet_group" "db" {
We copied these two files from the service catalog.

Are we overlooking something obvious?
Hi @drafie, I wonder if GitHub has permission to assume the role that has those permissions. When setting up the account baseline with the landingzone module, did you enable var.enable_github_actions_access?
If you haven't yet, here are some variables you might find useful from
https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/master/modules/landingzone/account-baseline-app/variables.tf:
# ---------------------------------------------------------------------------------------------------------------------
# OPTIONAL EXTERNAL IAM ACCESS PARAMETERS
# These variables have defaults, but may be overridden by the operator.
# ---------------------------------------------------------------------------------------------------------------------

variable "enable_github_actions_access" {
  description = "When true, create an Open ID Connect Provider that GitHub actions can use to assume IAM roles in the account. Refer to https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services for more information."
  type        = bool
  default     = false
}

variable "github_actions_openid_connect_provider_thumbprint_list" {
  description = "When set, use the statically provided hardcoded list of thumbprints rather than looking it up dynamically. This is useful if you want to trade reliability of the OpenID Connect Provider across certificate renewals with a static list that is obtained using a trustworthy mechanism, to mitigate potential damage from a domain hijacking attack on GitHub domains."
  type        = list(string)
  default     = null
}

variable "allow_auto_deploy_from_github_actions_for_sources" {
  description = "Map of github repositories to the list of branches that are allowed to assume the IAM role. The repository should be encoded as org/repo-name (e.g., gruntwork-io/terrraform-aws-ci). Allows GitHub Actions to assume the auto deploy IAM role using an OpenID Connect Provider for the given repositories. Refer to the docs for github-actions-iam-role for more information. Note that this is mutually exclusive with var.allow_auto_deploy_from_other_account_arns. Only used if var.enable_github_actions_access is true."
  type        = map(list(string))
  default     = {}
  # Example:
  # default = {
  #   "gruntwork-io/terraform-aws-security" = ["main", "dev"]
  # }
}

You might want to use them as inputs here if you want to enable it for all environments (https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/master/examples/for-production/infrastructure-live/_envcommon/landingzone/account-baseline-app-base.hcl), or somewhere else depending on how you want to configure this. You can also find more information about enabling GitHub Actions in our github-actions-iam-role module.
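If you do enable it, a minimal sketch of the corresponding inputs might look like the following (the org/repo and branch names are placeholders, not values from this thread):

```hcl
# Sketch only -- enable the OIDC provider and allow specific repos/branches
# to assume the auto-deploy IAM role
inputs = {
  enable_github_actions_access = true

  allow_auto_deploy_from_github_actions_for_sources = {
    "your-org/infrastructure-live" = ["main"] # placeholder repo and branch
  }
}
```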
Need help making a Lambda able to read secrets.
I get this error when provisioning a Lambda from Docker.
Log:

[run-lambda-entrypoint] time="2023-02-07T02:24:55Z" level=debug msg="Loading Secret Manager entry arn:aws:secretsmanager:ap-southeast-3:*****:secret:***-MpQE8U as environment variables."
[run-lambda-entrypoint] time="2023-02-07T02:25:06Z" level=debug msg="Loading Secret Manager entry arn:aws:secretsmanager:ap-southeast-3:*****:secret:***-MpQE8U as environment variables."
START RequestId: 90f7d618-8fdf-4d96-b8c8-b31cd8a4e348 Version: $LATEST
2023-02-07T02:25:36.085Z 90f7d618-8fdf-4d96-b8c8-b31cd8a4e348 Task timed out after 30.03 seconds
[run-lambda-entrypoint] time="2023-02-07T03:02:21Z" level=error msg="FAIL Loading Secret Manager entry arn:aws:secretsmanager:ap-southeast-3:***:secret:***-MpQE8U fail operation error Secrets Manager: GetSecretValue, exceeded maximum number of attempts, 3, https response error StatusCode: 0, RequestID: , request send failed, Post \"https://secretsmanager.ap-southeast-3.amazonaws.com/\": dial tcp 108.136.159.12:443: i/o timeout."

My configuration:

  name      = "edo-daily-installments-cron"
  image_uri = local.image

  run_in_vpc = true
  vpc_id     = dependency.vpc.outputs.vpc_id
  subnet_ids = dependency.vpc.outputs.private_app_subnet_ids

  environment_variables = merge(
    include.envcommon.locals.environment_variables,
    {
      IMAGE_URI           = local.image,
      SECRETS_MANAGER_ARN = local.edo_cron_workers_secrets_manager_arn,
    }
  )

  iam_policy = {
    SecretsAccess = {
      actions = [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds",
        "secretsmanager:PutSecretValue",
        "secretsmanager:UpdateSecret",
        "secretsmanager:TagResource",
        "secretsmanager:UntagResource"
      ],
      resources = ["${local.edo_cron_workers_secrets_manager_arn}"]
      effect    = "Allow"
    }
  }

  cloudwatch_log_group_retention_in_days = 14

This seems related to https://aws.amazon.com/blogs/security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/
Thank you
Solved. Steps:

1. Create a Secrets Manager VPC interface endpoint:

terraform {
  source = "git::git@github.com:gruntwork-io/terraform-aws-vpc.git//modules/vpc-interface-endpoint?ref=v0.22.4"
}

inputs = {
  vpc_id     = dependency.vpc.outputs.vpc_id
  subnet_ids = dependency.vpc.outputs.private_app_subnet_ids

  create_https_security_group    = true
  enable_secretsmanager_endpoint = true
}

2. Run the Lambda inside the VPC and allow outbound traffic:

run_in_vpc = true
vpc_id     = dependency.vpc.outputs.vpc_id
subnet_ids = dependency.vpc.outputs.private_app_subnet_ids
should_create_outbound_rule = true

I am trying to create an EC2 instance with an EBS volume attached to that instance.
I have the code to create the EC2 instance using Terragrunt, and it works fine.
However, to create the EBS volume and attach it to the instance I need to use some Terraform code.
e.g. the layout tree is:

dev
└── ec2
    ├── terragrunt.hcl
    └── ebs.tf
In the ebs.tf file we can have
resource "aws_ebs_volume" "this" {
  availability_zone = "ap-southeast-2a"
  size              = 20
}

resource "aws_volume_attachment" "this" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.this.id
  instance_id = <instance.parameter.from.terragrunt>
}
terragrunt.hcl
locals {
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))
  env              = local.environment_vars.locals.environment

  project_vars = read_terragrunt_config(find_in_parent_folders("project.hcl"))
  project      = local.project_vars.locals.project_name
  application  = local.project_vars.locals.application_name
}

include {
  path = find_in_parent_folders()
}

terraform {
  source = "git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0"
}

dependency "sg" {
  config_path = "../sg-ec2"

  mock_outputs = {
    security_group_id = "sg-xxxxxxxxxxxx"
  }
}

inputs = {
  name        = "ui01-${local.project}-${local.application}-${local.env}"
  description = "UI 01 ${local.project} ${local.application} Instance for ${local.env}"

  ami                  = "ami-0bd2230cfb28832f7" # Amazon Linux kernel 5.10
  instance_type        = "c5.large"
  key_name             = "key-test" # This key is manually created
  monitoring           = true
  iam_instance_profile = "AmazonSSMRoleForInstancesQuickSetup"

  vpc_id    = "vpc-xxxxxxx"
  subnet_id = "subnet-xxxxxxxx"

  vpc_security_group_ids = ["${dependency.sg.outputs.security_group_id}"]
}
Is it possible to use the output of the instance and pass this parameter/object to the ebs.tf file so that the EBS volume gets attached to the instance on the fly?

Another question: is it possible for the *.tf files to use the variables defined in the .hcl files?
e.g. if you call in terragrunt

locals {
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))
  env              = local.environment_vars.locals.environment
}

and env.hcl is:

locals {
  environment = "dev"
}

you can use the variable env as ${local.env} for your inputs.
Can you call this variable in the .tf file in some way?
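One pattern worth noting here (a sketch, not something confirmed in this thread): Terragrunt passes everything in `inputs` to Terraform as input variables, so a value read from an .hcl `locals` block can reach a .tf file if that file declares a matching variable.

```hcl
# terragrunt.hcl (sketch): forward the local into the module's inputs
inputs = {
  env = local.env # Terragrunt exposes this as the Terraform variable "env"
}
```

```hcl
# In a .tf file (sketch): declare a matching variable to receive the value,
# then reference it as var.env
variable "env" {
  type = string
}
```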
OK, so I have this almost working. In fact it does work: I can grab the instance ID and attach an EBS volume to the instance, but at the same time the ebs directory tries to create a new EC2 instance. This is not what I want, as I have an ec2-instance directory looking after the entire EC2 instance creation.
├── ebs
│   ├── ebs.tf
│   └── terragrunt.hcl
└── ec2-instance
    └── terragrunt.hcl
ebs.tf
variable "instance_id" {
  type = string
}

resource "aws_ebs_volume" "this" {
  availability_zone = "ap-southeast-2a"
  size              = 20
}

resource "aws_volume_attachment" "this" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.this.id
  instance_id = var.instance_id
}

terragrunt.hcl

locals {}

include {
  path = find_in_parent_folders()
}

terraform {
  source = "git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0"
}

dependency "ec2-linux-ui" {
  config_path = "../ec2-linux-ui"
  mock_outputs = {
    instance_id = "12345"
  }
}

inputs = {
  instance_id = dependency.ec2-linux-ui.outputs.id
}

terragrunt.hcl for the ec2 instance

locals {
  environment_vars = read_terragrunt_config(find_in_parent_folders("env.hcl"))
  env              = local.environment_vars.locals.environment

  project_vars = read_terragrunt_config(find_in_parent_folders("project.hcl"))
  project      = local.project_vars.locals.project_name
  application  = local.project_vars.locals.application_name
}

include {
  path = find_in_parent_folders()
}

terraform {
  source = "git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0"
}

# Need the output of the correct Security Group ID to attach to the instance
dependency "sg" {
  config_path = "../sg-ec2"

  mock_outputs = {
    security_group_id = "sg-xxxxxxxxxx"
  }
}

inputs = {
  # Naming
  name        = "ui01-${local.project}-${local.application}-${local.env}"
  description = "UI 01 ${local.project} ${local.application} Instance for ${local.env}"

  # EC2 Config
  ami           = "ami-0bd2230cfb28832f7" # Amazon Linux kernel 5.10
  instance_type = "c5.large"
  key_name      = "xxxxxxx"
  monitoring    = true

  # Networking
  vpc_id    = "xxxxxxx"
  subnet_id = "xxxxxxxx"

  # Security Group
  vpc_security_group_ids = ["${dependency.sg.outputs.security_group_id}"]
}

Not sure why the ebs/terragrunt.hcl file wants to create a new instance when I can successfully get the instance ID returned from the ec2-linux-ui dependency? If I can fix that, we are done.
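One likely cause, visible in the ebs/terragrunt.hcl above: its terraform block sources the terraform-aws-ec2-instance module, so Terragrunt applies that module (creating a second instance) in addition to nothing else happening with the local ebs.tf. A sketch of one possible fix, assuming the local .tf files should themselves be the module for this directory, is to drop the remote source so Terragrunt runs Terraform directly against ebs.tf:

```hcl
# ebs/terragrunt.hcl (sketch): no terraform { source = ... } block, so
# Terragrunt runs Terraform against the .tf files in this directory
include {
  path = find_in_parent_folders()
}

dependency "ec2-linux-ui" {
  config_path = "../ec2-linux-ui"
  mock_outputs = {
    instance_id = "12345"
  }
}

inputs = {
  instance_id = dependency.ec2-linux-ui.outputs.id
}
```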
"}}} />
"}}} />Hi,
Need help: I got this error provisioning public-static-website in the ap-southeast-3 region using terraform-aws-service-catalog version 0.100.0.

module.cloudfront.module.access_logs[0].aws_s3_bucket_policy.bucket_policy[0]: Creating...
module.static_website.aws_s3_bucket_policy.website[0]: Modifying... [id=edo.xxxx.com]
module.static_website.aws_s3_bucket_policy.website[0]: Modifications complete after 0s [id=edo.xxxx.com]
module.cloudfront.module.access_logs[0].aws_s3_bucket_policy.bucket_policy[0]: Still creating... [10s elapsed]
module.cloudfront.module.access_logs[0].aws_s3_bucket_policy.bucket_policy[0]: Still creating... [20s elapsed]
module.cloudfront.module.access_logs[0].aws_s3_bucket_policy.bucket_policy[0]: Still creating... [30s elapsed]
module.cloudfront.module.access_logs[0].aws_s3_bucket_policy.bucket_policy[0]: Still creating... [40s elapsed]
module.cloudfront.module.access_logs[0].aws_s3_bucket_policy.bucket_policy[0]: Still creating... [50s elapsed]
╷
│ Error: Error putting S3 policy: MalformedPolicy: Invalid principal in policy
│ 	status code: 400, request id: 1SW2NGZYREGCX0YP, host id: u2nGs1sUcy3uxBIkhLr9Yu2gAkdd3ngTZmIsYUg9Mnctb5xer+Y9r2Dcig0IqQ35obzqSunQBjg=
│
│   with module.cloudfront.module.access_logs[0].aws_s3_bucket_policy.bucket_policy[0],
│   on .terraform/modules/cloudfront.access_logs/modules/private-s3-bucket/main.tf line 429, in resource "aws_s3_bucket_policy" "bucket_policy":
│  429: resource "aws_s3_bucket_policy" "bucket_policy" {
│
╵
ERRO[0086] 1 error occurred:
	* exit status 1

Details of the input:

inputs = {
  restrict_bucket_access_to_cloudfront    = true
  create_route53_entry                    = true
  base_domain_name                        = local.account_vars.locals.domain_name.name
  website_domain_name                     = "edo.${local.account_vars.locals.domain_name.name}"
  acm_certificate_domain_name             = "${local.account_vars.locals.domain_name.name}"
  security_header_content_security_policy = "default-src 'self'; base-uri 'self'; block-all-mixed-content; font-src 'self' https: data:; form-action 'self'; frame-ancestors 'self'; img-src 'self' data:; object-src 'none'; script-src 'self' blob:; script-src-attr 'none'; style-src 'self' https: 'unsafe-inline';  upgrade-insecure-requests"

  error_responses = {
    404 = {
      response_code         = 200
      response_page_path    = "index.html"
      error_caching_min_ttl = 10
    }
  }

  force_destroy = true
}

Hi @andi-pangeran,
As discussed in other replies, CloudFront doesn't deliver standard logs to buckets in some regions; for those cases you need to use var.disable_logging, which is now exposed on the public-static-website module in the service catalog as of v0.100.5:
https://github.com/gruntwork-io/terraform-aws-service-catalog/releases/tag/v0.100.5
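Based on the release linked above, the fix should reduce to a single input on the module (a sketch; only the variable name comes from the release notes):

```hcl
inputs = {
  # CloudFront cannot deliver standard logs to buckets in some regions
  # (including ap-southeast-3), so turn access logging off
  disable_logging = true
}
```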