From 7d479576f9d3807a7b8613352ef0689db43def49 Mon Sep 17 00:00:00 2001 From: "docs-sourcer[bot]" <99042413+docs-sourcer[bot]@users.noreply.github.com> Date: Mon, 30 Jan 2023 16:27:36 +0000 Subject: [PATCH 1/7] Updated with the latest changes from the knowledge base discussions. --- docs/discussions/knowledge-base/137.mdx | 4 ++-- docs/discussions/knowledge-base/653.mdx | 27 +++++++++++++++++++++++++ 2 files changed, 29 insertions(+), 2 deletions(-) create mode 100644 docs/discussions/knowledge-base/653.mdx diff --git a/docs/discussions/knowledge-base/137.mdx b/docs/discussions/knowledge-base/137.mdx index 31dd726dcc..c5c57528a7 100644 --- a/docs/discussions/knowledge-base/137.mdx +++ b/docs/discussions/knowledge-base/137.mdx @@ -14,7 +14,7 @@ import GitHub from "/src/components/GitHub" Knowledge Base

Passing variables between Terragrunt and Terraform

- resource \"aws_ebs_volume\" \"this\" {\r\n> availability_zone = \"ap-southeast-2a\"\r\n> size = 20\r\n> }\r\n> \r\n> resource \"aws_volume_attachment\" \"this\" {\r\n> device_name = \"/dev/sdh\"\r\n> volume_id = aws_ebs_volume.this.id\r\n> instance_id = \r\n> }\r\n> \r\n\r\nterragrunt.hcl\r\n\r\n> locals {\r\n> environment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))\r\n> env = local.environment_vars.locals.environment\r\n> \r\n> project_vars = read_terragrunt_config(find_in_parent_folders(\"project.hcl\"))\r\n> project = local.project_vars.locals.project_name\r\n> application = local.project_vars.locals.application_name\r\n> \r\n> }\r\n> \r\n> include {\r\n> path = find_in_parent_folders()\r\n> }\r\n> \r\n> terraform {\r\n> source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\r\n> }\r\n> ``\r\n> dependency \"sg\" {\r\n> config_path = \"../sg-ec2\"\r\n> \r\n> mock_outputs = {\r\n> security_group_id = \"sg-xxxxxxxxxxxx\"\r\n> }\r\n> }\r\n> \r\n> inputs = {\r\n> \r\n> \r\n> name = \"ui01-${local.project}-${local.application}-${local.env}\"\r\n> description = \"UI 01 ${local.project} ${local.application} Instance for ${local.env}\"\r\n> \r\n> \r\n> ami = \"ami-0bd2230cfb28832f7\" # Amazon Linux kernel 5.10\r\n> instance_type = \"c5.large\"\r\n> key_name = \"key-test\" # This key is manually created\r\n> monitoring = true\r\n> iam_instance_profile = \"AmazonSSMRoleForInstancesQuickSetup\"\r\n> \r\n> \r\n> vpc_id = \"vpc-xxxxxxx\" \r\n> subnet_id = \"subnet-xxxxxxxx\" \r\n> \r\n> \r\n> vpc_security_group_ids = [\"${dependency.sg.outputs.security_group_id}\"]\r\n> \r\n> }\r\n\r\n\r\n\r\nIs it possible to use the output of the instance and pass this parameter/object to the ebs.tf file so that the ebs volume gets attached to the instance on the fly?\r\n\r\nAnother question is, is it possible for the *.tf files to use the variables defined in the .hcl files?\r\n\r\ne.g.\r\nIf you call in terragrunt \r\n\r\n> locals {\r\n> environment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))\r\n> env = local.environment_vars.locals.environment\r\n> }\r\n> \r\n> env.hcl is:\r\n> locals {\r\n> environment = \"dev\"\r\n> }\r\n> \r\n\r\nyou can use the variable env as ${local.env} for your inputs\r\nCan you call this variable in the .tf file in some way?\r\n","bodyHTML":"

I am trying to create an EC2 instance with an EBS volume attached to it.
\nI have the code to create the EC2 instance using terragrunt, and it works fine.

\n

However, to create the EBS volume and attach it to the instance I need to use some terraform code.

\n

e.g.

\n

Layout tree is:

\n

dev
\n-ec2
\n--terragrunt.hcl
\n--ebs.tf

\n

In the ebs.tf file we can have

\n
\n

resource \"aws_ebs_volume\" \"this\" {
\navailability_zone = \"ap-southeast-2a\"
\nsize = 20
\n}

\n

resource \"aws_volume_attachment\" \"this\" {
\ndevice_name = \"/dev/sdh\"
\nvolume_id = aws_ebs_volume.this.id
\ninstance_id = <instance.parameter.from.terragrunt>
\n}

\n
\n

terragrunt.hcl

\n
\n

locals {
\nenvironment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))
\nenv = local.environment_vars.locals.environment

\n

project_vars = read_terragrunt_config(find_in_parent_folders(\"project.hcl\"))
\nproject = local.project_vars.locals.project_name
\napplication = local.project_vars.locals.application_name

\n

}

\n

include {
\npath = find_in_parent_folders()
\n}

\n

terraform {
\nsource = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"
\n}
\n``
\ndependency \"sg\" {
\nconfig_path = \"../sg-ec2\"

\n

mock_outputs = {
\nsecurity_group_id = \"sg-xxxxxxxxxxxx\"
\n}
\n}

\n

inputs = {

\n

name = \"ui01-${local.project}-${local.application}-${local.env}\"
\ndescription = \"UI 01 ${local.project} ${local.application} Instance for ${local.env}\"

\n

ami = \"ami-0bd2230cfb28832f7\" # Amazon Linux kernel 5.10
\ninstance_type = \"c5.large\"
\nkey_name = \"key-test\" # This key is manually created
\nmonitoring = true
\niam_instance_profile = \"AmazonSSMRoleForInstancesQuickSetup\"

\n

vpc_id = \"vpc-xxxxxxx\"
\nsubnet_id = \"subnet-xxxxxxxx\"

\n

vpc_security_group_ids = [\"${dependency.sg.outputs.security_group_id}\"]

\n

}

\n
\n

Is it possible to use the output of the instance and pass this parameter/object to the ebs.tf file so that the ebs volume gets attached to the instance on the fly?

\n

Another question is, is it possible for the *.tf files to use the variables defined in the .hcl files?

\n

e.g.
\nIf you call in terragrunt

\n
\n

locals {
\nenvironment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))
\nenv = local.environment_vars.locals.environment
\n}

\n

env.hcl is:
\nlocals {
\nenvironment = \"dev\"
\n}

\n
\n

you can use the variable env as ${local.env} for your inputs
\nCan you call this variable in the .tf file in some way?
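For what it's worth, Terragrunt locals are not visible to Terraform code directly; the usual pattern (a minimal sketch, with the variable name `env` assumed) is to pass the value through `inputs` and declare a matching variable in the module's `*.tf` files:

```hcl
# terragrunt.hcl (sketch): pass the local to Terraform as an input
inputs = {
  env = local.env
}
```

```hcl
# variables.tf (sketch): a matching variable makes the value available as var.env
variable "env" {
  type = string
}
```

Terragrunt hands `inputs` to Terraform as `TF_VAR_*` environment variables, so `var.env` picks up whatever `local.env` resolved to.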

","answer":{"body":"OK so I have this almost working fully, well in fact it does work, I can grab the instance id and attach an ebs volume to this instance, but at the same time the ebs directory tries to create a new ec2 instance. This is not what I want as I have a ec2 directory looking after the entire ec2 instance creation.\r\n\r\n\r\n├── ebs\r\n│ ├── ebs.tf\r\n│ └── terragrunt.hcl\r\n└── ec2-instance\r\n └── terragrunt.hcl\r\n\r\n\r\nebs.tf\r\n```\r\nvariable \"instance_id\" {\r\n type = string\r\n}\r\n\r\nresource \"aws_ebs_volume\" \"this\" {\r\n availability_zone = \"ap-southeast-2a\"\r\n size = 20\r\n}\r\n\r\nresource \"aws_volume_attachment\" \"this\" {\r\n device_name = \"/dev/sdh\"\r\n volume_id = aws_ebs_volume.this.id\r\n instance_id = \"${var.instance_id}\"\r\n}\r\n```\r\n\r\n\r\nterragrunt.hcl\r\n\r\n```\r\nlocals { }\r\n\r\ninclude {\r\n path = find_in_parent_folders()\r\n}\r\n\r\nterraform {\r\n source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\r\n}\r\n\r\ndependency \"ec2-linux-ui\" {\r\n config_path = \"../ec2-linux-ui\"\r\n mock_outputs = {\r\n instance_id = \"12345\"\r\n }\r\n}\r\n\r\ninputs = {\r\n instance_id = dependency.ec2-linux-ui.outputs.id\r\n}\r\n```\r\n\r\n\r\n\r\nterragrunt.hcl for the ec2 instance\r\n\r\n```\r\nlocals {\r\n environment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))\r\n env = local.environment_vars.locals.environment\r\n project_vars = read_terragrunt_config(find_in_parent_folders(\"project.hcl\"))\r\n project = local.project_vars.locals.project_name\r\n application = local.project_vars.locals.application_name\r\n}\r\n\r\ninclude {\r\n path = find_in_parent_folders()\r\n}\r\n\r\nterraform {\r\n source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\r\n}\r\n\r\n# Need the output of the correct Security Group ID to attach to the RDS instance\r\ndependency \"sg\" {\r\n config_path = \"../sg-ec2\"\r\n\r\n mock_outputs = {\r\n security_group_id = \"sg-xxxxxxxxxx\"\r\n }\r\n}\r\n\r\ninputs = {\r\n\r\n # Naming\r\n name = \"ui01-${local.project}-${local.application}-${local.env}\"\r\n description = \"UI 01 ${local.project} ${local.application} Instance for ${local.env}\"\r\n\r\n # EC2 Config\r\n ami = \"ami-0bd2230cfb28832f7\" # Amazon Linux kernel 5.10\r\n instance_type = \"c5.large\"\r\n key_name = \"xxxxxxx\" \r\n monitoring = true\r\n\r\n\r\n # Networking\r\n vpc_id = \"xxxxxxx\" \r\n subnet_id = \"xxxxxxxx\"\r\n\r\n # Security Group\r\n vpc_security_group_ids = [\"${dependency.sg.outputs.security_group_id}\"]\r\n\r\n}\r\n```\r\n\r\nNot sure why the ebs/terragrunt.hcl file wants to create a new instance when I can successfully get the instance id returned from the ec2-linux-ui dependency? If I can fix that, we are done.","bodyHTML":"

OK, so I have this almost fully working; in fact it does work: I can grab the instance id and attach an EBS volume to that instance, but at the same time the ebs directory tries to create a new EC2 instance. This is not what I want, as I have an ec2 directory looking after the entire EC2 instance creation.

\n

├── ebs
\n│ ├── ebs.tf
\n│ └── terragrunt.hcl
\n└── ec2-instance
\n└── terragrunt.hcl

\n

ebs.tf

\n
variable \"instance_id\" {\n  type = string\n}\n\nresource \"aws_ebs_volume\" \"this\" {\n  availability_zone = \"ap-southeast-2a\"\n  size              = 20\n}\n\nresource \"aws_volume_attachment\" \"this\" {\n  device_name = \"/dev/sdh\"\n  volume_id   = aws_ebs_volume.this.id\n  instance_id = \"${var.instance_id}\"\n}\n
\n

terragrunt.hcl

\n
locals { }\n\ninclude {\n  path = find_in_parent_folders()\n}\n\nterraform {\n  source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\n}\n\ndependency \"ec2-linux-ui\" {\n  config_path = \"../ec2-linux-ui\"\n  mock_outputs = {\n    instance_id = \"12345\"\n  }\n}\n\ninputs = {\n      instance_id = dependency.ec2-linux-ui.outputs.id\n}\n
\n

terragrunt.hcl for the ec2 instance

\n
locals {\n  environment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))\n  env              = local.environment_vars.locals.environment\n  project_vars = read_terragrunt_config(find_in_parent_folders(\"project.hcl\"))\n  project      = local.project_vars.locals.project_name\n  application  = local.project_vars.locals.application_name\n}\n\ninclude {\n  path = find_in_parent_folders()\n}\n\nterraform {\n  source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\n}\n\n# Need the output of the correct Security Group ID to attach to the RDS instance\ndependency \"sg\" {\n  config_path = \"../sg-ec2\"\n\n  mock_outputs = {\n    security_group_id = \"sg-xxxxxxxxxx\"\n  }\n}\n\ninputs = {\n\n  # Naming\n  name        = \"ui01-${local.project}-${local.application}-${local.env}\"\n  description = \"UI 01 ${local.project} ${local.application} Instance for ${local.env}\"\n\n  # EC2 Config\n  ami                  = \"ami-0bd2230cfb28832f7\" # Amazon Linux kernel 5.10\n  instance_type        = \"c5.large\"\n  key_name             = \"xxxxxxx\" \n  monitoring           = true\n\n\n  # Networking\n  vpc_id    = \"xxxxxxx\"   \n  subnet_id = \"xxxxxxxx\"\n\n  # Security Group\n  vpc_security_group_ids = [\"${dependency.sg.outputs.security_group_id}\"]\n\n}\n
\n

I'm not sure why the ebs/terragrunt.hcl file wants to create a new instance when I can successfully get the instance id returned from the ec2-linux-ui dependency. If I can fix that, we are done.
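For what it's worth, the likely culprit is the terraform block in ebs/terragrunt.hcl: its source still points at the terraform-aws-ec2-instance module, so Terragrunt plans that module's instance in addition to the resources in ebs.tf. Here is a minimal sketch of an ebs/terragrunt.hcl that only deploys the local ebs.tf (the mock output name is also aligned with the `id` output referenced in `inputs`):

```hcl
# ebs/terragrunt.hcl (sketch): with no terraform { source = ... } block,
# Terragrunt runs Terraform against the *.tf files sitting next to this file
# (i.e. ebs.tf), so the ec2-instance module is no longer planned here.
include {
  path = find_in_parent_folders()
}

dependency "ec2-linux-ui" {
  config_path = "../ec2-linux-ui"

  # mock the output name actually referenced below ("id"), not "instance_id"
  mock_outputs = {
    id = "i-0123456789abcdef0"
  }
}

inputs = {
  instance_id = dependency.ec2-linux-ui.outputs.id
}
```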

"}}} /> + resource \"aws_ebs_volume\" \"this\" {\r\n> availability_zone = \"ap-southeast-2a\"\r\n> size = 20\r\n> }\r\n> \r\n> resource \"aws_volume_attachment\" \"this\" {\r\n> device_name = \"/dev/sdh\"\r\n> volume_id = aws_ebs_volume.this.id\r\n> instance_id = \r\n> }\r\n> \r\n\r\nterragrunt.hcl\r\n\r\n> locals {\r\n> environment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))\r\n> env = local.environment_vars.locals.environment\r\n> \r\n> project_vars = read_terragrunt_config(find_in_parent_folders(\"project.hcl\"))\r\n> project = local.project_vars.locals.project_name\r\n> application = local.project_vars.locals.application_name\r\n> \r\n> }\r\n> \r\n> include {\r\n> path = find_in_parent_folders()\r\n> }\r\n> \r\n> terraform {\r\n> source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\r\n> }\r\n> ``\r\n> dependency \"sg\" {\r\n> config_path = \"../sg-ec2\"\r\n> \r\n> mock_outputs = {\r\n> security_group_id = \"sg-xxxxxxxxxxxx\"\r\n> }\r\n> }\r\n> \r\n> inputs = {\r\n> \r\n> \r\n> name = \"ui01-${local.project}-${local.application}-${local.env}\"\r\n> description = \"UI 01 ${local.project} ${local.application} Instance for ${local.env}\"\r\n> \r\n> \r\n> ami = \"ami-0bd2230cfb28832f7\" # Amazon Linux kernel 5.10\r\n> instance_type = \"c5.large\"\r\n> key_name = \"key-test\" # This key is manually created\r\n> monitoring = true\r\n> iam_instance_profile = \"AmazonSSMRoleForInstancesQuickSetup\"\r\n> \r\n> \r\n> vpc_id = \"vpc-xxxxxxx\" \r\n> subnet_id = \"subnet-xxxxxxxx\" \r\n> \r\n> \r\n> vpc_security_group_ids = [\"${dependency.sg.outputs.security_group_id}\"]\r\n> \r\n> }\r\n\r\n\r\n\r\nIs it possible to use the output of the instance and pass this parameter/object to the ebs.tf file so that the ebs volume gets attached to the instance on the fly?\r\n\r\nAnother question is, is it possible for the *.tf files to use the variables defined in the .hcl files?\r\n\r\ne.g.\r\nIf you call in terragrunt \r\n\r\n> locals {\r\n> environment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))\r\n> env = local.environment_vars.locals.environment\r\n> }\r\n> \r\n> env.hcl is:\r\n> locals {\r\n> environment = \"dev\"\r\n> }\r\n> \r\n\r\nyou can use the variable env as ${local.env} for your inputs\r\nCan you call this variable in the .tf file in some way?\r\n","bodyHTML":"

I am trying to create an EC2 instance with an EBS volume attached to it.
\nI have the code to create the EC2 instance using terragrunt, and it works fine.

\n

However, to create the EBS volume and attach it to the instance I need to use some terraform code.

\n

e.g.

\n

Layout tree is:

\n

dev
\n-ec2
\n--terragrunt.hcl
\n--ebs.tf

\n

In the ebs.tf file we can have

\n
\n

resource \"aws_ebs_volume\" \"this\" {
\navailability_zone = \"ap-southeast-2a\"
\nsize = 20
\n}

\n

resource \"aws_volume_attachment\" \"this\" {
\ndevice_name = \"/dev/sdh\"
\nvolume_id = aws_ebs_volume.this.id
\ninstance_id = <instance.parameter.from.terragrunt>
\n}

\n
\n

terragrunt.hcl

\n
\n

locals {
\nenvironment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))
\nenv = local.environment_vars.locals.environment

\n

project_vars = read_terragrunt_config(find_in_parent_folders(\"project.hcl\"))
\nproject = local.project_vars.locals.project_name
\napplication = local.project_vars.locals.application_name

\n

}

\n

include {
\npath = find_in_parent_folders()
\n}

\n

terraform {
\nsource = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"
\n}
\n``
\ndependency \"sg\" {
\nconfig_path = \"../sg-ec2\"

\n

mock_outputs = {
\nsecurity_group_id = \"sg-xxxxxxxxxxxx\"
\n}
\n}

\n

inputs = {

\n

name = \"ui01-${local.project}-${local.application}-${local.env}\"
\ndescription = \"UI 01 ${local.project} ${local.application} Instance for ${local.env}\"

\n

ami = \"ami-0bd2230cfb28832f7\" # Amazon Linux kernel 5.10
\ninstance_type = \"c5.large\"
\nkey_name = \"key-test\" # This key is manually created
\nmonitoring = true
\niam_instance_profile = \"AmazonSSMRoleForInstancesQuickSetup\"

\n

vpc_id = \"vpc-xxxxxxx\"
\nsubnet_id = \"subnet-xxxxxxxx\"

\n

vpc_security_group_ids = [\"${dependency.sg.outputs.security_group_id}\"]

\n

}

\n
\n

Is it possible to use the output of the instance and pass this parameter/object to the ebs.tf file so that the ebs volume gets attached to the instance on the fly?

\n

Another question is, is it possible for the *.tf files to use the variables defined in the .hcl files?

\n

e.g.
\nIf you call in terragrunt

\n
\n

locals {
\nenvironment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))
\nenv = local.environment_vars.locals.environment
\n}

\n

env.hcl is:
\nlocals {
\nenvironment = \"dev\"
\n}

\n
\n

you can use the variable env as ${local.env} for your inputs
\nCan you call this variable in the .tf file in some way?

","answer":{"body":"OK so I have this almost working fully, well in fact it does work, I can grab the instance id and attach an ebs volume to this instance, but at the same time the ebs directory tries to create a new ec2 instance. This is not what I want as I have a ec2 directory looking after the entire ec2 instance creation.\r\n\r\n\r\n├── ebs\r\n│ ├── ebs.tf\r\n│ └── terragrunt.hcl\r\n└── ec2-instance\r\n └── terragrunt.hcl\r\n\r\n\r\nebs.tf\r\n```\r\nvariable \"instance_id\" {\r\n type = string\r\n}\r\n\r\nresource \"aws_ebs_volume\" \"this\" {\r\n availability_zone = \"ap-southeast-2a\"\r\n size = 20\r\n}\r\n\r\nresource \"aws_volume_attachment\" \"this\" {\r\n device_name = \"/dev/sdh\"\r\n volume_id = aws_ebs_volume.this.id\r\n instance_id = \"${var.instance_id}\"\r\n}\r\n```\r\n\r\n\r\nterragrunt.hcl\r\n\r\n```\r\nlocals { }\r\n\r\ninclude {\r\n path = find_in_parent_folders()\r\n}\r\n\r\nterraform {\r\n source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\r\n}\r\n\r\ndependency \"ec2-linux-ui\" {\r\n config_path = \"../ec2-linux-ui\"\r\n mock_outputs = {\r\n instance_id = \"12345\"\r\n }\r\n}\r\n\r\ninputs = {\r\n instance_id = dependency.ec2-linux-ui.outputs.id\r\n}\r\n```\r\n\r\n\r\n\r\nterragrunt.hcl for the ec2 instance\r\n\r\n```\r\nlocals {\r\n environment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))\r\n env = local.environment_vars.locals.environment\r\n project_vars = read_terragrunt_config(find_in_parent_folders(\"project.hcl\"))\r\n project = local.project_vars.locals.project_name\r\n application = local.project_vars.locals.application_name\r\n}\r\n\r\ninclude {\r\n path = find_in_parent_folders()\r\n}\r\n\r\nterraform {\r\n source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\r\n}\r\n\r\n# Need the output of the correct Security Group ID to attach to the RDS instance\r\ndependency \"sg\" {\r\n config_path = \"../sg-ec2\"\r\n\r\n mock_outputs = {\r\n security_group_id = \"sg-xxxxxxxxxx\"\r\n }\r\n}\r\n\r\ninputs = {\r\n\r\n # Naming\r\n name = \"ui01-${local.project}-${local.application}-${local.env}\"\r\n description = \"UI 01 ${local.project} ${local.application} Instance for ${local.env}\"\r\n\r\n # EC2 Config\r\n ami = \"ami-0bd2230cfb28832f7\" # Amazon Linux kernel 5.10\r\n instance_type = \"c5.large\"\r\n key_name = \"xxxxxxx\" \r\n monitoring = true\r\n\r\n\r\n # Networking\r\n vpc_id = \"xxxxxxx\" \r\n subnet_id = \"xxxxxxxx\"\r\n\r\n # Security Group\r\n vpc_security_group_ids = [\"${dependency.sg.outputs.security_group_id}\"]\r\n\r\n}\r\n```\r\n\r\nNot sure why the ebs/terragrunt.hcl file wants to create a new instance when I can successfully get the instance id returned from the ec2-linux-ui dependency? If I can fix that, we are done.","bodyHTML":"

OK, so I have this almost fully working; in fact it does work: I can grab the instance id and attach an EBS volume to that instance, but at the same time the ebs directory tries to create a new EC2 instance. This is not what I want, as I have an ec2 directory looking after the entire EC2 instance creation.

\n

├── ebs
\n│ ├── ebs.tf
\n│ └── terragrunt.hcl
\n└── ec2-instance
\n└── terragrunt.hcl

\n

ebs.tf

\n
variable \"instance_id\" {\n  type = string\n}\n\nresource \"aws_ebs_volume\" \"this\" {\n  availability_zone = \"ap-southeast-2a\"\n  size              = 20\n}\n\nresource \"aws_volume_attachment\" \"this\" {\n  device_name = \"/dev/sdh\"\n  volume_id   = aws_ebs_volume.this.id\n  instance_id = \"${var.instance_id}\"\n}\n
\n

terragrunt.hcl

\n
locals { }\n\ninclude {\n  path = find_in_parent_folders()\n}\n\nterraform {\n  source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\n}\n\ndependency \"ec2-linux-ui\" {\n  config_path = \"../ec2-linux-ui\"\n  mock_outputs = {\n    instance_id = \"12345\"\n  }\n}\n\ninputs = {\n      instance_id = dependency.ec2-linux-ui.outputs.id\n}\n
\n

terragrunt.hcl for the ec2 instance

\n
locals {\n  environment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))\n  env              = local.environment_vars.locals.environment\n  project_vars = read_terragrunt_config(find_in_parent_folders(\"project.hcl\"))\n  project      = local.project_vars.locals.project_name\n  application  = local.project_vars.locals.application_name\n}\n\ninclude {\n  path = find_in_parent_folders()\n}\n\nterraform {\n  source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\n}\n\n# Need the output of the correct Security Group ID to attach to the RDS instance\ndependency \"sg\" {\n  config_path = \"../sg-ec2\"\n\n  mock_outputs = {\n    security_group_id = \"sg-xxxxxxxxxx\"\n  }\n}\n\ninputs = {\n\n  # Naming\n  name        = \"ui01-${local.project}-${local.application}-${local.env}\"\n  description = \"UI 01 ${local.project} ${local.application} Instance for ${local.env}\"\n\n  # EC2 Config\n  ami                  = \"ami-0bd2230cfb28832f7\" # Amazon Linux kernel 5.10\n  instance_type        = \"c5.large\"\n  key_name             = \"xxxxxxx\" \n  monitoring           = true\n\n\n  # Networking\n  vpc_id    = \"xxxxxxx\"   \n  subnet_id = \"xxxxxxxx\"\n\n  # Security Group\n  vpc_security_group_ids = [\"${dependency.sg.outputs.security_group_id}\"]\n\n}\n
\n

I'm not sure why the ebs/terragrunt.hcl file wants to create a new instance when I can successfully get the instance id returned from the ec2-linux-ui dependency. If I can fix that, we are done.

"}}} />
@@ -22,6 +22,6 @@ import GitHub from "/src/components/GitHub" diff --git a/docs/discussions/knowledge-base/653.mdx b/docs/discussions/knowledge-base/653.mdx new file mode 100644 index 0000000000..5f96a5b64b --- /dev/null +++ b/docs/discussions/knowledge-base/653.mdx @@ -0,0 +1,27 @@ +--- +hide_table_of_contents: true +hide_title: true +custom_edit_url: null +--- + +import CenterLayout from "/src/components/CenterLayout" +import GitHub from "/src/components/GitHub" + + + + + + +Knowledge Base +

How do I check if Gruntwork has a module for X technology or service?

+ How can I tell if Gruntwork offers a module for a given AWS service or technology?\r\n\n\n---\n\n\n

Tracked in ticket #109851

\n
\n","bodyHTML":"

A customer asked:

\n
\n

How can I tell if Gruntwork offers a module for a given AWS service or technology?

\n
\n
\n\n

Tracked in ticket #109851

\n
","answer":{"body":"\r\n\r\nThanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology: \r\n\r\n# Option 1 - use our Infrastructure as Code Library's search page\r\n\r\nTry the search bar on [the official page for the Gruntwork Infrastructure as Code Library ](https://gruntwork.io/infrastructure-as-code-library/). If there are no matches for your search query, then it's likely Gruntwork does not currently offer such a module.\r\n\r\n![AWS-Infrastructure-as-Code-Library](https://user-images.githubusercontent.com/1769996/215534278-fa6cb144-00e3-4c05-a2f5-5ea6cc18e84c.png)\r\n\r\n# Option 2 - Use GitHub's search functionality \r\n\r\nUse GitHub search when logged into GitHub as an account that is a member of the `gruntwork-io` GitHub organization. You can also enter `org:gruntwork-io language:hcl ` - so, if, for example, you wanted to check for an AWS Neptune module, you could enter `org:gruntwork-io language:hcl neptune` - which would return no results at the time of this writing, indicating no matching module. \r\n\r\n![Search-·-org-gruntwork-io-language-hcl-neptune](https://user-images.githubusercontent.com/1769996/215534729-351b61ae-60f8-40ae-939b-6e988fcc0631.png)\r\n\r\nAs an alternative example, `org:gruntwork-io language:hcl ecs` does return a number of our modules including our ECS module and our ECS Deploy Runner modules.\r\n\r\n![Search-·-org-gruntwork-io-language-hcl-ecs](https://user-images.githubusercontent.com/1769996/215534840-165c77ef-8903-43e4-b1f2-7753be152f33.png)","bodyHTML":"

Thanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology:

\n

Option 1 - use our Infrastructure as Code Library's search page

\n

Try the search bar on the official page for the Gruntwork Infrastructure as Code Library. If there are no matches for your search query, then it's likely Gruntwork does not currently offer such a module.

\n

\"AWS-Infrastructure-as-Code-Library\"

\n

Option 2 - Use GitHub's search functionality

\n

Use GitHub search when logged into GitHub as an account that is a member of the gruntwork-io GitHub organization. You can also enter org:gruntwork-io language:hcl <search-term> - so, if, for example, you wanted to check for an AWS Neptune module, you could enter org:gruntwork-io language:hcl neptune - which would return no results at the time of this writing, indicating no matching module.

\n

\"Search-·-org-gruntwork-io-language-hcl-neptune\"

\n

As an alternative example, org:gruntwork-io language:hcl ecs does return a number of our modules including our ECS module and our ECS Deploy Runner modules.

\n

\"Search-·-org-gruntwork-io-language-hcl-ecs\"

"}}} /> + +
+ + + From f3835c6d995c9319e5c024c33dfa2d792819c3fb Mon Sep 17 00:00:00 2001 From: "docs-sourcer[bot]" <99042413+docs-sourcer[bot]@users.noreply.github.com> Date: Mon, 30 Jan 2023 16:44:19 +0000 Subject: [PATCH 2/7] Updated with the latest changes from the knowledge base discussions. --- docs/discussions/knowledge-base/632.mdx | 27 +++++++++++++++++++++++++ 1 file changed, 27 insertions(+) create mode 100644 docs/discussions/knowledge-base/632.mdx diff --git a/docs/discussions/knowledge-base/632.mdx b/docs/discussions/knowledge-base/632.mdx new file mode 100644 index 0000000000..86babb9310 --- /dev/null +++ b/docs/discussions/knowledge-base/632.mdx @@ -0,0 +1,27 @@ +--- +hide_table_of_contents: true +hide_title: true +custom_edit_url: null +--- + +import CenterLayout from "/src/components/CenterLayout" +import GitHub from "/src/components/GitHub" + + + + + + +Knowledge Base +

Best-practice recommendations on authenticating to Gruntwork's codebase in my CI/CD pipelines

+\n

Tracked in ticket #109793

\n\n","bodyHTML":"

Since there are many ways to do this, what is the best way to authenticate to the Gruntwork modules on GitHub.com in my CI/CD pipelines? Also, how does this impact my authentication to other module sources, such as my internal VCS?

\n
\n\n

Tracked in ticket #109793

\n
","answer":{"body":"[This Knowledge Base post](https://github.com/gruntwork-io/knowledge-base/discussions/650) discusses how ECS Deploy Runner and Gruntwork Pipelines use your GitHub Personal Access Token (PAT) securely, by storing it in AWS Secrets Manager and only fetching it into your running ECS container on a just-in-time basis, so your token only exists ephemerally in volatile memory within your running task. This is the default pattern that Gruntwork prefers to use when authenticating to your GitHub resources within your CI/CD pipelines.","bodyHTML":"

This Knowledge Base post discusses how ECS Deploy Runner and Gruntwork Pipelines use your GitHub Personal Access Token (PAT) securely, by storing it in AWS Secrets Manager and only fetching it into your running ECS container on a just-in-time basis, so your token only exists ephemerally in volatile memory within your running task. This is the default pattern that Gruntwork prefers to use when authenticating to your GitHub resources within your CI/CD pipelines.
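As a rough illustration of that just-in-time flow (not the exact Gruntwork implementation; the secret id and environment variable name below are assumptions), a CI step could fetch the PAT from Secrets Manager at run time and point git at it over HTTPS:

```bash
# Sketch only: pull a GitHub PAT from AWS Secrets Manager at runtime
# ("github/ci-pat" is an illustrative secret id) so the token never lives
# in the repo, the image, or the pipeline definition.
GITHUB_OAUTH_TOKEN="$(aws secretsmanager get-secret-value \
  --secret-id github/ci-pat \
  --query SecretString \
  --output text)"

# Rewrite HTTPS GitHub URLs to carry the token for this CI run only.
git config --global \
  url."https://${GITHUB_OAUTH_TOKEN}@github.com/".insteadOf \
  "https://github.com/"
```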

"}}} /> + +
+ + + From 9616afa4698a275767602f3f879d98e7a0794959 Mon Sep 17 00:00:00 2001 From: "docs-sourcer[bot]" <99042413+docs-sourcer[bot]@users.noreply.github.com> Date: Mon, 30 Jan 2023 17:25:10 +0000 Subject: [PATCH 3/7] Updated with the latest changes from the knowledge base discussions. --- docs/discussions/knowledge-base/646.mdx | 27 +++++++++++++++++++++++++ 1 file changed, 27 insertions(+) create mode 100644 docs/discussions/knowledge-base/646.mdx diff --git a/docs/discussions/knowledge-base/646.mdx b/docs/discussions/knowledge-base/646.mdx new file mode 100644 index 0000000000..ddaec02eda --- /dev/null +++ b/docs/discussions/knowledge-base/646.mdx @@ -0,0 +1,27 @@ +--- +hide_table_of_contents: true +hide_title: true +custom_edit_url: null +--- + +import CenterLayout from "/src/components/CenterLayout" +import GitHub from "/src/components/GitHub" + + + + + + +Knowledge Base +

Specify different name for the s3 bucket and the DNS domain of the website published with public-static-website module

+\r\n

Tracked in ticket #109837

\r\n\r\n","bodyHTML":"

Hi all 👋

\n

Is it possible to create a SPA website with a hosting S3 bucket name other than the website domain?

\n

I'm asking since the public-static-website module does not expose this functionality, so the maximum length of a domain is 63 chars minus len(\"-cloudfront-logs\"), i.e. 47 chars.

\n

Thx! 🙇

\n
\n\n

Tracked in ticket #109837

\n
","answer":{"body":"Hi! No, at the moment it is not possible as it would need to be exposed from module [`s3-static-website`](https://github.com/gruntwork-io/terraform-aws-static-assets/tree/main/modules/s3-static-website) first. Are you currently blocked by this? Is your domain name longer than 47 characters? If that's the case, we should start by filing a bug report at the [terraform-aws-static-assets](https://github.com/gruntwork-io/terraform-aws-static-assets) repo. ","bodyHTML":"

Hi! No, at the moment it is not possible as it would need to be exposed from module s3-static-website first. Are you currently blocked by this? Is your domain name longer than 47 characters? If that's the case, we should start by filing a bug report at the terraform-aws-static-assets repo.

"}}} /> + +
+ + + From 36472ed97a86b80317f2b86b919147a392d4e7dc Mon Sep 17 00:00:00 2001 From: "docs-sourcer[bot]" <99042413+docs-sourcer[bot]@users.noreply.github.com> Date: Tue, 31 Jan 2023 14:51:13 +0000 Subject: [PATCH 4/7] Updated with the latest changes from the knowledge base discussions. --- docs/discussions/knowledge-base/652.mdx | 27 +++++++++++++++++++++++++ 1 file changed, 27 insertions(+) create mode 100644 docs/discussions/knowledge-base/652.mdx diff --git a/docs/discussions/knowledge-base/652.mdx b/docs/discussions/knowledge-base/652.mdx new file mode 100644 index 0000000000..de785abaf0 --- /dev/null +++ b/docs/discussions/knowledge-base/652.mdx @@ -0,0 +1,27 @@ +--- +hide_table_of_contents: true +hide_title: true +custom_edit_url: null +--- + +import CenterLayout from "/src/components/CenterLayout" +import GitHub from "/src/components/GitHub" + + + + + + +Knowledge Base +

Use Shared VPC pattern with infrastructure-live

+\n

Tracked in ticket #109850

\n\n","bodyHTML":"

Do we have any documentation or tips for using the Shared VPC pattern, https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html, with the Reference Architecture/infrastructure-live? Are there other Gruntwork modules using AWS RAM?

\n
\n\n

Tracked in ticket #109850

\n
","answer":{"body":"We have [a guide to VPC sharing here](https://docs.gruntwork.io/guides/build-it-yourself/vpc/core-concepts/vpc-peering). \r\n\r\nWe also have the following examples available in our [terraform-aws-vpc](https://github.com/gruntwork-io/terraform-aws-vpc) repository: \r\n\r\n* [VPC peering cross-accounts](https://github.com/gruntwork-io/terraform-aws-vpc/tree/main/examples/vpc-peering-cross-accounts)\r\n* [VPC peering external](https://github.com/gruntwork-io/terraform-aws-vpc/tree/main/examples/vpc-peering-external)\r\n* [VPC peering](https://github.com/gruntwork-io/terraform-aws-vpc/tree/main/examples/vpc-peering)\r\n\r\n> Are there other Gruntwork modules using AWS RAM?\r\n\r\nPlease see [our KB post on determining whether or not Gruntwork offers a particular module.](https://github.com/gruntwork-io/knowledge-base/discussions/653)","bodyHTML":"

We have a guide to VPC sharing here.

\n

We also have the following examples available in our terraform-aws-vpc repository:

\n\n
\n

Are there other Gruntwork modules using AWS RAM?

\n
\n

Please see our KB post on determining whether or not Gruntwork offers a particular module.

"}}} /> + +
+ + + From ca7dacdc744b3b98053964e2a91994b96e8685ad Mon Sep 17 00:00:00 2001 From: "docs-sourcer[bot]" <99042413+docs-sourcer[bot]@users.noreply.github.com> Date: Tue, 31 Jan 2023 15:23:29 +0000 Subject: [PATCH 5/7] Updated with the latest changes from the knowledge base discussions. --- docs/discussions/knowledge-base/551.mdx | 27 +++++++++++++++++++++++++ 1 file changed, 27 insertions(+) create mode 100644 docs/discussions/knowledge-base/551.mdx diff --git a/docs/discussions/knowledge-base/551.mdx b/docs/discussions/knowledge-base/551.mdx new file mode 100644 index 0000000000..a158089afc --- /dev/null +++ b/docs/discussions/knowledge-base/551.mdx @@ -0,0 +1,27 @@ +--- +hide_table_of_contents: true +hide_title: true +custom_edit_url: null +--- + +import CenterLayout from "/src/components/CenterLayout" +import GitHub from "/src/components/GitHub" + + + + + + +Knowledge Base +

How do I import existing security group rules using terragrunt

+ (known after apply)\r\n - ipv6_cidr_blocks = [] -> null\r\n - prefix_list_ids = [] -> null\r\n + source_security_group_id = (known after apply)\r\n # (6 unchanged attributes hidden)\r\n }\r\n```\r\n\r\nThe IPs \"10.8.80.0/21\", \"10.8.88.0/21\", \"10.8.96.0/21\" are already added manually from the console. When I applied, the security group lost all the ingress rules. When I planned next time it showed the ingress rules ready to be applied. Running apply one more time recreated the rules properly, but I don't want to do that in my production environment - therefore trying the import option.\r\n\r\nCOMMAND:\r\n```\r\naws-vault exec stage -- terragrunt import aws_security_group_rule.ingress sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21\r\n```\r\n\r\n--------------------------------------------------------------------------------\r\n\r\nERROR:\r\n\r\n```\r\nError: resource address \"aws_security_group_rule.ingress\" does not exist in the configuration.\r\n\r\nBefore importing this resource, please create its configuration in the root module. For example:\r\n\r\nresource \"aws_security_group_rule\" \"ingress\" {\r\n (resource arguments)\r\n}\r\n\r\nERRO[0025] 1 error occurred:\r\n\t* exit status 1\r\n``` \r\n\r\n\r\n---\r\n\r\n\r\n

Tracked in ticket #109192

\r\n
\r\n","bodyHTML":"

Hello,

\n

I am currently attempting to import existing security group rules using the terragrunt import command. This worked without an issue when I did the same for a CloudWatch log group.

\n

However, with security group rules I am not able to do this. Can you please let me know what I am doing wrong here?

\n

OUTPUT OF TERRAGRUNT PLAN:

\n
module.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0] must be replaced\n-/+ resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\n      ~ cidr_blocks              = [ # forces replacement\n            # (2 unchanged elements hidden)\n            \"10.2.96.0/21\",\n          + \"10.8.80.0/21\",\n          + \"10.8.88.0/21\",\n          + \"10.8.96.0/21\",\n        ]\n      ~ id                       = \"sgrule-3614096971\" -> (known after apply)\n      - ipv6_cidr_blocks         = [] -> null\n      - prefix_list_ids          = [] -> null\n      + source_security_group_id = (known after apply)\n        # (6 unchanged attributes hidden)\n    }\n
\n

The IPs \"10.8.80.0/21\", \"10.8.88.0/21\", \"10.8.96.0/21\" are already added manually from the console. When I applied, the security group lost all the ingress rules. When I planned next time it showed the ingress rules ready to be applied. Running apply one more time recreated the rules properly, but I don't want to do that in my production environment - therefore trying the import option.

\n

COMMAND:

\n
aws-vault exec stage -- terragrunt import aws_security_group_rule.ingress sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21\n
\n
\n

ERROR:

\n
Error: resource address \"aws_security_group_rule.ingress\" does not exist in the configuration.\n\nBefore importing this resource, please create its configuration in the root module. For example:\n\nresource \"aws_security_group_rule\" \"ingress\" {\n   (resource arguments)\n}\n\nERRO[0025] 1 error occurred:\n\t* exit status 1\n
\n
\n\n

Tracked in ticket #109192

\n
","answer":{"body":"Hi @zackproser,\r\n\r\nI was able to use terragrunt state list to find the address. It returned:\r\n`module.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0]`, but the command only worked without [0].\r\n\r\n**PLAN BEFORE IMPORT**\r\n```\r\n # module.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0] must be replaced\r\n-/+ resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\r\n ~ cidr_blocks = [ # forces replacement\r\n # (2 unchanged elements hidden)\r\n \"10.2.96.0/21\",\r\n + \"10.8.80.0/21\",\r\n + \"10.8.88.0/21\",\r\n + \"10.8.96.0/21\",\r\n ]\r\n ~ id = \"sgrule-3614096971\" -> (known after apply)\r\n - ipv6_cidr_blocks = [] -> null\r\n - prefix_list_ids = [] -> null\r\n + source_security_group_id = (known after apply)\r\n # (6 unchanged attributes hidden)\r\n }\r\n```\r\n**IMPORT COMMAND THAT WORKE**D\r\n` aws-vault exec stage -- terragrunt import module.database.aws_security_group_rule.allow_connections_from_cidr_blocks sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21`\r\n\r\n```\r\nmodule.database.aws_security_group_rule.allow_connections_from_cidr_blocks: Importing from ID \"sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21\"...\r\nmodule.database.aws_security_group_rule.allow_connections_from_cidr_blocks: Import prepared!\r\n Prepared aws_security_group_rule for import\r\nmodule.database.aws_security_group_rule.allow_connections_from_cidr_blocks: Refreshing state... [id=sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21]\r\n\r\nImport successful!\r\n```\r\n\r\n**PLAN AFTER IMPORT**\r\nFrom the looks of it, it is trying to delete my import.\r\n\r\n```\r\n# module.database.aws_security_group_rule.allow_connections_from_cidr_blocks will be destroyed\r\n # (because resource uses count or for_each)\r\n - resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\r\n - cidr_blocks = [\r\n - \"10.8.80.0/21\",\r\n ] -> null\r\n - from_port = 5432 -> null\r\n - id = \"sgrule-3185997217\" -> null\r\n - ipv6_cidr_blocks = [] -> null\r\n - prefix_list_ids = [] -> null\r\n - protocol = \"tcp\" -> null\r\n - security_group_id = \"sg-01e69230e5c0f1169\" -> null\r\n - self = false -> null\r\n - to_port = 5432 -> null\r\n - type = \"ingress\" -> null\r\n }\r\n\r\n # module.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0] must be replaced\r\n-/+ resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\r\n ~ cidr_blocks = [ # forces replacement\r\n # (2 unchanged elements hidden)\r\n \"10.2.96.0/21\",\r\n + \"10.8.80.0/21\",\r\n + \"10.8.88.0/21\",\r\n + \"10.8.96.0/21\",\r\n ]\r\n ~ id = \"sgrule-3614096971\" -> (known after apply)\r\n - ipv6_cidr_blocks = [] -> null\r\n - prefix_list_ids = [] -> null\r\n + source_security_group_id = (known after apply)\r\n # (6 unchanged attributes hidden)\r\n }\r\n\r\n```\r\n","bodyHTML":"

Hi @zackproser,

\n

I was able to use terragrunt state list to find the address. It returned:
\nmodule.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0], but the command only worked without [0].

\n

PLAN BEFORE IMPORT

\n
  # module.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0] must be replaced\n-/+ resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\n      ~ cidr_blocks              = [ # forces replacement\n            # (2 unchanged elements hidden)\n            \"10.2.96.0/21\",\n          + \"10.8.80.0/21\",\n          + \"10.8.88.0/21\",\n          + \"10.8.96.0/21\",\n        ]\n      ~ id                       = \"sgrule-3614096971\" -> (known after apply)\n      - ipv6_cidr_blocks         = [] -> null\n      - prefix_list_ids          = [] -> null\n      + source_security_group_id = (known after apply)\n        # (6 unchanged attributes hidden)\n    }\n
\n

IMPORT COMMAND THAT WORKED
\n aws-vault exec stage -- terragrunt import module.database.aws_security_group_rule.allow_connections_from_cidr_blocks sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21

\n
module.database.aws_security_group_rule.allow_connections_from_cidr_blocks: Importing from ID \"sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21\"...\nmodule.database.aws_security_group_rule.allow_connections_from_cidr_blocks: Import prepared!\n  Prepared aws_security_group_rule for import\nmodule.database.aws_security_group_rule.allow_connections_from_cidr_blocks: Refreshing state... [id=sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21]\n\nImport successful!\n
\n

PLAN AFTER IMPORT
\nFrom the looks of it, it is trying to delete my import.

\n
# module.database.aws_security_group_rule.allow_connections_from_cidr_blocks will be destroyed\n  # (because resource uses count or for_each)\n  - resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\n      - cidr_blocks       = [\n          - \"10.8.80.0/21\",\n        ] -> null\n      - from_port         = 5432 -> null\n      - id                = \"sgrule-3185997217\" -> null\n      - ipv6_cidr_blocks  = [] -> null\n      - prefix_list_ids   = [] -> null\n      - protocol          = \"tcp\" -> null\n      - security_group_id = \"sg-01e69230e5c0f1169\" -> null\n      - self              = false -> null\n      - to_port           = 5432 -> null\n      - type              = \"ingress\" -> null\n    }\n\n  # module.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0] must be replaced\n-/+ resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\n      ~ cidr_blocks              = [ # forces replacement\n            # (2 unchanged elements hidden)\n            \"10.2.96.0/21\",\n          + \"10.8.80.0/21\",\n          + \"10.8.88.0/21\",\n          + \"10.8.96.0/21\",\n        ]\n      ~ id                       = \"sgrule-3614096971\" -> (known after apply)\n      - ipv6_cidr_blocks         = [] -> null\n      - prefix_list_ids          = [] -> null\n      + source_security_group_id = (known after apply)\n        # (6 unchanged attributes hidden)\n    }\n\n
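One hedged side note on the address syntax: shells such as zsh treat the `[0]` as a glob pattern, which is usually why the import only worked without the index; quoting the whole address passes it through intact, for example:

```bash
# Sketch: quote the indexed address so the shell does not expand the [0]
aws-vault exec stage -- terragrunt import \
  'module.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0]' \
  sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21
```

In the state shown above the `[0]` slot is already tracked, so this particular import would report the resource as already managed rather than remove the planned replacement; it only illustrates how to pass an indexed address from the shell.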
"}}} /> + +
+ + + From 7a7ae195519061b09be8f31d1a3236ecfc97e640 Mon Sep 17 00:00:00 2001 From: "docs-sourcer[bot]" <99042413+docs-sourcer[bot]@users.noreply.github.com> Date: Tue, 31 Jan 2023 20:21:34 +0000 Subject: [PATCH 6/7] Updated with the latest changes from the knowledge base discussions. --- docs/discussions/knowledge-base/227.mdx | 4 ++-- docs/discussions/knowledge-base/582.mdx | 4 ++-- docs/discussions/knowledge-base/653.mdx | 4 ++-- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/discussions/knowledge-base/227.mdx b/docs/discussions/knowledge-base/227.mdx index 9754b0f27f..9b4447c108 100644 --- a/docs/discussions/knowledge-base/227.mdx +++ b/docs/discussions/knowledge-base/227.mdx @@ -14,7 +14,7 @@ import GitHub from "/src/components/GitHub" Knowledge Base

Single vs Multiple Repos for Ref Architecture

-We've just started using the Reference Architecture and it seems very dependent on having one repository for everything, but this goes against everything that I've ever known - from my experience, repositories should be purposeful and not monolithic. Is there a way to have the Reference Architecture used with multiple repositories?

","answer":{"body":"You can reorganize the reference architecture in any way you like. E.g., you can create multiple `infrastructure-live` repos and copy paste the folder and common files to the new repos to split it out.\r\n\r\nNote that migrating to multi-repo has a few gotchas that you will want to keep in mind:\r\n\r\n- `terragrunt` currently doesn't support remote dependencies. What this means is that if a `terragrunt` module depends on anothe resource (e.g., EKS depending on VPC), then you will need to make sure that the repo contains both the code for VPC and EKS so that the dependency references can work. If you want to further split off, then just be aware that you will no longer be able to use `dependency` blocks to link the two, requiring either hard coding or a look up with the AWS CLI.\r\n- `terragrunt` currently doesn't support remote includes. What this means is that depending on how you split off the `infrastructure-live` repo, you may end up with code duplication as you will no longer be able to include common values via the `_envcommon` pattern.\r\n- The current state file path is dependent on relative paths between the root `terragrunt.hcl` and the child `terragrunt.hcl`. What this means is that depending on how you split the `infrastructure-live` repo, you may inadvertently update the path of the state file. You can fix this by doing a state migration to the new path, following https://github.com/gruntwork-io/knowledge-base/discussions/229\r\n\r\nYou may also want to take a look at https://github.com/gruntwork-io/terragrunt-infrastructure-modules-example#monorepo-vs-polyrepo to understand the tradeoffs between a monorepo setup and polyrepo setup.","bodyHTML":"

You can reorganize the reference architecture in any way you like. E.g., you can create multiple infrastructure-live repos and copy paste the folder and common files to the new repos to split it out.

\n

Note that migrating to multi-repo has a few gotchas that you will want to keep in mind:

\n\n

You may also want to take a look at https://github.com/gruntwork-io/terragrunt-infrastructure-modules-example#monorepo-vs-polyrepo to understand the tradeoffs between a monorepo setup and polyrepo setup.

+We've just started using the Reference Architecture and it seems very dependent on having one repository for everything, but this goes against everything that I've ever known - from my experience, repositories should be purposeful and not monolithic. Is there a way to have the Reference Architecture used with multiple repositories?

","answer":{"body":"You can reorganize the reference architecture in any way you like. E.g., you can create multiple `infrastructure-live` repos and copy paste the folder and common files to the new repos to split it out.\r\n\r\nNote that migrating to multi-repo has a few gotchas that you will want to keep in mind:\r\n\r\n- `terragrunt` currently doesn't support remote dependencies. What this means is that if a `terragrunt` module depends on anothe resource (e.g., EKS depending on VPC), then you will need to make sure that the repo contains both the code for VPC and EKS so that the dependency references can work. If you want to further split off, then just be aware that you will no longer be able to use `dependency` blocks to link the two, requiring either hard coding or a look up with the AWS CLI.\r\n- `terragrunt` currently doesn't support remote includes. What this means is that depending on how you split off the `infrastructure-live` repo, you may end up with code duplication as you will no longer be able to include common values via the `_envcommon` pattern.\r\n- The current state file path is dependent on relative paths between the root `terragrunt.hcl` and the child `terragrunt.hcl`. What this means is that depending on how you split the `infrastructure-live` repo, you may inadvertently update the path of the state file. You can fix this by doing a state migration to the new path, following https://github.com/gruntwork-io/knowledge-base/discussions/229\r\n\r\nYou may also want to take a look at https://github.com/gruntwork-io/terragrunt-infrastructure-modules-example#monorepo-vs-polyrepo to understand the tradeoffs between a monorepo setup and polyrepo setup.","bodyHTML":"

You can reorganize the reference architecture in any way you like. E.g., you can create multiple infrastructure-live repos and copy paste the folder and common files to the new repos to split it out.

\n

Note that migrating to multi-repo has a few gotchas that you will want to keep in mind:

\n
    \n
  • terragrunt currently doesn't support remote dependencies. What this means is that if a terragrunt module depends on another resource (e.g., EKS depending on VPC), then you will need to make sure that the repo contains both the code for VPC and EKS so that the dependency references can work. If you want to split further, just be aware that you will no longer be able to use dependency blocks to link the two, requiring either hard coding or a lookup with the AWS CLI (see the sketch below).
  • \n
  • terragrunt currently doesn't support remote includes. What this means is that depending on how you split off the infrastructure-live repo, you may end up with code duplication as you will no longer be able to include common values via the _envcommon pattern.
  • \n
  • The current state file path is dependent on relative paths between the root terragrunt.hcl and the child terragrunt.hcl. What this means is that depending on how you split the infrastructure-live repo, you may inadvertently update the path of the state file. You can fix this by doing a state migration to the new path, following #229
  • \n
\n

You may also want to take a look at https://github.com/gruntwork-io/terragrunt-infrastructure-modules-example#monorepo-vs-polyrepo to understand the tradeoffs between a monorepo setup and polyrepo setup.
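To make the first gotcha concrete, here is a minimal sketch of falling back to an AWS CLI lookup when the VPC lives in another repo and a dependency block is unavailable (the tag filter and input name are illustrative; run_cmd is a standard Terragrunt function):

```hcl
# terragrunt.hcl (sketch): look the value up with the AWS CLI instead of a
# dependency block; --terragrunt-quiet keeps the command's output out of the logs.
locals {
  vpc_id = run_cmd(
    "--terragrunt-quiet",
    "aws", "ec2", "describe-vpcs",
    "--filters", "Name=tag:Name,Values=app-vpc",
    "--query", "Vpcs[0].VpcId",
    "--output", "text"
  )
}

inputs = {
  vpc_id = local.vpc_id
}
```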

"}}} />
@@ -22,6 +22,6 @@ import GitHub from "/src/components/GitHub" diff --git a/docs/discussions/knowledge-base/582.mdx b/docs/discussions/knowledge-base/582.mdx index 54b13d5231..2fb403986c 100644 --- a/docs/discussions/knowledge-base/582.mdx +++ b/docs/discussions/knowledge-base/582.mdx @@ -14,7 +14,7 @@ import GitHub from "/src/components/GitHub" Knowledge Base

CIDR change to reference architecture

-\n

Tracked in ticket #109521

\n\n","bodyHTML":"

We currently have the Reference Architecture set up with the default CIDRs for dev, stage, and prod, and we want to change those CIDRs to reserved ones. Could that impact the whole architecture? Also, if we follow a per-account, per-application structure, what CIDR range would be required for dev, stage, and prod instead of a /16 range?

\n
\n\n

Tracked in ticket #109521

\n
","answer":{"body":"We answered a similar question in https://github.com/gruntwork-io/knowledge-base/discussions/600.","bodyHTML":"

We answered a similar question in #600.

"}}} /> +\n

Tracked in ticket #109521

\n\n","bodyHTML":"

We currently have the Reference Architecture set up with the default CIDRs for dev, stage, and prod, and we want to change those CIDRs to reserved ones. Could that impact the whole architecture? Also, if we follow a per-account, per-application structure, what CIDR range would be required for dev, stage, and prod instead of a /16 range?

\n
\n\n

Tracked in ticket #109521

\n
","answer":{"body":"We answered a similar question in https://github.com/gruntwork-io/knowledge-base/discussions/600.","bodyHTML":"

We answered a similar question in #600.

"}}} />
@@ -22,6 +22,6 @@ import GitHub from "/src/components/GitHub" diff --git a/docs/discussions/knowledge-base/653.mdx b/docs/discussions/knowledge-base/653.mdx index 5f96a5b64b..72f9e3db00 100644 --- a/docs/discussions/knowledge-base/653.mdx +++ b/docs/discussions/knowledge-base/653.mdx @@ -14,7 +14,7 @@ import GitHub from "/src/components/GitHub" Knowledge Base

How do I check if Gruntwork has a module for X technology or service?

- How can I tell if Gruntwork offers a module for a given AWS service or technology?\r\n\n\n---\n\n\n

Tracked in ticket #109851

\n
\n","bodyHTML":"

A customer asked:

\n
\n

How can I tell if Gruntwork offers a module for a given AWS service or technology?

\n
\n
\n\n

Tracked in ticket #109851

\n
","answer":{"body":"\r\n\r\nThanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology: \r\n\r\n# Option 1 - use our Infrastructure as Code Library's search page\r\n\r\nTry the search bar on [the official page for the Gruntwork Infrastructure as Code Library ](https://gruntwork.io/infrastructure-as-code-library/). If there are no matches for your search query, then it's likely Gruntwork does not currently offer such a module.\r\n\r\n![AWS-Infrastructure-as-Code-Library](https://user-images.githubusercontent.com/1769996/215534278-fa6cb144-00e3-4c05-a2f5-5ea6cc18e84c.png)\r\n\r\n# Option 2 - Use GitHub's search functionality \r\n\r\nUse GitHub search when logged into GitHub as an account that is a member of the `gruntwork-io` GitHub organization. You can also enter `org:gruntwork-io language:hcl ` - so, if, for example, you wanted to check for an AWS Neptune module, you could enter `org:gruntwork-io language:hcl neptune` - which would return no results at the time of this writing, indicating no matching module. \r\n\r\n![Search-·-org-gruntwork-io-language-hcl-neptune](https://user-images.githubusercontent.com/1769996/215534729-351b61ae-60f8-40ae-939b-6e988fcc0631.png)\r\n\r\nAs an alternative example, `org:gruntwork-io language:hcl ecs` does return a number of our modules including our ECS module and our ECS Deploy Runner modules.\r\n\r\n![Search-·-org-gruntwork-io-language-hcl-ecs](https://user-images.githubusercontent.com/1769996/215534840-165c77ef-8903-43e4-b1f2-7753be152f33.png)","bodyHTML":"

Thanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology:

\n

Option 1 - use our Infrastructure as Code Library's search page

\n

Try the search bar on the official page for the Gruntwork Infrastructure as Code Library. If there are no matches for your search query, then it's likely Gruntwork does not currently offer such a module.

\n

\"AWS-Infrastructure-as-Code-Library\"

\n

Option 2 - Use GitHub's search functionality

\n

Use GitHub search when logged into GitHub as an account that is a member of the gruntwork-io GitHub organization. You can also enter org:gruntwork-io language:hcl <search-term> - so, if, for example, you wanted to check for an AWS Neptune module, you could enter org:gruntwork-io language:hcl neptune - which would return no results at the time of this writing, indicating no matching module.

\n

\"Search-·-org-gruntwork-io-language-hcl-neptune\"

\n

As an alternative example, org:gruntwork-io language:hcl ecs does return a number of our modules including our ECS module and our ECS Deploy Runner modules.

\n

\"Search-·-org-gruntwork-io-language-hcl-ecs\"

"}}} /> + How can I tell if Gruntwork offers a module for a given AWS service or technology?\r\n\n\n---\n\n\n

Tracked in ticket #109851

\n
\n","bodyHTML":"

A customer asked:

\n
\n

How can I tell if Gruntwork offers a module for a given AWS service or technology?

\n
\n
\n\n

Tracked in ticket #109851

\n
","answer":{"body":"\r\n\r\nThanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology: \r\n\r\n# Option 1 - Use our [Repository Browser](https://gruntwork.io/repos/)\r\n\r\n![gruntwork-repo-browser](https://user-images.githubusercontent.com/1769996/215873791-7e47595f-a96e-4a2d-8f95-c8643b8e69b7.png)\r\n\r\n# Option 2 - use our Infrastructure as Code Library's search page\r\n\r\nTry the search bar on [the official page for the Gruntwork Infrastructure as Code Library ](https://gruntwork.io/infrastructure-as-code-library/). If there are no matches for your search query, then it's likely Gruntwork does not currently offer such a module.\r\n\r\n![AWS-Infrastructure-as-Code-Library](https://user-images.githubusercontent.com/1769996/215534278-fa6cb144-00e3-4c05-a2f5-5ea6cc18e84c.png)\r\n\r\n# Option 2 - Use GitHub's search functionality \r\n\r\nUse GitHub search when logged into GitHub as an account that is a member of the `gruntwork-io` GitHub organization. You can also enter `org:gruntwork-io language:hcl ` - so, if, for example, you wanted to check for an AWS Neptune module, you could enter `org:gruntwork-io language:hcl neptune` - which would return no results at the time of this writing, indicating no matching module. \r\n\r\n![Search-·-org-gruntwork-io-language-hcl-neptune](https://user-images.githubusercontent.com/1769996/215534729-351b61ae-60f8-40ae-939b-6e988fcc0631.png)\r\n\r\nAs an alternative example, `org:gruntwork-io language:hcl ecs` does return a number of our modules including our ECS module and our ECS Deploy Runner modules.\r\n\r\n![Search-·-org-gruntwork-io-language-hcl-ecs](https://user-images.githubusercontent.com/1769996/215534840-165c77ef-8903-43e4-b1f2-7753be152f33.png)","bodyHTML":"

Thanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology:

\n

Option 1 - Use our Repository Browser

\n

\"gruntwork-repo-browser\"

\n

Option 2 - use our Infrastructure as Code Library's search page

\n

Try the search bar on the official page for the Gruntwork Infrastructure as Code Library . If there are no matches for your search query, then it's likely Gruntwork does not currently offer such a module.

\n

\"AWS-Infrastructure-as-Code-Library\"

\n

Option 2 - Use GitHub's search functionality

\n

Use GitHub search when logged into GitHub as an account that is a member of the gruntwork-io GitHub organization. You can also enter org:gruntwork-io language:hcl <search-term> - so, if, for example, you wanted to check for an AWS Neptune module, you could enter org:gruntwork-io language:hcl neptune - which would return no results at the time of this writing, indicating no matching module.

\n

\"Search-·-org-gruntwork-io-language-hcl-neptune\"

\n

As an alternative example, org:gruntwork-io language:hcl ecs does return a number of our modules including our ECS module and our ECS Deploy Runner modules.

\n

\"Search-·-org-gruntwork-io-language-hcl-ecs\"

"}}} />
@@ -22,6 +22,6 @@ import GitHub from "/src/components/GitHub" From 37a5011a04915cfe34d1a9dfb164773d9e9a5f4f Mon Sep 17 00:00:00 2001 From: "docs-sourcer[bot]" <99042413+docs-sourcer[bot]@users.noreply.github.com> Date: Tue, 31 Jan 2023 21:58:56 +0000 Subject: [PATCH 7/7] Updated with the latest changes from the knowledge base discussions. --- docs/discussions/knowledge-base/653.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/discussions/knowledge-base/653.mdx b/docs/discussions/knowledge-base/653.mdx index 72f9e3db00..6047c5bab0 100644 --- a/docs/discussions/knowledge-base/653.mdx +++ b/docs/discussions/knowledge-base/653.mdx @@ -14,7 +14,7 @@ import GitHub from "/src/components/GitHub" Knowledge Base

How do I check if Gruntwork has a module for X technology or service?

- How can I tell if Gruntwork offers a module for a given AWS service or technology?\r\n\n\n---\n\n\n

Tracked in ticket #109851

\n
\n","bodyHTML":"

A customer asked:

\n
\n

How can I tell if Gruntwork offers a module for a given AWS service or technology?

\n
\n
\n\n

Tracked in ticket #109851

\n
","answer":{"body":"\r\n\r\nThanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology: \r\n\r\n# Option 1 - Use our [Repository Browser](https://gruntwork.io/repos/)\r\n\r\n![gruntwork-repo-browser](https://user-images.githubusercontent.com/1769996/215873791-7e47595f-a96e-4a2d-8f95-c8643b8e69b7.png)\r\n\r\n# Option 2 - use our Infrastructure as Code Library's search page\r\n\r\nTry the search bar on [the official page for the Gruntwork Infrastructure as Code Library ](https://gruntwork.io/infrastructure-as-code-library/). If there are no matches for your search query, then it's likely Gruntwork does not currently offer such a module.\r\n\r\n![AWS-Infrastructure-as-Code-Library](https://user-images.githubusercontent.com/1769996/215534278-fa6cb144-00e3-4c05-a2f5-5ea6cc18e84c.png)\r\n\r\n# Option 2 - Use GitHub's search functionality \r\n\r\nUse GitHub search when logged into GitHub as an account that is a member of the `gruntwork-io` GitHub organization. You can also enter `org:gruntwork-io language:hcl ` - so, if, for example, you wanted to check for an AWS Neptune module, you could enter `org:gruntwork-io language:hcl neptune` - which would return no results at the time of this writing, indicating no matching module. \r\n\r\n![Search-·-org-gruntwork-io-language-hcl-neptune](https://user-images.githubusercontent.com/1769996/215534729-351b61ae-60f8-40ae-939b-6e988fcc0631.png)\r\n\r\nAs an alternative example, `org:gruntwork-io language:hcl ecs` does return a number of our modules including our ECS module and our ECS Deploy Runner modules.\r\n\r\n![Search-·-org-gruntwork-io-language-hcl-ecs](https://user-images.githubusercontent.com/1769996/215534840-165c77ef-8903-43e4-b1f2-7753be152f33.png)","bodyHTML":"

Thanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology:

\n

Option 1 - Use our Repository Browser

\n

\"gruntwork-repo-browser\"

\n

Option 2 - use our Infrastructure as Code Library's search page

\n

Try the search bar on the official page for the Gruntwork Infrastructure as Code Library . If there are no matches for your search query, then it's likely Gruntwork does not currently offer such a module.

\n

\"AWS-Infrastructure-as-Code-Library\"

\n

Option 2 - Use GitHub's search functionality

\n

Use GitHub search when logged into GitHub as an account that is a member of the gruntwork-io GitHub organization. You can also enter org:gruntwork-io language:hcl <search-term> - so, if, for example, you wanted to check for an AWS Neptune module, you could enter org:gruntwork-io language:hcl neptune - which would return no results at the time of this writing, indicating no matching module.

\n

\"Search-·-org-gruntwork-io-language-hcl-neptune\"

\n

As an alternative example, org:gruntwork-io language:hcl ecs does return a number of our modules including our ECS module and our ECS Deploy Runner modules.

\n

\"Search-·-org-gruntwork-io-language-hcl-ecs\"

"}}} /> + How can I tell if Gruntwork offers a module for a given AWS service or technology?\r\n\n\n---\n\n\n

Tracked in ticket #109851

\n
\n","bodyHTML":"

A customer asked:

\n
\n

How can I tell if Gruntwork offers a module for a given AWS service or technology?

\n
\n
\n\n

Tracked in ticket #109851

\n
","answer":{"body":"\r\n\r\nThanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology: \r\n\r\n# Option 1 - Use our [Repository Browser](https://gruntwork.io/repos/)\r\n\r\n![gruntwork-repo-browser](https://user-images.githubusercontent.com/1769996/215873791-7e47595f-a96e-4a2d-8f95-c8643b8e69b7.png)\r\n\r\n# Option 2 - use our Infrastructure as Code Library's search page\r\n\r\nTry the search bar on [the official page for the Gruntwork Infrastructure as Code Library ](https://gruntwork.io/infrastructure-as-code-library/). If there are no matches for your search query, then it's likely Gruntwork does not currently offer such a module.\r\n\r\n![AWS-Infrastructure-as-Code-Library](https://user-images.githubusercontent.com/1769996/215534278-fa6cb144-00e3-4c05-a2f5-5ea6cc18e84c.png)\r\n\r\n# Option 3 - Use GitHub's search functionality \r\n\r\nUse GitHub search when logged into GitHub as an account that is a member of the `gruntwork-io` GitHub organization. You can also enter `org:gruntwork-io language:hcl ` - so, if, for example, you wanted to check for an AWS Neptune module, you could enter `org:gruntwork-io language:hcl neptune` - which would return no results at the time of this writing, indicating no matching module. \r\n\r\n![Search-·-org-gruntwork-io-language-hcl-neptune](https://user-images.githubusercontent.com/1769996/215534729-351b61ae-60f8-40ae-939b-6e988fcc0631.png)\r\n\r\nAs an alternative example, `org:gruntwork-io language:hcl ecs` does return a number of our modules including our ECS module and our ECS Deploy Runner modules.\r\n\r\n![Search-·-org-gruntwork-io-language-hcl-ecs](https://user-images.githubusercontent.com/1769996/215534840-165c77ef-8903-43e4-b1f2-7753be152f33.png)","bodyHTML":"

Thanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology:

\n

Option 1 - Use our Repository Browser

\n

\"gruntwork-repo-browser\"

\n

Option 2 - use our Infrastructure as Code Library's search page

\n

Try the search bar on the official page for the Gruntwork Infrastructure as Code Library . If there are no matches for your search query, then it's likely Gruntwork does not currently offer such a module.

\n

\"AWS-Infrastructure-as-Code-Library\"

\n

Option 3 - Use GitHub's search functionality

\n

Use GitHub search when logged into GitHub as an account that is a member of the gruntwork-io GitHub organization. You can also enter org:gruntwork-io language:hcl <search-term> - so, if, for example, you wanted to check for an AWS Neptune module, you could enter org:gruntwork-io language:hcl neptune - which would return no results at the time of this writing, indicating no matching module.

\n

\"Search-·-org-gruntwork-io-language-hcl-neptune\"

\n

As an alternative example, org:gruntwork-io language:hcl ecs does return a number of our modules including our ECS module and our ECS Deploy Runner modules.

\n

\"Search-·-org-gruntwork-io-language-hcl-ecs\"

"}}} />
@@ -22,6 +22,6 @@ import GitHub from "/src/components/GitHub"