From 7d479576f9d3807a7b8613352ef0689db43def49 Mon Sep 17 00:00:00 2001
From: "docs-sourcer[bot]"
<99042413+docs-sourcer[bot]@users.noreply.github.com>
Date: Mon, 30 Jan 2023 16:27:36 +0000
Subject: [PATCH 1/7] Updated with the latest changes from the knowledge base
discussions.
---
docs/discussions/knowledge-base/137.mdx | 4 ++--
docs/discussions/knowledge-base/653.mdx | 27 +++++++++++++++++++++++++
2 files changed, 29 insertions(+), 2 deletions(-)
create mode 100644 docs/discussions/knowledge-base/653.mdx
diff --git a/docs/discussions/knowledge-base/137.mdx b/docs/discussions/knowledge-base/137.mdx
index 31dd726dcc..c5c57528a7 100644
--- a/docs/discussions/knowledge-base/137.mdx
+++ b/docs/discussions/knowledge-base/137.mdx
@@ -14,7 +14,7 @@ import GitHub from "/src/components/GitHub"
Passing variables between Terragrunt and Terraform
I am trying to create an EC2 instance with an EBS volume attached to it. To create the EBS volume and attach it to the instance I need some Terraform code; the layout tree and files are shown below. Is it possible to take the output of the instance and pass that parameter/object to the ebs.tf file so that the EBS volume gets attached to the instance on the fly? Another question: is it possible for the *.tf files to use the variables defined in the .hcl files? Update: I have this almost working fully; in fact it does work, in that I can grab the instance id and attach an EBS volume to that instance, but at the same time the ebs directory tries to create a new EC2 instance. This is not what I want, as I have an ec2 directory looking after the entire EC2 instance creation. Not sure why the ebs/terragrunt.hcl file wants to create a new instance when I can successfully get the instance id returned from the ec2-linux-ui dependency. If I can fix that, we are done.
-
\nI have the code to create the EC2 instance using terragrunt, and it works fine.
\n-ec2
\n--terragrunt.hcl
\n--ebs.tf\n
\n
\navailability_zone = \"ap-southeast-2a\"
\nsize = 20
\n}
\ndevice_name = \"/dev/sdh\"
\nvolume_id = aws_ebs_volume.this.id
\ninstance_id = <instance.parameter.from.terragrunt>
\n}\n
\n
\nenvironment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))
\nenv = local.environment_vars.locals.environment
\nproject = local.project_vars.locals.project_name
\napplication = local.project_vars.locals.application_name
\npath = find_in_parent_folders()
\n}
\nsource = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"
\n}
\ndependency \"sg\" {
\nconfig_path = \"../sg-ec2\"
\nsecurity_group_id = \"sg-xxxxxxxxxxxx\"
\n}
\n}
\ndescription = \"UI 01 ${local.project} ${local.application} Instance for ${local.env}\"
\ninstance_type = \"c5.large\"
\nkey_name = \"key-test\" # This key is manually created
\nmonitoring = true
\niam_instance_profile = \"AmazonSSMRoleForInstancesQuickSetup\"
\nsubnet_id = \"subnet-xxxxxxxx\"
\nIf you call in terragrunt\n
\n
\nenvironment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))
\nenv = local.environment_vars.locals.environment
\n}
\nlocals {
\nenvironment = \"dev\"
\n}
\nCan you call this variable in the .tf file in some way?
\n│ ├── ebs.tf
\n│ └── terragrunt.hcl
\n└── ec2-instance
\n└── terragrunt.hclvariable \"instance_id\" {\n type = string\n}\n\nresource \"aws_ebs_volume\" \"this\" {\n availability_zone = \"ap-southeast-2a\"\n size = 20\n}\n\nresource \"aws_volume_attachment\" \"this\" {\n device_name = \"/dev/sdh\"\n volume_id = aws_ebs_volume.this.id\n instance_id = \"${var.instance_id}\"\n}\nlocals { }\n\ninclude {\n path = find_in_parent_folders()\n}\n\nterraform {\n source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\n}\n\ndependency \"ec2-linux-ui\" {\n config_path = \"../ec2-linux-ui\"\n mock_outputs = {\n instance_id = \"12345\"\n }\n}\n\ninputs = {\n instance_id = dependency.ec2-linux-ui.outputs.id\n}\nlocals {\n environment_vars = read_terragrunt_config(find_in_parent_folders(\"env.hcl\"))\n env = local.environment_vars.locals.environment\n project_vars = read_terragrunt_config(find_in_parent_folders(\"project.hcl\"))\n project = local.project_vars.locals.project_name\n application = local.project_vars.locals.application_name\n}\n\ninclude {\n path = find_in_parent_folders()\n}\n\nterraform {\n source = \"git::git@github.com:terraform-aws-modules/terraform-aws-ec2-instance.git?ref=v3.3.0\"\n}\n\n# Need the output of the correct Security Group ID to attach to the RDS instance\ndependency \"sg\" {\n config_path = \"../sg-ec2\"\n\n mock_outputs = {\n security_group_id = \"sg-xxxxxxxxxx\"\n }\n}\n\ninputs = {\n\n # Naming\n name = \"ui01-${local.project}-${local.application}-${local.env}\"\n description = \"UI 01 ${local.project} ${local.application} Instance for ${local.env}\"\n\n # EC2 Config\n ami = \"ami-0bd2230cfb28832f7\" # Amazon Linux kernel 5.10\n instance_type = \"c5.large\"\n key_name = \"xxxxxxx\" \n monitoring = true\n\n\n # Networking\n vpc_id = \"xxxxxxx\" \n subnet_id = \"xxxxxxxx\"\n\n # Security Group\n vpc_security_group_ids = [\"${dependency.sg.outputs.security_group_id}\"]\n\n}\n
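One likely cause of the extra EC2 instance: in the ebs/terragrunt.hcl above, terraform { source = ... } points at the upstream terraform-aws-ec2-instance module, so Terragrunt deploys that whole module in addition to the resources in ebs.tf. A minimal hedged sketch of the fix, pointing the source at the directory that contains ebs.tf (also note the mock output key should match the output name referenced in inputs):

```hcl
# ebs/terragrunt.hcl -- sketch; point the source at the local EBS code
# rather than the upstream EC2 module (which would create a second instance)
terraform {
  source = "${get_terragrunt_dir()}"
}

dependency "ec2-linux-ui" {
  config_path = "../ec2-linux-ui"

  # The mock key should match the output name referenced below ("id"),
  # otherwise the mock cannot stand in for the real output at plan time
  mock_outputs = {
    id = "i-12345"
  }
}

inputs = {
  # Forwarded to variable "instance_id" declared in ebs.tf; this is also how
  # values from .hcl locals reach *.tf files -- via inputs, which Terraform
  # receives as ordinary variables
  instance_id = dependency.ec2-linux-ui.outputs.id
}
```

Terragrunt locals themselves are never visible to Terraform directly; only values forwarded through inputs (or TF_VAR_ environment variables) reach the *.tf files.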
A customer asked:
\n\n\nHow can I tell if Gruntwork offers a module for a given AWS service or technology?
\n
Thanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology:
\nTry the search bar on the official page for the Gruntwork Infrastructure as Code Library. If there are no matches for your search query, it's likely Gruntwork does not currently offer such a module.
\n\nUse GitHub search while logged into GitHub as an account that is a member of the gruntwork-io GitHub organization, and enter org:gruntwork-io language:hcl <search-term>. For example, to check for an AWS Neptune module, you could enter org:gruntwork-io language:hcl neptune, which returned no results at the time of this writing, indicating no matching module.
As an alternative example, org:gruntwork-io language:hcl ecs does return a number of our modules including our ECS module and our ECS Deploy Runner modules.
Since there are many ways to do this, what is the best way to authenticate to the Gruntwork modules on GitHub.com in my CI/CD pipelines? Also, how does this impact my authentication to other module sources, such as my internal VCS?
\nThis Knowledge Base post discusses how ECS Deploy Runner and Gruntwork Pipelines use your GitHub Personal Access Token (PAT) securely, by storing it in AWS Secrets Manager and only fetching it into your running ECS container on a just-in-time basis, so your token only exists ephemerally in volatile memory within your running task. This is the default pattern that Gruntwork prefers to use when authenticating to your GitHub resources within your CI/CD pipelines.
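As an illustration of the just-in-time fetch described above (command shape only; the secret name here is hypothetical), the running task can pull the PAT straight into a process variable so it never lands on disk:

```shell
# Fetch the PAT from Secrets Manager into an environment variable at task
# start; the token exists only in the running process's memory
GITHUB_OAUTH_TOKEN="$(aws secretsmanager get-secret-value \
  --secret-id github-pat \
  --query SecretString \
  --output text)"
```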
"}}} />
Hi all,
Is it possible to create a SPA website with a hosting S3 bucket name other than the website domain?
\nI'm asking since the public-static-website module does not expose this functionality, so the maximum length of a domain is equal to 63 chars - len(\"-cloudfront-logs\") = 47 chars.
Thx!
Hi! No, at the moment it is not possible, as it would first need to be exposed from the s3-static-website module. Are you currently blocked by this? Is your domain name longer than 47 characters? If so, we should start by filing a bug report in the terraform-aws-static-assets repo.
Do we have any documentation or tips for using the Shared VPC pattern (https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html) with the Reference Architecture/infrastructure-live? Are there other Gruntwork modules using AWS RAM?
\nWe have a guide to VPC sharing here.
\nWe also have the following examples available in our terraform-aws-vpc repository:
\n\n\n\nAre there other Gruntwork modules using AWS RAM?
\n
Please see our KB post on determining whether or not Gruntwork offers a particular module.
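As background on the mechanics of the pattern, VPC sharing itself is driven by AWS RAM: the owning account creates a resource share, associates the participant account, and associates the subnets to share. A hedged sketch (all names and the account id are illustrative, and aws_subnet.private is an assumed existing resource):

```hcl
# Illustrative AWS RAM share of a VPC subnet with another account in the org
resource "aws_ram_resource_share" "vpc_share" {
  name                      = "shared-vpc-subnets"
  allow_external_principals = false
}

resource "aws_ram_principal_association" "app_account" {
  resource_share_arn = aws_ram_resource_share.vpc_share.arn
  principal          = "123456789012" # participant account id (illustrative)
}

resource "aws_ram_resource_association" "private_subnet" {
  resource_share_arn = aws_ram_resource_share.vpc_share.arn
  resource_arn       = aws_subnet.private.arn # assumes an aws_subnet.private
}
```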
"}}} />
Hello,
\nI am currently attempting to import existing security group rules using terragrunt import command. This worked without an issue when I did the same for a cloudwatch log group.
\nHowever, with security group rules I am not able to do this. Can you please let me know what I am doing wrong here?
\nOUTPUT OF TERRAGRUNT PLAN:
\nmodule.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0] must be replaced\n-/+ resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\n ~ cidr_blocks = [ # forces replacement\n # (2 unchanged elements hidden)\n \"10.2.96.0/21\",\n + \"10.8.80.0/21\",\n + \"10.8.88.0/21\",\n + \"10.8.96.0/21\",\n ]\n ~ id = \"sgrule-3614096971\" -> (known after apply)\n - ipv6_cidr_blocks = [] -> null\n - prefix_list_ids = [] -> null\n + source_security_group_id = (known after apply)\n # (6 unchanged attributes hidden)\n }\nThe IPs \"10.8.80.0/21\", \"10.8.88.0/21\", \"10.8.96.0/21\" are already added manually from the console. When I applied, the security group lost all the ingress rules. When I planned next time it showed the ingress rules ready to be applied. Running apply one more time recreated the rules properly, but I don't want to do that in my production environment - therefore trying the import option.
\nCOMMAND:
\naws-vault exec stage -- terragrunt import aws_security_group_rule.ingress sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21\nERROR:
\nError: resource address \"aws_security_group_rule.ingress\" does not exist in the configuration.\n\nBefore importing this resource, please create its configuration in the root module. For example:\n\nresource \"aws_security_group_rule\" \"ingress\" {\n (resource arguments)\n}\n\nERRO[0025] 1 error occurred:\n\t* exit status 1\nHi @zackproser,
\nI was able to use terragrunt state list to find the address. It returned:
\nmodule.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0], but the command only worked without [0].
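For reference, the indexed address returned by terragrunt state list can usually be imported as well; the brackets just need to be quoted so the shell does not try to interpret them. A hedged sketch of the command shape, reusing the ids from this thread:

```shell
# Quote the resource address so "[0]" is passed through to Terraform
# rather than being expanded by the shell
aws-vault exec stage -- terragrunt import \
  'module.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0]' \
  sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21
```

Importing into the indexed address matters here: importing into the un-indexed address leaves Terraform planning to destroy that object, because the resource in configuration uses count.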
PLAN BEFORE IMPORT
\n # module.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0] must be replaced\n-/+ resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\n ~ cidr_blocks = [ # forces replacement\n # (2 unchanged elements hidden)\n \"10.2.96.0/21\",\n + \"10.8.80.0/21\",\n + \"10.8.88.0/21\",\n + \"10.8.96.0/21\",\n ]\n ~ id = \"sgrule-3614096971\" -> (known after apply)\n - ipv6_cidr_blocks = [] -> null\n - prefix_list_ids = [] -> null\n + source_security_group_id = (known after apply)\n # (6 unchanged attributes hidden)\n }\nIMPORT COMMAND THAT WORKED
\n aws-vault exec stage -- terragrunt import module.database.aws_security_group_rule.allow_connections_from_cidr_blocks sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21
module.database.aws_security_group_rule.allow_connections_from_cidr_blocks: Importing from ID \"sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21\"...\nmodule.database.aws_security_group_rule.allow_connections_from_cidr_blocks: Import prepared!\n Prepared aws_security_group_rule for import\nmodule.database.aws_security_group_rule.allow_connections_from_cidr_blocks: Refreshing state... [id=sg-01e69230e5c0f1169_ingress_tcp_5432_5432_10.8.80.0/21]\n\nImport successful!\nPLAN AFTER IMPORT
\nFrom the looks of it, it is trying to delete my import.
# module.database.aws_security_group_rule.allow_connections_from_cidr_blocks will be destroyed\n # (because resource uses count or for_each)\n - resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\n - cidr_blocks = [\n - \"10.8.80.0/21\",\n ] -> null\n - from_port = 5432 -> null\n - id = \"sgrule-3185997217\" -> null\n - ipv6_cidr_blocks = [] -> null\n - prefix_list_ids = [] -> null\n - protocol = \"tcp\" -> null\n - security_group_id = \"sg-01e69230e5c0f1169\" -> null\n - self = false -> null\n - to_port = 5432 -> null\n - type = \"ingress\" -> null\n }\n\n # module.database.aws_security_group_rule.allow_connections_from_cidr_blocks[0] must be replaced\n-/+ resource \"aws_security_group_rule\" \"allow_connections_from_cidr_blocks\" {\n ~ cidr_blocks = [ # forces replacement\n # (2 unchanged elements hidden)\n \"10.2.96.0/21\",\n + \"10.8.80.0/21\",\n + \"10.8.88.0/21\",\n + \"10.8.96.0/21\",\n ]\n ~ id = \"sgrule-3614096971\" -> (known after apply)\n - ipv6_cidr_blocks = [] -> null\n - prefix_list_ids = [] -> null\n + source_security_group_id = (known after apply)\n # (6 unchanged attributes hidden)\n }\n\nYou can reorganize the reference architecture in any way you like. E.g., you can create multiple infrastructure-live repos and copy paste the folder and common files to the new repos to split it out.
Note that migrating to multi-repo has a few gotchas that you will want to keep in mind:
\n- Terragrunt currently doesn't support remote dependencies. If a terragrunt module depends on another resource (e.g., EKS depending on VPC), you will need to make sure the repo contains the code for both the VPC and EKS so that the dependency references can work. If you want to split further, be aware that you will no longer be able to use dependency blocks to link the two, requiring either hard coding or a lookup with the AWS CLI.
- Terragrunt currently doesn't support remote includes. Depending on how you split off the infrastructure-live repo, you may end up with code duplication, as you will no longer be able to include common values via the _envcommon pattern.
- The state file path is typically derived from the relative path between the root terragrunt.hcl and the child terragrunt.hcl. Depending on how you split the infrastructure-live repo, you may inadvertently change the path of the state file. You can fix this by doing a state migration to the new path, following #229.

You may also want to take a look at https://github.com/gruntwork-io/terragrunt-infrastructure-modules-example#monorepo-vs-polyrepo to understand the tradeoffs between a monorepo setup and a polyrepo setup.
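The state-path gotcha exists because, in the common root terragrunt.hcl pattern, the S3 state key embeds the module's folder path. An illustrative remote_state block (bucket name and region are placeholders):

```hcl
# root terragrunt.hcl (illustrative) -- the state key embeds the folder
# path, so moving a module between repos or folders changes where its
# state lives unless you migrate it
remote_state {
  backend = "s3"
  config = {
    bucket = "my-terraform-state" # illustrative bucket name
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "us-east-1"
  }
}
```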
"}}} />
"}}} />We currently have the Reference Architecture set up with the default CIDRs for dev, stage, and prod, and we want to change those CIDRs to reserved CIDRs. Could that impact the whole architecture? Also, if we are following a per-account, per-application structure, what CIDR range would be required for dev, stage, and prod instead of the /16 range?
\nWe answered a similar question in #600.
"}}} />
"}}} />A customer asked:
\n\n\nHow can I tell if Gruntwork offers a module for a given AWS service or technology?
\n
Thanks for your question. There are a couple of ways to confirm whether or not Gruntwork currently offers a module for a given service or technology:
\nTry the search bar on the official page for the Gruntwork Infrastructure as Code Library. If there are no matches for your search query, it's likely Gruntwork does not currently offer such a module.
\n\nUse GitHub search while logged into GitHub as an account that is a member of the gruntwork-io GitHub organization, and enter org:gruntwork-io language:hcl <search-term>. For example, to check for an AWS Neptune module, you could enter org:gruntwork-io language:hcl neptune, which returned no results at the time of this writing, indicating no matching module.
As an alternative example, org:gruntwork-io language:hcl ecs does return a number of our modules including our ECS module and our ECS Deploy Runner modules.