diff --git a/docs/intro/dev-portal/create-account.md b/_docs-sources/developer-portal/create-account.md similarity index 85% rename from docs/intro/dev-portal/create-account.md rename to _docs-sources/developer-portal/create-account.md index cbd8f60ccd..61aab83c05 100644 --- a/docs/intro/dev-portal/create-account.md +++ b/_docs-sources/developer-portal/create-account.md @@ -31,10 +31,8 @@ For security, sign in emails expire after 10 minutes. You can enter your email a If you are the admin for your organization, you'll be prompted to confirm details including your company address and phone number, as well as a billing email. Provide the required information and click **Continue** to finish signing in. +## Related Knowledge Base Discussions - +- [Invitation to the Developer Portal not received](https://github.com/orgs/gruntwork-io/discussions/716) +- [Trouble logging into the Portal with email](https://github.com/orgs/gruntwork-io/discussions/395) +- [How can the email associated with an account be changed?](https://github.com/orgs/gruntwork-io/discussions/714) diff --git a/docs/intro/dev-portal/invite-team.md b/_docs-sources/developer-portal/invite-team.md similarity index 89% rename from docs/intro/dev-portal/invite-team.md rename to _docs-sources/developer-portal/invite-team.md index 26a30fe97b..056a74aa05 100644 --- a/docs/intro/dev-portal/invite-team.md +++ b/_docs-sources/developer-portal/invite-team.md @@ -40,10 +40,8 @@ This change will take effect immediately. Any team members who have accepted the The number of licenses available depends on the level of your subscription. You can see the total number of licenses as well as the number remaining at the top of the [Team](https://app.gruntwork.io/team) page. If you need to invite more team members than your current license limit allows, you may request additional licenses, which are billed at a standard monthly rate. To do so, contact sales@gruntwork.io. +## Related Knowledge Base Discussions - +- [Invitation to the Developer Portal not received](https://github.com/orgs/gruntwork-io/discussions/716) +- [Trouble logging into the Portal with email](https://github.com/orgs/gruntwork-io/discussions/395) +- [How can the email associated with an account be changed?](https://github.com/orgs/gruntwork-io/discussions/714) diff --git a/_docs-sources/intro/dev-portal/link-github-id.md b/_docs-sources/developer-portal/link-github-id.md similarity index 57% rename from _docs-sources/intro/dev-portal/link-github-id.md rename to _docs-sources/developer-portal/link-github-id.md index 31f9435ba7..e4cbd124df 100644 --- a/_docs-sources/intro/dev-portal/link-github-id.md +++ b/_docs-sources/developer-portal/link-github-id.md @@ -1,8 +1,6 @@ -# Link Your GitHub ID +# Link Your GitHub Account -Gruntwork provides all code included in your subscription through GitHub. You’ll need to link a GitHub ID to your account in order to access the IaC Library on GitHub. Follow the steps below to link your GitHub ID. - -## Linking your GitHub account +Gruntwork provides all code included in your subscription through GitHub. You need to link a GitHub ID to your Gruntwork Developer Portal account in order to access the IaC Library on GitHub. Follow the steps below to link your GitHub ID. 1. First, sign in to the [Gruntwork Developer Portal](https://app.gruntwork.io). 2. Click the **Link my GitHub Account** button in the notice at the top of the home page, or the corresponding button located in your [Profile Settings](https://app.gruntwork.io/settings/profile). 
@@ -10,13 +8,13 @@ Gruntwork provides all code included in your subscription through GitHub. You’ 4. After being redirected back to the Gruntwork Developer Portal, click the **Accept My Invite** button. This will take you to GitHub again, where you can accept an invitation to join the Gruntwork organization. (You can ignore the corresponding invite email you receive from GitHub.) 5. Click **Join Gruntwork** to accept the invitation and access the IaC Library. -Once you’ve linked your account, the notice on the home page will disappear and you’ll find your GitHub ID recorded in your [Profile Settings](https://app.gruntwork.io/settings/profile). Going forward, you’ll have access to all private repositories included in your subscription. If you haven’t yet done so, we strongly recommend [adding an SSH key](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) to your GitHub account. An SSH key is required to access the Gruntwork IaC library without adding a password in your Terraform code. +:::info + +Once you’ve linked your account, the notice on the home page will disappear and you’ll find your GitHub ID recorded in your [Profile Settings](https://app.gruntwork.io/settings/profile). Going forward, you’ll have access to all private repositories included in your subscription. If you haven’t done so yet, we strongly recommend [adding an SSH key](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) to your GitHub account. An SSH key is required to access the Gruntwork IaC library without adding a password in your Terraform code. -## Linking a new GitHub account +::: -To link a new GitHub ID, you’ll first have to unlink the current one. Although uncommon, note that any private forks of Gruntwork repos will be deleted when you unlink your account. +## Related Knowledge Base Discussions -1. Sign in to the Gruntwork Developer Portal and navigate to your [Profile Settings](https://app.gruntwork.io/settings/profile). -2. Click **Unlink** in the description under the **GitHub Account** section. -3. Click **Yes, Unlink My Account** in the confirmation dialog that appears. -4. Proceed with the [steps above](#linking-your-github-account) to link a new GitHub account *using a private/incognito browser window*. (This guarantees you’ll have an opportunity to specify the new account you wish to link.) +- [I have linked my GitHub Account but do not have code access](https://github.com/orgs/gruntwork-io/discussions/715) +- [How can I change my GitHub account (unlink/link)?](https://github.com/orgs/gruntwork-io/discussions/713) diff --git a/_docs-sources/guides/working-with-code/using-modules.md b/_docs-sources/guides/working-with-code/using-modules.md index 01298349f1..da10b33077 100644 --- a/_docs-sources/guides/working-with-code/using-modules.md +++ b/_docs-sources/guides/working-with-code/using-modules.md @@ -162,7 +162,7 @@ This code pulls in a module using Terraform’s native `module` functionality. F The `source` URL in the code above uses a Git URL with SSH authentication (see [module sources](https://www.terraform.io/docs/modules/sources.html) for all the types of `source` URLs you can use). 
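To make the shape of such a URL concrete, here is a minimal sketch of a `module` block that uses a Git/SSH `source`; the repository, module path, and version tag below are illustrative placeholders, not a specific Gruntwork release:

```hcl
module "example" {
  # Hypothetical repository, module path, and tag — substitute the module and release you actually use.
  source = "git::git@github.com:gruntwork-io/terraform-aws-messaging.git//modules/sns?ref=v0.10.0"

  # Module inputs (defined in the module's variables.tf) go here.
}
```

The `//` separates the repository address from the path to the module inside it, and `ref` pins the exact release tag to use.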
-If you followed the [SSH key instructions](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) when [linking your GitHub ID](/intro/dev-portal/link-github-id.md), this will allow you to access private repos in the Gruntwork +If you followed the [SSH key instructions](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) when [linking your GitHub ID](/developer-portal/link-github-id.md), this will allow you to access private repos in the Gruntwork Infrastructure as Code Library without having to hard-code a password in your Terraform code. #### Versioned URL diff --git a/_docs-sources/iac/getting-started/accessing-the-code.md b/_docs-sources/iac/getting-started/accessing-the-code.md new file mode 100644 index 0000000000..eecdb8862f --- /dev/null +++ b/_docs-sources/iac/getting-started/accessing-the-code.md @@ -0,0 +1,9 @@ +# Accessing the code + +Gruntwork provides all code included in your subscription to the Infrastructure as Code (IaC) library through GitHub. To gain access to the IaC Library, you must first [create an account in the Developer Portal](../../developer-portal/create-account.md). Once you have an account, you must [link your GitHub ID](../../developer-portal/link-github-id) to your Developer Portal account to gain access to the IaC Library. + +## Accessing Modules and Services in the IaC library + +Once you have gained access to the Gruntwork IaC library, you can view the source code for our modules and services in [GitHub](https://github.com/orgs/gruntwork-io/repositories). For a full list of modules and services, check the [Library Reference](../../iac/reference/index.md). + +In GitHub, each IaC repository is prefixed with `terraform-aws-` then a high level description of the modules it contains. For example, Amazon SNS, SQS, MSK, and Kinesis are located in the `terraform-aws-messaging` repository. In each repository, the modules are located in the `modules` directory. Example usage and tests are provided for each module in the `examples` and `tests` directories, respectively. diff --git a/_docs-sources/iac/getting-started/deploying-a-module.md b/_docs-sources/iac/getting-started/deploying-a-module.md new file mode 100644 index 0000000000..9ed81cada7 --- /dev/null +++ b/_docs-sources/iac/getting-started/deploying-a-module.md @@ -0,0 +1,254 @@ +# Deploying your first module + +[Modules](../overview/modules.md) allow you to define an interface to create one or many resources in the cloud or on-premise, similar to how in object oriented programming you can define a class that may have different attribute values across many instances. + +This tutorial will teach you how to develop a Terraform module that deploys an AWS Lambda function. We will create the required file structure, define an AWS Lambda function and AWS IAM role as code, then plan and apply the resource in an AWS account. Then, we’ll verify the deployment by invoking the Lambda using the AWS CLI. Finally, we'll clean up the resources we create to avoid unexpected costs. 
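As a rough preview of the workflow described above, the commands below sketch the end-to-end flow. The working directory and function name are assumptions for illustration only — the tutorial defines the actual names in later steps.

```bash
# Preview of the flow this tutorial walks through (directory and function name are placeholders)
cd terraform-aws-gw-lambda-tutorial
terraform init       # download providers and initialize the working directory
terraform plan       # preview the IAM role and Lambda function to be created
terraform apply      # create the resources in your AWS account
aws lambda invoke --function-name <your-lambda-name> response.json   # verify the deployment
terraform destroy    # clean up to avoid unexpected costs
```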
+
+## Prerequisites
+- An AWS account with permissions to create the necessary resources
+- An [AWS Identity and Access Management](https://aws.amazon.com/iam/) (IAM) user or role with permissions to create AWS IAM roles and Lambda functions
+- [AWS Command Line Interface](https://aws.amazon.com/cli/) (AWS CLI) installed on your local machine
+- [Terraform](https://www.terraform.io) installed on your local machine
+
+## Create the module
+
+In this section, you’ll create a Terraform module that can create an AWS Lambda function and IAM role. This module will include three files — `main.tf`, which will contain the resource definitions, `variables.tf`, which specifies the possible inputs to the module, and `outputs.tf`, which specifies the values that can be used to pass references to attributes from the resources in the module.
+
+This module could be referenced many times to create any number of AWS Lambda functions and IAM roles.
+
+### Create a basic file structure
+First, create the directories and files that will contain the Terraform configuration.
+
+```bash
+mkdir -p terraform-aws-gw-lambda-tutorial/modules/lambda
+touch terraform-aws-gw-lambda-tutorial/modules/lambda/main.tf
+touch terraform-aws-gw-lambda-tutorial/modules/lambda/variables.tf
+touch terraform-aws-gw-lambda-tutorial/modules/lambda/outputs.tf
+```
+
+### Define the module resources
+
+Next, define the resources that should be created by the module. This is where you define the resource-level blocks provided by Terraform. For this module, we need an AWS Lambda function and an IAM role that will be used by the Lambda function.
+
+Paste the following snippet in `terraform-aws-gw-lambda-tutorial/modules/lambda/main.tf`.
+```hcl title="terraform-aws-gw-lambda-tutorial/modules/lambda/main.tf"
+resource "aws_iam_role" "lambda_role" {
+  name = "${var.lambda_name}-role"
+
+  assume_role_policy = < Create an account with our Developer Portal to access the IaC Library and training courses. - 
- -### Gruntwork IaC Library - -A battle-tested, production-grade _catalog_ of infrastructure code that contains the core "building blocks" of infrastructure. It includes everything you’ll need to set up: - -- A Multi-account structure -- An infrastructure CI/CD Pipeline -- Networking and VPCs -- App orchestration — ECS, EC2, Kubernetes, and more -- Data storage — Aurora, Elasticache, RDS, and more -- Best-practice security baselines -- _and more…_ - -### Gruntwork Compliance - -An optional _catalog extension_ that contains building blocks that implement various compliance standards. Today we support CIS compliance; SOC 2 is coming soon, and we plan on adding additional standards in the future. - -### Support - -Gruntwork offers basic and paid support options: - -- **[Community support](/support#get-support).** Get help via a [Gruntwork Community Slack](https://gruntwork-community.slack.com/archives/CHH9Y3Z62) and our [Knowledge Base](https://github.com/gruntwork-io/knowledge-base/discussions). -- **[Paid support](/support#paid-support-tiers).** Get help via email, a private Slack channel, or scheduled Zoom calls, with response times backed by SLAs. - -## What you provide - -Gruntwork products and services can help you quickly achieve world-class infrastructure. However, we aren’t a consulting company. To succeed, you (or your trusted DevOps consultant/contractor) must commit to learning how to leverage our products for your use cases, making any additional customizations, and deploying or migrating your apps and services. - -### Learn how to use our products - -To work effectively with our products, you’ll need to understand our opinionated stance on DevOps best practices and how to apply it for your purposes. You'll also need to learn how to use the Gruntwork products themselves. Our guides and support remain available to assist you in these endeavors. - -### Implement the “last mile” - -Gruntwork products strike a balance between opinionatedness and configurability. They’ll get you most of the way to your goal, but you may need to make some customizations to suit your use case. You may also need to adapt your apps and services to run in your new infrastructure. Our [Knowledge Base](https://github.com/gruntwork-io/knowledge-base/discussions) and [Community Slack Channel](https://gruntwork-community.slack.com/archives/CHH9Y3Z62) provide great resources to assist you in this effort. diff --git a/_docs-sources/intro/overview/intro-to-gruntwork.md b/_docs-sources/intro/overview/intro-to-gruntwork.md index 85d8bac99a..291e619296 100644 --- a/_docs-sources/intro/overview/intro-to-gruntwork.md +++ b/_docs-sources/intro/overview/intro-to-gruntwork.md @@ -1,17 +1,12 @@ -# Introduction to Gruntwork +# What we do -### What is Gruntwork? +**Gruntwork is a “DevOps accelerator” that gets you to a world-class DevOps setup leveraging infrastructure-as-code in just a few days.** -**Gruntwork is a "DevOps accelerator" designed to make it possible to achieve a world-class DevOps setup based completely on infrastructure-as-code in just a few days.** +Gruntwork works best for teams building new infrastructure (“greenfield”), either from scratch or as part of a migration. However, it can also be used by teams with existing infrastructure (“brownfield”) if they have sufficient DevOps experience. All Gruntwork products exist within a [framework](/guides/production-framework) we’ve devised specifically to emphasize DevOps industry best-practices and maximize your team’s efficiency. 
-All Gruntwork products exist within a framework we’ve devised specifically to emphasize DevOps industry best-practices and maximize your team’s efficiency. In the [how it works](how-it-works.md) section, we’ll cover how Gruntwork can help your team implement your infrastructure using this framework. +All Gruntwork products are built on and fully compatible with [Terraform](https://terraform.io). The one exception to this is the [Gruntwork Reference Architecture](/refarch/whats-this/what-is-a-reference-architecture), which uses [Terragrunt](https://terragrunt.gruntwork.io/) (one of our open source tools) to implement an end-to-end architecture. -Gruntwork works best for teams building new infrastructure ("greenfield"), either from scratch or as part of a migration. However, it can also be used by teams with existing infrastructure ("brownfield") if they have sufficient DevOps experience. +There are two fundamental ways to engage Gruntwork: -### Supported public clouds - -Gruntwork products focus on Amazon Web Services (AWS). Support for other public clouds such as GCP and Azure may be added in the future. - -### Gruntwork uses Terraform - -All Gruntwork products are built on and fully compatible with [open source Terraform](https://terraform.io). The one exception to this is the [Gruntwork Reference Architecture](https://gruntwork.io/reference-architecture/), which uses [Terragrunt](https://terragrunt.gruntwork.io/) (one of our open source tools) to implement an end-to-end architecture. +1. **Gruntwork builds your architecture.** We generate a Reference Architecture based on your needs, deploy into your AWS accounts, and give you 100% of the code. Since you have all the code, you can extend, enhance, and customize the environment exactly according to your needs. See [the docs](/refarch/whats-this/what-is-a-reference-architecture) for more information about our Reference Architecture. +2. **Build it yourself.** The [Gruntwork IaC library](/iac/overview/) empowers you to construct your own bespoke architecture in record time. By mix-and-matching our [modules](/iac/overview/modules) and [services](/iac/overview/services) you can quickly define a custom architecture to suit your needs, all with the confidence of having world-class, battle-tested code running under the hood. diff --git a/_docs-sources/intro/overview/prerequisites.md b/_docs-sources/intro/overview/prerequisites.md new file mode 100644 index 0000000000..8c8565348b --- /dev/null +++ b/_docs-sources/intro/overview/prerequisites.md @@ -0,0 +1,32 @@ +# What you need to know + +Gruntwork accelerates your infrastructure. Our products allow you to treat your infrastructure like you do your application: as code, complete with pull requests and peer reviews. Our products require a _variety of skills_ to maintain and customize to your needs over time. + +## Terraform + +Our modules are all built using [Terraform](https://www.terraform.io/). You should be comfortable using Terraform for Infrastructure as Code. + +## Terragrunt + +If you purchase the Reference Architecture, it is delivered in [Terragrunt](https://terragrunt.gruntwork.io/), our open source wrapper around Terraform which allows you to + +1. Separate your monolithic terraform state files into smaller ones to speed up your plans and applies +2. 
Keep your infrastructure code DRY + +See [How to Manage Multiple Environments with Terraform](https://blog.gruntwork.io/how-to-manage-multiple-environments-with-terraform-32c7bc5d692) and our [Terragrunt Quick start](https://terragrunt.gruntwork.io/docs/getting-started/quick-start/) documentation for more details. + +## Git and GitHub + +Our code is stored in Git repositories in GitHub. You must have a working knowledge of Git via SSH (`add`, `commit`, `pull`, branches, et cetera) and GitHub (Pull requests, issues, et cetera) in order to interface with the Reference Architecture and our code library. + +## Knowledge of Go, Shell, and Python + +Some of the modules we have leverage Go, Shell scripting and Python. To customize these to suit your needs, you may need to dive in and make changes. In addition, all of our automated testing is written in Go, so familiarity with Go is highly recommended. + +## AWS + +To be successful with the infrastructure provisioned by us, you must have a decent working knowledge of AWS, its permissions schemes ([IAM](https://aws.amazon.com/iam/)), services, and APIs. While having AWS certification is not required, it is certainly helpful. Since Gruntwork is an accelerator for your AWS infrastructure and not an abstraction layer in front of AWS, knowledge of AWS and the services you intend to use is required. + +## Containerization tools like Docker and Packer + +We create Docker containers throughout our code library, and use them heavily in our [Gruntwork Pipelines](/pipelines/overview/) product, an important piece of the Reference Architecture. Containerization is an important part of helping many companies scale in the cloud, and we’re no exception. Familiarity with creating docker images and pushing and pulling them from repositories is required. Likewise, we use Packer to build AMIs. Understanding Packer will enable you to build your own AMIs for your own infrastructure and make modifications to the infrastructure we provision for you. diff --git a/_docs-sources/intro/overview/reference-architecture-prerequisites-guide.md b/_docs-sources/intro/overview/reference-architecture-prerequisites-guide.md deleted file mode 100644 index b8cee102da..0000000000 --- a/_docs-sources/intro/overview/reference-architecture-prerequisites-guide.md +++ /dev/null @@ -1,85 +0,0 @@ -# Reference Architecture Prerequisites Guide - -Gruntwork accelerates your infrastructure with our [Reference Architecture](https://gruntwork.io/reference-architecture/). This framework allows you to treat your infrastructure like you do your application: as code, complete with pull requests and peer reviews. The Reference Architecture requires a variety of skills to maintain it and customize it to your needs over time. - -Here's what your team will need so you can succeed with the Gruntwork Reference Architecture: - -
-Knowledge of Terraform
-
-Our modules are all built using [Terraform](https://www.terraform.io/), and the Reference Architecture uses our modules to build out your infrastructure. You should be comfortable using Terraform for Infrastructure as Code.
-
-Knowledge of Terragrunt or willingness to learn
-
-The Reference Architecture is delivered in [Terragrunt](https://terragrunt.gruntwork.io/), our open source wrapper around Terraform which allows you to
-
-1. Separate your monolithic terraform state files into smaller ones to speed up your plans and applies
-2. Keep your infrastructure code DRY
-
-See [How to Manage Multiple Environments with Terraform](https://blog.gruntwork.io/how-to-manage-multiple-environments-with-terraform-32c7bc5d692) and our [Terragrunt Quick start](https://terragrunt.gruntwork.io/docs/getting-started/quick-start/) documentation for more details.
-
-Knowledge of git and GitHub
-
-Our Reference Architecture and the modules that it consumes are all stored in Git repositories in GitHub. You must have a working knowledge of Git via SSH (`add`, `commit`, `pull`, branches, et cetera) and GitHub (Pull requests, issues, et cetera) in order to interface with the Reference Architecture and our code library.
-
-Knowledge of AWS and its services
-
-The Reference Architecture is provisioned in [AWS](https://aws.amazon.com/). To be successful with the infrastructure provisioned by us, you must have a decent working knowledge of AWS, its permissions schemes ([IAM](https://aws.amazon.com/iam/)), services, and APIs. While having AWS certification is not required, it is certainly helpful. Since Gruntwork is an accelerator for your AWS infrastructure and not an abstraction layer in front of AWS, knowledge of AWS and the services you intend to use is required.
-
-Knowledge of Gruntwork’s Limitations
-
-During the process of setting up the AWS accounts for your reference architecture, our tooling will automatically submit quota increase requests to AWS as a support ticket. These AWS quota increases are required to install the components of the Reference Architecture. Often, AWS will approve these requests quickly. Sometimes these support tickets will take some time for AWS to resolve. Unfortunately, some of these requests may be denied by AWS’s support team. Gruntwork can work with you to get these requests approved, but this can take some time, and that time is mostly out of our control.
-
-Gruntwork focuses on helping you launch and maintain your infrastructure as code. Understanding and using the AWS services that our code provisioned is up to you. Since Gruntwork is an accelerator for your AWS infrastructure and not an abstraction layer in front of AWS, knowledge of AWS and the services you intend to use is required.
-
-Knowledge of Go, Shell, and Python
-
-Some of the modules we have leverage Go, Shell scripting and Python. To customize these to suit your needs, you may need to dive in and make changes. In addition, all of our automated testing is written in Go, so familiarity with Go is highly recommended.
-
-Knowledge of containerization tools like Docker and Packer
-
-We create Docker containers throughout our code library, and use them heavily in our [Gruntwork Pipelines](https://gruntwork.io/pipelines/) product, an important piece of the Reference Architecture. Containerization is an important part of helping many companies scale in the cloud, and we’re no exception. Familiarity with creating docker images and pushing and pulling them from repositories is required. Likewise, we use Packer to build AMIs. Understanding Packer will enable you to build your own AMIs for your own infrastructure and make modifications to the infrastructure we provision for you.
-
-Brand new AWS accounts
-
-With our Gruntwork Wizard, we help you create new AWS accounts, which we’ll then use to build your Reference Architecture. All accounts must be completely empty. At this time we do not support “brown field” deployments of the Reference Architecture.
-
-Time to learn
-
-Gruntwork accelerates you down the road towards having your entire AWS cloud infrastructure captured as Infrastructure as Code. The Reference Architecture will set you up with a solid foundation with our [Landing Zone](https://gruntwork.io/landing-zone-for-aws/) and help you regularly modify your infrastructure with [Gruntwork Pipelines](https://gruntwork.io/pipelines/). Infrastructure and Infrastructure as Code is complex, and while we strive to make it as easy as possible for you, you will need time to understand the twists and turns of your infrastructure in order to tune it to fully suit your needs.
-
\ No newline at end of file diff --git a/_docs-sources/intro/overview/shared-responsibility-model.md b/_docs-sources/intro/overview/shared-responsibility-model.md deleted file mode 100644 index 0693e0c5b4..0000000000 --- a/_docs-sources/intro/overview/shared-responsibility-model.md +++ /dev/null @@ -1,45 +0,0 @@ -# Shared Responsibility Model - -:::note - -The implementation and maintenance of Gruntwork products in AWS is a shared responsibility between Gruntwork and the customer. - -::: - -## Gruntwork is responsible for: - -1. Providing a tested, updated, and richly featured collection of infrastructure code for the customer to use. -1. Maintaining a healthy Knowledge Base community where other engineers (including Grunts) post & answer questions. -1. For Pro / Enterprise Support customers: Answering questions via email and Slack. -1. For Reference Architecture customers: - 1. Generating the initial Reference Architecture based on our customer’s selections of available configurations. This includes: - 1. Our implementation of Landing Zone - 1. A complete sample app with underlying database and caching layer - 1. The Gruntwork Pipeline for deploying changes to infrastructure - 1. An overview of how to use the Reference Architecture - 1. Deploying the initial Reference Architecture into the customer’s brand new empty AWS accounts. - 1. Delivering the initial Reference Architecture Infrastructure as Code to the customer. - 1. Providing resources to the customer for deeply understanding the inner workings of the Reference Architecture. -1. For CIS customers: - 1. Providing IaC libraries to the CIS customer that correctly implement CIS requirements and restrictions. - 1. For aspects of the CIS AWS Foundations Benchmark where those requirements cannot be met by modules, but require human intervention, provide instructions on manual steps the customer must take to meet the requirements. - 1. For CIS Reference Architecture customers, deploying a Reference Architecture and providing access to infrastructure code that implements the CIS AWS Foundations Benchmark requirements out-of-the-box, wherever possible. - -## As a Gruntwork customer, you are responsible for: - -1. Staffing appropriately (as described in the [Prerequisites Guide](/intro/overview/reference-architecture-prerequisites-guide/)) to maintain and customize the modules and (if applicable) the Reference Architecture and to understand how the Gruntwork product works so that changes can be made to customize it to the customer’s needs. - 1. Raise limitations of Gruntwork modules as a feature request or a pull request. - 1. N.B., Gruntwork does not guarantee any turn-around time on getting features built or PRs reviewed and merged. Gruntwork modules must also be applicable to a wide range of companies, so we will be selective about features added and pull requests accepted. -1. Adding additional Infrastructure as Code to customize it for your company. -1. Communicating with AWS to fix account issues and limitations beyond Gruntwork’s control (quotas, account verification, et cetera). -1. For Reference Architecture customers: - 1. Following all provided manual steps in the Reference Architecture documents where automation is not possible. There are certain steps a Reference Architecture customer must perform on their own. Please keep an eye out for emails from Gruntwork engineers when you are configuring your Reference Architecture form for - deployment. - 1. 
Extending and customizing Gruntwork Pipelines beyond the basic CI/CD pipeline that Gruntwork has provided to suit your deployment requirements. - 1. Designing and implementing your AWS infrastructure beyond the Reference Architecture. - 1. Understanding and awareness of AWS resource costs for all infrastructure deployed into your AWS accounts ([Knowledge Base #307](https://github.com/gruntwork-io/knowledge-base/discussions/307) for Ref Arch baseline). - 1. Once deployed, maintaining the Reference Architecture to keep it secure and up to date. - 1. Keeping the Reference Architecture secure in accordance with their company needs. - 1. Understanding and accepting the security implications of any changes made to the Reference Architecture. - 1. Monitoring Gruntwork repositories for updates and new releases and applying them as appropriate. - 1. Maintaining all compliance standards after the Reference Architecture has been delivered. diff --git a/_docs-sources/intro/overview/what-we-provide.md b/_docs-sources/intro/overview/what-we-provide.md new file mode 100644 index 0000000000..a12027bea1 --- /dev/null +++ b/_docs-sources/intro/overview/what-we-provide.md @@ -0,0 +1,41 @@ +# What we provide + +## Gruntwork IaC Library + +A battle-tested, production-grade _[catalog](/iac/reference/)_ of infrastructure code that contains the core “building blocks” of infrastructure. It includes everything you’ll need to set up: + +- A Multi-account structure +- An infrastructure CI/CD Pipeline +- Networking and VPCs +- App orchestration — ECS, EC2, Kubernetes, and more +- Data storage — Aurora, Elasticache, RDS, and more +- Best-practice security baselines +- _and [more…](/iac/reference/)_ + +## Gruntwork Compliance + +An optional _catalog extension_ that contains building blocks that correctly implement CIS compliance standards. For aspects of the CIS AWS Foundations Benchmark where those requirements cannot be met by modules, but require human intervention, we provide instructions on manual steps you must take to meet the requirements. + +:::note + +For CIS Reference Architecture customers, we deploy a Reference Architecture and provide access to infrastructure code that implements the CIS AWS Foundations Benchmark requirements out-of-the-box, wherever possible. + +::: + +## Gruntwork Reference Architecture + +An optional end-to-end, multi-account architecture that Gruntwork deploys into your brand new AWS accounts that includes: + +- Our implementation of Landing Zone +- A complete sample app with underlying database and caching layer +- The Gruntwork Pipeline for deploying changes to infrastructure +- An overview of how to use the Reference Architecture + +Once the infrastructure is deployed, Gruntwork engineers deliver the full Infrastructure as Code to you. + +## Support + +Gruntwork offers basic and paid support options: + +- **[Community support](/support#get-support).** Get help via a [Gruntwork Community Slack](https://gruntwork-community.slack.com/archives/CHH9Y3Z62) and our [Knowledge Base](https://github.com/gruntwork-io/knowledge-base/discussions) where we maintain healthy communities where other engineers (including Grunts) post & answer questions. +- **[Paid support](/support#paid-support-tiers).** Get help via email or a private Slack channel with response times backed by SLAs. 
diff --git a/_docs-sources/intro/overview/what-you-provide.md b/_docs-sources/intro/overview/what-you-provide.md new file mode 100644 index 0000000000..9db8aca398 --- /dev/null +++ b/_docs-sources/intro/overview/what-you-provide.md @@ -0,0 +1,51 @@
+# What you provide
+
+Gruntwork products and services can help you quickly achieve world-class infrastructure. However, we aren’t a consulting company. To succeed, you (or your trusted DevOps consultant/contractor) must commit to learning how to leverage our products for your use cases, making any additional customizations, and deploying or migrating your apps and services.
+
+## Your team
+
+You must be appropriately staffed in order to maintain and customize the modules, services, and (if applicable) the Reference Architecture.
+
+## Time to learn
+
+With Gruntwork, you can accelerate your journey towards capturing your AWS cloud infrastructure as Infrastructure as Code. Although our aim is to simplify this intricate process, gaining a comprehensive understanding of your infrastructure’s complexities and tailoring it to your specific needs will require a significant investment of time and effort on your part. Our [product documentation](/products) and [support](/support) remain available to assist you in these endeavors.
+
+## Implement the “last mile”
+
+Gruntwork products strike a balance between being opinionated and configurable. They’ll get you most of the way to your goal, but you may need to make some customizations to suit your use case. You may also need to adapt your apps and services to run in your new infrastructure by adding or customizing Infrastructure as Code to meet your company’s requirements. Our [Knowledge Base](https://github.com/gruntwork-io/knowledge-base/discussions) and [Community Slack Channel](https://gruntwork-community.slack.com/archives/CHH9Y3Z62) provide great resources to assist you in this effort.
+
+If you notice a limitation or bug in Gruntwork modules, we greatly appreciate and welcome [customer PRs](/iac/support/contributing), or you can raise the issue to our attention via [bug or feature requests](/support#share-feedback).
+
+:::note
+
+Gruntwork does not guarantee any turn-around time on getting features built or PRs reviewed and merged. Gruntwork modules must also be applicable to a wide range of companies, so we will be selective about features added and pull requests accepted.
+
+:::
+
+## Talk to AWS if needed
+
+You’ll have to communicate with AWS to fix account issues and limitations beyond Gruntwork’s control (quotas, account verification, et cetera).
+
+## If you purchased a Reference Architecture
+
+### Perform any required manual steps
+
+Follow all provided manual steps in the Reference Architecture documents where automation is not possible. There are certain steps a Reference Architecture customer must perform on their own. Please keep an eye out for emails from Gruntwork engineers when you are configuring your Reference Architecture form for
+deployment.
+
+### Customize Pipelines
+
+Extend and customize Gruntwork Pipelines beyond the basic CI/CD pipeline that Gruntwork has provided to suit your deployment requirements.
+
+### Understand your AWS costs
+
+Understand and stay aware of AWS resource costs for all infrastructure deployed into your AWS accounts (see [Knowledge Base #307](https://github.com/gruntwork-io/knowledge-base/discussions/307) for the Ref Arch baseline). 
+ +### Maintain your Reference Architecture + +Once deployed, Gruntwork hands the Reference Architecture over to your team. You should expect to keep it secure and up to date by: + +- Keeping the Reference Architecture secure in accordance with your company needs. +- Understanding and accepting the security implications of any changes your team makes to the Reference Architecture. +- Monitoring Gruntwork repositories for updates and new releases and applying them as appropriate. +- Maintaining all compliance standards after the Reference Architecture has been delivered. diff --git a/_docs-sources/landing-zone/index.md b/_docs-sources/landing-zone/index.md new file mode 100644 index 0000000000..5a3417fed9 --- /dev/null +++ b/_docs-sources/landing-zone/index.md @@ -0,0 +1 @@ +# Landing Zone diff --git a/_docs-sources/guides/stay-up-to-date/patcher/index.md b/_docs-sources/patcher/index.md similarity index 100% rename from _docs-sources/guides/stay-up-to-date/patcher/index.md rename to _docs-sources/patcher/index.md diff --git a/_docs-sources/pipelines/how-it-works/index.md b/_docs-sources/pipelines/how-it-works/index.md new file mode 100644 index 0000000000..051d397032 --- /dev/null +++ b/_docs-sources/pipelines/how-it-works/index.md @@ -0,0 +1,80 @@ +# How it works + +![Gruntwork Pipelines Architecture](/img/guides/build-it-yourself/pipelines/tftg-pipeline-architecture.png) + +## External CI Tool + +Gruntwork Pipelines has been validated with [CircleCI](https://circleci.com/), [GitHub Actions](https://github.com/features/actions), and [GitLab](https://about.gitlab.com/). However, it can be used with any external CI/CD tool. +The role of the CI/CD tool is to trigger jobs inside Gruntwork Pipelines. +We have [example configurations](https://github.com/gruntwork-io/terraform-aws-service-catalog/tree/master/examples/for-production/infrastructure-live/_ci/scripts) +that identify changed terraform modules and call the Gruntwork Pipelines invoker Lambda function. + +By default, the invoker Lambda function is run by a CLI tool called `infrastructure-deployer` from within your CI tool. + +## ECS Deploy Runner + +The [ECS Deploy Runner Module](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner) +is a flexible framework for running pre-defined, locked-down jobs in an isolated +ECS task. It serves as the foundation for Gruntwork Pipelines. +The components described below work together to trigger jobs, validate them, run them, and stream +the logs back to your CI tool as if they were running locally. + +### Infrastructure Deployer CLI + +The [Infrastructure Deployer CLI tool](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/infrastructure-deployer) +serves as the interface between your chosen CI tool and Gruntwork Pipelines. It is used to trigger +jobs in the deploy-runner. Primarily, it calls instances of the invoker lambda described in the next section. + +Usage: + +`infrastructure-deployer --aws-region AWS_REGION [other options] -- CONTAINER_NAME SCRIPT ARGS...` + +When launching a task, you may optionally set the following useful flags: + +- `max-wait-time` (default 2h0m0s) — timeout length for the action, this can be any golang parseable string +- `task-cpu` — A custom number of CPU units to allocate to the ECS task +- `task-memory` — A custom number of memory units to allocate to the ECS task + +To get the list of supported containers and scripts, pass in the `--describe-containers` option. 
For example: + +`infrastructure-deployer --describe-containers --aws-region us-west-2` + +This will list all the containers and the scripts for each container that can be invoked using the invoker function of +the ECS deploy runner stack deployed in `us-west-2`. + + +### Invoker Lambda + +The [Invoker Lambda](https://github.com/gruntwork-io/terraform-aws-ci/blob/main/modules/ecs-deploy-runner/invoker-lambda/invoker/index.py) +is an AWS Lambda function written in Python that acts as the AWS entrypoint for your pipeline. +It has 3 primary roles: + +1. Serving as a gatekeeper for pipelines runs, determining if a particular command is allowed to be run, and if the arguments are valid +2. Creating ECS tasks that run terraform, docker, or packer commands +3. Shipping deployment logs back to your CI/CD tool + +### Standard Configuration + +The ECS deploy runner is flexible and can be configured for many tasks. The [standard configuration](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner-standard-configuration) +is a set of ECS task definitions that we ship with Pipelines by default. +Once you have your pipeline deployed you can [modify the ECS Deploy Runner configuration](../maintain/extending.md) as you like. +The configuration defines what scripts are accepted by the invoker Lambda and which arguments may be provided. The invoker Lambda +will reject _any_ script or argument not defined in the ECS Deploy Runner configuration. +The default tasks are defined below. + +#### Docker Image Builder (Kaniko) + +The Docker Image Builder task definition allows CI jobs to build docker images. +This ECS task uses an open source library called [Kaniko](https://github.com/GoogleContainerTools/kaniko) to enable docker builds from within a docker container. +We provide a [Docker image](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner/docker/kaniko) based on Kaniko for this task. + +#### Packer AMI Builder + +The Packer AMI Builder task definition allows CI jobs to build AMIs using HashiCorp Packer. This task runs in +a [Docker image](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner/docker/deploy-runner) we provide. + +#### Terraform Planner and Applier + +The Terraform Planner task definition and Terraform Applier task definition are very similar. They allow CI jobs to +plan and apply Terraform and Terragrunt code. These tasks run in the same [Docker image](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner/docker/deploy-runner) +as the AMI builder. diff --git a/_docs-sources/pipelines/maintain/extending.md b/_docs-sources/pipelines/maintain/extending.md new file mode 100644 index 0000000000..55e9b1ed10 --- /dev/null +++ b/_docs-sources/pipelines/maintain/extending.md @@ -0,0 +1,137 @@ +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +# Extending your Pipeline + +Pipelines can be extended in several ways: +- Adding repositories to supporting building Docker images for many applications +- Updating which branches can kick off which jobs +- Adding additional build scripts that can run in Pipelines +- Adding permissions to Pipelines + + +## Adding a repository + +Pipelines has separate configurations for each type of job that can be performed (e.g., building a docker image, running terraform plan, running terraform apply). 
An allow-list of repos and branches is defined for each job type, which can be updated to extend your usage of pipelines to additional application repositories. + +This portion of the guide focuses on building Docker images for application repos. If you have repositories for which you would like to run `terraform plan` or `terraform apply` jobs, similar steps can be followed, modifying the appropriate task configurations. + + + + +If you’ve deployed Pipelines as a part of your Reference Architecture, we recommend following the guide on [how to deploy your apps into the Reference Architecture](../../guides/reference-architecture/example-usage-guide/deploy-apps/intro) to learn how to define a module for your application. + +To allow Pipelines jobs to be started by events in your repository, open `shared//mgmt/ecs-deploy-runner/terragrunt.hcl` and update `docker_image_builder_config.allowed_repos` to include the HTTPS Git URL of the application repo for which you would like to deploy Docker images. + +Since pipelines [cannot update itself](./updating.md), you must run `terragrunt plan` and `terragrunt apply` manually to deploy the change from your local machine. Run `terragrunt plan` to inspect the changes that will be made to your pipeline. Once the changes have been reviewed, run `terragrunt apply` to deploy the changes. + + + + +If you’ve deployed Pipelines as a standalone framework using the `ecs-deploy-runner` service in the Service Catalog, you will need to locate the file in which you’ve defined a module block sourcing the `ecs-deploy-runner` service. + +Once the `ecs-deploy-runner` module block is located, update the `allowed_repos` list in the `docker_image_builder_config` variable to include the HTTPS Git URL of the application repo for which you would like to deploy Docker images. + +Refer to the [Variable Reference](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#reference) section for the service in the Library Reference for full configuration details. + +Run `terraform plan` to inspect the changes that will be made to your pipeline. Once the changes have been reviewed, run `terraform apply` to deploy the changes. To deploy the application to ECS or EKS you will need to deploy a task definition (ECS) or Deployment (EKS) that references the newly built image. + + + +### Adding infrastructure deployer to the new repo + +Pipelines can be triggered from GitHub events in many repositories. In order to configure Pipelines for the new repository, you need to add a step in your CI/CD configuration for the repository that uses the `infrastructure-deployer` CLI tool to trigger Docker image builds. + +```bash +export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text) +export DEPLOY_RUNNER_REGION=$(aws configure get region) +export ECR_REPO_URL="${ACCOUNT_ID}.dkr.ecr.${DEPLOY_RUNNER_REGION}.amazonaws.com" +export DOCKER_TAG=$(git rev-parse --short HEAD) +export REPOSITORY_NAME="example" +export GITHUB_ORG="example-org" + +infrastructure-deployer --aws-region "us-east-1" -- docker-image-builder build-docker-image \ + --repo "https://github.com/${GITHUB_ORG}/${REPOSITORY_NAME}" \ + --ref "origin/main" \ + --context-path "path/to/directory/with/dockerfile/" \ + --docker-image-tag "${ECR_REPO_URL}/${REPOSITORY_NAME}:${DOCKER_TAG}" \ +``` + +## Specifying branches that can be deployed + +Pipelines can be configured to only allow jobs to be performed on specific branches. 
For example, a common configuration is to allow `terraform plan` or `terragrunt plan` jobs for pull requests, and only allow `terraform apply` or `terragrunt apply` to run on merges to the main branch. + +Depending on your use case, you may need to modify the `allowed_apply_git_refs` attribute to update the allow-list of branch names that can kick off the `plan` and `apply` jobs. + +For example, a common configuration for `apply` jobs is to specify that this job can only run on the `main` branch: +```tf +allowed_apply_git_refs = ["main", "origin/main"] +``` + + + + +If you’ve deployed Pipelines as a part of your Reference Architecture, open `shared//mgmt/ecs-deploy-runner/terragrunt.hcl` and update the values in the `allowed_apply_git_refs` attribute for the job configuration you would like to modify (either `terraform_planner_config` or `terraform_applier_config`). + +Run `terragrunt plan` to inspect the changes that will be made to your pipeline. Once the changes have been reviewed, run `terragrunt apply` to deploy the changes. + + + + +If you’ve deployed Pipelines as a standalone framework using the `ecs-deploy-runner` service in the Service Catalog, you will need to locate the file in which you’ve defined a module block sourcing the `ecs-deploy-runner` service. + +By default, the `ecs-deploy-runner` service from the Service Catalog allows any git ref to be applied. After you locate the module block for `ecs-deploy-runner`, modify the `allowed_apply_git_refs` attribute for the job configuration that you would like to modify (either `terraform_planner_config` or `terraform_applier_config`). + +Run `terraform plan` to inspect the changes that will be made to your pipeline. Once the changes have been reviewed, run `terraform apply` to deploy the changes. + + + +## Adding a new AWS Service + +If you are expanding your usage of AWS to include an AWS service you’ve never used before, you will need to grant each job sufficient permissions to access that service. Pipelines executes in ECS tasks running in your AWS account(s). Each task (terraform planner, applier, docker builder, ami builder) has a distinct execution IAM role with only the permissions each task requires to complete successfully. For example, if you need to create an Amazon DynamoDB Table using Pipelines for the first time, you would want to add (at a minimum) the ability to list and describe tables to the policy for the `planner` IAM role, and all permissions for DynamoDB to the IAM policy for the `terraform-applier` IAM role. + +We recommend that the `planner` configuration have read-only access to resources, and the applier be able to read, create, modify, and destroy resources. + + + + +If you’ve deployed Pipelines as a part of your Reference Architecture, the permissions for the `terraform-planner` task are located in `_envcommon/mgmt/read_only_permissions.yml` and the permissions for the `terraform-applier` task are located in `_envcommon/mgmt/deploy_permissions.yml`. Open and add the required permissions to each file. + +After you are done updating both files, you will need to run `terragrunt plan`, review the changes, then `terragrunt apply` for each account in your Reference Architecture. 
+```bash +cd logs/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve + +cd shared/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve + +cd security/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve + +cd dev/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve + +cd stage/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve + +cd prod/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve +``` + + + +If you’ve deployed Pipelines as a standalone framework using the `ecs-deploy-runner` service in the Service Catalog, you will need to locate the file in which you’ve defined a module block sourcing the `ecs-deploy-runner` service. + +Modify the AWS IAM policy document being passed into the `iam_policy` variable for the [`terraform_applier_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#terraform_applier_config) and the [`terraform_planner_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#terraform_planner_config) variables. Refer to the [variable reference](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#reference) section for the service in the Library Reference for the full set of configuration details for this service. + +After you are done updating the IAM policy documents, run `terraform plan` then review the changes that will be made. Finally, run `terraform apply` to apply the changes. + + + +## Adding scripts that can be run in Pipelines + +The `deploy-runner` Docker image for Pipelines only allows scripts within a single directory to be executed in the ECS task as an additional security measure. + +By default, the `deploy-runner` ships with three scripts — one to build HashiCorp Packer images, one to run `terraform plan` and `terraform apply`, and one to automatically update the value of a variable in a Terraform tfvars or Terragrunt HCL file. + +If you need to run a custom script in the `deploy-runner`, you must fork the image code, add an additional line to copy your script into directory designated by the `trigger_directory` argument. Then, you will need to rebuild the Docker image, push to ECR, then update your Pipelines deployment following the steps in [Updating your Pipeline](./updating.md). diff --git a/_docs-sources/pipelines/maintain/updating.md b/_docs-sources/pipelines/maintain/updating.md new file mode 100644 index 0000000000..462f38f6f3 --- /dev/null +++ b/_docs-sources/pipelines/maintain/updating.md @@ -0,0 +1,96 @@ +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +# Updating Your Pipeline + +Pipelines is built using the [`terraform-aws-ci`](../../reference/modules/terraform-aws-ci/ecs-deploy-runner/) module. We recommend updating your pipeline whenever there’s a new release of the module. + +By default, Pipelines cannot update it’s own infrastructure (ECS cluster, AWS Lambda function, etc), so you must run upgrades to Pipelines manually from your local machine. This safeguard is in place to prevent you from accidentally locking yourself out of the Pipeline when applying a change to permissions. 
+ +For example, if you change the IAM permissions of the CI user, you may no longer be able to run the pipeline. The pipeline job that updates the permissions will also be affected by the change. This is a difficult scenario to recover from, since you will have lost access to make further changes using Pipelines. + +## Prerequisites + +This guide assumes you have the following: +- An AWS account with permissions to create the necessary resources +- An [AWS Identity and Access Management](https://aws.amazon.com/iam/) (IAM) user or role with permissions to start pipelines deployments and update AWS Lambda functions +- [AWS Command Line Interface](https://aws.amazon.com/cli/) (AWS CLI) installed on your local machine +- [`infrastructure-deployer`](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/infrastructure-deployer) CLI tool installed locally +- [`aws-vault`](https://www.github.com/99designs/aws-vault) installed locally for authenticating to AWS + +## Updating container images + +Gruntwork Pipelines uses two images — one for the [Deploy Runner](https://github.com/gruntwork-io/terraform-aws-ci/blob/main/modules/ecs-deploy-runner/docker/deploy-runner/Dockerfile) and one for [Kaniko](https://github.com/gruntwork-io/terraform-aws-ci/blob/main/modules/ecs-deploy-runner/docker/kaniko/Dockerfile). To update pipelines to the latest version, you must build and push new versions of each image. + +Pipelines has the ability to build container images, including the images it uses. You can use the `infrastructure-deployer` CLI tool locally to start building the new image versions. This is the same tool used by Pipelines in your CI system. + +```bash +export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text) +export DEPLOY_RUNNER_REGION=$(aws configure get region) +export DOCKERFILE_REPO="https://github.com/gruntwork-io/terraform-aws-ci.git" +export ECR_REPO_URL="${ACCOUNT_ID}.dkr.ecr.${DEPLOY_RUNNER_REGION}.amazonaws.com" +export TERRAFORM_AWS_CI_VERSION="v0.52.1" + +# Builds and pushes the deploy runner image +infrastructure-deployer --aws-region "$DEPLOY_RUNNER_REGION" -- docker-image-builder build-docker-image \ + --repo "$DOCKERFILE_REPO" \ + --ref "$TERRAFORM_AWS_CI_VERSION" \ + --context-path "modules/ecs-deploy-runner/docker/deploy-runner" \ + --env-secret 'github-token=GITHUB_OAUTH_TOKEN' \ + --docker-image-tag "${ECR_REPO_URL}/ecs-deploy-runner:${TERRAFORM_AWS_CI_VERSION}" \ + --build-arg "module_ci_tag=$TERRAFORM_AWS_CI_VERSION" + +# Builds and pushes the kaniko image +infrastructure-deployer --aws-region "$DEPLOY_RUNNER_REGION" -- docker-image-builder build-docker-image \ + --repo "$DOCKERFILE_REPO" \ + --ref "$TERRAFORM_AWS_CI_VERSION" \ + --context-path "modules/ecs-deploy-runner/docker/kaniko" \ + --env-secret 'github-token=GITHUB_OAUTH_TOKEN' \ + --docker-image-tag "${ECR_REPO_URL}/kaniko:${TERRAFORM_AWS_CI_VERSION}" \ + --build-arg "module_ci_tag=$TERRAFORM_AWS_CI_VERSION" +``` +Each image may take a few minutes to build and push. Once both images are built, you can update the image tag in your terraform module and update the infrastructure. + +## Updating infrastructure + +Next, update the references to these images to the new tag values. This will vary depending on if you’re using Pipelines as configured by the Reference Architecture or if you’ve deployed Pipelines as a standalone framework. + + + + +To update the image tags for pipelines deployed by a Reference Architecture, you update `common.hcl` with the new tag values for these images. 
The new tag value will be version of `terraform-aws-ci` that the images use. For example, if your newly created images are using the v0.52.1 release of `terraform-aws-ci`, update common.hcl to: + +``` +deploy_runner_container_image_tag = "v0.52.1" +kaniko_container_image_tag = "v0.52.1" +``` + +Next, apply the ecs-deploy-runner module in each account: +```bash +cd logs/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-logs -- terragrunt apply --terragrunt-source-update -auto-approve + +cd shared/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-shared -- terragrunt apply --terragrunt-source-update -auto-approve + +cd security/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-security -- terragrunt apply --terragrunt-source-update -auto-approve + +cd dev/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-dev -- terragrunt apply --terragrunt-source-update -auto-approve + +cd stage/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-stage -- terragrunt apply --terragrunt-source-update -auto-approve + +cd prod/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-prod -- terragrunt apply --terragrunt-source-update -auto-approve +``` + + + +If you’ve deployed Pipelines as a standalone framework using the `ecs-deploy-runner` service in the Service Catalog, refer to the [Variable Reference](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#reference) section for the service in the Library Reference for configuration details. You will need to update the `docker_tag` value in the `container_image` object for the [`ami_builder_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#ami_builder_config), [`docker_image_builder_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#docker_image_builder_config), [`terraform_applier_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#terraform_applier_config), and [`terraform_planner_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#terraform_planner_config) variables. + +Once you have updated any references to the container image tags, you will need to run `terraform plan` and `terraform apply` in each account where pipelines is deployed. + + + diff --git a/_docs-sources/pipelines/multi-account/index.md b/_docs-sources/pipelines/multi-account/index.md new file mode 100644 index 0000000000..a9625729f9 --- /dev/null +++ b/_docs-sources/pipelines/multi-account/index.md @@ -0,0 +1,13 @@ +# Deploying Multi-Account Pipelines + +Have you heard about AWS multi-account setups? It's like having a pack of dogs - each one with its own unique personality, strengths, and weaknesses, but all working together to accomplish a common goal. + +Imagine you have a pack of dogs, each with their own special skills. You've got a fierce protector who guards the house, a speedy runner who chases down anything that moves, and a snuggly lap dog who just wants to cuddle all day. Each dog has its own needs, but they all rely on you as their owner to provide for them and keep them safe. + +Similarly, with AWS multi-account setups, you can have a whole pack of accounts, each with its own unique configuration and requirements, but all managed from a single "parent" account. It's like being the alpha dog of a pack, making sure each member is fed, healthy, and happy. + +And just like with a pack of dogs, there are different roles and responsibilities within an AWS multi-account setup. 
You've got the "owner" account, which is responsible for managing all the other accounts in the pack, and then you've got the "member" accounts, each with their own specific purposes and functions. + +It's important to keep all your accounts organized and working together smoothly, just like how you would keep your pack of dogs in line. You don't want one dog to get too aggressive and start fighting with the others, just like you don't want one AWS account to start interfering with the others. + +But if you can manage your pack of dogs successfully, they can work together to accomplish great things - just like how an AWS multi-account setup can help you achieve your goals with ease and efficiency. So, if you're a dog lover like me, you'll find that AWS multi-account setups are just as fun and rewarding as having a pack of loyal furry friends by your side. Woof! diff --git a/_docs-sources/pipelines/overview/index.md b/_docs-sources/pipelines/overview/index.md new file mode 100644 index 0000000000..a84abcf4ca --- /dev/null +++ b/_docs-sources/pipelines/overview/index.md @@ -0,0 +1,16 @@ +# What is Gruntwork Pipelines? + +Gruntwork Pipelines is a framework that enables you to use your preferred CI tool to +securely run an end-to-end pipeline for infrastructure code ([Terraform](https://www.terraform.io/)) and +app code ([Docker](https://www.docker.com/) or [Packer](https://www.packer.io/)). Rather than replace your existing CI/CD provider, Gruntwork Pipelines is designed to enhance the security +of your existing tool. + +Without Gruntwork Pipelines, CI/CD tools require admin level credentials to any AWS account where you deploy infrastructure. +This makes it trivial for anyone with access to your CI/CD system to access AWS credentials with permissions +greater than they might otherwise need. +Gruntwork Pipelines allows a highly restricted set of permissions to be supplied to the CI/CD tool while +infrastructure related permissions reside safely within your own AWS account. This reduces the exposure of your +high value AWS secrets. + + + diff --git a/_docs-sources/pipelines/tutorial/index.md b/_docs-sources/pipelines/tutorial/index.md new file mode 100644 index 0000000000..b876280e42 --- /dev/null +++ b/_docs-sources/pipelines/tutorial/index.md @@ -0,0 +1,355 @@ +# Single Account Tutorial + +In this tutorial, you’ll walk you through the process of setting up Gruntwork Pipelines in a single +AWS account. 
By the end, you’ll deploy: + +- ECR Repositories for storing Docker images + - `deploy-runner` — stores the default image for planning and applying terraform and building AMIs + - `kaniko` — stores the default image for building other Docker images using [kaniko](https://github.com/GoogleContainerTools/kaniko) + - `hello-world` — a demonstration repo used for illustrating how a Docker application might be managed with Gruntwork Pipelines +- Our [ECS Deploy Runner Module](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner) +- Supporting IAM Roles, IAM Policies, and CloudWatch Log Groups +- ECS Tasks + - `docker-image-builder` — builds Docker images within the `kaniko` container image + - `ami-builder` — builds AMIs using HashiCorp Packer within the `deploy-runner` image + - `terraform-planner` — Runs plan commands within the `deploy-runner` container + - `terraform-applier` — Runs apply commands within the `deploy-runner` container + +## Prerequisites + +Before you begin, make sure your system has: + +- [Docker](https://docs.docker.com/get-docker/), with support for Buildkit (version 18.09 or newer) +- [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) (version 1.0 or newer) +- Valid [AWS credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) for an IAM user with `AdministratorAccess` + +## Repo Setup + +The code for this tutorial can be found in the [Gruntwork Service Catalog](https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/master/examples/for-learning-and-testing/gruntwork-pipelines/README.md). Start by cloning the repo: + +```shell +git clone https://github.com/gruntwork-io/terraform-aws-service-catalog.git +``` + +You will be following the example found at `terraform-aws-service-catalog/examples/for-learning-and-testing/gruntwork-pipelines` + +```shell +cd terraform-aws-service-catalog/examples/for-learning-and-testing/gruntwork-pipelines +``` + +## Create the required ECR repositories + +Change directories to deploy the Terraform for ECR + +```shell +cd ecr-repositories +``` + +Set the `AWS_REGION` environment variable to your desired AWS region: + +```shell +export AWS_REGION= +``` + +Authenticate with your AWS account and deploy the Terraform code provided to create the three +ECR repositories. + +Initialize Terraform to download required dependencies: +```shell +terraform init +``` + +Run plan and ensure the output matches your expectations: +```shell +terraform plan +``` + +Deploy the code using apply +```shell +terraform apply +``` + +## Build and Push the Docker Images + +The four standard Gruntwork Pipelines capabilities are instrumented by two separate Docker files + +1. `ecs-deploy-runner` — Terraform plan, apply and AMI building +2. `kaniko` — Docker image building. [Kaniko](https://github.com/GoogleContainerTools/kaniko) is a tool that supports building Docker images inside a container + +These Dockerfiles live in the ecs-deploy-runner module within [the terraform-aws-ci repository](https://github.com/gruntwork-io/terraform-aws-ci). In this example, you'll clone the terraform-aws-ci and running Docker build against the Dockerfiles defined there. + +You’re now going to build these two Docker images and push them to the ECR repositories you just created. 
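+
+Before kicking off the builds, you can optionally confirm that the three ECR repositories from the previous step exist. This is just a sanity check, and it assumes the repository names used elsewhere in this tutorial (`ecs-deploy-runner`, `kaniko`, and `hello-world`):
+
+```shell
+# List the repositories created by the ecr-repositories module (names assumed from this tutorial)
+aws ecr describe-repositories \
+  --region "$AWS_REGION" \
+  --repository-names ecs-deploy-runner kaniko hello-world \
+  --query 'repositories[].repositoryName'
+```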
+ +### Export Environment Variables + +If you do not already have a GitHub Personal Access Token (PAT) available, you can follow this [guide to Create a new GitHub Personal Access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) + +For the purposes of this example, your token will need the `repo` scope, so that Gruntwork Pipelines is able to fetch modules and code from private Gruntwork repositories. Note that in production, the best practice is to create a separate GitHub machine user account, +and provision a GitHub PAT against that account. + +This GitHub PAT will be used for two purposes: +1. Initially, when running the Docker build commands below, the GitHub PAT will be used to fetch private code from `github.com/gruntwork-io`. +2. Once the Docker images are built, you’ll store your GitHub PAT in AWS Secrets Manager. When Gruntwork Pipelines is running on your behalf, it will fetch + your GitHub PAT from Secrets Manager "just in time" so that only the running ECS task has access to the token — and so that your token only exists for the lifespan + of the ephemeral ECS task container. + +Export a valid GitHub PAT using the following command so that you can use it to build Docker images that fetch private code via GitHub: +```shell +export GITHUB_OAUTH_TOKEN= +``` + +Export your AWS Account ID and primary region. The commands in the rest of this document require these variables to be set. The region to use is up to you. +```shell +export AWS_ACCOUNT_ID= +export AWS_REGION= +``` + +The Gruntwork Pipelines Dockerfiles used by Gruntwork Pipelines are stored in the `gruntwork-io/terraform-aws-ci` repository. Therefore, in order to pin both Dockerfiles +to a known version, you export the following variable which you’ll use during our Docker builds: + +```shell +export TERRAFORM_AWS_CI_VERSION=v0.51.4 +``` + +The latest version can be retrieved from the [releases page](https://github.com/gruntwork-io/terraform-aws-ci/releases) of the `gruntwork-io/terraform-aws-ci` repository. At a minimum, `v0.51.4` must be selected. + +### Clone `terraform-aws-ci` to your machine +Next, you are going to build the two Docker images required for this example. The Dockerfiles are defined in the [terraform-aws-ci](https://github.com/gruntwork-io/terraform-aws-ci) repository, so it must be available locally: + +```bash +git clone git@github.com:gruntwork-io/terraform-aws-ci.git +``` + +Change directory into the example folder: +```bash +cd terraform-aws-ci/modules/ecs-deploy-runner +``` + +### Build the ecs-deploy-runner and kaniko Docker images + +This next command is going to perform a Docker build of the `deploy-runner` image. You don’t need to authenticate to AWS in order to run this command, as the build will happen on your machine. +We do, however, pass your exported GitHub PAT into the build as a secret, so that the Docker build can fetch private code from `github.com/gruntwork-io`. Since you’re using BuildKit, the token +is only used during the build process and does not remain in the final image. 
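+
+If you want to double-check that your local Docker installation meets the BuildKit requirement from the prerequisites (version 18.09 or newer), one quick way is to print the server version; the exact output will vary by platform:
+
+```shell
+# Prints the Docker daemon version, e.g. 24.0.7
+docker version --format '{{.Server.Version}}'
+```
+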
+ +Run the following command to build the ecs-deploy-runner Docker image: +```shell +DOCKER_BUILDKIT=1 docker build \ + --secret id=github-token,env=GITHUB_OAUTH_TOKEN \ + --build-arg module_ci_tag="$TERRAFORM_AWS_CI_VERSION" \ + --tag "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/ecs-deploy-runner:$TERRAFORM_AWS_CI_VERSION" \ + ./docker/deploy-runner/ +``` + +Similarly to the ecs-deploy-runner image, you’ll now use the Kaniko Dockerfile included in this example to build the kaniko image: +```shell +DOCKER_BUILDKIT=1 docker build \ + --secret id=github-token,env=GITHUB_OAUTH_TOKEN \ + --build-arg module_ci_tag="$TERRAFORM_AWS_CI_VERSION" \ + --tag "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/kaniko:$TERRAFORM_AWS_CI_VERSION" \ + ./docker/kaniko/ +``` + +### Log In and Push to ECR +Now you have local Docker images for ecs-deploy-runner and kaniko that are properly tagged, but before you can push it into the private ECR repository that you created +with our `terraform apply`, you need to authenticate with ECR itself. Authenticate to AWS and run the following: + +```shell +aws ecr get-login-password --region $AWS_REGION \ + | docker login -u AWS --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com" +``` + +If you receive a success message from your previous command, you’re ready to push your ecs-deploy-runner image: +```shell +docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/ecs-deploy-runner:$TERRAFORM_AWS_CI_VERSION" +``` + +```shell +docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/kaniko:$TERRAFORM_AWS_CI_VERSION" +``` + +## Deploy the Pipelines Cluster + +Now that the ECR repositories are deployed and have the required Docker images, you are ready +to deploy the rest of Gruntwork Pipelines. The Terraform that defines the setup is defined in +`terraform-aws-service-catalog/examples/for-learning-and-testing/gruntwork-pipelines/pipelines-cluster` + +```shell +cd terraform-aws-service-catalog/examples/for-learning-and-testing/gruntwork-pipelines/pipelines-cluster +``` + +### Export a GitHub Personal Access Token (PAT) +For the purposes of this example, you may use the same PAT as before. In a production deployment, best practice +would be to create a separate GitHub machine user account. This module uses a slightly different naming convention for +its environment variable, so you’ll need to re-export the token: + +```shell +export TF_VAR_github_token= +``` + +### Configure and Deploy the ecs deploy runner +Authenticate to your AWS account and run `init`, then `apply`. +:::note +If you are using `aws-vault` to authenticate on the command line, you must supply the `--no-session` flag as explained in [this Knowledge Base entry](https://github.com/gruntwork-io/knowledge-base/discussions/647) +::: + +```shell +terraform init +``` + +```shell +terraform plan +``` +Check your plan output before applying: +```shell +terraform apply +``` + +## Install the `infrastructure-deployer` command line tool + +Gruntwork Pipelines requires all requests to transit through its Lambda function, which ensures only valid arguments and commands are passed along to ECS. +To invoke the Lambda function, you should use the `infrastructure-deployer` command line interface (CLI) tool. For testing and setup purposes, you’ll install and use the `infrastructure-deployer` CLI locally; when you’re ready to configure CI/CD, you’ll install and use it in your CI/CD config. 
+
+If you do not already have the `gruntwork-install` binary installed, you can get it [here](https://github.com/gruntwork-io/gruntwork-installer).
+
+```bash
+
+gruntwork-install --binary-name "infrastructure-deployer" --repo "https://github.com/gruntwork-io/terraform-aws-ci" --tag "$TERRAFORM_AWS_CI_VERSION"
+```
+:::note
+If you’d rather not use the Gruntwork installer, you can alternatively download the binary manually from [the releases page](https://github.com/gruntwork-io/terraform-aws-ci/releases).
+:::
+
+## Invoke your Lambda Function
+
+### Get your Lambda ARN from the output
+Next, you need to retrieve the Amazon Resource Name (ARN) for the Lambda function that guards your Gruntwork Pipelines installation:
+
+```shell
+terraform output -raw gruntwork_pipelines_lambda_arn
+```
+
+Once you have your invoker Lambda’s ARN, export it like so:
+
+```shell
+export INVOKER_FUNCTION_ARN=
+```
+
+This value is used by the `run-docker-build.sh` and `run-packer-build.sh` scripts in the next step.
+
+### Perform a Docker/Packer build via Pipelines
+
+Now that you have Gruntwork Pipelines installed in the `docker-packer-builder` configuration, let’s put arbitrary Docker and Packer builds through it!
+
+For your convenience, we’ve provided two scripts that you can run:
+* `run-docker-build.sh`
+* `run-packer-build.sh`
+
+These two scripts will:
+
+1. Ensure all required environment variables are set
+2. Use the `infrastructure-deployer` CLI to send a Docker build request to the invoker Lambda
+
+Once the request is sent, Gruntwork Pipelines will begin streaming the logs back to you so you can watch the images get built. The Docker build will push the completed image to your hello-world repository, and the Packer build will push the completed AMI to EC2.
+
+The following environment variables must be set in your shell before you run `run-docker-build.sh`:
+* `AWS_ACCOUNT_ID`
+* `AWS_REGION`
+* `INVOKER_FUNCTION_ARN`
+
+## Prepare a test `infrastructure-live` repo
+
+You now have a functional Gruntwork Pipelines example that can build and deploy Docker images and AMIs.
+Feel free to stop here and experiment with what you’ve built so far. The following steps will extend
+Pipelines to be capable of running Terraform plan and apply.
+
+Pipelines is a flexible solution that can be deployed in many configurations.
+In your own organization, you might consider deploying one Pipelines installation with all the ECS tasks enabled,
+or having a central Pipelines installation plus one in each account of your Reference Architecture.
+
+To test the plan and apply functionality, you’ll need a simple demo repository.
+You may create your own or fork our [testing repo](https://github.com/gruntwork-io/terraform-module-in-root-for-terragrunt-test).
+
+## Enable the Terraform planner and applier
+
+We’ve intentionally deployed an incomplete version of Gruntwork Pipelines so far. To deploy the full version with the planner
+and applier, you’ll need to make a few edits to the module. In this directory you should see a few files prefixed with `config_`.
+Two are proper Terraform files with all the configuration for running the Docker image builder and the AMI builder.
+
+Each consists of:
+* A `locals` block containing the configuration variables specifying which repos are allowed and providing credentials
+* Some IAM resources that give the task permission to access the resources it needs
+
+The other two files have a `.example` postfix. Remove that postfix to let Terraform discover them.
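+
+For example, if the planner and applier configuration files follow the `config_` naming convention described above, the renames might look like this (the exact filenames are illustrative and may differ in your checkout):
+
+```shell
+# Filenames below are assumptions based on the config_ prefix convention; adjust to match your checkout
+mv config_terraform_planner.tf.example config_terraform_planner.tf
+mv config_terraform_applier.tf.example config_terraform_applier.tf
+```
+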
+ +Next, let’s take a look at `main.tf`. You should see a `TODO` in the `locals` block, marking the location where the configuration might normally +live. As this example ships with the Docker image builder and AMI builder defined in external files we have commented out +the default null values. + +Comment out or delete the following lines: +* `terraform_planner_config = null` +* `terraform_planner_https_tokens_config = null` +* `terraform_applier_config = null` +* `terraform_applier_https_tokens_config = null` + +These values are now properly defined in the external `config_*.tf` files. + +## Configure the Terraform planner and applier + +Now that the planner and applier are enabled, you could run `terraform apply`, but the default values of a few +variables might not be correct for your test environment. Make the following changes to your `.tfvars` file to +define the correct repos and credentials. Pipelines is configured to reject any commands that aren’t explicitly allowed +by the configuration below: + +* `allowed_terraform_planner_repos = ["https://github.com/your-org/your-forked-repo.git"]` — a list of repos where `terraform plan` is allowed to be run +* `allowed_terraform_applier_repos = ["https://github.com/your-org/your-forked-repo.git"]` — a list of repos where `terraform apply` is allowed to be run +* optionally `machine_user_git_info = {name="machine_user_name", email="machine_user_email"}` — if you’d like to customize your machine user info +* optionally `allowed_apply_git_refs = ["master", "main", "branch1", ...]` — for any branches or git refs you’d like to be able to run `terraform apply` on + +Now you’re ready to run `terraform apply`! Once complete, you should see 2 new ECS task definitions in your AWS account: +* `ecs-deploy-runner-terraform-planner` +* `ecs-deploy-runner-terraform-applier` + +## Try a `plan` or `apply` + +With Gruntwork Pipelines deployed, it’s time to test it out! Run the following command to trigger +a `plan` or `apply`: + +```shell +infrastructure-deployer --aws-region us-east-1 -- terraform-planner infrastructure-deploy-script \ + --ref "master" \ + --binary "terraform" \ + --command "plan" +``` + +If you forked the example repo provided you should see `+ out = "Hello, World"` if the plan was a success. + +## Celebrate, you did it! + +As a next step you could add a `.github/workflows/pipeline.yml` file to your repo that runs the command above +or try it in your favorite CI/CD tool. Your tooling only needs permission to trigger the lambda +function `arn:aws:lambda:us-east-1::function:ecs-deploy-runner-invoker`. + +## Cleanup + +If you want to remove the infrastructure created, you can use Terraform `destroy`. 
+ +```shell +terraform plan -destroy -out terraform.plan +terraform apply terraform.plan +``` + +To destroy the `ecr-repositories` resources you created, you’ll first need to empty the repos of any images: + +```shell +aws ecr batch-delete-image --repository-name ecs-deploy-runner --image-ids imageTag=$TERRAFORM_AWS_CI_VERSION +aws ecr batch-delete-image --repository-name kaniko --image-ids imageTag=$TERRAFORM_AWS_CI_VERSION +aws ecr batch-delete-image --repository-name hello-world --image-ids imageTag=v1.0.0 +``` + +Then Terraform can take care of the rest: + +```shell +cd ../ecr-repositories +terraform plan -destroy -out terraform.plan +terraform apply terraform.plan +``` diff --git a/_docs-sources/products.md b/_docs-sources/products.md new file mode 100644 index 0000000000..f6a8a9e0d2 --- /dev/null +++ b/_docs-sources/products.md @@ -0,0 +1,38 @@ +--- +hide_table_of_contents: true +hide_title: true +--- + +import Card from "/src/components/Card" +import CardGroup from "/src/components/CardGroup" +import CenterLayout from "/src/components/CenterLayout" + + + +# Gruntwork Products + + + + +A collection of reusable code that enables you to deploy and manage infrastructure quickly and reliably. + + +An end-to-end tech stack built using best practices on top of our Infrastructure as Code Library, deployed into your AWS accounts. + +A framework for running secure deployments for infrastructure code and application code. + + +Gain access to all resources included in your Gruntwork subscription. + + + + + diff --git a/_docs-sources/refarch/access/how-to-auth-CLI/index.md b/_docs-sources/refarch/access/how-to-auth-CLI/index.md new file mode 100644 index 0000000000..a9d878bed1 --- /dev/null +++ b/_docs-sources/refarch/access/how-to-auth-CLI/index.md @@ -0,0 +1,55 @@ +# Authenticate via the AWS command line interface (CLI) + +CLI access requires [AWS access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html). We recommend using [aws-vault](https://github.com/99designs/aws-vault) for managing all aspects related to CLI authentication. To use `aws-vault` you will need to generate AWS Access Keys for your IAM user in the security account. + +:::tip + +`aws-vault` is not the only method which can be used to authenticate on the CLI. Please refer to [A Comprehensive Guide to Authenticating to AWS on the Command Line](https://blog.gruntwork.io/a-comprehensive-guide-to-authenticating-to-aws-on-the-command-line-63656a686799) for several other options. + +::: + +:::info + +MFA is required for the Reference Architecture, including on the CLI. See [configuring your IAM user](/refarch/access/setup-auth/#configure-your-iam-user) for instructions on setting up an MFA token. + +::: + +## Access resources in the security account + +To authenticate to the security account, you only need your AWS access keys and an MFA token. See [the guide](https://github.com/99designs/aws-vault#quick-start) on adding credentials to `aws-vault`. + +You should be able to run the following command using AWS CLI + +```bash +aws-vault exec -- aws sts get-caller-identity +``` + +and expect to get an output with your user's IAM role: + +```json +{ + "UserId": "AIDAXXXXXXXXXXXX”, + "Account": “", + "Arn": "arn:aws:iam:::user/" +} +``` + +## Accessing all other accounts + +To authenticate to all other accounts (e.g., dev, stage, prod), you will need the ARN of an IAM Role in that account to assume. 
To configure accessing accounts using assumed roles with `aws-vault` refer to [these instructions](https://github.com/99designs/aws-vault#roles-and-mfa). + +Given the following command (where `YOUR_ACCOUNT_PROFILE_NAME` will be any account other than your security account) + +```bash +aws-vault exec -- aws sts get-caller-identity +``` + +you should expect to see the following output: + +```json +{ + "UserId": "AIDAXXXXXXXXXXXX", + "Account": "", + "Arn": "arn:aws:sts:::assumed-role//11111111111111111111" +} +``` diff --git a/_docs-sources/refarch/access/how-to-auth-aws-web-console/index.md b/_docs-sources/refarch/access/how-to-auth-aws-web-console/index.md new file mode 100644 index 0000000000..77e6ec3b24 --- /dev/null +++ b/_docs-sources/refarch/access/how-to-auth-aws-web-console/index.md @@ -0,0 +1,26 @@ +# Authenticating to the AWS web console + +## Authenticate to the AWS Web Console in the security account + +To authenticate to the security account, you will need: + +1. IAM User Credentials. See [setting up initial access](/refarch/access/setup-auth/) for how to create IAM users. +1. An MFA Token. See [Configuring your IAM user](/refarch/access/setup-auth/#configure-your-iam-user). +1. The login URL. This should be of the format `https://.signin.aws.amazon.com/console`. + +## Authenticate to the AWS Web Console in all other accounts + +To authenticate to any other account (e.g., dev, stage, prod), you need to: + +1. Authenticate to the security account. All IAM users are defined in this account, you must always authenticate to it first. +1. [Assume an IAM Role in the other AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html). To access other accounts, you switch to an IAM Role defined in that account. + +:::note +Note that to be able to access an IAM Role in some account, your IAM User must be in an IAM Group that has permissions to assume that IAM Role. +::: + +See the `cross-account-iam-roles` module for the [default set of IAM Roles](https://github.com/gruntwork-io/terraform-aws-security/blob/main/modules/cross-account-iam-roles/README.md#iam-roles-intended-for-human-users) that exist in each account. For example, to assume the allow-read-only-access-from-other-accounts IAM Role in the prod account, you must be in the \_account.prod-read-only IAM Group. See [Configure other IAM Users](/refarch/access/setup-auth/#configure-other-iam-users) for how you add users to IAM Groups. + +:::note +Not all of the default roles referenced in the `cross-account-iam-roles` module are deployed in each account. +::: diff --git a/_docs-sources/refarch/access/how-to-auth-ec2/index.md b/_docs-sources/refarch/access/how-to-auth-ec2/index.md new file mode 100644 index 0000000000..8f61b2ce0d --- /dev/null +++ b/_docs-sources/refarch/access/how-to-auth-ec2/index.md @@ -0,0 +1,63 @@ +# SSH to EC2 Instances + +You can SSH to any of your EC2 Instances in the Reference Architecture in two different ways: + +1. `ssh-grunt` (Recommended) +1. EC2 Key Pairs (For emergency / backup use only) + +## `ssh-grunt` (Recommended) + +[`ssh-grunt`](../../../reference/modules/terraform-aws-security/ssh-grunt/) is a tool developed by Gruntwork that automatically syncs user accounts from AWS IAM to your servers to allow individual developers to SSH onto EC2 instances using their own username and SSH keys. + +In this section, you will learn how to SSH to an EC2 instance in your Reference Architecture using `ssh-grunt`. Every EC2 instance has `ssh-grunt` installed by default. 
+ +### Add users to SSH IAM Groups + +When running `ssh-grunt`, each EC2 instance specifies from which IAM Groups it will allow SSH access, and SSH access with sudo permissions. By default, these IAM Group names are `ssh-grunt-users` and `ssh-grunt-sudo-users`, respectively. To be able to SSH to an EC2 instance, your IAM User must be added to one of these IAM Groups (see Configure other IAM Users for instructions). + +### Upload your public SSH key + +1. Authenticate to the AWS Web Console in the security account. +1. Go to your IAM User profile page, select the "Security credentials" tab, and click "Upload SSH public key". +1. Upload your public SSH key (e.g. `~/.ssh/id_rsa.pub`). Do NOT upload your private key. + +### Determine your SSH username + +Your username for SSH is typically the same as your IAM User name. However, if your IAM User name has special characters that are not allowed by operating systems (e.g., most punctuation is not allowed), your SSH username may be a bit different, as specified in the `ssh-grunt` [documentation](../../../reference/modules/terraform-aws-security/ssh-grunt/). For example: + +1. If your IAM User name is `jane`, your SSH username will also be `jane`. +1. If your IAM User name is `jane@example.com`, your SSH username will be `jane`. +1. If your IAM User name is `_example.jane.doe`, your SSH username will be `example_jane_doe`. + + +### SSH to an EC2 instance + +Since most EC2 instances in the Reference Architecture are deployed into private subnets, you won't be able to access them over the public Internet. Therefore, you must first connect to the VPN server. See [VPN Authentication](../how-to-auth-vpn/index.md) for more details. + +Given that: + +1. Your IAM User name is jane. +1. You've uploaded your public SSH key to your IAM User profile. +1. Your private key is located at `/Users/jane/.ssh/id_rsa` on your local machine. +1. Your EC2 Instance's IP address is 1.2.3.4. + + +First, add your SSH Key into the SSH Agent using the following command: + +```bash +ssh-add /Users/jane/.ssh/id_rsa +``` + +Then, use this command to SSH to the EC2 Instance: + +```bash +ssh jane@1.2.3.4 +``` + +You should now be able to execute commands on the instance. + +## EC2 Key Pairs (For emergency / backup use only) + +When you launch an EC2 Instance in AWS, you can specify an EC2 Key Pair that can be used to SSH into the EC2 Instance. This suffers from an important problem: usually more than one person needs access to the EC2 Instance, which means you have to share this key with others. Sharing secrets of this sort is a security risk. Moreover, if someone leaves the company, to ensure they no longer have access, you'd have to change the Key Pair, which requires redeploying all of your servers. + +As part of the Reference Architecture deployment, Gruntwork will create EC2 Key Pairs and put the private keys into AWS Secrets Manager. These keys are there only for emergency / backup use: e.g., if there's a bug in `ssh-grunt` that prevents you from accessing your EC2 instances. We recommend only giving a handful of trusted admins access to these Key Pairs. 
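+
+If you ever need to use one of these emergency keys, you can retrieve it from AWS Secrets Manager using the AWS CLI. The sketch below is illustrative only: the secret name (`ec2-key-pair-backup` here) is a hypothetical placeholder, and the actual names in your accounts will differ.
+
+```bash
+# List secrets to find the EC2 Key Pair entry for the account you're working in
+aws secretsmanager list-secrets --query 'SecretList[].Name'
+
+# Download the private key (secret name is hypothetical) and restrict its permissions
+aws secretsmanager get-secret-value \
+  --secret-id ec2-key-pair-backup \
+  --query SecretString \
+  --output text > emergency-key.pem
+chmod 600 emergency-key.pem
+```
+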
diff --git a/_docs-sources/refarch/access/how-to-auth-vpn/index.md b/_docs-sources/refarch/access/how-to-auth-vpn/index.md new file mode 100644 index 0000000000..06fe05b672 --- /dev/null +++ b/_docs-sources/refarch/access/how-to-auth-vpn/index.md @@ -0,0 +1,43 @@ +# VPN Authentication + +Most of the AWS resources that comprise the Reference Architecture run in private subnets, which means they do not have a public IP address, and cannot be reached directly from the public Internet. This reduces the "surface area" that attackers can reach. Of course, you still need access into the VPCs so we exposed a single entrypoint into the network: an [OpenVPN server](https://openvpn.net/). + +## Install an OpenVPN client + +There are free and paid OpenVPN clients available for most major operating systems. Popular options include: + +1. OS X: [Viscosity](https://www.sparklabs.com/viscosity/) or [Tunnelblick](https://tunnelblick.net/). +1. Windows: [official client](https://openvpn.net/index.php/open-source/downloads.html). +1. Linux: + + ```bash title="Debian" + apt-get install openvpn + ``` + + ```bash title="Redhat" + yum install openvpn + ``` + +## Join the OpenVPN IAM Group + +Your IAM User needs access to SQS queues used by the OpenVPN server. Since IAM users are defined only in the security account, and the OpenVPN servers are defined in separate AWS accounts (stage, prod, etc), that means you need to authenticate to the accounts with the OpenVPN servers by assuming an IAM Role that has access to the SQS queues in those accounts. + +To be able to assume an IAM Role, your IAM user needs to be part of an IAM Group with the proper permissions, such as `_account.xxx-full-access` or `_account.xxx-openvpn-users`, where `xxx` is the name of the account you want to access (stage, prod, etc). See [Configure other IAM users](/refarch/access/setup-auth/#configure-other-iam-users) for instructions on adding users to IAM Groups. + +## Use openvpn-admin to generate a configuration file + +To connect to an OpenVPN server, you need an OpenVPN configuration file, which includes a certificate that you can use to authenticate. To generate this configuration file, do the following: + +1. Install the latest [`openvpn-admin binary`](https://github.com/gruntwork-io/terraform-aws-openvpn/releases) for your OS. + +1. Authenticate to AWS via the CLI. You will need to assume an IAM Role in the AWS account with the OpenVPN server you're trying to connect to. This IAM Role must have access to the SQS queues used by OpenVPN server. Typically, the `allow-full-access-from-other-accounts` or `openvpn-server-allow-certificate-requests-for-external-accounts` IAM Role is what you want. + +1. Run `openvpn-admin request --aws-region --username `. + +1. This will create your OpenVPN configuration file in your current directory. + +1. Load this configuration file into your OpenVPN client. + +## Connect to one of your OpenVPN servers + +To connect to an OpenVPN server in one of your app accounts (Dev, Stage, Prod), click the "Connect" button next to your configuration file in the OpenVPN client. After a few seconds, you should be connected. You will now be able to access all the resources within the AWS network (e.g., SSH to EC2 instances in private subnets) as if you were "in" the VPC itself. 
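+
+Putting the certificate steps together, a typical flow might look like the sketch below. The `aws-vault` profile name, region, and username are illustrative assumptions; substitute the values for your own accounts.
+
+```bash
+# Assume a role with access to the OpenVPN server's SQS queues (profile name is illustrative)
+aws-vault exec stage-openvpn -- openvpn-admin request --aws-region us-east-1 --username jane
+
+# Load the generated configuration file into your OpenVPN client, then click "Connect"
+```
+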
diff --git a/_docs-sources/refarch/access/index.md b/_docs-sources/refarch/access/index.md new file mode 100644 index 0000000000..3ff68c5509 --- /dev/null +++ b/_docs-sources/refarch/access/index.md @@ -0,0 +1,8 @@ +# How do I access my Reference Architecture? + +Haxx0r ipsum foo Trojan horse new all your base are belong to us ip error private shell fopen semaphore epoch char packet sniffer segfault gurfle bypass. Memory leak bubble sort injection leet malloc brute force double xss mega sudo mountain dew void echo win emacs linux piggyback bin. I'm compiling float bang case cat infinite loop Donald Knuth unix for /dev/null machine code then chown d00dz worm gnu crack packet bar eof while. + +Lib void brute force bypass nak concurrently all your base are belong to us break leapfrog bit default packet sniffer Linus Torvalds. Man pages packet stack trace Starcraft Donald Knuth pwned worm hello world public giga frack gurfle. Irc fork malloc fopen script kiddies flood blob fail hexadecimal while access semaphore loop mega Trojan horse foo gobble. + +Bang spoof *.* headers Dennis Ritchie pragma bubble sort mutex d00dz firewall wombat snarf. Win L0phtCrack back door big-endian tera injection flush suitably small values interpreter class hello world client segfault. Boolean buffer emacs highjack concurrently boolean I'm compiling malloc finally char protected void fopen ascii var cd Trojan horse public. + diff --git a/_docs-sources/refarch/access/setup-auth/index.md b/_docs-sources/refarch/access/setup-auth/index.md new file mode 100644 index 0000000000..44db323f16 --- /dev/null +++ b/_docs-sources/refarch/access/setup-auth/index.md @@ -0,0 +1,123 @@ +# Set up AWS Auth + +## Configure root users + +Each of your AWS accounts has a root user that you need to configure. When you created the child AWS accounts (dev, stage, prod, etc), you provided the root user's email address for each account; if you don't know what those email addresses were, you can log in to the root account (the parent of the AWS Organization) and go to the AWS Organizations Console to find them. + +Once you have the email addresses, you'll need the passwords. When you create child accounts in an AWS organization, AWS will not allow you to set the root password. In order to generate the root password: + +1. Go to the AWS Console. +1. If you had previously signed into some other AWS account as an IAM User, rather than a root user, click "Sign-in using root account credentials." +1. Enter the email address of the root user. +1. Click "Forgot your password" to reset the password. +1. Check the email address associated with the root user account for a link you can use to create a new password. + +:::danger +Please note that the root user account can do anything in your AWS account, bypassing the security restrictions you put in place, so you need to take extra care with protecting this account. +::: + +We strongly recommend that when you reset the password for each account, you also: + +1. Use a strong password: preferably 30+ characters, randomly generated, and stored in a secrets manager. +1. Enable Multi-Factor Auth (MFA): Follow [these instructions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root) to enable MFA for the root user. + After this initial set up, you should _not_ use the root user account afterward except in very rare circumstances. (e.g., if you get locked out of your IAM User account and no one has permissions to reset your password). 
For day-to-day tasks, you should use an IAM User instead, as described in the next section. + +Please note that you'll have to repeat the process above of resetting the password and enabling MFA for every account in your organization: dev, stage, prod, shared, security, logs, and the root account. + +## Configure your IAM user + +The security account defines and manages all IAM Users. When deploying your Reference Architecture, Gruntwork creates an IAM User with admin permissions in the security account. The password for the IAM User is encrypted via PGP using [Keybase](https://keybase.io) (you'll need a free account) and is Base64-encoded. + +To access the Terraform state containing the password, you need to already be authenticated to the account. Thus to get access to the initial admin IAM User, we will use the root user credentials. To do this, you can either: + +- Log in to the AWS Web Console using the root user credentials for the security account and set up the password and AWS Access Keys for the IAM User. + +- Use the [Gruntwork CLI](https://github.com/gruntwork-io/gruntwork/) to rotate the password using the command: + + ```bash + gruntwork aws reset-password --iam-user-name + ``` + +Once you have access via your IAM user, finish hardening your security posture: + +1. Enable MFA for your IAM User by following [these instructions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable.html). MFA is required by the Reference Architecture, and you won't be able to access any other accounts without it. + + :::note + Note that the name of the MFA must be exactly the same as the AWS IAM Username + ::: + +1. Log out and log back in — After enabling MFA, you need to log out and then log back in. This forces AWS to prompt you for your MFA token. + + :::caution + Until you enable MFA, you will not be able to access anything else in the web console. + ::: + +1. Create access keys for yourself by following [these instructions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html). Store the access keys in a secrets manager. You will need these to authenticate to AWS from the command-line. + +## Configure other IAM users + +Now that your IAM user is all set up, you can configure IAM users for the rest of your team. + +:::note +Each of your users will need a free [Keybase](https://keybase.io/) account so that their credentials can be encrypted just for their access. +::: + +All of the IAM users are managed as code in the security account in the `account-baseline-app` module. If you open the `terragrunt.hcl` file in that repo, you should see the list of users, which will look something like: + +```yaml +jane@acme.com: + create_access_keys: false + create_login_profile: true + groups: + - full-access + pgp_key: keybase:jane_on_keybase +``` + +Here's how you would add two more users, Alice and Bob, to your security account: + +```yaml +jane@acme.com: + create_login_profile: true + groups: + - full-access + pgp_key: keybase:jane_on_keybase +alice@acme.com: + create_login_profile: true + groups: + - _account.dev-full-access + - _account.stage-full-access + - _account.prod-full-access + - iam-user-self-mgmt + pgp_key: keybase:alice_on_keybase +bob@acme.com: + create_login_profile: true + groups: + - _account.prod-read-only + - ssh-grunt-sudo-users + - iam-user-self-mgmt + pgp_key: keybase:bob_on_keybase +``` + +A few notes about the code above: + +1. **Groups**. 
We add each user to a set of IAM Groups: for example, we add Alice to IAM Groups that give her admin access in the dev, stage, and prod accounts, whereas Bob gets read-only access to prod, plus SSH access (with `sudo` permissions) to EC2 instances. For the full list of IAM Groups available, see the [IAM Groups module](https://github.com/gruntwork-io/terraform-aws-security/tree/main/modules/iam-groups#iam-groups). + +1. **PGP Keys**. We specify a PGP Key to use to encrypt any secrets for that user. Keys of the form `keybase:` are automatically fetched for user `` on [Keybase](https://keybase.io/). + +1. **Credentials**. For each user whose `create_login_profile` field is set to `true`, a password will be automatically generated. This password can be used to log in to the web console. This password will be encrypted with the user's PGP key and visible as a Terraform output. After you run `terragrunt apply`, you can copy/paste these encrypted credentials and send them to the user. + +To deploy this new code and create the new IAM Users, you will need to: + +1. Authenticate to AWS via the CLI. + +1. Apply your changes by running `terragrunt apply`. + +1. Share the login URL, usernames, and (encrypted) password with your team members. + + :::note + Make sure to tell each team member to follow the [Configure your IAM User instructions](#configure-your-iam-user) to log in, reset their password, and enable MFA. + ::: + + :::caution + Enabling MFA is required to access the Reference Architecture + ::: diff --git a/_docs-sources/refarch/configuration/index.md b/_docs-sources/refarch/configuration/index.md new file mode 100644 index 0000000000..136696d067 --- /dev/null +++ b/_docs-sources/refarch/configuration/index.md @@ -0,0 +1,47 @@ +# Get Started + +The Gruntwork Reference Architecture allows you to configure key aspects to your needs. Before you receive your deployed Reference Architecture, you will: +1. **Configure** your choice of your primary AWS region, database and compute flavors, domain names and more via a pull request +2. **Iterate** on the configuration in your pull request in response to Gruntwork preflight checks that spot blocking issues and ensure your deployment is ready to commence +3. **Merge** your pull request after all checks pass. Merging will automatically commence your Reference Architecture deployment +4. **Wait** until Gruntwork has successfully completed your deployment. You’ll receive an automated email indicating your deployment is complete + +Below, we'll outline the Reference Architecture at a high level. + +note: add pre-reqs section about things you need to know + +## Requirements + +This guide requires that you have access to an AWS IAM user or role in the AWS account that serves as your Organization Root for AWS Organizations with permissions to create member accounts. For more information on IAM policies for AWS organizations see the AWS guide on [managing IAM policies for AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_permissions_iam-policies.html#orgs_permissions_grant-admin-actions). + +## RefArch Configuration + +Your Reference Architecture configuration lives in your `infrastructure-live` repository on GitHub. Within your `infrastructure-live` repository, the `reference-architecture-form.yml` file defines all of your specific selections, domain names, AWS account IDs, etc. 
+ +Gruntwork deployment tooling reads your `reference-architecture-form.yml` in order to first perform preflight checks to +ensure your accounts and selections are valid and ready for deployment. Once your preflight checks pass, and your pull request has been merged, Gruntwork tooling uses your `reference-architecture-form.yml` to deploy your Reference Architecture into your AWS accounts. + +Gruntwork provides bootstrap scripts, automated tooling, documentation and support to help you complete your setup steps and commence your Reference Architecture deployment. + +## Required Actions and Data +Some of the initial configuration steps will require you to *perform actions* against your AWS accounts, such as creating an IAM role that Gruntwork uses to access your accounts. Meanwhile, your `reference-architecture-form.yml` requires *data*, such as your AWS account IDs, domain name, etc. + +### Actions + +Wherever possible, Gruntwork attempts to automate setup actions *for you*. + +There is a bootstrap script in your `infrastructure-live` repository that will attempt to programmatically complete your setup actions (such as provisioning new AWS accounts on your behalf, registering domain names if you wish, etc) using a setup wizard and write the resulting *data* to your `reference-architecture-form.yml` file. + +### Data +`Data` refers to values, such as an AWS account ID, your desired domain name, etc, which may be the output of an action. + +The gruntwork CLI includes a [wizard](./run-the-wizard.md) that automates all of the steps to get the required data from you. We strongly recommended using the wizard for the majority of users. + +:::info Manual Configuration +If you are required to manually provision AWS accounts, domain names, or otherwise, the Gruntwork CLI has utilities to [manually bootstrap](https://github.com/gruntwork-io/gruntwork#bootstrap-manually) the required resources. This approach is only recommended for advanced users after consulting with Gruntwork. After all data has been generated manually, you will need to fill out the `reference-architecture-form.yml` manually. +::: + +## Let’s get started! + +Now that you understand the configuration and delivery process at a high level, we’ll get underway configuring your Reference Architecture. + diff --git a/_docs-sources/refarch/configuration/install-required-tools.md b/_docs-sources/refarch/configuration/install-required-tools.md new file mode 100644 index 0000000000..4f91a3625d --- /dev/null +++ b/_docs-sources/refarch/configuration/install-required-tools.md @@ -0,0 +1,30 @@ +# Install Required Tools + +Configuring your Reference Architecture requires that you have `git` and the `gruntwork` CLI tool installed on your machine. You have two options for installation. + +## Use the bootstrap script (preferred) + +The bootstrap script will ensure you have all required dependencies installed. Within your `infrastructure-live` repository, there are two bootstrap scripts. +- `bootstrap_unix.sh` which can be run on macOS and Linux machines +- `bootstrap_windows.py` which runs on Windows machines + +Choose the correct bootstrap script for your system. Both scripts perform the equivalent functionality. 
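+
+For example, on macOS or Linux the run might look like this (assuming you have already cloned your `infrastructure-live` repository; the directory name is illustrative):
+
+```bash
+cd infrastructure-live
+./bootstrap_unix.sh
+# On Windows, run bootstrap_windows.py with your Python interpreter instead
+```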
+ +In addition to installing dependencies, the bootstrap script will: +- Ensure you are running the script in the root of your `infrastructure-live` repository +- Ensure you have sufficient GitHub access to access and clone private Gruntwork repositories +- Download the Gruntwork installer +- Install the Gruntwork command line interface (CLI) which contains the Reference Architecture configuration wizard +- [Run the Gruntwork wizard](./run-the-wizard) to assist you in completing your Reference Architecture configuration steps (see docs for [required permissions](./run-the-wizard.md#required-permissions)) + +## Install manually + +:::caution +We do not recommend this approach. The bootstrap script performs several checks to ensure you have all tools and access required to configure your Reference Architecture. You will need to perform these checks manually if installing tools manually. +::: + +If you prefer to install your tools manually, see the following sections on installing Git and the Gruntwork CLI. + +1. If you would like to install `git` manually, installation steps can be found on the [Git SCM Installing Git Guide](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). +2. If you would like to install the Gruntwork CLI manually, we recommend downloading the latest release from the [GitHub releases page](https://github.com/gruntwork-io/gruntwork/releases). + diff --git a/_docs-sources/refarch/configuration/preflight-checks.md b/_docs-sources/refarch/configuration/preflight-checks.md new file mode 100644 index 0000000000..e8c55344e1 --- /dev/null +++ b/_docs-sources/refarch/configuration/preflight-checks.md @@ -0,0 +1,38 @@ +# Iterate on Preflight checks + +Once you have run the setup wizard and pushed your `ref-arch-form` branch with your changes, GitHub Actions will commence, running the preflight checks. + +![Gruntwork Reference Architecture preflight checks](/img/preflight1.png) + +Preflight checks can take up to 4–5 minutes to complete after you push your commit. Any errors will be +directly annotated on the exact line of your form that presents a blocking issue, so be sure to check the *Files changed* tab of your pull request to see them: + +![Gruntwork Ref Arch preflight checks on your pull request](/img/preflight-error-on-pr.png) + +## Fix any errors + +In most cases, the error messages included in the preflight check annotations will provide sufficient information to remediate the underlying issue. If at any point you are confused or +need assistance, please reach out to us at `support@gruntwork.io` and we’ll be happy to assist you. + +## Commit and push your changes + +Once you have fixed any issues flagged by preflight checks, you can make a new commit with your latest form changes and push it up to the same branch. This will trigger a re-run of preflight +checks using your latest form data. + +## Merge your pull request + +Once your preflight checks pass, meaning there are no more error annotations on your pull request +and the GitHub check itself is green, you can merge your pull request to the `main` branch. + +## Wait for your deployment to complete + +Merging your `ref-arch-form` pull request to the `main` branch will automatically kick off the deployment process for your Reference Architecture. There’s nothing more for you to do at this point. 
+ +:::caution +During deployment we ask that you do not log into, modify or interact with your Reference Architecture AWS accounts in any way or make any modifications to your `infrastructure-live` repo once you have merged your pull request. +::: + +Your deployment is now in Gruntwork engineers’ hands and we are notified of every single error your deployment encounters. We’ll work behind the scenes to complete your deployment, communicating with you via email or GitHub if we need +any additional information or if we need you to perform any remediation steps to un-block your deployment. + +Once your deployment completes, you’ll receive an automated email with next steps and a link to your Quick Start guide that has been written to your `infrastructure-live` repository. diff --git a/_docs-sources/refarch/configuration/provision-accounts.md b/_docs-sources/refarch/configuration/provision-accounts.md new file mode 100644 index 0000000000..9e8b2c24af --- /dev/null +++ b/_docs-sources/refarch/configuration/provision-accounts.md @@ -0,0 +1,7 @@ +# Provision AWS accounts + +Haxx0r ipsum mainframe bang ssh data public root client wombat recursively. Hexadecimal snarf chown highjack sudo for suitably small values null default bar unix server man pages endif ascii linux kilo tcp tunnel in. Long giga afk crack infinite loop buffer worm foo Dennis Ritchie. + +Protocol then bit while bar back door perl bang shell client bytes ifdef baz. Hello world mountain dew injection malloc var tunnel in todo class. For tera port bypass function packet sniffer for error char pragma printf sudo over clock grep continue. + +Linux mega var alloc xss linux tunnel in gc stdio.h int win back door mountain dew. Float I'm compiling null nak endif fatal Starcraft irc. Stack tcp foad port protocol ban protected eof ascii *.* blob flood then cat. diff --git a/_docs-sources/refarch/configuration/route53.md b/_docs-sources/refarch/configuration/route53.md new file mode 100644 index 0000000000..e042c0fdea --- /dev/null +++ b/_docs-sources/refarch/configuration/route53.md @@ -0,0 +1,7 @@ +# Configure Route53 and app domains + +Haxx0r ipsum mainframe bang ssh data public root client wombat recursively. Hexadecimal snarf chown highjack sudo for suitably small values null default bar unix server man pages endif ascii linux kilo tcp tunnel in. Long giga afk crack infinite loop buffer worm foo Dennis Ritchie. + +Protocol then bit while bar back door perl bang shell client bytes ifdef baz. Hello world mountain dew injection malloc var tunnel in todo class. For tera port bypass function packet sniffer for error char pragma printf sudo over clock grep continue. + +Linux mega var alloc xss linux tunnel in gc stdio.h int win back door mountain dew. Float I'm compiling null nak endif fatal Starcraft irc. Stack tcp foad port protocol ban protected eof ascii *.* blob flood then cat. diff --git a/_docs-sources/refarch/configuration/run-the-wizard.md b/_docs-sources/refarch/configuration/run-the-wizard.md new file mode 100644 index 0000000000..9e6cc5f44e --- /dev/null +++ b/_docs-sources/refarch/configuration/run-the-wizard.md @@ -0,0 +1,19 @@ +# Run the Wizard + +The Gruntwork CLI features a wizard designed to assist you in completing your Reference Architecture setup actions. 
The Gruntwork CLI wizard attempts to orchestrate all required configuration actions, such as provisioning AWS accounts, creating IAM roles used by Gruntwork tooling and engineers in each of the AWS accounts, registering new Route53 domain names, configuring Route53 Hosted Zones, and much more.
+
+If you have already run the wizard using the [bootstrap script](./install-required-tools.md#use-the-bootstrap-script-preferred), then you can skip this step.
+
+## Installation
+
+Installation instructions for the Gruntwork CLI can be found in [Install Required Tools](./install-required-tools.md#installing-gruntwork-cli).
+
+## Required Permissions
+
+To run the wizard, you will need access to the AWS account that serves as the Organization Root of your AWS Organization. At a minimum, the AWS IAM user or role will need the `organizations:CreateAccount` action, which grants the ability to create member accounts.
+
+## Running the wizard
+
+To commence the wizard, first authenticate to AWS on the command line, then run `gruntwork wizard`.
+
+If you need to stop running the wizard at any time, or if an error occurs, the wizard will resume from the step where it stopped the next time you run it.
diff --git a/_docs-sources/refarch/configuration/setup-quotas.md b/_docs-sources/refarch/configuration/setup-quotas.md
new file mode 100644
index 0000000000..06890dbfcb
--- /dev/null
+++ b/_docs-sources/refarch/configuration/setup-quotas.md
@@ -0,0 +1,7 @@
+# Configure AWS account quotas
+
+Haxx0r ipsum mainframe bang ssh data public root client wombat recursively. Hexadecimal snarf chown highjack sudo for suitably small values null default bar unix server man pages endif ascii linux kilo tcp tunnel in. Long giga afk crack infinite loop buffer worm foo Dennis Ritchie.
+
+Protocol then bit while bar back door perl bang shell client bytes ifdef baz. Hello world mountain dew injection malloc var tunnel in todo class. For tera port bypass function packet sniffer for error char pragma printf sudo over clock grep continue.
+
+Linux mega var alloc xss linux tunnel in gc stdio.h int win back door mountain dew. Float I'm compiling null nak endif fatal Starcraft irc. Stack tcp foad port protocol ban protected eof ascii *.* blob flood then cat.
diff --git a/_docs-sources/refarch/index.md b/_docs-sources/refarch/index.md
new file mode 100644
index 0000000000..d30e8e41f7
--- /dev/null
+++ b/_docs-sources/refarch/index.md
@@ -0,0 +1,14 @@
+# Reference Architecture
+
+The Gruntwork Reference Architecture is an implementation of best practices for infrastructure in the cloud. It is an opinionated, end-to-end tech stack built on top of our Infrastructure as Code Library, deployed into the customer's AWS accounts. It is composed of three pieces.
+## Landing Zone
+
+Gruntwork Landing Zone is a Terraform-native approach to [AWS Landing Zone / Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html). It uses Terraform to quickly create new AWS accounts, configure them with a standard security baseline, and define a best-practices multi-account setup.
+
+## Sample Application
+
+Our [sample application](https://github.com/gruntwork-io/aws-sample-app) is built with JavaScript, Node.js, and Express.js, following [Twelve-Factor App](https://12factor.net/) practices. It consists of a load balancer, a front end, a backend, a cache, and a database.
+ +## Pipelines + +[Gruntwork Pipelines](/pipelines/overview/) makes the process of deploying infrastructure similar to how developers often deploy code. It is a code framework and approach that enables the customer to use your preferred CI tool to set up an end-to-end pipeline for infrastructure code. diff --git a/_docs-sources/refarch/support/getting-help/index.md b/_docs-sources/refarch/support/getting-help/index.md new file mode 100644 index 0000000000..f041879dbb --- /dev/null +++ b/_docs-sources/refarch/support/getting-help/index.md @@ -0,0 +1 @@ +# Link: support diff --git a/_docs-sources/refarch/support/onboarding/index.md b/_docs-sources/refarch/support/onboarding/index.md new file mode 100644 index 0000000000..7a36b15108 --- /dev/null +++ b/_docs-sources/refarch/support/onboarding/index.md @@ -0,0 +1,3 @@ +# Onboarding sessions + +HAXXOR IPSUM diff --git a/_docs-sources/refarch/usage/maintain-your-refarch/adding-new-account.md b/_docs-sources/refarch/usage/maintain-your-refarch/adding-new-account.md new file mode 100644 index 0000000000..0460ccb4b8 --- /dev/null +++ b/_docs-sources/refarch/usage/maintain-your-refarch/adding-new-account.md @@ -0,0 +1,291 @@ +# Adding a new account + +This document is a guide on how to add a new AWS account to your Reference Architecture. This is useful if you have a +need to expand the Reference Architecture with more accounts, like a test or sandbox account. + +## Create new Account in your AWS Org + +The first step to adding a new account is to create the new AWS Account in your AWS Organization. This can be done +either through the AWS Web Console, or by using the [Gruntwork CLI](https://github.com/gruntwork-io/gruntwork/). If you +are doing this via the CLI, you can run the following command to create the new account: + +```bash +gruntwork aws create --account "=" +``` + +Record the account name, AWS ID, and deploy order of the new account you just created in the +`accounts.json` file so that we can reference it throughout the process. + +### Set the deploy order + +The deploy order is the order in which the accounts are deployed when a common env file is modified (the files in +`_envcommon`). Note that the deploy order does not influence how changes to individual component configurations +(child Terragrunt configurations) are rolled out. + +Set the deploy order depending on the role that the account plays and how you want changes to be promoted across your +environment. + +General guidelines: + +- The riskier the change would be, the higher you should set the deploy order. You'll have to determine the level of + risk for each kind of change. +- The lowest deploy order should be set for `dev` and `sandbox` accounts. `dev` and `sandbox` accounts are typically the + least risky to break because they only affect internal users, and thus the impact to the business of downtime to these + accounts is limited. +- `prod` accounts should be deployed after all other app accounts (`dev`, `sandbox`, `stage`) because the risk of + downtime is higher. +- It could make sense for `prod` accounts to be deployed last, after shared services accounts (`shared`, `logs`, + `security`), but it depends on your risk level. +- Shared services accounts (`shared` and `logs`) should be deployed after the app accounts (`dev`, `sandbox`, `stage`, + `prod`). + - A potential outage in `shared` could prevent access to deploy old and new code to all of your environments (e.g., + a failed deploy of `account-baseline` could cause you to lose access to the ECR repos). 
This could be more + damaging than just losing access to `prod`. + - Similarly, an outage in `logs` could result in losing access to audit logs which can prevent detection of + malicious activity, or loss of compliance. +- `security` should be deployed after all other accounts. + - A potential outage in `security` could prevent loss of all access to all accounts, which will prevent you from + making any changes, which is the highest impact to your operations. Therefore we recommend deploying security + last. + +For example, suppose you have the following folder structure: + +```bash title="Infrastructure Live" +. +├── accounts.json +├── _envcommon +│ └── services +│ └── my-app.hcl +├── dev +│ └── us-east-1 +│ └── dev +│ └── services +│ └── my-app +│ └── terragrunt.hcl +│ +├── stage +│ └── us-east-1 +│ └── stage +│ └── services +│ └── my-app +│ └── terragrunt.hcl +└── prod + └── us-east-1 + └── prod + └── services + └── my-app + └── terragrunt.hcl +``` + +And suppose you had the following in your `accounts.json` file: + +```json title="accounts.json" +{ + "logs": { + "deploy_order": 5, + "id": "111111111111", + "root_user_email": "" + }, + "security": { + "deploy_order": 5, + "id": "222222222222", + "root_user_email": "" + }, + "shared": { + "deploy_order": 4, + "id": "333333333333", + "root_user_email": "" + }, + "dev": { + "deploy_order": 1, + "id": "444444444444", + "root_user_email": "" + }, + "stage": { + "deploy_order": 2, + "id": "555555555555", + "root_user_email": "" + }, + "prod": { + "deploy_order": 3, + "id": "666666666666", + "root_user_email": "" + } +} +``` + +If you make a change in `_envcommon/services/my-app.hcl`, then the Infrastructure CI/CD pipeline will proceed to run +`plan` and `apply` in the deploy order specified in the `accounts.json` file. For the example, this means that the +pipeline will run `plan` and `apply` on `dev` first, then `stage`, and then finally `prod`. If anything fails in +between, then the pipeline will halt at that point. That is, if there is an error trying to deploy to `dev`, then the +pipeline will halt without moving to `stage` or `prod`. + +If instead you made a change in `dev/us-east-1/dev/services/my-app/terragrunt.hcl` and +`prod/us-east-1/prod/services/my-app/terragrunt.hcl`, then the changes are applied simultaneously, ignoring the deploy +order. This is because a child config was updated directly, instead of the common configuration file. In this way, the +deploy order only influences the pipeline for updates to the common component configurations. + +### Configure MFA + +Once the account is created, log in using the root credentials and configure MFA using [this +document](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root) as a guide. + +:::caution + +It is critical to enable MFA as the root user can bypass just about any other security restrictions you put in place. + +::: + +:::tip + +Make sure you keep a paper copy of the virtual device secret key so that +you have a backup in case you lose your MFA device. + +::: + +### Create a temporary IAM User + +Once MFA is configured, set up a temporary IAM User with administrator access (the AWS managed IAM Policy +`AdministratorAccess`) and create an AWS Access key pair so you can authenticate on the command line. + +:::note + +At this point, you won't need to use the root credentials again until you are ready to delete the AWS account. 
+ +::: + +## Update Logs, Security, and Shared accounts to allow cross account access + +In the Reference Architecture, all the AWS activity logs are configured to be streamed to a dedicated `logs` account. +This ensures that having full access to a particular account does not necessarily grant you the ability to tamper with +audit logs. + +In addition, all account access is managed by a central `security` account where the IAM Users are defined. This allows +you to manage access to accounts from a central location, and your users only need to manage a single set of AWS +credentials when accessing the environment. + +If you are sharing encrypted AMIs, then you will also need to ensure the new account has access to the KMS key that +encrypts the AMI root device. This is managed in the `shared` account baseline module. + +Finally, for the [ECS Deploy +Runner](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner) to work, the new account +needs to be able to access the secrets for accessing the remote repositories and the Docker images that back the build +runners. Both of these are stored in the `shared` account. + +In order for this setup to work for each new account that is created, the `logs`, `security`, and `shared` accounts need +to be made aware of the new account. This is handled through the `accounts.json` file in your +`infrastructure-live` repository. + +Once the `accounts.json` file is updated with the new account, you will want to grant the permissions for the new +account to access the shared resources. This can be done by running `terragrunt apply` in the `account-baseline` module +for the `logs`, `shared`, and `security` account, and the `ecr-repos` and `shared-secret-resource-policies` modules in the `shared` +account: + +```bash +(cd logs/_global/account-baseline && terragrunt apply) +(cd security/_global/account-baseline && terragrunt apply) +(cd shared/_global/account-baseline && terragrunt apply) +(cd shared/us-west-2/_regional/ecr-repos && terragrunt apply) +(cd shared/us-west-2/_regional/shared-secret-resource-policies && terragrunt apply) +``` + +Each call to apply will show you the plan for making the cross account changes. Verify the plan looks correct, and then +approve it to apply the updated cross account permissions. + +## Deploy the security baseline for the app account + +Now that the cross account access is configured, you are ready to start provisioning the new account! + +First, create a new folder for your account in `infrastructure-live`. The folder name should match the name of the AWS +account. + +Once the folder is created, create the following sub-folders and files with the following content: + +- ```json title="./infrastructure-live/account.hcl" + locals { + account_name = "" + } + ``` + +- ```bash title="./infrastructure-live/_global/region.hcl" + # Modules in the account _global folder don't live in any specific AWS region, but you still have to send the API calls + # to _some_ AWS region, so here we pick a default region to use for those API calls. + locals { + aws_region = "us-east-1" + } + ``` + +Next, copy over the `account-baseline` configuration from one of the application accounts (e.g., `dev`) and place it in +the `_global` folder: + +```bash +cp -r dev/\_global/account-baseline /\_global/account-baseline +``` + +Open the `terragrunt.hcl` file in the `account-baseline` folder and sanity check the configuration. Make sure there are +no hard coded parameters that are specific to the dev account. 
If you have not touched the configuration since the Reference Architecture was deployed, you won't need to change anything.
+
+At this point, your folder structure for the new account should look like the following:
+
+```bash
+.
+└── new-account
+    ├── account.hcl
+    └── _global
+        ├── region.hcl
+        └── account-baseline
+            └── terragrunt.hcl
+```
+
+Once the folder structure looks correct and you have confirmed the `terragrunt.hcl` configuration is accurate, you are ready to deploy the security baseline. Authenticate to the new account on the CLI (see [this blog post](https://blog.gruntwork.io/a-comprehensive-guide-to-authenticating-to-aws-on-the-command-line-63656a686799) for instructions) using the access credentials for the temporary IAM User you created above and run `terragrunt apply`.
+
+When running `apply`, you will see the plan for applying the security baseline to the new account. Verify the plan looks correct, and then approve it to roll out the security baseline.
+
+At this point, you can use the cross-account access from the `security` account to authenticate to the new account. Use your security account IAM User to assume the `allow-full-access-from-other-accounts` IAM Role in the new account to confirm this.
+
+Once you confirm you have access to the new account from the `security` account, log in using the `allow-full-access-from-other-accounts` IAM Role and remove the temporary IAM User, as you will no longer need to use it.
+
+## Deploy the ECS Deploy Runner
+
+Once the security baseline is deployed on the new account, you can deploy the ECS Deploy Runner. With the ECS Deploy Runner, you will be able to provision new resources in the new account.
+
+To deploy the ECS Deploy Runner, copy the terragrunt configurations for `mgmt/vpc-mgmt` and `mgmt/ecs-deploy-runner` from the `dev` account:
+
+```bash
+mkdir -p new-account/us-west-2/mgmt
+cp -r dev/us-west-2/mgmt/{vpc-mgmt,ecs-deploy-runner} new-account/us-west-2/mgmt
+```
+
+Be sure to open the `terragrunt.hcl` file in the copied folders and sanity check the configuration. Make sure there are no hard-coded parameters that are specific to the dev account. If you have not touched the configuration since the Reference Architecture was deployed, you won't need to change anything.
+
+Once the configuration looks correct, go into the `mgmt` folder and use `terragrunt run-all apply` to deploy the ECS Deploy Runner:
+
+```bash
+cd new-account/us-west-2/mgmt && terragrunt run-all apply
+```
+
+:::note
+
+Because this uses `run-all`, the command will not pause to show you the plan. If you wish to view the plan, run `apply` in each subfolder of the `mgmt` folder, in dependency graph order. You can see the dependency graph by using the [graph-dependencies terragrunt command](https://terragrunt.gruntwork.io/docs/reference/cli-options/#graph-dependencies).
+
+:::
+
+At this point, the ECS Deploy Runner is provisioned in the new account, and you can start using Gruntwork Pipelines to provision new infrastructure in the account.
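+
+As an optional sanity check (a sketch only: the region below matches the example paths above, and the exact cluster name the deploy runner uses in your architecture may differ), you can confirm that an ECS cluster for the deploy runner now exists in the new account:
+
+```bash
+# List ECS clusters in the new account's region; after the apply above finishes,
+# you should see a cluster whose name includes "ecs-deploy-runner".
+aws ecs list-clusters --region us-west-2
+```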
diff --git a/_docs-sources/refarch/usage/maintain-your-refarch/deploying-your-apps.md b/_docs-sources/refarch/usage/maintain-your-refarch/deploying-your-apps.md new file mode 100644 index 0000000000..9f3119a963 --- /dev/null +++ b/_docs-sources/refarch/usage/maintain-your-refarch/deploying-your-apps.md @@ -0,0 +1,480 @@ +--- +toc_max_heading_level: 2 +--- + +import Tabs from "@theme/Tabs" +import TabItem from "@theme/TabItem" + +# Deploying your apps + +In this guide, we'll walk you through deploying a Dockerized app to the App Orchestration cluster (ECS or EKS) running in +your Reference Architecture. + +## What's already deployed + +When Gruntwork initially deploys the Reference Architecture, we deploy the +[aws-sample-app](https://github.com/gruntwork-io/aws-sample-app/) into it, configured both as a frontend (i.e., +user-facing app that returns HTML) and as a backend (i.e., an app that's only accessible internally and returns JSON). +We recommend checking out the [aws-sample-app](https://github.com/gruntwork-io/aws-sample-app/) as it is designed to +deploy seamlessly into the Reference Architecture and demonstrates many important patterns you may wish to follow in +your own apps, such as how to package your app using Docker or Packer, do service discovery for microservices and data +stores in a way that works in dev and prod, securely manage secrets such as database credentials and self-signed TLS +certificates, automatically apply schema migrations to a database, and so on. + +However, for the purposes of this guide, we will create a much simpler app from scratch so you can see how all the +pieces fit together. Start with this simple app, and then, when you're ready, start adopting the more advanced +practices from [aws-sample-app](https://github.com/gruntwork-io/aws-sample-app/). + +## Deploying another app + +For this guide, we'll use a simple Node.js app as an example, but the same principles can be applied to any app. +Below is a classic, "Hello World" starter app that listens for requests on port `8080`. For this example +walkthrough, save this file as `server.js`. + +```js title="server.js" +const express = require("express") + +// Constants +const PORT = 8080 +const HOST = "0.0.0.0" + +// App +const app = express() +app.get("/simple-web-app", (req, res) => { + res.send("Hello world\n") +}) + +app.listen(PORT, HOST) +console.log(`Running on http://${HOST}:${PORT}`) +``` + +Since we need to pull in the dependencies (like ExpressJS) to run this app, we will also need a corresponding `package.json`. Please save this file along side `server.js`. + +```js title="package.json" +{ + "name": "docker_web_app", + "version": "1.0.0", + "main": "server.js", + "scripts": { + "start": "node server.js" + }, + "dependencies": { + "express": "^4.17.2" + } +} +``` + +## Dockerizing + +In order to deploy the app, we need to Dockerize the app. If you are not familiar with the basics of Docker, we +recommend you check out our "Crash Course on Docker and Packer" from the [Gruntwork Training +Library](https://training.gruntwork.io/p/a-crash-course-on-docker-packer). + +For this guide, we will use the following `Dockerfile` to package our app into a container (see [Docker +samples](https://docs.docker.com/samples/) for how to Dockerize many popular app formats): + +```docker +FROM node:14 + +# Create app directory +WORKDIR /usr/app + +COPY package*.json ./ + +RUN npm install +COPY . . 
+ +# Ensure that our Docker image is configured to `EXPOSE` +# the port that our app is going to need for external communication. +EXPOSE 8080 +CMD [ "npm", "start" ] +``` + +The folder structure of our sample app looks like this: + +```shell +├── server.js +├── Dockerfile +└── package.json +``` + +To build this Docker image from the `Dockerfile`, run: + +```bash +docker build -t simple-web-app:latest . +``` + +Now you can test the container to see if it is working: + +```bash +docker run --rm -p 8080:8080 simple-web-app:latest +``` + +This starts the newly built container and links port `8080` on your machine to the container's port `8080`. You should +see output like below when you run this command: + +``` +> docker_web_app@1.0.0 start /usr/app +> node server.js + +Running on http://0.0.0.0:8080 +``` + +You should now be able to hit the app by opening `localhost:8080/simple-web-app` in your browser. Try it out to verify +you get the `"Hello world"` message from the server. + +## Publishing your Docker image + +Next, let's publish those images to an [ECR repo](https://aws.amazon.com/ecr/). All ECR repos are managed in the +`shared-services` AWS account in your Reference Architecture. + +First, you'll need to create the new ECR repository. + +Create a new branch on your infrastructure-live repository: + +```bash +git checkout -b simple-web-app-repo +``` + +Open `repos.yml` in `shared/us-west-2/_regional/ecr-repos` and add the desired repository name of your app. For the +purposes of our example, let's call ours `simple-web-app`: + +```yaml +simple-web-app: +external_account_ids_with_read_access: + # NOTE: we have to comment out the directives so that the python based data merger (see the `merge-data` hook under + # blueprints in this repository) can parse this yaml file. This still works when feeding through templatefile, as it + # will interleave blank comments with the list items, which yaml handles gracefully. + # %{ for account_id in account_ids } + - "${account_id}" +# %{ endfor } +external_account_ids_with_write_access: [] +tags: {} +enable_automatic_image_scanning: true +``` + +Commit and push the change: + +```bash +git add shared/us-west-2/shared/data-stores/ecr-repos/terragrunt.hcl && git commit -m 'Added simple-web-app repo' && git push +``` + +Now open a pull request on the `simple-web-app-repo` branch. + +This will cause the ECS deploy runner pipeline to run a `terragrunt plan` and append the plan output to the body of the PR you opened. If the plan output looks correct with no errors, somebody can review and approve the PR. Once approved, you can merge, which will kick off a `terragrunt apply` on the deploy runner, creating the repo. Follow the progress through your CI server. For example, you can go to GitHub actions workflows page and tail the logs from the ECS deploy runner there. + +Once the repository exists, you can use it with the Docker image. Each repo in ECR has a URL of the format `.dkr.ecr..amazonaws.com/`. 
For example, for an ECR repo in `us-west-2` and an app called `simple-web-app`, the registry URL would be:
+
+```
+<ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/simple-web-app
+```
+
+You can tag the Docker image for this repo with a `v1` tag as follows:
+
+```bash
+docker tag simple-web-app:latest <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/simple-web-app:v1
+```
+
+Next, authenticate your Docker client with ECR in the shared-services account:
+
+```bash
+aws ecr get-login-password --region "us-west-2" | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com
+```
+
+And finally, push your newly tagged image to publish it:
+
+```bash
+docker push <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/simple-web-app:v1
+```
+
+## Deploying your app
+
+
+
+
+Now that you have the Docker image of your app published, the next step is to deploy it to your ECS Cluster that was set up as part of your reference architecture deployment.
+
+### Setting up the Application Load Balancer
+
+The first step is to create an [Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) for the app. The ALB will be exposed to the Internet and will route incoming traffic to the app. It's possible to use a single ALB with multiple applications, but for this example, we'll create a new ALB in addition to the ALB used by the aws-sample-app.
+
+To set up a new ALB, you'll need to create a `terragrunt.hcl` in each app environment (that is, in dev, stage, and prod). For example, for the `stage` environment, create an `alb-simple-web-app` folder in `stage/us-west-2/networking/`. Next, you can copy over the contents of the existing `alb` `terragrunt.hcl` so you have something to start with.
+
+With the `terragrunt.hcl` file open, update the following parameters:
+
+- Set `alb_name` to your desired name: e.g., `alb-simple-web-app-stage`
+- Set `domain_names` to a desired DNS name: e.g., `domain_names = ["simple-web-app-stage.example.com"]`
+- Note that your domain is available in an account-level `local` variable, `local.account_vars.locals.domain_name.name`. You can thus use a string interpolation to avoid hardcoding the domain name: `domain_names = ["simple-web-app-stage.${local.account_vars.locals.domain_name.name}"]`
+
+That's it!
+
+### Setting up the ECS service
+
+The next step is to create a `terragrunt.hcl` file to deploy your app in each app environment (i.e., in dev, stage, and prod). To do this, we will first need to define the common inputs for deploying the `simple-web-app` service.
+
+Copy the file `_envcommon/services/ecs-sample-app-frontend.hcl` into a new file `_envcommon/services/ecs-simple-web-app.hcl`.
+
+Next, update the following in the new `ecs-simple-web-app.hcl` configuration file:
+
+- Locate the `dependency "alb"` block and modify it to point to the new ALB configuration you just defined. That is, change the `config_path` to the relative path to your new ALB: e.g., `config_path = "../../networking/alb-simple-web-app"`
+- Set the `service_name` local to your desired name: e.g., `simple-web-app-stage`.
+- Update `ecs_node_port_mappings` to only have a map value for port 8080.
+- In the `container_definitions` object, set `image` to the repo URL of the just-published Docker image: e.g., `<ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/simple-web-app`
+- Set `cpu` and `memory` to low values, like 256 and 512.
+- Remove all the `environment` variables, leaving only an empty list, e.g.
`environment = []` +- Remove port 8443 from the `portMappings` +- Remove the unnecessary `linuxParameters` parameter +- Remove the `iam_role_name` and `iam_policy` parameters since this simple web app doesn't need any IAM permissions + +Once the envcommon file is created, you can create the `terragrunt.hcl` file to deploy it in a specific environment. +For the purpose of this example, we will assume we want to deploy the simple web app into the `dev` account first. + +1. Create a `simple-web-app` folder in `dev/us-west-2/dev/services`. +1. Copy over the contents of the `sample-app-frontend terragrunt.hcl`. +1. Update the include path for `envcommon` to reference the new `ecs-simple-web-app.hcl` envcommon file you created + above. +1. Remove the unneeded `service_environment_variables`, `tls_secrets_manager_arn`, and `db_secrets_manager_arn` local + variables, as well as all usage of it in the file. +1. Update the `tag` local variable to reference the Docker image tag we created earlier. + +### Deploying your configuration + +The above are the minimum set of configurations that you need to deploy the app. You can take a look at [`variables.tf` +of `ecs-service`](https://github.com/gruntwork-io/terraform-aws-service-catalog/tree/main/modules/services/ecs-service) +for all the options. + +Once you've verified that everything looks fine, change to the new ALB directory you created, and run: + +```bash +terragrunt apply +``` + +This will show you the plan for adding the new ALB. Verify the plan looks correct, and then approve it to apply your ALB +configuration to create a new ALB. + +Now change to the new `services/simple-web-app` folder, and run + +```bash +terragrunt apply +``` + +Similar to applying the ALB configuration, this will show you the plan for adding the new service. Verify and approve +the plan to apply your application configuration, which will create a new ECS service along with a target group that +connects the ALB to the service. + +### Monitoring your deployment progress + +Due to the asynchronous nature of ECS deployments, a successful `terragrunt apply` does not always mean your app +was deployed successfully. The following commands will help you examine the ECS cluster from the CLI. + +First, you can find the available ECS clusters: + +```bash +aws --region us-west-2 ecs list-clusters +``` + +Armed with the available clusters, you can list the available ECS services on a cluster by running: + +```bash +aws --region us-west-2 ecs list-services --cluster +``` + +The list of services should include the new `simple-web-app` service you created. You can get more information about the service by describing it: + +``` +aws --region us-west-2 ecs describe-services --cluster --services +``` + +A healthy service should show `"status": "ACTIVE"` in the output. You can also review the list of `events` to see what has happened with the service recently. If the `status` shows something else, it's time to start debugging. + +### Debugging errors + +Sometimes, things don't go as planned. And when that happens, it's always beneficial to know how to locate the +source of the problem. + +By default, all the container logs from a `service` (`stdout` and `stderr`) are sent to CloudWatch Logs. This is ideal for +debugging situations where the container starts successfully but the service doesn't work as expected. Let's assume our +`simple-web-app` containers started successfully (which they did!) but for some reason our requests to those containers +are timing out or returning wrong content. 
+ +1. Go to the "Logs" section of the [Cloudwatch Management Console](https://console.aws.amazon.com/cloudwatch/), click on Log groups, and look for the service in the list. For example: `/stage/ecs/simple-web-app-stage`. + +1. Click on the entry. You should be presented with a real-time log stream of the container. If your app logs to `stdout`, its logs will show up here. You can export the logs and analyze it in your preferred tool or use [CloudWatch Log Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) to query the logs directly + in the AWS web console. + + + + + +Now that you have the Docker image of your app published, the next step is to deploy it to your EKS Cluster that was +set up as part of your reference architecture deployment. + +### Setting up the Kubernetes Service + +The next step is to create a `terragrunt.hcl` file to deploy your app in each app environment (i.e. in dev, stage, +prod). To do this, we will first need to define the common inputs for deploying the `simple-web-app` service. + +Copy the file `_envcommon/services/k8s-sample-app-frontend.hcl` into a new file +`_envcommon/services/k8s-simple-web-app.hcl`. + +Next, update the following in the new `k8s-simple-web-app.hcl` configuration file: + +- Set the `service_name` local to your desired name: e.g., `simple-web-app-stage`. +- In the `container_image` object, set `repository` to the repo url of the just published Docker image: e.g., `.dkr.ecr.us-west-2.amazonaws.com/simple-web-app`. +- Update the `domain_name` to configure a DNS entry for the service: e.g., `simple-web-app.${local.account_vars.local.domain_name.name}`. +- Remove the `scratch_paths` configuration, as our simple web app does not pull in secrets dynamically. +- Remove all environment variables, leaving only an empty map: e.g. `env_vars = {}`. +- Update health check paths to reflect our new service: + + - `alb_health_check_path` + - `liveness_probe_path` + - `readiness_probe_path` + +- Remove configurations for IAM Role service account binding, as our app won't be communicating with AWS: + - `service_account_name` + - `iam_role_name` + - `eks_iam_role_for_service_accounts_config` + - `iam_role_exists` + - `iam_policy` + +Once the envcommon file is created, you can create the `terragrunt.hcl` file to deploy it in a specific environment. +For the purpose of this example, we will assume we want to deploy the simple web app into the `dev` account first. + +1. Create a `simple-web-app` folder in `dev/us-west-2/dev/services`. +1. Copy over the contents of the `k8s-sample-app-frontend terragrunt.hcl`. +1. Update the include path for `envcommon` to reference the new `ecs-simple-web-app.hcl` envcommon file you created + above. +1. Remove the unneeded `tls_secrets_manager_arn` local variables, as well as all usage of it in the file. +1. Update the `tag` input variable to reference the Docker image tag we created earlier. + +### Deploying your configuration + +The above are the minimum set of configurations that you need to deploy the app. You can take a look at [`variables.tf` +of `k8s-service`](https://github.com/gruntwork-io/terraform-aws-service-catalog/tree/main/modules/services/k8s-service) +for all the available options. + +Once you've verified that everything looks fine, change to the new `services/simple-web-app` folder, and run + +```bash +terragrunt apply +``` + +This will show you the plan for deploying your new service. 
Verify the plan looks correct, and then approve it to apply your application configuration, which will create a new Kubernetes Deployment to schedule the Pods. In the process, Kubernetes will allocate:
+
+- A `Service` resource to expose the Pods under a static IP within the Kubernetes cluster.
+- An `Ingress` resource to expose the Pods externally under an ALB.
+- A Route 53 Subdomain that binds to the ALB endpoint.
+
+Once the service is fully deployed, you can hit the configured DNS entry to reach your service.
+
+### Monitoring your deployment progress
+
+Due to the asynchronous nature of Kubernetes deployments, a successful `terragrunt apply` does not always mean your app was deployed successfully. The following commands will help you examine the deployment progress from the CLI.
+
+First, if you haven't done so already, configure your `kubectl` client to access the EKS cluster. You can follow the instructions [in this section of the docs](https://github.com/gruntwork-io/terraform-aws-eks/blob/main/core-concepts.md#how-do-i-authenticate-kubectl-to-the-eks-cluster) to configure `kubectl`. For this guide, we will use [kubergrunt](https://github.com/gruntwork-io/kubergrunt):
+
+```
+kubergrunt eks configure --eks-cluster-arn ARN_OF_EKS_CLUSTER
+```
+
+Once `kubectl` is configured, you can query the list of deployments:
+
+```
+kubectl get deployments --namespace applications
+```
+
+The list of deployments should include the new `simple-web-app` service you created. This will show you basic status info of the deployment:
+
+```
+NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
+simple-web-app   3         3         3            3           5m
+```
+
+A stable deployment is indicated by all statuses showing the same counts. You can get more detailed information about a deployment using the `describe deployments` command if the numbers are not aligned:
+
+```
+kubectl describe deployments simple-web-app --namespace applications
+```
+
+See the [How do I check the status of a rollout?](https://github.com/gruntwork-io/helm-kubernetes-services/blob/main/charts/k8s-service/README.md#how-do-i-check-the-status-of-the-rollout) documentation for more information on getting detailed information about Kubernetes Deployments.
+
+### Debugging errors
+
+Sometimes, things don't go as planned. And when that happens, it's always beneficial to know how to locate the source of the problem. There are two places you can look for information about a failed Pod.
+
+### Using kubectl
+
+The `kubectl` CLI is a powerful tool that helps you investigate problems with your `Pods`.
+
+The first step is to obtain the metadata and status of the `Pods`. To look up information about a `Pod`, retrieve them using `kubectl`:
+
+```bash
+kubectl get pods \
+  -l "app.kubernetes.io/name=simple-web-app,app.kubernetes.io/instance=simple-web-app" \
+  --all-namespaces
+```
+
+This will list out all the `Pods` associated with the deployment you just made. Note that this will show you a minimal set of information about each `Pod`. However, this is a useful way to quickly scan the scope of the damage:
+
+- How many `Pods` are available? Are all of them failing, or just a few?
+- Are the `Pods` in a crash loop? Have they booted up successfully?
+- Are the `Pods` passing health checks?
+
+Once you can locate your failing `Pods`, you can dig deeper by using `describe pod` to get more information about a single `Pod`. To do this, you will first need to obtain the `Namespace` and name for the `Pod`. This information should be available in the previous command.
Using that information, you can run:
+
+```bash
+kubectl describe pod $POD_NAME -n $POD_NAMESPACE
+```
+
+to output the detailed information. This includes the event logs, which indicate additional information about any failures that have happened to the `Pod`.
+
+You can also retrieve logs from a `Pod` (`stdout` and `stderr`) using `kubectl`:
+
+```
+kubectl logs $POD_NAME -n $POD_NAMESPACE
+```
+
+Most cluster-level issues (e.g., if there is not enough capacity to schedule the `Pod`) can be triaged with this information. However, if there are issues booting up the `Pod` or if the problems lie in your application code, you will need to dig into the logs.
+
+### CloudWatch Logs
+
+By default, all the container logs from a `Pod` (`stdout` and `stderr`) are sent to CloudWatch Logs. This is ideal for debugging situations where the container starts successfully but the service doesn't work as expected. Let's assume our `simple-web-app` containers started successfully (which they did!) but for some reason our requests to those containers are timing out or returning wrong content.
+
+1. Go to the "Logs" section of the [Cloudwatch Management Console](https://console.aws.amazon.com/cloudwatch/) and look for the name of the EKS cluster in the table.
+
+1. Clicking it should take you to a new page that displays a list of entries. Each of these corresponds to a `Pod` in the cluster and contains the `Pod` name. Look for the one that corresponds to the failing `Pod` and click it.
+
+1. You should be presented with a real-time log stream of the container. If your app logs to STDOUT, its logs will show up here. You can export the logs and analyze them in your preferred tool or use [CloudWatch Log Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) to query the logs directly in the AWS web console.
+
+
+
+
diff --git a/_docs-sources/refarch/usage/maintain-your-refarch/extending.md b/_docs-sources/refarch/usage/maintain-your-refarch/extending.md
new file mode 100644
index 0000000000..0e664f5fd5
--- /dev/null
+++ b/_docs-sources/refarch/usage/maintain-your-refarch/extending.md
@@ -0,0 +1,23 @@
+---
+title: "Extending your Reference Architecture"
+---
+
+# Extending and modifying your Reference Architecture
+
+Your Reference Architecture is delivered as a collection of IaC code. You will grow and evolve this codebase throughout the lifetime of your cloud deployment. There are a few ways in which you can extend and modify your Reference Architecture:
+
+- You can immediately add any off-the-shelf Gruntwork services.
+- You can create your own services using any Gruntwork modules.
+- You can build your own modules and combine them into your own services.
+
+## Use Gruntwork's services
+
+Gruntwork provides a [_catalog_ of services](/iac/reference/) that can be added by directly referencing them in your terragrunt configuration. You can add these services to your architecture by creating references to them in the `_envcommon` directory, then in each respective environment directory.
+
+## Composing your own services
+
+If Gruntwork doesn't already have the service you are looking for, you may be able to use our [modules](../../../iac/overview/modules) and combine them into your own bespoke services to accelerate your development of the functionality you need.
+ +## Build your own modules + +If Gruntwork doesn't have existing modules for the AWS services that you are trying to deploy, you can always [create and deploy your own modules](/iac/getting-started/deploying-a-module), compose them into your on bespoke services and add them to your Reference Architecture. diff --git a/_docs-sources/refarch/usage/maintain-your-refarch/monitoring.md b/_docs-sources/refarch/usage/maintain-your-refarch/monitoring.md new file mode 100644 index 0000000000..7fdf492614 --- /dev/null +++ b/_docs-sources/refarch/usage/maintain-your-refarch/monitoring.md @@ -0,0 +1,47 @@ +# Monitoring, Alerting, and Logging + +You'll want to see what's happening in your AWS account: + +## Metrics + +You can find all the metrics for your AWS account on the [CloudWatch Metrics +Page](https://console.aws.amazon.com/cloudwatch/home?#metricsV2:). + +- Most AWS services emit metrics by default, which you'll find under the "AWS Namespaces" (e.g. EC2, ECS, RDS). + +- Custom metrics show up under "Custom Namespaces." In particular, the [cloudwatch-memory-disk-metrics-scripts + module](https://github.com/gruntwork-io/terraform-aws-monitoring/tree/main/modules/metrics/) is installed on every + server to emit metrics not available from AWS by default, including memory and disk usage. You'll find these under + the "Linux System" Namespace. + +You may want to create a [Dashboard](https://console.aws.amazon.com/cloudwatch/home?#dashboards:) +with the most useful metrics for your services and have that open on a big screen at all times. + +## Alerts + +A number of alerts have been configured using the [alarms modules in +terraform-aws-monitoring](https://github.com/gruntwork-io/terraform-aws-monitoring/tree/main/modules/alarms) to notify you +in case of problems, such as a service running out of disk space or a load balancer seeing too many 5xx errors. + +- You can find all the alerts in the [CloudWatch Alarms + Page](https://console.aws.amazon.com/cloudwatch/home?#alarm:alarmFilter=ANY). + +- You can also find [Route 53 Health Checks on this page](https://console.aws.amazon.com/route53/healthchecks/home#/). + These health checks test your public endpoints from all over the globe and notify you if your services are unreachable. + +That said, you probably don't want to wait for someone to check that page before realizing something is wrong, so +instead, you should subscribe to alerts via email or text message. Go to the [SNS Topics +Page](https://console.aws.amazon.com/sns/v2/home?#/topics), select the `cloudwatch-alarms` topic, and click "Actions -> +Subscribe to topic." + +If you'd like alarm notifications to go to a Slack channel, check out the [sns-to-slack +module](https://github.com/gruntwork-io/terraform-aws-monitoring/tree/main/modules/alarms/sns-to-slack). + +## Logs + +All of your services have been configured using the [cloudwatch-log-aggregation-scripts +module](https://github.com/gruntwork-io/terraform-aws-monitoring/tree/main/modules/logs/cloudwatch-log-aggregation-scripts) +to send their logs to [CloudWatch Logs](https://console.aws.amazon.com/cloudwatch/home?#logs:). Instead of SSHing to +each server to see a log file, and worrying about losing those log files if the server fails, you can just go to the +[CloudWatch Logs Page](https://console.aws.amazon.com/cloudwatch/home?#logs:) and browse and search log events for all +your servers in near-real-time. 
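+
+If you prefer the terminal, you can also follow a log group directly with the AWS CLI. This is only a convenience sketch: it assumes AWS CLI v2 (which provides the `aws logs tail` command), and the log group name below is an example, so substitute one of the log group names you see in the CloudWatch console.
+
+```bash
+# Follow a CloudWatch Logs group in near-real-time from your terminal.
+# Replace the log group name with one that actually exists in your account.
+aws logs tail /stage/ecs/sample-app-frontend-stage --follow --since 1h
+```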
diff --git a/_docs-sources/refarch/usage/maintain-your-refarch/staying-up-to-date.md b/_docs-sources/refarch/usage/maintain-your-refarch/staying-up-to-date.md new file mode 100644 index 0000000000..00144fbd05 --- /dev/null +++ b/_docs-sources/refarch/usage/maintain-your-refarch/staying-up-to-date.md @@ -0,0 +1,29 @@ +# Staying up to date + +Keeping you Reference Architecture up to date is important for several reasons. AWS regularly releases updates and introduces changes to its services and features. By maintaining an up-to-date IaC codebase, you can adapt to these updates seamlessly. This ensures that your architecture remains aligned with the latest best practices and takes advantage of new functionalities, security enhancements, and performance optimizations offered by AWS. + +Neglecting to keep your IaC code up to date can lead to significant challenges. When you finally reach a point where an update becomes necessary, the process can become much more cumbersome and time-consuming. Outdated code may rely on deprecated or obsolete AWS resources, configurations, or APIs, making it difficult to migrate to newer versions smoothly. In such cases, the effort required to update the codebase can be substantially higher, potentially resulting in additional costs, delays, and increased risk of errors or production outages. + +## Upgrading Terraform across your modules + +It is important to regularly update your version of Terraform to ensure you have access to the latest features, bug fixes, security patches, and performance improvements necessary for smooth infrastructure provisioning and management. + +Neglecting regular updates may lead to increased complexity and difficulty when attempting to upgrade from multiple versions behind. This was particularly true during the pre-1.0 era of Terraform where significant changes and breaking modifications were more frequent. + +The test pipeline's workhorse, the ECS Deploy Runner, includes a Terraform version manager, +[`tfenv`](https://github.com/tfutils/tfenv), so that you can run multiple versions of Terraform with your +`infrastructure-live` repo. This is especially useful when you want to upgrade Terraform versions. + +1. You'll first need to add a `.terraform-version` file to the module directory of the module you're upgrading. +1. In that file, specify the Terraform version as a string, e.g. `1.0.8`. Then push your changes to a branch. +1. The test pipeline will detect the change to the module and run `plan` on that module. When it does this, it will + use the Terraform version you specified in the `.terraform-version` file. +1. If the `plan` output looks good and there are no issues, you can approve and merge to your default protected branch. Once the code is merged, the changes will be `apply`ed + using the newly specified Terraform version. + + :::info + + The `.tfstate` state file will be written in the version specified by the `.terraform-version` file. You can verify this by viewing the state file in the S3 + bucket containing all your Reference Architecture's state files. + + ::: diff --git a/_docs-sources/refarch/usage/maintain-your-refarch/undeploying.md b/_docs-sources/refarch/usage/maintain-your-refarch/undeploying.md new file mode 100644 index 0000000000..24f5ea1bb3 --- /dev/null +++ b/_docs-sources/refarch/usage/maintain-your-refarch/undeploying.md @@ -0,0 +1,204 @@ +# Undeploying your Reference Architecture + +Terraform makes it fairly easy to delete resources using the `destroy` command. 
This is very useful in testing and +pre-prod environments, but can also be dangerous in production environments. + +:::danger + +Be especially careful when running `destroy` in any production environment so you don't accidentally end up deleting +something you'll very much regret (e.g., a production database). + +If you delete resources, **there is no undo** + +::: + +## Prerequisites + +### Understand `force_destroy` on S3 buckets + +By default, if your Terraform code includes an S3 bucket, when you run `terraform destroy`, if that bucket contains +any content, Terraform will _not_ delete the bucket and instead will give you an error like this: + +```yaml +bucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket. +``` + +This is a safety mechanism to ensure that you don't accidentally delete your data. + +If you are absolutely sure you want to delete the contents of an S3 bucket (remember, there's no undo!), all the +services that use S3 buckets expose a `force_destroy` setting that you can set to `true` in your `terragrunt.hcl` +files to tell that service to delete the contents of the bucket when you run `destroy`. Here's a partial list of +services that expose this variable: + +:::note + +You may not have all of these in your Reference Architecture + +::: + +- `networking/alb` +- `mgmt/openvpn-server` +- `landingzone/account-baseline-app` +- `services/k8s-service` + +### Understand module dependencies + +Gruntwork Pipelines (the CI/CD pipeline deployed with your Reference Architecture) only **supports destroying modules +that have no downstream dependencies.** + +You can destroy multiple modules only if: + +- All of them have no dependencies. +- None of them are dependent on each other. + +#### Undeploying a module with many dependencies + +As an example, most modules depend on the `vpc` module, for fetching information about the VPC using [Terragrunt `dependency` +blocks](https://terragrunt.gruntwork.io/docs/reference/config-blocks-and-attributes/#dependency) or +[aws_vpc](https://www.terraform.io/docs/providers/aws/d/vpc.html) data source. If you undeploy your `vpc` +_before_ the modules that depend on it, then any command you try to run on those other modules will fail, as their +data sources will no longer be able to fetch the VPC info! + +Therefore, you should only destroy a module if you're sure no other module depends on it! Terraform does not provide +an easy way to track these sorts of dependencies. We have configured the modules here using Terragrunt [`dependency`](https://terragrunt.gruntwork.io/docs/reference/config-blocks-and-attributes/#dependency) blocks, so use those to find dependencies between modules. + +You can check the module dependency tree with `graph-dependencies` and GraphViz: + +```bash +aws-vault exec -- terragrunt graph-dependencies | dot -Tpng > dep-graph.png +open dep-graph.png +``` + +## Undeploying a module with no dependencies using Gruntwork Pipelines + +To destroy a module with no downstream dependencies, such as `route53-private` in the `dev` environment: + +1. Update the `force_destroy` variable in `dev/us-west-2/dev/networking/route53-private/terragrunt.hcl`. + [See force_destroy section](#pre-requisite-force_destroy-on-s3-buckets). + + ```json + force_destroy = true + ``` + +1. Open a pull request for that change and verify the plan in CI. You should see a trivial change to update the + module. +1. Go through the typical git workflow to get the change merged into the main branch. +1. 
As CI runs on the main branch, watch for the job to be held for approval. Approve the job, and wait for the + `deployment` step to complete so that the module is fully updated with the new variable. +1. Remove the module folder from the repo. For example: + + ```bash + rm -rf dev/us-west-2/dev/networking/route53-private + ``` + +1. Open a pull request for that change and verify the plan in CI. + - Make sure the `plan -destroy` output looks accurate. + - If you are deleting multiple modules (e.g., in `dev`, `stage`, and `prod`) you should see multiple plan + outputs -- one per folder deleted. You'll need to scroll through the plan output to see all of them, as + it runs `plan -destroy` for each folder individually. +1. Go through the typical git workflow to get the change merged into the main branch. +1. As CI runs on the main branch, watch for the job to be held for approval. Approve the job, and wait for the + `deployment` step to complete so that the module is fully _deleted_. +1. [Remove the Terraform state](#removing-the-terraform-state). +1. Repeat this process for upstream dependencies you may now want to destroy, always starting from the + modules that have no existing downstream dependencies. + +### Manually undeploying a single module + +You can also bypass the CI/CD pipeline and run destroy locally. For example: + +```bash +cd stage/us-west-2/stage/services/sample-app-frontend +terragrunt destroy +``` + +## Manually undeploying multiple modules or an entire environment + +_If you are absolutely sure you want to run destroy on multiple modules or an entire environment_, you can use the `destroy-all` command. For example, to undeploy the entire staging environment, you'd run: + +:::danger + +This operation cannot be undone! + +::: + +```bash +cd stage +terragrunt destroy-all +``` + +Terragrunt will then run `terragrunt destroy` in each subfolder of the current working directory, processing them in +reverse order based on the dependencies you define in the `terragrunt.hcl` files. + +To avoid interactive prompts from Terragrunt (use very carefully!!), add the `--terragrunt-non-interactive` flag: + +```bash +cd stage +terragrunt destroy-all --terragrunt-non-interactive +``` + +To undeploy everything except a couple specific sub-folders, add the `--terragrunt-exclude-dir` flag. For example, to +run `destroy` in each subfolder of the `stage` environment except MySQL and Redis, you'd run: + +``` +cd stage +terragrunt destroy-all \ + --terragrunt-exclude-dir stage/us-east-1/stage/data-stores/mysql \ + --terragrunt-exclude-dir stage/us-east-1/stage/data-stores/redis +``` + +## Removing the Terraform state + +:::danger + +Deleting state means that you lose the ability to manage your current Terraform resources! Be sure to only delete once you have confirmed all resources are destroyed. + +::: + +Once all the resources for an environment have been destroyed, you can remove the state objects managed by `terragrunt`. +The Reference Architecture manages state for each environment in an S3 bucket in each environment's AWS account. +Additionally, to prevent concurrent access to the state, it also utilizes a DynamoDB table to manage locks. + +To delete the state objects, login to the console and look for the S3 bucket in the environment you wish to undeploy. It +should begin with your company's name and end with `terraform-state`. Also look for a DynamoDB +table named `terraform-locks`. You can safely remove both **once you have confirmed all the resources have been +destroyed successfully**. 
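+
+If you prefer the CLI to the console, the cleanup looks roughly like the sketch below. The bucket and table names are assumptions based on the naming patterns described above, so double-check them first. Also note that state buckets are typically versioned, so you may need to purge old object versions (easiest from the S3 console) before the bucket can be deleted.
+
+```bash
+# DANGER: only run this after confirming every resource in the environment is destroyed.
+# Empty the state bucket (current object versions), delete it, then drop the lock table.
+aws s3 rm s3://acme-stage-terraform-state --recursive
+aws s3 rb s3://acme-stage-terraform-state
+aws dynamodb delete-table --table-name terraform-locks
+```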
+ +## Useful tips + +- **Destroy resources in groups instead of all at once.** + + - There are [known instabilities](#known-errors) with destroying many modules at once. In addition, Terragrunt is + designed to process the modules in a graph, and will not continue on if there is an error. This means that you + could run into situations where Terragrunt has destroyed a module, but returns an error due to Terraform bugs that + prevent you from cleanly calling destroy twice. + - To address these instabilities, it is recommended to destroy the resources in groups. For example, you can start + by destroying all the services first (e.g., `stage/REGION/stage/services`), then the data stores (e.g., + `stage/REGION/stage/data-stores`), and finally the networking resources (e.g., `stage/REGION/stage/networking`). + - When identifying groups to destroy, use [terragrunt + graph-dependencies](https://terragrunt.gruntwork.io/docs/reference/cli-options/#graph-dependencies) to view the + dependency graph so that you destroy the modules in the right order. + +- **Empty + Delete S3 buckets using the web console (when destroying whole environments).** + - As mentioned in [Pre-requisite: force_destroy on S3 buckets](#pre-requisite-force_destroy-on-s3-buckets), it is + recommended to set `force_destroy = true` prior to running destroy so that Terraform can destroy the S3 buckets. + However, this can be cumbersome if you are destroying whole environments, as it can be difficult to flip the bit in + every single module. + - Alternatively, it is often faster and more convenient to empty and delete the buckets using the AWS web console before executing the `destroy` command with `terragrunt`. + - **IMPORTANT**: You should only do this if you are intending on destroying an entire environment. Otherwise, it is + too easy to accidentally delete the wrong S3 bucket. + +## Known Terraform errors + +If your `destroy` fails with: + +``` +variable "xxx" is nil, but no error was reported +``` + +Terraform has a couple bugs ([18197](https://github.com/hashicorp/terraform/issues/18197) and +[17862](https://github.com/hashicorp/terraform/issues/17862)) that may give this error when you run +`destroy`. + +This usually happens when the module already had `destroy` called on it previously and you re-run `destroy`. In this +case, your best bet is to skip over that module with the `--terragrunt-exclude-dir` (more details: [here](https://terragrunt.gruntwork.io/docs/reference/cli-options/#terragrunt-exclude-dir)). diff --git a/_docs-sources/refarch/usage/pipelines-integration/index.md b/_docs-sources/refarch/usage/pipelines-integration/index.md new file mode 100644 index 0000000000..eb9b36d383 --- /dev/null +++ b/_docs-sources/refarch/usage/pipelines-integration/index.md @@ -0,0 +1,98 @@ +# Pipelines integration + +CI/CD is a crucial tool for ensuring the smooth iteration and consistent delivery of Infrastructure as Code (IaC) to production environments. By adopting CI/CD practices, teams can automate the process of integrating and testing changes made to IaC code, allowing for frequent and reliable updates. With CI/CD, each change to the IaC codebase triggers an automated build process, ensuring that any new additions or modifications are properly incorporated. This enables developers to catch errors and conflicts early, facilitating collaboration and reducing the likelihood of issues surfacing during deployment. 
+ +Gruntwork Pipelines is a framework that enables you to use your preferred CI tool to securely run an end-to-end pipeline for infrastructure code (Terraform) and app code (Docker or Packer). Rather than replace your existing CI/CD provider, Gruntwork Pipelines is designed to enhance the security of your existing tool. For more information please see the [full pipelines documentation](/pipelines/overview/). + +In the guide below, we walk through how to configure Gruntwork Pipelines in your CI/CD. + +## Set up machine user credentials + +### Get the machine user credentials from AWS + +1. Log into the Security account in the AWS Console. +1. Go into IAM and find the ci-machine-user under Users. +1. Go to Security Credentials > Access Keys > Create Access Key. +1. Save these values as the `AWS_ACCESS_KEY_ID` and the `AWS_SECRET_ACCESS_KEY` Environment Variables in CircleCI. + + | Env var name | Value | + | --------------------- | --------------------------------------------------------------------------------------------------------------------------------------- | + | AWS_ACCESS_KEY_ID | The Access Key generated for the machine user in the Security account. | + | AWS_SECRET_ACCESS_KEY | The Secret Key generated for the machine user in the Security account. | + | GITHUB_OAUTH_TOKEN | Enter the MachineUserGitHubPAT here. You can find this in `reference-architecture-form.yml` or in the shared account's Secrets Manager. | + +## Verify: Testing an infrastructure change end to end + +You can verify the pipeline by making a change to one of the modules. For example, follow the steps below to extend the +number of replicas in the sample app: + +1. Create a new branch in the `infrastructure-live` repo. + `git checkout -B add-replica-to-sample-app`. +1. Open the file `dev/us-west-2/dev/services/sample-app-frontend` in your editor. +1. Change the input variable `desired_number_of_tasks` to `2`. +1. Commit the change. + `git commit -a`. +1. Push the branch to GitHub and open a PR. + `git push add-replica-to-sample-app` +1. Login to CircleCI. Navigate to your infrastructure-live project. +1. Click on the new pipeline job for the branch `add-replica-to-sample-app` to see the build log. +1. Verify the `plan`. Make sure that the change corresponds to adding a new replica to the ECS service. +1. When satisfied with the plan, merge the PR into `main`. +1. Go back to the project and verify that a new build is started on the `main` branch. +1. Wait for the `plan` to finish. The build should hold for approval. +1. Approve the deployment by clicking `Approve`. +1. Wait for the `apply` to finish. +1. Login to the AWS console and verify the ECS service now has 2 replicas. + +## (Optional) Configure Slack notifications + +### Create a Slack App + +1. Visit [your apps](https://api.slack.com/apps) on the Slack API website, and click `Create New App`. +1. Name your application (e.g., `CircleCI` or `CircleCI-Pipeline`). +1. Then select the Slack workspace in which to install this app. + +### Set Permissions + +On the next page select the "Permissions" area, and add these 3 "scopes". + +- `chat:write` +- `chat:write.public` +- `files:write` + +
+*Screenshot: Slack App Scopes*
+ +### Install and Receive Token + +Install the application into the Slack workspace and save your OAuth Access Token. This will be used in +a CircleCI environment variable. + +
+*Screenshot: Slack OAuth Tokens*
+ +
+*Screenshot: Slack OAuth Access Token*
+ +### Choose a Slack channel to notify + +1. Choose or create a Slack channel in your workspace to notify with pipeline updates. +1. Right-click the channel name. You'll see a context menu. +1. Select `Copy link`. +1. Extract the Channel ID from the URL copied. E.g., `https://.slack.com/archives/` + +### Create env vars on CircleCI + +1. Login to CircleCI. Navigate to Project Settings -> Environment Variables. +1. Configure the following environment variables: + + | Env var name | Value | + | --------------------- | ----------------------------------------------------------------- | + | SLACK_ACCESS_TOKEN | The OAuth token acquired through the previous step. | + | SLACK_DEFAULT_CHANNEL | If no channel ID is specified, the app will attempt to post here. | diff --git a/_docs-sources/refarch/whats-this/index.md b/_docs-sources/refarch/whats-this/index.md new file mode 100644 index 0000000000..f86182bfc0 --- /dev/null +++ b/_docs-sources/refarch/whats-this/index.md @@ -0,0 +1,8 @@ +# What is all this? + + +Haxx0r ipsum endif race condition d00dz fork cookie recursively big-endian tera. Wabbit break concurrently printf script kiddies eof cd malloc warez chown kilo /dev/null todo ascii foad bang exception highjack epoch headers. Flush data piggyback class hexadecimal true syn ddos daemon snarf over clock. + +Cookie packet sniffer ifdef endif all your base are belong to us stdio.h bin ssh I'm sorry Dave, I'm afraid I can't do that terminal hack the mainframe. Concurrently Leslie Lamport brute force else socket malloc over clock foo grep double var mainframe. Ip cache access buffer pwned bytes system packet todo emacs gurfle dereference foad strlen deadlock alloc cat false for /dev/null. + +Wannabee dereference private wombat case root fatal char giga Leslie Lamport perl sudo sql ascii cat grep James T. Kirk bin stack trace afk. Malloc foad class daemon I'm compiling salt brute force highjack syn regex socket exception warez hexadecimal linux bit bytes echo hack the mainframe. Then wabbit injection Linus Torvalds pragma tunnel in data win protocol leet fopen printf void default gc Starcraft piggyback todo gnu concurrently. diff --git a/_docs-sources/refarch/whats-this/services.md b/_docs-sources/refarch/whats-this/services.md new file mode 100644 index 0000000000..e69de29bb2 diff --git a/_docs-sources/refarch/whats-this/understanding-the-deployment-process.md b/_docs-sources/refarch/whats-this/understanding-the-deployment-process.md new file mode 100644 index 0000000000..386391348b --- /dev/null +++ b/_docs-sources/refarch/whats-this/understanding-the-deployment-process.md @@ -0,0 +1,34 @@ +# Understanding the Deployment Process + +The Gruntwork Reference Architecture has three deployment phases. + +### Configuration + +Configuration of the Gruntwork Reference Architecture is primarily [your responsibility](../../intro/overview/what-you-provide). 
+
+- We deliver a templated `infrastructure-live-${YOUR_COMPANY_NAME}` repository to you in our GitHub organization
+- You access the repo in GitHub via invitation in the [Gruntwork Dev Portal](https://app.gruntwork.io)
+- You use the Gruntwork CLI wizard to create accounts and set config options
+- Pre-flight checks run via GitHub Actions to determine when the repo is ready for deployment
+- The AWS accounts you are deploying the Reference Architecture to should be empty at the conclusion of this phase
+- You merge the PR to the `main` branch to initiate the deployment phase
+
+### Deployment
+
+The deployment phase is primarily [our responsibility](../../intro/overview/what-we-provide.md#gruntwork-reference-architecture).
+
+- We monitor the deployment and fix any errors that occur as needed
+- In some cases, we may need to communicate with you to resolve issues (e.g., AWS quota problems)
+- Deployment is completed and the `infrastructure-live-${YOUR_COMPANY_NAME}` repo is populated
+- During the deployment phase, you should not attempt to modify resources in your AWS accounts or respond to any automated notifications from them
+- Once the deployment is complete, you will receive an email
+
+### Adoption
+
+The adoption phase is primarily [your responsibility](../../intro/overview/what-you-provide).
+
+- You complete “last mile” configuration following our handoff docs, including final Pipelines integrations with your CI/CD tool of choice
+- You migrate the `infrastructure-live-${YOUR_COMPANY_NAME}` repo to your own version control system or GitHub organization
+- You revoke Gruntwork access to your AWS accounts
+- At this point, your AWS accounts are fully in your control
+- From this point forward, we expect you to self-serve, with assistance from Gruntwork Support as needed
diff --git a/_docs-sources/refarch/whats-this/what-is-a-reference-architecture.md b/_docs-sources/refarch/whats-this/what-is-a-reference-architecture.md
new file mode 100644
index 0000000000..15e34a75ea
--- /dev/null
+++ b/_docs-sources/refarch/whats-this/what-is-a-reference-architecture.md
@@ -0,0 +1,23 @@
+# What is a Reference Architecture?
+
+The Gruntwork Reference Architecture is an implementation of best practices for infrastructure in the cloud. It is an end-to-end tech stack built on top of our Infrastructure as Code Library, deployed into your AWS accounts.
+
+The Gruntwork Reference Architecture is opinionated and delivered as code. It is written in [Terragrunt](https://terragrunt.gruntwork.io/), our thin wrapper that provides extra tools for managing remote state and keeping your configurations [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself). Our `_envcommon` pattern reduces the amount of code you need to copy from one place to another when creating additional identical infrastructure.
+
+## Components
+
+The Gruntwork Reference Architecture has three main components: Gruntwork Landing Zone, Gruntwork Pipelines, and a Sample Application.
+
+### Landing Zone
+
+Gruntwork Landing Zone is a Terraform-native approach to [AWS Landing Zone / Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html). It uses Terragrunt to quickly create new AWS accounts, configure them with a standard security baseline, and define a best-practices multi-account setup.
+
+
+### Pipelines
+
+[Gruntwork Pipelines](/pipelines/overview/) makes the process of deploying infrastructure similar to how developers often deploy code.
It is a code framework and approach that enables the customer to use your preferred CI tool to set up an end-to-end pipeline for infrastructure code. + + +### Sample Application + +Our [sample application](https://github.com/gruntwork-io/aws-sample-app) is built with JavaScript, Node.js, and Express.js, following [Twelve-Factor App](https://12factor.net/) practices. It consists of a load balancer, a front end, a backend, a cache, and a database. diff --git a/_docs-sources/reference/services/intro/deploy-new-infrastructure.md b/_docs-sources/reference/services/intro/deploy-new-infrastructure.md index e879c37038..0bc46b3d21 100644 --- a/_docs-sources/reference/services/intro/deploy-new-infrastructure.md +++ b/_docs-sources/reference/services/intro/deploy-new-infrastructure.md @@ -118,7 +118,7 @@ deploy Terraform code from the Service Catalog. See 1. **GitHub Authentication**: All of Gruntwork's code lives in GitHub, and as most of the repos are private, you must authenticate to GitHub to be able to access the code. For Terraform, we recommend using Git / SSH URLs and using - SSH keys for authentication. See [Link Your GitHub ID](/intro/dev-portal/link-github-id) + SSH keys for authentication. See [Link Your GitHub ID](/developer-portal/link-github-id) for instructions on linking your GitHub ID and gaining access. 1. **Deploy**. You can now deploy the service as follows: @@ -258,7 +258,7 @@ Now you can create child `terragrunt.hcl` files to deploy services as follows: 1. **GitHub Authentication**: All of Gruntwork's code lives in GitHub, and as most of the repos are private, you must authenticate to GitHub to be able to access the code. For Terraform, we recommend using Git / SSH URLs and using SSH keys for authentication. See [How to get access to the Gruntwork Infrastructure as Code - Library](/intro/dev-portal/create-account) + Library](/developer-portal/create-account) for instructions on setting up your SSH key. 1. **Deploy**. You can now deploy the service as follows: @@ -321,7 +321,7 @@ Below are instructions on how to build an AMI using these Packer templates. We'l ``` See [How to get access to the Gruntwork Infrastructure as Code - Library](/intro/dev-portal/create-account) + Library](/developer-portal/create-account) for instructions on setting up GitHub personal access token. 1. **Set variables**. Each Packer template defines variables you can set in a `variables` block at the top, such as diff --git a/_docs-sources/intro/dev-portal/_category_.json b/docs/developer-portal/_category_.json similarity index 100% rename from _docs-sources/intro/dev-portal/_category_.json rename to docs/developer-portal/_category_.json diff --git a/_docs-sources/intro/dev-portal/create-account.md b/docs/developer-portal/create-account.md similarity index 80% rename from _docs-sources/intro/dev-portal/create-account.md rename to docs/developer-portal/create-account.md index 57472b19f5..265466711e 100644 --- a/_docs-sources/intro/dev-portal/create-account.md +++ b/docs/developer-portal/create-account.md @@ -30,3 +30,17 @@ For security, sign in emails expire after 10 minutes. You can enter your email a ## 3. Provide account details If you are the admin for your organization, you'll be prompted to confirm details including your company address and phone number, as well as a billing email. Provide the required information and click **Continue** to finish signing in. 
+ +## Related Knowledge Base Discussions + +- [Invitation to the Developer Portal not received](https://github.com/orgs/gruntwork-io/discussions/716) +- [Trouble logging into the Portal with email](https://github.com/orgs/gruntwork-io/discussions/395) +- [How can the email associated with an account be changed?](https://github.com/orgs/gruntwork-io/discussions/714) + + + diff --git a/_docs-sources/intro/dev-portal/invite-team.md b/docs/developer-portal/invite-team.md similarity index 86% rename from _docs-sources/intro/dev-portal/invite-team.md rename to docs/developer-portal/invite-team.md index 9946b64a9c..62d4f770c2 100644 --- a/_docs-sources/intro/dev-portal/invite-team.md +++ b/docs/developer-portal/invite-team.md @@ -39,3 +39,17 @@ This change will take effect immediately. Any team members who have accepted the ## Requesting additional licenses The number of licenses available depends on the level of your subscription. You can see the total number of licenses as well as the number remaining at the top of the [Team](https://app.gruntwork.io/team) page. If you need to invite more team members than your current license limit allows, you may request additional licenses, which are billed at a standard monthly rate. To do so, contact sales@gruntwork.io. + +## Related Knowledge Base Discussions + +- [Invitation to the Developer Portal not received](https://github.com/orgs/gruntwork-io/discussions/716) +- [Trouble logging into the Portal with email](https://github.com/orgs/gruntwork-io/discussions/395) +- [How can the email associated with an account be changed?](https://github.com/orgs/gruntwork-io/discussions/714) + + + diff --git a/docs/intro/dev-portal/link-github-id.md b/docs/developer-portal/link-github-id.md similarity index 58% rename from docs/intro/dev-portal/link-github-id.md rename to docs/developer-portal/link-github-id.md index 7b2d228a84..19dd783c77 100644 --- a/docs/intro/dev-portal/link-github-id.md +++ b/docs/developer-portal/link-github-id.md @@ -1,8 +1,6 @@ -# Link Your GitHub ID +# Link Your GitHub Account -Gruntwork provides all code included in your subscription through GitHub. You’ll need to link a GitHub ID to your account in order to access the IaC Library on GitHub. Follow the steps below to link your GitHub ID. - -## Linking your GitHub account +Gruntwork provides all code included in your subscription through GitHub. You need to link a GitHub ID to your Gruntwork Developer Portal account in order to access the IaC Library on GitHub. Follow the steps below to link your GitHub ID. 1. First, sign in to the [Gruntwork Developer Portal](https://app.gruntwork.io). 2. Click the **Link my GitHub Account** button in the notice at the top of the home page, or the corresponding button located in your [Profile Settings](https://app.gruntwork.io/settings/profile). @@ -10,21 +8,21 @@ Gruntwork provides all code included in your subscription through GitHub. You’ 4. After being redirected back to the Gruntwork Developer Portal, click the **Accept My Invite** button. This will take you to GitHub again, where you can accept an invitation to join the Gruntwork organization. (You can ignore the corresponding invite email you receive from GitHub.) 5. Click **Join Gruntwork** to accept the invitation and access the IaC Library. -Once you’ve linked your account, the notice on the home page will disappear and you’ll find your GitHub ID recorded in your [Profile Settings](https://app.gruntwork.io/settings/profile). 
Going forward, you’ll have access to all private repositories included in your subscription. If you haven’t yet done so, we strongly recommend [adding an SSH key](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) to your GitHub account. An SSH key is required to access the Gruntwork IaC library without adding a password in your Terraform code. +:::info + +Once you’ve linked your account, the notice on the home page will disappear and you’ll find your GitHub ID recorded in your [Profile Settings](https://app.gruntwork.io/settings/profile). Going forward, you’ll have access to all private repositories included in your subscription. If you haven’t done so yet, we strongly recommend [adding an SSH key](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) to your GitHub account. An SSH key is required to access the Gruntwork IaC library without adding a password in your Terraform code. -## Linking a new GitHub account +::: -To link a new GitHub ID, you’ll first have to unlink the current one. Although uncommon, note that any private forks of Gruntwork repos will be deleted when you unlink your account. +## Related Knowledge Base Discussions -1. Sign in to the Gruntwork Developer Portal and navigate to your [Profile Settings](https://app.gruntwork.io/settings/profile). -2. Click **Unlink** in the description under the **GitHub Account** section. -3. Click **Yes, Unlink My Account** in the confirmation dialog that appears. -4. Proceed with the [steps above](#linking-your-github-account) to link a new GitHub account *using a private/incognito browser window*. (This guarantees you’ll have an opportunity to specify the new account you wish to link.) +- [I have linked my GitHub Account but do not have code access](https://github.com/orgs/gruntwork-io/discussions/715) +- [How can I change my GitHub account (unlink/link)?](https://github.com/orgs/gruntwork-io/discussions/713) diff --git a/docs/guides/working-with-code/using-modules.md b/docs/guides/working-with-code/using-modules.md index e88b9f41ea..fd2409712f 100644 --- a/docs/guides/working-with-code/using-modules.md +++ b/docs/guides/working-with-code/using-modules.md @@ -162,7 +162,7 @@ This code pulls in a module using Terraform’s native `module` functionality. F The `source` URL in the code above uses a Git URL with SSH authentication (see [module sources](https://www.terraform.io/docs/modules/sources.html) for all the types of `source` URLs you can use). -If you followed the [SSH key instructions](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) when [linking your GitHub ID](/intro/dev-portal/link-github-id.md), this will allow you to access private repos in the Gruntwork +If you followed the [SSH key instructions](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) when [linking your GitHub ID](/developer-portal/link-github-id.md), this will allow you to access private repos in the Gruntwork Infrastructure as Code Library without having to hard-code a password in your Terraform code. 
#### Versioned URL @@ -770,6 +770,6 @@ Now that you have your Terraform module deployed, you can pull in updates as fol diff --git a/docs/iac/getting-started/accessing-the-code.md b/docs/iac/getting-started/accessing-the-code.md new file mode 100644 index 0000000000..1ba5be8b6f --- /dev/null +++ b/docs/iac/getting-started/accessing-the-code.md @@ -0,0 +1,17 @@ +# Accessing the code + +Gruntwork provides all code included in your subscription to the Infrastructure as Code (IaC) library through GitHub. To gain access to the IaC Library, you must first [create an account in the Developer Portal](../../developer-portal/create-account.md). Once you have an account, you must [link your GitHub ID](../../developer-portal/link-github-id) to your Developer Portal account to gain access to the IaC Library. + +## Accessing Modules and Services in the IaC library + +Once you have gained access to the Gruntwork IaC library, you can view the source code for our modules and services in [GitHub](https://github.com/orgs/gruntwork-io/repositories). For a full list of modules and services, check the [Library Reference](../../iac/reference/index.md). + +In GitHub, each IaC repository is prefixed with `terraform-aws-` then a high level description of the modules it contains. For example, Amazon SNS, SQS, MSK, and Kinesis are located in the `terraform-aws-messaging` repository. In each repository, the modules are located in the `modules` directory. Example usage and tests are provided for each module in the `examples` and `tests` directories, respectively. + + + diff --git a/docs/iac/getting-started/deploying-a-module.md b/docs/iac/getting-started/deploying-a-module.md new file mode 100644 index 0000000000..a8610ec3d1 --- /dev/null +++ b/docs/iac/getting-started/deploying-a-module.md @@ -0,0 +1,262 @@ +# Deploying your first module + +[Modules](../overview/modules.md) allow you to define an interface to create one or many resources in the cloud or on-premise, similar to how in object oriented programming you can define a class that may have different attribute values across many instances. + +This tutorial will teach you how to develop a Terraform module that deploys an AWS Lambda function. We will create the required file structure, define an AWS Lambda function and AWS IAM role as code, then plan and apply the resource in an AWS account. Then, we’ll verify the deployment by invoking the Lambda using the AWS CLI. Finally, we'll clean up the resources we create to avoid unexpected costs. + +## Prerequisites +- An AWS account with permissions to create the necessary resources +- An [AWS Identity and Access Management](https://aws.amazon.com/iam/) (IAM) user or role with permissions to create AWS IAM roles and Lambda functions +- [AWS Command Line Interface](https://aws.amazon.com/cli/) (AWS CLI) installed on your local machine +- [Terraform](https://www.terraform.io) installed on your local machine + +## Create the module + +In this section you’ll create a Terraform module that can create an AWS Lambda function and IAM role. This module will include three files — `main.tf` which will contain the resource definitions, `variables.tf`, which specifies the possible inputs to the module, and `outputs.tf`, which specifies the values that can be used to pass references to attributes from the resources in the module. + +This module could be referenced many times to create any number of AWS Lambda functions and IAM roles. 
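+For example, once the module exists, a root configuration could instantiate it twice to get two independent functions and roles. The sketch below is illustrative only: the `./modules/lambda` path assumes a root configuration at the top of the directory created in the next step, `lambda_name` is the module's input variable, and any other required inputs are omitted.
+
+```hcl
+# Illustrative sketch: two instances of the same module, each creating
+# its own Lambda function and IAM role. Other required inputs omitted.
+module "reporting_lambda" {
+  source      = "./modules/lambda"
+  lambda_name = "reporting"
+}
+
+module "billing_lambda" {
+  source      = "./modules/lambda"
+  lambda_name = "billing"
+}
+```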
+ + +### Create a basic file structure +First, create the directories and files that will contain the Terraform configuration. + +```bash +mkdir -p terraform-aws-gw-lambda-tutorial/modules/lambda +touch terraform-aws-gw-lambda-tutorial/modules/lambda/main.tf +touch terraform-aws-gw-lambda-tutorial/modules/lambda/variables.tf +touch terraform-aws-gw-lambda-tutorial/modules/lambda/outputs.tf +``` + +### Define the module resources + +First, define the resources that should be created by the module. This is where you define resource level blocks provided by Terraform. For this module, we need an AWS Lambda function and an IAM role that will be used by the Lambda function. + +Paste the following snippet in `terraform-aws-gw-lambda/modules/lambda/main.tf`. +```hcl title="terraform-aws-gw-lambda/modules/lambda/main.tf" +resource "aws_iam_role" "lambda_role" { + name = "${var.lambda_name}-role" + + assume_role_policy = < diff --git a/docs/iac/getting-started/setting-up.md b/docs/iac/getting-started/setting-up.md new file mode 100644 index 0000000000..4df2aec007 --- /dev/null +++ b/docs/iac/getting-started/setting-up.md @@ -0,0 +1,49 @@ +# Setting up your machine + +The Gruntwork IaC library requires that you have a few tools installed in order to leverage our pre-built modules and services. We recommend installing these tools locally so you can develop and deploy modules and services on your local machine. + +## Terraform + +Terraform is an open source infrastructure provisioning tool that allows you to define and manage a wide variety of infrastructure (e.g., servers, load balancers, databases, network settings, and so on) as code across a wide variety of providers (e.g., AWS, GCP, Azure). Terraform defines cloud and on-premise resources in human-readable configuration language and offers a consistent workflow for provisioning and managing infrastructure. + +Gruntwork’s IaC library is built using Terraform, so having Terraform installed is required. + +### Installation +Terraform is supported on Mac (x86 and Apple Silicon), Windows, and Linux. To learn how to install for your specific OS, follow the guide to [install Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli#install-cli) on the Hashicorp website. + +If you need multiple versions of Terraform installed, [tfenv](https://github.com/tfutils/tfenv#installation) is a tool for managing and using multiple versions of Terraform. It was inspired by similar tools `rbenv` for Ruby versions and `pyenv` for Python. + +### Learn more +If you’re new to Terraform, we recommend starting with learning about Terraform’s [configuration language](https://developer.hashicorp.com/terraform/language) then familiarizing yourself with the basics of [provisioning infrastructure](https://developer.hashicorp.com/terraform/cli/run) using Terraform. + +If you want to skip immediately to learning, you can learn how to [deploy your first module](./deploying-a-module.md). For a more in-depth guide, check out our [Comprehensive Guide to Terraform](https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca) for a thorough introduction to the language. + +## Terragrunt + +[Terragrunt](https://terragrunt.gruntwork.io) is a tool developed by Gruntwork that provides extra tools for keeping your Terraform configurations DRY, working with multiple Terraform modules, and managing remote state. 
Terragrunt allows you to execute multiple Terraform commands at once, centrally manage your Terraform state configuration, and set repeatable CLI arguments. Since Terraform is a dependency of Terragrunt, you can continue to write modules for Terraform in the Terraform configuration language, then reference and re-use the modules in different environments or applications. + +:::info +Terragrunt is not a required tool to use the IaC library, but it does provide many convenience features on top of Terraform. If you are using the Gruntwork [Reference Architecture](../../refarch/whats-this/what-is-a-reference-architecture), Terragrunt is a requirement. +::: + +### Installation +Terragrunt is supported on Mac (x86 and Apple Silicon), Windows, and Linux. To install Terragrunt, follow the guide on how to [install Terragrunt](https://terragrunt.gruntwork.io/docs/getting-started/install/) on the Terragrunt website. + +If you need multiple versions of Terragrunt installed, [tgswitch](https://github.com/warrensbox/tgswitch#installation) is a tool for managing and using multiple versions of Terragrunt with a similar feature set to `tfenv`. + +### Learn more +To learn more about Terragrunt, check out the [official documentation](https://terragrunt.gruntwork.io/docs/). + +## What’s Next + +Now that you’ve got the required tools installed, you’ll learn how to [access the IaC Library code](./accessing-the-code.md). + +If you’re ready to get started with creating and deploying a module, jump to [deploying your first module](./deploying-a-module.md). + + + diff --git a/docs/iac/overview/index.md b/docs/iac/overview/index.md new file mode 100644 index 0000000000..2c595f9ab3 --- /dev/null +++ b/docs/iac/overview/index.md @@ -0,0 +1,35 @@ +# What is the Infrastructure as Code Library? + +The Gruntwork Infrastructure as Code Library (IaC Library) is a collection of reusable code that enables you to deploy and manage infrastructure quickly and reliably. It promotes code reusability, modularity, and consistency in infrastructure deployments. We’ve taken the thousands of hours we spent building infrastructure on AWS and condensed all that experience and code into pre-built packages or modules. + +## Modules + +Modules are reusable code components that are used to deploy and manage specific pieces of infrastructure. These modules encapsulate the configuration and resource definitions required to create and manage a particular component, such as a VPC, ECS cluster, or an Auto Scaling Group. For more information on modules check out the [Modules page](/iac/overview/modules/). + +## Services + +Services in the service catalog are reusable code that combines multiple modules from the IaC Library, simplifying the deployment and management of complex infrastructure configurations. Rather than dealing with individual modules and their dependencies, users can directly deploy services tailored for a particular use case. + +For more information on the service catalog check out the [Services page](/iac/overview/services/). + +## Tools used in the IaC Library + +The Gruntwork IaC Library is deployed using the following tools: + +1. [Terraform](https://www.terraform.io/). Used to define and manage most of the basic infrastructure, such as servers, databases, load balancers, and networking. 
The Gruntwork Service Catalog is compatible with vanilla [Terraform](https://www.terraform.io/), [Terragrunt](https://terragrunt.gruntwork.io/), [Terraform Cloud](https://www.hashicorp.com/blog/announcing-terraform-cloud/), and [Terraform Enterprise](https://www.terraform.io/docs/enterprise/index.html).
+
+1. [Packer](https://www.packer.io/). Used to define and manage _machine images_ (e.g., VM images). The main use case is to package code as [Amazon Machine Images (AMIs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) that run on EC2 instances. Once you’ve built an AMI, you use Terraform to deploy it into AWS.
+
+1. [Terratest](https://terratest.gruntwork.io/). Used for automated testing of modules and services.
+
+
diff --git a/docs/iac/overview/modules.md b/docs/iac/overview/modules.md
new file mode 100644
index 0000000000..2db5020d65
--- /dev/null
+++ b/docs/iac/overview/modules.md
@@ -0,0 +1,30 @@
+# What is a Module?
+
+Modules are reusable code components that encapsulate the configuration and resource definitions needed to deploy and manage a specific piece of infrastructure, such as a VPC, ECS cluster, or Auto Scaling Group. Each module defines several AWS resources. For example, the VPC module contains resource definitions for subnets, NAT gateways, and more. Modules promote code reusability, modularity, and consistency in infrastructure deployments and can be customized in a variety of ways.
+
+Gruntwork modules are tested in AWS, in a randomly selected region, each time they change to verify that the infrastructure created matches the desired configuration.
+
+## When should I use a module?
+
+The Gruntwork Infrastructure as Code (IaC) Library contains [hundreds of modules](/iac/reference/) that you can use and combine. These modules are fairly generic building blocks, so you don’t typically deploy a single module directly. Instead, you write code that combines the modules you need for a specific use case.
+
+For example, one module might deploy the control plane for Kubernetes and a separate module could deploy worker nodes; you may need to combine both modules together to deploy a Kubernetes cluster.
+
+We recommend our [Service Catalog](/iac/overview/services/) for common use cases, but our full Module Catalog is available if you have a more complex use case. For a full list of modules available, refer to the [Gruntwork Infrastructure as Code Library](/iac/reference/).
+
+## How modules are structured
+
+The code in the module repos is organized into three primary folders:
+
+1. `modules`: The core implementation code. All of the modules that you will use and deploy are defined within. For example, the ECS cluster module in the `terraform-aws-ecs` repo lives in `modules/ecs-cluster`.
+
+1. `examples`: Sample code that shows how to use the modules in the `modules` folder and allows you to try them out without having to write any code: `cd` into one of the folders, follow a few steps in the README (e.g., run `terraform apply`), and you’ll have a fully working module up and running. In other words, this is executable documentation.
+
+1. `test`: Automated tests for the code in modules and examples.
+
diff --git a/docs/iac/overview/services.md b/docs/iac/overview/services.md
new file mode 100644
index 0000000000..2069a2adda
--- /dev/null
+++ b/docs/iac/overview/services.md
@@ -0,0 +1,44 @@
+# What is a Service?
+ +The Gruntwork Service Catalog consists of a number of customizable, production-grade infrastructure-as-code services that you can use to deploy and manage your infrastructure. This includes Docker orchestration, EC2 orchestration, load balancing, networking, databases, caches, monitoring, alerting, CI/CD, secrets management, VPN, and much more. Services combine multiple modules to configure an end-to-end solution. + +## When should I use a service? + +Using a service can save you time piecing together individual modules and testing that they’re correctly referencing each other. These are designed for specific use cases such as EKS and ECS clusters, VPCs with public and private subnets, and databases. + +For example, the `eks-cluster` service combines all the modules you need to run an EKS (Kubernetes) cluster in a typical production environment, including modules for the control plane, worker nodes, secrets management, log aggregation, alerting, and so on. + +If you need more flexibility than our services provide, then you can combine modules from our [Module Catalog](/iac/overview/modules), your own modules, or open source modules to meet your specific use case. + +## How services are structured + +The code in the `terraform-aws-service-catalog` repo is organized into three primary folders: + +1. `modules`: The core implementation code of this repo. All the services that you will use and deploy are defined within, such as the EKS cluster service in modules/services/eks-cluster. + +1. `examples`: Sample code that shows how to use the services in the modules folder and allows you to try the services out without having to write any code: you `cd` into one of the folders, follow a few steps in the README (e.g., run `terraform apply`), and you’ll have fully working infrastructure up and running. In other words, this is executable documentation. Note that the examples folder contains two sub-folders: + + 1. `for-learning-and-testing`: Example code that is optimized for learning, experimenting, and testing, but not + direct production usage. Most of these examples use Terraform directly to make it easy to fill in dependencies + that are convenient for testing, but not necessarily those you’d use in production: e.g., default VPCs or mock + database URLs. + + 1. `for-production`: Example code optimized for direct usage in production. This is code from the [Gruntwork Reference + Architecture](https://gruntwork.io/reference-architecture/), and it shows you how we build an end-to-end, + integrated tech stack on top of the Gruntwork Service Catalog. To keep the code DRY and manage dependencies + between modules, the code is deployed using [Terragrunt](https://terragrunt.gruntwork.io/). However, Terragrunt + is NOT required to use the Gruntwork Service Catalog: you can alternatively use vanilla Terraform or Terraform + Cloud / Enterprise, as described [here](https://docs.gruntwork.io/reference/services/intro/deploy-new-infrastructure#how-to-deploy-terraform-code-from-the-service-catalog). + + 1. Not all modules have a `for-production` example, but you can still create a production-grade configuration by + using the template provided in this discussion question, [How do I use the modules in terraform-aws-service-catalog + if there is no example?](https://github.com/gruntwork-io/knowledge-base/discussions/360#discussioncomment-25705480). + +1. `test`: Automated tests for the code in modules and examples. 
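+As a concrete illustration of consuming a service like the `eks-cluster` example mentioned above, a module block might look like the sketch below. This is illustrative only: the version tag and inputs are placeholders, so consult the service's page in the Library Reference for its actual variables and the latest release.
+
+```hcl
+# Sketch only: replace vX.Y.Z with a real release tag and supply the
+# inputs required by the service (see its Library Reference page).
+module "eks_cluster" {
+  source = "git::git@github.com:gruntwork-io/terraform-aws-service-catalog.git//modules/services/eks-cluster?ref=vX.Y.Z"
+
+  cluster_name = "example-cluster"
+  # ... networking, IAM, and worker-node inputs go here
+}
+```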
+ + diff --git a/docs/iac/reference/index.md b/docs/iac/reference/index.md new file mode 100644 index 0000000000..d1be2d7937 --- /dev/null +++ b/docs/iac/reference/index.md @@ -0,0 +1,14 @@ +# Library Reference + +The Library Reference serves as the definitive index for all actively maintained Modules and Services within the Gruntwork Infrastructure as Code Library. This comprehensive reference provides a dedicated page for each module and service providing descriptions, detailed information on input and output variables, and sample code to help you get started. + +If you're already familiar with the IaC Library and are ready to dive right in, you can find the full Service Catalog and Module catalog reference in the left sidebar. + +For an introduction to the Gruntwork IaC Library, check out the [Overview](/iac/overview) page. This page introduces the concept of Modules and Services, clarifies their respective purposes, and offers guidance on when and how to effectively utilize them. The overview is a great starting point for understanding what the library can offer and how to best navigate it. + + diff --git a/docs/iac/stay-up-to-date/updating.md b/docs/iac/stay-up-to-date/updating.md new file mode 100644 index 0000000000..a74745ec52 --- /dev/null +++ b/docs/iac/stay-up-to-date/updating.md @@ -0,0 +1,44 @@ +# Updating + +Updating a module or service requires changing the tagged version in the `source` attribute of the module block. For backwards compatible changes, this is as simple as incrementing the version number. For backwards incompatible changes, refer to the release notes for a migration guide in each module's Github repository release page. + +We recommend updating module versions in your development environment first, followed by staging, then production, to ensure that the update and any required changes are well understood. + +## Example: Update a version + +Below is a module block referencing version `0.15.3` of the `single-server` submodule from the `terraform-aws-server` module. + +To update to version `0.15.4`, you update the value to the right of `ref=` in the source attribute. Since the version number denotes that this update is backwards compatible, it should not require any other changes. + +```hcl +module "my_instance" { + # Old + # source = "git::git@github.com:gruntwork-io/terraform-aws-server.git//modules/single-server?ref=v0.15.3" + # New + source = "git::git@github.com:gruntwork-io/terraform-aws-server.git//modules/single-server?ref=v0.15.4" + + name = "my_instance" + ami = "ami-123456" + instance_type = "t2.medium" + keypair_name = "my-keypair" + user_data = "${var.user_data}" + + vpc_id = "${var.vpc_id}" + subnet_id = "${var.subnet_id}" +} +``` + +After making the change, run `terraform plan`, inspect the output to ensure it looks as you expect, then run `terraform apply`. + +## Patcher + +Keeping track of all references to modules and services is a complicated, error prone task. To solve this problem, Gruntwork developed [Patcher](https://gruntwork.io/patcher), which shows the version of a module you are using, the latest version available, and the changelog for the module. If you're interested in trying out Patcher, [request early access](https://gruntwork.io/early-access)! 
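+One practical note on the manual workflow above: Terraform caches module source code locally, so after editing the `ref` in a `source` URL you typically need to re-initialize before the new version is fetched and reflected in the plan.
+
+```bash
+# Re-download modules so the new ref is picked up, then review and apply
+terraform init
+terraform plan
+terraform apply
+```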
+ + + + diff --git a/docs/iac/stay-up-to-date/versioning.md b/docs/iac/stay-up-to-date/versioning.md new file mode 100644 index 0000000000..068cbfc8c4 --- /dev/null +++ b/docs/iac/stay-up-to-date/versioning.md @@ -0,0 +1,44 @@ +# Versioning + +Gruntwork versions the IaC library using [Semantic Versioning](https://semver.org/) (SemVer). Since much of the Gruntwork IaC Library is still pre-1.0.0, most of the Gruntwork IaC Library uses 0.MINOR.PATCH version numbers. With 0.MINOR.PATCH, the rules are a bit different, where we increment the: + +- MINOR version when we make backward incompatible API changes, and +- PATCH version when we add backward compatible functionality or bug fixes + +For modules that have submodules (e.g., terraform-aws-server/modules/single-server), not every release contains changes to every module. While using the latest available version is recommended, the version that most recently contains changes for a module can be found in each submodule’s reference in the [Library Reference](../reference/index.md). + +![Submodules show the last version in which they were modified](/img/iac/stay-up-to-date/versioning/module_release_tag_versions.png) + +We release new module versions using GitHub releases, refer to the release notes in the GitHub repository release page for a list of changes and migration guides (when necessary). + +## Example: Reference a version + +The git tag created by the release can then be referenced in the source argument for a module block sourcing from a git URL. + +For example, below is a module block referencing version `0.15.4` of the `single-server` submodule from the `terraform-aws-server` module. +```hcl +module "my_instance" { + source = "git::git@github.com:gruntwork-io/terraform-aws-server.git//modules/single-server?ref=v0.15.4" + + name = "my_instance" + ami = "ami-123456" + instance_type = "t2.medium" + keypair_name = "my-keypair" + user_data = "${var.user_data}" + + vpc_id = "${var.vpc_id}" + subnet_id = "${var.subnet_id}" +} +``` + +## What’s next + +Once you start using versioned modules, it’s important to keep the modules up to date. Refer to the [Updating](./updating.md) guide to learn more. + + + diff --git a/docs/iac/support/contributing.md b/docs/iac/support/contributing.md new file mode 100644 index 0000000000..f64bddceb3 --- /dev/null +++ b/docs/iac/support/contributing.md @@ -0,0 +1,59 @@ +--- +sidebar_label: "Contributing" +--- + +# Contributing to the Gruntwork Infrastructure as Code Library + +Contributions to the Gruntwork Infrastructure as Code Library are very welcome and appreciated! If you find a bug or want to add a new +feature or even contribute an entirely new module, we are very happy to accept +[pull requests](https://help.github.com/articles/about-pull-requests/), provide feedback, and run your changes through +our automated test suite. + +This section outlines the process for contributing. + +## File a GitHub issue + +Before starting any work, we recommend filing a GitHub issue in the appropriate repo. This is your chance to ask +questions and get feedback from the maintainers and the community before you sink a lot of time into writing (possibly +the wrong) code. If there is anything you’re unsure about, just ask! + +## Update the documentation + +We recommend updating the documentation _before_ updating any code (see +[Readme Driven Development](http://tom.preston-werner.com/2010/08/23/readme-driven-development.html)). 
This ensures the +documentation stays up to date and allows you to think through the problem at a high level before you get lost in the +weeds of coding. + +## Update the tests + +We also recommend updating the automated tests _before_ updating any code (see +[Test Driven Development](https://en.wikipedia.org/wiki/Test-driven_development)). That means you add or update a test +case, verify that it’s failing with a clear error message, and then make the code changes to get that test to pass. +This ensures the tests stay up to date and verify all the functionality in the repo, including whatever new +functionality you’re adding in your contribution. The `test` folder in every repo will have documentation on how to run +the tests locally. + +## Update the code + +At this point, make your code changes and use your new test case to verify that everything is working. + +## Create a pull request + +[Create a pull request](https://help.github.com/articles/creating-a-pull-request/) with your changes. Please make sure +to include the following: + +1. A description of the change, including a link to your GitHub issue. +2. Any notes on backwards incompatibility. + +## Merge and release + +The maintainers for the repo will review your code and provide feedback. If everything looks good, they will merge the +code and release a new version. + + + diff --git a/docs/intro/first-deployment/using-terraform-modules.md b/docs/intro/first-deployment/using-terraform-modules.md index 95cb4bf943..ba2e2416d8 100644 --- a/docs/intro/first-deployment/using-terraform-modules.md +++ b/docs/intro/first-deployment/using-terraform-modules.md @@ -113,7 +113,7 @@ of the code. You’ll see an example of this soon. The code above will ONLY allow you to run it with a specific Terraform version. This is a safety measure to ensure you don’t accidentally pick up a new version of Terraform until you’re ready. This is important because once you’ve upgraded to a newer version, Terraform will no longer allow you to deploy that code with any older version. -For example, if a single person on your team upgrades to `1.1.8` and runs `apply`, then you’ll no longer be able to +For example, if a single person on your team upgrades to `1.1.8` and runs `apply`, then you’ll no longer be able to use the state file with `1.1.7`, and you’ll be forced to upgrade everyone on your team and all your CI servers to `1.1.8`. It’s best to do this explicitly, rather than accidentally, so we recommend pinning Terraform versions. @@ -148,7 +148,7 @@ This code pulls in a module using Terraform’s native `module` functionality. F The `source` URL in the code above uses a Git URL with SSH authentication (see [module sources](https://www.terraform.io/docs/modules/sources.html) for all the types of `source` URLs you can use). -If you have established your account and linked your GitHub ID according to the instruction in [Accessing the Dev Portal](/intro/dev-portal/create-account), this will allow you to access private repos in the Gruntwork +If you have established your account and linked your GitHub ID according to the instruction in [Accessing the Dev Portal](/developer-portal/create-account), this will allow you to access private repos in the Gruntwork Infrastructure as Code Library without having to hard-code a password in your Terraform code. 
#### Versioned URL @@ -240,6 +240,6 @@ output "private_persistence_subnet_ids" { diff --git a/docs/intro/overview/getting-started.mdx b/docs/intro/overview/getting-started.mdx index d92133cbe9..7eb68966cf 100644 --- a/docs/intro/overview/getting-started.mdx +++ b/docs/intro/overview/getting-started.mdx @@ -1,28 +1,20 @@ import { CardList } from "/src/components/CardGroup" -# Getting started - -In this introductory guide we’ll cover the fundamentals you'll need in order to be successful with Gruntwork. After setting up your account to gain access to Gruntwork products, we’ll help you install necessary tools and understand how they fit into the Gruntwork development workflow. Once finished, you’ll have the knowledge required to dive into our [guides](/guides) and make full use of the IaC Library. +# What's next Create an account with our Developer Portal to access the IaC Library and training courses. - + Prepare your local development environment for efficiently working with the industry standard DevOps tools. - + Learn how to leverage these tools with Gruntwork products to realize your infrastructure needs. @@ -32,6 +24,6 @@ In this introductory guide we’ll cover the fundamentals you'll need in order t diff --git a/docs/intro/overview/how-it-works.md b/docs/intro/overview/how-it-works.md deleted file mode 100644 index c4e0b80a25..0000000000 --- a/docs/intro/overview/how-it-works.md +++ /dev/null @@ -1,55 +0,0 @@ -# How it works - -## Overview - -There are two fundamental ways to engage Gruntwork: - -1. **Gruntwork builds your architecture.** We generate the [Reference Architecture](https://gruntwork.io/reference-architecture/) based on your needs, deploy into your AWS accounts, and give you 100% of the code. Since you have all the code, you can extend, enhance, and customize the environment exactly according to your needs. The deploy process takes about one day. -2. **Build it yourself.** The Gruntwork IaC library empowers you to [construct your own bespoke architecture](/guides#build-your-own-architecture) in record time. By mix-and-matching our modules and services you can quickly define a custom architecture to suit your needs, all with the confidence of having world-class, battle-tested code running under the hood. - -## What we provide - -The Gruntwork product suite is designed to help you implement a world-class DevOps setup. It includes a combination of products, services, and support. - -### Gruntwork IaC Library - -A battle-tested, production-grade _catalog_ of infrastructure code that contains the core "building blocks" of infrastructure. It includes everything you’ll need to set up: - -- A Multi-account structure -- An infrastructure CI/CD Pipeline -- Networking and VPCs -- App orchestration — ECS, EC2, Kubernetes, and more -- Data storage — Aurora, Elasticache, RDS, and more -- Best-practice security baselines -- _and more…_ - -### Gruntwork Compliance - -An optional _catalog extension_ that contains building blocks that implement various compliance standards. Today we support CIS compliance; SOC 2 is coming soon, and we plan on adding additional standards in the future. - -### Support - -Gruntwork offers basic and paid support options: - -- **[Community support](/support#get-support).** Get help via a [Gruntwork Community Slack](https://gruntwork-community.slack.com/archives/CHH9Y3Z62) and our [Knowledge Base](https://github.com/gruntwork-io/knowledge-base/discussions). 
-- **[Paid support](/support#paid-support-tiers).** Get help via email, a private Slack channel, or scheduled Zoom calls, with response times backed by SLAs. - -## What you provide - -Gruntwork products and services can help you quickly achieve world-class infrastructure. However, we aren’t a consulting company. To succeed, you (or your trusted DevOps consultant/contractor) must commit to learning how to leverage our products for your use cases, making any additional customizations, and deploying or migrating your apps and services. - -### Learn how to use our products - -To work effectively with our products, you’ll need to understand our opinionated stance on DevOps best practices and how to apply it for your purposes. You'll also need to learn how to use the Gruntwork products themselves. Our guides and support remain available to assist you in these endeavors. - -### Implement the “last mile” - -Gruntwork products strike a balance between opinionatedness and configurability. They’ll get you most of the way to your goal, but you may need to make some customizations to suit your use case. You may also need to adapt your apps and services to run in your new infrastructure. Our [Knowledge Base](https://github.com/gruntwork-io/knowledge-base/discussions) and [Community Slack Channel](https://gruntwork-community.slack.com/archives/CHH9Y3Z62) provide great resources to assist you in this effort. - - - diff --git a/docs/intro/overview/intro-to-gruntwork.md b/docs/intro/overview/intro-to-gruntwork.md index f42bf7e683..72857f44b2 100644 --- a/docs/intro/overview/intro-to-gruntwork.md +++ b/docs/intro/overview/intro-to-gruntwork.md @@ -1,25 +1,20 @@ -# Introduction to Gruntwork +# What we do -### What is Gruntwork? +**Gruntwork is a “DevOps accelerator” that gets you to a world-class DevOps setup leveraging infrastructure-as-code in just a few days.** -**Gruntwork is a "DevOps accelerator" designed to make it possible to achieve a world-class DevOps setup based completely on infrastructure-as-code in just a few days.** +Gruntwork works best for teams building new infrastructure (“greenfield”), either from scratch or as part of a migration. However, it can also be used by teams with existing infrastructure (“brownfield”) if they have sufficient DevOps experience. All Gruntwork products exist within a [framework](/guides/production-framework) we’ve devised specifically to emphasize DevOps industry best-practices and maximize your team’s efficiency. -All Gruntwork products exist within a framework we’ve devised specifically to emphasize DevOps industry best-practices and maximize your team’s efficiency. In the [how it works](how-it-works.md) section, we’ll cover how Gruntwork can help your team implement your infrastructure using this framework. +All Gruntwork products are built on and fully compatible with [Terraform](https://terraform.io). The one exception to this is the [Gruntwork Reference Architecture](/refarch/whats-this/what-is-a-reference-architecture), which uses [Terragrunt](https://terragrunt.gruntwork.io/) (one of our open source tools) to implement an end-to-end architecture. -Gruntwork works best for teams building new infrastructure ("greenfield"), either from scratch or as part of a migration. However, it can also be used by teams with existing infrastructure ("brownfield") if they have sufficient DevOps experience. +There are two fundamental ways to engage Gruntwork: -### Supported public clouds - -Gruntwork products focus on Amazon Web Services (AWS). 
Support for other public clouds such as GCP and Azure may be added in the future. - -### Gruntwork uses Terraform - -All Gruntwork products are built on and fully compatible with [open source Terraform](https://terraform.io). The one exception to this is the [Gruntwork Reference Architecture](https://gruntwork.io/reference-architecture/), which uses [Terragrunt](https://terragrunt.gruntwork.io/) (one of our open source tools) to implement an end-to-end architecture. +1. **Gruntwork builds your architecture.** We generate a Reference Architecture based on your needs, deploy into your AWS accounts, and give you 100% of the code. Since you have all the code, you can extend, enhance, and customize the environment exactly according to your needs. See [the docs](/refarch/whats-this/what-is-a-reference-architecture) for more information about our Reference Architecture. +2. **Build it yourself.** The [Gruntwork IaC library](/iac/overview/) empowers you to construct your own bespoke architecture in record time. By mix-and-matching our [modules](/iac/overview/modules) and [services](/iac/overview/services) you can quickly define a custom architecture to suit your needs, all with the confidence of having world-class, battle-tested code running under the hood. diff --git a/docs/intro/overview/prerequisites.md b/docs/intro/overview/prerequisites.md new file mode 100644 index 0000000000..ae8d8dcc8b --- /dev/null +++ b/docs/intro/overview/prerequisites.md @@ -0,0 +1,40 @@ +# What you need to know + +Gruntwork accelerates your infrastructure. Our products allow you to treat your infrastructure like you do your application: as code, complete with pull requests and peer reviews. Our products require a _variety of skills_ to maintain and customize to your needs over time. + +## Terraform + +Our modules are all built using [Terraform](https://www.terraform.io/). You should be comfortable using Terraform for Infrastructure as Code. + +## Terragrunt + +If you purchase the Reference Architecture, it is delivered in [Terragrunt](https://terragrunt.gruntwork.io/), our open source wrapper around Terraform which allows you to + +1. Separate your monolithic terraform state files into smaller ones to speed up your plans and applies +2. Keep your infrastructure code DRY + +See [How to Manage Multiple Environments with Terraform](https://blog.gruntwork.io/how-to-manage-multiple-environments-with-terraform-32c7bc5d692) and our [Terragrunt Quick start](https://terragrunt.gruntwork.io/docs/getting-started/quick-start/) documentation for more details. + +## Git and GitHub + +Our code is stored in Git repositories in GitHub. You must have a working knowledge of Git via SSH (`add`, `commit`, `pull`, branches, et cetera) and GitHub (Pull requests, issues, et cetera) in order to interface with the Reference Architecture and our code library. + +## Knowledge of Go, Shell, and Python + +Some of the modules we have leverage Go, Shell scripting and Python. To customize these to suit your needs, you may need to dive in and make changes. In addition, all of our automated testing is written in Go, so familiarity with Go is highly recommended. + +## AWS + +To be successful with the infrastructure provisioned by us, you must have a decent working knowledge of AWS, its permissions schemes ([IAM](https://aws.amazon.com/iam/)), services, and APIs. While having AWS certification is not required, it is certainly helpful. 
Since Gruntwork is an accelerator for your AWS infrastructure and not an abstraction layer in front of AWS, knowledge of AWS and the services you intend to use is required. + +## Containerization tools like Docker and Packer + +We create Docker containers throughout our code library, and use them heavily in our [Gruntwork Pipelines](/pipelines/overview/) product, an important piece of the Reference Architecture. Containerization is an important part of helping many companies scale in the cloud, and we’re no exception. Familiarity with creating docker images and pushing and pulling them from repositories is required. Likewise, we use Packer to build AMIs. Understanding Packer will enable you to build your own AMIs for your own infrastructure and make modifications to the infrastructure we provision for you. + + + diff --git a/docs/intro/overview/reference-architecture-prerequisites-guide.md b/docs/intro/overview/reference-architecture-prerequisites-guide.md deleted file mode 100644 index 59d888d471..0000000000 --- a/docs/intro/overview/reference-architecture-prerequisites-guide.md +++ /dev/null @@ -1,92 +0,0 @@ -# Reference Architecture Prerequisites Guide - -Gruntwork accelerates your infrastructure with our [Reference Architecture](https://gruntwork.io/reference-architecture/). This framework allows you to treat your infrastructure like you do your application: as code, complete with pull requests and peer reviews. The Reference Architecture requires a variety of skills to maintain it and customize it to your needs over time. - -Here's what your team will need so you can succeed with the Gruntwork Reference Architecture: - -
- - Knowledge of Terraform -
- -Our modules are all built using [Terraform](https://www.terraform.io/), and the Reference Architecture uses our modules to build out your infrastructure. You should be comfortable using Terraform for Infrastructure as Code. -
-
- -
- Knowledge of Terragrunt or willingness to learn -
- -The Reference Architecture is delivered in [Terragrunt](https://terragrunt.gruntwork.io/), our open source wrapper around Terraform which allows you to - -1. Separate your monolithic terraform state files into smaller ones to speed up your plans and applies -2. Keep your infrastructure code DRY - -See [How to Manage Multiple Environments with Terraform](https://blog.gruntwork.io/how-to-manage-multiple-environments-with-terraform-32c7bc5d692) and our [Terragrunt Quick start](https://terragrunt.gruntwork.io/docs/getting-started/quick-start/) documentation for more details. -
-
- -
- Knowledge of git and GitHub -
- -Our Reference Architecture and the modules that it consumes are all stored in Git repositories in GitHub. You must have a working knowledge of Git via SSH (`add`, `commit`, `pull`, branches, et cetera) and GitHub (Pull requests, issues, et cetera) in order to interface with the Reference Architecture and our code library. -
-
- -
- Knowledge of AWS and its services -
- -The Reference Architecture is provisioned in [AWS](https://aws.amazon.com/). To be successful with the infrastructure provisioned by us, you must have a decent working knowledge of AWS, its permissions schemes ([IAM](https://aws.amazon.com/iam/)), services, and APIs. While having AWS certification is not required, it is certainly helpful. Since Gruntwork is an accelerator for your AWS infrastructure and not an abstraction layer in front of AWS, knowledge of AWS and the services you intend to use is required. -
-
- -
- Knowledge of Gruntwork’s Limitations -
- -During the process of setting up the AWS accounts for your reference architecture, our tooling will automatically submit quota increase requests to AWS as a support ticket. These AWS quota increases are required to install the components of the Reference Architecture. Often, AWS will approve these requests quickly. Sometimes these support tickets will take some time for AWS to resolve. Unfortunately, some of these requests may be denied by AWS’s support team. Gruntwork can work with you to get these requests approved, but this can take some time, and that time is mostly out of our control. - -Gruntwork focuses on helping you launch and maintain your infrastructure as code. Understanding and using the AWS services that our code provisioned is up to you. Since Gruntwork is an accelerator for your AWS infrastructure and not an abstraction layer in front of AWS, knowledge of AWS and the services you intend to use is required. -
-
- -
- Knowledge of Go, Shell, and Python -
- -Some of the modules we have leverage Go, Shell scripting and Python. To customize these to suit your needs, you may need to dive in and make changes. In addition, all of our automated testing is written in Go, so familiarity with Go is highly recommended. -
-
- -
- Knowledge of containerization tools like Docker and Packer -
- -We create Docker containers throughout our code library, and use them heavily in our [Gruntwork Pipelines](https://gruntwork.io/pipelines/) product, an important piece of the Reference Architecture. Containerization is an important part of helping many companies scale in the cloud, and we’re no exception. Familiarity with creating docker images and pushing and pulling them from repositories is required. Likewise, we use Packer to build AMIs. Understanding Packer will enable you to build your own AMIs for your own infrastructure and make modifications to the infrastructure we provision for you. -
-
- -
- Brand new AWS accounts -
- -With our Gruntwork Wizard, we help you create new AWS accounts, which we’ll then use to build your Reference Architecture. All accounts must be completely empty. At this time we do not support “brown field” deployments of the Reference Architecture. -
-
- -
- Time to learn -
- -Gruntwork accelerates you down the road towards having your entire AWS cloud infrastructure captured as Infrastructure as Code. The Reference Architecture will set you up with a solid foundation with our [Landing Zone](https://gruntwork.io/landing-zone-for-aws/) and help you regularly modify your infrastructure with [Gruntwork Pipelines](https://gruntwork.io/pipelines/). Infrastructure and Infrastructure as Code is complex, and while we strive to make it as easy as possible for you, you will need time to understand the twists and turns of your infrastructure in order to tune it to fully suit your needs. -
-
- - diff --git a/docs/intro/overview/shared-responsibility-model.md b/docs/intro/overview/shared-responsibility-model.md deleted file mode 100644 index 7c8b1017bf..0000000000 --- a/docs/intro/overview/shared-responsibility-model.md +++ /dev/null @@ -1,53 +0,0 @@ -# Shared Responsibility Model - -:::note - -The implementation and maintenance of Gruntwork products in AWS is a shared responsibility between Gruntwork and the customer. - -::: - -## Gruntwork is responsible for: - -1. Providing a tested, updated, and richly featured collection of infrastructure code for the customer to use. -1. Maintaining a healthy Knowledge Base community where other engineers (including Grunts) post & answer questions. -1. For Pro / Enterprise Support customers: Answering questions via email and Slack. -1. For Reference Architecture customers: - 1. Generating the initial Reference Architecture based on our customer’s selections of available configurations. This includes: - 1. Our implementation of Landing Zone - 1. A complete sample app with underlying database and caching layer - 1. The Gruntwork Pipeline for deploying changes to infrastructure - 1. An overview of how to use the Reference Architecture - 1. Deploying the initial Reference Architecture into the customer’s brand new empty AWS accounts. - 1. Delivering the initial Reference Architecture Infrastructure as Code to the customer. - 1. Providing resources to the customer for deeply understanding the inner workings of the Reference Architecture. -1. For CIS customers: - 1. Providing IaC libraries to the CIS customer that correctly implement CIS requirements and restrictions. - 1. For aspects of the CIS AWS Foundations Benchmark where those requirements cannot be met by modules, but require human intervention, provide instructions on manual steps the customer must take to meet the requirements. - 1. For CIS Reference Architecture customers, deploying a Reference Architecture and providing access to infrastructure code that implements the CIS AWS Foundations Benchmark requirements out-of-the-box, wherever possible. - -## As a Gruntwork customer, you are responsible for: - -1. Staffing appropriately (as described in the [Prerequisites Guide](/intro/overview/reference-architecture-prerequisites-guide/)) to maintain and customize the modules and (if applicable) the Reference Architecture and to understand how the Gruntwork product works so that changes can be made to customize it to the customer’s needs. - 1. Raise limitations of Gruntwork modules as a feature request or a pull request. - 1. N.B., Gruntwork does not guarantee any turn-around time on getting features built or PRs reviewed and merged. Gruntwork modules must also be applicable to a wide range of companies, so we will be selective about features added and pull requests accepted. -1. Adding additional Infrastructure as Code to customize it for your company. -1. Communicating with AWS to fix account issues and limitations beyond Gruntwork’s control (quotas, account verification, et cetera). -1. For Reference Architecture customers: - 1. Following all provided manual steps in the Reference Architecture documents where automation is not possible. There are certain steps a Reference Architecture customer must perform on their own. Please keep an eye out for emails from Gruntwork engineers when you are configuring your Reference Architecture form for - deployment. - 1. 
Extending and customizing Gruntwork Pipelines beyond the basic CI/CD pipeline that Gruntwork has provided to suit your deployment requirements. - 1. Designing and implementing your AWS infrastructure beyond the Reference Architecture. - 1. Understanding and awareness of AWS resource costs for all infrastructure deployed into your AWS accounts ([Knowledge Base #307](https://github.com/gruntwork-io/knowledge-base/discussions/307) for Ref Arch baseline). - 1. Once deployed, maintaining the Reference Architecture to keep it secure and up to date. - 1. Keeping the Reference Architecture secure in accordance with their company needs. - 1. Understanding and accepting the security implications of any changes made to the Reference Architecture. - 1. Monitoring Gruntwork repositories for updates and new releases and applying them as appropriate. - 1. Maintaining all compliance standards after the Reference Architecture has been delivered. - - - diff --git a/docs/intro/overview/what-we-provide.md b/docs/intro/overview/what-we-provide.md new file mode 100644 index 0000000000..cdc7eb9341 --- /dev/null +++ b/docs/intro/overview/what-we-provide.md @@ -0,0 +1,49 @@ +# What we provide + +## Gruntwork IaC Library + +A battle-tested, production-grade _[catalog](/iac/reference/)_ of infrastructure code that contains the core “building blocks” of infrastructure. It includes everything you’ll need to set up: + +- A Multi-account structure +- An infrastructure CI/CD Pipeline +- Networking and VPCs +- App orchestration — ECS, EC2, Kubernetes, and more +- Data storage — Aurora, Elasticache, RDS, and more +- Best-practice security baselines +- _and [more…](/iac/reference/)_ + +## Gruntwork Compliance + +An optional _catalog extension_ that contains building blocks that correctly implement CIS compliance standards. For aspects of the CIS AWS Foundations Benchmark where those requirements cannot be met by modules, but require human intervention, we provide instructions on manual steps you must take to meet the requirements. + +:::note + +For CIS Reference Architecture customers, we deploy a Reference Architecture and provide access to infrastructure code that implements the CIS AWS Foundations Benchmark requirements out-of-the-box, wherever possible. + +::: + +## Gruntwork Reference Architecture + +An optional end-to-end, multi-account architecture that Gruntwork deploys into your brand new AWS accounts that includes: + +- Our implementation of Landing Zone +- A complete sample app with underlying database and caching layer +- The Gruntwork Pipeline for deploying changes to infrastructure +- An overview of how to use the Reference Architecture + +Once the infrastructure is deployed, Gruntwork engineers deliver the full Infrastructure as Code to you. + +## Support + +Gruntwork offers basic and paid support options: + +- **[Community support](/support#get-support).** Get help via a [Gruntwork Community Slack](https://gruntwork-community.slack.com/archives/CHH9Y3Z62) and our [Knowledge Base](https://github.com/gruntwork-io/knowledge-base/discussions) where we maintain healthy communities where other engineers (including Grunts) post & answer questions. +- **[Paid support](/support#paid-support-tiers).** Get help via email or a private Slack channel with response times backed by SLAs. 
+ + + diff --git a/docs/intro/overview/what-you-provide.md b/docs/intro/overview/what-you-provide.md new file mode 100644 index 0000000000..8f0b6138ac --- /dev/null +++ b/docs/intro/overview/what-you-provide.md @@ -0,0 +1,59 @@ +# What you provide + +Gruntwork products and services can help you quickly achieve world-class infrastructure. However, we aren’t a consulting company. To succeed, you (or your trusted DevOps consultant/contractor) must commit to learning how to leverage our products for your use cases, making any additional customizations, and deploying or migrating your apps and services. + +## Your team + +You must be appropriately staffed in order to maintain and customize the modules, services, and (if applicable) the Reference Architecture. + +## Time to learn + +With Gruntwork, you can accelerate your journey towards capturing your AWS cloud infrastructure as Infrastructure as Code. Although our aim is to simplify this intricate process, gaining a comprehensive understanding of your infrastructure's complexities and tailoring it to your specific needs will require a significant investment of time and effort on your part. Our [product documentation](/products) and [support](/support) remain available to assist you in these endeavors. + +## Implement the “last mile” + +Gruntwork products strike a balance between being opinionated and configurable. They’ll get you most of the way to your goal, but you may need to make some customizations to suit your use case. You may also need to adapt your apps and services to run in your new infrastructure by customizing/adding additional Infrastructure as Code to customize according to the requirements for your company. Our [Knowledge Base](https://github.com/gruntwork-io/knowledge-base/discussions) and [Community Slack Channel](https://gruntwork-community.slack.com/archives/CHH9Y3Z62) provide great resources to assist you in this effort. + +If you notice a limitation or bug in Gruntwork modules, we greatly appreciate and welcome [customer PRs](/iac/support/contributing) or you raising this to our attention via [bug or feature requests](/support#share-feedback). + +:::note + +Gruntwork does not guarantee any turn-around time on getting features built or PRs reviewed and merged. Gruntwork modules must also be applicable to a wide range of companies, so we will be selective about features added and pull requests accepted. + +::: + +## Talk to AWS if needed + +You'll have to communicate with AWS to fix account issues and limitations beyond Gruntwork’s control (quotas, account verification, et cetera). + +## If you purchased a Reference Architecture + +### Perform any required manual steps + +Following all provided manual steps in the Reference Architecture documents where automation is not possible. There are certain steps a Reference Architecture customer must perform on their own. Please keep an eye out for emails from Gruntwork engineers when you are configuring your Reference Architecture form for +deployment. + +### Customize Pipelines + +Extend and customize Gruntwork Pipelines beyond the basic CI/CD pipeline that Gruntwork has provided to suit your deployment requirements. + +### Understand your AWS costs + +Understanding and awareness of AWS resource costs for all infrastructure deployed into your AWS accounts ([Knowledge Base #307](https://github.com/gruntwork-io/knowledge-base/discussions/307) for Ref Arch baseline). 
+ +### Maintain your Reference Architecture + +Once deployed, Gruntwork hands the Reference Architecture over to your team. You should expect to keep it secure and up to date by: + +- Keeping the Reference Architecture secure in accordance with your company needs. +- Understanding and accepting the security implications of any changes your team makes to the Reference Architecture. +- Monitoring Gruntwork repositories for updates and new releases and applying them as appropriate. +- Maintaining all compliance standards after the Reference Architecture has been delivered. + + + diff --git a/docs/landing-zone/index.md b/docs/landing-zone/index.md new file mode 100644 index 0000000000..0e97ca3283 --- /dev/null +++ b/docs/landing-zone/index.md @@ -0,0 +1,9 @@ +# Landing Zone + + + diff --git a/docs/guides/stay-up-to-date/patcher/index.md b/docs/patcher/index.md similarity index 100% rename from docs/guides/stay-up-to-date/patcher/index.md rename to docs/patcher/index.md diff --git a/docs/pipelines/how-it-works/index.md b/docs/pipelines/how-it-works/index.md new file mode 100644 index 0000000000..216b53a8c0 --- /dev/null +++ b/docs/pipelines/how-it-works/index.md @@ -0,0 +1,88 @@ +# How it works + +![Gruntwork Pipelines Architecture](/img/guides/build-it-yourself/pipelines/tftg-pipeline-architecture.png) + +## External CI Tool + +Gruntwork Pipelines has been validated with [CircleCI](https://circleci.com/), [GitHub Actions](https://github.com/features/actions), and [GitLab](https://about.gitlab.com/). However, it can be used with any external CI/CD tool. +The role of the CI/CD tool is to trigger jobs inside Gruntwork Pipelines. +We have [example configurations](https://github.com/gruntwork-io/terraform-aws-service-catalog/tree/master/examples/for-production/infrastructure-live/_ci/scripts) +that identify changed terraform modules and call the Gruntwork Pipelines invoker Lambda function. + +By default, the invoker Lambda function is run by a CLI tool called `infrastructure-deployer` from within your CI tool. + +## ECS Deploy Runner + +The [ECS Deploy Runner Module](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner) +is a flexible framework for running pre-defined, locked-down jobs in an isolated +ECS task. It serves as the foundation for Gruntwork Pipelines. +The components described below work together to trigger jobs, validate them, run them, and stream +the logs back to your CI tool as if they were running locally. + +### Infrastructure Deployer CLI + +The [Infrastructure Deployer CLI tool](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/infrastructure-deployer) +serves as the interface between your chosen CI tool and Gruntwork Pipelines. It is used to trigger +jobs in the deploy-runner. Primarily, it calls instances of the invoker lambda described in the next section. + +Usage: + +`infrastructure-deployer --aws-region AWS_REGION [other options] -- CONTAINER_NAME SCRIPT ARGS...` + +When launching a task, you may optionally set the following useful flags: + +- `max-wait-time` (default 2h0m0s) — timeout length for the action, this can be any golang parseable string +- `task-cpu` — A custom number of CPU units to allocate to the ECS task +- `task-memory` — A custom number of memory units to allocate to the ECS task + +To get the list of supported containers and scripts, pass in the `--describe-containers` option. 
For example: + +`infrastructure-deployer --describe-containers --aws-region us-west-2` + +This will list all the containers and the scripts for each container that can be invoked using the invoker function of +the ECS deploy runner stack deployed in `us-west-2`. + + +### Invoker Lambda + +The [Invoker Lambda](https://github.com/gruntwork-io/terraform-aws-ci/blob/main/modules/ecs-deploy-runner/invoker-lambda/invoker/index.py) +is an AWS Lambda function written in Python that acts as the AWS entrypoint for your pipeline. +It has 3 primary roles: + +1. Serving as a gatekeeper for pipelines runs, determining if a particular command is allowed to be run, and if the arguments are valid +2. Creating ECS tasks that run terraform, docker, or packer commands +3. Shipping deployment logs back to your CI/CD tool + +### Standard Configuration + +The ECS deploy runner is flexible and can be configured for many tasks. The [standard configuration](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner-standard-configuration) +is a set of ECS task definitions that we ship with Pipelines by default. +Once you have your pipeline deployed you can [modify the ECS Deploy Runner configuration](../maintain/extending.md) as you like. +The configuration defines what scripts are accepted by the invoker Lambda and which arguments may be provided. The invoker Lambda +will reject _any_ script or argument not defined in the ECS Deploy Runner configuration. +The default tasks are defined below. + +#### Docker Image Builder (Kaniko) + +The Docker Image Builder task definition allows CI jobs to build docker images. +This ECS task uses an open source library called [Kaniko](https://github.com/GoogleContainerTools/kaniko) to enable docker builds from within a docker container. +We provide a [Docker image](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner/docker/kaniko) based on Kaniko for this task. + +#### Packer AMI Builder + +The Packer AMI Builder task definition allows CI jobs to build AMIs using HashiCorp Packer. This task runs in +a [Docker image](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner/docker/deploy-runner) we provide. + +#### Terraform Planner and Applier + +The Terraform Planner task definition and Terraform Applier task definition are very similar. They allow CI jobs to +plan and apply Terraform and Terragrunt code. These tasks run in the same [Docker image](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner/docker/deploy-runner) +as the AMI builder. + + + diff --git a/docs/pipelines/maintain/extending.md b/docs/pipelines/maintain/extending.md new file mode 100644 index 0000000000..541eb3a564 --- /dev/null +++ b/docs/pipelines/maintain/extending.md @@ -0,0 +1,145 @@ +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +# Extending your Pipeline + +Pipelines can be extended in several ways: +- Adding repositories to supporting building Docker images for many applications +- Updating which branches can kick off which jobs +- Adding additional build scripts that can run in Pipelines +- Adding permissions to Pipelines + + +## Adding a repository + +Pipelines has separate configurations for each type of job that can be performed (e.g., building a docker image, running terraform plan, running terraform apply). 
An allow-list of repos and branches is defined for each job type, which can be updated to extend your usage of pipelines to additional application repositories. + +This portion of the guide focuses on building Docker images for application repos. If you have repositories for which you would like to run `terraform plan` or `terraform apply` jobs, similar steps can be followed, modifying the appropriate task configurations. + + + + +If you’ve deployed Pipelines as a part of your Reference Architecture, we recommend following the guide on [how to deploy your apps into the Reference Architecture](../../guides/reference-architecture/example-usage-guide/deploy-apps/intro) to learn how to define a module for your application. + +To allow Pipelines jobs to be started by events in your repository, open `shared//mgmt/ecs-deploy-runner/terragrunt.hcl` and update `docker_image_builder_config.allowed_repos` to include the HTTPS Git URL of the application repo for which you would like to deploy Docker images. + +Since pipelines [cannot update itself](./updating.md), you must run `terragrunt plan` and `terragrunt apply` manually to deploy the change from your local machine. Run `terragrunt plan` to inspect the changes that will be made to your pipeline. Once the changes have been reviewed, run `terragrunt apply` to deploy the changes. + + + + +If you’ve deployed Pipelines as a standalone framework using the `ecs-deploy-runner` service in the Service Catalog, you will need to locate the file in which you’ve defined a module block sourcing the `ecs-deploy-runner` service. + +Once the `ecs-deploy-runner` module block is located, update the `allowed_repos` list in the `docker_image_builder_config` variable to include the HTTPS Git URL of the application repo for which you would like to deploy Docker images. + +Refer to the [Variable Reference](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#reference) section for the service in the Library Reference for full configuration details. + +Run `terraform plan` to inspect the changes that will be made to your pipeline. Once the changes have been reviewed, run `terraform apply` to deploy the changes. To deploy the application to ECS or EKS you will need to deploy a task definition (ECS) or Deployment (EKS) that references the newly built image. + + + +### Adding infrastructure deployer to the new repo + +Pipelines can be triggered from GitHub events in many repositories. In order to configure Pipelines for the new repository, you need to add a step in your CI/CD configuration for the repository that uses the `infrastructure-deployer` CLI tool to trigger Docker image builds. + +```bash +export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text) +export DEPLOY_RUNNER_REGION=$(aws configure get region) +export ECR_REPO_URL="${ACCOUNT_ID}.dkr.ecr.${DEPLOY_RUNNER_REGION}.amazonaws.com" +export DOCKER_TAG=$(git rev-parse --short HEAD) +export REPOSITORY_NAME="example" +export GITHUB_ORG="example-org" + +infrastructure-deployer --aws-region "us-east-1" -- docker-image-builder build-docker-image \ + --repo "https://github.com/${GITHUB_ORG}/${REPOSITORY_NAME}" \ + --ref "origin/main" \ + --context-path "path/to/directory/with/dockerfile/" \ + --docker-image-tag "${ECR_REPO_URL}/${REPOSITORY_NAME}:${DOCKER_TAG}" \ +``` + +## Specifying branches that can be deployed + +Pipelines can be configured to only allow jobs to be performed on specific branches. 
For example, a common configuration is to allow `terraform plan` or `terragrunt plan` jobs for pull requests, and only allow `terraform apply` or `terragrunt apply` to run on merges to the main branch. + +Depending on your use case, you may need to modify the `allowed_apply_git_refs` attribute to update the allow-list of branch names that can kick off the `plan` and `apply` jobs. + +For example, a common configuration for `apply` jobs is to specify that this job can only run on the `main` branch: +```tf +allowed_apply_git_refs = ["main", "origin/main"] +``` + + + + +If you’ve deployed Pipelines as a part of your Reference Architecture, open `shared//mgmt/ecs-deploy-runner/terragrunt.hcl` and update the values in the `allowed_apply_git_refs` attribute for the job configuration you would like to modify (either `terraform_planner_config` or `terraform_applier_config`). + +Run `terragrunt plan` to inspect the changes that will be made to your pipeline. Once the changes have been reviewed, run `terragrunt apply` to deploy the changes. + + + + +If you’ve deployed Pipelines as a standalone framework using the `ecs-deploy-runner` service in the Service Catalog, you will need to locate the file in which you’ve defined a module block sourcing the `ecs-deploy-runner` service. + +By default, the `ecs-deploy-runner` service from the Service Catalog allows any git ref to be applied. After you locate the module block for `ecs-deploy-runner`, modify the `allowed_apply_git_refs` attribute for the job configuration that you would like to modify (either `terraform_planner_config` or `terraform_applier_config`). + +Run `terraform plan` to inspect the changes that will be made to your pipeline. Once the changes have been reviewed, run `terraform apply` to deploy the changes. + + + +## Adding a new AWS Service + +If you are expanding your usage of AWS to include an AWS service you’ve never used before, you will need to grant each job sufficient permissions to access that service. Pipelines executes in ECS tasks running in your AWS account(s). Each task (terraform planner, applier, docker builder, ami builder) has a distinct execution IAM role with only the permissions each task requires to complete successfully. For example, if you need to create an Amazon DynamoDB Table using Pipelines for the first time, you would want to add (at a minimum) the ability to list and describe tables to the policy for the `planner` IAM role, and all permissions for DynamoDB to the IAM policy for the `terraform-applier` IAM role. + +We recommend that the `planner` configuration have read-only access to resources, and the applier be able to read, create, modify, and destroy resources. + + + + +If you’ve deployed Pipelines as a part of your Reference Architecture, the permissions for the `terraform-planner` task are located in `_envcommon/mgmt/read_only_permissions.yml` and the permissions for the `terraform-applier` task are located in `_envcommon/mgmt/deploy_permissions.yml`. Open and add the required permissions to each file. + +After you are done updating both files, you will need to run `terragrunt plan`, review the changes, then `terragrunt apply` for each account in your Reference Architecture. 
```bash +cd logs/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve + +cd shared/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve + +cd security/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve + +cd dev/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve + +cd stage/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve + +cd prod/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec -- terragrunt apply --terragrunt-source-update -auto-approve +``` + + + +If you’ve deployed Pipelines as a standalone framework using the `ecs-deploy-runner` service in the Service Catalog, you will need to locate the file in which you’ve defined a module block sourcing the `ecs-deploy-runner` service. + +Modify the AWS IAM policy document being passed into the `iam_policy` variable for the [`terraform_applier_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#terraform_applier_config) and the [`terraform_planner_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#terraform_planner_config) variables. Refer to the [variable reference](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#reference) section for the service in the Library Reference for the full set of configuration details for this service. + +After you are done updating the IAM policy documents, run `terraform plan`, then review the changes that will be made. Finally, run `terraform apply` to apply the changes. + + + +## Adding scripts that can be run in Pipelines + +The `deploy-runner` Docker image for Pipelines only allows scripts within a single directory to be executed in the ECS task as an additional security measure. + +By default, the `deploy-runner` ships with three scripts — one to build HashiCorp Packer images, one to run `terraform plan` and `terraform apply`, and one to automatically update the value of a variable in a Terraform tfvars or Terragrunt HCL file. + +If you need to run a custom script in the `deploy-runner`, you must fork the image code and add an additional line to copy your script into the directory designated by the `trigger_directory` argument. Then, you will need to rebuild the Docker image, push it to ECR, and update your Pipelines deployment following the steps in [Updating your Pipeline](./updating.md). + + + diff --git a/docs/pipelines/maintain/updating.md b/docs/pipelines/maintain/updating.md new file mode 100644 index 0000000000..3528b8e45f --- /dev/null +++ b/docs/pipelines/maintain/updating.md @@ -0,0 +1,104 @@ +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +# Updating Your Pipeline + +Pipelines is built using the [`terraform-aws-ci`](../../reference/modules/terraform-aws-ci/ecs-deploy-runner/) module. We recommend updating your pipeline whenever there’s a new release of the module. + +By default, Pipelines cannot update its own infrastructure (ECS cluster, AWS Lambda function, etc.), so you must run upgrades to Pipelines manually from your local machine. This safeguard is in place to prevent you from accidentally locking yourself out of the Pipeline when applying a change to permissions. + +For example, if you change the IAM permissions of the CI user, you may no longer be able to run the pipeline.
The pipeline job that updates the permissions will also be affected by the change. This is a difficult scenario to recover from, since you will have lost access to make further changes using Pipelines. + +## Prerequisites + +This guide assumes you have the following: +- An AWS account with permissions to create the necessary resources +- An [AWS Identity and Access Management](https://aws.amazon.com/iam/) (IAM) user or role with permissions to start Pipelines deployments and update AWS Lambda functions +- [AWS Command Line Interface](https://aws.amazon.com/cli/) (AWS CLI) installed on your local machine +- [`infrastructure-deployer`](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/infrastructure-deployer) CLI tool installed locally +- [`aws-vault`](https://www.github.com/99designs/aws-vault) installed locally for authenticating to AWS + +## Updating container images + +Gruntwork Pipelines uses two images — one for the [Deploy Runner](https://github.com/gruntwork-io/terraform-aws-ci/blob/main/modules/ecs-deploy-runner/docker/deploy-runner/Dockerfile) and one for [Kaniko](https://github.com/gruntwork-io/terraform-aws-ci/blob/main/modules/ecs-deploy-runner/docker/kaniko/Dockerfile). To update Pipelines to the latest version, you must build and push new versions of each image. + +Pipelines has the ability to build container images, including the images it uses. You can use the `infrastructure-deployer` CLI tool locally to start building the new image versions. This is the same tool used by Pipelines in your CI system. + +```bash +export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text) +export DEPLOY_RUNNER_REGION=$(aws configure get region) +export DOCKERFILE_REPO="https://github.com/gruntwork-io/terraform-aws-ci.git" +export ECR_REPO_URL="${ACCOUNT_ID}.dkr.ecr.${DEPLOY_RUNNER_REGION}.amazonaws.com" +export TERRAFORM_AWS_CI_VERSION="v0.52.1" + +# Builds and pushes the deploy runner image +infrastructure-deployer --aws-region "$DEPLOY_RUNNER_REGION" -- docker-image-builder build-docker-image \ + --repo "$DOCKERFILE_REPO" \ + --ref "$TERRAFORM_AWS_CI_VERSION" \ + --context-path "modules/ecs-deploy-runner/docker/deploy-runner" \ + --env-secret 'github-token=GITHUB_OAUTH_TOKEN' \ + --docker-image-tag "${ECR_REPO_URL}/ecs-deploy-runner:${TERRAFORM_AWS_CI_VERSION}" \ + --build-arg "module_ci_tag=$TERRAFORM_AWS_CI_VERSION" + +# Builds and pushes the kaniko image +infrastructure-deployer --aws-region "$DEPLOY_RUNNER_REGION" -- docker-image-builder build-docker-image \ + --repo "$DOCKERFILE_REPO" \ + --ref "$TERRAFORM_AWS_CI_VERSION" \ + --context-path "modules/ecs-deploy-runner/docker/kaniko" \ + --env-secret 'github-token=GITHUB_OAUTH_TOKEN' \ + --docker-image-tag "${ECR_REPO_URL}/kaniko:${TERRAFORM_AWS_CI_VERSION}" \ + --build-arg "module_ci_tag=$TERRAFORM_AWS_CI_VERSION" +``` +Each image may take a few minutes to build and push. Once both images are built, you can update the image tag in your Terraform module and update the infrastructure. + +## Updating infrastructure + +Next, update the references to these images to the new tag values. This will vary depending on whether you’re using Pipelines as configured by the Reference Architecture or you’ve deployed Pipelines as a standalone framework. + + + + +To update the image tags for Pipelines deployed by a Reference Architecture, you update `common.hcl` with the new tag values for these images. The new tag value will be the version of `terraform-aws-ci` that the images use.
For example, if your newly created images are using the v0.52.1 release of `terraform-aws-ci`, update common.hcl to: + +``` +deploy_runner_container_image_tag = "v0.52.1" +kaniko_container_image_tag = "v0.52.1" +``` + +Next, apply the ecs-deploy-runner module in each account: +```bash +cd logs/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-logs -- terragrunt apply --terragrunt-source-update -auto-approve + +cd shared/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-shared -- terragrunt apply --terragrunt-source-update -auto-approve + +cd security/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-security -- terragrunt apply --terragrunt-source-update -auto-approve + +cd dev/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-dev -- terragrunt apply --terragrunt-source-update -auto-approve + +cd stage/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-stage -- terragrunt apply --terragrunt-source-update -auto-approve + +cd prod/$DEPLOY_RUNNER_REGION/mgmt/ecs-deploy-runner +aws-vault exec your-prod -- terragrunt apply --terragrunt-source-update -auto-approve +``` + + + +If you’ve deployed Pipelines as a standalone framework using the `ecs-deploy-runner` service in the Service Catalog, refer to the [Variable Reference](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#reference) section for the service in the Library Reference for configuration details. You will need to update the `docker_tag` value in the `container_image` object for the [`ami_builder_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#ami_builder_config), [`docker_image_builder_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#docker_image_builder_config), [`terraform_applier_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#terraform_applier_config), and [`terraform_planner_config`](../../reference/services/ci-cd-pipeline/ecs-deploy-runner#terraform_planner_config) variables. + +Once you have updated any references to the container image tags, you will need to run `terraform plan` and `terraform apply` in each account where pipelines is deployed. + + + + + + diff --git a/docs/pipelines/multi-account/index.md b/docs/pipelines/multi-account/index.md new file mode 100644 index 0000000000..cd78a9a91e --- /dev/null +++ b/docs/pipelines/multi-account/index.md @@ -0,0 +1,21 @@ +# Deploying Multi-Account Pipelines + +Have you heard about AWS multi-account setups? It's like having a pack of dogs - each one with its own unique personality, strengths, and weaknesses, but all working together to accomplish a common goal. + +Imagine you have a pack of dogs, each with their own special skills. You've got a fierce protector who guards the house, a speedy runner who chases down anything that moves, and a snuggly lap dog who just wants to cuddle all day. Each dog has its own needs, but they all rely on you as their owner to provide for them and keep them safe. + +Similarly, with AWS multi-account setups, you can have a whole pack of accounts, each with its own unique configuration and requirements, but all managed from a single "parent" account. It's like being the alpha dog of a pack, making sure each member is fed, healthy, and happy. + +And just like with a pack of dogs, there are different roles and responsibilities within an AWS multi-account setup. 
You've got the "owner" account, which is responsible for managing all the other accounts in the pack, and then you've got the "member" accounts, each with their own specific purposes and functions. + +It's important to keep all your accounts organized and working together smoothly, just like how you would keep your pack of dogs in line. You don't want one dog to get too aggressive and start fighting with the others, just like you don't want one AWS account to start interfering with the others. + +But if you can manage your pack of dogs successfully, they can work together to accomplish great things - just like how an AWS multi-account setup can help you achieve your goals with ease and efficiency. So, if you're a dog lover like me, you'll find that AWS multi-account setups are just as fun and rewarding as having a pack of loyal furry friends by your side. Woof! + + + diff --git a/docs/pipelines/overview/index.md b/docs/pipelines/overview/index.md new file mode 100644 index 0000000000..088062fa53 --- /dev/null +++ b/docs/pipelines/overview/index.md @@ -0,0 +1,24 @@ +# What is Gruntwork Pipelines? + +Gruntwork Pipelines is a framework that enables you to use your preferred CI tool to +securely run an end-to-end pipeline for infrastructure code ([Terraform](https://www.terraform.io/)) and +app code ([Docker](https://www.docker.com/) or [Packer](https://www.packer.io/)). Rather than replace your existing CI/CD provider, Gruntwork Pipelines is designed to enhance the security +of your existing tool. + +Without Gruntwork Pipelines, CI/CD tools require admin-level credentials to any AWS account where you deploy infrastructure. +This makes it trivial for anyone with access to your CI/CD system to access AWS credentials with permissions +greater than they might otherwise need. +Gruntwork Pipelines allows a highly restricted set of permissions to be supplied to the CI/CD tool while +infrastructure-related permissions reside safely within your own AWS account. This reduces the exposure of your +high-value AWS secrets. + + + + + + diff --git a/docs/pipelines/tutorial/index.md b/docs/pipelines/tutorial/index.md new file mode 100644 index 0000000000..8e9e7fe4aa --- /dev/null +++ b/docs/pipelines/tutorial/index.md @@ -0,0 +1,363 @@ +# Single Account Tutorial + +In this tutorial, you’ll walk through the process of setting up Gruntwork Pipelines in a single +AWS account.
By the end, you’ll deploy: + +- ECR Repositories for storing Docker images + - `deploy-runner` — stores the default image for planning and applying Terraform and building AMIs + - `kaniko` — stores the default image for building other Docker images using [kaniko](https://github.com/GoogleContainerTools/kaniko) + - `hello-world` — a demonstration repo used for illustrating how a Docker application might be managed with Gruntwork Pipelines +- Our [ECS Deploy Runner Module](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner) +- Supporting IAM Roles, IAM Policies, and CloudWatch Log Groups +- ECS Tasks + - `docker-image-builder` — builds Docker images within the `kaniko` container image + - `ami-builder` — builds AMIs using HashiCorp Packer within the `deploy-runner` image + - `terraform-planner` — runs plan commands within the `deploy-runner` container + - `terraform-applier` — runs apply commands within the `deploy-runner` container + +## Prerequisites + +Before you begin, make sure your system has: + +- [Docker](https://docs.docker.com/get-docker/), with support for BuildKit (version 18.09 or newer) +- [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) (version 1.0 or newer) +- Valid [AWS credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) for an IAM user with `AdministratorAccess` + +## Repo Setup + +The code for this tutorial can be found in the [Gruntwork Service Catalog](https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/master/examples/for-learning-and-testing/gruntwork-pipelines/README.md). Start by cloning the repo: + +```shell +git clone https://github.com/gruntwork-io/terraform-aws-service-catalog.git +``` + +You will be following the example found at `terraform-aws-service-catalog/examples/for-learning-and-testing/gruntwork-pipelines`: + +```shell +cd terraform-aws-service-catalog/examples/for-learning-and-testing/gruntwork-pipelines +``` + +## Create the required ECR repositories + +Change directories to deploy the Terraform for ECR: + +```shell +cd ecr-repositories +``` + +Set the `AWS_REGION` environment variable to your desired AWS region: + +```shell +export AWS_REGION= +``` + +Authenticate with your AWS account and deploy the Terraform code provided to create the three +ECR repositories. + +Initialize Terraform to download required dependencies: +```shell +terraform init +``` + +Run plan and ensure the output matches your expectations: +```shell +terraform plan +``` + +Deploy the code using `apply`: +```shell +terraform apply +``` + +## Build and Push the Docker Images + +The four standard Gruntwork Pipelines capabilities are implemented by two separate Dockerfiles: + +1. `ecs-deploy-runner` — Terraform plan, apply, and AMI building +2. `kaniko` — Docker image building. [Kaniko](https://github.com/GoogleContainerTools/kaniko) is a tool that supports building Docker images inside a container + +These Dockerfiles live in the ecs-deploy-runner module within [the terraform-aws-ci repository](https://github.com/gruntwork-io/terraform-aws-ci). In this example, you’ll clone terraform-aws-ci and run Docker builds against the Dockerfiles defined there. + +You’re now going to build these two Docker images and push them to the ECR repositories you just created.
+ +### Export Environment Variables + +If you do not already have a GitHub Personal Access Token (PAT) available, you can follow this [guide to Create a new GitHub Personal Access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) + +For the purposes of this example, your token will need the `repo` scope, so that Gruntwork Pipelines is able to fetch modules and code from private Gruntwork repositories. Note that in production, the best practice is to create a separate GitHub machine user account, +and provision a GitHub PAT against that account. + +This GitHub PAT will be used for two purposes: +1. Initially, when running the Docker build commands below, the GitHub PAT will be used to fetch private code from `github.com/gruntwork-io`. +2. Once the Docker images are built, you’ll store your GitHub PAT in AWS Secrets Manager. When Gruntwork Pipelines is running on your behalf, it will fetch + your GitHub PAT from Secrets Manager "just in time" so that only the running ECS task has access to the token — and so that your token only exists for the lifespan + of the ephemeral ECS task container. + +Export a valid GitHub PAT using the following command so that you can use it to build Docker images that fetch private code via GitHub: +```shell +export GITHUB_OAUTH_TOKEN= +``` + +Export your AWS Account ID and primary region. The commands in the rest of this document require these variables to be set. The region to use is up to you. +```shell +export AWS_ACCOUNT_ID= +export AWS_REGION= +``` + +The Gruntwork Pipelines Dockerfiles used by Gruntwork Pipelines are stored in the `gruntwork-io/terraform-aws-ci` repository. Therefore, in order to pin both Dockerfiles +to a known version, you export the following variable which you’ll use during our Docker builds: + +```shell +export TERRAFORM_AWS_CI_VERSION=v0.51.4 +``` + +The latest version can be retrieved from the [releases page](https://github.com/gruntwork-io/terraform-aws-ci/releases) of the `gruntwork-io/terraform-aws-ci` repository. At a minimum, `v0.51.4` must be selected. + +### Clone `terraform-aws-ci` to your machine +Next, you are going to build the two Docker images required for this example. The Dockerfiles are defined in the [terraform-aws-ci](https://github.com/gruntwork-io/terraform-aws-ci) repository, so it must be available locally: + +```bash +git clone git@github.com:gruntwork-io/terraform-aws-ci.git +``` + +Change directory into the example folder: +```bash +cd terraform-aws-ci/modules/ecs-deploy-runner +``` + +### Build the ecs-deploy-runner and kaniko Docker images + +This next command is going to perform a Docker build of the `deploy-runner` image. You don’t need to authenticate to AWS in order to run this command, as the build will happen on your machine. +We do, however, pass your exported GitHub PAT into the build as a secret, so that the Docker build can fetch private code from `github.com/gruntwork-io`. Since you’re using BuildKit, the token +is only used during the build process and does not remain in the final image. 
Run the following command to build the ecs-deploy-runner Docker image: +```shell +DOCKER_BUILDKIT=1 docker build \ + --secret id=github-token,env=GITHUB_OAUTH_TOKEN \ + --build-arg module_ci_tag="$TERRAFORM_AWS_CI_VERSION" \ + --tag "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/ecs-deploy-runner:$TERRAFORM_AWS_CI_VERSION" \ + ./docker/deploy-runner/ +``` + +Similarly to the ecs-deploy-runner image, you’ll now use the Kaniko Dockerfile included in this example to build the kaniko image: +```shell +DOCKER_BUILDKIT=1 docker build \ + --secret id=github-token,env=GITHUB_OAUTH_TOKEN \ + --build-arg module_ci_tag="$TERRAFORM_AWS_CI_VERSION" \ + --tag "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/kaniko:$TERRAFORM_AWS_CI_VERSION" \ + ./docker/kaniko/ +``` + +### Log In and Push to ECR +Now you have local Docker images for ecs-deploy-runner and kaniko that are properly tagged, but before you can push them into the private ECR repositories that you created +with `terraform apply`, you need to authenticate with ECR itself. Authenticate to AWS and run the following: + +```shell +aws ecr get-login-password --region $AWS_REGION \ + | docker login -u AWS --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com" +``` + +If you receive a success message from your previous command, you’re ready to push your ecs-deploy-runner and kaniko images: +```shell +docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/ecs-deploy-runner:$TERRAFORM_AWS_CI_VERSION" +``` + +```shell +docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/kaniko:$TERRAFORM_AWS_CI_VERSION" +``` + +## Deploy the Pipelines Cluster + +Now that the ECR repositories are deployed and have the required Docker images, you are ready +to deploy the rest of Gruntwork Pipelines. The Terraform that defines the setup is located in +`terraform-aws-service-catalog/examples/for-learning-and-testing/gruntwork-pipelines/pipelines-cluster`: + +```shell +cd terraform-aws-service-catalog/examples/for-learning-and-testing/gruntwork-pipelines/pipelines-cluster +``` + +### Export a GitHub Personal Access Token (PAT) +For the purposes of this example, you may use the same PAT as before. In a production deployment, best practice +would be to create a separate GitHub machine user account. This module uses a slightly different naming convention for +its environment variable, so you’ll need to re-export the token: + +```shell +export TF_VAR_github_token= +``` + +### Configure and Deploy the ECS Deploy Runner +Authenticate to your AWS account and run `init`, then `apply`. +:::note +If you are using `aws-vault` to authenticate on the command line, you must supply the `--no-session` flag as explained in [this Knowledge Base entry](https://github.com/gruntwork-io/knowledge-base/discussions/647). +::: + +```shell +terraform init +``` + +```shell +terraform plan +``` +Check your plan output before applying: +```shell +terraform apply +``` + +## Install the `infrastructure-deployer` command line tool + +Gruntwork Pipelines requires all requests to transit through its Lambda function, which ensures only valid arguments and commands are passed along to ECS. +To invoke the Lambda function, you should use the `infrastructure-deployer` command line interface (CLI) tool. For testing and setup purposes, you’ll install and use the `infrastructure-deployer` CLI locally; when you’re ready to configure CI/CD, you’ll install and use it in your CI/CD config.
If you do not already have the `gruntwork-install` binary installed, you can get it [here](https://github.com/gruntwork-io/gruntwork-installer). + +```bash + +gruntwork-install --binary-name "infrastructure-deployer" --repo "https://github.com/gruntwork-io/terraform-aws-ci" --tag "$TERRAFORM_AWS_CI_VERSION" +``` +:::note +If you’d rather not use the Gruntwork installer, you can alternatively download the binary manually from [the releases page](https://github.com/gruntwork-io/terraform-aws-ci/releases). +::: + +## Invoke your Lambda Function + +### Get your Lambda ARN from the output +Next, you need to retrieve the Amazon Resource Name (ARN) for the Lambda function that guards your Gruntwork Pipelines installation: + +```shell +terraform output -raw gruntwork_pipelines_lambda_arn +``` + +Once you have your invoker Lambda’s ARN, export it like so: + +```shell +export INVOKER_FUNCTION_ARN= +``` + +This value is used by the `run-docker-build.sh` and `run-packer-build.sh` scripts in the next step. + +### Perform a Docker/Packer build via Pipelines + +Now that you have Gruntwork Pipelines installed in the `docker-packer-builder` configuration, let’s put arbitrary Docker and Packer builds through it! + +For your convenience, we’ve provided two scripts that you can run: +* `run-docker-build.sh` +* `run-packer-build.sh` + +These two scripts will: + +1. Ensure all required environment variables are set +2. Use the `infrastructure-deployer` CLI to send a Docker build request to the invoker Lambda + +Once the request is sent, Gruntwork Pipelines will begin streaming the logs back to you so you can watch the images get built. The Docker build will push the completed image to your hello-world repository, and the Packer build will register the completed AMI in EC2. + +The following environment variables must be set in your shell before you run `run-docker-build.sh`: +* `AWS_ACCOUNT_ID` +* `AWS_REGION` +* `INVOKER_FUNCTION_ARN` + +## Prepare a test `infrastructure-live` repo + +You now have a functional Gruntwork Pipelines example that can build and deploy Docker images and AMIs. +Feel free to stop here and experiment with what you’ve built so far. The following steps will extend +Pipelines to be capable of running Terraform plan and apply. + +Pipelines is a flexible solution that can be deployed in many configurations. +In your own organization, you might consider deploying one Pipelines installation with all the ECS tasks enabled, +or having a central Pipelines installation plus one in each account of your Reference Architecture. + +To test the plan and apply functionality, you’ll need a simple demo repository. +You may create your own or fork our [testing repo](https://github.com/gruntwork-io/terraform-module-in-root-for-terragrunt-test). + +## Enable the Terraform planner and applier + +We’ve intentionally deployed an incomplete version of Gruntwork Pipelines so far. To deploy the full version with the planner +and applier, you’ll need to make a few edits to the module. In this directory you should see a few files prefixed with `config_`. +Two are proper Terraform files with all the configuration for running the Docker image builder and the AMI builder. + +Each consists of: +* A `locals` block containing the configuration variables specifying which repos are allowed and providing credentials +* Some IAM resources that give the task permission to access the resources it needs + +The other two files have a `.example` postfix. Remove that postfix to let Terraform discover them.
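+For instance, here is a minimal sketch of that rename step (the `config_*` file names below are illustrative; use whatever `.example` files are actually present in the directory): + +```shell +# Rename the example configs so Terraform discovers them (file names here are hypothetical) +mv config_terraform_planner.tf.example config_terraform_planner.tf +mv config_terraform_applier.tf.example config_terraform_applier.tf +```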
+ +Next, let’s take a look at `main.tf`. You should see a `TODO` in the `locals` block, marking the location where the configuration might normally +live. As this example ships with the Docker image builder and AMI builder defined in external files we have commented out +the default null values. + +Comment out or delete the following lines: +* `terraform_planner_config = null` +* `terraform_planner_https_tokens_config = null` +* `terraform_applier_config = null` +* `terraform_applier_https_tokens_config = null` + +These values are now properly defined in the external `config_*.tf` files. + +## Configure the Terraform planner and applier + +Now that the planner and applier are enabled, you could run `terraform apply`, but the default values of a few +variables might not be correct for your test environment. Make the following changes to your `.tfvars` file to +define the correct repos and credentials. Pipelines is configured to reject any commands that aren’t explicitly allowed +by the configuration below: + +* `allowed_terraform_planner_repos = ["https://github.com/your-org/your-forked-repo.git"]` — a list of repos where `terraform plan` is allowed to be run +* `allowed_terraform_applier_repos = ["https://github.com/your-org/your-forked-repo.git"]` — a list of repos where `terraform apply` is allowed to be run +* optionally `machine_user_git_info = {name="machine_user_name", email="machine_user_email"}` — if you’d like to customize your machine user info +* optionally `allowed_apply_git_refs = ["master", "main", "branch1", ...]` — for any branches or git refs you’d like to be able to run `terraform apply` on + +Now you’re ready to run `terraform apply`! Once complete, you should see 2 new ECS task definitions in your AWS account: +* `ecs-deploy-runner-terraform-planner` +* `ecs-deploy-runner-terraform-applier` + +## Try a `plan` or `apply` + +With Gruntwork Pipelines deployed, it’s time to test it out! Run the following command to trigger +a `plan` or `apply`: + +```shell +infrastructure-deployer --aws-region us-east-1 -- terraform-planner infrastructure-deploy-script \ + --ref "master" \ + --binary "terraform" \ + --command "plan" +``` + +If you forked the example repo provided you should see `+ out = "Hello, World"` if the plan was a success. + +## Celebrate, you did it! + +As a next step you could add a `.github/workflows/pipeline.yml` file to your repo that runs the command above +or try it in your favorite CI/CD tool. Your tooling only needs permission to trigger the lambda +function `arn:aws:lambda:us-east-1::function:ecs-deploy-runner-invoker`. + +## Cleanup + +If you want to remove the infrastructure created, you can use Terraform `destroy`. 
+ +```shell +terraform plan -destroy -out terraform.plan +terraform apply terraform.plan +``` + +To destroy the `ecr-repositories` resources you created, you’ll first need to empty the repos of any images: + +```shell +aws ecr batch-delete-image --repository-name ecs-deploy-runner --image-ids imageTag=$TERRAFORM_AWS_CI_VERSION +aws ecr batch-delete-image --repository-name kaniko --image-ids imageTag=$TERRAFORM_AWS_CI_VERSION +aws ecr batch-delete-image --repository-name hello-world --image-ids imageTag=v1.0.0 +``` + +Then Terraform can take care of the rest: + +```shell +cd ../ecr-repositories +terraform plan -destroy -out terraform.plan +terraform apply terraform.plan +``` + + + diff --git a/docs/products.md b/docs/products.md new file mode 100644 index 0000000000..1fe554c8bf --- /dev/null +++ b/docs/products.md @@ -0,0 +1,46 @@ +--- +hide_table_of_contents: true +hide_title: true +--- + +import Card from "/src/components/Card" +import CardGroup from "/src/components/CardGroup" +import CenterLayout from "/src/components/CenterLayout" + + + +# Gruntwork Products + + + + +A collection of reusable code that enables you to deploy and manage infrastructure quickly and reliably. + + +An end-to-end tech stack built using best practices on top of our Infrastructure as Code Library, deployed into your AWS accounts. + +A framework for running secure deployments for infrastructure code and application code. + + +Gain access to all resources included in your Gruntwork subscription. + + + + + + + + diff --git a/docs/refarch/access/how-to-auth-CLI/index.md b/docs/refarch/access/how-to-auth-CLI/index.md new file mode 100644 index 0000000000..74dd7ad5da --- /dev/null +++ b/docs/refarch/access/how-to-auth-CLI/index.md @@ -0,0 +1,63 @@ +# Authenticate via the AWS command line interface (CLI) + +CLI access requires [AWS access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html). We recommend using [aws-vault](https://github.com/99designs/aws-vault) for managing all aspects related to CLI authentication. To use `aws-vault` you will need to generate AWS Access Keys for your IAM user in the security account. + +:::tip + +`aws-vault` is not the only method which can be used to authenticate on the CLI. Please refer to [A Comprehensive Guide to Authenticating to AWS on the Command Line](https://blog.gruntwork.io/a-comprehensive-guide-to-authenticating-to-aws-on-the-command-line-63656a686799) for several other options. + +::: + +:::info + +MFA is required for the Reference Architecture, including on the CLI. See [configuring your IAM user](/refarch/access/setup-auth/#configure-your-iam-user) for instructions on setting up an MFA token. + +::: + +## Access resources in the security account + +To authenticate to the security account, you only need your AWS access keys and an MFA token. See [the guide](https://github.com/99designs/aws-vault#quick-start) on adding credentials to `aws-vault`. + +You should be able to run the following command using AWS CLI + +```bash +aws-vault exec -- aws sts get-caller-identity +``` + +and expect to get an output with your user's IAM role: + +```json +{ + "UserId": "AIDAXXXXXXXXXXXX”, + "Account": “", + "Arn": "arn:aws:iam:::user/" +} +``` + +## Accessing all other accounts + +To authenticate to all other accounts (e.g., dev, stage, prod), you will need the ARN of an IAM Role in that account to assume. 
To configure accessing accounts using assumed roles with `aws-vault` refer to [these instructions](https://github.com/99designs/aws-vault#roles-and-mfa). + +Given the following command (where `YOUR_ACCOUNT_PROFILE_NAME` will be any account other than your security account) + +```bash +aws-vault exec -- aws sts get-caller-identity +``` + +you should expect to see the following output: + +```json +{ + "UserId": "AIDAXXXXXXXXXXXX", + "Account": "", + "Arn": "arn:aws:sts:::assumed-role//11111111111111111111" +} +``` + + + diff --git a/docs/refarch/access/how-to-auth-aws-web-console/index.md b/docs/refarch/access/how-to-auth-aws-web-console/index.md new file mode 100644 index 0000000000..e8cb5522e5 --- /dev/null +++ b/docs/refarch/access/how-to-auth-aws-web-console/index.md @@ -0,0 +1,34 @@ +# Authenticating to the AWS web console + +## Authenticate to the AWS Web Console in the security account + +To authenticate to the security account, you will need: + +1. IAM User Credentials. See [setting up initial access](/refarch/access/setup-auth/) for how to create IAM users. +1. An MFA Token. See [Configuring your IAM user](/refarch/access/setup-auth/#configure-your-iam-user). +1. The login URL. This should be of the format `https://.signin.aws.amazon.com/console`. + +## Authenticate to the AWS Web Console in all other accounts + +To authenticate to any other account (e.g., dev, stage, prod), you need to: + +1. Authenticate to the security account. All IAM users are defined in this account, you must always authenticate to it first. +1. [Assume an IAM Role in the other AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-console.html). To access other accounts, you switch to an IAM Role defined in that account. + +:::note +Note that to be able to access an IAM Role in some account, your IAM User must be in an IAM Group that has permissions to assume that IAM Role. +::: + +See the `cross-account-iam-roles` module for the [default set of IAM Roles](https://github.com/gruntwork-io/terraform-aws-security/blob/main/modules/cross-account-iam-roles/README.md#iam-roles-intended-for-human-users) that exist in each account. For example, to assume the allow-read-only-access-from-other-accounts IAM Role in the prod account, you must be in the \_account.prod-read-only IAM Group. See [Configure other IAM Users](/refarch/access/setup-auth/#configure-other-iam-users) for how you add users to IAM Groups. + +:::note +Not all of the default roles referenced in the `cross-account-iam-roles` module are deployed in each account. +::: + + + diff --git a/docs/refarch/access/how-to-auth-ec2/index.md b/docs/refarch/access/how-to-auth-ec2/index.md new file mode 100644 index 0000000000..6decb2f4aa --- /dev/null +++ b/docs/refarch/access/how-to-auth-ec2/index.md @@ -0,0 +1,71 @@ +# SSH to EC2 Instances + +You can SSH to any of your EC2 Instances in the Reference Architecture in two different ways: + +1. `ssh-grunt` (Recommended) +1. EC2 Key Pairs (For emergency / backup use only) + +## `ssh-grunt` (Recommended) + +[`ssh-grunt`](../../../reference/modules/terraform-aws-security/ssh-grunt/) is a tool developed by Gruntwork that automatically syncs user accounts from AWS IAM to your servers to allow individual developers to SSH onto EC2 instances using their own username and SSH keys. + +In this section, you will learn how to SSH to an EC2 instance in your Reference Architecture using `ssh-grunt`. Every EC2 instance has `ssh-grunt` installed by default. 
+ +### Add users to SSH IAM Groups + +When running `ssh-grunt`, each EC2 instance specifies from which IAM Groups it will allow SSH access, and SSH access with sudo permissions. By default, these IAM Group names are `ssh-grunt-users` and `ssh-grunt-sudo-users`, respectively. To be able to SSH to an EC2 instance, your IAM User must be added to one of these IAM Groups (see Configure other IAM Users for instructions). + +### Upload your public SSH key + +1. Authenticate to the AWS Web Console in the security account. +1. Go to your IAM User profile page, select the "Security credentials" tab, and click "Upload SSH public key". +1. Upload your public SSH key (e.g. `~/.ssh/id_rsa.pub`). Do NOT upload your private key. + +### Determine your SSH username + +Your username for SSH is typically the same as your IAM User name. However, if your IAM User name has special characters that are not allowed by operating systems (e.g., most punctuation is not allowed), your SSH username may be a bit different, as specified in the `ssh-grunt` [documentation](../../../reference/modules/terraform-aws-security/ssh-grunt/). For example: + +1. If your IAM User name is `jane`, your SSH username will also be `jane`. +1. If your IAM User name is `jane@example.com`, your SSH username will be `jane`. +1. If your IAM User name is `_example.jane.doe`, your SSH username will be `example_jane_doe`. + + +### SSH to an EC2 instance + +Since most EC2 instances in the Reference Architecture are deployed into private subnets, you won't be able to access them over the public Internet. Therefore, you must first connect to the VPN server. See [VPN Authentication](../how-to-auth-vpn/index.md) for more details. + +Given that: + +1. Your IAM User name is jane. +1. You've uploaded your public SSH key to your IAM User profile. +1. Your private key is located at `/Users/jane/.ssh/id_rsa` on your local machine. +1. Your EC2 Instance's IP address is 1.2.3.4. + + +First, add your SSH Key into the SSH Agent using the following command: + +```bash +ssh-add /Users/jane/.ssh/id_rsa +``` + +Then, use this command to SSH to the EC2 Instance: + +```bash +ssh jane@1.2.3.4 +``` + +You should now be able to execute commands on the instance. + +## EC2 Key Pairs (For emergency / backup use only) + +When you launch an EC2 Instance in AWS, you can specify an EC2 Key Pair that can be used to SSH into the EC2 Instance. This suffers from an important problem: usually more than one person needs access to the EC2 Instance, which means you have to share this key with others. Sharing secrets of this sort is a security risk. Moreover, if someone leaves the company, to ensure they no longer have access, you'd have to change the Key Pair, which requires redeploying all of your servers. + +As part of the Reference Architecture deployment, Gruntwork will create EC2 Key Pairs and put the private keys into AWS Secrets Manager. These keys are there only for emergency / backup use: e.g., if there's a bug in `ssh-grunt` that prevents you from accessing your EC2 instances. We recommend only giving a handful of trusted admins access to these Key Pairs. 
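If you ever need to fall back to one of these emergency Key Pairs, you can pull the private key out of AWS Secrets Manager with the AWS CLI. The region, secret name, and OS username below are examples; look up the actual secret name used in your deployment in the Secrets Manager console:

```bash
# Find the secret that holds the emergency EC2 Key Pair (the name varies per deployment)
aws secretsmanager list-secrets --region us-west-2

# Retrieve the private key and store it locally with restrictive permissions
aws secretsmanager get-secret-value \
  --region us-west-2 \
  --secret-id ec2-emergency-keypair \
  --query SecretString \
  --output text > emergency-key.pem
chmod 600 emergency-key.pem

# SSH to the instance as the default OS user (e.g., ubuntu or ec2-user, depending on the AMI)
ssh -i emergency-key.pem ubuntu@1.2.3.4
```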
+ + + diff --git a/docs/refarch/access/how-to-auth-vpn/index.md b/docs/refarch/access/how-to-auth-vpn/index.md new file mode 100644 index 0000000000..0535cfe201 --- /dev/null +++ b/docs/refarch/access/how-to-auth-vpn/index.md @@ -0,0 +1,51 @@ +# VPN Authentication + +Most of the AWS resources that comprise the Reference Architecture run in private subnets, which means they do not have a public IP address, and cannot be reached directly from the public Internet. This reduces the "surface area" that attackers can reach. Of course, you still need access into the VPCs so we exposed a single entrypoint into the network: an [OpenVPN server](https://openvpn.net/). + +## Install an OpenVPN client + +There are free and paid OpenVPN clients available for most major operating systems. Popular options include: + +1. OS X: [Viscosity](https://www.sparklabs.com/viscosity/) or [Tunnelblick](https://tunnelblick.net/). +1. Windows: [official client](https://openvpn.net/index.php/open-source/downloads.html). +1. Linux: + + ```bash title="Debian" + apt-get install openvpn + ``` + + ```bash title="Redhat" + yum install openvpn + ``` + +## Join the OpenVPN IAM Group + +Your IAM User needs access to SQS queues used by the OpenVPN server. Since IAM users are defined only in the security account, and the OpenVPN servers are defined in separate AWS accounts (stage, prod, etc), that means you need to authenticate to the accounts with the OpenVPN servers by assuming an IAM Role that has access to the SQS queues in those accounts. + +To be able to assume an IAM Role, your IAM user needs to be part of an IAM Group with the proper permissions, such as `_account.xxx-full-access` or `_account.xxx-openvpn-users`, where `xxx` is the name of the account you want to access (stage, prod, etc). See [Configure other IAM users](/refarch/access/setup-auth/#configure-other-iam-users) for instructions on adding users to IAM Groups. + +## Use openvpn-admin to generate a configuration file + +To connect to an OpenVPN server, you need an OpenVPN configuration file, which includes a certificate that you can use to authenticate. To generate this configuration file, do the following: + +1. Install the latest [`openvpn-admin binary`](https://github.com/gruntwork-io/terraform-aws-openvpn/releases) for your OS. + +1. Authenticate to AWS via the CLI. You will need to assume an IAM Role in the AWS account with the OpenVPN server you're trying to connect to. This IAM Role must have access to the SQS queues used by OpenVPN server. Typically, the `allow-full-access-from-other-accounts` or `openvpn-server-allow-certificate-requests-for-external-accounts` IAM Role is what you want. + +1. Run `openvpn-admin request --aws-region --username `. + +1. This will create your OpenVPN configuration file in your current directory. + +1. Load this configuration file into your OpenVPN client. + +## Connect to one of your OpenVPN servers + +To connect to an OpenVPN server in one of your app accounts (Dev, Stage, Prod), click the "Connect" button next to your configuration file in the OpenVPN client. After a few seconds, you should be connected. You will now be able to access all the resources within the AWS network (e.g., SSH to EC2 instances in private subnets) as if you were "in" the VPC itself. + + + diff --git a/docs/refarch/access/index.md b/docs/refarch/access/index.md new file mode 100644 index 0000000000..4db98dbfce --- /dev/null +++ b/docs/refarch/access/index.md @@ -0,0 +1,16 @@ +# How do I access my Reference Architecture? 
+ +Haxx0r ipsum foo Trojan horse new all your base are belong to us ip error private shell fopen semaphore epoch char packet sniffer segfault gurfle bypass. Memory leak bubble sort injection leet malloc brute force double xss mega sudo mountain dew void echo win emacs linux piggyback bin. I'm compiling float bang case cat infinite loop Donald Knuth unix for /dev/null machine code then chown d00dz worm gnu crack packet bar eof while. + +Lib void brute force bypass nak concurrently all your base are belong to us break leapfrog bit default packet sniffer Linus Torvalds. Man pages packet stack trace Starcraft Donald Knuth pwned worm hello world public giga frack gurfle. Irc fork malloc fopen script kiddies flood blob fail hexadecimal while access semaphore loop mega Trojan horse foo gobble. + +Bang spoof *.* headers Dennis Ritchie pragma bubble sort mutex d00dz firewall wombat snarf. Win L0phtCrack back door big-endian tera injection flush suitably small values interpreter class hello world client segfault. Boolean buffer emacs highjack concurrently boolean I'm compiling malloc finally char protected void fopen ascii var cd Trojan horse public. + + + + diff --git a/docs/refarch/access/setup-auth/index.md b/docs/refarch/access/setup-auth/index.md new file mode 100644 index 0000000000..ab1f94aafe --- /dev/null +++ b/docs/refarch/access/setup-auth/index.md @@ -0,0 +1,131 @@ +# Set up AWS Auth + +## Configure root users + +Each of your AWS accounts has a root user that you need to configure. When you created the child AWS accounts (dev, stage, prod, etc), you provided the root user's email address for each account; if you don't know what those email addresses were, you can log in to the root account (the parent of the AWS Organization) and go to the AWS Organizations Console to find them. + +Once you have the email addresses, you'll need the passwords. When you create child accounts in an AWS organization, AWS will not allow you to set the root password. In order to generate the root password: + +1. Go to the AWS Console. +1. If you had previously signed into some other AWS account as an IAM User, rather than a root user, click "Sign-in using root account credentials." +1. Enter the email address of the root user. +1. Click "Forgot your password" to reset the password. +1. Check the email address associated with the root user account for a link you can use to create a new password. + +:::danger +Please note that the root user account can do anything in your AWS account, bypassing the security restrictions you put in place, so you need to take extra care with protecting this account. +::: + +We strongly recommend that when you reset the password for each account, you also: + +1. Use a strong password: preferably 30+ characters, randomly generated, and stored in a secrets manager. +1. Enable Multi-Factor Auth (MFA): Follow [these instructions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root) to enable MFA for the root user. + After this initial set up, you should _not_ use the root user account afterward except in very rare circumstances. (e.g., if you get locked out of your IAM User account and no one has permissions to reset your password). For day-to-day tasks, you should use an IAM User instead, as described in the next section. + +Please note that you'll have to repeat the process above of resetting the password and enabling MFA for every account in your organization: dev, stage, prod, shared, security, logs, and the root account. 
+ +## Configure your IAM user + +The security account defines and manages all IAM Users. When deploying your Reference Architecture, Gruntwork creates an IAM User with admin permissions in the security account. The password for the IAM User is encrypted via PGP using [Keybase](https://keybase.io) (you'll need a free account) and is Base64-encoded. + +To access the Terraform state containing the password, you need to already be authenticated to the account. Thus to get access to the initial admin IAM User, we will use the root user credentials. To do this, you can either: + +- Log in to the AWS Web Console using the root user credentials for the security account and set up the password and AWS Access Keys for the IAM User. + +- Use the [Gruntwork CLI](https://github.com/gruntwork-io/gruntwork/) to rotate the password using the command: + + ```bash + gruntwork aws reset-password --iam-user-name + ``` + +Once you have access via your IAM user, finish hardening your security posture: + +1. Enable MFA for your IAM User by following [these instructions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable.html). MFA is required by the Reference Architecture, and you won't be able to access any other accounts without it. + + :::note + Note that the name of the MFA must be exactly the same as the AWS IAM Username + ::: + +1. Log out and log back in — After enabling MFA, you need to log out and then log back in. This forces AWS to prompt you for your MFA token. + + :::caution + Until you enable MFA, you will not be able to access anything else in the web console. + ::: + +1. Create access keys for yourself by following [these instructions](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html). Store the access keys in a secrets manager. You will need these to authenticate to AWS from the command-line. + +## Configure other IAM users + +Now that your IAM user is all set up, you can configure IAM users for the rest of your team. + +:::note +Each of your users will need a free [Keybase](https://keybase.io/) account so that their credentials can be encrypted just for their access. +::: + +All of the IAM users are managed as code in the security account in the `account-baseline-app` module. If you open the `terragrunt.hcl` file in that repo, you should see the list of users, which will look something like: + +```yaml +jane@acme.com: + create_access_keys: false + create_login_profile: true + groups: + - full-access + pgp_key: keybase:jane_on_keybase +``` + +Here's how you would add two more users, Alice and Bob, to your security account: + +```yaml +jane@acme.com: + create_login_profile: true + groups: + - full-access + pgp_key: keybase:jane_on_keybase +alice@acme.com: + create_login_profile: true + groups: + - _account.dev-full-access + - _account.stage-full-access + - _account.prod-full-access + - iam-user-self-mgmt + pgp_key: keybase:alice_on_keybase +bob@acme.com: + create_login_profile: true + groups: + - _account.prod-read-only + - ssh-grunt-sudo-users + - iam-user-self-mgmt + pgp_key: keybase:bob_on_keybase +``` + +A few notes about the code above: + +1. **Groups**. We add each user to a set of IAM Groups: for example, we add Alice to IAM Groups that give her admin access in the dev, stage, and prod accounts, whereas Bob gets read-only access to prod, plus SSH access (with `sudo` permissions) to EC2 instances. 
For the full list of IAM Groups available, see the [IAM Groups module](https://github.com/gruntwork-io/terraform-aws-security/tree/main/modules/iam-groups#iam-groups). + +1. **PGP Keys**. We specify a PGP Key to use to encrypt any secrets for that user. Keys of the form `keybase:` are automatically fetched for user `` on [Keybase](https://keybase.io/). + +1. **Credentials**. For each user whose `create_login_profile` field is set to `true`, a password will be automatically generated. This password can be used to log in to the web console. This password will be encrypted with the user's PGP key and visible as a Terraform output. After you run `terragrunt apply`, you can copy/paste these encrypted credentials and send them to the user. + +To deploy this new code and create the new IAM Users, you will need to: + +1. Authenticate to AWS via the CLI. + +1. Apply your changes by running `terragrunt apply`. + +1. Share the login URL, usernames, and (encrypted) password with your team members. + + :::note + Make sure to tell each team member to follow the [Configure your IAM User instructions](#configure-your-iam-user) to log in, reset their password, and enable MFA. + ::: + + :::caution + Enabling MFA is required to access the Reference Architecture + ::: + + + diff --git a/docs/refarch/configuration/index.md b/docs/refarch/configuration/index.md new file mode 100644 index 0000000000..ce5d5c031d --- /dev/null +++ b/docs/refarch/configuration/index.md @@ -0,0 +1,55 @@ +# Get Started + +The Gruntwork Reference Architecture allows you to configure key aspects to your needs. Before you receive your deployed Reference Architecture, you will: +1. **Configure** your choice of your primary AWS region, database and compute flavors, domain names and more via a pull request +2. **Iterate** on the configuration in your pull request in response to Gruntwork preflight checks that spot blocking issues and ensure your deployment is ready to commence +3. **Merge** your pull request after all checks pass. Merging will automatically commence your Reference Architecture deployment +4. **Wait** until Gruntwork has successfully completed your deployment. You’ll receive an automated email indicating your deployment is complete + +Below, we'll outline the Reference Architecture at a high level. + +note: add pre-reqs section about things you need to know + +## Requirements + +This guide requires that you have access to an AWS IAM user or role in the AWS account that serves as your Organization Root for AWS Organizations with permissions to create member accounts. For more information on IAM policies for AWS organizations see the AWS guide on [managing IAM policies for AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_permissions_iam-policies.html#orgs_permissions_grant-admin-actions). + +## RefArch Configuration + +Your Reference Architecture configuration lives in your `infrastructure-live` repository on GitHub. Within your `infrastructure-live` repository, the `reference-architecture-form.yml` file defines all of your specific selections, domain names, AWS account IDs, etc. + +Gruntwork deployment tooling reads your `reference-architecture-form.yml` in order to first perform preflight checks to +ensure your accounts and selections are valid and ready for deployment. Once your preflight checks pass, and your pull request has been merged, Gruntwork tooling uses your `reference-architecture-form.yml` to deploy your Reference Architecture into your AWS accounts. 
+ 
+Gruntwork provides bootstrap scripts, automated tooling, documentation, and support to help you complete your setup steps and commence your Reference Architecture deployment.

## Required Actions and Data

Some of the initial configuration steps will require you to *perform actions* against your AWS accounts, such as creating an IAM role that Gruntwork uses to access your accounts. Meanwhile, your `reference-architecture-form.yml` requires *data*, such as your AWS account IDs, domain name, etc.

### Actions

Wherever possible, Gruntwork attempts to automate setup actions *for you*.

There is a bootstrap script in your `infrastructure-live` repository that will attempt to programmatically complete your setup actions (such as provisioning new AWS accounts on your behalf, registering domain names if you wish, etc.) using a setup wizard, and write the resulting *data* to your `reference-architecture-form.yml` file.

### Data

`Data` refers to values, such as an AWS account ID or your desired domain name, which may be the output of an action.

The Gruntwork CLI includes a [wizard](./run-the-wizard.md) that automates all of the steps to get the required data from you. We strongly recommend that the majority of users use the wizard.

:::info Manual Configuration
If you are required to manually provision AWS accounts, domain names, or otherwise, the Gruntwork CLI has utilities to [manually bootstrap](https://github.com/gruntwork-io/gruntwork#bootstrap-manually) the required resources. This approach is only recommended for advanced users after consulting with Gruntwork. After all data has been generated manually, you will need to fill out the `reference-architecture-form.yml` manually.
:::

## Let’s get started!

Now that you understand the configuration and delivery process at a high level, we’ll get underway configuring your Reference Architecture.

 

 diff --git a/docs/refarch/configuration/install-required-tools.md b/docs/refarch/configuration/install-required-tools.md new file mode 100644 index 0000000000..ff47e9022b --- /dev/null +++ b/docs/refarch/configuration/install-required-tools.md @@ -0,0 +1,38 @@ +# Install Required Tools

Configuring your Reference Architecture requires that you have `git` and the `gruntwork` CLI tool installed on your machine. You have two options for installation.

## Use the bootstrap script (preferred)

The bootstrap script will ensure you have all required dependencies installed. Within your `infrastructure-live` repository, there are two bootstrap scripts:
- `bootstrap_unix.sh`, which can be run on macOS and Linux machines
- `bootstrap_windows.py`, which runs on Windows machines

Choose the correct bootstrap script for your system. Both scripts provide equivalent functionality.

In addition to installing dependencies, the bootstrap script will:
- Ensure you are running the script in the root of your `infrastructure-live` repository
- Ensure you have sufficient GitHub access to access and clone private Gruntwork repositories
- Download the Gruntwork installer
- Install the Gruntwork command line interface (CLI), which contains the Reference Architecture configuration wizard
- [Run the Gruntwork wizard](./run-the-wizard) to assist you in completing your Reference Architecture configuration steps (see docs for [required permissions](./run-the-wizard.md#required-permissions))

## Install manually

:::caution
We do not recommend this approach.
The bootstrap script performs several checks to ensure you have all tools and access required to configure your Reference Architecture. You will need to perform these checks manually if installing tools manually. +::: + +If you prefer to install your tools manually, see the following sections on installing Git and the Gruntwork CLI. + +1. If you would like to install `git` manually, installation steps can be found on the [Git SCM Installing Git Guide](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). +2. If you would like to install the Gruntwork CLI manually, we recommend downloading the latest release from the [GitHub releases page](https://github.com/gruntwork-io/gruntwork/releases). + + + + diff --git a/docs/refarch/configuration/preflight-checks.md b/docs/refarch/configuration/preflight-checks.md new file mode 100644 index 0000000000..f5e3d2e5dc --- /dev/null +++ b/docs/refarch/configuration/preflight-checks.md @@ -0,0 +1,46 @@ +# Iterate on Preflight checks + +Once you have run the setup wizard and pushed your `ref-arch-form` branch with your changes, GitHub Actions will commence, running the preflight checks. + +![Gruntwork Reference Architecture preflight checks](/img/preflight1.png) + +Preflight checks can take up to 4–5 minutes to complete after you push your commit. Any errors will be +directly annotated on the exact line of your form that presents a blocking issue, so be sure to check the *Files changed* tab of your pull request to see them: + +![Gruntwork Ref Arch preflight checks on your pull request](/img/preflight-error-on-pr.png) + +## Fix any errors + +In most cases, the error messages included in the preflight check annotations will provide sufficient information to remediate the underlying issue. If at any point you are confused or +need assistance, please reach out to us at `support@gruntwork.io` and we’ll be happy to assist you. + +## Commit and push your changes + +Once you have fixed any issues flagged by preflight checks, you can make a new commit with your latest form changes and push it up to the same branch. This will trigger a re-run of preflight +checks using your latest form data. + +## Merge your pull request + +Once your preflight checks pass, meaning there are no more error annotations on your pull request +and the GitHub check itself is green, you can merge your pull request to the `main` branch. + +## Wait for your deployment to complete + +Merging your `ref-arch-form` pull request to the `main` branch will automatically kick off the deployment process for your Reference Architecture. There’s nothing more for you to do at this point. + +:::caution +During deployment we ask that you do not log into, modify or interact with your Reference Architecture AWS accounts in any way or make any modifications to your `infrastructure-live` repo once you have merged your pull request. +::: + +Your deployment is now in Gruntwork engineers’ hands and we are notified of every single error your deployment encounters. We’ll work behind the scenes to complete your deployment, communicating with you via email or GitHub if we need +any additional information or if we need you to perform any remediation steps to un-block your deployment. + +Once your deployment completes, you’ll receive an automated email with next steps and a link to your Quick Start guide that has been written to your `infrastructure-live` repository. 
+ + + diff --git a/docs/refarch/configuration/provision-accounts.md b/docs/refarch/configuration/provision-accounts.md new file mode 100644 index 0000000000..f7dc4ce57d --- /dev/null +++ b/docs/refarch/configuration/provision-accounts.md @@ -0,0 +1,15 @@ +# Provision AWS accounts + +Haxx0r ipsum mainframe bang ssh data public root client wombat recursively. Hexadecimal snarf chown highjack sudo for suitably small values null default bar unix server man pages endif ascii linux kilo tcp tunnel in. Long giga afk crack infinite loop buffer worm foo Dennis Ritchie. + +Protocol then bit while bar back door perl bang shell client bytes ifdef baz. Hello world mountain dew injection malloc var tunnel in todo class. For tera port bypass function packet sniffer for error char pragma printf sudo over clock grep continue. + +Linux mega var alloc xss linux tunnel in gc stdio.h int win back door mountain dew. Float I'm compiling null nak endif fatal Starcraft irc. Stack tcp foad port protocol ban protected eof ascii *.* blob flood then cat. + + + diff --git a/docs/refarch/configuration/route53.md b/docs/refarch/configuration/route53.md new file mode 100644 index 0000000000..30a5cdb7f6 --- /dev/null +++ b/docs/refarch/configuration/route53.md @@ -0,0 +1,15 @@ +# Configure Route53 and app domains + +Haxx0r ipsum mainframe bang ssh data public root client wombat recursively. Hexadecimal snarf chown highjack sudo for suitably small values null default bar unix server man pages endif ascii linux kilo tcp tunnel in. Long giga afk crack infinite loop buffer worm foo Dennis Ritchie. + +Protocol then bit while bar back door perl bang shell client bytes ifdef baz. Hello world mountain dew injection malloc var tunnel in todo class. For tera port bypass function packet sniffer for error char pragma printf sudo over clock grep continue. + +Linux mega var alloc xss linux tunnel in gc stdio.h int win back door mountain dew. Float I'm compiling null nak endif fatal Starcraft irc. Stack tcp foad port protocol ban protected eof ascii *.* blob flood then cat. + + + diff --git a/docs/refarch/configuration/run-the-wizard.md b/docs/refarch/configuration/run-the-wizard.md new file mode 100644 index 0000000000..a9bb1aaf70 --- /dev/null +++ b/docs/refarch/configuration/run-the-wizard.md @@ -0,0 +1,27 @@ +# Run the Wizard + +The Gruntwork CLI features a wizard designed to assist you in completing your Reference Architecture setup actions. The Gruntwork CLI wizard attempts to orchestrate all required configuration actions, such as provisioning AWS accounts, creating IAM roles used by Gruntwork tooling and engineers in each of the AWS accounts, registering new Route53 domain names, configuring Route53 Hosted Zones, and much more. + +If you have already run the wizard using the [bootstrap script](./install-required-tools.md#use-the-bootstrap-script-preferred), then you can skip this step. + +## Installation + +Installation instructions for the Gruntwork CLI can be found in [Install Required Tools](./install-required-tools.md#installing-gruntwork-cli). + +## Required Permissions + +To run the wizard you will need access to the AWS account that serves as the Organization Root of your AWS Organization. At a minimum, the AWS IAM user or role will need the `organizations:CreateAccount` action, which grants the ability to create member accounts. + +## Running the wizard + +To commence the wizard, first authenticate to AWS on the command line, then run `gruntwork wizard`. 
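For example, if you manage your credentials with `aws-vault`, the invocation might look like the following (the `org-root` profile name is a placeholder for whichever profile has the required permissions in your Organization Root account):

```bash
# Authenticate to the Organization Root account, then launch the configuration wizard
aws-vault exec org-root -- gruntwork wizard
```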
+ 
+If you need to stop running the wizard at any time, or if there is an error, the next time you run the wizard it will restart at the last step it stopped on.
+ 
+ 
 diff --git a/docs/refarch/configuration/setup-quotas.md b/docs/refarch/configuration/setup-quotas.md new file mode 100644 index 0000000000..455c591a21 --- /dev/null +++ b/docs/refarch/configuration/setup-quotas.md @@ -0,0 +1,15 @@ +# Configure AWS account quotas

Haxx0r ipsum mainframe bang ssh data public root client wombat recursively. Hexadecimal snarf chown highjack sudo for suitably small values null default bar unix server man pages endif ascii linux kilo tcp tunnel in. Long giga afk crack infinite loop buffer worm foo Dennis Ritchie.

Protocol then bit while bar back door perl bang shell client bytes ifdef baz. Hello world mountain dew injection malloc var tunnel in todo class. For tera port bypass function packet sniffer for error char pragma printf sudo over clock grep continue.

Linux mega var alloc xss linux tunnel in gc stdio.h int win back door mountain dew. Float I'm compiling null nak endif fatal Starcraft irc. Stack tcp foad port protocol ban protected eof ascii *.* blob flood then cat.

 
 diff --git a/docs/refarch/index.md b/docs/refarch/index.md new file mode 100644 index 0000000000..fbbd8a3f47 --- /dev/null +++ b/docs/refarch/index.md @@ -0,0 +1,22 @@ +# Reference Architecture

The Gruntwork Reference Architecture is an implementation of best practices for infrastructure in the cloud. It is an opinionated, end-to-end tech stack built on top of our Infrastructure as Code Library, deployed into your AWS accounts. It is composed of three pieces.

## Landing Zone

Gruntwork Landing Zone is a Terraform-native approach to [AWS Landing Zone / Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html). It uses Terraform to quickly create new AWS accounts, configure them with a standard security baseline, and define a best-practices multi-account setup.

## Sample Application

Our [sample application](https://github.com/gruntwork-io/aws-sample-app) is built with JavaScript, Node.js, and Express.js, following [Twelve-Factor App](https://12factor.net/) practices. It consists of a load balancer, a frontend, a backend, a cache, and a database.

## Pipelines

[Gruntwork Pipelines](/pipelines/overview/) makes the process of deploying infrastructure similar to how developers often deploy code. It is a code framework and approach that enables you to use your preferred CI tool to set up an end-to-end pipeline for infrastructure code.
+ + + diff --git a/docs/refarch/support/getting-help/index.md b/docs/refarch/support/getting-help/index.md new file mode 100644 index 0000000000..2fd9aec1a1 --- /dev/null +++ b/docs/refarch/support/getting-help/index.md @@ -0,0 +1,9 @@ +# Link: support + + + diff --git a/docs/refarch/support/onboarding/index.md b/docs/refarch/support/onboarding/index.md new file mode 100644 index 0000000000..1b78951057 --- /dev/null +++ b/docs/refarch/support/onboarding/index.md @@ -0,0 +1,11 @@ +# Onboarding sessions + +HAXXOR IPSUM + + + diff --git a/docs/refarch/usage/maintain-your-refarch/adding-new-account.md b/docs/refarch/usage/maintain-your-refarch/adding-new-account.md new file mode 100644 index 0000000000..6d88fae940 --- /dev/null +++ b/docs/refarch/usage/maintain-your-refarch/adding-new-account.md @@ -0,0 +1,299 @@ +# Adding a new account + +This document is a guide on how to add a new AWS account to your Reference Architecture. This is useful if you have a +need to expand the Reference Architecture with more accounts, like a test or sandbox account. + +## Create new Account in your AWS Org + +The first step to adding a new account is to create the new AWS Account in your AWS Organization. This can be done +either through the AWS Web Console, or by using the [Gruntwork CLI](https://github.com/gruntwork-io/gruntwork/). If you +are doing this via the CLI, you can run the following command to create the new account: + +```bash +gruntwork aws create --account "=" +``` + +Record the account name, AWS ID, and deploy order of the new account you just created in the +`accounts.json` file so that we can reference it throughout the process. + +### Set the deploy order + +The deploy order is the order in which the accounts are deployed when a common env file is modified (the files in +`_envcommon`). Note that the deploy order does not influence how changes to individual component configurations +(child Terragrunt configurations) are rolled out. + +Set the deploy order depending on the role that the account plays and how you want changes to be promoted across your +environment. + +General guidelines: + +- The riskier the change would be, the higher you should set the deploy order. You'll have to determine the level of + risk for each kind of change. +- The lowest deploy order should be set for `dev` and `sandbox` accounts. `dev` and `sandbox` accounts are typically the + least risky to break because they only affect internal users, and thus the impact to the business of downtime to these + accounts is limited. +- `prod` accounts should be deployed after all other app accounts (`dev`, `sandbox`, `stage`) because the risk of + downtime is higher. +- It could make sense for `prod` accounts to be deployed last, after shared services accounts (`shared`, `logs`, + `security`), but it depends on your risk level. +- Shared services accounts (`shared` and `logs`) should be deployed after the app accounts (`dev`, `sandbox`, `stage`, + `prod`). + - A potential outage in `shared` could prevent access to deploy old and new code to all of your environments (e.g., + a failed deploy of `account-baseline` could cause you to lose access to the ECR repos). This could be more + damaging than just losing access to `prod`. + - Similarly, an outage in `logs` could result in losing access to audit logs which can prevent detection of + malicious activity, or loss of compliance. +- `security` should be deployed after all other accounts. 
+ - A potential outage in `security` could prevent loss of all access to all accounts, which will prevent you from + making any changes, which is the highest impact to your operations. Therefore we recommend deploying security + last. + +For example, suppose you have the following folder structure: + +```bash title="Infrastructure Live" +. +├── accounts.json +├── _envcommon +│ └── services +│ └── my-app.hcl +├── dev +│ └── us-east-1 +│ └── dev +│ └── services +│ └── my-app +│ └── terragrunt.hcl +│ +├── stage +│ └── us-east-1 +│ └── stage +│ └── services +│ └── my-app +│ └── terragrunt.hcl +└── prod + └── us-east-1 + └── prod + └── services + └── my-app + └── terragrunt.hcl +``` + +And suppose you had the following in your `accounts.json` file: + +```json title="accounts.json" +{ + "logs": { + "deploy_order": 5, + "id": "111111111111", + "root_user_email": "" + }, + "security": { + "deploy_order": 5, + "id": "222222222222", + "root_user_email": "" + }, + "shared": { + "deploy_order": 4, + "id": "333333333333", + "root_user_email": "" + }, + "dev": { + "deploy_order": 1, + "id": "444444444444", + "root_user_email": "" + }, + "stage": { + "deploy_order": 2, + "id": "555555555555", + "root_user_email": "" + }, + "prod": { + "deploy_order": 3, + "id": "666666666666", + "root_user_email": "" + } +} +``` + +If you make a change in `_envcommon/services/my-app.hcl`, then the Infrastructure CI/CD pipeline will proceed to run +`plan` and `apply` in the deploy order specified in the `accounts.json` file. For the example, this means that the +pipeline will run `plan` and `apply` on `dev` first, then `stage`, and then finally `prod`. If anything fails in +between, then the pipeline will halt at that point. That is, if there is an error trying to deploy to `dev`, then the +pipeline will halt without moving to `stage` or `prod`. + +If instead you made a change in `dev/us-east-1/dev/services/my-app/terragrunt.hcl` and +`prod/us-east-1/prod/services/my-app/terragrunt.hcl`, then the changes are applied simultaneously, ignoring the deploy +order. This is because a child config was updated directly, instead of the common configuration file. In this way, the +deploy order only influences the pipeline for updates to the common component configurations. + +### Configure MFA + +Once the account is created, log in using the root credentials and configure MFA using [this +document](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root) as a guide. + +:::caution + +It is critical to enable MFA as the root user can bypass just about any other security restrictions you put in place. + +::: + +:::tip + +Make sure you keep a paper copy of the virtual device secret key so that +you have a backup in case you lose your MFA device. + +::: + +### Create a temporary IAM User + +Once MFA is configured, set up a temporary IAM User with administrator access (the AWS managed IAM Policy +`AdministratorAccess`) and create an AWS Access key pair so you can authenticate on the command line. + +:::note + +At this point, you won't need to use the root credentials again until you are ready to delete the AWS account. + +::: + +## Update Logs, Security, and Shared accounts to allow cross account access + +In the Reference Architecture, all the AWS activity logs are configured to be streamed to a dedicated `logs` account. +This ensures that having full access to a particular account does not necessarily grant you the ability to tamper with +audit logs. 
+ +In addition, all account access is managed by a central `security` account where the IAM Users are defined. This allows +you to manage access to accounts from a central location, and your users only need to manage a single set of AWS +credentials when accessing the environment. + +If you are sharing encrypted AMIs, then you will also need to ensure the new account has access to the KMS key that +encrypts the AMI root device. This is managed in the `shared` account baseline module. + +Finally, for the [ECS Deploy +Runner](https://github.com/gruntwork-io/terraform-aws-ci/tree/main/modules/ecs-deploy-runner) to work, the new account +needs to be able to access the secrets for accessing the remote repositories and the Docker images that back the build +runners. Both of these are stored in the `shared` account. + +In order for this setup to work for each new account that is created, the `logs`, `security`, and `shared` accounts need +to be made aware of the new account. This is handled through the `accounts.json` file in your +`infrastructure-live` repository. + +Once the `accounts.json` file is updated with the new account, you will want to grant the permissions for the new +account to access the shared resources. This can be done by running `terragrunt apply` in the `account-baseline` module +for the `logs`, `shared`, and `security` account, and the `ecr-repos` and `shared-secret-resource-policies` modules in the `shared` +account: + +```bash +(cd logs/_global/account-baseline && terragrunt apply) +(cd security/_global/account-baseline && terragrunt apply) +(cd shared/_global/account-baseline && terragrunt apply) +(cd shared/us-west-2/_regional/ecr-repos && terragrunt apply) +(cd shared/us-west-2/_regional/shared-secret-resource-policies && terragrunt apply) +``` + +Each call to apply will show you the plan for making the cross account changes. Verify the plan looks correct, and then +approve it to apply the updated cross account permissions. + +## Deploy the security baseline for the app account + +Now that the cross account access is configured, you are ready to start provisioning the new account! + +First, create a new folder for your account in `infrastructure-live`. The folder name should match the name of the AWS +account. + +Once the folder is created, create the following sub-folders and files with the following content: + +- ```json title="./infrastructure-live/account.hcl" + locals { + account_name = "" + } + ``` + +- ```bash title="./infrastructure-live/_global/region.hcl" + # Modules in the account _global folder don't live in any specific AWS region, but you still have to send the API calls + # to _some_ AWS region, so here we pick a default region to use for those API calls. + locals { + aws_region = "us-east-1" + } + ``` + +Next, copy over the `account-baseline` configuration from one of the application accounts (e.g., `dev`) and place it in +the `_global` folder: + +```bash +cp -r dev/\_global/account-baseline /\_global/account-baseline +``` + +Open the `terragrunt.hcl` file in the `account-baseline` folder and sanity check the configuration. Make sure there are +no hard coded parameters that are specific to the dev account. If you have not touched the configuration since the +Reference Architecture was deployed, you won't need to change anything. + +At this point, your folder structure for the new account should look like the following: + +```bash +. 
+
+└── new-account
+    ├── account.hcl
+    └── _global
+        ├── region.hcl
+        └── account-baseline
+            └── terragrunt.hcl
+
```

Once the folder structure looks correct and you have confirmed the `terragrunt.hcl` configuration is accurate, you are
ready to deploy the security baseline. Authenticate to the new account on the CLI (see [this blog
post](https://blog.gruntwork.io/a-comprehensive-guide-to-authenticating-to-aws-on-the-command-line-63656a686799) for
instructions) using the access credentials for the temporary IAM User you created above and run `terragrunt apply`.

When running `apply`, you will see the plan for applying the full security baseline to the new account. Verify the plan
looks correct, and then approve it to roll out the security baseline.

At this point, you can use the cross account access from the `security` account to authenticate to the new account.
Use your security account IAM User to assume the `allow-full-access-from-other-accounts` IAM Role in the new account to
confirm this.

Once you confirm you have access to the new account from the `security` account, log in using the
`allow-full-access-from-other-accounts` IAM Role and remove the temporary IAM User, as you will no longer need it.

## Deploy the ECS Deploy Runner

Once the security baseline is deployed on the new account, you can deploy the ECS Deploy Runner. With the ECS Deploy
Runner, you will be able to provision new resources in the new account.

To deploy the ECS Deploy Runner, copy the terragrunt configurations for `mgmt/vpc-mgmt` and `mgmt/ecs-deploy-runner`
from the `dev` account:

```bash
mkdir -p /us-west-2/mgmt
cp -r dev/us-west-2/mgmt/{vpc-mgmt,ecs-deploy-runner} /us-west-2/mgmt
```

Be sure to open the `terragrunt.hcl` file in the copied folders and sanity check the configuration. Make sure there are
no hard-coded parameters that are specific to the dev account. If you have not touched the configuration since the
Reference Architecture was deployed, you won't need to change anything.

Once the configuration looks correct, go into the `mgmt` folder and use `terragrunt run-all apply` to deploy the ECS
Deploy Runner:

```bash
cd /us-west-2/mgmt && terragrunt run-all apply
```

:::note

Because this uses `run-all`, the command will not pause to show you the plan. If you wish to view the plan,
run `apply` in each subfolder of the `mgmt` folder, in dependency graph order. You can see the dependency graph by using
the [graph-dependencies terragrunt
command](https://terragrunt.gruntwork.io/docs/reference/cli-options/#graph-dependencies).

:::

At this point, the ECS Deploy Runner is provisioned in the new account, and you can start using Gruntwork Pipelines
to provision new infrastructure in the account.

 

 diff --git a/docs/refarch/usage/maintain-your-refarch/deploying-your-apps.md b/docs/refarch/usage/maintain-your-refarch/deploying-your-apps.md new file mode 100644 index 0000000000..539fcf5d70 --- /dev/null +++ b/docs/refarch/usage/maintain-your-refarch/deploying-your-apps.md @@ -0,0 +1,488 @@
+---
+toc_max_heading_level: 2
+---
+
+import Tabs from "@theme/Tabs"
+import TabItem from "@theme/TabItem"
+
+# Deploying your apps
+
+In this guide, we'll walk you through deploying a Dockerized app to the App Orchestration cluster (ECS or EKS) running in
+your Reference Architecture.
+ +## What's already deployed + +When Gruntwork initially deploys the Reference Architecture, we deploy the +[aws-sample-app](https://github.com/gruntwork-io/aws-sample-app/) into it, configured both as a frontend (i.e., +user-facing app that returns HTML) and as a backend (i.e., an app that's only accessible internally and returns JSON). +We recommend checking out the [aws-sample-app](https://github.com/gruntwork-io/aws-sample-app/) as it is designed to +deploy seamlessly into the Reference Architecture and demonstrates many important patterns you may wish to follow in +your own apps, such as how to package your app using Docker or Packer, do service discovery for microservices and data +stores in a way that works in dev and prod, securely manage secrets such as database credentials and self-signed TLS +certificates, automatically apply schema migrations to a database, and so on. + +However, for the purposes of this guide, we will create a much simpler app from scratch so you can see how all the +pieces fit together. Start with this simple app, and then, when you're ready, start adopting the more advanced +practices from [aws-sample-app](https://github.com/gruntwork-io/aws-sample-app/). + +## Deploying another app + +For this guide, we'll use a simple Node.js app as an example, but the same principles can be applied to any app. +Below is a classic, "Hello World" starter app that listens for requests on port `8080`. For this example +walkthrough, save this file as `server.js`. + +```js title="server.js" +const express = require("express") + +// Constants +const PORT = 8080 +const HOST = "0.0.0.0" + +// App +const app = express() +app.get("/simple-web-app", (req, res) => { + res.send("Hello world\n") +}) + +app.listen(PORT, HOST) +console.log(`Running on http://${HOST}:${PORT}`) +``` + +Since we need to pull in the dependencies (like ExpressJS) to run this app, we will also need a corresponding `package.json`. Please save this file along side `server.js`. + +```js title="package.json" +{ + "name": "docker_web_app", + "version": "1.0.0", + "main": "server.js", + "scripts": { + "start": "node server.js" + }, + "dependencies": { + "express": "^4.17.2" + } +} +``` + +## Dockerizing + +In order to deploy the app, we need to Dockerize the app. If you are not familiar with the basics of Docker, we +recommend you check out our "Crash Course on Docker and Packer" from the [Gruntwork Training +Library](https://training.gruntwork.io/p/a-crash-course-on-docker-packer). + +For this guide, we will use the following `Dockerfile` to package our app into a container (see [Docker +samples](https://docs.docker.com/samples/) for how to Dockerize many popular app formats): + +```docker +FROM node:14 + +# Create app directory +WORKDIR /usr/app + +COPY package*.json ./ + +RUN npm install +COPY . . + +# Ensure that our Docker image is configured to `EXPOSE` +# the port that our app is going to need for external communication. +EXPOSE 8080 +CMD [ "npm", "start" ] +``` + +The folder structure of our sample app looks like this: + +```shell +├── server.js +├── Dockerfile +└── package.json +``` + +To build this Docker image from the `Dockerfile`, run: + +```bash +docker build -t simple-web-app:latest . +``` + +Now you can test the container to see if it is working: + +```bash +docker run --rm -p 8080:8080 simple-web-app:latest +``` + +This starts the newly built container and links port `8080` on your machine to the container's port `8080`. 
You should +see output like below when you run this command: + +``` +> docker_web_app@1.0.0 start /usr/app +> node server.js + +Running on http://0.0.0.0:8080 +``` + +You should now be able to hit the app by opening `localhost:8080/simple-web-app` in your browser. Try it out to verify +you get the `"Hello world"` message from the server. + +## Publishing your Docker image + +Next, let's publish those images to an [ECR repo](https://aws.amazon.com/ecr/). All ECR repos are managed in the +`shared-services` AWS account in your Reference Architecture. + +First, you'll need to create the new ECR repository. + +Create a new branch on your infrastructure-live repository: + +```bash +git checkout -b simple-web-app-repo +``` + +Open `repos.yml` in `shared/us-west-2/_regional/ecr-repos` and add the desired repository name of your app. For the +purposes of our example, let's call ours `simple-web-app`: + +```yaml +simple-web-app: +external_account_ids_with_read_access: + # NOTE: we have to comment out the directives so that the python based data merger (see the `merge-data` hook under + # blueprints in this repository) can parse this yaml file. This still works when feeding through templatefile, as it + # will interleave blank comments with the list items, which yaml handles gracefully. + # %{ for account_id in account_ids } + - "${account_id}" +# %{ endfor } +external_account_ids_with_write_access: [] +tags: {} +enable_automatic_image_scanning: true +``` + +Commit and push the change: + +```bash +git add shared/us-west-2/shared/data-stores/ecr-repos/terragrunt.hcl && git commit -m 'Added simple-web-app repo' && git push +``` + +Now open a pull request on the `simple-web-app-repo` branch. + +This will cause the ECS deploy runner pipeline to run a `terragrunt plan` and append the plan output to the body of the PR you opened. If the plan output looks correct with no errors, somebody can review and approve the PR. Once approved, you can merge, which will kick off a `terragrunt apply` on the deploy runner, creating the repo. Follow the progress through your CI server. For example, you can go to GitHub actions workflows page and tail the logs from the ECS deploy runner there. + +Once the repository exists, you can use it with the Docker image. Each repo in ECR has a URL of the format `.dkr.ecr..amazonaws.com/`. For example, an ECR repo in `us-west-2`, and an app called `simple-web-app`, the registry URL would be: + +``` +.dkr.ecr.us-west-2.amazonaws.com/simple-web-app +``` + +You can create a Docker image for this repo, with a `v1` label, as follows: + +```bash +docker tag simple-web-app:latest .dkr.ecr.us-west-2.amazonaws.com/simple-web-app:v1 +``` + +Next, authenticate your Docker client with ECR in the shared-services account: + +```bash +aws ecr get-login-password --region "us-west-2" | docker login --username AWS --password-stdin .dkr.ecr.us-west-2.amazonaws.com +``` + +And finally, push your newly tagged image to publish it: + +```bash +docker push .dkr.ecr.us-west-2.amazonaws.com/simple-web-app:v1 +``` + +## Deploying your app + + + + + +Now that you have the Docker image of your app published, the next step is to deploy it to your ECS Cluster that was +set up as part of your reference architecture deployment. + +### Setting up the Application Load Balancer + +The first step is to create an [Application Load Balancer (ALB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html) for the app. 
The ALB will be exposed to the Internet and will route incoming traffic to the app. It's possible to use a single ALB with multiple applications, but for this example, we'll create a new ALB in addition to the ALB used by the aws-sample-app. + +To set up a new ALB, you'll need to create a `terragrunt.hcl` in each app environment (that is, in dev, stage, and prod). For example, for the `stage` environment, create an `alb-simple-web-app` folder in `stage/us-west-2/networking/`. Next, you can copy over the contents of the alb `terragrunt.hcl` so you have something to start with. + +With the `terragrunt.hcl` file open, update the following parameters: + +- Set `alb_name` to your desired name: e.g., `alb-simple-web-app-stage` +- Set `domain_names` to a desired DNS name: e.g., `domain_names = ["simple-web-app-stage.example.com"]` +- Note that your domain is available in an account-level `local` variable, `local.account_vars.locals.domain_name.name`. You can thus use a string interpolation to avoid hardcoding the domain name: `domain_names = ["simple-web-app-stage.${local.account_vars.locals.domain_name.name}"]` + +That's it! + +### Setting up the ECS service + +The next step is to create a `terragrunt.hcl` file to deploy your app in each app environment (i.e. in dev, stage, +prod). To do this, we will first need to define the common inputs for deploying the `simple-web-app` service. + +Copy the file `_envcommon/services/ecs-sample-app-frontend.hcl` into a new file +`_envcommon/services/ecs-simple-web-app.hcl`. + +Next, update the following in the new `ecs-simple-web-app.hcl` configuration file: + +- Locate the `dependency "alb"` block and modify it to point to the new ALB configuration you just defined. That is, change the `config_path` to the relative path to your new ALB. e.g., `config_path = "../../networking/alb-simple-web-app"` +- Set the `service_name` local to your desired name: e.g., `simple-web-app-stage`. +- Update `ecs_node_port_mappings` to only have a map value for port 8080 +- In the `container_definitions` object, set `image` to the repo url of the just published Docker image: e.g., `.dkr.ecr.us-west-2.amazonaws.com/simple-web-app` +- Set `cpu` and `memory` to a low value like 256 and 512 +- Remove all the `environment` variables, leaving only an empty list, e.g. `environment = []` +- Remove port 8443 from the `portMappings` +- Remove the unnecessary `linuxParameters` parameter +- Remove the `iam_role_name` and `iam_policy` parameters since this simple web app doesn't need any IAM permissions + +Once the envcommon file is created, you can create the `terragrunt.hcl` file to deploy it in a specific environment. +For the purpose of this example, we will assume we want to deploy the simple web app into the `dev` account first. + +1. Create a `simple-web-app` folder in `dev/us-west-2/dev/services`. +1. Copy over the contents of the `sample-app-frontend terragrunt.hcl`. +1. Update the include path for `envcommon` to reference the new `ecs-simple-web-app.hcl` envcommon file you created + above. +1. Remove the unneeded `service_environment_variables`, `tls_secrets_manager_arn`, and `db_secrets_manager_arn` local + variables, as well as all usage of it in the file. +1. Update the `tag` local variable to reference the Docker image tag we created earlier. + +### Deploying your configuration + +The above are the minimum set of configurations that you need to deploy the app. 
You can take a look at [`variables.tf` +of `ecs-service`](https://github.com/gruntwork-io/terraform-aws-service-catalog/tree/main/modules/services/ecs-service) +for all the options. + +Once you've verified that everything looks fine, change to the new ALB directory you created, and run: + +```bash +terragrunt apply +``` + +This will show you the plan for adding the new ALB. Verify the plan looks correct, and then approve it to apply your ALB +configuration to create a new ALB. + +Now change to the new `services/simple-web-app` folder, and run + +```bash +terragrunt apply +``` + +Similar to applying the ALB configuration, this will show you the plan for adding the new service. Verify and approve +the plan to apply your application configuration, which will create a new ECS service along with a target group that +connects the ALB to the service. + +### Monitoring your deployment progress + +Due to the asynchronous nature of ECS deployments, a successful `terragrunt apply` does not always mean your app +was deployed successfully. The following commands will help you examine the ECS cluster from the CLI. + +First, you can find the available ECS clusters: + +```bash +aws --region us-west-2 ecs list-clusters +``` + +Armed with the available clusters, you can list the available ECS services on a cluster by running: + +```bash +aws --region us-west-2 ecs list-services --cluster +``` + +The list of services should include the new `simple-web-app` service you created. You can get more information about the service by describing it: + +``` +aws --region us-west-2 ecs describe-services --cluster --services +``` + +A healthy service should show `"status": "ACTIVE"` in the output. You can also review the list of `events` to see what has happened with the service recently. If the `status` shows something else, it's time to start debugging. + +### Debugging errors + +Sometimes, things don't go as planned. And when that happens, it's always beneficial to know how to locate the +source of the problem. + +By default, all the container logs from a `service` (`stdout` and `stderr`) are sent to CloudWatch Logs. This is ideal for +debugging situations where the container starts successfully but the service doesn't work as expected. Let's assume our +`simple-web-app` containers started successfully (which they did!) but for some reason our requests to those containers +are timing out or returning wrong content. + +1. Go to the "Logs" section of the [Cloudwatch Management Console](https://console.aws.amazon.com/cloudwatch/), click on Log groups, and look for the service in the list. For example: `/stage/ecs/simple-web-app-stage`. + +1. Click on the entry. You should be presented with a real-time log stream of the container. If your app logs to `stdout`, its logs will show up here. You can export the logs and analyze it in your preferred tool or use [CloudWatch Log Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) to query the logs directly + in the AWS web console. + + + + + +Now that you have the Docker image of your app published, the next step is to deploy it to your EKS Cluster that was +set up as part of your reference architecture deployment. + +### Setting up the Kubernetes Service + +The next step is to create a `terragrunt.hcl` file to deploy your app in each app environment (i.e. in dev, stage, +prod). To do this, we will first need to define the common inputs for deploying the `simple-web-app` service. 
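+
+At a glance, the file-creation steps described in the rest of this section might look like this when run from the root
+of your `infrastructure-live` repository (a sketch using the example paths from this guide; adjust them to your own
+layout):
+
+```bash
+# Create the shared envcommon configuration for the new service
+# (the edits to make inside this file are described below):
+cp _envcommon/services/k8s-sample-app-frontend.hcl _envcommon/services/k8s-simple-web-app.hcl
+
+# Create the per-environment folder that will hold the service's terragrunt.hcl,
+# starting with the dev account:
+mkdir -p dev/us-west-2/dev/services/simple-web-app
+```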
+
+Copy the file `_envcommon/services/k8s-sample-app-frontend.hcl` into a new file
+`_envcommon/services/k8s-simple-web-app.hcl`.
+
+Next, update the following in the new `k8s-simple-web-app.hcl` configuration file:
+
+- Set the `service_name` local to your desired name: e.g., `simple-web-app-stage`.
+- In the `container_image` object, set `repository` to the repo URL of the just-published Docker image: e.g., `<ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/simple-web-app`.
+- Update the `domain_name` to configure a DNS entry for the service: e.g., `simple-web-app.${local.account_vars.locals.domain_name.name}`.
+- Remove the `scratch_paths` configuration, as our simple web app does not pull in secrets dynamically.
+- Remove all environment variables, leaving only an empty map: e.g., `env_vars = {}`.
+- Update the health check paths to reflect our new service:
+
+  - `alb_health_check_path`
+  - `liveness_probe_path`
+  - `readiness_probe_path`
+
+- Remove the configurations for IAM Role service account binding, as our app won't be communicating with AWS:
+  - `service_account_name`
+  - `iam_role_name`
+  - `eks_iam_role_for_service_accounts_config`
+  - `iam_role_exists`
+  - `iam_policy`
+
+Once the envcommon file is created, you can create the `terragrunt.hcl` file to deploy it in a specific environment.
+For the purpose of this example, we will assume we want to deploy the simple web app into the `dev` account first.
+
+1. Create a `simple-web-app` folder in `dev/us-west-2/dev/services`.
+1. Copy over the contents of the `k8s-sample-app-frontend` `terragrunt.hcl` file.
+1. Update the include path for `envcommon` to reference the new `k8s-simple-web-app.hcl` envcommon file you created
+   above.
+1. Remove the unneeded `tls_secrets_manager_arn` local variable, as well as all usages of it in the file.
+1. Update the `tag` input variable to reference the Docker image tag we created earlier.
+
+### Deploying your configuration
+
+The above is the minimum set of configurations that you need to deploy the app. You can take a look at [`variables.tf`
+of `k8s-service`](https://github.com/gruntwork-io/terraform-aws-service-catalog/tree/main/modules/services/k8s-service)
+for all the available options.
+
+Once you've verified that everything looks fine, change to the new `services/simple-web-app` folder, and run:
+
+```bash
+terragrunt apply
+```
+
+This will show you the plan for deploying your new service. Verify the plan looks correct, and then approve it to apply
+your application configuration, which will create a new Kubernetes Deployment to schedule the Pods. In the process,
+Kubernetes will allocate:
+
+- A `Service` resource to expose the Pods under a static IP within the Kubernetes cluster.
+- An `Ingress` resource to expose the Pods externally under an ALB.
+- A Route 53 subdomain that binds to the ALB endpoint.
+
+Once the service is fully deployed, you can hit the configured DNS entry to reach your service.
+
+### Monitoring your deployment progress
+
+Due to the asynchronous nature of Kubernetes deployments, a successful `terragrunt apply` does not always mean your app
+was deployed successfully. The following commands will help you examine the deployment progress from the CLI.
+
+First, if you haven't done so already, configure your `kubectl` client to access the EKS cluster. You can follow the
+instructions [in this section of the
+docs](https://github.com/gruntwork-io/terraform-aws-eks/blob/main/core-concepts.md#how-do-i-authenticate-kubectl-to-the-eks-cluster)
+to configure `kubectl`.
For this guide, we will use [kubergrunt](https://github.com/gruntwork-io/kubergrunt): + +``` +kubergrunt eks configure --eks-cluster-arn ARN_OF_EKS_CLUSTER +``` + +Once `kubectl` is configured, you can query the list of deployments: + +``` +kubectl get deployments --namespace applications +``` + +The list of deployments should include the new `simple-web-app` service you created. This will show you basic status +info of the deployment: + +``` +NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE +simple-web-app 3 3 3 3 5m +``` + +A stable deployment is indicated by all statuses showing the same counts. You can get more detailed information about a +deployment using the `describe deployments` command if the numbers are not aligned: + +``` +kubectl describe deployments simple-web-app --namespace applications +``` + +See the [How do I check the status of a +rollout?](https://github.com/gruntwork-io/helm-kubernetes-services/blob/main/charts/k8s-service/README.md#how-do-i-check-the-status-of-the-rollout) +documentation for more information on getting detailed information about Kubernetes Deployments. + +### Debugging errors + +Sometimes, things don't go as planned. And when that happens, it's always beneficial to know how to locate the +source of the problem. There are two places you can look for information about a failed Pod. + +### Using kubectl + +The `kubectl` CLI is a powerful tool that helps you investigate problems with your `Pods`. + +The first step is to obtain the metadata and status of the `Pods`. To lookup information about a `Pod`, retrieve them +using `kubectl`: + +```bash +kubectl get pods \ + -l "app.kubernetes.io/name=simple-web-app,app.kubernetes.io/instance=simple-web-app" \ + --all-namespaces +``` + +This will list out all the associated `Pods` with the deployment you just made. Note that this will show you a minimal +set of information about the `Pod`. However, this is a useful way to quickly scan the scope of the damage: + +- How many `Pods` are available? Are all of them failing or just a small few? +- Are the `Pods` in a crash loop? Have they booted up successfully? +- Are the `Pods` passing health checks? + +Once you can locate your failing `Pods`, you can dig deeper by using `describe pod` to get more information about a +single `Pod`. To do this, you will first need to obtain the `Namespace` and name for the `Pod`. This information should +be available in the previous command. Using that information, you can run: + +```bash +kubectl describe pod $POD_NAME -n $POD_NAMESPACE +``` + +to output the detailed information. This includes the event logs, which indicate additional information about any +failures that has happened to the `Pod`. + +You can also retrieve logs from a `Pod` (`stdout` and `stderr`) using `kubectl`: + +``` +kubectl logs $POD_NAME -n $POD_NAMESPACE +``` + +Most cluster level issues (e.g if there is not enough capacity to schedule the `Pod`) can be triaged with this +information. However, if there are issues booting up the `Pod` or if the problems lie in your application code, you will +need to dig into the logs. + +### CloudWatch Logs + +By default, all the container logs from a `Pod` (`stdout` and `stderr`) are sent to CloudWatch Logs. This is ideal for +debugging situations where the container starts successfully but the service doesn't work as expected. Let's assume our +`simple-web-app` containers started successfully (which they did!) but for some reason our requests to those containers +are timing out or returning wrong content. + +1. 
Go to the "Logs" section of the [CloudWatch Management Console](https://console.aws.amazon.com/cloudwatch/) and look for the name of the EKS cluster in the table.
+
+1. Clicking it should take you to a new page that displays a list of entries. Each of these corresponds to a `Pod` in the
+   cluster, and contains the `Pod` name. Look for the one that corresponds to the failing `Pod` and click it.
+
+1. You should be presented with a real-time log stream of the container. If your app logs to STDOUT, its logs will show
+   up here. You can export the logs and analyze them in your preferred tool or use [CloudWatch Log
+   Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) to query the logs directly
+   in the AWS web console.
+
+
+
+
+
+
diff --git a/docs/refarch/usage/maintain-your-refarch/extending.md b/docs/refarch/usage/maintain-your-refarch/extending.md
new file mode 100644
index 0000000000..af1c9a922c
--- /dev/null
+++ b/docs/refarch/usage/maintain-your-refarch/extending.md
@@ -0,0 +1,31 @@
+---
+title: "Extending your Reference Architecture"
+---
+
+# Extending and modifying your Reference Architecture
+
+Your Reference Architecture is delivered as a collection of IaC code. You will grow and evolve this codebase throughout the lifetime of your cloud deployment. There are a few ways in which you can extend and modify your Reference Architecture:
+
+- You can immediately add any off-the-shelf Gruntwork services.
+- You can create your own services using any Gruntwork modules.
+- You can build your own modules and combine them into your own services.
+
+## Use Gruntwork's services
+
+Gruntwork provides a [_catalog_ of services](/iac/reference/) that can be added by directly referencing them in your Terragrunt configuration. You can add these services to your architecture by creating references to them in the `_envcommon` directory, and then in each respective environment directory.
+
+## Composing your own services
+
+If Gruntwork doesn't already have the service you are looking for, you may be able to use our [modules](../../../iac/overview/modules) and combine them into your own bespoke services to accelerate development of the functionality you need.
+
+## Build your own modules
+
+If Gruntwork doesn't have existing modules for the AWS services that you are trying to deploy, you can always [create and deploy your own modules](/iac/getting-started/deploying-a-module), compose them into your own bespoke services, and add them to your Reference Architecture.
+
+
+
diff --git a/docs/refarch/usage/maintain-your-refarch/monitoring.md b/docs/refarch/usage/maintain-your-refarch/monitoring.md
new file mode 100644
index 0000000000..fb046cda3c
--- /dev/null
+++ b/docs/refarch/usage/maintain-your-refarch/monitoring.md
@@ -0,0 +1,55 @@
+# Monitoring, Alerting, and Logging
+
+You'll want to see what's happening in your AWS account. Here's where to look.
+
+## Metrics
+
+You can find all the metrics for your AWS account on the [CloudWatch Metrics
+Page](https://console.aws.amazon.com/cloudwatch/home?#metricsV2:).
+
+- Most AWS services emit metrics by default, which you'll find under the "AWS Namespaces" (e.g. EC2, ECS, RDS).
+
+- Custom metrics show up under "Custom Namespaces." In particular, the [cloudwatch-memory-disk-metrics-scripts
+  module](https://github.com/gruntwork-io/terraform-aws-monitoring/tree/main/modules/metrics/) is installed on every
+  server to emit metrics not available from AWS by default, including memory and disk usage. You'll find these under
+  the "Linux System" Namespace.
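+
+If you prefer the CLI to the console, you can spot-check which metrics are being reported (a sketch; the custom
+namespace name below is an assumption, so check the "Custom Namespaces" section of the CloudWatch console for the
+exact name used in your account):
+
+```bash
+# List the metrics an AWS service emits by default, e.g. ECS:
+aws cloudwatch list-metrics --namespace "AWS/ECS" --region us-west-2
+
+# List the custom memory/disk metrics emitted by the monitoring scripts
+# (the namespace here is an assumption; see the note above):
+aws cloudwatch list-metrics --namespace "System/Linux" --region us-west-2
+```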
+
+You may want to create a [Dashboard](https://console.aws.amazon.com/cloudwatch/home?#dashboards:)
+with the most useful metrics for your services and have that open on a big screen at all times.
+
+## Alerts
+
+A number of alerts have been configured using the [alarms modules in
+terraform-aws-monitoring](https://github.com/gruntwork-io/terraform-aws-monitoring/tree/main/modules/alarms) to notify you
+in case of problems, such as a service running out of disk space or a load balancer seeing too many 5xx errors.
+
+- You can find all the alerts in the [CloudWatch Alarms
+  Page](https://console.aws.amazon.com/cloudwatch/home?#alarm:alarmFilter=ANY).
+
+- You can also find [Route 53 Health Checks on this page](https://console.aws.amazon.com/route53/healthchecks/home#/).
+  These health checks test your public endpoints from all over the globe and notify you if your services are unreachable.
+
+That said, you probably don't want to wait for someone to check that page before realizing something is wrong, so
+instead, you should subscribe to alerts via email or text message. Go to the [SNS Topics
+Page](https://console.aws.amazon.com/sns/v2/home?#/topics), select the `cloudwatch-alarms` topic, and click "Actions ->
+Subscribe to topic."
+
+If you'd like alarm notifications to go to a Slack channel, check out the [sns-to-slack
+module](https://github.com/gruntwork-io/terraform-aws-monitoring/tree/main/modules/alarms/sns-to-slack).
+
+## Logs
+
+All of your services have been configured using the [cloudwatch-log-aggregation-scripts
+module](https://github.com/gruntwork-io/terraform-aws-monitoring/tree/main/modules/logs/cloudwatch-log-aggregation-scripts)
+to send their logs to [CloudWatch Logs](https://console.aws.amazon.com/cloudwatch/home?#logs:). Instead of SSHing to
+each server to see a log file, and worrying about losing those log files if the server fails, you can just go to the
+[CloudWatch Logs Page](https://console.aws.amazon.com/cloudwatch/home?#logs:) and browse and search log events for all
+your servers in near-real-time.
+
+
+
diff --git a/docs/refarch/usage/maintain-your-refarch/staying-up-to-date.md b/docs/refarch/usage/maintain-your-refarch/staying-up-to-date.md
new file mode 100644
index 0000000000..d3be91ad30
--- /dev/null
+++ b/docs/refarch/usage/maintain-your-refarch/staying-up-to-date.md
@@ -0,0 +1,37 @@
+# Staying up to date
+
+Keeping your Reference Architecture up to date is important for several reasons. AWS regularly releases updates and introduces changes to its services and features. By maintaining an up-to-date IaC codebase, you can adapt to these updates seamlessly. This ensures that your architecture remains aligned with the latest best practices and takes advantage of new functionalities, security enhancements, and performance optimizations offered by AWS.
+
+Neglecting to keep your IaC code up to date can lead to significant challenges. When you finally reach a point where an update becomes necessary, the process can become much more cumbersome and time-consuming. Outdated code may rely on deprecated or obsolete AWS resources, configurations, or APIs, making it difficult to migrate to newer versions smoothly. In such cases, the effort required to update the codebase can be substantially higher, potentially resulting in additional costs, delays, and increased risk of errors or production outages.
+ +## Upgrading Terraform across your modules + +It is important to regularly update your version of Terraform to ensure you have access to the latest features, bug fixes, security patches, and performance improvements necessary for smooth infrastructure provisioning and management. + +Neglecting regular updates may lead to increased complexity and difficulty when attempting to upgrade from multiple versions behind. This was particularly true during the pre-1.0 era of Terraform where significant changes and breaking modifications were more frequent. + +The test pipeline's workhorse, the ECS Deploy Runner, includes a Terraform version manager, +[`tfenv`](https://github.com/tfutils/tfenv), so that you can run multiple versions of Terraform with your +`infrastructure-live` repo. This is especially useful when you want to upgrade Terraform versions. + +1. You'll first need to add a `.terraform-version` file to the module directory of the module you're upgrading. +1. In that file, specify the Terraform version as a string, e.g. `1.0.8`. Then push your changes to a branch. +1. The test pipeline will detect the change to the module and run `plan` on that module. When it does this, it will + use the Terraform version you specified in the `.terraform-version` file. +1. If the `plan` output looks good and there are no issues, you can approve and merge to your default protected branch. Once the code is merged, the changes will be `apply`ed + using the newly specified Terraform version. + + :::info + + The `.tfstate` state file will be written in the version specified by the `.terraform-version` file. You can verify this by viewing the state file in the S3 + bucket containing all your Reference Architecture's state files. + + ::: + + + diff --git a/docs/refarch/usage/maintain-your-refarch/undeploying.md b/docs/refarch/usage/maintain-your-refarch/undeploying.md new file mode 100644 index 0000000000..e0cdda8626 --- /dev/null +++ b/docs/refarch/usage/maintain-your-refarch/undeploying.md @@ -0,0 +1,212 @@ +# Undeploying your Reference Architecture + +Terraform makes it fairly easy to delete resources using the `destroy` command. This is very useful in testing and +pre-prod environments, but can also be dangerous in production environments. + +:::danger + +Be especially careful when running `destroy` in any production environment so you don't accidentally end up deleting +something you'll very much regret (e.g., a production database). + +If you delete resources, **there is no undo** + +::: + +## Prerequisites + +### Understand `force_destroy` on S3 buckets + +By default, if your Terraform code includes an S3 bucket, when you run `terraform destroy`, if that bucket contains +any content, Terraform will _not_ delete the bucket and instead will give you an error like this: + +```yaml +bucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket. +``` + +This is a safety mechanism to ensure that you don't accidentally delete your data. + +If you are absolutely sure you want to delete the contents of an S3 bucket (remember, there's no undo!), all the +services that use S3 buckets expose a `force_destroy` setting that you can set to `true` in your `terragrunt.hcl` +files to tell that service to delete the contents of the bucket when you run `destroy`. 
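+
+For example, to undeploy a module that manages an S3 bucket, you might set the flag and then review exactly what would
+be deleted before destroying anything for real (a sketch; the module path is just an example, and the `terragrunt.hcl`
+edit itself is assumed to be done by hand):
+
+```bash
+# Example module that creates an S3 bucket -- substitute the module you actually
+# intend to undeploy.
+cd stage/us-west-2/stage/networking/alb
+
+# After adding `force_destroy = true` to the inputs in terragrunt.hcl,
+# review the destroy plan before running the real destroy:
+terragrunt plan -destroy
+```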
Here's a partial list of +services that expose this variable: + +:::note + +You may not have all of these in your Reference Architecture + +::: + +- `networking/alb` +- `mgmt/openvpn-server` +- `landingzone/account-baseline-app` +- `services/k8s-service` + +### Understand module dependencies + +Gruntwork Pipelines (the CI/CD pipeline deployed with your Reference Architecture) only **supports destroying modules +that have no downstream dependencies.** + +You can destroy multiple modules only if: + +- All of them have no dependencies. +- None of them are dependent on each other. + +#### Undeploying a module with many dependencies + +As an example, most modules depend on the `vpc` module, for fetching information about the VPC using [Terragrunt `dependency` +blocks](https://terragrunt.gruntwork.io/docs/reference/config-blocks-and-attributes/#dependency) or +[aws_vpc](https://www.terraform.io/docs/providers/aws/d/vpc.html) data source. If you undeploy your `vpc` +_before_ the modules that depend on it, then any command you try to run on those other modules will fail, as their +data sources will no longer be able to fetch the VPC info! + +Therefore, you should only destroy a module if you're sure no other module depends on it! Terraform does not provide +an easy way to track these sorts of dependencies. We have configured the modules here using Terragrunt [`dependency`](https://terragrunt.gruntwork.io/docs/reference/config-blocks-and-attributes/#dependency) blocks, so use those to find dependencies between modules. + +You can check the module dependency tree with `graph-dependencies` and GraphViz: + +```bash +aws-vault exec -- terragrunt graph-dependencies | dot -Tpng > dep-graph.png +open dep-graph.png +``` + +## Undeploying a module with no dependencies using Gruntwork Pipelines + +To destroy a module with no downstream dependencies, such as `route53-private` in the `dev` environment: + +1. Update the `force_destroy` variable in `dev/us-west-2/dev/networking/route53-private/terragrunt.hcl`. + [See force_destroy section](#pre-requisite-force_destroy-on-s3-buckets). + + ```json + force_destroy = true + ``` + +1. Open a pull request for that change and verify the plan in CI. You should see a trivial change to update the + module. +1. Go through the typical git workflow to get the change merged into the main branch. +1. As CI runs on the main branch, watch for the job to be held for approval. Approve the job, and wait for the + `deployment` step to complete so that the module is fully updated with the new variable. +1. Remove the module folder from the repo. For example: + + ```bash + rm -rf dev/us-west-2/dev/networking/route53-private + ``` + +1. Open a pull request for that change and verify the plan in CI. + - Make sure the `plan -destroy` output looks accurate. + - If you are deleting multiple modules (e.g., in `dev`, `stage`, and `prod`) you should see multiple plan + outputs -- one per folder deleted. You'll need to scroll through the plan output to see all of them, as + it runs `plan -destroy` for each folder individually. +1. Go through the typical git workflow to get the change merged into the main branch. +1. As CI runs on the main branch, watch for the job to be held for approval. Approve the job, and wait for the + `deployment` step to complete so that the module is fully _deleted_. +1. [Remove the Terraform state](#removing-the-terraform-state). +1. 
Repeat this process for upstream dependencies you may now want to destroy, always starting from the + modules that have no existing downstream dependencies. + +### Manually undeploying a single module + +You can also bypass the CI/CD pipeline and run destroy locally. For example: + +```bash +cd stage/us-west-2/stage/services/sample-app-frontend +terragrunt destroy +``` + +## Manually undeploying multiple modules or an entire environment + +_If you are absolutely sure you want to run destroy on multiple modules or an entire environment_, you can use the `destroy-all` command. For example, to undeploy the entire staging environment, you'd run: + +:::danger + +This operation cannot be undone! + +::: + +```bash +cd stage +terragrunt destroy-all +``` + +Terragrunt will then run `terragrunt destroy` in each subfolder of the current working directory, processing them in +reverse order based on the dependencies you define in the `terragrunt.hcl` files. + +To avoid interactive prompts from Terragrunt (use very carefully!!), add the `--terragrunt-non-interactive` flag: + +```bash +cd stage +terragrunt destroy-all --terragrunt-non-interactive +``` + +To undeploy everything except a couple specific sub-folders, add the `--terragrunt-exclude-dir` flag. For example, to +run `destroy` in each subfolder of the `stage` environment except MySQL and Redis, you'd run: + +``` +cd stage +terragrunt destroy-all \ + --terragrunt-exclude-dir stage/us-east-1/stage/data-stores/mysql \ + --terragrunt-exclude-dir stage/us-east-1/stage/data-stores/redis +``` + +## Removing the Terraform state + +:::danger + +Deleting state means that you lose the ability to manage your current Terraform resources! Be sure to only delete once you have confirmed all resources are destroyed. + +::: + +Once all the resources for an environment have been destroyed, you can remove the state objects managed by `terragrunt`. +The Reference Architecture manages state for each environment in an S3 bucket in each environment's AWS account. +Additionally, to prevent concurrent access to the state, it also utilizes a DynamoDB table to manage locks. + +To delete the state objects, login to the console and look for the S3 bucket in the environment you wish to undeploy. It +should begin with your company's name and end with `terraform-state`. Also look for a DynamoDB +table named `terraform-locks`. You can safely remove both **once you have confirmed all the resources have been +destroyed successfully**. + +## Useful tips + +- **Destroy resources in groups instead of all at once.** + + - There are [known instabilities](#known-errors) with destroying many modules at once. In addition, Terragrunt is + designed to process the modules in a graph, and will not continue on if there is an error. This means that you + could run into situations where Terragrunt has destroyed a module, but returns an error due to Terraform bugs that + prevent you from cleanly calling destroy twice. + - To address these instabilities, it is recommended to destroy the resources in groups. For example, you can start + by destroying all the services first (e.g., `stage/REGION/stage/services`), then the data stores (e.g., + `stage/REGION/stage/data-stores`), and finally the networking resources (e.g., `stage/REGION/stage/networking`). + - When identifying groups to destroy, use [terragrunt + graph-dependencies](https://terragrunt.gruntwork.io/docs/reference/cli-options/#graph-dependencies) to view the + dependency graph so that you destroy the modules in the right order. 
+ +- **Empty + Delete S3 buckets using the web console (when destroying whole environments).** + - As mentioned in [Pre-requisite: force_destroy on S3 buckets](#pre-requisite-force_destroy-on-s3-buckets), it is + recommended to set `force_destroy = true` prior to running destroy so that Terraform can destroy the S3 buckets. + However, this can be cumbersome if you are destroying whole environments, as it can be difficult to flip the bit in + every single module. + - Alternatively, it is often faster and more convenient to empty and delete the buckets using the AWS web console before executing the `destroy` command with `terragrunt`. + - **IMPORTANT**: You should only do this if you are intending on destroying an entire environment. Otherwise, it is + too easy to accidentally delete the wrong S3 bucket. + +## Known Terraform errors + +If your `destroy` fails with: + +``` +variable "xxx" is nil, but no error was reported +``` + +Terraform has a couple bugs ([18197](https://github.com/hashicorp/terraform/issues/18197) and +[17862](https://github.com/hashicorp/terraform/issues/17862)) that may give this error when you run +`destroy`. + +This usually happens when the module already had `destroy` called on it previously and you re-run `destroy`. In this +case, your best bet is to skip over that module with the `--terragrunt-exclude-dir` (more details: [here](https://terragrunt.gruntwork.io/docs/reference/cli-options/#terragrunt-exclude-dir)). + + + diff --git a/docs/refarch/usage/pipelines-integration/index.md b/docs/refarch/usage/pipelines-integration/index.md new file mode 100644 index 0000000000..23a161afe8 --- /dev/null +++ b/docs/refarch/usage/pipelines-integration/index.md @@ -0,0 +1,106 @@ +# Pipelines integration + +CI/CD is a crucial tool for ensuring the smooth iteration and consistent delivery of Infrastructure as Code (IaC) to production environments. By adopting CI/CD practices, teams can automate the process of integrating and testing changes made to IaC code, allowing for frequent and reliable updates. With CI/CD, each change to the IaC codebase triggers an automated build process, ensuring that any new additions or modifications are properly incorporated. This enables developers to catch errors and conflicts early, facilitating collaboration and reducing the likelihood of issues surfacing during deployment. + +Gruntwork Pipelines is a framework that enables you to use your preferred CI tool to securely run an end-to-end pipeline for infrastructure code (Terraform) and app code (Docker or Packer). Rather than replace your existing CI/CD provider, Gruntwork Pipelines is designed to enhance the security of your existing tool. For more information please see the [full pipelines documentation](/pipelines/overview/). + +In the guide below, we walk through how to configure Gruntwork Pipelines in your CI/CD. + +## Set up machine user credentials + +### Get the machine user credentials from AWS + +1. Log into the Security account in the AWS Console. +1. Go into IAM and find the ci-machine-user under Users. +1. Go to Security Credentials > Access Keys > Create Access Key. +1. Save these values as the `AWS_ACCESS_KEY_ID` and the `AWS_SECRET_ACCESS_KEY` Environment Variables in CircleCI. + + | Env var name | Value | + | --------------------- | --------------------------------------------------------------------------------------------------------------------------------------- | + | AWS_ACCESS_KEY_ID | The Access Key generated for the machine user in the Security account. 
| + | AWS_SECRET_ACCESS_KEY | The Secret Key generated for the machine user in the Security account. | + | GITHUB_OAUTH_TOKEN | Enter the MachineUserGitHubPAT here. You can find this in `reference-architecture-form.yml` or in the shared account's Secrets Manager. | + +## Verify: Testing an infrastructure change end to end + +You can verify the pipeline by making a change to one of the modules. For example, follow the steps below to extend the +number of replicas in the sample app: + +1. Create a new branch in the `infrastructure-live` repo. + `git checkout -B add-replica-to-sample-app`. +1. Open the file `dev/us-west-2/dev/services/sample-app-frontend` in your editor. +1. Change the input variable `desired_number_of_tasks` to `2`. +1. Commit the change. + `git commit -a`. +1. Push the branch to GitHub and open a PR. + `git push add-replica-to-sample-app` +1. Login to CircleCI. Navigate to your infrastructure-live project. +1. Click on the new pipeline job for the branch `add-replica-to-sample-app` to see the build log. +1. Verify the `plan`. Make sure that the change corresponds to adding a new replica to the ECS service. +1. When satisfied with the plan, merge the PR into `main`. +1. Go back to the project and verify that a new build is started on the `main` branch. +1. Wait for the `plan` to finish. The build should hold for approval. +1. Approve the deployment by clicking `Approve`. +1. Wait for the `apply` to finish. +1. Login to the AWS console and verify the ECS service now has 2 replicas. + +## (Optional) Configure Slack notifications + +### Create a Slack App + +1. Visit [your apps](https://api.slack.com/apps) on the Slack API website, and click `Create New App`. +1. Name your application (e.g., `CircleCI` or `CircleCI-Pipeline`). +1. Then select the Slack workspace in which to install this app. + +### Set Permissions + +On the next page select the "Permissions" area, and add these 3 "scopes". + +- `chat:write` +- `chat:write.public` +- `files:write` + +

+![Slack App Scopes](/img/refarch/slack_app_scopes.png)

+ +### Install and Receive Token + +Install the application into the Slack workspace and save your OAuth Access Token. This will be used in +a CircleCI environment variable. + +

+![Slack OAuth Tokens](/img/refarch/slack_oauth_tokens.png)

+ +

+![Slack OAuth Access Token](/img/refarch/slack_auth_token_key.png)

+ +### Choose a Slack channel to notify + +1. Choose or create a Slack channel in your workspace to notify with pipeline updates. +1. Right-click the channel name. You'll see a context menu. +1. Select `Copy link`. +1. Extract the Channel ID from the URL copied. E.g., `https://.slack.com/archives/` + +### Create env vars on CircleCI + +1. Login to CircleCI. Navigate to Project Settings -> Environment Variables. +1. Configure the following environment variables: + + | Env var name | Value | + | --------------------- | ----------------------------------------------------------------- | + | SLACK_ACCESS_TOKEN | The OAuth token acquired through the previous step. | + | SLACK_DEFAULT_CHANNEL | If no channel ID is specified, the app will attempt to post here. | + + + diff --git a/docs/refarch/whats-this/index.md b/docs/refarch/whats-this/index.md new file mode 100644 index 0000000000..ec6e1dcb06 --- /dev/null +++ b/docs/refarch/whats-this/index.md @@ -0,0 +1,16 @@ +# What is all this? + + +Haxx0r ipsum endif race condition d00dz fork cookie recursively big-endian tera. Wabbit break concurrently printf script kiddies eof cd malloc warez chown kilo /dev/null todo ascii foad bang exception highjack epoch headers. Flush data piggyback class hexadecimal true syn ddos daemon snarf over clock. + +Cookie packet sniffer ifdef endif all your base are belong to us stdio.h bin ssh I'm sorry Dave, I'm afraid I can't do that terminal hack the mainframe. Concurrently Leslie Lamport brute force else socket malloc over clock foo grep double var mainframe. Ip cache access buffer pwned bytes system packet todo emacs gurfle dereference foad strlen deadlock alloc cat false for /dev/null. + +Wannabee dereference private wombat case root fatal char giga Leslie Lamport perl sudo sql ascii cat grep James T. Kirk bin stack trace afk. Malloc foad class daemon I'm compiling salt brute force highjack syn regex socket exception warez hexadecimal linux bit bytes echo hack the mainframe. Then wabbit injection Linus Torvalds pragma tunnel in data win protocol leet fopen printf void default gc Starcraft piggyback todo gnu concurrently. + + + diff --git a/docs/refarch/whats-this/services.md b/docs/refarch/whats-this/services.md new file mode 100644 index 0000000000..ef83022ef1 --- /dev/null +++ b/docs/refarch/whats-this/services.md @@ -0,0 +1,8 @@ + + + diff --git a/docs/refarch/whats-this/understanding-the-deployment-process.md b/docs/refarch/whats-this/understanding-the-deployment-process.md new file mode 100644 index 0000000000..e2c861e7fe --- /dev/null +++ b/docs/refarch/whats-this/understanding-the-deployment-process.md @@ -0,0 +1,42 @@ +# Understanding the Deployment Process + +The Gruntwork Reference Architecture has three deployment phases. + +### Configuration + +Configuration of the Gruntwork Reference Architecture is primarily [your responsibility](../../intro/overview/what-you-provide). 
+
+- We deliver a templated `infrastructure-live-${YOUR_COMPANY_NAME}` repository to you in our GitHub organization
+- You access the repo in GitHub via invitation in the [Gruntwork Dev Portal](https://app.gruntwork.io)
+- You use the Gruntwork CLI wizard to create accounts and set config options
+- Pre-flight checks run via GitHub Actions to determine when the repo is ready for deployment
+- The AWS accounts you are deploying the Reference Architecture to should be empty at the conclusion of this phase
+- You merge the PR to the `main` branch to initiate the deployment phase
+
+### Deployment
+
+The deployment phase is primarily [our responsibility](../../intro/overview/what-we-provide.md#gruntwork-reference-architecture).
+
+- We monitor the deployment and fix any errors that occur as needed
+- In some cases, we may need to communicate with you to resolve issues (e.g. AWS quota problems)
+- Deployment is completed and the `infrastructure-live-${YOUR_COMPANY_NAME}` repo is populated
+- During the deployment phase, you should not attempt to modify resources in, or respond to any automated notifications from, your AWS accounts
+- Once the deployment is complete, you will receive an email
+
+### Adoption
+
+The adoption phase is primarily [your responsibility](../../intro/overview/what-you-provide).
+
+- You complete “last mile” configuration following our handoff docs, including final Pipelines integrations with your CI/CD of choice
+- You migrate the `infrastructure-live-${YOUR_COMPANY_NAME}` repo to your own Version Control System or GitHub organization
+- You revoke Gruntwork access to your AWS accounts
+- At this point, your AWS accounts are fully in your control
+- From this point forward, we expect you to self-serve, with assistance from Gruntwork Support, as needed
+
+
+
diff --git a/docs/refarch/whats-this/what-is-a-reference-architecture.md b/docs/refarch/whats-this/what-is-a-reference-architecture.md
new file mode 100644
index 0000000000..dae437c3fa
--- /dev/null
+++ b/docs/refarch/whats-this/what-is-a-reference-architecture.md
@@ -0,0 +1,31 @@
+# What is a Reference Architecture?
+
+The Gruntwork Reference Architecture is an implementation of best practices for infrastructure in the cloud. It is an end-to-end tech stack built on top of our Infrastructure as Code Library, deployed into your AWS accounts.
+
+The Gruntwork Reference Architecture is opinionated, and delivered as code. It is written in [Terragrunt](https://terragrunt.gruntwork.io/), our thin wrapper that provides extra tools for managing remote state and keeping your configurations [DRY](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself). Our `_envcommon` pattern reduces the amount of code you need to copy from one place to another when creating additional identical infrastructure.
+
+## Components
+
+The Gruntwork Reference Architecture has three main components — Gruntwork Landing Zone, Gruntwork Pipelines, and a Sample Application.
+
+### Landing Zone
+
+Gruntwork Landing Zone is a Terraform-native approach to [AWS Landing Zone / Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html). It uses Terragrunt to quickly create new AWS accounts, configure them with a standard security baseline, and define a best-practices multi-account setup.
+
+
+### Pipelines
+
+[Gruntwork Pipelines](/pipelines/overview/) makes the process of deploying infrastructure similar to how developers often deploy code.
It is a code framework and approach that enables the customer to use your preferred CI tool to set up an end-to-end pipeline for infrastructure code. + + +### Sample Application + +Our [sample application](https://github.com/gruntwork-io/aws-sample-app) is built with JavaScript, Node.js, and Express.js, following [Twelve-Factor App](https://12factor.net/) practices. It consists of a load balancer, a front end, a backend, a cache, and a database. + + + diff --git a/docs/reference/services/intro/deploy-new-infrastructure.md b/docs/reference/services/intro/deploy-new-infrastructure.md index a146d3cb5a..58ea673610 100644 --- a/docs/reference/services/intro/deploy-new-infrastructure.md +++ b/docs/reference/services/intro/deploy-new-infrastructure.md @@ -118,7 +118,7 @@ deploy Terraform code from the Service Catalog. See 1. **GitHub Authentication**: All of Gruntwork's code lives in GitHub, and as most of the repos are private, you must authenticate to GitHub to be able to access the code. For Terraform, we recommend using Git / SSH URLs and using - SSH keys for authentication. See [Link Your GitHub ID](/intro/dev-portal/link-github-id) + SSH keys for authentication. See [Link Your GitHub ID](/developer-portal/link-github-id) for instructions on linking your GitHub ID and gaining access. 1. **Deploy**. You can now deploy the service as follows: @@ -258,7 +258,7 @@ Now you can create child `terragrunt.hcl` files to deploy services as follows: 1. **GitHub Authentication**: All of Gruntwork's code lives in GitHub, and as most of the repos are private, you must authenticate to GitHub to be able to access the code. For Terraform, we recommend using Git / SSH URLs and using SSH keys for authentication. See [How to get access to the Gruntwork Infrastructure as Code - Library](/intro/dev-portal/create-account) + Library](/developer-portal/create-account) for instructions on setting up your SSH key. 1. **Deploy**. You can now deploy the service as follows: @@ -321,7 +321,7 @@ Below are instructions on how to build an AMI using these Packer templates. We'l ``` See [How to get access to the Gruntwork Infrastructure as Code - Library](/intro/dev-portal/create-account) + Library](/developer-portal/create-account) for instructions on setting up GitHub personal access token. 1. **Set variables**. 
Each Packer template defines variables you can set in a `variables` block at the top, such as @@ -398,6 +398,6 @@ most commonly used filters will be: diff --git a/docusaurus.config.js b/docusaurus.config.js index 510fd531f0..8b3257cc00 100644 --- a/docusaurus.config.js +++ b/docusaurus.config.js @@ -93,29 +93,39 @@ const config = { label: "Intro", docId: "intro/overview/intro-to-gruntwork", }, - { - type: "doc", - position: "left", - label: "Guides", - docId: "guides/index", - activeBasePath: "docs/guides", - }, { type: "dropdown", - label: "Library Reference", + position: "left", + label: "Docs", + id: "docs", items: [ { type: "doc", - label: "Modules", - docId: "reference/modules/intro", + label: "Infrastructure as Code Library", + docId: "iac/overview/index", + }, + { + type: "doc", + label: "Gruntwork Pipelines", + docId: "pipelines/overview/index", + }, + { + type: "doc", + label: "Reference Architecture", + docId: "refarch/whats-this/what-is-a-reference-architecture", }, { type: "doc", - label: "Services", - docId: "reference/services/intro/overview", + label: "Developer Portal", + docId: "developer-portal/create-account", }, ], }, + { + type: "doc", + label: "Library Reference", + docId: "iac/reference/index", + }, { to: "/tools", label: "Tools", position: "left" }, { to: "/courses", label: "Courses", position: "left" }, { @@ -223,6 +233,14 @@ const config = { label: "Terratest", href: "https://terratest.gruntwork.io", }, + { + label: "Gruntwork Releases", + to: "/guides/stay-up-to-date", + }, + { + label: "Style Guides", + to: "/guides/style", + }, { label: "Support", href: "/support", @@ -257,7 +275,15 @@ const config = { prism: { theme: lightCodeTheme, darkTheme: darkCodeTheme, - additionalLanguages: ["hcl"], + additionalLanguages: [ + "hcl", + "python", + "yaml", + "json", + "bash", + "go", + "docker", + ], }, algolia: algoliaConfig ? 
{ diff --git a/package.json b/package.json index 9cbb966a61..a19fd0e968 100644 --- a/package.json +++ b/package.json @@ -7,7 +7,7 @@ }, "scripts": { "docusaurus": "docusaurus", - "start": "docusaurus start & onchange -i '_docs-sources/**/*(*.md|*.mdx|*.json)' -- yarn regenerate:local", + "start": "docusaurus start --port 3000 & onchange -i '_docs-sources/**/*(*.md|*.mdx|*.json)' -- yarn regenerate:local", "build": "docusaurus build", "swizzle": "docusaurus swizzle", "deploy": "docusaurus deploy", diff --git a/sidebars.js b/sidebars.js index 7028208b92..0522e727f2 100644 --- a/sidebars.js +++ b/sidebars.js @@ -10,7 +10,6 @@ */ const introSidebar = require("./sidebars/intro-guide.js") -const guidesSidebar = require("./sidebars/guides-index.js") const productionFrameworkSidebar = require("./sidebars/production-framework-guide.js") const refarchUsageSidebar = require("./sidebars/refarch-usage-guide.js") const landingZoneSidebar = require("./sidebars/landing-zone-guide.js") @@ -21,13 +20,19 @@ const complianceSidebar = require("./sidebars/compliance-guide.js") const updateGuideSidebars = require("./sidebars/update-guides.js") const apiSidebars = require("./sidebars/api-reference.js") const faqSidebars = require("./sidebars/faq.js") +const iacSidebars = require("./sidebars/iac.js") +const libraryRefSiderbars = require("./sidebars/library-reference.js") +const developerPortalSidebars = require("./sidebars/developer-portal.js") +const patcherSiderbars = require("./sidebars/patcher.js") +const pipelinesSiderbars = require("./sidebars/pipelines.js") +const landingZoneSidebars = require("./sidebars/landing-zone.js") +const refarchSidebar = require("./sidebars/refarch.js") // @ts-check /** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */ const sidebars = { introSidebar, - guidesSidebar, productionFrameworkSidebar, refarchUsageSidebar, landingZoneSidebar, @@ -38,6 +43,13 @@ const sidebars = { ...updateGuideSidebars, ...apiSidebars, faqSidebars, + iacSidebars, + developerPortalSidebars, + patcherSiderbars, + pipelinesSiderbars, + landingZoneSidebars, + refarchSidebar, + libraryRefSiderbars, } module.exports = sidebars diff --git a/sidebars/developer-portal.js b/sidebars/developer-portal.js new file mode 100644 index 0000000000..8560fef675 --- /dev/null +++ b/sidebars/developer-portal.js @@ -0,0 +1,24 @@ +const kbLink = + "https://github.com/orgs/gruntwork-io/discussions?discussions_q=" + + // filter by discussions with the label "s:dev-portal" & sort by top voted discussions first + encodeURIComponent("label:s:dev-portal sort:top") + +const sidebar = [ + { + label: "Developer Portal", + type: "category", + collapsible: false, + items: [ + "developer-portal/create-account", + "developer-portal/invite-team", + "developer-portal/link-github-id", + { + type: "link", + label: "Knowledge Base", + href: kbLink, + }, + ], + }, +] + +module.exports = sidebar diff --git a/sidebars/guides-index.js b/sidebars/guides-index.js deleted file mode 100644 index ec65686747..0000000000 --- a/sidebars/guides-index.js +++ /dev/null @@ -1,40 +0,0 @@ -const sidebar = [ - { - label: "Foundations", - type: "doc", - id: "guides/index", - }, - { - label: "Reference Architecture", - type: "doc", - id: "guides/reference-architecture/index", - }, - { - label: "Build Your Own Architecture", - type: "doc", - id: "guides/build-it-yourself/index", - }, - { - label: "Update Guides", - type: "doc", - id: "guides/stay-up-to-date/index", - }, - { - label: "Style Guides", - type: "doc", - id: "guides/style/index", - }, - { 
- label: "Working with our code", - type: "category", - items: [ - "guides/working-with-code/using-modules", - "guides/working-with-code/tfc-integration", - "guides/working-with-code/versioning", - "guides/working-with-code/contributing", - "guides/working-with-code/forking", - ], - }, -] - -module.exports = sidebar diff --git a/sidebars/iac.js b/sidebars/iac.js new file mode 100644 index 0000000000..5b8d90d4b9 --- /dev/null +++ b/sidebars/iac.js @@ -0,0 +1,59 @@ +const sidebar = [ + { + label: "Infrastructure as Code", + type: "category", + collapsible: false, + items: [ + { + label: "Overview", + type: "category", + collapsible: false, + items: [ + "iac/overview/index", + "iac/overview/modules", + "iac/overview/services", + ], + }, + { + label: "Getting Started", + type: "category", + collapsible: false, + items: [ + "iac/getting-started/setting-up", + "iac/getting-started/accessing-the-code", + "iac/getting-started/deploying-a-module", + ], + }, + { + label: "Working with the Library", + type: "category", + collapsible: false, + items: [ + // "iac/usage/using-a-module", + // "iac/usage/using-a-service", + // "iac/usage/customizing-modules", + // "iac/usage/composing-your-own-service", + "guides/working-with-code/using-modules", + "guides/working-with-code/tfc-integration", + ], + }, + { + label: "Staying up to date", + type: "category", + collapsible: false, + items: [ + "iac/stay-up-to-date/versioning", + "iac/stay-up-to-date/updating", + ], + }, + { + label: "Support", + type: "category", + collapsible: false, + items: ["iac/support/contributing"], + }, + ], + }, +] + +module.exports = sidebar diff --git a/sidebars/intro-guide.js b/sidebars/intro-guide.js index a368b66eec..db09904f8c 100644 --- a/sidebars/intro-guide.js +++ b/sidebars/intro-guide.js @@ -1,10 +1,13 @@ const sidebar = [ { - Overview: [ + label: "Introduction", + type: "category", + collapsible: false, + items: [ "intro/overview/intro-to-gruntwork", - "intro/overview/how-it-works", - "intro/overview/reference-architecture-prerequisites-guide", - "intro/overview/shared-responsibility-model", + "intro/overview/what-we-provide", + "intro/overview/what-you-provide", + "intro/overview/prerequisites", // Temporarily hiding the unfinished sections from the sidebar We'll put // them back shortly and don't want to delete the pages as we know we're // going to have these sections within a few days. 
@@ -14,44 +17,6 @@ const sidebar = [ "intro/overview/getting-started", ], }, - { - "Core Concepts": [ - "intro/core-concepts/production-framework", - "intro/core-concepts/infrastructure-as-code", - "intro/core-concepts/immutable-infrastructure", - ], - }, - { - "Accessing the Dev Portal": [ - "intro/dev-portal/create-account", - "intro/dev-portal/invite-team", - "intro/dev-portal/link-github-id", - ], - }, - { - "Setting Up Your Environment": [ - "intro/environment-setup/recommended_tools", - ], - }, - { - "Tool Fundamentals": [ - "intro/tool-fundamentals/docker", - "intro/tool-fundamentals/packer", - "intro/tool-fundamentals/terraform", - "intro/tool-fundamentals/terragrunt", - ], - }, - { - "Deploy Your First Module": [ - "intro/first-deployment/using-terraform-modules", - "intro/first-deployment/testing", - "intro/first-deployment/deploy", - ], - }, - { - type: "doc", - id: "intro/next-steps", - }, ] module.exports = sidebar diff --git a/sidebars/landing-zone.js b/sidebars/landing-zone.js new file mode 100644 index 0000000000..d76b1a6614 --- /dev/null +++ b/sidebars/landing-zone.js @@ -0,0 +1,16 @@ +const sidebar = [ + { + label: "Landing Zones", + type: "category", + collapsible: false, + items: [ + { + label: "Getting Started", + type: "doc", + id: "landing-zone/index", + } + ] + } +] + +module.exports = sidebar diff --git a/sidebars/library-reference.js b/sidebars/library-reference.js new file mode 100644 index 0000000000..06e2ef03e2 --- /dev/null +++ b/sidebars/library-reference.js @@ -0,0 +1,79 @@ +const sidebar = [ + { + label: "Infrastructure as Code", + type: "category", + collapsible: false, + items: [ + { + label: "Library Reference", + type: "doc", + id: "iac/reference/index", + }, + { + type: "category", + collapsible: true, + collapsed: false, + label: "Service Catalog", + items: [ + { + "App Orchestration": [ + { + type: "autogenerated", + dirName: "reference/services/app-orchestration", + }, + ], + }, + { + "CI/CD Pipeline": [ + { + type: "autogenerated", + dirName: "reference/services/ci-cd-pipeline", + }, + ], + }, + { + "Data Storage": [ + { + type: "autogenerated", + dirName: "reference/services/data-storage", + }, + ], + }, + { + "Landing Zone": [ + { + type: "autogenerated", + dirName: "reference/services/landing-zone", + }, + ], + }, + { + Networking: [ + { + type: "autogenerated", + dirName: "reference/services/networking", + }, + ], + }, + { + Security: [ + { + type: "autogenerated", + dirName: "reference/services/security", + }, + ], + }, + ], + }, + { + type: "category", + collapsible: true, + collapsed: false, + label: "Module Catalog", + items: [{ type: "autogenerated", dirName: "reference/modules" }], + }, + ] + }, +] + +module.exports = sidebar diff --git a/sidebars/patcher.js b/sidebars/patcher.js new file mode 100644 index 0000000000..8b11ad203c --- /dev/null +++ b/sidebars/patcher.js @@ -0,0 +1,16 @@ +const sidebar = [ + { + label: "Patcher", + type: "category", + collapsible: false, + items: [ + { + label: "Getting Started", + type: "doc", + id: "patcher/index", + } + ] + } +] + +module.exports = sidebar diff --git a/sidebars/pipelines.js b/sidebars/pipelines.js new file mode 100644 index 0000000000..a0eca6f02b --- /dev/null +++ b/sidebars/pipelines.js @@ -0,0 +1,61 @@ +const kbLink = + "https://github.com/orgs/gruntwork-io/discussions?discussions_q=" + + // filter by discussions with the label s:CI/Pipelines & sort by top voted discussions first + encodeURIComponent("label:s:CI/Pipelines sort:top") + +const sidebar = [ + { + label: "Gruntwork 
Pipelines", + type: "category", + collapsible: false, + items: [ + { + label: "Overview", + type: "category", + collapsible: false, + items: [ + { + label: "What is Gruntwork Pipelines?", + type: "doc", + id: "pipelines/overview/index", + }, + { + label: "How it works", + type: "doc", + id: "pipelines/how-it-works/index", + }, + ], + }, + { + label: "Getting Started", + type: "category", + collapsible: false, + items: [ + { + label: "Single Account Tutorial", + type: "doc", + id: "pipelines/tutorial/index", + }, + // { + // label: "Deploying Multi-Account Pipelines", + // type: "doc", + // id: "pipelines/multi-account/index", + // }, + ], + }, + { + label: "Maintain Pipelines", + type: "category", + collapsible: false, + items: ["pipelines/maintain/updating", "pipelines/maintain/extending"], + }, + { + type: "link", + label: "Knowledge Base", + href: kbLink, + }, + ], + }, +] + +module.exports = sidebar diff --git a/sidebars/refarch.js b/sidebars/refarch.js new file mode 100644 index 0000000000..cb9af81a2d --- /dev/null +++ b/sidebars/refarch.js @@ -0,0 +1,67 @@ +const kbLink = + "https://github.com/orgs/gruntwork-io/discussions?discussions_q=" + + // filter by discussions with the label "s:Reference Architecture" & sort by top voted discussions first + encodeURIComponent('label:"s:Reference Architecture" sort:top') + +const sidebar = [ + { + label: "Reference Architecture", + type: "category", + collapsible: false, + items: [ + { + label: "Overview", + type: "category", + collapsible: false, + items: [ + "refarch/whats-this/what-is-a-reference-architecture", + "refarch/whats-this/understanding-the-deployment-process", + ], + }, + { + label: "Configuration", + type: "category", + collapsible: false, + items: [ + "refarch/configuration/index", + "refarch/configuration/install-required-tools", + "refarch/configuration/run-the-wizard", + "refarch/configuration/preflight-checks", + ], + }, + { + label: "Access", + type: "category", + collapsible: false, + items: [ + "refarch/access/setup-auth/index", + "refarch/access/how-to-auth-vpn/index", + "refarch/access/how-to-auth-aws-web-console/index", + "refarch/access/how-to-auth-CLI/index", + "refarch/access/how-to-auth-ec2/index", + ], + }, + { + label: "Usage", + type: "category", + collapsible: false, + items: [ + "refarch/usage/maintain-your-refarch/deploying-your-apps", + "refarch/usage/maintain-your-refarch/monitoring", + "refarch/usage/maintain-your-refarch/adding-new-account", + "refarch/usage/maintain-your-refarch/staying-up-to-date", + "refarch/usage/maintain-your-refarch/extending", + "refarch/usage/pipelines-integration/index", + "refarch/usage/maintain-your-refarch/undeploying", + ], + }, + { + type: "link", + label: "Knowledge Base", + href: kbLink, + }, + ], + }, +] + +module.exports = sidebar diff --git a/sidebars/update-guides.js b/sidebars/update-guides.js index e0143a42fa..363502faa4 100644 --- a/sidebars/update-guides.js +++ b/sidebars/update-guides.js @@ -246,18 +246,6 @@ const sidebars = { ], }, ], - patcher: [ - backLink, - { - label: "Keep up-to-date with Patcher", - type: "category", - link: { - type: "doc", - id: "guides/stay-up-to-date/patcher/index", - }, - items: [] - } - ] } module.exports = sidebars diff --git a/src/pages/index.tsx b/src/pages/index.tsx index ef360e26d8..6d63f2e51f 100644 --- a/src/pages/index.tsx +++ b/src/pages/index.tsx @@ -46,59 +46,41 @@ export default function Home(): JSX.Element {
Bought a Reference Architecture? Get your new infrastructure up - and running quickly with our comprehensive guide. + and running quickly with our getting started guide. - Follow our tutorials and learn how to deploy Gruntwork services - to construct your own bespoke architecture. + Create your account in the Gruntwork Developer Portal and add your teammates.
-

Discover Your Use Case

+

Products

- Streamline how you create, configure, and secure your AWS - accounts using Gruntwork Landing Zone. + A collection of reusable code that enables you to deploy and manage infrastructure quickly and reliably. - Use your preferred CI tool to set up an end-to-end pipeline for - your infrastructure code. + An end-to-end tech stack built using best practices on top of our Infrastructure as Code Library, deployed into your AWS accounts. - Set up your network according to industry best practices using - our VPC service. - - - Deploy Kubernetes using EKS to host all of your apps and - services. - - - Implement the CIS AWS Foundations Benchmark using our curated - collection of modules and services. + A framework for running secure deployments for infrastructure code and application code.
diff --git a/static/img/iac/stay-up-to-date/versioning/module_release_tag_versions.png b/static/img/iac/stay-up-to-date/versioning/module_release_tag_versions.png new file mode 100644 index 0000000000..009d5071e0 Binary files /dev/null and b/static/img/iac/stay-up-to-date/versioning/module_release_tag_versions.png differ diff --git a/static/img/pipelines-docker-packer-builder.png b/static/img/pipelines-docker-packer-builder.png new file mode 100644 index 0000000000..6e50c1ed8e Binary files /dev/null and b/static/img/pipelines-docker-packer-builder.png differ diff --git a/static/img/preflight-error-on-pr.png b/static/img/preflight-error-on-pr.png new file mode 100644 index 0000000000..b73980f058 Binary files /dev/null and b/static/img/preflight-error-on-pr.png differ diff --git a/static/img/preflight1.png b/static/img/preflight1.png new file mode 100644 index 0000000000..2433f93580 Binary files /dev/null and b/static/img/preflight1.png differ diff --git a/static/img/refarch/slack_app_scopes.png b/static/img/refarch/slack_app_scopes.png new file mode 100644 index 0000000000..a399dcb6ca Binary files /dev/null and b/static/img/refarch/slack_app_scopes.png differ diff --git a/static/img/refarch/slack_auth_token_key.png b/static/img/refarch/slack_auth_token_key.png new file mode 100644 index 0000000000..e8917f1d58 Binary files /dev/null and b/static/img/refarch/slack_auth_token_key.png differ diff --git a/static/img/refarch/slack_oauth_tokens.png b/static/img/refarch/slack_oauth_tokens.png new file mode 100644 index 0000000000..cd0f300aee Binary files /dev/null and b/static/img/refarch/slack_oauth_tokens.png differ