you to attach IAM policies to it, (b) specify which other IAM entities to trust, and then (c) those other IAM
entities can _assume_ the IAM role to temporarily get access to the permissions in those IAM policies. The two most
common use cases for IAM roles are:

<div className="dlist">

#### Service roles
S3 bucket in account `B` and allow that role to be assumed by an IAM user in account `A`, then that IAM user will be
able to access the contents of the S3 bucket by assuming the IAM role in account `B`. This ability to assume IAM
roles across different AWS accounts is the critical glue that truly makes a multi-account AWS structure possible.

</div>

Here are some more details on how IAM roles work:

<div className="dlist">

#### IAM policies
You must define a _trust policy_ for each IAM role, which is a JSON document (very similar to an IAM policy) that
specifies who can assume this IAM role. For example, here is a trust policy that allows this IAM role to be assumed
by an IAM user named `Bob` in AWS account `111122223333`:


</div>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/Bob" }
    }
  ]
}
```
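If you manage IAM programmatically, a trust policy like the one above is passed as the `AssumeRolePolicyDocument`
argument when the role is created. Here is a minimal sketch using boto3 (the role name is illustrative, and the API
call is commented out so the snippet runs without AWS credentials):

```python
import json

# The trust policy from above, expressed as a Python dict.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/Bob"},
        }
    ],
}

# The IAM API expects the document as a JSON string.
trust_policy_json = json.dumps(trust_policy)

# import boto3
# iam = boto3.client("iam")
# iam.create_role(
#     RoleName="example-role",  # illustrative name
#     AssumeRolePolicyDocument=trust_policy_json,
# )
```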

Note that a trust policy alone does NOT automatically give Bob the ability to assume this IAM role. Cross-account
access always requires permissions in _both_ accounts. So, if Bob is in AWS account `111122223333` and you want him to
have access to an IAM role called `foo` in account `444455556666`, then you need to configure permissions in both
accounts: first, in account `444455556666`, the `foo` IAM role must have a trust policy that gives `sts:AssumeRole`
permissions to account `111122223333`, as shown above; second, in account `111122223333`, you also need to attach an
IAM policy to Bob’s IAM user that allows him to assume the `foo` IAM role, which might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::444455556666:role/foo"
    }
  ]
}
```
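The two policy documents in this cross-account setup mirror each other, so it can help to generate them as a pair. A
sketch in Python, using the account IDs and role name from the example (the trust policy here trusts the whole
account via its `:root` ARN, a common variation on trusting a single user):

```python
def cross_account_policies(trusted_account_id, role_account_id, role_name):
    """Build the two policy documents needed for cross-account role access:
    the role's trust policy (lives in the role's account) and the user
    policy granting sts:AssumeRole (lives in the trusted account)."""
    role_arn = f"arn:aws:iam::{role_account_id}:role/{role_name}"
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Principal": {"AWS": f"arn:aws:iam::{trusted_account_id}:root"},
        }],
    }
    user_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": role_arn,
        }],
    }
    return trust_policy, user_policy

# Account IDs and role name from the example above.
trust, user = cross_account_policies("111122223333", "444455556666", "foo")
```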



<div className="dlist">

#### Assuming an IAM role
will be valid for 1-12 hours, depending on IAM role settings, after which you must call `AssumeRole` again to fetch
new keys. Note that to make the `AssumeRole` API call, you must first authenticate to AWS using some other
mechanism. For example, for an IAM user to assume an IAM role, the workflow looks like this:

</div>

![The process for assuming an IAM role](/img/guides/build-it-yourself/landing-zone/assume-iam-role.png)
_The process for assuming an IAM role_

The basic steps are:

5. Now all of your subsequent API calls will be on behalf of the assumed IAM role, with access to whatever permissions
are attached to that role
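A sketch of this workflow with boto3 (the STS call is commented out so the snippet runs without AWS credentials; the
role ARN and session name are illustrative):

```python
def credentials_to_env(assume_role_response):
    """Map an sts.assume_role response to the environment variables that
    the AWS CLI and SDKs read."""
    creds = assume_role_response["Credentials"]
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }

# import boto3
# sts = boto3.client("sts")  # authenticated as the IAM user
# response = sts.assume_role(
#     RoleArn="arn:aws:iam::444455556666:role/foo",
#     RoleSessionName="bob-session",
# )
# env = credentials_to_env(response)

# The response has this shape (values below are placeholders):
sample_response = {
    "Credentials": {
        "AccessKeyId": "ASIA...",
        "SecretAccessKey": "secret",
        "SessionToken": "token",
    }
}
env = credentials_to_env(sample_response)
```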



<div className="dlist">

#### IAM roles and AWS services
copy credentials (access keys) onto that instance. The same strategy works with most other AWS services too: you can
use IAM roles as a secure way to give your Lambda functions, ECS services, Step Functions, and many other AWS
services permissions to access specific resources in your AWS account.

</div>
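The only structural difference from the user-to-user case is the trust policy's principal: for an AWS service, it
names a service principal rather than an AWS account or user. A sketch of that shape:

```python
def service_trust_policy(service):
    """Trust policy allowing an AWS service (e.g. "ec2.amazonaws.com")
    to assume the role on your behalf."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Principal": {"Service": service},
        }],
    }

# For example, the trust policy for an EC2 instance profile:
policy = service_trust_policy("ec2.amazonaws.com")
```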




With all the core concepts out of the way, let’s now discuss how to configure a production-grade AWS account structure that looks something like this:

![A production-grade AWS account structure](/img/guides/build-it-yourself/landing-zone/aws-account-structure.png)
_A production-grade AWS account structure_

This diagram has many accounts as part of a _multi-account security strategy_. Don't worry if it looks complicated:
we'll break it down piece by piece in the next few sections.
TLDR: If you follow this guide, you’ll be able to set up a pipeline that works like this:

![For an extended version with audio commentary, see <https://youtu.be/iYXghJK7YdU>](/img/guides/build-it-yourself/pipelines/walkthrough.gif)
_For an extended version with audio commentary, see <https://youtu.be/iYXghJK7YdU>_

## Sections

minute moments before release. As a result, the integration process is very expensive: resolving merge
conflicts, tracking down subtle bugs, and trying to stabilize release branches.

![Many teams employ a practice of working on their features over long periods of time on isolated branches. These long lived feature branches have a higher chance of merge conflicts when they’re finally ready to be integrated.](/img/guides/build-it-yourself/pipelines/feature-branch-merge-conflict.png)
_Many teams employ a practice of working on their features over long periods of time on isolated branches. These long lived feature branches have a higher chance of merge conflicts when they’re finally ready to be integrated._

In contrast, the Continuous Integration and Continuous Delivery model of development promotes more cross team
communication and integration work as development progresses. Going back to the ISS thought experiment, a CI/CD style of
# Trunk-based development model

![Trunk branch with a continuous stream of commits.](/img/guides/build-it-yourself/pipelines/trunk.png)
_Trunk branch with a continuous stream of commits._

The most common way to implement CI/CD is to use a _trunk-based development model_. In trunk-based development, all the
work is done on the same branch, called `trunk` or `master` depending on the Version Control System (VCS). You would
With all the core concepts out of the way, let’s now discuss how to configure a production-grade CI/CD pipeline for
your infrastructure code, using a platform that looks something like this:

![Architecture of platform for running Terraform/Terragrunt CI/CD workflows.](/img/guides/build-it-yourself/pipelines/tftg-pipeline-architecture.png)
_Architecture of platform for running Terraform/Terragrunt CI/CD workflows._
To put it all together, the following sequence diagram shows how all the various components work together:

![Sequence diagram of running Terraform/Terragrunt CI/CD workflows.](/img/guides/build-it-yourself/pipelines/tftg-pipeline-sequence-diagram.png)
_Sequence diagram of running Terraform/Terragrunt CI/CD workflows._
The first step is to deploy a VPC. Follow the instructions in
`module-vpc` to create a VPC setup that looks like this:

![A production-grade VPC setup deployed using module-vpc from the Gruntwork Infrastructure as Code Library](/img/guides/build-it-yourself/pipelines/vpc-diagram.png)
_A production-grade VPC setup deployed using module-vpc from the Gruntwork Infrastructure as Code Library_

We will use the Mgmt VPC to deploy our infrastructure deployment CD platform, since the infrastructure deployment
platform is a management infrastructure that is designed to deploy to multiple environments.
over the public Internet (unless you blocked it using security groups and OS-level firewalls).
</div>

![Before VPCs, all your AWS resources were in one global IP address space anyone could access (unless you blocked them via security groups or firewalls)](/img/guides/build-it-yourself/vpc/no-vpc-diagram.png)
_Before VPCs, all your AWS resources were in one global IP address space anyone could access (unless you blocked them via security groups or firewalls)_

From a security standpoint, this represented a step backwards compared to traditional data centers where you could
configure most of your servers so they were physically unreachable from the public Internet.
</div>

![With VPCs, you could separate your AWS resources into completely isolated networks](/img/guides/build-it-yourself/vpc/vpc-no-subnets-diagram.png)
_With VPCs, you could separate your AWS resources into completely isolated networks_

You’ll see later in this guide how you can use VPCs, route tables, subnets, security groups, and NACLs to get
fine-grained control over what network traffic can or can’t reach your AWS resources.
# Regions and availability zones

![AWS regions and availability zones](/img/guides/build-it-yourself/vpc/aws-regions.png)
_AWS regions and availability zones_

AWS has data centers all over the world, grouped into regions and availability zones. An _AWS region_ is a separate
geographic area, such as `us-east-2` (Ohio), `eu-west-1` (Ireland), and `ap-southeast-2` (Sydney). Within each region,
# Subnets

![VPCs partitioned into multiple subnets: public, private (services), private (persistence)](/img/guides/build-it-yourself/vpc/vpc-subnets-diagram.png)
_VPCs partitioned into multiple subnets: public, private (services), private (persistence)_

Each VPC is partitioned into one or more _[subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)_
(sub-networks). Each subnet controls a portion of the VPC’s CIDR range. For example, a VPC with the CIDR block
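How subnets carve up a VPC's CIDR range can be sketched with Python's `ipaddress` module (the CIDR blocks below are
illustrative, not a recommendation):

```python
import ipaddress

# A hypothetical VPC CIDR block.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")  # 65,536 addresses

# Carve the VPC range into /24 subnets (256 addresses each) and take the
# first few, e.g. one per subnet tier per availability zone.
subnets = list(vpc_cidr.subnets(new_prefix=24))[:4]

for subnet in subnets:
    print(subnet)
# Each subnet controls a non-overlapping slice of the VPC's range:
# 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24
```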
# VPC Peering

![Multiple VPCs connected via VPC peering](/img/guides/build-it-yourself/vpc/vpc-diagram.png)
_Multiple VPCs connected via VPC peering_

Normally, you use VPCs to create isolated networks, so the resources in one VPC have no way to access the resources in
another VPC. _[VPC Peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html)_ is a networking
With all the core concepts out of the way, let’s now discuss how to configure a production-grade VPC setup that looks
something like this:

![A production-grade VPC setup](/img/guides/build-it-yourself/vpc/vpc-diagram.png)
_A production-grade VPC setup_
# Defense in depth

![Aerial view of Beaumaris Castle, showing multiple layers of walls for defense. Crown copyright 2016.](/img/guides/build-it-yourself/vpc/castle.jpeg)
_Aerial view of Beaumaris Castle, showing multiple layers of walls for defense. Crown copyright 2016._

People make mistakes all the time: forgetting to remove accounts, keeping ports open, including test credentials in
production code, etc. Rather than living in an idealized model where you assume people won’t make mistakes, you can
# Multiple subnet tiers

![Each VPC is partitioned into multiple tiers of subnets](/img/guides/build-it-yourself/vpc/subnets-diagram.png)
_Each VPC is partitioned into multiple tiers of subnets_

The third layer of defense is to use separate _subnet tiers_, where each tier contains multiple subnets configured in
the same way. We recommend the following three tiers for most use cases:
# Security groups and NACLs

![Security group settings for the different subnet tiers](/img/guides/build-it-yourself/vpc/peering-diagram.png)
_Security group settings for the different subnet tiers_

Use security groups and NACLs to configure the following rules for each subnet tier:

# Why Kubernetes

![The popularity of container orchestration tools](/img/guides/build-it-yourself/kubernetes-cluster/docker-orchestration-google-trends.png)
_The popularity of container orchestration tools_

Kubernetes has become the de facto choice for container orchestration. Here’s why:

systems (Borg and Omega), and is now maintained by the Cloud Native Computing Foundation (CNCF). Designed for
scale and resiliency (Google runs billions of containers per week) and with a huge community behind it, it’s
continuously getting better.


</div>
Let’s start by looking at Kubernetes from a very high level, and then gradually zoom in. At the highest
level, a simple way to think about Kubernetes is as an operating system for your data center.

![Kubernetes is like an operating system for your data center, abstracting away the underlying hardware behind its API](/img/guides/build-it-yourself/kubernetes-cluster/kubernetes-simple.png)
_Kubernetes is like an operating system for your data center, abstracting away the underlying hardware behind its API_

<div className="dlist">

high-level, consistent, safe API (the _Kubernetes API_), without having to worry about the differences
between the servers or about managing any of the applications running on those servers (i.e., the orchestration tool
handles deploying applications, restarting them if they fail, allowing them to communicate over the network, etc.).


</div>

To use the Kernel API, your application makes system calls. To use the Kubernetes API, you make HTTPS calls, typically
If you zoom in a bit further on the Kubernetes architecture, it looks something like this:

![Kubernetes architecture](/img/guides/build-it-yourself/kubernetes-cluster/kubernetes-architecture.png)
_Kubernetes architecture_

Kubernetes consists of two main pieces: the control plane and worker nodes. Each of these will be discussed next.

_[etcd](https://etcd.io)_ is a distributed key-value store that the master nodes use as a persistent way to store the
cluster configuration.


</div>

## Worker nodes
also runs on each worker node. It is responsible for talking to the Kubernetes API to discover which
containers live at which IPs, and proxying requests from containers on the same worker node to those IPs. This is
used for Service Discovery within Kubernetes, a topic we’ll discuss later.


</div>
## Web UI (Dashboard)

![The Kubernetes Dashboard](/img/guides/build-it-yourself/kubernetes-cluster/kubernetes-dashboard.png)
_The Kubernetes Dashboard_

The _[Kubernetes Dashboard](https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/)_ is a
web-based interface you can use to manage your Kubernetes cluster. The dashboard is not enabled by default in most
With all the core concepts out of the way, let's now discuss how to configure a production-grade EKS cluster
that looks something like this:

![Production-grade EKS architecture](/img/guides/build-it-yourself/kubernetes-cluster/eks-architecture.png)
_Production-grade EKS architecture_
The first step is to deploy a VPC. Follow the instructions in
`module-vpc` to create a VPC setup that looks like this:

![A production-grade VPC setup deployed using module-vpc from the Gruntwork Infrastructure as Code Library](/img/guides/build-it-yourself/vpc/vpc-diagram.png)
_A production-grade VPC setup deployed using module-vpc from the Gruntwork Infrastructure as Code Library_

After following this guide, you should have a `vpc-app` wrapper module in your `infrastructure-modules` repo:

Previously, we supported versions 1.3.0 and 1.2.0 of the Benchmark. If you are looking to upgrade:
- To upgrade from v1.3.0 to v1.4.0, please follow [this upgrade guide](../../../stay-up-to-date/1-cis/0-how-to-update-to-cis-14/0-intro.md).

![CIS Benchmark Architecture](/img/guides/build-it-yourself/achieve-compliance/cis-account-architecture.png)
_CIS Benchmark Architecture_

## Sections

The compliance library is known as "Gruntwork CIS Service Catalog" and it has its own repository.
The image below shows the hierarchy between the different levels of modules from the different code libraries Gruntwork offers.

![Types of CIS module relationships to avoid repetitive code and minimize the amount of extra work needed to achieve compliance.](/img/guides/build-it-yourself/achieve-compliance/cis-module-relationships.png)
_Types of CIS module relationships to avoid repetitive code and minimize the amount of extra work needed to achieve compliance._

Let’s unpack this a bit.

First, the short version:
Here's a diagram that shows a rough overview of what the Reference Architecture looks like:

![Architecture Diagram](/img/guides/reference-architecture/landing-zone-ref-arch.png)
_Architecture Diagram_

Now, the long version:

This diagram shows a rough overview of the Gruntwork Pipelines architecture:

![Architecture Diagram](/img/guides/reference-architecture/gruntwork-pipelines-architecture.png)
_Architecture Diagram_

The Gruntwork Pipelines workflow, defined in [`.github/workflows/pipelines.yml`](https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/master/examples/for-production/infrastructure-live/.github/workflows/pipelines.yml), works like this:

If you'd like to send Slack notifications when the pipeline is running, follow these steps:
1. In Slack, open the Workflow builder:

![Slack Workflow Builder](/img/guides/reference-architecture/slack-workflow-1.png)
_Slack Workflow Builder_

2. Create a new Webhook workflow called "Gruntwork Pipelines"

![Slack Webhook workflow](/img/guides/reference-architecture/slack-workflow-2.png)
_Slack Webhook workflow_

3. Add the following text variables to the workflow: `branch`, `status`, `url`, `repo`, and `actor`

![Slack workflow variables](/img/guides/reference-architecture/slack-workflow-3.png)
_Slack workflow variables_

4. Once all of the variables are added, click Next.

5. Now add another step to the workflow

![Slack workflow add step](/img/guides/reference-architecture/slack-workflow-4.png)
_Slack workflow add step_

6. Add the "Send a message" step

7. Choose a channel from the dropdown menu


12. Copy the webhook URL and save it. We will use this value below.

![Slack workflow add step](/img/guides/reference-architecture/slack-workflow-5.png)
_Slack workflow add step_

13. Note that the webhook URL should be treated as sensitive. Anyone with the URL can send HTTP requests to the webhook!
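A CI job can then trigger the workflow with a single HTTP POST whose JSON fields match the text variables created
above. A sketch (the webhook URL is a placeholder and must come from a secret, per the note above):

```python
import json
import urllib.request

def build_payload(branch, status, url, repo, actor):
    """Payload whose fields match the text variables defined in the Slack workflow."""
    return {"branch": branch, "status": status, "url": url, "repo": repo, "actor": actor}

# Illustrative values a CI system would supply.
payload = build_payload(
    branch="main",
    status="success",
    url="https://github.com/example/infrastructure-live/actions/runs/1",
    repo="example/infrastructure-live",
    actor="alice",
)

# webhook_url = "https://hooks.slack.com/workflows/..."  # from step 12; keep secret
# req = urllib.request.Request(
#     webhook_url,
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
```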

1. Open the GitHub repository and navigate to Settings => Secrets.

![GitHub Secrets](/img/guides/reference-architecture/secrets.png)
_GitHub Secrets_

1. Create the following repository secrets:

- `AWS_ACCESS_KEY_ID`: This is the first value from the AWS IAM user step above.
- `AWS_SECRET_ACCESS_KEY`: This is the second value from the AWS IAM user step above.
- `GH_TOKEN`: Enter the GitHub machine user's oauth token here. If you don't know this, you can find it in the AWS Secrets Manager secret that you provided in the [`reference-architecture-form.yml`](https://github.com/gruntwork-io/terraform-aws-service-catalog/tree/master/examples/for-production/infrastructure-live/reference-architecture-form.yml).
- `SLACK_WEBHOOK_URL`: This is the value from the Slack Workflow step above.
